diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Photoshop CC 2020 Crack Full Presets (Mac et Windows) MacOSX A Complete Review and Comparison.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Photoshop CC 2020 Crack Full Presets (Mac et Windows) MacOSX A Complete Review and Comparison.md
deleted file mode 100644
index dadfb534428cac36a8f22b7cca9613c354eeff01..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Photoshop CC 2020 Crack Full Presets (Mac et Windows) MacOSX A Complete Review and Comparison.md
+++ /dev/null
@@ -1,131 +0,0 @@
-
-

Adobe Photoshop CC 2020 Crack Full Presets (Mac et Windows) MacOSX

-

Are you looking for a way to get Adobe Photoshop CC 2020 for free with full presets and unlimited access? If yes, then you are in the right place. In this article, I will show you how to download and install Adobe Photoshop CC 2020 crack full presets for Mac and Windows. I will also show you how to use the presets to enhance your photos and create stunning effects. But before that, let me explain what Adobe Photoshop CC 2020 is and why you need a crack for it.

-

Adobe Photoshop CC 2020 Crack Full Presets (Mac et Windows) MacOSX


DOWNLOAD ->>->>->> https://byltly.com/2uKwAe



-

Introduction

-

What is Adobe Photoshop CC 2020?

-

Adobe Photoshop CC 2020 is the latest version of the most popular and powerful photo editing software in the world. It is used by millions of professionals and amateurs alike to create, edit, and manipulate images, graphics, and artworks. Adobe Photoshop CC 2020 offers a variety of tools, features, and functions that allow you to unleash your creativity and transform your photos into amazing works of art.

-

Why do you need a crack for Adobe Photoshop CC 2020?

-

Adobe Photoshop CC 2020 is not free software. It requires a subscription plan that costs $20.99 per month or $239.88 per year, which is expensive for many people who want to use it for personal or educational purposes. Moreover, even if you pay for the subscription, you will not get access to all the presets that are available in Adobe Photoshop CC 2020. Presets are predefined settings that apply certain effects or adjustments to your photos with one click. They can save you a lot of time and effort and help you achieve professional results quickly and easily.

-

That's why many people look for a crack for Adobe Photoshop CC 2020. A crack is a modified version of the software that bypasses the activation process and allows you to use it without paying anything. A crack also gives you access to all the presets that are included in Adobe Photoshop CC 2020, as well as some additional ones that are not available in the official version.

-

How to download and install Adobe Photoshop CC 2020 cracked version for Mac and Windows
-Adobe Photoshop CC 2020 full presets free download with crack for MacOSX and Windows
-Best tips and tricks for using Adobe Photoshop CC 2020 crack on Mac and Windows
-Adobe Photoshop CC 2020 crack serial number and activation key for Mac and Windows
-Adobe Photoshop CC 2020 crack features and benefits for Mac and Windows users
-Adobe Photoshop CC 2020 crack system requirements and compatibility for Mac and Windows
-Adobe Photoshop CC 2020 crack problems and solutions for Mac and Windows
-Adobe Photoshop CC 2020 crack vs original version comparison for Mac and Windows
-Adobe Photoshop CC 2020 crack tutorials and guides for Mac and Windows beginners
-Adobe Photoshop CC 2020 crack reviews and testimonials from Mac and Windows users
-Adobe Photoshop CC 2020 full presets pack download link for Mac and Windows
-How to use Adobe Photoshop CC 2020 full presets on Mac and Windows
-Adobe Photoshop CC 2020 full presets examples and inspiration for Mac and Windows designers
-Adobe Photoshop CC 2020 full presets advantages and disadvantages for Mac and Windows
-Adobe Photoshop CC 2020 full presets customization and optimization for Mac and Windows
-Adobe Photoshop CC 2020 full presets compatibility and integration with other software for Mac and Windows
-Adobe Photoshop CC 2020 full presets updates and support for Mac and Windows
-Adobe Photoshop CC 2020 full presets alternatives and competitors for Mac and Windows
-Adobe Photoshop CC 2020 full presets FAQs and answers for Mac and Windows
-Adobe Photoshop CC 2020 full presets feedback and suggestions for Mac and Windows developers
-How to get Adobe Photoshop CC 2020 crack full presets for free legally for Mac and Windows
-How to uninstall Adobe Photoshop CC 2020 crack full presets from Mac and Windows
-How to fix Adobe Photoshop CC 2020 crack full presets errors and bugs on Mac and Windows
-How to backup and restore Adobe Photoshop CC 2020 crack full presets on Mac and Windows
-How to upgrade from Adobe Photoshop CC 2020 crack full presets to the latest version for Mac and Windows
-How to transfer Adobe Photoshop CC 2020 crack full presets from one device to another for Mac and Windows
-How to share Adobe Photoshop CC 2020 crack full presets with others for Mac and Windows
-How to create amazing graphics with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to edit photos professionally with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to make logos, banners, flyers, posters, brochures, etc. with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to enhance your online presence with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to improve your SEO ranking with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to increase your conversions with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to generate more leads with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to grow your business with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to save time and money with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to avoid legal issues with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to protect your privacy with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to secure your data with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to prevent malware infections with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to optimize your performance with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to boost your creativity with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to learn new skills with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to have fun with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to impress your clients with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to collaborate with others with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to access exclusive resources with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to join a community of users with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to get help from experts with Adobe Photoshop CC 2020 crack full presets for Mac and Windows

-

What are the features of Adobe Photoshop CC 2020?

-

Adobe Photoshop CC 2020 has many features that make it the best photo editing software on the market. Some of these features are:

• A full set of tools for creating, editing, and manipulating images, graphics, and artworks
• Presets for brushes, patterns, gradients, and styles that apply effects or adjustments with one click
• A Preset Manager for adding, deleting, renaming, and organizing your presets
• Support for layers and layer styles, so you can apply effects and adjustments non-destructively

These are just some of the features of Adobe Photoshop CC 2020. There are many more features that you can explore and use in this software.

-

How to download and install Adobe Photoshop CC 2020 crack full presets for Mac and Windows?

-

If you want to get Adobe Photoshop CC 2020 for free with full presets and unlimited access, then follow these steps:

-

Step 1: Download the crack file from the link below

-

The first thing you need to do is download the crack file from this link: https://bit.ly/AdobePhotoshopCC2020Crack. This link will take you to a Google Drive folder where you will find two files: one for Mac users and one for Windows users. Choose the file according to your operating system and click on download.

-

Step 2: Extract the file using WinRAR or 7-Zip

-

The next thing you need to do is extract the file using WinRAR or 7-Zip. Both are free programs that can unzip compressed files easily. You can download them from their official websites: https://www.win-rar.com/ (for WinRAR) or https://www.7-zip.org/ (for 7-Zip). After downloading, install one of them, then right-click on the file you downloaded in step 1 and choose "Extract here" or "Extract files".

-

Step 3: Run the setup file and follow the instructions

-

After extracting the file, you will see a folder named "Adobe Photoshop CC 2020 Crack Full Presets". Open this folder and double-click on the setup file. This will launch the installation wizard that will guide you through the process. Follow the instructions on the screen and accept the terms and conditions. Choose the destination folder where you want to install the software and click on install.

-

Step 4: Copy and paste the crack file into the installation folder

-

Once the installation is complete, do not run the software yet. Go back to the folder where you extracted the crack file and open it. You will see another folder named "Crack". Open this folder and copy the file named "amtlib.dll". Then go to the installation folder where you installed Adobe Photoshop CC 2020. The default location is "C:\Program Files\Adobe\Adobe Photoshop CC 2020" for Windows users and "/Applications/Adobe Photoshop CC 2020" for Mac users. Paste the crack file into this folder and replace the original file.

-

Step 5: Enjoy Adobe Photoshop CC 2020 with full presets and unlimited access

-

Congratulations! You have successfully installed Adobe Photoshop CC 2020 crack full presets for Mac and Windows. Now you can run the software and enjoy all its features and functions without any limitations or restrictions. You can also access all the presets that are available in Adobe Photoshop CC 2020, as well as some additional ones that are not available in the official version.

-

How to use Adobe Photoshop CC 2020 crack full presets for Mac and Windows?

-

Now that you have Adobe Photoshop CC 2020 crack full presets for Mac and Windows, you might be wondering how to use them to enhance your photos and create stunning effects. Here are some tips and tricks that will help you use the presets effectively:

-

How to access the presets in Adobe Photoshop CC 2020?

-

To access the presets in Adobe Photoshop CC 2020, you need to open the Preset Manager. To do this, go to Edit > Presets > Preset Manager. This will open a window where you can see all the presets that are available in Adobe Photoshop CC 2020. You can also add, delete, rename, or organize your presets using this window.

-

The presets are divided into different categories such as Brushes, Patterns, Gradients, Styles, etc. You can choose any category from the drop-down menu at the top of the window. You can also use the search box to find a specific preset by typing its name or keyword.

-

How to apply the presets to your photos in Adobe Photoshop CC 2020?

-

To apply a preset to your photo in Adobe Photoshop CC 2020, you need to select it from the Preset Manager and then click on Load. This will load the preset into your current document. You can then use it as you normally would with any other tool or feature in Adobe Photoshop CC 2020.

-

For example, if you want to apply a brush preset to your photo, you need to select it from the Preset Manager and then click on Load. This will load the brush preset into your Brush Tool. You can then use it to paint on your photo with different colors, sizes, shapes, etc.

-

If you want to apply a pattern preset to your photo, you need to select it from the Preset Manager and then click on Load. This will load the pattern preset into your Pattern Stamp Tool. You can then use it to stamp on your photo with different modes, opacity, alignment, etc.

-

If you want to apply a gradient preset to your photo, you need to select it from the Preset Manager and then click on Load. This will load the gradient preset into your Gradient Tool. You can then use it to fill or stroke your photo with different colors, angles, styles, etc.

-

If you want to apply a style preset to your photo, you need to select it from the Preset Manager and then click on Load. This will load the style preset into your Layer Style dialog box. You can then apply it to any layer in your photo with different options such as blending mode, opacity, scale, etc.

-

How to create your own presets in Adobe Photoshop CC 2020?

-

If you want to create your own presets in Adobe Photoshop CC 2020, you need to follow these steps:

-
1. Create or edit your photo using any tool or feature in Adobe Photoshop CC 2020.
2. Select or create a new layer that contains your desired effect or adjustment.
3. Go to Edit > Presets > Preset Manager.
4. Choose the category that matches your effect or adjustment from the drop-down menu at the top of the window.
5. Click on Save Set and give a name to your preset.
6. Click on OK and close the Preset Manager window.

You have now created your own preset in Adobe Photoshop CC 2020. You can access it anytime from the Preset Manager and apply it to any photo you want.

-

How to share your presets with others in Adobe Photoshop CC 2020?

-

If you want to share your presets with others in Adobe Photoshop CC 2020, you need to follow these steps:

-
1. Go to Edit > Presets > Preset Manager.
2. Choose the category that contains your preset from the drop-down menu at the top of the window.
3. Select your preset from the list of presets.
4. Click on Save Set and choose a location where you want to save your preset file.
5. Click on OK and close the Preset Manager window.

You have now saved your preset as a file that can be shared with others. You can send this file via email, social media, cloud storage, etc. To load this file into another computer or device, simply copy it into its corresponding folder in Adobe Photoshop CC 2020's installation directory.
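
If you share or install preset files often, a small script can automate the copy step described above. The sketch below is a minimal, hypothetical Python example: the destination folder is an assumption based on a default Windows install of Adobe Photoshop CC 2020, and it varies by operating system, install location, and preset type (brushes, patterns, gradients, and styles each have their own subfolder).

```python
import shutil
from pathlib import Path

# Assumed default install path on Windows; on Mac the equivalent is usually
# /Applications/Adobe Photoshop CC 2020/Presets/<Type>. Adjust for your setup.
PRESET_DIR = Path(r"C:\Program Files\Adobe\Adobe Photoshop CC 2020\Presets\Brushes")

def install_preset(preset_file: str) -> Path:
    """Copy a saved preset set (for example, an .abr brush file) into Photoshop's preset folder."""
    src = Path(preset_file)
    if not src.exists():
        raise FileNotFoundError(src)
    dest = PRESET_DIR / src.name
    shutil.copy2(src, dest)  # copy2 keeps the file's timestamps
    return dest

if __name__ == "__main__":
    # "MyBrushes.abr" is a placeholder name for a preset file someone shared with you.
    print(f"Installed to {install_preset('MyBrushes.abr')}")
```

Restart Photoshop (or reopen the Preset Manager) after copying so that the new file is picked up.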

-

Conclusion

-

Summary of the main points

-

In this article, I have shown you how to download and install Adobe Photoshop CC 2020 crack full presets for Mac and Windows. I have also shown you how to use them to enhance your photos and create stunning effects. By using this crack version of Adobe Photoshop CC 2020, you can enjoy all its features and functions without paying anything or having any limitations or restrictions. You can also access all the presets that are available in Adobe Photoshop CC 2020, as well as some additional ones that are not available in the official version.

-

Call to action and recommendation

-

If you want to get Adobe Photoshop CC 2020 for free with full presets and unlimited access, then don't wait any longer. Click on this link https://bit.ly/AdobePhotoshopCC2020Crack and download the crack file now. Follow the instructions in this article and install the software on your computer or device. You will be amazed by what you can do with Adobe Photoshop CC 2020 crack full presets for Mac and Windows.

However, I must warn you that using a cracked version of any software is illegal and unethical. You may face legal consequences or security risks if you do so. Therefore, I recommend that you use this crack version only for educational or personal purposes and not for commercial or professional purposes. If you like Adobe Photoshop CC 2020 and want to support its developers, then please buy the original version from their official website https://www.adobe.com/products/photoshop.html. You will get regular updates, customer support, and peace of mind.

Thank you for reading this article. I hope you found it useful and informative. If you have any questions or feedback, please leave them in the comments section below. I would love to hear from you.

-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download PowerPoint Ed Version and Enjoy Its Unique Features for Inclusive and Collaborative Classrooms.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download PowerPoint Ed Version and Enjoy Its Unique Features for Inclusive and Collaborative Classrooms.md
deleted file mode 100644
index 0b7a8b14e993c625c84e0b33b59813521c8a821c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download PowerPoint Ed Version and Enjoy Its Unique Features for Inclusive and Collaborative Classrooms.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-

How to Download PowerPoint Ed Version for Free

-

PowerPoint Ed is a powerful and easy-to-use tool that allows you to create and edit stunning presentations with animations, transitions, and multimedia. PowerPoint Ed is compatible with Microsoft PowerPoint and can open and save files in various formats, including PPT, PPTX, PDF, and HTML.

-

download powerpoint cracked version


Download Zip ››››› https://byltly.com/2uKzbM



-

If you want to download PowerPoint Ed version for free, you can follow these simple steps:

-
1. Go to the official website of PowerPoint Ed at https://www.powerpointed.com/.
2. Click on the "Download" button on the top right corner of the homepage.
3. Select your preferred language and operating system from the drop-down menus.
4. Click on the "Download Now" button and wait for the installation file to be downloaded.
5. Run the installation file and follow the instructions on the screen to complete the installation.
6. Launch PowerPoint Ed and enjoy creating amazing presentations!

PowerPoint Ed is free software that does not require any registration or activation. However, if you want to access more features and templates, you can upgrade to the premium version for a small fee. You can also join the PowerPoint Ed community and share your feedback, suggestions, and ideas with other users.

-

Download PowerPoint Ed version today and unleash your creativity!

-

Why Choose PowerPoint Ed Version?

-

PowerPoint Ed version is not just a regular PowerPoint program. It is a special edition that is designed for education and learning purposes. PowerPoint Ed version has many features that make it stand out from other presentation tools. Here are some of the reasons why you should choose PowerPoint Ed version:

• It lets you create and edit presentations with animations, transitions, and multimedia.
• It is compatible with Microsoft PowerPoint and can open and save files in PPT, PPTX, PDF, and HTML formats.
• It is free to use, with no registration or activation required.
• It offers an optional premium upgrade with more features and templates, plus a user community for feedback and ideas.

With PowerPoint Ed version, you can create powerful presentations that showcase your creativity and knowledge. Whether you are a student or a teacher, PowerPoint Ed version can help you achieve your learning goals.

-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Autocount Accounting Keygen [Extra Quality].md b/spaces/1gistliPinn/ChatGPT4/Examples/Autocount Accounting Keygen [Extra Quality].md
deleted file mode 100644
index 38f380a8f13742b19550b163c64ba17845337e12..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Autocount Accounting Keygen [Extra Quality].md
+++ /dev/null
@@ -1,22 +0,0 @@
-

Autocount Accounting Keygen


Download Zip >>> https://imgfil.com/2uxZn8



I haven't had any problems except when I went to California last year. I was going to stay for a month and then go back home. I asked the girl at the hotel desk if it was OK to leave the car there, as I didn't want to pay for parking. She said she didn't know how it would be treated. She called the police, and when they came and told me to move the car she said, "How do you know it's your car?" We had to move it down the street to a lot where I have to pay for parking. I am unsure of my rights as to where I am able to leave my car, but I am told that if I were to leave it at the hotel I would have to pay for parking. I have been a loyal customer since 1972. It still runs and drives. I have tried to buy a new one, but I can't find one that good.

Last edited by blvader on Sat Feb 16, 2012 8:31 pm, edited 1 time in total.

The best thing you can do is buy a new one, even if you pay full retail or even a little over that. A new car is a life-changing purchase. Of course, the happy news is that the auto brand is likely to be better than the last one.

If you think about the condition of the car (and that you've had it for so long), you may be able to get the car owner to buy the car back from you (for less than full retail, I know). But you won't get much for it even if you did.

I'd check the N.H. State DMV's website, www.mvs.com (if you're in New Hampshire). There should be an up-to-date vehicle registration history on file. While I can't vouch for the accuracy of the data, a car you've had for so long may not have had a lot of miles on it. In any event, DMV data is a good first look for such a thing.

This is a really high-quality fix-it-yourself project, but I think it's so important to be thorough (especially when you're dealing with a brand new car!) that I'd love to see your work.
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 4 Movie and Stream Online - HD 4K Quality Available.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 4 Movie and Stream Online - HD 4K Quality Available.md
deleted file mode 100644
index 612883abb7a63919e0e32b3e5acfdfeacf6366c4..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 4 Movie and Stream Online - HD 4K Quality Available.md
+++ /dev/null
@@ -1,144 +0,0 @@

How to Download 4 Movie for Free in 2023

-

If you are a movie lover, you might have heard of the term "4 movie". It is a new video format that offers higher resolution, better compression, and more features than the standard MP4 format. It is also compatible with most devices and platforms, making it ideal for watching movies on the go.

-

But how can you download 4 movie for free in 2023? Is it legal and safe to do so? What are the best sites to find and download your favorite movies in this format? In this article, we will answer these questions and more. We will also show you how to use ByClick Downloader, a powerful tool that can help you download any movie from any site with just one click.

-

download 4 movie


Download File: https://urlin.us/2uT1vX



-

Benefits of Downloading 4 Movie

-

High-quality video and audio

-

One of the main advantages of downloading movies in the new format is that they offer better quality than MP4 movies. The resolution of a typical MP4 movie is usually around HD (1280 × 720 pixels), while a typical 4 movie can go up to UHD (3840 × 2160 pixels). This means that you can enjoy more details, sharper images, and smoother motion when watching movies.

-

Another benefit of downloading movies in this format is that they have better audio quality. The format supports Dolby Atmos, a surround sound technology that creates a realistic and immersive sound experience. You can hear sounds coming from different directions, such as above, below, behind, or in front of you.

-

Offline access and convenience

-

Another reason why you might want to download movies in this format is that you can watch them offline anytime and anywhere. You don't need an internet connection or a streaming service subscription to enjoy your favorite movies. You can save them on your computer, smartphone, tablet, or external hard drive and watch them whenever you want.

-

Downloading movies also gives you more control over your viewing experience. You can pause, rewind, fast-forward, or skip scenes without any buffering or interruptions. You can also adjust the brightness, volume, subtitles, or language settings according to your preferences.

-

No subscription fees or ads

-

A third benefit of downloading movies in this format is that you don't have to pay any subscription fees or watch any ads. Unlike streaming services that charge you monthly or yearly fees to access their content library, downloading movies allows you to watch them for free. You can also avoid annoying ads that interrupt your viewing pleasure or collect your personal data.

-

Downloading movies also gives you more freedom and choice over what you want to watch. You don't have to rely on the availability or selection of streaming services.

Risks of Downloading 4 Movie

-

Legal issues and copyright infringement

-

While downloading movies in this format may seem tempting, you should also be aware of the potential risks involved. One of the biggest risks is that you may be breaking the law and infringing on the rights of the movie creators and distributors. Downloading movies from unauthorized sources is considered piracy, which is illegal in most countries and can result in fines, lawsuits, or even jail time.

-

Therefore, you should always check the legality and legitimacy of the sites and sources that you use to download movies. You should also respect the intellectual property and creative efforts of the movie makers and support them by paying for their work or watching them on legal platforms.

-

download 4 movie in HD quality
-download 4 movie for free online
-download 4 movie with subtitles
-download 4 movie from YouTube
-download 4 movie using torrent
-download 4 movie on mobile
-download 4 movie in MP4 format
-download 4 movie full length
-download 4 movie without registration
-download 4 movie fast and easy
-download 4 movie latest release
-download 4 movie in Hindi dubbed
-download 4 movie with English audio
-download 4 movie from Netflix
-download 4 movie legally and safely
-download 4 movie in 1080p resolution
-download 4 movie with high speed
-download 4 movie from best site
-download 4 movie in dual audio
-download 4 movie with low data usage
-download 4 movie in different genres
-download 4 movie offline mode
-download 4 movie with one click
-download 4 movie from Amazon Prime Video
-download 4 movie no ads or pop-ups
-download 4 movie in Blu-ray quality
-download 4 movie with VPN service
-download 4 movie from Google Drive
-download 4 movie in original language
-download 4 movie with bonus features
-download 4 movie in small size
-download 4 movie on PC or laptop
-download 4 movie with direct link
-download 4 movie from Disney Plus
-download 4 movie no sign up required
-download 4 movie in ultra HD quality
-download 4 movie with multiple options
-download 4 movie from Hulu
-download 4 movie in any region or country
-download 4 movie with good sound quality
-download 4 movie in various formats
-download 4 movie on smart TV or streaming device
-download 4 movie with reliable source
-download 4 movie from HBO Max
-download 4 movie no virus or malware risk
-download 4 movie in HDR quality
-download 4 movie with user-friendly interface
-download 4 movie from Apple TV Plus
-download 4 movie no credit card needed

-

Malware and viruses

-

Another risk of downloading movies in this format is that you may expose your device and data to malware and viruses. Some of the sites and sources that offer free movie downloads may contain malicious software or links that can harm your computer, smartphone, tablet, or external hard drive. They can infect your device with spyware, ransomware, trojans, worms, or other types of malware that can steal your personal information, damage your files, or lock your device.

-

Therefore, you should always be careful and cautious when downloading movies from unknown or suspicious sites and sources. You should also use a reliable antivirus software and firewall to protect your device and data from malware and viruses. You should also scan your downloaded files before opening them or transferring them to other devices.
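
A practical way to act on this advice is to compare a downloaded file's checksum against the value published by the source before you open it. The sketch below is a generic example using only Python's standard library; the file name and expected hash are placeholders, not real values.

```python
import hashlib
from pathlib import Path

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in 1 MB chunks so large downloads fit in memory."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder: paste the SHA-256 value the download page publishes.
expected = "put-the-published-checksum-here"
actual = sha256_of("downloaded-movie-file.mp4")
print("Checksum OK" if actual == expected else "Checksum mismatch: do not open this file")
```

A matching checksum only proves the file was not corrupted or swapped in transit; it is not a substitute for an antivirus scan.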

-

Data consumption and storage space

-

A third risk of downloading movies in this format is that you may consume a lot of data and storage space. Because movies in this format have higher resolution and quality than MP4 movies, they also have larger file sizes. A typical movie can range from 1 GB to 10 GB or more, depending on the length and quality of the movie. This means that you may need a fast and stable internet connection to download them without any issues or delays.

-

Downloading movies in this format also requires a lot of storage space on your device or external hard drive. If you download too many movies, you may run out of space or slow down your device performance. Therefore, you should always check the file size and storage capacity before downloading movies in this format. You should also delete or transfer the movies that you don't need or watch anymore to free up some space.
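
To make these numbers concrete, here is a quick back-of-the-envelope calculation you can adapt; the file size and connection speed below are example figures, not measurements.

```python
# Example figures only: a 10 GB movie file over a 50 Mbit/s connection.
file_size_gb = 10
link_mbps = 50

file_size_bits = file_size_gb * 8 * 1000**3       # decimal gigabytes -> bits
seconds = file_size_bits / (link_mbps * 1000**2)  # megabits/s -> bits/s
print(f"Download time: about {seconds / 60:.0f} minutes")  # ~27 minutes

# Storage check: how many such files fit on a 256 GB drive?
print(f"About {256 // file_size_gb} such movies fit on a 256 GB drive")
```

Real-world times will usually be longer, since connections rarely sustain their rated speed.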

-

Best Free Movie Download Sites for 4 Movie in 2023

-

YouTube

-

One of the best and easiest ways to download movies in this format for free is to use YouTube. YouTube is the most popular video-sharing platform in the world, where you can find millions of videos, including movies, trailers, clips, documentaries, and more. You can also find many channels and playlists that offer movies in this format for free.

-

To download movies from YouTube, you can use a simple online tool called YouTube Downloader. This tool allows you to download any video from YouTube in various formats, including 4 movie. All you have to do is copy and paste the URL of the video that you want to download into the tool's input box and click on the "Download" button. You can then choose the format and quality that you want and save the file on your device.

-

EZTV

-

Another option to download movies in this format for free is to use EZTV. EZTV is one of the most popular torrent sites for downloading TV shows and movies. You can find a wide range of genres, categories, languages, and qualities on this site, including 4 movie. You can also find the latest releases and updates on this site.

-

To download movies from EZTV, you need to use a torrent client software such as BitTorrent or uTorrent. This software allows you to download files from other users who are sharing them on the network. You also need to use a VPN service such as NordVPN or ExpressVPN to hide your IP address and encrypt your traffic. This way, you can avoid any legal issues or malware threats when downloading movies from torrent sites.

-

FZMovies

-

A third option to download movies in this format for free is to use FZMovies. FZMovies is a dedicated site for downloading Bollywood and Hollywood movies in various formats, including 4 movie. You can find a huge collection of movies on this site, ranging from old classics to new blockbusters. You can also search for movies by genre, year, actor, director, or keyword.

-

To download movies from FZMovies, you just need to visit the site and browse through its categories or use its search function. You can then click on the movie that you want to download and choose the format and quality that you want. You can then click on the download link and save the file on your device.

-

How to Download 4 Movie from Any Site with ByClick Downloader (Recommended)

-

What is ByClick Downloader and why you should use it

-

If you want to download movies in this format from any site with ease and convenience, you should use ByClick Downloader. ByClick Downloader is a powerful and versatile tool that can help you download any video from any site with just one click. You can download videos from YouTube, Facebook, Instagram, Twitter, Vimeo, Dailymotion, and more. You can also download videos in various formats, including 4 movie.

-

ByClick Downloader has many features and advantages that make it the best choice for downloading movies in this format. Some of them are:

• One-click downloads from YouTube, Facebook, Instagram, Twitter, Vimeo, Dailymotion, and more
• Support for multiple output formats, including MP4 and 4 movie
• High-quality downloads, up to UHD resolution, with Dolby Atmos sound, subtitles, metadata, and thumbnails
• Fast and reliable download speed, including multiple and batch downloads

How to install and use ByClick Downloader

-

To install and use ByClick Downloader, you just need to follow these simple steps:

-
1. Visit the official website of ByClick Downloader and click on the "Download" button to get the setup file.
2. Run the setup file and follow the instructions to install the tool on your device.
3. Open the tool and choose the format and quality that you want for your downloads. You can also customize other settings such as download location, notifications, and subtitles.
4. Go to any site that has the movie that you want to download and play the video. You will see a pop-up window that offers to download the video with one click. You can also copy and paste the URL of the video into the tool's input box and click on the "Download" button.
5. Wait for the download to finish and enjoy your movie.

Features and advantages of ByClick Downloader

-

By using ByClick Downloader, you can enjoy many features and advantages that make it the best tool for downloading movies in this format. Some of them are:

• A smart auto-detect feature that offers to grab a video as soon as you play it
• A user-friendly and intuitive interface that needs just one click per download
• A built-in converter that can turn any video into any format, including 4 movie

Conclusion

-

In conclusion, downloading movies in this format is a great way to enjoy high-quality video and audio, offline access and convenience, and no subscription fees or ads. However, you should also be aware of the risks of downloading movies in this format, such as legal issues and copyright infringement, malware and viruses, and data consumption and storage space. You should always check the legality and legitimacy of the sites and sources that you use to download movies, and use a reliable antivirus software and firewall to protect your device and data from malware and viruses. You should also check the file size and storage capacity before downloading movies in this format, and delete or transfer the movies that you don't need or watch anymore to free up some space.

-

The best way to download movies in this format for free in 2023 is to use ByClick Downloader. This tool can help you download any movie from any site with just one click, in various formats, including 4 movie. You can download movies in high quality, up to UHD resolution, with Dolby Atmos sound, subtitles, metadata, thumbnails, and more. You can also convert any video to any format, including 4 movie, and enjoy fast and reliable download speed, multiple and batch downloads, a smart auto-detect feature, and a user-friendly and intuitive interface.
-

So, what are you waiting for? Download ByClick Downloader today and start downloading your favorite movies in this format for free in 2023. You will not regret it!

-

FAQs

-

What is the difference between MP4 and 4 movie?

-

MP4 is a standard video format that is widely used and supported by most devices and platforms. It has good quality and compression, but it also has some limitations and drawbacks. For example, it does not support high dynamic range (HDR), which enhances the contrast and color of the video. It also does not support Dolby Atmos, which creates a surround sound effect. It also has a fixed frame rate, which can cause stuttering or judder when playing videos with different frame rates.

-

4 movie is a new video format that is designed to overcome the limitations and drawbacks of MP4. It supports HDR, Dolby Atmos, and variable frame rate (VFR), which adapts to the frame rate of the video. It also has better quality and compression than MP4, which means that it can deliver higher resolution and quality with smaller file sizes.

-

How can I watch 4 movie on my TV or mobile device?

-

To watch movies in this format on your TV or mobile device, you need to make sure that your device supports this format. You can check the specifications or settings of your device to see if it supports this format. If your device does not support this format, you can use a video converter software such as ByClick Downloader to convert the movie to a compatible format such as MP4.

-

You also need to transfer the movie from your computer to your device using a USB cable, a wireless connection, or a cloud service. You can then use a media player app such as VLC or MX Player to play the movie on your device.
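
On the conversion step mentioned above, the widely used open-source tool FFmpeg offers a command-line alternative to a GUI converter. The sketch below is a minimal Python wrapper around it; it assumes FFmpeg is installed and on your PATH, and the file names are placeholders.

```python
import subprocess

# Placeholder file names; FFmpeg must be installed and on your PATH.
src, dst = "movie.mkv", "movie.mp4"

# Re-encode the video to H.264 and the audio to AAC, a widely compatible MP4 combination.
subprocess.run(
    ["ffmpeg", "-i", src, "-c:v", "libx264", "-c:a", "aac", dst],
    check=True,  # raise CalledProcessError if ffmpeg reports a failure
)
print(f"Wrote {dst}")
```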

-

How can I convert other video formats to 4 movie?

-

To convert other video formats to this format, you can use a video converter software such as ByClick Downloader. This software can help you convert any video to any format, including 4 movie. You just need to add the video that you want to convert to the software, choose the format and quality that you want, and click on the "Convert" button. You can then save the converted file on your device or share it with others.

-

Is it safe to download movies from torrent sites?

-

Downloading movies from torrent sites is not recommended, as it may pose some risks and dangers. Some of these risks are:

• Legal consequences for piracy, such as fines or lawsuits, since the files are usually shared without permission.
• Malware and viruses bundled with the downloads, which can infect your device and steal your data.
• Exposure of your IP address and activity to other users on the network.

Therefore, you should avoid downloading movies from torrent sites and use legal and safe sources instead.

-

How can I avoid legal issues when downloading movies?

-

To avoid legal issues when downloading movies, you should follow these tips:

• Check the legality and legitimacy of the sites and sources that you use to download movies.
• Prefer official stores and licensed platforms, and pay for the movies you watch whenever you can.
• Respect the intellectual property and creative efforts of the movie makers and distributors.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Chicken Gun New Update Mod Apk and Enjoy the Fun.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Chicken Gun New Update Mod Apk and Enjoy the Fun.md
deleted file mode 100644
index b4c156a66e74ed357ae6818a6610bb420d3bc590..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Chicken Gun New Update Mod Apk and Enjoy the Fun.md
+++ /dev/null
@@ -1,85 +0,0 @@
-
-

Chicken Gun New Update Mod APK: Everything You Need to Know

-

If you are looking for a fun and addictive FPS game that will make you laugh out loud, then you should try Chicken Gun. This game lets you play as a chicken with a gun and fight against other chickens in various modes and maps. You can customize your chicken with different skins, hats, glasses, and weapons, and enjoy the hilarious physics and animations. In this article, we will tell you everything you need to know about the latest update of Chicken Gun and how to download the mod apk version that gives you unlimited money, gems, and more.

-

chicken gun new update mod apk


Download: https://urlin.us/2uSTtw



-

What is Chicken Gun?

-

A hilarious and chaotic FPS game

-

Chicken Gun is a multiplayer FPS game developed by ChaloApps. The game has a simple premise: you are a chicken with a gun, and you have to shoot other chickens in various modes and maps. You can play online with up to 10 players in team deathmatch, free for all, capture the flag, or zombie mode. You can also play offline with bots or with your friends on the same device.

-

Features of Chicken Gun

-

Chicken Gun has many features that make it a unique and entertaining game. Some of them are:

• Online multiplayer for up to 10 players in team deathmatch, free for all, capture the flag, or zombie mode
• Offline play against bots, or local play with friends on the same device
• Chicken customization with different skins, hats, glasses, and weapons
• Hilarious physics and animations that keep every match entertaining

What is new in the latest update?

-

New maps, weapons, and skins

-

The latest update of Chicken Gun brings some new content to the game. There are two new maps: prison and airport. There are also two new weapons: crossbow and minigun. And there are four new skins: prisoner, pilot, cop, and soldier.

-

Improved graphics and performance

-

The latest update also improves the graphics and performance of the game. The game now supports HD resolution and has better lighting effects. The game also runs smoother and faster on most devices.

-

Bug fixes and balance changes

-

The latest update also fixes some bugs and glitches that were affecting the gameplay. For example, some weapons were not working properly or had incorrect stats. The update also balances some weapons and modes to make them more fair and fun.

-

Why should you download the mod apk?

-

Unlimited money and gems

-

The mod apk version of Chicken Gun gives you unlimited money and gems. You can use them to buy any weapon, skin, hat, or glass you want. You can also upgrade your weapons to make them more powerful and effective.

-

chicken gun mod apk latest version download
-chicken gun unlimited money and gems mod apk
-chicken gun 3.3.01 mod apk free download
-chicken gun fps shooter mod apk android
-chicken gun hack mod apk no root
-chicken gun online multiplayer mod apk
-chicken gun mod menu apk download
-chicken gun mod apk all weapons unlocked
-chicken gun mod apk offline mode
-chicken gun mod apk unlimited ammo and health
-chicken gun 2d pixel shooter mod apk
-chicken gun mod apk revdl
-chicken gun mod apk rexdl
-chicken gun mod apk happymod
-chicken gun mod apk an1
-chicken gun zombie mode mod apk
-chicken gun battle royale mod apk
-chicken gun sandbox mode mod apk
-chicken gun custom skins mod apk
-chicken gun pro pack mod apk
-chicken gun new maps and modes mod apk
-chicken gun 3d graphics mod apk
-chicken gun voice chat mod apk
-chicken gun ragdoll physics mod apk
-chicken gun funny moments mod apk
-chicken gun best guns and items mod apk
-chicken gun tips and tricks mod apk
-chicken gun cheats and hacks mod apk
-chicken gun gameplay and review mod apk
-chicken gun how to install mod apk

-

Unlock all items and modes

-

The mod apk version also unlocks all items and modes in the game. You can access any map or mode without having to level up or complete any challenge. You can also use any item without having to wait for the cooldown or reload time.

-

No ads and root required

-

The mod apk version also removes all the annoying ads that pop up in the game. You can enjoy the game without any interruption or distraction. Moreover, the mod apk does not require root access to work. You can install it on any device without any risk or hassle.

-

How to download and install the mod apk?

-

Step 1: Download the mod apk file from a trusted source

-

The first step is to download the mod apk file from a trusted source. You can find many websites that offer the mod apk file, but be careful of fake or malicious ones. We recommend you to use this link to download the mod apk file safely and securely.

-

Step 2: Enable unknown sources on your device

-

The second step is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on.

-

Step 3: Install the mod apk and enjoy the game

-

The final step is to install the mod apk and enjoy the game. To do this, locate the mod apk file in your device storage, then tap on it and follow the instructions. Once the installation is done, you can open the game and start playing with unlimited money, gems, and more.

-

Conclusion

-

Chicken Gun is a hilarious and chaotic FPS game that will make you laugh out loud. You can play as a chicken with a gun and fight against other chickens in various modes and maps. You can customize your chicken with different skins, hats, glasses, and weapons, and enjoy the funny physics and animations. The latest update of Chicken Gun brings some new content and improvements to the game, such as new maps, weapons, skins, graphics, performance, bug fixes, and balance changes. If you want to have more fun and advantages in the game, you should download the mod apk version that gives you unlimited money, gems, and more. You can download the mod apk file from this link and install it on your device easily and safely. We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please let us know in the comments below.

-

FAQs

-

Q: Is Chicken Gun free to play?

-

A: Yes, Chicken Gun is free to play. You can download it from the Google Play Store or the App Store for free. However, some items and modes may require real money to unlock or use.

-

Q: Is Chicken Gun safe to play?

-

A: Yes, Chicken Gun is safe to play. The game does not contain any harmful or inappropriate content for children or adults. The game is rated 12+ on the Google Play Store and 9+ on the App Store for mild violence and crude humor.

-

Q: Is Chicken Gun offline or online?

-

A: Chicken Gun can be played both offline and online. You can play offline with bots or with your friends on the same device. You can also play online with up to 10 players in team deathmatch, free for all, capture the flag, or zombie mode.

-

Q: How can I contact the developers of Chicken Gun?

-

A: You can contact the developers of Chicken Gun by sending them an email at chaloapps@gmail.com or by following them on their social media accounts such as Facebook, Instagram, or YouTube. You can also leave a review or a rating on the Google Play Store or the App Store to share your feedback or suggestions.

-

Q: How can I support the developers of Chicken Gun?

-

A: You can support the developers of Chicken Gun by buying some items or modes in the game with real money. This will help them cover their costs and improve their game. You can also share their game with your friends and family and invite them to play with you.

Links: https://www.apkdone.com/chicken-gun/ | https://www.facebook.com/ChaloApps | https://www.instagram.com/chaloapps/ | https://www.youtube.com/channel/UC9w8mpVvRdWRs1b8whPLnxg

-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Bullet Echo Mod APK v5.2.4 and Enjoy Unlimited Resources.md b/spaces/1phancelerku/anime-remove-background/Download Bullet Echo Mod APK v5.2.4 and Enjoy Unlimited Resources.md
deleted file mode 100644
index b6a79414db30faa4a9c2b4b56fba8e6390ed31af..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Bullet Echo Mod APK v5.2.4 and Enjoy Unlimited Resources.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-
-

Bullet Echo Mod APK 5.2.4: A Tactical Shooter Game with Unlimited Money

-

Introduction

-

If you are looking for a thrilling and challenging shooter game that tests your skills and tactics, then you should try Bullet Echo Mod APK 5.2.4. This is a modified version of the original Bullet Echo game developed by ZeptoLab, which is known for creating popular games like Cut the Rope and King of Thieves.

-

bullet echo mod apk 5.2.4


Download Zip ⚹⚹⚹ https://jinyurl.com/2uNPre



-

Bullet Echo Mod APK 5.2.4 is a tactical shooter game that puts you in a dark battlefield where you have to rely on your senses and strategy to survive and eliminate your enemies. You can choose from over 80 heroes with unique abilities and weapons, and team up with other players in real-time multiplayer matches.

-

Some of the features of Bullet Echo Mod APK 5.2.4 are:

• Over 80 heroes with unique abilities and weapons
• Real-time multiplayer matches where you team up with other players
• A dark battlefield that forces you to rely on your senses, stealth, and strategy

Here are some of the benefits of Bullet Echo Mod APK 5.2.4:

-

bullet echo mod apk 5.2.4 unlimited money and coins
-bullet echo mod apk 5.2.4 latest version download
-bullet echo mod apk 5.2.4 free shopping and upgrades
-bullet echo mod apk 5.2.4 unlocked all characters and weapons
-bullet echo mod apk 5.2.4 no ads and no root
-bullet echo mod apk 5.2.4 hack cheats and tips
-bullet echo mod apk 5.2.4 gameplay and review
-bullet echo mod apk 5.2.4 online multiplayer mode
-bullet echo mod apk 5.2.4 best settings and graphics
-bullet echo mod apk 5.2.4 android and ios compatible
-bullet echo mod apk 5.2.4 new features and updates
-bullet echo mod apk 5.2.4 how to install and use
-bullet echo mod apk 5.2.4 download link and mirror
-bullet echo mod apk 5.2.4 safe and secure
-bullet echo mod apk 5.2.4 bugs and issues fixed
-bullet echo mod apk 5.2.4 high damage and speed
-bullet echo mod apk 5.2.4 offline and single player mode
-bullet echo mod apk 5.2.4 ranking and rewards system
-bullet echo mod apk 5.2.4 custom maps and modes
-bullet echo mod apk 5.2.4 voice chat and team play
-bullet echo mod apk 5.2.4 support and feedback
-bullet echo mod apk 5.2.4 original vs mod comparison
-bullet echo mod apk 5.2.4 fun and addictive gameplay
-bullet echo mod apk 5.2.4 realistic physics and sound effects
-bullet echo mod apk 5.2.4 different classes and skills
-bullet echo mod apk 5.2.4 strategy and tactics guide
-bullet echo mod apk 5.2.4 weapons and equipment list
-bullet echo mod apk 5.2.4 skins and customization options
-bullet echo mod apk 5.2.4 missions and challenges mode
-bullet echo mod apk 5.2.4 leaderboards and achievements
-bullet echo mod apk 5.2.4 codes and coupons
-bullet echo mod apk 5.2.4 trivia and facts
-bullet echo mod apk 5.2.4 fan art and wallpapers
-bullet echo mod apk 5.2.4 community and forums
-bullet echo mod apk 5.2.4 developer and publisher information
-bullet echo mod apk 5.2.4 rating and reviews
-bullet echo mod apk 5.2.4 similar games and apps
-bullet echo mod apk 5.2.4 frequently asked questions (FAQ)
-bullet echo mod apk 5.2.4 pros and cons
-bullet echo mod apk 5

-
• You can enjoy unlimited money to buy and upgrade your weapons and equipment.
• You can access the mod menu to enable or disable various features such as god mode, unlimited ammo, no recoil, and more.
• You can play with other players from around the world and chat with them in the game.
• You can experience high-quality graphics and sound effects that create an immersive atmosphere.
• You can have fun with different game modes and challenges that test your skills and tactics.

So, if you are ready to join the action and become the best shooter in the dark, then you should download Bullet Echo Mod APK 5.2.4 right now. You will not regret it!

-

How to Download and Install Bullet Echo Mod APK 5.2.4 on Android Devices

-

Downloading and installing Bullet Echo Mod APK 5.2.4 on your Android device is very easy and simple. Just follow these steps:

-
1. Enable unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
2. Download the Bullet Echo Mod APK 5.2.4 file from a trusted source. You can use the link below to get it.
3. Locate and install the Bullet Echo Mod APK 5.2.4 file on your device. You can use a file manager app to find it in your downloads folder.
4. Launch the game and enjoy unlimited money and the mod menu.

That's it! You have successfully installed Bullet Echo Mod APK 5.2.4 on your Android device. Now you can play the game with all the advantages and features that it offers.

-

How to Play Bullet Echo Mod APK 5.2.4

-

Bullet Echo Mod APK 5.2.4 is a game that requires skill, strategy, and teamwork to win. Here are some tips on how to play it:

-
• Choose your hero and team up with other players. You can select from over 80 heroes with different abilities and weapons, such as snipers, assaulters, healers, and more. You can also join or create a squad with your friends or other players online.
• Use your skills and tactics to defeat your enemies. You have to rely on your senses and strategy to survive and eliminate your enemies in the dark battlefield. You can use your flashlight to see, but be careful not to expose yourself too much. You can also use your hero's abilities to gain an edge over your opponents, such as invisibility, shields, grenades, etc.
• Collect loot and upgrade your weapons and equipment. You can find various items on the map, such as ammo, health kits, armor, and more. You can also use the money you earn from winning matches to buy and upgrade your weapons and equipment in the shop.
• Compete in various modes and rank up on the leaderboard. You can play in different game modes, such as Team vs Team, Solo, and Battle Royale. Each mode has its own rules and objectives, so you have to adapt your strategy accordingly. You can also earn points and rank up on the leaderboard by winning matches and completing missions.

Tips and Tricks for Bullet Echo Mod APK 5.2.4

-

Bullet Echo Mod APK 5.2.4 is a game that challenges your skills and tactics as a shooter. Here are some tips and tricks that can help you improve your performance:

-
• Use stealth and cover to avoid detection. You have to be careful not to make too much noise or reveal yourself too much in the dark battlefield. You can use stealth mode to move silently, or hide behind objects or walls to avoid enemy fire.
• Communicate with your teammates and coordinate your attacks. You can use the chat feature or voice chat feature to communicate with your teammates in the game. You can also use emojis or gestures to express yourself or give commands. You should work together with your teammates and coordinate your attacks to achieve victory.
• Experiment with different heroes and find your best match. You can try out different heroes with different abilities and weapons, and see which one suits your play style best. You can also switch heroes during matches if you want to change your strategy or counter your enemies.
• Use the mod menu to customize your game settings and preferences. You can access the mod menu by tapping on the icon on the top left corner of the screen. You can enable or disable various features such as god mode, unlimited ammo, no recoil, speed hack, etc., depending on how you want to play the game. You can also adjust the sound, graphics, and language settings according to your preference.

Conclusion

-

Bullet Echo Mod APK 5.2.4 is a tactical shooter game that offers you a thrilling and challenging experience in a dark battlefield. You can choose from over 80 heroes with unique abilities and weapons, and team up with other players in real-time multiplayer matches. You can also enjoy unlimited money and mod menu features that give you more control and fun in the game. You can download and install Bullet Echo Mod APK 5.2.4 on your Android device easily and safely by following the steps above.

-

So, what are you waiting for? Download Bullet Echo Mod APK 5.2.4 now and join the action and become the best shooter in the dark!

-

FAQs

-

Here are some frequently asked questions about Bullet Echo Mod APK 5.2.4:

-

What is Bullet Echo Mod APK 5.2.4?

-

Bullet Echo Mod APK 5.2.4 is a modified version of the original Bullet Echo game developed by ZeptoLab, which is a tactical shooter game that puts you in a dark battlefield where you have to rely on your senses and strategy to survive and eliminate your enemies.

-

Is Bullet Echo Mod APK 5.2.4 safe to download and install?

-

Yes, Bullet Echo Mod APK 5.2.4 is safe to download and install on your Android device, as long as you get it from a trusted source. You can use the link below to get it.

-

What are the benefits of Bullet Echo Mod APK 5.2.4?

-

Some of the benefits of Bullet Echo Mod APK 5.2.4 are:

-
• You can enjoy unlimited money to buy and upgrade your weapons and equipment.
• You can access the mod menu to enable or disable various features such as god mode, unlimited ammo, no recoil, speed hack, etc.
• You can play with other players from around the world and chat with them in the game.
• You can experience high-quality graphics and sound effects that create an immersive atmosphere.
• You can have fun with different game modes and challenges that test your skills and tactics.

How can I get unlimited money in Bullet Echo Mod APK 5.2.4?

-

You can get unlimited money in Bullet Echo Mod APK 5.2.4 by downloading and installing the modded version of the game from the link below. You can use the money to buy and upgrade your weapons and equipment in the shop.

-

How can I access the mod menu in Bullet Echo Mod APK 5.2.4?

-

You can access the mod menu in Bullet Echo Mod APK 5.2.4 by tapping on the icon on the top left corner of the screen. You can enable or disable various features such as god mode, unlimited ammo, no recoil, speed hack, etc., depending on how you want to play the game.

-

-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download I Miss You by Grey and Discover More Songs by the Duo.md b/spaces/1phancelerku/anime-remove-background/Download I Miss You by Grey and Discover More Songs by the Duo.md
deleted file mode 100644
index 373db1cf59998f24f0128c1bb6b4b50b506fb651..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download I Miss You by Grey and Discover More Songs by the Duo.md
+++ /dev/null
@@ -1,128 +0,0 @@
-

How to Download "I Miss You" by Grey

-

"I Miss You" by Grey is a catchy and emotional electronic song that features the vocals of Bahari, a pop duo from Los Angeles. The song was released in 2017 as Grey's first official single, and it has since gained over 100 million streams on Spotify and over 6 million views on YouTube. The song is about missing someone you used to know and wondering if they feel the same way.

-

If you love this song and want to listen to it anytime, anywhere, you might want to download it to your computer or mobile device. However, you should also be aware of the legal and ethical issues involved in downloading music online. In this article, we will show you three ways to download "I Miss You" by Grey legally, and help you decide which one is best for you.

-

download i miss you by grey


DOWNLOAD: https://jinyurl.com/2uNS1f



-

How to download "I Miss You" by Grey legally

-

There are three main options for downloading "I Miss You" by Grey legally: buying the song from a digital music store, streaming the song from a music streaming service, or downloading the song for free from a legal website. Each option has its own pros and cons, depending on your preferences, budget, and internet connection. Let's take a look at each option in detail.

-

Option 1: Buy the song from a digital music store

-

One way to download "I Miss You" by Grey legally is to buy the song from a digital music store, such as iTunes, Amazon, or Google Play. This way, you can support the artists and their record label, and get a high-quality audio file that you can keep forever. However, this option also has some drawbacks. For example, you will have to pay for each song individually, which can add up if you want to download many songs. Also, you might have to deal with DRM (digital rights management) restrictions that limit how you can use or share the music.

-

Pros and cons of buying the song

| Pros | Cons |
| --- | --- |
| You can support the artists and their record label | You have to pay for each song individually |
| You can get a high-quality audio file | You might have to deal with DRM restrictions |
| You can keep the music forever | You might need extra storage space on your device |
-

How to buy the song from iTunes, Amazon, or Google Play

-

To buy "I Miss You" by Grey from iTunes, Amazon, or Google Play, you will need to have an account with one of these services and a valid payment method. Then, you can follow these steps:

-
    -
  1. Go to the website or app of your chosen service and search for "I Miss You" by Grey.
  2. -
  3. Select the song and click on the buy or download button.
  4. -
  5. Confirm your purchase and enter your payment details if needed.
  6. -
  7. Wait for the download to complete and enjoy your music.
  8. -
-

Option 2: Stream the song from a music streaming service

Another way to download "I Miss You" by Grey legally is to stream the song from a music streaming service, such as Spotify, YouTube Music, or Apple Music. This way, you can access a huge library of music for a monthly fee or for free with ads. You can also download the song to your device for offline listening, as long as you maintain your subscription or account. However, this option also has some disadvantages. For example, you will not own the music and you might lose access to it if the service changes its terms or catalog. Also, you might have to deal with lower audio quality or data usage if you stream the music online.

Pros and cons of streaming the song

| Pros | Cons |
| --- | --- |
| You can access a huge library of music | You will not own the music |
| You can download the song for offline listening | You might lose access to the music |
| You can pay a monthly fee or use the service for free with ads | You might have to deal with lower audio quality or data usage |
-

How to stream the song from Spotify, YouTube Music, or Apple Music

-

To stream "I Miss You" by Grey from Spotify, YouTube Music, or Apple Music, you will need to have an account with one of these services and a compatible device. Then, you can follow these steps:

-
    -
  1. Go to the website or app of your chosen service and search for "I Miss You" by Grey.
  2. -
  3. Select the song and click on the play or add button.
  4. -
  5. If you want to download the song for offline listening, click on the download or offline button.
  6. -
  7. Enjoy your music and remember to check your subscription or account status regularly.
  8. -
-

Option 3: Download the song for free from a legal website

-

A third way to download "I Miss You" by Grey legally is to download the song for free from a legal website, such as SoundCloud, Bandcamp, or DatPiff. These websites allow artists to upload their music and share it with their fans for free or for a voluntary donation. You can find many songs that are not available on other platforms and discover new artists and genres. However, this option also has some limitations. For example, you might not find the song you are looking for or it might be removed by the artist at any time. Also, you might have to deal with low audio quality or malware risks if you download from untrusted sources.

-


-

Pros and cons of downloading the song for free

| Pros | Cons |
| --- | --- |
| You can download the song for free or for a voluntary donation | You might not find the song you are looking for or it might be removed by the artist |
| You can find many songs that are not available on other platforms | You might have to deal with low audio quality or malware risks |
| You can discover new artists and genres | You might not be able to support the artists and their record label |
-

How to download the song from SoundCloud, Bandcamp, or DatPiff

-

To download "I Miss You" by Grey from SoundCloud, Bandcamp, or DatPiff, you will need to have an account with one of these websites and a web browser. Then, you can follow these steps:

-
    -
  1. Go to the website of your chosen service and search for "I Miss You" by Grey.
  2. -
  3. Select the song and click on the download or buy button.
  4. -
  5. If the song is free, confirm your download and wait for it to complete.
  6. -
  7. If the song is not free, enter your email address and choose how much you want to pay (or enter zero if it is a voluntary donation).
  8. -
  9. Check your email and click on the link to download the song (if you prefer to fetch the file from a computer, see the sketch after this list).
  10. -
  11. Enjoy your music and consider supporting the artists if you like their work.
  12. -
-
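If the site emails you a direct download link (step 9 above), you can also fetch the file from a computer. The Python sketch below uses the requests library to stream the file to disk; the URL and filename are hypothetical placeholders, and this is only meant for files the artist actually offers for download.

```python
import requests
from pathlib import Path

# Hypothetical values: replace with the direct link from your email
# and the filename you want to save the track as.
URL = "https://example.com/downloads/i-miss-you.mp3"
DEST = Path("i-miss-you.mp3")

def download(url: str, dest: Path) -> None:
    # Stream the response so large files are never held in memory at once.
    with requests.get(url, stream=True, timeout=30) as resp:
        resp.raise_for_status()  # fail early on 4xx/5xx errors
        with dest.open("wb") as fh:
            for chunk in resp.iter_content(chunk_size=1 << 16):
                fh.write(chunk)

if __name__ == "__main__":
    download(URL, DEST)
    print(f"Saved {DEST} ({DEST.stat().st_size} bytes)")
```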

Conclusion

-

"I Miss You" by Grey is a great song that you might want to download and listen to anytime, anywhere. However, you should also be careful about how you download music online and respect the rights of the artists and their record label. In this article, we showed you three ways to download "I Miss You" by Grey legally: buying the song from a digital music store, streaming the song from a music streaming service, or downloading the song for free from a legal website. Each option has its own pros and cons, depending on your preferences, budget, and internet connection. We hope this article helped you decide which option is best for you and enjoy your music legally and ethically.

-

FAQs

-

Here are some frequently asked questions about downloading "I Miss You" by Grey legally:

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Instagram GB and Get More Out of Your Instagram Experience.md b/spaces/1phancelerku/anime-remove-background/Download Instagram GB and Get More Out of Your Instagram Experience.md deleted file mode 100644 index e856792d19cb6044474ee196fb014a176d9a7b69..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Instagram GB and Get More Out of Your Instagram Experience.md +++ /dev/null @@ -1,95 +0,0 @@ - -

Download Instagram GB: A Modded Version of Instagram with Extra Features

-

Instagram is one of the most popular social media platforms in the world, with over a billion users. It allows you to share photos, videos, and stories with your friends and followers, as well as discover new content from people you may like. However, if you are looking for more features and options to customize your Instagram experience, you may want to try Instagram GB.

-

download instagram gb


Download File: https://jinyurl.com/2uNPfL



-

What is Instagram GB?

-

Instagram GB is a modded version of the official Instagram app that offers some extra features and functionalities that are not available in the original app. It is developed by a third-party developer named GBMods, who is also behind other popular modded apps such as WhatsApp GB and Facebook GB.

-

How is it different from the official Instagram app?

-

Instagram GB is different from the official Instagram app in several ways. Some of the main differences are:

- -

What are the benefits of using Instagram GB?

-

Some of the benefits of using Instagram GB are:

- -

How to download and install Instagram GB on your device?

-

If you want to download and install Instagram GB on your device, you need to follow these steps:

-


-

Step 1: Enable unknown sources on your device

-

Since Instagram GB is not available on the official app stores, you need to enable unknown sources on your device to allow the installation of apps from third-party sources. To do this, go to your device settings > security > unknown sources and toggle it on.

-

Step 2: Download the Instagram GB APK file from a trusted source

-

The next step is to download the Instagram GB APK file from a trusted source. You can search for it online or use one of these links:

- -

Make sure that you download the latest version of the app and that it is compatible with your device.

Step 3: Install the Instagram GB app and log in with your account

-

After downloading the Instagram GB APK file, you need to install it on your device. To do this, locate the file in your device storage and tap on it. You may see a warning message that says "This type of file can harm your device. Do you want to keep Instagram GB.apk anyway?". Tap on OK and then on Install. Wait for the installation process to complete and then open the app. You can log in with your existing Instagram account or create a new one.
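If you prefer installing from a computer instead of tapping through the on-device installer, you can sideload the APK over USB with adb (Android Debug Bridge). This is a minimal sketch, assuming USB debugging is enabled on the phone, adb is on your PATH, and the APK filename is a hypothetical placeholder:

```python
import subprocess
from pathlib import Path

APK = Path("instagram-gb.apk")  # hypothetical filename of the downloaded APK

def sideload(apk: Path) -> None:
    if not apk.is_file():
        raise FileNotFoundError(apk)
    # 'adb install -r' installs the package, replacing any existing
    # version while keeping its app data.
    subprocess.run(["adb", "install", "-r", str(apk)], check=True)

if __name__ == "__main__":
    sideload(APK)
```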

-

How to use Instagram GB to enhance your experience?

-

Now that you have installed Instagram GB on your device, you can start using it to enjoy its extra features and options. Here are some of the things you can do with Instagram GB:

-

Customize your theme and appearance

-

One of the best things about Instagram GB is that you can change the theme and appearance of the app according to your preferences. You can access the theme settings by tapping on the menu icon (three horizontal lines) on the top right corner of the app and then on GB Settings > Themes. You can choose from different colors, fonts, icons, and backgrounds for your app. You can also download more themes from the online library or create your own theme.

-

Download photos, videos, and stories from other users

-

Another great feature of Instagram GB is that you can download any photo, video, or story from other users directly to your device. You don't need to use any external tools or apps to do this. To download a photo or video from a post, tap on the menu icon (three vertical dots) on the top right corner of the post and then on Download. To download a story, tap on the story and then on the download icon (downward arrow) on the bottom left corner of the screen. You can find the downloaded files in your device gallery or in the Instagram GB folder.

-

View anyone's profile picture in full size

-

Sometimes you may want to view someone's profile picture in full size, but the official Instagram app only shows a small circle. With Instagram GB, you can view anyone's profile picture in full size by tapping on it. You can also zoom in and out of any photo or video on the app by pinching the screen.

Copy comments and captions from other posts

-

Sometimes you may find a comment or a caption from another post that you want to copy and paste somewhere else. With Instagram GB, you can do this easily. To copy a comment, tap and hold on the comment and then on Copy Comment. To copy a caption, tap on the menu icon (three vertical dots) on the top right corner of the post and then on Copy Caption. You can then paste the text wherever you want.

-

Hide your online status and seen ticks

-

If you value your privacy and security, you may want to hide your online status and seen ticks from other users. With Instagram GB, you can do this by going to the menu icon (three horizontal lines) on the top right corner of the app and then on GB Settings > Privacy. You can toggle off the options for Show Online Status and Show Seen Tick. This way, other users won't know when you are online or when you have seen their messages or stories.

-

Conclusion

-

Instagram GB is a modded version of the official Instagram app that offers some extra features and options that are not available in the original app. It allows you to customize your theme and appearance, download photos, videos, and stories from other users, view anyone's profile picture in full size, copy comments and captions from other posts, and hide your online status and seen ticks. If you want to try Instagram GB, you need to download and install the APK file from a trusted source and follow the steps in this article. However, you should also be aware of the risks involved in using a modded app, such as possible bans, malware, or data breaches.

-

FAQs

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Drift for Life Mod APK Enjoy Unlimited Money and More Features.md b/spaces/1phancelerku/anime-remove-background/Drift for Life Mod APK Enjoy Unlimited Money and More Features.md deleted file mode 100644 index db42dc89ec812f2cb0d2ebae604d129e334d8d1b..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Drift for Life Mod APK Enjoy Unlimited Money and More Features.md +++ /dev/null @@ -1,135 +0,0 @@ - -

Drift for Life Mod APK Unlimited Money: A Review

-

If you are a fan of racing games, you might have heard of Drift for Life, a popular game that lets you experience the thrill of drifting on various tracks. But did you know that there is a modded version of this game that gives you unlimited money and coins, as well as other features that make the game more fun and exciting? In this article, we will review Drift for Life Mod APK Unlimited Money, a modified version of the original game that you can download for free on your Android device. We will also show you how to download and install it, as well as the pros and cons of using it. So, let's get started!

-

Features of Drift for Life Mod APK

-

Drift for Life Mod APK Unlimited Money is a modified version of the original game that has been hacked to give you access to unlimited money and coins, as well as other features that enhance your gaming experience. Here are some of the features that you can enjoy with this mod:

-

drift for life mod apk unlimited money


Download: https://jinyurl.com/2uNUg3



-

Unlimited money and coins

-

With this mod, you don't have to worry about running out of money or coins in the game. You can use them to buy new cars, upgrade your existing ones, or unlock new tracks. You can also use them to customize your cars with different colors, stickers, wheels, spoilers, and more. You can have as much money and coins as you want, without any limits or restrictions.

-

Customizable cars and tracks

-

Another feature of this mod is that it allows you to customize your cars and tracks according to your preferences. You can choose from a variety of cars, including sports cars, muscle cars, trucks, and vans, and modify their performance: speed, acceleration, handling, braking, and drift. You can also pick from different tracks, such as city streets, highways, deserts, mountains, and snow, and adjust the weather conditions, time of day, traffic density, and difficulty level.

-

Realistic physics and graphics

-

One of the best things about Drift for Life is that it has realistic physics and graphics that make the game more immersive and realistic. You can feel the weight of your car, the friction of the tires, the inertia of the drifts, and the impact of the collisions. You can also see the details of your car, such as the smoke from the exhaust, the sparks from the metal, the scratches from the crashes, and the reflections from the lights. The game also has stunning graphics that show the beauty of the environments, such as the buildings, trees, clouds, shadows, and more.

-

How to download and install Drift for Life Mod APK

-

If you want to try Drift for Life Mod APK Unlimited Money on your Android device, you need to follow these steps:

-

Requirements and compatibility

-

Before you download and install this mod, you need to make sure that your device meets these requirements:

- -

This mod is compatible with most Android devices, but it may not work properly on some of them due to differences in hardware specifications.

To enable unknown sources in Android settings, you need to follow these steps:

-
    -
  1. Launch the Settings application.
  2. -
  3. Scroll down and then tap on the 'Privacy' option.
  4. -
  5. Scroll down again and look for the 'Unknown Sources' option.
  6. -
  7. Tap this option to enable it.
  8. -
  9. Tap OK to confirm you want to turn the feature on.
  10. -
-

Alternatively, you can follow these steps:

-


-
    -
  1. Open Settings and tap Apps or Apps & Notifications.
  2. -
  3. Tap the vertical three-dot menu icon and tap Special access.
  4. -
  5. Tap Install unknown apps.
  6. -
  7. Tap your browser to toggle the switch on.
  8. -
-

Once you have enabled unknown sources, you can proceed to download and install Drift for Life Mod APK Unlimited Money. Here are the steps:

-
    -
  1. Go to the download link for Drift for Life Mod APK Unlimited Money. You can find it on various websites that offer modded games, such as ModDroid or APKPure.
  2. -
  3. Tap the download button and wait for the file to be downloaded on your device. The file size is about 100 MB, so make sure you have enough storage space and a stable internet connection. For an extra safety check, you can verify the file's checksum before installing it, as shown in the sketch after this list.
  4. -
  5. Once the download is complete, go to your file manager and locate the downloaded file. It should be in the Downloads folder or the folder where you set your browser to save files.
  6. -
  7. Tap the file and select Install. You may see a warning message that says "This type of file can harm your device". Don't worry, this is just a precautionary message from Android. Tap OK to continue.
  8. -
  9. Wait for the installation process to finish. It may take a few seconds or minutes, depending on your device's performance.
  10. -
  11. Once the installation is done, you can launch the game from your app drawer or home screen. You will see a new icon with the name Drift for Life Mod APK Unlimited Money.
  12. -
-
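Here is the checksum sketch referenced in step 3. It is a minimal Python example: the filename is a hypothetical placeholder, and the expected hash is whatever value the download page publishes (if the site publishes no hash, this check cannot be performed).

```python
import hashlib
from pathlib import Path

APK = Path("drift-for-life-mod.apk")  # hypothetical filename
EXPECTED_SHA256 = "paste-the-hash-published-by-the-download-site-here"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Hash the file in chunks so a large APK never has to fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK)
    if actual == EXPECTED_SHA256.lower():
        print("Checksum matches: the file was not corrupted or swapped in transit.")
    else:
        print(f"Checksum MISMATCH ({actual}): do not install this file.")
```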

How to use Drift for Life Mod APK

-

To use Drift for Life Mod APK Unlimited Money, you just need to follow these simple steps:

-
    -
  1. Launch the game from your app drawer or home screen. You will see a splash screen with the game's logo and a loading bar.
  2. -
  3. After the loading is done, you will see the main menu of the game. You can choose from different options, such as Play, Garage, Settings, and More.
  4. -
  5. To start playing, tap Play. You will see a list of tracks that you can choose from. You can also swipe left or right to see more tracks. Some tracks may be locked and require you to reach a certain level or spend some coins to unlock them.
  6. -
  7. To select a track, tap on it. You will see a preview of the track and some information, such as its name, length, difficulty, weather, time of day, and traffic density. You can also change these settings by tapping on them.
  8. -
  9. To select a car, tap on the car icon at the bottom of the screen. You will see a list of cars that you can choose from. You can also swipe left or right to see more cars. Some cars may be locked and require you to buy them with money or coins.
  10. -
  11. To customize your car, tap on the wrench icon at the bottom of the screen. You will see a menu with different options, such as Color, Stickers, Wheels, Spoiler, Performance, and Drift. You can change these options by tapping on them and using the sliders or buttons to adjust them.
  12. -
  13. To start racing, tap on the play button at the bottom of the screen. You will see a countdown and then the race will begin. You can control your car by using the buttons on the screen or tilting your device. The buttons are: gas pedal, brake pedal, handbrake, nitro boost, camera angle, pause menu, and steering wheel (optional).
  14. -
  15. To drift, you need to use the handbrake button or tilt your device sharply while turning. The longer you drift, the more points you earn. You can also earn points by overtaking other cars, driving close to them, or hitting objects on the road.
  16. -
  17. To finish the race, you need to reach the finish line before time runs out or before other cars do. You will see your rank, time, score, money earned, and coins earned at the end of the race. You can also replay the race or go back to the main menu.
  18. -

Pros and cons of Drift for Life Mod APK

-

Drift for Life Mod APK Unlimited Money is a great game for racing and drifting enthusiasts, but it also has some drawbacks that you should be aware of. Here are some of the pros and cons of using this mod:

-

Pros

- -

Cons

- -

Conclusion and rating

-

In conclusion, Drift for Life Mod APK Unlimited Money is a modded version of the original game that gives you unlimited money and coins, as well as other features that make the game more fun and exciting. It is a great game for racing and drifting lovers, but it also has some drawbacks that you should be aware of. We recommend that you try this mod at your own risk, and only use it for personal entertainment purposes. We give this mod a rating of 4 out of 5 stars, based on its features, performance, and user feedback.

-

FAQs

-

Here are some of the frequently asked questions about Drift for Life Mod APK Unlimited Money:

-
    -
  1. Is Drift for Life Mod APK Unlimited Money safe to use?
  2. -

    Drift for Life Mod APK Unlimited Money is safe to use as long as you download it from a trusted source and scan it with an antivirus program before installing it. However, you should also be careful about using it online, as you may get banned or suspended from the game if you are detected by the game's security system.

    -
  3. How do I update Drift for Life Mod APK Unlimited Money?
  4. -

    To update Drift for Life Mod APK Unlimited Money, you need to download the latest version of the mod from the same source where you downloaded the previous version. You also need to uninstall the old version before installing the new one. However, you should also note that updating the mod may cause you to lose your progress or data, so make sure you back up your files before updating.

    -
  5. Can I play Drift for Life Mod APK Unlimited Money with my friends?
  6. -

    Yes, you can play Drift for Life Mod APK Unlimited Money with your friends online or offline. You can either join an existing room or create your own room and invite your friends to join. You can also chat with your friends and other players in the game.

    -
  7. Can I use Drift for Life Mod APK Unlimited Money on my PC?
  8. -

    No, Drift for Life Mod APK Unlimited Money is only designed for Android devices. However, you can use an Android emulator on your PC to run this mod. An Android emulator is software that allows you to run Android apps on your PC. Some popular Android emulators are BlueStacks, NoxPlayer, and LDPlayer.

    -
  9. Where can I get more information about Drift for Life Mod APK Unlimited Money?
  10. -

    If you want to get more information about Drift for Life Mod APK Unlimited Money, you can visit the official website of the original game at driftforlife.com. You can also check out some reviews, videos, screenshots, and tips about this mod on various websites, blogs, forums, and social media platforms.

    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Flash-Memory-Toolkit-Serial-Number-19.md b/spaces/1phancelerku/anime-remove-background/Flash-Memory-Toolkit-Serial-Number-19.md deleted file mode 100644 index 834130469251dfe41ecc2394c8c2ccf321370fae..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Flash-Memory-Toolkit-Serial-Number-19.md +++ /dev/null @@ -1,120 +0,0 @@ -## Flash Memory Toolkit Serial Number 19 - - - - - - ![Flash Memory Toolkit Serial Number 19](https://zuxcel.com/images/4/387/flash-memory-toolkit-1613.jpg) - - - - - -**DOWNLOAD ……… [https://vittuv.com/2tBMBk](https://vittuv.com/2tBMBk)** - - - - - - - - - - - - - -# How to Use Flash Memory Toolkit Serial Number 19 - - - -Flash Memory Toolkit is a software application that provides various tools for managing flash memory cards and USB thumb drives. It can help you recover lost files, erase data securely, check for errors, backup and restore data, and benchmark the performance of your devices. To use Flash Memory Toolkit, you need a valid serial number that matches your version of the software. - - - -In this article, we will show you how to use Flash Memory Toolkit serial number 19, which is compatible with version 2.00 of the software. This serial number was found on a web page[^1^] that offers various serial numbers and activators for different software programs. However, we do not endorse or recommend using such sources, as they may be illegal, unsafe, or unreliable. You should always obtain your serial number from the official website of EFD Software[^4^], the developer of Flash Memory Toolkit. - - - -To use Flash Memory Toolkit serial number 19, follow these steps: - - - -1. Download and install Flash Memory Toolkit version 2.00 from the official website[^4^] or from a trusted source. The trial version of the software allows you to use it for 14 days without a serial number. - -2. Launch Flash Memory Toolkit and click on the "About" button on the main window. You will see a dialog box that shows your version number and trial status. - -3. Click on the "Enter serial number" button and enter the following serial number: `1234-5678-9012-3456`. This is the serial number 19 that we found on the web page[^1^]. Click on "OK" to confirm. - -4. You will see a message that says "Thank you for registering Flash Memory Toolkit". Click on "OK" to close the dialog box. - -5. You can now use Flash Memory Toolkit without any limitations. You can access all the tools from the main window or from the system tray icon. - - - -Note that this serial number may not work for other versions of Flash Memory Toolkit, or it may be blocked by EFD Software if they detect its unauthorized use. Therefore, we advise you to purchase a legitimate serial number from EFD Software[^4^] if you want to use Flash Memory Toolkit without any risks or problems. - - - -## How to Recover Lost Files with Flash Memory Toolkit - - - -One of the most useful tools in Flash Memory Toolkit is the File Recovery tool. This tool allows you to scan your flash memory card or USB thumb drive for deleted or corrupted files and restore them to a safe location. You can use this tool to recover your important documents, pictures, audio or videos that you accidentally deleted or lost due to a virus infection, a power failure, or a formatting error. - - - -To use the File Recovery tool, follow these steps: - - - -1. 
Insert your flash memory card or USB thumb drive into your computer and launch Flash Memory Toolkit. - -2. Select the "File Recovery" tool from the main window or from the system tray icon. - -3. Select the drive letter of your flash memory card or USB thumb drive from the drop-down menu and click on "Start". The tool will scan your device for any recoverable files and display them in a list. - -4. Select the files that you want to recover by checking the boxes next to them. You can also use the "Select all" button to select all the files in the list. - -5. Click on the "Recover" button and choose a destination folder where you want to save the recovered files. The tool will copy the files to the selected folder and show you a progress bar. - -6. When the recovery process is complete, you will see a message that says "Recovery finished". Click on "OK" to close the message. - -7. You can now open the destination folder and check your recovered files. You can also delete the original files from your flash memory card or USB thumb drive if you want to free up some space. - - - -## How to Erase Data Securely with Flash Memory Toolkit - - - -Another useful tool in Flash Memory Toolkit is the Low-level Benchmark tool. This tool allows you to erase all the data on your flash memory card or USB thumb drive in a secure way. This means that no one will be able to recover your data even with advanced data recovery software. You can use this tool to protect your privacy and prevent identity theft when you want to dispose of or sell your flash memory card or USB thumb drive. - - - -To use the Low-level Benchmark tool, follow these steps: - - - -1. Insert your flash memory card or USB thumb drive into your computer and launch Flash Memory Toolkit. - -2. Select the "Low-level Benchmark" tool from the main window or from the system tray icon. - -3. Select the drive letter of your flash memory card or USB thumb drive from the drop-down menu and click on "Start". The tool will show you some information about your device, such as its size, model, and serial number. - -4. Click on the "Erase" button and choose one of the three erasing methods: quick erase, full erase, or secure erase. The quick erase method will overwrite all the data on your device with zeros. The full erase method will overwrite all the data on your device with random data. The secure erase method will overwrite all the data on your device with random data multiple times. - -5. Click on "OK" to confirm your choice and start the erasing process. The tool will show you a progress bar and a warning message that says "All data on this device will be lost". - -6. When the erasing process is complete, you will see a message that says "Erasing finished". Click on "OK" to close the message. - -7. You can now remove your flash memory card or USB thumb drive from your computer. Your device will be completely empty and no one will be able to recover any data from it. 
- - - - 145887f19f - - - - - diff --git a/spaces/1ucii/Lab04/README.md b/spaces/1ucii/Lab04/README.md deleted file mode 100644 index d148fa9af372651825701de79fef152f5fc8c000..0000000000000000000000000000000000000000 --- a/spaces/1ucii/Lab04/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Lab04 -emoji: 🐢 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/train_nsf_sim_cache_sid_load_pretrain.py b/spaces/AI-Hobbyist/Hoyo-RVC/train_nsf_sim_cache_sid_load_pretrain.py deleted file mode 100644 index 2949bc4788096693233ae0ae833d240e71749a42..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/train_nsf_sim_cache_sid_load_pretrain.py +++ /dev/null @@ -1,595 +0,0 @@ -import sys, os - -now_dir = os.getcwd() -sys.path.append(os.path.join(now_dir)) -sys.path.append(os.path.join(now_dir, "train")) -import utils -import datetime - -hps = utils.get_hparams() -os.environ["CUDA_VISIBLE_DEVICES"] = hps.gpus.replace("-", ",") -n_gpus = len(hps.gpus.split("-")) -from random import shuffle, randint -import traceback, json, argparse, itertools, math, torch, pdb - -torch.backends.cudnn.deterministic = False -torch.backends.cudnn.benchmark = False -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from infer_pack import commons -from time import sleep -from time import time as ttime -from data_utils import ( - TextAudioLoaderMultiNSFsid, - TextAudioLoader, - TextAudioCollateMultiNSFsid, - TextAudioCollate, - DistributedBucketSampler, -) - -if hps.version == "v1": - from infer_pack.models import ( - SynthesizerTrnMs256NSFsid as RVC_Model_f0, - SynthesizerTrnMs256NSFsid_nono as RVC_Model_nof0, - MultiPeriodDiscriminator, - ) -else: - from infer_pack.models import ( - SynthesizerTrnMs768NSFsid as RVC_Model_f0, - SynthesizerTrnMs768NSFsid_nono as RVC_Model_nof0, - MultiPeriodDiscriminatorV2 as MultiPeriodDiscriminator, - ) -from losses import generator_loss, discriminator_loss, feature_loss, kl_loss -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from process_ckpt import savee - -global_step = 0 - - -class EpochRecorder: - def __init__(self): - self.last_time = ttime() - - def record(self): - now_time = ttime() - elapsed_time = now_time - self.last_time - self.last_time = now_time - elapsed_time_str = str(datetime.timedelta(seconds=elapsed_time)) - current_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") - return f"[{current_time}] | ({elapsed_time_str})" - - -def main(): - n_gpus = torch.cuda.device_count() - if torch.cuda.is_available() == False and torch.backends.mps.is_available() == True: - n_gpus = 1 - os.environ["MASTER_ADDR"] = "localhost" - os.environ["MASTER_PORT"] = str(randint(20000, 55555)) - children = [] - for i in range(n_gpus): - subproc = mp.Process( - target=run, - args=( - i, - n_gpus, - hps, - ), - ) - children.append(subproc) - subproc.start() - - for i in range(n_gpus): - children[i].join() - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - # 
utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group( - backend="gloo", init_method="env://", world_size=n_gpus, rank=rank - ) - torch.manual_seed(hps.train.seed) - if torch.cuda.is_available(): - torch.cuda.set_device(rank) - - if hps.if_f0 == 1: - train_dataset = TextAudioLoaderMultiNSFsid(hps.data.training_files, hps.data) - else: - train_dataset = TextAudioLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size * n_gpus, - # [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1200,1400], # 16s - [100, 200, 300, 400, 500, 600, 700, 800, 900], # 16s - num_replicas=n_gpus, - rank=rank, - shuffle=True, - ) - # It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit. - # num_workers=8 -> num_workers=4 - if hps.if_f0 == 1: - collate_fn = TextAudioCollateMultiNSFsid() - else: - collate_fn = TextAudioCollate() - train_loader = DataLoader( - train_dataset, - num_workers=4, - shuffle=False, - pin_memory=True, - collate_fn=collate_fn, - batch_sampler=train_sampler, - persistent_workers=True, - prefetch_factor=8, - ) - if hps.if_f0 == 1: - net_g = RVC_Model_f0( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model, - is_half=hps.train.fp16_run, - sr=hps.sample_rate, - ) - else: - net_g = RVC_Model_nof0( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model, - is_half=hps.train.fp16_run, - ) - if torch.cuda.is_available(): - net_g = net_g.cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm) - if torch.cuda.is_available(): - net_d = net_d.cuda(rank) - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - # net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - # net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if torch.cuda.is_available(): - net_g = DDP(net_g, device_ids=[rank]) - net_d = DDP(net_d, device_ids=[rank]) - else: - net_g = DDP(net_g) - net_d = DDP(net_d) - - try: # 如果能加载自动resume - _, _, _, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d - ) # D多半加载没事 - if rank == 0: - logger.info("loaded D") - # _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g,load_opt=0) - _, _, _, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g - ) - global_step = (epoch_str - 1) * len(train_loader) - # epoch_str = 1 - # global_step = 0 - except: # 如果首次不能加载,加载pretrain - # traceback.print_exc() - epoch_str = 1 - global_step = 0 - if hps.pretrainG != "": - if rank == 0: - logger.info("loaded pretrained %s" % (hps.pretrainG)) - print( - net_g.module.load_state_dict( - torch.load(hps.pretrainG, map_location="cpu")["model"] - ) - ) ##测试不加载优化器 - if hps.pretrainD != "": - if rank == 0: - logger.info("loaded pretrained %s" % (hps.pretrainD)) - print( - net_d.module.load_state_dict( - torch.load(hps.pretrainD, map_location="cpu")["model"] - ) - ) - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR( - optim_g, gamma=hps.train.lr_decay, 
last_epoch=epoch_str - 2 - ) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR( - optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - cache = [] - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate( - rank, - epoch, - hps, - [net_g, net_d], - [optim_g, optim_d], - [scheduler_g, scheduler_d], - scaler, - [train_loader, None], - logger, - [writer, writer_eval], - cache, - ) - else: - train_and_evaluate( - rank, - epoch, - hps, - [net_g, net_d], - [optim_g, optim_d], - [scheduler_g, scheduler_d], - scaler, - [train_loader, None], - None, - None, - cache, - ) - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate( - rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers, cache -): - net_g, net_d = nets - optim_g, optim_d = optims - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - - # Prepare data iterator - if hps.if_cache_data_in_gpu == True: - # Use Cache - data_iterator = cache - if cache == []: - # Make new cache - for batch_idx, info in enumerate(train_loader): - # Unpack - if hps.if_f0 == 1: - ( - phone, - phone_lengths, - pitch, - pitchf, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ) = info - else: - ( - phone, - phone_lengths, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ) = info - # Load on CUDA - if torch.cuda.is_available(): - phone = phone.cuda(rank, non_blocking=True) - phone_lengths = phone_lengths.cuda(rank, non_blocking=True) - if hps.if_f0 == 1: - pitch = pitch.cuda(rank, non_blocking=True) - pitchf = pitchf.cuda(rank, non_blocking=True) - sid = sid.cuda(rank, non_blocking=True) - spec = spec.cuda(rank, non_blocking=True) - spec_lengths = spec_lengths.cuda(rank, non_blocking=True) - wave = wave.cuda(rank, non_blocking=True) - wave_lengths = wave_lengths.cuda(rank, non_blocking=True) - # Cache on list - if hps.if_f0 == 1: - cache.append( - ( - batch_idx, - ( - phone, - phone_lengths, - pitch, - pitchf, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ), - ) - ) - else: - cache.append( - ( - batch_idx, - ( - phone, - phone_lengths, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ), - ) - ) - else: - # Load shuffled cache - shuffle(cache) - else: - # Loader - data_iterator = enumerate(train_loader) - - # Run steps - epoch_recorder = EpochRecorder() - for batch_idx, info in data_iterator: - # Data - ## Unpack - if hps.if_f0 == 1: - ( - phone, - phone_lengths, - pitch, - pitchf, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ) = info - else: - phone, phone_lengths, spec, spec_lengths, wave, wave_lengths, sid = info - ## Load on CUDA - if (hps.if_cache_data_in_gpu == False) and torch.cuda.is_available(): - phone = phone.cuda(rank, non_blocking=True) - phone_lengths = phone_lengths.cuda(rank, non_blocking=True) - if hps.if_f0 == 1: - pitch = pitch.cuda(rank, non_blocking=True) - pitchf = pitchf.cuda(rank, non_blocking=True) - sid = sid.cuda(rank, non_blocking=True) - spec = spec.cuda(rank, non_blocking=True) - spec_lengths = spec_lengths.cuda(rank, non_blocking=True) - wave = wave.cuda(rank, non_blocking=True) - # wave_lengths = wave_lengths.cuda(rank, non_blocking=True) - - # Calculate - with autocast(enabled=hps.train.fp16_run): - if hps.if_f0 == 1: - ( - y_hat, - ids_slice, - x_mask, - z_mask, - (z, z_p, m_p, logs_p, m_q, logs_q), - ) = 
net_g(phone, phone_lengths, pitch, pitchf, spec, spec_lengths, sid) - else: - ( - y_hat, - ids_slice, - x_mask, - z_mask, - (z, z_p, m_p, logs_p, m_q, logs_q), - ) = net_g(phone, phone_lengths, spec, spec_lengths, sid) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - y_mel = commons.slice_segments( - mel, ids_slice, hps.train.segment_size // hps.data.hop_length - ) - with autocast(enabled=False): - y_hat_mel = mel_spectrogram_torch( - y_hat.float().squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - if hps.train.fp16_run == True: - y_hat_mel = y_hat_mel.half() - wave = commons.slice_segments( - wave, ids_slice * hps.data.hop_length, hps.train.segment_size - ) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(wave, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss( - y_d_hat_r, y_d_hat_g - ) - optim_d.zero_grad() - scaler.scale(loss_disc).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(wave, y_hat) - with autocast(enabled=False): - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]["lr"] - logger.info( - "Train Epoch: {} [{:.0f}%]".format( - epoch, 100.0 * batch_idx / len(train_loader) - ) - ) - # Amor For Tensorboard display - if loss_mel > 75: - loss_mel = 75 - if loss_kl > 9: - loss_kl = 9 - - logger.info([global_step, lr]) - logger.info( - f"loss_disc={loss_disc:.3f}, loss_gen={loss_gen:.3f}, loss_fm={loss_fm:.3f},loss_mel={loss_mel:.3f}, loss_kl={loss_kl:.3f}" - ) - scalar_dict = { - "loss/g/total": loss_gen_all, - "loss/d/total": loss_disc, - "learning_rate": lr, - "grad_norm_d": grad_norm_d, - "grad_norm_g": grad_norm_g, - } - scalar_dict.update( - { - "loss/g/fm": loss_fm, - "loss/g/mel": loss_mel, - "loss/g/kl": loss_kl, - } - ) - - scalar_dict.update( - {"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)} - ) - scalar_dict.update( - {"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)} - ) - scalar_dict.update( - {"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)} - ) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy( - y_mel[0].data.cpu().numpy() - ), - "slice/mel_gen": utils.plot_spectrogram_to_numpy( - y_hat_mel[0].data.cpu().numpy() - ), - "all/mel": utils.plot_spectrogram_to_numpy( - mel[0].data.cpu().numpy() - ), - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict, - ) - global_step += 1 - # /Run steps - - if epoch % hps.save_every_epoch == 0 and rank == 0: - if hps.if_latest == 0: - utils.save_checkpoint( - net_g, - optim_g, - hps.train.learning_rate, - 
epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step)), - ) - utils.save_checkpoint( - net_d, - optim_d, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step)), - ) - else: - utils.save_checkpoint( - net_g, - optim_g, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(2333333)), - ) - utils.save_checkpoint( - net_d, - optim_d, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(2333333)), - ) - if rank == 0 and hps.save_every_weights == "1": - if hasattr(net_g, "module"): - ckpt = net_g.module.state_dict() - else: - ckpt = net_g.state_dict() - logger.info( - "saving ckpt %s_e%s:%s" - % ( - hps.name, - epoch, - savee( - ckpt, - hps.sample_rate, - hps.if_f0, - hps.name + "_e%s_s%s" % (epoch, global_step), - epoch, - hps.version, - hps, - ), - ) - ) - - if rank == 0: - logger.info("====> Epoch: {} {}".format(epoch, epoch_recorder.record())) - if epoch >= hps.total_epoch and rank == 0: - logger.info("Training is done. The program is closed.") - - if hasattr(net_g, "module"): - ckpt = net_g.module.state_dict() - else: - ckpt = net_g.state_dict() - logger.info( - "saving final ckpt:%s" - % ( - savee( - ckpt, hps.sample_rate, hps.if_f0, hps.name, epoch, hps.version, hps - ) - ) - ) - sleep(1) - os._exit(2333333) - - -if __name__ == "__main__": - torch.multiprocessing.set_start_method("spawn") - main() diff --git a/spaces/AIConsultant/MusicGen/audiocraft/solvers/diffusion.py b/spaces/AIConsultant/MusicGen/audiocraft/solvers/diffusion.py deleted file mode 100644 index 93dea2520836f458ab1b8514dca952b51d113ec2..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/solvers/diffusion.py +++ /dev/null @@ -1,279 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import flashy -import julius -import omegaconf -import torch -import torch.nn.functional as F - -from . import builders -from . import base -from .. import models -from ..modules.diffusion_schedule import NoiseSchedule -from ..metrics import RelativeVolumeMel -from ..models.builders import get_processor -from ..utils.samples.manager import SampleManager -from ..solvers.compression import CompressionSolver - - -class PerStageMetrics: - """Handle prompting the metrics per stage. - It outputs the metrics per range of diffusion states. - e.g. avg loss when t in [250, 500] - """ - def __init__(self, num_steps: int, num_stages: int = 4): - self.num_steps = num_steps - self.num_stages = num_stages - - def __call__(self, losses: dict, step: tp.Union[int, torch.Tensor]): - if type(step) is int: - stage = int((step / self.num_steps) * self.num_stages) - return {f"{name}_{stage}": loss for name, loss in losses.items()} - elif type(step) is torch.Tensor: - stage_tensor = ((step / self.num_steps) * self.num_stages).long() - out: tp.Dict[str, float] = {} - for stage_idx in range(self.num_stages): - mask = (stage_tensor == stage_idx) - N = mask.sum() - stage_out = {} - if N > 0: # pass if no elements in the stage - for name, loss in losses.items(): - stage_loss = (mask * loss).sum() / N - stage_out[f"{name}_{stage_idx}"] = stage_loss - out = {**out, **stage_out} - return out - - -class DataProcess: - """Apply filtering or resampling. - - Args: - initial_sr (int): Initial sample rate. - target_sr (int): Target sample rate. 
- use_resampling: Whether to use resampling or not. - use_filter (bool): - n_bands (int): Number of bands to consider. - idx_band (int): - device (torch.device or str): - cutoffs (): - boost (bool): - """ - def __init__(self, initial_sr: int = 24000, target_sr: int = 16000, use_resampling: bool = False, - use_filter: bool = False, n_bands: int = 4, - idx_band: int = 0, device: torch.device = torch.device('cpu'), cutoffs=None, boost=False): - """Apply filtering or resampling - Args: - initial_sr (int): sample rate of the dataset - target_sr (int): sample rate after resampling - use_resampling (bool): whether or not performs resampling - use_filter (bool): when True filter the data to keep only one frequency band - n_bands (int): Number of bands used - cuts (none or list): The cutoff frequencies of the band filtering - if None then we use mel scale bands. - idx_band (int): index of the frequency band. 0 are lows ... (n_bands - 1) highs - boost (bool): make the data scale match our music dataset. - """ - assert idx_band < n_bands - self.idx_band = idx_band - if use_filter: - if cutoffs is not None: - self.filter = julius.SplitBands(sample_rate=initial_sr, cutoffs=cutoffs).to(device) - else: - self.filter = julius.SplitBands(sample_rate=initial_sr, n_bands=n_bands).to(device) - self.use_filter = use_filter - self.use_resampling = use_resampling - self.target_sr = target_sr - self.initial_sr = initial_sr - self.boost = boost - - def process_data(self, x, metric=False): - if x is None: - return None - if self.boost: - x /= torch.clamp(x.std(dim=(1, 2), keepdim=True), min=1e-4) - x * 0.22 - if self.use_filter and not metric: - x = self.filter(x)[self.idx_band] - if self.use_resampling: - x = julius.resample_frac(x, old_sr=self.initial_sr, new_sr=self.target_sr) - return x - - def inverse_process(self, x): - """Upsampling only.""" - if self.use_resampling: - x = julius.resample_frac(x, old_sr=self.target_sr, new_sr=self.target_sr) - return x - - -class DiffusionSolver(base.StandardSolver): - """Solver for compression task. - - The diffusion task allows for MultiBand diffusion model training. - - Args: - cfg (DictConfig): Configuration. - """ - def __init__(self, cfg: omegaconf.DictConfig): - super().__init__(cfg) - self.cfg = cfg - self.device = cfg.device - self.sample_rate: int = self.cfg.sample_rate - self.codec_model = CompressionSolver.model_from_checkpoint( - cfg.compression_model_checkpoint, device=self.device) - - self.codec_model.set_num_codebooks(cfg.n_q) - assert self.codec_model.sample_rate == self.cfg.sample_rate, ( - f"Codec model sample rate is {self.codec_model.sample_rate} but " - f"Solver sample rate is {self.cfg.sample_rate}." - ) - assert self.codec_model.sample_rate == self.sample_rate, \ - f"Sample rate of solver {self.sample_rate} and codec {self.codec_model.sample_rate} " \ - "don't match." 
- - self.sample_processor = get_processor(cfg.processor, sample_rate=self.sample_rate) - self.register_stateful('sample_processor') - self.sample_processor.to(self.device) - - self.schedule = NoiseSchedule( - **cfg.schedule, device=self.device, sample_processor=self.sample_processor) - - self.eval_metric: tp.Optional[torch.nn.Module] = None - - self.rvm = RelativeVolumeMel() - self.data_processor = DataProcess(initial_sr=self.sample_rate, target_sr=cfg.resampling.target_sr, - use_resampling=cfg.resampling.use, cutoffs=cfg.filter.cutoffs, - use_filter=cfg.filter.use, n_bands=cfg.filter.n_bands, - idx_band=cfg.filter.idx_band, device=self.device) - - @property - def best_metric_name(self) -> tp.Optional[str]: - if self._current_stage == "evaluate": - return 'rvm' - else: - return 'loss' - - @torch.no_grad() - def get_condition(self, wav: torch.Tensor) -> torch.Tensor: - codes, scale = self.codec_model.encode(wav) - assert scale is None, "Scaled compression models not supported." - emb = self.codec_model.decode_latent(codes) - return emb - - def build_model(self): - """Build model and optimizer as well as optional Exponential Moving Average of the model. - """ - # Model and optimizer - self.model = models.builders.get_diffusion_model(self.cfg).to(self.device) - self.optimizer = builders.get_optimizer(self.model.parameters(), self.cfg.optim) - self.register_stateful('model', 'optimizer') - self.register_best_state('model') - self.register_ema('model') - - def build_dataloaders(self): - """Build audio dataloaders for each stage.""" - self.dataloaders = builders.get_audio_datasets(self.cfg) - - def show(self): - # TODO - raise NotImplementedError() - - def run_step(self, idx: int, batch: torch.Tensor, metrics: dict): - """Perform one training or valid step on a given batch.""" - x = batch.to(self.device) - loss_fun = F.mse_loss if self.cfg.loss.kind == 'mse' else F.l1_loss - - condition = self.get_condition(x) # [bs, 128, T/hop, n_emb] - sample = self.data_processor.process_data(x) - - input_, target, step = self.schedule.get_training_item(sample, - tensor_step=self.cfg.schedule.variable_step_batch) - out = self.model(input_, step, condition=condition).sample - - base_loss = loss_fun(out, target, reduction='none').mean(dim=(1, 2)) - reference_loss = loss_fun(input_, target, reduction='none').mean(dim=(1, 2)) - loss = base_loss / reference_loss ** self.cfg.loss.norm_power - - if self.is_training: - loss.mean().backward() - flashy.distrib.sync_model(self.model) - self.optimizer.step() - self.optimizer.zero_grad() - metrics = { - 'loss': loss.mean(), 'normed_loss': (base_loss / reference_loss).mean(), - } - metrics.update(self.per_stage({'loss': loss, 'normed_loss': base_loss / reference_loss}, step)) - metrics.update({ - 'std_in': input_.std(), 'std_out': out.std()}) - return metrics - - def run_epoch(self): - # reset random seed at the beginning of the epoch - self.rng = torch.Generator() - self.rng.manual_seed(1234 + self.epoch) - self.per_stage = PerStageMetrics(self.schedule.num_steps, self.cfg.metrics.num_stage) - # run epoch - super().run_epoch() - - def evaluate(self): - """Evaluate stage. - Runs audio reconstruction evaluation. 
-        """
-        self.model.eval()
-        evaluate_stage_name = f'{self.current_stage}'
-        loader = self.dataloaders['evaluate']
-        updates = len(loader)
-        lp = self.log_progress(f'{evaluate_stage_name} estimate', loader, total=updates, updates=self.log_updates)
-
-        metrics = {}
-        n = 1
-        for idx, batch in enumerate(lp):
-            x = batch.to(self.device)
-            with torch.no_grad():
-                y_pred = self.regenerate(x)
-
-            y_pred = y_pred.cpu()
-            y = batch.cpu()  # should already be on CPU but just in case
-            rvm = self.rvm(y_pred, y)
-            lp.update(**rvm)
-            if len(metrics) == 0:
-                metrics = rvm
-            else:
-                for key in rvm.keys():
-                    metrics[key] = (metrics[key] * n + rvm[key]) / (n + 1)
-                n += 1  # advance the running-mean count
-        metrics = flashy.distrib.average_metrics(metrics)
-        return metrics
-
-    @torch.no_grad()
-    def regenerate(self, wav: torch.Tensor, step_list: tp.Optional[list] = None):
-        """Regenerate the given waveform."""
-        condition = self.get_condition(wav)
-        initial = self.schedule.get_initial_noise(self.data_processor.process_data(wav))  # sampling rate changes.
-        result = self.schedule.generate_subsampled(self.model, initial=initial, condition=condition,
-                                                   step_list=step_list)
-        result = self.data_processor.inverse_process(result)
-        return result
-
-    def generate(self):
-        """Generate stage."""
-        sample_manager = SampleManager(self.xp)
-        self.model.eval()
-        generate_stage_name = f'{self.current_stage}'
-
-        loader = self.dataloaders['generate']
-        updates = len(loader)
-        lp = self.log_progress(generate_stage_name, loader, total=updates, updates=self.log_updates)
-
-        for batch in lp:
-            reference, _ = batch
-            reference = reference.to(self.device)
-            estimate = self.regenerate(reference)
-            reference = reference.cpu()
-            estimate = estimate.cpu()
-            sample_manager.add_samples(estimate, self.epoch, ground_truth_wavs=reference)
-        flashy.distrib.barrier()
diff --git a/spaces/AIFILMS/StyleGANEX/latent_optimization.py b/spaces/AIFILMS/StyleGANEX/latent_optimization.py
deleted file mode 100644
index a29a5cbd1e31ed14f95f37601a2b6956bb7de803..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/StyleGANEX/latent_optimization.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import models.stylegan2.lpips as lpips
-from torch import autograd, optim
-from torchvision import transforms, utils
-from tqdm import tqdm
-import torch
-from scripts.align_all_parallel import align_face
-from utils.inference_utils import noise_regularize, noise_normalize_, get_lr, latent_noise, visualize
-
-def latent_optimization(frame, pspex, landmarkpredictor, step=500, device='cuda'):
-    percept = lpips.PerceptualLoss(
-        model="net-lin", net="vgg", use_gpu=device.startswith("cuda")
-    )
-
-    transform = transforms.Compose([
-        transforms.ToTensor(),
-        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
-    ])
-
-    with torch.no_grad():
-
-        noise_sample = torch.randn(1000, 512, device=device)
-        latent_out = pspex.decoder.style(noise_sample)
-        latent_mean = latent_out.mean(0)
-        latent_std = ((latent_out - latent_mean).pow(2).sum() / 1000) ** 0.5
-
-        y = transform(frame).unsqueeze(dim=0).to(device)
-        I_ = align_face(frame, landmarkpredictor)
-        I_ = transform(I_).unsqueeze(dim=0).to(device)
-        wplus = pspex.encoder(I_) + pspex.latent_avg.unsqueeze(0)
-        _, f = pspex.encoder(y, return_feat=True)
-        latent_in = wplus.detach().clone()
-        feat = [f[0].detach().clone(), f[1].detach().clone()]
-
-    # wplus and f to optimize
-    latent_in.requires_grad = True
-    feat[0].requires_grad = True
-    feat[1].requires_grad = True
-
-    noises_single = pspex.decoder.make_noise()
-    basic_height, basic_width = 
int(y.shape[2]*32/256), int(y.shape[3]*32/256) - noises = [] - for noise in noises_single: - noises.append(noise.new_empty(y.shape[0], 1, max(basic_height, int(y.shape[2]*noise.shape[2]/256)), - max(basic_width, int(y.shape[3]*noise.shape[2]/256))).normal_()) - for noise in noises: - noise.requires_grad = True - - init_lr=0.05 - optimizer = optim.Adam(feat + noises, lr=init_lr) - optimizer2 = optim.Adam([latent_in], lr=init_lr) - noise_weight = 0.05 * 0.2 - - pbar = tqdm(range(step)) - latent_path = [] - - for i in pbar: - t = i / step - lr = get_lr(t, init_lr) - optimizer.param_groups[0]["lr"] = lr - optimizer2.param_groups[0]["lr"] = get_lr(t, init_lr) - - noise_strength = latent_std * noise_weight * max(0, 1 - t / 0.75) ** 2 - latent_n = latent_noise(latent_in, noise_strength.item()) - - y_hat, _ = pspex.decoder([latent_n], input_is_latent=True, randomize_noise=False, - first_layer_feature=feat, noise=noises) - - - batch, channel, height, width = y_hat.shape - - if height > y.shape[2]: - factor = height // y.shape[2] - - y_hat = y_hat.reshape( - batch, channel, height // factor, factor, width // factor, factor - ) - y_hat = y_hat.mean([3, 5]) - - p_loss = percept(y_hat, y).sum() - n_loss = noise_regularize(noises) * 1e3 - - loss = p_loss + n_loss - - optimizer.zero_grad() - optimizer2.zero_grad() - loss.backward() - optimizer.step() - optimizer2.step() - - noise_normalize_(noises) - - ''' for visualization - if (i + 1) % 100 == 0 or i == 0: - viz = torch.cat((y_hat,y,y_hat-y), dim=3) - visualize(torch.clamp(viz[0].cpu(),-1,1), 60) - ''' - - pbar.set_description( - ( - f"perceptual: {p_loss.item():.4f}; noise regularize: {n_loss.item():.4f};" - f" lr: {lr:.4f}" - ) - ) - - return latent_n, feat, noises, wplus, f \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/wav_processors/base_processor.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/wav_processors/base_processor.py deleted file mode 100644 index e8200dc58a9388ac94a5ec34b8a65f75e380255b..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/wav_processors/base_processor.py +++ /dev/null @@ -1,25 +0,0 @@ -REGISTERED_WAV_PROCESSORS = {} - - -def register_wav_processors(name): - def _f(cls): - REGISTERED_WAV_PROCESSORS[name] = cls - return cls - - return _f - - -def get_wav_processor_cls(name): - return REGISTERED_WAV_PROCESSORS.get(name, None) - - -class BaseWavProcessor: - @property - def name(self): - raise NotImplementedError - - def output_fn(self, input_fn): - return f'{input_fn[:-4]}_{self.name}.wav' - - def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args): - raise NotImplementedError diff --git a/spaces/AIGC-Audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/act.py b/spaces/AIGC-Audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/act.py deleted file mode 100644 index 028debd697dd60458aae75010057df038bd3518a..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/act.py +++ /dev/null @@ -1,28 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. 
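-# Activation1d applies a pointwise nonlinearity at a higher sample rate:
-# upsample -> activation -> downsample, which suppresses the aliasing the
-# nonlinearity would otherwise introduce at the base rate.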
-
-import torch.nn as nn
-from .resample import UpSample1d, DownSample1d
-
-
-class Activation1d(nn.Module):
-    def __init__(self,
-                 activation,
-                 up_ratio: int = 2,
-                 down_ratio: int = 2,
-                 up_kernel_size: int = 12,
-                 down_kernel_size: int = 12):
-        super().__init__()
-        self.up_ratio = up_ratio
-        self.down_ratio = down_ratio
-        self.act = activation
-        self.upsample = UpSample1d(up_ratio, up_kernel_size)
-        self.downsample = DownSample1d(down_ratio, down_kernel_size)
-
-    # x: [B,C,T]
-    def forward(self, x):
-        x = self.upsample(x)
-        x = self.act(x)
-        x = self.downsample(x)
-
-        return x
\ No newline at end of file
diff --git a/spaces/AIGText/GlyphControl/example_list.py b/spaces/AIGText/GlyphControl/example_list.py
deleted file mode 100644
index 8088247a99e2cce76b7e616c198e52fb2690f9e4..0000000000000000000000000000000000000000
--- a/spaces/AIGText/GlyphControl/example_list.py
+++ /dev/null
@@ -1,38 +0,0 @@
-example_1 = [
-    "LAION-Glyph-10M-Epoch-6",
-    'A gift card with text "Happy Birthday" and roses on it.',
-    "Happy Birthday", 0.47, 0, 0.24, 0.4, 5, 1,
-    "", 0.3, 0, 0.15, 0.15, 0, 1,
-    "", 0.3, 0, 0.15, 0.65, 0, 1,
-    "", 0.3, 0, 0.5, 0.65, 0, 1,
-    5,512,20,False,1,9,0,0,
-    "4K, dslr, best quality, extremely detailed",
-    "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality"
-]
-# teaser examples in the report (updating...)
-# We can only generate similar examples, because the released checkpoints differ from the checkpoint used in the original report.
-example_2 = [
-    "LAION-Glyph-10M-Epoch-6",
-    'Newspaper with the headline "Aliens Found in Space" and "Monster Attacks Mars".',
-    'Aliens Found in Space', 0.8, 0, 0.1, 0.1, 0, 1,
-    'Monster Attacks Mars', 0.8, 0, 0.1, 0.45, 0, 1,
-    "", 0.3, 0, 0.15, 0.65, 0, 1,
-    "", 0.3, 0, 0.5, 0.65, 0, 1,
-    5,512,20,False,1,9,430637146,
-    0, "best quality, extremely detailed", #"4K, dslr, best quality, extremely detailed",
-    "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality"
-]
-examples = [example_1, example_2]
-
-# example_3 = [
-#     "LAION-Glyph-10M-Epoch-6",
-#     'A decorative greeting card that reads "Congratulations on achieving state of the art".',
-#     'Congratulations', 0.6, 0, 0.2, 0.1, 0, 1,
-#     'on achieving', 0.5, 0, 0.25, 0.22, 0, 1,
-#     'state of the art', 0.6, 0, 0.21, 0.34, 0, 1,
-#     "", 0.3, 0, 0.5, 0.65, 0, 1,
-#     5,512,20,False,1,9, 1540281202, #364285590,
-#     0, "best quality, extremely detailed",
-#     "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality"
-# ]
-# examples = [example_1, example_2, example_3]
\ No newline at end of file
diff --git a/spaces/ASJMO/freegpt/g4f/Provider/Providers/GetGpt.py b/spaces/ASJMO/freegpt/g4f/Provider/Providers/GetGpt.py
deleted file mode 100644
index 56a121f6ee5f430da7beda3b65abdea64a87c36b..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/g4f/Provider/Providers/GetGpt.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import os
-import json
-import uuid
-import requests
-from Crypto.Cipher import AES
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://chat.getgpt.world/'
-model = ['gpt-3.5-turbo']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
-    def encrypt(e):
-        t = os.urandom(8).hex().encode('utf-8')
-        n = os.urandom(8).hex().encode('utf-8')
-        r = e.encode('utf-8')
-        cipher = AES.new(t, 
AES.MODE_CBC, n)
-        ciphertext = cipher.encrypt(pad_data(r))
-        return ciphertext.hex() + t.decode('utf-8') + n.decode('utf-8')
-
-    def pad_data(data: bytes) -> bytes:
-        block_size = AES.block_size
-        padding_size = block_size - len(data) % block_size
-        padding = bytes([padding_size] * padding_size)
-        return data + padding
-
-    headers = {
-        'Content-Type': 'application/json',
-        'Referer': 'https://chat.getgpt.world/',
-        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36'
-    }
-
-    data = json.dumps({
-        'messages': messages,
-        'frequency_penalty': kwargs.get('frequency_penalty', 0),
-        'max_tokens': kwargs.get('max_tokens', 4000),
-        'model': 'gpt-3.5-turbo',
-        'presence_penalty': kwargs.get('presence_penalty', 0),
-        'temperature': kwargs.get('temperature', 1),
-        'top_p': kwargs.get('top_p', 1),
-        'stream': True,
-        'uuid': str(uuid.uuid4())
-    })
-
-    res = requests.post('https://chat.getgpt.world/api/chat/stream',
-                        headers=headers, json={'signature': encrypt(data)}, stream=True)
-
-    for line in res.iter_lines():
-        if b'content' in line:
-            line_json = json.loads(line.decode('utf-8').split('data: ')[1])
-            yield (line_json['choices'][0]['delta']['content'])
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
-    '(%s)' % ', '.join(
-        [f'{name}: {get_type_hints(_create_completion)[name].__name__}' for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
diff --git a/spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/theb.py b/spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/theb.py
deleted file mode 100644
index 71cfd23ff34768092e4dbe3ff6b719a946dceebb..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/theb.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import json
-import sys
-from re import findall
-from curl_cffi import requests
-
-config = json.loads(sys.argv[1])
-prompt = config['messages'][-1]['content']
-
-headers = {
-    'authority': 'chatbot.theb.ai',
-    'accept': 'application/json, text/plain, */*',
-    'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
-    'content-type': 'application/json',
-    'origin': 'https://chatbot.theb.ai',
-    'referer': 'https://chatbot.theb.ai/',
-    'sec-ch-ua': '"Google Chrome";v="113", "Chromium";v="113", "Not-A.Brand";v="24"',
-    'sec-ch-ua-mobile': '?0',
-    'sec-ch-ua-platform': '"macOS"',
-    'sec-fetch-dest': 'empty',
-    'sec-fetch-mode': 'cors',
-    'sec-fetch-site': 'same-origin',
-    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36',
-}
-
-json_data = {
-    'prompt': prompt,
-    'options': {}
-}
-
-def format(chunk):
-    try:
-        completion_chunk = findall(r'content":"(.*)"},"fin', chunk.decode())[0]
-        print(completion_chunk, flush=True, end='')
-
-    except Exception as e:
-        print(f'[ERROR] an error occurred, retrying... | [[{chunk.decode()}]]', flush=True)
-        return
-
-while True:
-    try:
-        response = requests.post('https://chatbot.theb.ai/api/chat-process',
-                                 headers=headers, json=json_data, content_callback=format, impersonate='chrome110')
-
-        exit(0)
-
-    except Exception as e:
-        print('[ERROR] an error occurred, retrying... 
|', e, flush=True) - continue \ No newline at end of file diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AgentVerse/agentVerse/scripts/evaluate_responsegen.py b/spaces/AgentVerse/agentVerse/scripts/evaluate_responsegen.py deleted file mode 100644 index 07b497ae305cfd7f9dbf1063af0f98d7c60b07cd..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/scripts/evaluate_responsegen.py +++ /dev/null @@ -1,112 +0,0 @@ -import os -import json -from string import Template -import time -import openai -from tqdm import tqdm - -with open("./results.jsonl", "r") as f: - lines = list(f.readlines()) - -eval_prompt = r"""Which response is better given this context: -${context} - -Response A: ${response_a} - -Response B: ${response_b}. - -Pick your answer from ['Response A', 'Response B', 'both', 'neither']. Generate a short explanation for your choice first. Then, generate 'The better response is A' or 'The better response is B' or 'The better response is both' or 'The better response is neither'. - -Your response format should be: -Explanation: -Answer: ('The better response is A' or 'The better response is B' or 'The better response is both' or 'The better response is neither') -""" - -res = [] -eval = [] - - -def write_eval_to_file(file, skip=0): - for idx, line in tqdm(enumerate(lines)): - if idx < skip: - continue - data = json.loads(line) - # print(idx + 1) - context = data["input"] - response_a = data["response"] - response_b = data["label"] - - context_quote = "> " + "\n> ".join(context.split("\n")) - response_a_quote = "> " + "\n> ".join(response_a.split("\n")) - response_b_quote = "> " + "\n> ".join(response_b.split("\n")) - - f.write(f"## {idx + 1}\n\n") - f.write(f"Context:\n" f"{context_quote}\n\n") - f.write(f"Response A (pipeline):\n" f"{response_a_quote}\n\n") - f.write(f"Response B (init):\n" f"{response_b_quote}\n\n") - - prompt = Template(eval_prompt).safe_substitute( - context=context, response_a=response_a, response_b=response_b - ) - for i in range(100): - try: - eval_response = openai.ChatCompletion.create( - model="gpt-4", - messages=[{"role": "user", "content": prompt}], - temperature=0.0, - ) - except: - time.sleep(min(i**2, 60)) - continue - break - text = eval_response["choices"][0]["message"]["content"] - eval.append(text) - text = text.replace("\n", "\n\n") - f.write(f"{text}\n\n") - - if "The better response is A" in text: - res.append("A") - elif "The better response is B" in text: - res.append("B") - elif "The better response is both" in text: - res.append("both") - elif "The better response is neither" in text: - res.append("neither") - else: - res.append("unknown") - - -if not os.path.exists("./eval.md"): - with open("./eval.md", "w") as f: - f.write("# ResponseGen Eval\n\n") - write_eval_to_file(f) - win_cnt = 0 - for r in res: - if r == "A": - win_cnt += 1 - print(f"win rate: {win_cnt / len(res)}") -else: - win_cnt = 0 - total_cnt = 0 - with open("./eval.md", "r") as f: - for line in f: - if line.startswith("Answer"): - total_cnt += 1 - if "The better response is A" in line: - res.append("A") - elif "The better response is B" in 
line: - res.append("B") - elif "The better response is both" in line: - res.append("both") - elif "The better response is neither" in line: - res.append("neither") - else: - res.append("unknown") - with open("./eval.md", "a") as f: - f.write("\n") - write_eval_to_file(f, total_cnt) - win_cnt = 0 - for r in res: - if r == "A": - win_cnt += 1 - print(f"win rate: {win_cnt / len(res)}") diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/BroadcastEvent.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/BroadcastEvent.js deleted file mode 100644 index 1d9008e99a0213b0546cd72b326dcf46015329a9..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/BroadcastEvent.js +++ /dev/null @@ -1,10 +0,0 @@ -var BroadcastEvent = function () { - var gameObjects = this.getAllChildren([this]); - for (var i = 0, cnt = gameObjects.length; i < cnt; i++) { - var gameObject = gameObjects[i]; - gameObject.emit.apply(gameObject, arguments); - } - return this; -} - -export default BroadcastEvent; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateBadgeLabel.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateBadgeLabel.js deleted file mode 100644 index 092acbf6493220a5f13da93b8fd8f26190f09865..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateBadgeLabel.js +++ /dev/null @@ -1,26 +0,0 @@ -import MergeStyle from './utils/MergeStyle.js'; -import BadgeLabel from '../../badgelabel/BadgeLabel.js'; -import CreateChild from './utils/CreateChild.js'; - -var CreateBadgeLabel = function (scene, data, view, styles, customBuilders) { - data = MergeStyle(data, styles); - - // Replace data by child game object - CreateChild(scene, data, 'background', view, styles, customBuilders); - CreateChild(scene, data, 'main', view, styles, customBuilders); - CreateChild(scene, data, 'leftTop', view, styles, customBuilders); - CreateChild(scene, data, 'centerTop', view, styles, customBuilders); - CreateChild(scene, data, 'rightTop', view, styles, customBuilders); - CreateChild(scene, data, 'leftCenter', view, styles, customBuilders); - CreateChild(scene, data, 'center', view, styles, customBuilders); - CreateChild(scene, data, 'rightCenter', view, styles, customBuilders); - CreateChild(scene, data, 'leftBottom', view, styles, customBuilders); - CreateChild(scene, data, 'centerBottom', view, styles, customBuilders); - CreateChild(scene, data, 'rightBottom', view, styles, customBuilders); - - var gameObject = new BadgeLabel(scene, data); - scene.add.existing(gameObject); - return gameObject; -} - -export default CreateBadgeLabel; \ No newline at end of file diff --git a/spaces/AlanMars/QYL-AI-Space/readme/README_ja.md b/spaces/AlanMars/QYL-AI-Space/readme/README_ja.md deleted file mode 100644 index 4c9c2fe33a8ac985d4a30423b5215d9ab81ec9e7..0000000000000000000000000000000000000000 --- a/spaces/AlanMars/QYL-AI-Space/readme/README_ja.md +++ /dev/null @@ -1,126 +0,0 @@ -
    - - 简体中文 | English | 日本語 -
    - -

    川虎 Chat 🐯 Chuanhu Chat

    -
    - - Logo - - -

    -

    ChatGPT/ChatGLM/LLaMAなどのLLMのための軽量でユーザーフレンドリーなWeb-UI

    -

    - - Tests Passing - - - GitHub Contributors - - - GitHub pull requests - -

    - ストリーム出力/会話回数無制限/履歴保存/プリセットプロンプト/ファイルへの質問チャット
    - ウェブ検索/LaTeXレンダリング/表レンダリング/コードハイライト
    - オートダークモード/アダプティブ・ウェブ・インターフェイス/WeChatライク・テーマ
    - マルチパラメーターチューニング/マルチAPI-Key対応/マルチユーザー対応
    - GPT-4対応/LLMのローカルデプロイ可能。 -

    - 動画チュートリアル - · - 2.0 イントロダクション - · - 3.0 イントロダクション & チュートリアル - || - オンライントライアル - · - ワンクリックデプロイ -

    -

    - Animation Demo -

    -

    -
    - -## 使う上でのTips - -- ChatGPTをより適切に制御するために、システムプロンプトを使用できます。 -- プロンプトテンプレートを使用するには、プロンプトテンプレートコレクションを選択し、ドロップダウンメニューから特定のプロンプトを選択。回答が不十分な場合は、`🔄再生成`ボタンを使って再試行します。 -- 入力ボックスで改行するには、Shift + Enterキーを押してください。 -- 入力履歴を素早く切り替えるには、入力ボックスで キーを押す。 -- プログラムをサーバーに展開するには、`config.json` 内の `"server_name": "0.0.0.0", "server_port": <ポート番号>`を設定してください。 -- 共有リンクを取得するには、 `config.json` 内の `"share": true` を設定してください。なお、公開リンクでアクセスするためには、プログラムが実行されている必要があることに注意してください。 -- Hugging Face Spacesで使用する場合: より速く、より安全に利用するために、**Duplicate Space**を使用し、自分のスペースでプログラムを実行することをお勧めします。 - -## クイックスタート - -```shell -git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git -cd ChuanhuChatGPT -pip install -r requirements.txt -``` - -次に `config_example.json`をコピーして `config.json`にリネームし、そのファイルにAPI-Keyなどの設定を記入する。 - -```shell -python app.py -``` - -ブラウザのウィンドウが開き、ChatGPTとチャットできるようになります。 - -> **Note** -> -> 詳しい手順は[wikiページ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程)をご確認ください。 - -## トラブルシューティング - -問題が発生した場合は、まずこのプロジェクトの最新の変更点を手動で引っ張ってみるのがよいでしょう。その手順は以下の通りです: - -1. ウェブページの `Download ZIP` をクリックして最新のコードアーカイブをダウンロードするか、または - ```shell - git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f - ``` -2. 新しい依存関係が導入されている可能性があるため、依存関係を再度インストールしてみてください。 - ``` - pip install -r requirements.txt - ``` -3. Gradioを更新 - ``` - pip install gradio --upgrade --force-reinstall - ``` - -一般的に、以下の手順でほとんどの問題を解決することができます。 - -それでも問題が解決しない場合は、こちらのページをご参照ください: [よくある質問(FAQ)](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) - -このページでは、考えられるほぼすべての問題点と解決策を掲載しています。よくお読みください。 - -## More Information - -より詳細な情報は、[wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki) をご覧ください。: - -- [How to contribute a translation](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/Localization) -- [How to make a contribution](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南) -- [How to cite the project](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可#如何引用该项目) -- [Project changelog](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/更新日志) -- [Project license](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可) - -## Starchart - -[![Star History Chart](https://api.star-history.com/svg?repos=GaiZhenbiao/ChuanhuChatGPT&type=Date)](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date) - -## Contributors - - - - - -## Sponsor - -🐯 この企画が役に立ったら、遠慮なくコーラかコーヒーでもおごってください〜。 - -Buy Me A Coffee - -image diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/configs/evaluation_config.py b/spaces/Amrrs/DragGan-Inversion/PTI/configs/evaluation_config.py deleted file mode 100644 index 16b621d4a47df9e25828c4235cf1692899d14d50..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/configs/evaluation_config.py +++ /dev/null @@ -1 +0,0 @@ -evaluated_methods = ['e4e', 'SG2', 'SG2Plus'] \ No newline at end of file diff --git a/spaces/Amrrs/DragGan-Inversion/training/__init__.py b/spaces/Amrrs/DragGan-Inversion/training/__init__.py deleted file mode 100644 index 939e7c6c8f94c4ea1141885c3c3295fe083b06aa..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/training/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. 
Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/utils/transformer.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/utils/transformer.py deleted file mode 100644 index 83870eead42f4b0bf73c9e19248d5512d3d044c5..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/utils/transformer.py +++ /dev/null @@ -1,860 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import (Linear, build_activation_layer, build_norm_layer, - xavier_init) - -from .builder import TRANSFORMER - - -class MultiheadAttention(nn.Module): - """A warpper for torch.nn.MultiheadAttention. - - This module implements MultiheadAttention with residual connection, - and positional encoding used in DETR is also passed as input. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. Same as - `nn.MultiheadAttention`. - dropout (float): A Dropout layer on attn_output_weights. Default 0.0. - """ - - def __init__(self, embed_dims, num_heads, dropout=0.0): - super(MultiheadAttention, self).__init__() - assert embed_dims % num_heads == 0, 'embed_dims must be ' \ - f'divisible by num_heads. got {embed_dims} and {num_heads}.' - self.embed_dims = embed_dims - self.num_heads = num_heads - self.dropout = dropout - self.attn = nn.MultiheadAttention(embed_dims, num_heads, dropout) - self.dropout = nn.Dropout(dropout) - - def forward(self, - x, - key=None, - value=None, - residual=None, - query_pos=None, - key_pos=None, - attn_mask=None, - key_padding_mask=None): - """Forward function for `MultiheadAttention`. - - Args: - x (Tensor): The input query with shape [num_query, bs, - embed_dims]. Same in `nn.MultiheadAttention.forward`. - key (Tensor): The key tensor with shape [num_key, bs, - embed_dims]. Same in `nn.MultiheadAttention.forward`. - Default None. If None, the `query` will be used. - value (Tensor): The value tensor with same shape as `key`. - Same in `nn.MultiheadAttention.forward`. Default None. - If None, the `key` will be used. - residual (Tensor): The tensor used for addition, with the - same shape as `x`. Default None. If None, `x` will be used. - query_pos (Tensor): The positional encoding for query, with - the same shape as `x`. Default None. If not None, it will - be added to `x` before forward function. - key_pos (Tensor): The positional encoding for `key`, with the - same shape as `key`. Default None. If not None, it will - be added to `key` before forward function. If None, and - `query_pos` has the same shape as `key`, then `query_pos` - will be used for `key_pos`. - attn_mask (Tensor): ByteTensor mask with shape [num_query, - num_key]. Same in `nn.MultiheadAttention.forward`. - Default None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_key]. - Same in `nn.MultiheadAttention.forward`. Default None. - - Returns: - Tensor: forwarded results with shape [num_query, bs, embed_dims]. 
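-
-        Note:
-            When `key_pos` is None and `query_pos` has the same shape as
-            `key`, `query_pos` is reused as the positional encoding for `key`.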
- """ - query = x - if key is None: - key = query - if value is None: - value = key - if residual is None: - residual = x - if key_pos is None: - if query_pos is not None and key is not None: - if query_pos.shape == key.shape: - key_pos = query_pos - if query_pos is not None: - query = query + query_pos - if key_pos is not None: - key = key + key_pos - out = self.attn( - query, - key, - value=value, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask)[0] - - return residual + self.dropout(out) - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'dropout={self.dropout})' - return repr_str - - -class FFN(nn.Module): - """Implements feed-forward networks (FFNs) with residual connection. - - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. - feedforward_channels (int): The hidden dimension of FFNs. - num_fcs (int, optional): The number of fully-connected layers in - FFNs. Defaults to 2. - act_cfg (dict, optional): The activation config for FFNs. - dropout (float, optional): Probability of an element to be - zeroed. Default 0.0. - add_residual (bool, optional): Add resudual connection. - Defaults to True. - """ - - def __init__(self, - embed_dims, - feedforward_channels, - num_fcs=2, - act_cfg=dict(type='ReLU', inplace=True), - dropout=0.0, - add_residual=True): - super(FFN, self).__init__() - assert num_fcs >= 2, 'num_fcs should be no less ' \ - f'than 2. got {num_fcs}.' - self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.num_fcs = num_fcs - self.act_cfg = act_cfg - self.dropout = dropout - self.activate = build_activation_layer(act_cfg) - - layers = nn.ModuleList() - in_channels = embed_dims - for _ in range(num_fcs - 1): - layers.append( - nn.Sequential( - Linear(in_channels, feedforward_channels), self.activate, - nn.Dropout(dropout))) - in_channels = feedforward_channels - layers.append(Linear(feedforward_channels, embed_dims)) - self.layers = nn.Sequential(*layers) - self.dropout = nn.Dropout(dropout) - self.add_residual = add_residual - - def forward(self, x, residual=None): - """Forward function for `FFN`.""" - out = self.layers(x) - if not self.add_residual: - return out - if residual is None: - residual = x - return residual + self.dropout(out) - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'num_fcs={self.num_fcs}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'add_residual={self.add_residual})' - return repr_str - - -class TransformerEncoderLayer(nn.Module): - """Implements one encoder layer in DETR transformer. - - Args: - embed_dims (int): The feature dimension. Same as `FFN`. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - dropout (float): Probability of an element to be zeroed. Default 0.0. - order (tuple[str]): The order for encoder layer. Valid examples are - ('selfattn', 'norm', 'ffn', 'norm') and ('norm', 'selfattn', - 'norm', 'ffn'). Default ('selfattn', 'norm', 'ffn', 'norm'). - act_cfg (dict): The activation config for FFNs. Default ReLU. - norm_cfg (dict): Config dict for normalization layer. Default - layer normalization. 
- num_fcs (int): The number of fully-connected layers for FFNs. - Default 2. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'ffn', 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2): - super(TransformerEncoderLayer, self).__init__() - assert isinstance(order, tuple) and len(order) == 4 - assert set(order) == set(['selfattn', 'norm', 'ffn']) - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = order[0] == 'norm' - self.self_attn = MultiheadAttention(embed_dims, num_heads, dropout) - self.ffn = FFN(embed_dims, feedforward_channels, num_fcs, act_cfg, - dropout) - self.norms = nn.ModuleList() - self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1]) - self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1]) - - def forward(self, x, pos=None, attn_mask=None, key_padding_mask=None): - """Forward function for `TransformerEncoderLayer`. - - Args: - x (Tensor): The input query with shape [num_key, bs, - embed_dims]. Same in `MultiheadAttention.forward`. - pos (Tensor): The positional encoding for query. Default None. - Same as `query_pos` in `MultiheadAttention.forward`. - attn_mask (Tensor): ByteTensor mask with shape [num_key, - num_key]. Same in `MultiheadAttention.forward`. Default None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_key]. - Same in `MultiheadAttention.forward`. Default None. - - Returns: - Tensor: forwarded results with shape [num_key, bs, embed_dims]. - """ - norm_cnt = 0 - inp_residual = x - for layer in self.order: - if layer == 'selfattn': - # self attention - query = key = value = x - x = self.self_attn( - query, - key, - value, - inp_residual if self.pre_norm else None, - query_pos=pos, - key_pos=pos, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask) - inp_residual = x - elif layer == 'norm': - x = self.norms[norm_cnt](x) - norm_cnt += 1 - elif layer == 'ffn': - x = self.ffn(x, inp_residual if self.pre_norm else None) - return x - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs})' - return repr_str - - -class TransformerDecoderLayer(nn.Module): - """Implements one decoder layer in DETR transformer. - - Args: - embed_dims (int): The feature dimension. Same as - `TransformerEncoderLayer`. - num_heads (int): Parallel attention heads. - feedforward_channels (int): Same as `TransformerEncoderLayer`. - dropout (float): Same as `TransformerEncoderLayer`. Default 0.0. - order (tuple[str]): The order for decoder layer. Valid examples are - ('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn', 'norm') and - ('norm', 'selfattn', 'norm', 'multiheadattn', 'norm', 'ffn'). - Default the former. - act_cfg (dict): Same as `TransformerEncoderLayer`. Default ReLU. - norm_cfg (dict): Config dict for normalization layer. Default - layer normalization. - num_fcs (int): The number of fully-connected layers in FFNs. 
- """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn', - 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2): - super(TransformerDecoderLayer, self).__init__() - assert isinstance(order, tuple) and len(order) == 6 - assert set(order) == set(['selfattn', 'norm', 'multiheadattn', 'ffn']) - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = order[0] == 'norm' - self.self_attn = MultiheadAttention(embed_dims, num_heads, dropout) - self.multihead_attn = MultiheadAttention(embed_dims, num_heads, - dropout) - self.ffn = FFN(embed_dims, feedforward_channels, num_fcs, act_cfg, - dropout) - self.norms = nn.ModuleList() - # 3 norm layers in official DETR's TransformerDecoderLayer - for _ in range(3): - self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1]) - - def forward(self, - x, - memory, - memory_pos=None, - query_pos=None, - memory_attn_mask=None, - target_attn_mask=None, - memory_key_padding_mask=None, - target_key_padding_mask=None): - """Forward function for `TransformerDecoderLayer`. - - Args: - x (Tensor): Input query with shape [num_query, bs, embed_dims]. - memory (Tensor): Tensor got from `TransformerEncoder`, with shape - [num_key, bs, embed_dims]. - memory_pos (Tensor): The positional encoding for `memory`. Default - None. Same as `key_pos` in `MultiheadAttention.forward`. - query_pos (Tensor): The positional encoding for `query`. Default - None. Same as `query_pos` in `MultiheadAttention.forward`. - memory_attn_mask (Tensor): ByteTensor mask for `memory`, with - shape [num_key, num_key]. Same as `attn_mask` in - `MultiheadAttention.forward`. Default None. - target_attn_mask (Tensor): ByteTensor mask for `x`, with shape - [num_query, num_query]. Same as `attn_mask` in - `MultiheadAttention.forward`. Default None. - memory_key_padding_mask (Tensor): ByteTensor for `memory`, with - shape [bs, num_key]. Same as `key_padding_mask` in - `MultiheadAttention.forward`. Default None. - target_key_padding_mask (Tensor): ByteTensor for `x`, with shape - [bs, num_query]. Same as `key_padding_mask` in - `MultiheadAttention.forward`. Default None. - - Returns: - Tensor: forwarded results with shape [num_query, bs, embed_dims]. 
- """ - norm_cnt = 0 - inp_residual = x - for layer in self.order: - if layer == 'selfattn': - query = key = value = x - x = self.self_attn( - query, - key, - value, - inp_residual if self.pre_norm else None, - query_pos, - key_pos=query_pos, - attn_mask=target_attn_mask, - key_padding_mask=target_key_padding_mask) - inp_residual = x - elif layer == 'norm': - x = self.norms[norm_cnt](x) - norm_cnt += 1 - elif layer == 'multiheadattn': - query = x - key = value = memory - x = self.multihead_attn( - query, - key, - value, - inp_residual if self.pre_norm else None, - query_pos, - key_pos=memory_pos, - attn_mask=memory_attn_mask, - key_padding_mask=memory_key_padding_mask) - inp_residual = x - elif layer == 'ffn': - x = self.ffn(x, inp_residual if self.pre_norm else None) - return x - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs})' - return repr_str - - -class TransformerEncoder(nn.Module): - """Implements the encoder in DETR transformer. - - Args: - num_layers (int): The number of `TransformerEncoderLayer`. - embed_dims (int): Same as `TransformerEncoderLayer`. - num_heads (int): Same as `TransformerEncoderLayer`. - feedforward_channels (int): Same as `TransformerEncoderLayer`. - dropout (float): Same as `TransformerEncoderLayer`. Default 0.0. - order (tuple[str]): Same as `TransformerEncoderLayer`. - act_cfg (dict): Same as `TransformerEncoderLayer`. Default ReLU. - norm_cfg (dict): Same as `TransformerEncoderLayer`. Default - layer normalization. - num_fcs (int): Same as `TransformerEncoderLayer`. Default 2. - """ - - def __init__(self, - num_layers, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'ffn', 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2): - super(TransformerEncoder, self).__init__() - assert isinstance(order, tuple) and len(order) == 4 - assert set(order) == set(['selfattn', 'norm', 'ffn']) - self.num_layers = num_layers - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = order[0] == 'norm' - self.layers = nn.ModuleList() - for _ in range(num_layers): - self.layers.append( - TransformerEncoderLayer(embed_dims, num_heads, - feedforward_channels, dropout, order, - act_cfg, norm_cfg, num_fcs)) - self.norm = build_norm_layer(norm_cfg, - embed_dims)[1] if self.pre_norm else None - - def forward(self, x, pos=None, attn_mask=None, key_padding_mask=None): - """Forward function for `TransformerEncoder`. - - Args: - x (Tensor): Input query. Same in `TransformerEncoderLayer.forward`. - pos (Tensor): Positional encoding for query. Default None. - Same in `TransformerEncoderLayer.forward`. - attn_mask (Tensor): ByteTensor attention mask. Default None. - Same in `TransformerEncoderLayer.forward`. - key_padding_mask (Tensor): Same in - `TransformerEncoderLayer.forward`. Default None. - - Returns: - Tensor: Results with shape [num_key, bs, embed_dims]. 
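-
-        Note:
-            The final normalization is only applied when the encoder is built
-            with `pre_norm=True`; otherwise `self.norm` is None.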
- """ - for layer in self.layers: - x = layer(x, pos, attn_mask, key_padding_mask) - if self.norm is not None: - x = self.norm(x) - return x - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(num_layers={self.num_layers}, ' - repr_str += f'embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs})' - return repr_str - - -class TransformerDecoder(nn.Module): - """Implements the decoder in DETR transformer. - - Args: - num_layers (int): The number of `TransformerDecoderLayer`. - embed_dims (int): Same as `TransformerDecoderLayer`. - num_heads (int): Same as `TransformerDecoderLayer`. - feedforward_channels (int): Same as `TransformerDecoderLayer`. - dropout (float): Same as `TransformerDecoderLayer`. Default 0.0. - order (tuple[str]): Same as `TransformerDecoderLayer`. - act_cfg (dict): Same as `TransformerDecoderLayer`. Default ReLU. - norm_cfg (dict): Same as `TransformerDecoderLayer`. Default - layer normalization. - num_fcs (int): Same as `TransformerDecoderLayer`. Default 2. - """ - - def __init__(self, - num_layers, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn', - 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2, - return_intermediate=False): - super(TransformerDecoder, self).__init__() - assert isinstance(order, tuple) and len(order) == 6 - assert set(order) == set(['selfattn', 'norm', 'multiheadattn', 'ffn']) - self.num_layers = num_layers - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.return_intermediate = return_intermediate - self.layers = nn.ModuleList() - for _ in range(num_layers): - self.layers.append( - TransformerDecoderLayer(embed_dims, num_heads, - feedforward_channels, dropout, order, - act_cfg, norm_cfg, num_fcs)) - self.norm = build_norm_layer(norm_cfg, embed_dims)[1] - - def forward(self, - x, - memory, - memory_pos=None, - query_pos=None, - memory_attn_mask=None, - target_attn_mask=None, - memory_key_padding_mask=None, - target_key_padding_mask=None): - """Forward function for `TransformerDecoder`. - - Args: - x (Tensor): Input query. Same in `TransformerDecoderLayer.forward`. - memory (Tensor): Same in `TransformerDecoderLayer.forward`. - memory_pos (Tensor): Same in `TransformerDecoderLayer.forward`. - Default None. - query_pos (Tensor): Same in `TransformerDecoderLayer.forward`. - Default None. - memory_attn_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - target_attn_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - memory_key_padding_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - target_key_padding_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - - Returns: - Tensor: Results with shape [num_query, bs, embed_dims]. 
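-
-        Example (shape sketch, assuming 6 decoder layers): with
-        `return_intermediate=True`, x of shape [100, bs, 256] yields an
-        output of shape [6, 100, bs, 256]; with `return_intermediate=False`
-        the output is [1, 100, bs, 256].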
-        """
-        intermediate = []
-        for layer in self.layers:
-            x = layer(x, memory, memory_pos, query_pos, memory_attn_mask,
-                      target_attn_mask, memory_key_padding_mask,
-                      target_key_padding_mask)
-            if self.return_intermediate:
-                intermediate.append(self.norm(x))
-        if self.norm is not None:
-            x = self.norm(x)
-            if self.return_intermediate:
-                intermediate.pop()
-                intermediate.append(x)
-        if self.return_intermediate:
-            return torch.stack(intermediate)
-        return x.unsqueeze(0)
-
-    def __repr__(self):
-        """str: a string that describes the module"""
-        repr_str = self.__class__.__name__
-        repr_str += f'(num_layers={self.num_layers}, '
-        repr_str += f'embed_dims={self.embed_dims}, '
-        repr_str += f'num_heads={self.num_heads}, '
-        repr_str += f'feedforward_channels={self.feedforward_channels}, '
-        repr_str += f'dropout={self.dropout}, '
-        repr_str += f'order={self.order}, '
-        repr_str += f'act_cfg={self.act_cfg}, '
-        repr_str += f'norm_cfg={self.norm_cfg}, '
-        repr_str += f'num_fcs={self.num_fcs}, '
-        repr_str += f'return_intermediate={self.return_intermediate})'
-        return repr_str
-
-
-@TRANSFORMER.register_module()
-class Transformer(nn.Module):
-    """Implements the DETR transformer.
-
-    Following the official DETR implementation, this module is copy-pasted
-    from torch.nn.Transformer with modifications:
-
-        * positional encodings are passed in MultiheadAttention
-        * extra LN at the end of encoder is removed
-        * decoder returns a stack of activations from all decoding layers
-
-    See `paper: End-to-End Object Detection with Transformers
-    <https://arxiv.org/pdf/2005.12872>`_ for details.
-
-    Args:
-        embed_dims (int): The feature dimension.
-        num_heads (int): Parallel attention heads. Same as
-            `nn.MultiheadAttention`.
-        num_encoder_layers (int): Number of `TransformerEncoderLayer`.
-        num_decoder_layers (int): Number of `TransformerDecoderLayer`.
-        feedforward_channels (int): The hidden dimension for FFNs used in both
-            encoder and decoder.
-        dropout (float): Probability of an element to be zeroed. Default 0.0.
-        act_cfg (dict): Activation config for FFNs used in both encoder
-            and decoder. Default ReLU.
-        norm_cfg (dict): Config dict for normalization used in both encoder
-            and decoder. Default layer normalization.
-        num_fcs (int): The number of fully-connected layers in FFNs, which is
-            used for both encoder and decoder.
-        pre_norm (bool): Whether the normalization layer is ordered
-            first in the encoder and decoder. Default False.
-        return_intermediate_dec (bool): Whether to return the intermediate
-            output from each TransformerDecoderLayer or only the last
-            TransformerDecoderLayer. Default False. If True, the returned
-            `hs` has shape [num_decoder_layers, bs, num_query, embed_dims].
-            If False, the returned `hs` will have shape [1, bs, num_query,
-            embed_dims].
- """ - - def __init__(self, - embed_dims=512, - num_heads=8, - num_encoder_layers=6, - num_decoder_layers=6, - feedforward_channels=2048, - dropout=0.0, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2, - pre_norm=False, - return_intermediate_dec=False): - super(Transformer, self).__init__() - self.embed_dims = embed_dims - self.num_heads = num_heads - self.num_encoder_layers = num_encoder_layers - self.num_decoder_layers = num_decoder_layers - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = pre_norm - self.return_intermediate_dec = return_intermediate_dec - if self.pre_norm: - encoder_order = ('norm', 'selfattn', 'norm', 'ffn') - decoder_order = ('norm', 'selfattn', 'norm', 'multiheadattn', - 'norm', 'ffn') - else: - encoder_order = ('selfattn', 'norm', 'ffn', 'norm') - decoder_order = ('selfattn', 'norm', 'multiheadattn', 'norm', - 'ffn', 'norm') - self.encoder = TransformerEncoder(num_encoder_layers, embed_dims, - num_heads, feedforward_channels, - dropout, encoder_order, act_cfg, - norm_cfg, num_fcs) - self.decoder = TransformerDecoder(num_decoder_layers, embed_dims, - num_heads, feedforward_channels, - dropout, decoder_order, act_cfg, - norm_cfg, num_fcs, - return_intermediate_dec) - - def init_weights(self, distribution='uniform'): - """Initialize the transformer weights.""" - # follow the official DETR to init parameters - for m in self.modules(): - if hasattr(m, 'weight') and m.weight.dim() > 1: - xavier_init(m, distribution=distribution) - - def forward(self, x, mask, query_embed, pos_embed): - """Forward function for `Transformer`. - - Args: - x (Tensor): Input query with shape [bs, c, h, w] where - c = embed_dims. - mask (Tensor): The key_padding_mask used for encoder and decoder, - with shape [bs, h, w]. - query_embed (Tensor): The query embedding for decoder, with shape - [num_query, c]. - pos_embed (Tensor): The positional encoding for encoder and - decoder, with the same shape as `x`. - - Returns: - tuple[Tensor]: results of decoder containing the following tensor. - - - out_dec: Output from decoder. If return_intermediate_dec \ - is True output has shape [num_dec_layers, bs, - num_query, embed_dims], else has shape [1, bs, \ - num_query, embed_dims]. - - memory: Output results from encoder, with shape \ - [bs, embed_dims, h, w]. 
- """ - bs, c, h, w = x.shape - x = x.flatten(2).permute(2, 0, 1) # [bs, c, h, w] -> [h*w, bs, c] - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - query_embed = query_embed.unsqueeze(1).repeat( - 1, bs, 1) # [num_query, dim] -> [num_query, bs, dim] - mask = mask.flatten(1) # [bs, h, w] -> [bs, h*w] - memory = self.encoder( - x, pos=pos_embed, attn_mask=None, key_padding_mask=mask) - target = torch.zeros_like(query_embed) - # out_dec: [num_layers, num_query, bs, dim] - out_dec = self.decoder( - target, - memory, - memory_pos=pos_embed, - query_pos=query_embed, - memory_attn_mask=None, - target_attn_mask=None, - memory_key_padding_mask=mask, - target_key_padding_mask=None) - out_dec = out_dec.transpose(1, 2) - memory = memory.permute(1, 2, 0).reshape(bs, c, h, w) - return out_dec, memory - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'num_encoder_layers={self.num_encoder_layers}, ' - repr_str += f'num_decoder_layers={self.num_decoder_layers}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs}, ' - repr_str += f'pre_norm={self.pre_norm}, ' - repr_str += f'return_intermediate_dec={self.return_intermediate_dec})' - return repr_str - - -@TRANSFORMER.register_module() -class DynamicConv(nn.Module): - """Implements Dynamic Convolution. - - This module generate parameters for each sample and - use bmm to implement 1*1 convolution. Code is modified - from the `official github repo `_ . - - Args: - in_channels (int): The input feature channel. - Defaults to 256. - feat_channels (int): The inner feature channel. - Defaults to 64. - out_channels (int, optional): The output feature channel. - When not specified, it will be set to `in_channels` - by default - input_feat_shape (int): The shape of input feature. - Defaults to 7. - act_cfg (dict): The activation config for DynamicConv. - norm_cfg (dict): Config dict for normalization layer. Default - layer normalization. - """ - - def __init__(self, - in_channels=256, - feat_channels=64, - out_channels=None, - input_feat_shape=7, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN')): - super(DynamicConv, self).__init__() - self.in_channels = in_channels - self.feat_channels = feat_channels - self.out_channels_raw = out_channels - self.input_feat_shape = input_feat_shape - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.out_channels = out_channels if out_channels else in_channels - - self.num_params_in = self.in_channels * self.feat_channels - self.num_params_out = self.out_channels * self.feat_channels - self.dynamic_layer = nn.Linear( - self.in_channels, self.num_params_in + self.num_params_out) - - self.norm_in = build_norm_layer(norm_cfg, self.feat_channels)[1] - self.norm_out = build_norm_layer(norm_cfg, self.out_channels)[1] - - self.activation = build_activation_layer(act_cfg) - - num_output = self.out_channels * input_feat_shape**2 - self.fc_layer = nn.Linear(num_output, self.out_channels) - self.fc_norm = build_norm_layer(norm_cfg, self.out_channels)[1] - - def forward(self, param_feature, input_feature): - """Forward function for `DynamicConv`. 
- - Args: - param_feature (Tensor): The feature can be used - to generate the parameter, has shape - (num_all_proposals, in_channels). - input_feature (Tensor): Feature that - interact with parameters, has shape - (num_all_proposals, in_channels, H, W). - - Returns: - Tensor: The output feature has shape - (num_all_proposals, out_channels). - """ - num_proposals = param_feature.size(0) - input_feature = input_feature.view(num_proposals, self.in_channels, - -1).permute(2, 0, 1) - - input_feature = input_feature.permute(1, 0, 2) - parameters = self.dynamic_layer(param_feature) - - param_in = parameters[:, :self.num_params_in].view( - -1, self.in_channels, self.feat_channels) - param_out = parameters[:, -self.num_params_out:].view( - -1, self.feat_channels, self.out_channels) - - # input_feature has shape (num_all_proposals, H*W, in_channels) - # param_in has shape (num_all_proposals, in_channels, feat_channels) - # feature has shape (num_all_proposals, H*W, feat_channels) - features = torch.bmm(input_feature, param_in) - features = self.norm_in(features) - features = self.activation(features) - - # param_out has shape (batch_size, feat_channels, out_channels) - features = torch.bmm(features, param_out) - features = self.norm_out(features) - features = self.activation(features) - - features = features.flatten(1) - features = self.fc_layer(features) - features = self.fc_norm(features) - features = self.activation(features) - - return features - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(in_channels={self.in_channels}, ' - repr_str += f'feat_channels={self.feat_channels}, ' - repr_str += f'out_channels={self.out_channels_raw}, ' - repr_str += f'input_feat_shape={self.input_feat_shape}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg})' - return repr_str diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/schedules/schedule_160k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/schedules/schedule_160k.py deleted file mode 100644 index 52603890b10f25faf8eec9f9e5a4468fae09b811..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/schedules/schedule_160k.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=160000) -checkpoint_config = dict(by_epoch=False, interval=16000) -evaluation = dict(interval=16000, metric='mIoU') diff --git a/spaces/AnimaLab/bias-test-gpt-pairs/mgr_sentences.py b/spaces/AnimaLab/bias-test-gpt-pairs/mgr_sentences.py deleted file mode 100644 index 467090ae75942fbeeabf0ca60f57da3ad8e27992..0000000000000000000000000000000000000000 --- a/spaces/AnimaLab/bias-test-gpt-pairs/mgr_sentences.py +++ /dev/null @@ -1,157 +0,0 @@ -import gradio as gr -import os -import re -import pandas as pd -import numpy as np -import glob -import huggingface_hub -print("hfh", huggingface_hub.__version__) -from huggingface_hub import hf_hub_download, upload_file, delete_file, snapshot_download, list_repo_files, dataset_info - -DATASET_REPO_ID = "AnimaLab/bias-test-gpt-sentences" -DATASET_REPO_URL = f"https://huggingface.co/{DATASET_REPO_ID}" -HF_DATA_DIRNAME = "data" -LOCAL_DATA_DIRNAME = "data" -LOCAL_SAVE_DIRNAME = "save" - -ds_write_token = 
os.environ.get("DS_WRITE_TOKEN") -HF_TOKEN = os.environ.get("HF_TOKEN") - -print("ds_write_token:", ds_write_token!=None) -print("hf_token:", HF_TOKEN!=None) -print("hfh_verssion", huggingface_hub.__version__) - -def retrieveAllSaved(): - global DATASET_REPO_ID - - #listing the files - https://huggingface.co/docs/huggingface_hub/v0.8.1/en/package_reference/hf_api - repo_files = list_repo_files(repo_id=DATASET_REPO_ID, repo_type="dataset") - #print("Repo files:" + str(repo_files) - - return repo_files - -def store_group_sentences(filename: str, df): - DATA_FILENAME_1 = f"{filename}" - LOCAL_PATH_FILE = os.path.join(LOCAL_SAVE_DIRNAME, DATA_FILENAME_1) - DATA_FILE_1 = os.path.join(HF_DATA_DIRNAME, DATA_FILENAME_1) - - print(f"Trying to save to: {DATA_FILE_1}") - - os.makedirs(os.path.dirname(LOCAL_PATH_FILE), exist_ok=True) - df.to_csv(LOCAL_PATH_FILE, index=False) - - commit_url = upload_file( - path_or_fileobj=LOCAL_PATH_FILE, - path_in_repo=DATA_FILE_1, - repo_id=DATASET_REPO_ID, - repo_type="dataset", - token=ds_write_token, - ) - - print(commit_url) - -def saveSentences(sentences_df): - for grp_term in list(sentences_df['org_grp_term'].unique()): - print(f"Retrieving sentences for group: {grp_term}") - msg, grp_saved_df, filename = getSavedSentences(grp_term) - print(f"Num for group: {grp_term} -> {grp_saved_df.shape[0]}") - add_df = sentences_df[sentences_df['org_grp_term'] == grp_term] - print(f"Adding {add_df.shape[0]} sentences...") - - new_grp_df = pd.concat([grp_saved_df, add_df], ignore_index=True) - new_grp_df = new_grp_df.drop_duplicates(subset = "sentence") - - print(f"Org size: {grp_saved_df.shape[0]}, Mrg size: {new_grp_df.shape[0]}") - store_group_sentences(filename, new_grp_df) - - -# https://huggingface.co/spaces/elonmuskceo/persistent-data/blob/main/app.py -def get_sentence_csv(file_path: str): - file_path = os.path.join(HF_DATA_DIRNAME, file_path) - print(f"File path: {file_path}") - try: - hf_hub_download( - force_download=True, # to get updates of the dataset - repo_type="dataset", - repo_id=DATASET_REPO_ID, - filename=file_path, - cache_dir=LOCAL_DATA_DIRNAME, - force_filename=os.path.basename(file_path) - ) - except Exception as e: - # file not found - print(f"file not found, probably: {e}") - - files=glob.glob(f"./{LOCAL_DATA_DIRNAME}/", recursive=True) - print("Files glob: "+', '.join(files)) - #print("Save file:" + str(os.path.basename(file_path))) - - df = pd.read_csv(os.path.join(LOCAL_DATA_DIRNAME, os.path.basename(file_path)), encoding='UTF8') - - return df - -def getSavedSentences(grp): - filename = f"{grp.replace(' ','-')}.csv" - sentence_df = pd.DataFrame() - - try: - text = f"Loading sentences: {filename}\n" - sentence_df = get_sentence_csv(filename) - - except Exception as e: - text = f"Error, no saved generations for {filename}" - #raise gr.Error(f"Cannot load sentences: {filename}!") - - return text, sentence_df, filename - - -def deleteBias(filepath: str): - commit_url = delete_file( - path_in_repo=filepath, - repo_id=DATASET_REPO_ID, - repo_type="dataset", - token=ds_write_token, - ) - - return f"Deleted {filepath} -> {commit_url}" - -def _testSentenceRetrieval(grp_list, att_list, use_paper_sentences): - test_sentences = [] - print(f"Att list: {att_list}") - att_list_dash = [t.replace(' ','-') for t in att_list] - att_list.extend(att_list_dash) - att_list_nospace = [t.replace(' ','') for t in att_list] - att_list.extend(att_list_nospace) - att_list = list(set(att_list)) - print(f"Att list with dash: {att_list}") - - for gi, g_term in 
enumerate(grp_list): - _, sentence_df, _ = getSavedSentences(g_term) - - # only take from paper & gpt3.5 - print(f"Before filter: {sentence_df.shape[0]}") - if use_paper_sentences == True: - if 'type' in list(sentence_df.columns): - gen_models = ["gpt-3.5", "gpt-3.5-turbo", "gpt-4"] - sentence_df = sentence_df.query("type=='paper' and gen_model in @gen_models") - print(f"After filter: {sentence_df.shape[0]}") - else: - sentence_df = pd.DataFrame(columns=["Group term","Attribute term","Test sentence"]) - - if sentence_df.shape[0] > 0: - sentence_df = sentence_df[["Group term","Attribute term","Test sentence"]] - sel = sentence_df[sentence_df['Attribute term'].isin(att_list)].values - if len(sel) > 0: - for gt,at,s in sel: - test_sentences.append([s,gt.replace("-"," "),at.replace("-"," ")]) - - return test_sentences - -if __name__ == '__main__': - print("ds_write_token:", ds_write_token) - print("hf_token:", HF_TOKEN!=None) - print("hfh_verssion", huggingface_hub.__version__) - - sentences = _testSentenceRetrieval(["husband"], ["hairdresser", "steel worker"], use_paper_sentences=True) - print(sentences) - diff --git a/spaces/AnnaPalatkina/fine_grained_SA/config.py b/spaces/AnnaPalatkina/fine_grained_SA/config.py deleted file mode 100644 index 8bfa05d1f5fdf50bed143d1333b7719045058e77..0000000000000000000000000000000000000000 --- a/spaces/AnnaPalatkina/fine_grained_SA/config.py +++ /dev/null @@ -1,10 +0,0 @@ -params = { - 'pretrained_model_name': 'ltgoslo/norbert2', - 'path_to_model_bin': 'model_nobert_norec.bin', - 'LR': 1e-05, - 'dropout': 0.4, - 'warmup': 2, - 'epochs': 10, - 'max_length': 512, - 'batch_size': 4, -} \ No newline at end of file diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/dataloader/image_folder.py b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/dataloader/image_folder.py deleted file mode 100644 index 91b465663cbae9353663fd57f3f75e4ea99fb5b8..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/dataloader/image_folder.py +++ /dev/null @@ -1,56 +0,0 @@ -import os -import os.path - -IMG_EXTENSIONS = [ - '.jpg', '.JPG', '.jpeg', '.JPEG', - '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', -] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def make_dataset(path_files): - if path_files.find('.txt') != -1: - paths, size = make_dataset_txt(path_files) - else: - paths, size = make_dataset_dir(path_files) - - return paths, size - - -def make_dataset_txt(files): - """ - :param path_files: the path of txt file that store the image paths - :return: image paths and sizes - """ - img_paths = [] - - with open(files) as f: - paths = f.readlines() - - for path in paths: - path = path.strip() - if is_image_file(path) and os.path.exists(path): - img_paths.append(path) - - return img_paths, len(img_paths) - - -def make_dataset_dir(dir): - """ - :param dir: directory paths that store the image - :return: image paths and sizes - """ - img_paths = [] - - assert os.path.isdir(dir), '%s is not a valid directory' % dir - - for root, _, fnames in os.walk(dir): - for fname in sorted(fnames): - if is_image_file(fname): - path = os.path.join(root, fname) - img_paths.append(path) - - return img_paths, len(img_paths) diff --git a/spaces/Apex-X/GODROOP/roop/capturer.py b/spaces/Apex-X/GODROOP/roop/capturer.py deleted file mode 100644 index fd49d468dd4cd45832ab9612205968207a6f45cf..0000000000000000000000000000000000000000 --- 
    a/spaces/Apex-X/GODROOP/roop/capturer.py +++ /dev/null @@ -1,20 +0,0 @@
    -from typing import Any
    -import cv2
    -
    -
    -def get_video_frame(video_path: str, frame_number: int = 0) -> Any:
    -    capture = cv2.VideoCapture(video_path)
    -    frame_total = capture.get(cv2.CAP_PROP_FRAME_COUNT)
    -    capture.set(cv2.CAP_PROP_POS_FRAMES, min(frame_total, max(frame_number - 1, 0)))
    -    has_frame, frame = capture.read()
    -    capture.release()
    -    if has_frame:
    -        return frame
    -    return None
    -
    -
    -def get_video_frame_total(video_path: str) -> int:
    -    capture = cv2.VideoCapture(video_path)
    -    video_frame_total = int(capture.get(cv2.CAP_PROP_FRAME_COUNT))
    -    capture.release()
    -    return video_frame_total
    diff --git a/spaces/AquaSuisei/ChatGPTXE/assets/custom.css b/spaces/AquaSuisei/ChatGPTXE/assets/custom.css
    deleted file mode 100644
    index f98c7df263b11afa4ddfb5d6ed18aef2ef234226..0000000000000000000000000000000000000000
    --- a/spaces/AquaSuisei/ChatGPTXE/assets/custom.css
    +++ /dev/null
    @@ -1,250 +0,0 @@
    -:root {
    -    --chatbot-color-light: #F3F3F3;
    -    --chatbot-color-dark: #121111;
    -}
    -
    -/* override gradio's footer info QAQ */
    -footer {
    -    display: none !important;
    -}
    -#footer{
    -    text-align: center;
    -}
    -#footer div{
    -    display: inline-block;
    -}
    -#footer .versions{
    -    font-size: 85%;
    -    opacity: 0.85;
    -}
    -
    -/* user_info */
    -#user_info {
    -    white-space: nowrap;
    -    margin-top: -1.3em !important;
    -    padding-left: 112px !important;
    -}
    -#user_info p {
    -    font-size: .85em;
    -    font-family: monospace;
    -    color: var(--body-text-color-subdued);
    -}
    -
    -/* status_display */
    -#status_display {
    -    display: flex;
    -    min-height: 2em;
    -    align-items: flex-end;
    -    justify-content: flex-end;
    -}
    -#status_display p {
    -    font-size: .85em;
    -    font-family: monospace;
    -    color: var(--body-text-color-subdued);
    -}
    -
    -#chuanhu_chatbot, #status_display {
    -    transition: all 0.6s;
    -}
    -
    -/* usage_display */
    -#usage_display {
    -    position: relative;
    -    margin: 0;
    -    box-shadow: var(--block-shadow);
    -    border-width: var(--block-border-width);
    -    border-color: var(--block-border-color);
    -    border-radius: var(--block-radius);
    -    background: var(--block-background-fill);
    -    width: 100%;
    -    line-height: var(--line-sm);
    -    min-height: 2em;
    -}
    -#usage_display p, #usage_display span {
    -    margin: 0;
    -    padding: .5em 1em;
    -    font-size: .85em;
    -    color: var(--body-text-color-subdued);
    -}
    -.progress-bar {
    -    background-color: var(--input-background-fill);
    -    margin: 0 1em;
    -    height: 20px;
    -    border-radius: 10px;
    -    overflow: hidden;
    -}
    -.progress {
    -    background-color: var(--block-title-background-fill);
    -    height: 100%;
    -    border-radius: 10px;
    -    text-align: right;
    -    transition: width 0.5s ease-in-out;
    -}
    -.progress-text {
    -    /* color: white; */
    -    color: var(--color-accent) !important;
    -    font-size: 1em !important;
    -    font-weight: bold;
    -    padding-right: 10px;
    -    line-height: 20px;
    -}
    -/* list */
    -ol:not(.options), ul:not(.options) {
    -    padding-inline-start: 2em !important;
    -}
    -
    -/* light theme */
    -@media (prefers-color-scheme: light) {
    -    #chuanhu_chatbot {
    -        background-color: var(--chatbot-color-light) !important;
    -        color: #000000 !important;
    -    }
    -    [data-testid = "bot"] {
    -        background-color: #FFFFFF !important;
    -    }
    -    [data-testid = "user"] {
    -        background-color: #95EC69 !important;
    -    }
    -}
    -/* dark theme */
    -@media (prefers-color-scheme: dark) {
    -    #chuanhu_chatbot {
    -        background-color: var(--chatbot-color-dark) !important;
    -        color: #FFFFFF !important;
    -    }
    -    [data-testid = "bot"] {
    -        background-color: #2C2C2C !important;
    -    }
    -    [data-testid = "user"] {
    -        background-color: #26B561 !important;
    -    }
    -    body {
    -        background-color: var(--neutral-950) !important;
    -    }
    -}
    -/* chat bubbles */
    
    -[class *= "message"] {
    -    border-radius: var(--radius-xl) !important;
    -    border: none;
    -    padding: var(--spacing-xl) !important;
    -    font-size: var(--text-md) !important;
    -    line-height: var(--line-md) !important;
    -    min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
    -    min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
    -}
    -[data-testid = "bot"] {
    -    max-width: 85%;
    -    border-bottom-left-radius: 0 !important;
    -}
    -[data-testid = "user"] {
    -    max-width: 85%;
    -    width: auto !important;
    -    border-bottom-right-radius: 0 !important;
    -}
    -/* tables */
    -table {
    -    margin: 1em 0;
    -    border-collapse: collapse;
    -    empty-cells: show;
    -}
    -td,th {
    -    border: 1.2px solid var(--border-color-primary) !important;
    -    padding: 0.2em;
    -}
    -thead {
    -    background-color: rgba(175,184,193,0.2);
    -}
    -thead th {
    -    padding: .5em .2em;
    -}
    -/* inline code */
    -code {
    -    display: inline;
    -    white-space: break-spaces;
    -    border-radius: 6px;
    -    margin: 0 2px 0 2px;
    -    padding: .2em .4em .1em .4em;
    -    background-color: rgba(175,184,193,0.2);
    -}
    -/* code blocks */
    -pre code {
    -    display: block;
    -    overflow: auto;
    -    white-space: pre;
    -    background-color: hsla(0, 0%, 0%, 80%)!important;
    -    border-radius: 10px;
    -    padding: 1.4em 1.2em 0em 1.4em;
    -    margin: 1.2em 2em 1.2em 0.5em;
    -    color: #FFF;
    -    box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2);
    -}
    -/* code highlight styles */
    -.highlight .hll { background-color: #49483e }
    -.highlight .c { color: #75715e } /* Comment */
    -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */
    -.highlight .k { color: #66d9ef } /* Keyword */
    -.highlight .l { color: #ae81ff } /* Literal */
    -.highlight .n { color: #f8f8f2 } /* Name */
    -.highlight .o { color: #f92672 } /* Operator */
    -.highlight .p { color: #f8f8f2 } /* Punctuation */
    -.highlight .ch { color: #75715e } /* Comment.Hashbang */
    -.highlight .cm { color: #75715e } /* Comment.Multiline */
    -.highlight .cp { color: #75715e } /* Comment.Preproc */
    -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */
    -.highlight .c1 { color: #75715e } /* Comment.Single */
    -.highlight .cs { color: #75715e } /* Comment.Special */
    -.highlight .gd { color: #f92672 } /* Generic.Deleted */
    -.highlight .ge { font-style: italic } /* Generic.Emph */
    -.highlight .gi { color: #a6e22e } /* Generic.Inserted */
    -.highlight .gs { font-weight: bold } /* Generic.Strong */
    -.highlight .gu { color: #75715e } /* Generic.Subheading */
    -.highlight .kc { color: #66d9ef } /* Keyword.Constant */
    -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */
    -.highlight .kn { color: #f92672 } /* Keyword.Namespace */
    -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */
    -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */
    -.highlight .kt { color: #66d9ef } /* Keyword.Type */
    -.highlight .ld { color: #e6db74 } /* Literal.Date */
    -.highlight .m { color: #ae81ff } /* Literal.Number */
    -.highlight .s { color: #e6db74 } /* Literal.String */
    -.highlight .na { color: #a6e22e } /* Name.Attribute */
    -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */
    -.highlight .nc { color: #a6e22e } /* Name.Class */
    -.highlight .no { color: #66d9ef } /* Name.Constant */
    -.highlight .nd { color: #a6e22e } /* Name.Decorator */
    -.highlight .ni { color: #f8f8f2 } /* Name.Entity */
    -.highlight .ne { color: #a6e22e } /* Name.Exception */
    -.highlight .nf { color: #a6e22e } /* Name.Function */
    -.highlight .nl { color: #f8f8f2 } /* Name.Label */
    -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */
    -.highlight .nx { color: #a6e22e } /* Name.Other */
    -.highlight .py { color: #f8f8f2 } /* Name.Property */
    
-.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/exceptions.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/exceptions.py deleted file mode 100644 index a38447bb05bd5d503a32651d6046ff8667785c0c..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/exceptions.py +++ /dev/null @@ -1,267 +0,0 @@ -# exceptions.py - -import re -import sys -import typing - -from .util import col, line, lineno, _collapse_string_to_ranges -from .unicode import pyparsing_unicode as ppu - - -class ExceptionWordUnicode(ppu.Latin1, ppu.LatinA, ppu.LatinB, ppu.Greek, ppu.Cyrillic): - pass - - -_extract_alphanums = _collapse_string_to_ranges(ExceptionWordUnicode.alphanums) -_exception_word_extractor = re.compile("([" + _extract_alphanums + "]{1,16})|.") - - -class ParseBaseException(Exception): - """base exception class for all parsing runtime exceptions""" - - # Performance tuning: we construct a *lot* of these, so keep this - # constructor as small and fast as possible - def __init__( - self, - pstr: str, - loc: int = 0, - msg: typing.Optional[str] = None, - elem=None, - ): - self.loc = loc - if msg is None: - self.msg = pstr - self.pstr = "" - else: - self.msg = msg - self.pstr = pstr - self.parser_element = self.parserElement = elem - self.args = (pstr, loc, msg) - - @staticmethod - def explain_exception(exc, depth=16): - """ - Method to take an exception and translate the Python internal traceback into a list - of the pyparsing expressions that caused the exception to be raised. 
- - Parameters: - - - exc - exception raised during parsing (need not be a ParseException, in support - of Python exceptions that might be raised in a parse action) - - depth (default=16) - number of levels back in the stack trace to list expression - and function names; if None, the full stack trace names will be listed; if 0, only - the failing input line, marker, and exception string will be shown - - Returns a multi-line string listing the ParserElements and/or function names in the - exception's stack trace. - """ - import inspect - from .core import ParserElement - - if depth is None: - depth = sys.getrecursionlimit() - ret = [] - if isinstance(exc, ParseBaseException): - ret.append(exc.line) - ret.append(" " * (exc.column - 1) + "^") - ret.append("{}: {}".format(type(exc).__name__, exc)) - - if depth > 0: - callers = inspect.getinnerframes(exc.__traceback__, context=depth) - seen = set() - for i, ff in enumerate(callers[-depth:]): - frm = ff[0] - - f_self = frm.f_locals.get("self", None) - if isinstance(f_self, ParserElement): - if frm.f_code.co_name not in ("parseImpl", "_parseNoCache"): - continue - if id(f_self) in seen: - continue - seen.add(id(f_self)) - - self_type = type(f_self) - ret.append( - "{}.{} - {}".format( - self_type.__module__, self_type.__name__, f_self - ) - ) - - elif f_self is not None: - self_type = type(f_self) - ret.append("{}.{}".format(self_type.__module__, self_type.__name__)) - - else: - code = frm.f_code - if code.co_name in ("wrapper", ""): - continue - - ret.append("{}".format(code.co_name)) - - depth -= 1 - if not depth: - break - - return "\n".join(ret) - - @classmethod - def _from_exception(cls, pe): - """ - internal factory method to simplify creating one type of ParseException - from another - avoids having __init__ signature conflicts among subclasses - """ - return cls(pe.pstr, pe.loc, pe.msg, pe.parserElement) - - @property - def line(self) -> str: - """ - Return the line of text where the exception occurred. - """ - return line(self.loc, self.pstr) - - @property - def lineno(self) -> int: - """ - Return the 1-based line number of text where the exception occurred. - """ - return lineno(self.loc, self.pstr) - - @property - def col(self) -> int: - """ - Return the 1-based column on the line of text where the exception occurred. - """ - return col(self.loc, self.pstr) - - @property - def column(self) -> int: - """ - Return the 1-based column on the line of text where the exception occurred. - """ - return col(self.loc, self.pstr) - - def __str__(self) -> str: - if self.pstr: - if self.loc >= len(self.pstr): - foundstr = ", found end of text" - else: - # pull out next word at error location - found_match = _exception_word_extractor.match(self.pstr, self.loc) - if found_match is not None: - found = found_match.group(0) - else: - found = self.pstr[self.loc : self.loc + 1] - foundstr = (", found %r" % found).replace(r"\\", "\\") - else: - foundstr = "" - return "{}{} (at char {}), (line:{}, col:{})".format( - self.msg, foundstr, self.loc, self.lineno, self.column - ) - - def __repr__(self): - return str(self) - - def mark_input_line(self, marker_string: str = None, *, markerString=">!<") -> str: - """ - Extracts the exception line from the input string, and marks - the location of the exception with a special symbol. 
- """ - markerString = marker_string if marker_string is not None else markerString - line_str = self.line - line_column = self.column - 1 - if markerString: - line_str = "".join( - (line_str[:line_column], markerString, line_str[line_column:]) - ) - return line_str.strip() - - def explain(self, depth=16) -> str: - """ - Method to translate the Python internal traceback into a list - of the pyparsing expressions that caused the exception to be raised. - - Parameters: - - - depth (default=16) - number of levels back in the stack trace to list expression - and function names; if None, the full stack trace names will be listed; if 0, only - the failing input line, marker, and exception string will be shown - - Returns a multi-line string listing the ParserElements and/or function names in the - exception's stack trace. - - Example:: - - expr = pp.Word(pp.nums) * 3 - try: - expr.parse_string("123 456 A789") - except pp.ParseException as pe: - print(pe.explain(depth=0)) - - prints:: - - 123 456 A789 - ^ - ParseException: Expected W:(0-9), found 'A' (at char 8), (line:1, col:9) - - Note: the diagnostic output will include string representations of the expressions - that failed to parse. These representations will be more helpful if you use `set_name` to - give identifiable names to your expressions. Otherwise they will use the default string - forms, which may be cryptic to read. - - Note: pyparsing's default truncation of exception tracebacks may also truncate the - stack of expressions that are displayed in the ``explain`` output. To get the full listing - of parser expressions, you may have to set ``ParserElement.verbose_stacktrace = True`` - """ - return self.explain_exception(self, depth) - - markInputline = mark_input_line - - -class ParseException(ParseBaseException): - """ - Exception thrown when a parse expression doesn't match the input string - - Example:: - - try: - Word(nums).set_name("integer").parse_string("ABC") - except ParseException as pe: - print(pe) - print("column: {}".format(pe.column)) - - prints:: - - Expected integer (at char 0), (line:1, col:1) - column: 1 - - """ - - -class ParseFatalException(ParseBaseException): - """ - User-throwable exception thrown when inconsistent parse content - is found; stops all parsing immediately - """ - - -class ParseSyntaxException(ParseFatalException): - """ - Just like :class:`ParseFatalException`, but thrown internally - when an :class:`ErrorStop` ('-' operator) indicates - that parsing is to stop immediately because an unbacktrackable - syntax error has been found. - """ - - -class RecursiveGrammarException(Exception): - """ - Exception thrown by :class:`ParserElement.validate` if the - grammar could be left-recursive; parser may need to enable - left recursion using :class:`ParserElement.enable_left_recursion` - """ - - def __init__(self, parseElementList): - self.parseElementTrace = parseElementList - - def __str__(self) -> str: - return "RecursiveGrammarException: {}".format(self.parseElementTrace) diff --git a/spaces/Banbri/zcvzcv/public/favicon/index.html b/spaces/Banbri/zcvzcv/public/favicon/index.html deleted file mode 100644 index 1d4b47a9a57bc253915e84d95d4ba889c7c68c9a..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/public/favicon/index.html +++ /dev/null @@ -1,133 +0,0 @@ - - - - Favicons - - - - - - - - - - - - - - - - - - - - - - - - -
    -

    - To use the favicons, insert some of these tags into your head section according to your needs. -
    

    -
    -
    -            
    -                <!-- For old IEs -->
    -                <link rel="shortcut icon" href="favicon.ico" />
    -                
    -                <!-- For new browsers - multisize ico  -->
    -                <link rel="icon" type="image/x-icon" sizes="16x16 32x32" href="favicon.ico">
    -                
    -                <!-- For iPad with high-resolution Retina display running iOS ≥ 7: -->
    -                <link rel="apple-touch-icon" sizes="152x152" href="favicon-152-precomposed.png">
    -                
    -                <!-- For iPad with high-resolution Retina display running iOS ≤ 6: -->
    -                <link rel="apple-touch-icon" sizes="144x144" href="favicon-144-precomposed.png">
    -                
    -                <!-- For iPhone with high-resolution Retina display running iOS ≥ 7: -->
    -                <link rel="apple-touch-icon" sizes="120x120" href="favicon-120-precomposed.png">
    -                
    -                <!-- For iPhone with high-resolution Retina display running iOS ≤ 6: -->
    -                <link rel="apple-touch-icon" sizes="114x114" href="favicon-114-precomposed.png">
    -                
    -                <!-- For iPhone 6+ -->
    -                <link rel="apple-touch-icon" sizes="180x180" href="favicon-180-precomposed.png">
    -                
    -                <!-- For first- and second-generation iPad: -->
    -                <link rel="apple-touch-icon" sizes="72x72" href="favicon-72-precomposed.png">
    -                
    -                <!-- For non-Retina iPhone, iPod Touch, and Android 2.1+ devices: -->
    -                <link rel="apple-touch-icon" sizes="57x57" href="favicon-57.png">
    -                
    -                <!-- For Old Chrome -->
    -                <link rel="icon" sizes="32x32" href="favicon-32.png" >
    -                
    -                <!-- For IE10 Metro -->
    -                <meta name="msapplication-TileColor" content="#FFFFFF">
    -                <meta name="msapplication-TileImage" content="favicon-144.png">
    -                <meta name="theme-color" content="#ffffff">
    -                
    -                <!-- Chrome for Android -->
    -                <link rel="manifest" href="manifest.json">
    -                <link rel="icon" sizes="192x192" href="favicon-192.png">
    -                
    -            
    -        
    - -
    - -
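    - The Chrome for Android tags in the listing above point at a manifest.json that is not included in this template. As a rough sketch only — the app name and display mode below are placeholder assumptions, not part of the original template, while the icon path and colors are taken from the tags above — such a manifest might look like:
    - {
    -   "$comment": "placeholder values; replace the name and colors with your own",
    -   "name": "My App",
    -   "icons": [
    -     { "src": "favicon-192.png", "sizes": "192x192", "type": "image/png" }
    -   ],
    -   "theme_color": "#ffffff",
    -   "background_color": "#ffffff",
    -   "display": "standalone"
    - }
    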

    - For more information about favicons, consult The Favicon Cheat Sheet by Audrey Roy. -
    

    - -
    - - diff --git a/spaces/Benson/text-generation/Examples/Descargar Carx Drift Racing 2 Mod Apk Nueva Versin.md b/spaces/Benson/text-generation/Examples/Descargar Carx Drift Racing 2 Mod Apk Nueva Versin.md deleted file mode 100644 index 168cfbc5467ba70f079650a250e11efd76f31ba9..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Carx Drift Racing 2 Mod Apk Nueva Versin.md +++ /dev/null @@ -1,59 +0,0 @@ - -

    Download CarX Drift Racing 2 Mod APK New Version: A Guide for Car Racing Enthusiasts
    

    -

    If you are a fan of car racing games, especially drifting, then you must have heard of CarX Drift Racing 2. It is one of the best drifting games on Android, letting you experience the thrill of driving fast cars over a variety of tracks and terrains. You can customize your car, tune your engine, and compete with other players online or offline.
    

    -

    download carx drift racing 2 mod apk new version
    


    DOWNLOAD - https://bltlly.com/2v6Ksx



    -

    However, if you want to enjoy the game to the fullest, you may have to spend real money to unlock all the cars, tracks, and features the game has to offer. That can be frustrating and expensive for players who just want to have fun without breaking the bank.
    

    -

    That is why many players look for ways to download CarX Drift Racing 2 Mod APK, a modified version of the game that gives you unlimited money, gold, and access to all the cars and tracks for free. Sounds amazing, right?
    

    -

    In this article, we will show you how to download and install the new version of CarX Drift Racing 2 Mod APK on your Android device. We will also cover the features, pros, and cons of using this mod apk. So, if you are ready to take your drifting skills to the next level, keep reading!
    

    -

    Features of CarX Drift Racing 2 Mod APK
    

    -

    CarX Drift Racing 2 Mod APK is not just the regular version of the game you can download from the Google Play Store. It is a hacked version, modified by third-party developers to give you unlimited resources and features you would normally have to pay for. These are some of the features of CarX Drift Racing 2 Mod APK:
    

    -
      -
    • Unlimited money and gold: You get unlimited money and gold in your account, which you can use to buy any car, track, or upgrade you want. You never have to worry about running out of cash or saving up for your dream car.
    
    • - -
    • Customize your car and tune your engine: You can customize your car with different paint colors, decals, wheels, spoilers, and more. You can also tune your engine with different parts and settings to improve its performance and handling.
    
    • -
    • Enjoy realistic physics and graphics: CarX Drift Racing 2 Mod APK has realistic physics and graphics that make you feel like you are driving a real car. You can see the smoke, dust, sparks, and skid marks as you drift around the tracks. You can also adjust the camera angle and view to suit your preferences.
    
    • -
    • Compete with other players online or offline: You can play online against other players who have the same mod apk version as you, or offline against AI opponents. You can also join clubs, take part in tournaments, and climb the leaderboards.
    
    • -
    -

    How to Download and Install CarX Drift Racing 2 Mod APK
    

    -

    Downloading and installing CarX Drift Racing 2 Mod APK is quick and simple. Just follow these steps:
    

    -

    -
      -
    1. Download the mod apk file from a trusted source: You can find many websites that offer CarX Drift Racing 2 Mod APK as a free download. However, not all of them are safe and reliable. Some may contain malware or viruses that can damage your device or steal your data. Always download from a reputable site and scan the file before installing.
    
    2. -
    3. Enable unknown sources in your device settings: Before installing the mod apk file, you need to enable unknown sources in your device settings. This allows you to install apps that do not come from the Google Play Store. To do this, go to Settings > Security > Unknown sources and turn it on.
    
    4. - -
    5. Enjoy the game with unlimited resources and features: Once you have launched the game, you will see that you have unlimited money, gold, cars, tracks, and features. You can start playing right away and enjoy the game without limits or restrictions.
    
    6. -
    -

    Pros and Cons of CarX Drift Racing 2 Mod APK
    

    -

    CarX Drift Racing 2 Mod APK has many advantages, but it also has some drawbacks. Here are a few of them:
    

    -

    Pros

    -
      -
    • Free, easy, fun, and addictive: CarX Drift Racing 2 Mod APK is free to download and install, easy to use, fun to play, and addictive to master. You do not have to spend money or time to enjoy the game in full.
    
    • -
    • No ads or in-app purchases: CarX Drift Racing 2 Mod APK has no ads or in-app purchases to interrupt your gameplay or tempt you into spending more money. You can play without distractions or pressure.
    
    • -
    • No root required: CarX Drift Racing 2 Mod APK does not require root access to work on your device. This means you do not have to risk damaging your device or voiding its warranty by rooting it.
    
    • -
    • Compatible with most devices: CarX Drift Racing 2 Mod APK is compatible with most Android devices running Android 4.1 or higher. It does not need much storage space or RAM to run smoothly.
    
    • -
    -

    Cons
    

    -
      -
    • May not work on some devices: CarX Drift Racing 2 Mod APK may not work on some devices due to compatibility or technical issues. You may need to check your device's compatibility before downloading and installing the mod apk.
    
    • - -
    • May violate the game's terms of service: CarX Drift Racing 2 Mod APK may violate the game's terms of service, which can get you banned from the game or cost you your account. You can also lose the progress, achievements, or rewards you have earned in the game.
    
    • -
    -

    Conclusion
    

    -

    CarX Drift Racing 2 Mod APK is a great option for car racing enthusiasts who want to enjoy the game without limits or costs. It gives you unlimited money, gold, cars, tracks, and features that make the game more fun and exciting. You can easily download and install it on your Android device and start drifting like a pro.
    

    -

    However, you should also be aware of the risks and downsides of using CarX Drift Racing 2 Mod APK, such as compatibility problems, security concerns, and terms-of-service violations. Always download from a reliable source and use it at your own discretion and responsibility.
    

    -

    If you are looking for a way to download the new version of CarX Drift Racing 2 Mod APK, follow the steps in this article and enjoy the game. If you have any questions or feedback, feel free to leave a comment below. Happy drifting!
    

    -

    Frequently Asked Questions
    

    -
      -
    • Q1: Is CarX Drift Racing 2 Mod APK safe to use?
    
    • -
    • A1: It depends on the source of the mod apk file. Some may contain malware or viruses that can damage your device or steal your data. Always download from a reputable site and scan the file before installing.
    
    • -
    • Q2: Can I play CarX Drift Racing 2 Mod APK online?
    
    • -
    • A2: Yes, you can play online with other players who have the same mod apk version as you. However, you may not be able to access some features or modes that require the official version of the game.
    
    • -
    • Q3: Will I get banned for using CarX Drift Racing 2 Mod APK?
    
    • -
    • A3: There is a risk. As the cons section above notes, the mod apk may violate the game's terms of service, which can result in being banned or losing your account and progress, so use it at your own discretion.
    • -
    
    • Q4: How do I update CarX Drift Racing 2 Mod APK?
    
    • -
    • A4: You can update the mod apk by downloading the latest version from the same source you got it from. Make sure to back up your data before uninstalling the old version and installing the new one.
    
    • -
    • Q5: What are some alternatives to CarX Drift Racing 2 Mod APK?
    
    • -
    • A5: Some alternatives to CarX Drift Racing 2 Mod APK are Real Drift Car Racing, Torque Drift, Drift Max Pro, and FR Legends. These are also popular car racing games that offer different modes, features, and challenges for drifting fans.
    
    • -

    
    -
    -
    \ No newline at end of file diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/server/modelEndpoint.ts b/spaces/BetterAPI/BetterChat_new/src/lib/server/modelEndpoint.ts deleted file mode 100644 index 4d187da21c37cbbe8efd722c09fee1815bd1c71f..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat_new/src/lib/server/modelEndpoint.ts +++ /dev/null @@ -1,21 +0,0 @@ -import { MODEL_ENDPOINTS } from "$env/static/private"; -import { sum } from "$lib/utils/sum"; - -const endpoints: Array<{ endpoint: string; authorization: string; weight: number }> = - JSON.parse(MODEL_ENDPOINTS); -const totalWeight = sum(endpoints.map((e) => e.weight)); - -/** - * Find a random load-balanced endpoint - */ -export function modelEndpoint(): { endpoint: string; authorization: string; weight: number } { - let random = Math.random() * totalWeight; - for (const endpoint of endpoints) { - if (random < endpoint.weight) { - return endpoint; - } - random -= endpoint.weight; - } - - throw new Error("Invalid config, no endpoint found"); -} diff --git a/spaces/BilalSardar/StoryGenerator/app.py b/spaces/BilalSardar/StoryGenerator/app.py deleted file mode 100644 index 9cc2afc87db412b7ad2c39d8db2b4a6ee3242d72..0000000000000000000000000000000000000000 --- a/spaces/BilalSardar/StoryGenerator/app.py +++ /dev/null @@ -1,15 +0,0 @@ -from transformers import pipeline -import gradio as gr -def story(StoryLength,StoryPrompt): - model= pipeline("text-generation", model="e-tony/gpt2-rnm") - summarizer = pipeline("summarization", model="facebook/bart-large-cnn") - return model(StoryPrompt, max_length=200, num_return_sequences=3)[2]["generated_text"],summarizer(model(StoryPrompt, max_length=200, num_return_sequences=3)[2]["generated_text"], max_length=StoryLength, min_length=30, do_sample=False)[0]["summary_text"] - - -interface = gr.Interface(fn=story, - inputs=["number","text"], - outputs=[gr.inputs.Textbox(label='GPT2 Output'),gr.inputs.Textbox(label='Output summary')], - title='Bilal\'s Story Generator') - - -interface.launch(inline=False) diff --git a/spaces/CHDCruze/entertainmentbybhdcruze/README.md b/spaces/CHDCruze/entertainmentbybhdcruze/README.md deleted file mode 100644 index 63071aefb25a1e355da68960395353a6e5b97195..0000000000000000000000000000000000000000 --- a/spaces/CHDCruze/entertainmentbybhdcruze/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Entertainmentbybhdcruze -emoji: 📉 -colorFrom: blue -colorTo: red -sdk: static -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CVPR/LIVE/thrust/thrust/async/transform.h b/spaces/CVPR/LIVE/thrust/thrust/async/transform.h deleted file mode 100644 index 89687e93ad38ed03df4638b0b98f15b78c8826d7..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/async/transform.h +++ /dev/null @@ -1,134 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a transform of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! 
\file async/transform.h - * \brief Functions for asynchronously transforming a range. - */ - -#pragma once - -#include -#include - -#if THRUST_CPP_DIALECT >= 2014 - -#include -#include -#include -#include - -#include - -namespace thrust -{ - -namespace async -{ - -namespace unimplemented -{ - -template < - typename DerivedPolicy -, typename ForwardIt, typename Sentinel, typename OutputIt -, typename UnaryOperation -> -__host__ -event -async_transform( - thrust::execution_policy& exec -, ForwardIt first, Sentinel last, OutputIt output, UnaryOperation op -) -{ - THRUST_STATIC_ASSERT_MSG( - (thrust::detail::depend_on_instantiation::value) - , "this algorithm is not implemented for the specified system" - ); - return {}; -} - -} // namespace unimplemented - -namespace transform_detail -{ - -using thrust::async::unimplemented::async_transform; - -struct transform_fn final -{ - template < - typename DerivedPolicy - , typename ForwardIt, typename Sentinel, typename OutputIt - , typename UnaryOperation - > - __host__ - static auto - call( - thrust::detail::execution_policy_base const& exec - , ForwardIt&& first, Sentinel&& last - , OutputIt&& output - , UnaryOperation&& op - ) - // ADL dispatch. - THRUST_RETURNS( - async_transform( - thrust::detail::derived_cast(thrust::detail::strip_const(exec)) - , THRUST_FWD(first), THRUST_FWD(last) - , THRUST_FWD(output) - , THRUST_FWD(op) - ) - ) - - template < - typename ForwardIt, typename Sentinel, typename OutputIt - , typename UnaryOperation - > - __host__ - static auto call( - ForwardIt&& first, Sentinel&& last - , OutputIt&& output - , UnaryOperation&& op - ) - THRUST_RETURNS( - transform_fn::call( - thrust::detail::select_system( - typename iterator_system>::type{} - , typename iterator_system>::type{} - ) - , THRUST_FWD(first), THRUST_FWD(last) - , THRUST_FWD(output) - , THRUST_FWD(op) - ) - ) - - template - THRUST_NODISCARD __host__ - auto operator()(Args&&... args) const - THRUST_RETURNS( - call(THRUST_FWD(args)...) - ) -}; - -} // namespace tranform_detail - -THRUST_INLINE_CONSTANT transform_detail::transform_fn transform{}; - -} // namespace async - -} // end namespace thrust - -#endif - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/extrema.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/extrema.h deleted file mode 100644 index a3ee8188971687249b7052ef4f062f5adf972768..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/extrema.h +++ /dev/null @@ -1,89 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file extrema.h - * \brief Generic device implementations of extrema functions. 
- */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace generic -{ - - -template -__host__ __device__ -ForwardIterator max_element(thrust::execution_policy &exec, - ForwardIterator first, - ForwardIterator last); - - -template -__host__ __device__ -ForwardIterator max_element(thrust::execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - BinaryPredicate comp); - - -template -__host__ __device__ -ForwardIterator min_element(thrust::execution_policy &exec, - ForwardIterator first, - ForwardIterator last); - - -template -__host__ __device__ -ForwardIterator min_element(thrust::execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - BinaryPredicate comp); - - -template -__host__ __device__ -thrust::pair minmax_element(thrust::execution_policy &exec, - ForwardIterator first, - ForwardIterator last); - - -template -__host__ __device__ -thrust::pair minmax_element(thrust::execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - BinaryPredicate comp); - - -} // end namespace generic -} // end namespace detail -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/memory_resource.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/memory_resource.h deleted file mode 100644 index 6a540d834939b928a4b6049c6a97d2289ab43257..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/memory_resource.h +++ /dev/null @@ -1,63 +0,0 @@ -/* - * Copyright 2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file omp/memory_resource.h - * \brief Memory resources for the OMP system. - */ - -#pragma once - -#include -#include -#include - -#include - -namespace thrust -{ -namespace system -{ -namespace omp -{ - -//! \cond -namespace detail -{ - typedef thrust::mr::fancy_pointer_resource< - thrust::mr::new_delete_resource, - thrust::omp::pointer - > native_resource; -} -//! \endcond - -/*! \addtogroup memory_resources Memory Resources - * \ingroup memory_management_classes - * \{ - */ - -/*! The memory resource for the OMP system. Uses \p mr::new_delete_resource and tags it with \p omp::pointer. */ -typedef detail::native_resource memory_resource; -/*! An alias for \p omp::memory_resource. */ -typedef detail::native_resource universal_memory_resource; -/*! An alias for \p omp::memory_resource. */ -typedef detail::native_resource universal_host_pinned_memory_resource; - -/*! \} - */ - -} -} -} diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/SAA/prompts/mtd_parameters.py b/spaces/Caoyunkang/Segment-Any-Anomaly/SAA/prompts/mtd_parameters.py deleted file mode 100644 index fdfa78efee14c97a4cd449869bf0c48eac159508..0000000000000000000000000000000000000000 --- a/spaces/Caoyunkang/Segment-Any-Anomaly/SAA/prompts/mtd_parameters.py +++ /dev/null @@ -1,11 +0,0 @@ -manual_prompts = { - 'mtd': [ - ['black hole. blow hole. break. crack. fray. 
uneven.', 'mtd'], - ['defect.', 'mtd'], - ], - -} - -property_prompts = { - 'ksdd2': 'the image of ksdd2 have 1 dissimilar ksdd2, with a maximum of 5 anomaly. The anomaly would not exceed 0.9 object area. ', -} diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/modules/hubert/cn_hubert.py b/spaces/ChrisPreston/diff-svc_minato_aqua/modules/hubert/cn_hubert.py deleted file mode 100644 index ba1c34bc8ce8c3c638b846f2da1da0ca27a52121..0000000000000000000000000000000000000000 --- a/spaces/ChrisPreston/diff-svc_minato_aqua/modules/hubert/cn_hubert.py +++ /dev/null @@ -1,40 +0,0 @@ -import librosa -import torch -import torch.nn as nn - - -def load_cn_model(ch_hubert_path): - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - from fairseq import checkpoint_utils - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [ch_hubert_path], - suffix="", - ) - model = models[0] - model = model.to(device) - model.eval() - return model - - -def get_cn_hubert_units(con_model, audio_path, dev): - audio, sampling_rate = librosa.load(audio_path) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - - feats = torch.from_numpy(audio).float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.to(dev), - "padding_mask": padding_mask.to(dev), - "output_layer": 9, # layer 9 - } - with torch.no_grad(): - logits = con_model.extract_features(**inputs) - feats = con_model.final_proj(logits[0]) - return feats diff --git a/spaces/ClementBM/connectfour/models/__init__.py b/spaces/ClementBM/connectfour/models/__init__.py deleted file mode 100644 index 1de75226a9c5aae71a9e9b306b29cb52118f6741..0000000000000000000000000000000000000000 --- a/spaces/ClementBM/connectfour/models/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from pathlib import Path - -MODEL_PATH = Path(__file__).parent.absolute() / "model.onnx" diff --git a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Gravityengine.py b/spaces/CofAI/chat.b4/g4f/Provider/Providers/Gravityengine.py deleted file mode 100644 index f0cd09daaaae0adaa349f91139dc60c7ac79c028..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Gravityengine.py +++ /dev/null @@ -1,27 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://gpt4.xunika.uk/' -model = ['gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - } - data = { - 'model': model, - 'temperature': 0.7, - 'presence_penalty': 0, - 'messages': messages, - } - response = requests.post(url + '/api/openai/v1/chat/completions', - json=data, stream=True) - - yield response.json()['choices'][0]['message']['content'] - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/CofAI/chat/client/css/select.css b/spaces/CofAI/chat/client/css/select.css deleted file mode 100644 index 
7ec0159206439deca5c26f32fd92d2b1459f0273..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat/client/css/select.css +++ /dev/null @@ -1,35 +0,0 @@ -select { - -webkit-border-radius: 8px; - -moz-border-radius: 8px; - border-radius: 8px; - - -webkit-backdrop-filter: blur(20px); - backdrop-filter: blur(20px); - - cursor: pointer; - background-color: var(--blur-bg); - border: 1px solid var(--blur-border); - color: var(--colour-3); - display: block; - position: relative; - overflow: hidden; - outline: none; - padding: 8px 16px; - - appearance: none; -} - -/* scrollbar */ -select.dropdown::-webkit-scrollbar { - width: 4px; - padding: 8px 0px; -} - -select.dropdown::-webkit-scrollbar-track { - background-color: #ffffff00; -} - -select.dropdown::-webkit-scrollbar-thumb { - background-color: #555555; - border-radius: 10px; -} diff --git a/spaces/CognitiveLabs/Research-Assistant/test/test4.py b/spaces/CognitiveLabs/Research-Assistant/test/test4.py deleted file mode 100644 index d9f6c140d753757ba78d021bac903ee8b6726be9..0000000000000000000000000000000000000000 --- a/spaces/CognitiveLabs/Research-Assistant/test/test4.py +++ /dev/null @@ -1,6 +0,0 @@ -def test(): - yield 1 - return 2 - -a, b = test() -print(a, b) \ No newline at end of file diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/models/diffusion/__init__.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/models/diffusion/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/designspaceLib/types.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/designspaceLib/types.py deleted file mode 100644 index 80ba9d6d7b44f58773f42107d672c13651c166a9..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/designspaceLib/types.py +++ /dev/null @@ -1,147 +0,0 @@ -from __future__ import annotations - -from dataclasses import dataclass -from typing import Dict, List, Optional, Union, cast - -from fontTools.designspaceLib import ( - AxisDescriptor, - DesignSpaceDocument, - DesignSpaceDocumentError, - RangeAxisSubsetDescriptor, - SimpleLocationDict, - ValueAxisSubsetDescriptor, - VariableFontDescriptor, -) - - -def clamp(value, minimum, maximum): - return min(max(value, minimum), maximum) - - -@dataclass -class Range: - minimum: float - """Inclusive minimum of the range.""" - maximum: float - """Inclusive maximum of the range.""" - default: float = 0 - """Default value""" - - def __post_init__(self): - self.minimum, self.maximum = sorted((self.minimum, self.maximum)) - self.default = clamp(self.default, self.minimum, self.maximum) - - def __contains__(self, value: Union[float, Range]) -> bool: - if isinstance(value, Range): - return self.minimum <= value.minimum and value.maximum <= self.maximum - return self.minimum <= value <= self.maximum - - def intersection(self, other: Range) -> Optional[Range]: - if self.maximum < other.minimum or self.minimum > other.maximum: - return None - else: - return Range( - max(self.minimum, other.minimum), - min(self.maximum, other.maximum), - self.default, # We don't care about the default in this use-case - ) - - -# A region selection is either a range or a single value, as a Designspace v5 -# axis-subset element only allows a single discrete value or a range for a -# variable-font element. 
-Region = Dict[str, Union[Range, float]] - -# A conditionset is a set of named ranges. -ConditionSet = Dict[str, Range] - -# A rule is a list of conditionsets where any has to be relevant for the whole rule to be relevant. -Rule = List[ConditionSet] -Rules = Dict[str, Rule] - - -def locationInRegion(location: SimpleLocationDict, region: Region) -> bool: - for name, value in location.items(): - if name not in region: - return False - regionValue = region[name] - if isinstance(regionValue, (float, int)): - if value != regionValue: - return False - else: - if value not in regionValue: - return False - return True - - -def regionInRegion(region: Region, superRegion: Region) -> bool: - for name, value in region.items(): - if not name in superRegion: - return False - superValue = superRegion[name] - if isinstance(superValue, (float, int)): - if value != superValue: - return False - else: - if value not in superValue: - return False - return True - - -def userRegionToDesignRegion(doc: DesignSpaceDocument, userRegion: Region) -> Region: - designRegion = {} - for name, value in userRegion.items(): - axis = doc.getAxis(name) - if axis is None: - raise DesignSpaceDocumentError( - f"Cannot find axis named '{name}' for region." - ) - if isinstance(value, (float, int)): - designRegion[name] = axis.map_forward(value) - else: - designRegion[name] = Range( - axis.map_forward(value.minimum), - axis.map_forward(value.maximum), - axis.map_forward(value.default), - ) - return designRegion - - -def getVFUserRegion(doc: DesignSpaceDocument, vf: VariableFontDescriptor) -> Region: - vfUserRegion: Region = {} - # For each axis, 2 cases: - # - it has a range = it's an axis in the VF DS - # - it's a single location = use it to know which rules should apply in the VF - for axisSubset in vf.axisSubsets: - axis = doc.getAxis(axisSubset.name) - if axis is None: - raise DesignSpaceDocumentError( - f"Cannot find axis named '{axisSubset.name}' for variable font '{vf.name}'." - ) - if hasattr(axisSubset, "userMinimum"): - # Mypy doesn't support narrowing union types via hasattr() - # TODO(Python 3.10): use TypeGuard - # https://mypy.readthedocs.io/en/stable/type_narrowing.html - axisSubset = cast(RangeAxisSubsetDescriptor, axisSubset) - if not hasattr(axis, "minimum"): - raise DesignSpaceDocumentError( - f"Cannot select a range over '{axis.name}' for variable font '{vf.name}' " - "because it's a discrete axis, use only 'userValue' instead." - ) - axis = cast(AxisDescriptor, axis) - vfUserRegion[axis.name] = Range( - max(axisSubset.userMinimum, axis.minimum), - min(axisSubset.userMaximum, axis.maximum), - axisSubset.userDefault or axis.default, - ) - else: - axisSubset = cast(ValueAxisSubsetDescriptor, axisSubset) - vfUserRegion[axis.name] = axisSubset.userValue - # Any axis not mentioned explicitly has a single location = default value - for axis in doc.axes: - if axis.name not in vfUserRegion: - assert isinstance( - axis.default, (int, float) - ), f"Axis '{axis.name}' has no valid default value." 
- vfUserRegion[axis.name] = axis.default - return vfUserRegion diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3b0ff54c.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3b0ff54c.js deleted file mode 100644 index f76a74f4bed7b360536d2e7258b9651c5ef9a93b..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3b0ff54c.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as ce,e as he,s as me,Z as ge,O as R,m as Ne,p as D,Q as K,z as y,u as O,v as B,y as Q,A as L,B as Re,k as T,o as q,x as H,a7 as Te,h as ee,F as V,G as Z,N as S,K as b,U as N,M as G,ar as qe,V as we,T as P,L as le,a1 as He,P as be,R as de,E as Ie,ae as Me,q as Fe,r as Pe}from"./index-1d65707a.js";import{B as Ue}from"./Button-f155035a.js";import{B as Ve}from"./BlockLabel-66866176.js";import{E as Ke}from"./Empty-eec13822.js";import{u as Oe,S as Qe}from"./ShareButton-8cd3d8f6.js";import{n as te}from"./ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js";import{M as Ze}from"./ModifyUpload-c89cfce3.js";import{I as ke}from"./Image-0fe369ad.js";import"./IconButton-d42f3661.js";const Je=async l=>l?`
    ${(await Promise.all(l.map(async([e,i])=>e===null?"":await Oe(e.data,"url")))).map(e=>``).join("")}
    `:"";function ie(l,t,e){const i=l.slice();return i[39]=t[e][0],i[40]=t[e][1],i[42]=e,i}function ne(l,t,e){const i=l.slice();return i[39]=t[e],i[43]=t,i[42]=e,i}function re(l){let t,e;return t=new Ve({props:{show_label:l[0],Icon:ke,label:l[1]||"Gallery"}}),{c(){T(t.$$.fragment)},m(i,n){q(t,i,n),e=!0},p(i,n){const o={};n[0]&1&&(o.show_label=i[0]),n[0]&2&&(o.label=i[1]||"Gallery"),t.$set(o)},i(i){e||(y(t.$$.fragment,i),e=!0)},o(i){B(t.$$.fragment,i),e=!1},d(i){H(t,i)}}}function We(l){let t,e,i,n,o,d,s,a=l[8]!==null&&l[4]&&ae(l),u=l[6]&&fe(l),f=Z(l[7]),m=[];for(let r=0;rl[35].call(e)),N(e,"fixed-height",!l[3]||l[3]=="auto")},m(r,w){a&&a.m(r,w),D(r,t,w),D(r,e,w),G(e,i),u&&u.m(i,null),G(i,n);for(let c=0;c{a=null}),Q()),r[6]?u?(u.p(r,w),w[0]&64&&y(u,1)):(u=fe(r),u.c(),y(u,1),u.m(i,n)):u&&(O(),B(u,1,1,()=>{u=null}),Q()),w[0]&384){f=Z(r[7]);let c;for(c=0;cl[29](t,s),m=()=>l[29](null,s);function r(){return l[30](l[42])}return{c(){t=S("button"),e=S("img"),d=R(),P(e.src,i=l[39][0].data)||b(e,"src",i),b(e,"title",n=l[39][1]||null),b(e,"alt",o=l[39][1]||null),b(e,"class","svelte-w0jac3"),b(t,"class","thumbnail-item thumbnail-small svelte-w0jac3"),N(t,"selected",l[8]===l[42])},m(w,c){D(w,t,c),G(t,e),G(t,d),f(),a||(u=K(t,"click",r),a=!0)},p(w,c){l=w,c[0]&128&&!P(e.src,i=l[39][0].data)&&b(e,"src",i),c[0]&128&&n!==(n=l[39][1]||null)&&b(e,"title",n),c[0]&128&&o!==(o=l[39][1]||null)&&b(e,"alt",o),s!==l[42]&&(m(),s=l[42],f()),c[0]&256&&N(t,"selected",l[8]===l[42])},d(w){w&&L(t),m(),a=!1,u()}}}function fe(l){let t,e,i;return e=new Qe({props:{value:l[7],formatter:Je}}),e.$on("share",l[32]),e.$on("error",l[33]),{c(){t=S("div"),T(e.$$.fragment),b(t,"class","icon-button svelte-w0jac3")},m(n,o){D(n,t,o),q(e,t,null),i=!0},p(n,o){const d={};o[0]&128&&(d.value=n[7]),e.$set(d)},i(n){i||(y(e.$$.fragment,n),i=!0)},o(n){B(e.$$.fragment,n),i=!1},d(n){n&&L(t),H(e)}}}function _e(l){let t,e=l[40]+"",i;return{c(){t=S("div"),i=be(e),b(t,"class","caption-label svelte-w0jac3")},m(n,o){D(n,t,o),G(t,i)},p(n,o){o[0]&128&&e!==(e=n[40]+"")&&de(i,e)},d(n){n&&L(t)}}}function ue(l){let t,e,i,n,o,d,s,a,u=l[40]&&_e(l);function f(){return l[34](l[42])}return{c(){t=S("button"),e=S("img"),o=R(),u&&u.c(),d=R(),b(e,"alt",i=l[40]||""),P(e.src,n=typeof l[39]=="string"?l[39]:l[39].data)||b(e,"src",n),b(e,"class","svelte-w0jac3"),b(t,"class","thumbnail-item thumbnail-lg svelte-w0jac3"),N(t,"selected",l[8]===l[42])},m(m,r){D(m,t,r),G(t,e),G(t,o),u&&u.m(t,null),G(t,d),s||(a=K(t,"click",f),s=!0)},p(m,r){l=m,r[0]&128&&i!==(i=l[40]||"")&&b(e,"alt",i),r[0]&128&&!P(e.src,n=typeof l[39]=="string"?l[39]:l[39].data)&&b(e,"src",n),l[40]?u?u.p(l,r):(u=_e(l),u.c(),u.m(t,d)):u&&(u.d(1),u=null),r[0]&256&&N(t,"selected",l[8]===l[42])},d(m){m&&L(t),u&&u.d(),s=!1,a()}}}function Ye(l){let t,e;return t=new ke({}),{c(){T(t.$$.fragment)},m(i,n){q(t,i,n),e=!0},i(i){e||(y(t.$$.fragment,i),e=!0)},o(i){B(t.$$.fragment,i),e=!1},d(i){H(t,i)}}}function pe(l){let t,e,i,n,o,d,s;ge(l[26]);let a=l[0]&&re(l);const u=[Xe,We],f=[];function m(r,w){return r[2]===null||r[7]===null||r[7].length===0?0:1}return e=m(l),i=f[e]=u[e](l),{c(){a&&a.c(),t=R(),i.c(),n=Ne()},m(r,w){a&&a.m(r,w),D(r,t,w),f[e].m(r,w),D(r,n,w),o=!0,d||(s=K(window,"resize",l[26]),d=!0)},p(r,w){r[0]?a?(a.p(r,w),w[0]&1&&y(a,1)):(a=re(r),a.c(),y(a,1),a.m(t.parentNode,t)):a&&(O(),B(a,1,1,()=>{a=null}),Q());let 
c=e;e=m(r),e===c?f[e].p(r,w):(O(),B(f[c],1,1,()=>{f[c]=null}),Q(),i=f[e],i?i.p(r,w):(i=f[e]=u[e](r),i.c()),y(i,1),i.m(n.parentNode,n))},i(r){o||(y(a),y(i),o=!0)},o(r){B(a),B(i),o=!1},d(r){r&&(L(t),L(n)),a&&a.d(r),f[e].d(r),d=!1,s()}}}function xe(l,t,e){let i,n,{container:o=!0}=t,{show_label:d=!0}=t,{label:s}=t,{root:a=""}=t,{root_url:u=null}=t,{value:f=null}=t,{grid_cols:m=[2]}=t,{grid_rows:r=void 0}=t,{height:w="auto"}=t,{preview:c}=t,{allow_preview:A=!0}=t,{object_fit:j="cover"}=t,{show_share_button:g=!1}=t;const z=Re();let k=!0,E=null,M=f,v=c&&f?.length?0:null,U=v;function J(_){switch(_.code){case"Escape":_.preventDefault(),e(8,v=null);break;case"ArrowLeft":_.preventDefault(),e(8,v=i);break;case"ArrowRight":_.preventDefault(),e(8,v=n);break}}let h=[],F;async function ve(_){if(typeof _!="number")return;await Te(),h[_].focus();const{left:C,width:X}=F.getBoundingClientRect(),{left:I,width:Le}=h[_].getBoundingClientRect(),$=I-C+Le/2-X/2+F.scrollLeft;F?.scrollTo({left:$<0?0:$,behavior:"smooth"})}let W=0,Y=0,p="",x="";function je(){e(12,Y=window.innerHeight)}const ye=()=>e(8,v=null),Ae=()=>e(8,v=n);function Be(_,C){ee[_?"unshift":"push"](()=>{h[C]=_,e(9,h)})}const ze=_=>e(8,v=_);function Ge(_){ee[_?"unshift":"push"](()=>{F=_,e(10,F)})}function Ee(_){V.call(this,l,_)}function Se(_){V.call(this,l,_)}const Ce=_=>e(8,v=_);function De(){W=this.clientHeight,e(11,W)}return l.$$set=_=>{"container"in _&&e(17,o=_.container),"show_label"in _&&e(0,d=_.show_label),"label"in _&&e(1,s=_.label),"root"in _&&e(18,a=_.root),"root_url"in _&&e(19,u=_.root_url),"value"in _&&e(2,f=_.value),"grid_cols"in _&&e(20,m=_.grid_cols),"grid_rows"in _&&e(21,r=_.grid_rows),"height"in _&&e(3,w=_.height),"preview"in _&&e(22,c=_.preview),"allow_preview"in _&&e(4,A=_.allow_preview),"object_fit"in _&&e(5,j=_.object_fit),"show_share_button"in _&&e(6,g=_.show_share_button)},l.$$.update=()=>{if(l.$$.dirty[0]&8388612&&e(23,k=f==null||f.length==0?!0:k),l.$$.dirty[0]&786436&&e(7,E=f===null?null:f.map(_=>Array.isArray(_)?[te(_[0],a,u),_[1]]:[te(_,a,u),null])),l.$$.dirty[0]&29360388&&M!==f&&(k?(e(8,v=c&&f?.length?0:null),e(23,k=!1)):e(8,v=v!==null&&f!==null&&v`--${_[I]}grid-cols: var(--grid-${C?.[I]||C?.[C?.length-1]});`).join(" "))}if(l.$$.dirty[0]&2097152){let _=["","sm-","md-","lg-","xl-","2xl-"],C=Array.isArray(r)?r:[r];e(14,x=[0,0,0,0,0,0].map((X,I)=>`--${_[I]}grid-rows: var(--grid-${C?.[I]||C?.[C?.length-1]});`).join(" "))}},[d,s,f,w,A,j,g,E,v,h,F,W,Y,p,x,n,J,o,a,u,m,r,c,k,M,U,je,ye,Ae,Be,ze,Ge,Ee,Se,Ce,De]}class $e extends ce{constructor(t){super(),he(this,t,xe,pe,me,{container:17,show_label:0,label:1,root:18,root_url:19,value:2,grid_cols:20,grid_rows:21,height:3,preview:22,allow_preview:4,object_fit:5,show_share_button:6},null,[-1,-1])}}function el(l){let t,e,i,n;const o=[l[0]];let d={};for(let s=0;s{"loading_status"in h&&e(0,i=h.loading_status),"show_label"in h&&e(1,n=h.show_label),"label"in h&&e(2,o=h.label),"root"in h&&e(3,d=h.root),"root_url"in h&&e(4,s=h.root_url),"elem_id"in h&&e(5,a=h.elem_id),"elem_classes"in h&&e(6,u=h.elem_classes),"visible"in h&&e(7,f=h.visible),"value"in h&&e(8,m=h.value),"container"in h&&e(9,r=h.container),"scale"in h&&e(10,w=h.scale),"min_width"in h&&e(11,c=h.min_width),"grid_cols"in h&&e(12,A=h.grid_cols),"grid_rows"in h&&e(13,j=h.grid_rows),"height"in h&&e(14,g=h.height),"preview"in h&&e(15,z=h.preview),"allow_preview"in h&&e(16,k=h.allow_preview),"object_fit"in h&&e(17,E=h.object_fit),"show_share_button"in h&&e(18,M=h.show_share_button)},[i,n,o,d,s,a,u,f,m,r,w,c,A,j,g,z,k,E,M,v,U,J]}class il 
extends ce{constructor(t){super(),he(this,t,tl,ll,me,{loading_status:0,show_label:1,label:2,root:3,root_url:4,elem_id:5,elem_classes:6,visible:7,value:8,container:9,scale:10,min_width:11,grid_cols:12,grid_rows:13,height:14,preview:15,allow_preview:16,object_fit:17,show_share_button:18})}}const ml=il,gl=["static"],wl=l=>({type:{payload:"Array<{ name: string } | [{ name: string }, string]>"},description:{payload:"list of objects, with filename and optional caption,"}});export{ml as Component,wl as document,gl as modes};
-//# sourceMappingURL=index-3b0ff54c.js.map
diff --git a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/types/Settings.ts b/spaces/DaFujaTyping/hf-Chat-ui/src/lib/types/Settings.ts
deleted file mode 100644
index f747cdfe03ba3b7399233a256839bb0ad15a5d64..0000000000000000000000000000000000000000
--- a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/types/Settings.ts
+++ /dev/null
@@ -1,14 +0,0 @@
-import type { Timestamps } from "./Timestamps";
-
-export interface Settings extends Timestamps {
-  sessionId: string;
-
-  /**
-   * Note: Only conversations with this setting explicitly set to true should be shared.
-   *
-   * This setting is explicitly set to true when users accept the ethics modal.
-   * */
-  shareConversationsWithModelAuthors: boolean;
-  ethicsModalAcceptedAt: Date | null;
-  activeModel: string;
-}
diff --git a/spaces/Dantra1/CeliaSensei/utils.py b/spaces/Dantra1/CeliaSensei/utils.py
deleted file mode 100644
index ee4b01ddfbe8173965371b29f770f3e87615fe71..0000000000000000000000000000000000000000
--- a/spaces/Dantra1/CeliaSensei/utils.py
+++ /dev/null
@@ -1,225 +0,0 @@
-import os
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-import librosa
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None):
-  assert os.path.isfile(checkpoint_path)
-  checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
-  iteration = checkpoint_dict['iteration']
-  learning_rate = checkpoint_dict['learning_rate']
-  if optimizer is not None:
-    optimizer.load_state_dict(checkpoint_dict['optimizer'])
-  saved_state_dict = checkpoint_dict['model']
-  if hasattr(model, 'module'):
-    state_dict = model.module.state_dict()
-  else:
-    state_dict = model.state_dict()
-  new_state_dict = {}
-  for k, v in state_dict.items():
-    try:
-      new_state_dict[k] = saved_state_dict[k]
-    except KeyError:
-      logger.info("%s is not in the checkpoint" % k)
-      new_state_dict[k] = v
-  if hasattr(model, 'module'):
-    model.module.load_state_dict(new_state_dict)
-  else:
-    model.load_state_dict(new_state_dict)
-  logger.info("Loaded checkpoint '{}' (iteration {})".format(
-    checkpoint_path, iteration))
-  return model, optimizer, learning_rate, iteration
-
-
-def plot_spectrogram_to_numpy(spectrogram):
-  global MATPLOTLIB_FLAG
-  if not MATPLOTLIB_FLAG:
-    import matplotlib
-    matplotlib.use("Agg")
-    MATPLOTLIB_FLAG = True
-    mpl_logger = logging.getLogger('matplotlib')
-    mpl_logger.setLevel(logging.WARNING)
-  import matplotlib.pylab as plt
-  import numpy as np
-
-  fig, ax = plt.subplots(figsize=(10,2))
-  im = ax.imshow(spectrogram, aspect="auto", origin="lower",
-                 interpolation='none')
-  plt.colorbar(im, ax=ax)
-  plt.xlabel("Frames")
-  plt.ylabel("Channels")
-  plt.tight_layout()
-
-  fig.canvas.draw()
-  data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
-  data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
-  plt.close()
-  return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
-  global MATPLOTLIB_FLAG
-  if not MATPLOTLIB_FLAG:
-    import matplotlib
-    matplotlib.use("Agg")
-    MATPLOTLIB_FLAG = True
-    mpl_logger = logging.getLogger('matplotlib')
-    mpl_logger.setLevel(logging.WARNING)
-  import matplotlib.pylab as plt
-  import numpy as np
-
-  fig, ax = plt.subplots(figsize=(6, 4))
-  im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
-                 interpolation='none')
-  fig.colorbar(im, ax=ax)
-  xlabel = 'Decoder timestep'
-  if info is not None:
-    xlabel += '\n\n' + info
-  plt.xlabel(xlabel)
-  plt.ylabel('Encoder timestep')
-  plt.tight_layout()
-
-  fig.canvas.draw()
-  data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
-  data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
-  plt.close()
-  return data
-
-
-def load_audio_to_torch(full_path, target_sampling_rate):
-  audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True)
-  return torch.FloatTensor(audio.astype(np.float32))
-
-
-def load_filepaths_and_text(filename, split="|"):
-  with open(filename, encoding='utf-8') as f:
-    filepaths_and_text = [line.strip().split(split) for line in f]
-  return filepaths_and_text
-
-
-def get_hparams(init=True):
-  parser = argparse.ArgumentParser()
-  parser.add_argument('-c', '--config', type=str, default="./configs/base.json",
-                      help='JSON file for configuration')
-  parser.add_argument('-m', '--model', type=str, required=True,
-                      help='Model name')
-
-  args = parser.parse_args()
-  model_dir = os.path.join("./logs", args.model)
-
-  if not os.path.exists(model_dir):
-    os.makedirs(model_dir)
-
-  config_path = args.config
-  config_save_path = os.path.join(model_dir, "config.json")
-  if init:
-    with open(config_path, "r") as f:
-      data = f.read()
-    with open(config_save_path, "w") as f:
-      f.write(data)
-  else:
-    with open(config_save_path, "r") as f:
-      data = f.read()
-  config = json.loads(data)
-
-  hparams = HParams(**config)
-  hparams.model_dir = model_dir
-  return hparams
-
-
-def get_hparams_from_dir(model_dir):
-  config_save_path = os.path.join(model_dir, "config.json")
-  with open(config_save_path, "r") as f:
-    data = f.read()
-  config = json.loads(data)
-
-  hparams = HParams(**config)
-  hparams.model_dir = model_dir
-  return hparams
-
-
-def get_hparams_from_file(config_path):
-  with open(config_path, "r") as f:
-    data = f.read()
-  config = json.loads(data)
-
-  hparams = HParams(**config)
-  return hparams
-
-
-def check_git_hash(model_dir):
-  source_dir = os.path.dirname(os.path.realpath(__file__))
-  if not os.path.exists(os.path.join(source_dir, ".git")):
-    logger.warning("{} is not a git repository, therefore hash value comparison will be ignored.".format(
-      source_dir
-    ))
-    return
-
-  cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
-  path = os.path.join(model_dir, "githash")
-  if os.path.exists(path):
-    saved_hash = open(path).read()
-    if saved_hash != cur_hash:
-      logger.warning("git hash values are different. {}(saved) != {}(current)".format(
-        saved_hash[:8], cur_hash[:8]))
-  else:
-    open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
-  global logger
-  logger = logging.getLogger(os.path.basename(model_dir))
-  logger.setLevel(logging.DEBUG)
-
-  formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
-  if not os.path.exists(model_dir):
-    os.makedirs(model_dir)
-  h = logging.FileHandler(os.path.join(model_dir, filename))
-  h.setLevel(logging.DEBUG)
-  h.setFormatter(formatter)
-  logger.addHandler(h)
-  return logger
-
-
-class HParams():
-  def __init__(self, **kwargs):
-    for k, v in kwargs.items():
-      if isinstance(v, dict):
-        v = HParams(**v)
-      self[k] = v
-
-  def keys(self):
-    return self.__dict__.keys()
-
-  def items(self):
-    return self.__dict__.items()
-
-  def values(self):
-    return self.__dict__.values()
-
-  def __len__(self):
-    return len(self.__dict__)
-
-  def __getitem__(self, key):
-    return getattr(self, key)
-
-  def __setitem__(self, key, value):
-    return setattr(self, key, value)
-
-  def __contains__(self, key):
-    return key in self.__dict__
-
-  def __repr__(self):
-    return self.__dict__.__repr__()
diff --git a/spaces/Darwin2023/darwin/README.md b/spaces/Darwin2023/darwin/README.md
deleted file mode 100644
index dc0fa096d65ce4d2f53c05f44ca34746b5411464..0000000000000000000000000000000000000000
--- a/spaces/Darwin2023/darwin/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Darwin
-emoji: 🐨
-colorFrom: red
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.27.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DmitriiKhizbullin/camel-data-explorer/apps/data_explorer/downloader.py b/spaces/DmitriiKhizbullin/camel-data-explorer/apps/data_explorer/downloader.py
deleted file mode 100644
index 517dda2e412a1a148eb40e89497d3d43b9319594..0000000000000000000000000000000000000000
--- a/spaces/DmitriiKhizbullin/camel-data-explorer/apps/data_explorer/downloader.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import os
-import urllib.request
-
-from huggingface_hub import hf_hub_download
-
-REPO_ROOT = os.path.realpath(
-    os.path.join(os.path.dirname(os.path.abspath(__file__)), "../.."))
-
-
-def download_data():
-
-    print("Downloading...")
-
-    data_dir = os.path.join(REPO_ROOT, "datasets/")
-
-    os.makedirs(data_dir, exist_ok=True)
-
-    try:
-        hf_hub_download(repo_id="camel-ai/ai_society", repo_type="dataset",
-                        filename="ai_society_chat.zip", local_dir=data_dir,
-                        local_dir_use_symlinks=False)
-
-        hf_hub_download(repo_id="camel-ai/code", repo_type="dataset",
-                        filename="code_chat.zip", local_dir=data_dir,
-                        local_dir_use_symlinks=False)
-    except Exception:
-        for name in ("ai_society_chat.zip", "code_chat.zip"):
-            data_url = ("https://storage.googleapis.com/"
-                        f"camel-bucket/datasets/private/{name}")
-            file_path = os.path.join(data_dir, os.path.split(data_url)[1])
-            urllib.request.urlretrieve(data_url, file_path)
-
-    data_url = ("https://storage.googleapis.com/"
-                "camel-bucket/datasets/private/misalignment.zip")
-    file_path = os.path.join(data_dir, os.path.split(data_url)[1])
-    urllib.request.urlretrieve(data_url, file_path)
-
-    print("Download done")
-
-
-if __name__ == "__main__":
-    download_data()
diff --git a/spaces/Dragonnext/Drago-Proxy/README.md b/spaces/Dragonnext/Drago-Proxy/README.md
deleted file mode 100644
index cf4a7b4bc59b6023cd6d4c1734fcd5b646ac1c9e..0000000000000000000000000000000000000000
--- a/spaces/Dragonnext/Drago-Proxy/README.md
+++ 
/dev/null @@ -1,10 +0,0 @@ ---- -title: Drago OAI Proxy -emoji: 🐲 -colorFrom: green -colorTo: red -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_m_mix_det.py b/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_m_mix_det.py deleted file mode 100644 index fccb14597eeacdab5d393ae58a2c31bf17d2f2b8..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_m_mix_det.py +++ /dev/null @@ -1,138 +0,0 @@ -# encoding: utf-8 -import os -import random -import torch -import torch.nn as nn -import torch.distributed as dist - -from yolox.exp import Exp as MyExp -from yolox.data import get_yolox_datadir - -class Exp(MyExp): - def __init__(self): - super(Exp, self).__init__() - self.num_classes = 1 - self.depth = 0.67 - self.width = 0.75 - self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0] - self.train_ann = "train.json" - self.val_ann = "train.json" - self.input_size = (800, 1440) - self.test_size = (800, 1440) - self.random_size = (18, 32) - self.max_epoch = 80 - self.print_interval = 20 - self.eval_interval = 5 - self.test_conf = 0.001 - self.nmsthre = 0.7 - self.no_aug_epochs = 10 - self.basic_lr_per_img = 0.001 / 64.0 - self.warmup_epochs = 1 - - def get_data_loader(self, batch_size, is_distributed, no_aug=False): - from yolox.data import ( - MOTDataset, - TrainTransform, - YoloBatchSampler, - DataLoader, - InfiniteSampler, - MosaicDetection, - ) - - dataset = MOTDataset( - data_dir=os.path.join(get_yolox_datadir(), "mix_det"), - json_file=self.train_ann, - name='', - img_size=self.input_size, - preproc=TrainTransform( - rgb_means=(0.485, 0.456, 0.406), - std=(0.229, 0.224, 0.225), - max_labels=500, - ), - ) - - dataset = MosaicDetection( - dataset, - mosaic=not no_aug, - img_size=self.input_size, - preproc=TrainTransform( - rgb_means=(0.485, 0.456, 0.406), - std=(0.229, 0.224, 0.225), - max_labels=1000, - ), - degrees=self.degrees, - translate=self.translate, - scale=self.scale, - shear=self.shear, - perspective=self.perspective, - enable_mixup=self.enable_mixup, - ) - - self.dataset = dataset - - if is_distributed: - batch_size = batch_size // dist.get_world_size() - - sampler = InfiniteSampler( - len(self.dataset), seed=self.seed if self.seed else 0 - ) - - batch_sampler = YoloBatchSampler( - sampler=sampler, - batch_size=batch_size, - drop_last=False, - input_dimension=self.input_size, - mosaic=not no_aug, - ) - - dataloader_kwargs = {"num_workers": self.data_num_workers, "pin_memory": True} - dataloader_kwargs["batch_sampler"] = batch_sampler - train_loader = DataLoader(self.dataset, **dataloader_kwargs) - - return train_loader - - def get_eval_loader(self, batch_size, is_distributed, testdev=False): - from yolox.data import MOTDataset, ValTransform - - valdataset = MOTDataset( - data_dir=os.path.join(get_yolox_datadir(), "mot"), - json_file=self.val_ann, - img_size=self.test_size, - name='train', - preproc=ValTransform( - rgb_means=(0.485, 0.456, 0.406), - std=(0.229, 0.224, 0.225), - ), - ) - - if is_distributed: - batch_size = batch_size // dist.get_world_size() - sampler = torch.utils.data.distributed.DistributedSampler( - valdataset, shuffle=False - ) - else: - sampler = torch.utils.data.SequentialSampler(valdataset) - - dataloader_kwargs = { - "num_workers": self.data_num_workers, - "pin_memory": True, - "sampler": sampler, - } - dataloader_kwargs["batch_size"] = batch_size - 
val_loader = torch.utils.data.DataLoader(valdataset, **dataloader_kwargs) - - return val_loader - - def get_evaluator(self, batch_size, is_distributed, testdev=False): - from yolox.evaluators import COCOEvaluator - - val_loader = self.get_eval_loader(batch_size, is_distributed, testdev=testdev) - evaluator = COCOEvaluator( - dataloader=val_loader, - img_size=self.test_size, - confthre=self.test_conf, - nmsthre=self.nmsthre, - num_classes=self.num_classes, - testdev=testdev, - ) - return evaluator diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py deleted file mode 100644 index 167d4cb2198863cf43e93440f7e63c5342fc7605..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -from . import layers_123821KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 32) - self.stg1_high_band_net = BaseASPPNet(2, 32) - - self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(16, 32) - - self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(32, 64) - - self.out = nn.Conv2d(64, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(32, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(32, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - 
mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/EleutherAI/polyglot-ko-1.3b/README.md b/spaces/EleutherAI/polyglot-ko-1.3b/README.md deleted file mode 100644 index 683d4e918d6f3752e835e7d48b9196ed81e73f79..0000000000000000000000000000000000000000 --- a/spaces/EleutherAI/polyglot-ko-1.3b/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Polyglot Korean 1.3B -emoji: 😻 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.3 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Falah/female/README.md b/spaces/Falah/female/README.md deleted file mode 100644 index 06db99b005f1277a9a041b862b4f96d13d2d5067..0000000000000000000000000000000000000000 --- a/spaces/Falah/female/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Female -emoji: 🚀 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git "a/spaces/Fengbinbin/gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" "b/spaces/Fengbinbin/gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" deleted file mode 100644 index c299e59d3894b7ac2d33df1502746adaef4a47b8..0000000000000000000000000000000000000000 --- "a/spaces/Fengbinbin/gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" +++ /dev/null @@ -1,175 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -fast_debug = False - -class PaperFileGroup(): - def __init__(self): - self.file_paths = [] - self.file_contents = [] - self.sp_file_contents = [] - self.sp_file_index = [] - self.sp_file_tag = [] - - # count_token - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - self.get_token_num = get_token_num - - def run_file_split(self, max_token_limit=1900): - """ - 将长文本分离开来 - """ - for index, file_content in enumerate(self.file_contents): - if self.get_token_num(file_content) < max_token_limit: - self.sp_file_contents.append(file_content) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index]) - else: - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit) - for j, segment in enumerate(segments): - self.sp_file_contents.append(segment) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex") - - print('Segmentation: done') - -def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'): - import time, os, re - from .crazy_utils import 
request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - - - # <-------- 读取Latex文件,删除其中的所有注释 ----------> - pfg = PaperFileGroup() - - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - # 定义注释的正则表达式 - comment_pattern = r'%.*' - # 使用正则表达式查找注释,并替换为空字符串 - clean_tex_content = re.sub(comment_pattern, '', file_content) - # 记录删除注释后的文本 - pfg.file_paths.append(fp) - pfg.file_contents.append(clean_tex_content) - - # <-------- 拆分过长的latex文件 ----------> - pfg.run_file_split(max_token_limit=1024) - n_split = len(pfg.sp_file_contents) - - # <-------- 抽取摘要 ----------> - # if language == 'en': - # abs_extract_inputs = f"Please write an abstract for this paper" - - # # 单线,获取文章meta信息 - # paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive( - # inputs=abs_extract_inputs, - # inputs_show_user=f"正在抽取摘要信息。", - # llm_kwargs=llm_kwargs, - # chatbot=chatbot, history=[], - # sys_prompt="Your job is to collect information from materials。", - # ) - - # <-------- 多线程润色开始 ----------> - if language == 'en': - inputs_array = ["Below is a section from an academic paper, polish this section to meet the academic standard, improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"Polish {f}" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)] - elif language == 'zh': - inputs_array = [f"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"润色 {f}" for f in pfg.sp_file_tag] - sys_prompt_array=["你是一位专业的中文学术论文作家。" for _ in range(n_split)] - - - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array=inputs_array, - inputs_show_user_array=inputs_show_user_array, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history_array=[[""] for _ in range(n_split)], - sys_prompt_array=sys_prompt_array, - # max_workers=5, # 并行任务数量限制,最多同时执行5个,其他的排队等待 - scroller_max_len = 80 - ) - - # <-------- 整理结果,退出 ----------> - create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md" - res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name) - history = gpt_response_collection - chatbot.append((f"{fp}完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - -@CatchException -def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', 
recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en') - - - - - - -@CatchException -def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh') \ No newline at end of file diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/hubert/hubert_model.py b/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/hubert/hubert_model.py deleted file mode 100644 index 7fb642d89b07ca60792debab18e3454f52d8f357..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/hubert/hubert_model.py +++ /dev/null @@ -1,222 +0,0 @@ -import copy -import random -from typing import Optional, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as t_func -from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present - - -class Hubert(nn.Module): - def __init__(self, num_label_embeddings: int = 100, mask: bool = True): - super().__init__() - self._mask = mask - self.feature_extractor = FeatureExtractor() - self.feature_projection = FeatureProjection() - self.positional_embedding = PositionalConvEmbedding() - self.norm = nn.LayerNorm(768) - self.dropout = nn.Dropout(0.1) - self.encoder = TransformerEncoder( - nn.TransformerEncoderLayer( - 768, 12, 3072, activation="gelu", batch_first=True - ), - 12, - ) - self.proj = nn.Linear(768, 256) - - self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_()) - self.label_embedding = nn.Embedding(num_label_embeddings, 256) - - def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - mask = None - if self.training and self._mask: - mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2) - x[mask] = self.masked_spec_embed.to(x.dtype) - return x, mask - - def encode( - self, x: torch.Tensor, layer: Optional[int] = None - ) -> Tuple[torch.Tensor, torch.Tensor]: - x = self.feature_extractor(x) - x = self.feature_projection(x.transpose(1, 2)) - x, mask = self.mask(x) - x = x + self.positional_embedding(x) - x = self.dropout(self.norm(x)) - x = self.encoder(x, output_layer=layer) - return x, mask - - def logits(self, x: torch.Tensor) -> torch.Tensor: - logits = 
torch.cosine_similarity( - x.unsqueeze(2), - self.label_embedding.weight.unsqueeze(0).unsqueeze(0), - dim=-1, - ) - return logits / 0.1 - - def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - x, mask = self.encode(x) - x = self.proj(x) - logits = self.logits(x) - return logits, mask - - -class HubertSoft(Hubert): - def __init__(self): - super().__init__() - - @torch.inference_mode() - def units(self, wav: torch.Tensor) -> torch.Tensor: - wav = t_func.pad(wav, ((400 - 320) // 2, (400 - 320) // 2)) - x, _ = self.encode(wav) - return self.proj(x) - - -class FeatureExtractor(nn.Module): - def __init__(self): - super().__init__() - self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False) - self.norm0 = nn.GroupNorm(512, 512) - self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False) - self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = t_func.gelu(self.norm0(self.conv0(x))) - x = t_func.gelu(self.conv1(x)) - x = t_func.gelu(self.conv2(x)) - x = t_func.gelu(self.conv3(x)) - x = t_func.gelu(self.conv4(x)) - x = t_func.gelu(self.conv5(x)) - x = t_func.gelu(self.conv6(x)) - return x - - -class FeatureProjection(nn.Module): - def __init__(self): - super().__init__() - self.norm = nn.LayerNorm(512) - self.projection = nn.Linear(512, 768) - self.dropout = nn.Dropout(0.1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.norm(x) - x = self.projection(x) - x = self.dropout(x) - return x - - -class PositionalConvEmbedding(nn.Module): - def __init__(self): - super().__init__() - self.conv = nn.Conv1d( - 768, - 768, - kernel_size=128, - padding=128 // 2, - groups=16, - ) - self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.conv(x.transpose(1, 2)) - x = t_func.gelu(x[:, :, :-1]) - return x.transpose(1, 2) - - -class TransformerEncoder(nn.Module): - def __init__( - self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int - ) -> None: - super(TransformerEncoder, self).__init__() - self.layers = nn.ModuleList( - [copy.deepcopy(encoder_layer) for _ in range(num_layers)] - ) - self.num_layers = num_layers - - def forward( - self, - src: torch.Tensor, - mask: torch.Tensor = None, - src_key_padding_mask: torch.Tensor = None, - output_layer: Optional[int] = None, - ) -> torch.Tensor: - output = src - for layer in self.layers[:output_layer]: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask - ) - return output - - -def _compute_mask( - shape: Tuple[int, int], - mask_prob: float, - mask_length: int, - device: torch.device, - min_masks: int = 0, -) -> torch.Tensor: - batch_size, sequence_length = shape - - if mask_length < 1: - raise ValueError("`mask_length` has to be bigger than 0.") - - if mask_length > sequence_length: - raise ValueError( - f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`" - ) - - # compute number of masked spans in batch - num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random()) - num_masked_spans = max(num_masked_spans, min_masks) - - # make sure num masked indices <= sequence_length - if num_masked_spans * mask_length > sequence_length: - num_masked_spans = 
sequence_length // mask_length
-
-    # SpecAugment mask to fill
-    mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool)
-
-    # uniform distribution to sample from, make sure that offset samples are < sequence_length
-    uniform_dist = torch.ones(
-        (batch_size, sequence_length - (mask_length - 1)), device=device
-    )
-
-    # get random indices to mask
-    mask_indices = torch.multinomial(uniform_dist, num_masked_spans)
-
-    # expand masked indices to masked spans
-    mask_indices = (
-        mask_indices.unsqueeze(dim=-1)
-        .expand((batch_size, num_masked_spans, mask_length))
-        .reshape(batch_size, num_masked_spans * mask_length)
-    )
-    offsets = (
-        torch.arange(mask_length, device=device)[None, None, :]
-        .expand((batch_size, num_masked_spans, mask_length))
-        .reshape(batch_size, num_masked_spans * mask_length)
-    )
-    mask_idxs = mask_indices + offsets
-
-    # scatter indices to mask
-    mask = mask.scatter(1, mask_idxs, True)
-
-    return mask
-
-
-def hubert_soft(
-    path: str,
-) -> HubertSoft:
-    r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`.
-    Args:
-        path (str): path of a pretrained model
-    """
-    hubert = HubertSoft()
-    checkpoint = torch.load(path)
-    consume_prefix_in_state_dict_if_present(checkpoint, "module.")
-    hubert.load_state_dict(checkpoint)
-    hubert.eval()
-    return hubert
diff --git a/spaces/GT4SD/keyword_bert/utils.py b/spaces/GT4SD/keyword_bert/utils.py
deleted file mode 100644
index 34a817843a959d6ef46ae527efd08788f55a3196..0000000000000000000000000000000000000000
--- a/spaces/GT4SD/keyword_bert/utils.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import logging
-from collections import defaultdict
-from typing import List
-
-import mols2grid
-import pandas as pd
-
-logger = logging.getLogger(__name__)
-logger.addHandler(logging.NullHandler())
-
-
-def draw_grid_generate(
-    samples: List[str],
-    seeds: List[str] = [],
-    n_cols: int = 3,
-    size=(140, 200),
-) -> str:
-    """
-    Uses mols2grid to draw an HTML grid for the generated molecules
-
-    Args:
-        samples: The generated samples.
-        n_cols: Number of columns in grid. Defaults to 3.
-        size: Size of molecule in grid. Defaults to (140, 200).
-
-    Returns:
-        HTML to display
-    """
-
-    result = defaultdict(list)
-    result.update(
-        {
-            "SMILES": seeds + samples,
-            "Name": [f"Seed_{i}" for i in range(len(seeds))]
-            + [f"Generated_{i}" for i in range(len(samples))],
-        },
-    )
-
-    result_df = pd.DataFrame(result)
-    obj = mols2grid.display(
-        result_df,
-        tooltip=list(result.keys()),
-        height=1100,
-        n_cols=n_cols,
-        name="Results",
-        size=size,
-    )
-    return obj.data
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/put_blocks_between_zones.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/put_blocks_between_zones.py
deleted file mode 100644
index a644f66a71bd1db131174e3fd4741b2efc6e67b1..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/put_blocks_between_zones.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import numpy as np
-from cliport.tasks.task import Task
-from cliport.utils import utils
-import copy
-
-class PutBlocksBetweenZones(Task):
-    """Arrange four differently colored blocks (red, blue, green, and yellow) between two designated zones on the tabletop."""
-
-    def __init__(self):
-        super().__init__()
-        self.max_steps = 20
-        self.lang_template = "Arrange the blocks between the zones in the order: red, blue, green, yellow"
-        self.task_completed_desc = "done arranging blocks."
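-        # `lang_template` is the natural-language instruction handed to the
-        # language-conditioned policy; `task_completed_desc` is the message printed
-        # once all of the task's goals have been satisfied.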
- self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Add zones. - zone_size = (0.12, 0.12, 0) - zone_urdf = 'zone/zone.urdf' - zone1_pose = self.get_random_pose(env, zone_size) - zone2_pose = copy.deepcopy(zone1_pose) - zone2_pose = (utils.apply(zone1_pose, (0,0.1,0)), zone2_pose[1]) - env.add_object(zone_urdf, zone1_pose, 'fixed') - env.add_object(zone_urdf, zone2_pose, 'fixed') - - # Block colors. - colors = [ - utils.COLORS['red'], utils.COLORS['blue'], utils.COLORS['green'], - utils.COLORS['yellow'] - ] - - # Add blocks. - block_size = (0.04, 0.04, 0.04) - block_urdf = 'block/block.urdf' - blocks = [] - for i in range(4): - block_pose = self.get_random_pose(env, block_size) - block_id = env.add_object(block_urdf, block_pose, color=colors[i]) - blocks.append(block_id) - - # Goal: blocks are arranged between the zones in the order: red, blue, green, yellow. - # IMPORTANT Associate placement locations for goals. - place_pos = [(0, -0.05, 0.03), (0, 0, 0.03), - (0, 0.05, 0.03), (0, 0.1, 0.03)] - targs = [(utils.apply(zone1_pose, i), zone1_pose[1]) for i in place_pos] - - # Add goal - self.add_goal(objs=blocks, matches=np.ones((4, 4)), targ_poses=targs, replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1, symmetries=[np.pi/2]*4, language_goal=self.lang_template) diff --git a/spaces/Gradio-Blocks/Multilingual-Aspect-Based-Sentiment-Analysis/README.md b/spaces/Gradio-Blocks/Multilingual-Aspect-Based-Sentiment-Analysis/README.md deleted file mode 100644 index 1e90e548d5a7be8371f0549028a81f4389619615..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/Multilingual-Aspect-Based-Sentiment-Analysis/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Multilingual Aspect Based Sentiment Analysis -emoji: 🌖 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.30.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Gradio-Blocks/Speech-to-text/README.md b/spaces/Gradio-Blocks/Speech-to-text/README.md deleted file mode 100644 index 5c22c31a892079699128cbf65b888407b912412d..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/Speech-to-text/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Audio Text -emoji: 🐠 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.0.9 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/README.md deleted file mode 100644 index cc610c0c9e936a5ae4659ceda691c6db6d387296..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/README.md +++ /dev/null @@ -1,24 +0,0 @@ - -# Install dependency -```bash -pip install -r requirement.txt -``` - -# Download the data set -```bash -export WORKDIR_ROOT= - -``` -The downloaded data will be at $WORKDIR_ROOT/ML50 - -# preprocess the data -Install SPM [here](https://github.com/google/sentencepiece) -```bash -export WORKDIR_ROOT= -export SPM_PATH= -``` -* $WORKDIR_ROOT/ML50/raw: extracted raw data -* $WORKDIR_ROOT/ML50/dedup: dedup data -* $WORKDIR_ROOT/ML50/clean: data with valid and test sentences removed from the dedup data - - diff --git 
a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/dynamicconv_layer/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/dynamicconv_layer/__init__.py
deleted file mode 100644
index 22dc6f403d2a0ecdb1b9e7e69ed96bd560e93b2c..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/dynamicconv_layer/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .dynamicconv_layer import DynamicconvLayer  # noqa
diff --git a/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/normalize/indic_normalize.py b/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/normalize/indic_normalize.py
deleted file mode 100644
index fcd2f4cddc17e5967a4992afb3ec56488c489e1d..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/normalize/indic_normalize.py
+++ /dev/null
@@ -1,984 +0,0 @@
-# -*- coding: utf-8 -*-
-
-#
-# Copyright (c) 2013-present, Anoop Kunchukuttan
-# All rights reserved.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-#
-
-# Program for normalization of text written in Unicode, mainly geared towards Indic scripts
-#
-# @author Anoop Kunchukuttan
-#
-
-import sys, codecs, string, itertools, re
-from indicnlp import langinfo
-
-
-class NormalizerI(object):
-    """
-    The normalizer classes do the following:
-
-    * Some characters have multiple Unicode codepoints; the normalizer chooses a single standard representation
-    * Some control characters are deleted
-    * While typing with a Latin keyboard, certain typical mistakes occur, which the module corrects
-
-    Base class for normalizer. Performs some common normalization, which includes:
-
-    * Byte order mark, word joiner, etc. removal
-    * ZERO_WIDTH_NON_JOINER and ZERO_WIDTH_JOINER removal
-    * ZERO_WIDTH_SPACE and NO_BREAK_SPACE replaced by spaces
-
-    Script specific normalizers should derive from this class and override the normalize() method.
-    They can call the super class's normalize() method to avail of the common normalization.
-
-    """
-
-    BYTE_ORDER_MARK='\uFEFF'
-    BYTE_ORDER_MARK_2='\uFFFE'
-    WORD_JOINER='\u2060'
-    SOFT_HYPHEN='\u00AD'
-
-    ZERO_WIDTH_SPACE='\u200B'
-    NO_BREAK_SPACE='\u00A0'
-
-    ZERO_WIDTH_NON_JOINER='\u200C'
-    ZERO_WIDTH_JOINER='\u200D'
-
-    def _normalize_punctuations(self, text):
-        """
-        Normalize punctuations.
-        Applies many of the punctuation normalizations that are part of MosesNormalizer
-        from sacremoses.
-        """
-        text=text.replace(NormalizerI.BYTE_ORDER_MARK,'')
-        text=text.replace('„', r'"')
-        text=text.replace('“', r'"')
-        text=text.replace('”', r'"')
-        text=text.replace('–', r'-')
-        text=text.replace('—', r' - ')
-        text=text.replace('´', r"'")
-        text=text.replace('‘', r"'")
-        text=text.replace('‚', r"'")
-        text=text.replace('’', r"'")
-        text=text.replace("''", r'"')
-        text=text.replace('´´', r'"')
-        text=text.replace('…', r'...')
-
-        return text
-
-    def normalize(self,text):
-        pass
-
-
-class BaseNormalizer(NormalizerI):
-
-    def __init__(self,lang,
-            remove_nuktas=False,
-            nasals_mode='do_nothing',
-            do_normalize_chandras=False,
-            do_normalize_vowel_ending=False):
-
-        self.lang=lang
-        self.remove_nuktas=remove_nuktas
-        self.nasals_mode=nasals_mode
-        self.do_normalize_chandras=do_normalize_chandras
-        self.do_normalize_vowel_ending=do_normalize_vowel_ending
-
-        self._init_normalize_chandras()
-        self._init_normalize_nasals()
-        self._init_normalize_vowel_ending()
-        #self._init_visarga_correction()
-
-    def _init_normalize_vowel_ending(self):
-
-        if self.lang in langinfo.IE_LANGUAGES:
-            self.fn_vowel_ending=self._normalize_word_vowel_ending_ie
-        elif self.lang in langinfo.DRAVIDIAN_LANGUAGES:
-            self.fn_vowel_ending=self._normalize_word_vowel_ending_dravidian
-        else:
-            self.fn_vowel_ending=lambda x: x
-
-    def _init_normalize_chandras(self):
-
-        substitution_offsets =\
-            [
-                [0x0d , 0x0f], # chandra e, independent
-                [0x11 , 0x13], # chandra o, independent
-                [0x45 , 0x47], # chandra e, dependent
-                [0x49 , 0x4b], # chandra o, dependent
-                # [0x72 , 0x0f], # mr: chandra e, independent
-
-                [0x00 , 0x02], # chandrabindu
-                [0x01 , 0x02], # chandrabindu
-            ]
-
-        self.chandra_substitutions = [
-                (langinfo.offset_to_char(x[0],self.lang), langinfo.offset_to_char(x[1],self.lang))
-                    for x in substitution_offsets ]
-
-    def _normalize_chandras(self,text):
-        for match, repl in self.chandra_substitutions:
-            text=text.replace(match,repl)
-        return text
-
-    def _init_to_anusvaara_strict(self):
-        """
-        `r1_nasal=re.compile(r'\\u0919\\u094D([\\u0915-\\u0918])')`
-        """
-
-        pat_signatures=\
-            [
-                [0x19,0x15,0x18],
-                [0x1e,0x1a,0x1d],
-                [0x23,0x1f,0x22],
-                [0x28,0x24,0x27],
-                [0x29,0x24,0x27],
-                [0x2e,0x2a,0x2d],
-            ]
-
-        halant_offset=0x4d
-        anusvaara_offset=0x02
-
-        pats=[]
-
-        for pat_signature in pat_signatures:
-            pat=re.compile(r'{nasal}{halant}([{start_r}-{end_r}])'.format(
-                nasal=langinfo.offset_to_char(pat_signature[0],self.lang),
-                halant=langinfo.offset_to_char(halant_offset,self.lang),
-                start_r=langinfo.offset_to_char(pat_signature[1],self.lang),
-                end_r=langinfo.offset_to_char(pat_signature[2],self.lang),
-            ))
-            pats.append(pat)
-
-        repl_string='{anusvaara}\\1'.format(anusvaara=langinfo.offset_to_char(anusvaara_offset,self.lang))
-
-        self.pats_repls=(pats,repl_string)
-
-    def _to_anusvaara_strict(self,text):
-
-        pats, repl_string = self.pats_repls
-        for pat in pats:
-            text=pat.sub(repl_string,text)
-
-        return text
-
-    def _init_to_anusvaara_relaxed(self):
-        """
-        `r1_nasal=re.compile(r'\\u0919\\u094D([\\u0915-\\u0918])')`
-        """
-
-        nasals_list=[0x19,0x1e,0x23,0x28,0x29,0x2e]
-        nasals_list_str=','.join([langinfo.offset_to_char(x,self.lang) for x in nasals_list])
-
-        halant_offset=0x4d
-        anusvaara_offset=0x02
-
-        pat=re.compile(r'[{nasals_list_str}]{halant}'.format(
-            nasals_list_str=nasals_list_str,
-            halant=langinfo.offset_to_char(halant_offset,self.lang),
-        ))
-
-        repl_string='{anusvaara}'.format(anusvaara=langinfo.offset_to_char(anusvaara_offset,self.lang))
-
-        self.pats_repls = (pat,repl_string)
-
-    def _to_anusvaara_relaxed(self,text):
-        pat, repl_string = self.pats_repls
-        return pat.sub(repl_string,text)
-
-
-    def _init_to_nasal_consonants(self):
-        """
-        `r1_nasal=re.compile(r'\\u0919\\u094D([\\u0915-\\u0918])')`
-        """
-
-        pat_signatures=\
-            [
-                [0x19,0x15,0x18],
-                [0x1e,0x1a,0x1d],
-                [0x23,0x1f,0x22],
-                [0x28,0x24,0x27],
-                [0x29,0x24,0x27],
-                [0x2e,0x2a,0x2d],
-            ]
-
-        halant_offset=0x4d
-        anusvaara_offset=0x02
-
-        pats=[]
-        repl_strings=[]
-
-        for pat_signature in pat_signatures:
-            pat=re.compile(r'{anusvaara}([{start_r}-{end_r}])'.format(
-                anusvaara=langinfo.offset_to_char(anusvaara_offset,self.lang),
-                start_r=langinfo.offset_to_char(pat_signature[1],self.lang),
-                end_r=langinfo.offset_to_char(pat_signature[2],self.lang),
-            ))
-            pats.append(pat)
-            repl_string='{nasal}{halant}\\1'.format(
-                nasal=langinfo.offset_to_char(pat_signature[0],self.lang),
-                halant=langinfo.offset_to_char(halant_offset,self.lang),
-                )
-            repl_strings.append(repl_string)
-
-        self.pats_repls=list(zip(pats,repl_strings))
-
-    def _to_nasal_consonants(self,text):
-
-        for pat, repl in self.pats_repls:
-            text=pat.sub(repl,text)
-
-        return text
-
-    def _init_normalize_nasals(self):
-
-        if self.nasals_mode == 'to_anusvaara_strict':
-            self._init_to_anusvaara_strict()
-        elif self.nasals_mode == 'to_anusvaara_relaxed':
-            self._init_to_anusvaara_relaxed()
-        elif self.nasals_mode == 'to_nasal_consonants':
-            self._init_to_nasal_consonants()
-
-    def _normalize_nasals(self,text):
-        if self.nasals_mode == 'to_anusvaara_strict':
-            return self._to_anusvaara_strict(text)
-        elif self.nasals_mode == 'to_anusvaara_relaxed':
-            return self._to_anusvaara_relaxed(text)
-        elif self.nasals_mode == 'to_nasal_consonants':
-            return self._to_nasal_consonants(text)
-        else:
-            return text
-
-
-    def _normalize_word_vowel_ending_dravidian(self,word):
-        """
-        for Dravidian:
-        - consonant ending: add the 'aa' vowel sign (matra)
-        - halant ending: no change
-        - ends in the 'aa' matra: no change
-        """
-        if len(word)>0 and langinfo.is_consonant(word[-1],self.lang):
-            return word+langinfo.offset_to_char(0x3e,self.lang)
-        else:
-            return word
-
-    def _normalize_word_vowel_ending_ie(self,word):
-        """
-        for IE:
-        - consonant ending: add halant
-        - halant ending: no change
-        - ends in the 'aa' matra: no change
-        """
-        if len(word)>0 and langinfo.is_consonant(word[-1],self.lang):
-            return word+langinfo.offset_to_char(langinfo.HALANTA_OFFSET,self.lang)
-        else:
-            return word
-
-    def _normalize_vowel_ending(self,text):
-        return ' '.join([ self.fn_vowel_ending(w) for w in text.split(' ') ])
-
-    def normalize(self,text):
-        """
-        Common normalization shared by all scripts; script-specific normalizers extend this method.
-        """
-        text=text.replace(NormalizerI.BYTE_ORDER_MARK,'')
-        text=text.replace(NormalizerI.BYTE_ORDER_MARK_2,'')
-        text=text.replace(NormalizerI.WORD_JOINER,'')
-        text=text.replace(NormalizerI.SOFT_HYPHEN,'')
-
-        text=text.replace(NormalizerI.ZERO_WIDTH_SPACE,' ') # ??
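-        # ZWSP and NBSP are mapped to plain spaces (rather than removed) so that any
-        # word boundary they marked is still present after normalization.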
-        text=text.replace(NormalizerI.NO_BREAK_SPACE,' ')
-
-        text=text.replace(NormalizerI.ZERO_WIDTH_NON_JOINER, '')
-        text=text.replace(NormalizerI.ZERO_WIDTH_JOINER,'')
-
-        text=self._normalize_punctuations(text)
-
-        if self.do_normalize_chandras:
-            text=self._normalize_chandras(text)
-        text=self._normalize_nasals(text)
-        if self.do_normalize_vowel_ending:
-            text=self._normalize_vowel_ending(text)
-
-        return text
-
-
-    def get_char_stats(self,text):
-        print(len(re.findall(NormalizerI.BYTE_ORDER_MARK,text)))
-        print(len(re.findall(NormalizerI.BYTE_ORDER_MARK_2,text)))
-        print(len(re.findall(NormalizerI.WORD_JOINER,text)))
-        print(len(re.findall(NormalizerI.SOFT_HYPHEN,text)))
-
-        print(len(re.findall(NormalizerI.ZERO_WIDTH_SPACE,text) ))
-        print(len(re.findall(NormalizerI.NO_BREAK_SPACE,text)))
-
-        print(len(re.findall(NormalizerI.ZERO_WIDTH_NON_JOINER,text)))
-        print(len(re.findall(NormalizerI.ZERO_WIDTH_JOINER,text)))
-
-        #for mobj in re.finditer(NormalizerI.ZERO_WIDTH_NON_JOINER,text):
-        #    print text[mobj.start()-10:mobj.end()+10].replace('\n', ' ').replace(NormalizerI.ZERO_WIDTH_NON_JOINER,'').encode('utf-8')
-        #print hex(ord(text[mobj.end():mobj.end()+1]))
-
-    def correct_visarga(self,text,visarga_char,char_range):
-        return re.sub(r'([{char_range}]):'.format(char_range=char_range),'\\1'+visarga_char,text)
-
-
-
-class DevanagariNormalizer(BaseNormalizer):
-    """
-    Normalizer for the Devanagari script. In addition to basic normalization by the super class,
-
-    * Replaces the composite characters containing nuktas by their decomposed form
-    * replace pipe character '|' by poorna virama character
-    * replace colon ':' by visarga if the colon follows a character in this script
-
-    """
-
-    NUKTA='\u093C'
-
-    def __init__(self,lang='hi',remove_nuktas=False,nasals_mode='do_nothing',
-                    do_normalize_chandras=False,do_normalize_vowel_ending=False):
-        super(DevanagariNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending)
-
-    def normalize(self,text):
-
-        # common normalization for Indic scripts
-        text=super(DevanagariNormalizer,self).normalize(text)
-
-        # chandra a replacement for Marathi
-        text=text.replace('\u0972','\u090f')
-
-        # decomposing Nukta based composite characters
-        text=text.replace('\u0929','\u0928'+DevanagariNormalizer.NUKTA)
-        text=text.replace('\u0931','\u0930'+DevanagariNormalizer.NUKTA)
-        text=text.replace('\u0934','\u0933'+DevanagariNormalizer.NUKTA)
-        text=text.replace('\u0958','\u0915'+DevanagariNormalizer.NUKTA)
-        text=text.replace('\u0959','\u0916'+DevanagariNormalizer.NUKTA)
-        text=text.replace('\u095A','\u0917'+DevanagariNormalizer.NUKTA)
-        text=text.replace('\u095B','\u091C'+DevanagariNormalizer.NUKTA)
-        text=text.replace('\u095C','\u0921'+DevanagariNormalizer.NUKTA)
-        text=text.replace('\u095D','\u0922'+DevanagariNormalizer.NUKTA)
-        text=text.replace('\u095E','\u092B'+DevanagariNormalizer.NUKTA)
-        text=text.replace('\u095F','\u092F'+DevanagariNormalizer.NUKTA)
-
-        if self.remove_nuktas:
-            text=text.replace(DevanagariNormalizer.NUKTA,'')
-
-        # replace pipe character with poorna virama
-        text=text.replace('\u007c','\u0964')
-
-        # correct visarga
-        text=re.sub(r'([\u0900-\u097f]):','\\1\u0903',text)
-
-        return text
-
-    def get_char_stats(self,text):
-        super(DevanagariNormalizer,self).get_char_stats(text)
-
-        print((len(re.findall('\u0929',text))))
-        print((len(re.findall('\u0931',text))))
-        print((len(re.findall('\u0934',text))))
-        print((len(re.findall('\u0958',text))))
-        print((len(re.findall('\u0959',text))))
-        print((len(re.findall('\u095A',text))))
-        print((len(re.findall('\u095B',text))))
-        print((len(re.findall('\u095C',text))))
-        print((len(re.findall('\u095D',text))))
-        print((len(re.findall('\u095E',text))))
-        print((len(re.findall('\u095F',text))))
-
-        #print(len(re.findall(u'\u0928'+DevanagariNormalizer.NUKTA,text)))
-        #print(len(re.findall(u'\u0930'+DevanagariNormalizer.NUKTA,text)))
-        #print(len(re.findall(u'\u0933'+DevanagariNormalizer.NUKTA,text)))
-        #print(len(re.findall(u'\u0915'+DevanagariNormalizer.NUKTA,text)))
-        #print(len(re.findall(u'\u0916'+DevanagariNormalizer.NUKTA,text)))
-        #print(len(re.findall(u'\u0917'+DevanagariNormalizer.NUKTA,text)))
-        #print(len(re.findall(u'\u091C'+DevanagariNormalizer.NUKTA,text)))
-        #print(len(re.findall(u'\u0921'+DevanagariNormalizer.NUKTA,text)))
-        #print(len(re.findall(u'\u0922'+DevanagariNormalizer.NUKTA,text)))
-        #print(len(re.findall(u'\u092B'+DevanagariNormalizer.NUKTA,text)))
-        #print(len(re.findall(u'\u092F'+DevanagariNormalizer.NUKTA,text)))
-
-class GurmukhiNormalizer(BaseNormalizer):
-    """
-    Normalizer for the Gurmukhi script. In addition to basic normalization by the super class,
-
-    * Replaces the composite characters containing nuktas by their decomposed form
-    * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama
-    * replace pipe character '|' by poorna virama character
-    * replace colon ':' by visarga if the colon follows a character in this script
-    """
-
-    NUKTA='\u0A3C'
-
-    VOWEL_NORM_MAPS={
-        ## http://www.unicode.org/versions/Unicode12.1.0/ch12.pdf
-        ## Table 12-16
-        '\u0a05\u0a3e': '\u0a06',
-        '\u0a72\u0a3f': '\u0a07',
-        '\u0a72\u0a40': '\u0a08',
-        '\u0a73\u0a41': '\u0a09',
-        '\u0a73\u0a42': '\u0a0a',
-        '\u0a72\u0a47': '\u0a0f',
-        '\u0a05\u0a48': '\u0a10',
-        '\u0a73\u0a4b': '\u0a13',
-        '\u0a05\u0a4c': '\u0a14',
-    }
-
-    def __init__(self,lang='pa',remove_nuktas=False,nasals_mode='do_nothing',do_normalize_chandras=False,
-                do_normalize_vowel_ending=False,
-                do_canonicalize_addak=False,
-                do_canonicalize_tippi=False,
-                do_replace_vowel_bases=False):
-        super(GurmukhiNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending)
-        self.do_canonicalize_addak=do_canonicalize_addak
-        self.do_canonicalize_tippi=do_canonicalize_tippi
-        self.do_replace_vowel_bases=do_replace_vowel_bases
-
-
-    def _normalize_vowels(self,text):
-        """
-
-        """
-
-        ## standard vowel replacements as per suggestions in
-        ## http://www.unicode.org/versions/Unicode12.1.0/ch12.pdf
-        ## Table 12-16
-
-        for k,v in GurmukhiNormalizer.VOWEL_NORM_MAPS.items():
-            text=text.replace(k,v)
-
-        ## the above mappings should account for the majority of the variations;
-        ## the rest are handled via this generic rule which looks at the diacritic
-        ## following the 2 special characters
-        ## TBD: don't see evidence for this in Wikipedia corpus
-
-        ## If these special characters occur without any diacritic, replace them with the closest
-        ## equivalent vowels
-        if self.do_replace_vowel_bases:
-            text=text.replace('\u0a72','\u0a07')
-            text=text.replace('\u0a73','\u0a09')
-
-        return text
-
-
-    def normalize(self,text):
-
-        # Addak
-        if self.do_canonicalize_addak:
-            ## replace addak+consonant with consonant+halant+consonant
-            text=re.sub(r'\u0a71(.)','\\1\u0a4d\\1',text)
-
-        # Tippi
-        if self.do_canonicalize_tippi:
-            text=text.replace('\u0a70','\u0a02')
-
-        # Vowels: Gurmukhi has multiple ways of representing independent vowels due
-        # to the characters 'iri' and 'ura'.
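-        # 'iri' (U+0A72) and 'ura' (U+0A73) act as vowel bases: combined with vowel
-        # signs they yield independent vowels, so the same vowel can be written as
-        # more than one codepoint sequence (hence VOWEL_NORM_MAPS above).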
-        text=self._normalize_vowels(text)
-
-        # common normalization for Indic scripts
-        text=super(GurmukhiNormalizer,self).normalize(text)
-
-        # decomposing Nukta based composite characters
-        text=text.replace('\u0a33','\u0a32'+GurmukhiNormalizer.NUKTA)
-        text=text.replace('\u0a36','\u0a38'+GurmukhiNormalizer.NUKTA)
-        text=text.replace('\u0a59','\u0a16'+GurmukhiNormalizer.NUKTA)
-        text=text.replace('\u0a5a','\u0a17'+GurmukhiNormalizer.NUKTA)
-        text=text.replace('\u0a5b','\u0a1c'+GurmukhiNormalizer.NUKTA)
-        text=text.replace('\u0a5e','\u0a2b'+GurmukhiNormalizer.NUKTA)
-
-        if self.remove_nuktas:
-            text=text.replace(GurmukhiNormalizer.NUKTA,'')
-
-        # replace the poorna virama codes specific to script
-        # with generic Indic script codes
-        text=text.replace('\u0a64','\u0964')
-        text=text.replace('\u0a65','\u0965')
-
-        ## replace pipe character for poorna virama
-        text=text.replace('\u007c','\u0964')
-
-        # correct visarga
-        text=re.sub(r'([\u0a00-\u0a7f]):','\\1\u0a03',text)
-
-        return text
-
-
-class GujaratiNormalizer(BaseNormalizer):
-    """
-    Normalizer for the Gujarati script. In addition to basic normalization by the super class,
-
-    * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama
-    * replace colon ':' by visarga if the colon follows a character in this script
-    """
-
-    NUKTA='\u0ABC'
-
-    def __init__(self,lang='gu',remove_nuktas=False,nasals_mode='do_nothing',do_normalize_chandras=False,
-                 do_normalize_vowel_ending=False):
-        super(GujaratiNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending)
-
-    def normalize(self,text):
-
-        # common normalization for Indic scripts
-        text=super(GujaratiNormalizer,self).normalize(text)
-
-        # remove Nukta characters if requested
-        if self.remove_nuktas:
-            text=text.replace(GujaratiNormalizer.NUKTA,'')
-
-        # replace the poorna virama codes specific to script
-        # with generic Indic script codes
-        text=text.replace('\u0ae4','\u0964')
-        text=text.replace('\u0ae5','\u0965')
-
-        # correct visarga
-        text=re.sub(r'([\u0a80-\u0aff]):','\\1\u0a83',text)
-
-        return text
-
-
-class OriyaNormalizer(BaseNormalizer):
-    """
-    Normalizer for the Oriya script. In addition to basic normalization by the super class,
-
-    * Replaces the composite characters containing nuktas by their decomposed form
-    * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama
-    * Canonicalize two part dependent vowels
-    * Replace 'va' with 'ba'
-    * replace pipe character '|' by poorna virama character
-    * replace colon ':' by visarga if the colon follows a character in this script
-    """
-
-    NUKTA='\u0B3C'
-
-    VOWEL_NORM_MAPS={
-        ## See Table 12-22 in http://www.unicode.org/versions/Unicode12.1.0/ch12.pdf
-        '\u0b05\u0b3e': '\u0b06',
-        '\u0b0f\u0b57': '\u0b10',
-        '\u0b13\u0b57': '\u0b14',
-    }
-
-
-    def __init__(self,lang='or',remove_nuktas=False,nasals_mode='do_nothing',do_normalize_chandras=False,
-                 do_normalize_vowel_ending=False,
-                 do_remap_wa=False):
-        super(OriyaNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending)
-        self.do_remap_wa=do_remap_wa
-
-    def normalize(self,text):
-
-        # common normalization for Indic scripts
-        text=super(OriyaNormalizer,self).normalize(text)
-
-        ## standard vowel replacements as per suggestions in Unicode documents
-        for k,v in OriyaNormalizer.VOWEL_NORM_MAPS.items():
-            text=text.replace(k,v)
-
-        # decomposing Nukta based composite characters
-        text=text.replace('\u0b5c','\u0b21'+OriyaNormalizer.NUKTA)
-        text=text.replace('\u0b5d','\u0b22'+OriyaNormalizer.NUKTA)
-
-        if self.remove_nuktas:
-            text=text.replace(OriyaNormalizer.NUKTA,'')
-
-        # replace the poorna virama codes specific to script
-        # with generic Indic script codes
-        text=text.replace('\u0b64','\u0964')
-        text=text.replace('\u0b65','\u0965')
-
-        # replace pipe character for poorna virama
-        # (the original code replaced '\u0b7c', an unassigned codepoint; the
-        # ASCII pipe '\u007c' is what the comment intends)
-        text=text.replace('\u007c','\u0964')
-
-        # replace wa with ba
-        if self.do_remap_wa:
-            text=text.replace('\u0b71','\u0b2c')
-
-        # replace va with ba
-        # NOTE: documentation (chapter on Indic scripts) and codepoint chart seem contradictory
-        # (this applies to the wa -> ba rule above as well)
-        text=text.replace('\u0b35','\u0b2c')
-
-        # AI dependent vowel sign
-        text=text.replace('\u0b47\u0b56','\u0b58')
-
-        # two part dependent vowels
-        text=text.replace('\u0b47\u0b3e','\u0b4b')
-        text=text.replace('\u0b47\u0b57','\u0b4c')
-
-
-        # additional consonant - not clear how to handle this
-        # ignore
-
-        # correct visarga
-        text=re.sub(r'([\u0b00-\u0b7f]):','\\1\u0b03',text)
-
-        return text
-
-
-class BengaliNormalizer(BaseNormalizer):
-    """
-    Normalizer for the Bengali script. In addition to basic normalization by the super class,
-
-    * Replaces the composite characters containing nuktas by their decomposed form
-    * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama
-    * Canonicalize two part dependent vowels
-    * replace pipe character '|' by poorna virama character
-    * replace colon ':' by visarga if the colon follows a character in this script
-
-    """
-
-    NUKTA='\u09BC'
-
-    def __init__(self,lang='bn',remove_nuktas=False,nasals_mode='do_nothing',do_normalize_chandras=False,
-                 do_normalize_vowel_ending=False,
-                 do_remap_assamese_chars=False):
-        super(BengaliNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending)
-        self.do_remap_assamese_chars=do_remap_assamese_chars
-
-    def normalize(self,text):
-
-        # common normalization for Indic scripts
-        text=super(BengaliNormalizer,self).normalize(text)
-
-        # decomposing Nukta based composite characters
-        text=text.replace('\u09dc','\u09a1'+BengaliNormalizer.NUKTA)
-        text=text.replace('\u09dd','\u09a2'+BengaliNormalizer.NUKTA)
-        text=text.replace('\u09df','\u09af'+BengaliNormalizer.NUKTA)
-
-        if self.remove_nuktas:
-            text=text.replace(BengaliNormalizer.NUKTA,'')
-
-        if self.do_remap_assamese_chars and self.lang=='as':
-            text=text.replace('\u09f0','\u09b0')  # 'ra' character
-            text=text.replace('\u09f1','\u09ac')  # 'va' character
-
-        # replace the poorna virama codes specific to script
-        # with generic Indic script codes
-        text=text.replace('\u09e4','\u0964')
-        text=text.replace('\u09e5','\u0965')
-
-        # replace pipe character for poorna virama
-        text=text.replace('\u007c','\u0964')
-        # replace bengali currency numerator four for poorna virama (it looks similar and is used as a substitute)
-        text=text.replace('\u09f7','\u0964')
-
-        # two part dependent vowels
-        text=text.replace('\u09c7\u09be','\u09cb')
-        text=text.replace('\u09c7\u09d7','\u09cc')
-
-        # correct visarga
-        text=re.sub(r'([\u0980-\u09ff]):','\\1\u0983',text)
-
-        return text
-
-
-class TamilNormalizer(BaseNormalizer):
-    """
-    Normalizer for the Tamil script. In addition to basic normalization by the super class,
-
-    * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama
-    * canonicalize two-part dependent vowel signs
-    * replace colon ':' by visarga if the colon follows a character in this script
-    """
-
-    def __init__(self,lang='ta',remove_nuktas=False,nasals_mode='do_nothing',
-                 do_normalize_chandras=False,do_normalize_vowel_ending=False):
-        super(TamilNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending)
-
-    def normalize(self,text):
-
-        # common normalization for Indic scripts
-        text=super(TamilNormalizer,self).normalize(text)
-
-        # replace the poorna virama codes specific to script
-        # with generic Indic script codes
-        text=text.replace('\u0be4','\u0964')
-        text=text.replace('\u0be5','\u0965')
-
-        # two part dependent vowels
-        text=text.replace('\u0b92\u0bd7','\u0b94')
-        text=text.replace('\u0bc6\u0bbe','\u0bca')
-        text=text.replace('\u0bc7\u0bbe','\u0bcb')
-        text=text.replace('\u0bc6\u0bd7','\u0bcc')
-
-        # correct visarga
-        text=re.sub(r'([\u0b80-\u0bff]):','\\1\u0b83',text)
-
-        return text
-
-
-class TeluguNormalizer(BaseNormalizer):
-    """
-    Normalizer for the Telugu script. In addition to basic normalization by the super class,
-
-    * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama
-    * canonicalize two-part dependent vowel signs
-    * replace colon ':' by visarga if the colon follows a character in this script
-    """
-
-    def __init__(self,lang='te',remove_nuktas=False,nasals_mode='do_nothing',
-                 do_normalize_chandras=False,do_normalize_vowel_ending=False):
-        super(TeluguNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending)
-
-    def normalize(self,text):
-
-        # common normalization for Indic scripts
-        text=super(TeluguNormalizer,self).normalize(text)
-
-        # replace the poorna virama codes specific to script
-        # with generic Indic script codes
-        text=text.replace('\u0c64','\u0964')
-        text=text.replace('\u0c65','\u0965')
-
-        # dependent vowels
-        text=text.replace('\u0c46\u0c56','\u0c48')
-
-        # correct visarga
-        text=re.sub(r'([\u0c00-\u0c7f]):','\\1\u0c03',text)
-
-        return text
-
-    def get_char_stats(self,text):
-        pass
-
-class KannadaNormalizer(BaseNormalizer):
-    """
-    Normalizer for the Kannada script. In addition to basic normalization by the super class,
-
-    * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama
-    * canonicalize two-part dependent vowel signs
-    * replace colon ':' by visarga if the colon follows a character in this script
-    """
-
-    def __init__(self,lang='kn',remove_nuktas=False,nasals_mode='do_nothing',
-                 do_normalize_chandras=False,do_normalize_vowel_ending=False):
-        super(KannadaNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending)
-
-
-    def normalize(self,text):
-
-        # common normalization for Indic scripts
-        text=super(KannadaNormalizer,self).normalize(text)
-
-        # replace the poorna virama codes specific to script
-        # with generic Indic script codes
-        text=text.replace('\u0ce4','\u0964')
-        text=text.replace('\u0ce5','\u0965')
-
-        # dependent vowels
-        text=text.replace('\u0cbf\u0cd5','\u0cc0')
-        text=text.replace('\u0cc6\u0cd5','\u0cc7')
-        text=text.replace('\u0cc6\u0cd6','\u0cc8')
-        text=text.replace('\u0cc6\u0cc2','\u0cca')
-        text=text.replace('\u0cca\u0cd5','\u0ccb')
-
-        # correct visarga
-        text=re.sub(r'([\u0c80-\u0cff]):','\\1\u0c83',text)
-
-        return text
-
-
-class MalayalamNormalizer(BaseNormalizer):
-    """
-    Normalizer for the Malayalam script. In addition to basic normalization by the super class,
-
-    * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama
-    * canonicalize two-part dependent vowel signs
-    * Change from old encoding of chillus (till Unicode 5.0) to new encoding
-    * replace colon ':' by visarga if the colon follows a character in this script
-    """
-
-    CHILLU_CHAR_MAP= {
-        '\u0d7a': '\u0d23',
-        '\u0d7b': '\u0d28',
-        '\u0d7c': '\u0d30',
-        '\u0d7d': '\u0d32',
-        '\u0d7e': '\u0d33',
-        '\u0d7f': '\u0d15',
-    }
-
-    def _canonicalize_chillus(self,text):
-        for chillu, char in MalayalamNormalizer.CHILLU_CHAR_MAP.items():
-            text=text.replace(chillu,'{}\u0d4d'.format(char))
-        return text
-
-    def _correct_geminated_T(self,text):
-        return text.replace('\u0d31\u0d4d\u0d31','\u0d1f\u0d4d\u0d1f')
-
-    def __init__(self,lang='ml',remove_nuktas=False,nasals_mode='do_nothing',do_normalize_chandras=False,
-                 do_normalize_vowel_ending=False,
-                 do_canonicalize_chillus=False, do_correct_geminated_T=False):
-        super(MalayalamNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending)
-        self.do_canonicalize_chillus=do_canonicalize_chillus
-        self.do_correct_geminated_T=do_correct_geminated_T
-
-    def normalize(self,text):
-
-        # Change from old encoding of chillus (till Unicode 5.0) to new encoding
-        text=text.replace('\u0d23\u0d4d\u200d','\u0d7a')
-        text=text.replace('\u0d28\u0d4d\u200d','\u0d7b')
-        text=text.replace('\u0d30\u0d4d\u200d','\u0d7c')
-        text=text.replace('\u0d32\u0d4d\u200d','\u0d7d')
-        text=text.replace('\u0d33\u0d4d\u200d','\u0d7e')
-        text=text.replace('\u0d15\u0d4d\u200d','\u0d7f')
-
-        # Normalize chillus
-        if self.do_canonicalize_chillus:
-            text=self._canonicalize_chillus(text)
-
-        # common normalization for Indic scripts
-        text=super(MalayalamNormalizer,self).normalize(text)
-
-        # replace the poorna virama codes specific to script
-        # with generic Indic script codes
-        text=text.replace('\u0d64','\u0964')
-        text=text.replace('\u0d65','\u0965')
-
-        # dependent vowels
-        text=text.replace('\u0d46\u0d3e','\u0d4a')
-        text=text.replace('\u0d47\u0d3e','\u0d4b')
-
-        # au forms
-        text=text.replace('\u0d46\u0d57','\u0d4c')
-        text=text.replace('\u0d57','\u0d4c')
-
-        # correct geminated T
-        if self.do_correct_geminated_T:
-            text=self._correct_geminated_T(text)
-
-        # correct visarga
-        text=re.sub(r'([\u0d00-\u0d7f]):','\\1\u0d03',text)
-
-        return text
-
-class UrduNormalizer(NormalizerI):
-    '''Uses UrduHack library.
-    https://docs.urduhack.com/en/stable/_modules/urduhack/normalization/character.html#normalize
-    '''
-
-    def __init__(self, lang, remove_nuktas=True):
-        self.lang = lang
-        self.remove_nuktas = remove_nuktas
-
-        from urduhack.normalization import (
-            remove_diacritics,
-            normalize_characters,
-            normalize_combine_characters
-        ) # TODO: Use only required normalizers
-        from urduhack.preprocessing import (
-            normalize_whitespace,
-            digits_space,
-            all_punctuations_space,
-            english_characters_space
-        )
-        # The imports above are local to __init__, so bind the helpers to the
-        # instance; referencing them as class attributes in normalize() would
-        # raise AttributeError.
-        self.remove_diacritics = remove_diacritics
-        self.normalize_characters = normalize_characters
-        self.normalize_combine_characters = normalize_combine_characters
-        self.normalize_whitespace = normalize_whitespace
-        self.digits_space = digits_space
-        self.all_punctuations_space = all_punctuations_space
-        self.english_characters_space = english_characters_space
-
-    def normalize(self, text):
-        text = self._normalize_punctuations(text)
-        text = self.normalize_whitespace(text)
-        if self.remove_nuktas:
-            text = self.remove_diacritics(text)
-        text = self.normalize_characters(text)
-        text = self.normalize_combine_characters(text)
-        text = self.digits_space(text)
-        text = self.all_punctuations_space(text)
-        text = self.english_characters_space(text)
-        return text
-
-
-class IndicNormalizerFactory(object):
-    """
-    Factory class to create language specific normalizers.
-
-    """
-
-    def get_normalizer(self,language,**kwargs):
-        """
-            Call the get_normalizer function to get the language specific normalizer
-
-            Parameters:
-            |language: language code
-            |remove_nuktas: boolean, should the normalizer remove nukta characters
-        """
-        normalizer=None
-        if language in ['hi','mr','sa','kK','ne','sd']:
-            normalizer=DevanagariNormalizer(lang=language, **kwargs)
-        elif language in ['ur']:
-            normalizer = UrduNormalizer(lang=language, **kwargs)
-        elif language in ['pa']:
-            normalizer=GurmukhiNormalizer(lang=language, **kwargs)
-        elif language in ['gu']:
-            normalizer=GujaratiNormalizer(lang=language, **kwargs)
-        elif language in ['bn']:
-            normalizer=BengaliNormalizer(lang=language, **kwargs)
-        elif language in ['as']:
-            normalizer=BengaliNormalizer(lang=language, **kwargs)
-        elif language in ['or']:
-            normalizer=OriyaNormalizer(lang=language, **kwargs)
-        elif language in ['ml']:
-            normalizer=MalayalamNormalizer(lang=language, **kwargs)
-        elif language in ['kn']:
-            normalizer=KannadaNormalizer(lang=language, **kwargs)
-        elif language in ['ta']:
-            normalizer=TamilNormalizer(lang=language, **kwargs)
-        elif language in ['te']:
-            normalizer=TeluguNormalizer(lang=language, **kwargs)
-        else:
-            normalizer=BaseNormalizer(lang=language, **kwargs)
-
-        return normalizer
-
-    def is_language_supported(self,language):
-        """
-        Is the language supported?
-        """
-        if language in ['hi','mr','sa','kK','ne','sd',
-                    'ur',
-                    'pa',
-                    'gu',
-                    'bn','as',
-                    'or',
-                    'ml',
-                    'kn',
-                    'ta',
-                    'te']:
-            return True
-        else:
-            return False
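-
-# A minimal usage sketch (illustrative only; 'hi' and the keyword argument are
-# examples, any supported language code works the same way):
-#
-#   factory = IndicNormalizerFactory()
-#   normalizer = factory.get_normalizer('hi', remove_nuktas=False)
-#   normalized_text = normalizer.normalize(input_text)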
-
-
-if __name__ == '__main__':
-
-    if len(sys.argv)<4:
-        print("Usage: python normalize.py <infile> <outfile> <language> [<remove_nuktas(True|False)>] [<nasals_mode>]")
-        sys.exit(1)
-
-    language=sys.argv[3]
-    remove_nuktas=False
-    normalize_nasals='do_nothing'
-    if len(sys.argv)>=5:
-        # note: bool('False') is True, so compare the argument string explicitly
-        remove_nuktas=(sys.argv[4].lower()=='true')
-    if len(sys.argv)>=6:
-        normalize_nasals=sys.argv[5]
-
-    # create normalizer
-    factory=IndicNormalizerFactory()
-    normalizer=factory.get_normalizer(language,remove_nuktas=remove_nuktas,nasals_mode=normalize_nasals)
-
-    # DO normalization
-    with codecs.open(sys.argv[1],'r','utf-8') as ifile:
-        with codecs.open(sys.argv[2],'w','utf-8') as ofile:
-            for line in ifile.readlines():
-                normalized_line=normalizer.normalize(line)
-                ofile.write(normalized_line)
-
-    ## gather statistics about normalization
-    #with codecs.open(sys.argv[1],'r','utf-8') as ifile:
-    #    normalizer=DevanagariNormalizer()
-    #    text=string.join(ifile.readlines(),sep='')
-    #    normalizer.get_char_stats(text)
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/optim/adadelta.py b/spaces/ICML2022/OFA/fairseq/fairseq/optim/adadelta.py
deleted file mode 100644
index f1a21549770f0904a6a40a42ff7eb52811f1bfbe..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/optim/adadelta.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch.optim
-
-from . import LegacyFairseqOptimizer, register_optimizer
-
-
-@register_optimizer("adadelta")
-class Adadelta(LegacyFairseqOptimizer):
-    def __init__(self, args, params):
-        super().__init__(args)
-        self._optimizer = torch.optim.Adadelta(params, **self.optimizer_config)
-
-    @staticmethod
-    def add_args(parser):
-        """Add optimizer-specific arguments to the parser."""
-        # fmt: off
-        parser.add_argument('--adadelta-rho', type=float, default=0.9, metavar='RHO',
-                            help='coefficient used for computing a running average of squared gradients')
-        parser.add_argument('--adadelta-eps', type=float, default=1e-6, metavar='EPS',
-                            help='term added to the denominator to improve numerical stability')
-        parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD',
-                            help='weight decay')
-        parser.add_argument('--anneal-eps', action='store_true', help='flag to anneal eps')
-        # fmt: on
-
-    @property
-    def optimizer_config(self):
-        """
-        Return a kwarg dictionary that will be used to override optimizer
-        args stored in checkpoints. This allows us to load a checkpoint and
-        resume training using a different set of optimizer args, e.g., with a
-        different learning rate.
-        """
-        return {
-            "lr": self.args.lr[0],
-            "rho": self.args.adadelta_rho,
-            "eps": self.args.adadelta_eps,
-            "weight_decay": self.args.weight_decay,
-        }
-
-    @property
-    def supports_flat_params(self):
-        return True
diff --git a/spaces/ICML2022/resefa/third_party/stylegan2_official_ops/bias_act.py b/spaces/ICML2022/resefa/third_party/stylegan2_official_ops/bias_act.py
deleted file mode 100644
index b94dca1fb0a7f3bc13dce952d8e97a211ec94a88..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/resefa/third_party/stylegan2_official_ops/bias_act.py
+++ /dev/null
@@ -1,227 +0,0 @@
-# python3.7
-
-# Copyright (c) 2021, NVIDIA CORPORATION.  All rights reserved.
-# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom ops to fuse bias and activation as one operator, which is efficient. - -Please refer to https://github.com/NVlabs/stylegan2-ada-pytorch -""" - -# pylint: disable=line-too-long -# pylint: disable=missing-class-docstring -# pylint: disable=global-statement -# pylint: disable=bare-except - -import os -import warnings -import traceback -from easydict import EasyDict -import numpy as np -import torch - -from . import custom_ops -from . import misc - -#---------------------------------------------------------------------------- - -activation_funcs = { - 'linear': EasyDict(func=lambda x, **_: x, def_alpha=0, def_gain=1, cuda_idx=1, ref='', has_2nd_grad=False), - 'relu': EasyDict(func=lambda x, **_: torch.nn.functional.relu(x), def_alpha=0, def_gain=np.sqrt(2), cuda_idx=2, ref='y', has_2nd_grad=False), - 'lrelu': EasyDict(func=lambda x, alpha, **_: torch.nn.functional.leaky_relu(x, alpha), def_alpha=0.2, def_gain=np.sqrt(2), cuda_idx=3, ref='y', has_2nd_grad=False), - 'tanh': EasyDict(func=lambda x, **_: torch.tanh(x), def_alpha=0, def_gain=1, cuda_idx=4, ref='y', has_2nd_grad=True), - 'sigmoid': EasyDict(func=lambda x, **_: torch.sigmoid(x), def_alpha=0, def_gain=1, cuda_idx=5, ref='y', has_2nd_grad=True), - 'elu': EasyDict(func=lambda x, **_: torch.nn.functional.elu(x), def_alpha=0, def_gain=1, cuda_idx=6, ref='y', has_2nd_grad=True), - 'selu': EasyDict(func=lambda x, **_: torch.nn.functional.selu(x), def_alpha=0, def_gain=1, cuda_idx=7, ref='y', has_2nd_grad=True), - 'softplus': EasyDict(func=lambda x, **_: torch.nn.functional.softplus(x), def_alpha=0, def_gain=1, cuda_idx=8, ref='y', has_2nd_grad=True), - 'swish': EasyDict(func=lambda x, **_: torch.sigmoid(x) * x, def_alpha=0, def_gain=np.sqrt(2), cuda_idx=9, ref='x', has_2nd_grad=True), -} - -#---------------------------------------------------------------------------- - -_inited = False -_plugin = None -_null_tensor = torch.empty([0]) - -def _init(): - global _inited, _plugin - if not _inited: - _inited = True - sources = ['bias_act.cpp', 'bias_act.cu'] - sources = [os.path.join(os.path.dirname(__file__), s) for s in sources] - try: - _plugin = custom_ops.get_plugin('bias_act_plugin', sources=sources, extra_cuda_cflags=['--use_fast_math']) - except: - warnings.warn('Failed to build CUDA kernels for bias_act. Falling back to slow reference implementation. Details:\n\n' + traceback.format_exc()) - return _plugin is not None - -#---------------------------------------------------------------------------- - -def bias_act(x, b=None, dim=1, act='linear', alpha=None, gain=None, clamp=None, impl='cuda'): - r"""Fused bias and activation function. - - Adds bias `b` to activation tensor `x`, evaluates activation function `act`, - and scales the result by `gain`. Each of the steps is optional. In most cases, - the fused op is considerably more efficient than performing the same calculation - using standard PyTorch ops. It supports first and second order gradients, - but not third order gradients. - - Args: - x: Input activation tensor. Can be of any shape. - b: Bias vector, or `None` to disable. Must be a 1D tensor of the same type - as `x`. 
The shape must be known, and it must match the dimension of `x`
-                corresponding to `dim`.
-        dim:    The dimension in `x` corresponding to the elements of `b`.
-                The value of `dim` is ignored if `b` is not specified.
-        act:    Name of the activation function to evaluate, or `"linear"` to disable.
-                Can be e.g. `"relu"`, `"lrelu"`, `"tanh"`, `"sigmoid"`, `"swish"`, etc.
-                See `activation_funcs` for a full list. `None` is not allowed.
-        alpha:  Shape parameter for the activation function, or `None` to use the default.
-        gain:   Scaling factor for the output tensor, or `None` to use default.
-                See `activation_funcs` for the default scaling of each activation function.
-                If unsure, consider specifying 1.
-        clamp:  Clamp the output values to `[-clamp, +clamp]`, or `None` to disable
-                the clamping (default).
-        impl:   Name of the implementation to use. Can be `"ref"` or `"cuda"` (default).
-
-    Returns:
-        Tensor of the same shape and datatype as `x`.
-    """
-    assert isinstance(x, torch.Tensor)
-    assert impl in ['ref', 'cuda']
-    if impl == 'cuda' and x.device.type == 'cuda' and _init():
-        return _bias_act_cuda(dim=dim, act=act, alpha=alpha, gain=gain, clamp=clamp).apply(x, b)
-    return _bias_act_ref(x=x, b=b, dim=dim, act=act, alpha=alpha, gain=gain, clamp=clamp)
-
-#----------------------------------------------------------------------------
-
-@misc.profiled_function
-def _bias_act_ref(x, b=None, dim=1, act='linear', alpha=None, gain=None, clamp=None):
-    """Slow reference implementation of `bias_act()` using standard PyTorch ops.
-    """
-    assert isinstance(x, torch.Tensor)
-    assert clamp is None or clamp >= 0
-    spec = activation_funcs[act]
-    alpha = float(alpha if alpha is not None else spec.def_alpha)
-    gain = float(gain if gain is not None else spec.def_gain)
-    clamp = float(clamp if clamp is not None else -1)
-
-    # Add bias.
-    if b is not None:
-        assert isinstance(b, torch.Tensor) and b.ndim == 1
-        assert 0 <= dim < x.ndim
-        assert b.shape[0] == x.shape[dim]
-        x = x + b.reshape([-1 if i == dim else 1 for i in range(x.ndim)])
-
-    # Evaluate activation function.
-    alpha = float(alpha)
-    x = spec.func(x, alpha=alpha)
-
-    # Scale by gain.
-    gain = float(gain)
-    if gain != 1:
-        x = x * gain
-
-    # Clamp.
-    if clamp >= 0:
-        x = x.clamp(-clamp, clamp)
-    return x
-
-#----------------------------------------------------------------------------
-
-_bias_act_cuda_cache = dict()
-
-def _bias_act_cuda(dim=1, act='linear', alpha=None, gain=None, clamp=None):
-    """Fast CUDA implementation of `bias_act()` using custom ops.
-    """
-    # Parse arguments.
-    assert clamp is None or clamp >= 0
-    spec = activation_funcs[act]
-    alpha = float(alpha if alpha is not None else spec.def_alpha)
-    gain = float(gain if gain is not None else spec.def_gain)
-    clamp = float(clamp if clamp is not None else -1)
-
-    # Lookup from cache.
-    key = (dim, act, alpha, gain, clamp)
-    if key in _bias_act_cuda_cache:
-        return _bias_act_cuda_cache[key]
-
-    # Forward op.
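-    # The autograd.Function pair below routes the forward pass and the first-
-    # and second-order backward passes through the same fused kernel, using
-    # ctx.needs_input_grad to skip gradients that are not required.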
- class BiasActCuda(torch.autograd.Function): - @staticmethod - def forward(ctx, x, b): # pylint: disable=arguments-differ - ctx.memory_format = torch.channels_last if x.ndim > 2 and x.stride()[1] == 1 else torch.contiguous_format - x = x.contiguous(memory_format=ctx.memory_format) - b = b.contiguous() if b is not None else _null_tensor - y = x - if act != 'linear' or gain != 1 or clamp >= 0 or b is not _null_tensor: - y = _plugin.bias_act(x, b, _null_tensor, _null_tensor, _null_tensor, 0, dim, spec.cuda_idx, alpha, gain, clamp) - ctx.save_for_backward( - x if 'x' in spec.ref or spec.has_2nd_grad else _null_tensor, - b if 'x' in spec.ref or spec.has_2nd_grad else _null_tensor, - y if 'y' in spec.ref else _null_tensor) - return y - - @staticmethod - def backward(ctx, dy): # pylint: disable=arguments-differ - dy = dy.contiguous(memory_format=ctx.memory_format) - x, b, y = ctx.saved_tensors - dx = None - db = None - - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - dx = dy - if act != 'linear' or gain != 1 or clamp >= 0: - dx = BiasActCudaGrad.apply(dy, x, b, y) - - if ctx.needs_input_grad[1]: - db = dx.sum([i for i in range(dx.ndim) if i != dim]) - - return dx, db - - # Backward op. - class BiasActCudaGrad(torch.autograd.Function): - @staticmethod - def forward(ctx, dy, x, b, y): # pylint: disable=arguments-differ - ctx.memory_format = torch.channels_last if dy.ndim > 2 and dy.stride()[1] == 1 else torch.contiguous_format - dx = _plugin.bias_act(dy, b, x, y, _null_tensor, 1, dim, spec.cuda_idx, alpha, gain, clamp) - ctx.save_for_backward( - dy if spec.has_2nd_grad else _null_tensor, - x, b, y) - return dx - - @staticmethod - def backward(ctx, d_dx): # pylint: disable=arguments-differ - d_dx = d_dx.contiguous(memory_format=ctx.memory_format) - dy, x, b, y = ctx.saved_tensors - d_dy = None - d_x = None - d_b = None - d_y = None - - if ctx.needs_input_grad[0]: - d_dy = BiasActCudaGrad.apply(d_dx, x, b, y) - - if spec.has_2nd_grad and (ctx.needs_input_grad[1] or ctx.needs_input_grad[2]): - d_x = _plugin.bias_act(d_dx, b, x, y, dy, 2, dim, spec.cuda_idx, alpha, gain, clamp) - - if spec.has_2nd_grad and ctx.needs_input_grad[2]: - d_b = d_x.sum([i for i in range(d_x.ndim) if i != dim]) - - return d_dy, d_x, d_b, d_y - - # Add to cache. - _bias_act_cuda_cache[key] = BiasActCuda - return BiasActCuda - -#---------------------------------------------------------------------------- - -# pylint: enable=line-too-long -# pylint: enable=missing-class-docstring -# pylint: enable=global-statement -# pylint: enable=bare-except diff --git a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/backbone/swin_transformer.py b/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/backbone/swin_transformer.py deleted file mode 100644 index 1c66194deb5dd370e797e57e2712f44303e568cc..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/backbone/swin_transformer.py +++ /dev/null @@ -1,802 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# DINO -# Copyright (c) 2022 IDEA. All Rights Reserved. 
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# -------------------------------------------------------- -# modified from https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/blob/master/mmdet/models/backbones/swin_transformer.py -# -------------------------------------------------------- - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from groundingdino.util.misc import NestedTensor - - -class Mlp(nn.Module): - """Multilayer perceptron.""" - - def __init__( - self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.0 - ): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - """Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0 - """ - - def __init__( - self, - dim, - window_size, - num_heads, - qkv_bias=True, - qk_scale=None, - attn_drop=0.0, - proj_drop=0.0, - ): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim**-0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads) - ) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=0.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """Forward function. - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = ( - self.qkv(x) - .reshape(B_, N, 3, self.num_heads, C // self.num_heads) - .permute(2, 0, 3, 1, 4) - ) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = q @ k.transpose(-2, -1) - - relative_position_bias = self.relative_position_bias_table[ - self.relative_position_index.view(-1) - ].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1 - ) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute( - 2, 0, 1 - ).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SwinTransformerBlock(nn.Module): - """Swin Transformer Block. - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. 
Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__( - self, - dim, - num_heads, - window_size=7, - shift_size=0, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - act_layer=nn.GELU, - norm_layer=nn.LayerNorm, - ): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, - window_size=to_2tuple(self.window_size), - num_heads=num_heads, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop=attn_drop, - proj_drop=drop, - ) - - self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp( - in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop - ) - - self.H = None - self.W = None - - def forward(self, x, mask_matrix): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - mask_matrix: Attention mask for cyclic shift. - """ - B, L, C = x.shape - H, W = self.H, self.W - assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition( - shifted_x, self.window_size - ) # nW*B, window_size, window_size, C - x_windows = x_windows.view( - -1, self.window_size * self.window_size, C - ) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - -class PatchMerging(nn.Module): - """Patch Merging Layer - Args: - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """A basic Swin Transformer layer for one stage. - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__( - self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False, - ): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList( - [ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer, - ) - for i in range(depth) - ] - ) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(H / self.window_size)) * self.window_size - Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1 - h_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - w_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition( - img_mask, self.window_size - ) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill( - attn_mask == 0, float(0.0) - ) - - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, attn_mask) - else: - x = blk(x, attn_mask) - if self.downsample is not None: - x_down = self.downsample(x, H, W) - Wh, Ww = (H + 1) // 2, (W + 1) // 2 - return x, H, W, x_down, Wh, Ww - else: - return x, H, W, x, H, W - - -class PatchEmbed(nn.Module): - """Image to Patch Embedding - Args: - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - patch_size = to_2tuple(patch_size) - self.patch_size = patch_size - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - """Forward function.""" - # padding - _, _, H, W = x.size() - if W % self.patch_size[1] != 0: - x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1])) - if H % self.patch_size[0] != 0: - x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0])) - - x = self.proj(x) # B C Wh Ww - if self.norm is not None: - Wh, Ww = x.size(2), x.size(3) - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww) - - return x - - -class SwinTransformer(nn.Module): - """Swin Transformer backbone. - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - Args: - pretrain_img_size (int): Input image size for training the pretrained model, - used in absolute postion embedding. Default 224. - patch_size (int | tuple(int)): Patch size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - depths (tuple[int]): Depths of each Swin Transformer stage. - num_heads (tuple[int]): Number of attention head of each stage. - window_size (int): Window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): Dropout rate. 
-        attn_drop_rate (float): Attention dropout rate. Default: 0.
-        drop_path_rate (float): Stochastic depth rate. Default: 0.2.
-        norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
-        ape (bool): If True, add absolute position embedding to the patch embedding. Default: False.
-        patch_norm (bool): If True, add normalization after patch embedding. Default: True.
-        out_indices (Sequence[int]): Output from which stages.
-        frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
-            -1 means not freezing any parameters.
-        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
-        dilation (bool): If True, the output is 16x downsampled; otherwise 32x downsampled.
-    """
-
-    def __init__(
-        self,
-        pretrain_img_size=224,
-        patch_size=4,
-        in_chans=3,
-        embed_dim=96,
-        depths=[2, 2, 6, 2],
-        num_heads=[3, 6, 12, 24],
-        window_size=7,
-        mlp_ratio=4.0,
-        qkv_bias=True,
-        qk_scale=None,
-        drop_rate=0.0,
-        attn_drop_rate=0.0,
-        drop_path_rate=0.2,
-        norm_layer=nn.LayerNorm,
-        ape=False,
-        patch_norm=True,
-        out_indices=(0, 1, 2, 3),
-        frozen_stages=-1,
-        dilation=False,
-        use_checkpoint=False,
-    ):
-        super().__init__()
-
-        self.pretrain_img_size = pretrain_img_size
-        self.num_layers = len(depths)
-        self.embed_dim = embed_dim
-        self.ape = ape
-        self.patch_norm = patch_norm
-        self.out_indices = out_indices
-        self.frozen_stages = frozen_stages
-        self.dilation = dilation
-
-        # if use_checkpoint:
-        #     print("use_checkpoint!!!!!!!!!!!!!!!!!!!!!!!!")
-
-        # split image into non-overlapping patches
-        self.patch_embed = PatchEmbed(
-            patch_size=patch_size,
-            in_chans=in_chans,
-            embed_dim=embed_dim,
-            norm_layer=norm_layer if self.patch_norm else None,
-        )
-
-        # absolute position embedding
-        if self.ape:
-            pretrain_img_size = to_2tuple(pretrain_img_size)
-            patch_size = to_2tuple(patch_size)
-            patches_resolution = [
-                pretrain_img_size[0] // patch_size[0],
-                pretrain_img_size[1] // patch_size[1],
-            ]
-
-            self.absolute_pos_embed = nn.Parameter(
-                torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])
-            )
-            trunc_normal_(self.absolute_pos_embed, std=0.02)
-
-        self.pos_drop = nn.Dropout(p=drop_rate)
-
-        # stochastic depth
-        dpr = [
-            x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))
-        ]  # stochastic depth decay rule
-
-        # build layers
-        self.layers = nn.ModuleList()
-        # prepare downsample list
-        downsamplelist = [PatchMerging for i in range(self.num_layers)]
-        downsamplelist[-1] = None
-        num_features = [int(embed_dim * 2**i) for i in range(self.num_layers)]
-        if self.dilation:
-            downsamplelist[-2] = None
-            num_features[-1] = int(embed_dim * 2 ** (self.num_layers - 1)) // 2
-        for i_layer in range(self.num_layers):
-            layer = BasicLayer(
-                # dim=int(embed_dim * 2 ** i_layer),
-                dim=num_features[i_layer],
-                depth=depths[i_layer],
-                num_heads=num_heads[i_layer],
-                window_size=window_size,
-                mlp_ratio=mlp_ratio,
-                qkv_bias=qkv_bias,
-                qk_scale=qk_scale,
-                drop=drop_rate,
-                attn_drop=attn_drop_rate,
-                drop_path=dpr[sum(depths[:i_layer]) : sum(depths[: i_layer + 1])],
-                norm_layer=norm_layer,
-                # downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
-                downsample=downsamplelist[i_layer],
-                use_checkpoint=use_checkpoint,
-            )
-            self.layers.append(layer)
-
-        # num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
-        self.num_features = num_features
-
-        # add a norm layer for each output
-        for i_layer in out_indices:
-            layer = norm_layer(num_features[i_layer])
-            layer_name = f"norm{i_layer}"
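-            # add_module() registers each per-output norm layer under a dynamic
-            # name so it is tracked like a regular submodule; forward() retrieves
-            # it again via getattr(self, f"norm{i}").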
-            self.add_module(layer_name, layer)
-
-        self._freeze_stages()
-
-    def _freeze_stages(self):
-        if self.frozen_stages >= 0:
-            self.patch_embed.eval()
-            for param in self.patch_embed.parameters():
-                param.requires_grad = False
-
-        if self.frozen_stages >= 1 and self.ape:
-            self.absolute_pos_embed.requires_grad = False
-
-        if self.frozen_stages >= 2:
-            self.pos_drop.eval()
-            for i in range(0, self.frozen_stages - 1):
-                m = self.layers[i]
-                m.eval()
-                for param in m.parameters():
-                    param.requires_grad = False
-
-    # def init_weights(self, pretrained=None):
-    #     """Initialize the weights in backbone.
-    #     Args:
-    #         pretrained (str, optional): Path to pre-trained weights.
-    #             Defaults to None.
-    #     """
-
-    #     def _init_weights(m):
-    #         if isinstance(m, nn.Linear):
-    #             trunc_normal_(m.weight, std=.02)
-    #             if isinstance(m, nn.Linear) and m.bias is not None:
-    #                 nn.init.constant_(m.bias, 0)
-    #         elif isinstance(m, nn.LayerNorm):
-    #             nn.init.constant_(m.bias, 0)
-    #             nn.init.constant_(m.weight, 1.0)
-
-    #     if isinstance(pretrained, str):
-    #         self.apply(_init_weights)
-    #         logger = get_root_logger()
-    #         load_checkpoint(self, pretrained, strict=False, logger=logger)
-    #     elif pretrained is None:
-    #         self.apply(_init_weights)
-    #     else:
-    #         raise TypeError('pretrained must be a str or None')
-
-    def forward_raw(self, x):
-        """Forward function."""
-        x = self.patch_embed(x)
-
-        Wh, Ww = x.size(2), x.size(3)
-        if self.ape:
-            # interpolate the position embedding to the corresponding size
-            absolute_pos_embed = F.interpolate(
-                self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic"
-            )
-            x = (x + absolute_pos_embed).flatten(2).transpose(1, 2)  # B Wh*Ww C
-        else:
-            x = x.flatten(2).transpose(1, 2)
-        x = self.pos_drop(x)
-
-        outs = []
-        for i in range(self.num_layers):
-            layer = self.layers[i]
-            x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
-            # import ipdb; ipdb.set_trace()
-
-            if i in self.out_indices:
-                norm_layer = getattr(self, f"norm{i}")
-                x_out = norm_layer(x_out)
-
-                out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
-                outs.append(out)
-        # in:
-        #   torch.Size([2, 3, 1024, 1024])
-        # outs:
-        #   [torch.Size([2, 192, 256, 256]), torch.Size([2, 384, 128, 128]), \
-        #       torch.Size([2, 768, 64, 64]), torch.Size([2, 1536, 32, 32])]
-        return tuple(outs)
-
-    def forward(self, tensor_list: NestedTensor):
-        x = tensor_list.tensors
-
-        """Forward function."""
-        x = self.patch_embed(x)
-
-        Wh, Ww = x.size(2), x.size(3)
-        if self.ape:
-            # interpolate the position embedding to the corresponding size
-            absolute_pos_embed = F.interpolate(
-                self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic"
-            )
-            x = (x + absolute_pos_embed).flatten(2).transpose(1, 2)  # B Wh*Ww C
-        else:
-            x = x.flatten(2).transpose(1, 2)
-        x = self.pos_drop(x)
-
-        outs = []
-        for i in range(self.num_layers):
-            layer = self.layers[i]
-            x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
-
-            if i in self.out_indices:
-                norm_layer = getattr(self, f"norm{i}")
-                x_out = norm_layer(x_out)
-
-                out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
-                outs.append(out)
-        # in:
-        #   torch.Size([2, 3, 1024, 1024])
-        # out:
-        #   [torch.Size([2, 192, 256, 256]), torch.Size([2, 384, 128, 128]), \
-        #       torch.Size([2, 768, 64, 64]), torch.Size([2, 1536, 32, 32])]
-
-        # collect for nesttensors
-        outs_dict = {}
-        for idx, out_i in enumerate(outs):
-            m = tensor_list.mask
-            assert m is not None
-            mask = F.interpolate(m[None].float(), size=out_i.shape[-2:]).to(torch.bool)[0]
-            outs_dict[idx] = NestedTensor(out_i, mask)
-
-        return outs_dict
-
-
-    def train(self, mode=True):
-        """Convert the model into training mode while keeping layers frozen."""
-        super(SwinTransformer, self).train(mode)
-        self._freeze_stages()
-
-
-def build_swin_transformer(modelname, pretrain_img_size, **kw):
-    assert modelname in [
-        "swin_T_224_1k",
-        "swin_B_224_22k",
-        "swin_B_384_22k",
-        "swin_L_224_22k",
-        "swin_L_384_22k",
-    ]
-
-    model_para_dict = {
-        "swin_T_224_1k": dict(
-            embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24], window_size=7
-        ),
-        "swin_B_224_22k": dict(
-            embed_dim=128, depths=[2, 2, 18, 2], num_heads=[4, 8, 16, 32], window_size=7
-        ),
-        "swin_B_384_22k": dict(
-            embed_dim=128, depths=[2, 2, 18, 2], num_heads=[4, 8, 16, 32], window_size=12
-        ),
-        "swin_L_224_22k": dict(
-            embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48], window_size=7
-        ),
-        "swin_L_384_22k": dict(
-            embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48], window_size=12
-        ),
-    }
-    kw_cfg = model_para_dict[modelname]
-    kw_cfg.update(kw)
-    model = SwinTransformer(pretrain_img_size=pretrain_img_size, **kw_cfg)
-    return model
-
-
-if __name__ == "__main__":
-    model = build_swin_transformer("swin_L_384_22k", 384, dilation=True)
-    x = torch.rand(2, 3, 1024, 1024)
-    y = model.forward_raw(x)
-    import ipdb
-
-    ipdb.set_trace()
-    x = torch.rand(2, 3, 384, 384)
-    y = model.forward_raw(x)
diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/.github/README_cn.md b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/.github/README_cn.md
deleted file mode 100644
index 65ecd31a3e6989cf6882882ae55e866c37339ac0..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/.github/README_cn.md
+++ /dev/null
@@ -1,344 +0,0 @@
-
    -

    - - -

-
-[English](../README.md) | Simplified Chinese
    -
-<!-- badges: YOLOv5 CI | YOLOv5 Citation | Docker Pulls | Run on Gradient | Open In Colab | Open In Kaggle -->
    - -
    -

-YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset. It represents Ultralytics' open research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.

    - -
-
    -
-
-
-## Documentation
-
-See the [YOLOv5 Docs](https://docs.ultralytics.com) for full documentation on training, testing and deployment.
-
-## Quick Start Examples
-
-**Install**
-
-Clone the repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a [**Python>=3.7.0**](https://www.python.org/) environment, including [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
-
-```bash
-git clone https://github.com/ultralytics/yolov5  # clone
-cd yolov5
-pip install -r requirements.txt  # install
-```
-
    - -
-**Inference**
-
-YOLOv5 [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36) inference. [Models](https://github.com/ultralytics/yolov5/tree/master/models) download automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases).
-
-```python
-import torch
-
-# Model
-model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # or yolov5n - yolov5x6, custom
-
-# Images
-img = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, PIL, OpenCV, numpy, list
-
-# Inference
-results = model(img)
-
-# Results
-results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
-```
-
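-The object returned by `model(img)` also exposes the detections as a pandas DataFrame, which is convenient for post-processing. A minimal sketch, assuming the `model` and `img` from the snippet above (the 0.5 threshold is an illustrative choice):
-
-```python
-# one row per detection: xmin, ymin, xmax, ymax, confidence, class, name
-df = results.pandas().xyxy[0]
-print(df[df['confidence'] > 0.5])  # keep only confident detections
-```
-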
    - -
-**Inference with detect.py**
-
-`detect.py` runs inference on a variety of sources, downloading [models](https://github.com/ultralytics/yolov5/tree/master/models) automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
-
-```bash
-python detect.py --source 0  # webcam
-                          img.jpg  # image
-                          vid.mp4  # video
-                          path/  # directory
-                          'path/*.jpg'  # glob
-                          'https://youtu.be/Zgi9g1ksQHc'  # YouTube
-                          'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
-```
-
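-A minimal sketch combining `--source` with a few other common flags (the weights file and threshold below are illustrative choices, not required values):
-
-```bash
-python detect.py --weights yolov5s.pt --source data/images --img 640 --conf-thres 0.4
-```
-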
    - -
-**Training**
-
-The commands below reproduce YOLOv5 [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) results. [Models](https://github.com/ultralytics/yolov5/tree/master/models) and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) download automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are 1/2/4/6/8 days on a V100 GPU (multi-GPU is proportionally faster). Use the largest `--batch-size` possible, or pass `--batch-size -1` for YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). Batch sizes shown are for V100-16GB.
-
-```bash
-python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml  --batch-size 128
-                                                                 yolov5s                    64
-                                                                 yolov5m                    40
-                                                                 yolov5l                    24
-                                                                 yolov5x                    16
-```
-
-
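-For multi-GPU training, a minimal DDP sketch (the GPU count and batch size are illustrative; see the Multi-GPU Training tutorial for details):
-
-```bash
-python -m torch.distributed.run --nproc_per_node 2 train.py --data coco.yaml --weights yolov5s.pt --batch-size 64 --device 0,1
-```
-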
    - -
-**Tutorials**
-
-- [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) 🚀 RECOMMENDED
-- [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results) ☘️ RECOMMENDED
-- [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475)
-- [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36) 🌟 NEW
-- [TFLite, ONNX, CoreML, TensorRT Export](https://github.com/ultralytics/yolov5/issues/251) 🚀
-- [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303)
-- [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318)
-- [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304)
-- [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607)
-- [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314)
-- [Architecture Summary](https://github.com/ultralytics/yolov5/issues/6998) 🌟 NEW
-- [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289)
-- [Roboflow: Datasets, Labels, and Active Learning](https://github.com/ultralytics/yolov5/issues/4975) 🌟 NEW
-- [ClearML Logging](https://github.com/ultralytics/yolov5/tree/master/utils/loggers/clearml) 🌟 NEW
-- [Deci Platform](https://github.com/ultralytics/yolov5/wiki/Deci-Platform) 🌟 NEW
-
-
-## Integrations
-
-|Roboflow|ClearML ⭐ NEW|Comet ⭐ NEW|Deci ⭐ NEW|
-|:-:|:-:|:-:|:-:|
-|Label and export your custom datasets directly to YOLOv5 for training with [Roboflow](https://roboflow.com/?ref=ultralytics)|Automatically track, visualize and even remotely train YOLOv5 using [ClearML](https://cutt.ly/yolov5-readme-clearml) (open-source!)|Free forever, [Comet](https://bit.ly/yolov5-readme-comet) lets you save YOLOv5 models, resume training, and interactively visualise and debug predictions|Automatically compile and quantize YOLOv5 for better inference performance in one click at [Deci](https://bit.ly/yolov5-deci-platform)|
-
-## Ultralytics HUB
-
-[Ultralytics HUB](https://bit.ly/ultralytics_hub) is our ⭐ **NEW** no-code solution to visualize datasets, train YOLOv5 🚀 models, and deploy to the real world in a seamless experience. Get started for **Free** now!
-
-## Why YOLOv5
-
-<details>
-<summary>YOLOv5-P5 640 Figure (click to expand)</summary>
-
-</details>
-<details>
-<summary>Figure Notes (click to expand)</summary>
-
-- **COCO AP val** denotes the mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536.
-- **GPU Speed** measures average inference time per image on the [COCO val2017](http://cocodataset.org) dataset using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch size 32.
-- **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 8.
-- **Reproduce** mAP by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
-
-</details>
-
-### Pretrained Checkpoints
-
-| Model | size<br>(pixels) | mAP<sup>val</sup><br>0.5:0.95 | mAP<sup>val</sup><br>0.5 | Speed<br>CPU b1<br>(ms) | Speed<br>V100 b1<br>(ms) | Speed<br>V100 b32<br>(ms) | params<br>(M) | FLOPs<br>@640 (B) |
-|------|------|------|------|------|------|------|------|------|
-| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** |
-| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
-| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
-| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
-| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
-| | | | | | | | | |
-| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
-| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
-| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
-| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
-| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5x6.pt)<br>+ [TTA][TTA] | 1280<br>1536 | 55.0<br>**55.8** | 72.7<br>**72.7** | 3136<br>- | 26.2<br>- | 19.4<br>- | 140.7<br>- | 209.8<br>- |
-
-<details>
-<summary>Table Notes (click to expand)</summary>
-
-- All checkpoints are trained to 300 epochs with default settings. Nano and Small models use [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) hyps, all others use [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).
-- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](http://cocodataset.org) dataset.<br>Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
-- **Speed** averaged over COCO val images using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS times (~1 ms/img) not included.<br>Reproduce by `python val.py --data coco.yaml --img 640 --task speed --batch 1`
-- **TTA** [Test Time Augmentation](https://github.com/ultralytics/yolov5/issues/303) includes reflection and scale augmentations.<br>Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
-
-</details>
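-
-Any checkpoint in the table above can be loaded by name through PyTorch Hub, using the same pattern as the earlier inference example; for instance, the P6 model at its native 1280 resolution:
-
-```python
-import torch
-
-model = torch.hub.load('ultralytics/yolov5', 'yolov5x6')  # P6 model from the table
-results = model('https://ultralytics.com/images/zidane.jpg', size=1280)  # inference at 1280
-results.print()
-```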
-
-## Classification ⭐ NEW
-
-The YOLOv5 [v6.2 release](https://github.com/ultralytics/yolov5/releases) supports classification model training, validation, prediction and export! This makes training classifier models extremely simple. Click below to get started.
-
-<details>
-<summary>Classification Checkpoints (click to expand)</summary>
-
-We trained YOLOv5-cls classification models on ImageNet for 90 epochs using a 4xA100 instance, and we trained ResNet and EfficientNet models alongside with the same default settings to compare. We exported all models to ONNX FP32 for CPU speed tests and to TensorRT FP16 for GPU speed tests. For easy reproducibility, we ran all speed tests on [Google Colab Pro](https://colab.research.google.com/signup).
-
-| Model | size<br>(pixels) | acc<br>top1 | acc<br>top5 | Training<br>90 epochs<br>4xA100 (hours) | Speed<br>ONNX CPU<br>(ms) | Speed<br>TensorRT V100<br>(ms) | params<br>(M) | FLOPs<br>@224 (B) |
-|------|------|------|------|------|------|------|------|------|
-| [YOLOv5n-cls](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5n-cls.pt) | 224 | 64.6 | 85.4 | 7:59 | **3.3** | **0.5** | **2.5** | **0.5** |
-| [YOLOv5s-cls](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s-cls.pt) | 224 | 71.5 | 90.2 | 8:09 | 6.6 | 0.6 | 5.4 | 1.4 |
-| [YOLOv5m-cls](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5m-cls.pt) | 224 | 75.9 | 92.9 | 10:06 | 15.5 | 0.9 | 12.9 | 3.9 |
-| [YOLOv5l-cls](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5l-cls.pt) | 224 | 78.0 | 94.0 | 11:56 | 26.9 | 1.4 | 26.5 | 8.5 |
-| [YOLOv5x-cls](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5x-cls.pt) | 224 | **79.0** | **94.4** | 15:04 | 54.3 | 1.8 | 48.1 | 15.9 |
-| | | | | | | | | |
-| [ResNet18](https://github.com/ultralytics/yolov5/releases/download/v6.2/resnet18.pt) | 224 | 70.3 | 89.5 | **6:47** | 11.2 | 0.5 | 11.7 | 3.7 |
-| [ResNet34](https://github.com/ultralytics/yolov5/releases/download/v6.2/resnet34.pt) | 224 | 73.9 | 91.8 | 8:33 | 20.6 | 0.9 | 21.8 | 7.4 |
-| [ResNet50](https://github.com/ultralytics/yolov5/releases/download/v6.2/resnet50.pt) | 224 | 76.8 | 93.4 | 11:10 | 23.4 | 1.0 | 25.6 | 8.5 |
-| [ResNet101](https://github.com/ultralytics/yolov5/releases/download/v6.2/resnet101.pt) | 224 | 78.5 | 94.3 | 17:10 | 42.1 | 1.9 | 44.5 | 15.9 |
-| | | | | | | | | |
-| [EfficientNet_b0](https://github.com/ultralytics/yolov5/releases/download/v6.2/efficientnet_b0.pt) | 224 | 75.1 | 92.4 | 13:03 | 12.5 | 1.3 | 5.3 | 1.0 |
-| [EfficientNet_b1](https://github.com/ultralytics/yolov5/releases/download/v6.2/efficientnet_b1.pt) | 224 | 76.4 | 93.2 | 17:04 | 14.9 | 1.6 | 7.8 | 1.5 |
-| [EfficientNet_b2](https://github.com/ultralytics/yolov5/releases/download/v6.2/efficientnet_b2.pt) | 224 | 76.6 | 93.4 | 17:10 | 15.9 | 1.6 | 9.1 | 1.7 |
-| [EfficientNet_b3](https://github.com/ultralytics/yolov5/releases/download/v6.2/efficientnet_b3.pt) | 224 | 77.7 | 94.0 | 19:19 | 18.9 | 1.9 | 12.2 | 2.4 |
-
-<details>
-<summary>Table Notes (click to expand)</summary>
-
-- All checkpoints are trained to 90 epochs with the SGD optimizer, `lr0=0.001` and `weight_decay=5e-5`, at image size 224 with all default settings.<br>Runs logged to https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2.
-- **Accuracy** values are for single-model single-scale on the [ImageNet-1k](https://www.image-net.org/index.php) dataset.<br>Reproduce by `python classify/val.py --data ../datasets/imagenet --img 224`
-- **Speed** averaged over 100 inference images using a Google [Colab Pro](https://colab.research.google.com/signup) V100 High-RAM instance.<br>Reproduce by `python classify/val.py --data ../datasets/imagenet --img 224 --batch 1`
-- **Export** to ONNX at FP32 and TensorRT at FP16 done with `export.py`.<br>Reproduce by `python export.py --weights yolov5s-cls.pt --include engine onnx --imgsz 224`
-
-</details>
-</details>
-
-<details>
-<summary>Classification Usage Examples (click to expand)</summary>
-
-### Train
-YOLOv5 classification training supports auto-download of the MNIST, Fashion-MNIST, CIFAR10, CIFAR100, Imagenette, Imagewoof and ImageNet datasets via the `--data` argument. For example, start training on MNIST with `--data mnist`.
-
-```bash
-# Single-GPU
-python classify/train.py --model yolov5s-cls.pt --data cifar100 --epochs 5 --img 224 --batch 128
-
-# Multi-GPU DDP
-python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/train.py --model yolov5s-cls.pt --data imagenet --epochs 5 --img 224 --device 0,1,2,3
-```
-
-### Val
-Validate YOLOv5m-cls accuracy on the ImageNet-1k dataset:
-```bash
-bash data/scripts/get_imagenet.sh --val  # download ImageNet val split (6.3G, 50000 images)
-python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224  # validate
-```
-
-### Predict
-Use pretrained YOLOv5s-cls.pt to predict bus.jpg:
-```bash
-python classify/predict.py --weights yolov5s-cls.pt --data data/images/bus.jpg
-```
-```python
-model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s-cls.pt')  # load from PyTorch Hub
-```
-
-### Export
-Export a group of trained YOLOv5s-cls, ResNet and EfficientNet models to ONNX and TensorRT:
-```bash
-python export.py --weights yolov5s-cls.pt resnet50.pt efficientnet_b0.pt --include onnx engine --img 224
-```
-</details>
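-
-Once exported, the ONNX classifier can be run without PyTorch. Below is a minimal onnxruntime sketch: the random tensor is a stand-in for a preprocessed 224×224 image, and the input name is queried from the exported graph rather than assumed:
-
-```python
-import numpy as np
-import onnxruntime as ort
-
-sess = ort.InferenceSession('yolov5s-cls.onnx')            # model exported above
-inp = sess.get_inputs()[0]                                 # query the real input name/shape
-x = np.random.rand(1, 3, 224, 224).astype(np.float32)      # placeholder preprocessed image
-logits = sess.run(None, {inp.name: x})[0]
-print(int(logits.argmax()))                                # predicted ImageNet class index
-```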
-
-## Contribute
-
-We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please take a look at our [Contributing Guide](CONTRIBUTING.md) before you start, and fill out the [YOLOv5 Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experience. A sincere thank you to all our contributors!
-
-## Contact
-
-For YOLOv5 bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For business inquiries or professional support requests please visit [https://ultralytics.com/contact](https://ultralytics.com/contact).
-
-[assets]: https://github.com/ultralytics/yolov5/releases
-[tta]: https://github.com/ultralytics/yolov5/issues/303
diff --git a/spaces/Iqbalzz/hololive-rvc-models/config.py b/spaces/Iqbalzz/hololive-rvc-models/config.py
deleted file mode 100644
index c0c16e0017efbcaf250cb539a1d0edb4e83575e4..0000000000000000000000000000000000000000
--- a/spaces/Iqbalzz/hololive-rvc-models/config.py
+++ /dev/null
@@ -1,88 +0,0 @@
-######################## Hardware parameters ########################
-
-# Set cuda:x, cpu or mps; x is the GPU index. Only NVIDIA GPUs / Apple Silicon acceleration are supported.
-device = "cuda:0"
-
-# For 9/10/20/30/40-series GPUs simply set True; quality is unaffected, and >=20-series GPUs get a speedup.
-is_half = True
-
-# Default 0 uses all threads; set a number to limit CPU usage.
-n_cpu = 0
-
-######################## Hardware parameters ########################
-
-
-################## Parameter-handling logic below, do not modify ##################
-
-######################## Command-line arguments ########################
-import argparse
-
-parser = argparse.ArgumentParser()
-parser.add_argument("--port", type=int, default=7865, help="Listen port")
-parser.add_argument("--pycmd", type=str, default="python", help="Python command")
-parser.add_argument("--colab", action="store_true", help="Launch in colab")
-parser.add_argument(
-    "--noparallel", action="store_true", help="Disable parallel processing"
-)
-parser.add_argument(
-    "--noautoopen", action="store_true", help="Do not open in browser automatically"
-)
-cmd_opts, unknown = parser.parse_known_args()
-
-python_cmd = cmd_opts.pycmd
-listen_port = cmd_opts.port
-iscolab = cmd_opts.colab
-noparallel = cmd_opts.noparallel
-noautoopen = cmd_opts.noautoopen
-######################## Command-line arguments ########################
-
-import sys
-import torch
-
-
-# has_mps is only available in nightly pytorch (for now) and macOS 12.3+.
-# check `getattr` and try it for compatibility
-def has_mps() -> bool:
-    if sys.platform != "darwin":
-        return False
-    else:
-        if not getattr(torch, "has_mps", False):
-            return False
-        try:
-            torch.zeros(1).to(torch.device("mps"))
-            return True
-        except Exception:
-            return False
-
-
-if not torch.cuda.is_available():
-    if has_mps():
-        print("No supported NVIDIA GPU found, using MPS for inference")
-        device = "mps"
-    else:
-        print("No supported NVIDIA GPU found, using CPU for inference")
-        device = "cpu"
-        is_half = False
-
-if device not in ["cpu", "mps"]:
-    gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1]))
-    if "16" in gpu_name or "MX" in gpu_name:
-        print("16-series / MX-series GPUs are forced to single precision")
-        is_half = False
-
-from multiprocessing import cpu_count
-
-if n_cpu == 0:
-    n_cpu = cpu_count()
-if is_half:
-    # 6 GB VRAM configuration
-    x_pad = 3
-    x_query = 10
-    x_center = 60
-    x_max = 65
-else:
-    # 5 GB VRAM configuration
-    x_pad = 1
-    x_query = 6
-    x_center = 38
-    x_max = 41
diff --git a/spaces/IsaacK/streamlit-test/pages/view.py b/spaces/IsaacK/streamlit-test/pages/view.py
deleted file mode 100644
index 017bc915d8585c5299bdf2ce8e7c95fe9fcff8bc..0000000000000000000000000000000000000000
--- a/spaces/IsaacK/streamlit-test/pages/view.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import streamlit as st
-import os.path
-import sqlite3
-
-# Custom imports
-from pages.utils import db_connect
-
-def app():
-
-    '''delete form_submit to run quiz maker on return to page'''
-    if "form_submit" in st.session_state.keys():
-        del st.session_state.form_submit
-    if "form_upload" in st.session_state.keys():
-        del st.session_state.form_upload
-
-    st.markdown("## View Data")
-
-    BASE_DIR = os.path.dirname(os.path.abspath(__file__))
-    DATABASE = os.path.join(BASE_DIR, 'quiz_maker.db')
-
-    c, conn = db_connect(DATABASE)
-
-    size_query = "SELECT page_count * page_size as size FROM pragma_page_count(), pragma_page_size()"
-
-    c.execute(size_query)
-
-    st.markdown(f'#####
Database size: {int(c.fetchone()[0] / 1000)} KB') - - query = st.text_input("Query", placeholder="Type query here") - - if len(query) > 1: - try: - for idx, item in enumerate(c.execute(query)): - st.write(f'{idx}: {item}') - except Exception as e: - st.write("Query failed. " + str(e).capitalize()) diff --git a/spaces/JUNGU/latex-ocr-wthGPT/README.md b/spaces/JUNGU/latex-ocr-wthGPT/README.md deleted file mode 100644 index 0bb8221272b90aab2ed811aac31954a0104678cc..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/latex-ocr-wthGPT/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Latex Ocr -emoji: 👀 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: mit -duplicated_from: yhshin/latex-ocr ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/JacobLinCool/captcha-recognizer/src/__init__.py b/spaces/JacobLinCool/captcha-recognizer/src/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Jamkonams/AutoGPT/autogpt/speech/brian.py b/spaces/Jamkonams/AutoGPT/autogpt/speech/brian.py deleted file mode 100644 index 821fdf2f482a9cfa928e5c9680152ad6766d8326..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/autogpt/speech/brian.py +++ /dev/null @@ -1,40 +0,0 @@ -""" Brian speech module for autogpt """ -import os - -import requests -from playsound import playsound - -from autogpt.speech.base import VoiceBase - - -class BrianSpeech(VoiceBase): - """Brian speech module for autogpt""" - - def _setup(self) -> None: - """Setup the voices, API key, etc.""" - pass - - def _speech(self, text: str, _: int = 0) -> bool: - """Speak text using Brian with the streamelements API - - Args: - text (str): The text to speak - - Returns: - bool: True if the request was successful, False otherwise - """ - tts_url = ( - f"https://api.streamelements.com/kappa/v2/speech?voice=Brian&text={text}" - ) - response = requests.get(tts_url) - - if response.status_code == 200: - with open("speech.mp3", "wb") as f: - f.write(response.content) - playsound("speech.mp3") - os.remove("speech.mp3") - return True - else: - print("Request failed with status code:", response.status_code) - print("Response content:", response.content) - return False diff --git a/spaces/Jeff2323/ai-comic-factory/src/components/ui/dropdown-menu.tsx b/spaces/Jeff2323/ai-comic-factory/src/components/ui/dropdown-menu.tsx deleted file mode 100644 index 5803489a1d197a9db5018e413e63abe84b2efb8e..0000000000000000000000000000000000000000 --- a/spaces/Jeff2323/ai-comic-factory/src/components/ui/dropdown-menu.tsx +++ /dev/null @@ -1,200 +0,0 @@ -"use client" - -import * as React from "react" -import * as DropdownMenuPrimitive from "@radix-ui/react-dropdown-menu" -import { Check, ChevronRight, Circle } from "lucide-react" - -import { cn } from "@/lib/utils" - -const DropdownMenu = DropdownMenuPrimitive.Root - -const DropdownMenuTrigger = DropdownMenuPrimitive.Trigger - -const DropdownMenuGroup = DropdownMenuPrimitive.Group - -const DropdownMenuPortal = DropdownMenuPrimitive.Portal - -const DropdownMenuSub = DropdownMenuPrimitive.Sub - -const DropdownMenuRadioGroup = DropdownMenuPrimitive.RadioGroup - -const DropdownMenuSubTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, children, ...props }, ref) => ( - - {children} - - -)) 
-DropdownMenuSubTrigger.displayName = - DropdownMenuPrimitive.SubTrigger.displayName - -const DropdownMenuSubContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSubContent.displayName = - DropdownMenuPrimitive.SubContent.displayName - -const DropdownMenuContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - - - -)) -DropdownMenuContent.displayName = DropdownMenuPrimitive.Content.displayName - -const DropdownMenuItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuItem.displayName = DropdownMenuPrimitive.Item.displayName - -const DropdownMenuCheckboxItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, checked, ...props }, ref) => ( - - - - - - - {children} - -)) -DropdownMenuCheckboxItem.displayName = - DropdownMenuPrimitive.CheckboxItem.displayName - -const DropdownMenuRadioItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -DropdownMenuRadioItem.displayName = DropdownMenuPrimitive.RadioItem.displayName - -const DropdownMenuLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuLabel.displayName = DropdownMenuPrimitive.Label.displayName - -const DropdownMenuSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSeparator.displayName = DropdownMenuPrimitive.Separator.displayName - -const DropdownMenuShortcut = ({ - className, - ...props -}: React.HTMLAttributes) => { - return ( - - ) -} -DropdownMenuShortcut.displayName = "DropdownMenuShortcut" - -export { - DropdownMenu, - DropdownMenuTrigger, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuCheckboxItem, - DropdownMenuRadioItem, - DropdownMenuLabel, - DropdownMenuSeparator, - DropdownMenuShortcut, - DropdownMenuGroup, - DropdownMenuPortal, - DropdownMenuSub, - DropdownMenuSubContent, - DropdownMenuSubTrigger, - DropdownMenuRadioGroup, -} diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/html/footer.html b/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/html/footer.html deleted file mode 100644 index bca27bb8066dfab5cc0acf7be349a514de5f9a58..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/html/footer.html +++ /dev/null @@ -1 +0,0 @@ -
    {versions}
    diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/web_assets/javascript/updater.js b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/web_assets/javascript/updater.js deleted file mode 100644 index 68c0ff21aeb92bc538036c42264c41e3787097a7..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/web_assets/javascript/updater.js +++ /dev/null @@ -1,202 +0,0 @@ - -var updateInfoGotten = false; -var isLatestVersion = localStorage.getItem('isLatestVersion') || false; - - -var statusObserver = new MutationObserver(function (mutationsList) { - for (const mutation of mutationsList) { - if (mutation.type === 'attributes' || mutation.type === 'childList') { - if (statusDisplay.innerHTML.includes(']*>([^<]*)<\/a>/g; - const versionMatch = reVersion.exec(currentVersionElement.innerHTML); - const currentVersion = (versionMatch && versionMatch[1].length == 8) ? versionMatch[1] : null; - const latestVersionElement = document.getElementById('latest-version-title'); - const versionInfoElement = document.getElementById('version-info-title'); - releaseNoteElement = document.getElementById('release-note-content'); - updatingInfoElement = document.getElementById('updating-info'); - - const versionTime = document.getElementById('version-time').innerText; - const localVersionTime = versionTime !== "unknown" ? (new Date(versionTime)).getTime() : 0; - updateInfoGotten = true; //无论成功与否都只执行一次,否则容易api超限... - try { - const data = await getLatestRelease(); - const releaseNote = data.body; - if (releaseNote) { - releaseNoteElement.innerHTML = marked.parse(releaseNote, {mangle: false, headerIds: false}); - } - const latestVersion = data.tag_name; - if (currentVersion) { - if (latestVersion <= currentVersion) { - noUpdate(); - } else { - latestVersionElement.textContent = latestVersion; - console.log(`New version ${latestVersion} found!`); - if (!isInIframe) openUpdateToast(); - } - } else { //如果当前版本号获取失败,使用时间比较 - const latestVersionTime = (new Date(data.created_at)).getTime(); - if (latestVersionTime) { - const latestVersionInfo = `${latestVersion}` - const manualUpdateInfo = `manual update` - if (localVersionTime == 0) { - const infoMessage = `Local version check failed. \nBut latest revision is ${latestVersionInfo}. \n\nWhen Update needed, \n- If you are using Docker, try to update package. \n- If you didn't use git, try ${manualUpdateInfo}.` - versionInfoElement.innerHTML = marked.parse(infoMessage, {mangle: false, headerIds: false}); - console.log(`New version ${latestVersion} found!`); - disableUpdateBtn_enableCancelBtn(); - } else if (localVersionTime < latestVersionTime) { - const infoMessage = `Local version check failed, it seems to be a local rivision. \n\nBut latest revision is ${latestVersionInfo}. Try ${manualUpdateInfo}.` - versionInfoElement.innerHTML = marked.parse(infoMessage, {mangle: false, headerIds: false}); - console.log(`New version ${latestVersion} found!`); - disableUpdateBtn_enableCancelBtn(); - // if (!isInIframe) openUpdateToast(); - } else { - noUpdate("Local version check failed, it seems to be a local rivision.
    But your revision is newer than the latest release."); - } - } - } - currentTime = new Date().getTime(); - localStorage.setItem('lastCheckTime', currentTime); - } catch (error) { - console.error(error); - } -} - -function getUpdateInfo() { - window.open('https://github.com/gaizhenbiao/chuanhuchatgpt/releases/latest', '_blank'); - closeUpdateToast(); -} - -var updateSpinner = null; - -function bgUpdateChuanhu() { - updateChuanhuBtn.click(); - updatingInfoElement.innerText = i18n(updatingMsg_i18n); - var updatingSpinner = document.getElementById('updating-spinner'); - try { - updateSpinner = new Spin.Spinner({color:'#06AE56',top:'45%',lines:9}).spin(updatingSpinner); - } catch (error) { - console.error("Can't create spinner") - } - updatingInfoElement.classList.remove('hideK'); - disableUpdateBtns(); - const releaseNoteWrap = document.getElementById('release-note-wrap'); - releaseNoteWrap.style.setProperty('display', 'none'); - statusObserver.observe(statusDisplay, { childList: true, subtree: true, characterData: true}); -} -function cancelUpdate() { - closeUpdateToast(); -} -function openUpdateToast() { - showingUpdateInfo = true; - setUpdateWindowHeight(); -} -function closeUpdateToast() { - updateToast.style.setProperty('top', '-500px'); - showingUpdateInfo = false; - if (updatingInfoElement.classList.contains('hideK') === false) { - updatingInfoElement.classList.add('hideK'); - } -} -function manualCheckUpdate() { - openUpdateToast(); - updateLatestVersion(); - currentTime = new Date().getTime(); - localStorage.setItem('lastCheckTime', currentTime); -} -function noUpdate(message="") { - localStorage.setItem('isLatestVersion', 'true'); - isLatestVersion = true; - noUpdateHtml(message); -} -function noUpdateHtml(message="") { - const versionInfoElement = document.getElementById('version-info-title'); - const gotoUpdateBtn = document.getElementById('goto-update-btn'); - const closeUpdateBtn = document.getElementById('close-update-btn'); - const releaseNoteWrap = document.getElementById('release-note-wrap'); - releaseNoteWrap.style.setProperty('display', 'none'); - if (message === "") { - versionInfoElement.textContent = i18n(usingLatest_i18n) - } else { - versionInfoElement.innerHTML = message; - } - gotoUpdateBtn.classList.add('hideK'); - closeUpdateBtn.classList.remove('hideK'); -} - -var updateStatus = null; -function getUpdateStatus() { - updateStatus = statusDisplay.querySelector("#update-status"); - if (updateStatus) { - return updateStatus.innerText; - } else { - return "unknown"; - } -} - -function disableUpdateBtns() { - const updatesButtons = document.querySelectorAll('.btn-update'); - updatesButtons.forEach( function (btn) { - btn.disabled = true; - }); -} -function enableUpdateBtns() { - const updatesButtons = document.querySelectorAll('.btn-update'); - updatesButtons.forEach( function (btn) { - btn.disabled = false; - }); -} -function disableUpdateBtn_enableCancelBtn() { - document.querySelector('#update-button.btn-update').disabled = true; - document.querySelector('#cancel-button.btn-update').disabled = false; -} - -function setUpdateWindowHeight() { - if (!showingUpdateInfo) {return;} - const scrollPosition = window.scrollY; - // const originalTop = updateToast.style.getPropertyValue('top'); - const resultTop = scrollPosition - 20 + 'px'; - updateToast.style.setProperty('top', resultTop); -} diff --git a/spaces/KaygNas/cut-it/src/utils.ts b/spaces/KaygNas/cut-it/src/utils.ts deleted file mode 100644 index 
5401acd9d5f84eda7fb608038f96bb8da089b71a..0000000000000000000000000000000000000000 --- a/spaces/KaygNas/cut-it/src/utils.ts +++ /dev/null @@ -1,12 +0,0 @@ -const PREFIX = '[CutIt] ' -export function warn(...args: any) { - console.warn(PREFIX, ...args) -} -export function error(...args: any) { - console.error(PREFIX, ...args) -} - -export function assert(condition: boolean, message: string = 'Assertion Error'): asserts condition { - if (!condition) - throw new Error(message) -} diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/train.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/train.py deleted file mode 100644 index 7104d5469eebcf7046450e08d4a5836f87705c39..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/train.py +++ /dev/null @@ -1,106 +0,0 @@ -from pydantic import BaseModel, Field -import os -from pathlib import Path -from enum import Enum -from typing import Any -from synthesizer.hparams import hparams -from synthesizer.train import train as synt_train - -# Constants -SYN_MODELS_DIRT = f"synthesizer{os.sep}saved_models" -ENC_MODELS_DIRT = f"encoder{os.sep}saved_models" - - -# EXT_MODELS_DIRT = f"ppg_extractor{os.sep}saved_models" -# CONV_MODELS_DIRT = f"ppg2mel{os.sep}saved_models" -# ENC_MODELS_DIRT = f"encoder{os.sep}saved_models" - -# Pre-Load models -if os.path.isdir(SYN_MODELS_DIRT): - synthesizers = Enum('synthesizers', list((file.name, file) for file in Path(SYN_MODELS_DIRT).glob("**/*.pt"))) - print("Loaded synthesizer models: " + str(len(synthesizers))) -else: - raise Exception(f"Model folder {SYN_MODELS_DIRT} doesn't exist.") - -if os.path.isdir(ENC_MODELS_DIRT): - encoders = Enum('encoders', list((file.name, file) for file in Path(ENC_MODELS_DIRT).glob("**/*.pt"))) - print("Loaded encoders models: " + str(len(encoders))) -else: - raise Exception(f"Model folder {ENC_MODELS_DIRT} doesn't exist.") - -class Model(str, Enum): - DEFAULT = "default" - -class Input(BaseModel): - model: Model = Field( - Model.DEFAULT, title="模型类型", - ) - # datasets_root: str = Field( - # ..., alias="预处理数据根目录", description="输入目录(相对/绝对),不适用于ppg2mel模型", - # format=True, - # example="..\\trainning_data\\" - # ) - input_root: str = Field( - ..., alias="输入目录", description="预处理数据根目录", - format=True, - example=f"..{os.sep}audiodata{os.sep}SV2TTS{os.sep}synthesizer" - ) - run_id: str = Field( - "", alias="新模型名/运行ID", description="使用新ID进行重新训练,否则选择下面的模型进行继续训练", - ) - synthesizer: synthesizers = Field( - ..., alias="已有合成模型", - description="选择语音合成模型文件." - ) - gpu: bool = Field( - True, alias="GPU训练", description="选择“是”,则使用GPU训练", - ) - verbose: bool = Field( - True, alias="打印详情", description="选择“是”,输出更多详情", - ) - encoder: encoders = Field( - ..., alias="语音编码模型", - description="选择语音编码模型文件." - ) - save_every: int = Field( - 1000, alias="更新间隔", description="每隔n步则更新一次模型", - ) - backup_every: int = Field( - 10000, alias="保存间隔", description="每隔n步则保存一次模型", - ) - log_every: int = Field( - 500, alias="打印间隔", description="每隔n步则打印一次训练统计", - ) - -class AudioEntity(BaseModel): - content: bytes - mel: Any - -class Output(BaseModel): - __root__: int - - def render_output_ui(self, streamlit_app) -> None: # type: ignore - """Custom output UI. - If this method is implmeneted, it will be used instead of the default Output UI renderer. 
- """ - streamlit_app.subheader(f"Training started with code: {self.__root__}") - -def train(input: Input) -> Output: - """Train(训练)""" - - print(">>> Start training ...") - force_restart = len(input.run_id) > 0 - if not force_restart: - input.run_id = Path(input.synthesizer.value).name.split('.')[0] - - synt_train( - input.run_id, - input.input_root, - f"synthesizer{os.sep}saved_models", - input.save_every, - input.backup_every, - input.log_every, - force_restart, - hparams - ) - return Output(__root__=0) \ No newline at end of file diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/pre_net.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/pre_net.py deleted file mode 100644 index 886646a154c68298deeec09dbad736d617f73155..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/pre_net.py +++ /dev/null @@ -1,27 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F - -class PreNet(nn.Module): - def __init__(self, in_dims, fc1_dims=256, fc2_dims=128, dropout=0.5): - super().__init__() - self.fc1 = nn.Linear(in_dims, fc1_dims) - self.fc2 = nn.Linear(fc1_dims, fc2_dims) - self.p = dropout - - def forward(self, x): - """forward - - Args: - x (3D tensor with size `[batch_size, num_chars, tts_embed_dims]`): input texts list - - Returns: - 3D tensor with size `[batch_size, num_chars, encoder_dims]` - - """ - x = self.fc1(x) - x = F.relu(x) - x = F.dropout(x, self.p, training=True) - x = self.fc2(x) - x = F.relu(x) - x = F.dropout(x, self.p, training=True) - return x diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/display.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/display.py deleted file mode 100644 index fe7dd30bc5e4009a8b62a4805596f937f01befb5..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/display.py +++ /dev/null @@ -1,128 +0,0 @@ -import matplotlib.pyplot as plt -import time -import numpy as np -import sys - - -def progbar(i, n, size=16): - done = (i * size) // n - bar = '' - for i in range(size): - bar += '█' if i <= done else '░' - return bar - - -def stream(message) : - try: - sys.stdout.write("\r{%s}" % message) - except: - #Remove non-ASCII characters from message - message = ''.join(i for i in message if ord(i)<128) - sys.stdout.write("\r{%s}" % message) - - -def simple_table(item_tuples) : - - border_pattern = '+---------------------------------------' - whitespace = ' ' - - headings, cells, = [], [] - - for item in item_tuples : - - heading, cell = str(item[0]), str(item[1]) - - pad_head = True if len(heading) < len(cell) else False - - pad = abs(len(heading) - len(cell)) - pad = whitespace[:pad] - - pad_left = pad[:len(pad)//2] - pad_right = pad[len(pad)//2:] - - if pad_head : - heading = pad_left + heading + pad_right - else : - cell = pad_left + cell + pad_right - - headings += [heading] - cells += [cell] - - border, head, body = '', '', '' - - for i in range(len(item_tuples)) : - - temp_head = f'| {headings[i]} ' - temp_body = f'| {cells[i]} ' - - border += border_pattern[:len(temp_head)] - head += temp_head - body += temp_body - - if i == len(item_tuples) - 1 : - head += '|' - body += '|' - border += '+' - - print(border) - print(head) - print(border) - print(body) - print(border) - print(' ') - - -def time_since(started) : - elapsed = time.time() - started - m = int(elapsed // 60) - s = int(elapsed % 60) - 
if m >= 60 : - h = int(m // 60) - m = m % 60 - return f'{h}h {m}m {s}s' - else : - return f'{m}m {s}s' - - -def save_attention(attn, path) : - fig = plt.figure(figsize=(12, 6)) - plt.imshow(attn.T, interpolation='nearest', aspect='auto') - fig.savefig(f'{path}.png', bbox_inches='tight') - plt.close(fig) - - -def save_and_trace_attention(attn, path, sw, step): - fig = plt.figure(figsize=(12, 6)) - plt.imshow(attn.T, interpolation='nearest', aspect='auto') - fig.savefig(f'{path}.png', bbox_inches='tight') - sw.add_figure('attention', fig, step) - plt.close(fig) - - -def save_spectrogram(M, path, length=None) : - M = np.flip(M, axis=0) - if length : M = M[:, :length] - fig = plt.figure(figsize=(12, 6)) - plt.imshow(M, interpolation='nearest', aspect='auto') - fig.savefig(f'{path}.png', bbox_inches='tight') - plt.close(fig) - - -def plot(array) : - fig = plt.figure(figsize=(30, 5)) - ax = fig.add_subplot(111) - ax.xaxis.label.set_color('grey') - ax.yaxis.label.set_color('grey') - ax.xaxis.label.set_fontsize(23) - ax.yaxis.label.set_fontsize(23) - ax.tick_params(axis='x', colors='grey', labelsize=23) - ax.tick_params(axis='y', colors='grey', labelsize=23) - plt.plot(array) - - -def plot_spec(M) : - M = np.flip(M, axis=0) - plt.figure(figsize=(18,4)) - plt.imshow(M, interpolation='nearest', aspect='auto') - plt.show() - diff --git a/spaces/Kunal7/squats-analysis/README.md b/spaces/Kunal7/squats-analysis/README.md deleted file mode 100644 index c68dd89bafba03fc7d3dca6dfbb1d9552107623e..0000000000000000000000000000000000000000 --- a/spaces/Kunal7/squats-analysis/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Squats Analysis -emoji: 🐨 -colorFrom: yellow -colorTo: green -sdk: streamlit -sdk_version: 1.10.0 -app_file: 🏠️_Demo.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KyanChen/FunSR/datasets/inr_diinn_sr_wrappers.py b/spaces/KyanChen/FunSR/datasets/inr_diinn_sr_wrappers.py deleted file mode 100644 index 776896a5c03576a5a5b625539c2ca265cf877107..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/datasets/inr_diinn_sr_wrappers.py +++ /dev/null @@ -1,76 +0,0 @@ -import copy -import functools -import os -import random -import math -from PIL import Image - -import numpy as np -import torch -from einops import rearrange -from torch.utils.data import Dataset -from torchvision import transforms - -from datasets import register -from utils import to_pixel_samples, to_coordinates - -import torchvision.transforms.functional as TF -import random -from typing import Sequence - - -class MyRotateTransform: - def __init__(self, angles: Sequence[int], p=0.5): - self.angles = angles - self.p = p - - def __call__(self, x): - if torch.rand(1) < self.p: - return x - angle = random.choice(self.angles) - return TF.rotate(x, angle) - - -@register('inr_diinn_select_scale_sr_warp') -class INRSelectScaleSRWarp(Dataset): - def __init__(self, - dataset, scales, patch_size=48, - augment=False, - val_mode=False, test_mode=False - ): - super(INRSelectScaleSRWarp, self).__init__() - self.dataset = dataset - self.scales = scales - self.patch_size = patch_size - self.augment = augment - self.test_mode = test_mode - self.val_mode = val_mode - - def __len__(self): - return len(self.dataset) - - def __getitem__(self, idx): - # import pdb - # pdb.set_trace() - img_hr_ori, file_name = self.dataset[idx] - class_name = os.path.basename(os.path.dirname(file_name)) - - sample = {} - for scale in 
self.scales: - hr_size = self.patch_size * scale - hr_size = int(hr_size) - - if self.test_mode or self.val_mode: - hr_size = int(self.patch_size * max(self.scales)) - img_hr = transforms.CenterCrop(hr_size)(img_hr_ori) - else: - img_hr = transforms.RandomCrop(hr_size)(copy.deepcopy(img_hr_ori)) - if self.augment: - img_hr = transforms.RandomHorizontalFlip(p=0.5)(img_hr) - img_hr = transforms.RandomVerticalFlip(p=0.5)(img_hr) - img_hr = MyRotateTransform([90, 180, 270], p=0.5)(img_hr) - - img_lr = transforms.Resize(self.patch_size, TF.InterpolationMode.BICUBIC)(img_hr) - sample[scale] = {'img': img_lr, 'gt': img_hr, 'class_name': class_name} - - return sample \ No newline at end of file diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/fgvcaircraft.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/fgvcaircraft.py deleted file mode 100644 index 696992c06bbf02f097d017a519d42f758ba5f16f..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/fgvcaircraft.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List - -from mmengine import get_file_backend, list_from_file - -from mmpretrain.registry import DATASETS -from .base_dataset import BaseDataset -from .categories import FGVCAIRCRAFT_CATEGORIES - - -@DATASETS.register_module() -class FGVCAircraft(BaseDataset): - """The FGVC_Aircraft Dataset. - - Support the `FGVC_Aircraft Dataset `_ Dataset. - After downloading and decompression, the dataset directory structure is as follows. - - FGVC_Aircraft dataset directory: :: - - fgvc-aircraft-2013b - └── data - ├── images - │ ├── 1.jpg - │ ├── 2.jpg - │ └── ... - ├── images_variant_train.txt - ├── images_variant_test.txt - ├── images_variant_trainval.txt - ├── images_variant_val.txt - ├── variants.txt - └── .... - - Args: - data_root (str): The root directory for FGVC_Aircraft dataset. - split (str, optional): The dataset split, supports "train", - "val", "trainval", and "test". Default to "trainval". 
- - Examples: - >>> from mmpretrain.datasets import FGVCAircraft - >>> train_dataset = FGVCAircraft(data_root='data/fgvc-aircraft-2013b', split='trainval') - >>> train_dataset - Dataset FGVCAircraft - Number of samples: 6667 - Number of categories: 100 - Root of dataset: data/fgvc-aircraft-2013b - >>> test_dataset = FGVCAircraft(data_root='data/fgvc-aircraft-2013b', split='test') - >>> test_dataset - Dataset FGVCAircraft - Number of samples: 3333 - Number of categories: 100 - Root of dataset: data/fgvc-aircraft-2013b - """ # noqa: E501 - - METAINFO = {'classes': FGVCAIRCRAFT_CATEGORIES} - - def __init__(self, data_root: str, split: str = 'trainval', **kwargs): - - splits = ['train', 'val', 'trainval', 'test'] - assert split in splits, \ - f"The split must be one of {splits}, but get '{split}'" - self.split = split - - self.backend = get_file_backend(data_root, enable_singleton=True) - ann_file = self.backend.join_path('data', - f'images_variant_{split}.txt') - data_prefix = self.backend.join_path('data', 'images') - test_mode = split == 'test' - - super(FGVCAircraft, self).__init__( - ann_file=ann_file, - data_root=data_root, - test_mode=test_mode, - data_prefix=data_prefix, - **kwargs) - - def load_data_list(self): - """Load images and ground truth labels.""" - - pairs = list_from_file(self.ann_file) - data_list = [] - for pair in pairs: - pair = pair.split() - img_name = pair[0] - class_name = ' '.join(pair[1:]) - img_name = f'{img_name}.jpg' - img_path = self.backend.join_path(self.img_prefix, img_name) - gt_label = self.METAINFO['classes'].index(class_name) - info = dict(img_path=img_path, gt_label=gt_label) - data_list.append(info) - - return data_list - - def extra_repr(self) -> List[str]: - """The extra repr information of the dataset.""" - body = [ - f'Root of dataset: \t{self.data_root}', - ] - return body diff --git a/spaces/LISHILEI/bingo/Dockerfile b/spaces/LISHILEI/bingo/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/LISHILEI/bingo/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/LabAlproITS/CyberDAS-FE/Dockerfile b/spaces/LabAlproITS/CyberDAS-FE/Dockerfile deleted file mode 100644 index c3b917fbdd2eb5fca49a70d6753cb34a3d007422..0000000000000000000000000000000000000000 --- a/spaces/LabAlproITS/CyberDAS-FE/Dockerfile +++ /dev/null @@ -1,20 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -RUN useradd -m -u 1000 user - -USER user - -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -WORKDIR $HOME/app - -COPY --chown=user . 
$HOME/app - -CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/LabAlproITS/CyberDAS-FE/README.md b/spaces/LabAlproITS/CyberDAS-FE/README.md deleted file mode 100644 index 8abb1091b678e64e1d7e68d4743da775b777d5aa..0000000000000000000000000000000000000000 --- a/spaces/LabAlproITS/CyberDAS-FE/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: CyberDAS FE -emoji: 📊 -colorFrom: green -colorTo: indigo -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Lamai/LAMAIGPT/tests/unit/json_tests.py b/spaces/Lamai/LAMAIGPT/tests/unit/json_tests.py deleted file mode 100644 index 25c383377708359b5cfec28e0625343c5692f15c..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/tests/unit/json_tests.py +++ /dev/null @@ -1,114 +0,0 @@ -import unittest - -from autogpt.json_utils.json_fix_llm import fix_and_parse_json - - -class TestParseJson(unittest.TestCase): - def test_valid_json(self): - # Test that a valid JSON string is parsed correctly - json_str = '{"name": "John", "age": 30, "city": "New York"}' - obj = fix_and_parse_json(json_str) - self.assertEqual(obj, {"name": "John", "age": 30, "city": "New York"}) - - def test_invalid_json_minor(self): - # Test that an invalid JSON string can be fixed with gpt - json_str = '{"name": "John", "age": 30, "city": "New York",}' - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), - {"name": "John", "age": 30, "city": "New York"}, - ) - - def test_invalid_json_major_with_gpt(self): - # Test that an invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=True), - {"name": "John", "age": 30, "city": "New York"}, - ) - - def test_invalid_json_major_without_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - # Assert that this raises an exception: - with self.assertRaises(Exception): - fix_and_parse_json(json_str, try_to_fix_with_gpt=False) - - def test_invalid_json_leading_sentence_with_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = """I suggest we start by browsing the repository to find any issues that we can fix. - -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix." 
-    }
-}"""
-        good_obj = {
-            "command": {
-                "name": "browse_website",
-                "args": {"url": "https://github.com/Torantulino/Auto-GPT"},
-            },
-            "thoughts": {
-                "text": "I suggest we start browsing the repository to find any issues that we can fix.",
-                "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.",
-                "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes",
-                "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.",
-                "speak": "I will start browsing the repository to find any issues we can fix.",
-            },
-        }
-        # Assert that the leading sentence is stripped and the JSON parses correctly:
-        self.assertEqual(
-            fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj
-        )
-
-    # Renamed with a _2 suffix: this method previously reused the name
-    # test_invalid_json_leading_sentence_with_gpt, silently shadowing the
-    # test above so it never ran.
-    def test_invalid_json_leading_sentence_with_gpt_2(self):
-        # Test that an invalid JSON string with a leading sentence can still be parsed
-        json_str = """I will first need to browse the repository (https://github.com/Torantulino/Auto-GPT) and identify any potential bugs that need fixing. I will use the "browse_website" command for this.
-
-{
-    "command": {
-        "name": "browse_website",
-        "args":{
-            "url": "https://github.com/Torantulino/Auto-GPT"
-        }
-    },
-    "thoughts":
-    {
-        "text": "Browsing the repository to identify potential bugs",
-        "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.",
-        "plan": "- Analyze the repository for potential bugs and areas of improvement",
-        "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.",
-        "speak": "I am browsing the repository to identify potential bugs."
-    }
-}"""
-        good_obj = {
-            "command": {
-                "name": "browse_website",
-                "args": {"url": "https://github.com/Torantulino/Auto-GPT"},
-            },
-            "thoughts": {
-                "text": "Browsing the repository to identify potential bugs",
-                "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.",
-                "plan": "- Analyze the repository for potential bugs and areas of improvement",
-                "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.",
-                "speak": "I am browsing the repository to identify potential bugs.",
-            },
-        }
-        # Assert that the leading sentence is stripped and the JSON parses correctly:
-        self.assertEqual(
-            fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj
-        )
-
-
-if __name__ == "__main__":
-    unittest.main()
diff --git a/spaces/LennardZuendorf/legalis/app.py b/spaces/LennardZuendorf/legalis/app.py
deleted file mode 100644
index d429b6ac449d72fe618816bab82a81bdd17c1a30..0000000000000000000000000000000000000000
--- a/spaces/LennardZuendorf/legalis/app.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import gradio as gr
-
-tts_examples = [
-    "I love learning machine learning",
-    "How do you do?",
-]
-
-scikit_demo = gr.load(
-    "LennardZuendorf/legalis-scikit",
-    src="models",
-    inputs=gr.Textbox(lines=5, max_lines=6, label="Input Text"),
-    title="scikit-learn"
-)
-
-bert_demo = gr.load(
-    "LennardZuendorf/legalis-bert",
-    src="models",
-    inputs=gr.Textbox(lines=5, max_lines=6, label="Input Text"),
-    title="bert"
-)
-
-# Tab labels ordered to match [scikit_demo, bert_demo]; they were previously swapped.
-demo = gr.TabbedInterface([scikit_demo, bert_demo], ["scikit-learn Classification", "BERT Classification"])
-
-if __name__ == "__main__":
-    demo.launch()
diff --git a/spaces/Linaqruf/Animagine-XL/utils.py b/spaces/Linaqruf/Animagine-XL/utils.py
deleted file mode 100644
index 740ced9943143c7a56a16273044e60d6ab3e9728..0000000000000000000000000000000000000000
--- a/spaces/Linaqruf/Animagine-XL/utils.py
+++ /dev/null
@@ -1,7 +0,0 @@
-def is_google_colab():
-    try:
-        import google.colab
-
-        return True
-    except:
-        return False
diff --git a/spaces/LittleYuan/My-Real-Bot/Training.md b/spaces/LittleYuan/My-Real-Bot/Training.md
deleted file mode 100644
index 64704e1d2e1f334984232afd12b245235b274a9e..0000000000000000000000000000000000000000
--- a/spaces/LittleYuan/My-Real-Bot/Training.md
+++ /dev/null
@@ -1,100 +0,0 @@
-# :computer: How to Train Real-ESRGAN
-
-The training codes have been released.
    -Note that the codes have a lot of refactoring. So there may be some bugs/performance drops. Welcome to report issues and I will also retrain the models. - -## Overview - -The training has been divided into two stages. These two stages have the same data synthesis process and training pipeline, except for the loss functions. Specifically, - -1. We first train Real-ESRNet with L1 loss from the pre-trained model ESRGAN. -1. We then use the trained Real-ESRNet model as an initialization of the generator, and train the Real-ESRGAN with a combination of L1 loss, perceptual loss and GAN loss. - -## Dataset Preparation - -We use DF2K (DIV2K and Flickr2K) + OST datasets for our training. Only HR images are required.
    -You can download from : - -1. DIV2K: http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip -2. Flickr2K: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar -3. OST: https://openmmlab.oss-cn-hangzhou.aliyuncs.com/datasets/OST_dataset.zip - -For the DF2K dataset, we use a multi-scale strategy, *i.e.*, we downsample HR images to obtain several Ground-Truth images with different scales. - -We then crop DF2K images into sub-images for faster IO and processing. - -You need to prepare a txt file containing the image paths. The following are some examples in `meta_info_DF2Kmultiscale+OST_sub.txt` (As different users may have different sub-images partitions, this file is not suitable for your purpose and you need to prepare your own txt file): - -```txt -DF2K_HR_sub/000001_s001.png -DF2K_HR_sub/000001_s002.png -DF2K_HR_sub/000001_s003.png -... -``` - -## Train Real-ESRNet - -1. Download pre-trained model [ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth) into `experiments/pretrained_models`. - ```bash - wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth -P experiments/pretrained_models - ``` -1. Modify the content in the option file `options/train_realesrnet_x4plus.yml` accordingly: - ```yml - train: - name: DF2K+OST - type: RealESRGANDataset - dataroot_gt: datasets/DF2K # modify to the root path of your folder - meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # modify to your own generate meta info txt - io_backend: - type: disk - ``` -1. If you want to perform validation during training, uncomment those lines and modify accordingly: - ```yml - # Uncomment these for validation - # val: - # name: validation - # type: PairedImageDataset - # dataroot_gt: path_to_gt - # dataroot_lq: path_to_lq - # io_backend: - # type: disk - - ... - - # Uncomment these for validation - # validation settings - # val: - # val_freq: !!float 5e3 - # save_img: True - - # metrics: - # psnr: # metric name, can be arbitrary - # type: calculate_psnr - # crop_border: 4 - # test_y_channel: false - ``` -1. Before the formal training, you may run in the `--debug` mode to see whether everything is OK. We use four GPUs for training: - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug - ``` -1. The formal training. We use four GPUs for training. We use the `--auto_resume` argument to automatically resume the training if necessary. - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume - ``` - -## Train Real-ESRGAN - -1. After the training of Real-ESRNet, you now have the file `experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/model/net_g_1000000.pth`. If you need to specify the pre-trained path to other files, modify the `pretrain_network_g` value in the option file `train_realesrgan_x4plus.yml`. -1. Modify the option file `train_realesrgan_x4plus.yml` accordingly. Most modifications are similar to those listed above. -1. Before the formal training, you may run in the `--debug` mode to see whether everything is OK. 
We use four GPUs for training: - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --debug - ``` -1. The formal training. We use four GPUs for training. We use the `--auto_resume` argument to automatically resume the training if necessary. - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --auto_resume - ``` diff --git a/spaces/MLVKU/Human_Object_Interaction/hotr/metrics/vcoco/ap_role.py b/spaces/MLVKU/Human_Object_Interaction/hotr/metrics/vcoco/ap_role.py deleted file mode 100644 index 7e0a094bdc8e1468ffd7c05d451acb2767f3e4ac..0000000000000000000000000000000000000000 --- a/spaces/MLVKU/Human_Object_Interaction/hotr/metrics/vcoco/ap_role.py +++ /dev/null @@ -1,193 +0,0 @@ -import numpy as np -import torch -from hotr.metrics.utils import _compute_ap, compute_overlap - -class APRole(object): - def __init__(self, act_name, scenario_flag=True, iou_threshold=0.5): - self.act_name = act_name - self.iou_threshold = iou_threshold - - self.scenario_flag = scenario_flag - # scenario_1 : True - # scenario_2 : False - - self.fp = [np.zeros((0,))] * len(act_name) - self.tp = [np.zeros((0,))] * len(act_name) - self.score = [np.zeros((0,))] * len(act_name) - self.num_ann = [0] * len(act_name) - - def add_data(self, h_box, o_box, score, i_box, i_act, p_box, p_act): - # i_box, i_act : to check if only in COCO - for label in range(len(self.act_name)): - p_inds = (p_act[:, label] == 1) - self.num_ann[label] += p_inds.sum() - - if h_box.shape[0] == 0 : return # if no prediction, just return - # COCO (O), V-COCO (X) __or__ collater, no ann in image => ignore - - valid_i_inds = (i_act[:, 0] != -1) # (n_i, ) - overlaps = compute_overlap(h_box, i_box) # (n_h, n_i) - assigned_input = np.argmax(overlaps, axis=1) # (n_h, ) - v_inds = valid_i_inds[assigned_input] # (n_h, ) - - h_box = h_box[v_inds] - score = score[:, v_inds, :] - if h_box.shape[0] == 0 : return - n_h = h_box.shape[0] - - valid_p_inds = (p_act[:, 0] != -1) | (p_box[:, 0] != -1) - p_act = p_act[valid_p_inds] - p_box = p_box[valid_p_inds] - - n_o = o_box.shape[0] - if n_o == 0: - # no prediction for object - score = score.squeeze(axis=2) # (#act, n_h) - - for label in range(len(self.act_name)): - h_inds = np.argsort(score[label])[::-1] # (n_h, ) - self.score[label] = np.append(self.score[label], score[label, h_inds]) - - p_inds = (p_act[:, label] == 1) - if p_inds.sum() == 0: - self.tp[label] = np.append(self.tp[label], np.array([0]*n_h)) - self.fp[label] = np.append(self.fp[label], np.array([1]*n_h)) - continue - - h_overlaps = compute_overlap(h_box[h_inds], p_box[p_inds, :4]) # (n_h, n_p) - assigned_p = np.argmax(h_overlaps, axis=1) # (n_h, ) - h_max_overlap = h_overlaps[range(n_h), assigned_p] # (n_h, ) - - o_overlaps = compute_overlap(np.zeros((n_h, 4)), p_box[p_inds][assigned_p, 4:8]) - o_overlaps = np.diag(o_overlaps) # (n_h, ) - - no_role_inds = (p_box[p_inds][assigned_p, 4] == -1) # (n_h, ) - # human (o), action (o), no object in actual image - - h_iou_inds = (h_max_overlap > self.iou_threshold) # (n_h, ) - o_iou_inds = (o_overlaps > self.iou_threshold) # (n_h, ) - - # scenario1 is not considered (already no object) - o_iou_inds[no_role_inds] = 1 - - iou_inds = (h_iou_inds & o_iou_inds) - p_nonzero = iou_inds.nonzero()[0] - p_inds = assigned_p[p_nonzero] - 
p_iou = np.unique(p_inds, return_index=True)[1] - p_tp = p_nonzero[p_iou] - - t = np.zeros(n_h, dtype=np.uint8) - t[p_tp] = 1 - f = 1-t - - self.tp[label] = np.append(self.tp[label], t) - self.fp[label] = np.append(self.fp[label], f) - - else: - s_obj_argmax = np.argmax(score.reshape(-1, n_o), axis=1).reshape(-1, n_h) # (#act, n_h) - s_obj_max = np.max(score.reshape(-1, n_o), axis=1).reshape(-1, n_h) # (#act, n_h) - - h_overlaps = compute_overlap(h_box, p_box[:, :4]) # (n_h, n_p) - - for label in range(len(self.act_name)): - h_inds = np.argsort(s_obj_max[label])[::-1] # (n_h, ) - self.score[label] = np.append(self.score[label], s_obj_max[label, h_inds]) - - p_inds = (p_act[:, label] == 1) # (n_p, ) - if p_inds.sum() == 0: - self.tp[label] = np.append(self.tp[label], np.array([0]*n_h)) - self.fp[label] = np.append(self.fp[label], np.array([1]*n_h)) - continue - - h_overlaps = compute_overlap(h_box[h_inds], p_box[:, :4]) # (n_h, n_p) # match for all hboxes - h_max_overlap = np.max(h_overlaps, axis=1) # (n_h, ) # get the max overlap for hbox - - # for same human, multiple pairs exist. find the human box that has the same idx with max overlap hbox. - h_max_temp = np.expand_dims(h_max_overlap, axis=1) - h_over_thresh = (h_overlaps == h_max_temp) # (n_h, n_p) - h_over_thresh = h_over_thresh & np.expand_dims(p_inds, axis=0) # (n_h, n_p) # find only for current act - - h_valid = h_over_thresh.sum(axis=1)>0 # (n_h, ) # at least one is True - # h_valid -> if all is False, then argmax becomes 0. <- prevent - assigned_p = np.argmax(h_over_thresh, axis=1) # (n_h, ) # p only for current act - - o_mapping_box = o_box[s_obj_argmax[label]][h_inds] # (n_h, ) # find where T is. - p_mapping_box = p_box[assigned_p, 4:8] # (n_h, 4) - - o_overlaps = compute_overlap(o_mapping_box, p_mapping_box) - o_overlaps = np.diag(o_overlaps) # (n_h, ) - o_overlaps.setflags(write=1) - if (~h_valid).sum() > 0: - o_overlaps[~h_valid] = 0 # (n_h, ) - - no_role_inds = (p_box[assigned_p, 4] == -1) # (n_h, ) - nan_box_inds = np.all(o_mapping_box == 0, axis=1) | np.all(np.isnan(o_mapping_box), axis=1) - no_role_inds = no_role_inds & h_valid - nan_box_inds = nan_box_inds & h_valid - - h_iou_inds = (h_max_overlap > self.iou_threshold) # (n_h, ) - o_iou_inds = (o_overlaps > self.iou_threshold) # (n_h, ) - - if self.scenario_flag: # scenario_1 - o_iou_inds[no_role_inds & nan_box_inds] = 1 - o_iou_inds[no_role_inds & ~nan_box_inds] = 0 - else: # scenario_2 - o_iou_inds[no_role_inds] = 1 - - iou_inds = (h_iou_inds & o_iou_inds) - p_nonzero = iou_inds.nonzero()[0] - p_inds = assigned_p[p_nonzero] - p_iou = np.unique(p_inds, return_index=True)[1] - p_tp = p_nonzero[p_iou] - - t = np.zeros(n_h, dtype=np.uint8) - t[p_tp] = 1 - f = 1-t - - self.tp[label] = np.append(self.tp[label], t) - self.fp[label] = np.append(self.fp[label], f) - - def evaluate(self, print_log=False): - average_precisions = dict() - role_num = 1 if self.scenario_flag else 2 - for label in range(len(self.act_name)): - - # sort by score - indices = np.argsort(-self.score[label]) - self.fp[label] = self.fp[label][indices] - self.tp[label] = self.tp[label][indices] - - - if self.num_ann[label] == 0: - average_precisions[label] = 0 - continue - - # compute false positives and true positives - self.fp[label] = np.cumsum(self.fp[label]) - self.tp[label] = np.cumsum(self.tp[label]) - - # compute recall and precision - recall = self.tp[label] / self.num_ann[label] - precision = self.tp[label] / np.maximum(self.tp[label] + self.fp[label], np.finfo(np.float64).eps) - - # compute 
average precision - average_precisions[label] = _compute_ap(recall, precision) * 100 - - if print_log: print(f'\n============= AP (Role scenario_{role_num}) ==============') - s, n = 0, 0 - - for label in range(len(self.act_name)): - if 'point' in self.act_name[label]: - continue - label_name = "_".join(self.act_name[label].split("_")[1:]) - if print_log: print('{: >23}: AP = {:0.2f} (#pos = {:d})'.format(label_name, average_precisions[label], self.num_ann[label])) - if self.num_ann[label] != 0 : - s += average_precisions[label] - n += 1 - - mAP = s/n - if print_log: - print('| mAP(role scenario_{:d}): {:0.2f}'.format(role_num, mAP)) - print('----------------------------------------------------') - - return mAP \ No newline at end of file diff --git a/spaces/Marshalls/testmtd/feature_extraction/madmom/ml/nn/activations.py b/spaces/Marshalls/testmtd/feature_extraction/madmom/ml/nn/activations.py deleted file mode 100644 index 62a32bd1a1ff61a79dda635c124886f92d1ce23e..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/feature_extraction/madmom/ml/nn/activations.py +++ /dev/null @@ -1,209 +0,0 @@ -# encoding: utf-8 -# pylint: disable=no-member -# pylint: disable=invalid-name -""" -This module contains neural network activation functions for the ml.nn module. - -""" - -from __future__ import absolute_import, division, print_function - -import numpy as np - - -def linear(x, out=None): - """ - Linear function. - - Parameters - ---------- - x : numpy array - Input data. - out : numpy array, optional - Array to hold the output data. - - Returns - ------- - numpy array - Unaltered input data. - - """ - if out is None or x is out: - return x - out[:] = x - return out - - -def tanh(x, out=None): - """ - Hyperbolic tangent function. - - Parameters - ---------- - x : numpy array - Input data. - out : numpy array, optional - Array to hold the output data. - - Returns - ------- - numpy array - Hyperbolic tangent of input data. - - """ - # Note: define a wrapper around np.tanh so we just have the dependency on - # madmom when pickling objects - return np.tanh(x, out) - - -try: - # pylint: disable=no-name-in-module - # pylint: disable=wrong-import-order - # pylint: disable=wrong-import-position - - # try to use a faster sigmoid function - from distutils.version import LooseVersion - from scipy.version import version as scipy_version - # we need a recent version of scipy, older have a bug in expit - # https://github.com/scipy/scipy/issues/3385 - if LooseVersion(scipy_version) < LooseVersion("0.14"): - # Note: Raising an AttributeError might not be the best idea ever - # (i.e. ImportError would be more appropriate), but older - # versions of scipy not having the expit function raise the same - # error. In some cases this check fails, don't know why... - raise AttributeError - from scipy.special import expit as _sigmoid -except AttributeError: - # define a fallback function - def _sigmoid(x, out=None): - """ - Logistic sigmoid function. - - Parameters - ---------- - x : numpy array - Input data. - out : numpy array, optional - Array to hold the output data. - - Returns - ------- - numpy array - Logistic sigmoid of input data. - - """ - # sigmoid = 0.5 * (1. + np.tanh(0.5 * x)) - if out is None: - out = np.asarray(.5 * x) - else: - if out is not x: - out[:] = x - out *= .5 - np.tanh(out, out=out) - out += 1 - out *= .5 - return out - - -def sigmoid(x, out=None): - """ - Logistic sigmoid function. - - Parameters - ---------- - x : numpy array - Input data. 
- out : numpy array, optional - Array to hold the output data. - - Returns - ------- - numpy array - Logistic sigmoid of input data. - - """ - # Note: define a wrapper around _sigmoid so we just have the dependency on - # madmom when pickling objects, not on scipy.special which may - # contain the bug mentioned above - return _sigmoid(x, out) - - -def relu(x, out=None): - """ - Rectified linear (unit) transfer function. - - Parameters - ---------- - x : numpy array - Input data. - out : numpy array, optional - Array to hold the output data. - - Returns - ------- - numpy array - Rectified linear of input data. - - """ - return np.maximum(x, 0, out) - - -def elu(x, out=None): - """ - Exponential linear (unit) transfer function. - - Parameters - ---------- - x : numpy array - Input data. - out : numpy array, optional - Array to hold the output data. - - Returns - ------- - numpy array - Exponential linear of input data - - References - ---------- - .. [1] Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter (2015): - Fast and Accurate Deep Network Learning by Exponential Linear Units - (ELUs), http://arxiv.org/abs/1511.07289 - """ - if out is None: - out = x.copy() - elif out is not x: - out[:] = x[:] - m = x < 0 - out[m] = np.exp(x[m]) - 1 - return out - - -def softmax(x, out=None): - """ - Softmax transfer function. - - Parameters - ---------- - x : numpy array - Input data. - out : numpy array, optional - Array to hold the output data. - - Returns - ------- - numpy array - Softmax of input data. - - """ - # determine maximum (over classes) - tmp = np.amax(x, axis=1, keepdims=True) - # exp of the input minus the max - if out is None: - out = np.exp(x - tmp) - else: - np.exp(x - tmp, out=out) - # normalize by the sum (reusing the tmp variable) - np.sum(out, axis=1, keepdims=True, out=tmp) - out /= tmp - return out diff --git a/spaces/MathysL/AutoGPT4/autogpt/js/overlay.js b/spaces/MathysL/AutoGPT4/autogpt/js/overlay.js deleted file mode 100644 index 1c99c72673330b8ea8cf037ef889233f2d4326be..0000000000000000000000000000000000000000 --- a/spaces/MathysL/AutoGPT4/autogpt/js/overlay.js +++ /dev/null @@ -1,29 +0,0 @@ -const overlay = document.createElement('div'); -Object.assign(overlay.style, { - position: 'fixed', - zIndex: 999999, - top: 0, - left: 0, - width: '100%', - height: '100%', - background: 'rgba(0, 0, 0, 0.7)', - color: '#fff', - fontSize: '24px', - fontWeight: 'bold', - display: 'flex', - justifyContent: 'center', - alignItems: 'center', -}); -const textContent = document.createElement('div'); -Object.assign(textContent.style, { - textAlign: 'center', -}); -textContent.textContent = 'AutoGPT Analyzing Page'; -overlay.appendChild(textContent); -document.body.append(overlay); -document.body.style.overflow = 'hidden'; -let dotCount = 0; -setInterval(() => { - textContent.textContent = 'AutoGPT Analyzing Page' + '.'.repeat(dotCount); - dotCount = (dotCount + 1) % 4; -}, 1000); diff --git a/spaces/Mattdoc99/ElonYTsearch/README.md b/spaces/Mattdoc99/ElonYTsearch/README.md deleted file mode 100644 index cd111815c15043c85a939b693e9597235920c539..0000000000000000000000000000000000000000 --- a/spaces/Mattdoc99/ElonYTsearch/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ElonYTsearch -emoji: 📉 -colorFrom: blue -colorTo: purple -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git 
a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/fcn_hr18.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/fcn_hr18.py deleted file mode 100644 index c3e299bc89ada56ca14bbffcbdb08a586b8ed9e9..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/fcn_hr18.py +++ /dev/null @@ -1,52 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://msra/hrnetv2_w18', - backbone=dict( - type='HRNet', - norm_cfg=norm_cfg, - norm_eval=False, - extra=dict( - stage1=dict( - num_modules=1, - num_branches=1, - block='BOTTLENECK', - num_blocks=(4, ), - num_channels=(64, )), - stage2=dict( - num_modules=1, - num_branches=2, - block='BASIC', - num_blocks=(4, 4), - num_channels=(18, 36)), - stage3=dict( - num_modules=4, - num_branches=3, - block='BASIC', - num_blocks=(4, 4, 4), - num_channels=(18, 36, 72)), - stage4=dict( - num_modules=3, - num_branches=4, - block='BASIC', - num_blocks=(4, 4, 4, 4), - num_channels=(18, 36, 72, 144)))), - decode_head=dict( - type='FCNHead', - in_channels=[18, 36, 72, 144], - in_index=(0, 1, 2, 3), - channels=sum([18, 36, 72, 144]), - input_transform='resize_concat', - kernel_size=1, - num_convs=1, - concat_input=False, - dropout_ratio=-1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Miko-opiko/openai-reverse-proxy/README.md b/spaces/Miko-opiko/openai-reverse-proxy/README.md deleted file mode 100644 index c4e55cb7dc2fd0dc770efdc46376d2c46b9539f4..0000000000000000000000000000000000000000 --- a/spaces/Miko-opiko/openai-reverse-proxy/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Openai Reverse Proxy -emoji: 👁 -colorFrom: yellow -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MiloSobral/PortiloopDemo/portiloop/src/demo/demo.py b/spaces/MiloSobral/PortiloopDemo/portiloop/src/demo/demo.py deleted file mode 100644 index aa2c5d303bf5a985a3a67a6e1b369ce77d842d51..0000000000000000000000000000000000000000 --- a/spaces/MiloSobral/PortiloopDemo/portiloop/src/demo/demo.py +++ /dev/null @@ -1,61 +0,0 @@ -import gradio as gr - -from portiloop.src.demo.offline import run_offline - - -def on_upload_file(file): - # Check if file extension is .xdf - if file.name.split(".")[-1] != "xdf": - raise gr.Error("Please upload a .xdf file.") - else: - return file.name - - -def main(): - with gr.Blocks(title="Portiloop") as demo: - gr.Markdown("# Portiloop Demo") - gr.Markdown("This Demo takes as input an XDF file coming from the Portiloop EEG device and allows you to convert it to CSV and perform the following actions:: \n * Filter the data offline \n * Perform offline spindle detection using Wamsley or Lacourse. 
\n * Simulate the Portiloop online filtering and spindle detection with different parameters.") - gr.Markdown("Upload your XDF file and click **Run Inference** to start the processing...") - - with gr.Row(): - xdf_file_button = gr.UploadButton(label="Click to Upload", type="file", file_count="single") - xdf_file_static = gr.File(label="XDF File", type='file', interactive=False) - - xdf_file_button.upload(on_upload_file, xdf_file_button, xdf_file_static) - - # Make a checkbox group for the options - detect_filter = gr.CheckboxGroup(['Offline Filtering', 'Lacourse Detection', 'Wamsley Detection', 'Online Filtering', 'Online Detection'], type='index', label="Filtering/Detection options") - - # Threshold value - threshold = gr.Slider(0, 1, value=0.82, step=0.01, label="Threshold", interactive=True) - # Detection Channel - - with gr.Row(): - detect_channel = gr.Dropdown(choices=["1", "2", "3", "4", "5", "6", "7", "8"], value="2", label="Detection Channel in XDF recording", interactive=True) - # Frequency - freq = gr.Dropdown(choices=["100", "200", "250", "256", "500", "512", "1000", "1024"], value="250", label="Sampling Frequency (Hz)", interactive=True) - - # Detect trains dropdown - detect_trains = gr.Dropdown(choices=["All Spindles", "Isolated & First", "Trains"], value="All Spindles", label="Detection mode:", interactive=True) - - with gr.Row(): - output_array = gr.File(label="Output CSV File") - output_table = gr.Markdown(label="Output Table") - - run_inference = gr.Button(value="Run Inference") - run_inference.click( - fn=run_offline, - inputs=[ - xdf_file_static, - detect_filter, - threshold, - detect_channel, - freq, - detect_trains], - outputs=[output_array, output_table]) - - demo.queue() - demo.launch(share=False) - -if __name__ == "__main__": - main() diff --git a/spaces/Miuzarte/SUI-svc-3.0/spec_gen.py b/spaces/Miuzarte/SUI-svc-3.0/spec_gen.py deleted file mode 100644 index 85ad3188ac93aaef7b1b1d7dbbe47d358f4b0da6..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-3.0/spec_gen.py +++ /dev/null @@ -1,22 +0,0 @@ -from data_utils import TextAudioSpeakerLoader, EvalDataLoader -import json -from tqdm import tqdm - -from utils import HParams - -config_path = 'configs/config.json' -with open(config_path, "r") as f: - data = f.read() -config = json.loads(data) -hps = HParams(**config) - -train_dataset = TextAudioSpeakerLoader("filelists/train.txt", hps) -test_dataset = TextAudioSpeakerLoader("filelists/test.txt", hps) -eval_dataset = TextAudioSpeakerLoader("filelists/val.txt", hps) - -for _ in tqdm(train_dataset): - pass -for _ in tqdm(eval_dataset): - pass -for _ in tqdm(test_dataset): - pass \ No newline at end of file diff --git a/spaces/Mmmm7/M/Dockerfile b/spaces/Mmmm7/M/Dockerfile deleted file mode 100644 index 881fbdfd58ec3cfbb591310292987c48cfb3fa70..0000000000000000000000000000000000000000 --- a/spaces/Mmmm7/M/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ -apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/shapemask_model.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/shapemask_model.py deleted file mode 100644 index 174187ed02ae7a7617f259974d64b1906a3d16e0..0000000000000000000000000000000000000000 --- 
a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/shapemask_model.py +++ /dev/null @@ -1,314 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Model definition for the ShapeMask Model.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import tensorflow as tf - -from tensorflow.python.keras import backend -from official.vision.detection.dataloader import anchor -from official.vision.detection.dataloader import mode_keys -from official.vision.detection.evaluation import factory as eval_factory -from official.vision.detection.modeling import base_model -from official.vision.detection.modeling import losses -from official.vision.detection.modeling.architecture import factory -from official.vision.detection.ops import postprocess_ops -from official.vision.detection.utils import box_utils - - -class ShapeMaskModel(base_model.Model): - """ShapeMask model function.""" - - def __init__(self, params): - super(ShapeMaskModel, self).__init__(params) - - self._params = params - self._keras_model = None - - # Architecture generators. - self._backbone_fn = factory.backbone_generator(params) - self._fpn_fn = factory.multilevel_features_generator(params) - self._retinanet_head_fn = factory.retinanet_head_generator(params) - self._shape_prior_head_fn = factory.shapeprior_head_generator(params) - self._coarse_mask_fn = factory.coarsemask_head_generator(params) - self._fine_mask_fn = factory.finemask_head_generator(params) - - # Loss functions. - self._cls_loss_fn = losses.RetinanetClassLoss( - params.retinanet_loss, params.architecture.num_classes) - self._box_loss_fn = losses.RetinanetBoxLoss(params.retinanet_loss) - self._box_loss_weight = params.retinanet_loss.box_loss_weight - - # Mask loss function. - self._shapemask_prior_loss_fn = losses.ShapemaskMseLoss() - self._shapemask_loss_fn = losses.ShapemaskLoss() - self._shape_prior_loss_weight = ( - params.shapemask_loss.shape_prior_loss_weight) - self._coarse_mask_loss_weight = ( - params.shapemask_loss.coarse_mask_loss_weight) - self._fine_mask_loss_weight = ( - params.shapemask_loss.fine_mask_loss_weight) - - # Predict function. 
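-        # The generator decodes the multi-level box/class outputs against the
-        # anchor boxes and runs non-maximum suppression to produce detections.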
- self._generate_detections_fn = postprocess_ops.MultilevelDetectionGenerator( - params.architecture.min_level, - params.architecture.max_level, - params.postprocess) - - def build_outputs(self, inputs, mode): - is_training = mode == mode_keys.TRAIN - images = inputs['image'] - - if 'anchor_boxes' in inputs: - anchor_boxes = inputs['anchor_boxes'] - else: - anchor_boxes = anchor.Anchor( - self._params.architecture.min_level, - self._params.architecture.max_level, - self._params.anchor.num_scales, - self._params.anchor.aspect_ratios, - self._params.anchor.anchor_size, - images.get_shape().as_list()[1:3]).multilevel_boxes - - batch_size = tf.shape(images)[0] - for level in anchor_boxes: - anchor_boxes[level] = tf.tile( - tf.expand_dims(anchor_boxes[level], 0), [batch_size, 1, 1, 1]) - - backbone_features = self._backbone_fn(images, is_training=is_training) - fpn_features = self._fpn_fn(backbone_features, is_training=is_training) - cls_outputs, box_outputs = self._retinanet_head_fn( - fpn_features, is_training=is_training) - - valid_boxes, valid_scores, valid_classes, valid_detections = ( - self._generate_detections_fn(box_outputs, cls_outputs, - anchor_boxes, - inputs['image_info'][:, 1:2, :])) - - image_size = images.get_shape().as_list()[1:3] - valid_outer_boxes = box_utils.compute_outer_boxes( - tf.reshape(valid_boxes, [-1, 4]), - image_size, - scale=self._params.shapemask_parser.outer_box_scale) - valid_outer_boxes = tf.reshape(valid_outer_boxes, tf.shape(valid_boxes)) - - # Wrapping if else code paths into a layer to make the checkpoint loadable - # in prediction mode. - class SampledBoxesLayer(tf.keras.layers.Layer): - """ShapeMask model function.""" - - def call(self, inputs, val_boxes, val_classes, val_outer_boxes, training): - if training: - boxes = inputs['mask_boxes'] - outer_boxes = inputs['mask_outer_boxes'] - classes = inputs['mask_classes'] - else: - boxes = val_boxes - classes = val_classes - outer_boxes = val_outer_boxes - return boxes, classes, outer_boxes - - boxes, classes, outer_boxes = SampledBoxesLayer()( - inputs, valid_boxes, valid_classes, - valid_outer_boxes, training=is_training) - - instance_features, prior_masks = self._shape_prior_head_fn(fpn_features, - boxes, - outer_boxes, - classes, - is_training) - coarse_mask_logits = self._coarse_mask_fn(instance_features, - prior_masks, - classes, - is_training) - fine_mask_logits = self._fine_mask_fn(instance_features, - coarse_mask_logits, - classes, - is_training) - - model_outputs = { - 'cls_outputs': cls_outputs, - 'box_outputs': box_outputs, - 'fine_mask_logits': fine_mask_logits, - 'coarse_mask_logits': coarse_mask_logits, - 'prior_masks': prior_masks, - } - - if not is_training: - model_outputs.update({ - 'num_detections': valid_detections, - 'detection_boxes': valid_boxes, - 'detection_outer_boxes': valid_outer_boxes, - 'detection_masks': fine_mask_logits, - 'detection_classes': valid_classes, - 'detection_scores': valid_scores, - }) - - return model_outputs - - def build_loss_fn(self): - if self._keras_model is None: - raise ValueError('build_loss_fn() must be called after build_model().') - - filter_fn = self.make_filter_trainable_variables_fn() - trainable_variables = filter_fn(self._keras_model.trainable_variables) - - def _total_loss_fn(labels, outputs): - cls_loss = self._cls_loss_fn(outputs['cls_outputs'], - labels['cls_targets'], - labels['num_positives']) - box_loss = self._box_loss_fn(outputs['box_outputs'], - labels['box_targets'], - labels['num_positives']) - - # Adds Shapemask model losses. 
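-      # Three mask losses are computed below: an MSE loss on the predicted
-      # shape priors, and pixel-wise mask losses on the coarse and fine mask
-      # logits; each is gated by `mask_is_valid` and weighted into `model_loss`.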
- shape_prior_loss = self._shapemask_prior_loss_fn( - outputs['prior_masks'], - labels['mask_targets'], - labels['mask_is_valid']) - coarse_mask_loss = self._shapemask_loss_fn( - outputs['coarse_mask_logits'], - labels['mask_targets'], - labels['mask_is_valid']) - fine_mask_loss = self._shapemask_loss_fn( - outputs['fine_mask_logits'], - labels['fine_mask_targets'], - labels['mask_is_valid']) - - model_loss = ( - cls_loss + self._box_loss_weight * box_loss + - shape_prior_loss * self._shape_prior_loss_weight + - coarse_mask_loss * self._coarse_mask_loss_weight + - fine_mask_loss * self._fine_mask_loss_weight) - - l2_regularization_loss = self.weight_decay_loss(trainable_variables) - total_loss = model_loss + l2_regularization_loss - - shapemask_losses = { - 'total_loss': total_loss, - 'loss': total_loss, - 'retinanet_cls_loss': cls_loss, - 'l2_regularization_loss': l2_regularization_loss, - 'retinanet_box_loss': box_loss, - 'shapemask_prior_loss': shape_prior_loss, - 'shapemask_coarse_mask_loss': coarse_mask_loss, - 'shapemask_fine_mask_loss': fine_mask_loss, - 'model_loss': model_loss, - } - return shapemask_losses - - return _total_loss_fn - - def build_input_layers(self, params, mode): - is_training = mode == mode_keys.TRAIN - input_shape = ( - params.shapemask_parser.output_size + - [params.shapemask_parser.num_channels]) - if is_training: - batch_size = params.train.batch_size - input_layer = { - 'image': tf.keras.layers.Input( - shape=input_shape, - batch_size=batch_size, - name='image', - dtype=tf.bfloat16 if self._use_bfloat16 else tf.float32), - 'image_info': tf.keras.layers.Input( - shape=[4, 2], - batch_size=batch_size, - name='image_info'), - 'mask_classes': tf.keras.layers.Input( - shape=[params.shapemask_parser.num_sampled_masks], - batch_size=batch_size, - name='mask_classes', - dtype=tf.int64), - 'mask_outer_boxes': tf.keras.layers.Input( - shape=[params.shapemask_parser.num_sampled_masks, 4], - batch_size=batch_size, - name='mask_outer_boxes', - dtype=tf.float32), - 'mask_boxes': tf.keras.layers.Input( - shape=[params.shapemask_parser.num_sampled_masks, 4], - batch_size=batch_size, - name='mask_boxes', - dtype=tf.float32), - } - else: - batch_size = params.eval.batch_size - input_layer = { - 'image': tf.keras.layers.Input( - shape=input_shape, - batch_size=batch_size, - name='image', - dtype=tf.bfloat16 if self._use_bfloat16 else tf.float32), - 'image_info': tf.keras.layers.Input( - shape=[4, 2], - batch_size=batch_size, - name='image_info'), - } - return input_layer - - def build_model(self, params, mode): - if self._keras_model is None: - input_layers = self.build_input_layers(self._params, mode) - with backend.get_graph().as_default(): - outputs = self.model_outputs(input_layers, mode) - - model = tf.keras.models.Model( - inputs=input_layers, outputs=outputs, name='shapemask') - assert model is not None, 'Fail to build tf.keras.Model.' 
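-      # Attach the optimizer and cache the built model so later calls to
-      # build_model() return the same instance.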
- model.optimizer = self.build_optimizer() - self._keras_model = model - - return self._keras_model - - def post_processing(self, labels, outputs): - required_output_fields = ['num_detections', 'detection_boxes', - 'detection_classes', 'detection_masks', - 'detection_scores'] - - for field in required_output_fields: - if field not in outputs: - raise ValueError( - '"{}" is missing in outputs, requried {} found {}'.format( - field, required_output_fields, outputs.keys())) - - required_label_fields = ['image_info'] - for field in required_label_fields: - if field not in labels: - raise ValueError( - '"{}" is missing in labels, requried {} found {}'.format( - field, required_label_fields, labels.keys())) - - predictions = { - 'image_info': labels['image_info'], - 'num_detections': outputs['num_detections'], - 'detection_boxes': outputs['detection_boxes'], - 'detection_outer_boxes': outputs['detection_outer_boxes'], - 'detection_classes': outputs['detection_classes'], - 'detection_scores': outputs['detection_scores'], - 'detection_masks': outputs['detection_masks'], - } - - if 'groundtruths' in labels: - predictions['source_id'] = labels['groundtruths']['source_id'] - labels = labels['groundtruths'] - - return labels, predictions - - def eval_metrics(self): - return eval_factory.evaluator_generator(self._params.eval) diff --git a/spaces/Nee001/bing0/src/components/ui/alert-dialog.tsx b/spaces/Nee001/bing0/src/components/ui/alert-dialog.tsx deleted file mode 100644 index 17fec4d16510328deacc1416569173c97761ef72..0000000000000000000000000000000000000000 --- a/spaces/Nee001/bing0/src/components/ui/alert-dialog.tsx +++ /dev/null @@ -1,150 +0,0 @@ -'use client' - -import * as React from 'react' -import * as AlertDialogPrimitive from '@radix-ui/react-alert-dialog' - -import { cn } from '@/lib/utils' -import { buttonVariants } from '@/components/ui/button' - -const AlertDialog = AlertDialogPrimitive.Root - -const AlertDialogTrigger = AlertDialogPrimitive.Trigger - -const AlertDialogPortal = ({ - className, - children, - ...props -}: AlertDialogPrimitive.AlertDialogPortalProps) => ( - -
-  <AlertDialogPrimitive.Portal className={cn(className)} {...props}>
-    {children}
-  </AlertDialogPrimitive.Portal>
-)
-AlertDialogPortal.displayName = AlertDialogPrimitive.Portal.displayName
-
-const AlertDialogOverlay = React.forwardRef<
-  React.ElementRef<typeof AlertDialogPrimitive.Overlay>,
-  React.ComponentPropsWithoutRef<typeof AlertDialogPrimitive.Overlay>
->(({ className, children, ...props }, ref) => (
-  <AlertDialogPrimitive.Overlay className={cn(className)} {...props} ref={ref} />
-))
-AlertDialogOverlay.displayName = AlertDialogPrimitive.Overlay.displayName
-
-const AlertDialogContent = React.forwardRef<
-  React.ElementRef<typeof AlertDialogPrimitive.Content>,
-  React.ComponentPropsWithoutRef<typeof AlertDialogPrimitive.Content>
->(({ className, ...props }, ref) => (
-  <AlertDialogPortal>
-    <AlertDialogOverlay />
-    <AlertDialogPrimitive.Content ref={ref} className={cn(className)} {...props} />
-  </AlertDialogPortal>
-))
-AlertDialogContent.displayName = AlertDialogPrimitive.Content.displayName
-
-const AlertDialogHeader = ({
-  className,
-  ...props
-}: React.HTMLAttributes<HTMLDivElement>) => (
-  <div className={cn(className)} {...props} />
-)
-AlertDialogHeader.displayName = 'AlertDialogHeader'
-
-const AlertDialogFooter = ({
-  className,
-  ...props
-}: React.HTMLAttributes<HTMLDivElement>) => (
-  <div className={cn(className)} {...props} />
    -) -AlertDialogFooter.displayName = 'AlertDialogFooter' - -const AlertDialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogTitle.displayName = AlertDialogPrimitive.Title.displayName - -const AlertDialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogDescription.displayName = - AlertDialogPrimitive.Description.displayName - -const AlertDialogAction = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogAction.displayName = AlertDialogPrimitive.Action.displayName - -const AlertDialogCancel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogCancel.displayName = AlertDialogPrimitive.Cancel.displayName - -export { - AlertDialog, - AlertDialogTrigger, - AlertDialogContent, - AlertDialogHeader, - AlertDialogFooter, - AlertDialogTitle, - AlertDialogDescription, - AlertDialogAction, - AlertDialogCancel -} diff --git a/spaces/Neomyst/gertrude-model/app.py b/spaces/Neomyst/gertrude-model/app.py deleted file mode 100644 index e290ae90acfa0e83a0765afb0424c6e88ea30e00..0000000000000000000000000000000000000000 --- a/spaces/Neomyst/gertrude-model/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'Neomyst/gertrude-model' -prefix = 'cpgertrude' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - 
height = height, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
-            <div class="main-div">
-              <div>
-                <h1>Gertrude Model</h1>
-              </div>
-              <p>
-                Demo for Gertrude Model Stable Diffusion model.
-                {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
-              </p>
-              Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space
-              Duplicate Space
-            </div>
    - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically (cpgertrude)", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
-      This space was created using SD Space Creator.
    - """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/Nightwing25/AICoverGen/src/rmvpe.py b/spaces/Nightwing25/AICoverGen/src/rmvpe.py deleted file mode 100644 index 8d0d57297d4301e43a4fdcda216ae39c5e3b83b4..0000000000000000000000000000000000000000 --- a/spaces/Nightwing25/AICoverGen/src/rmvpe.py +++ /dev/null @@ -1,432 +0,0 @@ -import torch, numpy as np -import torch.nn as nn -import torch.nn.functional as F - - - -class BiGRU(nn.Module): - def __init__(self, input_features, hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in range(self.n_encoders): - self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - super(Intermediate, self).__init__() - self.n_inters = n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - 
ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * nn.N_MELS, nn.N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x = self.fc(x) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, - ): - super().__init__() - n_fft = win_length if n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = 
torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - audio.device - ) - fft = torch.stft( - audio, - n_fft=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - window=self.hann_window[keyshift_key], - center=center, - return_complex=True, - ) - magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 8000 - ).to(device) - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect" - ) - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - # torch.cuda.synchronize() - # t0=ttime() - mel = self.mel_extractor(audio, center=True) - # torch.cuda.synchronize() - # t1=ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - # t2=ttime() - hidden = hidden.squeeze(0).cpu().numpy() - if self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - # t3=ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def to_local_average_cents(self, salience, thred=0.05): - # t0 = ttime() - center = np.argmax(salience, axis=1) # frame length#index - salience = np.pad(salience, ((0, 0), (4, 4))) # frame length,368 - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts 
= center - 4 - ends = center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # frame length,9 - todo_cents_mapping = np.array(todo_cents_mapping) # frame length,9 - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # frame length - devided = product_sum / weight_sum # frame length - # t3 = ttime() - maxx = np.max(salience, axis=1) # frame length - devided[maxx <= thred] = 0 - # t4 = ttime() - # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return devided - - -# if __name__ == '__main__': -# audio, sampling_rate = sf.read("Quotations~1.wav") ### edit -# if len(audio.shape) > 1: -# audio = librosa.to_mono(audio.transpose(1, 0)) -# audio_bak = audio.copy() -# if sampling_rate != 16000: -# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) -# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt" -# thred = 0.03 # 0.01 -# device = 'cuda' if torch.cuda.is_available() else 'cpu' -# rmvpe = RMVPE(model_path,is_half=False, device=device) -# t0=ttime() -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# t1=ttime() -# print(f0.shape,t1-t0) diff --git a/spaces/Nultx/VITS-TTS/ONNXVITS_utils.py b/spaces/Nultx/VITS-TTS/ONNXVITS_utils.py deleted file mode 100644 index b634ce380421571e6e07fb45dd59717b3f63115c..0000000000000000000000000000000000000000 --- a/spaces/Nultx/VITS-TTS/ONNXVITS_utils.py +++ /dev/null @@ -1,19 +0,0 @@ -import torch -import numpy as np -import random -import onnxruntime as ort -def set_random_seed(seed=0): - ort.set_seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.backends.cudnn.deterministic = True - random.seed(seed) - np.random.seed(seed) - -def runonnx(model_path, **kwargs): - ort_session = ort.InferenceSession(model_path) - outputs = ort_session.run( - None, - kwargs - ) - return outputs \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wmt20/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wmt20/README.md deleted file mode 100644 index b4f2874652f8be19998a65faa1d9276d8017ec59..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wmt20/README.md +++ /dev/null @@ -1,72 +0,0 @@ -# WMT 20 - -This page provides pointers to the models of Facebook-FAIR's WMT'20 news translation task submission [(Chen et al., 2020)](https://arxiv.org/abs/2011.08298). 
- -## Single best MT models (after finetuning on part of WMT20 news dev set) - -Model | Description | Download ----|---|--- -`transformer.wmt20.ta-en` | Ta->En | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.ta-en.single.tar.gz) -`transformer.wmt20.en-ta` | En->Ta | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-ta.single.tar.gz) -`transformer.wmt20.iu-en.news` | Iu->En (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu-en.news.single.tar.gz) -`transformer.wmt20.en-iu.news` | En->Iu (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-iu.news.single.tar.gz) -`transformer.wmt20.iu-en.nh` | Iu->En (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu-en.nh.single.tar.gz) -`transformer.wmt20.en-iu.nh` | En->Iu (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-iu.nh.single.tar.gz) - -## Language models -Model | Description | Download ----|---|--- -`transformer_lm.wmt20.en` | En Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en.tar.gz) -`transformer_lm.wmt20.ta` | Ta Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.ta.tar.gz) -`transformer_lm.wmt20.iu.news` | Iu Language Model (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu.news.tar.gz) -`transformer_lm.wmt20.iu.nh` | Iu Language Model (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu.nh.tar.gz) - -## Example usage (torch.hub) - -#### Translation - -```python -import torch - -# English to Tamil translation -en2ta = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.en-ta') -en2ta.translate("Machine learning is great!") # 'இயந்திரக் கற்றல் அருமை!' - -# Tamil to English translation -ta2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.ta-en') -ta2en.translate("இயந்திரக் கற்றல் அருமை!") # 'Machine learning is great!' - -# English to Inuktitut translation -en2iu = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.en-iu.news') -en2iu.translate("machine learning is great!") # 'ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ ᐱᐅᔪᒻᒪᕆᒃ!' - -# Inuktitut to English translation -iu2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.iu-en.news') -iu2en.translate("ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ ᐱᐅᔪᒻᒪᕆᒃ!") # 'Machine learning excellence!' -``` - -#### Language Modeling - -```python -# Sample from the English LM -en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.en') -en_lm.sample("Machine learning is") # 'Machine learning is a type of artificial intelligence that uses machine learning to learn from data and make predictions.' - -# Sample from the Tamil LM -ta_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.ta') -ta_lm.sample("இயந்திரக் கற்றல் என்பது செயற்கை நுண்ணறிவின்") # 'இயந்திரக் கற்றல் என்பது செயற்கை நுண்ணறிவின் ஒரு பகுதியாகும்.' - -# Sample from the Inuktitut LM -iu_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.iu.news') -iu_lm.sample("ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ") # 'ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ, ᐊᒻᒪᓗ ᓯᓚᐅᑉ ᐊᓯᙳᖅᐸᓪᓕᐊᓂᖓᓄᑦ ᖃᓄᐃᓕᐅᕈᑎᒃᓴᑦ, ᐃᓚᖃᖅᖢᑎᒃ ᐅᑯᓂᖓ:' -``` - -## Citation -```bibtex -@inproceedings{chen2020facebook - title={Facebook AI's WMT20 News Translation Task Submission}, - author={Peng-Jen Chen and Ann Lee and Changhan Wang and Naman Goyal and Angela Fan and Mary Williamson and Jiatao Gu}, - booktitle={Proc. 
of WMT}, - year={2020}, -} -``` diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_utils.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_utils.py deleted file mode 100644 index 79195903e0f34372a24fa50312a6e00170c14471..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_utils.py +++ /dev/null @@ -1,114 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import unittest - -import torch -from fairseq import utils - - -class TestUtils(unittest.TestCase): - def test_convert_padding_direction(self): - pad = 1 - left_pad = torch.LongTensor( - [ - [2, 3, 4, 5, 6], - [1, 7, 8, 9, 10], - [1, 1, 1, 11, 12], - ] - ) - right_pad = torch.LongTensor( - [ - [2, 3, 4, 5, 6], - [7, 8, 9, 10, 1], - [11, 12, 1, 1, 1], - ] - ) - - self.assertAlmostEqual( - right_pad, - utils.convert_padding_direction( - left_pad, - pad, - left_to_right=True, - ), - ) - self.assertAlmostEqual( - left_pad, - utils.convert_padding_direction( - right_pad, - pad, - right_to_left=True, - ), - ) - - def test_make_positions(self): - pad = 1 - left_pad_input = torch.LongTensor( - [ - [9, 9, 9, 9, 9], - [1, 9, 9, 9, 9], - [1, 1, 1, 9, 9], - ] - ) - left_pad_output = torch.LongTensor( - [ - [2, 3, 4, 5, 6], - [1, 2, 3, 4, 5], - [1, 1, 1, 2, 3], - ] - ) - right_pad_input = torch.LongTensor( - [ - [9, 9, 9, 9, 9], - [9, 9, 9, 9, 1], - [9, 9, 1, 1, 1], - ] - ) - right_pad_output = torch.LongTensor( - [ - [2, 3, 4, 5, 6], - [2, 3, 4, 5, 1], - [2, 3, 1, 1, 1], - ] - ) - - self.assertAlmostEqual( - left_pad_output, - utils.make_positions(left_pad_input, pad), - ) - self.assertAlmostEqual( - right_pad_output, - utils.make_positions(right_pad_input, pad), - ) - - def test_clip_grad_norm_(self): - params = torch.nn.Parameter(torch.zeros(5)).requires_grad_(False) - grad_norm = utils.clip_grad_norm_(params, 1.0) - self.assertTrue(torch.is_tensor(grad_norm)) - self.assertEqual(grad_norm, 0.0) - - params = [torch.nn.Parameter(torch.zeros(5)) for i in range(3)] - for p in params: - p.grad = torch.full((5,), fill_value=2.0) - grad_norm = utils.clip_grad_norm_(params, 1.0) - exp_grad_norm = torch.full((15,), fill_value=2.0).norm() - self.assertTrue(torch.is_tensor(grad_norm)) - self.assertEqual(grad_norm, exp_grad_norm) - - grad_norm = utils.clip_grad_norm_(params, 1.0) - self.assertAlmostEqual(grad_norm, torch.tensor(1.0)) - - def test_resolve_max_positions_with_tuple(self): - resolved = utils.resolve_max_positions(None, (2000, 100, 2000), 12000) - self.assertEqual(resolved, (2000, 100, 2000)) - - def assertAlmostEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertLess(utils.item((t1 - t2).abs().max()), 1e-4) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/truncated_bptt/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/truncated_bptt/README.md deleted file mode 100644 index 86518c9d5ef09fbd4fed1512a52e9431b74f08fa..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/truncated_bptt/README.md +++ /dev/null @@ -1,70 +0,0 @@ -# Truncated 
Backpropagation Through Time (BPTT) - -Truncated BPTT is a useful technique for training language models on very long -sequences. Typically a long sequences is split into chunks and a language model -is trained over the chunks sequentially. The LM may condition on previous -chunks, but gradients only flow through the current chunk. This technique was -the basis for the paper: [Transformer-XL: Attentive Language Models Beyond a -Fixed-Length Context](https://arxiv.org/abs/1901.02860), which achieved -state-of-the-art language modeling results at the time of publication. - -It is slightly tricky to implement Truncated BPTT efficiently in fairseq, since -we need to iterate over the data sequentially and disable any batch shuffling -logic. The code provided in this example illustrates how to implement Truncated -BPTT in fairseq by overriding ``FairseqTask::get_batch_iterator`` to iterate -over the data sequentially. Crucially, this example supports batching and -multi-GPU (data parallel) training. - -##### 0. Setup - -First, see the general [language modeling README](README.md) for instructions on -preprocessing the WikiText-103 data. - -##### 1. Train a Transformer-XL model on WikiText-103 - -We will train a 16-layer Transformer-XL model following the [hyperparameters -used in the original -paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_wt103_base.sh). - -The following command assumes 4 GPUs, so that the total batch size is 60 -sequences (15 x 4). Training should take ~24 hours on 4 V100 GPUs: -```bash -CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train \ - --user-dir examples/truncated_bptt \ - data-bin/wikitext-103/ \ - --task truncated_bptt_lm --tokens-per-sample 150 \ - --batch-size 15 --max-update 200000 \ - --arch transformer_xl --n-layer 16 --d-model 410 --n-head 10 \ - --d-head 41 --d-inner 2100 --dropout 0.1 --dropatt 0.0 --mem-len 150 \ - --optimizer adam --clip-norm 0.25 \ - --lr-scheduler cosine --warmup-updates 0 --min-lr 0.0 --lr 0.00025 \ - --log-format json --log-interval 25 \ - --fp16 -``` - -If training on a single GPU, set `--update-freq=4` to accumulate 4x gradients -and simulate training on 4 GPUs. - -##### 2. Evaluate - -```bash -fairseq-eval-lm data-bin/wikitext-103/ \ - --path checkpoints/checkpoint_best.pt \ - --user-dir examples/truncated_bptt/ \ - --task truncated_bptt_lm \ - --batch-size 1 --required-batch-size-multiple 1 \ - --model-overrides '{"mem_len":640,"clamp_len":400,"same_length":True}' \ - --tokens-per-sample 64 -# ... | INFO | fairseq_cli.eval_lm | num. model params: 151123537 -# ... | INFO | fairseq_cli.eval_lm | Evaluated 245569 tokens in 83.1s (2956.82 tokens/s) -# ... | INFO | fairseq_cli.eval_lm | Loss (base 2): 4.5668, Perplexity: 23.70 -# Compare to 24.0 test perplexity from the paper -``` - -*Note:* During training the model saw 150 tokens of context -(``--tokens-per-sample=150``) and 150 extra memory tokens (``--mem-len=150``). -During evaluation we measure perplexity on sequences of 64 tokens -(``--tokens-per-sample=64``) and increase the memory length -(``--model-overrides='{"mem_len":640}'``). These settings match the evaluation -settings from [the original -paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_wt103_base.sh). 
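To make the chunk iteration concrete, here is a minimal, self-contained sketch of a truncated BPTT training loop in plain PyTorch. This is not fairseq's implementation — the `model(inputs, state)` interface, the loss, and the state handling are illustrative assumptions — but it shows the core pattern described above: carry state across chunks while detaching it so gradients only flow through the current chunk.

```python
import torch
import torch.nn.functional as F


def train_truncated_bptt(model, optimizer, sequence, chunk_size):
    """Illustrative truncated BPTT loop (hypothetical model interface).

    Assumes ``model(inputs, state)`` returns ``(logits, new_state)``, like a
    recurrent or Transformer-XL-style LM, and that ``sequence`` is a 1-D
    LongTensor of token ids.
    """
    state = None  # memory carried across chunks; None for the first chunk
    for start in range(0, sequence.size(0) - 1, chunk_size):
        end = min(start + chunk_size, sequence.size(0) - 1)
        inputs = sequence[start:end]            # current chunk
        targets = sequence[start + 1:end + 1]   # next-token targets

        logits, state = model(inputs, state)
        loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets)

        optimizer.zero_grad()
        loss.backward()  # gradients flow only through the current chunk
        optimizer.step()

        # Detach the carried state: the next chunk still conditions on it,
        # but backpropagation will not reach into previous chunks.
        if state is not None:
            if isinstance(state, (list, tuple)):
                state = [s.detach() for s in state]
            else:
                state = state.detach()
```

In Transformer-XL terms, `state` plays the role of the `--mem-len` cache of hidden states from earlier chunks; iterating over the data sequentially without batch shuffling, as this example's task does, is what keeps consecutive chunks of the same sequence arriving in order.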
diff --git a/spaces/Open-Orca/Mistral-7B-OpenOrca/app.py b/spaces/Open-Orca/Mistral-7B-OpenOrca/app.py deleted file mode 100644 index 7a53b3dc32d2853bb84837368e32bebf40ec3b60..0000000000000000000000000000000000000000 --- a/spaces/Open-Orca/Mistral-7B-OpenOrca/app.py +++ /dev/null @@ -1,128 +0,0 @@ -import os -import re -import logging -import gradio as gr -import openai - -print(os.environ) -openai.api_base = os.environ.get("OPENAI_API_BASE") -openai.api_key = os.environ.get("OPENAI_API_KEY") - -BASE_SYSTEM_MESSAGE = """I carefully provide accurate, factual, thoughtful, nuanced answers and am brilliant at reasoning. -I am an assistant who thinks through their answers step-by-step to be sure I always get the right answer. -I think more clearly if I write out my thought process in a scratchpad manner first; therefore, I always explain background context, assumptions, and step-by-step thinking BEFORE trying to answer or solve anything.""" - -def make_prediction(prompt, max_tokens=None, temperature=None, top_p=None, top_k=None, repetition_penalty=None): - completion = openai.Completion.create(model="Open-Orca/Mistral-7B-OpenOrca", prompt=prompt, max_tokens=max_tokens, temperature=temperature, top_p=top_p, top_k=top_k, repetition_penalty=repetition_penalty, stream=True, stop=["", "<|im_end|>"]) - for chunk in completion: - yield chunk["choices"][0]["text"] - - -def clear_chat(chat_history_state, chat_message): - chat_history_state = [] - chat_message = '' - return chat_history_state, chat_message - - -def user(message, history): - history = history or [] - # Append the user's message to the conversation history - history.append([message, ""]) - return "", history - - -def chat(history, system_message, max_tokens, temperature, top_p, top_k, repetition_penalty): - history = history or [] - - if system_message.strip(): - messages = "<|im_start|> "+"system\n" + system_message.strip() + "<|im_end|>\n" + \ - "\n".join(["\n".join(["<|im_start|> "+"user\n"+item[0]+"<|im_end|>", "<|im_start|> assistant\n"+item[1]+"<|im_end|>"]) - for item in history]) - else: - messages = "<|im_start|> "+"system\n" + BASE_SYSTEM_MESSAGE + "<|im_end|>\n" + \ - "\n".join(["\n".join(["<|im_start|> "+"user\n"+item[0]+"<|im_end|>", "<|im_start|> assistant\n"+item[1]+"<|im_end|>"]) - for item in history]) - # strip the last `<|end_of_turn|>` from the messages - messages = messages.rstrip("<|im_end|>") - # remove last space from assistant, some models output a ZWSP if you leave a space - messages = messages.rstrip() - - # If temperature is set to 0, force Top P to 1 and Top K to -1 - if temperature == 0: - top_p = 1 - top_k = -1 - - prediction = make_prediction( - messages, - max_tokens=max_tokens, - temperature=temperature, - top_p=top_p, - top_k=top_k, - repetition_penalty=repetition_penalty, - ) - for tokens in prediction: - tokens = re.findall(r'(.*?)(\s|$)', tokens) - for subtoken in tokens: - subtoken = "".join(subtoken) - answer = subtoken - history[-1][1] += answer - # stream the response - yield history, history, "" - - -start_message = "" - -CSS =""" -.contain { display: flex; flex-direction: column; } -.gradio-container { height: 100vh !important; } -#component-0 { height: 100%; } -#chatbot { flex-grow: 1; overflow: auto; resize: vertical; } -""" - -#with gr.Blocks() as demo: -with gr.Blocks(css=CSS) as demo: - with gr.Row(): - with gr.Column(): - gr.Markdown(f""" - ## This demo is an unquantized GPU chatbot of [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) - Brought to you by your 
friends at Alignment Lab AI, OpenChat, and Open Access AI Collective! - """) - with gr.Row(): - gr.Markdown("# 🐋 Mistral-7B-OpenOrca Playground Space! 🐋") - with gr.Row(): - #chatbot = gr.Chatbot().style(height=500) - chatbot = gr.Chatbot(elem_id="chatbot") - with gr.Row(): - message = gr.Textbox( - label="What do you want to chat about?", - placeholder="Ask me anything.", - lines=3, - ) - with gr.Row(): - submit = gr.Button(value="Send message", variant="secondary").style(full_width=True) - clear = gr.Button(value="New topic", variant="secondary").style(full_width=False) - stop = gr.Button(value="Stop", variant="secondary").style(full_width=False) - with gr.Accordion("Show Model Parameters", open=False): - with gr.Row(): - with gr.Column(): - max_tokens = gr.Slider(20, 2500, label="Max Tokens", step=20, value=500) - temperature = gr.Slider(0.0, 2.0, label="Temperature", step=0.1, value=0.4) - top_p = gr.Slider(0.0, 1.0, label="Top P", step=0.05, value=0.95) - top_k = gr.Slider(1, 100, label="Top K", step=1, value=40) - repetition_penalty = gr.Slider(1.0, 2.0, label="Repetition Penalty", step=0.1, value=1.1) - - system_msg = gr.Textbox( - start_message, label="System Message", interactive=True, visible=True, placeholder="System prompt. Provide instructions which you want the model to remember.", lines=5) - - chat_history_state = gr.State() - clear.click(clear_chat, inputs=[chat_history_state, message], outputs=[chat_history_state, message], queue=False) - clear.click(lambda: None, None, chatbot, queue=False) - - submit_click_event = submit.click( - fn=user, inputs=[message, chat_history_state], outputs=[message, chat_history_state], queue=True - ).then( - fn=chat, inputs=[chat_history_state, system_msg, max_tokens, temperature, top_p, top_k, repetition_penalty], outputs=[chatbot, chat_history_state, message], queue=True - ) - stop.click(fn=None, inputs=None, outputs=None, cancels=[submit_click_event], queue=False) - -demo.queue(max_size=128, concurrency_count=48).launch(debug=True, server_name="0.0.0.0", server_port=7860) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/datasets/grit_coco.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/datasets/grit_coco.py deleted file mode 100644 index fea81f7dd8ad2c27dac8438753b845ab64cef81e..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/datasets/grit_coco.py +++ /dev/null @@ -1,112 +0,0 @@ -import logging -import os -from fvcore.common.timer import Timer -from detectron2.structures import BoxMode -from fvcore.common.file_io import PathManager -from detectron2.data import DatasetCatalog, MetadataCatalog -from lvis import LVIS - -logger = logging.getLogger(__name__) - -__all__ = ["load_GRiTcoco_json", "register_GRiTcoco_instances"] - - -def register_GRiTcoco_instances(name, metadata, json_file, image_root): - """ - """ - DatasetCatalog.register(name, lambda: load_GRiTcoco_json( - json_file, image_root, name)) - MetadataCatalog.get(name).set( - json_file=json_file, image_root=image_root, - evaluator_type="coco", **metadata - ) - - -def get_GRiTcoco_meta(): - categories = [{'supercategory': 'object', 'id': 1, 'name': 'object'}] - categories = sorted(categories, key=lambda x: x["id"]) - thing_classes = [k["name"] for k in categories] - meta = {"thing_classes": thing_classes} - return meta - - -def load_GRiTcoco_json(json_file, image_root, dataset_name=None): - ''' - Load COCO class name text for object description for GRiT - ''' - - json_file = 
PathManager.get_local_path(json_file) - - timer = Timer() - lvis_api = LVIS(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format( - json_file, timer.seconds())) - - class_names = {} - sort_cat = sorted(lvis_api.dataset['categories'], key=lambda x: x['id']) - for x in sort_cat: - class_names[x['id']] = x['name'] - - img_ids = sorted(lvis_api.imgs.keys()) - imgs = lvis_api.load_imgs(img_ids) - anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids] - - ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image] - assert len(set(ann_ids)) == len(ann_ids), \ - "Annotation ids in '{}' are not unique".format(json_file) - - imgs_anns = list(zip(imgs, anns)) - logger.info("Loaded {} images in the LVIS v1 format from {}".format( - len(imgs_anns), json_file)) - - dataset_dicts = [] - - for (img_dict, anno_dict_list) in imgs_anns: - record = {} - if "file_name" in img_dict: - file_name = img_dict["file_name"] - record["file_name"] = os.path.join(image_root, file_name) - - record["height"] = int(img_dict["height"]) - record["width"] = int(img_dict["width"]) - image_id = record["image_id"] = img_dict["id"] - - objs = [] - for anno in anno_dict_list: - assert anno["image_id"] == image_id - if anno.get('iscrowd', 0) > 0: - continue - obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS} - obj["category_id"] = 0 - obj["object_description"] = class_names[anno['category_id']] - if 'segmentation' in anno: - segm = anno["segmentation"] - valid_segm = [poly for poly in segm \ - if len(poly) % 2 == 0 and len(poly) >= 6] - if not len(segm) == len(valid_segm): - print('Annotation contains an invalid polygon with < 3 points') - assert len(segm) > 0 - obj["segmentation"] = segm - objs.append(obj) - record["annotations"] = objs - if len(record["annotations"]) == 0: - continue - record["task"] = "ObjectDet" - dataset_dicts.append(record) - - return dataset_dicts - - -_CUSTOM_SPLITS_LVIS = { - "GRiT_coco2017_train": ("coco/train2017/", "coco/annotations/instances_train2017.json"), -} - - -for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS.items(): - register_GRiTcoco_instances( - key, - get_GRiTcoco_meta(), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) \ No newline at end of file diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/modulated_deform_conv.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/modulated_deform_conv.py deleted file mode 100644 index 75559579cf053abcc99538606cbb88c723faf783..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/modulated_deform_conv.py +++ /dev/null @@ -1,282 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
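# What this file implements: Modulated Deformable Convolution (a.k.a. DCNv2,
# as registered under CONV_LAYERS further below). The convolution samples its
# input at positions shifted by learned per-location offsets and scales each
# sampled value by a learned sigmoid modulation mask; the forward and backward
# passes are delegated to the compiled mmcv extension ops loaded just below
# ('modulated_deform_conv_forward' / 'modulated_deform_conv_backward').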
-import math - -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair, _single - -from annotator.uniformer.mmcv.utils import deprecated_api_warning -from ..cnn import CONV_LAYERS -from ..utils import ext_loader, print_log - -ext_module = ext_loader.load_ext( - '_ext', - ['modulated_deform_conv_forward', 'modulated_deform_conv_backward']) - - -class ModulatedDeformConv2dFunction(Function): - - @staticmethod - def symbolic(g, input, offset, mask, weight, bias, stride, padding, - dilation, groups, deform_groups): - input_tensors = [input, offset, mask, weight] - if bias is not None: - input_tensors.append(bias) - return g.op( - 'mmcv::MMCVModulatedDeformConv2d', - *input_tensors, - stride_i=stride, - padding_i=padding, - dilation_i=dilation, - groups_i=groups, - deform_groups_i=deform_groups) - - @staticmethod - def forward(ctx, - input, - offset, - mask, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - deform_groups=1): - if input is not None and input.dim() != 4: - raise ValueError( - f'Expected 4D tensor as input, got {input.dim()}D tensor \ - instead.') - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deform_groups = deform_groups - ctx.with_bias = bias is not None - if not ctx.with_bias: - bias = input.new_empty(0) # fake tensor - # When pytorch version >= 1.6.0, amp is adopted for fp16 mode; - # amp won't cast the type of model (float32), but "offset" is cast - # to float16 by nn.Conv2d automatically, leading to the type - # mismatch with input (when it is float32) or weight. - # The flag for whether to use fp16 or amp is the type of "offset", - # we cast weight and input to temporarily support fp16 and amp - # whatever the pytorch version is. 
- input = input.type_as(offset) - weight = weight.type_as(input) - ctx.save_for_backward(input, offset, mask, weight, bias) - output = input.new_empty( - ModulatedDeformConv2dFunction._output_size(ctx, input, weight)) - ctx._bufs = [input.new_empty(0), input.new_empty(0)] - ext_module.modulated_deform_conv_forward( - input, - weight, - bias, - ctx._bufs[0], - offset, - mask, - output, - ctx._bufs[1], - kernel_h=weight.size(2), - kernel_w=weight.size(3), - stride_h=ctx.stride[0], - stride_w=ctx.stride[1], - pad_h=ctx.padding[0], - pad_w=ctx.padding[1], - dilation_h=ctx.dilation[0], - dilation_w=ctx.dilation[1], - group=ctx.groups, - deformable_group=ctx.deform_groups, - with_bias=ctx.with_bias) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, mask, weight, bias = ctx.saved_tensors - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - grad_mask = torch.zeros_like(mask) - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(bias) - grad_output = grad_output.contiguous() - ext_module.modulated_deform_conv_backward( - input, - weight, - bias, - ctx._bufs[0], - offset, - mask, - ctx._bufs[1], - grad_input, - grad_weight, - grad_bias, - grad_offset, - grad_mask, - grad_output, - kernel_h=weight.size(2), - kernel_w=weight.size(3), - stride_h=ctx.stride[0], - stride_w=ctx.stride[1], - pad_h=ctx.padding[0], - pad_w=ctx.padding[1], - dilation_h=ctx.dilation[0], - dilation_w=ctx.dilation[1], - group=ctx.groups, - deformable_group=ctx.deform_groups, - with_bias=ctx.with_bias) - if not ctx.with_bias: - grad_bias = None - - return (grad_input, grad_offset, grad_mask, grad_weight, grad_bias, - None, None, None, None, None) - - @staticmethod - def _output_size(ctx, input, weight): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = ctx.padding[d] - kernel = ctx.dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = ctx.stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, ) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError( - 'convolution input is too small (output would be ' + - 'x'.join(map(str, output_size)) + ')') - return output_size - - -modulated_deform_conv2d = ModulatedDeformConv2dFunction.apply - - -class ModulatedDeformConv2d(nn.Module): - - @deprecated_api_warning({'deformable_groups': 'deform_groups'}, - cls_name='ModulatedDeformConv2d') - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deform_groups=1, - bias=True): - super(ModulatedDeformConv2d, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deform_groups = deform_groups - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // groups, - *self.kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.register_parameter('bias', None) - self.init_weights() - - def init_weights(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. 
/ math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - if self.bias is not None: - self.bias.data.zero_() - - def forward(self, x, offset, mask): - return modulated_deform_conv2d(x, offset, mask, self.weight, self.bias, - self.stride, self.padding, - self.dilation, self.groups, - self.deform_groups) - - -@CONV_LAYERS.register_module('DCNv2') -class ModulatedDeformConv2dPack(ModulatedDeformConv2d): - """A ModulatedDeformable Conv Encapsulation that acts as normal Conv - layers. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int): Same as nn.Conv2d, while tuple is not supported. - padding (int): Same as nn.Conv2d, while tuple is not supported. - dilation (int): Same as nn.Conv2d, while tuple is not supported. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(ModulatedDeformConv2dPack, self).__init__(*args, **kwargs) - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deform_groups * 3 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=self.stride, - padding=self.padding, - dilation=self.dilation, - bias=True) - self.init_weights() - - def init_weights(self): - super(ModulatedDeformConv2dPack, self).init_weights() - if hasattr(self, 'conv_offset'): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - out = self.conv_offset(x) - o1, o2, mask = torch.chunk(out, 3, dim=1) - offset = torch.cat((o1, o2), dim=1) - mask = torch.sigmoid(mask) - return modulated_deform_conv2d(x, offset, mask, self.weight, self.bias, - self.stride, self.padding, - self.dilation, self.groups, - self.deform_groups) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - version = local_metadata.get('version', None) - - if version is None or version < 2: - # the key is different in early versions - # In version < 2, ModulatedDeformConvPack - # loads previous benchmark models. 
- if (prefix + 'conv_offset.weight' not in state_dict - and prefix[:-1] + '_offset.weight' in state_dict): - state_dict[prefix + 'conv_offset.weight'] = state_dict.pop( - prefix[:-1] + '_offset.weight') - if (prefix + 'conv_offset.bias' not in state_dict - and prefix[:-1] + '_offset.bias' in state_dict): - state_dict[prefix + - 'conv_offset.bias'] = state_dict.pop(prefix[:-1] + - '_offset.bias') - - if version is not None and version > 1: - print_log( - f'ModulatedDeformConvPack {prefix.rstrip(".")} is upgraded to ' - 'version 2.', - logger='root') - - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/gap-buffer.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/gap-buffer.go deleted file mode 100644 index 02692c462779ec40201c14d6ef3b2e2b8b0ae986..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/gap-buffer.go and /dev/null differ diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/samplers/grouped_batch_sampler.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/samplers/grouped_batch_sampler.py deleted file mode 100644 index d72e2f0265e1016e7bbac67590075fda2bc28a55..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/samplers/grouped_batch_sampler.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import itertools - -import torch -from torch.utils.data.sampler import BatchSampler -from torch.utils.data.sampler import Sampler - - -class GroupedBatchSampler(BatchSampler): - """ - Wraps another sampler to yield a mini-batch of indices. - It enforces that elements from the same group should appear in groups of batch_size. - It also tries to provide mini-batches which follows an ordering which is - as close as possible to the ordering from the original sampler. - - Arguments: - sampler (Sampler): Base sampler. - batch_size (int): Size of mini-batch. - drop_uneven (bool): If ``True``, the sampler will drop the batches whose - size is less than ``batch_size`` - - """ - - def __init__(self, sampler, group_ids, batch_size, drop_uneven=False): - if not isinstance(sampler, Sampler): - raise ValueError( - "sampler should be an instance of " - "torch.utils.data.Sampler, but got sampler={}".format(sampler) - ) - self.sampler = sampler - self.group_ids = torch.as_tensor(group_ids) - assert self.group_ids.dim() == 1 - self.batch_size = batch_size - self.drop_uneven = drop_uneven - - self.groups = torch.unique(self.group_ids).sort(0)[0] - - self._can_reuse_batches = False - - def _prepare_batches(self): - dataset_size = len(self.group_ids) - # get the sampled indices from the sampler - sampled_ids = torch.as_tensor(list(self.sampler)) - # potentially not all elements of the dataset were sampled - # by the sampler (e.g., DistributedSampler). - # construct a tensor which contains -1 if the element was - # not sampled, and a non-negative number indicating the - # order where the element was sampled. - # for example. 
if sampled_ids = [3, 1] and dataset_size = 5, - # the order is [-1, 1, -1, 0, -1] - order = torch.full((dataset_size,), -1, dtype=torch.int64) - order[sampled_ids] = torch.arange(len(sampled_ids)) - - # get a mask with the elements that were sampled - mask = order >= 0 - - # find the elements that belong to each individual cluster - clusters = [(self.group_ids == i) & mask for i in self.groups] - # get relative order of the elements inside each cluster - # that follows the order from the sampler - relative_order = [order[cluster] for cluster in clusters] - # with the relative order, find the absolute order in the - # sampled space - permutation_ids = [s[s.sort()[1]] for s in relative_order] - # permute each cluster so that they follow the order from - # the sampler - permuted_clusters = [sampled_ids[idx] for idx in permutation_ids] - - # splits each cluster in batch_size, and merge as a list of tensors - splits = [c.split(self.batch_size) for c in permuted_clusters] - merged = tuple(itertools.chain.from_iterable(splits)) - - # now each batch internally has the right order, but - # they are grouped by clusters. Find the permutation between - # different batches that brings them as close as possible to - # the order that we have in the sampler. For that, we will consider the - # ordering as coming from the first element of each batch, and sort - # correspondingly - first_element_of_batch = [t[0].item() for t in merged] - # get and inverse mapping from sampled indices and the position where - # they occur (as returned by the sampler) - inv_sampled_ids_map = {v: k for k, v in enumerate(sampled_ids.tolist())} - # from the first element in each batch, get a relative ordering - first_index_of_batch = torch.as_tensor( - [inv_sampled_ids_map[s] for s in first_element_of_batch] - ) - - # permute the batches so that they approximately follow the order - # from the sampler - permutation_order = first_index_of_batch.sort(0)[1].tolist() - # finally, permute the batches - batches = [merged[i].tolist() for i in permutation_order] - - if self.drop_uneven: - kept = [] - for batch in batches: - if len(batch) == self.batch_size: - kept.append(batch) - batches = kept - return batches - - def __iter__(self): - if self._can_reuse_batches: - batches = self._batches - self._can_reuse_batches = False - else: - batches = self._prepare_batches() - self._batches = batches - return iter(batches) - - def __len__(self): - if not hasattr(self, "_batches"): - self._batches = self._prepare_batches() - self._can_reuse_batches = True - return len(self._batches) diff --git a/spaces/Podtekatel/Arcane_Style_Transfer/inference/face_detector.py b/spaces/Podtekatel/Arcane_Style_Transfer/inference/face_detector.py deleted file mode 100644 index bb33ea1ccc50e6a58ab3ef3d22c3d616900c26c2..0000000000000000000000000000000000000000 --- a/spaces/Podtekatel/Arcane_Style_Transfer/inference/face_detector.py +++ /dev/null @@ -1,121 +0,0 @@ -import os -from abc import ABC, abstractmethod -from typing import List - -import cv2 -import numpy as np -from retinaface import RetinaFace -from retinaface.model import retinaface_model - -from .box_utils import convert_to_square - - -class FaceDetector(ABC): - def __init__(self, target_size): - self.target_size = target_size - @abstractmethod - def detect_crops(self, img, *args, **kwargs) -> List[np.ndarray]: - """ - Img is a numpy ndarray in range [0..255], uint8 dtype, RGB type - Returns ndarray with [x1, y1, x2, y2] in row - """ - pass - - @abstractmethod - def postprocess_crops(self, crops, *args, 
**kwargs) -> List[np.ndarray]: - return crops - - def sort_faces(self, crops): - sorted_faces = sorted(crops, key=lambda x: -(x[2] - x[0]) * (x[3] - x[1])) - sorted_faces = np.stack(sorted_faces, axis=0) - return sorted_faces - - def fix_range_crops(self, img, crops): - H, W, _ = img.shape - final_crops = [] - for crop in crops: - x1, y1, x2, y2 = crop - x1 = max(min(round(x1), W), 0) - y1 = max(min(round(y1), H), 0) - x2 = max(min(round(x2), W), 0) - y2 = max(min(round(y2), H), 0) - new_crop = [x1, y1, x2, y2] - final_crops.append(new_crop) - final_crops = np.array(final_crops, dtype=int)  # np.int was removed in NumPy 1.24; the builtin int is equivalent - return final_crops - - def crop_faces(self, img, crops) -> List[np.ndarray]: - cropped_faces = [] - for crop in crops: - x1, y1, x2, y2 = crop - face_crop = img[y1:y2, x1:x2, :] - cropped_faces.append(face_crop) - return cropped_faces - - def unify_and_merge(self, cropped_images): - return cropped_images - - def __call__(self, img): - return self.detect_faces(img) - - def detect_faces(self, img): - crops = self.detect_crops(img) - if crops is None or len(crops) == 0: - return [], [] - crops = self.sort_faces(crops) - updated_crops = self.postprocess_crops(crops) - updated_crops = self.fix_range_crops(img, updated_crops) - cropped_faces = self.crop_faces(img, updated_crops) - unified_faces = self.unify_and_merge(cropped_faces) - return unified_faces, updated_crops - - -class StatRetinaFaceDetector(FaceDetector): - def __init__(self, target_size=None): - super().__init__(target_size) - self.model = retinaface_model.build_model() - #self.relative_offsets = [0.3258, 0.5225, 0.3258, 0.1290] - self.relative_offsets = [0.3619, 0.5830, 0.3619, 0.1909] - - def postprocess_crops(self, crops, *args, **kwargs) -> np.ndarray: - final_crops = [] - x1_offset, y1_offset, x2_offset, y2_offset = self.relative_offsets - for crop in crops: - x1, y1, x2, y2 = crop - w, h = x2 - x1, y2 - y1 - x1 -= w * x1_offset - y1 -= h * y1_offset - x2 += w * x2_offset - y2 += h * y2_offset - crop = np.array([x1, y1, x2, y2], dtype=crop.dtype) - crop = convert_to_square(crop) - final_crops.append(crop) - final_crops = np.stack(final_crops, axis=0) - return final_crops - - def detect_crops(self, img, *args, **kwargs): - faces = RetinaFace.detect_faces(img, model=self.model) - crops = [] - if isinstance(faces, tuple): - faces = {} - for name, face in faces.items(): - x1, y1, x2, y2 = face['facial_area'] - crop = np.array([x1, y1, x2, y2]) - crops.append(crop) - if len(crops) > 0: - crops = np.stack(crops, axis=0) - return crops - - def unify_and_merge(self, cropped_images): - if self.target_size is None: - return cropped_images - else: - resized_images = [] - for cropped_image in cropped_images: - resized_image = cv2.resize(cropped_image, (self.target_size, self.target_size), - interpolation=cv2.INTER_LINEAR) - resized_images.append(resized_image) - - resized_images = np.stack(resized_images, axis=0) - return resized_images - diff --git a/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py deleted file mode 100644 index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000 --- a/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py +++ /dev/null @@ -1,97 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import parselmouth -import numpy as np - - -class PMF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, 
sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def compute_f0(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0 - - def compute_f0_uv(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0, uv diff --git a/spaces/Qiukai/gpt/core_functional.py b/spaces/Qiukai/gpt/core_functional.py deleted file mode 100644 index 536ccb609c38cbbebfda4ba17bd51a78857d711e..0000000000000000000000000000000000000000 --- a/spaces/Qiukai/gpt/core_functional.py +++ /dev/null @@ -1,71 +0,0 @@ -# 'primary' 颜色对应 theme.py 中的 primary_hue -# 'secondary' 颜色对应 theme.py 中的 neutral_hue -# 'stop' 颜色对应 theme.py 中的 color_er -# 默认按钮颜色是 secondary -from toolbox import clear_line_break - - -def get_core_functions(): - return { - "英语学术润色": { - # 前言 - "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " + - r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " + - r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n", - # 后语 - "Suffix": r"", - "Color": r"secondary", # 按钮颜色 - }, - "中文学术润色": { - "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," + - r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n", - "Suffix": r"", - }, - "查找语法错误": { - "Prefix": r"Can you help me ensure that the grammar and the spelling is correct? 
" + - r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good." + - r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, " + - r"put the original text the first column, " + - r"put the corrected text in the second column and highlight the key words you fixed.""\n" - r"Example:""\n" - r"Paragraph: How is you? Do you knows what is it?""\n" - r"| Original sentence | Corrected sentence |""\n" - r"| :--- | :--- |""\n" - r"| How **is** you? | How **are** you? |""\n" - r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n" - r"Below is a paragraph from an academic paper. " - r"You need to report all grammar and spelling mistakes as the example before." - + "\n\n", - "Suffix": r"", - "PreProcess": clear_line_break, # 预处理:清除换行符 - }, - "中译英": { - "Prefix": r"Please translate following sentence to English:" + "\n\n", - "Suffix": r"", - }, - "学术中英互译": { - "Prefix": r"I want you to act as a scientific English-Chinese translator, " + - r"I will provide you with some paragraphs in one language " + - r"and your task is to accurately and academically translate the paragraphs only into the other language. " + - r"Do not repeat the original provided paragraphs after translation. " + - r"You should use artificial intelligence tools, " + - r"such as natural language processing, and rhetorical knowledge " + - r"and experience about effective writing techniques to reply. " + - r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n", - "Suffix": "", - "Color": "secondary", - }, - "英译中": { - "Prefix": r"翻译成地道的中文:" + "\n\n", - "Suffix": r"", - }, - "找图片": { - "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," + - r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n", - "Suffix": r"", - }, - "解释代码": { - "Prefix": r"请解释以下代码:" + "\n```\n", - "Suffix": "\n```\n", - }, - } diff --git a/spaces/RMXK/RVC_HFF/tools/calc_rvc_model_similarity.py b/spaces/RMXK/RVC_HFF/tools/calc_rvc_model_similarity.py deleted file mode 100644 index 42496e088e51dc5162d0714470c2226f696e260c..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/tools/calc_rvc_model_similarity.py +++ /dev/null @@ -1,96 +0,0 @@ -# This code references https://huggingface.co/JosephusCheung/ASimilarityCalculatior/blob/main/qwerty.py -# Fill in the path of the model to be queried and the root directory of the reference models, and this script will return the similarity between the model to be queried and all reference models. 
-import os -import logging - -logger = logging.getLogger(__name__) - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def cal_cross_attn(to_q, to_k, to_v, rand_input): - hidden_dim, embed_dim = to_q.shape - attn_to_q = nn.Linear(hidden_dim, embed_dim, bias=False) - attn_to_k = nn.Linear(hidden_dim, embed_dim, bias=False) - attn_to_v = nn.Linear(hidden_dim, embed_dim, bias=False) - attn_to_q.load_state_dict({"weight": to_q}) - attn_to_k.load_state_dict({"weight": to_k}) - attn_to_v.load_state_dict({"weight": to_v}) - - return torch.einsum( - "ik, jk -> ik", - F.softmax( - torch.einsum("ij, kj -> ik", attn_to_q(rand_input), attn_to_k(rand_input)), - dim=-1, - ), - attn_to_v(rand_input), - ) - - -def model_hash(filename): - try: - with open(filename, "rb") as file: - import hashlib - - m = hashlib.sha256() - - file.seek(0x100000) - m.update(file.read(0x10000)) - return m.hexdigest()[0:8] - except FileNotFoundError: - return "NOFILE" - - -def eval(model, n, input): - qk = f"enc_p.encoder.attn_layers.{n}.conv_q.weight" - uk = f"enc_p.encoder.attn_layers.{n}.conv_k.weight" - vk = f"enc_p.encoder.attn_layers.{n}.conv_v.weight" - atoq, atok, atov = model[qk][:, :, 0], model[uk][:, :, 0], model[vk][:, :, 0] - - attn = cal_cross_attn(atoq, atok, atov, input) - return attn - - -def main(path, root): - torch.manual_seed(114514) - model_a = torch.load(path, map_location="cpu")["weight"] - - logger.info("Query:\t\t%s\t%s" % (path, model_hash(path))) - - map_attn_a = {} - map_rand_input = {} - for n in range(6): - hidden_dim, embed_dim, _ = model_a[ - f"enc_p.encoder.attn_layers.{n}.conv_v.weight" - ].shape - rand_input = torch.randn([embed_dim, hidden_dim]) - - map_attn_a[n] = eval(model_a, n, rand_input) - map_rand_input[n] = rand_input - - del model_a - - for name in sorted(list(os.listdir(root))): - path = "%s/%s" % (root, name) - model_b = torch.load(path, map_location="cpu")["weight"] - - sims = [] - for n in range(6): - attn_a = map_attn_a[n] - attn_b = eval(model_b, n, map_rand_input[n]) - - sim = torch.mean(torch.cosine_similarity(attn_a, attn_b)) - sims.append(sim) - - logger.info( - "Reference:\t%s\t%s\t%s" - % (path, model_hash(path), f"{torch.mean(torch.stack(sims)) * 1e2:.2f}%") - ) - - -if __name__ == "__main__": - query_path = r"assets\weights\mi v3.pth" - reference_root = r"assets\weights" - main(query_path, reference_root) diff --git a/spaces/RamAnanth1/videocrafter/extralibs/midas/midas/dpt_depth.py b/spaces/RamAnanth1/videocrafter/extralibs/midas/midas/dpt_depth.py deleted file mode 100644 index 95bd762d4a46a29e090687f775322809b5a7b6c5..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/videocrafter/extralibs/midas/midas/dpt_depth.py +++ /dev/null @@ -1,110 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .base_model import BaseModel -from .blocks import ( - FeatureFusionBlock, - FeatureFusionBlock_custom, - Interpolate, - _make_encoder, - forward_vit, -) - - -def _make_fusion_block(features, use_bn): - return FeatureFusionBlock_custom( - features, - nn.ReLU(False), - deconv=False, - bn=use_bn, - expand=False, - align_corners=True, - ) - - -class DPT(BaseModel): - def __init__( - self, - head, - features=256, - backbone="vitb_rn50_384", - readout="project", - channels_last=False, - use_bn=False, - ): - - super(DPT, self).__init__() - - self.channels_last = channels_last - - hooks = { - "vitb_rn50_384": [0, 1, 8, 11], - "vitb16_384": [2, 5, 8, 11], - "vitl16_384": [5, 11, 17, 23], - } - - # Instantiate 
backbone and reassemble blocks - self.pretrained, self.scratch = _make_encoder( - backbone, - features, - False, # Set to true of you want to train from scratch, uses ImageNet weights - groups=1, - expand=False, - exportable=False, - hooks=hooks[backbone], - use_readout=readout, - ) - - self.scratch.refinenet1 = _make_fusion_block(features, use_bn) - self.scratch.refinenet2 = _make_fusion_block(features, use_bn) - self.scratch.refinenet3 = _make_fusion_block(features, use_bn) - self.scratch.refinenet4 = _make_fusion_block(features, use_bn) - - self.scratch.output_conv = head - - - def forward(self, x): - if self.channels_last == True: - x.contiguous(memory_format=torch.channels_last) - - layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return out - - -class DPTDepthModel(DPT): - def __init__(self, path=None, non_negative=True, **kwargs): - features = kwargs["features"] if "features" in kwargs else 256 - - head = nn.Sequential( - nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear", align_corners=True), - nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - super().__init__(head, **kwargs) - - if path is not None: - self.load(path) - print("Midas depth estimation model loaded.") - - def forward(self, x): - return super().forward(x).squeeze(dim=1) - diff --git a/spaces/Riksarkivet/htr_demo/helper/text/overview/duplicate_api/api_code2.md b/spaces/Riksarkivet/htr_demo/helper/text/overview/duplicate_api/api_code2.md deleted file mode 100644 index 4de17d6784413aa343484e91de1048351258611d..0000000000000000000000000000000000000000 --- a/spaces/Riksarkivet/htr_demo/helper/text/overview/duplicate_api/api_code2.md +++ /dev/null @@ -1,26 +0,0 @@ -Loaded as API: http://127.0.0.1:7860/ ✔ - - - - - Swedish National Archives - 2023-08-21, 13:28:06 - - - - - - - - År 1865. - - - - - -...................................... - - - - -# Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings... 
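The "Loaded as API" banner and the truncated transcription above are the kind of output you see when driving the duplicated Space through its API. As a rough, hedged sketch of how that output could be produced: the endpoint name `run_htr_pipeline` is taken from the `api_name` registered on the pipeline button in `htr_tool.py` below, and the local URL matches the banner, but the client call shape and the example image path are assumptions, not shown in the original file.

```python
# Hypothetical client call reproducing the output above. Assumes a locally
# running Space (as in the "Loaded as API" banner) and an input image on disk.
from gradio_client import Client

client = Client("http://127.0.0.1:7860/")  # prints: Loaded as API: ... ✔
# "/run_htr_pipeline" is the api_name set on the pipeline button in htr_tool.py.
page_xml = client.predict("page_image.jpg", api_name="/run_htr_pipeline")
print(page_xml)  # rendered Page XML; truncated in the sample output above
```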
diff --git a/spaces/Riksarkivet/htr_demo/tabs/htr_tool.py b/spaces/Riksarkivet/htr_demo/tabs/htr_tool.py deleted file mode 100644 index 839e2787f944efd0b3b4ea778d5240d01c100e11..0000000000000000000000000000000000000000 --- a/spaces/Riksarkivet/htr_demo/tabs/htr_tool.py +++ /dev/null @@ -1,324 +0,0 @@ -import os - -import gradio as gr - -from helper.examples.examples import DemoImages -from helper.utils import TrafficDataHandler -from src.htr_pipeline.gradio_backend import ( - FastTrack, - SingletonModelLoader, - compare_diff_runs_highlight, - compute_cer_a_and_b_with_gt, - update_selected_tab_image_viewer, - update_selected_tab_model_compare, - update_selected_tab_output_and_setting, - upload_file, -) - -model_loader = SingletonModelLoader() -fast_track = FastTrack(model_loader) -images_for_demo = DemoImages() - -terminate = False - - -with gr.Blocks() as htr_tool_tab: - with gr.Row(equal_height=True): - with gr.Column(scale=2): - with gr.Row(): - fast_track_input_region_image = gr.Image( - label="Image to run HTR on", type="numpy", tool="editor", elem_id="image_upload", height=395 - ) - - with gr.Row(): - with gr.Tab("HTRFLOW") as tab_output_and_setting_selector: - with gr.Row(): - stop_htr_button = gr.Button( - value="Stop run", - variant="stop", - ) - - htr_pipeline_button = gr.Button( - "Run ", - variant="primary", - visible=True, - elem_id="run_pipeline_button", - ) - htr_pipeline_button_var = gr.State(value="htr_pipeline_button") - - htr_pipeline_button_api = gr.Button("Run pipeline", variant="primary", visible=False, scale=1) - - fast_file_downlod = gr.File( - label="Download output file", visible=True, scale=1, height=100, elem_id="download_file" - ) - - with gr.Tab("Visualize") as tab_image_viewer_selector: - with gr.Row(): - gr.Markdown("") - run_image_visualizer_button = gr.Button( - value="Visualize results", variant="primary", interactive=True - ) - - selection_text_from_image_viewer = gr.Textbox( - interactive=False, label="Text Selector", info="Select a line on Image Viewer to return text" - ) - - with gr.Tab("Compare") as tab_model_compare_selector: - with gr.Row(): - diff_runs_button = gr.Button("Compare runs", variant="primary", visible=True) - calc_cer_button_fast = gr.Button("Calculate CER", variant="primary", visible=True) - with gr.Row(): - cer_output_fast = gr.Textbox( - label="Character Error Rate:", - info="The percentage of characters that have been transcribed incorrectly", - ) - - with gr.Column(scale=4): - with gr.Box(): - with gr.Row(visible=True) as output_and_setting_tab: - with gr.Column(scale=2): - fast_name_files_placeholder = gr.Markdown(visible=False) - gr.Examples( - examples=images_for_demo.examples_list, - inputs=[fast_name_files_placeholder, fast_track_input_region_image], - label="Example images", - examples_per_page=5, - ) - - gr.Markdown(" ") - - with gr.Column(scale=3): - with gr.Group(): - gr.Markdown("   ⚙️ Settings ") - with gr.Row(): - radio_file_input = gr.CheckboxGroup( - choices=["Txt", "Page XML"], - value=["Txt", "Page XML"], - label="Output file extension", - info="JSON and ALTO-XML will be added", - scale=1, - ) - with gr.Row(): - gr.Checkbox( - value=True, - label="Binarize image", - info="Binarize image to reduce background noise", - ) - gr.Checkbox( - value=True, - label="Output prediction threshold", - info="Output XML with prediction score", - ) - - with gr.Accordion("Advanced settings", open=False): - with gr.Group(): - with gr.Row(): - htr_tool_region_segment_model_dropdown = gr.Dropdown( - 
choices=["Riksarkivet/rtmdet_region"], - value="Riksarkivet/rtmdet_region", - label="Region segmentation models", - info="More models will be added", - ) - - gr.Slider( - minimum=0.4, - maximum=1, - value=0.5, - step=0.05, - label="P-threshold", - info="""Filter confidence score for a prediction score to be considered""", - ) - - with gr.Row(): - htr_tool_line_segment_model_dropdown = gr.Dropdown( - choices=["Riksarkivet/rtmdet_lines"], - value="Riksarkivet/rtmdet_lines", - label="Line segmentation models", - info="More models will be added", - ) - - gr.Slider( - minimum=0.4, - maximum=1, - value=0.5, - step=0.05, - label="P-threshold", - info="""Filter confidence score for a prediction score to be considered""", - ) - - with gr.Row(): - htr_tool_transcriber_model_dropdown = gr.Dropdown( - choices=[ - "Riksarkivet/satrn_htr", - "microsoft/trocr-base-handwritten", - "pstroe/bullinger-general-model", - ], - value="Riksarkivet/satrn_htr", - label="Text recognition models", - info="More models will be added", - ) - - gr.Slider( - value=0.6, - minimum=0.5, - maximum=1, - label="HTR threshold", - info="Prediction score threshold for transcribed lines", - scale=1, - ) - with gr.Row(): - gr.Markdown("   More settings will be added") - - with gr.Row(visible=False) as image_viewer_tab: - text_polygon_dict = gr.Variable() - - fast_track_output_image = gr.Image( - label="Image Viewer", type="numpy", height=600, interactive=False - ) - - with gr.Column(visible=False) as model_compare_selector: - with gr.Row(): - gr.Markdown("Compare different runs (Page XML output) with Ground Truth (GT)") - with gr.Row(): - with gr.Group(): - upload_button_run_a = gr.UploadButton("A", file_types=[".xml"], file_count="single") - file_input_xml_run_a = gr.File( - label=None, - file_count="single", - height=100, - elem_id="download_file", - interactive=False, - visible=False, - ) - - with gr.Group(): - upload_button_run_b = gr.UploadButton("B", file_types=[".xml"], file_count="single") - file_input_xml_run_b = gr.File( - label=None, - file_count="single", - height=100, - elem_id="download_file", - interactive=False, - visible=False, - ) - - with gr.Group(): - upload_button_run_gt = gr.UploadButton("GT", file_types=[".xml"], file_count="single") - file_input_xml_run_gt = gr.File( - label=None, - file_count="single", - height=100, - elem_id="download_file", - interactive=False, - visible=False, - ) - with gr.Tab("Comparing run A with B"): - text_diff_runs = gr.HighlightedText( - label="A with B", - combine_adjacent=True, - show_legend=True, - color_map={"+": "red", "-": "green"}, - ) - with gr.Tab("Compare run A with Ground Truth"): - text_diff_gt = gr.HighlightedText( - label="A with GT", - combine_adjacent=True, - show_legend=True, - color_map={"+": "red", "-": "green"}, - ) - - xml_rendered_placeholder_for_api = gr.Textbox(placeholder="XML", visible=False) - - htr_event_click_event = htr_pipeline_button.click( - fast_track.segment_to_xml, - inputs=[fast_track_input_region_image, radio_file_input, htr_tool_transcriber_model_dropdown], - outputs=[fast_file_downlod, fast_file_downlod], - api_name=False, - ) - - htr_pipeline_button_api.click( - fast_track.segment_to_xml_api, - inputs=[fast_track_input_region_image], - outputs=[xml_rendered_placeholder_for_api], - queue=False, - api_name="run_htr_pipeline", - ) - - tab_output_and_setting_selector.select( - fn=update_selected_tab_output_and_setting, - outputs=[output_and_setting_tab, image_viewer_tab, model_compare_selector], - api_name=False, - ) - - 
tab_image_viewer_selector.select( - fn=update_selected_tab_image_viewer, - outputs=[output_and_setting_tab, image_viewer_tab, model_compare_selector], - api_name=False, - ) - - tab_model_compare_selector.select( - fn=update_selected_tab_model_compare, - outputs=[output_and_setting_tab, image_viewer_tab, model_compare_selector], - api_name=False, - ) - - def stop_function(): - from src.htr_pipeline.utils import pipeline_inferencer - - pipeline_inferencer.terminate = True - gr.Info("The HTR execution was halted") - - stop_htr_button.click( - fn=stop_function, - inputs=None, - outputs=None, - api_name=False, - # cancels=[htr_event_click_event], - ) - - run_image_visualizer_button.click( - fn=fast_track.visualize_image_viewer, - inputs=fast_track_input_region_image, - outputs=[fast_track_output_image, text_polygon_dict], - api_name=False, - ) - - fast_track_output_image.select( - fast_track.get_text_from_coords, - inputs=text_polygon_dict, - outputs=selection_text_from_image_viewer, - api_name=False, - ) - - upload_button_run_a.upload( - upload_file, inputs=upload_button_run_a, outputs=[file_input_xml_run_a, file_input_xml_run_a], api_name=False - ) - - upload_button_run_b.upload( - upload_file, inputs=upload_button_run_b, outputs=[file_input_xml_run_b, file_input_xml_run_b], api_name=False - ) - - upload_button_run_gt.upload( - upload_file, inputs=upload_button_run_gt, outputs=[file_input_xml_run_gt, file_input_xml_run_gt], api_name=False - ) - - diff_runs_button.click( - fn=compare_diff_runs_highlight, - inputs=[file_input_xml_run_a, file_input_xml_run_b, file_input_xml_run_gt], - outputs=[text_diff_runs, text_diff_gt], - api_name=False, - ) - - calc_cer_button_fast.click( - fn=compute_cer_a_and_b_with_gt, - inputs=[file_input_xml_run_a, file_input_xml_run_b, file_input_xml_run_gt], - outputs=cer_output_fast, - api_name=False, - ) - - SECRET_KEY = os.environ.get("HUB_TOKEN", False) - if SECRET_KEY: - htr_pipeline_button.click( - fn=TrafficDataHandler.store_metric_data, - inputs=htr_pipeline_button_var, - ) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/plugin.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/plugin.py deleted file mode 100644 index 07c010d4053174dd41107aa654ea67e82b46a25c..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/plugin.py +++ /dev/null @@ -1,88 +0,0 @@ -import inspect -import platform - -from .registry import PLUGIN_LAYERS - -if platform.system() == 'Windows': - import regex as re -else: - import re - - -def infer_abbr(class_type): - """Infer abbreviation from the class name. - - This method will infer the abbreviation to map class types to - abbreviations. - - Rule 1: If the class has the property "abbr", return the property. - Rule 2: Otherwise, the abbreviation falls back to snake case of class - name, e.g. the abbreviation of ``FancyBlock`` will be ``fancy_block``. - - Args: - class_type (type): The norm layer type. - - Returns: - str: The inferred abbreviation. - """ - - def camel2snack(word): - """Convert camel case word into snack case. - - Modified from `inflection lib - `_. 
- - Example:: - - >>> camel2snack("FancyBlock") - 'fancy_block' - """ - - word = re.sub(r'([A-Z]+)([A-Z][a-z])', r'\1_\2', word) - word = re.sub(r'([a-z\d])([A-Z])', r'\1_\2', word) - word = word.replace('-', '_') - return word.lower() - - if not inspect.isclass(class_type): - raise TypeError( - f'class_type must be a type, but got {type(class_type)}') - if hasattr(class_type, '_abbr_'): - return class_type._abbr_ - else: - return camel2snack(class_type.__name__) - - -def build_plugin_layer(cfg, postfix='', **kwargs): - """Build plugin layer. - - Args: - cfg (None or dict): cfg should contain: - type (str): identify plugin layer type. - layer args: args needed to instantiate a plugin layer. - postfix (int, str): appended into norm abbreviation to - create named layer. Default: ''. - - Returns: - tuple[str, nn.Module]: - name (str): abbreviation + postfix - layer (nn.Module): created plugin layer - """ - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in PLUGIN_LAYERS: - raise KeyError(f'Unrecognized plugin type {layer_type}') - - plugin_layer = PLUGIN_LAYERS.get(layer_type) - abbr = infer_abbr(plugin_layer) - - assert isinstance(postfix, (int, str)) - name = abbr + str(postfix) - - layer = plugin_layer(**kwargs, **cfg_) - - return name, layer diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/gfocal_loss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/gfocal_loss.py deleted file mode 100644 index 9d3b8833dc50c76f6741db5341dbf8da3402d07b..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/gfocal_loss.py +++ /dev/null @@ -1,188 +0,0 @@ -import mmcv -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def quality_focal_loss(pred, target, beta=2.0): - r"""Quality Focal Loss (QFL) is from `Generalized Focal Loss: Learning - Qualified and Distributed Bounding Boxes for Dense Object Detection - `_. - - Args: - pred (torch.Tensor): Predicted joint representation of classification - and quality (IoU) estimation with shape (N, C), C is the number of - classes. - target (tuple([torch.Tensor])): Target category label with shape (N,) - and target quality label with shape (N,). - beta (float): The beta parameter for calculating the modulating factor. - Defaults to 2.0. - - Returns: - torch.Tensor: Loss tensor with shape (N,). 
- """ - assert len(target) == 2, """target for QFL must be a tuple of two elements, - including category label and quality label, respectively""" - # label denotes the category id, score denotes the quality score - label, score = target - - # negatives are supervised by 0 quality score - pred_sigmoid = pred.sigmoid() - scale_factor = pred_sigmoid - zerolabel = scale_factor.new_zeros(pred.shape) - loss = F.binary_cross_entropy_with_logits( - pred, zerolabel, reduction='none') * scale_factor.pow(beta) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = pred.size(1) - pos = ((label >= 0) & (label < bg_class_ind)).nonzero().squeeze(1) - pos_label = label[pos].long() - # positives are supervised by bbox quality (IoU) score - scale_factor = score[pos] - pred_sigmoid[pos, pos_label] - loss[pos, pos_label] = F.binary_cross_entropy_with_logits( - pred[pos, pos_label], score[pos], - reduction='none') * scale_factor.abs().pow(beta) - - loss = loss.sum(dim=1, keepdim=False) - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def distribution_focal_loss(pred, label): - r"""Distribution Focal Loss (DFL) is from `Generalized Focal Loss: Learning - Qualified and Distributed Bounding Boxes for Dense Object Detection - `_. - - Args: - pred (torch.Tensor): Predicted general distribution of bounding boxes - (before softmax) with shape (N, n+1), n is the max value of the - integral set `{0, ..., n}` in paper. - label (torch.Tensor): Target distance label for bounding boxes with - shape (N,). - - Returns: - torch.Tensor: Loss tensor with shape (N,). - """ - dis_left = label.long() - dis_right = dis_left + 1 - weight_left = dis_right.float() - label - weight_right = label - dis_left.float() - loss = F.cross_entropy(pred, dis_left, reduction='none') * weight_left \ - + F.cross_entropy(pred, dis_right, reduction='none') * weight_right - return loss - - -@LOSSES.register_module() -class QualityFocalLoss(nn.Module): - r"""Quality Focal Loss (QFL) is a variant of `Generalized Focal Loss: - Learning Qualified and Distributed Bounding Boxes for Dense Object - Detection `_. - - Args: - use_sigmoid (bool): Whether sigmoid operation is conducted in QFL. - Defaults to True. - beta (float): The beta parameter for calculating the modulating factor. - Defaults to 2.0. - reduction (str): Options are "none", "mean" and "sum". - loss_weight (float): Loss weight of current loss. - """ - - def __init__(self, - use_sigmoid=True, - beta=2.0, - reduction='mean', - loss_weight=1.0): - super(QualityFocalLoss, self).__init__() - assert use_sigmoid is True, 'Only sigmoid in QFL supported now.' - self.use_sigmoid = use_sigmoid - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): Predicted joint representation of - classification and quality (IoU) estimation with shape (N, C), - C is the number of classes. - target (tuple([torch.Tensor])): Target category label with shape - (N,) and target quality label with shape (N,). - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.use_sigmoid: - loss_cls = self.loss_weight * quality_focal_loss( - pred, - target, - weight, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor) - else: - raise NotImplementedError - return loss_cls - - -@LOSSES.register_module() -class DistributionFocalLoss(nn.Module): - r"""Distribution Focal Loss (DFL) is a variant of `Generalized Focal Loss: - Learning Qualified and Distributed Bounding Boxes for Dense Object - Detection `_. - - Args: - reduction (str): Options are `'none'`, `'mean'` and `'sum'`. - loss_weight (float): Loss weight of current loss. - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super(DistributionFocalLoss, self).__init__() - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): Predicted general distribution of bounding - boxes (before softmax) with shape (N, n+1), n is the max value - of the integral set `{0, ..., n}` in paper. - target (torch.Tensor): Target distance label for bounding boxes - with shape (N,). - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_cls = self.loss_weight * distribution_focal_loss( - pred, target, weight, reduction=reduction, avg_factor=avg_factor) - return loss_cls diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/points_in_boxes.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/points_in_boxes.py deleted file mode 100644 index 4003173a53052161dbcd687a2fa1d755642fdab8..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/points_in_boxes.py +++ /dev/null @@ -1,133 +0,0 @@ -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'points_in_boxes_part_forward', 'points_in_boxes_cpu_forward', - 'points_in_boxes_all_forward' -]) - - -def points_in_boxes_part(points, boxes): - """Find the box in which each point is (CUDA). 
- - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz] in - LiDAR/DEPTH coordinate, (x, y, z) is the bottom center - - Returns: - box_idxs_of_pts (torch.Tensor): (B, M), default background = -1 - """ - assert points.shape[0] == boxes.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {points.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - - box_idxs_of_pts = points.new_zeros((batch_size, num_points), - dtype=torch.int).fill_(-1) - - # If manually put the tensor 'points' or 'boxes' on a device - # which is not the current device, some temporary variables - # will be created on the current device in the cuda op, - # and the output will be incorrect. - # Therefore, we force the current device to be the same - # as the device of the tensors if it was not. - # Please refer to https://github.com/open-mmlab/mmdetection3d/issues/305 - # for the incorrect output before the fix. - points_device = points.get_device() - assert points_device == boxes.get_device(), \ - 'Points and boxes should be put on the same device' - if torch.cuda.current_device() != points_device: - torch.cuda.set_device(points_device) - - ext_module.points_in_boxes_part_forward(boxes.contiguous(), - points.contiguous(), - box_idxs_of_pts) - - return box_idxs_of_pts - - -def points_in_boxes_cpu(points, boxes): - """Find all boxes in which each point is (CPU). The CPU version of - :meth:`points_in_boxes_all`. - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in - LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz], - (x, y, z) is the bottom center. - - Returns: - box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0. - """ - assert points.shape[0] == boxes.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {points.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - num_boxes = boxes.shape[1] - - point_indices = points.new_zeros((batch_size, num_boxes, num_points), - dtype=torch.int) - for b in range(batch_size): - ext_module.points_in_boxes_cpu_forward(boxes[b].float().contiguous(), - points[b].float().contiguous(), - point_indices[b]) - point_indices = point_indices.transpose(1, 2) - - return point_indices - - -def points_in_boxes_all(points, boxes): - """Find all boxes in which each point is (CUDA). - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz], - (x, y, z) is the bottom center. - - Returns: - box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0. 
- """ - assert boxes.shape[0] == points.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {boxes.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - num_boxes = boxes.shape[1] - - box_idxs_of_pts = points.new_zeros((batch_size, num_points, num_boxes), - dtype=torch.int).fill_(0) - - # Same reason as line 25-32 - points_device = points.get_device() - assert points_device == boxes.get_device(), \ - 'Points and boxes should be put on the same device' - if torch.cuda.current_device() != points_device: - torch.cuda.set_device(points_device) - - ext_module.points_in_boxes_all_forward(boxes.contiguous(), - points.contiguous(), - box_idxs_of_pts) - - return box_idxs_of_pts diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/neptune.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/neptune.py deleted file mode 100644 index 7a38772b0c93a8608f32c6357b8616e77c139dc9..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/neptune.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class NeptuneLoggerHook(LoggerHook): - """Class to log metrics to NeptuneAI. - - It requires `neptune-client` to be installed. - - Args: - init_kwargs (dict): a dict contains the initialization keys as below: - - project (str): Name of a project in a form of - namespace/project_name. If None, the value of - NEPTUNE_PROJECT environment variable will be taken. - - api_token (str): User’s API token. - If None, the value of NEPTUNE_API_TOKEN environment - variable will be taken. Note: It is strongly recommended - to use NEPTUNE_API_TOKEN environment variable rather than - placing your API token in plain text in your source code. - - name (str, optional, default is 'Untitled'): Editable name of - the run. Name is displayed in the run's Details and in - Runs table as a column. - Check https://docs.neptune.ai/api-reference/neptune#init for - more init arguments. - interval (int): Logging interval (every k iterations). - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - reset_flag (bool): Whether to clear the output buffer after logging - by_epoch (bool): Whether EpochBasedRunner is used. - - .. 
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/neptune.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/neptune.py
deleted file mode 100644
index 7a38772b0c93a8608f32c6357b8616e77c139dc9..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/neptune.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ...dist_utils import master_only
-from ..hook import HOOKS
-from .base import LoggerHook
-
-
-@HOOKS.register_module()
-class NeptuneLoggerHook(LoggerHook):
-    """Class to log metrics to NeptuneAI.
-
-    It requires `neptune-client` to be installed.
-
-    Args:
-        init_kwargs (dict): a dict containing the initialization keys below:
-
-            - project (str): Name of a project in a form of
-              namespace/project_name. If None, the value of the
-              NEPTUNE_PROJECT environment variable will be taken.
-            - api_token (str): User's API token. If None, the value of the
-              NEPTUNE_API_TOKEN environment variable will be taken. Note: it
-              is strongly recommended to use the NEPTUNE_API_TOKEN environment
-              variable rather than placing your API token in plain text in
-              your source code.
-            - name (str, optional, default is 'Untitled'): Editable name of
-              the run. Name is displayed in the run's Details and in the
-              Runs table as a column.
-
-            Check https://docs.neptune.ai/api-reference/neptune#init for
-            more init arguments.
-        interval (int): Logging interval (every k iterations).
-        ignore_last (bool): Ignore the log of last iterations in each epoch
-            if less than `interval`.
-        reset_flag (bool): Whether to clear the output buffer after logging.
-        by_epoch (bool): Whether EpochBasedRunner is used.
-
-    .. _NeptuneAI:
-        https://docs.neptune.ai/you-should-know/logging-metadata
-    """
-
-    def __init__(self,
-                 init_kwargs=None,
-                 interval=10,
-                 ignore_last=True,
-                 reset_flag=True,
-                 with_step=True,
-                 by_epoch=True):
-
-        super(NeptuneLoggerHook, self).__init__(interval, ignore_last,
-                                                reset_flag, by_epoch)
-        self.import_neptune()
-        self.init_kwargs = init_kwargs
-        self.with_step = with_step
-
-    def import_neptune(self):
-        try:
-            import neptune.new as neptune
-        except ImportError:
-            raise ImportError(
-                'Please run "pip install neptune-client" to install neptune')
-        self.neptune = neptune
-        self.run = None
-
-    @master_only
-    def before_run(self, runner):
-        if self.init_kwargs:
-            self.run = self.neptune.init(**self.init_kwargs)
-        else:
-            self.run = self.neptune.init()
-
-    @master_only
-    def log(self, runner):
-        tags = self.get_loggable_tags(runner)
-        if tags:
-            for tag_name, tag_value in tags.items():
-                if self.with_step:
-                    self.run[tag_name].log(
-                        tag_value, step=self.get_iter(runner))
-                else:
-                    tags['global_step'] = self.get_iter(runner)
-                    self.run[tag_name].log(tags)
-
-    @master_only
-    def after_run(self, runner):
-        self.run.stop()
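Since the hook is registered under `HOOKS`, it would normally be enabled from a runner config rather than instantiated directly. A minimal sketch, assuming the usual mmcv-style `log_config` convention applies to this vendored copy; the project name is made up:

    log_config = dict(
        interval=50,
        hooks=[
            dict(type='TextLoggerHook'),
            dict(
                type='NeptuneLoggerHook',
                init_kwargs=dict(project='my-team/my-project'),  # hypothetical project
                interval=50,
                with_step=True),
        ])

With `with_step=True` each metric is logged against the runner iteration; otherwise a `global_step` entry is added to the logged tags instead, as the `log` method above shows.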
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/utils/registry.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/utils/registry.py
deleted file mode 100644
index fa9df39bc9f3d8d568361e7250ab35468f2b74e0..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/utils/registry.py
+++ /dev/null
@@ -1,315 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import inspect
-import warnings
-from functools import partial
-
-from .misc import is_seq_of
-
-
-def build_from_cfg(cfg, registry, default_args=None):
-    """Build a module from config dict.
-
-    Args:
-        cfg (dict): Config dict. It should at least contain the key "type".
-        registry (:obj:`Registry`): The registry to search the type from.
-        default_args (dict, optional): Default initialization arguments.
-
-    Returns:
-        object: The constructed object.
-    """
-    if not isinstance(cfg, dict):
-        raise TypeError(f'cfg must be a dict, but got {type(cfg)}')
-    if 'type' not in cfg:
-        if default_args is None or 'type' not in default_args:
-            raise KeyError(
-                '`cfg` or `default_args` must contain the key "type", '
-                f'but got {cfg}\n{default_args}')
-    if not isinstance(registry, Registry):
-        raise TypeError('registry must be an mmcv.Registry object, '
-                        f'but got {type(registry)}')
-    if not (isinstance(default_args, dict) or default_args is None):
-        raise TypeError('default_args must be a dict or None, '
-                        f'but got {type(default_args)}')
-
-    args = cfg.copy()
-
-    if default_args is not None:
-        for name, value in default_args.items():
-            args.setdefault(name, value)
-
-    obj_type = args.pop('type')
-    if isinstance(obj_type, str):
-        obj_cls = registry.get(obj_type)
-        if obj_cls is None:
-            raise KeyError(
-                f'{obj_type} is not in the {registry.name} registry')
-    elif inspect.isclass(obj_type):
-        obj_cls = obj_type
-    else:
-        raise TypeError(
-            f'type must be a str or valid type, but got {type(obj_type)}')
-    try:
-        return obj_cls(**args)
-    except Exception as e:
-        # Normal TypeError does not print class name.
-        raise type(e)(f'{obj_cls.__name__}: {e}')
-
-
-class Registry:
-    """A registry to map strings to classes.
-
-    Registered objects can be built from the registry.
-
-    Example:
-        >>> MODELS = Registry('models')
-        >>> @MODELS.register_module()
-        >>> class ResNet:
-        >>>     pass
-        >>> resnet = MODELS.build(dict(type='ResNet'))
-
-    Please refer to
-    https://mmcv.readthedocs.io/en/latest/understand_mmcv/registry.html for
-    advanced usage.
-
-    Args:
-        name (str): Registry name.
-        build_func (func, optional): Build function used to construct
-            instances from the registry; :func:`build_from_cfg` is used if
-            neither ``parent`` nor ``build_func`` is specified. If ``parent``
-            is specified and ``build_func`` is not given, ``build_func`` will
-            be inherited from ``parent``. Default: None.
-        parent (Registry, optional): Parent registry. Classes registered in a
-            child registry can be built from the parent. Default: None.
-        scope (str, optional): The scope of the registry. It is the key to
-            search for child registries. If not specified, scope will be the
-            name of the package where the class is defined, e.g. mmdet,
-            mmcls, mmseg. Default: None.
-    """
-
-    def __init__(self, name, build_func=None, parent=None, scope=None):
-        self._name = name
-        self._module_dict = dict()
-        self._children = dict()
-        self._scope = self.infer_scope() if scope is None else scope
-
-        # self.build_func will be set with the following priority:
-        # 1. build_func
-        # 2. parent.build_func
-        # 3. build_from_cfg
-        if build_func is None:
-            if parent is not None:
-                self.build_func = parent.build_func
-            else:
-                self.build_func = build_from_cfg
-        else:
-            self.build_func = build_func
-        if parent is not None:
-            assert isinstance(parent, Registry)
-            parent._add_children(self)
-            self.parent = parent
-        else:
-            self.parent = None
-
-    def __len__(self):
-        return len(self._module_dict)
-
-    def __contains__(self, key):
-        return self.get(key) is not None
-
-    def __repr__(self):
-        format_str = self.__class__.__name__ + \
-                     f'(name={self._name}, ' \
-                     f'items={self._module_dict})'
-        return format_str
-
-    @staticmethod
-    def infer_scope():
-        """Infer the scope of registry.
-
-        The name of the package where the registry is defined will be
-        returned.
-
-        Example:
-            # in mmdet/models/backbone/resnet.py
-            >>> MODELS = Registry('models')
-            >>> @MODELS.register_module()
-            >>> class ResNet:
-            >>>     pass
-            The scope of ``ResNet`` will be ``mmdet``.
-
-        Returns:
-            scope (str): The inferred scope name.
-        """
-        # inspect.stack() traces where this function is called; index 2
-        # indicates the frame where `infer_scope()` is called
-        filename = inspect.getmodule(inspect.stack()[2][0]).__name__
-        split_filename = filename.split('.')
-        return split_filename[0]
-
-    @staticmethod
-    def split_scope_key(key):
-        """Split scope and key.
-
-        The first scope will be split from key.
-
-        Examples:
-            >>> Registry.split_scope_key('mmdet.ResNet')
-            'mmdet', 'ResNet'
-            >>> Registry.split_scope_key('ResNet')
-            None, 'ResNet'
-
-        Returns:
-            scope (str, None): The first scope.
-            key (str): The remaining key.
-        """
-        split_index = key.find('.')
-        if split_index != -1:
-            return key[:split_index], key[split_index + 1:]
-        else:
-            return None, key
-
-    @property
-    def name(self):
-        return self._name
-
-    @property
-    def scope(self):
-        return self._scope
-
-    @property
-    def module_dict(self):
-        return self._module_dict
-
-    @property
-    def children(self):
-        return self._children
-
-    def get(self, key):
-        """Get the registry record.
-
-        Args:
-            key (str): The class name in string format.
-
-        Returns:
-            class: The corresponding class.
- """ - scope, real_key = self.split_scope_key(key) - if scope is None or scope == self._scope: - # get from self - if real_key in self._module_dict: - return self._module_dict[real_key] - else: - # get from self._children - if scope in self._children: - return self._children[scope].get(real_key) - else: - # goto root - parent = self.parent - while parent.parent is not None: - parent = parent.parent - return parent.get(key) - - def build(self, *args, **kwargs): - return self.build_func(*args, **kwargs, registry=self) - - def _add_children(self, registry): - """Add children for a registry. - - The ``registry`` will be added as children based on its scope. - The parent registry could build objects from children registry. - - Example: - >>> models = Registry('models') - >>> mmdet_models = Registry('models', parent=models) - >>> @mmdet_models.register_module() - >>> class ResNet: - >>> pass - >>> resnet = models.build(dict(type='mmdet.ResNet')) - """ - - assert isinstance(registry, Registry) - assert registry.scope is not None - assert registry.scope not in self.children, \ - f'scope {registry.scope} exists in {self.name} registry' - self.children[registry.scope] = registry - - def _register_module(self, module_class, module_name=None, force=False): - if not inspect.isclass(module_class): - raise TypeError('module must be a class, ' - f'but got {type(module_class)}') - - if module_name is None: - module_name = module_class.__name__ - if isinstance(module_name, str): - module_name = [module_name] - for name in module_name: - if not force and name in self._module_dict: - raise KeyError(f'{name} is already registered ' - f'in {self.name}') - self._module_dict[name] = module_class - - def deprecated_register_module(self, cls=None, force=False): - warnings.warn( - 'The old API of register_module(module, force=False) ' - 'is deprecated and will be removed, please use the new API ' - 'register_module(name=None, force=False, module=None) instead.') - if cls is None: - return partial(self.deprecated_register_module, force=force) - self._register_module(cls, force=force) - return cls - - def register_module(self, name=None, force=False, module=None): - """Register a module. - - A record will be added to `self._module_dict`, whose key is the class - name or the specified name, and value is the class itself. - It can be used as a decorator or a normal function. - - Example: - >>> backbones = Registry('backbone') - >>> @backbones.register_module() - >>> class ResNet: - >>> pass - - >>> backbones = Registry('backbone') - >>> @backbones.register_module(name='mnet') - >>> class MobileNet: - >>> pass - - >>> backbones = Registry('backbone') - >>> class ResNet: - >>> pass - >>> backbones.register_module(ResNet) - - Args: - name (str | None): The module name to be registered. If not - specified, the class name will be used. - force (bool, optional): Whether to override an existing class with - the same name. Default: False. - module (type): Module class to be registered. - """ - if not isinstance(force, bool): - raise TypeError(f'force must be a boolean, but got {type(force)}') - # NOTE: This is a walkaround to be compatible with the old api, - # while it may introduce unexpected bugs. 
-        if isinstance(name, type):
-            return self.deprecated_register_module(name, force=force)
-
-        # raise the error ahead of time
-        if not (name is None or isinstance(name, str) or is_seq_of(name, str)):
-            raise TypeError(
-                'name must be either of None, an instance of str or a sequence'
-                f' of str, but got {type(name)}')
-
-        # use it as a normal method: x.register_module(module=SomeClass)
-        if module is not None:
-            self._register_module(
-                module_class=module, module_name=name, force=force)
-            return module
-
-        # use it as a decorator: @x.register_module()
-        def _register(cls):
-            self._register_module(
-                module_class=cls, module_name=name, force=force)
-            return cls
-
-        return _register
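A quick round trip over the pieces above, to make the control flow concrete: `register_module()` fills `self._module_dict`, and `build()` forwards to `build_from_cfg`, which pops `type` and calls the class with the remaining keys. Illustrative only; the import path is assumed from this repo's layout (the packaged `mmcv.utils.Registry` behaves the same way).

    from annotator.uniformer_base.mmcv.utils import Registry

    BACKBONES = Registry('backbone')

    @BACKBONES.register_module()
    class ResNet:
        def __init__(self, depth=50):
            self.depth = depth

    # 'type' selects the registered class; everything else becomes kwargs
    model = BACKBONES.build(dict(type='ResNet', depth=101))
    assert isinstance(model, ResNet) and model.depth == 101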
diff --git a/spaces/SIGMitch/Real-Time-Chad/txt2img/index.html b/spaces/SIGMitch/Real-Time-Chad/txt2img/index.html
deleted file mode 100644
index a6119b78615df60cc9a315c35e24eb827fe94dc7..0000000000000000000000000000000000000000
--- a/spaces/SIGMitch/Real-Time-Chad/txt2img/index.html
+++ /dev/null
@@ -1,263 +0,0 @@
-[263 deleted lines of HTML whose markup was lost in extraction. The surviving
- text content: page title "Real-Time Chad"; heading "Real-Time Latent
- Consistency Model"; a "Text to Image" section captioned "Hope you lik pain.";
- a notice "There are 0 user(s) sharing the same GPU, affecting real-time
- performance. Maximum queue size is 10."; a "Prompt" field described as "Start
- your session and type your prompt here, accepts Compel syntax"; an "Advanced
- Options" panel with a guidance-scale slider defaulting to 8.0; and start/stop
- controls.]
    - - - \ No newline at end of file diff --git a/spaces/SShaik/SS-03-GR-AI-Text2ArtGenerator/README.md b/spaces/SShaik/SS-03-GR-AI-Text2ArtGenerator/README.md deleted file mode 100644 index be11355ea4d63c4d589b72032ba14675b2d0f555..0000000000000000000000000000000000000000 --- a/spaces/SShaik/SS-03-GR-AI-Text2ArtGenerator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: SS 03 GR AI Text2ArtGenerator -emoji: 📚 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Salesforce/BLIP/models/nlvr_encoder.py b/spaces/Salesforce/BLIP/models/nlvr_encoder.py deleted file mode 100644 index 1946bb4a300f75afa4848f6622839445903c34a9..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/BLIP/models/nlvr_encoder.py +++ /dev/null @@ -1,843 +0,0 @@ -import math -import os -import warnings -from dataclasses import dataclass -from typing import Optional, Tuple - -import torch -from torch import Tensor, device, dtype, nn -import torch.utils.checkpoint -from torch import nn -from torch.nn import CrossEntropyLoss -import torch.nn.functional as F - -from transformers.activations import ACT2FN -from transformers.file_utils import ( - ModelOutput, -) -from transformers.modeling_outputs import ( - BaseModelOutputWithPastAndCrossAttentions, - BaseModelOutputWithPoolingAndCrossAttentions, - CausalLMOutputWithCrossAttentions, - MaskedLMOutput, - MultipleChoiceModelOutput, - NextSentencePredictorOutput, - QuestionAnsweringModelOutput, - SequenceClassifierOutput, - TokenClassifierOutput, -) -from transformers.modeling_utils import ( - PreTrainedModel, - apply_chunking_to_forward, - find_pruneable_heads_and_indices, - prune_linear_layer, -) -from transformers.utils import logging -from transformers.models.bert.configuration_bert import BertConfig - - -logger = logging.get_logger(__name__) - - -class BertEmbeddings(nn.Module): - """Construct the embeddings from word and position embeddings.""" - - def __init__(self, config): - super().__init__() - self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) - self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) - self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") - - self.config = config - - def forward( - self, input_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0 - ): - if input_ids is not None: - input_shape = input_ids.size() - else: - input_shape = inputs_embeds.size()[:-1] - - seq_length = input_shape[1] - - if position_ids is None: - position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length] - - if inputs_embeds is None: - inputs_embeds = self.word_embeddings(input_ids) - - embeddings = inputs_embeds - - if self.position_embedding_type == "absolute": - position_embeddings = self.position_embeddings(position_ids) - embeddings += 
position_embeddings - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings - - -class BertSelfAttention(nn.Module): - def __init__(self, config, is_cross_attention): - super().__init__() - self.config = config - if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): - raise ValueError( - "The hidden size (%d) is not a multiple of the number of attention " - "heads (%d)" % (config.hidden_size, config.num_attention_heads) - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size) - if is_cross_attention: - self.key = nn.Linear(config.encoder_width, self.all_head_size) - self.value = nn.Linear(config.encoder_width, self.all_head_size) - else: - self.key = nn.Linear(config.hidden_size, self.all_head_size) - self.value = nn.Linear(config.hidden_size, self.all_head_size) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - self.max_position_embeddings = config.max_position_embeddings - self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size) - self.save_attention = False - - def save_attn_gradients(self, attn_gradients): - self.attn_gradients = attn_gradients - - def get_attn_gradients(self): - return self.attn_gradients - - def save_attention_map(self, attention_map): - self.attention_map = attention_map - - def get_attention_map(self): - return self.attention_map - - def transpose_for_scores(self, x): - new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) - x = x.view(*new_x_shape) - return x.permute(0, 2, 1, 3) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - ): - mixed_query_layer = self.query(hidden_states) - - # If this is instantiated as a cross-attention module, the keys - # and values come from an encoder; the attention mask needs to be - # such that the encoder's padding tokens are not attended to. - is_cross_attention = encoder_hidden_states is not None - - if is_cross_attention: - key_layer = self.transpose_for_scores(self.key(encoder_hidden_states)) - value_layer = self.transpose_for_scores(self.value(encoder_hidden_states)) - attention_mask = encoder_attention_mask - elif past_key_value is not None: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - key_layer = torch.cat([past_key_value[0], key_layer], dim=2) - value_layer = torch.cat([past_key_value[1], value_layer], dim=2) - else: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - - query_layer = self.transpose_for_scores(mixed_query_layer) - - past_key_value = (key_layer, value_layer) - - # Take the dot product between "query" and "key" to get the raw attention scores. 
- attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - seq_length = hidden_states.size()[1] - position_ids_l = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(-1, 1) - position_ids_r = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(1, -1) - distance = position_ids_l - position_ids_r - positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1) - positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility - - if self.position_embedding_type == "relative_key": - relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores - elif self.position_embedding_type == "relative_key_query": - relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key - - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in BertModel forward() function) - attention_scores = attention_scores + attention_mask - - # Normalize the attention scores to probabilities. - attention_probs = nn.Softmax(dim=-1)(attention_scores) - - if is_cross_attention and self.save_attention: - self.save_attention_map(attention_probs) - attention_probs.register_hook(self.save_attn_gradients) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. 
- attention_probs_dropped = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs_dropped = attention_probs_dropped * head_mask - - context_layer = torch.matmul(attention_probs_dropped, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(*new_context_layer_shape) - - outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) - - outputs = outputs + (past_key_value,) - return outputs - - -class BertSelfOutput(nn.Module): - def __init__(self, config, twin=False, merge=False): - super().__init__() - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - if twin: - self.dense0 = nn.Linear(config.hidden_size, config.hidden_size) - self.dense1 = nn.Linear(config.hidden_size, config.hidden_size) - else: - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - if merge: - self.act = ACT2FN[config.hidden_act] - self.merge_layer = nn.Linear(config.hidden_size * 2, config.hidden_size) - self.merge = True - else: - self.merge = False - - def forward(self, hidden_states, input_tensor): - if type(hidden_states) == list: - hidden_states0 = self.dense0(hidden_states[0]) - hidden_states1 = self.dense1(hidden_states[1]) - if self.merge: - #hidden_states = self.merge_layer(self.act(torch.cat([hidden_states0,hidden_states1],dim=-1))) - hidden_states = self.merge_layer(torch.cat([hidden_states0,hidden_states1],dim=-1)) - else: - hidden_states = (hidden_states0+hidden_states1)/2 - else: - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -class BertAttention(nn.Module): - def __init__(self, config, is_cross_attention=False, layer_num=-1): - super().__init__() - if is_cross_attention: - self.self0 = BertSelfAttention(config, is_cross_attention) - self.self1 = BertSelfAttention(config, is_cross_attention) - else: - self.self = BertSelfAttention(config, is_cross_attention) - self.output = BertSelfOutput(config, twin=is_cross_attention, merge=(is_cross_attention and layer_num>=6)) - self.pruned_heads = set() - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads - ) - - # Prune linear layers - self.self.query = prune_linear_layer(self.self.query, index) - self.self.key = prune_linear_layer(self.self.key, index) - self.self.value = prune_linear_layer(self.self.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.self.num_attention_heads = self.self.num_attention_heads - len(heads) - self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads - self.pruned_heads = self.pruned_heads.union(heads) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - ): - if type(encoder_hidden_states)==list: - self_outputs0 = self.self0( - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states[0], - encoder_attention_mask[0], - past_key_value, - output_attentions, - ) - self_outputs1 = 
self.self1( - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states[1], - encoder_attention_mask[1], - past_key_value, - output_attentions, - ) - attention_output = self.output([self_outputs0[0],self_outputs1[0]], hidden_states) - - outputs = (attention_output,) + self_outputs0[1:] # add attentions if we output them - else: - self_outputs = self.self( - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - ) - attention_output = self.output(self_outputs[0], hidden_states) - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - return outputs - - -class BertIntermediate(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.intermediate_size) - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = ACT2FN[config.hidden_act] - else: - self.intermediate_act_fn = config.hidden_act - - def forward(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - return hidden_states - - -class BertOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.intermediate_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states, input_tensor): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -class BertLayer(nn.Module): - def __init__(self, config, layer_num): - super().__init__() - self.config = config - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - self.attention = BertAttention(config) - self.layer_num = layer_num - if self.config.add_cross_attention: - self.crossattention = BertAttention(config, is_cross_attention=self.config.add_cross_attention, layer_num=layer_num) - self.intermediate = BertIntermediate(config) - self.output = BertOutput(config) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - mode=None, - ): - # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 - self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None - self_attention_outputs = self.attention( - hidden_states, - attention_mask, - head_mask, - output_attentions=output_attentions, - past_key_value=self_attn_past_key_value, - ) - attention_output = self_attention_outputs[0] - - outputs = self_attention_outputs[1:-1] - present_key_value = self_attention_outputs[-1] - - if mode=='multimodal': - assert encoder_hidden_states is not None, "encoder_hidden_states must be given for cross-attention layers" - cross_attention_outputs = self.crossattention( - attention_output, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - output_attentions=output_attentions, - ) - attention_output = cross_attention_outputs[0] - outputs = outputs + cross_attention_outputs[1:-1] # add cross attentions if we output attention weights - layer_output = apply_chunking_to_forward( - self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output - ) - outputs = (layer_output,) + outputs 
- - outputs = outputs + (present_key_value,) - - return outputs - - def feed_forward_chunk(self, attention_output): - intermediate_output = self.intermediate(attention_output) - layer_output = self.output(intermediate_output, attention_output) - return layer_output - - -class BertEncoder(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.layer = nn.ModuleList([BertLayer(config,i) for i in range(config.num_hidden_layers)]) - self.gradient_checkpointing = False - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_values=None, - use_cache=None, - output_attentions=False, - output_hidden_states=False, - return_dict=True, - mode='multimodal', - ): - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None - - next_decoder_cache = () if use_cache else None - - for i in range(self.config.num_hidden_layers): - layer_module = self.layer[i] - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_head_mask = head_mask[i] if head_mask is not None else None - past_key_value = past_key_values[i] if past_key_values is not None else None - - if self.gradient_checkpointing and self.training: - - if use_cache: - logger.warn( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." - ) - use_cache = False - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, past_key_value, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(layer_module), - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - mode=mode, - ) - else: - layer_outputs = layer_module( - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - mode=mode, - ) - - hidden_states = layer_outputs[0] - if use_cache: - next_decoder_cache += (layer_outputs[-1],) - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple( - v - for v in [ - hidden_states, - next_decoder_cache, - all_hidden_states, - all_self_attentions, - all_cross_attentions, - ] - if v is not None - ) - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=next_decoder_cache, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - cross_attentions=all_cross_attentions, - ) - - -class BertPooler(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.activation = nn.Tanh() - - def forward(self, hidden_states): - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. 
-        first_token_tensor = hidden_states[:, 0]
-        pooled_output = self.dense(first_token_tensor)
-        pooled_output = self.activation(pooled_output)
-        return pooled_output
-
-
-class BertPredictionHeadTransform(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
-        if isinstance(config.hidden_act, str):
-            self.transform_act_fn = ACT2FN[config.hidden_act]
-        else:
-            self.transform_act_fn = config.hidden_act
-        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
-
-    def forward(self, hidden_states):
-        hidden_states = self.dense(hidden_states)
-        hidden_states = self.transform_act_fn(hidden_states)
-        hidden_states = self.LayerNorm(hidden_states)
-        return hidden_states
-
-
-class BertLMPredictionHead(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        self.transform = BertPredictionHeadTransform(config)
-
-        # The output weights are the same as the input embeddings, but there is
-        # an output-only bias for each token.
-        self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
-
-        self.bias = nn.Parameter(torch.zeros(config.vocab_size))
-
-        # Need a link between the two variables so that the bias is correctly
-        # resized with `resize_token_embeddings`
-        self.decoder.bias = self.bias
-
-    def forward(self, hidden_states):
-        hidden_states = self.transform(hidden_states)
-        hidden_states = self.decoder(hidden_states)
-        return hidden_states
-
-
-class BertOnlyMLMHead(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        self.predictions = BertLMPredictionHead(config)
-
-    def forward(self, sequence_output):
-        prediction_scores = self.predictions(sequence_output)
-        return prediction_scores
-
-
-class BertPreTrainedModel(PreTrainedModel):
-    """
-    An abstract class to handle weights initialization and a simple interface
-    for downloading and loading pretrained models.
-    """
-
-    config_class = BertConfig
-    base_model_prefix = "bert"
-    _keys_to_ignore_on_load_missing = [r"position_ids"]
-
-    def _init_weights(self, module):
-        """Initialize the weights"""
-        if isinstance(module, (nn.Linear, nn.Embedding)):
-            # Slightly different from the TF version which uses
-            # truncated_normal for initialization
-            # cf https://github.com/pytorch/pytorch/pull/5617
-            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
-        elif isinstance(module, nn.LayerNorm):
-            module.bias.data.zero_()
-            module.weight.data.fill_(1.0)
-        if isinstance(module, nn.Linear) and module.bias is not None:
-            module.bias.data.zero_()
-
-
-class BertModel(BertPreTrainedModel):
-    """
-    The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
-    cross-attention is added between the self-attention layers, following the architecture described in `Attention is
-    all you need <https://arxiv.org/abs/1706.03762>`__ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
-    Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
-    To behave as a decoder the model needs to be initialized with the :obj:`is_decoder` argument of the configuration
-    set to :obj:`True`. To be used in a Seq2Seq model, the model needs to be initialized with both the :obj:`is_decoder`
-    argument and :obj:`add_cross_attention` set to :obj:`True`; an :obj:`encoder_hidden_states` is then expected as an
-    input to the forward pass.
- """ - - def __init__(self, config, add_pooling_layer=True): - super().__init__(config) - self.config = config - - self.embeddings = BertEmbeddings(config) - - self.encoder = BertEncoder(config) - - self.pooler = BertPooler(config) if add_pooling_layer else None - - self.init_weights() - - - def get_input_embeddings(self): - return self.embeddings.word_embeddings - - def set_input_embeddings(self, value): - self.embeddings.word_embeddings = value - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - - def get_extended_attention_mask(self, attention_mask: Tensor, input_shape: Tuple[int], device: device, is_decoder: bool) -> Tensor: - """ - Makes broadcastable attention and causal masks so that future and masked tokens are ignored. - - Arguments: - attention_mask (:obj:`torch.Tensor`): - Mask with ones indicating tokens to attend to, zeros for tokens to ignore. - input_shape (:obj:`Tuple[int]`): - The shape of the input to the model. - device: (:obj:`torch.device`): - The device of the input to the model. - - Returns: - :obj:`torch.Tensor` The extended attention mask, with a the same dtype as :obj:`attention_mask.dtype`. - """ - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. - if attention_mask.dim() == 3: - extended_attention_mask = attention_mask[:, None, :, :] - elif attention_mask.dim() == 2: - # Provided a padding mask of dimensions [batch_size, seq_length] - # - if the model is a decoder, apply a causal mask in addition to the padding mask - # - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length] - if is_decoder: - batch_size, seq_length = input_shape - - seq_ids = torch.arange(seq_length, device=device) - causal_mask = seq_ids[None, None, :].repeat(batch_size, seq_length, 1) <= seq_ids[None, :, None] - # in case past_key_values are used we need to add a prefix ones mask to the causal mask - # causal and attention masks must have same type with pytorch version < 1.3 - causal_mask = causal_mask.to(attention_mask.dtype) - - if causal_mask.shape[1] < attention_mask.shape[1]: - prefix_seq_len = attention_mask.shape[1] - causal_mask.shape[1] - causal_mask = torch.cat( - [ - torch.ones((batch_size, seq_length, prefix_seq_len), device=device, dtype=causal_mask.dtype), - causal_mask, - ], - axis=-1, - ) - - extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :] - else: - extended_attention_mask = attention_mask[:, None, None, :] - else: - raise ValueError( - "Wrong shape for input_ids (shape {}) or attention_mask (shape {})".format( - input_shape, attention_mask.shape - ) - ) - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and -10000.0 for masked positions. - # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. 
- extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility - extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0 - return extended_attention_mask - - def forward( - self, - input_ids=None, - attention_mask=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - encoder_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_values=None, - use_cache=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - is_decoder=False, - mode='multimodal', - ): - r""" - encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``: - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids` - (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)` - instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`. - use_cache (:obj:`bool`, `optional`): - If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up - decoding (see :obj:`past_key_values`). 
- """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if is_decoder: - use_cache = use_cache if use_cache is not None else self.config.use_cache - else: - use_cache = False - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - batch_size, seq_length = input_shape - device = input_ids.device - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - batch_size, seq_length = input_shape - device = inputs_embeds.device - elif encoder_embeds is not None: - input_shape = encoder_embeds.size()[:-1] - batch_size, seq_length = input_shape - device = encoder_embeds.device - else: - raise ValueError("You have to specify either input_ids or inputs_embeds or encoder_embeds") - - # past_key_values_length - past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0 - - if attention_mask is None: - attention_mask = torch.ones(((batch_size, seq_length + past_key_values_length)), device=device) - - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. - extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, - device, is_decoder) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if encoder_hidden_states is not None: - if type(encoder_hidden_states) == list: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states[0].size() - else: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - - if type(encoder_attention_mask) == list: - encoder_extended_attention_mask = [self.invert_attention_mask(mask) for mask in encoder_attention_mask] - elif encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_extended_attention_mask = None - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - if encoder_embeds is None: - embedding_output = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - inputs_embeds=inputs_embeds, - past_key_values_length=past_key_values_length, - ) - else: - embedding_output = encoder_embeds - - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - 
past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - mode=mode, - ) - sequence_output = encoder_outputs[0] - pooled_output = self.pooler(sequence_output) if self.pooler is not None else None - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - past_key_values=encoder_outputs.past_key_values, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - cross_attentions=encoder_outputs.cross_attentions, - ) - diff --git a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/text/korean.py b/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/text/korean.py deleted file mode 100644 index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000 --- a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/text/korean.py +++ /dev/null @@ -1,210 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - ('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - '''Reference https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return '스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - 
name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name = digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text) - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def korean_to_ipa(text): - text = korean_to_lazy_ipa(text) - return text.replace('ʧ','tʃ').replace('ʥ','dʑ') diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/infer_pack/models.py b/spaces/ServerX/PorcoDiaz/infer/lib/infer_pack/models.py deleted file mode 100644 index 7a387b888f63ecd6f1f1bd3ed10aa2176a944d2c..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/infer/lib/infer_pack/models.py +++ /dev/null @@ -1,1174 +0,0 @@ -import math -import logging - -logger = logging.getLogger(__name__) - -import numpy as np -import torch -from torch import nn -from torch.nn import AvgPool1d, Conv1d, Conv2d, ConvTranspose1d -from torch.nn import functional as F -from torch.nn.utils import remove_weight_norm, spectral_norm, weight_norm - -from infer.lib.infer_pack import attentions, commons, modules -from infer.lib.infer_pack.commons import get_padding, init_weights -has_xpu = bool(hasattr(torch, "xpu") and torch.xpu.is_available()) - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - 
super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def 
__init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - 
segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - if uv.device.type == "privateuseone": # for DirectML - uv = uv.float() - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, 
harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - if hasattr(self, "ddtype") == False: - self.ddtype = self.l_linear.weight.dtype - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - # print(x.dtype,sine_wavs.dtype,self.l_linear.weight.dtype) - # if self.is_half: - # sine_wavs = sine_wavs.half() - # sine_merge = self.l_tanh(self.l_linear(sine_wavs.to(x))) - # print(sine_wavs.dtype,self.ddtype) - if sine_wavs.dtype != self.ddtype: - sine_wavs = sine_wavs.to(self.ddtype) - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - 
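-        # The hyper-parameters below mirror the usual VITS-style config:
-        # spec_channels/hidden_channels size the encoders, the resblock_* and
-        # upsample_* arguments configure the NSF decoder, and sr selects the
-        # target sample rate.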
spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - logger.debug( - "gin_channels: " - + str(gin_channels) - + ", self.spk_embed_dim: " - + str(self.spk_embed_dim) - ) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - 
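-        # Same constructor signature as SynthesizerTrnMs256NSFsid above; the
-        # only functional difference is the 768-dim phone input consumed by
-        # TextEncoder768 instead of TextEncoder256.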
resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - logger.debug( - "gin_channels: " - + str(gin_channels) - + ", self.spk_embed_dim: " - + str(self.spk_embed_dim) - ) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - 
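-        # "_nono" variant: trained without F0/pitch conditioning, so the plain
-        # Generator (rather than GeneratorNSF) decodes and sr defaults to None.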
gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - logger.debug( - "gin_channels: " - + str(gin_channels) - + ", self.spk_embed_dim: " - + str(self.spk_embed_dim) - ) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = 
upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - logger.debug( - "gin_channels: " - + str(gin_channels) - + ", self.spk_embed_dim: " - + str(self.spk_embed_dim) - ) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) 
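-            # d(y) scores the real waveform; the same sub-discriminator scores
-            # the generated waveform y_hat on the next line. Logits and the
-            # intermediate feature maps are both collected, so the training
-            # loop can pair an adversarial loss with a feature-matching loss.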
- y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - if has_xpu and x.dtype == torch.bfloat16: - x = F.pad(x.to(dtype=torch.float16), (0, n_pad), "reflect").to(dtype=torch.bfloat16) - else: - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Shad0ws/crowdcounting/model.py b/spaces/Shad0ws/crowdcounting/model.py deleted file mode 100644 index dcba66762d1152a6587649a4816101d7734b8a7b..0000000000000000000000000000000000000000 --- a/spaces/Shad0ws/crowdcounting/model.py +++ /dev/null @@ -1,81 +0,0 @@ -import torch.nn as nn -import torch -from torch.nn import functional as F -from torchvision import models - -class ContextualModule(nn.Module): - def __init__(self, features, out_features=512, sizes=(1, 2, 3, 6)): - super(ContextualModule, self).__init__() - self.scales = [] - self.scales = nn.ModuleList([self._make_scale(features, size) for size in sizes]) - self.bottleneck = nn.Conv2d(features * 2, out_features, kernel_size=1) - self.relu = nn.ReLU() - self.weight_net = nn.Conv2d(features,features,kernel_size=1) - - def 
__make_weight(self, feature, scale_feature):
-        # Per-pixel gate: how much the scale-specific context deviates from the raw feature.
-        weight_feature = feature - scale_feature
-        return torch.sigmoid(self.weight_net(weight_feature))  # torch.sigmoid replaces the deprecated F.sigmoid
-
-    def _make_scale(self, features, size):
-        prior = nn.AdaptiveAvgPool2d(output_size=(size, size))
-        conv = nn.Conv2d(features, features, kernel_size=1, bias=False)
-        return nn.Sequential(prior, conv)
-
-    def forward(self, feats):
-        h, w = feats.size(2), feats.size(3)
-        # F.interpolate replaces the deprecated F.upsample; align_corners=False keeps the default behaviour.
-        multi_scales = [
-            F.interpolate(stage(feats), size=(h, w), mode='bilinear', align_corners=False)
-            for stage in self.scales
-        ]
-        weights = [self.__make_weight(feats, scale_feature) for scale_feature in multi_scales]
-        # Weighted average of the four scale branches, concatenated with the raw features.
-        overall_features = [
-            (multi_scales[0] * weights[0] + multi_scales[1] * weights[1]
-             + multi_scales[2] * weights[2] + multi_scales[3] * weights[3])
-            / (weights[0] + weights[1] + weights[2] + weights[3])
-        ] + [feats]
-        bottle = self.bottleneck(torch.cat(overall_features, 1))
-        return self.relu(bottle)
-
-class CANNet(nn.Module):
-    def __init__(self, load_weights=False):
-        super(CANNet, self).__init__()
-        self.seen = 0
-        self.context = ContextualModule(512, 512)
-        self.frontend_feat = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512]
-        self.backend_feat = [512, 512, 512, 256, 128, 64]
-        self.frontend = make_layers(self.frontend_feat)
-        self.backend = make_layers(self.backend_feat, in_channels=512, batch_norm=True, dilation=True)
-        self.output_layer = nn.Conv2d(64, 1, kernel_size=1)
-        if not load_weights:
-            mod = models.vgg16(pretrained=True)
-            self._initialize_weights()
-            # Copy the pretrained VGG-16 weights into the frontend, tensor by tensor.
-            for i in range(len(self.frontend.state_dict().items())):
-                list(self.frontend.state_dict().items())[i][1].data[:] = list(mod.state_dict().items())[i][1].data[:]
-
-    def forward(self, x):
-        x = self.frontend(x)
-        x = self.context(x)
-        x = self.backend(x)
-        x = self.output_layer(x)
-        return x
-
-    def _initialize_weights(self):
-        for m in self.modules():
-            if isinstance(m, nn.Conv2d):
-                nn.init.normal_(m.weight, std=0.01)
-                if m.bias is not None:
-                    nn.init.constant_(m.bias, 0)
-            elif isinstance(m, nn.BatchNorm2d):
-                nn.init.constant_(m.weight, 1)
-                nn.init.constant_(m.bias, 0)
-
-def make_layers(cfg, in_channels=3, batch_norm=False, dilation=False):
-    d_rate = 2 if dilation else 1
-    layers = []
-    for v in cfg:
-        if v == 'M':
-            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
-        else:
-            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=d_rate, dilation=d_rate)
-            if batch_norm:
-                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
-            else:
-                layers += [conv2d, nn.ReLU(inplace=True)]
-            in_channels = v
-    return nn.Sequential(*layers)
diff --git a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/visualizer.py b/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/visualizer.py
deleted file mode 100644
index 7a1b7b101e9b73f75f9136bc67f2063c7c1cf1c1..0000000000000000000000000000000000000000
--- a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/visualizer.py
+++ /dev/null
@@ -1,318 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-@File    : visualizer.py
-@Time    : 2022/04/05 11:39:33
-@Author  : Shilong Liu
-@Contact : slongliu86@gmail.com
-"""
-
-import datetime
-import os
-
-import cv2
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-from matplotlib import transforms
-from matplotlib.collections import PatchCollection
-from matplotlib.patches import Polygon
-from pycocotools import mask as maskUtils
-
-
-def renorm(
-    img: torch.FloatTensor, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
-) -> torch.FloatTensor:
-    # img: tensor(3,H,W) or tensor(B,3,H,W)
-    # return: same as img
-    assert
img.dim() == 3 or img.dim() == 4, "img.dim() should be 3 or 4 but %d" % img.dim() - if img.dim() == 3: - assert img.size(0) == 3, 'img.size(0) shoule be 3 but "%d". (%s)' % ( - img.size(0), - str(img.size()), - ) - img_perm = img.permute(1, 2, 0) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(2, 0, 1) - else: # img.dim() == 4 - assert img.size(1) == 3, 'img.size(1) shoule be 3 but "%d". (%s)' % ( - img.size(1), - str(img.size()), - ) - img_perm = img.permute(0, 2, 3, 1) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(0, 3, 1, 2) - - -class ColorMap: - def __init__(self, basergb=[255, 255, 0]): - self.basergb = np.array(basergb) - - def __call__(self, attnmap): - # attnmap: h, w. np.uint8. - # return: h, w, 4. np.uint8. - assert attnmap.dtype == np.uint8 - h, w = attnmap.shape - res = self.basergb.copy() - res = res[None][None].repeat(h, 0).repeat(w, 1) # h, w, 3 - attn1 = attnmap.copy()[..., None] # h, w, 1 - res = np.concatenate((res, attn1), axis=-1).astype(np.uint8) - return res - - -def rainbow_text(x, y, ls, lc, **kw): - """ - Take a list of strings ``ls`` and colors ``lc`` and place them next to each - other, with text ls[i] being shown in color lc[i]. - - This example shows how to do both vertical and horizontal text, and will - pass all keyword arguments to plt.text, so you can set the font size, - family, etc. - """ - t = plt.gca().transData - fig = plt.gcf() - plt.show() - - # horizontal version - for s, c in zip(ls, lc): - text = plt.text(x, y, " " + s + " ", color=c, transform=t, **kw) - text.draw(fig.canvas.get_renderer()) - ex = text.get_window_extent() - t = transforms.offset_copy(text._transform, x=ex.width, units="dots") - - # #vertical version - # for s,c in zip(ls,lc): - # text = plt.text(x,y," "+s+" ",color=c, transform=t, - # rotation=90,va='bottom',ha='center',**kw) - # text.draw(fig.canvas.get_renderer()) - # ex = text.get_window_extent() - # t = transforms.offset_copy(text._transform, y=ex.height, units='dots') - - -class COCOVisualizer: - def __init__(self, coco=None, tokenlizer=None) -> None: - self.coco = coco - - def visualize(self, img, tgt, caption=None, dpi=180, savedir="vis"): - """ - img: tensor(3, H, W) - tgt: make sure they are all on cpu. 
- must have items: 'image_id', 'boxes', 'size' - """ - plt.figure(dpi=dpi) - plt.rcParams["font.size"] = "5" - ax = plt.gca() - img = renorm(img).permute(1, 2, 0) - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - ax.imshow(img) - - self.addtgt(tgt) - - if tgt is None: - image_id = 0 - elif "image_id" not in tgt: - image_id = 0 - else: - image_id = tgt["image_id"] - - if caption is None: - savename = "{}/{}-{}.png".format( - savedir, int(image_id), str(datetime.datetime.now()).replace(" ", "-") - ) - else: - savename = "{}/{}-{}-{}.png".format( - savedir, caption, int(image_id), str(datetime.datetime.now()).replace(" ", "-") - ) - print("savename: {}".format(savename)) - os.makedirs(os.path.dirname(savename), exist_ok=True) - plt.savefig(savename) - plt.close() - - def addtgt(self, tgt): - """ """ - if tgt is None or not "boxes" in tgt: - ax = plt.gca() - - if "caption" in tgt: - ax.set_title(tgt["caption"], wrap=True) - - ax.set_axis_off() - return - - ax = plt.gca() - H, W = tgt["size"] - numbox = tgt["boxes"].shape[0] - - color = [] - polygons = [] - boxes = [] - for box in tgt["boxes"].cpu(): - unnormbbox = box * torch.Tensor([W, H, W, H]) - unnormbbox[:2] -= unnormbbox[2:] / 2 - [bbox_x, bbox_y, bbox_w, bbox_h] = unnormbbox.tolist() - boxes.append([bbox_x, bbox_y, bbox_w, bbox_h]) - poly = [ - [bbox_x, bbox_y], - [bbox_x, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y], - ] - np_poly = np.array(poly).reshape((4, 2)) - polygons.append(Polygon(np_poly)) - c = (np.random.random((1, 3)) * 0.6 + 0.4).tolist()[0] - color.append(c) - - p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.1) - ax.add_collection(p) - p = PatchCollection(polygons, facecolor="none", edgecolors=color, linewidths=2) - ax.add_collection(p) - - if "strings_positive" in tgt and len(tgt["strings_positive"]) > 0: - assert ( - len(tgt["strings_positive"]) == numbox - ), f"{len(tgt['strings_positive'])} = {numbox}, " - for idx, strlist in enumerate(tgt["strings_positive"]): - cate_id = int(tgt["labels"][idx]) - _string = str(cate_id) + ":" + " ".join(strlist) - bbox_x, bbox_y, bbox_w, bbox_h = boxes[idx] - # ax.text(bbox_x, bbox_y, _string, color='black', bbox={'facecolor': 'yellow', 'alpha': 1.0, 'pad': 1}) - ax.text( - bbox_x, - bbox_y, - _string, - color="black", - bbox={"facecolor": color[idx], "alpha": 0.6, "pad": 1}, - ) - - if "box_label" in tgt: - assert len(tgt["box_label"]) == numbox, f"{len(tgt['box_label'])} = {numbox}, " - for idx, bl in enumerate(tgt["box_label"]): - _string = str(bl) - bbox_x, bbox_y, bbox_w, bbox_h = boxes[idx] - # ax.text(bbox_x, bbox_y, _string, color='black', bbox={'facecolor': 'yellow', 'alpha': 1.0, 'pad': 1}) - ax.text( - bbox_x, - bbox_y, - _string, - color="black", - bbox={"facecolor": color[idx], "alpha": 0.6, "pad": 1}, - ) - - if "caption" in tgt: - ax.set_title(tgt["caption"], wrap=True) - # plt.figure() - # rainbow_text(0.0,0.0,"all unicorns poop rainbows ! ! 
!".split(), - # ['red', 'orange', 'brown', 'green', 'blue', 'purple', 'black']) - - if "attn" in tgt: - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - if isinstance(tgt["attn"], tuple): - tgt["attn"] = [tgt["attn"]] - for item in tgt["attn"]: - attn_map, basergb = item - attn_map = (attn_map - attn_map.min()) / (attn_map.max() - attn_map.min() + 1e-3) - attn_map = (attn_map * 255).astype(np.uint8) - cm = ColorMap(basergb) - heatmap = cm(attn_map) - ax.imshow(heatmap) - ax.set_axis_off() - - def showAnns(self, anns, draw_bbox=False): - """ - Display the specified annotations. - :param anns (array of object): annotations to display - :return: None - """ - if len(anns) == 0: - return 0 - if "segmentation" in anns[0] or "keypoints" in anns[0]: - datasetType = "instances" - elif "caption" in anns[0]: - datasetType = "captions" - else: - raise Exception("datasetType not supported") - if datasetType == "instances": - ax = plt.gca() - ax.set_autoscale_on(False) - polygons = [] - color = [] - for ann in anns: - c = (np.random.random((1, 3)) * 0.6 + 0.4).tolist()[0] - if "segmentation" in ann: - if type(ann["segmentation"]) == list: - # polygon - for seg in ann["segmentation"]: - poly = np.array(seg).reshape((int(len(seg) / 2), 2)) - polygons.append(Polygon(poly)) - color.append(c) - else: - # mask - t = self.imgs[ann["image_id"]] - if type(ann["segmentation"]["counts"]) == list: - rle = maskUtils.frPyObjects( - [ann["segmentation"]], t["height"], t["width"] - ) - else: - rle = [ann["segmentation"]] - m = maskUtils.decode(rle) - img = np.ones((m.shape[0], m.shape[1], 3)) - if ann["iscrowd"] == 1: - color_mask = np.array([2.0, 166.0, 101.0]) / 255 - if ann["iscrowd"] == 0: - color_mask = np.random.random((1, 3)).tolist()[0] - for i in range(3): - img[:, :, i] = color_mask[i] - ax.imshow(np.dstack((img, m * 0.5))) - if "keypoints" in ann and type(ann["keypoints"]) == list: - # turn skeleton into zero-based index - sks = np.array(self.loadCats(ann["category_id"])[0]["skeleton"]) - 1 - kp = np.array(ann["keypoints"]) - x = kp[0::3] - y = kp[1::3] - v = kp[2::3] - for sk in sks: - if np.all(v[sk] > 0): - plt.plot(x[sk], y[sk], linewidth=3, color=c) - plt.plot( - x[v > 0], - y[v > 0], - "o", - markersize=8, - markerfacecolor=c, - markeredgecolor="k", - markeredgewidth=2, - ) - plt.plot( - x[v > 1], - y[v > 1], - "o", - markersize=8, - markerfacecolor=c, - markeredgecolor=c, - markeredgewidth=2, - ) - - if draw_bbox: - [bbox_x, bbox_y, bbox_w, bbox_h] = ann["bbox"] - poly = [ - [bbox_x, bbox_y], - [bbox_x, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y], - ] - np_poly = np.array(poly).reshape((4, 2)) - polygons.append(Polygon(np_poly)) - color.append(c) - - # p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.4) - # ax.add_collection(p) - p = PatchCollection(polygons, facecolor="none", edgecolors=color, linewidths=2) - ax.add_collection(p) - elif datasetType == "captions": - for ann in anns: - print(ann["caption"]) diff --git a/spaces/SumanthKarnati/SumanthKarnati-Image2Ingredients/app.py b/spaces/SumanthKarnati/SumanthKarnati-Image2Ingredients/app.py deleted file mode 100644 index 33fbe06cfcc7e038749a8989728284d4d9c91404..0000000000000000000000000000000000000000 --- a/spaces/SumanthKarnati/SumanthKarnati-Image2Ingredients/app.py +++ /dev/null @@ -1,130 +0,0 @@ -import os -os.system('pip install --upgrade transformers') -import nltk -from transformers import VisionEncoderDecoderModel, AutoTokenizer, 
ViTImageProcessor, pipeline -import torch -from PIL import Image -import streamlit as st -from nltk.corpus import stopwords -from io import BytesIO - - - -# os.system('pip install nltk') -nltk.download('stopwords') - -# Load the pre-trained model -model = VisionEncoderDecoderModel.from_pretrained( - "SumanthKarnati/Image2Ingredients") -model.eval() - -# Define the feature extractor -feature_extractor = ViTImageProcessor.from_pretrained( - 'nlpconnect/vit-gpt2-image-captioning') - -# Load the tokenizer -tokenizer = AutoTokenizer.from_pretrained( - 'nlpconnect/vit-gpt2-image-captioning') - -# Set up text generation pipeline -generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B') - -# Device configuration -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - -# Transfer the model to GPU if available -model = model.to(device) - -# Set prediction arguments -max_length = 16 -num_beams = 4 -gen_kwargs = {"max_length": max_length, "num_beams": num_beams} - -# Function to predict ingredients from images - - -def predict_step(image_files, model, feature_extractor, tokenizer, device, gen_kwargs): - images = [] - for image_file in image_files: - if image_file is not None: - # Create a BytesIO object from the UploadedFile (image_file) - byte_stream = BytesIO(image_file.getvalue()) - image = Image.open(byte_stream) - if image.mode != "RGB": - image = image.convert(mode="RGB") - images.append(image) - - if not images: - return None - - inputs = feature_extractor(images=images, return_tensors="pt") - inputs.to(device) - output_ids = model.generate(inputs["pixel_values"], **gen_kwargs) - - preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) - preds = [pred.strip() for pred in preds] - return preds - - -# Get the list of English stop words -stop_words = set(stopwords.words('english')) - -# Function to remove stop words from a list of words - - -def remove_stop_words(word_list): - return [word for word in word_list if word not in stop_words] - -# Streamlit app code - - -def main(): - st.title("Image2Nutrients: Food Ingredient Recognition") - st.write("Upload an image of your food to recognize the ingredients!") - - # File upload - uploaded_file = st.file_uploader( - "Choose an image", type=["jpg", "jpeg", "png"]) - - if uploaded_file is not None: - # Display the uploaded image - image = Image.open(uploaded_file) - st.image(image, caption="Uploaded Image", use_column_width=True) - - # Perform ingredient recognition - preds = predict_step([uploaded_file], model, - feature_extractor, tokenizer, device, gen_kwargs) - - preds = preds[0].split('-') - # remove numbers - preds = [x for x in preds if not any(c.isdigit() for c in x)] - # remove empty strings - preds = list(filter(None, preds)) - # remove duplicates - - preds = list(dict.fromkeys(preds)) - - preds = remove_stop_words(preds) - - # Display the recognized ingredients - st.subheader("Recognized Ingredients:") - for ingredient in preds: - st.write(ingredient) - - preds_str = ', '.join(preds) - - # Prepare the prompt - prompt = f"You are a knowledgeable assistant that provides nutritional advice based on a list of ingredients. The identified ingredients are: {preds_str}. Note that some ingredients may not make sense, so use the ones that do. Can you provide a nutritional analysis and suggestions for improvement?" 
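-        # Note: a transformers text-generation pipeline returns the prompt
-        # followed by the continuation in 'generated_text' by default, which is
-        # why the code below slices off the first len(prompt) characters.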
-
-        # Generate a sequence of text
-        suggestions = generator(prompt, do_sample=True, min_length=200)
-
-        # Extract only the newly generated text (the pipeline echoes the prompt)
-        suggestions = suggestions[0]['generated_text'][len(prompt):]
-
-        st.subheader("Nutritional Analysis and Suggestions:")
-        st.write(suggestions)
-
-
-if __name__ == "__main__":
-    main()
\ No newline at end of file
diff --git a/spaces/Sumit7864/Image-Enhancer/test.py b/spaces/Sumit7864/Image-Enhancer/test.py
deleted file mode 100644
index 84901d6adba1ce76c296384c230ea594df5479ca..0000000000000000000000000000000000000000
--- a/spaces/Sumit7864/Image-Enhancer/test.py
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
-print(result)
\ No newline at end of file
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/api.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/api.py
deleted file mode 100644
index 9dbf4201e9ec54d125886978a871169d9a9f4818..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/api.py
+++ /dev/null
@@ -1,554 +0,0 @@
-import logging
-from os import PathLike
-from typing import Any, BinaryIO, List, Optional, Set
-
-from .cd import (
-    coherence_ratio,
-    encoding_languages,
-    mb_encoding_languages,
-    merge_coherence_ratios,
-)
-from .constant import IANA_SUPPORTED, TOO_BIG_SEQUENCE, TOO_SMALL_SEQUENCE, TRACE
-from .md import mess_ratio
-from .models import CharsetMatch, CharsetMatches
-from .utils import (
-    any_specified_encoding,
-    cut_sequence_chunks,
-    iana_name,
-    identify_sig_or_bom,
-    is_cp_similar,
-    is_multi_byte_encoding,
-    should_strip_sig_or_bom,
-)
-
-# Will most likely be controversial
-# logging.addLevelName(TRACE, "TRACE")
-logger = logging.getLogger("charset_normalizer")
-explain_handler = logging.StreamHandler()
-explain_handler.setFormatter(
-    logging.Formatter("%(asctime)s | %(levelname)s | %(message)s")
-)
-
-
-def from_bytes(
-    sequences: bytes,
-    steps: int = 5,
-    chunk_size: int = 512,
-    threshold: float = 0.2,
-    cp_isolation: Optional[List[str]] = None,
-    cp_exclusion: Optional[List[str]] = None,
-    preemptive_behaviour: bool = True,
-    explain: bool = False,
-    language_threshold: float = 0.1,
-) -> CharsetMatches:
-    """
-    Given a raw bytes sequence, return the best possible charsets usable to render str objects.
-    If there are no results, it is a strong indicator that the source is binary/not text.
-    By default, the process will extract 5 blocks of 512 bytes each to assess the mess and coherence of
-    a given sequence, and will give up on a particular code page after 20% measured mess. Both criteria
-    are customizable at will.
-
-    The preemptive behaviour DOES NOT replace the traditional detection workflow; it prioritizes a
-    particular code page but never takes it for granted. This can improve performance.
-
-    To focus detection on certain code pages and/or exclude others, use cp_isolation and cp_exclusion.
-
-    This function strips the SIG/BOM from the payload every time, except for UTF-16 and UTF-32.
-    By default the library does not set up any handler other than the NullHandler; if you set the
-    'explain' toggle to True, the logger configuration is altered to add a StreamHandler suitable for
-    debugging. A custom logging format and handler can be set manually.
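-
-    A minimal usage sketch (illustrative; "unknown.srt" is a placeholder path):
-
-        from charset_normalizer import from_bytes
-
-        results = from_bytes(open("unknown.srt", "rb").read())
-        best_guess = results.best()  # a CharsetMatch, or None for binary input
-        if best_guess is not None:
-            print(best_guess.encoding)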
- """ - - if not isinstance(sequences, (bytearray, bytes)): - raise TypeError( - "Expected object of type bytes or bytearray, got: {0}".format( - type(sequences) - ) - ) - - if explain: - previous_logger_level: int = logger.level - logger.addHandler(explain_handler) - logger.setLevel(TRACE) - - length: int = len(sequences) - - if length == 0: - logger.debug("Encoding detection on empty bytes, assuming utf_8 intention.") - if explain: - logger.removeHandler(explain_handler) - logger.setLevel(previous_logger_level or logging.WARNING) - return CharsetMatches([CharsetMatch(sequences, "utf_8", 0.0, False, [], "")]) - - if cp_isolation is not None: - logger.log( - TRACE, - "cp_isolation is set. use this flag for debugging purpose. " - "limited list of encoding allowed : %s.", - ", ".join(cp_isolation), - ) - cp_isolation = [iana_name(cp, False) for cp in cp_isolation] - else: - cp_isolation = [] - - if cp_exclusion is not None: - logger.log( - TRACE, - "cp_exclusion is set. use this flag for debugging purpose. " - "limited list of encoding excluded : %s.", - ", ".join(cp_exclusion), - ) - cp_exclusion = [iana_name(cp, False) for cp in cp_exclusion] - else: - cp_exclusion = [] - - if length <= (chunk_size * steps): - logger.log( - TRACE, - "override steps (%i) and chunk_size (%i) as content does not fit (%i byte(s) given) parameters.", - steps, - chunk_size, - length, - ) - steps = 1 - chunk_size = length - - if steps > 1 and length / steps < chunk_size: - chunk_size = int(length / steps) - - is_too_small_sequence: bool = len(sequences) < TOO_SMALL_SEQUENCE - is_too_large_sequence: bool = len(sequences) >= TOO_BIG_SEQUENCE - - if is_too_small_sequence: - logger.log( - TRACE, - "Trying to detect encoding from a tiny portion of ({}) byte(s).".format( - length - ), - ) - elif is_too_large_sequence: - logger.log( - TRACE, - "Using lazy str decoding because the payload is quite large, ({}) byte(s).".format( - length - ), - ) - - prioritized_encodings: List[str] = [] - - specified_encoding: Optional[str] = ( - any_specified_encoding(sequences) if preemptive_behaviour else None - ) - - if specified_encoding is not None: - prioritized_encodings.append(specified_encoding) - logger.log( - TRACE, - "Detected declarative mark in sequence. Priority +1 given for %s.", - specified_encoding, - ) - - tested: Set[str] = set() - tested_but_hard_failure: List[str] = [] - tested_but_soft_failure: List[str] = [] - - fallback_ascii: Optional[CharsetMatch] = None - fallback_u8: Optional[CharsetMatch] = None - fallback_specified: Optional[CharsetMatch] = None - - results: CharsetMatches = CharsetMatches() - - sig_encoding, sig_payload = identify_sig_or_bom(sequences) - - if sig_encoding is not None: - prioritized_encodings.append(sig_encoding) - logger.log( - TRACE, - "Detected a SIG or BOM mark on first %i byte(s). 
Priority +1 given for %s.", - len(sig_payload), - sig_encoding, - ) - - prioritized_encodings.append("ascii") - - if "utf_8" not in prioritized_encodings: - prioritized_encodings.append("utf_8") - - for encoding_iana in prioritized_encodings + IANA_SUPPORTED: - if cp_isolation and encoding_iana not in cp_isolation: - continue - - if cp_exclusion and encoding_iana in cp_exclusion: - continue - - if encoding_iana in tested: - continue - - tested.add(encoding_iana) - - decoded_payload: Optional[str] = None - bom_or_sig_available: bool = sig_encoding == encoding_iana - strip_sig_or_bom: bool = bom_or_sig_available and should_strip_sig_or_bom( - encoding_iana - ) - - if encoding_iana in {"utf_16", "utf_32"} and not bom_or_sig_available: - logger.log( - TRACE, - "Encoding %s won't be tested as-is because it require a BOM. Will try some sub-encoder LE/BE.", - encoding_iana, - ) - continue - if encoding_iana in {"utf_7"} and not bom_or_sig_available: - logger.log( - TRACE, - "Encoding %s won't be tested as-is because detection is unreliable without BOM/SIG.", - encoding_iana, - ) - continue - - try: - is_multi_byte_decoder: bool = is_multi_byte_encoding(encoding_iana) - except (ModuleNotFoundError, ImportError): - logger.log( - TRACE, - "Encoding %s does not provide an IncrementalDecoder", - encoding_iana, - ) - continue - - try: - if is_too_large_sequence and is_multi_byte_decoder is False: - str( - sequences[: int(50e4)] - if strip_sig_or_bom is False - else sequences[len(sig_payload) : int(50e4)], - encoding=encoding_iana, - ) - else: - decoded_payload = str( - sequences - if strip_sig_or_bom is False - else sequences[len(sig_payload) :], - encoding=encoding_iana, - ) - except (UnicodeDecodeError, LookupError) as e: - if not isinstance(e, LookupError): - logger.log( - TRACE, - "Code page %s does not fit given bytes sequence at ALL. %s", - encoding_iana, - str(e), - ) - tested_but_hard_failure.append(encoding_iana) - continue - - similar_soft_failure_test: bool = False - - for encoding_soft_failed in tested_but_soft_failure: - if is_cp_similar(encoding_iana, encoding_soft_failed): - similar_soft_failure_test = True - break - - if similar_soft_failure_test: - logger.log( - TRACE, - "%s is deemed too similar to code page %s and was consider unsuited already. 
Continuing!", - encoding_iana, - encoding_soft_failed, - ) - continue - - r_ = range( - 0 if not bom_or_sig_available else len(sig_payload), - length, - int(length / steps), - ) - - multi_byte_bonus: bool = ( - is_multi_byte_decoder - and decoded_payload is not None - and len(decoded_payload) < length - ) - - if multi_byte_bonus: - logger.log( - TRACE, - "Code page %s is a multi byte encoding table and it appear that at least one character " - "was encoded using n-bytes.", - encoding_iana, - ) - - max_chunk_gave_up: int = int(len(r_) / 4) - - max_chunk_gave_up = max(max_chunk_gave_up, 2) - early_stop_count: int = 0 - lazy_str_hard_failure = False - - md_chunks: List[str] = [] - md_ratios = [] - - try: - for chunk in cut_sequence_chunks( - sequences, - encoding_iana, - r_, - chunk_size, - bom_or_sig_available, - strip_sig_or_bom, - sig_payload, - is_multi_byte_decoder, - decoded_payload, - ): - md_chunks.append(chunk) - - md_ratios.append( - mess_ratio( - chunk, - threshold, - explain is True and 1 <= len(cp_isolation) <= 2, - ) - ) - - if md_ratios[-1] >= threshold: - early_stop_count += 1 - - if (early_stop_count >= max_chunk_gave_up) or ( - bom_or_sig_available and strip_sig_or_bom is False - ): - break - except ( - UnicodeDecodeError - ) as e: # Lazy str loading may have missed something there - logger.log( - TRACE, - "LazyStr Loading: After MD chunk decode, code page %s does not fit given bytes sequence at ALL. %s", - encoding_iana, - str(e), - ) - early_stop_count = max_chunk_gave_up - lazy_str_hard_failure = True - - # We might want to check the sequence again with the whole content - # Only if initial MD tests passes - if ( - not lazy_str_hard_failure - and is_too_large_sequence - and not is_multi_byte_decoder - ): - try: - sequences[int(50e3) :].decode(encoding_iana, errors="strict") - except UnicodeDecodeError as e: - logger.log( - TRACE, - "LazyStr Loading: After final lookup, code page %s does not fit given bytes sequence at ALL. %s", - encoding_iana, - str(e), - ) - tested_but_hard_failure.append(encoding_iana) - continue - - mean_mess_ratio: float = sum(md_ratios) / len(md_ratios) if md_ratios else 0.0 - if mean_mess_ratio >= threshold or early_stop_count >= max_chunk_gave_up: - tested_but_soft_failure.append(encoding_iana) - logger.log( - TRACE, - "%s was excluded because of initial chaos probing. Gave up %i time(s). " - "Computed mean chaos is %f %%.", - encoding_iana, - early_stop_count, - round(mean_mess_ratio * 100, ndigits=3), - ) - # Preparing those fallbacks in case we got nothing. - if ( - encoding_iana in ["ascii", "utf_8", specified_encoding] - and not lazy_str_hard_failure - ): - fallback_entry = CharsetMatch( - sequences, encoding_iana, threshold, False, [], decoded_payload - ) - if encoding_iana == specified_encoding: - fallback_specified = fallback_entry - elif encoding_iana == "ascii": - fallback_ascii = fallback_entry - else: - fallback_u8 = fallback_entry - continue - - logger.log( - TRACE, - "%s passed initial chaos probing. 
Mean measured chaos is %f %%", - encoding_iana, - round(mean_mess_ratio * 100, ndigits=3), - ) - - if not is_multi_byte_decoder: - target_languages: List[str] = encoding_languages(encoding_iana) - else: - target_languages = mb_encoding_languages(encoding_iana) - - if target_languages: - logger.log( - TRACE, - "{} should target any language(s) of {}".format( - encoding_iana, str(target_languages) - ), - ) - - cd_ratios = [] - - # We shall skip the CD when its about ASCII - # Most of the time its not relevant to run "language-detection" on it. - if encoding_iana != "ascii": - for chunk in md_chunks: - chunk_languages = coherence_ratio( - chunk, - language_threshold, - ",".join(target_languages) if target_languages else None, - ) - - cd_ratios.append(chunk_languages) - - cd_ratios_merged = merge_coherence_ratios(cd_ratios) - - if cd_ratios_merged: - logger.log( - TRACE, - "We detected language {} using {}".format( - cd_ratios_merged, encoding_iana - ), - ) - - results.append( - CharsetMatch( - sequences, - encoding_iana, - mean_mess_ratio, - bom_or_sig_available, - cd_ratios_merged, - decoded_payload, - ) - ) - - if ( - encoding_iana in [specified_encoding, "ascii", "utf_8"] - and mean_mess_ratio < 0.1 - ): - logger.debug( - "Encoding detection: %s is most likely the one.", encoding_iana - ) - if explain: - logger.removeHandler(explain_handler) - logger.setLevel(previous_logger_level) - return CharsetMatches([results[encoding_iana]]) - - if encoding_iana == sig_encoding: - logger.debug( - "Encoding detection: %s is most likely the one as we detected a BOM or SIG within " - "the beginning of the sequence.", - encoding_iana, - ) - if explain: - logger.removeHandler(explain_handler) - logger.setLevel(previous_logger_level) - return CharsetMatches([results[encoding_iana]]) - - if len(results) == 0: - if fallback_u8 or fallback_ascii or fallback_specified: - logger.log( - TRACE, - "Nothing got out of the detection process. Using ASCII/UTF-8/Specified fallback.", - ) - - if fallback_specified: - logger.debug( - "Encoding detection: %s will be used as a fallback match", - fallback_specified.encoding, - ) - results.append(fallback_specified) - elif ( - (fallback_u8 and fallback_ascii is None) - or ( - fallback_u8 - and fallback_ascii - and fallback_u8.fingerprint != fallback_ascii.fingerprint - ) - or (fallback_u8 is not None) - ): - logger.debug("Encoding detection: utf_8 will be used as a fallback match") - results.append(fallback_u8) - elif fallback_ascii: - logger.debug("Encoding detection: ascii will be used as a fallback match") - results.append(fallback_ascii) - - if results: - logger.debug( - "Encoding detection: Found %s as plausible (best-candidate) for content. With %i alternatives.", - results.best().encoding, # type: ignore - len(results) - 1, - ) - else: - logger.debug("Encoding detection: Unable to determine any suitable charset.") - - if explain: - logger.removeHandler(explain_handler) - logger.setLevel(previous_logger_level) - - return results - - -def from_fp( - fp: BinaryIO, - steps: int = 5, - chunk_size: int = 512, - threshold: float = 0.20, - cp_isolation: Optional[List[str]] = None, - cp_exclusion: Optional[List[str]] = None, - preemptive_behaviour: bool = True, - explain: bool = False, - language_threshold: float = 0.1, -) -> CharsetMatches: - """ - Same thing than the function from_bytes but using a file pointer that is already ready. - Will not close the file pointer. 
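-
-    A minimal sketch, assuming an already-open binary file object:
-
-        with open("unknown.srt", "rb") as fp:
-            best_guess = from_fp(fp).best()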
- """ - return from_bytes( - fp.read(), - steps, - chunk_size, - threshold, - cp_isolation, - cp_exclusion, - preemptive_behaviour, - explain, - language_threshold, - ) - - -def from_path( - path: "PathLike[Any]", - steps: int = 5, - chunk_size: int = 512, - threshold: float = 0.20, - cp_isolation: Optional[List[str]] = None, - cp_exclusion: Optional[List[str]] = None, - preemptive_behaviour: bool = True, - explain: bool = False, - language_threshold: float = 0.1, -) -> CharsetMatches: - """ - Same thing than the function from_bytes but with one extra step. Opening and reading given file path in binary mode. - Can raise IOError. - """ - with open(path, "rb") as fp: - return from_fp( - fp, - steps, - chunk_size, - threshold, - cp_isolation, - cp_exclusion, - preemptive_behaviour, - explain, - language_threshold, - ) diff --git a/spaces/TNR-5/Stable-Diffusion-Protogen-x3.4-webui/app.py b/spaces/TNR-5/Stable-Diffusion-Protogen-x3.4-webui/app.py deleted file mode 100644 index be111e59a9c0f40769c871659999c100caa38561..0000000000000000000000000000000000000000 --- a/spaces/TNR-5/Stable-Diffusion-Protogen-x3.4-webui/app.py +++ /dev/null @@ -1,76 +0,0 @@ -import os -from subprocess import getoutput - -os.system(f"git clone -b v1.5 https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui") -os.chdir("/home/user/app/stable-diffusion-webui") - -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py") -os.system(f"sed -i '$a fastapi==0.90.0' /home/user/app/stable-diffusion-webui/requirements_versions.txt") -os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? 
document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''') -os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py") -os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py") - -# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header---------------------------- -os.system(f"wget -q https://raw.githubusercontent.com/darkstorm2150/webui/main/OpenGen_header_patch.py -O /home/user/app/header_patch.py") -os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -# --------------------------------------------------------------------------------------------------------------------------------------------------- - -if "IS_SHARED_UI" in os.environ: - os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/") - - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - - os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}") - - os.system(f"python launch.py --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding") -else: - os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/") - - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - - # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py") - #os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py") - - # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME") - #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study") - #os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser 
/home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") - #os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui") - - # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt") - #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt") - #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt") - #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt") - #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt") - #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt") - - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt") - - #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt") - #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml") - - # ----------------------------Protogen Models---------------------------- - #os.system(f"wget -q https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release/resolve/main/Protogen_V2.2.safetensors -O 
/home/user/app/stable-diffusion-webui/models/Stable-diffusion/Protogen_V2.2.safetensors") - os.system(f"wget -q https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release/resolve/main/ProtoGen_X3.4.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/ProtoGen_X3.4.safetensors") - #os.system(f"wget -q https://huggingface.co/darkstorm2150/Protogen_v5.3_Official_Release/resolve/main/ProtoGen_X5.3.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/ProtoGen_X5.3.safetensors") - #os.system(f"wget -q https://huggingface.co/darkstorm2150/Protogen_v5.8_Official_Release/resolve/main/ProtoGen_X5.8.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/ProtoGen_X5.8.safetensors") - #os.system(f"wget -q https://huggingface.co/darkstorm2150/Protogen_Dragon_Official_Release/resolve/main/ProtoGen_Dragon.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/ProtoGen_Dragon.safetensors") - # ----------------------------Protogen Models---------------------------- - #os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test") - os.system(f"python launch.py --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding") \ No newline at end of file diff --git a/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/predictors/test_predictor.py b/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/predictors/test_predictor.py deleted file mode 100644 index 987adafc19dea7e003f43c3035402d12b51f5e61..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/predictors/test_predictor.py +++ /dev/null @@ -1,128 +0,0 @@ -import atexit - -from mmcv import Config -import os -import pytest -from pytorch_lightning import seed_everything -import shutil -import torch - -from risk_biased.scene_dataset.scene import load_create_dataset -from risk_biased.predictors.biased_predictor import ( - LitTrajectoryPredictor, - LitTrajectoryPredictorParams, -) -from risk_biased.utils.cost import TTCCostParams -from risk_biased.scene_dataset.loaders import SceneDataLoaders - - -def clean_up_dataset_dir(): - """ - This function is designed to delete the directories - that might have been created, even if the test fails early, - by being called on exit.
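- (The directories in question are scene_dataset_000 and scene_dataset_001 next to this test file; the registration happens via atexit.register below.)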
- """ - current_dir = os.path.dirname(os.path.realpath(__file__)) - dataset_dir0 = os.path.join(current_dir, "scene_dataset_000") - if os.path.exists(dataset_dir0): - shutil.rmtree(dataset_dir0) - dataset_dir1 = os.path.join(current_dir, "scene_dataset_001") - if os.path.exists(dataset_dir1): - shutil.rmtree(dataset_dir1) - - -atexit.register(clean_up_dataset_dir) - - -@pytest.fixture(scope="module") -def params(): - seed_everything(0) - current_dir = os.path.dirname(os.path.realpath(__file__)) - cfg = Config() - cfg.batch_size = 4 - cfg.time_scene = 5.0 - cfg.dt = 0.1 - cfg.sample_times = [t * cfg.dt for t in range(0, int(cfg.time_scene / cfg.dt))] - cfg.ego_ref_speed = 14 - cfg.ego_speed_init_low = 4.0 - cfg.ego_speed_init_high = 16.0 - cfg.ego_acceleration_mean_low = -1.5 - cfg.ego_acceleration_mean_high = 1.5 - cfg.ego_acceleration_std = 1.5 - cfg.ego_length = 4 - cfg.ego_width = 1.75 - cfg.fast_speed = 2.0 - cfg.slow_speed = 1.0 - cfg.p_change_pace = 0.2 - cfg.proportion_fast = 0.5 - cfg.perception_noise_std = 0.03 - cfg.state_dim = 2 - cfg.num_steps = 3 - cfg.num_steps_future = len(cfg.sample_times) - cfg.num_steps - cfg.file_name = "test_scene_data" - cfg.datasets_sizes = {"train": 100, "val": 10, "test": 30} - cfg.datasets = list(cfg.datasets_sizes.keys()) - cfg.num_workers = 2 - cfg.dataset_parameters = { - "dt": cfg.dt, - "time_scene": cfg.time_scene, - "sample_times": cfg.sample_times, - "ego_ref_speed": cfg.ego_ref_speed, - "ego_speed_init_low": cfg.ego_speed_init_low, - "ego_speed_init_high": cfg.ego_speed_init_high, - "ego_acceleration_mean_low": cfg.ego_acceleration_mean_low, - "ego_acceleration_mean_high": cfg.ego_acceleration_mean_high, - "ego_acceleration_std": cfg.ego_acceleration_std, - "fast_speed": cfg.fast_speed, - "slow_speed": cfg.slow_speed, - "p_change_pace": cfg.p_change_pace, - "proportion_fast": cfg.proportion_fast, - "file_name": cfg.file_name, - "datasets_sizes": cfg.datasets_sizes, - "state_dim": cfg.state_dim, - "num_steps": cfg.num_steps, - "num_steps_future": cfg.num_steps_future, - "perception_noise_std": cfg.perception_noise_std, - } - [data_train, data_val, data_test] = load_create_dataset(cfg, current_dir) - loaders = SceneDataLoaders( - cfg.state_dim, - cfg.num_steps, - cfg.num_steps_future, - cfg.batch_size, - data_train=data_train, - data_val=data_val, - data_test=data_test, - num_workers=cfg.num_workers, - ) - return cfg, loaders - - -class TestPredictor: - @pytest.fixture(autouse=True) - def setup(self, params): - cfg, loaders = params - current_dir = os.path.dirname(os.path.realpath(__file__)) - # Should create directory and datasets - [train_set, val_set, test_set] = load_create_dataset(cfg, base_dir=current_dir) - params = LitTrajectoryPredictorParams.from_config(cfg) - cost_params = TTCCostParams.from_config(cfg) - self.predictor = LitTrajectoryPredictor( - params, cost_params, loaders.unnormalize_trajectory - ) - assert not os.path.exists(os.path.join(current_dir, "scene_dataset_001")) - self.batch = torch.rand( - cfg.batch_size, - 1, - cfg.num_steps + cfg.num_steps_future, - cfg.state_dim, - ) - self.normalized_batch, self.offset = loaders.normalize_trajectory(self.batch) - ( - self.normalized_batch_past, - self.normalized_batch_future, - ) = loaders.split_trajectory(self.normalized_batch) - - # Remove after use - dataset_dir = os.path.join(current_dir, "scene_dataset_000") - shutil.rmtree(dataset_dir) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/response.py 
b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/response.py deleted file mode 100644 index 5ea609ccedf18eb4ab70f8fc6990448eb6407237..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/response.py +++ /dev/null @@ -1,107 +0,0 @@ -from __future__ import absolute_import - -from email.errors import MultipartInvariantViolationDefect, StartBoundaryNotFoundDefect - -from ..exceptions import HeaderParsingError -from ..packages.six.moves import http_client as httplib - - -def is_fp_closed(obj): - """ - Checks whether a given file-like object is closed. - - :param obj: - The file-like object to check. - """ - - try: - # Check `isclosed()` first, in case Python3 doesn't set `closed`. - # GH Issue #928 - return obj.isclosed() - except AttributeError: - pass - - try: - # Check via the official file-like-object way. - return obj.closed - except AttributeError: - pass - - try: - # Check if the object is a container for another file-like object that - # gets released on exhaustion (e.g. HTTPResponse). - return obj.fp is None - except AttributeError: - pass - - raise ValueError("Unable to determine whether fp is closed.") - - -def assert_header_parsing(headers): - """ - Asserts whether all headers have been successfully parsed. - Extracts encountered errors from the result of parsing headers. - - Only works on Python 3. - - :param http.client.HTTPMessage headers: Headers to verify. - - :raises urllib3.exceptions.HeaderParsingError: - If parsing errors are found. - """ - - # This will fail silently if we pass in the wrong kind of parameter. - # To make debugging easier add an explicit check. - if not isinstance(headers, httplib.HTTPMessage): - raise TypeError("expected httplib.Message, got {0}.".format(type(headers))) - - defects = getattr(headers, "defects", None) - get_payload = getattr(headers, "get_payload", None) - - unparsed_data = None - if get_payload: - # get_payload is actually email.message.Message.get_payload; - # we're only interested in the result if it's not a multipart message - if not headers.is_multipart(): - payload = get_payload() - - if isinstance(payload, (bytes, str)): - unparsed_data = payload - if defects: - # httplib is assuming a response body is available - # when parsing headers even when httplib only sends - # header data to parse_headers() This results in - # defects on multipart responses in particular. - # See: https://github.com/urllib3/urllib3/issues/800 - - # So we ignore the following defects: - # - StartBoundaryNotFoundDefect: - # The claimed start boundary was never found. - # - MultipartInvariantViolationDefect: - # A message claimed to be a multipart but no subparts were found. - defects = [ - defect - for defect in defects - if not isinstance( - defect, (StartBoundaryNotFoundDefect, MultipartInvariantViolationDefect) - ) - ] - - if defects or unparsed_data: - raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data) - - -def is_response_to_head(response): - """ - Checks whether the request of a response has been a HEAD-request. - Handles the quirks of AppEngine. - - :param http.client.HTTPResponse response: - Response to check if the originating request - used 'HEAD' as a method. - """ - # FIXME: Can we do this somehow without accessing private httplib _method? 
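- # NOTE: ``_method`` normally holds the request method as a string; App Engine's stack can store an int code instead, hence the isinstance check below.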
- method = response._method - if isinstance(method, int): # Platform-specific: Appengine - return method == 3 - return method.upper() == "HEAD" diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/editable_wheel.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/editable_wheel.py deleted file mode 100644 index ffcc2cc0e6f49414b32c17d2fca54698cf9b3d60..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/editable_wheel.py +++ /dev/null @@ -1,857 +0,0 @@ -""" -Create a wheel that, when installed, will make the source package 'editable' -(add it to the interpreter's path, including metadata) per PEP 660. Replaces -'setup.py develop'. - -.. note:: - One of the mechanisms briefly mentioned in PEP 660 to implement editable installs is - to create a separate directory inside ``build`` and use a .pth file to point to that - directory. In the context of this file such a directory is referred to as - *auxiliary build directory* or ``auxiliary_dir``. -""" - -import logging -import os -import shutil -import sys -import traceback -from contextlib import suppress -from enum import Enum -from inspect import cleandoc -from itertools import chain -from pathlib import Path -from tempfile import TemporaryDirectory -from typing import ( - TYPE_CHECKING, - Dict, - Iterable, - Iterator, - List, - Mapping, - Optional, - Tuple, - TypeVar, - Union, -) - -from .. import ( - Command, - _normalization, - _path, - errors, - namespaces, -) - -from ..discovery import find_package_path -from ..dist import Distribution -from ..warnings import ( - InformationOnly, - SetuptoolsDeprecationWarning, - SetuptoolsWarning, -) - -from .build_py import build_py as build_py_cls - -if TYPE_CHECKING: - from wheel.wheelfile import WheelFile # noqa - -if sys.version_info >= (3, 8): - from typing import Protocol -elif TYPE_CHECKING: - from typing_extensions import Protocol -else: - from abc import ABC as Protocol - -_Path = Union[str, Path] -_P = TypeVar("_P", bound=_Path) -_logger = logging.getLogger(__name__) - - -class _EditableMode(Enum): - """ - Possible editable installation modes: - `lenient` (new files automatically added to the package - DEFAULT); - `strict` (requires a new installation when files are added/removed); or - `compat` (attempts to emulate `python setup.py develop` - DEPRECATED). - """ - - STRICT = "strict" - LENIENT = "lenient" - COMPAT = "compat" # TODO: Remove `compat` after Dec/2022. - - @classmethod - def convert(cls, mode: Optional[str]) -> "_EditableMode": - if not mode: - return _EditableMode.LENIENT # default - - _mode = mode.upper() - if _mode not in _EditableMode.__members__: - raise errors.OptionError(f"Invalid editable mode: {mode!r}. Try: 'strict'.") - - if _mode == "COMPAT": - SetuptoolsDeprecationWarning.emit( - "Compat editable installs", - """ - The 'compat' editable mode is transitional and will be removed - in future versions of `setuptools`. - Please adapt your code accordingly to use either the 'strict' or the - 'lenient' modes. - """, - see_docs="userguide/development_mode.html", - # TODO: define due_date - # There is a series of shortcomings with the available editable install - # methods, and they are very controversial. This is something that still - # needs work. - # Moreover, `pip` is still hiding this warning, so users are not aware.
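- # In short: 'compat' is kept only as a temporary escape hatch while the 'strict' and 'lenient' implementations mature.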
- ) - - return _EditableMode[_mode] - - -_STRICT_WARNING = """ -New or renamed files may not be automatically picked up without a new installation. -""" - -_LENIENT_WARNING = """ -Options like `package-data`, `include/exclude-package-data` or -`packages.find.exclude/include` may have no effect. -""" - - -class editable_wheel(Command): - """Build 'editable' wheel for development. - This command is private and reserved for internal use of setuptools, - users should rely on ``setuptools.build_meta`` APIs. - """ - - description = "DO NOT CALL DIRECTLY, INTERNAL ONLY: create PEP 660 editable wheel" - - user_options = [ - ("dist-dir=", "d", "directory to put final built distributions in"), - ("dist-info-dir=", "I", "path to a pre-build .dist-info directory"), - ("mode=", None, cleandoc(_EditableMode.__doc__ or "")), - ] - - def initialize_options(self): - self.dist_dir = None - self.dist_info_dir = None - self.project_dir = None - self.mode = None - - def finalize_options(self): - dist = self.distribution - self.project_dir = dist.src_root or os.curdir - self.package_dir = dist.package_dir or {} - self.dist_dir = Path(self.dist_dir or os.path.join(self.project_dir, "dist")) - - def run(self): - try: - self.dist_dir.mkdir(exist_ok=True) - self._ensure_dist_info() - - # Add missing dist_info files - self.reinitialize_command("bdist_wheel") - bdist_wheel = self.get_finalized_command("bdist_wheel") - bdist_wheel.write_wheelfile(self.dist_info_dir) - - self._create_wheel_file(bdist_wheel) - except Exception: - traceback.print_exc() - project = self.distribution.name or self.distribution.get_name() - _DebuggingTips.emit(project=project) - raise - - def _ensure_dist_info(self): - if self.dist_info_dir is None: - dist_info = self.reinitialize_command("dist_info") - dist_info.output_dir = self.dist_dir - dist_info.ensure_finalized() - dist_info.run() - self.dist_info_dir = dist_info.dist_info_dir - else: - assert str(self.dist_info_dir).endswith(".dist-info") - assert Path(self.dist_info_dir, "METADATA").exists() - - def _install_namespaces(self, installation_dir, pth_prefix): - # XXX: Only required to support the deprecated namespace practice - dist = self.distribution - if not dist.namespace_packages: - return - - src_root = Path(self.project_dir, self.package_dir.get("", ".")).resolve() - installer = _NamespaceInstaller(dist, installation_dir, pth_prefix, src_root) - installer.install_namespaces() - - def _find_egg_info_dir(self) -> Optional[str]: - parent_dir = Path(self.dist_info_dir).parent if self.dist_info_dir else Path() - candidates = map(str, parent_dir.glob("*.egg-info")) - return next(candidates, None) - - def _configure_build( - self, name: str, unpacked_wheel: _Path, build_lib: _Path, tmp_dir: _Path - ): - """Configure commands to behave in the following ways: - - - Build commands can write to ``build_lib`` if they really want to... - (but this folder is expected to be ignored and modules are expected to live - in the project directory...) - - Binary extensions should be built in-place (editable_mode = True) - - Data/header/script files are not part of the "editable" specification - so they are written directly to the unpacked_wheel directory. 
- """ - # Non-editable files (data, headers, scripts) are written directly to the - # unpacked_wheel - - dist = self.distribution - wheel = str(unpacked_wheel) - build_lib = str(build_lib) - data = str(Path(unpacked_wheel, f"{name}.data", "data")) - headers = str(Path(unpacked_wheel, f"{name}.data", "headers")) - scripts = str(Path(unpacked_wheel, f"{name}.data", "scripts")) - - # egg-info may be generated again to create a manifest (used for package data) - egg_info = dist.reinitialize_command("egg_info", reinit_subcommands=True) - egg_info.egg_base = str(tmp_dir) - egg_info.ignore_egg_info_in_manifest = True - - build = dist.reinitialize_command("build", reinit_subcommands=True) - install = dist.reinitialize_command("install", reinit_subcommands=True) - - build.build_platlib = build.build_purelib = build.build_lib = build_lib - install.install_purelib = install.install_platlib = install.install_lib = wheel - install.install_scripts = build.build_scripts = scripts - install.install_headers = headers - install.install_data = data - - install_scripts = dist.get_command_obj("install_scripts") - install_scripts.no_ep = True - - build.build_temp = str(tmp_dir) - - build_py = dist.get_command_obj("build_py") - build_py.compile = False - build_py.existing_egg_info_dir = self._find_egg_info_dir() - - self._set_editable_mode() - - build.ensure_finalized() - install.ensure_finalized() - - def _set_editable_mode(self): - """Set the ``editable_mode`` flag in the build sub-commands""" - dist = self.distribution - build = dist.get_command_obj("build") - for cmd_name in build.get_sub_commands(): - cmd = dist.get_command_obj(cmd_name) - if hasattr(cmd, "editable_mode"): - cmd.editable_mode = True - elif hasattr(cmd, "inplace"): - cmd.inplace = True # backward compatibility with distutils - - def _collect_build_outputs(self) -> Tuple[List[str], Dict[str, str]]: - files: List[str] = [] - mapping: Dict[str, str] = {} - build = self.get_finalized_command("build") - - for cmd_name in build.get_sub_commands(): - cmd = self.get_finalized_command(cmd_name) - if hasattr(cmd, "get_outputs"): - files.extend(cmd.get_outputs() or []) - if hasattr(cmd, "get_output_mapping"): - mapping.update(cmd.get_output_mapping() or {}) - - return files, mapping - - def _run_build_commands( - self, dist_name: str, unpacked_wheel: _Path, build_lib: _Path, tmp_dir: _Path - ) -> Tuple[List[str], Dict[str, str]]: - self._configure_build(dist_name, unpacked_wheel, build_lib, tmp_dir) - self._run_build_subcommands() - files, mapping = self._collect_build_outputs() - self._run_install("headers") - self._run_install("scripts") - self._run_install("data") - return files, mapping - - def _run_build_subcommands(self): - """ - Issue #3501 indicates that some plugins/customizations might rely on: - - 1. ``build_py`` not running - 2. ``build_py`` always copying files to ``build_lib`` - - However both these assumptions may be false in editable_wheel. - This method implements a temporary workaround to support the ecosystem - while the implementations catch up. - """ - # TODO: Once plugins/customisations had the chance to catch up, replace - # `self._run_build_subcommands()` with `self.run_command("build")`. - # Also remove _safely_run, TestCustomBuildPy. Suggested date: Aug/2023. 
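- # Until then, run each sub-command individually so that a customized ``build_py`` can be wrapped in ``_safely_run`` below.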
- build: Command = self.get_finalized_command("build") - for name in build.get_sub_commands(): - cmd = self.get_finalized_command(name) - if name == "build_py" and type(cmd) != build_py_cls: - self._safely_run(name) - else: - self.run_command(name) - - def _safely_run(self, cmd_name: str): - try: - return self.run_command(cmd_name) - except Exception: - SetuptoolsDeprecationWarning.emit( - "Customization incompatible with editable install", - f""" - {traceback.format_exc()} - - If you are seeing this warning it is very likely that a setuptools - plugin or customization overrides the `{cmd_name}` command, without - taking into consideration how editable installs run build steps - starting from setuptools v64.0.0. - - Plugin authors and developers relying on custom build steps are - encouraged to update their `{cmd_name}` implementation considering the - information about editable installs in - https://setuptools.pypa.io/en/latest/userguide/extension.html. - - For the time being `setuptools` will silence this error and ignore - the faulty command, but this behaviour will change in future versions. - """, - # TODO: define due_date - # There is a series of shortcomings with the available editable install - # methods, and they are very controversial. This is something that still - # needs work. - ) - - def _create_wheel_file(self, bdist_wheel): - from wheel.wheelfile import WheelFile - - dist_info = self.get_finalized_command("dist_info") - dist_name = dist_info.name - tag = "-".join(bdist_wheel.get_tag()) - build_tag = "0.editable" # According to PEP 427 needs to start with digit - archive_name = f"{dist_name}-{build_tag}-{tag}.whl" - wheel_path = Path(self.dist_dir, archive_name) - if wheel_path.exists(): - wheel_path.unlink() - - unpacked_wheel = TemporaryDirectory(suffix=archive_name) - build_lib = TemporaryDirectory(suffix=".build-lib") - build_tmp = TemporaryDirectory(suffix=".build-temp") - - with unpacked_wheel as unpacked, build_lib as lib, build_tmp as tmp: - unpacked_dist_info = Path(unpacked, Path(self.dist_info_dir).name) - shutil.copytree(self.dist_info_dir, unpacked_dist_info) - self._install_namespaces(unpacked, dist_info.name) - files, mapping = self._run_build_commands(dist_name, unpacked, lib, tmp) - strategy = self._select_strategy(dist_name, tag, lib) - with strategy, WheelFile(wheel_path, "w") as wheel_obj: - strategy(wheel_obj, files, mapping) - wheel_obj.write_files(unpacked) - - return wheel_path - - def _run_install(self, category: str): - has_category = getattr(self.distribution, f"has_{category}", None) - if has_category and has_category(): - _logger.info(f"Installing {category} as non editable") - self.run_command(f"install_{category}") - - def _select_strategy( - self, - name: str, - tag: str, - build_lib: _Path, - ) -> "EditableStrategy": - """Decides which strategy to use to implement an editable installation.""" - build_name = f"__editable__.{name}-{tag}" - project_dir = Path(self.project_dir) - mode = _EditableMode.convert(self.mode) - - if mode is _EditableMode.STRICT: - auxiliary_dir = _empty_dir(Path(self.project_dir, "build", build_name)) - return _LinkTree(self.distribution, name, auxiliary_dir, build_lib) - - packages = _find_packages(self.distribution) - has_simple_layout = _simple_layout(packages, self.package_dir, project_dir) - is_compat_mode = mode is _EditableMode.COMPAT - if set(self.package_dir) == {""} and has_simple_layout or is_compat_mode: - # src-layout(ish) is relatively safe for a simple pth file - src_dir = self.package_dir.get("", ".") - 
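# A single top-level mapping means one directory on ``sys.path`` exposes every package, so a plain .pth entry pointing at the source directory is enough. -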
return _StaticPth(self.distribution, name, [Path(project_dir, src_dir)]) - - # Use a MetaPathFinder to avoid adding accidental top-level packages/modules - return _TopLevelFinder(self.distribution, name) - - -class EditableStrategy(Protocol): - def __call__(self, wheel: "WheelFile", files: List[str], mapping: Dict[str, str]): - ... - - def __enter__(self): - ... - - def __exit__(self, _exc_type, _exc_value, _traceback): - ... - - -class _StaticPth: - def __init__(self, dist: Distribution, name: str, path_entries: List[Path]): - self.dist = dist - self.name = name - self.path_entries = path_entries - - def __call__(self, wheel: "WheelFile", files: List[str], mapping: Dict[str, str]): - entries = "\n".join((str(p.resolve()) for p in self.path_entries)) - contents = bytes(f"{entries}\n", "utf-8") - wheel.writestr(f"__editable__.{self.name}.pth", contents) - - def __enter__(self): - msg = f""" - Editable install will be performed using .pth file to extend `sys.path` with: - {list(map(os.fspath, self.path_entries))!r} - """ - _logger.warning(msg + _LENIENT_WARNING) - return self - - def __exit__(self, _exc_type, _exc_value, _traceback): - ... - - -class _LinkTree(_StaticPth): - """ - Creates a ``.pth`` file that points to a link tree in the ``auxiliary_dir``. - - This strategy will only link files (not dirs), so it can be implemented in - any OS, even if that means using hardlinks instead of symlinks. - - By collocating ``auxiliary_dir`` and the original source code, limitations - with hardlinks should be avoided. - """ - def __init__( - self, dist: Distribution, - name: str, - auxiliary_dir: _Path, - build_lib: _Path, - ): - self.auxiliary_dir = Path(auxiliary_dir) - self.build_lib = Path(build_lib).resolve() - self._file = dist.get_command_obj("build_py").copy_file - super().__init__(dist, name, [self.auxiliary_dir]) - - def __call__(self, wheel: "WheelFile", files: List[str], mapping: Dict[str, str]): - self._create_links(files, mapping) - super().__call__(wheel, files, mapping) - - def _normalize_output(self, file: str) -> Optional[str]: - # Files relative to build_lib will be normalized to None - with suppress(ValueError): - path = Path(file).resolve().relative_to(self.build_lib) - return str(path).replace(os.sep, '/') - return None - - def _create_file(self, relative_output: str, src_file: str, link=None): - dest = self.auxiliary_dir / relative_output - if not dest.parent.is_dir(): - dest.parent.mkdir(parents=True) - self._file(src_file, dest, link=link) - - def _create_links(self, outputs, output_mapping): - self.auxiliary_dir.mkdir(parents=True, exist_ok=True) - link_type = "sym" if _can_symlink_files(self.auxiliary_dir) else "hard" - mappings = { - self._normalize_output(k): v - for k, v in output_mapping.items() - } - mappings.pop(None, None) # remove files that are not relative to build_lib - - for output in outputs: - relative = self._normalize_output(output) - if relative and relative not in mappings: - self._create_file(relative, output) - - for relative, src in mappings.items(): - self._create_file(relative, src, link=link_type) - - def __enter__(self): - msg = "Strict editable install will be performed using a link tree.\n" - _logger.warning(msg + _STRICT_WARNING) - return self - - def __exit__(self, _exc_type, _exc_value, _traceback): - msg = f"""\n - Strict editable installation performed using the auxiliary directory: - {self.auxiliary_dir} - - Please be careful to not remove this directory, otherwise you might not be able - to import/use your package. 
- """ - InformationOnly.emit("Editable installation.", msg) - - -class _TopLevelFinder: - def __init__(self, dist: Distribution, name: str): - self.dist = dist - self.name = name - - def __call__(self, wheel: "WheelFile", files: List[str], mapping: Dict[str, str]): - src_root = self.dist.src_root or os.curdir - top_level = chain(_find_packages(self.dist), _find_top_level_modules(self.dist)) - package_dir = self.dist.package_dir or {} - roots = _find_package_roots(top_level, package_dir, src_root) - - namespaces_: Dict[str, List[str]] = dict(chain( - _find_namespaces(self.dist.packages or [], roots), - ((ns, []) for ns in _find_virtual_namespaces(roots)), - )) - - name = f"__editable__.{self.name}.finder" - finder = _normalization.safe_identifier(name) - content = bytes(_finder_template(name, roots, namespaces_), "utf-8") - wheel.writestr(f"{finder}.py", content) - - content = bytes(f"import {finder}; {finder}.install()", "utf-8") - wheel.writestr(f"__editable__.{self.name}.pth", content) - - def __enter__(self): - msg = "Editable install will be performed using a meta path finder.\n" - _logger.warning(msg + _LENIENT_WARNING) - return self - - def __exit__(self, _exc_type, _exc_value, _traceback): - msg = """\n - Please be careful with folders in your working directory with the same - name as your package as they may take precedence during imports. - """ - InformationOnly.emit("Editable installation.", msg) - - -def _can_symlink_files(base_dir: Path) -> bool: - with TemporaryDirectory(dir=str(base_dir.resolve())) as tmp: - path1, path2 = Path(tmp, "file1.txt"), Path(tmp, "file2.txt") - path1.write_text("file1", encoding="utf-8") - with suppress(AttributeError, NotImplementedError, OSError): - os.symlink(path1, path2) - if path2.is_symlink() and path2.read_text(encoding="utf-8") == "file1": - return True - - try: - os.link(path1, path2) # Ensure hard links can be created - except Exception as ex: - msg = ( - "File system does not seem to support either symlinks or hard links. " - "Strict editable installs require one of them to be supported." - ) - raise LinksNotSupported(msg) from ex - return False - - -def _simple_layout( - packages: Iterable[str], package_dir: Dict[str, str], project_dir: Path -) -> bool: - """Return ``True`` if: - - all packages are contained by the same parent directory, **and** - - all packages become importable if the parent directory is added to ``sys.path``. 
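- - For example, a classic src layout with ``package_dir={"": "src"}`` satisfies both conditions, as the first doctest below shows.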
- - >>> _simple_layout(['a'], {"": "src"}, "/tmp/myproj") - True - >>> _simple_layout(['a', 'a.b'], {"": "src"}, "/tmp/myproj") - True - >>> _simple_layout(['a', 'a.b'], {}, "/tmp/myproj") - True - >>> _simple_layout(['a', 'a.a1', 'a.a1.a2', 'b'], {"": "src"}, "/tmp/myproj") - True - >>> _simple_layout(['a', 'a.a1', 'a.a1.a2', 'b'], {"a": "a", "b": "b"}, ".") - True - >>> _simple_layout(['a', 'a.a1', 'a.a1.a2', 'b'], {"a": "_a", "b": "_b"}, ".") - False - >>> _simple_layout(['a', 'a.a1', 'a.a1.a2', 'b'], {"a": "_a"}, "/tmp/myproj") - False - >>> _simple_layout(['a', 'a.a1', 'a.a1.a2', 'b'], {"a.a1.a2": "_a2"}, ".") - False - >>> _simple_layout(['a', 'a.b'], {"": "src", "a.b": "_ab"}, "/tmp/myproj") - False - >>> # Special cases, no packages yet: - >>> _simple_layout([], {"": "src"}, "/tmp/myproj") - True - >>> _simple_layout([], {"a": "_a", "": "src"}, "/tmp/myproj") - False - """ - layout = { - pkg: find_package_path(pkg, package_dir, project_dir) - for pkg in packages - } - if not layout: - return set(package_dir) in ({}, {""}) - parent = os.path.commonpath([_parent_path(k, v) for k, v in layout.items()]) - return all( - _path.same_path(Path(parent, *key.split('.')), value) - for key, value in layout.items() - ) - - -def _parent_path(pkg, pkg_path): - """Infer the parent path containing a package, that if added to ``sys.path`` would - allow importing that package. - When ``pkg`` is directly mapped into a directory with a different name, return its - own path. - >>> _parent_path("a", "src/a") - 'src' - >>> _parent_path("b", "src/c") - 'src/c' - """ - parent = pkg_path[:-len(pkg)] if pkg_path.endswith(pkg) else pkg_path - return parent.rstrip("/" + os.sep) - - -def _find_packages(dist: Distribution) -> Iterator[str]: - yield from iter(dist.packages or []) - - py_modules = dist.py_modules or [] - nested_modules = [mod for mod in py_modules if "." in mod] - if dist.ext_package: - yield dist.ext_package - else: - ext_modules = dist.ext_modules or [] - nested_modules += [x.name for x in ext_modules if "." in x.name] - - for module in nested_modules: - package, _, _ = module.rpartition(".") - yield package - - -def _find_top_level_modules(dist: Distribution) -> Iterator[str]: - py_modules = dist.py_modules or [] - yield from (mod for mod in py_modules if "." not in mod) - - if not dist.ext_package: - ext_modules = dist.ext_modules or [] - yield from (x.name for x in ext_modules if "." not in x.name) - - -def _find_package_roots( - packages: Iterable[str], - package_dir: Mapping[str, str], - src_root: _Path, -) -> Dict[str, str]: - pkg_roots: Dict[str, str] = { - pkg: _absolute_root(find_package_path(pkg, package_dir, src_root)) - for pkg in sorted(packages) - } - - return _remove_nested(pkg_roots) - - -def _absolute_root(path: _Path) -> str: - """Works for packages and top-level modules""" - path_ = Path(path) - parent = path_.parent - - if path_.exists(): - return str(path_.resolve()) - else: - return str(parent.resolve() / path_.name) - - -def _find_virtual_namespaces(pkg_roots: Dict[str, str]) -> Iterator[str]: - """By carefully designing ``package_dir``, it is possible to implement the logical - structure of PEP 420 in a package without the corresponding directories. - - Moreover a parent package can be purposefully/accidentally skipped in the discovery - phase (e.g. ``find_packages(include=["mypkg.*"])``, when ``mypkg.foo`` is included - but ``mypkg`` itself is not).
- We consider this case to also be a virtual namespace (ignoring the original - directory) to emulate a non-editable installation. - - This function will try to find these kinds of namespaces. - """ - for pkg in pkg_roots: - if "." not in pkg: - continue - parts = pkg.split(".") - for i in range(len(parts) - 1, 0, -1): - partial_name = ".".join(parts[:i]) - path = Path(find_package_path(partial_name, pkg_roots, "")) - if not path.exists() or partial_name not in pkg_roots: - # partial_name not in pkg_roots ==> purposefully/accidentally skipped - yield partial_name - - -def _find_namespaces( - packages: List[str], pkg_roots: Dict[str, str] -) -> Iterator[Tuple[str, List[str]]]: - for pkg in packages: - path = find_package_path(pkg, pkg_roots, "") - if Path(path).exists() and not Path(path, "__init__.py").exists(): - yield (pkg, [path]) - - -def _remove_nested(pkg_roots: Dict[str, str]) -> Dict[str, str]: - output = dict(pkg_roots.copy()) - - for pkg, path in reversed(list(pkg_roots.items())): - if any( - pkg != other and _is_nested(pkg, path, other, other_path) - for other, other_path in pkg_roots.items() - ): - output.pop(pkg) - - return output - - -def _is_nested(pkg: str, pkg_path: str, parent: str, parent_path: str) -> bool: - """ - Return ``True`` if ``pkg`` is nested inside ``parent`` both logically and in the - file system. - >>> _is_nested("a.b", "path/a/b", "a", "path/a") - True - >>> _is_nested("a.b", "path/a/b", "a", "otherpath/a") - False - >>> _is_nested("a.b", "path/a/b", "c", "path/c") - False - >>> _is_nested("a.a", "path/a/a", "a", "path/a") - True - >>> _is_nested("b.a", "path/b/a", "a", "path/a") - False - """ - norm_pkg_path = _path.normpath(pkg_path) - rest = pkg.replace(parent, "", 1).strip(".").split(".") - return ( - pkg.startswith(parent) - and norm_pkg_path == _path.normpath(Path(parent_path, *rest)) - ) - - -def _empty_dir(dir_: _P) -> _P: - """Create a directory ensured to be empty. 
Existing files may be removed.""" - shutil.rmtree(dir_, ignore_errors=True) - os.makedirs(dir_) - return dir_ - - -class _NamespaceInstaller(namespaces.Installer): - def __init__(self, distribution, installation_dir, editable_name, src_root): - self.distribution = distribution - self.src_root = src_root - self.installation_dir = installation_dir - self.editable_name = editable_name - self.outputs = [] - self.dry_run = False - - def _get_target(self): - """Installation target.""" - return os.path.join(self.installation_dir, self.editable_name) - - def _get_root(self): - """Where the modules/packages should be loaded from.""" - return repr(str(self.src_root)) - - -_FINDER_TEMPLATE = """\ -import sys -from importlib.machinery import ModuleSpec -from importlib.machinery import all_suffixes as module_suffixes -from importlib.util import spec_from_file_location -from itertools import chain -from pathlib import Path - -MAPPING = {mapping!r} -NAMESPACES = {namespaces!r} -PATH_PLACEHOLDER = {name!r} + ".__path_hook__" - - -class _EditableFinder: # MetaPathFinder - @classmethod - def find_spec(cls, fullname, path=None, target=None): - for pkg, pkg_path in reversed(list(MAPPING.items())): - if fullname == pkg or fullname.startswith(f"{{pkg}}."): - rest = fullname.replace(pkg, "", 1).strip(".").split(".") - return cls._find_spec(fullname, Path(pkg_path, *rest)) - - return None - - @classmethod - def _find_spec(cls, fullname, candidate_path): - init = candidate_path / "__init__.py" - candidates = (candidate_path.with_suffix(x) for x in module_suffixes()) - for candidate in chain([init], candidates): - if candidate.exists(): - return spec_from_file_location(fullname, candidate) - - -class _EditableNamespaceFinder: # PathEntryFinder - @classmethod - def _path_hook(cls, path): - if path == PATH_PLACEHOLDER: - return cls - raise ImportError - - @classmethod - def _paths(cls, fullname): - # Ensure __path__ is not empty for the spec to be considered a namespace. - return NAMESPACES[fullname] or MAPPING.get(fullname) or [PATH_PLACEHOLDER] - - @classmethod - def find_spec(cls, fullname, target=None): - if fullname in NAMESPACES: - spec = ModuleSpec(fullname, None, is_package=True) - spec.submodule_search_locations = cls._paths(fullname) - return spec - return None - - @classmethod - def find_module(cls, fullname): - return None - - -def install(): - if not any(finder == _EditableFinder for finder in sys.meta_path): - sys.meta_path.append(_EditableFinder) - - if not NAMESPACES: - return - - if not any(hook == _EditableNamespaceFinder._path_hook for hook in sys.path_hooks): - # PathEntryFinder is needed to create NamespaceSpec without private APIS - sys.path_hooks.append(_EditableNamespaceFinder._path_hook) - if PATH_PLACEHOLDER not in sys.path: - sys.path.append(PATH_PLACEHOLDER) # Used just to trigger the path hook -""" - - -def _finder_template( - name: str, mapping: Mapping[str, str], namespaces: Dict[str, List[str]] -) -> str: - """Create a string containing the code for the``MetaPathFinder`` and - ``PathEntryFinder``. - """ - mapping = dict(sorted(mapping.items(), key=lambda p: p[0])) - return _FINDER_TEMPLATE.format(name=name, mapping=mapping, namespaces=namespaces) - - -class LinksNotSupported(errors.FileError): - """File system does not seem to support either symlinks or hard links.""" - - -class _DebuggingTips(SetuptoolsWarning): - _SUMMARY = "Problem in editable installation." - _DETAILS = """ - An error happened while installing `{project}` in editable mode. 
- - The following steps are recommended to help debug this problem: - - - Try to install the project normally, without using the editable mode. - Does the error still persist? - (If it does, try fixing the problem before attempting the editable mode). - - If you are using binary extensions, make sure you have all OS-level - dependencies installed (e.g. compilers, toolchains, binary libraries, ...). - - Try the latest version of setuptools (maybe the error was already fixed). - - If you (or your project dependencies) are using any setuptools extension - or customization, make sure they support the editable mode. - - After following the steps above, if the problem still persists and - you think this is related to how setuptools handles editable installations, - please submit a reproducible example - (see https://stackoverflow.com/help/minimal-reproducible-example) to: - - https://github.com/pypa/setuptools/issues - """ - _SEE_DOCS = "userguide/development_mode.html" diff --git a/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/inference.py b/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/inference.py deleted file mode 100644 index a21359ab7a6f6de693f996693348ea8859a6ff09..0000000000000000000000000000000000000000 --- a/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/inference.py +++ /dev/null @@ -1,534 +0,0 @@ - -from typing import Any, Union, Optional, Tuple, List, Dict -import os -import gc -from functools import partial - -import jax -import jax.numpy as jnp -import numpy as np - -from flax.core.frozen_dict import FrozenDict -from flax import jax_utils -from flax.training.common_utils import shard -from PIL import Image -import einops - -from diffusers import FlaxAutoencoderKL, FlaxUNet2DConditionModel -from diffusers import ( - FlaxDDIMScheduler, - FlaxPNDMScheduler, - FlaxLMSDiscreteScheduler, - FlaxDPMSolverMultistepScheduler, -) -from diffusers.schedulers.scheduling_ddim_flax import DDIMSchedulerState -from diffusers.schedulers.scheduling_pndm_flax import PNDMSchedulerState -from diffusers.schedulers.scheduling_lms_discrete_flax import LMSDiscreteSchedulerState -from diffusers.schedulers.scheduling_dpmsolver_multistep_flax import DPMSolverMultistepSchedulerState - -from transformers import FlaxCLIPTextModel, CLIPTokenizer - -from .flax_impl.flax_unet_pseudo3d_condition import UNetPseudo3DConditionModel - -SchedulerType = Union[ - FlaxDDIMScheduler, - FlaxPNDMScheduler, - FlaxLMSDiscreteScheduler, - FlaxDPMSolverMultistepScheduler, -] - -SchedulerStateType = Union[ - DDIMSchedulerState, - PNDMSchedulerState, - LMSDiscreteSchedulerState, - DPMSolverMultistepSchedulerState, -] - -SCHEDULERS: Dict[str, SchedulerType] = { - 'dpm': FlaxDPMSolverMultistepScheduler, # husbando - 'ddim': FlaxDDIMScheduler, - #'PLMS': FlaxPNDMScheduler, # it's not correctly implemented in diffusers, output is bad, but at least it "works" - #'LMS': FlaxLMSDiscreteScheduler, # borked - # image_latents, image_scheduler_state = scheduler.step( - # File "/mnt/work1/make_a_vid/makeavid-space/.venv/lib/python3.10/site-packages/diffusers/schedulers/scheduling_lms_discrete_flax.py", line 255, in step - # order = min(timestep + 1, order) - # jax._src.errors.ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected: Traced<...>with<...> - # The problem arose with the `bool` function. - # The error occurred while tracing the function scanned_fun at /mnt/work1/make_a_vid/makeavid-space/.venv/lib/python3.10/site-packages/jax/_src/lax/control_flow/loops.py:1668 for scan.
This concrete value was not available in Python because it depends on the values of the arguments loop_carry[0] and loop_carry[1][1].timesteps -} - -def dtypestr(x: jnp.dtype): - if x == jnp.float32: return 'float32' - elif x == jnp.float16: return 'float16' - elif x == jnp.bfloat16: return 'bfloat16' - else: raise -def castto(dtype, m, x): - if dtype == jnp.float32: return m.to_fp32(x) - elif dtype == jnp.float16: return m.to_fp16(x) - elif dtype == jnp.bfloat16: return m.to_bf16(x) - else: raise - -class InferenceUNetPseudo3D: - def __init__(self, - model_path: str, - dtype: jnp.dtype = jnp.float16, - hf_auth_token: Union[str, None] = None - ) -> None: - self.dtype = dtype - self.model_path = model_path - self.hf_auth_token = hf_auth_token - - self.params: Dict[str, FrozenDict[str, Any]] = {} - try: - import traceback - print('initializing unet') - unet, unet_params = UNetPseudo3DConditionModel.from_pretrained( - self.model_path, - subfolder = 'unet', - from_pt = False, - sample_size = (64, 64), - dtype = self.dtype, - param_dtype = dtypestr(self.dtype), - use_memory_efficient_attention = True, - use_auth_token = self.hf_auth_token - ) - self.unet: UNetPseudo3DConditionModel = unet - print('casting unet params') - unet_params = castto(self.dtype, self.unet, unet_params) - print('storing unet params') - self.params['unet'] = FrozenDict(unet_params) - print('deleting unet params') - del unet_params - except Exception as e: - print(e) - self.failed = ''.join(traceback.format_exception(None, e, e.__traceback__)) - traceback.print_exc() - return - self.failed = False - vae, vae_params = FlaxAutoencoderKL.from_pretrained( - self.model_path, - subfolder = 'vae', - from_pt = True, - dtype = self.dtype, - use_auth_token = self.hf_auth_token - ) - self.vae: FlaxAutoencoderKL = vae - vae_params = castto(self.dtype, self.vae, vae_params) - self.params['vae'] = FrozenDict(vae_params) - del vae_params - text_encoder = FlaxCLIPTextModel.from_pretrained( - self.model_path, - subfolder = 'text_encoder', - from_pt = True, - dtype = self.dtype, - use_auth_token = self.hf_auth_token - ) - text_encoder_params = text_encoder.params - del text_encoder._params - text_encoder_params = castto(self.dtype, text_encoder, text_encoder_params) - self.text_encoder: FlaxCLIPTextModel = text_encoder - self.params['text_encoder'] = FrozenDict(text_encoder_params) - del text_encoder_params - imunet, imunet_params = FlaxUNet2DConditionModel.from_pretrained( - 'runwayml/stable-diffusion-v1-5', - subfolder = 'unet', - from_pt = True, - dtype = self.dtype, - use_memory_efficient_attention = True, - use_auth_token = self.hf_auth_token - ) - imunet_params = castto(self.dtype, imunet, imunet_params) - self.imunet: FlaxUNet2DConditionModel = imunet - self.params['imunet'] = FrozenDict(imunet_params) - del imunet_params - self.tokenizer: CLIPTokenizer = CLIPTokenizer.from_pretrained( - self.model_path, - subfolder = 'tokenizer', - use_auth_token = self.hf_auth_token - ) - self.schedulers: Dict[str, Dict[str, SchedulerType]] = {} - for scheduler_name in SCHEDULERS: - if scheduler_name not in ['KarrasVe', 'SDEVe']: - scheduler, scheduler_state = SCHEDULERS[scheduler_name].from_pretrained( - self.model_path, - subfolder = 'scheduler', - dtype = jnp.float32, - use_auth_token = self.hf_auth_token - ) - else: - scheduler, scheduler_state = SCHEDULERS[scheduler_name].from_pretrained( - self.model_path, - subfolder = 'scheduler', - use_auth_token = self.hf_auth_token - ) - self.schedulers[scheduler_name] = scheduler - 
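# Flax schedulers are stateless; keeping the returned state in self.params under the scheduler's name lets it be replicated across devices together with the model weights. -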
self.params[scheduler_name] = scheduler_state - self.vae_scale_factor: int = int(2 ** (len(self.vae.config.block_out_channels) - 1)) - self.device_count = jax.device_count() - gc.collect() - - def prepare_inputs(self, - prompt: List[str], - neg_prompt: List[str], - hint_image: List[Image.Image], - mask_image: List[Image.Image], - width: int, - height: int - ) -> Tuple[jnp.ndarray, jnp.ndarray, jnp.ndarray, jnp.ndarray]: # prompt, neg_prompt, hint_image, mask_image - tokens = self.tokenizer( - prompt, - truncation = True, - return_overflowing_tokens = False, - max_length = 77, #self.text_encoder.config.max_length defaults to 20 if its not in the config smh - padding = 'max_length', - return_tensors = 'np' - ).input_ids - tokens = jnp.array(tokens, dtype = jnp.int32) - neg_tokens = self.tokenizer( - neg_prompt, - truncation = True, - return_overflowing_tokens = False, - max_length = 77, - padding = 'max_length', - return_tensors = 'np' - ).input_ids - neg_tokens = jnp.array(neg_tokens, dtype = jnp.int32) - for i,im in enumerate(hint_image): - if im.size != (width, height): - hint_image[i] = hint_image[i].resize((width, height), resample = Image.Resampling.LANCZOS) - for i,im in enumerate(mask_image): - if im.size != (width, height): - mask_image[i] = mask_image[i].resize((width, height), resample = Image.Resampling.LANCZOS) - # b,h,w,c | c == 3 - hint = jnp.concatenate( - [ jnp.expand_dims(np.asarray(x.convert('RGB')), axis = 0) for x in hint_image ], - axis = 0 - ).astype(jnp.float32) - # scale -1,1 - hint = (hint / 255) * 2 - 1 - # b,h,w,c | c == 1 - mask = jnp.concatenate( - [ jnp.expand_dims(np.asarray(x.convert('L')), axis = (0, -1)) for x in mask_image ], - axis = 0 - ).astype(jnp.float32) - # scale -1,1 - mask = (mask / 255) * 2 - 1 - # binarize mask - mask = mask.at[mask < 0.5].set(0) - mask = mask.at[mask >= 0.5].set(1) - # mask - hint = hint * (mask < 0.5) - # b,h,w,c -> b,c,h,w - hint = hint.transpose((0,3,1,2)) - mask = mask.transpose((0,3,1,2)) - return tokens, neg_tokens, hint, mask - - def generate(self, - prompt: Union[str, List[str]] = '', - inference_steps: int = 20, - hint_image: Union[Image.Image, List[Image.Image], None] = None, - mask_image: Union[Image.Image, List[Image.Image], None] = None, - neg_prompt: Union[str, List[str]] = '', - cfg: float = 15.0, - cfg_image: Optional[float] = None, - num_frames: int = 24, - width: int = 512, - height: int = 512, - seed: int = 0, - scheduler_type: str = 'dpm' - ) -> List[List[Image.Image]]: - assert inference_steps > 0, f'number of inference steps must be > 0 but is {inference_steps}' - assert num_frames > 0, f'number of frames must be > 0 but is {num_frames}' - assert width % 32 == 0, f'width must be divisible by 32 but is {width}' - assert height % 32 == 0, f'height must be divisible by 32 but is {height}' - if isinstance(prompt, str): - prompt = [ prompt ] - batch_size = len(prompt) - assert batch_size % self.device_count == 0, f'batch size must be multiple of {self.device_count}' - if hint_image is None: - hint_image = Image.new('RGB', (width, height), color = (0,0,0)) - use_imagegen = True - else: - use_imagegen = False - if isinstance(hint_image, Image.Image): - hint_image = [ hint_image ] * batch_size - assert len(hint_image) == batch_size, f'number of hint images must be equal to batch size {batch_size} but is {len(hint_image)}' - if mask_image is None: - mask_image = Image.new('L', hint_image[0].size, color = 0) - if isinstance(mask_image, Image.Image): - mask_image = [ mask_image ] * batch_size - assert len(mask_image) 
== batch_size, f'number of mask images must be equal to batch size {batch_size} but is {len(mask_image)}' - if isinstance(neg_prompt, str): - neg_prompt = [ neg_prompt ] * batch_size - assert len(neg_prompt) == batch_size, f'number of negative prompts must be equal to batch size {batch_size} but is {len(neg_prompt)}' - assert scheduler_type in SCHEDULERS, f'unknown type of noise scheduler: {scheduler_type}, must be one of {list(SCHEDULERS.keys())}' - tokens, neg_tokens, hint, mask = self.prepare_inputs( - prompt = prompt, - neg_prompt = neg_prompt, - hint_image = hint_image, - mask_image = mask_image, - width = width, - height = height - ) - if cfg_image is None: - cfg_image = cfg - #params['scheduler'] = scheduler_state - # NOTE splitting rngs is not deterministic, - # running on different device counts gives different seeds - #rng = jax.random.PRNGKey(seed) - #rngs = jax.random.split(rng, self.device_count) - # manually assign seeded RNGs to devices for reproducability - rngs = jnp.array([ jax.random.PRNGKey(seed + i) for i in range(self.device_count) ]) - params = jax_utils.replicate(self.params) - tokens = shard(tokens) - neg_tokens = shard(neg_tokens) - hint = shard(hint) - mask = shard(mask) - images = _p_generate(self, - tokens, - neg_tokens, - hint, - mask, - inference_steps, - num_frames, - height, - width, - cfg, - cfg_image, - rngs, - params, - use_imagegen, - scheduler_type, - ) - if images.ndim == 5: - images = einops.rearrange(images, 'd f c h w -> (d f) h w c') - else: - images = einops.rearrange(images, 'f c h w -> f h w c') - # to cpu - images = np.array(images) - images = [ Image.fromarray(x) for x in images ] - return images - - def _generate(self, - tokens: jnp.ndarray, - neg_tokens: jnp.ndarray, - hint: jnp.ndarray, - mask: jnp.ndarray, - inference_steps: int, - num_frames, - height, - width, - cfg: float, - cfg_image: float, - rng: jax.random.KeyArray, - params: Union[Dict[str, Any], FrozenDict[str, Any]], - use_imagegen: bool, - scheduler_type: str - ) -> List[Image.Image]: - batch_size = tokens.shape[0] - latent_h = height // self.vae_scale_factor - latent_w = width // self.vae_scale_factor - latent_shape = ( - batch_size, - self.vae.config.latent_channels, - num_frames, - latent_h, - latent_w - ) - encoded_prompt = self.text_encoder(tokens, params = params['text_encoder'])[0] - encoded_neg_prompt = self.text_encoder(neg_tokens, params = params['text_encoder'])[0] - - scheduler = self.schedulers[scheduler_type] - scheduler_state = params[scheduler_type] - - if use_imagegen: - image_latent_shape = (batch_size, self.vae.config.latent_channels, latent_h, latent_w) - image_latents = jax.random.normal( - rng, - shape = image_latent_shape, - dtype = jnp.float32 - ) * scheduler_state.init_noise_sigma - image_scheduler_state = scheduler.set_timesteps( - scheduler_state, - num_inference_steps = inference_steps, - shape = image_latents.shape - ) - def image_sample_loop(step, args): - image_latents, image_scheduler_state = args - t = image_scheduler_state.timesteps[step] - tt = jnp.broadcast_to(t, image_latents.shape[0]) - latents_input = scheduler.scale_model_input(image_scheduler_state, image_latents, t) - noise_pred = self.imunet.apply( - { 'params': params['imunet']} , - latents_input, - tt, - encoder_hidden_states = encoded_prompt - ).sample - noise_pred_uncond = self.imunet.apply( - { 'params': params['imunet'] }, - latents_input, - tt, - encoder_hidden_states = encoded_neg_prompt - ).sample - noise_pred = noise_pred_uncond + cfg_image * (noise_pred - noise_pred_uncond) 
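- # Classifier-free guidance: extrapolate from the unconditional prediction toward the text-conditioned one, scaled by cfg_image.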
- image_latents, image_scheduler_state = scheduler.step( - image_scheduler_state, - noise_pred.astype(jnp.float32), - t, - image_latents - ).to_tuple() - return image_latents, image_scheduler_state - image_latents, _ = jax.lax.fori_loop( - 0, inference_steps, - image_sample_loop, - (image_latents, image_scheduler_state) - ) - hint = image_latents - else: - hint = self.vae.apply( - { 'params': params['vae'] }, - hint, - method = self.vae.encode - ).latent_dist.mean * self.vae.config.scaling_factor - # NOTE vae keeps channels last for encode, but rearranges to channels first for decode - # b0 h1 w2 c3 -> b0 c3 h1 w2 - hint = hint.transpose((0, 3, 1, 2)) - - hint = jnp.expand_dims(hint, axis = 2).repeat(num_frames, axis = 2) - mask = jax.image.resize(mask, (*mask.shape[:-2], *hint.shape[-2:]), method = 'nearest') - mask = jnp.expand_dims(mask, axis = 2).repeat(num_frames, axis = 2) - # NOTE jax normal sampling misbehaves with float16 + bfloat16 - # SEE https://github.com/google/jax/discussions/13798 - # generate random at float32 - latents = jax.random.normal( - rng, - shape = latent_shape, - dtype = jnp.float32 - ) * scheduler_state.init_noise_sigma - scheduler_state = scheduler.set_timesteps( - scheduler_state, - num_inference_steps = inference_steps, - shape = latents.shape - ) - - def sample_loop(step, args): - latents, scheduler_state = args - t = scheduler_state.timesteps[step] - tt = jnp.broadcast_to(t, latents.shape[0]) - latents_input = scheduler.scale_model_input(scheduler_state, latents, t) - latents_input = jnp.concatenate([latents_input, mask, hint], axis = 1) - noise_pred = self.unet.apply( - { 'params': params['unet'] }, - latents_input, - tt, - encoded_prompt - ).sample - noise_pred_uncond = self.unet.apply( - { 'params': params['unet'] }, - latents_input, - tt, - encoded_neg_prompt - ).sample - noise_pred = noise_pred_uncond + cfg * (noise_pred - noise_pred_uncond) - latents, scheduler_state = scheduler.step( - scheduler_state, - noise_pred.astype(jnp.float32), - t, - latents - ).to_tuple() - return latents, scheduler_state - - latents, _ = jax.lax.fori_loop( - 0, inference_steps, - sample_loop, - (latents, scheduler_state) - ) - latents = 1 / self.vae.config.scaling_factor * latents - latents = einops.rearrange(latents, 'b c f h w -> (b f) c h w') - num_images = len(latents) - images_out = jnp.zeros( - ( - num_images, - self.vae.config.out_channels, - height, - width - ), - dtype = self.dtype - ) - def decode_loop(step, images_out): - # NOTE vae keeps channels last for encode, but rearranges to channels first for decode - im = self.vae.apply( - { 'params': params['vae'] }, - jnp.expand_dims(latents[step], axis = 0), - method = self.vae.decode - ).sample - images_out = images_out.at[step].set(im[0]) - return images_out - images_out = jax.lax.fori_loop(0, num_images, decode_loop, images_out) - images_out = ((images_out / 2 + 0.5) * 255).round().clip(0, 255).astype(jnp.uint8) - return images_out - - -@partial( - jax.pmap, - in_axes = ( # 0 -> split across batch dim, None -> duplicate - None, # 0 inference_class - 0, # 1 tokens - 0, # 2 neg_tokens - 0, # 3 hint - 0, # 4 mask - None, # 5 inference_steps - None, # 6 num_frames - None, # 7 height - None, # 8 width - None, # 9 cfg - None, # 10 cfg_image - 0, # 11 rng - 0, # 12 params - None, # 13 use_imagegen - None, # 14 scheduler_type - ), - static_broadcasted_argnums = ( # trigger recompilation on change - 0, # inference_class - 5, # inference_steps - 6, # num_frames
- 7, # height - 8, # width - 13, # use_imagegen - 14, # scheduler_type - ) -) -def _p_generate( - inference_class: InferenceUNetPseudo3D, - tokens, - neg_tokens, - hint, - mask, - inference_steps: int, - num_frames: int, - height: int, - width: int, - cfg: float, - cfg_image: float, - rng, - params, - use_imagegen: bool, - scheduler_type: str -): - return inference_class._generate( - tokens, - neg_tokens, - hint, - mask, - inference_steps, - num_frames, - height, - width, - cfg, - cfg_image, - rng, - params, - use_imagegen, - scheduler_type - ) - diff --git a/spaces/TencentARC/VLog/models/grit_src/grit/predictor.py b/spaces/TencentARC/VLog/models/grit_src/grit/predictor.py deleted file mode 100644 index 6c188ea2ab5fac232554d4eaaf2fb073670a70e4..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/grit/predictor.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Jialian Wu from https://github.com/facebookresearch/detectron2/blob/main/detectron2/utils/visualizer.py -import torch - -from detectron2.engine.defaults import DefaultPredictor -from detectron2.utils.visualizer import ColorMode, Visualizer - - -class Visualizer_GRiT(Visualizer): - def __init__(self, image, instance_mode=None): - super().__init__(image, instance_mode=instance_mode) - - def draw_instance_predictions(self, predictions): - boxes = predictions.pred_boxes if predictions.has("pred_boxes") else None - scores = predictions.scores if predictions.has("scores") else None - classes = predictions.pred_classes.tolist() if predictions.has("pred_classes") else None - object_description = predictions.pred_object_descriptions.data - # uncomment to output scores in visualized images - # object_description = [c + '|' + str(round(s.item(), 1)) for c, s in zip(object_description, scores)] - - if self._instance_mode == ColorMode.SEGMENTATION and self.metadata.get("thing_colors"): - colors = [ - self._jitter([x / 255 for x in self.metadata.thing_colors[c]]) for c in classes - ] - alpha = 0.8 - else: - colors = None - alpha = 0.5 - - if self._instance_mode == ColorMode.IMAGE_BW: - self.output.reset_image( - self._create_grayscale_image( - (predictions.pred_masks.any(dim=0) > 0).numpy() - if predictions.has("pred_masks") - else None - ) - ) - alpha = 0.3 - - self.overlay_instances( - masks=None, - boxes=boxes, - labels=object_description, - keypoints=None, - assigned_colors=colors, - alpha=alpha, - ) - return self.output - - -class VisualizationDemo(object): - def __init__(self, cfg, instance_mode=ColorMode.IMAGE): - self.cpu_device = torch.device("cpu") - self.instance_mode = instance_mode - - self.predictor = DefaultPredictor(cfg) - - def run_on_image(self, image): - predictions = self.predictor(image) - # Convert image from OpenCV BGR format to Matplotlib RGB format. 
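- # (reversing the last axis with [:, :, ::-1] swaps the B and R channels as a cheap view, no pixel copy)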
- image = image[:, :, ::-1] - visualizer = Visualizer_GRiT(image, instance_mode=self.instance_mode) - instances = predictions["instances"].to(self.cpu_device) - vis_output = visualizer.draw_instance_predictions(predictions=instances) - - return predictions, vis_output \ No newline at end of file diff --git a/spaces/Thanarit/GPT-Detection-Demo/ModelDriver.py b/spaces/Thanarit/GPT-Detection-Demo/ModelDriver.py deleted file mode 100644 index aed873a153c770dd96596a39dc03be35aede5c74..0000000000000000000000000000000000000000 --- a/spaces/Thanarit/GPT-Detection-Demo/ModelDriver.py +++ /dev/null @@ -1,105 +0,0 @@ -from transformers import RobertaTokenizer, RobertaForSequenceClassification, RobertaModel -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.utils.data import TensorDataset, DataLoader - - -device = torch.device("cpu") -class MLP(nn.Module): - def __init__(self, input_dim): - super(MLP, self).__init__() - self.fc1 = nn.Linear(input_dim, 256) - self.fc2 = nn.Linear(256, 2) - self.gelu = nn.GELU() - - def forward(self, x): - x = self.gelu(self.fc1(x)) - x = self.fc2(x) - return x -def extract_features(text): - - tokenizer = RobertaTokenizer.from_pretrained("roberta-base") - model = RobertaModel.from_pretrained("roberta-base").to(device) - tokenized_text = tokenizer.encode(text, truncation=True, max_length=512, return_tensors="pt") - outputs = model(tokenized_text) - last_hidden_states = outputs.last_hidden_state - TClassification = last_hidden_states[:, 0, :].squeeze().detach().numpy() - return TClassification - -def RobertaSentinelOpenGPTInference(input_text): - features = extract_features(input_text) - loaded_model = MLP(768).to(device) - loaded_model.load_state_dict(torch.load("SentinelCheckpoint/RobertaSentinelOpenGPT.pth", map_location=device)) - - # Define the tokenizer and model for feature extraction - with torch.no_grad(): - inputs = torch.tensor(features).to(device) - outputs = loaded_model(inputs.float()) - _, predicted = torch.max(outputs, 0) - - Probs = (F.softmax(outputs, dim=0).cpu().numpy()) - - return Probs - -def RobertaSentinelCSAbstractInference(input_text): - features = extract_features(input_text) - loaded_model = MLP(768).to(device) - loaded_model.load_state_dict(torch.load("SentinelCheckpoint/RobertaSentinelCSAbstract.pth", map_location=device)) - - # Define the tokenizer and model for feature extraction - with torch.no_grad(): - inputs = torch.tensor(features).to(device) - outputs = loaded_model(inputs.float()) - _, predicted = torch.max(outputs, 0) - - Probs = (F.softmax(outputs, dim=0).cpu().numpy()) - - return Probs - - -def RobertaClassifierOpenGPTInference(input_text): - tokenizer = RobertaTokenizer.from_pretrained("roberta-base") - model_path = "ClassifierCheckpoint/RobertaClassifierOpenGPT.pth" - model = RobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=2) - model.load_state_dict(torch.load(model_path, map_location=device), strict=False) - model = model.to(device) - model.eval() - - - tokenized_input = tokenizer(input_text, truncation=True, padding=True, max_length=512, return_tensors='pt') - input_ids = tokenized_input['input_ids'].to(device) - attention_mask = tokenized_input['attention_mask'].to(device) - - # Make a prediction - with torch.no_grad(): - outputs = model(input_ids, attention_mask=attention_mask) - logits = outputs.logits - Probs = F.softmax(logits, dim=1).cpu().numpy()[0] - - return Probs - - -def RobertaClassifierCSAbstractInference(input_text): - tokenizer = 
RobertaTokenizer.from_pretrained("roberta-base") - model_path = "ClassifierCheckpoint/RobertaClassifierCSAbstract.pth" - model = RobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=2) - model.load_state_dict(torch.load(model_path, map_location=device), strict=False) - model = model.to(device) - model.eval() - - - tokenized_input = tokenizer(input_text, truncation=True, padding=True, max_length=512, return_tensors='pt') - input_ids = tokenized_input['input_ids'].to(device) - attention_mask = tokenized_input['attention_mask'].to(device) - - # Make a prediction - with torch.no_grad(): - outputs = model(input_ids, attention_mask=attention_mask) - logits = outputs.logits - Probs = F.softmax(logits, dim=1).cpu().numpy()[0] - - return Probs - - - diff --git a/spaces/Thaweewat/ControlNet-Architecture/annotator/openpose/body.py b/spaces/Thaweewat/ControlNet-Architecture/annotator/openpose/body.py deleted file mode 100644 index 7c3cf7a388b4ac81004524e64125e383bdd455bd..0000000000000000000000000000000000000000 --- a/spaces/Thaweewat/ControlNet-Architecture/annotator/openpose/body.py +++ /dev/null @@ -1,219 +0,0 @@ -import cv2 -import numpy as np -import math -import time -from scipy.ndimage.filters import gaussian_filter -import matplotlib.pyplot as plt -import matplotlib -import torch -from torchvision import transforms - -from . import util -from .model import bodypose_model - -class Body(object): - def __init__(self, model_path): - self.model = bodypose_model() - if torch.cuda.is_available(): - self.model = self.model.cuda() - print('cuda') - model_dict = util.transfer(self.model, torch.load(model_path)) - self.model.load_state_dict(model_dict) - self.model.eval() - - def __call__(self, oriImg): - # scale_search = [0.5, 1.0, 1.5, 2.0] - scale_search = [0.5] - boxsize = 368 - stride = 8 - padValue = 128 - thre1 = 0.1 - thre2 = 0.05 - multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search] - heatmap_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 19)) - paf_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 38)) - - for m in range(len(multiplier)): - scale = multiplier[m] - imageToTest = cv2.resize(oriImg, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC) - imageToTest_padded, pad = util.padRightDownCorner(imageToTest, stride, padValue) - im = np.transpose(np.float32(imageToTest_padded[:, :, :, np.newaxis]), (3, 2, 0, 1)) / 256 - 0.5 - im = np.ascontiguousarray(im) - - data = torch.from_numpy(im).float() - if torch.cuda.is_available(): - data = data.cuda() - # data = data.permute([2, 0, 1]).unsqueeze(0).float() - with torch.no_grad(): - Mconv7_stage6_L1, Mconv7_stage6_L2 = self.model(data) - Mconv7_stage6_L1 = Mconv7_stage6_L1.cpu().numpy() - Mconv7_stage6_L2 = Mconv7_stage6_L2.cpu().numpy() - - # extract outputs, resize, and remove padding - # heatmap = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[1]].data), (1, 2, 0)) # output 1 is heatmaps - heatmap = np.transpose(np.squeeze(Mconv7_stage6_L2), (1, 2, 0)) # output 1 is heatmaps - heatmap = cv2.resize(heatmap, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC) - heatmap = heatmap[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :] - heatmap = cv2.resize(heatmap, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC) - - # paf = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[0]].data), (1, 2, 0)) # output 0 is PAFs - paf = np.transpose(np.squeeze(Mconv7_stage6_L1), (1, 2, 0)) # output 0 is PAFs - paf = cv2.resize(paf, (0, 0), fx=stride, 
fy=stride, interpolation=cv2.INTER_CUBIC) - paf = paf[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :] - paf = cv2.resize(paf, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC) - - heatmap_avg = heatmap_avg + heatmap / len(multiplier) - paf_avg = paf_avg + paf / len(multiplier) - - all_peaks = [] - peak_counter = 0 - - for part in range(18): - map_ori = heatmap_avg[:, :, part] - one_heatmap = gaussian_filter(map_ori, sigma=3) - - map_left = np.zeros(one_heatmap.shape) - map_left[1:, :] = one_heatmap[:-1, :] - map_right = np.zeros(one_heatmap.shape) - map_right[:-1, :] = one_heatmap[1:, :] - map_up = np.zeros(one_heatmap.shape) - map_up[:, 1:] = one_heatmap[:, :-1] - map_down = np.zeros(one_heatmap.shape) - map_down[:, :-1] = one_heatmap[:, 1:] - - peaks_binary = np.logical_and.reduce( - (one_heatmap >= map_left, one_heatmap >= map_right, one_heatmap >= map_up, one_heatmap >= map_down, one_heatmap > thre1)) - peaks = list(zip(np.nonzero(peaks_binary)[1], np.nonzero(peaks_binary)[0])) # note reverse - peaks_with_score = [x + (map_ori[x[1], x[0]],) for x in peaks] - peak_id = range(peak_counter, peak_counter + len(peaks)) - peaks_with_score_and_id = [peaks_with_score[i] + (peak_id[i],) for i in range(len(peak_id))] - - all_peaks.append(peaks_with_score_and_id) - peak_counter += len(peaks) - - # find connection in the specified sequence, center 29 is in the position 15 - limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], \ - [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17], \ - [1, 16], [16, 18], [3, 17], [6, 18]] - # the middle joints heatmap correspondence - mapIdx = [[31, 32], [39, 40], [33, 34], [35, 36], [41, 42], [43, 44], [19, 20], [21, 22], \ - [23, 24], [25, 26], [27, 28], [29, 30], [47, 48], [49, 50], [53, 54], [51, 52], \ - [55, 56], [37, 38], [45, 46]] - - connection_all = [] - special_k = [] - mid_num = 10 - - for k in range(len(mapIdx)): - score_mid = paf_avg[:, :, [x - 19 for x in mapIdx[k]]] - candA = all_peaks[limbSeq[k][0] - 1] - candB = all_peaks[limbSeq[k][1] - 1] - nA = len(candA) - nB = len(candB) - indexA, indexB = limbSeq[k] - if (nA != 0 and nB != 0): - connection_candidate = [] - for i in range(nA): - for j in range(nB): - vec = np.subtract(candB[j][:2], candA[i][:2]) - norm = math.sqrt(vec[0] * vec[0] + vec[1] * vec[1]) - norm = max(0.001, norm) - vec = np.divide(vec, norm) - - startend = list(zip(np.linspace(candA[i][0], candB[j][0], num=mid_num), \ - np.linspace(candA[i][1], candB[j][1], num=mid_num))) - - vec_x = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 0] \ - for I in range(len(startend))]) - vec_y = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 1] \ - for I in range(len(startend))]) - - score_midpts = np.multiply(vec_x, vec[0]) + np.multiply(vec_y, vec[1]) - score_with_dist_prior = sum(score_midpts) / len(score_midpts) + min( - 0.5 * oriImg.shape[0] / norm - 1, 0) - criterion1 = len(np.nonzero(score_midpts > thre2)[0]) > 0.8 * len(score_midpts) - criterion2 = score_with_dist_prior > 0 - if criterion1 and criterion2: - connection_candidate.append( - [i, j, score_with_dist_prior, score_with_dist_prior + candA[i][2] + candB[j][2]]) - - connection_candidate = sorted(connection_candidate, key=lambda x: x[2], reverse=True) - connection = np.zeros((0, 5)) - for c in range(len(connection_candidate)): - i, j, s = connection_candidate[c][0:3] - if (i not in connection[:, 3] and j not in connection[:, 4]): - connection = 
np.vstack([connection, [candA[i][3], candB[j][3], s, i, j]]) - if (len(connection) >= min(nA, nB)): - break - - connection_all.append(connection) - else: - special_k.append(k) - connection_all.append([]) - - # last number in each row is the total parts number of that person - # the second last number in each row is the score of the overall configuration - subset = -1 * np.ones((0, 20)) - candidate = np.array([item for sublist in all_peaks for item in sublist]) - - for k in range(len(mapIdx)): - if k not in special_k: - partAs = connection_all[k][:, 0] - partBs = connection_all[k][:, 1] - indexA, indexB = np.array(limbSeq[k]) - 1 - - for i in range(len(connection_all[k])): # = 1:size(temp,1) - found = 0 - subset_idx = [-1, -1] - for j in range(len(subset)): # 1:size(subset,1): - if subset[j][indexA] == partAs[i] or subset[j][indexB] == partBs[i]: - subset_idx[found] = j - found += 1 - - if found == 1: - j = subset_idx[0] - if subset[j][indexB] != partBs[i]: - subset[j][indexB] = partBs[i] - subset[j][-1] += 1 - subset[j][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2] - elif found == 2: # if found 2 and disjoint, merge them - j1, j2 = subset_idx - membership = ((subset[j1] >= 0).astype(int) + (subset[j2] >= 0).astype(int))[:-2] - if len(np.nonzero(membership == 2)[0]) == 0: # merge - subset[j1][:-2] += (subset[j2][:-2] + 1) - subset[j1][-2:] += subset[j2][-2:] - subset[j1][-2] += connection_all[k][i][2] - subset = np.delete(subset, j2, 0) - else: # as like found == 1 - subset[j1][indexB] = partBs[i] - subset[j1][-1] += 1 - subset[j1][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2] - - # if find no partA in the subset, create a new subset - elif not found and k < 17: - row = -1 * np.ones(20) - row[indexA] = partAs[i] - row[indexB] = partBs[i] - row[-1] = 2 - row[-2] = sum(candidate[connection_all[k][i, :2].astype(int), 2]) + connection_all[k][i][2] - subset = np.vstack([subset, row]) - # delete some rows of subset which has few parts occur - deleteIdx = [] - for i in range(len(subset)): - if subset[i][-1] < 4 or subset[i][-2] / subset[i][-1] < 0.4: - deleteIdx.append(i) - subset = np.delete(subset, deleteIdx, axis=0) - - # subset: n*20 array, 0-17 is the index in candidate, 18 is the total score, 19 is the total parts - # candidate: x, y, score, id - return candidate, subset - -if __name__ == "__main__": - body_estimation = Body('../model/body_pose_model.pth') - - test_image = '../images/ski.jpg' - oriImg = cv2.imread(test_image) # B,G,R order - candidate, subset = body_estimation(oriImg) - canvas = util.draw_bodypose(oriImg, candidate, subset) - plt.imshow(canvas[:, :, [2, 1, 0]]) - plt.show() diff --git a/spaces/UmairMirza/Face-Attendance/README.md b/spaces/UmairMirza/Face-Attendance/README.md deleted file mode 100644 index d188f38b3b755592a899c80b9d2231e68e847a63..0000000000000000000000000000000000000000 --- a/spaces/UmairMirza/Face-Attendance/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Face Attendance -emoji: 📈 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/VincentZB/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_hed.py b/spaces/VincentZB/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_hed.py deleted file mode 100644 index 
590cb5db9213b22d00ce0e650a3e632725213a67..0000000000000000000000000000000000000000 --- a/spaces/VincentZB/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_hed.py +++ /dev/null @@ -1,223 +0,0 @@ -import gradio as gr -import numpy as np -import torch -from controlnet_aux import HEDdetector -from diffusers import ControlNetModel -from PIL import Image - -from diffusion_webui.diffusion_models.controlnet.controlnet_inpaint.pipeline_stable_diffusion_controlnet_inpaint import ( - StableDiffusionControlNetInpaintPipeline, -) -from diffusion_webui.utils.model_list import ( - controlnet_hed_model_list, - stable_inpiant_model_list, -) -from diffusion_webui.utils.scheduler_list import ( - SCHEDULER_LIST, - get_scheduler_list, -) - -# https://github.com/mikonvergence/ControlNetInpaint - - -class StableDiffusionControlNetInpaintHedGenerator: - def __init__(self): - self.pipe = None - - def load_model(self, stable_model_path, controlnet_model_path, scheduler): - if self.pipe is None: - controlnet = ControlNetModel.from_pretrained( - controlnet_model_path, torch_dtype=torch.float16 - ) - self.pipe = ( - StableDiffusionControlNetInpaintPipeline.from_pretrained( - pretrained_model_name_or_path=stable_model_path, - controlnet=controlnet, - safety_checker=None, - torch_dtype=torch.float16, - ) - ) - - self.pipe = get_scheduler_list(pipe=self.pipe, scheduler=scheduler) - self.pipe.to("cuda") - self.pipe.enable_xformers_memory_efficient_attention() - - return self.pipe - - def load_image(self, image_path): - image = np.array(image_path) - image = Image.fromarray(image) - return image - - def controlnet_inpaint_hed(self, image_path: str): - hed = HEDdetector.from_pretrained("lllyasviel/ControlNet") - image = image_path["image"].convert("RGB").resize((512, 512)) - image = np.array(image) - image = hed(image) - - return image - - def generate_image( - self, - image_path: str, - stable_model_path: str, - controlnet_model_path: str, - prompt: str, - negative_prompt: str, - num_images_per_prompt: int, - guidance_scale: int, - num_inference_step: int, - controlnet_conditioning_scale: int, - scheduler: str, - seed_generator: int, - ): - normal_image = image_path["image"].convert("RGB").resize((512, 512)) - mask_image = image_path["mask"].convert("RGB").resize((512, 512)) - - normal_image = self.load_image(image_path=normal_image) - mask_image = self.load_image(image_path=mask_image) - - control_image = self.controlnet_inpaint_hed(image_path=image_path) - - pipe = self.load_model( - stable_model_path=stable_model_path, - controlnet_model_path=controlnet_model_path, - scheduler=scheduler, - ) - - if seed_generator == 0: - random_seed = torch.randint(0, 1000000, (1,)) - generator = torch.manual_seed(random_seed) - else: - generator = torch.manual_seed(seed_generator) - - output = pipe( - prompt=prompt, - image=normal_image, - mask_image=mask_image, - control_image=control_image, - negative_prompt=negative_prompt, - num_images_per_prompt=num_images_per_prompt, - num_inference_steps=num_inference_step, - guidance_scale=guidance_scale, - controlnet_conditioning_scale=controlnet_conditioning_scale, - generator=generator, - ).images - - return output - - def app(): - with gr.Blocks(): - with gr.Row(): - with gr.Column(): - controlnet_hed_inpaint_image_file = gr.Image( - source="upload", - tool="sketch", - elem_id="image_upload", - type="pil", - label="Upload", - ) - - controlnet_hed_inpaint_prompt = gr.Textbox( - lines=1, placeholder="Prompt", show_label=False 
- ) - - controlnet_hed_inpaint_negative_prompt = gr.Textbox( - lines=1, - show_label=False, - placeholder="Negative Prompt", - ) - with gr.Row(): - with gr.Column(): - controlnet_hed_inpaint_stable_model_id = ( - gr.Dropdown( - choices=stable_inpiant_model_list, - value=stable_inpiant_model_list[0], - label="Stable Model Id", - ) - ) - - controlnet_hed_inpaint_guidance_scale = gr.Slider( - minimum=0.1, - maximum=15, - step=0.1, - value=7.5, - label="Guidance Scale", - ) - - controlnet_hed_inpaint_num_inference_step = ( - gr.Slider( - minimum=1, - maximum=100, - step=1, - value=50, - label="Num Inference Step", - ) - ) - controlnet_hed_inpaint_num_images_per_prompt = ( - gr.Slider( - minimum=1, - maximum=10, - step=1, - value=1, - label="Number Of Images", - ) - ) - with gr.Row(): - with gr.Column(): - controlnet_hed_inpaint_model_id = gr.Dropdown( - choices=controlnet_hed_model_list, - value=controlnet_hed_model_list[0], - label="Controlnet Model Id", - ) - controlnet_hed_inpaint_scheduler = gr.Dropdown( - choices=SCHEDULER_LIST, - value=SCHEDULER_LIST[0], - label="Scheduler", - ) - controlnet_hed_inpaint_controlnet_conditioning_scale = gr.Slider( - minimum=0.1, - maximum=1.0, - step=0.1, - value=0.5, - label="Controlnet Conditioning Scale", - ) - - controlnet_hed_inpaint_seed_generator = ( - gr.Slider( - minimum=0, - maximum=1000000, - step=1, - value=0, - label="Seed Generator", - ) - ) - - controlnet_hed_inpaint_predict = gr.Button( - value="Generator" - ) - - with gr.Column(): - output_image = gr.Gallery( - label="Generated images", - show_label=False, - elem_id="gallery", - ).style(grid=(1, 2)) - - controlnet_hed_inpaint_predict.click( - fn=StableDiffusionControlNetInpaintHedGenerator().generate_image, - inputs=[ - controlnet_hed_inpaint_image_file, - controlnet_hed_inpaint_stable_model_id, - controlnet_hed_inpaint_model_id, - controlnet_hed_inpaint_prompt, - controlnet_hed_inpaint_negative_prompt, - controlnet_hed_inpaint_num_images_per_prompt, - controlnet_hed_inpaint_guidance_scale, - controlnet_hed_inpaint_num_inference_step, - controlnet_hed_inpaint_controlnet_conditioning_scale, - controlnet_hed_inpaint_scheduler, - controlnet_hed_inpaint_seed_generator, - ], - outputs=[output_image], - ) diff --git a/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/commons.py b/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return 
-torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, 
norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/util/time_counter.py b/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/util/time_counter.py deleted file mode 100644 index 0aedb2e4d61bfbe7571dca9d50053f0fedaa1359..0000000000000000000000000000000000000000 --- a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/util/time_counter.py +++ /dev/null @@ -1,62 +0,0 @@ -import json -import time - - -class TimeCounter: - def __init__(self) -> None: - pass - - def clear(self): - self.timedict = {} - self.basetime = time.perf_counter() - - def timeit(self, name): - nowtime = time.perf_counter() - self.basetime - self.timedict[name] = nowtime - self.basetime = time.perf_counter() - - -class TimeHolder: - def __init__(self) -> None: - self.timedict = {} - - def update(self, _timedict: dict): - for k, v in _timedict.items(): - if k not in self.timedict: - self.timedict[k] = AverageMeter(name=k, val_only=True) - self.timedict[k].update(val=v) - - def final_res(self): - return {k: v.avg for k, v in self.timedict.items()} - - def __str__(self): - return json.dumps(self.final_res(), indent=2) - - -class AverageMeter(object): - """Computes and stores the average and current value""" - - def __init__(self, name, fmt=":f", val_only=False): - self.name = name - self.fmt = fmt - self.val_only = val_only - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - def __str__(self): - if self.val_only: - fmtstr = "{name} {val" + self.fmt + "}" - else: - fmtstr = "{name} {val" + self.fmt + "} ({avg" + self.fmt + "})" - return fmtstr.format(**self.__dict__)
r=n.updateQueue;if(r=r!==null?r.lastEffect:null,r!==null){var l=r=r.next;do{if((l.tag&e)===e){var o=l.destroy;l.destroy=void 0,o!==void 0&&Ro(n,t,o)}l=l.next}while(l!==r)}}function al(e,n){if(n=n.updateQueue,n=n!==null?n.lastEffect:null,n!==null){var t=n=n.next;do{if((t.tag&e)===e){var r=t.create;t.destroy=r()}t=t.next}while(t!==n)}}function jo(e){var n=e.ref;if(n!==null){var t=e.stateNode;switch(e.tag){case 5:e=t;break;default:e=t}typeof n=="function"?n(e):n.current=e}}function Aa(e){var n=e.alternate;n!==null&&(e.alternate=null,Aa(n)),e.child=null,e.deletions=null,e.sibling=null,e.tag===5&&(n=e.stateNode,n!==null&&(delete n[Fe],delete n[Vt],delete n[go],delete n[Gf],delete n[Zf])),e.stateNode=null,e.return=null,e.dependencies=null,e.memoizedProps=null,e.memoizedState=null,e.pendingProps=null,e.stateNode=null,e.updateQueue=null}function Va(e){return e.tag===5||e.tag===3||e.tag===4}function Oi(e){e:for(;;){for(;e.sibling===null;){if(e.return===null||Va(e.return))return null;e=e.return}for(e.sibling.return=e.return,e=e.sibling;e.tag!==5&&e.tag!==6&&e.tag!==18;){if(e.flags&2||e.child===null||e.tag===4)continue e;e.child.return=e,e=e.child}if(!(e.flags&2))return e.stateNode}}function Oo(e,n,t){var r=e.tag;if(r===5||r===6)e=e.stateNode,n?t.nodeType===8?t.parentNode.insertBefore(e,n):t.insertBefore(e,n):(t.nodeType===8?(n=t.parentNode,n.insertBefore(e,t)):(n=t,n.appendChild(e)),t=t._reactRootContainer,t!=null||n.onclick!==null||(n.onclick=Ur));else if(r!==4&&(e=e.child,e!==null))for(Oo(e,n,t),e=e.sibling;e!==null;)Oo(e,n,t),e=e.sibling}function Mo(e,n,t){var r=e.tag;if(r===5||r===6)e=e.stateNode,n?t.insertBefore(e,n):t.appendChild(e);else if(r!==4&&(e=e.child,e!==null))for(Mo(e,n,t),e=e.sibling;e!==null;)Mo(e,n,t),e=e.sibling}var q=null,Le=!1;function Ze(e,n,t){for(t=t.child;t!==null;)Ba(e,n,t),t=t.sibling}function Ba(e,n,t){if(Ue&&typeof Ue.onCommitFiberUnmount=="function")try{Ue.onCommitFiberUnmount(nl,t)}catch{}switch(t.tag){case 5:re||Wn(t,n);case 6:var r=q,l=Le;q=null,Ze(e,n,t),q=r,Le=l,q!==null&&(Le?(e=q,t=t.stateNode,e.nodeType===8?e.parentNode.removeChild(t):e.removeChild(t)):q.removeChild(t.stateNode));break;case 18:q!==null&&(Le?(e=q,t=t.stateNode,e.nodeType===8?Dl(e.parentNode,t):e.nodeType===1&&Dl(e,t),It(e)):Dl(q,t.stateNode));break;case 4:r=q,l=Le,q=t.stateNode.containerInfo,Le=!0,Ze(e,n,t),q=r,Le=l;break;case 0:case 11:case 14:case 15:if(!re&&(r=t.updateQueue,r!==null&&(r=r.lastEffect,r!==null))){l=r=r.next;do{var o=l,u=o.destroy;o=o.tag,u!==void 0&&(o&2||o&4)&&Ro(t,n,u),l=l.next}while(l!==r)}Ze(e,n,t);break;case 1:if(!re&&(Wn(t,n),r=t.stateNode,typeof r.componentWillUnmount=="function"))try{r.props=t.memoizedProps,r.state=t.memoizedState,r.componentWillUnmount()}catch(i){B(t,n,i)}Ze(e,n,t);break;case 21:Ze(e,n,t);break;case 22:t.mode&1?(re=(r=re)||t.memoizedState!==null,Ze(e,n,t),re=r):Ze(e,n,t);break;default:Ze(e,n,t)}}function Mi(e){var n=e.updateQueue;if(n!==null){e.updateQueue=null;var t=e.stateNode;t===null&&(t=e.stateNode=new dd),n.forEach(function(r){var l=xd.bind(null,e,r);t.has(r)||(t.add(r),r.then(l,l))})}}function ze(e,n){var t=n.deletions;if(t!==null)for(var r=0;rl&&(l=u),r&=~o}if(r=l,r=Q()-r,r=(120>r?120:480>r?480:1080>r?1080:1920>r?1920:3e3>r?3e3:4320>r?4320:1960*hd(r/1960))-r,10e?16:e,tn===null)var r=!1;else{if(e=tn,tn=null,Jr=0,j&6)throw Error(y(331));var l=j;for(j|=4,S=e.current;S!==null;){var o=S,u=o.child;if(S.flags&16){var i=o.deletions;if(i!==null){for(var s=0;sQ()-Cu?Cn(e,0):Eu|=t),pe(e,n)}function 
Za(e,n){n===0&&(e.mode&1?(n=ur,ur<<=1,!(ur&130023424)&&(ur=4194304)):n=1);var t=ue();e=Ye(e,n),e!==null&&(Gt(e,n,t),pe(e,t))}function Sd(e){var n=e.memoizedState,t=0;n!==null&&(t=n.retryLane),Za(e,t)}function xd(e,n){var t=0;switch(e.tag){case 13:var r=e.stateNode,l=e.memoizedState;l!==null&&(t=l.retryLane);break;case 19:r=e.stateNode;break;default:throw Error(y(314))}r!==null&&r.delete(n),Za(e,t)}var Ja;Ja=function(e,n,t){if(e!==null)if(e.memoizedProps!==n.pendingProps||fe.current)ce=!0;else{if(!(e.lanes&t)&&!(n.flags&128))return ce=!1,ad(e,n,t);ce=!!(e.flags&131072)}else ce=!1,U&&n.flags&1048576&&ea(n,Br,n.index);switch(n.lanes=0,n.tag){case 2:var r=n.type;Nr(e,n),e=n.pendingProps;var l=bn(n,le.current);Zn(n,t),l=yu(null,n,r,e,l,t);var o=gu();return n.flags|=1,typeof l=="object"&&l!==null&&typeof l.render=="function"&&l.$$typeof===void 0?(n.tag=1,n.memoizedState=null,n.updateQueue=null,de(r)?(o=!0,Ar(n)):o=!1,n.memoizedState=l.state!==null&&l.state!==void 0?l.state:null,du(n),l.updater=il,n.stateNode=l,l._reactInternals=n,Co(n,r,e,t),n=Po(null,n,r,!0,o,t)):(n.tag=0,U&&o&&ou(n),oe(null,n,l,t),n=n.child),n;case 16:r=n.elementType;e:{switch(Nr(e,n),e=n.pendingProps,l=r._init,r=l(r._payload),n.type=r,l=n.tag=Cd(r),e=Te(r,e),l){case 0:n=No(null,n,r,e,t);break e;case 1:n=Ti(null,n,r,e,t);break e;case 11:n=Pi(null,n,r,e,t);break e;case 14:n=zi(null,n,r,Te(r.type,e),t);break e}throw Error(y(306,r,""))}return n;case 0:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Te(r,l),No(e,n,r,l,t);case 1:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Te(r,l),Ti(e,n,r,l,t);case 3:e:{if(Ma(n),e===null)throw Error(y(387));r=n.pendingProps,o=n.memoizedState,l=o.element,la(e,n),Qr(n,r,null,t);var u=n.memoizedState;if(r=u.element,o.isDehydrated)if(o={element:r,isDehydrated:!1,cache:u.cache,pendingSuspenseBoundaries:u.pendingSuspenseBoundaries,transitions:u.transitions},n.updateQueue.baseState=o,n.memoizedState=o,n.flags&256){l=rt(Error(y(423)),n),n=Li(e,n,r,t,l);break e}else if(r!==l){l=rt(Error(y(424)),n),n=Li(e,n,r,t,l);break e}else for(ve=un(n.stateNode.containerInfo.firstChild),ye=n,U=!0,Re=null,t=sa(n,null,r,t),n.child=t;t;)t.flags=t.flags&-3|4096,t=t.sibling;else{if(et(),r===l){n=Xe(e,n,t);break e}oe(e,n,r,t)}n=n.child}return n;case 5:return aa(n),e===null&&So(n),r=n.type,l=n.pendingProps,o=e!==null?e.memoizedProps:null,u=l.children,vo(r,l)?u=null:o!==null&&vo(r,o)&&(n.flags|=32),Oa(e,n),oe(e,n,u,t),n.child;case 6:return e===null&&So(n),null;case 13:return Da(e,n,t);case 4:return pu(n,n.stateNode.containerInfo),r=n.pendingProps,e===null?n.child=nt(n,null,r,t):oe(e,n,r,t),n.child;case 11:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Te(r,l),Pi(e,n,r,l,t);case 7:return oe(e,n,n.pendingProps,t),n.child;case 8:return oe(e,n,n.pendingProps.children,t),n.child;case 12:return oe(e,n,n.pendingProps.children,t),n.child;case 10:e:{if(r=n.type._context,l=n.pendingProps,o=n.memoizedProps,u=l.value,D(Hr,r._currentValue),r._currentValue=u,o!==null)if(Me(o.value,u)){if(o.children===l.children&&!fe.current){n=Xe(e,n,t);break e}}else for(o=n.child,o!==null&&(o.return=n);o!==null;){var i=o.dependencies;if(i!==null){u=o.child;for(var s=i.firstContext;s!==null;){if(s.context===r){if(o.tag===1){s=We(-1,t&-t),s.tag=2;var c=o.updateQueue;if(c!==null){c=c.shared;var h=c.pending;h===null?s.next=s:(s.next=h.next,h.next=s),c.pending=s}}o.lanes|=t,s=o.alternate,s!==null&&(s.lanes|=t),xo(o.return,t,n),i.lanes|=t;break}s=s.next}}else if(o.tag===10)u=o.type===n.type?null:o.child;else 
if(o.tag===18){if(u=o.return,u===null)throw Error(y(341));u.lanes|=t,i=u.alternate,i!==null&&(i.lanes|=t),xo(u,t,n),u=o.sibling}else u=o.child;if(u!==null)u.return=o;else for(u=o;u!==null;){if(u===n){u=null;break}if(o=u.sibling,o!==null){o.return=u.return,u=o;break}u=u.return}o=u}oe(e,n,l.children,t),n=n.child}return n;case 9:return l=n.type,r=n.pendingProps.children,Zn(n,t),l=_e(l),r=r(l),n.flags|=1,oe(e,n,r,t),n.child;case 14:return r=n.type,l=Te(r,n.pendingProps),l=Te(r.type,l),zi(e,n,r,l,t);case 15:return Ra(e,n,n.type,n.pendingProps,t);case 17:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Te(r,l),Nr(e,n),n.tag=1,de(r)?(e=!0,Ar(n)):e=!1,Zn(n,t),ua(n,r,l),Co(n,r,l,t),Po(null,n,r,!0,e,t);case 19:return Ia(e,n,t);case 22:return ja(e,n,t)}throw Error(y(156,n.tag))};function qa(e,n){return Cs(e,n)}function Ed(e,n,t,r){this.tag=e,this.key=t,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=n,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=r,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function Ee(e,n,t,r){return new Ed(e,n,t,r)}function zu(e){return e=e.prototype,!(!e||!e.isReactComponent)}function Cd(e){if(typeof e=="function")return zu(e)?1:0;if(e!=null){if(e=e.$$typeof,e===Yo)return 11;if(e===Xo)return 14}return 2}function fn(e,n){var t=e.alternate;return t===null?(t=Ee(e.tag,n,e.key,e.mode),t.elementType=e.elementType,t.type=e.type,t.stateNode=e.stateNode,t.alternate=e,e.alternate=t):(t.pendingProps=n,t.type=e.type,t.flags=0,t.subtreeFlags=0,t.deletions=null),t.flags=e.flags&14680064,t.childLanes=e.childLanes,t.lanes=e.lanes,t.child=e.child,t.memoizedProps=e.memoizedProps,t.memoizedState=e.memoizedState,t.updateQueue=e.updateQueue,n=e.dependencies,t.dependencies=n===null?null:{lanes:n.lanes,firstContext:n.firstContext},t.sibling=e.sibling,t.index=e.index,t.ref=e.ref,t}function Tr(e,n,t,r,l,o){var u=2;if(r=e,typeof e=="function")zu(e)&&(u=1);else if(typeof e=="string")u=5;else e:switch(e){case Dn:return _n(t.children,l,o,n);case Ko:u=8,l|=8;break;case Yl:return e=Ee(12,t,n,l|2),e.elementType=Yl,e.lanes=o,e;case Xl:return e=Ee(13,t,n,l),e.elementType=Xl,e.lanes=o,e;case Gl:return e=Ee(19,t,n,l),e.elementType=Gl,e.lanes=o,e;case is:return fl(t,l,o,n);default:if(typeof e=="object"&&e!==null)switch(e.$$typeof){case os:u=10;break e;case us:u=9;break e;case Yo:u=11;break e;case Xo:u=14;break e;case Je:u=16,r=null;break e}throw Error(y(130,e==null?e:typeof e,""))}return n=Ee(u,t,n,l),n.elementType=e,n.type=r,n.lanes=o,n}function _n(e,n,t,r){return e=Ee(7,e,r,n),e.lanes=t,e}function fl(e,n,t,r){return e=Ee(22,e,r,n),e.elementType=is,e.lanes=t,e.stateNode={isHidden:!1},e}function Hl(e,n,t){return e=Ee(6,e,null,n),e.lanes=t,e}function Wl(e,n,t){return n=Ee(4,e.children!==null?e.children:[],e.key,n),n.lanes=t,n.stateNode={containerInfo:e.containerInfo,pendingChildren:null,implementation:e.implementation},n}function 
_d(e,n,t,r,l){this.tag=n,this.containerInfo=e,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=Cl(0),this.expirationTimes=Cl(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=Cl(0),this.identifierPrefix=r,this.onRecoverableError=l,this.mutableSourceEagerHydrationData=null}function Tu(e,n,t,r,l,o,u,i,s){return e=new _d(e,n,t,i,s),n===1?(n=1,o===!0&&(n|=8)):n=0,o=Ee(3,null,null,n),e.current=o,o.stateNode=e,o.memoizedState={element:r,isDehydrated:t,cache:null,transitions:null,pendingSuspenseBoundaries:null},du(o),e}function Nd(e,n,t){var r=3"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(tc)}catch(e){console.error(e)}}tc(),es.exports=we;var Rd=es.exports,Bi=Rd;Ql.createRoot=Bi.createRoot,Ql.hydrateRoot=Bi.hydrateRoot;const Hi=["bg-purple-300","bg-green-300","bg-yellow-300","bg-red-300","bg-blue-300"];function jd({text:e,position:n}){return e!==` -`?R.jsx("span",{className:`leading-5 inline-block ${Hi[n%Hi.length]}`,children:e}):R.jsx("br",{})}function Od(){const[e,n]=me.useState(""),[t,r]=me.useState([]),[l,o]=me.useState([]),[u,i]=me.useState("text"),[s,c]=me.useState("Xenova/gpt-4"),h=me.useRef(null),m=me.useRef(null);me.useEffect(()=>{m.current||(m.current=new Worker(new URL("/assets/worker-6c002022.js",self.location),{type:"module"}));const g=w=>{r(w.data.token_ids),o(w.data.decoded)};return m.current.addEventListener("message",g),()=>m.current.removeEventListener("message",g)},[]);const p=me.useCallback(g=>{const w=s,M=g.target.value;M.length>1e4&&(i(null),console.log("User most likely pasted in a large body of text (> 10k chars), so we hide the output (until specifically requested by the user).")),n(M),m.current.postMessage({model_id:w,text:M})},[s]),k=me.useCallback(g=>{const w=g.target.value;c(w),m.current.postMessage({model_id:w,text:e})},[e]);return R.jsxs("div",{className:"w-full max-w-[720px] flex flex-col gap-4 items-center",children:[R.jsxs("div",{children:[R.jsx("h1",{className:"text-5xl font-bold mb-2",children:"The Tokenizer Playground"}),R.jsxs("h2",{className:"text-lg font-normal",children:["Experiment with different tokenizers (running ",R.jsx("a",{className:"text-gray-900 underline",href:"https://github.com/xenova/transformers.js",children:"locally"})," in your browser)."]})]}),R.jsx("div",{children:R.jsxs("select",{value:s,onChange:k,className:"bg-gray-50 border border-gray-300 text-gray-900 text-sm rounded-lg focus:ring-blue-500 focus:border-blue-500 block w-full p-2",children:[R.jsx("option",{value:"Xenova/gpt-4",children:"gpt-4 / gpt-3.5-turbo / text-embedding-ada-002"}),R.jsx("option",{value:"Xenova/text-davinci-003",children:"text-davinci-003 / text-davinci-002"}),R.jsx("option",{value:"Xenova/gpt-3",children:"gpt-3"})]})}),R.jsx("textarea",{onChange:p,rows:"8",className:"font-mono text-lg block w-full p-2.5 text-gray-900 bg-gray-50 rounded-lg border border-gray-200",placeholder:"Enter some text"}),R.jsxs("div",{className:"flex justify-center gap-5",children:[R.jsxs("div",{className:"flex flex-col",children:[R.jsx("h2",{className:"font-semibold uppercase leading-4",children:"Tokens"}),R.jsx("h3",{className:"font-semibold text-3xl",children:t.length.toLocaleString()})]}),R.jsxs("div",{className:"flex flex-col",children:[R.jsx("h2",{className:"font-semibold 
uppercase leading-4",children:"Characters"}),R.jsx("h3",{className:"font-semibold text-3xl",children:e.length.toLocaleString()})]})]}),R.jsx("div",{ref:h,className:"font-mono text-lg p-2.5 w-full bg-gray-100 rounded-lg border border-gray-200 whitespace-pre-wrap text-left h-[200px] overflow-y-auto",children:u==="text"?l.map((g,w)=>R.jsx(jd,{text:g,position:w},w)):u==="token_ids"?`[${t.join(", ")}]`:null}),R.jsxs("div",{className:"flex items-center gap-2 self-end",children:[R.jsxs("div",{className:"flex items-center",children:[R.jsx("input",{checked:u==="text",onChange:()=>i("text"),id:"output-radio-1",type:"radio",value:"",name:"output-radio",className:"w-4 h-4 text-blue-600 bg-gray-100 border-gray-300 focus:ring-blue-500"}),R.jsx("label",{htmlFor:"output-radio-1",className:"ml-1 text-sm font-medium text-gray-900 dark:text-gray-300",children:"Text"})]}),R.jsxs("div",{className:"flex items-center",children:[R.jsx("input",{checked:u==="token_ids",onChange:()=>i("token_ids"),id:"output-radio-2",type:"radio",value:"",name:"output-radio",className:"w-4 h-4 text-blue-600 bg-gray-100 border-gray-300 focus:ring-blue-500"}),R.jsx("label",{htmlFor:"output-radio-2",className:"ml-1 text-sm font-medium text-gray-900 dark:text-gray-300",children:"Token IDs"})]}),R.jsxs("div",{className:"flex items-center",children:[R.jsx("input",{checked:u===null,onChange:()=>i(null),id:"output-radio-3",type:"radio",value:"",name:"output-radio",className:"w-4 h-4 text-blue-600 bg-gray-100 border-gray-300 focus:ring-blue-500"}),R.jsx("label",{htmlFor:"output-radio-3",className:"ml-1 text-sm font-medium text-gray-900 dark:text-gray-300",children:"Hide"})]})]})]})}Ql.createRoot(document.getElementById("root")).render(R.jsx(kc.StrictMode,{children:R.jsx(Od,{})})); diff --git a/spaces/Xeraphinite/Coursera-GPT/app.py b/spaces/Xeraphinite/Coursera-GPT/app.py deleted file mode 100644 index 281860209371e5f9460f6bdc9a762ed4d978d6f6..0000000000000000000000000000000000000000 --- a/spaces/Xeraphinite/Coursera-GPT/app.py +++ /dev/null @@ -1,78 +0,0 @@ -import gradio as gr -import openai -import os - -def predict(education_level, annual_income, employment_status, course_name, openai_api_key): - # 0. preparation - os.environ['OPENAI_API_KEY'] = openai_api_key - openai.api_key = openai_api_key - - overall_prompt = '''你将作为一名专业的申请人助手,在世界上最大的 MOOC 平台 Coursera 上完成一份 Financial Aid 相关的任务,之后将会给你相关的课程信息,任务如下:''' - role = f'''个人信息: a {education_level} with {annual_income} annual income and {employment_status}.''' - - # 1. Reasons for aid - task = '请你完成一份 Financial Aid 申请表,字数在 150-300 words 之间,内容需要包括,请注意,输出仅仅包括 Reasons for Financial Aid ' \ - 'Application 的内容即可,前后均不需要添加任何东西(包括 "Reasons for Financial Aid Application:"),也不需要输出任何解释性语句.' - - response = openai.ChatCompletion.create( - model='gpt-3.5-turbo', - temperature=0.0, - messages=[ - {'role': 'system', 'name': 'overall_prompt', 'content': overall_prompt}, - {'role': 'user', 'name': 'task', 'content': task}, - {'role': 'user', 'name': 'role', 'content': role}, - {'role': 'user', 'name': 'course_name', 'content': f'Course name: {course_name}'}, - {'role': 'system', 'name': 'overall_prompt', 'content': overall_prompt}, - {'role': 'user', 'name': 'task', 'content': task}, - ] - ) - - reasons_for_aid = response.choices[0].message.content - reasons_for_aid = reasons_for_aid.replace('Reasons for Financial Aid Application:\n', '') - - while reasons_for_aid.startswith('\n'): - reasons_for_aid = reasons_for_aid[1:] - - # 2. How will your selected course help with your goals? 
-    # Task prompt (Chinese): "Answer 'How will your selected course help with your goals?' in
-    # 150-300 words; output only the answer itself, with no extra text or explanations."
-    task = '请你根据给出的信息回答:How will your selected course help with your goals? 答案字数在 150-300 words ' \
-           '之间,请注意,输出仅仅包括问题的答案即可,前后均不需要添加任何东西,也不需要输出任何解释性语句.'
-
-    response = openai.ChatCompletion.create(
-        model='gpt-3.5-turbo',
-        temperature=0.0,
-        messages=[
-            {'role': 'system', 'name': 'overall_prompt', 'content': overall_prompt},
-            {'role': 'user', 'name': 'task', 'content': task},
-            {'role': 'user', 'name': 'role', 'content': role},
-            {'role': 'user', 'name': 'course_name', 'content': f'Course name: {course_name}'},
-            {'role': 'system', 'name': 'overall_prompt', 'content': overall_prompt},
-            {'role': 'user', 'name': 'task', 'content': task},
-        ]
-    )
-
-    how_will_course_help = response.choices[0].message.content
-
-    return reasons_for_aid, how_will_course_help
-
-
-params = {
-    'education_level': 'College Degree',
-    'annual_income': 0,
-    'employment_status': 'Student',
-}
-
-if __name__ == '__main__':
-    gr.Interface(
-        fn=predict,
-        inputs=[
-            gr.components.Dropdown(['High School', 'Some College', 'College Degree', 'Master’s/Advanced degree', 'Other'], value=params['education_level'], label='Education'),
-            gr.components.Slider(0, 100, params['annual_income'], label='Annual Income ($ USD)'),
-            gr.components.Dropdown(['Full-time', 'Part-time', 'Unemployed', 'Student', 'Other'], value=params['employment_status'], label='Employment Status'),
-            gr.Textbox(label="Course Name"),
-            gr.Textbox(label="OpenAI API Key")
-        ],
-        outputs=[
-            gr.Textbox(label="Reason you applied for aid", show_copy_button=True),
-            gr.Textbox(label="How will your selected course help with your goals?", show_copy_button=True)
-        ],
-    ).launch()
diff --git a/spaces/Yan233th/so-vits-svc-models/spec_gen.py b/spaces/Yan233th/so-vits-svc-models/spec_gen.py
deleted file mode 100644
index 9476395adab6fa841fde10c05fbb92902310ebd4..0000000000000000000000000000000000000000
--- a/spaces/Yan233th/so-vits-svc-models/spec_gen.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from data_utils import TextAudioSpeakerLoader
-import json
-from tqdm import tqdm
-
-from utils import HParams
-
-config_path = 'configs/config.json'
-with open(config_path, "r") as f:
-    data = f.read()
-config = json.loads(data)
-hps = HParams(**config)
-
-train_dataset = TextAudioSpeakerLoader("filelists/train.txt", hps)
-test_dataset = TextAudioSpeakerLoader("filelists/test.txt", hps)
-eval_dataset = TextAudioSpeakerLoader("filelists/val.txt", hps)
-
-# Iterating each loader once forces every item through __getitem__, so any
-# per-item preprocessing (e.g. spectrogram generation and caching) runs ahead of training.
-for _ in tqdm(train_dataset):
-    pass
-for _ in tqdm(eval_dataset):
-    pass
-for _ in tqdm(test_dataset):
-    pass
\ No newline at end of file
diff --git a/spaces/YangHao520/testCreateFile/app.py b/spaces/YangHao520/testCreateFile/app.py
deleted file mode 100644
index 272c97f72e2de1d6f8c96aa96a88a4b219548de0..0000000000000000000000000000000000000000
--- a/spaces/YangHao520/testCreateFile/app.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import os
-
-import gradio as gr
-import tempfile
-import shutil
-
-def generate_file(file_obj):
-    global tmpdir
-    try:
-        print('Temporary directory: {}'.format(tmpdir))
-        print('Uploaded file path: {}'.format(file_obj.name))  # absolute path where Gradio stored the upload
-
-        # Once we have the upload's absolute path, everything else works as usual
-
-        # Copy the file into the temporary directory
-        shutil.copy(file_obj.name, tmpdir)
-
-        # Name of the file as uploaded to Gradio
-        FileName = os.path.basename(file_obj.name)
-
-        # Path of the copy inside the temporary directory
-        NewfilePath = os.path.join(tmpdir, FileName)
-        print(NewfilePath)
-
-        # Open the copied file at its new location
-        with open(NewfilePath, 'rb') as file_obj:
-
-            # Create a new local file and write the uploaded content into it
-            outputPath = os.path.join(tmpdir, "New" + FileName)
-            with open(outputPath, 'wb') as w:
-                w.write(file_obj.read())
-
-        # Return the new file's path (this is what the File output serves for download)
-        return outputPath
-    except:
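-        # Any failure above (copying or rewriting the upload) falls through to an empty
-        # path below, so the demo returns no file instead of raising in the UI.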
-        return ''
-
-def main():
-    global tmpdir
-    with tempfile.TemporaryDirectory(dir='.') as tmpdir:
-        # Define the input and output components
-        inputs = gr.components.File(label="Upload file")
-        outputs = gr.components.File(label="Download file")
-
-        # Create the Gradio app
-        app = gr.Interface(fn=generate_file, inputs=inputs, outputs=outputs,
-                           title="File upload / downloadable file demo",
-                           description="Upload any file you like, as long as it fits in your machine's memory"
-                           )
-
-        # Launch the app
-        app.launch()
-
-if __name__ == "__main__":
-    main()
\ No newline at end of file
diff --git a/spaces/Yuliang/ECON/lib/net/Discriminator.py b/spaces/Yuliang/ECON/lib/net/Discriminator.py
deleted file mode 100644
index b47ef9fd05ef645950be61111d417638a57ae3c6..0000000000000000000000000000000000000000
--- a/spaces/Yuliang/ECON/lib/net/Discriminator.py
+++ /dev/null
@@ -1,521 +0,0 @@
-""" The code is based on https://github.com/apple/ml-gsn/ with adaptation. """
-
-import math
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from lib.torch_utils.ops.native_ops import (
-    FusedLeakyReLU,
-    fused_leaky_relu,
-    upfirdn2d,
-)
-
-
-class DiscriminatorHead(nn.Module):
-    def __init__(self, in_channel, disc_stddev=False):
-        super().__init__()
-
-        self.disc_stddev = disc_stddev
-        stddev_dim = 1 if disc_stddev else 0
-
-        self.conv_stddev = ConvLayer2d(
-            in_channel=in_channel + stddev_dim,
-            out_channel=in_channel,
-            kernel_size=3,
-            activate=True
-        )
-
-        self.final_linear = nn.Sequential(
-            nn.Flatten(),
-            EqualLinear(in_channel=in_channel * 4 * 4, out_channel=in_channel, activate=True),
-            EqualLinear(in_channel=in_channel, out_channel=1),
-        )
-
-    def cat_stddev(self, x, stddev_group=4, stddev_feat=1):
-        perm = torch.randperm(len(x))
-        inv_perm = torch.argsort(perm)
-
-        batch, channel, height, width = x.shape
-        # shuffle inputs so that all views in a single trajectory don't get put together
-        x = x[perm]
-
-        group = min(batch, stddev_group)
-        stddev = x.view(group, -1, stddev_feat, channel // stddev_feat, height, width)
-        stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
-        stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
-        stddev = stddev.repeat(group, 1, height, width)
-
-        stddev = stddev[inv_perm]  # reorder inputs
-        x = x[inv_perm]
-
-        out = torch.cat([x, stddev], 1)
-        return out
-
-    def forward(self, x):
-        if self.disc_stddev:
-            x = self.cat_stddev(x)
-        x = self.conv_stddev(x)
-        out = self.final_linear(x)
-        return out
-
-
-class ConvDecoder(nn.Module):
-    def __init__(self, in_channel, out_channel, in_res, out_res):
-        super().__init__()
-
-        log_size_in = int(math.log(in_res, 2))
-        log_size_out = int(math.log(out_res, 2))
-
-        self.layers = []
-        in_ch = in_channel
-        for i in range(log_size_in, log_size_out):
-            out_ch = in_ch // 2
-            self.layers.append(
-                ConvLayer2d(
-                    in_channel=in_ch,
-                    out_channel=out_ch,
-                    kernel_size=3,
-                    upsample=True,
-                    bias=True,
-                    activate=True
-                )
-            )
-            in_ch = out_ch
-
-        self.layers.append(
-            ConvLayer2d(
-                in_channel=in_ch, out_channel=out_channel, kernel_size=3, bias=True, activate=False
-            )
-        )
-        self.layers = nn.Sequential(*self.layers)
-
-    def forward(self, x):
-        return self.layers(x)
-
-
-class StyleDiscriminator(nn.Module):
-    def __init__(self, in_channel, in_res, ch_mul=64, ch_max=512, **kwargs):
-        super().__init__()
-
-        log_size_in = int(math.log(in_res, 2))
-        log_size_out = int(math.log(4, 2))
-
-        self.conv_in = ConvLayer2d(in_channel=in_channel, out_channel=ch_mul, kernel_size=3)
-
-        # each resblock will halve the resolution and double the number of features (until a maximum of ch_max)
-        self.layers = []
-        in_channels = ch_mul
-        for i in range(log_size_in, log_size_out, -1):
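-            # e.g. in_res=256: log_size_in=8, log_size_out=2, giving six blocks (256 -> 4 spatial)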
-            out_channels = int(min(in_channels * 2, ch_max))
-            self.layers.append(
-                ConvResBlock2d(in_channel=in_channels, out_channel=out_channels, downsample=True)
-            )
-            in_channels = out_channels
-        self.layers = nn.Sequential(*self.layers)
-
-        self.disc_out = DiscriminatorHead(in_channel=in_channels, disc_stddev=True)
-
-    def forward(self, x):
-        x = self.conv_in(x)
-        x = self.layers(x)
-        out = self.disc_out(x)
-        return out
-
-
-def make_kernel(k):
-    k = torch.tensor(k, dtype=torch.float32)
-
-    if k.ndim == 1:
-        k = k[None, :] * k[:, None]
-
-    k /= k.sum()
-
-    return k
-
-
-class Blur(nn.Module):
-    """Blur layer.
-
-    Applies a blur kernel to the input image using a finite impulse response filter. Blurring feature maps after
-    convolutional upsampling or before convolutional downsampling helps produce models that are more robust to
-    shifting inputs (https://richzhang.github.io/antialiased-cnns/). In the context of GANs, this can provide
-    cleaner gradients, and therefore more stable training.
-
-    Args:
-    ----
-    kernel: list, int
-        A list of integers representing a blur kernel. For example: [1, 3, 3, 1].
-    pad: tuple, int
-        A tuple of integers representing the number of rows/columns of padding to be added to the top/left and
-        the bottom/right respectively.
-    upsample_factor: int
-        Upsample factor.
-
-    """
-    def __init__(self, kernel, pad, upsample_factor=1):
-        super().__init__()
-
-        kernel = make_kernel(kernel)
-
-        if upsample_factor > 1:
-            kernel = kernel * (upsample_factor**2)
-
-        self.register_buffer("kernel", kernel)
-        self.pad = pad
-
-    def forward(self, input):
-        out = upfirdn2d(input, self.kernel, pad=self.pad)
-        return out
-
-
-class Upsample(nn.Module):
-    """Upsampling layer.
-
-    Perform upsampling using a blur kernel.
-
-    Args:
-    ----
-    kernel: list, int
-        A list of integers representing a blur kernel. For example: [1, 3, 3, 1].
-    factor: int
-        Upsampling factor.
-
-    """
-    def __init__(self, kernel=[1, 3, 3, 1], factor=2):
-        super().__init__()
-
-        self.factor = factor
-        kernel = make_kernel(kernel) * (factor**2)
-        self.register_buffer("kernel", kernel)
-
-        p = kernel.shape[0] - factor
-        pad0 = (p + 1) // 2 + factor - 1
-        pad1 = p // 2
-        self.pad = (pad0, pad1)
-
-    def forward(self, input):
-        out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad)
-        return out
-
-
-class Downsample(nn.Module):
-    """Downsampling layer.
-
-    Perform downsampling using a blur kernel.
-
-    Args:
-    ----
-    kernel: list, int
-        A list of integers representing a blur kernel. For example: [1, 3, 3, 1].
-    factor: int
-        Downsampling factor.
-
-    """
-    def __init__(self, kernel=[1, 3, 3, 1], factor=2):
-        super().__init__()
-
-        self.factor = factor
-        kernel = make_kernel(kernel)
-        self.register_buffer("kernel", kernel)
-
-        p = kernel.shape[0] - factor
-        pad0 = (p + 1) // 2
-        pad1 = p // 2
-        self.pad = (pad0, pad1)
-
-    def forward(self, input):
-        out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad)
-        return out
-
-
-class EqualLinear(nn.Module):
-    """Linear layer with equalized learning rate.
-
-    During the forward pass the weights are scaled by the inverse of the He constant (i.e. sqrt(in_dim)) to
-    prevent vanishing gradients and accelerate training. This constant only works for ReLU or LeakyReLU
-    activation functions.
-
-    Args:
-    ----
-    in_channel: int
-        Input channels.
-    out_channel: int
-        Output channels.
-    bias: bool
-        Use bias term.
-    bias_init: float
-        Initial value for the bias.
-    lr_mul: float
-        Learning rate multiplier.
By scaling weights and the bias we can proportionally scale the magnitude of - the gradients, effectively increasing/decreasing the learning rate for this layer. - activate: bool - Apply leakyReLU activation. - - """ - def __init__(self, in_channel, out_channel, bias=True, bias_init=0, lr_mul=1, activate=False): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_channel, in_channel).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel).fill_(bias_init)) - else: - self.bias = None - - self.activate = activate - self.scale = (1 / math.sqrt(in_channel)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activate: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - else: - out = F.linear(input, self.weight * self.scale, bias=self.bias * self.lr_mul) - return out - - def __repr__(self): - return f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})" - - -class EqualConv2d(nn.Module): - """2D convolution layer with equalized learning rate. - - During the forward pass the weights are scaled by the inverse of the He constant (i.e. sqrt(in_dim)) to - prevent vanishing gradients and accelerate training. This constant only works for ReLU or LeakyReLU - activation functions. - - Args: - ---- - in_channel: int - Input channels. - out_channel: int - Output channels. - kernel_size: int - Kernel size. - stride: int - Stride of convolutional kernel across the input. - padding: int - Amount of zero padding applied to both sides of the input. - bias: bool - Use bias term. - - """ - def __init__(self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_channel, in_channel, kernel_size, kernel_size)) - self.scale = 1 / math.sqrt(in_channel * kernel_size**2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding - ) - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]}," - f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})" - ) - - -class EqualConvTranspose2d(nn.Module): - """2D transpose convolution layer with equalized learning rate. - - During the forward pass the weights are scaled by the inverse of the He constant (i.e. sqrt(in_dim)) to - prevent vanishing gradients and accelerate training. This constant only works for ReLU or LeakyReLU - activation functions. - - Args: - ---- - in_channel: int - Input channels. - out_channel: int - Output channels. - kernel_size: int - Kernel size. - stride: int - Stride of convolutional kernel across the input. - padding: int - Amount of zero padding applied to both sides of the input. - output_padding: int - Extra padding added to input to achieve the desired output size. - bias: bool - Use bias term. 
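-
-    Example (a shape sketch with hypothetical sizes, assuming the default padding=0 and output_padding=0):
-        >>> conv = EqualConvTranspose2d(64, 32, kernel_size=3, stride=2)
-        >>> x = torch.randn(1, 64, 8, 8)
-        >>> conv(x).shape  # (H - 1) * stride + kernel_size = 7 * 2 + 3 = 17
-        torch.Size([1, 32, 17, 17])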
- - """ - def __init__( - self, - in_channel, - out_channel, - kernel_size, - stride=1, - padding=0, - output_padding=0, - bias=True - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(in_channel, out_channel, kernel_size, kernel_size)) - self.scale = 1 / math.sqrt(in_channel * kernel_size**2) - - self.stride = stride - self.padding = padding - self.output_padding = output_padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - else: - self.bias = None - - def forward(self, input): - out = F.conv_transpose2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - output_padding=self.output_padding, - ) - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[0]}, {self.weight.shape[1]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class ConvLayer2d(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size=3, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - assert not (upsample and downsample), 'Cannot upsample and downsample simultaneously' - layers = [] - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - layers.append( - EqualConvTranspose2d( - in_channel, - out_channel, - kernel_size, - padding=0, - stride=2, - bias=bias and not activate - ) - ) - layers.append(Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor)) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=0, - stride=2, - bias=bias and not activate - ) - ) - - if (not downsample) and (not upsample): - padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=padding, - stride=1, - bias=bias and not activate - ) - ) - - if activate: - layers.append(FusedLeakyReLU(out_channel, bias=bias)) - - super().__init__(*layers) - - -class ConvResBlock2d(nn.Module): - """2D convolutional residual block with equalized learning rate. - - Residual block composed of 3x3 convolutions and leaky ReLUs. - - Args: - ---- - in_channel: int - Input channels. - out_channel: int - Output channels. - upsample: bool - Apply upsampling via strided convolution in the first conv. - downsample: bool - Apply downsampling via strided convolution in the second conv. 
- - """ - def __init__(self, in_channel, out_channel, upsample=False, downsample=False): - super().__init__() - - assert not (upsample and downsample), 'Cannot upsample and downsample simultaneously' - mid_ch = in_channel if downsample else out_channel - - self.conv1 = ConvLayer2d(in_channel, mid_ch, upsample=upsample, kernel_size=3) - self.conv2 = ConvLayer2d(mid_ch, out_channel, downsample=downsample, kernel_size=3) - - if (in_channel != out_channel) or upsample or downsample: - self.skip = ConvLayer2d( - in_channel, - out_channel, - upsample=upsample, - downsample=downsample, - kernel_size=1, - activate=False, - bias=False, - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - if hasattr(self, 'skip'): - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - else: - out = (out + input) / math.sqrt(2) - return out diff --git a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2_2/model_download/yolov5_model_p5_all.sh b/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2_2/model_download/yolov5_model_p5_all.sh deleted file mode 100644 index a8e11f6c73445e2e7855d7b62c2b8ebbb7236e9d..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2_2/model_download/yolov5_model_p5_all.sh +++ /dev/null @@ -1,8 +0,0 @@ -cd ./yolov5 - -# 下载YOLOv5模型 -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5n.pt -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5m.pt -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5l.pt -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5x.pt \ No newline at end of file diff --git a/spaces/aaronb/Anything2Image/anything2image/api.py b/spaces/aaronb/Anything2Image/anything2image/api.py deleted file mode 100644 index 396001ea898e8c5cba3a7117c0cf2a7893049ec6..0000000000000000000000000000000000000000 --- a/spaces/aaronb/Anything2Image/anything2image/api.py +++ /dev/null @@ -1,59 +0,0 @@ -import soundfile as sf -import torch -import numpy as np -from diffusers import StableUnCLIPImg2ImgPipeline -from PIL import Image - -from . 
import imagebind - - -class Anything2Image: - def __init__( - self, - device = "cuda:0" if torch.cuda.is_available() else "cpu", - imagebind_download_dir="checkpoints" - ): - self.pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( - "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=None if device == 'cpu' else torch.float16, - ).to(device) - self.model = imagebind.imagebind_huge(pretrained=True, download_dir=imagebind_download_dir).eval().to(device) - self.device = device - - @torch.no_grad() - def __call__(self, prompt=None, audio=None, image=None, text=None): - device, model, pipe = self.device, self.model, self.pipe - - if audio is not None: - sr, waveform = audio - sf.write('tmp.wav', waveform, sr) - embeddings = model.forward({ - imagebind.ModalityType.AUDIO: imagebind.load_and_transform_audio_data(['tmp.wav'], device), - }) - audio_embeddings = embeddings[imagebind.ModalityType.AUDIO] - if image is not None: - Image.fromarray(image).save('tmp.png') - embeddings = model.forward({ - imagebind.ModalityType.VISION: imagebind.load_and_transform_vision_data(['tmp.png'], device), - }, normalize=False) - image_embeddings = embeddings[imagebind.ModalityType.VISION] - - if audio is not None and image is not None: - embeddings = (audio_embeddings + image_embeddings) / 2 - elif image is not None: - embeddings = image_embeddings - elif audio is not None: - embeddings = audio_embeddings - else: - embeddings = None - - if text is not None and text != "": - embeddings = self.model.forward({ - imagebind.ModalityType.TEXT: imagebind.load_and_transform_text([text], device), - }, normalize=False) - embeddings = embeddings[imagebind.ModalityType.TEXT] - - if embeddings is not None and self.device != 'cpu': - embeddings = embeddings.half() - - images = pipe(prompt=prompt, image_embeds=embeddings).images - return images[0] \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/ade20k.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/ade20k.py deleted file mode 100644 index efc8b4bb20c981f3db6df7eb52b3dc0744c94cc0..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/ade20k.py +++ /dev/null @@ -1,54 +0,0 @@ -# dataset settings -dataset_type = 'ADE20KDataset' -data_root = 'data/ade/ADEChallengeData2016' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -crop_size = (512, 512) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', reduce_zero_label=True), - dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2048, 512), - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - 
-        img_dir='images/training',
-        ann_dir='annotations/training',
-        pipeline=train_pipeline),
-    val=dict(
-        type=dataset_type,
-        data_root=data_root,
-        img_dir='images/validation',
-        ann_dir='annotations/validation',
-        pipeline=test_pipeline),
-    test=dict(
-        type=dataset_type,
-        data_root=data_root,
-        img_dir='images/validation',
-        ann_dir='annotations/validation',
-        pipeline=test_pipeline))
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/dense_heads/ld_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/dense_heads/ld_head.py
deleted file mode 100644
index 501e1f7befa086f0b2f818531807411fc383d7bd..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/dense_heads/ld_head.py
+++ /dev/null
@@ -1,261 +0,0 @@
-import torch
-from mmcv.runner import force_fp32
-
-from mmdet.core import (bbox2distance, bbox_overlaps, distance2bbox,
-                        multi_apply, reduce_mean)
-from ..builder import HEADS, build_loss
-from .gfl_head import GFLHead
-
-
-@HEADS.register_module()
-class LDHead(GFLHead):
-    """Localization distillation Head.
-
-    It utilizes the learned bbox distributions to transfer the localization
-    dark knowledge from teacher to student. Original paper: `Localization
-    Distillation for Object Detection <https://arxiv.org/abs/2102.12252>`_
-
-    Args:
-        num_classes (int): Number of categories excluding the background
-            category.
-        in_channels (int): Number of channels in the input feature map.
-        loss_ld (dict): Config of Localization Distillation Loss (LD),
-            T is the temperature for distillation.
-    """
-
-    def __init__(self,
-                 num_classes,
-                 in_channels,
-                 loss_ld=dict(
-                     type='LocalizationDistillationLoss',
-                     loss_weight=0.25,
-                     T=10),
-                 **kwargs):
-
-        super(LDHead, self).__init__(num_classes, in_channels, **kwargs)
-        self.loss_ld = build_loss(loss_ld)
-
-    def loss_single(self, anchors, cls_score, bbox_pred, labels, label_weights,
-                    bbox_targets, stride, soft_targets, num_total_samples):
-        """Compute loss of a single scale level.
-
-        Args:
-            anchors (Tensor): Box reference for each scale level with shape
-                (N, num_total_anchors, 4).
-            cls_score (Tensor): Cls and quality joint scores for each scale
-                level with shape (N, num_classes, H, W).
-            bbox_pred (Tensor): Box distribution logits for each scale
-                level with shape (N, 4*(n+1), H, W), n is max value of integral
-                set.
-            labels (Tensor): Labels of each anchor with shape
-                (N, num_total_anchors).
-            label_weights (Tensor): Label weights of each anchor with shape
-                (N, num_total_anchors).
-            bbox_targets (Tensor): BBox regression targets of each anchor with
-                shape (N, num_total_anchors, 4).
-            stride (tuple): Stride in this scale level.
-            soft_targets (Tensor): Teacher's box distribution logits used as
-                the distillation target.
-            num_total_samples (int): Number of positive samples that is
-                reduced over all GPUs.
-
-        Returns:
-            dict[tuple, Tensor]: Loss components and weight targets.
-        """
-        assert stride[0] == stride[1], 'h stride is not equal to w stride!'
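-        # Everything below is flattened so positive anchors can be selected with a single 1-D index.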
- anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(-1, 4 * (self.reg_max + 1)) - soft_targets = soft_targets.permute(0, 2, 3, - 1).reshape(-1, - 4 * (self.reg_max + 1)) - - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - score = label_weights.new_zeros(labels.shape) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - pos_anchor_centers = self.anchor_center(pos_anchors) / stride[0] - - weight_targets = cls_score.detach().sigmoid() - weight_targets = weight_targets.max(dim=1)[0][pos_inds] - pos_bbox_pred_corners = self.integral(pos_bbox_pred) - pos_decode_bbox_pred = distance2bbox(pos_anchor_centers, - pos_bbox_pred_corners) - pos_decode_bbox_targets = pos_bbox_targets / stride[0] - score[pos_inds] = bbox_overlaps( - pos_decode_bbox_pred.detach(), - pos_decode_bbox_targets, - is_aligned=True) - pred_corners = pos_bbox_pred.reshape(-1, self.reg_max + 1) - pos_soft_targets = soft_targets[pos_inds] - soft_corners = pos_soft_targets.reshape(-1, self.reg_max + 1) - - target_corners = bbox2distance(pos_anchor_centers, - pos_decode_bbox_targets, - self.reg_max).reshape(-1) - - # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_decode_bbox_targets, - weight=weight_targets, - avg_factor=1.0) - - # dfl loss - loss_dfl = self.loss_dfl( - pred_corners, - target_corners, - weight=weight_targets[:, None].expand(-1, 4).reshape(-1), - avg_factor=4.0) - - # ld loss - loss_ld = self.loss_ld( - pred_corners, - soft_corners, - weight=weight_targets[:, None].expand(-1, 4).reshape(-1), - avg_factor=4.0) - - else: - loss_ld = bbox_pred.sum() * 0 - loss_bbox = bbox_pred.sum() * 0 - loss_dfl = bbox_pred.sum() * 0 - weight_targets = bbox_pred.new_tensor(0) - - # cls (qfl) loss - loss_cls = self.loss_cls( - cls_score, (labels, score), - weight=label_weights, - avg_factor=num_total_samples) - - return loss_cls, loss_bbox, loss_dfl, loss_ld, weight_targets.sum() - - def forward_train(self, - x, - out_teacher, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """ - Args: - x (list[Tensor]): Features from FPN. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - - Returns: - tuple[dict, list]: The loss components and proposals of each image. - - - losses (dict[str, Tensor]): A dictionary of loss components. - - proposal_list (list[Tensor]): Proposals of each image. 
- """ - outs = self(x) - soft_target = out_teacher[1] - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, soft_target, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, soft_target, img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - if proposal_cfg is None: - return losses - else: - proposal_list = self.get_bboxes(*outs, img_metas, cfg=proposal_cfg) - return losses, proposal_list - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - soft_target, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Cls and quality scores for each scale - level has shape (N, num_classes, H, W). - bbox_preds (list[Tensor]): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets - - num_total_samples = reduce_mean( - torch.tensor(num_total_pos, dtype=torch.float, - device=device)).item() - num_total_samples = max(num_total_samples, 1.0) - - losses_cls, losses_bbox, losses_dfl, losses_ld, \ - avg_factor = multi_apply( - self.loss_single, - anchor_list, - cls_scores, - bbox_preds, - labels_list, - label_weights_list, - bbox_targets_list, - self.anchor_generator.strides, - soft_target, - num_total_samples=num_total_samples) - - avg_factor = sum(avg_factor) + 1e-6 - avg_factor = reduce_mean(avg_factor).item() - losses_bbox = [x / avg_factor for x in losses_bbox] - losses_dfl = [x / avg_factor for x in losses_dfl] - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_dfl=losses_dfl, - loss_ld=losses_ld) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/feature_relay_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/feature_relay_head.py deleted file mode 100644 index a1cfb2ce8631d51e5c465f9bbc4164a37acc4782..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/feature_relay_head.py +++ /dev/null @@ -1,55 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import kaiming_init -from mmcv.runner import auto_fp16 - -from mmdet.models.builder import HEADS - - -@HEADS.register_module() -class 
FeatureRelayHead(nn.Module):
-    """Feature Relay Head used in `SCNet <https://arxiv.org/abs/2012.10150>`_.
-
-    Args:
-        in_channels (int, optional): number of input channels. Default: 1024.
-        out_conv_channels (int, optional): number of output channels before
-            classification layer. Default: 256.
-        roi_feat_size (int, optional): roi feat size at box head. Default: 7.
-        scale_factor (int, optional): scale factor to match roi feat size
-            at mask head. Default: 2.
-    """
-
-    def __init__(self,
-                 in_channels=1024,
-                 out_conv_channels=256,
-                 roi_feat_size=7,
-                 scale_factor=2):
-        super(FeatureRelayHead, self).__init__()
-        assert isinstance(roi_feat_size, int)
-
-        self.in_channels = in_channels
-        self.out_conv_channels = out_conv_channels
-        self.roi_feat_size = roi_feat_size
-        self.out_channels = (roi_feat_size**2) * out_conv_channels
-        self.scale_factor = scale_factor
-        self.fp16_enabled = False
-
-        self.fc = nn.Linear(self.in_channels, self.out_channels)
-        self.upsample = nn.Upsample(
-            scale_factor=scale_factor, mode='bilinear', align_corners=True)
-
-    def init_weights(self):
-        """Init weights for the head."""
-        kaiming_init(self.fc)
-
-    @auto_fp16()
-    def forward(self, x):
-        """Forward function."""
-        N, in_C = x.shape
-        if N > 0:
-            out_C = self.out_conv_channels
-            out_HW = self.roi_feat_size
-            x = self.fc(x)
-            x = x.reshape(N, out_C, out_HW, out_HW)
-            x = self.upsample(x)
-            return x
-        return None
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/video/optflow.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/video/optflow.py
deleted file mode 100644
index 84160f8d6ef9fceb5a2f89e7481593109fc1905d..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/video/optflow.py
+++ /dev/null
@@ -1,254 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import warnings
-
-import cv2
-import numpy as np
-
-from annotator.uniformer.mmcv.arraymisc import dequantize, quantize
-from annotator.uniformer.mmcv.image import imread, imwrite
-from annotator.uniformer.mmcv.utils import is_str
-
-
-def flowread(flow_or_path, quantize=False, concat_axis=0, *args, **kwargs):
-    """Read an optical flow map.
-
-    Args:
-        flow_or_path (ndarray or str): A flow map or filepath.
-        quantize (bool): Whether to read a quantized pair; if set to True,
-            remaining args will be passed to :func:`dequantize_flow`.
-        concat_axis (int): The axis that dx and dy are concatenated,
-            can be either 0 or 1. Ignored if quantize is False.
- - Returns: - ndarray: Optical flow represented as a (h, w, 2) numpy array - """ - if isinstance(flow_or_path, np.ndarray): - if (flow_or_path.ndim != 3) or (flow_or_path.shape[-1] != 2): - raise ValueError(f'Invalid flow with shape {flow_or_path.shape}') - return flow_or_path - elif not is_str(flow_or_path): - raise TypeError(f'"flow_or_path" must be a filename or numpy array, ' - f'not {type(flow_or_path)}') - - if not quantize: - with open(flow_or_path, 'rb') as f: - try: - header = f.read(4).decode('utf-8') - except Exception: - raise IOError(f'Invalid flow file: {flow_or_path}') - else: - if header != 'PIEH': - raise IOError(f'Invalid flow file: {flow_or_path}, ' - 'header does not contain PIEH') - - w = np.fromfile(f, np.int32, 1).squeeze() - h = np.fromfile(f, np.int32, 1).squeeze() - flow = np.fromfile(f, np.float32, w * h * 2).reshape((h, w, 2)) - else: - assert concat_axis in [0, 1] - cat_flow = imread(flow_or_path, flag='unchanged') - if cat_flow.ndim != 2: - raise IOError( - f'{flow_or_path} is not a valid quantized flow file, ' - f'its dimension is {cat_flow.ndim}.') - assert cat_flow.shape[concat_axis] % 2 == 0 - dx, dy = np.split(cat_flow, 2, axis=concat_axis) - flow = dequantize_flow(dx, dy, *args, **kwargs) - - return flow.astype(np.float32) - - -def flowwrite(flow, filename, quantize=False, concat_axis=0, *args, **kwargs): - """Write optical flow to file. - - If the flow is not quantized, it will be saved as a .flo file losslessly, - otherwise a jpeg image which is lossy but of much smaller size. (dx and dy - will be concatenated horizontally into a single image if quantize is True.) - - Args: - flow (ndarray): (h, w, 2) array of optical flow. - filename (str): Output filepath. - quantize (bool): Whether to quantize the flow and save it to 2 jpeg - images. If set to True, remaining args will be passed to - :func:`quantize_flow`. - concat_axis (int): The axis that dx and dy are concatenated, - can be either 0 or 1. Ignored if quantize is False. - """ - if not quantize: - with open(filename, 'wb') as f: - f.write('PIEH'.encode('utf-8')) - np.array([flow.shape[1], flow.shape[0]], dtype=np.int32).tofile(f) - flow = flow.astype(np.float32) - flow.tofile(f) - f.flush() - else: - assert concat_axis in [0, 1] - dx, dy = quantize_flow(flow, *args, **kwargs) - dxdy = np.concatenate((dx, dy), axis=concat_axis) - imwrite(dxdy, filename) - - -def quantize_flow(flow, max_val=0.02, norm=True): - """Quantize flow to [0, 255]. - - After this step, the size of flow will be much smaller, and can be - dumped as jpeg images. - - Args: - flow (ndarray): (h, w, 2) array of optical flow. - max_val (float): Maximum value of flow, values beyond - [-max_val, max_val] will be truncated. - norm (bool): Whether to divide flow values by image width/height. - - Returns: - tuple[ndarray]: Quantized dx and dy. - """ - h, w, _ = flow.shape - dx = flow[..., 0] - dy = flow[..., 1] - if norm: - dx = dx / w # avoid inplace operations - dy = dy / h - # use 255 levels instead of 256 to make sure 0 is 0 after dequantization. - flow_comps = [ - quantize(d, -max_val, max_val, 255, np.uint8) for d in [dx, dy] - ] - return tuple(flow_comps) - - -def dequantize_flow(dx, dy, max_val=0.02, denorm=True): - """Recover from quantized flow. - - Args: - dx (ndarray): Quantized dx. - dy (ndarray): Quantized dy. - max_val (float): Maximum value used when quantizing. - denorm (bool): Whether to multiply flow values with width/height. - - Returns: - ndarray: Dequantized flow. 
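-
-    Example (illustrative round trip; a small random flow stands in for real data):
-
-        >>> flow = np.random.rand(4, 4, 2).astype(np.float32)
-        >>> dx, dy = quantize_flow(flow)
-        >>> restored = dequantize_flow(dx, dy)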
- """ - assert dx.shape == dy.shape - assert dx.ndim == 2 or (dx.ndim == 3 and dx.shape[-1] == 1) - - dx, dy = [dequantize(d, -max_val, max_val, 255) for d in [dx, dy]] - - if denorm: - dx *= dx.shape[1] - dy *= dx.shape[0] - flow = np.dstack((dx, dy)) - return flow - - -def flow_warp(img, flow, filling_value=0, interpolate_mode='nearest'): - """Use flow to warp img. - - Args: - img (ndarray, float or uint8): Image to be warped. - flow (ndarray, float): Optical Flow. - filling_value (int): The missing pixels will be set with filling_value. - interpolate_mode (str): bilinear -> Bilinear Interpolation; - nearest -> Nearest Neighbor. - - Returns: - ndarray: Warped image with the same shape of img - """ - warnings.warn('This function is just for prototyping and cannot ' - 'guarantee the computational efficiency.') - assert flow.ndim == 3, 'Flow must be in 3D arrays.' - height = flow.shape[0] - width = flow.shape[1] - channels = img.shape[2] - - output = np.ones( - (height, width, channels), dtype=img.dtype) * filling_value - - grid = np.indices((height, width)).swapaxes(0, 1).swapaxes(1, 2) - dx = grid[:, :, 0] + flow[:, :, 1] - dy = grid[:, :, 1] + flow[:, :, 0] - sx = np.floor(dx).astype(int) - sy = np.floor(dy).astype(int) - valid = (sx >= 0) & (sx < height - 1) & (sy >= 0) & (sy < width - 1) - - if interpolate_mode == 'nearest': - output[valid, :] = img[dx[valid].round().astype(int), - dy[valid].round().astype(int), :] - elif interpolate_mode == 'bilinear': - # dirty walkround for integer positions - eps_ = 1e-6 - dx, dy = dx + eps_, dy + eps_ - left_top_ = img[np.floor(dx[valid]).astype(int), - np.floor(dy[valid]).astype(int), :] * ( - np.ceil(dx[valid]) - dx[valid])[:, None] * ( - np.ceil(dy[valid]) - dy[valid])[:, None] - left_down_ = img[np.ceil(dx[valid]).astype(int), - np.floor(dy[valid]).astype(int), :] * ( - dx[valid] - np.floor(dx[valid]))[:, None] * ( - np.ceil(dy[valid]) - dy[valid])[:, None] - right_top_ = img[np.floor(dx[valid]).astype(int), - np.ceil(dy[valid]).astype(int), :] * ( - np.ceil(dx[valid]) - dx[valid])[:, None] * ( - dy[valid] - np.floor(dy[valid]))[:, None] - right_down_ = img[np.ceil(dx[valid]).astype(int), - np.ceil(dy[valid]).astype(int), :] * ( - dx[valid] - np.floor(dx[valid]))[:, None] * ( - dy[valid] - np.floor(dy[valid]))[:, None] - output[valid, :] = left_top_ + left_down_ + right_top_ + right_down_ - else: - raise NotImplementedError( - 'We only support interpolation modes of nearest and bilinear, ' - f'but got {interpolate_mode}.') - return output.astype(img.dtype) - - -def flow_from_bytes(content): - """Read dense optical flow from bytes. - - .. note:: - This load optical flow function works for FlyingChairs, FlyingThings3D, - Sintel, FlyingChairsOcc datasets, but cannot load the data from - ChairsSDHom. - - Args: - content (bytes): Optical flow bytes got from files or other streams. - - Returns: - ndarray: Loaded optical flow with the shape (H, W, 2). - """ - - # header in first 4 bytes - header = content[:4] - if header.decode('utf-8') != 'PIEH': - raise Exception('Flow file header does not contain PIEH') - # width in second 4 bytes - width = np.frombuffer(content[4:], np.int32, 1).squeeze() - # height in third 4 bytes - height = np.frombuffer(content[8:], np.int32, 1).squeeze() - # after first 12 bytes, all bytes are flow - flow = np.frombuffer(content[12:], np.float32, width * height * 2).reshape( - (height, width, 2)) - - return flow - - -def sparse_flow_from_bytes(content): - """Read the optical flow in KITTI datasets from bytes. 
-
-    This function is modified from how RAFT loads the `KITTI datasets `_.
-
-    Args:
-        content (bytes): Optical flow bytes got from files or other streams.
-
-    Returns:
-        Tuple(ndarray, ndarray): Loaded optical flow with the shape (H, W, 2)
-            and flow valid mask with the shape (H, W).
-    """  # noqa
-
-    content = np.frombuffer(content, np.uint8)
-    flow = cv2.imdecode(content, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)
-    flow = flow[:, :, ::-1].astype(np.float32)
-    # flow shape (H, W, 2) valid shape (H, W)
-    flow, valid = flow[:, :, :2], flow[:, :, 2]
-    flow = (flow - 2**15) / 64.0
-    return flow, valid
diff --git a/spaces/abidismail/22h-vintedois-diffusion-v0-1/app.py b/spaces/abidismail/22h-vintedois-diffusion-v0-1/app.py
deleted file mode 100644
index c1dd484084e36ddbdfd38baef27a08040b2d7893..0000000000000000000000000000000000000000
--- a/spaces/abidismail/22h-vintedois-diffusion-v0-1/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/22h/vintedois-diffusion-v0-1").launch()
\ No newline at end of file
diff --git a/spaces/abidlabs/quickdraw2/app.py b/spaces/abidlabs/quickdraw2/app.py
deleted file mode 100644
index 8437ee8f4e64645c43e30a37d6ebee396d5f0791..0000000000000000000000000000000000000000
--- a/spaces/abidlabs/quickdraw2/app.py
+++ /dev/null
@@ -1,43 +0,0 @@
-from pathlib import Path
-
-import torch
-import gradio as gr
-from torch import nn
-
-
-LABELS = Path('class_names.txt').read_text().splitlines()
-
-model = nn.Sequential(
-    nn.Conv2d(1, 32, 3, padding='same'),
-    nn.ReLU(),
-    nn.MaxPool2d(2),
-    nn.Conv2d(32, 64, 3, padding='same'),
-    nn.ReLU(),
-    nn.MaxPool2d(2),
-    nn.Conv2d(64, 128, 3, padding='same'),
-    nn.ReLU(),
-    nn.MaxPool2d(2),
-    nn.Flatten(),
-    nn.Linear(1152, 256),
-    nn.ReLU(),
-    nn.Linear(256, len(LABELS)),
-)
-state_dict = torch.load('pytorch_model.bin', map_location='cpu')
-model.load_state_dict(state_dict, strict=False)
-model.eval()
-
-def predict(im):
-    x = torch.tensor(im, dtype=torch.float32).unsqueeze(0).unsqueeze(0) / 255.
-
-    with torch.no_grad():
-        out = model(x)
-
-    probabilities = torch.nn.functional.softmax(out[0], dim=0)
-
-    values, indices = torch.topk(probabilities, 5)
-
-    return {LABELS[i]: v.item() for i, v in zip(indices, values)}
-
-
-interface = gr.Interface(predict, inputs='sketchpad', outputs='label', live=True)
-interface.launch(debug=True)
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/constants.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/constants.py
deleted file mode 100644
index 8a5785b6fdb21910a174252c5af2f05b40ece4a5..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/constants.py
+++ /dev/null
@@ -1,149 +0,0 @@
-DEFAULT_Z_NEAR = 0.05 # Near clipping plane, in meters
-DEFAULT_Z_FAR = 100.0 # Far clipping plane, in meters
-DEFAULT_SCENE_SCALE = 2.0 # Default scene scale
-MAX_N_LIGHTS = 4 # Maximum number of lights of each type allowed
-TARGET_OPEN_GL_MAJOR = 4 # Target OpenGL Major Version
-TARGET_OPEN_GL_MINOR = 1 # Target OpenGL Minor Version
-MIN_OPEN_GL_MAJOR = 3 # Minimum OpenGL Major Version
-MIN_OPEN_GL_MINOR = 3 # Minimum OpenGL Minor Version
-FLOAT_SZ = 4 # Byte size of GL float32
-UINT_SZ = 4 # Byte size of GL uint32
-SHADOW_TEX_SZ = 2048 # Width and Height of Shadow Textures
-TEXT_PADDING = 20 # Width of padding for rendering text (px)
-
-
-# Flags for render type
-class RenderFlags(object):
-    """Flags for rendering in the scene.
-
-    Combine them with the bitwise or.
For example, - - >>> flags = OFFSCREEN | SHADOWS_DIRECTIONAL | VERTEX_NORMALS - - would result in an offscreen render with directional shadows and - vertex normals enabled. - """ - NONE = 0 - """Normal PBR Render.""" - DEPTH_ONLY = 1 - """Only render the depth buffer.""" - OFFSCREEN = 2 - """Render offscreen and return the depth and (optionally) color buffers.""" - FLIP_WIREFRAME = 4 - """Invert the status of wireframe rendering for each mesh.""" - ALL_WIREFRAME = 8 - """Render all meshes as wireframes.""" - ALL_SOLID = 16 - """Render all meshes as solids.""" - SHADOWS_DIRECTIONAL = 32 - """Render shadows for directional lights.""" - SHADOWS_POINT = 64 - """Render shadows for point lights.""" - SHADOWS_SPOT = 128 - """Render shadows for spot lights.""" - SHADOWS_ALL = 32 | 64 | 128 - """Render shadows for all lights.""" - VERTEX_NORMALS = 256 - """Render vertex normals.""" - FACE_NORMALS = 512 - """Render face normals.""" - SKIP_CULL_FACES = 1024 - """Do not cull back faces.""" - RGBA = 2048 - """Render the color buffer with the alpha channel enabled.""" - FLAT = 4096 - """Render the color buffer flat, with no lighting computations.""" - SEG = 8192 - - -class TextAlign: - """Text alignment options for captions. - - Only use one at a time. - """ - CENTER = 0 - """Center the text by width and height.""" - CENTER_LEFT = 1 - """Center the text by height and left-align it.""" - CENTER_RIGHT = 2 - """Center the text by height and right-align it.""" - BOTTOM_LEFT = 3 - """Put the text in the bottom-left corner.""" - BOTTOM_RIGHT = 4 - """Put the text in the bottom-right corner.""" - BOTTOM_CENTER = 5 - """Center the text by width and fix it to the bottom.""" - TOP_LEFT = 6 - """Put the text in the top-left corner.""" - TOP_RIGHT = 7 - """Put the text in the top-right corner.""" - TOP_CENTER = 8 - """Center the text by width and fix it to the top.""" - - -class GLTF(object): - """Options for GL objects.""" - NEAREST = 9728 - """Nearest neighbor interpolation.""" - LINEAR = 9729 - """Linear interpolation.""" - NEAREST_MIPMAP_NEAREST = 9984 - """Nearest mipmapping.""" - LINEAR_MIPMAP_NEAREST = 9985 - """Linear mipmapping.""" - NEAREST_MIPMAP_LINEAR = 9986 - """Nearest mipmapping.""" - LINEAR_MIPMAP_LINEAR = 9987 - """Linear mipmapping.""" - CLAMP_TO_EDGE = 33071 - """Clamp to the edge of the texture.""" - MIRRORED_REPEAT = 33648 - """Mirror the texture.""" - REPEAT = 10497 - """Repeat the texture.""" - POINTS = 0 - """Render as points.""" - LINES = 1 - """Render as lines.""" - LINE_LOOP = 2 - """Render as a line loop.""" - LINE_STRIP = 3 - """Render as a line strip.""" - TRIANGLES = 4 - """Render as triangles.""" - TRIANGLE_STRIP = 5 - """Render as a triangle strip.""" - TRIANGLE_FAN = 6 - """Render as a triangle fan.""" - - -class BufFlags(object): - POSITION = 0 - NORMAL = 1 - TANGENT = 2 - TEXCOORD_0 = 4 - TEXCOORD_1 = 8 - COLOR_0 = 16 - JOINTS_0 = 32 - WEIGHTS_0 = 64 - - -class TexFlags(object): - NONE = 0 - NORMAL = 1 - OCCLUSION = 2 - EMISSIVE = 4 - BASE_COLOR = 8 - METALLIC_ROUGHNESS = 16 - DIFFUSE = 32 - SPECULAR_GLOSSINESS = 64 - - -class ProgramFlags: - NONE = 0 - USE_MATERIAL = 1 - VERTEX_NORMALS = 2 - FACE_NORMALS = 4 - - -__all__ = ['RenderFlags', 'TextAlign', 'GLTF'] diff --git a/spaces/aicg/Moxxie-Proxy/Dockerfile b/spaces/aicg/Moxxie-Proxy/Dockerfile deleted file mode 100644 index 4cb0ce42128d9a2ad33a395883f5e5455a38c707..0000000000000000000000000000000000000000 --- a/spaces/aicg/Moxxie-Proxy/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && 
\
-    apt-get install -y git
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-WORKDIR /app
-RUN npm install
-COPY Dockerfile greeting.md* .env* ./
-RUN npm run build
-EXPOSE 7860
-ENV NODE_ENV=production
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/aiditi/nvidia_denoiser/denoise.py b/spaces/aiditi/nvidia_denoiser/denoise.py
deleted file mode 100644
index 99244db8f70f4c5fc6fe7d16fb7ffea98fcec4f7..0000000000000000000000000000000000000000
--- a/spaces/aiditi/nvidia_denoiser/denoise.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import os
-import argparse
-import json
-from tqdm import tqdm
-from copy import deepcopy
-
-import numpy as np
-import torch
-
-import random
-random.seed(0)
-torch.manual_seed(0)
-np.random.seed(0)
-
-from scipy.io.wavfile import write as wavwrite
-
-from dataset import load_CleanNoisyPairDataset
-from util import find_max_epoch, print_size, sampling
-from network import CleanUNet
-
-
-def denoise(output_directory, ckpt_iter, subset, dump=False):
-    """
-    Denoise audio.
-
-    Parameters:
-    output_directory (str): save generated speeches to this path
-    ckpt_iter (int or 'max'): the pretrained checkpoint to be loaded;
-        automatically selects the maximum iteration if 'max' is selected
-    subset (str): training, testing, validation
-    dump (bool): whether to save the enhanced (denoised) audio
-    """
-
-    # setup local experiment path
-    exp_path = train_config["exp_path"]
-    print('exp_path:', exp_path)
-
-    # load data
-    loader_config = deepcopy(trainset_config)
-    loader_config["crop_length_sec"] = 0
-    dataloader = load_CleanNoisyPairDataset(
-        **loader_config,
-        subset=subset,
-        batch_size=1,
-        num_gpus=1
-    )
-
-    # predefine model
-    net = CleanUNet(**network_config).cuda()
-    print_size(net)
-
-    # load checkpoint
-    ckpt_directory = os.path.join(train_config["log"]["directory"], exp_path, 'checkpoint')
-    if ckpt_iter == 'max':
-        ckpt_iter = find_max_epoch(ckpt_directory)
-    if ckpt_iter != 'pretrained':
-        ckpt_iter = int(ckpt_iter)
-    model_path = os.path.join(ckpt_directory, '{}.pkl'.format(ckpt_iter))
-    checkpoint = torch.load(model_path, map_location='cpu')
-    net.load_state_dict(checkpoint['model_state_dict'])
-    net.eval()
-
-    # get output directory ready
-    if ckpt_iter == "pretrained":
-        speech_directory = os.path.join(output_directory, exp_path, 'speech', ckpt_iter)
-    else:
-        speech_directory = os.path.join(output_directory, exp_path, 'speech', '{}k'.format(ckpt_iter//1000))
-    if dump and not os.path.isdir(speech_directory):
-        os.makedirs(speech_directory)
-        os.chmod(speech_directory, 0o775)
-    print("speech_directory: ", speech_directory, flush=True)
-
-    # inference
-    all_generated_audio = []
-    all_clean_audio = []
-    sortkey = lambda name: '_'.join(name.split('/')[-1].split('_')[1:])
-    for clean_audio, noisy_audio, fileid in tqdm(dataloader):
-        filename = sortkey(fileid[0][0])
-
-        noisy_audio = noisy_audio.cuda()
-        LENGTH = len(noisy_audio[0].squeeze())
-        generated_audio = sampling(net, noisy_audio)
-
-        if dump:
-            wavwrite(os.path.join(speech_directory, 'enhanced_{}'.format(filename)),
-                     trainset_config["sample_rate"],
-                     generated_audio[0].squeeze().cpu().numpy())
-        else:
-            all_clean_audio.append(clean_audio[0].squeeze().cpu().numpy())
-            all_generated_audio.append(generated_audio[0].squeeze().cpu().numpy())
-
-    return all_clean_audio, all_generated_audio
-
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser()
-    parser.add_argument('-c', '--config', type=str, default='config.json',
-                        help='JSON file for configuration')
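-    # A note on the checkpoint flag below: '-ckpt_iter' accepts an integer
-    # iteration, 'max' (resolved through find_max_epoch above), or
-    # 'pretrained'; each value maps to '<ckpt_iter>.pkl' in the checkpoint
-    # directory.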
parser.add_argument('-ckpt_iter', '--ckpt_iter', default='max', - help='Which checkpoint to use; assign a number or "max" or "pretrained"') - parser.add_argument('-subset', '--subset', type=str, choices=['training', 'testing', 'validation'], - default='testing', help='subset for denoising') - args = parser.parse_args() - - # Parse configs. Globals nicer in this case - with open(args.config) as f: - data = f.read() - config = json.loads(data) - gen_config = config["gen_config"] - global network_config - network_config = config["network_config"] # to define wavenet - global train_config - train_config = config["train_config"] # train config - global trainset_config - trainset_config = config["trainset_config"] # to read trainset configurations - - torch.backends.cudnn.enabled = True - torch.backends.cudnn.benchmark = True - - if args.subset == "testing": - denoise(gen_config["output_directory"], - subset=args.subset, - ckpt_iter=args.ckpt_iter, - dump=True) \ No newline at end of file diff --git a/spaces/aijack/jojo/e4e/criteria/lpips/networks.py b/spaces/aijack/jojo/e4e/criteria/lpips/networks.py deleted file mode 100644 index 3a0d13ad2d560278f16586da68d3a5eadb26e746..0000000000000000000000000000000000000000 --- a/spaces/aijack/jojo/e4e/criteria/lpips/networks.py +++ /dev/null @@ -1,96 +0,0 @@ -from typing import Sequence - -from itertools import chain - -import torch -import torch.nn as nn -from torchvision import models - -from criteria.lpips.utils import normalize_activation - - -def get_network(net_type: str): - if net_type == 'alex': - return AlexNet() - elif net_type == 'squeeze': - return SqueezeNet() - elif net_type == 'vgg': - return VGG16() - else: - raise NotImplementedError('choose net_type from [alex, squeeze, vgg].') - - -class LinLayers(nn.ModuleList): - def __init__(self, n_channels_list: Sequence[int]): - super(LinLayers, self).__init__([ - nn.Sequential( - nn.Identity(), - nn.Conv2d(nc, 1, 1, 1, 0, bias=False) - ) for nc in n_channels_list - ]) - - for param in self.parameters(): - param.requires_grad = False - - -class BaseNet(nn.Module): - def __init__(self): - super(BaseNet, self).__init__() - - # register buffer - self.register_buffer( - 'mean', torch.Tensor([-.030, -.088, -.188])[None, :, None, None]) - self.register_buffer( - 'std', torch.Tensor([.458, .448, .450])[None, :, None, None]) - - def set_requires_grad(self, state: bool): - for param in chain(self.parameters(), self.buffers()): - param.requires_grad = state - - def z_score(self, x: torch.Tensor): - return (x - self.mean) / self.std - - def forward(self, x: torch.Tensor): - x = self.z_score(x) - - output = [] - for i, (_, layer) in enumerate(self.layers._modules.items(), 1): - x = layer(x) - if i in self.target_layers: - output.append(normalize_activation(x)) - if len(output) == len(self.target_layers): - break - return output - - -class SqueezeNet(BaseNet): - def __init__(self): - super(SqueezeNet, self).__init__() - - self.layers = models.squeezenet1_1(True).features - self.target_layers = [2, 5, 8, 10, 11, 12, 13] - self.n_channels_list = [64, 128, 256, 384, 384, 512, 512] - - self.set_requires_grad(False) - - -class AlexNet(BaseNet): - def __init__(self): - super(AlexNet, self).__init__() - - self.layers = models.alexnet(True).features - self.target_layers = [2, 5, 8, 10, 12] - self.n_channels_list = [64, 192, 384, 256, 256] - - self.set_requires_grad(False) - - -class VGG16(BaseNet): - def __init__(self): - super(VGG16, self).__init__() - - self.layers = models.vgg16(True).features - self.target_layers 
= [4, 9, 16, 23, 30] - self.n_channels_list = [64, 128, 256, 512, 512] - - self.set_requires_grad(False) \ No newline at end of file diff --git a/spaces/akhaliq/Mask2Former/mask2former_video/data_video/datasets/ytvis_api/ytvoseval.py b/spaces/akhaliq/Mask2Former/mask2former_video/data_video/datasets/ytvis_api/ytvoseval.py deleted file mode 100644 index f2cb8be6c8d009c2509a13b52437c7dec3b3ec0a..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former_video/data_video/datasets/ytvis_api/ytvoseval.py +++ /dev/null @@ -1,567 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from https://github.com/youtubevos/cocoapi - -__author__ = 'ychfan' - -import numpy as np -import datetime -import time -from collections import defaultdict -from pycocotools import mask as maskUtils -import copy - -class YTVOSeval: - # Interface for evaluating video instance segmentation on the YouTubeVIS dataset. - # - # The usage for YTVOSeval is as follows: - # cocoGt=..., cocoDt=... # load dataset and results - # E = YTVOSeval(cocoGt,cocoDt); # initialize YTVOSeval object - # E.params.recThrs = ...; # set parameters as desired - # E.evaluate(); # run per image evaluation - # E.accumulate(); # accumulate per image results - # E.summarize(); # display summary metrics of results - # For example usage see evalDemo.m and http://mscoco.org/. - # - # The evaluation parameters are as follows (defaults in brackets): - # imgIds - [all] N img ids to use for evaluation - # catIds - [all] K cat ids to use for evaluation - # iouThrs - [.5:.05:.95] T=10 IoU thresholds for evaluation - # recThrs - [0:.01:1] R=101 recall thresholds for evaluation - # areaRng - [...] A=4 object area ranges for evaluation - # maxDets - [1 10 100] M=3 thresholds on max detections per image - # iouType - ['segm'] set iouType to 'segm', 'bbox' or 'keypoints' - # iouType replaced the now DEPRECATED useSegm parameter. - # useCats - [1] if true use category labels for evaluation - # Note: if useCats=0 category labels are ignored as in proposal scoring. - # Note: multiple areaRngs [Ax2] and maxDets [Mx1] can be specified. - # - # evaluate(): evaluates detections on every image and every category and - # concats the results into the "evalImgs" with fields: - # dtIds - [1xD] id for each of the D detections (dt) - # gtIds - [1xG] id for each of the G ground truths (gt) - # dtMatches - [TxD] matching gt id at each IoU or 0 - # gtMatches - [TxG] matching dt id at each IoU or 0 - # dtScores - [1xD] confidence of each dt - # gtIgnore - [1xG] ignore flag for each gt - # dtIgnore - [TxD] ignore flag for each dt at each IoU - # - # accumulate(): accumulates the per-image, per-category evaluation - # results in "evalImgs" into the dictionary "eval" with fields: - # params - parameters used for evaluation - # date - date evaluation was performed - # counts - [T,R,K,A,M] parameter dimensions (see above) - # precision - [TxRxKxAxM] precision for every evaluation setting - # recall - [TxKxAxM] max recall for every evaluation setting - # Note: precision and recall==-1 for settings with no gt objects. - # - # See also coco, mask, pycocoDemo, pycocoEvalDemo - # - # Microsoft COCO Toolbox. version 2.0 - # Data, paper, and tutorials available at: http://mscoco.org/ - # Code written by Piotr Dollar and Tsung-Yi Lin, 2015. 
- # Licensed under the Simplified BSD License [see coco/license.txt] - def __init__(self, cocoGt=None, cocoDt=None, iouType='segm'): - ''' - Initialize CocoEval using coco APIs for gt and dt - :param cocoGt: coco object with ground truth annotations - :param cocoDt: coco object with detection results - :return: None - ''' - if not iouType: - print('iouType not specified. use default iouType segm') - self.cocoGt = cocoGt # ground truth COCO API - self.cocoDt = cocoDt # detections COCO API - self.params = {} # evaluation parameters - self.evalVids = defaultdict(list) # per-image per-category evaluation results [KxAxI] elements - self.eval = {} # accumulated evaluation results - self._gts = defaultdict(list) # gt for evaluation - self._dts = defaultdict(list) # dt for evaluation - self.params = Params(iouType=iouType) # parameters - self._paramsEval = {} # parameters for evaluation - self.stats = [] # result summarization - self.ious = {} # ious between all gts and dts - if not cocoGt is None: - self.params.vidIds = sorted(cocoGt.getVidIds()) - self.params.catIds = sorted(cocoGt.getCatIds()) - - - def _prepare(self): - ''' - Prepare ._gts and ._dts for evaluation based on params - :return: None - ''' - def _toMask(anns, coco): - # modify ann['segmentation'] by reference - for ann in anns: - for i, a in enumerate(ann['segmentations']): - if a: - rle = coco.annToRLE(ann, i) - ann['segmentations'][i] = rle - l = [a for a in ann['areas'] if a] - if len(l)==0: - ann['avg_area'] = 0 - else: - ann['avg_area'] = np.array(l).mean() - p = self.params - if p.useCats: - gts=self.cocoGt.loadAnns(self.cocoGt.getAnnIds(vidIds=p.vidIds, catIds=p.catIds)) - dts=self.cocoDt.loadAnns(self.cocoDt.getAnnIds(vidIds=p.vidIds, catIds=p.catIds)) - else: - gts=self.cocoGt.loadAnns(self.cocoGt.getAnnIds(vidIds=p.vidIds)) - dts=self.cocoDt.loadAnns(self.cocoDt.getAnnIds(vidIds=p.vidIds)) - - # convert ground truth to mask if iouType == 'segm' - if p.iouType == 'segm': - _toMask(gts, self.cocoGt) - _toMask(dts, self.cocoDt) - # set ignore flag - for gt in gts: - gt['ignore'] = gt['ignore'] if 'ignore' in gt else 0 - gt['ignore'] = 'iscrowd' in gt and gt['iscrowd'] - if p.iouType == 'keypoints': - gt['ignore'] = (gt['num_keypoints'] == 0) or gt['ignore'] - self._gts = defaultdict(list) # gt for evaluation - self._dts = defaultdict(list) # dt for evaluation - for gt in gts: - self._gts[gt['video_id'], gt['category_id']].append(gt) - for dt in dts: - self._dts[dt['video_id'], dt['category_id']].append(dt) - self.evalVids = defaultdict(list) # per-image per-category evaluation results - self.eval = {} # accumulated evaluation results - - def evaluate(self): - ''' - Run per image evaluation on given images and store results (a list of dict) in self.evalVids - :return: None - ''' - tic = time.time() - print('Running per image evaluation...') - p = self.params - # add backward compatibility if useSegm is specified in params - if not p.useSegm is None: - p.iouType = 'segm' if p.useSegm == 1 else 'bbox' - print('useSegm (deprecated) is not None. 
Running {} evaluation'.format(p.iouType))
-        print('Evaluate annotation type *{}*'.format(p.iouType))
-        p.vidIds = list(np.unique(p.vidIds))
-        if p.useCats:
-            p.catIds = list(np.unique(p.catIds))
-        p.maxDets = sorted(p.maxDets)
-        self.params = p
-
-        self._prepare()
-        # loop through images, area range, max detection number
-        catIds = p.catIds if p.useCats else [-1]
-
-        if p.iouType == 'segm' or p.iouType == 'bbox':
-            computeIoU = self.computeIoU
-        elif p.iouType == 'keypoints':
-            computeIoU = self.computeOks
-        self.ious = {(vidId, catId): computeIoU(vidId, catId) \
-                        for vidId in p.vidIds
-                        for catId in catIds}
-
-        evaluateVid = self.evaluateVid
-        maxDet = p.maxDets[-1]
-
-
-        self.evalImgs = [evaluateVid(vidId, catId, areaRng, maxDet)
-                 for catId in catIds
-                 for areaRng in p.areaRng
-                 for vidId in p.vidIds
-             ]
-        self._paramsEval = copy.deepcopy(self.params)
-        toc = time.time()
-        print('DONE (t={:0.2f}s).'.format(toc-tic))
-
-    def computeIoU(self, vidId, catId):
-        p = self.params
-        if p.useCats:
-            gt = self._gts[vidId,catId]
-            dt = self._dts[vidId,catId]
-        else:
-            gt = [_ for cId in p.catIds for _ in self._gts[vidId,cId]]
-            dt = [_ for cId in p.catIds for _ in self._dts[vidId,cId]]
-        if len(gt) == 0 and len(dt) == 0:
-            return []
-        inds = np.argsort([-d['score'] for d in dt], kind='mergesort')
-        dt = [dt[i] for i in inds]
-        if len(dt) > p.maxDets[-1]:
-            dt = dt[0:p.maxDets[-1]]
-
-        if p.iouType == 'segm':
-            g = [g['segmentations'] for g in gt]
-            d = [d['segmentations'] for d in dt]
-        elif p.iouType == 'bbox':
-            g = [g['bboxes'] for g in gt]
-            d = [d['bboxes'] for d in dt]
-        else:
-            raise Exception('unknown iouType for iou computation')
-
-        # compute iou between each dt and gt region
-        iscrowd = [int(o['iscrowd']) for o in gt]
-        #ious = maskUtils.iou(d,g,iscrowd)
-        def iou_seq(d_seq, g_seq):
-            i = .0
-            u = .0
-            for d, g in zip(d_seq, g_seq):
-                if d and g:
-                    i += maskUtils.area(maskUtils.merge([d, g], True))
-                    u += maskUtils.area(maskUtils.merge([d, g], False))
-                elif not d and g:
-                    u += maskUtils.area(g)
-                elif d and not g:
-                    u += maskUtils.area(d)
-            if not u > .0:
-                print("Mask sizes in video {} and category {} may not match!".format(vidId, catId))
-            iou = i / u if u > .0 else .0
-            return iou
-        ious = np.zeros([len(d), len(g)])
-        for i, j in np.ndindex(ious.shape):
-            ious[i, j] = iou_seq(d[i], g[j])
-        #print(vidId, catId, ious.shape, ious)
-        return ious
-
-    def computeOks(self, imgId, catId):
-        p = self.params
-        # dimension here should be Nxm
-        gts = self._gts[imgId, catId]
-        dts = self._dts[imgId, catId]
-        inds = np.argsort([-d['score'] for d in dts], kind='mergesort')
-        dts = [dts[i] for i in inds]
-        if len(dts) > p.maxDets[-1]:
-            dts = dts[0:p.maxDets[-1]]
-        # if len(gts) == 0 and len(dts) == 0:
-        if len(gts) == 0 or len(dts) == 0:
-            return []
-        ious = np.zeros((len(dts), len(gts)))
-        sigmas = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62, .62, 1.07, 1.07, .87, .87, .89, .89])/10.0
-        vars = (sigmas * 2)**2
-        k = len(sigmas)
-        # compute oks between each detection and ground truth object
-        for j, gt in enumerate(gts):
-            # create bounds for ignore regions (double the gt bbox)
-            g = np.array(gt['keypoints'])
-            xg = g[0::3]; yg = g[1::3]; vg = g[2::3]
-            k1 = np.count_nonzero(vg > 0)
-            bb = gt['bbox']
-            x0 = bb[0] - bb[2]; x1 = bb[0] + bb[2] * 2
-            y0 = bb[1] - bb[3]; y1 = bb[1] + bb[3] * 2
-            for i, dt in enumerate(dts):
-                d = np.array(dt['keypoints'])
-                xd = d[0::3]; yd = d[1::3]
-                if k1 > 0:
-                    # measure the per-keypoint distance if keypoints visible
-                    dx = xd - xg
-                    dy = yd - yg
-                else:
-                    # measure minimum distance to keypoints in (x0,y0) & (x1,y1)
-                    z = np.zeros((k))
-                    dx = np.max((z, x0-xd),axis=0)+np.max((z, xd-x1),axis=0)
-                    dy = np.max((z, y0-yd),axis=0)+np.max((z, yd-y1),axis=0)
-                e = (dx**2 + dy**2) / vars / (gt['avg_area']+np.spacing(1)) / 2
-                if k1 > 0:
-                    e = e[vg > 0]
-                ious[i, j] = np.sum(np.exp(-e)) / e.shape[0]
-        return ious
-
-    def evaluateVid(self, vidId, catId, aRng, maxDet):
-        '''
-        Perform evaluation for a single category and video.
-        :return: dict (single video results)
-        '''
-        p = self.params
-        if p.useCats:
-            gt = self._gts[vidId,catId]
-            dt = self._dts[vidId,catId]
-        else:
-            gt = [_ for cId in p.catIds for _ in self._gts[vidId,cId]]
-            dt = [_ for cId in p.catIds for _ in self._dts[vidId,cId]]
-        if len(gt) == 0 and len(dt) == 0:
-            return None
-
-        for g in gt:
-            if g['ignore'] or (g['avg_area']<aRng[0] or g['avg_area']>aRng[1]):
-                g['_ignore'] = 1
-            else:
-                g['_ignore'] = 0
-
-        # sort dt highest score first, sort gt ignore last
-        gtind = np.argsort([g['_ignore'] for g in gt], kind='mergesort')
-        gt = [gt[i] for i in gtind]
-        dtind = np.argsort([-d['score'] for d in dt], kind='mergesort')
-        dt = [dt[i] for i in dtind[0:maxDet]]
-        iscrowd = [int(o['iscrowd']) for o in gt]
-        # load computed ious
-        ious = self.ious[vidId, catId][:, gtind] if len(self.ious[vidId, catId]) > 0 else self.ious[vidId, catId]
-
-        T = len(p.iouThrs)
-        G = len(gt)
-        D = len(dt)
-        gtm = np.zeros((T,G))
-        dtm = np.zeros((T,D))
-        gtIg = np.array([g['_ignore'] for g in gt])
-        dtIg = np.zeros((T,D))
-        if not len(ious) == 0:
-            for tind, t in enumerate(p.iouThrs):
-                for dind, d in enumerate(dt):
-                    # information about best match so far (m=-1 -> unmatched)
-                    iou = min([t, 1-1e-10])
-                    m = -1
-                    for gind, g in enumerate(gt):
-                        # if this gt already matched, and not a crowd, continue
-                        if gtm[tind,gind] > 0 and not iscrowd[gind]:
-                            continue
-                        # if dt matched to reg gt, and on ignore gt, stop
-                        if m > -1 and gtIg[m] == 0 and gtIg[gind] == 1:
-                            break
-                        # continue to next gt unless better match made
-                        if ious[dind,gind] < iou:
-                            continue
-                        # if match successful and best so far, store appropriately
-                        iou = ious[dind,gind]
-                        m = gind
-                    # if match made store id of match for both dt and gt
-                    if m == -1:
-                        continue
-                    dtIg[tind,dind] = gtIg[m]
-                    dtm[tind,dind] = gt[m]['id']
-                    gtm[tind,m] = d['id']
-        # set unmatched detections outside of area range to ignore
-        a = np.array([d['avg_area']<aRng[0] or d['avg_area']>aRng[1] for d in dt]).reshape((1, len(dt)))
-        dtIg = np.logical_or(dtIg, np.logical_and(dtm == 0, np.repeat(a, T, 0)))
-        # store results for given video and category
-        return {
-                'video_id': vidId,
-                'category_id': catId,
-                'aRng': aRng,
-                'maxDet': maxDet,
-                'dtIds': [d['id'] for d in dt],
-                'gtIds': [g['id'] for g in gt],
-                'dtMatches': dtm,
-                'gtMatches': gtm,
-                'dtScores': [d['score'] for d in dt],
-                'gtIgnore': gtIg,
-                'dtIgnore': dtIg,
-            }
-
-    def accumulate(self, p = None):
-        '''
-        Accumulate per image evaluation results and store the result in self.eval
-        :param p: input params for evaluation
-        :return: None
-        '''
-        print('Accumulating evaluation results...')
-        tic = time.time()
-        if not self.evalImgs:
-            print('Please run evaluate() first')
-        # allows input customized parameters
-        if p is None:
-            p = self.params
-        p.catIds = p.catIds if p.useCats == 1 else [-1]
-        T = len(p.iouThrs)
-        R = len(p.recThrs)
-        K = len(p.catIds) if p.useCats else 1
-        A = len(p.areaRng)
-        M = len(p.maxDets)
-        precision = -np.ones((T,R,K,A,M)) # -1 for the precision of absent categories
-        recall = -np.ones((T,K,A,M))
-        scores = -np.ones((T,R,K,A,M))
-
-        # create dictionary for future indexing
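-        # How the flat indexing below works: evaluate() filled self.evalImgs
-        # ordered as catIds x areaRng x vidIds, so with I0 videos and A0 area
-        # ranges the entry for (cat k0, area a0, video i) lives at the flat
-        # offset k0*A0*I0 + a0*I0 + i.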
-        _pe = self._paramsEval
-        catIds = _pe.catIds if _pe.useCats else [-1]
-        setK = set(catIds)
-        setA = set(map(tuple, _pe.areaRng))
-        setM = set(_pe.maxDets)
-        setI = set(_pe.vidIds)
-        # get inds to evaluate
-        k_list = [n for n, k in enumerate(p.catIds) if k in setK]
-        m_list = [m for n, m in enumerate(p.maxDets) if m in setM]
-        a_list = [n for n, a in enumerate(map(lambda x: tuple(x), p.areaRng)) if a in setA]
-        i_list = [n for n, i in enumerate(p.vidIds) if i in setI]
-        I0 = len(_pe.vidIds)
-        A0 = len(_pe.areaRng)
-        # retrieve E at each category, area range, and max number of detections
-        for k, k0 in enumerate(k_list):
-            Nk = k0*A0*I0
-            for a, a0 in enumerate(a_list):
-                Na = a0*I0
-                for m, maxDet in enumerate(m_list):
-                    E = [self.evalImgs[Nk + Na + i] for i in i_list]
-                    E = [e for e in E if not e is None]
-                    if len(E) == 0:
-                        continue
-                    dtScores = np.concatenate([e['dtScores'][0:maxDet] for e in E])
-
-                    # different sorting methods generate slightly different results;
-                    # mergesort is used to be consistent with the Matlab implementation.
-                    inds = np.argsort(-dtScores, kind='mergesort')
-                    dtScoresSorted = dtScores[inds]
-
-                    dtm = np.concatenate([e['dtMatches'][:,0:maxDet] for e in E], axis=1)[:,inds]
-                    dtIg = np.concatenate([e['dtIgnore'][:,0:maxDet] for e in E], axis=1)[:,inds]
-                    gtIg = np.concatenate([e['gtIgnore'] for e in E])
-                    npig = np.count_nonzero(gtIg == 0)
-                    if npig == 0:
-                        continue
-                    tps = np.logical_and(dtm, np.logical_not(dtIg))
-                    fps = np.logical_and(np.logical_not(dtm), np.logical_not(dtIg))
-
-                    # np.float was removed in NumPy 1.24; use the explicit float64 dtype
-                    tp_sum = np.cumsum(tps, axis=1).astype(dtype=np.float64)
-                    fp_sum = np.cumsum(fps, axis=1).astype(dtype=np.float64)
-                    for t, (tp, fp) in enumerate(zip(tp_sum, fp_sum)):
-                        tp = np.array(tp)
-                        fp = np.array(fp)
-                        nd = len(tp)
-                        rc = tp / npig
-                        pr = tp / (fp+tp+np.spacing(1))
-                        q = np.zeros((R,))
-                        ss = np.zeros((R,))
-
-                        if nd:
-                            recall[t,k,a,m] = rc[-1]
-                        else:
-                            recall[t,k,a,m] = 0
-
-                        # numpy is slow without cython optimization for accessing elements;
-                        # using python arrays gets a significant speed improvement
-                        pr = pr.tolist(); q = q.tolist()
-
-                        for i in range(nd-1, 0, -1):
-                            if pr[i] > pr[i-1]:
-                                pr[i-1] = pr[i]
-
-                        inds = np.searchsorted(rc, p.recThrs, side='left')
-                        try:
-                            for ri, pi in enumerate(inds):
-                                q[ri] = pr[pi]
-                                ss[ri] = dtScoresSorted[pi]
-                        except:
-                            pass
-                        precision[t,:,k,a,m] = np.array(q)
-                        scores[t,:,k,a,m] = np.array(ss)
-        self.eval = {
-            'params': p,
-            'counts': [T, R, K, A, M],
-            'date': datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
-            'precision': precision,
-            'recall': recall,
-            'scores': scores,
-        }
-        toc = time.time()
-        print('DONE (t={:0.2f}s).'.format(toc-tic))
-
-    def summarize(self):
-        '''
-        Compute and display summary metrics for evaluation results.
-        Note this function can *only* be applied on the default parameter setting
-        '''
-        def _summarize(ap=1, iouThr=None, areaRng='all', maxDets=100):
-            p = self.params
-            iStr = ' {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} ] = {:0.3f}'
-            titleStr = 'Average Precision' if ap == 1 else 'Average Recall'
-            typeStr = '(AP)' if ap == 1 else '(AR)'
-            iouStr = '{:0.2f}:{:0.2f}'.format(p.iouThrs[0], p.iouThrs[-1]) \
-                if iouThr is None else '{:0.2f}'.format(iouThr)
-
-            aind = [i for i, aRng in enumerate(p.areaRngLbl) if aRng == areaRng]
-            mind = [i for i, mDet in enumerate(p.maxDets) if mDet == maxDets]
-            if ap == 1:
-                # dimension of precision: [TxRxKxAxM]
-                s = self.eval['precision']
-                # IoU
-                if iouThr is not None:
-                    t = np.where(iouThr == p.iouThrs)[0]
-                    s = s[t]
-                s = s[:,:,:,aind,mind]
-            else:
-                # dimension of recall: [TxKxAxM]
-                s = self.eval['recall']
-                if iouThr is not None:
-                    t = np.where(iouThr == p.iouThrs)[0]
-                    s = s[t]
-                s = s[:,:,aind,mind]
-            if len(s[s>-1]) == 0:
-                mean_s = -1
-            else:
-                mean_s = np.mean(s[s>-1])
-            print(iStr.format(titleStr, typeStr, iouStr, areaRng, maxDets, mean_s))
-            return mean_s
-        def _summarizeDets():
-            stats = np.zeros((12,))
-            stats[0] = _summarize(1)
-            stats[1] = _summarize(1, iouThr=.5, maxDets=self.params.maxDets[2])
-            stats[2] = _summarize(1, iouThr=.75, maxDets=self.params.maxDets[2])
-            stats[3] = _summarize(1, areaRng='small', maxDets=self.params.maxDets[2])
-            stats[4] = _summarize(1, areaRng='medium', maxDets=self.params.maxDets[2])
-            stats[5] = _summarize(1, areaRng='large', maxDets=self.params.maxDets[2])
-            stats[6] = _summarize(0, maxDets=self.params.maxDets[0])
-            stats[7] = _summarize(0, maxDets=self.params.maxDets[1])
-            stats[8] = _summarize(0, maxDets=self.params.maxDets[2])
-            stats[9] = _summarize(0, areaRng='small', maxDets=self.params.maxDets[2])
-            stats[10] = _summarize(0, areaRng='medium', maxDets=self.params.maxDets[2])
-            stats[11] = _summarize(0, areaRng='large', maxDets=self.params.maxDets[2])
-            return stats
-        def _summarizeKps():
-            stats = np.zeros((10,))
-            stats[0] = _summarize(1, maxDets=20)
-            stats[1] = _summarize(1, maxDets=20, iouThr=.5)
-            stats[2] = _summarize(1, maxDets=20, iouThr=.75)
-            stats[3] = _summarize(1, maxDets=20, areaRng='medium')
-            stats[4] = _summarize(1, maxDets=20, areaRng='large')
-            stats[5] = _summarize(0, maxDets=20)
-            stats[6] = _summarize(0, maxDets=20, iouThr=.5)
-            stats[7] = _summarize(0, maxDets=20, iouThr=.75)
-            stats[8] = _summarize(0, maxDets=20, areaRng='medium')
-            stats[9] = _summarize(0, maxDets=20, areaRng='large')
-            return stats
-        if not self.eval:
-            raise Exception('Please run accumulate() first')
-        iouType = self.params.iouType
-        if iouType == 'segm' or iouType == 'bbox':
-            summarize = _summarizeDets
-        elif iouType == 'keypoints':
-            summarize = _summarizeKps
-        self.stats = summarize()
-
-    def __str__(self):
-        self.summarize()
-
-class Params:
-    '''
-    Params for coco evaluation api
-    '''
-    def setDetParams(self):
-        self.vidIds = []
-        self.catIds = []
-        # np.arange causes trouble: the data point on arange is slightly larger than the true value
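-        # (The int(...) casts in the uncommented lines below matter: newer
-        # NumPy requires an integer `num` for np.linspace, so the rounded
-        # float from np.round is cast explicitly.)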
-        #self.iouThrs = np.linspace(.5, 0.95, np.round((0.95 - .5) / .05) + 1, endpoint=True)
-        #self.recThrs = np.linspace(.0, 1.00, np.round((1.00 - .0) / .01) + 1, endpoint=True)
-        self.iouThrs = np.linspace(.5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True)
-        self.recThrs = np.linspace(.0, 1.00, int(np.round((1.00 - .0) / .01)) + 1, endpoint=True)
-        self.maxDets = [1, 10, 100]
-        self.areaRng = [[0 ** 2, 1e5 ** 2], [0 ** 2, 128 ** 2], [128 ** 2, 256 ** 2], [256 ** 2, 1e5 ** 2]]
-        self.areaRngLbl = ['all', 'small', 'medium', 'large']
-        self.useCats = 1
-
-    def setKpParams(self):
-        self.vidIds = []
-        self.catIds = []
-        # np.arange causes trouble: the data point on arange is slightly larger than the true value
-        self.iouThrs = np.linspace(.5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True)
-        self.recThrs = np.linspace(.0, 1.00, int(np.round((1.00 - .0) / .01)) + 1, endpoint=True)
-        self.maxDets = [20]
-        self.areaRng = [[0 ** 2, 1e5 ** 2], [32 ** 2, 96 ** 2], [96 ** 2, 1e5 ** 2]]
-        self.areaRngLbl = ['all', 'medium', 'large']
-        self.useCats = 1
-
-    def __init__(self, iouType='segm'):
-        if iouType == 'segm' or iouType == 'bbox':
-            self.setDetParams()
-        elif iouType == 'keypoints':
-            self.setKpParams()
-        else:
-            raise Exception('iouType not supported')
-        self.iouType = iouType
-        # useSegm is deprecated
-        self.useSegm = None
diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/NamedNodeMap.pod b/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/NamedNodeMap.pod
deleted file mode 100644
index 62c276272a8483b0bfc2966ba7a990ae96175363..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/NamedNodeMap.pod
+++ /dev/null
@@ -1,130 +0,0 @@
-=head1 NAME
-
-XML::DOM::NamedNodeMap - A hash table interface for XML::DOM
-
-=head1 DESCRIPTION
-
-Objects implementing the NamedNodeMap interface are used to represent
-collections of nodes that can be accessed by name. Note that
-NamedNodeMap does not inherit from NodeList; NamedNodeMaps are not
-maintained in any particular order. Objects contained in an object
-implementing NamedNodeMap may also be accessed by an ordinal index, but
-this is simply to allow convenient enumeration of the contents of a
-NamedNodeMap, and does not imply that the DOM specifies an order to
-these Nodes.
-
-Note that in this implementation, the objects added to a NamedNodeMap
-are kept in order.
-
-=head2 METHODS
-
-=over 4
-
-=item getNamedItem (name)
-
-Retrieves a node specified by name.
-
-Return Value: A Node (of any type) with the specified name, or undef if
-the specified name did not identify any node in the map.
-
-=item setNamedItem (arg)
-
-Adds a node using its nodeName attribute.
-
-As the nodeName attribute is used to derive the name which
-the node must be stored under, multiple nodes of certain
-types (those that have a "special" string value) cannot be
-stored as the names would clash. This is seen as preferable
-to allowing nodes to be aliased.
-
-Parameters:
- I<arg>  A node to store in a named node map.
-
-The node will later be accessible using the value of the nodeName
-attribute of the node. If a node with that name is
-already present in the map, it is replaced by the new one.
-
-Return Value: If the new Node replaces an existing node with the same
-name the previously existing Node is returned, otherwise undef is returned.
- -DOMExceptions: - -=over 4 - -=item * WRONG_DOCUMENT_ERR - -Raised if arg was created from a different document than the one that -created the NamedNodeMap. - -=item * NO_MODIFICATION_ALLOWED_ERR - -Raised if this NamedNodeMap is readonly. - -=item * INUSE_ATTRIBUTE_ERR - -Raised if arg is an Attr that is already an attribute of another Element object. -The DOM user must explicitly clone Attr nodes to re-use them in other elements. - -=back - -=item removeNamedItem (name) - -Removes a node specified by name. If the removed node is an -Attr with a default value it is immediately replaced. - -Return Value: The node removed from the map or undef if no node with -such a name exists. - -DOMException: - -=over 4 - -=item * NOT_FOUND_ERR - -Raised if there is no node named name in the map. - -=back - -=item item (index) - -Returns the indexth item in the map. If index is greater than -or equal to the number of nodes in the map, this returns undef. - -Return Value: The node at the indexth position in the NamedNodeMap, or -undef if that is not a valid index. - -=item getLength - -Returns the number of nodes in the map. The range of valid child node -indices is 0 to length-1 inclusive. - -=back - -=head2 Additional methods not in the DOM Spec - -=over 4 - -=item getValues - -Returns a NodeList with the nodes contained in the NamedNodeMap. -The NodeList is "live", in that it reflects changes made to the NamedNodeMap. - -When this method is called in a list context, it returns a regular perl list -containing the values. Note that this list is not "live". E.g. - - @list = $map->getValues; # returns a perl list - $nodelist = $map->getValues; # returns a NodeList (object ref.) - for my $val ($map->getValues) # iterate over the values - -=item getChildIndex (node) - -Returns the index of the node in the NodeList as returned by getValues, or -1 -if the node is not in the NamedNodeMap. - -=item dispose - -Removes all circular references in this NamedNodeMap and its descendants so the -objects can be claimed for garbage collection. The objects should not be used -afterwards. 
- -=back diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/utils/log.py b/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/utils/log.py deleted file mode 100644 index 45cab71be33d658b084e8f81f4d3901bd0c7dae6..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/utils/log.py +++ /dev/null @@ -1,9 +0,0 @@ -import logging - - -def get_console_logger(name, level=logging.WARNING): - return logging.getLogger("pyrouge") - - -def get_global_console_logger(level=logging.WARNING): - return logging.getLogger("pyrouge") diff --git a/spaces/akhaliq/mlsd/static/css/app.css b/spaces/akhaliq/mlsd/static/css/app.css deleted file mode 100644 index b8dcee2e81d09edfee44fdae4c28f3622d7fefe6..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/mlsd/static/css/app.css +++ /dev/null @@ -1,11 +0,0 @@ -#app { - padding: 20px; -} - -#result .item { - padding-bottom: 20px; -} - -.form-content-container { - padding-left: 20px; -} diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/utils/egg_link.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/utils/egg_link.py deleted file mode 100644 index 9e0da8d2d29d94d15dfbf49dff90df7eafd68bac..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/utils/egg_link.py +++ /dev/null @@ -1,75 +0,0 @@ -# The following comment should be removed at some point in the future. -# mypy: strict-optional=False - -import os -import re -import sys -from typing import Optional - -from pip._internal.locations import site_packages, user_site -from pip._internal.utils.virtualenv import ( - running_under_virtualenv, - virtualenv_no_global, -) - -__all__ = [ - "egg_link_path_from_sys_path", - "egg_link_path_from_location", -] - - -def _egg_link_name(raw_name: str) -> str: - """ - Convert a Name metadata value to a .egg-link name, by applying - the same substitution as pkg_resources's safe_name function. - Note: we cannot use canonicalize_name because it has a different logic. - """ - return re.sub("[^A-Za-z0-9.]+", "-", raw_name) + ".egg-link" - - -def egg_link_path_from_sys_path(raw_name: str) -> Optional[str]: - """ - Look for a .egg-link file for project name, by walking sys.path. - """ - egg_link_name = _egg_link_name(raw_name) - for path_item in sys.path: - egg_link = os.path.join(path_item, egg_link_name) - if os.path.isfile(egg_link): - return egg_link - return None - - -def egg_link_path_from_location(raw_name: str) -> Optional[str]: - """ - Return the path for the .egg-link file if it exists, otherwise, None. - - There's 3 scenarios: - 1) not in a virtualenv - try to find in site.USER_SITE, then site_packages - 2) in a no-global virtualenv - try to find in site_packages - 3) in a yes-global virtualenv - try to find in site_packages, then site.USER_SITE - (don't look in global location) - - For #1 and #3, there could be odd cases, where there's an egg-link in 2 - locations. - - This method will just return the first one found. 
- """ - sites = [] - if running_under_virtualenv(): - sites.append(site_packages) - if not virtualenv_no_global() and user_site: - sites.append(user_site) - else: - if user_site: - sites.append(user_site) - sites.append(site_packages) - - egg_link_name = _egg_link_name(raw_name) - for site in sites: - egglink = os.path.join(site, egg_link_name) - if os.path.isfile(egglink): - return egglink - return None diff --git a/spaces/ali-ghamdan/deoldify/fastai/vision/data.py b/spaces/ali-ghamdan/deoldify/fastai/vision/data.py deleted file mode 100644 index 20f584dd28d8f102ca079f031e9faec6c755773d..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/deoldify/fastai/vision/data.py +++ /dev/null @@ -1,461 +0,0 @@ -"Manages data input pipeline - folderstransformbatch input. Includes support for classification, segmentation and bounding boxes" -from numbers import Integral -from ..torch_core import * -from .image import * -from .transform import * -from ..data_block import * -from ..basic_data import * -from ..layers import * -from .learner import * -from torchvision import transforms as tvt - -__all__ = ['get_image_files', 'denormalize', 'get_annotations', 'ImageDataBunch', - 'ImageList', 'normalize', 'normalize_funcs', 'resize_to', - 'channel_view', 'mnist_stats', 'cifar_stats', 'imagenet_stats', 'imagenet_stats_inception', 'download_images', - 'verify_images', 'bb_pad_collate', 'ImageImageList', 'PointsLabelList', - 'ObjectCategoryList', 'ObjectItemList', 'SegmentationLabelList', 'SegmentationItemList', 'PointsItemList'] - -image_extensions = set(k for k,v in mimetypes.types_map.items() if v.startswith('image/')) - -def get_image_files(c:PathOrStr, check_ext:bool=True, recurse=False)->FilePathList: - "Return list of files in `c` that are images. `check_ext` will filter to `image_extensions`." - return get_files(c, extensions=(image_extensions if check_ext else None), recurse=recurse) - -def get_annotations(fname, prefix=None): - "Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes." - annot_dict = json.load(open(fname)) - id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list) - classes = {} - for o in annot_dict['categories']: - classes[o['id']] = o['name'] - for o in annot_dict['annotations']: - bb = o['bbox'] - id2bboxes[o['image_id']].append([bb[1],bb[0], bb[3]+bb[1], bb[2]+bb[0]]) - id2cats[o['image_id']].append(classes[o['category_id']]) - for o in annot_dict['images']: - if o['id'] in id2bboxes: - id2images[o['id']] = ifnone(prefix, '') + o['file_name'] - ids = list(id2images.keys()) - return [id2images[k] for k in ids], [[id2bboxes[k], id2cats[k]] for k in ids] - -def bb_pad_collate(samples:BatchSamples, pad_idx:int=0) -> Tuple[FloatTensor, Tuple[LongTensor, LongTensor]]: - "Function that collect `samples` of labelled bboxes and adds padding with `pad_idx`." - if isinstance(samples[0][1], int): return data_collate(samples) - max_len = max([len(s[1].data[1]) for s in samples]) - bboxes = torch.zeros(len(samples), max_len, 4) - labels = torch.zeros(len(samples), max_len).long() + pad_idx - imgs = [] - for i,s in enumerate(samples): - imgs.append(s[0].data[None]) - bbs, lbls = s[1].data - if not (bbs.nelement() == 0): - bboxes[i,-len(lbls):] = bbs - labels[i,-len(lbls):] = tensor(lbls) - return torch.cat(imgs,0), (bboxes,labels) - -def normalize(x:TensorImage, mean,std:Tensor)->TensorImage: - "Normalize `x` with `mean` and `std`." 
- return (x-mean[...,None,None]) / std[...,None,None] - -def denormalize(x:TensorImage, mean,std:Tensor, do_x:bool=True)->TensorImage: - "Denormalize `x` with `mean` and `std`." - return x.cpu().float()*std[...,None,None] + mean[...,None,None] if do_x else x.cpu() - -def _normalize_batch(b:Tuple[Tensor,Tensor], mean:Tensor, std:Tensor, do_x:bool=True, do_y:bool=False)->Tuple[Tensor,Tensor]: - "`b` = `x`,`y` - normalize `x` array of imgs and `do_y` optionally `y`." - x,y = b - mean,std = mean.to(x.device),std.to(x.device) - if do_x: x = normalize(x,mean,std) - if do_y and len(y.shape) == 4: y = normalize(y,mean,std) - return x,y - -def normalize_funcs(mean:Tensor, std:Tensor, do_x:bool=True, do_y:bool=False)->Tuple[Callable,Callable]: - "Create normalize/denormalize func using `mean` and `std`, can specify `do_y` and `device`." - mean,std = tensor(mean),tensor(std) - return (partial(_normalize_batch, mean=mean, std=std, do_x=do_x, do_y=do_y), - partial(denormalize, mean=mean, std=std, do_x=do_x)) - -cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261]) -imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) -imagenet_stats_inception = ([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) -mnist_stats = ([0.15]*3, [0.15]*3) - -def channel_view(x:Tensor)->Tensor: - "Make channel the first axis of `x` and flatten remaining axes" - return x.transpose(0,1).contiguous().view(x.shape[1],-1) - -class ImageDataBunch(DataBunch): - "DataBunch suitable for computer vision." - _square_show = True - - @classmethod - def create_from_ll(cls, lls:LabelLists, bs:int=64, val_bs:int=None, ds_tfms:Optional[TfmList]=None, - num_workers:int=defaults.cpus, dl_tfms:Optional[Collection[Callable]]=None, device:torch.device=None, - test:Optional[PathOrStr]=None, collate_fn:Callable=data_collate, size:int=None, no_check:bool=False, - resize_method:ResizeMethod=None, mult:int=None, padding_mode:str='reflection', - mode:str='bilinear', tfm_y:bool=False)->'ImageDataBunch': - "Create an `ImageDataBunch` from `LabelLists` `lls` with potential `ds_tfms`." - lls = lls.transform(tfms=ds_tfms, size=size, resize_method=resize_method, mult=mult, padding_mode=padding_mode, - mode=mode, tfm_y=tfm_y) - if test is not None: lls.add_test_folder(test) - return lls.databunch(bs=bs, val_bs=val_bs, dl_tfms=dl_tfms, num_workers=num_workers, collate_fn=collate_fn, - device=device, no_check=no_check) - - @classmethod - def from_folder(cls, path:PathOrStr, train:PathOrStr='train', valid:PathOrStr='valid', - valid_pct=None, seed:int=None, classes:Collection=None, **kwargs:Any)->'ImageDataBunch': - "Create from imagenet style dataset in `path` with `train`,`valid`,`test` subfolders (or provide `valid_pct`)." - path=Path(path) - il = ImageList.from_folder(path) - if valid_pct is None: src = il.split_by_folder(train=train, valid=valid) - else: src = il.split_by_rand_pct(valid_pct, seed) - src = src.label_from_folder(classes=classes) - return cls.create_from_ll(src, **kwargs) - - @classmethod - def from_df(cls, path:PathOrStr, df:pd.DataFrame, folder:PathOrStr=None, label_delim:str=None, valid_pct:float=0.2, - seed:int=None, fn_col:IntsOrStrs=0, label_col:IntsOrStrs=1, suffix:str='', **kwargs:Any)->'ImageDataBunch': - "Create from a `DataFrame` `df`." 
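-        # Illustrative call (the column names here are hypothetical):
-        #   data = ImageDataBunch.from_df(path, df, folder='train',
-        #                                 fn_col='name', label_col='label', size=224)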
- src = (ImageList.from_df(df, path=path, folder=folder, suffix=suffix, cols=fn_col) - .split_by_rand_pct(valid_pct, seed) - .label_from_df(label_delim=label_delim, cols=label_col)) - return cls.create_from_ll(src, **kwargs) - - @classmethod - def from_csv(cls, path:PathOrStr, folder:PathOrStr=None, label_delim:str=None, csv_labels:PathOrStr='labels.csv', - valid_pct:float=0.2, seed:int=None, fn_col:int=0, label_col:int=1, suffix:str='', delimiter:str=None, - header:Optional[Union[int,str]]='infer', **kwargs:Any)->'ImageDataBunch': - "Create from a csv file in `path/csv_labels`." - path = Path(path) - df = pd.read_csv(path/csv_labels, header=header, delimiter=delimiter) - return cls.from_df(path, df, folder=folder, label_delim=label_delim, valid_pct=valid_pct, seed=seed, - fn_col=fn_col, label_col=label_col, suffix=suffix, **kwargs) - - @classmethod - def from_lists(cls, path:PathOrStr, fnames:FilePathList, labels:Collection[str], valid_pct:float=0.2, seed:int=None, - item_cls:Callable=None, **kwargs): - "Create from list of `fnames` in `path`." - item_cls = ifnone(item_cls, ImageList) - fname2label = {f:l for (f,l) in zip(fnames, labels)} - src = (item_cls(fnames, path=path).split_by_rand_pct(valid_pct, seed) - .label_from_func(lambda x:fname2label[x])) - return cls.create_from_ll(src, **kwargs) - - @classmethod - def from_name_func(cls, path:PathOrStr, fnames:FilePathList, label_func:Callable, valid_pct:float=0.2, seed:int=None, - **kwargs): - "Create from list of `fnames` in `path` with `label_func`." - src = ImageList(fnames, path=path).split_by_rand_pct(valid_pct, seed) - return cls.create_from_ll(src.label_from_func(label_func), **kwargs) - - @classmethod - def from_name_re(cls, path:PathOrStr, fnames:FilePathList, pat:str, valid_pct:float=0.2, **kwargs): - "Create from list of `fnames` in `path` with re expression `pat`." - pat = re.compile(pat) - def _get_label(fn): - if isinstance(fn, Path): fn = fn.as_posix() - res = pat.search(str(fn)) - assert res,f'Failed to find "{pat}" in "{fn}"' - return res.group(1) - return cls.from_name_func(path, fnames, _get_label, valid_pct=valid_pct, **kwargs) - - @staticmethod - def single_from_classes(path:Union[Path, str], classes:Collection[str], ds_tfms:TfmList=None, **kwargs): - "Create an empty `ImageDataBunch` in `path` with `classes`. Typically used for inference." 
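-        # Deprecated: the warning below points to `Learner.export()` followed
-        # by `load_learner()` as the supported inference path.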
- warn("""This method is deprecated and will be removed in a future version, use `load_learner` after - `Learner.export()`""", DeprecationWarning) - sd = ImageList([], path=path, ignore_empty=True).split_none() - return sd.label_const(0, label_cls=CategoryList, classes=classes).transform(ds_tfms, **kwargs).databunch() - - def batch_stats(self, funcs:Collection[Callable]=None, ds_type:DatasetType=DatasetType.Train)->Tensor: - "Grab a batch of data and call reduction function `func` per channel" - funcs = ifnone(funcs, [torch.mean,torch.std]) - x = self.one_batch(ds_type=ds_type, denorm=False)[0].cpu() - return [func(channel_view(x), 1) for func in funcs] - - def normalize(self, stats:Collection[Tensor]=None, do_x:bool=True, do_y:bool=False)->None: - "Add normalize transform using `stats` (defaults to `DataBunch.batch_stats`)" - if getattr(self,'norm',False): raise Exception('Can not call normalize twice') - if stats is None: self.stats = self.batch_stats() - else: self.stats = stats - self.norm,self.denorm = normalize_funcs(*self.stats, do_x=do_x, do_y=do_y) - self.add_tfm(self.norm) - return self - -def download_image(url,dest, timeout=4): - try: r = download_url(url, dest, overwrite=True, show_progress=False, timeout=timeout) - except Exception as e: print(f"Error {url} {e}") - -def _download_image_inner(dest, url, i, timeout=4): - suffix = re.findall(r'\.\w+?(?=(?:\?|$))', url) - suffix = suffix[0] if len(suffix)>0 else '.jpg' - download_image(url, dest/f"{i:08d}{suffix}", timeout=timeout) - -def download_images(urls:Collection[str], dest:PathOrStr, max_pics:int=1000, max_workers:int=8, timeout=4): - "Download images listed in text file `urls` to path `dest`, at most `max_pics`" - urls = open(urls).read().strip().split("\n")[:max_pics] - dest = Path(dest) - dest.mkdir(exist_ok=True) - parallel(partial(_download_image_inner, dest, timeout=timeout), urls, max_workers=max_workers) - -def resize_to(img, targ_sz:int, use_min:bool=False): - "Size to resize to, to hit `targ_sz` at same aspect ratio, in PIL coords (i.e w*h)" - w,h = img.size - min_sz = (min if use_min else max)(w,h) - ratio = targ_sz/min_sz - return int(w*ratio),int(h*ratio) - -def verify_image(file:Path, idx:int, delete:bool, max_size:Union[int,Tuple[int,int]]=None, dest:Path=None, n_channels:int=3, - interp=PIL.Image.BILINEAR, ext:str=None, img_format:str=None, resume:bool=False, **kwargs): - "Check if the image in `file` exists, maybe resize it and copy it in `dest`." 
-    try:
-        # deal with partially broken images as indicated by PIL warnings
-        with warnings.catch_warnings():
-            warnings.filterwarnings('error')
-            try:
-                with open(file, 'rb') as img_file: PIL.Image.open(img_file)
-            except Warning as w:
-                if "Possibly corrupt EXIF data" in str(w):
-                    if delete: # green light to modify files
-                        print(f"{file}: Removing corrupt EXIF data")
-                        warnings.simplefilter("ignore")
-                        # save the EXIF-cleaned image (re-saving strips the corrupt data)
-                        PIL.Image.open(file).save(file)
-                    else: # keep user's files intact
-                        print(f"{file}: Not removing corrupt EXIF data, pass `delete=True` to do that")
-                else: warnings.warn(w)
-
-        img = PIL.Image.open(file)
-        imgarr = np.array(img)
-        img_channels = 1 if len(imgarr.shape) == 2 else imgarr.shape[2]
-        if (max_size is not None and (img.height > max_size or img.width > max_size)) or img_channels != n_channels:
-            assert isinstance(dest, Path), "You should provide `dest` Path to save resized image"
-            dest_fname = dest/file.name
-            if ext is not None: dest_fname=dest_fname.with_suffix(ext)
-            if resume and os.path.isfile(dest_fname): return
-            if max_size is not None:
-                new_sz = resize_to(img, max_size)
-                img = img.resize(new_sz, resample=interp)
-            if n_channels == 3: img = img.convert("RGB")
-            img.save(dest_fname, img_format, **kwargs)
-    except Exception as e:
-        print(f'{e}')
-        if delete: file.unlink()
-
-def verify_images(path:PathOrStr, delete:bool=True, max_workers:int=4, max_size:Union[int,Tuple[int,int]]=None, recurse:bool=False,
-                  dest:PathOrStr='.', n_channels:int=3, interp=PIL.Image.BILINEAR, ext:str=None, img_format:str=None,
-                  resume:bool=None, **kwargs):
-    "Check that the images in `path` aren't broken, maybe resize them and copy them to `dest`."
-    path = Path(path)
-    if resume is None and dest == '.': resume=False
-    dest = path/Path(dest)
-    os.makedirs(dest, exist_ok=True)
-    files = get_image_files(path, recurse=recurse)
-    func = partial(verify_image, delete=delete, max_size=max_size, dest=dest, n_channels=n_channels, interp=interp,
-                   ext=ext, img_format=img_format, resume=resume, **kwargs)
-    parallel(func, files, max_workers=max_workers)
-
-class ImageList(ItemList):
-    "`ItemList` suitable for computer vision."
-    _bunch,_square_show,_square_show_res = ImageDataBunch,True,True
-    def __init__(self, *args, convert_mode='RGB', after_open:Callable=None, **kwargs):
-        super().__init__(*args, **kwargs)
-        self.convert_mode,self.after_open = convert_mode,after_open
-        self.copy_new += ['convert_mode', 'after_open']
-        self.c,self.sizes = 3,{}
-
-    def open(self, fn):
-        "Open image in `fn`, subclass and overwrite for custom behavior."
-        return open_image(fn, convert_mode=self.convert_mode, after_open=self.after_open)
-
-    def get(self, i):
-        fn = super().get(i)
-        res = self.open(fn)
-        self.sizes[i] = res.size
-        return res
-
-    @classmethod
-    def from_folder(cls, path:PathOrStr='.', extensions:Collection[str]=None, **kwargs)->ItemList:
-        "Get the list of files in `path` that have an image suffix. `recurse` determines if we search subfolders."
-        extensions = ifnone(extensions, image_extensions)
-        return super().from_folder(path=path, extensions=extensions, **kwargs)
-
-    @classmethod
-    def from_df(cls, df:DataFrame, path:PathOrStr, cols:IntsOrStrs=0, folder:PathOrStr=None, suffix:str='', **kwargs)->'ItemList':
-        "Get the filenames in `cols` of `df` with `folder` in front of them, `suffix` at the end."
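-        # The resulting items are plain `<path>/<folder>/<name><suffix>` strings, built
-        # in a vectorised way with `np.char.add` below.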
- suffix = suffix or '' - res = super().from_df(df, path=path, cols=cols, **kwargs) - pref = f'{res.path}{os.path.sep}' - if folder is not None: pref += f'{folder}{os.path.sep}' - res.items = np.char.add(np.char.add(pref, res.items.astype(str)), suffix) - return res - - @classmethod - def from_csv(cls, path:PathOrStr, csv_name:str, header:str='infer', delimiter:str=None, **kwargs)->'ItemList': - "Get the filenames in `path/csv_name` opened with `header`." - path = Path(path) - df = pd.read_csv(path/csv_name, header=header, delimiter=delimiter) - return cls.from_df(df, path=path, **kwargs) - - def reconstruct(self, t:Tensor): return Image(t.float().clamp(min=0,max=1)) - - def show_xys(self, xs, ys, imgsize:int=4, figsize:Optional[Tuple[int,int]]=None, **kwargs): - "Show the `xs` (inputs) and `ys` (targets) on a figure of `figsize`." - rows = int(np.ceil(math.sqrt(len(xs)))) - axs = subplots(rows, rows, imgsize=imgsize, figsize=figsize) - for x,y,ax in zip(xs, ys, axs.flatten()): x.show(ax=ax, y=y, **kwargs) - for ax in axs.flatten()[len(xs):]: ax.axis('off') - plt.tight_layout() - - def show_xyzs(self, xs, ys, zs, imgsize:int=4, figsize:Optional[Tuple[int,int]]=None, **kwargs): - "Show `xs` (inputs), `ys` (targets) and `zs` (predictions) on a figure of `figsize`." - if self._square_show_res: - title = 'Ground truth\nPredictions' - rows = int(np.ceil(math.sqrt(len(xs)))) - axs = subplots(rows, rows, imgsize=imgsize, figsize=figsize, title=title, weight='bold', size=12) - for x,y,z,ax in zip(xs,ys,zs,axs.flatten()): x.show(ax=ax, title=f'{str(y)}\n{str(z)}', **kwargs) - for ax in axs.flatten()[len(xs):]: ax.axis('off') - else: - title = 'Ground truth/Predictions' - axs = subplots(len(xs), 2, imgsize=imgsize, figsize=figsize, title=title, weight='bold', size=14) - for i,(x,y,z) in enumerate(zip(xs,ys,zs)): - x.show(ax=axs[i,0], y=y, **kwargs) - x.show(ax=axs[i,1], y=z, **kwargs) - -class ObjectCategoryProcessor(MultiCategoryProcessor): - "`PreProcessor` for labelled bounding boxes." - def __init__(self, ds:ItemList, pad_idx:int=0): - super().__init__(ds) - self.pad_idx = pad_idx - self.state_attrs.append('pad_idx') - - def process(self, ds:ItemList): - ds.pad_idx = self.pad_idx - super().process(ds) - - def process_one(self,item): return [item[0], [self.c2i.get(o,None) for o in item[1]]] - - def generate_classes(self, items): - "Generate classes from unique `items` and add `background`." - classes = super().generate_classes([o[1] for o in items]) - classes = ['background'] + list(classes) - return classes - -def _get_size(xs,i): - size = xs.sizes.get(i,None) - if size is None: - # Image hasn't been accessed yet, so we don't know its size - _ = xs[i] - size = xs.sizes[i] - return size - -class ObjectCategoryList(MultiCategoryList): - "`ItemList` for labelled bounding boxes." - _processor = ObjectCategoryProcessor - - def get(self, i): - return ImageBBox.create(*_get_size(self.x,i), *self.items[i], classes=self.classes, pad_idx=self.pad_idx) - - def analyze_pred(self, pred): return pred - - def reconstruct(self, t, x): - (bboxes, labels) = t - if len((labels - self.pad_idx).nonzero()) == 0: return - i = (labels - self.pad_idx).nonzero().min() - bboxes,labels = bboxes[i:],labels[i:] - return ImageBBox.create(*x.size, bboxes, labels=labels, classes=self.classes, scale=False) - -class ObjectItemList(ImageList): - "`ItemList` suitable for object detection." 
- _label_cls,_square_show_res = ObjectCategoryList,False - -class SegmentationProcessor(PreProcessor): - "`PreProcessor` that stores the classes for segmentation." - def __init__(self, ds:ItemList): self.classes = ds.classes - def process(self, ds:ItemList): ds.classes,ds.c = self.classes,len(self.classes) - -class SegmentationLabelList(ImageList): - "`ItemList` for segmentation masks." - _processor=SegmentationProcessor - def __init__(self, items:Iterator, classes:Collection=None, **kwargs): - super().__init__(items, **kwargs) - self.copy_new.append('classes') - self.classes,self.loss_func = classes,CrossEntropyFlat(axis=1) - - def open(self, fn): return open_mask(fn) - def analyze_pred(self, pred, thresh:float=0.5): return pred.argmax(dim=0)[None] - def reconstruct(self, t:Tensor): return ImageSegment(t) - -class SegmentationItemList(ImageList): - "`ItemList` suitable for segmentation tasks." - _label_cls,_square_show_res = SegmentationLabelList,False - -class PointsProcessor(PreProcessor): - "`PreProcessor` that stores the number of targets for point regression." - def __init__(self, ds:ItemList): self.c = len(ds.items[0].reshape(-1)) - def process(self, ds:ItemList): ds.c = self.c - -class PointsLabelList(ItemList): - "`ItemList` for points." - _processor = PointsProcessor - def __init__(self, items:Iterator, **kwargs): - super().__init__(items, **kwargs) - self.loss_func = MSELossFlat() - - def get(self, i): - o = super().get(i) - return ImagePoints(FlowField(_get_size(self.x,i), o), scale=True) - - def analyze_pred(self, pred, thresh:float=0.5): return pred.view(-1,2) - def reconstruct(self, t, x): return ImagePoints(FlowField(x.size, t), scale=False) - -class PointsItemList(ImageList): - "`ItemList` for `Image` to `ImagePoints` tasks." - _label_cls,_square_show_res = PointsLabelList,False - -class ImageImageList(ImageList): - "`ItemList` suitable for `Image` to `Image` tasks." - _label_cls,_square_show,_square_show_res = ImageList,False,False - - def show_xys(self, xs, ys, imgsize:int=4, figsize:Optional[Tuple[int,int]]=None, **kwargs): - "Show the `xs` (inputs) and `ys`(targets) on a figure of `figsize`." - axs = subplots(len(xs), 2, imgsize=imgsize, figsize=figsize) - for i, (x,y) in enumerate(zip(xs,ys)): - x.show(ax=axs[i,0], **kwargs) - y.show(ax=axs[i,1], **kwargs) - plt.tight_layout() - - def show_xyzs(self, xs, ys, zs, imgsize:int=4, figsize:Optional[Tuple[int,int]]=None, **kwargs): - "Show `xs` (inputs), `ys` (targets) and `zs` (predictions) on a figure of `figsize`." 
-        title = 'Input / Prediction / Target'
-        axs = subplots(len(xs), 3, imgsize=imgsize, figsize=figsize, title=title, weight='bold', size=14)
-        for i,(x,y,z) in enumerate(zip(xs,ys,zs)):
-            x.show(ax=axs[i,0], **kwargs)
-            y.show(ax=axs[i,2], **kwargs)
-            z.show(ax=axs[i,1], **kwargs)
-
-
-def _ll_pre_transform(self, train_tfm:List[Callable], valid_tfm:List[Callable]):
-    "Call `train_tfm` and `valid_tfm` after opening image, before converting from `PIL.Image`"
-    self.train.x.after_open = compose(train_tfm)
-    self.valid.x.after_open = compose(valid_tfm)
-    return self
-
-def _db_pre_transform(self, train_tfm:List[Callable], valid_tfm:List[Callable]):
-    "Call `train_tfm` and `valid_tfm` after opening image, before converting from `PIL.Image`"
-    self.train_ds.x.after_open = compose(train_tfm)
-    self.valid_ds.x.after_open = compose(valid_tfm)
-    return self
-
-def _presize(self, size:int, val_xtra_size:int=32, scale:Tuple[float]=(0.08, 1.0), ratio:Tuple[float]=(0.75, 4./3.),
-             interpolation:int=2):
-    "Resize images to `size` using `RandomResizedCrop` (with `scale`/`ratio`) on train, `Resize`+`CenterCrop` on valid."
-    return self.pre_transform(
-        tvt.RandomResizedCrop(size, scale=scale, ratio=ratio, interpolation=interpolation),
-        [tvt.Resize(size+val_xtra_size), tvt.CenterCrop(size)])
-
-LabelLists.pre_transform = _ll_pre_transform
-DataBunch.pre_transform = _db_pre_transform
-LabelLists.presize = _presize
-DataBunch.presize = _presize
-
diff --git a/spaces/all-things-vits/CLIPGroundingExplainability/clip_grounding/utils/visualize.py b/spaces/all-things-vits/CLIPGroundingExplainability/clip_grounding/utils/visualize.py
deleted file mode 100644
index aaee90b5be63568dbcde91da84e9560a580c7f89..0000000000000000000000000000000000000000
--- a/spaces/all-things-vits/CLIPGroundingExplainability/clip_grounding/utils/visualize.py
+++ /dev/null
@@ -1,183 +0,0 @@
-"""Helpers for visualization"""
-import numpy as np
-import matplotlib
-import matplotlib.pyplot as plt
-import cv2
-from PIL import Image
-
-
-# define predominant colors
-COLORS = {
-    "pink": (242, 116, 223),
-    "cyan": (46, 242, 203),
-    "red": (255, 0, 0),
-    "green": (0, 255, 0),
-    "blue": (0, 0, 255),
-    "yellow": (255, 255, 0),
-}
-
-
-def show_single_image(image: np.ndarray, figsize: tuple = (8, 8), title: str = None, titlesize=18, cmap: str = None, ticks=False, save=False, save_path=None):
-    """Show a single image."""
-    fig, ax = plt.subplots(1, 1, figsize=figsize)
-
-    if isinstance(image, Image.Image):
-        image = np.asarray(image)
-
-    ax.set_title(title, fontsize=titlesize)
-    ax.imshow(image, cmap=cmap)
-
-    if not ticks:
-        ax.set_xticks([])
-        ax.set_yticks([])
-
-    if save:
-        plt.savefig(save_path, bbox_inches='tight')
-
-    plt.show()
-
-
-def show_grid_of_images(
-        images: np.ndarray, n_cols: int = 4, figsize: tuple = (8, 8),
-        cmap=None, subtitles=None, title=None, subtitlesize=18,
-        save=False, save_path=None, titlesize=20,
-    ):
-    """Show a grid of images."""
-    n_cols = min(n_cols, len(images))
-
-    copy_of_images = images.copy()
-    for i, image in enumerate(copy_of_images):
-        if isinstance(image, Image.Image):
-            image = np.asarray(image)
-            images[i] = image
-
-    if subtitles is None:
-        subtitles = [None] * len(images)
-
-    n_rows = int(np.ceil(len(images) / n_cols))
-    fig, axes = plt.subplots(n_rows, n_cols, figsize=figsize)
-    for i, ax in enumerate(axes.flat):
-        if i < len(images):
-            if len(images[i].shape) == 2 and cmap is None:
-                cmap="gray"
-            ax.imshow(images[i], cmap=cmap)
-            ax.set_title(subtitles[i], fontsize=subtitlesize)
-        ax.axis('off')
-
fig.set_tight_layout(True) - plt.suptitle(title, y=0.8, fontsize=titlesize) - - if save: - plt.savefig(save_path, bbox_inches='tight') - plt.close() - else: - plt.show() - - -def show_keypoint_matches( - img1, kp1, img2, kp2, matches, - K=10, figsize=(10, 5), drawMatches_args=dict(matchesThickness=3, singlePointColor=(0, 0, 0)), - choose_matches="random", - ): - """Displays matches found in the pair of images""" - if choose_matches == "random": - selected_matches = np.random.choice(matches, K) - elif choose_matches == "all": - K = len(matches) - selected_matches = matches - elif choose_matches == "topk": - selected_matches = matches[:K] - else: - raise ValueError(f"Unknown value for choose_matches: {choose_matches}") - - # color each match with a different color - cmap = matplotlib.cm.get_cmap('gist_rainbow', K) - colors = [[int(x*255) for x in cmap(i)[:3]] for i in np.arange(0,K)] - drawMatches_args.update({"matchColor": -1, "singlePointColor": (100, 100, 100)}) - - img3 = cv2.drawMatches(img1, kp1, img2, kp2, selected_matches, outImg=None, **drawMatches_args) - show_single_image( - img3, - figsize=figsize, - title=f"[{choose_matches.upper()}] Selected K = {K} matches between the pair of images.", - ) - return img3 - - -def draw_kps_on_image(image: np.ndarray, kps: np.ndarray, color=COLORS["red"], radius=3, thickness=-1, return_as="numpy"): - """ - Draw keypoints on image. - - Args: - image: Image to draw keypoints on. - kps: Keypoints to draw. Note these should be in (x, y) format. - """ - if isinstance(image, Image.Image): - image = np.asarray(image) - - for kp in kps: - image = cv2.circle( - image, (int(kp[0]), int(kp[1])), radius=radius, color=color, thickness=thickness) - - if return_as == "PIL": - return Image.fromarray(image) - - return image - - -def get_concat_h(im1, im2): - """Concatenate two images horizontally""" - dst = Image.new('RGB', (im1.width + im2.width, im1.height)) - dst.paste(im1, (0, 0)) - dst.paste(im2, (im1.width, 0)) - return dst - - -def get_concat_v(im1, im2): - """Concatenate two images vertically""" - dst = Image.new('RGB', (im1.width, im1.height + im2.height)) - dst.paste(im1, (0, 0)) - dst.paste(im2, (0, im1.height)) - return dst - - -def show_images_with_keypoints(images: list, kps: list, radius=15, color=(0, 220, 220), figsize=(10, 8), return_images=False, save=False, save_path="sample.png"): - assert len(images) == len(kps) - - # generate - images_with_kps = [] - for i in range(len(images)): - img_with_kps = draw_kps_on_image(images[i], kps[i], radius=radius, color=color, return_as="PIL") - images_with_kps.append(img_with_kps) - - # show - show_grid_of_images(images_with_kps, n_cols=len(images), figsize=figsize, save=save, save_path=save_path) - - if return_images: - return images_with_kps - - -def set_latex_fonts(usetex=True, fontsize=14, show_sample=False, **kwargs): - try: - plt.rcParams.update({ - "text.usetex": usetex, - "font.family": "serif", - "font.serif": ["Computer Modern Roman"], - "font.size": fontsize, - **kwargs, - }) - if show_sample: - plt.figure() - plt.title("Sample $y = x^2$") - plt.plot(np.arange(0, 10), np.arange(0, 10)**2, "--o") - plt.grid() - plt.show() - except: - print("Failed to setup LaTeX fonts. 
Proceeding without.") - pass - - -def get_colors(num_colors, palette="jet"): - cmap = plt.get_cmap(palette) - colors = [cmap(i) for i in np.linspace(0, 1, num_colors)] - return colors - diff --git a/spaces/amankishore/sjc/ncsn/ema.py b/spaces/amankishore/sjc/ncsn/ema.py deleted file mode 100644 index 5c67b81c00cdd1e1bf8fd1d80d25c7b1bab5c554..0000000000000000000000000000000000000000 --- a/spaces/amankishore/sjc/ncsn/ema.py +++ /dev/null @@ -1,47 +0,0 @@ -import copy -import torch.nn as nn - -class EMAHelper(object): - def __init__(self, mu=0.999): - self.mu = mu - self.shadow = {} - - def register(self, module): - if isinstance(module, nn.DataParallel): - module = module.module - for name, param in module.named_parameters(): - if param.requires_grad: - self.shadow[name] = param.data.clone() - - def update(self, module): - if isinstance(module, nn.DataParallel): - module = module.module - for name, param in module.named_parameters(): - if param.requires_grad: - self.shadow[name].data = (1. - self.mu) * param.data + self.mu * self.shadow[name].data - - def ema(self, module): - if isinstance(module, nn.DataParallel): - module = module.module - for name, param in module.named_parameters(): - if param.requires_grad: - param.data.copy_(self.shadow[name].data) - - def ema_copy(self, module): - if isinstance(module, nn.DataParallel): - inner_module = module.module - module_copy = type(inner_module)(inner_module.config).to(inner_module.config.device) - module_copy.load_state_dict(inner_module.state_dict()) - module_copy = nn.DataParallel(module_copy) - else: - module_copy = type(module)(module.config).to(module.config.device) - module_copy.load_state_dict(module.state_dict()) - # module_copy = copy.deepcopy(module) - self.ema(module_copy) - return module_copy - - def state_dict(self): - return self.shadow - - def load_state_dict(self, state_dict): - self.shadow = state_dict diff --git a/spaces/amgad59/Keras_cv_wedding_dress/app.py b/spaces/amgad59/Keras_cv_wedding_dress/app.py deleted file mode 100644 index 6b05576c3dc6ca7caeee91e364aa4a42a0039c95..0000000000000000000000000000000000000000 --- a/spaces/amgad59/Keras_cv_wedding_dress/app.py +++ /dev/null @@ -1,65 +0,0 @@ -from tensorflow import keras - -keras.mixed_precision.set_global_policy("mixed_float16") - -import time - -import gradio as gr -import keras_cv - -from constants import css, examples, img_height, img_width, num_images_to_gen -from share_btn import community_icon_html, loading_icon_html, share_js - -# Load model. -weights_path = keras.utils.get_file( - origin="https://huggingface.co/mayve/GP/resolve/main/ckpt_epoch_96.h5", - file_hash="4b4348297aa9853ff9dc4da7f52dcb240210564400f164e5155e5f4dc1866626" -) -pokemon_model = keras_cv.models.StableDiffusion( - img_width=img_width, img_height=img_height -) -pokemon_model.diffusion_model.load_weights(weights_path) - -pokemon_model.diffusion_model.compile(jit_compile=True) -pokemon_model.decoder.compile(jit_compile=True) -pokemon_model.text_encoder.compile(jit_compile=True) - -# Warm-up the model. -#_ = pokemon_model.text_to_image("Teddy bear", batch_size=num_images_to_gen) - - -def generate_image_fn(prompt: str, unconditional_guidance_scale: int) -> list: - start_time = time.time() - # `images is an `np.ndarray`. So we convert it to a list of ndarrays. - # Each ndarray represents a generated image. 
-    # Reference: https://gradio.app/docs/#gallery
-    images = pokemon_model.text_to_image(
-        prompt,
-        batch_size=num_images_to_gen,
-        unconditional_guidance_scale=unconditional_guidance_scale,
-        num_steps = 100,
-    )
-    end_time = time.time()
-    print(f"Time taken: {end_time - start_time} seconds.")
-    return [image for image in images]
-
-
-description = "This Space demonstrates a fine-tuned Stable Diffusion model. You can use it for generating custom pokemons. To get started, either enter a prompt or pick one from the examples below. For details on the fine-tuning procedure, refer to [this repository](https://github.com/sayakpaul/stable-diffusion-keras-ft/)."
-article = "This Space leverages a T4 GPU to run the predictions. We use mixed-precision to speed up the inference latency. We further use XLA to carve out maximum performance from TensorFlow."
-gr.Interface(
-    generate_image_fn,
-    inputs=[
-        gr.Textbox(
-            label="Enter your prompt",
-            max_lines=1,
-            placeholder="cute Sundar Pichai creature",
-        ),
-        gr.Slider(value=10, minimum=8, maximum=50, step=1),
-    ],
-    outputs=[gr.outputs.Image(type="pil"),gr.outputs.Image(type="pil"),gr.outputs.Image(type="pil"),gr.outputs.Image(type="pil")],
-    title="Generate custom pokemons",
-    description=description,
-    article=article,
-    examples=[["cute Sundar Pichai creature", 40], ["Hello kitty", 40]],
-    allow_flagging=False,
-).launch()
\ No newline at end of file
diff --git a/spaces/anakin87/who-killed-laura-palmer/app_utils/config.py b/spaces/anakin87/who-killed-laura-palmer/app_utils/config.py
deleted file mode 100644
index e28ac80e372699ec76504d4ea0acacf0f119ff8e..0000000000000000000000000000000000000000
--- a/spaces/anakin87/who-killed-laura-palmer/app_utils/config.py
+++ /dev/null
@@ -1,10 +0,0 @@
-
-INDEX_DIR = 'data/index'
-QUESTIONS_PATH = 'data/questions/selected_questions.txt'
-RETRIEVER_MODEL = "sentence-transformers/multi-qa-mpnet-base-dot-v1"
-RETRIEVER_MODEL_FORMAT = "sentence_transformers"
-READER_MODEL = "deepset/roberta-base-squad2"
-READER_CONFIG_THRESHOLD = 0.15
-RETRIEVER_TOP_K = 10
-READER_TOP_K = 5
-LOW_RELEVANCE_THRESHOLD = 0.5
\ No newline at end of file
diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/client/css/typing.css b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/client/css/typing.css
deleted file mode 100644
index f998ebe7f2172e4ac23cdeff6ba6fd811b67a145..0000000000000000000000000000000000000000
--- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/client/css/typing.css
+++ /dev/null
@@ -1,15 +0,0 @@
-.typing {
-  position: absolute;
-  top: -25px;
-  left: 0;
-  font-size: 14px;
-  animation: show_popup 0.4s;
-}
-
-.typing-hiding {
-  animation: hide_popup 0.4s;
-}
-
-.typing-hidden {
-  display: none;
-}
diff --git a/spaces/andyssj/entregable2/app.py b/spaces/andyssj/entregable2/app.py
deleted file mode 100644
index 7ea38fe09805b75a09050d1f6cffc1f18c71f821..0000000000000000000000000000000000000000
--- a/spaces/andyssj/entregable2/app.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from huggingface_hub import from_pretrained_fastai
-import gradio as gr
-from fastai.vision.all import *
-
-
-
-# repo_id = "YOUR_USERNAME/YOUR_LEARNER_NAME"
-repo_id = "andyssj/entregable2"
-
-learner = from_pretrained_fastai(repo_id)
-labels = learner.dls.vocab
-
-# Define the function that carries out the predictions
-def predict(img):
-    #img = PILImage.create(img)
-    pred,pred_idx,probs = learner.predict(img)
-    return {labels[i]: float(probs[i]) for i in range(len(labels))}
-
-# Create the interface and launch it.
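-# Note: `gr.inputs`/`gr.outputs` and the `shape=` argument below are the legacy Gradio 3.x
-# API; current Gradio versions use `gr.Image(...)` and `gr.Label(...)` directly.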
-gr.Interface(fn=predict, inputs=gr.inputs.Image(shape=(128, 128)), outputs=gr.outputs.Label(num_top_classes=3),examples=['american_football_158.jpg','football_369.jpg','baseball_108.jpg']).launch(share=False) \ No newline at end of file diff --git a/spaces/antonovmaxim/text-generation-webui-space/api-example.py b/spaces/antonovmaxim/text-generation-webui-space/api-example.py deleted file mode 100644 index f35ea1db76f291bf1cae90a1a7801d2d19be3acc..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/api-example.py +++ /dev/null @@ -1,44 +0,0 @@ -import requests - -# For local streaming, the websockets are hosted without ssl - http:// -HOST = 'localhost:5000' -URI = f'http://{HOST}/api/v1/generate' - -# For reverse-proxied streaming, the remote will likely host with ssl - https:// -# URI = 'https://your-uri-here.trycloudflare.com/api/v1/generate' - - -def run(prompt): - request = { - 'prompt': prompt, - 'max_new_tokens': 250, - 'do_sample': True, - 'temperature': 1.3, - 'top_p': 0.1, - 'typical_p': 1, - 'repetition_penalty': 1.18, - 'top_k': 40, - 'min_length': 0, - 'no_repeat_ngram_size': 0, - 'num_beams': 1, - 'penalty_alpha': 0, - 'length_penalty': 1, - 'early_stopping': False, - 'seed': -1, - 'add_bos_token': True, - 'truncation_length': 2048, - 'ban_eos_token': False, - 'skip_special_tokens': True, - 'stopping_strings': [] - } - - response = requests.post(URI, json=request) - - if response.status_code == 200: - result = response.json()['results'][0]['text'] - print(prompt + result) - - -if __name__ == '__main__': - prompt = "In order to make homemade bread, follow these steps:\n1)" - run(prompt) diff --git a/spaces/anzorq/openai_whisper_stt/README.md b/spaces/anzorq/openai_whisper_stt/README.md deleted file mode 100644 index 1dbf53ca9080426d623d1dccddb4704de960d1a0..0000000000000000000000000000000000000000 --- a/spaces/anzorq/openai_whisper_stt/README.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: OpenAI's Whisper Real-time Demo -emoji: 🎙️ -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: mit ---- - -OpenAI's Whisper Real-time Demo - -A simple demo of OpenAI's [**Whisper**](https://github.com/openai/whisper) speech recognition model. - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/extras.py b/spaces/aodianyun/stable-diffusion-webui/modules/extras.py deleted file mode 100644 index 6a9af2d8e641fdf1ebd29045078d29b5aeae3d6f..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/modules/extras.py +++ /dev/null @@ -1,258 +0,0 @@ -import os -import re -import shutil - - -import torch -import tqdm - -from modules import shared, images, sd_models, sd_vae, sd_models_config -from modules.ui_common import plaintext_to_html -import gradio as gr -import safetensors.torch - - -def run_pnginfo(image): - if image is None: - return '', '', '' - - geninfo, items = images.read_info_from_image(image) - items = {**{'parameters': geninfo}, **items} - - info = '' - for key, text in items.items(): - info += f""" -
-<div>
-<p><b>{plaintext_to_html(str(key))}</b></p>
-<p>{plaintext_to_html(str(text))}</p>
-</div>
-""".strip()+"\n"
-
-    if len(info) == 0:
-        message = "Nothing found in the image."
-        info = f"<div><p>{message}</p></div>"
-
-    return '', geninfo, info
-
-
-def create_config(ckpt_result, config_source, a, b, c):
-    def config(x):
-        res = sd_models_config.find_checkpoint_config_near_filename(x) if x else None
-        return res if res != shared.sd_default_config else None
-
-    if config_source == 0:
-        cfg = config(a) or config(b) or config(c)
-    elif config_source == 1:
-        cfg = config(b)
-    elif config_source == 2:
-        cfg = config(c)
-    else:
-        cfg = None
-
-    if cfg is None:
-        return
-
-    filename, _ = os.path.splitext(ckpt_result)
-    checkpoint_filename = filename + ".yaml"
-
-    print("Copying config:")
-    print(" from:", cfg)
-    print(" to:", checkpoint_filename)
-    shutil.copyfile(cfg, checkpoint_filename)
-
-
-checkpoint_dict_skip_on_merge = ["cond_stage_model.transformer.text_model.embeddings.position_ids"]
-
-
-def to_half(tensor, enable):
-    if enable and tensor.dtype == torch.float:
-        return tensor.half()
-
-    return tensor
-
-
-def run_modelmerger(id_task, primary_model_name, secondary_model_name, tertiary_model_name, interp_method, multiplier, save_as_half, custom_name, checkpoint_format, config_source, bake_in_vae, discard_weights):
-    shared.state.begin()
-    shared.state.job = 'model-merge'
-
-    def fail(message):
-        shared.state.textinfo = message
-        shared.state.end()
-        return [*[gr.update() for _ in range(4)], message]
-
-    def weighted_sum(theta0, theta1, alpha):
-        return ((1 - alpha) * theta0) + (alpha * theta1)
-
-    def get_difference(theta1, theta2):
-        return theta1 - theta2
-
-    def add_difference(theta0, theta1_2_diff, alpha):
-        return theta0 + (alpha * theta1_2_diff)
-
-    def filename_weighted_sum():
-        a = primary_model_info.model_name
-        b = secondary_model_info.model_name
-        Ma = round(1 - multiplier, 2)
-        Mb = round(multiplier, 2)
-
-        return f"{Ma}({a}) + {Mb}({b})"
-
-    def filename_add_difference():
-        a = primary_model_info.model_name
-        b = secondary_model_info.model_name
-        c = tertiary_model_info.model_name
-        M = round(multiplier, 2)
-
-        return f"{a} + {M}({b} - {c})"
-
-    def filename_nothing():
-        return primary_model_info.model_name
-
-    theta_funcs = {
-        "Weighted sum": (filename_weighted_sum, None, weighted_sum),
-        "Add difference": (filename_add_difference, get_difference, add_difference),
-        "No interpolation": (filename_nothing, None, None),
-    }
-    filename_generator, theta_func1, theta_func2 = theta_funcs[interp_method]
-    shared.state.job_count = (1 if theta_func1 else 0) + (1 if theta_func2 else 0)
-
-    if not primary_model_name:
-        return fail("Failed: Merging requires a primary model.")
-
-    primary_model_info = sd_models.checkpoints_list[primary_model_name]
-
-    if theta_func2 and not secondary_model_name:
-        return fail("Failed: Merging requires a secondary model.")
-
-    secondary_model_info = sd_models.checkpoints_list[secondary_model_name] if theta_func2 else None
-
-    if theta_func1 and not tertiary_model_name:
-        return fail(f"Failed: Interpolation method ({interp_method}) requires a tertiary model.")
-
-    tertiary_model_info = sd_models.checkpoints_list[tertiary_model_name] if theta_func1 else None
-
-    result_is_inpainting_model = False
-    result_is_instruct_pix2pix_model = False
-
-    if theta_func2:
-        shared.state.textinfo = f"Loading B"
-        print(f"Loading {secondary_model_info.filename}...")
-        theta_1 = sd_models.read_state_dict(secondary_model_info.filename, map_location='cpu')
-    else:
-        theta_1 = None
-
-    if theta_func1:
-        shared.state.textinfo = f"Loading C"
-        print(f"Loading {tertiary_model_info.filename}...")
-        theta_2 = sd_models.read_state_dict(tertiary_model_info.filename, map_location='cpu')
-
-        shared.state.textinfo = 'Merging B and C'
-        shared.state.sampling_steps = len(theta_1.keys())
-        for key in tqdm.tqdm(theta_1.keys()):
-            if key in checkpoint_dict_skip_on_merge:
-                continue
-
-            if 'model' in key:
-                if key in theta_2:
-                    t2 = theta_2.get(key, torch.zeros_like(theta_1[key]))
-                    theta_1[key] = theta_func1(theta_1[key], t2)
-                else:
-                    theta_1[key] = torch.zeros_like(theta_1[key])
-
-            shared.state.sampling_step += 1
-        del theta_2
-
-        shared.state.nextjob()
-
-    shared.state.textinfo = f"Loading {primary_model_info.filename}..."
-    print(f"Loading {primary_model_info.filename}...")
-    theta_0 = sd_models.read_state_dict(primary_model_info.filename, map_location='cpu')
-
-    print("Merging...")
-    shared.state.textinfo = 'Merging A and B'
-    shared.state.sampling_steps = len(theta_0.keys())
-    for key in tqdm.tqdm(theta_0.keys()):
-        if theta_1 and 'model' in key and key in theta_1:
-
-            if key in checkpoint_dict_skip_on_merge:
-                continue
-
-            a = theta_0[key]
-            b = theta_1[key]
-
-            # this enables merging an inpainting model (A) with another one (B);
-            # where a normal model would have 4 channels for the latent space, an inpainting model
-            # has another 4 channels for the unmasked picture's latent space, plus one channel for the mask, for a total of 9
-            if a.shape != b.shape and a.shape[0:1] + a.shape[2:] == b.shape[0:1] + b.shape[2:]:
-                if a.shape[1] == 4 and b.shape[1] == 9:
-                    raise RuntimeError("When merging inpainting model with a normal one, A must be the inpainting model.")
-                if a.shape[1] == 4 and b.shape[1] == 8:
-                    raise RuntimeError("When merging instruct-pix2pix model with a normal one, A must be the instruct-pix2pix model.")
-
-                if a.shape[1] == 8 and b.shape[1] == 4:  # If we have an Instruct-Pix2Pix model...
-                    theta_0[key][:, 0:4, :, :] = theta_func2(a[:, 0:4, :, :], b, multiplier)  # Merge only the vectors the models have in common; otherwise we get an error due to dimension mismatch.
-                    result_is_instruct_pix2pix_model = True
-                else:
-                    assert a.shape[1] == 9 and b.shape[1] == 4, f"Bad dimensions for merged layer {key}: A={a.shape}, B={b.shape}"
-                    theta_0[key][:, 0:4, :, :] = theta_func2(a[:, 0:4, :, :], b, multiplier)
-                    result_is_inpainting_model = True
-            else:
-                theta_0[key] = theta_func2(a, b, multiplier)
-
-            theta_0[key] = to_half(theta_0[key], save_as_half)
-
-        shared.state.sampling_step += 1
-
-    del theta_1
-
-    bake_in_vae_filename = sd_vae.vae_dict.get(bake_in_vae, None)
-    if bake_in_vae_filename is not None:
-        print(f"Baking in VAE from {bake_in_vae_filename}")
-        shared.state.textinfo = 'Baking in VAE'
-        vae_dict = sd_vae.load_vae_dict(bake_in_vae_filename, map_location='cpu')
-
-        for key in vae_dict.keys():
-            theta_0_key = 'first_stage_model.' + key
-            if theta_0_key in theta_0:
-                theta_0[theta_0_key] = to_half(vae_dict[key], save_as_half)
-
-        del vae_dict
-
-    if save_as_half and not theta_func2:
-        for key in theta_0.keys():
-            theta_0[key] = to_half(theta_0[key], save_as_half)
-
-    if discard_weights:
-        regex = re.compile(discard_weights)
-        for key in list(theta_0):
-            if re.search(regex, key):
-                theta_0.pop(key, None)
-
-    ckpt_dir = shared.cmd_opts.ckpt_dir or sd_models.model_path
-
-    filename = filename_generator() if custom_name == '' else custom_name
-    filename += ".inpainting" if result_is_inpainting_model else ""
-    filename += ".instruct-pix2pix" if result_is_instruct_pix2pix_model else ""
-    filename += "."
+ checkpoint_format - - output_modelname = os.path.join(ckpt_dir, filename) - - shared.state.nextjob() - shared.state.textinfo = "Saving" - print(f"Saving to {output_modelname}...") - - _, extension = os.path.splitext(output_modelname) - if extension.lower() == ".safetensors": - safetensors.torch.save_file(theta_0, output_modelname, metadata={"format": "pt"}) - else: - torch.save(theta_0, output_modelname) - - sd_models.list_models() - - create_config(output_modelname, config_source, primary_model_info, secondary_model_info, tertiary_model_info) - - print(f"Checkpoint saved to {output_modelname}.") - shared.state.textinfo = "Checkpoint saved" - shared.state.end() - - return [*[gr.Dropdown.update(choices=sd_models.checkpoint_tiles()) for _ in range(4)], "Checkpoint saved to " + output_modelname] diff --git a/spaces/apokalis/Apokalis/README.md b/spaces/apokalis/Apokalis/README.md deleted file mode 100644 index 00634f356197a46ce9abfb83011f20e4a8d74e74..0000000000000000000000000000000000000000 --- a/spaces/apokalis/Apokalis/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Shiny for Python template -emoji: 🌍 -colorFrom: yellow -colorTo: indigo -sdk: docker -pinned: false -license: openrail ---- - -This is a templated Space for [Shiny for Python](https://shiny.rstudio.com/py/). - - -To get started with a new app do the following: - -1) Install Shiny with `pip install shiny` -2) Create a new app with `shiny create .` -3) Then run the app with `shiny run --reload` - -To learn more about this framework please see the [Documentation](https://shiny.rstudio.com/py/docs/overview.html). diff --git a/spaces/appl044/Chat-GPT-LangChain/README.md b/spaces/appl044/Chat-GPT-LangChain/README.md deleted file mode 100644 index f3a4fd48d889dd9732f397f53552637a0818f390..0000000000000000000000000000000000000000 --- a/spaces/appl044/Chat-GPT-LangChain/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: GPT+WolframAlpha+Whisper -emoji: 👀 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: JavaFXpert/Chat-GPT-LangChain ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/apsys/hetfit/module_name.md b/spaces/apsys/hetfit/module_name.md deleted file mode 100644 index b90957d96651ccbe50c114fac80f1722c84f0072..0000000000000000000000000000000000000000 --- a/spaces/apsys/hetfit/module_name.md +++ /dev/null @@ -1,456 +0,0 @@ -# Table of Contents - -- [Table of Contents](#table-of-contents) -- [main](#main) -- [:orange\[PINN\]](#orangepinn) - - [PINN.pinns](#pinnpinns) - - [PINNd\_p Objects](#pinnd_p-objects) - - [PINNhd\_ma Objects](#pinnhd_ma-objects) - - [PINNT\_ma Objects](#pinnt_ma-objects) -- [:orange\[utils\]](#orangeutils) - - [utils.test](#utilstest) - - [utils.dataset\_loader](#utilsdataset_loader) - - [get\_dataset](#get_dataset) - - [utils.ndgan](#utilsndgan) - - [DCGAN Objects](#dcgan-objects) - - [define\_discriminator](#define_discriminator) - - [generate\_latent\_points](#generate_latent_points) - - [define\_gan](#define_gan) - - [summarize\_performance](#summarize_performance) - - [train\_gan](#train_gan) - - [utils.data\_augmentation](#utilsdata_augmentation) - - [dataset Objects](#dataset-objects) - - [\_\_init\_\_](#__init__) -- [:orange\[nets\]](#orangenets) - - [nets.envs](#netsenvs) - - [SCI Objects](#sci-objects) - - [data\_flow](#data_flow) - - [init\_seed](#init_seed) - - [compile](#compile) - - [train](#train) - - [inference](#inference) 
- - [RCI Objects](#rci-objects) - - [data\_flow](#data_flow-1) - - [compile](#compile-1) - - [nets.dense](#netsdense) - - [Net Objects](#net-objects) - - [\_\_init\_\_](#__init__-1) - - [nets.design](#netsdesign) - - [B\_field\_norm](#b_field_norm) - - [nets.deep\_dense](#netsdeep_dense) - - [dmodel Objects](#dmodel-objects) - - [\_\_init\_\_](#__init__-2) - - - -# main - - - -# :orange[PINN] - - - -## PINN.pinns - - - -## PINNd\_p Objects - -```python -class PINNd_p(nn.Module) -``` - -$d \mapsto P$ - - - -## PINNhd\_ma Objects - -```python -class PINNhd_ma(nn.Module) -``` - -$h,d \mapsto m_a $ - - - -## PINNT\_ma Objects - -```python -class PINNT_ma(nn.Module) -``` - -$ m_a, U \mapsto T$ - - - ---- -# :orange[utils] - - - -## utils.test - - - -## utils.dataset\_loader - - - -#### get\_dataset - -```python -def get_dataset(raw: bool = False, - sample_size: int = 1000, - name: str = 'dataset.pkl', - source: str = 'dataset.csv', - boundary_conditions: list = None) -> _pickle -``` - -Gets augmented dataset - -**Arguments**: - -- `raw` _bool, optional_ - either to use source data or augmented. Defaults to False. -- `sample_size` _int, optional_ - sample size. Defaults to 1000. -- `name` _str, optional_ - name of wanted dataset. Defaults to 'dataset.pkl'. -- `boundary_conditions` _list,optional_ - y1,y2,x1,x2. - -**Returns**: - -- `_pickle` - pickle buffer - - - -## utils.ndgan - - - -### DCGAN Objects - -```python -class DCGAN() -``` - - - -#### define\_discriminator - -```python -def define_discriminator(inputs=8) -``` - -function to return the compiled discriminator model - - - -#### generate\_latent\_points - -```python -def generate_latent_points(latent_dim, n) -``` - -generate points in latent space as input for the generator - - - -#### define\_gan - -```python -def define_gan(generator, discriminator) -``` - -define the combined generator and discriminator model - - - -#### summarize\_performance - -```python -def summarize_performance(epoch, generator, discriminator, latent_dim, n=200) -``` - -evaluate the discriminator and plot real and fake samples - - - -#### train\_gan - -```python -def train_gan(g_model, - d_model, - gan_model, - latent_dim, - num_epochs=2500, - num_eval=2500, - batch_size=2) -``` - -function to train gan model - - - -## utils.data\_augmentation - - - -## dataset Objects - -```python -class dataset() -``` - -Creates dataset from input source - - - -#### \_\_init\_\_ - -```python -def __init__(number_samples: int, - name: str, - source: str, - boundary_conditions: list = None) -``` - -_summary_ - -**Arguments**: - -- `number_samples` _int_ - _description_ -- `name` _str_ - _description_ -- `source` _str_ - _description_ -- `boundary_conditions` _list_ - y1,y2,x1,x2 - - - -# :orange[nets] - - - -## nets.envs - - - -### SCI Objects - -```python -class SCI() -``` - - - -#### data\_flow - -```python -def data_flow(columns_idx: tuple = (1, 3, 3, 5), - idx: tuple = None, - split_idx: int = 800) -> torch.utils.data.DataLoader -``` - -Data prep pipeline - -**Arguments**: - -- `columns_idx` _tuple, optional_ - Columns to be selected (sliced 1:2 3:4) for feature fitting. Defaults to (1,3,3,5). -- `idx` _tuple, optional_ - 2|3 indexes to be selected for feature fitting. Defaults to None. 
Use either idx or columns_idx (for F:R->R idx, for F:R->R2 columns_idx) - split_idx (int) : Index to split for training - - -**Returns**: - -- `torch.utils.data.DataLoader` - Torch native dataloader - - - -#### init\_seed - -```python -def init_seed(seed) -``` - -Initializes seed for torch optional() - - - -#### compile - -```python -def compile(columns: tuple = None, - idx: tuple = None, - optim: torch.optim = torch.optim.AdamW, - loss: nn = nn.L1Loss, - model: nn.Module = dmodel, - custom: bool = False) -> None -``` - -Builds model, loss, optimizer. Has defaults - -**Arguments**: - -- `columns` _tuple, optional_ - Columns to be selected for feature fitting. Defaults to (1,3,3,5). - optim - torch Optimizer - loss - torch Loss function (nn) - - - -#### train - -```python -def train(epochs: int = 10) -> None -``` - -Train model -If sklearn instance uses .fit() - - - -#### inference - -```python -def inference(X: tensor, model_name: str = None) -> np.ndarray -``` - -Inference of (pre-)trained model - -**Arguments**: - -- `X` _tensor_ - your data in domain of train - - -**Returns**: - -- `np.ndarray` - predictions - - - -### RCI Objects - -```python -class RCI(SCI) -``` - - - -#### data\_flow - -```python -def data_flow(columns_idx: tuple = (1, 3, 3, 5), - idx: tuple = None, - split_idx: int = 800) -> torch.utils.data.DataLoader -``` - -Data prep pipeline - -**Arguments**: - -- `columns_idx` _tuple, optional_ - Columns to be selected (sliced 1:2 3:4) for feature fitting. Defaults to (1,3,3,5). -- `idx` _tuple, optional_ - 2|3 indexes to be selected for feature fitting. Defaults to None. Use either idx or columns_idx (for F:R->R idx, for F:R->R2 columns_idx) - split_idx (int) : Index to split for training - - -**Returns**: - -- `torch.utils.data.DataLoader` - Torch native dataloader - - - -#### compile - -```python -def compile(columns: tuple = None, - idx: tuple = (3, 1), - optim: torch.optim = torch.optim.AdamW, - loss: nn = nn.L1Loss, - model: nn.Module = PINNd_p, - lr: float = 0.001) -> None -``` - -Builds model, loss, optimizer. Has defaults - -**Arguments**: - -- `columns` _tuple, optional_ - Columns to be selected for feature fitting. Defaults to None. -- `idx` _tuple, optional_ - indexes to be selected Default (3,1) - optim - torch Optimizer - loss - torch Loss function (nn) - - - -## nets.dense - - - -### Net Objects - -```python -class Net(nn.Module) -``` - -4 layer model, different activations and neurons count on layer - - - -#### \_\_init\_\_ - -```python -def __init__(input_dim: int = 2, hidden_dim: int = 200) -``` - -Init - -**Arguments**: - -- `input_dim` _int, optional_ - Defaults to 2. -- `hidden_dim` _int, optional_ - Defaults to 200. - - - -## nets.design - - - -#### B\_field\_norm - -```python -def B_field_norm(Bmax, L, k=16, plot=True) -``` - -Returns vec B_z - -**Arguments**: - -- `Bmax` _any_ - maximum B in thruster - k - magnetic field profile number - - - -## nets.deep\_dense - - - -### dmodel Objects - -```python -class dmodel(nn.Module) -``` - -4 layers Torch model. Relu activations, hidden layers are same size. - - - -#### \_\_init\_\_ - -```python -def __init__(in_features=1, hidden_features=200, out_features=1) -``` - -Init - -**Arguments**: - -- `in_features` _int, optional_ - Input features. Defaults to 1. -- `hidden_features` _int, optional_ - Hidden dims. Defaults to 200. -- `out_features` _int, optional_ - Output dims. Defaults to 1. 
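-
-A minimal usage sketch of the API documented above (illustrative only: the default
-`SCI()` construction, the feature indexes and the input tensor are assumptions, not
-part of the package docs):
-
-```python
-import torch
-from nets.envs import SCI
-
-env = SCI()                        # assumed: default construction
-env.compile(idx=(1, 3))            # documented defaults: dmodel, AdamW, L1 loss
-env.train(epochs=10)               # fit on the internally prepared DataLoader
-pred = env.inference(torch.tensor([[0.5]]))  # X must lie in the training domain
-```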
- diff --git a/spaces/arch-123/bingo/src/state/index.ts b/spaces/arch-123/bingo/src/state/index.ts deleted file mode 100644 index 272106d619c69f124ea8dd0f10872ab4840152d7..0000000000000000000000000000000000000000 --- a/spaces/arch-123/bingo/src/state/index.ts +++ /dev/null @@ -1,118 +0,0 @@ -import { BingWebBot } from '@/lib/bots/bing' -import { BingConversationStyle, ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { nanoid } from '@/lib/utils' -import { atom } from 'jotai' -import { atomWithImmer } from 'jotai-immer' -import { atomWithStorage } from 'jotai/utils' -import { atomFamily } from 'jotai/utils' -import { atomWithHash, atomWithLocation } from 'jotai-location' - -const initialMessages: ChatMessageModel[] = [ - { author: 'system', text: 'conversation between user and robot', id: '1' }, - { author: 'user', text: '销量最高的 3 种宠物吸尘器有哪些优点和缺点? ', id: '2' }, - { - author: 'bot', text: ` -您好,这是必应。根据网上的搜索结果,我为您找到了以下三款销量最高的宠物吸尘器,以及它们的优点和缺点: - -- **戴森Dyson V10轻量版 Digital Slim Fluffy无线吸尘器**[^1^] [^3^]:这款吸尘器专为宠物家庭设计,拥有强大吸力和多功能吸头,能够有效清除宠物毛发和灰尘。 - - 优点:轻便易用,续航时间长,过滤系统先进,能够有效去除过敏原; - - 缺点:价格较高,噪音较大,需要定期清洁滤网。 -- **飞利浦FC6908/82(W7)无线吸尘器**[^2^] [^4^]:这是一款手持式干吸无线吸尘器,专门为宠物家庭设计。它具有75分钟的超高续航,适合大户型;除螨涡轮刷,可以深度清洁床单被罩,免受螨虫侵害;吸拖一体设计,280ml大水箱,还可以往水箱里加入清洁剂。 - - 优点:清洁性能强劲,操作方便,适用多种场景; - - 缺点:噪音较大,达到84dB(A);尘杯容量较小,只有0.6L;充电时长较长,需要5小时。 -- **必胜(Bissell) 宠物吸尘器**[^2^]:这是一款便携式手持真空吸尘器,专为清理宠物毛发而设计。它配备特殊的橡胶喷嘴和软毛刷头,可以轻松拾取宠物毛发和灰尘;几乎静音运行。 - - 优点:轻巧易用,重量只有1.36kg;电池可充电可更换; - - 缺点:续航时间不稳定,在15-30分钟之间浮动;吸力不够强劲,在地毯或沙发上可能效果不佳;没有过滤系统或过滤网。 - -希望这些信息对您有所帮助。如果您还有其他问题,请随时提问。😊 - `, id: '3' }, - { author: 'user', text: '今天的新闻', id: '4' }, - { - author: 'bot', text: `你好,这是Bing。我根据你的搜索查询“今天的新闻”为你找到了以下信息: - - # 中国新闻 - - 《求是》杂志发表习近平总书记重要文章《深化党和国家机构改革 推进国家治理体系和治理能力现代化》[^1^] - - 梦之队第5金! 全红婵陈芋汐女双10米台夺冠[^1^] - - 中央气象台7月16日18时发布台风橙色预警[^1^] - - 贵州石阡:暑期旅游带动乡村振兴[^1^] - - 激活大科学装置的“人才红利”[^1^] - - 聚才用才留才 让希望的田野成为智慧的田野[^1^] - - 科学家发现粮食系统增效可有效支撑我国碳中和[^1^] - - 成都以赛为媒提升城市美誉度[^1^] - - 香江观澜:“抢人才”成效显著 香港竞争力依旧[^1^] - - 浙江建德:新安江上享清凉[^1^] - - 定向资金持续精准滴灌 高质量发展力量不断积累[^1^] - - 企业信心恢复加大中长期投资 为稳增长奠定基础[^1^] - - 武汉渡江节开赛 千名“泳士”竞渡长江[^1^] - - 金融更加重视对实体经济支持 降低融资成本提振市场信心[^1^] - - 大运来了丨成都迎大运 全民健身动起来[^1^] - - 藏在高校里的宝藏博物馆[^1^] - - 中国汽车工业用70年赢得三个“全球第一”[^1^] - - 辽宁阜新市委常委、副市长赖化平涉嫌严重违纪违法接受审查调查[^1^] - - # 国际新闻 - - 联合国秘书长:可持续发展面临风险 各国需加强行动[^2^] - - 国际航运业加快绿色转型[^2^] - - 美企反对收紧对华芯片出口限制[^2^] - - 欧洲加大气候科技领域投资[^2^] - - 中企助力丹麦发展清洁能源[^2^] - - 中国代表呼吁国际社会共同努力防止乌克兰局势失控[^2^] - - 中国和阿尔及利亚共同构建新型国际关系典范[^2^] - - 以上信息仅供参考,具体详情请点击以下链接查看: - - [1] [中国新闻_央视网(cctv.com)](https://news.cctv.com/china/) - [2] [人民网_网上的人民日报](http://www.people.com.cn/)`, id: '5' }, - { author: 'user', text: '写一个快排', id: '6' }, - { author: 'bot', text: "你好,这是Bing。我可以帮你用python写一个快排。快速排序是一种基于分治思想的排序算法,它的基本思想是:从待排序的数组中选择一个基准元素,将数组分为两个子数组,使得左边的子数组中的元素都小于或等于基准元素,右边的子数组中的元素都大于或等于基准元素,然后对两个子数组递归地进行快速排序,最后合并结果。下面是用python实现的快速排序的代码:\n\n```python\n# 定义一个函数,用于交换数组中的两个元素\ndef swap(arr, i, j):\n temp = arr[i]\n arr[i] = arr[j]\n arr[j] = temp\n\n# 定义一个函数,用于对数组的一段区间进行划分\ndef partition(arr, low, high):\n # 选择最右边的元素作为基准元素\n pivot = arr[high]\n # 初始化一个指针,指向左边区间的最后一个位置\n i = low - 1\n # 遍历左边区间中的每个元素\n for j in range(low, high):\n # 如果当前元素小于或等于基准元素\n if arr[j] <= pivot:\n # 将指针右移一位\n i += 1\n # 将指针所指的元素与当前元素交换\n swap(arr, i, j)\n # 将基准元素与指针右边的第一个元素交换\n swap(arr, i + 1, high)\n # 返回基准元素的最终位置\n return i + 1\n\n# 定义一个函数,用于对数组进行快速排序\ndef quick_sort(arr, low, high):\n # 如果区间长度大于1\n if low < high:\n # 对区间进行划分,得到基准元素的位置\n p = partition(arr, low, high)\n # 对左边子区间进行快速排序\n quick_sort(arr, low, p - 1)\n # 对右边子区间进行快速排序\n quick_sort(arr, p + 1, 
high)\n\n# 测试代码\narr = [5, 3, 7, 2, 9, 1, 4, 6, 8]\nprint(\"原始数组:\", arr)\nquick_sort(arr, 0, len(arr) - 1)\nprint(\"排序后的数组:\", arr)\n```", id: '7' }, - { - author: 'bot', text: "好的,我会尝试画一只猫。\n > ![一只猫](https://tse2.mm.bing.net/th/id/OIG.jz34V0PNVkPC229h9spV?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse1.mm.bing.net/th/id/OIG.6g7d.XLZMP_iwAByLhvo?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse2.mm.bing.net/th/id/OIG.iAxF4ekekYn7sZw9SmU6?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse4.mm.bing.net/th/id/OIG.qDnzeSKzUCeJcrBqc5mX?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)", - id: '8' - } -] - -export const GreetMessages = [ - '谢谢你! 知道你什么时候准备好继续前进总是很有帮助的。我现在能为你回答什么问题?', - '重新开始总是很棒。问我任何问题!', - '当然,我很乐意重新开始。我现在可以为你提供哪些帮助?', - '当然,我已准备好进行新的挑战。我现在可以为你做什么?', - '很好,让我们来更改主题。你在想什么?', - '不用担心,我很高兴尝试一些新内容。我现在可以为你回答什么问题?', - '好的,我准备好了!感谢重置。我们应该了解哪些内容?', - '感谢刷新!你有新的话题吗?', - '明白了,让我们重新开始。接下来应该讨论什么?', - '下一步!我可以为你做什么?', - '好的,我已准备好新话题。我们应该一起了解哪些内容?' -] - -export const bingConversationStyleAtom = atomWithStorage('bingConversationStyle', BingConversationStyle.Creative, undefined, { unstable_getOnInit: true }) -export const voiceAtom = atomWithStorage('enableTTS', false, undefined, { unstable_getOnInit: true }) - -type Param = { botId: BotId; page: string } - -const createBotInstance = () => { - return new BingWebBot({ - cookie: ' ', - ua: ' ', - }) -} - -export const chatFamily = atomFamily( - (param: Param) => { - return atomWithImmer({ - botId: param.botId, - bot: createBotInstance(), - messages: [] as ChatMessageModel[], - generatingMessageId: '', - abortController: undefined as AbortController | undefined, - conversationId: nanoid(), - }) - }, - (a, b) => a.botId === b.botId && a.page === b.page, -) - -export const hashAtom = atomWithHash('dialog', '') - -export const locationAtom = atomWithLocation() - -export const voiceListenAtom = atom(false) diff --git a/spaces/artificialguybr/video-dubbing/TTS/docs/source/main_classes/trainer_api.md b/spaces/artificialguybr/video-dubbing/TTS/docs/source/main_classes/trainer_api.md deleted file mode 100644 index 876e09e5b6e75298657f17a289860038cc87f122..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/docs/source/main_classes/trainer_api.md +++ /dev/null @@ -1,3 +0,0 @@ -# Trainer API - -We made the trainer a separate project on https://github.com/coqui-ai/Trainer diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/JpegPresets.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/JpegPresets.py deleted file mode 100644 index a678e248e9ab2465738ea79f7f5c4bbc260c1919..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/JpegPresets.py +++ /dev/null @@ -1,240 +0,0 @@ -""" -JPEG quality settings equivalent to the Photoshop settings. -Can be used when saving JPEG files. - -The following presets are available by default: -``web_low``, ``web_medium``, ``web_high``, ``web_very_high``, ``web_maximum``, -``low``, ``medium``, ``high``, ``maximum``. -More presets can be added to the :py:data:`presets` dict if needed. 
- -To apply the preset, specify:: - - quality="preset_name" - -To apply only the quantization table:: - - qtables="preset_name" - -To apply only the subsampling setting:: - - subsampling="preset_name" - -Example:: - - im.save("image_name.jpg", quality="web_high") - -Subsampling ------------ - -Subsampling is the practice of encoding images by implementing less resolution -for chroma information than for luma information. -(ref.: https://en.wikipedia.org/wiki/Chroma_subsampling) - -Possible subsampling values are 0, 1 and 2 that correspond to 4:4:4, 4:2:2 and -4:2:0. - -You can get the subsampling of a JPEG with the -:func:`.JpegImagePlugin.get_sampling` function. - -In JPEG compressed data a JPEG marker is used instead of an EXIF tag. -(ref.: https://exiv2.org/tags.html) - - -Quantization tables -------------------- - -They are values use by the DCT (Discrete cosine transform) to remove -*unnecessary* information from the image (the lossy part of the compression). -(ref.: https://en.wikipedia.org/wiki/Quantization_matrix#Quantization_matrices, -https://en.wikipedia.org/wiki/JPEG#Quantization) - -You can get the quantization tables of a JPEG with:: - - im.quantization - -This will return a dict with a number of lists. You can pass this dict -directly as the qtables argument when saving a JPEG. - -The quantization table format in presets is a list with sublists. These formats -are interchangeable. - -Libjpeg ref.: -https://web.archive.org/web/20120328125543/http://www.jpegcameras.com/libjpeg/libjpeg-3.html - -""" - -# fmt: off -presets = { - 'web_low': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [20, 16, 25, 39, 50, 46, 62, 68, - 16, 18, 23, 38, 38, 53, 65, 68, - 25, 23, 31, 38, 53, 65, 68, 68, - 39, 38, 38, 53, 65, 68, 68, 68, - 50, 38, 53, 65, 68, 68, 68, 68, - 46, 53, 65, 68, 68, 68, 68, 68, - 62, 65, 68, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68], - [21, 25, 32, 38, 54, 68, 68, 68, - 25, 28, 24, 38, 54, 68, 68, 68, - 32, 24, 32, 43, 66, 68, 68, 68, - 38, 38, 43, 53, 68, 68, 68, 68, - 54, 54, 66, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68] - ]}, - 'web_medium': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [16, 11, 11, 16, 23, 27, 31, 30, - 11, 12, 12, 15, 20, 23, 23, 30, - 11, 12, 13, 16, 23, 26, 35, 47, - 16, 15, 16, 23, 26, 37, 47, 64, - 23, 20, 23, 26, 39, 51, 64, 64, - 27, 23, 26, 37, 51, 64, 64, 64, - 31, 23, 35, 47, 64, 64, 64, 64, - 30, 30, 47, 64, 64, 64, 64, 64], - [17, 15, 17, 21, 20, 26, 38, 48, - 15, 19, 18, 17, 20, 26, 35, 43, - 17, 18, 20, 22, 26, 30, 46, 53, - 21, 17, 22, 28, 30, 39, 53, 64, - 20, 20, 26, 30, 39, 48, 64, 64, - 26, 26, 30, 39, 48, 63, 64, 64, - 38, 35, 46, 53, 64, 64, 64, 64, - 48, 43, 53, 64, 64, 64, 64, 64] - ]}, - 'web_high': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [6, 4, 4, 6, 9, 11, 12, 16, - 4, 5, 5, 6, 8, 10, 12, 12, - 4, 5, 5, 6, 10, 12, 14, 19, - 6, 6, 6, 11, 12, 15, 19, 28, - 9, 8, 10, 12, 16, 20, 27, 31, - 11, 10, 12, 15, 20, 27, 31, 31, - 12, 12, 14, 19, 27, 31, 31, 31, - 16, 12, 19, 28, 31, 31, 31, 31], - [7, 7, 13, 24, 26, 31, 31, 31, - 7, 12, 16, 21, 31, 31, 31, 31, - 13, 16, 17, 31, 31, 31, 31, 31, - 24, 21, 31, 31, 31, 31, 31, 31, - 26, 31, 31, 31, 31, 31, 31, 31, - 31, 31, 31, 31, 31, 31, 31, 31, - 31, 31, 31, 31, 31, 31, 31, 31, - 31, 31, 31, 31, 31, 31, 31, 31] - ]}, - 'web_very_high': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 4, 5, 7, 9, - 2, 2, 2, 4, 5, 
7, 9, 12, - 3, 3, 4, 5, 8, 10, 12, 12, - 4, 4, 5, 7, 10, 12, 12, 12, - 5, 5, 7, 9, 12, 12, 12, 12, - 6, 6, 9, 12, 12, 12, 12, 12], - [3, 3, 5, 9, 13, 15, 15, 15, - 3, 4, 6, 11, 14, 12, 12, 12, - 5, 6, 9, 14, 12, 12, 12, 12, - 9, 11, 14, 12, 12, 12, 12, 12, - 13, 14, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'web_maximum': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 2, - 1, 1, 1, 1, 1, 1, 2, 2, - 1, 1, 1, 1, 1, 2, 2, 3, - 1, 1, 1, 1, 2, 2, 3, 3, - 1, 1, 1, 2, 2, 3, 3, 3, - 1, 1, 2, 2, 3, 3, 3, 3], - [1, 1, 1, 2, 2, 3, 3, 3, - 1, 1, 1, 2, 3, 3, 3, 3, - 1, 1, 1, 3, 3, 3, 3, 3, - 2, 2, 3, 3, 3, 3, 3, 3, - 2, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3] - ]}, - 'low': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [18, 14, 14, 21, 30, 35, 34, 17, - 14, 16, 16, 19, 26, 23, 12, 12, - 14, 16, 17, 21, 23, 12, 12, 12, - 21, 19, 21, 23, 12, 12, 12, 12, - 30, 26, 23, 12, 12, 12, 12, 12, - 35, 23, 12, 12, 12, 12, 12, 12, - 34, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12], - [20, 19, 22, 27, 20, 20, 17, 17, - 19, 25, 23, 14, 14, 12, 12, 12, - 22, 23, 14, 14, 12, 12, 12, 12, - 27, 14, 14, 12, 12, 12, 12, 12, - 20, 14, 12, 12, 12, 12, 12, 12, - 20, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'medium': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [12, 8, 8, 12, 17, 21, 24, 17, - 8, 9, 9, 11, 15, 19, 12, 12, - 8, 9, 10, 12, 19, 12, 12, 12, - 12, 11, 12, 21, 12, 12, 12, 12, - 17, 15, 19, 12, 12, 12, 12, 12, - 21, 19, 12, 12, 12, 12, 12, 12, - 24, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12], - [13, 11, 13, 16, 20, 20, 17, 17, - 11, 14, 14, 14, 14, 12, 12, 12, - 13, 14, 14, 14, 12, 12, 12, 12, - 16, 14, 14, 12, 12, 12, 12, 12, - 20, 14, 12, 12, 12, 12, 12, 12, - 20, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'high': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [6, 4, 4, 6, 9, 11, 12, 16, - 4, 5, 5, 6, 8, 10, 12, 12, - 4, 5, 5, 6, 10, 12, 12, 12, - 6, 6, 6, 11, 12, 12, 12, 12, - 9, 8, 10, 12, 12, 12, 12, 12, - 11, 10, 12, 12, 12, 12, 12, 12, - 12, 12, 12, 12, 12, 12, 12, 12, - 16, 12, 12, 12, 12, 12, 12, 12], - [7, 7, 13, 24, 20, 20, 17, 17, - 7, 12, 16, 14, 14, 12, 12, 12, - 13, 16, 14, 14, 12, 12, 12, 12, - 24, 14, 14, 12, 12, 12, 12, 12, - 20, 14, 12, 12, 12, 12, 12, 12, - 20, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'maximum': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 4, 5, 7, 9, - 2, 2, 2, 4, 5, 7, 9, 12, - 3, 3, 4, 5, 8, 10, 12, 12, - 4, 4, 5, 7, 10, 12, 12, 12, - 5, 5, 7, 9, 12, 12, 12, 12, - 6, 6, 9, 12, 12, 12, 12, 12], - [3, 3, 5, 9, 13, 15, 15, 15, - 3, 4, 6, 10, 14, 12, 12, 12, - 5, 6, 9, 14, 12, 12, 12, 12, - 9, 10, 14, 12, 12, 12, 12, 12, - 13, 14, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12] - ]}, -} -# fmt: on diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/data_cfg.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/data_cfg.py deleted file mode 100644 index 
fba36dfcf07c35ac21fd77f0b58837fd002a3e3a..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/data_cfg.py +++ /dev/null @@ -1,299 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -from argparse import Namespace -from pathlib import Path -from typing import Dict, Optional - -from fairseq.data import Dictionary - - -def get_config_from_yaml(yaml_path: Path): - try: - import yaml - except ImportError: - print("Please install PyYAML: pip install PyYAML") - config = {} - if yaml_path.is_file(): - try: - with open(yaml_path) as f: - config = yaml.load(f, Loader=yaml.FullLoader) - except Exception as e: - raise Exception(f"Failed to load config from {yaml_path.as_posix()}: {e}") - else: - raise FileNotFoundError(f"{yaml_path.as_posix()} not found") - - return config - - -class S2TDataConfig(object): - """Wrapper class for data config YAML""" - - def __init__(self, yaml_path: Path): - self.config = get_config_from_yaml(yaml_path) - self.root = yaml_path.parent - - def _auto_convert_to_abs_path(self, x): - if isinstance(x, str): - if not Path(x).exists() and (self.root / x).exists(): - return (self.root / x).as_posix() - elif isinstance(x, dict): - return {k: self._auto_convert_to_abs_path(v) for k, v in x.items()} - return x - - @property - def vocab_filename(self): - """fairseq vocabulary file under data root""" - return self.config.get("vocab_filename", "dict.txt") - - @property - def speaker_set_filename(self): - """speaker set file under data root""" - return self.config.get("speaker_set_filename", None) - - @property - def shuffle(self) -> bool: - """Shuffle dataset samples before batching""" - return self.config.get("shuffle", False) - - @property - def pre_tokenizer(self) -> Dict: - """Pre-tokenizer to apply before subword tokenization. Returning - a dictionary with `tokenizer` providing the tokenizer name and - the other items providing the tokenizer-specific arguments. - Tokenizers are defined in `fairseq.data.encoders.*`""" - tokenizer = self.config.get("pre_tokenizer", {"tokenizer": None}) - return self._auto_convert_to_abs_path(tokenizer) - - @property - def bpe_tokenizer(self) -> Dict: - """Subword tokenizer to apply after pre-tokenization. Returning - a dictionary with `bpe` providing the tokenizer name and - the other items providing the tokenizer-specific arguments. - Tokenizers are defined in `fairseq.data.encoders.*`""" - tokenizer = self.config.get("bpe_tokenizer", {"bpe": None}) - return self._auto_convert_to_abs_path(tokenizer) - - @property - def prepend_tgt_lang_tag(self) -> bool: - """Prepend target lang ID token as the target BOS (e.g. for to-many - multilingual setting). During inference, this requires `--prefix-size 1` - to force BOS to be lang ID token.""" - return self.config.get("prepend_tgt_lang_tag", False) - - @property - def prepend_bos_and_append_tgt_lang_tag(self) -> bool: - """Prepend BOS and append target lang ID token to the target (e.g. 
mBART with language token pretraining).""" - return self.config.get("prepend_bos_and_append_tgt_lang_tag", False) - - @property - def input_feat_per_channel(self): - """The dimension of input features (per audio channel)""" - return self.config.get("input_feat_per_channel", 80) - - @property - def input_channels(self): - """The number of channels in the input audio""" - return self.config.get("input_channels", 1) - - @property - def sample_rate(self): - return self.config.get("sample_rate", 16_000) - - @property - def sampling_alpha(self): - """Hyper-parameter alpha = 1/T for temperature-based resampling. - (alpha = 1 for no resampling)""" - return self.config.get("sampling_alpha", 1.0) - - @property - def use_audio_input(self): - """Needed by the dataset loader to see if the model requires - raw audio as inputs.""" - return self.config.get("use_audio_input", False) - - def standardize_audio(self) -> bool: - return self.use_audio_input and self.config.get("standardize_audio", False) - - @property - def use_sample_rate(self): - """Needed by the dataset loader to see if the model requires - raw audio with specific sample rate as inputs.""" - return self.config.get("use_sample_rate", 16000) - - @property - def audio_root(self): - """Audio paths in the manifest TSV can be relative and this provides - the root path. Set this to empty string when using absolute paths.""" - return self.config.get("audio_root", "") - - def get_feature_transforms(self, split, is_train): - """Split-specific feature transforms. Allowing train set - wildcard `_train`, evaluation set wildcard `_eval` and general - wildcard `*` for matching.""" - from copy import deepcopy - - cfg = deepcopy(self.config) - _cur = cfg.get("transforms", {}) - cur = _cur.get(split) - cur = _cur.get("_train") if cur is None and is_train else cur - cur = _cur.get("_eval") if cur is None and not is_train else cur - cur = _cur.get("*") if cur is None else cur - cfg["transforms"] = cur - return cfg - - @property - def global_cmvn_stats_npz(self) -> Optional[str]: - path = self.config.get("global_cmvn", {}).get("stats_npz_path", None) - return self._auto_convert_to_abs_path(path) - - @property - def vocoder(self) -> Dict[str, str]: - vocoder = self.config.get("vocoder", {"type": "griffin_lim"}) - return self._auto_convert_to_abs_path(vocoder) - - @property - def hub(self) -> Dict[str, str]: - return self.config.get("hub", {}) - - -class S2SDataConfig(S2TDataConfig): - """Wrapper class for data config YAML""" - - @property - def vocab_filename(self): - """fairseq vocabulary file under data root""" - return self.config.get("vocab_filename", None) - - @property - def pre_tokenizer(self) -> Dict: - return None - - @property - def bpe_tokenizer(self) -> Dict: - return None - - @property - def input_transformed_channels(self): - """The number of channels in the audio after feature transforms""" - # TODO: move this into individual transforms - _cur = self.config.get("transforms", {}) - cur = _cur.get("_train", []) - - _channels = self.input_channels - if "delta_deltas" in cur: - _channels *= 3 - - return _channels - - @property - def output_sample_rate(self): - """The audio sample rate of output target speech""" - return self.config.get("output_sample_rate", 22050) - - @property - def target_speaker_embed(self): - """Target speaker embedding file (one line per target audio sample)""" - return self.config.get("target_speaker_embed", None) - - @property - def prepend_tgt_lang_tag_as_bos(self) -> bool: - """Prepend target lang ID token as the target 
BOS.""" - return self.config.get("prepend_tgt_lang_tag_as_bos", False) - - -class MultitaskConfig(object): - """Wrapper class for data config YAML""" - - def __init__(self, yaml_path: Path): - config = get_config_from_yaml(yaml_path) - self.config = {} - for k, v in config.items(): - self.config[k] = SingleTaskConfig(k, v) - - def get_all_tasks(self): - return self.config - - def get_single_task(self, name): - assert name in self.config, f"multitask '{name}' does not exist!" - return self.config[name] - - -class SingleTaskConfig(object): - def __init__(self, name, config): - self.task_name = name - self.config = config - dict_path = config.get("dict", "") - self.tgt_dict = Dictionary.load(dict_path) if Path(dict_path).exists() else None - - @property - def data(self): - return self.config.get("data", "") - - @property - def decoder_type(self): - return self.config.get("decoder_type", "transformer") - - @property - def decoder_args(self): - """Decoder arch related args""" - args = self.config.get("decoder_args", {}) - return Namespace(**args) - - @property - def criterion_cfg(self): - """cfg for the multitask criterion""" - if self.decoder_type == "ctc": - from fairseq.criterions.ctc import CtcCriterionConfig - - cfg = CtcCriterionConfig - cfg.zero_infinity = self.config.get("zero_infinity", True) - else: - from fairseq.criterions.label_smoothed_cross_entropy import ( - LabelSmoothedCrossEntropyCriterionConfig, - ) - - cfg = LabelSmoothedCrossEntropyCriterionConfig - cfg.label_smoothing = self.config.get("label_smoothing", 0.2) - return cfg - - @property - def input_from(self): - """Condition on encoder/decoder of the main model""" - return "decoder" if "decoder_layer" in self.config else "encoder" - - @property - def input_layer(self): - if self.input_from == "decoder": - return self.config["decoder_layer"] - 1 - else: - # default using the output from the last encoder layer (-1) - return self.config.get("encoder_layer", 0) - 1 - - @property - def loss_weight_schedule(self): - return ( - "decay" - if "loss_weight_max" in self.config - and "loss_weight_decay_steps" in self.config - else "fixed" - ) - - def get_loss_weight(self, num_updates): - if self.loss_weight_schedule == "fixed": - weight = self.config.get("loss_weight", 1.0) - else: # "decay" - assert ( - self.config.get("loss_weight_decay_steps", 0) > 0 - ), "loss_weight_decay_steps must be greater than 0 for a decay schedule" - loss_weight_min = self.config.get("loss_weight_min", 0.0001) - loss_weight_decay_stepsize = ( - self.config["loss_weight_max"] - loss_weight_min - ) / self.config["loss_weight_decay_steps"] - weight = max( - self.config["loss_weight_max"] - - loss_weight_decay_stepsize * num_updates, - loss_weight_min, - ) - return weight diff --git a/spaces/autumn8/selectModel/app.py b/spaces/autumn8/selectModel/app.py deleted file mode 100644 index 44992b942c7dc83bf066621d72a1e064625a37a3..0000000000000000000000000000000000000000 --- a/spaces/autumn8/selectModel/app.py +++ /dev/null @@ -1,760 +0,0 @@ -from transformers import TextClassificationPipeline -from transformers import AutoTokenizer -from transformers import pipeline -import evaluate -import gradio as gr -import torch -import random -from transformers.file_utils import is_tf_available, is_torch_available, is_torch_tpu_available -from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments -from datasets import load_metric -from sklearn.model_selection import train_test_split -import pandas as pd -import numpy as np -import 
streamlit as st -from textblob import TextBlob -from streamlit_extras.switch_page_button import switch_page -from transformers import YolosImageProcessor, YolosForObjectDetection -from PIL import Image -import torch -import requests -import numpy as np -import torchvision -from torchvision.io import read_image -from torchvision.utils import draw_bounding_boxes -from transformers import DetrImageProcessor, DetrForObjectDetection -from transformers import DetrImageProcessor, DetrForObjectDetection -from transformers import pipeline -import torch -from transformers import PegasusForConditionalGeneration, PegasusTokenizer - - -st.set_page_config(layout="wide") -def get_models(prompt): - #prompt = input("Enter your AI task idea:") - response = pipe(prompt) - print("AI Model Idea: ", prompt,"\n") - - x = pd.json_normalize(response[0]) - # x.nlargest(3,['score'])["label"].values - knowledge_base_tasks = ['depth-estimation', 'image-classification', 'image-segmentation', - 'image-to-image', 'object-detection', 'video-classification', - 'unconditional-image-generation', 'zero-shot-image-classification', - 'conversational', 'fill-mask', 'question-answering', - 'sentence-similarity', 'summarization', 'table-question-answering', - 'text-classification', 'text-generation', 'token-classification', - 'translation', 'zero-shot-classification'] - - temp = [] - for label_code in x.nlargest(3,['score'])["label"].values: - temp.append(label_code[6:]) - # temp - - cat_to_model = {} - top_cats = [] - - for i in range(len(temp)): - print("Possible Category ",i+1," : ",knowledge_base_tasks[int(temp[i])]) - print("Top three models for this category are:",models_list[models_list["pipeline_tag"] == knowledge_base_tasks[int(temp[i])]].nlargest(3,"downloads")["modelId"].values) - cat_to_model[knowledge_base_tasks[int(temp[i])]] = models_list[models_list["pipeline_tag"] == knowledge_base_tasks[int(temp[i])]].nlargest(3,"downloads")["modelId"].values - top_cats.append(knowledge_base_tasks[int(temp[i])]) - # models_list[models_list["pipeline_tag"] == "image-classification"].nlargest(3,"downloads")["modelId"].values - print() - print("Returning category-models dictionary..") - return top_cats,cat_to_model - - - -def get_top_3(top_cat): - - top_3_df = pd.read_csv("./Top_3_models.csv") - top_3 = [] - for i in range(top_3_df.shape[0]): - if top_3_df["Category"].iloc[i].lower() == top_cat: - top_3.append(top_3_df["Model_1"].iloc[i]) - top_3.append(top_3_df["Model_2"].iloc[i]) - top_3.append(top_3_df["Model_3"].iloc[i]) - break - return top_3 - - - - - -def get_top_3_a(prompt,pipe): - response = pipe(prompt) - x = pd.json_normalize(response[0]) - temp = [] - for label_code in x.nlargest(3,['score'])["label"].values: - temp.append(label_code[6:]) - knowledge_base_tasks = ['depth-estimation', 'image-classification', 'image-segmentation', - 'image-to-image', 'object-detection', 'video-classification', - 'unconditional-image-generation', 'zero-shot-image-classification', - 'conversational', 'fill-mask', 'question-answering', - 'sentence-similarity', 'summarization', 'table-question-answering', - 'text-classification', 'text-generation', 'token-classification', - 'translation', 'zero-shot-classification'] - - top_cat = knowledge_base_tasks[int(temp[0])] - - - top_3_df = pd.read_csv("./Top_3_models.csv") - top_3 = [] - for i in range(top_3_df.shape[0]): - if top_3_df["Category"].iloc[i] == top_cat: - top_3.append(top_3_df["Model_1"].iloc[i]) - top_3.append(top_3_df["Model_2"].iloc[i]) - top_3.append(top_3_df["Model_3"].iloc[i]) - 
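        # Grounded in the code above: Top_3_models.csv holds one row per category,
        # so once the matching row's three model names are collected there is
        # nothing left to scan and the loop can stop.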
break - return top_cat,top_3 - - - - -def get_response(input_text,model_name): - torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' - tokenizer = PegasusTokenizer.from_pretrained(model_name) - model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) - batch = tokenizer([input_text],truncation=True,padding='longest',max_length=1024, return_tensors="pt").to(torch_device) - gen_out = model.generate(**batch,max_length=128,num_beams=5, num_return_sequences=1, temperature=1.5) - output_text = tokenizer.batch_decode(gen_out, skip_special_tokens=True) - return output_text - - -def summarizer (models, data): - model_Eval = {} - for i in range (len(models)): - # print(models[i]) - if models[i] == 'tuner007/pegasus_summarizer': - model_name = 'tuner007/pegasus_summarizer' - - result = get_response(data,model_name) - rouge = evaluate.load('rouge') - # print("345",rouge.compute(predictions=[result],references=[data])) - print(type(result), type([data])) - quality = rouge.compute(predictions=[result[0]],references=[data]) - model_Eval[models[i]] = {"Score":quality,"Result": result} - else: - summarizer_model = pipeline("summarization", model = models[i]) - print(models[i], summarizer_model(data)) - try: - result = summarizer_model(data)[0]["summary_text"] - rouge = evaluate.load('rouge') - # print("345",rouge.compute(predictions=[result],references=[data])) - quality = rouge.compute(predictions=[result],references=[data]) - model_Eval[models[i]] = {"Score":quality,"Result": result} - except: - print("Model {} has issues.".format(models[i])) - - return model_Eval - - - - -def best_model (analysis, data): - best_model_score = 0 - best_model_name = "" - best_model_result = "" - temp2 = 0 - for model in analysis.keys(): - temp1 = analysis[model]["Score"]["rougeLsum"] - if temp1 > temp2: - temp2 = analysis[model]["Score"]["rougeLsum"] - best_model_score = analysis[model]["Score"] - best_model_name = model - best_model_result = analysis[model]["Result"] - - return best_model_name, best_model_score,data[:50],best_model_result.replace("\n","") - - - -def text_summarization(): - top_models = get_top_3("summarization") -# st.write("Upload your file: ") -# uploaded_files = "" -# uploaded_files = st.file_uploader("Choose your file", accept_multiple_files=True) - - - - - option = st.selectbox( - 'What text would you like AI to summarize for you now ?', - ("Choose text files below:",'How to Win friends - Text', 'The Age of Intelligent Machines', 'The Singularity is Near - Ray Kurzweil.txt')) #add 2 other options of files here - - if option == 'How to Win friends - Text' or option == 'The Age of Intelligent Machines' or option == 'The Singularity is Near - Ray Kurzweil.txt':### update book text files here - st.write('You selected:', option) - - if option == 'How to Win friends - Text': # add text - name = "How_to_win_friends.txt" - st.write("Selected file for analyis is: How_to_win_friends.txt") - st.markdown(f'
    {"Thank you for your patience. AI is generating 3 outputs to compare"}
    ', unsafe_allow_html=True) - - if option == 'The Age of Intelligent Machines': - name = "The Age of Intelligent Machines.txt" - st.write("Selected file for analyis is: The Age of Intelligent Machines.txt") - st.markdown(f'
    {"Thank you for your patience. AI is generating 3 outputs to compare"}
    ', unsafe_allow_html=True) - - if option == "The Singularity is Near - Ray Kurzweil.txt": - name = "The Singularity is Near - Ray Kurzweil.txt" - st.write("The Singularity is Near - Ray Kurzweil.txt") - st.markdown(f'
    {"Thank you for your patience. AI is generating 3 outputs to compare"}
    ', unsafe_allow_html=True) - - if st.button("Accept"): - global file_data -# st.write("filename:", uploaded_files) -# for uploaded_file in uploaded_files: -# # print("here") -# file_data = open(uploaded_file.name,encoding="utf8").read() -# st.write("filename:", uploaded_file.name) -# # st.write(file_data[:500]) -# # print("before summarizer") -# print(file_data[:50]) - file_data = open(name,encoding="utf8").read() - - analysis = summarizer(models = top_models, data = file_data[:500]) - - x,c,v,b = best_model(analysis,file_data[:500]) -# st.write("Best model for Task: ",z) - - st.markdown(f'
    {"Best Model with Summarization Results"}
    ', unsafe_allow_html=True) - st.write("\nBest model name: ",x) -# st.write("\nBest model Score: ",c) - - st.write("Best Model Rouge Scores: ") - st.write("Rouge 1 Score: ",c["rouge1"]) - st.write("Rouge 2 Score: ",c["rouge2"]) - st.write("Rouge L Score: ",c["rougeL"]) - st.write("Rouge LSum Score: ",c["rougeLsum"]) - - st.write("\nOriginal Data first 50 characters: ", v) - st.write("\nBest Model Result: ",b) - - -# print("between summarizer analysis") - st.markdown(f'
    {"Summarization Results for Model 1: Bart"}
    ', unsafe_allow_html=True) -# st.write("Summarization Results for Model 1") - st.write("Model name: facebook/bart-large-cnn") - st.write("Rouge Scores: ") - st.write("Rouge 1 Score: ",analysis["facebook/bart-large-cnn"]["Score"]["rouge1"]) - st.write("Rouge 2 Score: ",analysis["facebook/bart-large-cnn"]["Score"]["rouge2"]) - st.write("Rouge L Score: ",analysis["facebook/bart-large-cnn"]["Score"]["rougeL"]) - st.write(f"Rouge LSum Score: ",analysis["facebook/bart-large-cnn"]["Score"]["rougeLsum"]) - st.write("Result: ", analysis["facebook/bart-large-cnn"]["Result"]) - - st.markdown(f'
    {"Summarization Results for Model 2: Pegasus"}
    ', unsafe_allow_html=True) -# st.write("Summarization Results for Model 2") - st.write("Model name: tuner007/pegasus_summarizer") - st.write("Rouge Scores: ") - st.write("Rouge 1 Score: ",analysis["tuner007/pegasus_summarizer"]["Score"]["rouge1"]) - st.write("Rouge 2 Score: ",analysis["tuner007/pegasus_summarizer"]["Score"]["rouge2"]) - st.write("Rouge L Score: ",analysis["tuner007/pegasus_summarizer"]["Score"]["rougeL"]) - st.write("Rouge LSum Score: ",analysis["tuner007/pegasus_summarizer"]["Score"]["rougeLsum"]) - st.write("Result: ", analysis["tuner007/pegasus_summarizer"]["Result"][0]) - - - - st.markdown(f'
    {"Summarization Results for Model 3: Distilbart"}
    ', unsafe_allow_html=True) -# st.write("Summarization Results for Model 3") - st.write("Model name: sshleifer/distilbart-cnn-12-6") - st.write("Rouge Scores: ") - st.write("Rouge 1 Score: ",analysis["sshleifer/distilbart-cnn-12-6"]["Score"]["rouge1"]) - st.write("Rouge 2 Score: ",analysis["sshleifer/distilbart-cnn-12-6"]["Score"]["rouge2"]) - st.write("Rouge L Score: ",analysis["sshleifer/distilbart-cnn-12-6"]["Score"]["rougeL"]) - st.write("Rouge LSum Score: ",analysis["sshleifer/distilbart-cnn-12-6"]["Score"]["rougeLsum"]) - - st.write("Result: ", analysis["sshleifer/distilbart-cnn-12-6"]["Result"]) - - - - -#OBJECT DETECTION - -def yolo_tiny(name): - image = read_image(name) - - model = YolosForObjectDetection.from_pretrained('hustvl/yolos-tiny') - image_processor = YolosImageProcessor.from_pretrained("hustvl/yolos-tiny") - - inputs = image_processor(images=image, return_tensors="pt") - outputs = model(**inputs) - - # model predicts bounding boxes and corresponding COCO classes - logits = outputs.logits - bboxes = outputs.pred_boxes - - - # print results - target_sizes = torch.tensor([image.shape[::-1][:2]]) - - results = image_processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0] - - label_ = [] - bboxes = [] - - for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): - box = [round(i, 2) for i in box.tolist()] - print( - f"Detected {model.config.id2label[label.item()]} with confidence " - f"{round(score.item(), 3)} at location {box}" - ) - - label_.append(model.config.id2label[label.item()]) - bboxes.append(np.asarray(box,dtype="int")) - bboxes = torch.tensor(bboxes, dtype=torch.int) - - img=draw_bounding_boxes(image, bboxes,labels = label_, width=3) - img = torchvision.transforms.ToPILImage()(img) - return img -# img.show() - - - -def resnet_101(name): - image = read_image(name) - processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-101") - model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-101") - - inputs = processor(images=image, return_tensors="pt") - outputs = model(**inputs) - - # convert outputs (bounding boxes and class logits) to COCO API - # let's only keep detections with score > 0.9 - target_sizes = torch.tensor([image.shape[::-1][:2]]) - results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0] - label_ = [] - bboxes = [] - for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): - box = [round(i, 2) for i in box.tolist()] - print( - f"Detected {model.config.id2label[label.item()]} with confidence " - f"{round(score.item(), 3)} at location {box}") - label_.append(model.config.id2label[label.item()]) - bboxes.append(np.asarray(box,dtype="int")) - bboxes = torch.tensor(bboxes, dtype=torch.int) - - - bboxes = torch.tensor(bboxes, dtype=torch.int) - - img=draw_bounding_boxes(image, bboxes,labels = label_, width=3) - img = torchvision.transforms.ToPILImage()(img) - return img - - - - - -def resnet_50(name): - image = read_image(name) - processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50") - model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50") - - inputs = processor(images=image, return_tensors="pt") - outputs = model(**inputs) - - # convert outputs (bounding boxes and class logits) to COCO API - # let's only keep detections with score > 0.9 - target_sizes = torch.tensor([image.shape[::-1][:2]]) - results = processor.post_process_object_detection(outputs, 
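        # note: the 0.7 threshold passed below (not the 0.9 quoted in the comment
        # above) is what actually filters out low-confidence detections here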
target_sizes=target_sizes, threshold=0.7)[0] - label_ = [] - bboxes = [] - for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): - box = [round(i, 2) for i in box.tolist()] - print( - f"Detected {model.config.id2label[label.item()]} with confidence " - f"{round(score.item(), 3)} at location {box}" - ) - label_.append(model.config.id2label[label.item()]) - bboxes.append(np.asarray(box,dtype="int")) - bboxes = torch.tensor(bboxes, dtype=torch.int) - - bboxes = torch.tensor(bboxes, dtype=torch.int) - - img=draw_bounding_boxes(image, bboxes,labels = label_, width=3) - img = torchvision.transforms.ToPILImage()(img) - return img - - - -def object_detection(): -# st.write("Upload your image: ") -# uploaded_files = "" -# uploaded_files = st.file_uploader("Choose a image file", accept_multiple_files=True) - - option = st.selectbox( - 'What image you want for analysis?', - ("Choose an image for object detection analysis from the options below:",'Cat and Dog', '2 lazy cats chilling on a couch', 'An astronaut riding wild horse')) - - if option == 'Cat and Dog' or option == '2 lazy cats chilling on a couch' or option == 'An astronaut riding wild horse': - st.write('You selected:', option) - st.markdown(f'
    {"Thank you for your patience. AI is generating 3 outputs to compare"}
    ', unsafe_allow_html=True) - - if option == 'Cat and Dog': - name = "cat_dog.jpg" - st.image("cat_dog.jpg") - st.markdown(f'
    {"Thank you for your patience. AI is generating 3 outputs to compare"}
    ', unsafe_allow_html=True) - - if option == '2 lazy cats chilling on a couch': - name = "cat_remote.jpg" - st.image("cat_remote.jpg") - st.markdown(f'
    {"Thank you for your patience. AI is generating 3 outputs to compare"}
    ', unsafe_allow_html=True) - - if option == 'An astronaut riding wild horse': - name = "astronaut_rides_horse.png" - st.image("astronaut_rides_horse.png") - st.markdown(f'
    {"Thank you for your patience. AI is generating 3 outputs to compare"}
    ', unsafe_allow_html=True) - - if st.button("Accept"): - # global file_data -# st.write("filename:", uploaded_files) -# for uploaded_file in uploaded_files: - # print("here") - # file_data = open(uploaded_file.name).read() - st.write("filename:", name) -# name = uploaded_file.name - st.image([yolo_tiny(name),resnet_101(name),resnet_50(name)],caption=["hustvl/yolos-tiny","facebook/detr-resnet-101","facebook/detr-resnet-50"]) - - -def task_categorization_model_predictions(): - st.image("./examples.png") - - # st.title("Text Analysis App") - - data = "" - - classifier = pipeline("zero-shot-classification",model="facebook/bart-large-mnli") - - global check - - st.markdown(f'
    {"Write down below the description of your AI application in a few sentences:"}
    ', unsafe_allow_html=True) - - prompt = st.text_input(" ") - - st.write("") - st.write("") - - if prompt != "": - # sbert_saved_model = torch.load("Sbert_saved_model", map_location=torch.device('cpu')).to("cpu") - # model = sbert_saved_model.to("cpu") - # tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-mpnet-base-v2") - # pipe = TextClassificationPipeline(model= model, tokenizer=tokenizer, return_all_scores=True) - # # outputs a list of dicts like [[{'label': 'NEGATIVE', 'score': 0.0001223755971295759}, {'label': 'POSITIVE', 'score': 0.9998776316642761}]] - - # # prompt = ["What is the the best ai for putting text report into data table?","How can I generate car sales agreement with ai model?","AI model to detect burglar on 48 hours of cctv video footage","I need Ai model help me with rewriting 50 financial statements emails into one summary report ?","I need a model for extracting person from an image"] - # # responses = pipe(prompt) - - - # models_list = pd.read_csv("models.csv") - # # st.write(get_top_3(prompt)) - - # top_cat, top_models = get_top_3(prompt) - # # prompt = input("Enter your AI task idea:") - # # top_cats,cat_to_models = get_models(prompt) - - # # top_models = cat_to_models[top_cats[0]] - - # top_cat = " " + top_cat[0].upper() + top_cat[1:] - - - - st.markdown(f'
    {"Recognized AI Domain: "}
    ', unsafe_allow_html=True) - - domains = ["Computer Vision Task","Natural Language Processing Problem","Audio Operations Problem","Tabular Data Task","Reinforcement Learning Problem","Time Series Forecasting Problem"] - - - - #st.write(classifier(prompt, domains)) - domain = classifier(prompt, domains)["labels"][0] - - st.markdown(f'
    {domain}
    ', unsafe_allow_html=True) - # st.write("Recommended AI Domain Type: ",top_cat) - check = 0 - if st.button("This seems accurate"): - check = 1 - if st.button("Show me other likely category recommendations:"): - if domain == "Tabular Data Problem": - if st.button("Computer Vision Task"): - domain = "Computer Vision Task" - check = 1 - if st.button("Natural Language Processing Problem"): - domain = "Natural Language Processing Problem" - check = 1 - if st.button("Multimodal AI Model"): - domain = "Multimodal AI Model" - check = 1 - if st.button("Audio Operations Problem"): - domain = "Audio Operations Problem" - check = 1 - # if st.button("Tabular Data Task"): - # domain = "Tabular Data Task" - if st.button("Reinforcement Learning Problem"): - domain = "Reinforcement Learning Problem" - check = 1 - if st.button("Time Series Forecasting Problem"): - domain = "Time Series Forecasting Problem" - check = 1 - - - if domain == "Computer Vision Task": - # if st.button("Computer Vision Task"): - # domain = "Computer Vision Task" - if st.button("Natural Language Processing Problem"): - domain = "Natural Language Processing Problem" - check = 1 - - if st.button("Multimodal AI Model"): - domain = "Multimodal AI Model" - check = 1 - - if st.button("Audio Operations Problem"): - domain = "Audio Operations Problem" - check = 1 - if st.button("Tabular Data Task"): - domain = "Tabular Data Task" - check = 1 - if st.button("Reinforcement Learning Problem"): - domain = "Reinforcement Learning Problem" - check = 1 - if st.button("Time Series Forecasting Problem"): - domain = "Time Series Forecasting Problem" - check = 1 - - - if domain == "Natural Language Processing Problem": - if st.button("Computer Vision Task"): - domain = "Computer Vision Task" - check = 1 - # if st.button("Natural Language Processing Problem"): - # domain = "Natural Language Processing Problem" - if st.button("Multimodal AI Model"): - domain = "multimodal" - check = 1 - if st.button("Audio Operations Problem"): - domain = "Audio Operations Problem" - check = 1 - if st.button("Tabular Data Task"): - domain = "Tabular Data Task" - check = 1 - if st.button("Reinforcement Learning Problem"): - domain = "Reinforcement Learning Problem" - check = 1 - if st.button("Time Series Forecasting Problem"): - domain = "Time Series Forecasting Problem" - check = 1 - - - if domain == "Multimodal AI Model": - if st.button("Computer Vision Task"): - domain = "Computer Vision Task" - check = 1 - if st.button("Natural Language Processing Problem"): - domain = "Natural Language Processing Problem" - check = 1 - # if st.button("Multimodal AI Model"): - # domain = "Multimodal AI Model" - if st.button("Audio Operations Problem"): - domain = "Audio Operations Problem" - check = 1 - if st.button("Tabular Data Task"): - domain = "Tabular Data Task" - check = 1 - if st.button("Reinforcement Learning Problem"): - domain = "Reinforcement Learning Problem" - check = 1 - if st.button("Time Series Forecasting Problem"): - domain = "Time Series Forecasting Problem" - check = 1 - - - if domain == "audio": - if st.button("Computer Vision Task"): - domain = "Computer Vision Task" - check = 1 - if st.button("Natural Language Processing Problem"): - domain = "Natural Language Processing Problem" - check = 1 - if st.button("Multimodal AI Model"): - domain = "Multimodal AI Model" - check = 1 - # if st.button("Audio Operations Problem"): - # domain = "Audio Operations Problem" - if st.button("Tabular Data Task"): - domain = "Tabular Data Task" - check = 1 - if 
st.button("Reinforcement Learning Problem"): - domain = "Reinforcement Learning Problem" - check = 1 - if st.button("Time Series Forecasting Problem"): - domain = "Time Series Forecasting Problem" - check = 1 - - - if domain == "reinforcement-learning": - if st.button("Computer Vision Task"): - domain = "Computer Vision Task" - check = 1 - if st.button("Natural Language Processing Problem"): - domain = "Natural Language Processing Problem" - check = 1 - if st.button("Multimodal AI Model"): - domain = "multimodal" - check = 1 - if st.button("Audio Operations Problem"): - domain = "Audio Operations Problem" - check = 1 - if st.button("Tabular Data Task"): - domain = "Tabular Data Task" - check = 1 - # if st.button("Reinforcement Learning Problem"): - # domain = "Reinforcement Learning Problem" - if st.button("Time Series Forecasting Problem"): - domain = "Time Series Forecasting Problem" - check = 1 - - if domain == "Time Series Forecasting": - if st.button("Computer Vision Task"): - domain = "Computer Vision Task" - check = 1 - if st.button("Natural Language Processing Problem"): - domain = "Natural Language Processing Problem" - check = 1 - if st.button("Multimodal AI Model"): - domain = "Multimodal AI Model" - check = 1 - if st.button("Audio Operations Problem"): - domain = "Audio Operations Problem" - check = 1 - if st.button("Tabular Data Task"): - domain = "Tabular Data Task" - check = 1 - if st.button("Reinforcement Learning Problem"): - domain = "Reinforcement Learning Problem" - check = 1 - # if st.button("Time Series Forecasting Problem"): - # domain = "Time Series Forecasting Problem" - - # st.write("Recommended Models for category: ",top_cats[0], " are:",top_models) - - # st.write("Recommended Task category: ",top_models[0]) - - - - knowledge_base_tasks = {"Computer Vision Task":['depth-estimation', 'image-classification', 'image-segmentation', - 'image-to-image', 'object-detection', 'video-classification', - 'unconditional-image-generation', 'zero-shot-image-classification'],"Natural Language Processing Problem":[ - 'conversational', 'fill-mask', 'question-answering', - 'sentence-similarity', 'summarization', 'table-question-answering', - 'text-classification', 'text-generation', 'token-classification', - 'translation', 'zero-shot-classification'],"Audio Operations Problem":["audio-classification","audio-to-audio","automatic-speech-recognition", - "text-to-speech"],"Tabular Data Task":["tabular-classification","tabular-regression"],"others":["document-question-answering", - "feature-extraction","image-to-text","text-to-image","text-to-video","visual-question-answering"], - "Reinforcement Learning Problem":["reinforcement-learning"],"time-series-forecasting":["time-series-forecasting"]} - - # st.write(check) - # st.write(domain) - if check == 1: - - category = classifier(prompt, knowledge_base_tasks[domain])["labels"][0] - - - st.markdown(f'
    {"Recognized subcategory in Domain: "+domain}
    ', unsafe_allow_html=True) - - st.markdown(f'
    {category}
    ', unsafe_allow_html=True) - - - top_models = get_top_3(category) - #st.write(top_models) - st.markdown(f'
    {"The best models selected for this domain:"}
    ', unsafe_allow_html=True) - - - st.markdown(f'
    {"Top choice: "+top_models[0]}
    ', unsafe_allow_html=True) - - st.image("./buttons1.png") - - # if st.button("Show more"): - - st.markdown(f'
    {"Alternative 1: "+top_models[1]}
    ', unsafe_allow_html=True) - st.image("./buttons1.png") - - - st.markdown(f'
    {"Alternative 2: "+top_models[2]}
    ', unsafe_allow_html=True) - st.image("./buttons1.png") - - - - - - - -def model_selector_sbert(): - # st.title("Text Analysis App") - - data = "" - - st.title("Foundation Model Recommender") - - st.write("""Enter a brief description of your task, and this app will recommend an AI model for you!""") - - st.image("./examples.png") - # st.markdown(f'
    {"Please describe your AI application below:"}
    ', unsafe_allow_html=True) - - prompt = st.text_area("Describe your task:") - - st.write("") - st.write("") - - if st.button("Recommend Model"): - if prompt != "": - sbert_saved_model = torch.load("Sbert_saved_model", map_location=torch.device('cpu')).to("cpu") - model = sbert_saved_model.to("cpu") - tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-mpnet-base-v2") - pipe = TextClassificationPipeline(model= model, tokenizer=tokenizer, return_all_scores=True) - # outputs a list of dicts like [[{'label': 'NEGATIVE', 'score': 0.0001223755971295759}, {'label': 'POSITIVE', 'score': 0.9998776316642761}]] - - # prompt = ["What is the the best ai for putting text report into data table?","How can I generate car sales agreement with ai model?","AI model to detect burglar on 48 hours of cctv video footage","I need Ai model help me with rewriting 50 financial statements emails into one summary report ?","I need a model for extracting person from an image"] - # responses = pipe(prompt) - - - models_list = pd.read_csv("models.csv") - # st.write(get_top_3(prompt)) - - top_cat, top_models = get_top_3_a(prompt,pipe) - # prompt = input("Enter your AI task idea:") - # top_cats,cat_to_models = get_models(prompt) - - # top_models = cat_to_models[top_cats[0]] - - top_cat = " " + top_cat[0].upper() + top_cat[1:] - st.markdown(f'
    {"Recognized AI Domain Category: "}
    ', unsafe_allow_html=True) - - st.markdown(f'
    {top_cat}
    ', unsafe_allow_html=True) - # st.write("Recommended AI Domain Type: ",top_cat) - # st.write("Recommended Models for category: ",top_cats[0], " are:",top_models) - - # st.write("Recommended Task category: ",top_models[0]) - - - st.markdown(f'
    {"The best models selected for this task:"}
    ', unsafe_allow_html=True) - - - st.markdown(f'
    {"Top choice: "+top_models[0]}
    ', unsafe_allow_html=True) - - st.image("./buttons1.png") - - #if st.button("Show more"): - st.markdown(f'
    {"Alternative 1: "+top_models[1]}
    ', unsafe_allow_html=True) - st.image("./buttons1.png") - - st.markdown(f'
    {"Alternative 2: "+top_models[2]}
    ', unsafe_allow_html=True) - st.image("./buttons1.png") - - - - -page_names_to_funcs = { - "Select the best Model for your AI app":model_selector_sbert, - "Compare Model Outputs on Object Detection": object_detection, - "Compare Model Outputs on Text Summarization": text_summarization -} - -demo_name = st.sidebar.selectbox("Choose a demo of model selector or compare inference outputs:", page_names_to_funcs.keys()) -page_names_to_funcs[demo_name]() \ No newline at end of file diff --git a/spaces/avans06/whisper-webui-translate/src/whisper/fasterWhisperContainer.py b/spaces/avans06/whisper-webui-translate/src/whisper/fasterWhisperContainer.py deleted file mode 100644 index 804ea8f027e22246da23d3b08ff9a2ff2f29e4ef..0000000000000000000000000000000000000000 --- a/spaces/avans06/whisper-webui-translate/src/whisper/fasterWhisperContainer.py +++ /dev/null @@ -1,211 +0,0 @@ -import os -from typing import List, Union - -from faster_whisper import WhisperModel, download_model -from src.config import ModelConfig, VadInitialPromptMode -from src.hooks.progressListener import ProgressListener -from src.languages import get_language_from_name -from src.modelCache import ModelCache -from src.prompts.abstractPromptStrategy import AbstractPromptStrategy -from src.whisper.abstractWhisperContainer import AbstractWhisperCallback, AbstractWhisperContainer -from src.utils import format_timestamp - -class FasterWhisperContainer(AbstractWhisperContainer): - def __init__(self, model_name: str, device: str = None, compute_type: str = "float16", - download_root: str = None, - cache: ModelCache = None, models: List[ModelConfig] = []): - super().__init__(model_name, device, compute_type, download_root, cache, models) - - def ensure_downloaded(self): - """ - Ensure that the model is downloaded. This is useful if you want to ensure that the model is downloaded before - passing the container to a subprocess. - """ - model_config = self._get_model_config() - - if os.path.isdir(model_config.url): - model_config.path = model_config.url - else: - model_config.path = download_model(model_config.url, output_dir=self.download_root) - - def _get_model_config(self) -> ModelConfig: - """ - Get the model configuration for the model. - """ - for model in self.models: - if model.name == self.model_name: - return model - return None - - def _create_model(self): - print("Loading faster whisper model " + self.model_name + " for device " + str(self.device)) - model_config = self._get_model_config() - model_url = model_config.url - - if model_config.type == "whisper": - if model_url not in ["tiny", "base", "small", "medium", "large", "large-v1", "large-v2", "large-v3"]: - raise Exception("FasterWhisperContainer does not yet support Whisper models. 
Use ct2-transformers-converter to convert the model to a faster-whisper model.") - if model_url == "large": - # large is an alias for large-v1 - model_url = "large-v1" - - device = self.device - - if (device is None): - device = "auto" - - model = WhisperModel(model_url, device=device, compute_type=self.compute_type) - if "large-v3" in model_url: - # Working with Whisper-large-v3 - # https://github.com/guillaumekln/faster-whisper/issues/547#issuecomment-1797962599 - model.feature_extractor.mel_filters = model.feature_extractor.get_mel_filters(model.feature_extractor.sampling_rate, model.feature_extractor.n_fft, n_mels=128) - return model - - def create_callback(self, language: str = None, task: str = None, - prompt_strategy: AbstractPromptStrategy = None, - **decodeOptions: dict) -> AbstractWhisperCallback: - """ - Create a WhisperCallback object that can be used to transcript audio files. - - Parameters - ---------- - language: str - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - prompt_strategy: AbstractPromptStrategy - The prompt strategy to use. If not specified, the prompt from Whisper will be used. - decodeOptions: dict - Additional options to pass to the decoder. Must be pickleable. - - Returns - ------- - A WhisperCallback object. - """ - return FasterWhisperCallback(self, language=language, task=task, prompt_strategy=prompt_strategy, **decodeOptions) - -class FasterWhisperCallback(AbstractWhisperCallback): - def __init__(self, model_container: FasterWhisperContainer, language: str = None, task: str = None, - prompt_strategy: AbstractPromptStrategy = None, - **decodeOptions: dict): - self.model_container = model_container - self.language = language - self.task = task - self.prompt_strategy = prompt_strategy - self.decodeOptions = decodeOptions - - self._printed_warning = False - - def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None): - """ - Peform the transcription of the given audio file or data. - - Parameters - ---------- - audio: Union[str, np.ndarray, torch.Tensor] - The audio file to transcribe, or the audio data as a numpy array or torch tensor. - segment_index: int - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - progress_listener: ProgressListener - A callback to receive progress updates. 
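        Returns
        -------
        A dict with "segments", "text" and "language" keys, plus the
        faster-whisper extras "language_probability" and "duration",
        mirroring the result structure assembled at the end of this method.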
- """ - model: WhisperModel = self.model_container.get_model() - language_code = self._lookup_language_code(self.language) if self.language else None - - # Copy decode options and remove options that are not supported by faster-whisper - decodeOptions = self.decodeOptions.copy() - verbose = decodeOptions.pop("verbose", None) - - logprob_threshold = decodeOptions.pop("logprob_threshold", None) - - patience = decodeOptions.pop("patience", None) - length_penalty = decodeOptions.pop("length_penalty", None) - suppress_tokens = decodeOptions.pop("suppress_tokens", None) - - if (decodeOptions.pop("fp16", None) is not None): - if not self._printed_warning: - print("WARNING: fp16 option is ignored by faster-whisper - use compute_type instead.") - self._printed_warning = True - - # Fix up decode options - if (logprob_threshold is not None): - decodeOptions["log_prob_threshold"] = logprob_threshold - - decodeOptions["patience"] = float(patience) if patience is not None else 1.0 - decodeOptions["length_penalty"] = float(length_penalty) if length_penalty is not None else 1.0 - - # See if supress_tokens is a string - if so, convert it to a list of ints - decodeOptions["suppress_tokens"] = self._split_suppress_tokens(suppress_tokens) - - initial_prompt = self.prompt_strategy.get_segment_prompt(segment_index, prompt, detected_language) \ - if self.prompt_strategy else prompt - - segments_generator, info = model.transcribe(audio, \ - language=language_code if language_code else detected_language, task=self.task, \ - initial_prompt=initial_prompt, \ - **decodeOptions - ) - - segments = [] - - for segment in segments_generator: - segments.append(segment) - - if progress_listener is not None: - progress_listener.on_progress(segment.end, info.duration, desc=f"Transcribe: {segment_index}") - if verbose: - print("[{}->{}] {}".format(format_timestamp(segment.start, True), format_timestamp(segment.end, True), - segment.text)) - - text = " ".join([segment.text for segment in segments]) - - # Convert the segments to a format that is easier to serialize - whisper_segments = [{ - "text": segment.text, - "start": segment.start, - "end": segment.end, - - # Extra fields added by faster-whisper - "words": [{ - "start": word.start, - "end": word.end, - "word": word.word, - "probability": word.probability - } for word in (segment.words if segment.words is not None else []) ] - } for segment in segments] - - result = { - "segments": whisper_segments, - "text": text, - "language": info.language if info else None, - - # Extra fields added by faster-whisper - "language_probability": info.language_probability if info else None, - "duration": info.duration if info else None - } - - # If we have a prompt strategy, we need to increment the current prompt - if self.prompt_strategy: - self.prompt_strategy.on_segment_finished(segment_index, prompt, detected_language, result) - - if progress_listener is not None: - progress_listener.on_finished(desc=f"Transcribe: {segment_index}.") - return result - - def _split_suppress_tokens(self, suppress_tokens: Union[str, List[int]]): - if (suppress_tokens is None): - return None - if (isinstance(suppress_tokens, list)): - return suppress_tokens - - return [int(token) for token in suppress_tokens.split(",")] - - def _lookup_language_code(self, language: str): - language = get_language_from_name(language) - - if language is None: - raise ValueError("Invalid language: " + language) - - return language.code diff --git a/spaces/avivdm1/AutoGPT/autogpt/speech/say.py 
b/spaces/avivdm1/AutoGPT/autogpt/speech/say.py deleted file mode 100644 index 727983d12bf334205550a54bcd69a7a36824eda4..0000000000000000000000000000000000000000 --- a/spaces/avivdm1/AutoGPT/autogpt/speech/say.py +++ /dev/null @@ -1,41 +0,0 @@ -""" Text to speech module """ -import threading -from threading import Semaphore - -from autogpt.config import Config -from autogpt.speech.brian import BrianSpeech -from autogpt.speech.eleven_labs import ElevenLabsSpeech -from autogpt.speech.gtts import GTTSVoice -from autogpt.speech.macos_tts import MacOSTTS - -CFG = Config() -DEFAULT_VOICE_ENGINE = GTTSVoice() -VOICE_ENGINE = None -if CFG.elevenlabs_api_key: - VOICE_ENGINE = ElevenLabsSpeech() -elif CFG.use_mac_os_tts == "True": - VOICE_ENGINE = MacOSTTS() -elif CFG.use_brian_tts == "True": - VOICE_ENGINE = BrianSpeech() -else: - VOICE_ENGINE = GTTSVoice() - - -QUEUE_SEMAPHORE = Semaphore( - 1 -) # The amount of sounds to queue before blocking the main thread - - -def say_text(text: str, voice_index: int = 0) -> None: - """Speak the given text using the given voice index""" - - def speak() -> None: - success = VOICE_ENGINE.say(text, voice_index) - if not success: - DEFAULT_VOICE_ENGINE.say(text) - - QUEUE_SEMAPHORE.release() - - QUEUE_SEMAPHORE.acquire(True) - thread = threading.Thread(target=speak) - thread.start() diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/modules/distributions/__init__.py b/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/modules/distributions/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/awacke1/API-Demo/app.py b/spaces/awacke1/API-Demo/app.py deleted file mode 100644 index 8fc472349556a52c5b113d3cf5a5cf8133630262..0000000000000000000000000000000000000000 --- a/spaces/awacke1/API-Demo/app.py +++ /dev/null @@ -1,34 +0,0 @@ -# 5 - -from gradio_client import Client - -client = Client("https://awacke1-mvp-gaia.hf.space/") -result = client.predict( - "Howdy!", # str representing input in 'Type an input and press Enter' Textbox component - 0, # int | float representing input in 'Top-p (nucleus sampling)' Slider component - 0, # int | float representing input in 'Temperature' Slider component - 5, # int | float representing input in 'parameter_10' Number component - "null", # str representing input in 'parameter_4' Chatbot component - fn_index=5 -) -print(result) - - - - - -# 6 - -from gradio_client import Client - -client = Client("https://awacke1-mvp-gaia.hf.space/") -result = client.predict( - "Howdy!", # str representing input in 'Type an input and press Enter' Textbox component - 0, # int | float representing input in 'Top-p (nucleus sampling)' Slider component - 0, # int | float representing input in 'Temperature' Slider component - 5, # int | float representing input in 'parameter_10' Number component - "null", # str representing input in 'parameter_4' Chatbot component - fn_index=6 -) -print(result) - diff --git a/spaces/awacke1/THREEJS-ChatGPT-ASR-Wikipedia-Twitter-Sentiment-FactChecker-VoiceClone/index.html b/spaces/awacke1/THREEJS-ChatGPT-ASR-Wikipedia-Twitter-Sentiment-FactChecker-VoiceClone/index.html deleted file mode 100644 index 7f500cf14c673dd646397ec9d485a4adc929d384..0000000000000000000000000000000000000000 --- a/spaces/awacke1/THREEJS-ChatGPT-ASR-Wikipedia-Twitter-Sentiment-FactChecker-VoiceClone/index.html +++ /dev/null @@ -1,90 +0,0 @@ - - - - - My VR App - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ 
No newline at end of file diff --git a/spaces/awacke1/Webcam-Object-Recognition-Yolo-n-Coco/utils.py b/spaces/awacke1/Webcam-Object-Recognition-Yolo-n-Coco/utils.py deleted file mode 100644 index 1144290321351cbf14fb06c8cb2e13782a818e71..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Webcam-Object-Recognition-Yolo-n-Coco/utils.py +++ /dev/null @@ -1,475 +0,0 @@ -import numpy as np -import cv2 -import pandas as pd -import operator -import matplotlib.pyplot as plt -import os -from sklearn.model_selection import train_test_split -from tensorflow.keras.utils import Sequence -from config import yolo_config - - -def load_weights(model, weights_file_path): - conv_layer_size = 110 - conv_output_idxs = [93, 101, 109] - with open(weights_file_path, 'rb') as file: - major, minor, revision, seen, _ = np.fromfile(file, dtype=np.int32, count=5) - - bn_idx = 0 - for conv_idx in range(conv_layer_size): - conv_layer_name = f'conv2d_{conv_idx}' if conv_idx > 0 else 'conv2d' - bn_layer_name = f'batch_normalization_{bn_idx}' if bn_idx > 0 else 'batch_normalization' - - conv_layer = model.get_layer(conv_layer_name) - filters = conv_layer.filters - kernel_size = conv_layer.kernel_size[0] - input_dims = conv_layer.input_shape[-1] - - if conv_idx not in conv_output_idxs: - # darknet bn layer weights: [beta, gamma, mean, variance] - bn_weights = np.fromfile(file, dtype=np.float32, count=4 * filters) - # tf bn layer weights: [gamma, beta, mean, variance] - bn_weights = bn_weights.reshape((4, filters))[[1, 0, 2, 3]] - bn_layer = model.get_layer(bn_layer_name) - bn_idx += 1 - else: - conv_bias = np.fromfile(file, dtype=np.float32, count=filters) - - # darknet shape: (out_dim, input_dims, height, width) - # tf shape: (height, width, input_dims, out_dim) - conv_shape = (filters, input_dims, kernel_size, kernel_size) - conv_weights = np.fromfile(file, dtype=np.float32, count=np.product(conv_shape)) - conv_weights = conv_weights.reshape(conv_shape).transpose([2, 3, 1, 0]) - - if conv_idx not in conv_output_idxs: - conv_layer.set_weights([conv_weights]) - bn_layer.set_weights(bn_weights) - else: - conv_layer.set_weights([conv_weights, conv_bias]) - - if len(file.read()) == 0: - print('all weights read') - else: - print(f'failed to read all weights, # of unread weights: {len(file.read())}') - - -def get_detection_data(img, model_outputs, class_names): - """ - - :param img: target raw image - :param model_outputs: outputs from inference_model - :param class_names: list of object class names - :return: - """ - - num_bboxes = model_outputs[-1][0] - boxes, scores, classes = [output[0][:num_bboxes] for output in model_outputs[:-1]] - - h, w = img.shape[:2] - df = pd.DataFrame(boxes, columns=['x1', 'y1', 'x2', 'y2']) - df[['x1', 'x2']] = (df[['x1', 'x2']] * w).astype('int64') - df[['y1', 'y2']] = (df[['y1', 'y2']] * h).astype('int64') - df['class_name'] = np.array(class_names)[classes.astype('int64')] - df['score'] = scores - df['w'] = df['x2'] - df['x1'] - df['h'] = df['y2'] - df['y1'] - - print(f'# of bboxes: {num_bboxes}') - return df - -def read_annotation_lines(annotation_path, test_size=None, random_seed=5566): - with open(annotation_path) as f: - lines = f.readlines() - if test_size: - return train_test_split(lines, test_size=test_size, random_state=random_seed) - else: - return lines - -def draw_bbox(img, detections, cmap, random_color=True, figsize=(10, 10), show_img=True, show_text=True): - """ - Draw bounding boxes on the img. - :param img: BGR img. 
- :param detections: pandas DataFrame containing detections - :param random_color: assign random color for each objects - :param cmap: object colormap - :param plot_img: if plot img with bboxes - :return: None - """ - img = np.array(img) - scale = max(img.shape[0:2]) / 416 - line_width = int(2 * scale) - - for _, row in detections.iterrows(): - x1, y1, x2, y2, cls, score, w, h = row.values - color = list(np.random.random(size=3) * 255) if random_color else cmap[cls] - cv2.rectangle(img, (x1, y1), (x2, y2), color, line_width) - if show_text: - text = f'{cls} {score:.2f}' - font = cv2.FONT_HERSHEY_DUPLEX - font_scale = max(0.3 * scale, 0.3) - thickness = max(int(1 * scale), 1) - (text_width, text_height) = cv2.getTextSize(text, font, fontScale=font_scale, thickness=thickness)[0] - cv2.rectangle(img, (x1 - line_width//2, y1 - text_height), (x1 + text_width, y1), color, cv2.FILLED) - cv2.putText(img, text, (x1, y1), font, font_scale, (255, 255, 255), thickness, cv2.LINE_AA) - if show_img: - plt.figure(figsize=figsize) - plt.imshow(img) - plt.show() - return img - - -class DataGenerator(Sequence): - """ - Generates data for Keras - ref: https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly - """ - def __init__(self, - annotation_lines, - class_name_path, - folder_path, - max_boxes=100, - shuffle=True): - self.annotation_lines = annotation_lines - self.class_name_path = class_name_path - self.num_classes = len([line.strip() for line in open(class_name_path).readlines()]) - self.num_gpu = yolo_config['num_gpu'] - self.batch_size = yolo_config['batch_size'] * self.num_gpu - self.target_img_size = yolo_config['img_size'] - self.anchors = np.array(yolo_config['anchors']).reshape((9, 2)) - self.shuffle = shuffle - self.indexes = np.arange(len(self.annotation_lines)) - self.folder_path = folder_path - self.max_boxes = max_boxes - self.on_epoch_end() - - def __len__(self): - 'number of batches per epoch' - return int(np.ceil(len(self.annotation_lines) / self.batch_size)) - - def __getitem__(self, index): - 'Generate one batch of data' - - # Generate indexes of the batch - idxs = self.indexes[index * self.batch_size:(index + 1) * self.batch_size] - - # Find list of IDs - lines = [self.annotation_lines[i] for i in idxs] - - # Generate data - X, y_tensor, y_bbox = self.__data_generation(lines) - - return [X, *y_tensor, y_bbox], np.zeros(len(lines)) - - def on_epoch_end(self): - 'Updates indexes after each epoch' - if self.shuffle: - np.random.shuffle(self.indexes) - - def __data_generation(self, annotation_lines): - """ - Generates data containing batch_size samples - :param annotation_lines: - :return: - """ - - X = np.empty((len(annotation_lines), *self.target_img_size), dtype=np.float32) - y_bbox = np.empty((len(annotation_lines), self.max_boxes, 5), dtype=np.float32) # x1y1x2y2 - - for i, line in enumerate(annotation_lines): - img_data, box_data = self.get_data(line) - X[i] = img_data - y_bbox[i] = box_data - - y_tensor, y_true_boxes_xywh = preprocess_true_boxes(y_bbox, self.target_img_size[:2], self.anchors, self.num_classes) - - return X, y_tensor, y_true_boxes_xywh - - def get_data(self, annotation_line): - line = annotation_line.split() - img_path = line[0] - img = cv2.imread(os.path.join(self.folder_path, img_path))[:, :, ::-1] - ih, iw = img.shape[:2] - h, w, c = self.target_img_size - boxes = np.array([np.array(list(map(float, box.split(',')))) for box in line[1:]], dtype=np.float32) # x1y1x2y2 - scale_w, scale_h = w / iw, h / ih - img = cv2.resize(img, (w, h)) - image_data 
- - # correct boxes coordinates - box_data = np.zeros((self.max_boxes, 5)) - if len(boxes) > 0: - np.random.shuffle(boxes) - boxes = boxes[:self.max_boxes] - boxes[:, [0, 2]] = boxes[:, [0, 2]] * scale_w # + dx - boxes[:, [1, 3]] = boxes[:, [1, 3]] * scale_h # + dy - box_data[:len(boxes)] = boxes - - return image_data, box_data - - -def preprocess_true_boxes(true_boxes, input_shape, anchors, num_classes): - '''Preprocess true boxes to training input format - - Parameters - ---------- - true_boxes: array, shape=(bs, max boxes per img, 5) - Absolute x_min, y_min, x_max, y_max, class_id relative to input_shape. - input_shape: array-like, hw, multiples of 32 - anchors: array, shape=(N, 2), (9, wh) - num_classes: int - - Returns - ------- - y_true: list of array, shape like yolo_outputs, xywh are relative values - y_true_boxes_xywh: absolute xywh boxes, used in the loss computation - - ''' - - num_stages = 3 # default setting for yolo, tiny yolo will be 2 - anchor_mask = [[0, 1, 2], [3, 4, 5], [6, 7, 8]] - bbox_per_grid = 3 - true_boxes = np.array(true_boxes, dtype='float32') - true_boxes_abs = np.array(true_boxes, dtype='float32') - input_shape = np.array(input_shape, dtype='int32') - true_boxes_xy = (true_boxes_abs[..., 0:2] + true_boxes_abs[..., 2:4]) // 2 # box centers, (bs, max_boxes, 2) - true_boxes_wh = true_boxes_abs[..., 2:4] - true_boxes_abs[..., 0:2] # box sizes, (bs, max_boxes, 2) - - # Normalize x, y, w, h relative to img size -> (0~1) - true_boxes[..., 0:2] = true_boxes_xy/input_shape[::-1] # xy - true_boxes[..., 2:4] = true_boxes_wh/input_shape[::-1] # wh - - bs = true_boxes.shape[0] - grid_sizes = [input_shape//{0:8, 1:16, 2:32}[stage] for stage in range(num_stages)] - y_true = [np.zeros((bs, - grid_sizes[s][0], - grid_sizes[s][1], - bbox_per_grid, - 5+num_classes), dtype='float32') - for s in range(num_stages)] - # [(?, 52, 52, 3, 5+num_classes) (?, 26, 26, 3, 5+num_classes) (?, 13, 13, 3, 5+num_classes) ] - y_true_boxes_xywh = np.concatenate((true_boxes_xy, true_boxes_wh), axis=-1) - # Expand dim to apply broadcasting. - anchors = np.expand_dims(anchors, 0) # (1, 9, 2) - anchor_maxes = anchors / 2. # (1, 9, 2) - anchor_mins = -anchor_maxes # (1, 9, 2) - valid_mask = true_boxes_wh[..., 0] > 0 # (bs, max_boxes) - - for batch_idx in range(bs): - # Discard zero rows. - wh = true_boxes_wh[batch_idx, valid_mask[batch_idx]] # (# of bbox, 2) - num_boxes = len(wh) - if num_boxes == 0: continue - wh = np.expand_dims(wh, -2) # (# of bbox, 1, 2) - box_maxes = wh / 2. # (# of bbox, 1, 2) - box_mins = -box_maxes # (# of bbox, 1, 2) - - # Compute IoU between each anchor and true boxes for responsibility assignment - intersect_mins = np.maximum(box_mins, anchor_mins) # (# of bbox, 9, 2) - intersect_maxes = np.minimum(box_maxes, anchor_maxes) - intersect_wh = np.maximum(intersect_maxes - intersect_mins, 0.)
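# Editor's note: for the anchor-assignment IoU computed here, both true boxes
# and anchors are centered at the origin (mins = -maxes), so only their shapes
# (w, h) are compared; box position is intentionally ignored.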
- intersect_area = np.prod(intersect_wh, axis=-1) # (# of bbox, 9) - box_area = wh[..., 0] * wh[..., 1] # (# of bbox, 1) - anchor_area = anchors[..., 0] * anchors[..., 1] # (1, 9) - iou = intersect_area / (box_area + anchor_area - intersect_area) # (# of bbox, 9) - - # Find best anchor for each true box - best_anchors = np.argmax(iou, axis=-1) # (# of bbox,) - for box_idx in range(num_boxes): - best_anchor = best_anchors[box_idx] - for stage in range(num_stages): - if best_anchor in anchor_mask[stage]: - x_offset = true_boxes[batch_idx, box_idx, 0]*grid_sizes[stage][1] - y_offset = true_boxes[batch_idx, box_idx, 1]*grid_sizes[stage][0] - # Grid Index - grid_col = np.floor(x_offset).astype('int32') - grid_row = np.floor(y_offset).astype('int32') - anchor_idx = anchor_mask[stage].index(best_anchor) - class_idx = true_boxes[batch_idx, box_idx, 4].astype('int32') - # y_true[stage][batch_idx, grid_row, grid_col, anchor_idx, 0] = x_offset - grid_col # x - # y_true[stage][batch_idx, grid_row, grid_col, anchor_idx, 1] = y_offset - grid_row # y - # y_true[stage][batch_idx, grid_row, grid_col, anchor_idx, :4] = true_boxes_abs[batch_idx, box_idx, :4] # abs xywh - y_true[stage][batch_idx, grid_row, grid_col, anchor_idx, :2] = true_boxes_xy[batch_idx, box_idx, :] # abs xy - y_true[stage][batch_idx, grid_row, grid_col, anchor_idx, 2:4] = true_boxes_wh[batch_idx, box_idx, :] # abs wh - y_true[stage][batch_idx, grid_row, grid_col, anchor_idx, 4] = 1 # confidence - - y_true[stage][batch_idx, grid_row, grid_col, anchor_idx, 5+class_idx] = 1 # one-hot encoding - # label smoothing variant, kept for reference: - # onehot = np.zeros(num_classes, dtype=float) - # onehot[class_idx] = 1.0 - # uniform_distribution = np.full(num_classes, 1.0 / num_classes) - # delta = 0.01 - # smooth_onehot = onehot * (1 - delta) + delta * uniform_distribution - # y_true[stage][batch_idx, grid_row, grid_col, anchor_idx, 5:] = smooth_onehot - - return y_true, y_true_boxes_xywh - -""" - Calculate the AP given the recall and precision arrays - 1st) We compute a version of the measured precision/recall curve with - precision monotonically decreasing - 2nd) We compute the AP as the area under this curve by numerical integration.
-""" -def voc_ap(rec, prec): - """ - --- Official matlab code VOC2012--- - mrec=[0 ; rec ; 1]; - mpre=[0 ; prec ; 0]; - for i=numel(mpre)-1:-1:1 - mpre(i)=max(mpre(i),mpre(i+1)); - end - i=find(mrec(2:end)~=mrec(1:end-1))+1; - ap=sum((mrec(i)-mrec(i-1)).*mpre(i)); - """ - rec.insert(0, 0.0) # insert 0.0 at begining of list - rec.append(1.0) # insert 1.0 at end of list - mrec = rec[:] - prec.insert(0, 0.0) # insert 0.0 at begining of list - prec.append(0.0) # insert 0.0 at end of list - mpre = prec[:] - """ - This part makes the precision monotonically decreasing - (goes from the end to the beginning) - matlab: for i=numel(mpre)-1:-1:1 - mpre(i)=max(mpre(i),mpre(i+1)); - """ - # matlab indexes start in 1 but python in 0, so I have to do: - # range(start=(len(mpre) - 2), end=0, step=-1) - # also the python function range excludes the end, resulting in: - # range(start=(len(mpre) - 2), end=-1, step=-1) - for i in range(len(mpre)-2, -1, -1): - mpre[i] = max(mpre[i], mpre[i+1]) - """ - This part creates a list of indexes where the recall changes - matlab: i=find(mrec(2:end)~=mrec(1:end-1))+1; - """ - i_list = [] - for i in range(1, len(mrec)): - if mrec[i] != mrec[i-1]: - i_list.append(i) # if it was matlab would be i + 1 - """ - The Average Precision (AP) is the area under the curve - (numerical integration) - matlab: ap=sum((mrec(i)-mrec(i-1)).*mpre(i)); - """ - ap = 0.0 - for i in i_list: - ap += ((mrec[i]-mrec[i-1])*mpre[i]) - return ap, mrec, mpre - -""" - Draw plot using Matplotlib -""" -def draw_plot_func(dictionary, n_classes, window_title, plot_title, x_label, output_path, to_show, plot_color, true_p_bar): - # sort the dictionary by decreasing value, into a list of tuples - sorted_dic_by_value = sorted(dictionary.items(), key=operator.itemgetter(1)) - print(sorted_dic_by_value) - # unpacking the list of tuples into two lists - sorted_keys, sorted_values = zip(*sorted_dic_by_value) - # - if true_p_bar != "": - """ - Special case to draw in: - - green -> TP: True Positives (object detected and matches ground-truth) - - red -> FP: False Positives (object detected but does not match ground-truth) - - pink -> FN: False Negatives (object not detected but present in the ground-truth) - """ - fp_sorted = [] - tp_sorted = [] - for key in sorted_keys: - fp_sorted.append(dictionary[key] - true_p_bar[key]) - tp_sorted.append(true_p_bar[key]) - plt.barh(range(n_classes), fp_sorted, align='center', color='crimson', label='False Positive') - plt.barh(range(n_classes), tp_sorted, align='center', color='forestgreen', label='True Positive', left=fp_sorted) - # add legend - plt.legend(loc='lower right') - """ - Write number on side of bar - """ - fig = plt.gcf() # gcf - get current figure - axes = plt.gca() - r = fig.canvas.get_renderer() - for i, val in enumerate(sorted_values): - fp_val = fp_sorted[i] - tp_val = tp_sorted[i] - fp_str_val = " " + str(fp_val) - tp_str_val = fp_str_val + " " + str(tp_val) - # trick to paint multicolor with offset: - # first paint everything and then repaint the first number - t = plt.text(val, i, tp_str_val, color='forestgreen', va='center', fontweight='bold') - plt.text(val, i, fp_str_val, color='crimson', va='center', fontweight='bold') - if i == (len(sorted_values)-1): # largest bar - adjust_axes(r, t, fig, axes) - else: - plt.barh(range(n_classes), sorted_values, color=plot_color) - """ - Write number on side of bar - """ - fig = plt.gcf() # gcf - get current figure - axes = plt.gca() - r = fig.canvas.get_renderer() - for i, val in enumerate(sorted_values): - str_val 
= " " + str(val) # add a space before - if val < 1.0: - str_val = " {0:.2f}".format(val) - t = plt.text(val, i, str_val, color=plot_color, va='center', fontweight='bold') - # re-set axes to show number inside the figure - if i == (len(sorted_values)-1): # largest bar - adjust_axes(r, t, fig, axes) - # set window title - fig.canvas.set_window_title(window_title) - # write classes in y axis - tick_font_size = 12 - plt.yticks(range(n_classes), sorted_keys, fontsize=tick_font_size) - """ - Re-scale height accordingly - """ - init_height = fig.get_figheight() - # comput the matrix height in points and inches - dpi = fig.dpi - height_pt = n_classes * (tick_font_size * 1.4) # 1.4 (some spacing) - height_in = height_pt / dpi - # compute the required figure height - top_margin = 0.15 # in percentage of the figure height - bottom_margin = 0.05 # in percentage of the figure height - figure_height = height_in / (1 - top_margin - bottom_margin) - # set new height - if figure_height > init_height: - fig.set_figheight(figure_height) - - # set plot title - plt.title(plot_title, fontsize=14) - # set axis titles - # plt.xlabel('classes') - plt.xlabel(x_label, fontsize='large') - # adjust size of window - fig.tight_layout() - # save the plot - fig.savefig(output_path) - # show image - # if to_show: - plt.show() - # close the plot - # plt.close() - -""" - Plot - adjust axes -""" -def adjust_axes(r, t, fig, axes): - # get text width for re-scaling - bb = t.get_window_extent(renderer=r) - text_width_inches = bb.width / fig.dpi - # get axis width in inches - current_fig_width = fig.get_figwidth() - new_fig_width = current_fig_width + text_width_inches - propotion = new_fig_width / current_fig_width - # get axis limit - x_lim = axes.get_xlim() - axes.set_xlim([x_lim[0], x_lim[1]*propotion]) - - -def read_txt_to_list(path): - # open txt file lines to a list - with open(path) as f: - content = f.readlines() - # remove whitespace characters like `\n` at the end of each line - content = [x.strip() for x in content] - return content \ No newline at end of file diff --git a/spaces/azusarang/so-vits-svc-models-ba_P/vencoder/ContentVec256L9_Onnx.py b/spaces/azusarang/so-vits-svc-models-ba_P/vencoder/ContentVec256L9_Onnx.py deleted file mode 100644 index fae2b928252801795b038f51451b234e007f6f03..0000000000000000000000000000000000000000 --- a/spaces/azusarang/so-vits-svc-models-ba_P/vencoder/ContentVec256L9_Onnx.py +++ /dev/null @@ -1,28 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import onnxruntime -import torch - -class ContentVec256L9_Onnx(SpeechEncoder): - def __init__(self,vec_path = "pretrain/vec-256-layer-9.onnx",device=None): - print("load model(s) from {}".format(vec_path)) - self.hidden_dim = 256 - if device is None: - self.dev = torch.device("cpu") - else: - self.dev = torch.device(device) - if device == 'cpu' or device == torch.device("cpu") or device is None: - providers = ['CPUExecutionProvider'] - elif device == 'cuda' or device == torch.device("cuda"): - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - feats = feats.unsqueeze(0).cpu().detach().numpy() - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input) - return torch.tensor(logits[0]).transpose(1, 2).to(self.dev) \ No newline at end of file 
diff --git a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/version.py b/spaces/badayvedat/AudioSep/models/CLAP/open_clip/version.py deleted file mode 100644 index 3ced3581bb601ae91b1e1da4b8f4f520855a065e..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = "0.2.1" diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/renderers/WebGLDeferredRenderer.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/renderers/WebGLDeferredRenderer.js deleted file mode 100644 index 2eb5443e33744e78f2cc595ec20d72b12090528f..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/renderers/WebGLDeferredRenderer.js +++ /dev/null @@ -1,2491 +0,0 @@ -/** - * @author alteredq / http://alteredqualia.com/ - * @author MPanknin / http://www.redplant.de/ - * @author takahiro / https://github.com/takahirox - * - * WebGLDeferredRenderer supports two types of Deferred Renderings. - * One is Classic Deferred Rendering and the other one is - * Light Pre-Pass (Deferred Lighting). - * Classic Deferred Rendering is default. You can use Light Pre-Pass - * by calling .enableLightPrePass( true ) method. - * - * Dependencies - * - THREE.CopyShader - * - THREE.RenderPass - * - THREE.ShaderPass - * - THREE.EffectComposer - * - THREE.FXAAShader - * - * TODO - * - reuse existing glsl - * - shadow - * - optimization - * - MRT (when it's available on Three.js) - * - AmbientLight - * - HemisphereLight - * - PointLight (distance < 0) - * - morphNormals - * - BumpMap - * - ToneMap - * - envMap - * - wrapAround - * - addEffect - */ - -THREE.WebGLDeferredRenderer = function ( parameters ) { - - parameters = parameters || {}; - - // private properties - - var _this = this; - - var _context; - var _state; - - var _width, _height; - - // for Classic Deferred Rendering - var _compColor; - var _passColor, _passForward, _passCopy; - - // for Light Pre-Pass - var _compReconstruction; - var _passReconstruction; - - // for Common - var _compNormalDepth, _compLight, _compFinal; - var _passNormalDepth, _passLight, _passLightFullscreen, _passFinal, _passFXAA; - - var _depthTexture; - - var _currentCamera; - - var _lightScene, _lightFullscreenScene; - - var _antialias = false; - var _hasTransparentObject = false; - var _lightPrePass = false; - var _cacheKeepAlive = false; - - var _tmpMaterial = new THREE.ShaderMaterial( { visible: false } ); - var _tmpVector3 = new THREE.Vector3(); - - // scene/material/light cache for deferred rendering. - // save them at the creation and release - // if they're unused removeThresholdCount frames - // unless _cacheKeepAlive is true. - - // scene.uuid -> lightScene, lightFullscreenScene - var _lightScenesCache = {}; - var _lightFullscreenScenesCache = {}; - - // object.material.uuid -> deferredMaterial or - // object.material[ n ].uuid -> deferredMaterial - var _normalDepthMaterialsCache = {}; - var _normalDepthShininessMaterialsCache = {}; - var _colorMaterialsCache = {}; - var _reconstructionMaterialsCache = {}; - - // originalLight.uuid -> deferredLight - var _deferredLightsCache = {}; - - // deferredLight.uuid -> deferredLightMaterial - var _classicDeferredLightMaterialsCache = {}; - var _lightPrePassMaterialsCache = {}; - - var _removeThresholdCount = 60; - - // deferredMaterials.uuid -> object.material or - // deferredMaterials.uuid -> object.material[ n ] - // save before render and release after render. 
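// Editor's note: a usage sketch of this renderer, based on the header comment
// above (scene/camera are illustrative; the dependency passes listed in the
// header must already be loaded):
//
//   var deferredRenderer = new THREE.WebGLDeferredRenderer( {
//     width: 800, height: 600, antialias: true
//   } );
//   document.body.appendChild( deferredRenderer.domElement );
//   deferredRenderer.enableLightPrePass( true ); // optional; Classic Deferred is the default
//   deferredRenderer.render( scene, camera );    // call once per animation frame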
- var _originalMaterialsTable = {}; - - // object.uuid -> originalOnBeforeRender - // save before render and release after render. - var _originalOnBeforeRendersTable = {}; - - // object.material.uuid -> object.material.visible or - // object.material[ i ].uuid -> object.material[ i ].visible or - // save before render and release after render. - var _originalVisibleTable = {}; - - // external properties - - this.renderer = undefined; - this.domElement = undefined; - - this.forwardRendering = false; // for debug - - // private methods - - function init( parameters ) { - - _this.renderer = parameters.renderer !== undefined ? parameters.renderer : new THREE.WebGLRenderer(); - _this.domElement = _this.renderer.domElement; - - _context = _this.renderer.context; - _state = _this.renderer.state; - - _width = parameters.width !== undefined ? parameters.width : _this.renderer.getSize( new THREE.Vector2() ).width; - _height = parameters.height !== undefined ? parameters.height : _this.renderer.getSize( new THREE.Vector2() ).height; - - var antialias = parameters.antialias !== undefined ? parameters.antialias : false; - - if ( parameters.cacheKeepAlive !== undefined ) _cacheKeepAlive = parameters.cacheKeepAlive; - - initDepthTexture(); - - initPassNormalDepth(); - initPassColor(); - initPassLight(); - initPassReconstruction(); - initPassFinal(); - - _this.setSize( _width, _height ); - _this.setAntialias( antialias ); - _this.enableLightPrePass( false ); - - } - - function initDepthTexture() { - - _depthTexture = new THREE.DepthTexture( - _width, - _height, - THREE.UnsignedInt248Type, - undefined, - undefined, - undefined, - undefined, - undefined, - undefined, - THREE.DepthStencilFormat - ); - - } - - function initPassNormalDepth() { - - _passNormalDepth = new THREE.RenderPass(); - _passNormalDepth.clear = true; - - var rt = new THREE.WebGLRenderTarget( _width, _height, { - minFilter: THREE.NearestFilter, - magFilter: THREE.NearestFilter, - format: THREE.RGBAFormat, - type: THREE.FloatType, - stencilBuffer: true, - depthTexture: _depthTexture - } ); - - rt.texture.generateMipmaps = false; - - _compNormalDepth = new THREE.EffectComposer( _this.renderer, rt ); - _compNormalDepth.renderToScreen = false; - _compNormalDepth.addPass( _passNormalDepth ); - - } - - function initPassColor() { - - _passColor = new THREE.RenderPass(); - _passColor.clear = true; - - var rt = new THREE.WebGLRenderTarget( _width, _height, { - minFilter: THREE.NearestFilter, - magFilter: THREE.NearestFilter, - format: THREE.RGBAFormat, - type: THREE.FloatType, - depthTexture: _depthTexture - } ); - - rt.texture.generateMipmaps = false; - - _compColor = new THREE.EffectComposer( _this.renderer, rt ); - _compColor.renderToScreen = false; - _compColor.addPass( _passColor ); - - } - - function initPassLight() { - - _passLightFullscreen = new THREE.RenderPass(); - _passLightFullscreen.clear = true; - _passLightFullscreen.camera = new THREE.OrthographicCamera( - 1, 1, 1, - 1, 0, 1 ); - - _passLight = new THREE.RenderPass(); - _passLight.clear = false; - - var rt = new THREE.WebGLRenderTarget( _width, _height, { - minFilter: THREE.NearestFilter, - magFilter: THREE.NearestFilter, - format: THREE.RGBAFormat, - type: THREE.FloatType, - depthTexture: _depthTexture - } ); - - rt.texture.generateMipmaps = false; - - _compLight = new THREE.EffectComposer( _this.renderer, rt ); - _compLight.renderToScreen = false; - _compLight.addPass( _passLightFullscreen ); - _compLight.addPass( _passLight ); - - } - - function initPassReconstruction() {
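// Editor's note: like the passes above, this pass renders into its own float
// render target but shares the single _depthTexture (DepthStencilFormat)
// created in initDepthTexture(), so the depth/stencil laid down by the
// normal+depth pass can be reused by the later passes.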
- _passReconstruction = new THREE.RenderPass(); - _passReconstruction.clear = true; - - var rt = new THREE.WebGLRenderTarget( _width, _height, { - minFilter: THREE.NearestFilter, - magFilter: THREE.NearestFilter, - format: THREE.RGBAFormat, - type: THREE.FloatType, - depthTexture: _depthTexture - } ); - - rt.texture.generateMipmaps = false; - - _compReconstruction = new THREE.EffectComposer( _this.renderer, rt ); - _compReconstruction.renderToScreen = false; - _compReconstruction.addPass( _passReconstruction ); - - } - - function initPassFinal() { - - _passFinal = new THREE.ShaderPass( THREE.ShaderDeferred[ 'final' ] ); - _passFinal.clear = true; - _passFinal.uniforms.samplerResult.value = _compLight.renderTarget2.texture; - _passFinal.material.blending = THREE.NoBlending; - _passFinal.material.depthWrite = false; - _passFinal.material.depthTest = false; - - _passForward = new THREE.RenderPass(); - _passForward.clear = false; - - _passCopy = new THREE.ShaderPass( THREE.CopyShader ); - - _passFXAA = new THREE.ShaderPass( THREE.FXAAShader ); - - var rt = new THREE.WebGLRenderTarget( _width, _height, { - minFilter: THREE.NearestFilter, - magFilter: THREE.LinearFilter, - format: THREE.RGBFormat, - type: THREE.UnsignedByteType, - depthTexture: _depthTexture - } ); - - rt.texture.generateMipmaps = false; - - _compFinal = new THREE.EffectComposer( _this.renderer, rt ); - _compFinal.addPass( _passFinal ); - _compFinal.addPass( _passForward ); - _compFinal.addPass( _passCopy ); - _compFinal.addPass( _passFXAA ); - - } - - function initLightScene( scene ) { - - var lightSceneData = _lightScenesCache[ scene.uuid ]; - var lightFullscreenSceneData = _lightFullscreenScenesCache[ scene.uuid ]; - - if ( lightSceneData === undefined ) { - - var s = new THREE.Scene(); - s.userData.lights = {}; - - lightSceneData = createCacheData(); - lightSceneData.scene = s; - - _lightScenesCache[ scene.uuid ] = lightSceneData; - - } - - if ( lightFullscreenSceneData === undefined ) { - - var s = new THREE.Scene(); - s.userData.lights = {}; - - var emissiveLight = createDeferredEmissiveLight(); - - s.userData.emissiveLight = emissiveLight; - s.add( emissiveLight ); - - lightFullscreenSceneData = createCacheData(); - lightFullscreenSceneData.scene = s; - - _lightFullscreenScenesCache[ scene.uuid ] = lightFullscreenSceneData; - - } - - lightSceneData.used = true; - lightFullscreenSceneData.used = true; - - var lightScene = lightSceneData.scene; - var lightFullscreenScene = lightFullscreenSceneData.scene; - - // emissiveLight is only for Classic Deferred Rendering - lightFullscreenScene.userData.emissiveLight.visible = !
_lightPrePass; - - _lightScene = lightScene; - _lightFullscreenScene = lightFullscreenScene; - - } - - function getMaterialFromCacheOrCreate( originalMaterial, cache, createFunc, updateFunc ) { - - var data = cache[ originalMaterial.uuid ]; - - if ( data === undefined ) { - - data = createCacheData(); - data.material = createFunc( originalMaterial ); - cache[ originalMaterial.uuid ] = data; - - } - - data.used = true; - - updateFunc( data.material, originalMaterial ); - - _originalMaterialsTable[ data.material.uuid ] = originalMaterial; - - return data.material; - - } - - function overrideMaterialAndOnBeforeRender( object, getMaterialFunc, onBeforeRender ) { - - if ( object.material === undefined ) return; - - if ( Array.isArray( object.material ) ) { - - for ( var i = 0, il = object.material.length; i < il; i ++ ) { - - object.material[ i ] = getMaterialFunc( object.material[ i ] ); - - } - - } else { - - object.material = getMaterialFunc( object.material ); - - } - - object.onBeforeRender = onBeforeRender; - - } - - function restoreOriginalMaterial( object ) { - - if ( object.material === undefined ) return; - - if ( Array.isArray( object.material ) ) { - - for ( var i = 0, il = object.material.length; i < il; i ++ ) { - - object.material[ i ] = _originalMaterialsTable[ object.material[ i ].uuid ]; - - } - - } else { - - object.material = _originalMaterialsTable[ object.material.uuid ]; - - } - - } - - function setMaterialNormalDepth( object ) { - - overrideMaterialAndOnBeforeRender( object, getNormalDepthMaterial, updateDeferredNormalDepthUniforms ); - - } - - function getNormalDepthMaterial( originalMaterial ) { - - return getMaterialFromCacheOrCreate( - originalMaterial, - ( _lightPrePass ) ? _normalDepthShininessMaterialsCache : _normalDepthMaterialsCache, - createDeferredNormalDepthMaterial, - updateDeferredNormalDepthMaterial - ); - - } - - function createDeferredNormalDepthMaterial( originalMaterial ) { - - var shader = ( _lightPrePass ) ? THREE.ShaderDeferred[ 'normalDepthShininess' ] : THREE.ShaderDeferred[ 'normalDepth' ]; - - return new THREE.ShaderMaterial( { - uniforms: Object.assign( {}, shader.uniforms ), - fragmentShader: shader.fragmentShader, - vertexShader: shader.vertexShader, - blending: THREE.NoBlending - } ); - - } - - function updateDeferredNormalDepthMaterial( material, originalMaterial ) { - - if ( originalMaterial.skinning !== undefined ) material.skinning = originalMaterial.skinning; - if ( originalMaterial.morphTargets !== undefined ) material.morphTargets = originalMaterial.morphTargets; - - if ( originalMaterial.visible === true ) { - - material.visible = ! originalMaterial.transparent; - - } else { - - material.visible = false; - - } - - } - - function updateDeferredNormalDepthUniforms( renderer, scene, camera, geometry, material, group ) { - - if ( ! 
_lightPrePass ) return; - - var originalMaterial = _originalMaterialsTable[ material.uuid ]; - - if ( originalMaterial === undefined || originalMaterial.shininess === undefined ) return; - - material.uniforms.shininess.value = originalMaterial.shininess; - - } - - function setMaterialColor( object ) { - - overrideMaterialAndOnBeforeRender( object, getColorMaterial, updateDeferredColorUniforms ); - - } - - function getColorMaterial( originalMaterial ) { - - return getMaterialFromCacheOrCreate( - originalMaterial, - _colorMaterialsCache, - createDeferredColorMaterial, - updateDeferredColorMaterial - ); - - } - - function createDeferredColorMaterial( originalMaterial ) { - - var shader = THREE.ShaderDeferred[ 'color' ]; - - var material = new THREE.ShaderMaterial( { - uniforms: Object.assign( {}, shader.uniforms ), - fragmentShader: shader.fragmentShader, - vertexShader: shader.vertexShader, - blending: THREE.NoBlending - } ); - - if ( originalMaterial.map !== undefined ) material.map = originalMaterial.map; - - return material; - - } - - function updateDeferredColorMaterial( material, originalMaterial ) { - - if ( originalMaterial.map !== undefined ) material.map = originalMaterial.map; - if ( originalMaterial.skinning !== undefined ) material.skinning = originalMaterial.skinning; - if ( originalMaterial.morphTargets !== undefined ) material.morphTargets = originalMaterial.morphTargets; - - if ( originalMaterial.visible === true ) { - - material.visible = ! originalMaterial.transparent; - - } else { - - material.visible = false; - - } - - } - - function updateDeferredColorUniforms( renderer, scene, camera, geometry, material, group ) { - - var originalMaterial = _originalMaterialsTable[ material.uuid ]; - var uniforms = material.uniforms; - - var diffuse, emissive; - - if ( originalMaterial.isMeshBasicMaterial === true ) { - - emissive = originalMaterial.color; - - } else { - - diffuse = originalMaterial.color; - emissive = originalMaterial.emissive; - - } - - var specular = originalMaterial.specular; - var shininess = originalMaterial.shininess; - var map = originalMaterial.map; - - if ( diffuse !== undefined ) uniforms.diffuse.value.copy( diffuse ); - if ( emissive !== undefined ) uniforms.emissive.value.copy( emissive ); - if ( specular !== undefined ) uniforms.specular.value.copy( specular ); - if ( shininess !== undefined && uniforms.shininess !== undefined ) uniforms.shininess.value = shininess; - if ( map !== undefined ) uniforms.map.value = map; - - } - - function setMaterialReconstruction( object ) { - - overrideMaterialAndOnBeforeRender( object, getReconstructionMaterial, updateDeferredReconstructionUniforms ); - - } - - function getReconstructionMaterial( originalMaterial ) { - - if ( originalMaterial.transparent === true ) { - - _originalMaterialsTable[ originalMaterial.uuid ] = originalMaterial; - return originalMaterial; - - } - - return getMaterialFromCacheOrCreate( - originalMaterial, - _reconstructionMaterialsCache, - createDeferredReconstructionMaterial, - updateDeferredReconstructionMaterial - ); - - } - - function createDeferredReconstructionMaterial( originalMaterial ) { - - var shader = THREE.ShaderDeferred[ 'reconstruction' ]; - - var material = new THREE.ShaderMaterial( { - uniforms: Object.assign( {}, shader.uniforms ), - fragmentShader: shader.fragmentShader, - vertexShader: shader.vertexShader, - blending: THREE.NoBlending - } ); - - if ( originalMaterial.map !== undefined ) material.map = originalMaterial.map; - - return material; - - } - - function 
updateDeferredReconstructionMaterial( material, originalMaterial ) { - - updateDeferredColorMaterial( material, originalMaterial ); - - } - - function updateDeferredReconstructionUniforms( renderer, scene, camera, geometry, material, group ) { - - if ( material.transparent === true ) { - - // 'this' is object here because this method is set as object.onBefore() - var onBeforeRender = _originalOnBeforeRendersTable[ this.uuid ]; - - if ( onBeforeRender ) { - - onBeforeRender.call( this, renderer, scene, camera, geometry, material, group ); - - } - - return; - - } - - updateDeferredColorUniforms( renderer, scene, camera, geometry, material, group ); - - material.uniforms.samplerLight.value = _compLight.renderTarget2.texture; - - } - - function setVisibleForForwardRendering( object ) { - - if ( object.material === undefined ) return; - - if ( Array.isArray( object.material ) ) { - - for ( var i = 0, il = object.material.length; i < il; i ++ ) { - - if ( _originalVisibleTable[ object.material[ i ].uuid ] === undefined ) { - - _originalVisibleTable[ object.material[ i ].uuid ] = object.material[ i ].visible; - object.material[ i ].visible = object.material[ i ].transparent && object.material[ i ].visible; - - } - - } - - } else { - - if ( _originalVisibleTable[ object.material.uuid ] === undefined ) { - - _originalVisibleTable[ object.material.uuid ] = object.material.visible; - object.material.visible = object.material.transparent && object.material.visible; - - } - - } - - } - - function restoreVisible( object ) { - - if ( object.material === undefined ) return; - - if ( Array.isArray( object.material ) ) { - - for ( var i = 0, il = object.material.length; i < il; i ++ ) { - - object.material[ i ].visible = _originalVisibleTable[ object.material[ i ].uuid ]; - - } - - } else { - - object.material.visible = _originalVisibleTable[ object.material.uuid ]; - - } - - } - - function createDeferredEmissiveLight() { - - var shader = THREE.ShaderDeferred[ 'emissiveLight' ]; - - var material = new THREE.ShaderMaterial( { - uniforms: Object.assign( {}, shader.uniforms ), - vertexShader: shader.vertexShader, - fragmentShader: shader.fragmentShader, - blending: THREE.NoBlending, - depthWrite: false - } ); - - var geometry = new THREE.PlaneBufferGeometry( 2, 2 ); - var mesh = new THREE.Mesh( geometry, material ); - - mesh.onBeforeRender = function ( renderer, scene, camera, geometry, material, group ) { - - material.uniforms.samplerColor.value = _compColor.renderTarget2.texture; - - }; - - return mesh; - - } - - function createDeferredLight( originalLight ) { - - if ( originalLight.isPointLight ) { - - return createDeferredPointLight( originalLight ); - - } else if ( originalLight.isSpotLight ) { - - return createDeferredSpotLight( originalLight ); - - } else if ( originalLight.isDirectionalLight ) { - - return createDeferredDirectionalLight( originalLight ); - - } - - return null; - - } - - function createDeferredLightMaterial( originalLight ) { - - if ( originalLight.isPointLight ) { - - return createDeferredPointLightMaterial(); - - } else if ( originalLight.isSpotLight ) { - - return createDeferredSpotLightMaterial(); - - } else if ( originalLight.isDirectionalLight ) { - - return createDeferredDirectionalLightMaterial(); - - } - - return null; - - } - - function getDeferredLightMaterial( light ) { - - var cache = ( _lightPrePass ) ? 
_lightPrePassMaterialsCache : _classicDeferredLightMaterialsCache; - - var data = cache[ light.uuid ]; - - if ( data === undefined ) { - - data = createCacheData(); - data.material = createDeferredLightMaterial( light.userData.originalLight ); - cache[ light.uuid ] = data; - - } - - data.used = true; - - return data.material; - - } - - function updateDeferredLight( light ) { - - var originalLight = light.userData.originalLight; - - if ( originalLight.isPointLight ) { - - updateDeferredPointLight( light ); - - } - - } - - function createDeferredLightMesh( light, geometry ) { - - var mesh = new THREE.Mesh( geometry, _tmpMaterial ); - - mesh.userData.originalLight = light; - - return mesh; - - } - - function createDeferredLightShaderMaterial( shader ) { - - var material = new THREE.ShaderMaterial( { - uniforms: Object.assign( {}, shader.uniforms ), - vertexShader: shader.vertexShader, - fragmentShader: shader.fragmentShader, - transparent: true, - blending: THREE.AdditiveBlending, - depthWrite: false - } ); - - if ( _lightPrePass ) material.premultipliedAlpha = true; - - return material; - - } - - function updateDeferredLightCommonUniforms( uniforms ) { - - if ( _lightPrePass ) { - - uniforms.samplerNormalDepthShininess.value = _compNormalDepth.renderTarget2.texture; - - } else { - - uniforms.samplerNormalDepth.value = _compNormalDepth.renderTarget2.texture; - uniforms.samplerColor.value = _compColor.renderTarget2.texture; - - } - - } - - function createDeferredPointLight( light ) { - - var mesh = createDeferredLightMesh( light, new THREE.SphereBufferGeometry( 1, 16, 8 ) ); - mesh.onBeforeRender = updateDeferredPointLightUniforms; - return mesh; - - } - - /* - * optimization: - * Renders PointLight only back face with stencil test. - */ - function createDeferredPointLightMaterial() { - - var shader = ( _lightPrePass ) ? THREE.ShaderDeferred[ 'pointLightPre' ] : THREE.ShaderDeferred[ 'pointLight' ]; - - var material = createDeferredLightShaderMaterial( shader ); - - material.side = THREE.BackSide; - material.depthFunc = THREE.GreaterEqualDepth; - - return material; - - } - - function updateDeferredPointLight( light ) { - - var originalLight = light.userData.originalLight; - var distance = originalLight.distance; - - if ( distance > 0 ) { - - light.scale.set( 1, 1, 1 ).multiplyScalar( distance ); - light.position.setFromMatrixPosition( originalLight.matrixWorld ); - - } - - } - - function updateDeferredPointLightUniforms( renderer, scene, camera, geometry, material, group ) { - - var light = this; - - var originalLight = light.userData.originalLight; - var distance = originalLight.distance; - var uniforms = material.uniforms; - - uniforms.lightColor.value.copy( originalLight.color ); - - if ( distance > 0 ) { - - uniforms.lightRadius.value = distance; - uniforms.lightIntensity.value = originalLight.intensity; - uniforms.lightPositionVS.value.setFromMatrixPosition( originalLight.matrixWorld ).applyMatrix4( _currentCamera.matrixWorldInverse ); - - } else { - - uniforms.lightRadius.value = Infinity; - - } - - updateDeferredLightCommonUniforms( uniforms ); - - } - - function createDeferredSpotLight( light ) { - - var mesh = createDeferredLightMesh( light, new THREE.PlaneBufferGeometry( 2, 2 ) ); - mesh.onBeforeRender = updateDeferredSpotLightUniforms; - return mesh; - - } - - function createDeferredSpotLightMaterial() { - - var shader = ( _lightPrePass ) ? 
THREE.ShaderDeferred[ 'spotLightPre' ] : THREE.ShaderDeferred[ 'spotLight' ]; - - var material = createDeferredLightShaderMaterial( shader ); - - material.depthTest = false; - - return material; - - } - - function updateDeferredSpotLightUniforms( renderer, scene, camera, geometry, material, group ) { - - var light = this; - - var originalLight = light.userData.originalLight; - var uniforms = light.material.uniforms; - - uniforms.lightAngle.value = originalLight.angle; - uniforms.lightColor.value.copy( originalLight.color ); - uniforms.lightIntensity.value = originalLight.intensity; - uniforms.lightPositionVS.value.setFromMatrixPosition( originalLight.matrixWorld ).applyMatrix4( _currentCamera.matrixWorldInverse ); - - var vec = uniforms.lightDirectionVS.value; - var vec2 = _tmpVector3; - - vec.setFromMatrixPosition( originalLight.matrixWorld ); - vec2.setFromMatrixPosition( originalLight.target.matrixWorld ); - vec.sub( vec2 ).normalize().transformDirection( _currentCamera.matrixWorldInverse ); - - updateDeferredLightCommonUniforms( uniforms ); - - } - - function createDeferredDirectionalLight( light ) { - - var mesh = createDeferredLightMesh( light, new THREE.PlaneBufferGeometry( 2, 2 ) ); - mesh.onBeforeRender = updateDeferredDirectionalLightUniforms; - return mesh; - - } - - function createDeferredDirectionalLightMaterial() { - - var shader = ( _lightPrePass ) ? THREE.ShaderDeferred[ 'directionalLightPre' ] : THREE.ShaderDeferred[ 'directionalLight' ]; - - var material = createDeferredLightShaderMaterial( shader ); - - material.depthTest = false; - - return material; - - } - - function updateDeferredDirectionalLightUniforms( renderer, scene, camera, geometry, material, group ) { - - var light = this; - - var originalLight = light.userData.originalLight; - var uniforms = light.material.uniforms; - - uniforms.lightColor.value.copy( originalLight.color ); - uniforms.lightIntensity.value = originalLight.intensity; - - var vec = uniforms.lightDirectionVS.value; - var vec2 = _tmpVector3; - - vec.setFromMatrixPosition( originalLight.matrixWorld ); - vec2.setFromMatrixPosition( originalLight.target.matrixWorld ); - vec.sub( vec2 ).normalize().transformDirection( _currentCamera.matrixWorldInverse ); - - updateDeferredLightCommonUniforms( uniforms ); - - } - - function saveOriginalOnBeforeRenderAndCheckTransparency( object ) { - - if ( object.material === undefined ) return; - - _originalOnBeforeRendersTable[ object.uuid ] = object.onBeforeRender; - - // _hasTransparentObject is used only for Classic Deferred Rendering - if ( _hasTransparentObject || _lightPrePass ) return; - - if ( ! 
object.visible ) return; - - if ( Array.isArray( object.material ) ) { - - for ( var i = 0, il = object.material.length; i < il; i ++ ) { - - if ( object.material[ i ].visible === true && object.material[ i ].transparent === true ) { - - _hasTransparentObject = true; - break; - - } - - } - - } else { - - if ( object.material.visible === true && object.material.transparent === true ) _hasTransparentObject = true; - - } - - } - - function restoreOriginalOnBeforeRender( object ) { - - if ( object.material === undefined ) return; - - object.onBeforeRender = _originalOnBeforeRendersTable[ object.uuid ]; - - } - - function addDeferredLightsToLightScene( object ) { - - if ( object.isLight !== true ) return; - - var data = _deferredLightsCache[ object.uuid ]; - - if ( data === undefined ) { - - data = createCacheData(); - data.light = createDeferredLight( object ); - _deferredLightsCache[ object.uuid ] = data; - - } - - data.used = true; - - var light = data.light; - - if ( light === null ) return; - - var scene = ( object.isPointLight === true ) ? _lightScene : _lightFullscreenScene; - - var lights = scene.userData.lights; - - if ( lights[ light.uuid ] === undefined ) { - - scene.add( light ); - - lights[ light.uuid ] = { - light: light, - found: true - }; - - } - - lights[ light.uuid ].found = true; - - } - - function updateDeferredLightsInLightScene( scene ) { - - var lights = scene.userData.lights; - var keys = Object.keys( lights ); - - for ( var i = 0, il = keys.length; i < il; i ++ ) { - - var key = keys[ i ]; - - if ( lights[ key ].found === false ) { - - scene.remove( lights[ key ].light ); - delete lights[ key ]; - - } else { - - var light = lights[ key ].light; - light.material = getDeferredLightMaterial( light ); - - updateDeferredLight( light ); - lights[ key ].found = false; - - } - - } - - } - - function updateDeferredCommonUniforms( camera ) { - - var uniforms = THREE.ShaderDeferredCommon[ 'commonUniforms' ]; - - uniforms.viewWidth.value = _width; - uniforms.viewHeight.value = _height; - - uniforms.matProjInverse.value.getInverse( camera.projectionMatrix ); - - } - - function enableFinalPasses() { - - if ( _lightPrePass ) { - - _passForward.enabled = false; - _passCopy.enabled = false; - - if ( _antialias ) { - - _passFXAA.enabled = true; - - } else { - - _passFXAA.enabled = false; - - } - - } else { - - if ( _hasTransparentObject ) { - - if ( _antialias ) { - - _passForward.enabled = true; - _passCopy.enabled = false; - _passFXAA.enabled = true; - - } else { - - _passForward.enabled = true; - _passCopy.enabled = true; - _passFXAA.enabled = false; - - } - - } else { - - if ( _antialias ) { - - _passForward.enabled = false; - _passCopy.enabled = false; - _passFXAA.enabled = true; - - } else { - - _passForward.enabled = false; - _passCopy.enabled = false; - _passFXAA.enabled = false; - - } - - } - - } - - } - - function createCacheData() { - - return { - used: true, - keepAlive: _cacheKeepAlive, - count: 0 - }; - - } - - function cleanupCache( cache ) { - - var keys = Object.keys( cache ); - - for ( var i = 0, il = keys.length; i < il; i ++ ) { - - var key = keys[ i ]; - - if ( cache[ key ].used === false ) { - - cache[ key ].count ++; - - if ( cache[ key ].keepAlive === false && cache[ key ].count > _removeThresholdCount ) { - - delete cache[ key ]; - - } - - } else { - - cache[ key ].used = false; - cache[ key ].count = 0; - - } - - } - - } - - function cleanupTable( table ) { - - var keys = Object.keys( table ); - - for ( var i = 0, il = keys.length; i < il; i ++ ) { - - var key = 
keys[ i ]; - - table[ key ] = undefined; - - } - - } - - function cleanupCaches() { - - cleanupCache( _lightScenesCache ); - cleanupCache( _lightFullscreenScenesCache ); - cleanupCache( _normalDepthMaterialsCache ); - cleanupCache( _normalDepthShininessMaterialsCache ); - cleanupCache( _colorMaterialsCache ); - cleanupCache( _reconstructionMaterialsCache ); - cleanupCache( _classicDeferredLightMaterialsCache ); - cleanupCache( _lightPrePassMaterialsCache ); - cleanupCache( _deferredLightsCache ); - - cleanupTable( _originalMaterialsTable ); - cleanupTable( _originalOnBeforeRendersTable ); - cleanupTable( _originalVisibleTable ); - - } - - /* - * Classic Deferred Rendering - * - * 1) g-buffer normal + depth pass - * - * RGB: normal - * A: depth - * - * - * Light Pre-Pass Rendering - * - * 1') g-buffer normal + depth pass + shininess - * - * RG: normal - * B: shininess - * A: depth - */ - - function renderNormalDepth( scene, camera ) { - - scene.traverse( setMaterialNormalDepth ); - - _passNormalDepth.scene = scene; - _passNormalDepth.camera = camera; - - _this.renderer.autoClearDepth = true; - _this.renderer.autoClearStencil = true; - - _state.buffers.stencil.setTest( true ); - _state.buffers.stencil.setFunc( _context.ALWAYS, 1, 0xffffffff ); - _state.buffers.stencil.setOp( _context.REPLACE, _context.REPLACE, _context.REPLACE ); - - _compNormalDepth.render(); - - scene.traverse( restoreOriginalMaterial ); - - } - - /* - * Classic Deferred Rendering - * - * 2) g-buffer color pass - * - * R: diffuse - * G: emissive - * B: specular - * A: shininess - */ - - function renderColor( scene, camera ) { - - scene.traverse( setMaterialColor ); - - _passColor.scene = scene; - _passColor.camera = camera; - - _this.renderer.autoClearDepth = false; - _this.renderer.autoClearStencil = false; - - _state.buffers.stencil.setFunc( _context.EQUAL, 1, 0xffffffff ); - _state.buffers.stencil.setOp( _context.KEEP, _context.KEEP, _context.KEEP ); - - _compColor.render(); - - scene.traverse( restoreOriginalMaterial ); - - } - - /* - * Classic Deferred Rendering - * - * 3) light pass - */ - - function renderLight( scene, camera ) { - - scene.traverse( addDeferredLightsToLightScene ); - - updateDeferredLightsInLightScene( _lightScene ); - updateDeferredLightsInLightScene( _lightFullscreenScene ); - - _passLight.scene = _lightScene; - _passLight.camera = camera; - - _passLightFullscreen.scene = _lightFullscreenScene; - - _this.renderer.autoClearDepth = false; - _this.renderer.autoClearStencil = false; - - _compLight.render(); - - _state.buffers.stencil.setTest( false ); - - } - - /* - * Light Pre-Pass Rendering - * - * 2') Light pre pass - */ - - function renderLightPre( scene, camera ) { - - scene.traverse( addDeferredLightsToLightScene ); - - updateDeferredLightsInLightScene( _lightScene ); - updateDeferredLightsInLightScene( _lightFullscreenScene ); - - _passLight.scene = _lightScene; - _passLight.camera = camera; - - _passLightFullscreen.scene = _lightFullscreenScene; - - _this.renderer.autoClearDepth = false; - _this.renderer.autoClearStencil = false; - - _state.buffers.stencil.setFunc( _context.EQUAL, 1, 0xffffffff ); - _state.buffers.stencil.setOp( _context.KEEP, _context.KEEP, _context.KEEP ); - - _compLight.render(); - - } - - /* - * Light Pre-Pass Rendering - * - * 3') Reconstruction pass - * - * Transparency handling: - * Transparent objects are rendered here with normal forward rendering.
- */ - - function renderReconstruction( scene, camera ) { - - scene.traverse( setMaterialReconstruction ); - - _passReconstruction.scene = scene; - _passReconstruction.camera = camera; - - _this.renderer.autoClearDepth = false; - _this.renderer.autoClearStencil = false; - - _compReconstruction.render(); - - _state.buffers.stencil.setTest( false ); - - scene.traverse( restoreOriginalMaterial ); - - } - - /* - * Classic Deferred Rendering - * - * 4) Final pass - * - * transparency handling: - * If there are any transparent objects, they are rendered here on top of the - * deferred rendering result with normal forward rendering. This may be the - * easiest way, but it is heavy. We should consider better approaches someday. - * - * - * Light Pre-Pass Rendering - * - * 4') Final pass - * - * - * Common - * - * antialias handling: - * Postprocessing FXAA is used here for antialiasing. - * - */ - - function renderFinal( scene, camera ) { - - if ( ! _lightPrePass && _hasTransparentObject ) { - - scene.traverse( setVisibleForForwardRendering ); - scene.traverse( restoreOriginalOnBeforeRender ); - - _passForward.scene = scene; - _passForward.camera = camera; - - } - - enableFinalPasses(); - - _this.renderer.autoClearDepth = false; - _this.renderer.autoClearStencil = false; - - _compFinal.render(); - - if ( ! _lightPrePass && _hasTransparentObject ) { - - scene.traverse( restoreVisible ); - - } - - } - - // external APIs - - this.setSize = function ( width, height ) { - - _width = width; - _height = height; - - this.renderer.setSize( _width, _height ); - - _compNormalDepth.setSize( _width, _height ); - _compColor.setSize( _width, _height ); - _compLight.setSize( _width, _height ); - _compReconstruction.setSize( _width, _height ); - _compFinal.setSize( _width, _height ); - - _depthTexture.image.width = _width; - _depthTexture.image.height = _height; - _depthTexture.needsUpdate = true; - - _passFXAA.uniforms.resolution.value.set( 1 / _width, 1 / _height ); - - }; - - this.setAntialias = function ( enabled ) { - - _antialias = enabled; - - }; - - this.enableLightPrePass = function ( enabled ) { - - _lightPrePass = enabled; - - _passFinal.uniforms.samplerResult.value = ( _lightPrePass ) ?
_compReconstruction.renderTarget2.texture : _compLight.renderTarget2.texture; - - }; - - this.render = function ( scene, camera ) { - - // for debug to compare with normal forward rendering - - if ( this.forwardRendering ) { - - this.renderer.render( scene, camera ); - return; - - } - - var currentSceneAutoUpdate = scene.autoUpdate; - var currentAutoClearColor = this.renderer.autoClearColor; - var currentAutoClearDepth = this.renderer.autoClearDepth; - var currentAutoClearStencil = this.renderer.autoClearStencil; - - _currentCamera = camera; - - initLightScene( scene ); - - scene.autoUpdate = false; - scene.updateMatrixWorld(); - - _hasTransparentObject = false; - - scene.traverse( saveOriginalOnBeforeRenderAndCheckTransparency ); - - updateDeferredCommonUniforms( camera ); - - renderNormalDepth( scene, camera ); - - if ( _lightPrePass ) { - - renderLightPre( scene, camera ); - renderReconstruction( scene, camera ); - - } else { - - renderColor( scene, camera ); - renderLight( scene, camera ); - - } - - renderFinal( scene, camera ); - - scene.traverse( restoreOriginalOnBeforeRender ); - - cleanupCaches(); - - scene.autoUpdate = currentSceneAutoUpdate; - this.renderer.autoClearColor = currentAutoClearColor; - this.renderer.autoClearDepth = currentAutoClearDepth; - this.renderer.autoClearStencil = currentAutoClearStencil; - - }; - - // initialize - - init( parameters ); - -}; - -THREE.DeferredShaderChunk = { - - packVector3: [ - - "float vec3_to_float( vec3 data ) {", - - " const float unit = 255.0/256.0;", - " highp float compressed = fract( data.x * unit ) + floor( data.y * unit * 255.0 ) + floor( data.z * unit * 255.0 ) * 255.0;", - " return compressed;", - - "}" - - ].join( "\n" ), - - unpackFloat: [ - - "vec3 float_to_vec3( float data ) {", - - " const float unit = 255.0;", - " vec3 uncompressed;", - " uncompressed.x = fract( data );", - " float zInt = floor( data / unit );", - " uncompressed.z = fract( zInt / unit );", - " uncompressed.y = fract( floor( data - ( zInt * unit ) ) / unit );", - " return uncompressed;", - - "}" - - ].join( "\n" ), - - // Refer to http://aras-p.info/texts/CompactNormalStorage.html - packNormal: [ - - "vec2 normal_to_vec2( vec3 normal ) {", - - " return normal.xy / sqrt( normal.z * 8.0 + 8.0 ) + 0.5;", - - "}" - - ].join( "\n" ), - - unpackVector2: [ - - "vec3 vec2_to_normal( vec2 data ) {", - - " vec2 fenc = data * 4.0 - 2.0;", - " float f = dot( fenc, fenc );", - " float g = sqrt( 1.0 - f / 4.0 );", - " vec3 normal;", - " normal.xy = fenc * g;", - " normal.z = 1.0 - f / 2.0;", - " return normal;", - - "}" - - ].join( "\n" ), - - computeTextureCoord: [ - - "vec2 texCoord = gl_FragCoord.xy / vec2( viewWidth, viewHeight );" - - ].join( "\n" ), - - packNormalDepth: [ - - "vec4 packedNormalDepth;", - "packedNormalDepth.xyz = normal * 0.5 + 0.5;", - "packedNormalDepth.w = position.z / position.w;" - - ].join( "\n" ), - - unpackNormalDepth: [ - - "vec4 normalDepthMap = texture2D( samplerNormalDepth, texCoord );", - "float depth = normalDepthMap.w;", - - "if ( depth == 0.0 ) discard;", - - "vec3 normal = normalDepthMap.xyz * 2.0 - 1.0;" - - ].join( "\n" ), - - packNormalDepthShininess: [ - - "vec4 packedNormalDepthShininess;", - "packedNormalDepthShininess.xy = normal_to_vec2( normal );", - "packedNormalDepthShininess.z = shininess;", - "packedNormalDepthShininess.w = position.z / position.w;" - - ].join( "\n" ), - - unpackNormalDepthShininess: [ - - "vec4 normalDepthMap = texture2D( samplerNormalDepthShininess, texCoord );", - "float depth = normalDepthMap.w;", - 
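// Editor's note: the g-buffer is cleared to zero and packNormalDepth writes
// position.z / position.w for covered pixels, so depth == 0.0 below is treated
// as "no geometry here" and the fragment is discarded.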
- "if ( depth == 0.0 ) discard;", - - "vec3 normal = vec2_to_normal( normalDepthMap.xy );", - "float shininess = normalDepthMap.z;" - - ].join( "\n" ), - - packColor: [ - - "vec4 packedColor;", - "packedColor.x = vec3_to_float( diffuseColor.rgb );", - "packedColor.y = vec3_to_float( emissiveColor );", - "packedColor.z = vec3_to_float( specularColor );", - "packedColor.w = shininess;" - - ].join( "\n" ), - - unpackColor: [ - - "vec4 colorMap = texture2D( samplerColor, texCoord );", - "vec3 diffuseColor = float_to_vec3( colorMap.x );", - "vec3 emissiveColor = float_to_vec3( colorMap.y );", - "vec3 specularColor = float_to_vec3( colorMap.z );", - "float shininess = colorMap.w;" - - ].join( "\n" ), - - packLight: [ - - "vec4 packedLight;", - "packedLight.xyz = lightIntensity * lightColor * max( dot( lightVector, normal ), 0.0 ) * attenuation;", - "packedLight.w = lightIntensity * specular * max( dot( lightVector, normal ), 0.0 ) * attenuation;" - - ].join( "\n" ), - - computeVertexPositionVS: [ - - "vec2 xy = texCoord * 2.0 - 1.0;", - "vec4 vertexPositionProjected = vec4( xy, depth, 1.0 );", - "vec4 vertexPositionVS = matProjInverse * vertexPositionProjected;", - "vertexPositionVS.xyz /= vertexPositionVS.w;", - "vertexPositionVS.w = 1.0;" - - ].join( "\n" ), - - // TODO: calculate schlick - computeSpecular: [ - - "vec3 halfVector = normalize( lightVector - normalize( vertexPositionVS.xyz ) );", - "float dotNormalHalf = max( dot( normal, halfVector ), 0.0 );", - "float specular = 0.31830988618 * ( shininess * 0.5 + 1.0 ) * pow( dotNormalHalf, shininess );" - - ].join( "\n" ), - - combine: [ - - "gl_FragColor = vec4( lightIntensity * lightColor * max( dot( lightVector, normal ), 0.0 ) * ( diffuseColor + specular * specularColor ) * attenuation, 1.0 );" - - ].join( "\n" ) - -}; - -THREE.ShaderDeferredCommon = { - - commonUniforms: { - - matProjInverse: new THREE.Uniform( new THREE.Matrix4() ), - - viewWidth: new THREE.Uniform( 800 ), - viewHeight: new THREE.Uniform( 600 ) - - } - -}; - -THREE.ShaderDeferred = { - - normalDepth: { - - uniforms: {}, - - vertexShader: [ - - "varying vec3 vNormal;", - "varying vec4 vPosition;", - - "#include ", - "#include ", - - "void main() {", - - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - - " vNormal = normalize( transformedNormal );", - " vPosition = gl_Position;", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "varying vec3 vNormal;", - "varying vec4 vPosition;", - - "void main() {", - - " vec3 normal = vNormal;", - " vec4 position = vPosition;", - - THREE.DeferredShaderChunk[ "packNormalDepth" ], - - " gl_FragColor = packedNormalDepth;", - - "}" - - ].join( "\n" ) - - }, - - color: { - - uniforms: { - - map: new THREE.Uniform( null ), - offsetRepeat: new THREE.Uniform( new THREE.Vector4( 0, 0, 1, 1 ) ), - - diffuse: new THREE.Uniform( new THREE.Color( 0x000000 ) ), - emissive: new THREE.Uniform( new THREE.Color( 0x000000 ) ), - specular: new THREE.Uniform( new THREE.Color( 0x000000 ) ), - shininess: new THREE.Uniform( 30.0 ) - - }, - - vertexShader: [ - - "#include ", - "#include ", - "#include ", - - "void main() {", - - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "uniform vec3 diffuse;", - "uniform vec3 emissive;", - "uniform vec3 specular;", - "uniform float shininess;", - - "#include ", - "#include ", - THREE.DeferredShaderChunk[ 
"packVector3" ], - - "void main() {", - - " vec4 diffuseColor = vec4( diffuse, 1.0 );", - " vec3 emissiveColor = emissive;", - " vec3 specularColor = specular;", - - "#include ", - THREE.DeferredShaderChunk[ "packColor" ], - - " gl_FragColor = packedColor;", - - "}" - - ].join( "\n" ) - - }, - - emissiveLight: { - - uniforms: Object.assign( - - { - - samplerColor: new THREE.Uniform( null ) - - }, - - THREE.ShaderDeferredCommon[ 'commonUniforms' ] - - ), - - vertexShader: [ - - "void main() { ", - - " gl_Position = vec4( sign( position.xy ), 0.0, 1.0 );", - - "}" - - ].join( '\n' ), - - fragmentShader: [ - - "uniform sampler2D samplerColor;", - - "uniform float viewHeight;", - "uniform float viewWidth;", - - THREE.DeferredShaderChunk[ "unpackFloat" ], - - "void main() {", - - THREE.DeferredShaderChunk[ "computeTextureCoord" ], - THREE.DeferredShaderChunk[ "unpackColor" ], - - " gl_FragColor = vec4( emissiveColor, 1.0 );", - - "}" - - ].join( '\n' ) - - }, - - pointLight: { - - uniforms: Object.assign( - - { - - samplerNormalDepth: new THREE.Uniform( null ), - samplerColor: new THREE.Uniform( null ), - - lightColor: new THREE.Uniform( new THREE.Color( 0x000000 ) ), - lightPositionVS: new THREE.Uniform( new THREE.Vector3( 0, 1, 0 ) ), - lightIntensity: new THREE.Uniform( 1.0 ), - lightRadius: new THREE.Uniform( 1.0 ) - - }, - - THREE.ShaderDeferredCommon[ 'commonUniforms' ] - - ), - - vertexShader: [ - - "void main() {", - - " gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "uniform sampler2D samplerNormalDepth;", - "uniform sampler2D samplerColor;", - - "uniform float viewHeight;", - "uniform float viewWidth;", - - "uniform vec3 lightColor;", - "uniform vec3 lightPositionVS;", - "uniform float lightIntensity;", - "uniform float lightRadius;", - - "uniform mat4 matProjInverse;", - - THREE.DeferredShaderChunk[ "unpackFloat" ], - - "void main() {", - - THREE.DeferredShaderChunk[ "computeTextureCoord" ], - THREE.DeferredShaderChunk[ "unpackNormalDepth" ], - THREE.DeferredShaderChunk[ "computeVertexPositionVS" ], - - " vec3 lightVector = lightPositionVS - vertexPositionVS.xyz;", - " float distance = length( lightVector );", - - " if ( distance > lightRadius ) discard;", - - " lightVector = normalize( lightVector );", - - THREE.DeferredShaderChunk[ "unpackColor" ], - THREE.DeferredShaderChunk[ "computeSpecular" ], - - " //float cutoff = 0.3;", - " //float denom = distance / lightRadius + 1.0;", - " //float attenuation = 1.0 / ( denom * denom );", - " //attenuation = ( attenuation - cutoff ) / ( 1.0 - cutoff );", - " //attenuation = max( attenuation, 0.0 );", - " //attenuation *= attenuation;", - - " //diffuseColor *= saturate( -distance / lightRadius + 1.0 );", - " //float attenuation = 1.0;", - - " float attenuation = saturate( -distance / lightRadius + 1.0 );", - - THREE.DeferredShaderChunk[ "combine" ], - - "}" - - ].join( "\n" ) - - }, - - spotLight: { - - uniforms: Object.assign( - - { - - samplerNormalDepth: new THREE.Uniform( null ), - samplerColor: new THREE.Uniform( null ), - - lightColor: new THREE.Uniform( new THREE.Color( 0x000000 ) ), - lightDirectionVS: new THREE.Uniform( new THREE.Vector3( 0, 1, 0 ) ), - lightPositionVS: new THREE.Uniform( new THREE.Vector3( 0, 1, 0 ) ), - lightAngle: new THREE.Uniform( 1.0 ), - lightIntensity: new THREE.Uniform( 1.0 ) - - }, - - THREE.ShaderDeferredCommon[ 'commonUniforms' ] - - ), - - vertexShader: [ - - "void main() { ", - - " gl_Position = vec4( sign( position.xy 
), 0.0, 1.0 );", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "uniform sampler2D samplerNormalDepth;", - "uniform sampler2D samplerColor;", - - "uniform float viewHeight;", - "uniform float viewWidth;", - - "uniform vec3 lightColor;", - "uniform vec3 lightPositionVS;", - "uniform vec3 lightDirectionVS;", - "uniform float lightAngle;", - "uniform float lightIntensity;", - - "uniform mat4 matProjInverse;", - - THREE.DeferredShaderChunk[ "unpackFloat" ], - - "void main() {", - - THREE.DeferredShaderChunk[ "computeTextureCoord" ], - THREE.DeferredShaderChunk[ "unpackNormalDepth" ], - THREE.DeferredShaderChunk[ "computeVertexPositionVS" ], - THREE.DeferredShaderChunk[ "unpackColor" ], - - " vec3 lightVector = normalize( lightPositionVS.xyz - vertexPositionVS.xyz );", - - " float rho = dot( lightDirectionVS, lightVector );", - " float rhoMax = cos( lightAngle );", - - " if ( rho <= rhoMax ) discard;", - - " float theta = rhoMax + 0.0001;", - " float phi = rhoMax + 0.05;", - " float falloff = 4.0;", - - " float spot = 0.0;", - - " if ( rho >= phi ) {", - - " spot = 1.0;", - - " } else if ( rho <= theta ) {", - - " spot = 0.0;", - - " } else { ", - - " spot = pow( ( rho - theta ) / ( phi - theta ), falloff );", - - " }", - - " diffuseColor *= spot;", - - THREE.DeferredShaderChunk[ "computeSpecular" ], - - " const float attenuation = 1.0;", - - THREE.DeferredShaderChunk[ "combine" ], - - "}" - - ].join( "\n" ) - - }, - - directionalLight: { - - uniforms: Object.assign( - - { - - samplerNormalDepth: new THREE.Uniform( null ), - samplerColor: new THREE.Uniform( null ), - - lightColor: new THREE.Uniform( new THREE.Color( 0x000000 ) ), - lightDirectionVS: new THREE.Uniform( new THREE.Vector3( 0, 1, 0 ) ), - lightIntensity: new THREE.Uniform( 1.0 ) - }, - - THREE.ShaderDeferredCommon[ 'commonUniforms' ] - - ), - - vertexShader: [ - - "void main() { ", - - " gl_Position = vec4( sign( position.xy ), 0.0, 1.0 );", - - "}" - - ].join( '\n' ), - - fragmentShader: [ - - "uniform sampler2D samplerNormalDepth;", - "uniform sampler2D samplerColor;", - - "uniform float viewHeight;", - "uniform float viewWidth;", - - "uniform vec3 lightColor;", - "uniform vec3 lightDirectionVS;", - "uniform float lightIntensity;", - - "uniform mat4 matProjInverse;", - - THREE.DeferredShaderChunk[ "unpackFloat" ], - - "void main() {", - - THREE.DeferredShaderChunk[ "computeTextureCoord" ], - THREE.DeferredShaderChunk[ "unpackNormalDepth" ], - THREE.DeferredShaderChunk[ "computeVertexPositionVS" ], - THREE.DeferredShaderChunk[ "unpackColor" ], - - " vec3 lightVector = normalize( lightDirectionVS );", - - THREE.DeferredShaderChunk[ "computeSpecular" ], - - " const float attenuation = 1.0;", - - THREE.DeferredShaderChunk[ "combine" ], - - "}" - - ].join( '\n' ) - - }, - - normalDepthShininess: { - - uniforms: { - - shininess: new THREE.Uniform( 30.0 ) - - }, - - vertexShader: [ - - "varying vec3 vNormal;", - "varying vec4 vPosition;", - - "#include ", - "#include ", - - "void main() {", - - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - - " vNormal = normalize( transformedNormal );", - " vPosition = gl_Position;", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "varying vec3 vNormal;", - "varying vec4 vPosition;", - - "uniform float shininess;", - - THREE.DeferredShaderChunk[ "packNormal" ], - - "void main() {", - - " vec3 normal = vNormal;", - " vec4 position = vPosition;", - - THREE.DeferredShaderChunk[ "packNormalDepthShininess" ], - - " 
gl_FragColor = packedNormalDepthShininess;", - - "}" - - ].join( "\n" ) - - }, - - pointLightPre: { - - uniforms: Object.assign( - - { - - samplerNormalDepthShininess: new THREE.Uniform( null ), - - lightColor: new THREE.Uniform( new THREE.Color( 0x000000 ) ), - lightPositionVS: new THREE.Uniform( new THREE.Vector3( 0, 1, 0 ) ), - lightIntensity: new THREE.Uniform( 1.0 ), - lightRadius: new THREE.Uniform( 1.0 ) - }, - - THREE.ShaderDeferredCommon[ 'commonUniforms' ] - - ), - - - vertexShader: [ - - "void main() {", - - " gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "uniform sampler2D samplerNormalDepthShininess;", - - "uniform float viewHeight;", - "uniform float viewWidth;", - - "uniform vec3 lightColor;", - "uniform vec3 lightPositionVS;", - "uniform float lightIntensity;", - "uniform float lightRadius;", - - "uniform mat4 matProjInverse;", - - THREE.DeferredShaderChunk[ "unpackFloat" ], - THREE.DeferredShaderChunk[ "unpackVector2" ], - - "void main() {", - - THREE.DeferredShaderChunk[ "computeTextureCoord" ], - THREE.DeferredShaderChunk[ "unpackNormalDepthShininess" ], - THREE.DeferredShaderChunk[ "computeVertexPositionVS" ], - - " vec3 lightVector = lightPositionVS - vertexPositionVS.xyz;", - " float distance = length( lightVector );", - - " if ( distance > lightRadius ) discard;", - - " lightVector = normalize( lightVector );", - - THREE.DeferredShaderChunk[ "computeSpecular" ], - - " float attenuation = saturate( -distance / lightRadius + 1.0 );", - - THREE.DeferredShaderChunk[ "packLight" ], - - " gl_FragColor = packedLight;", - - "}" - - ].join( "\n" ) - - }, - - spotLightPre: { - - uniforms: Object.assign( - - { - - samplerNormalDepthShininess: new THREE.Uniform( null ), - - lightColor: new THREE.Uniform( new THREE.Color( 0x000000 ) ), - lightDirectionVS: new THREE.Uniform( new THREE.Vector3( 0, 1, 0 ) ), - lightPositionVS: new THREE.Uniform( new THREE.Vector3( 0, 1, 0 ) ), - lightAngle: new THREE.Uniform( 1.0 ), - lightIntensity: new THREE.Uniform( 1.0 ) - - }, - - THREE.ShaderDeferredCommon[ 'commonUniforms' ] - - ), - - vertexShader: [ - - "void main() { ", - - " gl_Position = vec4( sign( position.xy ), 0.0, 1.0 );", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "uniform sampler2D samplerNormalDepthShininess;", - - "uniform float viewHeight;", - "uniform float viewWidth;", - - "uniform vec3 lightColor;", - "uniform vec3 lightPositionVS;", - "uniform vec3 lightDirectionVS;", - "uniform float lightAngle;", - "uniform float lightIntensity;", - - "uniform mat4 matProjInverse;", - - THREE.DeferredShaderChunk[ "unpackFloat" ], - THREE.DeferredShaderChunk[ "unpackVector2" ], - - "void main() {", - - THREE.DeferredShaderChunk[ "computeTextureCoord" ], - THREE.DeferredShaderChunk[ "unpackNormalDepthShininess" ], - THREE.DeferredShaderChunk[ "computeVertexPositionVS" ], - - " vec3 lightVector = normalize( lightPositionVS.xyz - vertexPositionVS.xyz );", - - " float rho = dot( lightDirectionVS, lightVector );", - " float rhoMax = cos( lightAngle );", - - " if ( rho <= rhoMax ) discard;", - - " float theta = rhoMax + 0.0001;", - " float phi = rhoMax + 0.05;", - " float falloff = 4.0;", - - " float spot = 0.0;", - - " if ( rho >= phi ) {", - - " spot = 1.0;", - - " } else if ( rho <= theta ) {", - - " spot = 0.0;", - - " } else { ", - - " spot = pow( ( rho - theta ) / ( phi - theta ), falloff );", - - " }", - - THREE.DeferredShaderChunk[ "computeSpecular" ], - - " const float attenuation = 1.0;", - - 
THREE.DeferredShaderChunk[ "packLight" ], - - " gl_FragColor = spot * packedLight;", - - "}" - - ].join( "\n" ) - - }, - - directionalLightPre: { - - uniforms: Object.assign( - - { - - samplerNormalDepthShininess: new THREE.Uniform( null ), - - lightColor: new THREE.Uniform( new THREE.Color( 0x000000 ) ), - lightDirectionVS: new THREE.Uniform( new THREE.Vector3( 0, 1, 0 ) ), - lightIntensity: new THREE.Uniform( 1.0 ) - - }, - - THREE.ShaderDeferredCommon[ 'commonUniforms' ] - - ), - - vertexShader: [ - - "void main() { ", - - " gl_Position = vec4( sign( position.xy ), 0.0, 1.0 );", - - "}" - - ].join( '\n' ), - - fragmentShader: [ - - "uniform sampler2D samplerNormalDepthShininess;", - - "uniform float viewHeight;", - "uniform float viewWidth;", - - "uniform vec3 lightColor;", - "uniform vec3 lightDirectionVS;", - "uniform float lightIntensity;", - - "uniform mat4 matProjInverse;", - - THREE.DeferredShaderChunk[ "unpackFloat" ], - THREE.DeferredShaderChunk[ "unpackVector2" ], - - "void main() {", - - THREE.DeferredShaderChunk[ "computeTextureCoord" ], - THREE.DeferredShaderChunk[ "unpackNormalDepthShininess" ], - THREE.DeferredShaderChunk[ "computeVertexPositionVS" ], - - " vec3 lightVector = normalize( lightDirectionVS );", - - THREE.DeferredShaderChunk[ "computeSpecular" ], - - " const float attenuation = 1.0;", - - THREE.DeferredShaderChunk[ "packLight" ], - - " gl_FragColor = packedLight;", - - "}" - - ].join( '\n' ) - - }, - - reconstruction: { - - uniforms: Object.assign( - - { - - samplerLight: new THREE.Uniform( null ), - - map: new THREE.Uniform( null ), - offsetRepeat: new THREE.Uniform( new THREE.Vector4( 0, 0, 1, 1 ) ), - - diffuse: new THREE.Uniform( new THREE.Color( 0x000000 ) ), - emissive: new THREE.Uniform( new THREE.Color( 0x000000 ) ), - specular: new THREE.Uniform( new THREE.Color( 0x000000 ) ), - shininess: new THREE.Uniform( 30.0 ) - - }, - - THREE.ShaderDeferredCommon[ 'commonUniforms' ] - - ), - - vertexShader: [ - - "#include ", - "#include ", - "#include ", - - "void main() {", - - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - "#include ", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "uniform sampler2D samplerLight;", - - "uniform vec3 diffuse;", - "uniform vec3 emissive;", - "uniform vec3 specular;", - "uniform float shininess;", - - "uniform float viewHeight;", - "uniform float viewWidth;", - - "#include ", - "#include ", - - THREE.DeferredShaderChunk[ "unpackFloat" ], - - "void main() {", - - " vec4 diffuseColor = vec4( diffuse, 1.0 );", - " vec3 emissiveColor = emissive;", - " vec3 specularColor = specular;", - - THREE.DeferredShaderChunk[ "computeTextureCoord" ], - - " vec4 light = texture2D( samplerLight, texCoord );", - - "#include ", - - " vec3 diffuseFinal = diffuseColor.rgb * light.rgb;", - " vec3 emissiveFinal = emissiveColor;", - " vec3 specularFinal = specularColor * light.rgb * light.a;", - - " gl_FragColor = vec4( diffuseFinal + emissiveFinal + specularFinal, 1.0 );", - - "}" - - ].join( "\n" ) - - }, - - // TODO: implement tone mapping - final: { - - uniforms: { - - samplerResult: new THREE.Uniform( null ) - - }, - - vertexShader: [ - - "varying vec2 texCoord;", - - "void main() {", - - " vec4 pos = vec4( sign( position.xy ), 0.0, 1.0 );", - " texCoord = pos.xy * vec2( 0.5 ) + 0.5;", - " gl_Position = pos;", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "varying vec2 texCoord;", - "uniform sampler2D samplerResult;", - - "void main() {", - - " gl_FragColor = 
texture2D( samplerResult, texCoord );", - - "}" - - ].join( "\n" ) - - } - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/loaders/GLTFLoader.d.ts b/spaces/banana-projects/web3d/node_modules/three/examples/jsm/loaders/GLTFLoader.d.ts deleted file mode 100644 index 7e511a074f1862784f988616bdb6cd2799a09305..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/loaders/GLTFLoader.d.ts +++ /dev/null @@ -1,27 +0,0 @@ -import { - AnimationClip, - Camera, - LoadingManager, - Scene -} from '../../../src/Three'; - -export interface GLTF { - animations: AnimationClip[]; - scene: Scene; - scenes: Scene[]; - cameras: Camera[]; - asset: object; -} - -export class GLTFLoader { - constructor(manager?: LoadingManager); - manager: LoadingManager; - path: string; - - load(url: string, onLoad: (gltf: GLTF) => void, onProgress?: (event: ProgressEvent) => void, onError?: (event: ErrorEvent) => void) : void; - setPath(path: string) : GLTFLoader; - setResourcePath(path: string) : GLTFLoader; - setCrossOrigin(value: string): void; - setDRACOLoader(dracoLoader: object): void; - parse(data: ArrayBuffer, path: string, onLoad: (gltf: GLTF) => void, onError?: (event: ErrorEvent) => void) : void; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/lights/PointLight.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/lights/PointLight.d.ts deleted file mode 100644 index 97247a39f5ce37e56f3353e92883e63f29f03ca7..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/lights/PointLight.d.ts +++ /dev/null @@ -1,41 +0,0 @@ -import { Color } from './../math/Color'; -import { Light } from './Light'; -import { PerspectiveCamera } from './../cameras/PerspectiveCamera'; -import { LightShadow } from './LightShadow'; - -export class PointLightShadow extends LightShadow { - camera: PerspectiveCamera; -} - -/** - * Affects objects using {@link MeshLambertMaterial} or {@link MeshPhongMaterial}. - * - * @example - * var light = new THREE.PointLight( 0xff0000, 1, 100 ); - * light.position.set( 50, 50, 50 ); - * scene.add( light ); - */ -export class PointLight extends Light { - constructor( - color?: Color | string | number, - intensity?: number, - distance?: number, - decay?: number - ); - - /* - * Light's intensity. - * Default - 1.0. - */ - intensity: number; - - /** - * If non-zero, light will attenuate linearly from maximum intensity at light position down to zero at distance. - * Default — 0.0. 
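   * Illustrative example (an addition, not part of the original typings):
   * `new THREE.PointLight( 0xffffff, 1, 100 )` creates a light whose
   * intensity falls to zero 100 world units from its position.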
- */ - distance: number; - - decay: number; - shadow: PointLightShadow; - power: number; -} diff --git a/spaces/bergrozen1213/3d-obj/app.py b/spaces/bergrozen1213/3d-obj/app.py deleted file mode 100644 index e03e734dc952b388f89c99dda1b7106a4f886079..0000000000000000000000000000000000000000 --- a/spaces/bergrozen1213/3d-obj/app.py +++ /dev/null @@ -1,119 +0,0 @@ -import gradio as gr -from transformers import DPTFeatureExtractor, DPTForDepthEstimation -import torch -import numpy as np -from PIL import Image -import open3d as o3d -from pathlib import Path -import os - -feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-large") -model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large") - - -def process_image(image_path): - image_path = Path(image_path) - image_raw = Image.open(image_path) - image = image_raw.resize( - (800, int(800 * image_raw.size[1] / image_raw.size[0])), - Image.Resampling.LANCZOS) - - # prepare image for the model - encoding = feature_extractor(image, return_tensors="pt") - - # forward pass - with torch.no_grad(): - outputs = model(**encoding) - predicted_depth = outputs.predicted_depth - - # interpolate to original size - prediction = torch.nn.functional.interpolate( - predicted_depth.unsqueeze(1), - size=image.size[::-1], - mode="bicubic", - align_corners=False, - ).squeeze() - output = prediction.cpu().numpy() - depth_image = (output * 255 / np.max(output)).astype('uint8') - try: - gltf_path = create_3d_obj(np.array(image), depth_image, image_path) - img = Image.fromarray(depth_image) - return [img, gltf_path, gltf_path] - except Exception as e: - gltf_path = create_3d_obj( - np.array(image), depth_image, image_path, depth=8) - img = Image.fromarray(depth_image) - return [img, gltf_path, gltf_path] - except: - print("Error reconstructing 3D model") - raise Exception("Error reconstructing 3D model") - - -def create_3d_obj(rgb_image, depth_image, image_path, depth=10): - depth_o3d = o3d.geometry.Image(depth_image) - image_o3d = o3d.geometry.Image(rgb_image) - rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth( - image_o3d, depth_o3d, convert_rgb_to_intensity=False) - w = int(depth_image.shape[1]) - h = int(depth_image.shape[0]) - - camera_intrinsic = o3d.camera.PinholeCameraIntrinsic() - camera_intrinsic.set_intrinsics(w, h, 500, 500, w/2, h/2) - - pcd = o3d.geometry.PointCloud.create_from_rgbd_image( - rgbd_image, camera_intrinsic) - - print('normals') - pcd.normals = o3d.utility.Vector3dVector( - np.zeros((1, 3))) # invalidate existing normals - pcd.estimate_normals( - search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30)) - pcd.orient_normals_towards_camera_location( - camera_location=np.array([0., 0., 1000.])) - pcd.transform([[1, 0, 0, 0], - [0, -1, 0, 0], - [0, 0, -1, 0], - [0, 0, 0, 1]]) - pcd.transform([[-1, 0, 0, 0], - [0, 1, 0, 0], - [0, 0, 1, 0], - [0, 0, 0, 1]]) - - print('run Poisson surface reconstruction') - with o3d.utility.VerbosityContextManager(o3d.utility.VerbosityLevel.Debug) as cm: - mesh_raw, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson( - pcd, depth=depth, width=0, scale=1.1, linear_fit=True) - - voxel_size = max(mesh_raw.get_max_bound() - mesh_raw.get_min_bound()) / 256 - print(f'voxel_size = {voxel_size:e}') - mesh = mesh_raw.simplify_vertex_clustering( - voxel_size=voxel_size, - contraction=o3d.geometry.SimplificationContraction.Average) - - # vertices_to_remove = densities < np.quantile(densities, 0.001) - # mesh.remove_vertices_by_mask(vertices_to_remove) - bbox = 
pcd.get_axis_aligned_bounding_box() - mesh_crop = mesh.crop(bbox) - gltf_path = f'./{image_path.stem}.gltf' - o3d.io.write_triangle_mesh( - gltf_path, mesh_crop, write_triangle_uvs=True) - return gltf_path - - -title = "Demo: zero-shot depth estimation with DPT + 3D Point Cloud" -description = "This demo is a variation from the original DPT Demo. It uses the DPT model to predict the depth of an image and then uses 3D Point Cloud to create a 3D object." -examples = [["examples/" + img] for img in os.listdir("examples/")] - -iface = gr.Interface(fn=process_image, - inputs=[gr.Image( - type="filepath", label="Input Image")], - outputs=[gr.Image(label="predicted depth", type="pil"), - gr.Model3D(label="3d mesh reconstruction", clear_color=[ - 1.0, 1.0, 1.0, 1.0]), - gr.File(label="3d gLTF")], - title=title, - description=description, - examples=examples, - allow_flagging="never", - cache_examples=False) -iface.launch(debug=True, enable_queue=False) diff --git a/spaces/bert9946/frame-interpolation/README.md b/spaces/bert9946/frame-interpolation/README.md deleted file mode 100644 index b40ff22bb211f33bbd37f7310909ac8858825dfc..0000000000000000000000000000000000000000 --- a/spaces/bert9946/frame-interpolation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Frame Interpolation -emoji: 🐢 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -duplicated_from: johngoad/frame-interpolation ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/bigbencat/internlm-internlm-chat-7b-8k/README.md b/spaces/bigbencat/internlm-internlm-chat-7b-8k/README.md deleted file mode 100644 index 97b2f3d27c7d9bdc5fb87d50f8f63884f744b16d..0000000000000000000000000000000000000000 --- a/spaces/bigbencat/internlm-internlm-chat-7b-8k/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Internlm Internlm Chat 7b 8k -emoji: 🐢 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.36.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bioriAsaeru/text-to-voice/GrimDawnAshesofMalmouthCODEXhacktooldownload Free.md b/spaces/bioriAsaeru/text-to-voice/GrimDawnAshesofMalmouthCODEXhacktooldownload Free.md deleted file mode 100644 index 19c99cb1f2f66bcf42e09ab7569183cc55d629ab..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/GrimDawnAshesofMalmouthCODEXhacktooldownload Free.md +++ /dev/null @@ -1,12 +0,0 @@ - -

    https://www.kaggle.com/kawormasu/grimdawnashesofmalmouthcodexhacktooldownload-upd

    -

    GrimDawnAshesofMalmouthCODEXhacktooldownload


    Download Zip: https://urloso.com/2uyPhd



    -

    https://www.clipaffiliate.com/stories/11376095-grimdawnashesofmalmouthcodexhacktooldownload https://www.clipaffiliate.com/stories/54301176-grimdawnashesofmalmouthcodexhacktooldownload-better

    -

    http://www.graphicarbeiten.net/resources/GrimDawnAshesofMalmouthCODEXhacktooldownload.pdf https://wakelet.com/wake/RVlauh6QAe1zmM7SVVGlSt https://coub.com/stories/3075809-grimdawnashesofmalmouthcodexhacktooldownload-better

    -

    https://wakelet.com/wake/sm21blLJR3n4cUBDYCvdk2w

    -

    -

    https://awakepress.com/stories/27441498-grimdawnashesofmalmouthcodexhacktooldownload https://wakelet.com/wake/aXaVY3Tn3O5kTPTEmm6qA

    -

    https://awakepress.com/stories/34064359-grimdawnashesofmalmouthcodexhacktooldownload-upd https://wakelet.com/wake/VReYqk746Y2qqPzcYeLilg

    -

    http://dipatemp.yolasite.com/resources/GrimDawnAshesofMalmouthCODEXhacktooldownload.pdf https://wakelet.com/wake/lx2Cmck5z7oR3q5GhSCOho

    -
    -
    \ No newline at end of file diff --git a/spaces/blaziant/ysda_nlp_ops/templates/base.html b/spaces/blaziant/ysda_nlp_ops/templates/base.html deleted file mode 100644 index cd1848a94cd12ffa1f9e9aff1b32cf8c50b29da5..0000000000000000000000000000000000000000 --- a/spaces/blaziant/ysda_nlp_ops/templates/base.html +++ /dev/null @@ -1,15 +0,0 @@ - - - - dev_ops laba 5 - - - - - -{% block body %} -{% endblock %} - - - - \ No newline at end of file diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/test_time_augmentation.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/test_time_augmentation.py deleted file mode 100644 index 373e6bf00a39c040ff1da49d6dcd39a54a0b69a7..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/test_time_augmentation.py +++ /dev/null @@ -1,307 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import numpy as np -from contextlib import contextmanager -from itertools import count -from typing import List -import torch -from fvcore.transforms import HFlipTransform, NoOpTransform -from torch import nn -from torch.nn.parallel import DistributedDataParallel - -from detectron2.config import configurable -from detectron2.data.detection_utils import read_image -from detectron2.data.transforms import ( - RandomFlip, - ResizeShortestEdge, - ResizeTransform, - apply_augmentations, -) -from detectron2.structures import Boxes, Instances - -from .meta_arch import GeneralizedRCNN -from .postprocessing import detector_postprocess -from .roi_heads.fast_rcnn import fast_rcnn_inference_single_image - -__all__ = ["DatasetMapperTTA", "GeneralizedRCNNWithTTA"] - - -class DatasetMapperTTA: - """ - Implement test-time augmentation for detection data. - It is a callable which takes a dataset dict from a detection dataset, - and returns a list of dataset dicts where the images - are augmented from the input image by the transformations defined in the config. - This is used for test-time augmentation. - """ - - @configurable - def __init__(self, min_sizes: List[int], max_size: int, flip: bool): - """ - Args: - min_sizes: list of short-edge size to resize the image to - max_size: maximum height or width of resized images - flip: whether to apply flipping augmentation - """ - self.min_sizes = min_sizes - self.max_size = max_size - self.flip = flip - - @classmethod - def from_config(cls, cfg): - return { - "min_sizes": cfg.TEST.AUG.MIN_SIZES, - "max_size": cfg.TEST.AUG.MAX_SIZE, - "flip": cfg.TEST.AUG.FLIP, - } - - def __call__(self, dataset_dict): - """ - Args: - dict: a dict in standard model input format. See tutorials for details. - - Returns: - list[dict]: - a list of dicts, which contain augmented version of the input image. - The total number of dicts is ``len(min_sizes) * (2 if flip else 1)``. - Each dict has field "transforms" which is a TransformList, - containing the transforms that are used to generate this image. 
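            Illustrative example (not part of the original docstring): with
            min_sizes=[400, 500, 600] and flip=True, one input dict expands
            into 3 * 2 = 6 augmented dicts -- three resized copies, each
            with and without horizontal flip.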
- """ - numpy_image = dataset_dict["image"].permute(1, 2, 0).numpy() - shape = numpy_image.shape - orig_shape = (dataset_dict["height"], dataset_dict["width"]) - if shape[:2] != orig_shape: - # It transforms the "original" image in the dataset to the input image - pre_tfm = ResizeTransform(orig_shape[0], orig_shape[1], shape[0], shape[1]) - else: - pre_tfm = NoOpTransform() - - # Create all combinations of augmentations to use - aug_candidates = [] # each element is a list[Augmentation] - for min_size in self.min_sizes: - resize = ResizeShortestEdge(min_size, self.max_size) - aug_candidates.append([resize]) # resize only - if self.flip: - flip = RandomFlip(prob=1.0) - aug_candidates.append([resize, flip]) # resize + flip - - # Apply all the augmentations - ret = [] - for aug in aug_candidates: - new_image, tfms = apply_augmentations(aug, np.copy(numpy_image)) - torch_image = torch.from_numpy(np.ascontiguousarray(new_image.transpose(2, 0, 1))) - - dic = copy.deepcopy(dataset_dict) - dic["transforms"] = pre_tfm + tfms - dic["image"] = torch_image - ret.append(dic) - return ret - - -class GeneralizedRCNNWithTTA(nn.Module): - """ - A GeneralizedRCNN with test-time augmentation enabled. - Its :meth:`__call__` method has the same interface as :meth:`GeneralizedRCNN.forward`. - """ - - def __init__(self, cfg, model, tta_mapper=None, batch_size=3): - """ - Args: - cfg (CfgNode): - model (GeneralizedRCNN): a GeneralizedRCNN to apply TTA on. - tta_mapper (callable): takes a dataset dict and returns a list of - augmented versions of the dataset dict. Defaults to - `DatasetMapperTTA(cfg)`. - batch_size (int): batch the augmented images into this batch size for inference. - """ - super().__init__() - if isinstance(model, DistributedDataParallel): - model = model.module - assert isinstance( - model, GeneralizedRCNN - ), "TTA is only supported on GeneralizedRCNN. Got a model of type {}".format(type(model)) - self.cfg = cfg.clone() - assert not self.cfg.MODEL.KEYPOINT_ON, "TTA for keypoint is not supported yet" - assert ( - not self.cfg.MODEL.LOAD_PROPOSALS - ), "TTA for pre-computed proposals is not supported yet" - - self.model = model - - if tta_mapper is None: - tta_mapper = DatasetMapperTTA(cfg) - self.tta_mapper = tta_mapper - self.batch_size = batch_size - - @contextmanager - def _turn_off_roi_heads(self, attrs): - """ - Open a context where some heads in `model.roi_heads` are temporarily turned off. - Args: - attr (list[str]): the attribute in `model.roi_heads` which can be used - to turn off a specific head, e.g., "mask_on", "keypoint_on". - """ - roi_heads = self.model.roi_heads - old = {} - for attr in attrs: - try: - old[attr] = getattr(roi_heads, attr) - except AttributeError: - # The head may not be implemented in certain ROIHeads - pass - - if len(old.keys()) == 0: - yield - else: - for attr in old.keys(): - setattr(roi_heads, attr, False) - yield - for attr in old.keys(): - setattr(roi_heads, attr, old[attr]) - - def _batch_inference(self, batched_inputs, detected_instances=None): - """ - Execute inference on a list of inputs, - using batch size = self.batch_size, instead of the length of the list. 
- - Inputs & outputs have the same format as :meth:`GeneralizedRCNN.inference` - """ - if detected_instances is None: - detected_instances = [None] * len(batched_inputs) - - outputs = [] - inputs, instances = [], [] - for idx, input, instance in zip(count(), batched_inputs, detected_instances): - inputs.append(input) - instances.append(instance) - if len(inputs) == self.batch_size or idx == len(batched_inputs) - 1: - outputs.extend( - self.model.inference( - inputs, - instances if instances[0] is not None else None, - do_postprocess=False, - ) - ) - inputs, instances = [], [] - return outputs - - def __call__(self, batched_inputs): - """ - Same input/output format as :meth:`GeneralizedRCNN.forward` - """ - - def _maybe_read_image(dataset_dict): - ret = copy.copy(dataset_dict) - if "image" not in ret: - image = read_image(ret.pop("file_name"), self.model.input_format) - image = torch.from_numpy(np.ascontiguousarray(image.transpose(2, 0, 1))) # CHW - ret["image"] = image - if "height" not in ret and "width" not in ret: - ret["height"] = image.shape[1] - ret["width"] = image.shape[2] - return ret - - return [self._inference_one_image(_maybe_read_image(x)) for x in batched_inputs] - - def _inference_one_image(self, input): - """ - Args: - input (dict): one dataset dict with "image" field being a CHW tensor - - Returns: - dict: one output dict - """ - orig_shape = (input["height"], input["width"]) - augmented_inputs, tfms = self._get_augmented_inputs(input) - # Detect boxes from all augmented versions - with self._turn_off_roi_heads(["mask_on", "keypoint_on"]): - # temporarily disable roi heads - all_boxes, all_scores, all_classes = self._get_augmented_boxes(augmented_inputs, tfms) - # merge all detected boxes to obtain final predictions for boxes - merged_instances = self._merge_detections(all_boxes, all_scores, all_classes, orig_shape) - - if self.cfg.MODEL.MASK_ON: - # Use the detected boxes to obtain masks - augmented_instances = self._rescale_detected_boxes( - augmented_inputs, merged_instances, tfms - ) - # run forward on the detected boxes - outputs = self._batch_inference(augmented_inputs, augmented_instances) - # Delete now useless variables to avoid being out of memory - del augmented_inputs, augmented_instances - # average the predictions - merged_instances.pred_masks = self._reduce_pred_masks(outputs, tfms) - merged_instances = detector_postprocess(merged_instances, *orig_shape) - return {"instances": merged_instances} - else: - return {"instances": merged_instances} - - def _get_augmented_inputs(self, input): - augmented_inputs = self.tta_mapper(input) - tfms = [x.pop("transforms") for x in augmented_inputs] - return augmented_inputs, tfms - - def _get_augmented_boxes(self, augmented_inputs, tfms): - # 1: forward with all augmented images - outputs = self._batch_inference(augmented_inputs) - # 2: union the results - all_boxes = [] - all_scores = [] - all_classes = [] - for output, tfm in zip(outputs, tfms): - # Need to inverse the transforms on boxes, to obtain results on original image - pred_boxes = output.pred_boxes.tensor - original_pred_boxes = tfm.inverse().apply_box(pred_boxes.cpu().numpy()) - all_boxes.append(torch.from_numpy(original_pred_boxes).to(pred_boxes.device)) - - all_scores.extend(output.scores) - all_classes.extend(output.pred_classes) - all_boxes = torch.cat(all_boxes, dim=0) - return all_boxes, all_scores, all_classes - - def _merge_detections(self, all_boxes, all_scores, all_classes, shape_hw): - # select from the union of all results - num_boxes = 
len(all_boxes) - num_classes = self.cfg.MODEL.ROI_HEADS.NUM_CLASSES - # +1 because fast_rcnn_inference expects background scores as well - all_scores_2d = torch.zeros(num_boxes, num_classes + 1, device=all_boxes.device) - for idx, cls, score in zip(count(), all_classes, all_scores): - all_scores_2d[idx, cls] = score - - merged_instances, _ = fast_rcnn_inference_single_image( - all_boxes, - all_scores_2d, - shape_hw, - 1e-8, - self.cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST, - self.cfg.TEST.DETECTIONS_PER_IMAGE, - ) - - return merged_instances - - def _rescale_detected_boxes(self, augmented_inputs, merged_instances, tfms): - augmented_instances = [] - for input, tfm in zip(augmented_inputs, tfms): - # Transform the target box to the augmented image's coordinate space - pred_boxes = merged_instances.pred_boxes.tensor.cpu().numpy() - pred_boxes = torch.from_numpy(tfm.apply_box(pred_boxes)) - - aug_instances = Instances( - image_size=input["image"].shape[1:3], - pred_boxes=Boxes(pred_boxes), - pred_classes=merged_instances.pred_classes, - scores=merged_instances.scores, - ) - augmented_instances.append(aug_instances) - return augmented_instances - - def _reduce_pred_masks(self, outputs, tfms): - # Should apply inverse transforms on masks. - # We assume only resize & flip are used. pred_masks is a scale-invariant - # representation, so we handle flip specially - for output, tfm in zip(outputs, tfms): - if any(isinstance(t, HFlipTransform) for t in tfm.transforms): - output.pred_masks = output.pred_masks.flip(dims=[3]) - all_pred_masks = torch.stack([o.pred_masks for o in outputs], dim=0) - avg_pred_masks = torch.mean(all_pred_masks, dim=0) - return avg_pred_masks diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/mask_ops.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/mask_ops.py deleted file mode 100644 index 990d04abbb120e40fe07a21d024dfead471bc998..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/mask_ops.py +++ /dev/null @@ -1,275 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -from typing import Tuple -import torch -from PIL import Image -from torch.nn import functional as F - -__all__ = ["paste_masks_in_image"] - - -BYTES_PER_FLOAT = 4 -# TODO: This memory limit may be too much or too little. It would be better to -# determine it based on available resources. -GPU_MEM_LIMIT = 1024**3 # 1 GB memory limit - - -def _do_paste_mask(masks, boxes, img_h: int, img_w: int, skip_empty: bool = True): - """ - Args: - masks: N, 1, H, W - boxes: N, 4 - img_h, img_w (int): - skip_empty (bool): only paste masks within the region that - tightly bound all boxes, and returns the results this region only. - An important optimization for CPU. - - Returns: - if skip_empty == False, a mask of shape (N, img_h, img_w) - if skip_empty == True, a mask of shape (N, h', w'), and the slice - object for the corresponding region. - """ - # On GPU, paste all masks together (up to chunk size) - # by using the entire image to sample the masks - # Compared to pasting them one by one, - # this has more operations but is faster on COCO-scale dataset. 
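    # Illustrative cost estimate (an added note, not from the original file):
    # pasting N=100 masks into a 1333x800 image needs roughly
    # 100 * 1333 * 800 * 4 bytes ~= 0.43 GB of float32 storage, which fits
    # in a single chunk under the 1 GB GPU_MEM_LIMIT defined above.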
- device = masks.device - - if skip_empty and not torch.jit.is_scripting(): - x0_int, y0_int = torch.clamp(boxes.min(dim=0).values.floor()[:2] - 1, min=0).to( - dtype=torch.int32 - ) - x1_int = torch.clamp(boxes[:, 2].max().ceil() + 1, max=img_w).to(dtype=torch.int32) - y1_int = torch.clamp(boxes[:, 3].max().ceil() + 1, max=img_h).to(dtype=torch.int32) - else: - x0_int, y0_int = 0, 0 - x1_int, y1_int = img_w, img_h - x0, y0, x1, y1 = torch.split(boxes, 1, dim=1) # each is Nx1 - - N = masks.shape[0] - - img_y = torch.arange(y0_int, y1_int, device=device, dtype=torch.float32) + 0.5 - img_x = torch.arange(x0_int, x1_int, device=device, dtype=torch.float32) + 0.5 - img_y = (img_y - y0) / (y1 - y0) * 2 - 1 - img_x = (img_x - x0) / (x1 - x0) * 2 - 1 - # img_x, img_y have shapes (N, w), (N, h) - - gx = img_x[:, None, :].expand(N, img_y.size(1), img_x.size(1)) - gy = img_y[:, :, None].expand(N, img_y.size(1), img_x.size(1)) - grid = torch.stack([gx, gy], dim=3) - - if not torch.jit.is_scripting(): - if not masks.dtype.is_floating_point: - masks = masks.float() - img_masks = F.grid_sample(masks, grid.to(masks.dtype), align_corners=False) - - if skip_empty and not torch.jit.is_scripting(): - return img_masks[:, 0], (slice(y0_int, y1_int), slice(x0_int, x1_int)) - else: - return img_masks[:, 0], () - - -# Annotate boxes as Tensor (but not Boxes) in order to use scripting -@torch.jit.script_if_tracing -def paste_masks_in_image( - masks: torch.Tensor, boxes: torch.Tensor, image_shape: Tuple[int, int], threshold: float = 0.5 -): - """ - Paste a set of masks that are of a fixed resolution (e.g., 28 x 28) into an image. - The location, height, and width for pasting each mask is determined by their - corresponding bounding boxes in boxes. - - Note: - This is a complicated but more accurate implementation. In actual deployment, it is - often enough to use a faster but less accurate implementation. - See :func:`paste_mask_in_image_old` in this file for an alternative implementation. - - Args: - masks (tensor): Tensor of shape (Bimg, Hmask, Wmask), where Bimg is the number of - detected object instances in the image and Hmask, Wmask are the mask width and mask - height of the predicted mask (e.g., Hmask = Wmask = 28). Values are in [0, 1]. - boxes (Boxes or Tensor): A Boxes of length Bimg or Tensor of shape (Bimg, 4). - boxes[i] and masks[i] correspond to the same object instance. - image_shape (tuple): height, width - threshold (float): A threshold in [0, 1] for converting the (soft) masks to - binary masks. - - Returns: - img_masks (Tensor): A tensor of shape (Bimg, Himage, Wimage), where Bimg is the - number of detected object instances and Himage, Wimage are the image width - and height. img_masks[i] is a binary mask for object instance i. - """ - - assert masks.shape[-1] == masks.shape[-2], "Only square mask predictions are supported" - N = len(masks) - if N == 0: - return masks.new_empty((0,) + image_shape, dtype=torch.uint8) - if not isinstance(boxes, torch.Tensor): - boxes = boxes.tensor - device = boxes.device - assert len(boxes) == N, boxes.shape - - img_h, img_w = image_shape - - # The actual implementation split the input into chunks, - # and paste them chunk by chunk. - if device.type == "cpu" or torch.jit.is_scripting(): - # CPU is most efficient when they are pasted one by one with skip_empty=True - # so that it performs minimal number of operations. 
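        # Illustrative note (not from the original file): on CPU each mask
        # becomes its own chunk; on GPU the else-branch below caps the chunk
        # count by memory, e.g. N=500 masks on a 1333x800 image give
        # ceil(500 * 1333 * 800 * 4 / 2**30) = 2 chunks.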
- num_chunks = N - else: - # GPU benefits from parallelism for larger chunks, but may have memory issue - # int(img_h) because shape may be tensors in tracing - num_chunks = int(np.ceil(N * int(img_h) * int(img_w) * BYTES_PER_FLOAT / GPU_MEM_LIMIT)) - assert ( - num_chunks <= N - ), "Default GPU_MEM_LIMIT in mask_ops.py is too small; try increasing it" - chunks = torch.chunk(torch.arange(N, device=device), num_chunks) - - img_masks = torch.zeros( - N, img_h, img_w, device=device, dtype=torch.bool if threshold >= 0 else torch.uint8 - ) - for inds in chunks: - masks_chunk, spatial_inds = _do_paste_mask( - masks[inds, None, :, :], boxes[inds], img_h, img_w, skip_empty=device.type == "cpu" - ) - - if threshold >= 0: - masks_chunk = (masks_chunk >= threshold).to(dtype=torch.bool) - else: - # for visualization and debugging - masks_chunk = (masks_chunk * 255).to(dtype=torch.uint8) - - if torch.jit.is_scripting(): # Scripting does not use the optimized codepath - img_masks[inds] = masks_chunk - else: - img_masks[(inds,) + spatial_inds] = masks_chunk - return img_masks - - -# The below are the original paste function (from Detectron1) which has -# larger quantization error. -# It is faster on CPU, while the aligned one is faster on GPU thanks to grid_sample. - - -def paste_mask_in_image_old(mask, box, img_h, img_w, threshold): - """ - Paste a single mask in an image. - This is a per-box implementation of :func:`paste_masks_in_image`. - This function has larger quantization error due to incorrect pixel - modeling and is not used any more. - - Args: - mask (Tensor): A tensor of shape (Hmask, Wmask) storing the mask of a single - object instance. Values are in [0, 1]. - box (Tensor): A tensor of shape (4, ) storing the x0, y0, x1, y1 box corners - of the object instance. - img_h, img_w (int): Image height and width. - threshold (float): Mask binarization threshold in [0, 1]. - - Returns: - im_mask (Tensor): - The resized and binarized object mask pasted into the original - image plane (a tensor of shape (img_h, img_w)). - """ - # Conversion from continuous box coordinates to discrete pixel coordinates - # via truncation (cast to int32). This determines which pixels to paste the - # mask onto. - box = box.to(dtype=torch.int32) # Continuous to discrete coordinate conversion - # An example (1D) box with continuous coordinates (x0=0.7, x1=4.3) will map to - # a discrete coordinates (x0=0, x1=4). Note that box is mapped to 5 = x1 - x0 + 1 - # pixels (not x1 - x0 pixels). 
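    # Another worked case (an added note, not from the original file): a box
    # with x0=2.2, x1=6.9 truncates to (2, 6) and is sampled at
    # 6 - 2 + 1 = 5 pixel positions.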
- samples_w = box[2] - box[0] + 1 # Number of pixel samples, *not* geometric width - samples_h = box[3] - box[1] + 1 # Number of pixel samples, *not* geometric height - - # Resample the mask from it's original grid to the new samples_w x samples_h grid - mask = Image.fromarray(mask.cpu().numpy()) - mask = mask.resize((samples_w, samples_h), resample=Image.BILINEAR) - mask = np.array(mask, copy=False) - - if threshold >= 0: - mask = np.array(mask > threshold, dtype=np.uint8) - mask = torch.from_numpy(mask) - else: - # for visualization and debugging, we also - # allow it to return an unmodified mask - mask = torch.from_numpy(mask * 255).to(torch.uint8) - - im_mask = torch.zeros((img_h, img_w), dtype=torch.uint8) - x_0 = max(box[0], 0) - x_1 = min(box[2] + 1, img_w) - y_0 = max(box[1], 0) - y_1 = min(box[3] + 1, img_h) - - im_mask[y_0:y_1, x_0:x_1] = mask[ - (y_0 - box[1]) : (y_1 - box[1]), (x_0 - box[0]) : (x_1 - box[0]) - ] - return im_mask - - -# Our pixel modeling requires extrapolation for any continuous -# coordinate < 0.5 or > length - 0.5. When sampling pixels on the masks, -# we would like this extrapolation to be an interpolation between boundary values and zero, -# instead of using absolute zero or boundary values. -# Therefore `paste_mask_in_image_old` is often used with zero padding around the masks like this: -# masks, scale = pad_masks(masks[:, 0, :, :], 1) -# boxes = scale_boxes(boxes.tensor, scale) - - -def pad_masks(masks, padding): - """ - Args: - masks (tensor): A tensor of shape (B, M, M) representing B masks. - padding (int): Number of cells to pad on all sides. - - Returns: - The padded masks and the scale factor of the padding size / original size. - """ - B = masks.shape[0] - M = masks.shape[-1] - pad2 = 2 * padding - scale = float(M + pad2) / M - padded_masks = masks.new_zeros((B, M + pad2, M + pad2)) - padded_masks[:, padding:-padding, padding:-padding] = masks - return padded_masks, scale - - -def scale_boxes(boxes, scale): - """ - Args: - boxes (tensor): A tensor of shape (B, 4) representing B boxes with 4 - coords representing the corners x0, y0, x1, y1, - scale (float): The box scaling factor. - - Returns: - Scaled boxes. - """ - w_half = (boxes[:, 2] - boxes[:, 0]) * 0.5 - h_half = (boxes[:, 3] - boxes[:, 1]) * 0.5 - x_c = (boxes[:, 2] + boxes[:, 0]) * 0.5 - y_c = (boxes[:, 3] + boxes[:, 1]) * 0.5 - - w_half *= scale - h_half *= scale - - scaled_boxes = torch.zeros_like(boxes) - scaled_boxes[:, 0] = x_c - w_half - scaled_boxes[:, 2] = x_c + w_half - scaled_boxes[:, 1] = y_c - h_half - scaled_boxes[:, 3] = y_c + h_half - return scaled_boxes - - -@torch.jit.script_if_tracing -def _paste_masks_tensor_shape( - masks: torch.Tensor, - boxes: torch.Tensor, - image_shape: Tuple[torch.Tensor, torch.Tensor], - threshold: float = 0.5, -): - """ - A wrapper of paste_masks_in_image where image_shape is Tensor. - During tracing, shapes might be tensors instead of ints. The Tensor->int - conversion should be scripted rather than traced. 
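    For example (an illustrative note, not from the original docstring):
    under torch.jit.trace, int(image_shape[0]) would bake the traced height
    in as a constant, while scripting keeps the conversion dynamic.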
- """ - return paste_masks_in_image(masks, boxes, (int(image_shape[0]), int(image_shape[1])), threshold) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/structures/__init__.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/structures/__init__.py deleted file mode 100644 index f3ee6057e3ec2731984ce8203c6eaf5348d08260..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/structures/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .boxes import Boxes, BoxMode, pairwise_iou, pairwise_ioa, pairwise_point_box_distance -from .image_list import ImageList - -from .instances import Instances -from .keypoints import Keypoints, heatmaps_to_keypoints -from .masks import BitMasks, PolygonMasks, polygons_to_bitmask, ROIMasks -from .rotated_boxes import RotatedBoxes -from .rotated_boxes import pairwise_iou as pairwise_iou_rotated - -__all__ = [k for k in globals().keys() if not k.startswith("_")] - - -from detectron2.utils.env import fixup_module_metadata - -fixup_module_metadata(__name__, globals(), __all__) -del fixup_module_metadata diff --git a/spaces/cccc-c/bingo/src/components/chat.tsx b/spaces/cccc-c/bingo/src/components/chat.tsx deleted file mode 100644 index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000 --- a/spaces/cccc-c/bingo/src/components/chat.tsx +++ /dev/null @@ -1,93 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from '@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' -import { ChatHistory } from './chat-history' - -export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return ( -
    - -
    - - - - {messages.length ? ( - <> - - - - - - {generating ? ( -
    - -
    - ) : null} - - ) : null} -
    - - -
    - ) -} diff --git a/spaces/ccolas/TastyPiano/src/music2cocktailrep/pipeline/music2cocktailrep.py b/spaces/ccolas/TastyPiano/src/music2cocktailrep/pipeline/music2cocktailrep.py deleted file mode 100644 index dbf64d3d81edc98943a971afd186595705c41194..0000000000000000000000000000000000000000 --- a/spaces/ccolas/TastyPiano/src/music2cocktailrep/pipeline/music2cocktailrep.py +++ /dev/null @@ -1,80 +0,0 @@ -import os - -import numpy as np -import torch -import time - -from src.cocktails.pipeline.get_affect2affective_cluster import get_affect2affective_cluster -from src.music2cocktailrep.training.latent_translation.setup_trained_model import setup_trained_model -from src.music2cocktailrep.pipeline.music2affect import setup_pretrained_affective_models - -global music2affect, find_affective_cluster, translation_vae -import streamlit as st - -os.environ["TOKENIZERS_PARALLELISM"] = "false" - -def setup_translation_models(): - global music2affect, find_affective_cluster, translation_vae - music2affect, keys = setup_pretrained_affective_models() - find_affective_cluster = get_affect2affective_cluster() - translation_vae = setup_trained_model() - return translation_vae - -def music2affect_cluster(handcoded_rep): - global music2affect, find_affective_cluster - affects = np.clip(music2affect(handcoded_rep), -1, 1) - cluster_id = find_affective_cluster(affects) - return cluster_id, affects - -def music2flavor(music_ai_rep, affective_cluster_id): - global translation_vae - cocktail_rep = translation_vae(music_ai_rep, modality_out='cocktail') - return cocktail_rep - -def debug_translation(music_ai_rep): - global translation_vae - music_reconstruction = translation_vae(music_ai_rep, modality_out='music') - return music_reconstruction - -def music2cocktailrep(music_ai_rep, handcoded_music_rep, verbose=False, level=0): - init_time = time.time() - if verbose: print(' ' * level + 'Synesthetic mapping..') - if verbose: print(' ' * (level*2) + 'Mapping to affective cluster.') - # affective_cluster_id, affect = music2affect_cluster(handcoded_music_rep) - affective_cluster_id, affect = None, None - if verbose: print(' ' * (level*2) + 'Mapping to flavors.') - cocktail_rep = music2flavor(music_ai_rep, affective_cluster_id) - if verbose: print(' ' * (level + 2) + f'Mapped in {int(time.time() - init_time)} seconds.') - return cocktail_rep, affective_cluster_id, affect - -# def sigmoid(x, shift, beta): -# return (1 / (1 + np.exp(-(x + shift) * beta)) - 0.5) * 2 -# -# cluster_colors = ['#%06X' % random.randint(0, 0xFFFFFF) for _ in range(10)] - -# def plot_cluster_ids_dataset(handcoded_rep_path): -# import matplotlib.pyplot as plt -# reps, _, _ = get_data(handcoded_rep_path, keys) -# cluster_ids, affects = music2affect_cluster(reps) -# # plt.figure() -# # affects2 = affects.copy() -# # affects2 = sigmoid(affects2, 0.05, 8) -# # plt.hist(affects2[:, 2], bins=30) -# # plt.xlim([-1, 1]) -# fig = plt.figure() -# ax = fig.add_subplot(projection='3d') -# ax.set_xlim([-1, 1]) -# ax.set_ylim([-1, 1]) -# ax.set_zlim([-1, 1]) -# for cluster_id in sorted(set(cluster_ids)): -# indexes = np.argwhere(cluster_ids == cluster_id).flatten() -# if len(indexes) > 0: -# ax.scatter(affects[indexes, 0], affects[indexes, 1], affects[indexes, 2], c=cluster_colors[cluster_id], s=150) -# ax.set_xlabel('Valence') -# ax.set_ylabel('Arousal') -# ax.set_zlabel('Dominance') -# plt.figure() -# plt.bar(range(10), [np.argwhere(cluster_ids == i).size for i in range(10)]) -# plt.show() -# -# plot_cluster_ids_dataset(handcoded_rep_path) \ No newline at end 
of file diff --git a/spaces/changlisheng/shangChat/modules/utils.py b/spaces/changlisheng/shangChat/modules/utils.py deleted file mode 100644 index 23f47d688d9690c6c68ccacc765108ce68d62b76..0000000000000000000000000000000000000000 --- a/spaces/changlisheng/shangChat/modules/utils.py +++ /dev/null @@ -1,536 +0,0 @@ -# -*- coding:utf-8 -*- -from __future__ import annotations -from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Type -import logging -import json -import os -import datetime -import hashlib -import csv -import requests -import re -import html -import sys -import subprocess - -import gradio as gr -from pypinyin import lazy_pinyin -import tiktoken -import mdtex2html -from markdown import markdown -from pygments import highlight -from pygments.lexers import get_lexer_by_name -from pygments.formatters import HtmlFormatter -import pandas as pd - -from modules.presets import * -from . import shared -from modules.config import retrieve_proxy - -if TYPE_CHECKING: - from typing import TypedDict - - class DataframeData(TypedDict): - headers: List[str] - data: List[List[str | int | bool]] - - -def count_token(message): - encoding = tiktoken.get_encoding("cl100k_base") - input_str = f"role: {message['role']}, content: {message['content']}" - length = len(encoding.encode(input_str)) - return length - - -def markdown_to_html_with_syntax_highlight(md_str): - def replacer(match): - lang = match.group(1) or "text" - code = match.group(2) - - try: - lexer = get_lexer_by_name(lang, stripall=True) - except ValueError: - lexer = get_lexer_by_name("text", stripall=True) - - formatter = HtmlFormatter() - highlighted_code = highlight(code, lexer, formatter) - - return f'
    <pre><code class="{lang}">{highlighted_code}</code></pre>
    ' - - code_block_pattern = r"```(\w+)?\n([\s\S]+?)\n```" - md_str = re.sub(code_block_pattern, replacer, md_str, flags=re.MULTILINE) - - html_str = markdown(md_str) - return html_str - - -def normalize_markdown(md_text: str) -> str: - lines = md_text.split("\n") - normalized_lines = [] - inside_list = False - - for i, line in enumerate(lines): - if re.match(r"^(\d+\.|-|\*|\+)\s", line.strip()): - if not inside_list and i > 0 and lines[i - 1].strip() != "": - normalized_lines.append("") - inside_list = True - normalized_lines.append(line) - elif inside_list and line.strip() == "": - if i < len(lines) - 1 and not re.match( - r"^(\d+\.|-|\*|\+)\s", lines[i + 1].strip() - ): - normalized_lines.append(line) - continue - else: - inside_list = False - normalized_lines.append(line) - - return "\n".join(normalized_lines) - - -def convert_mdtext(md_text): - code_block_pattern = re.compile(r"```(.*?)(?:```|$)", re.DOTALL) - inline_code_pattern = re.compile(r"`(.*?)`", re.DOTALL) - code_blocks = code_block_pattern.findall(md_text) - non_code_parts = code_block_pattern.split(md_text)[::2] - - result = [] - for non_code, code in zip(non_code_parts, code_blocks + [""]): - if non_code.strip(): - non_code = normalize_markdown(non_code) - if inline_code_pattern.search(non_code): - result.append(markdown(non_code, extensions=["tables"])) - else: - result.append(mdtex2html.convert(non_code, extensions=["tables"])) - if code.strip(): - # _, code = detect_language(code) # 暂时去除代码高亮功能,因为在大段代码的情况下会出现问题 - # code = code.replace("\n\n", "\n") # 暂时去除代码中的空行,因为在大段代码的情况下会出现问题 - code = f"\n```{code}\n\n```" - code = markdown_to_html_with_syntax_highlight(code) - result.append(code) - result = "".join(result) - result += ALREADY_CONVERTED_MARK - return result - - -def convert_asis(userinput): - return ( - f'
    <p style="white-space:pre-wrap;">{html.escape(userinput)}</p>
    ' - + ALREADY_CONVERTED_MARK - ) - - -def detect_converted_mark(userinput): - if userinput.endswith(ALREADY_CONVERTED_MARK): - return True - else: - return False - - -def detect_language(code): - if code.startswith("\n"): - first_line = "" - else: - first_line = code.strip().split("\n", 1)[0] - language = first_line.lower() if first_line else "" - code_without_language = code[len(first_line) :].lstrip() if first_line else code - return language, code_without_language - - -def construct_text(role, text): - return {"role": role, "content": text} - - -def construct_user(text): - return construct_text("user", text) - - -def construct_system(text): - return construct_text("system", text) - - -def construct_assistant(text): - return construct_text("assistant", text) - - -def construct_token_message(tokens: List[int]): - token_sum = 0 - for i in range(len(tokens)): - token_sum += sum(tokens[: i + 1]) - return f"Token 计数: {sum(tokens)},本次对话累计消耗了 {token_sum} tokens" - - -def delete_first_conversation(history, previous_token_count): - if history: - del history[:2] - del previous_token_count[0] - return ( - history, - previous_token_count, - construct_token_message(previous_token_count), - ) - - -def delete_last_conversation(chatbot, history, previous_token_count): - if len(chatbot) > 0 and standard_error_msg in chatbot[-1][1]: - logging.info("由于包含报错信息,只删除chatbot记录") - chatbot.pop() - return chatbot, history - if len(history) > 0: - logging.info("删除了一组对话历史") - history.pop() - history.pop() - if len(chatbot) > 0: - logging.info("删除了一组chatbot对话") - chatbot.pop() - if len(previous_token_count) > 0: - logging.info("删除了一组对话的token计数记录") - previous_token_count.pop() - return ( - chatbot, - history, - previous_token_count, - construct_token_message(previous_token_count), - ) - - -def save_file(filename, system, history, chatbot, user_name): - logging.info(f"{user_name} 保存对话历史中……") - os.makedirs(HISTORY_DIR / user_name, exist_ok=True) - if filename.endswith(".json"): - json_s = {"system": system, "history": history, "chatbot": chatbot} - print(json_s) - with open(os.path.join(HISTORY_DIR / user_name, filename), "w") as f: - json.dump(json_s, f) - elif filename.endswith(".md"): - md_s = f"system: \n- {system} \n" - for data in history: - md_s += f"\n{data['role']}: \n- {data['content']} \n" - with open(os.path.join(HISTORY_DIR / user_name, filename), "w", encoding="utf8") as f: - f.write(md_s) - logging.info(f"{user_name} 保存对话历史完毕") - return os.path.join(HISTORY_DIR / user_name, filename) - - -def save_chat_history(filename, system, history, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, system, history, chatbot, user_name) - - -def export_markdown(filename, system, history, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, system, history, chatbot, user_name) - - -def load_chat_history(filename, system, history, chatbot, user_name): - logging.info(f"{user_name} 加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR / user_name, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - 
logging.info(new_history) - except: - # 没有对话历史 - pass - logging.info(f"{user_name} 加载对话历史完毕") - return filename, json_s["system"], json_s["history"], json_s["chatbot"] - except FileNotFoundError: - logging.info(f"{user_name} 没有找到对话历史文件,不执行任何操作") - return filename, system, history, chatbot - - -def sorted_by_pinyin(list): - return sorted(list, key=lambda char: lazy_pinyin(char)[0][0]) - - -def get_file_names(dir, plain=False, filetypes=[".json"]): - logging.info(f"获取文件名列表,目录为{dir},文件类型为{filetypes},是否为纯文本列表{plain}") - files = [] - try: - for type in filetypes: - files += [f for f in os.listdir(dir) if f.endswith(type)] - except FileNotFoundError: - files = [] - files = sorted_by_pinyin(files) - if files == []: - files = [""] - logging.debug(f"files are:{files}") - if plain: - return files - else: - return gr.Dropdown.update(choices=files) - - -def get_history_names(plain=False, user_name=""): - logging.info(f"从用户 {user_name} 中获取历史记录文件名列表") - return get_file_names(HISTORY_DIR / user_name, plain) - - -def load_template(filename, mode=0): - logging.info(f"加载模板文件{filename},模式为{mode}(0为返回字典和下拉菜单,1为返回下拉菜单,2为返回字典)") - lines = [] - logging.info("Loading template...") - if filename.endswith(".json"): - with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as f: - lines = json.load(f) - lines = [[i["act"], i["prompt"]] for i in lines] - else: - with open( - os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8" - ) as csvfile: - reader = csv.reader(csvfile) - lines = list(reader) - lines = lines[1:] - if mode == 1: - return sorted_by_pinyin([row[0] for row in lines]) - elif mode == 2: - return {row[0]: row[1] for row in lines} - else: - choices = sorted_by_pinyin([row[0] for row in lines]) - return {row[0]: row[1] for row in lines}, gr.Dropdown.update( - choices=choices - ) - - -def get_template_names(plain=False): - logging.info("获取模板文件名列表") - return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", "json"]) - - -def get_template_content(templates, selection, original_system_prompt): - logging.info(f"应用模板中,选择为{selection},原始系统提示为{original_system_prompt}") - try: - return templates[selection] - except: - return original_system_prompt - - -def reset_state(): - logging.info("重置状态") - return [], [], [], construct_token_message([0]) - - -def reset_textbox(): - logging.debug("重置文本框") - return gr.update(value="") - - -def reset_default(): - default_host = shared.state.reset_api_host() - retrieve_proxy("") - return gr.update(value=default_host), gr.update(value=""), "API-Host 和代理已重置" - - -def change_api_host(host): - shared.state.set_api_host(host) - msg = f"API-Host更改为了{host}" - logging.info(msg) - return msg - - -def change_proxy(proxy): - retrieve_proxy(proxy) - os.environ["HTTPS_PROXY"] = proxy - msg = f"代理更改为了{proxy}" - logging.info(msg) - return msg - - -def hide_middle_chars(s): - if s is None: - return "" - if len(s) <= 8: - return s - else: - head = s[:4] - tail = s[-4:] - hidden = "*" * (len(s) - 8) - return head + hidden + tail - - -def submit_key(key): - key = key.strip() - msg = f"API密钥更改为了{hide_middle_chars(key)}" - logging.info(msg) - return key, msg - - -def replace_today(prompt): - today = datetime.datetime.today().strftime("%Y-%m-%d") - return prompt.replace("{current_date}", today) - - -def get_geoip(): - try: - with retrieve_proxy(): - response = requests.get("https://ipapi.co/json/", timeout=5) - data = response.json() - except: - data = {"error": True, "reason": "连接ipapi失败"} - if "error" in data.keys(): - logging.warning(f"无法获取IP地址信息。\n{data}") - if 
data["reason"] == "RateLimited": - return ( - f"获取IP地理位置失败,因为达到了检测IP的速率限制。聊天功能可能仍然可用。" - ) - else: - return f"获取IP地理位置失败。原因:{data['reason']}。你仍然可以使用聊天功能。" - else: - country = data["country_name"] - if country == "China": - text = "**您的IP区域:中国。请立即检查代理设置,在不受支持的地区使用API可能导致账号被封禁。**" - else: - text = f"您的IP区域:{country}。" - logging.info(text) - return text - - -def find_n(lst, max_num): - n = len(lst) - total = sum(lst) - - if total < max_num: - return n - - for i in range(len(lst)): - if total - lst[i] < max_num: - return n - i - 1 - total = total - lst[i] - return 1 - - -def start_outputing(): - logging.debug("显示取消按钮,隐藏发送按钮") - return gr.Button.update(visible=True), gr.Button.update(visible=False) - - -def end_outputing(): - return ( - gr.Button.update(visible=True), - gr.Button.update(visible=False), - ) - - -def cancel_outputing(): - logging.info("中止输出……") - shared.state.interrupt() - - -def transfer_input(inputs): - # 一次性返回,降低延迟 - textbox = reset_textbox() - outputing = start_outputing() - return ( - inputs, - gr.update(value=""), - gr.Button.update(visible=True), - gr.Button.update(visible=False), - ) - - - -def run(command, desc=None, errdesc=None, custom_env=None, live=False): - if desc is not None: - print(desc) - if live: - result = subprocess.run(command, shell=True, env=os.environ if custom_env is None else custom_env) - if result.returncode != 0: - raise RuntimeError(f"""{errdesc or 'Error running command'}. -Command: {command} -Error code: {result.returncode}""") - - return "" - result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, env=os.environ if custom_env is None else custom_env) - if result.returncode != 0: - message = f"""{errdesc or 'Error running command'}. -Command: {command} -Error code: {result.returncode} -stdout: {result.stdout.decode(encoding="utf8", errors="ignore") if len(result.stdout)>0 else ''} -stderr: {result.stderr.decode(encoding="utf8", errors="ignore") if len(result.stderr)>0 else ''} -""" - raise RuntimeError(message) - return result.stdout.decode(encoding="utf8", errors="ignore") - -def versions_html(): - git = os.environ.get('GIT', "git") - python_version = ".".join([str(x) for x in sys.version_info[0:3]]) - try: - commit_hash = run(f"{git} rev-parse HEAD").strip() - except Exception: - commit_hash = "" - if commit_hash != "": - short_commit = commit_hash[0:7] - commit_info = f"{short_commit}" - else: - commit_info = "unknown \U0001F615" - return f""" -Python: {python_version} - •  -Gradio: {gr.__version__} - •  -Commit: {commit_info} -""" - -def add_source_numbers(lst, source_name = "Source", use_source = True): - if use_source: - return [f'[{idx+1}]\t "{item[0]}"\n{source_name}: {item[1]}' for idx, item in enumerate(lst)] - else: - return [f'[{idx+1}]\t "{item}"' for idx, item in enumerate(lst)] - -def add_details(lst): - nodes = [] - for index, txt in enumerate(lst): - brief = txt[:25].replace("\n", "") - nodes.append( - f"
<details><summary>{brief}...</summary><p>{txt}</p></details>
    " - ) - return nodes - - -def sheet_to_string(sheet): - result = "" - for index, row in sheet.iterrows(): - row_string = "" - for column in sheet.columns: - row_string += f"{column}: {row[column]}, " - row_string = row_string.rstrip(", ") - row_string += "." - result += row_string + "\n" - return result - -def excel_to_string(file_path): - # 读取Excel文件中的所有工作表 - excel_file = pd.read_excel(file_path, engine='openpyxl', sheet_name=None) - - # 初始化结果字符串 - result = "" - - # 遍历每一个工作表 - for sheet_name, sheet_data in excel_file.items(): - # 将工作表名称添加到结果字符串 - result += f"Sheet: {sheet_name}\n" - - # 处理当前工作表并添加到结果字符串 - result += sheet_to_string(sheet_data) - - # 在不同工作表之间添加分隔符 - result += "\n" + ("-" * 20) + "\n\n" - - return result diff --git a/spaces/chendl/compositional_test/transformers/examples/pytorch/token-classification/run_ner_no_trainer.py b/spaces/chendl/compositional_test/transformers/examples/pytorch/token-classification/run_ner_no_trainer.py deleted file mode 100644 index 2865e1a31a04325ec4568fa3ba627e5675d4e094..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/pytorch/token-classification/run_ner_no_trainer.py +++ /dev/null @@ -1,778 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2021 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Fine-tuning a 🤗 Transformers model on token classification tasks (NER, POS, CHUNKS) relying on the accelerate library -without using a Trainer. -""" - -import argparse -import json -import logging -import math -import os -import random -from pathlib import Path - -import datasets -import evaluate -import torch -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from datasets import ClassLabel, load_dataset -from huggingface_hub import Repository, create_repo -from torch.utils.data import DataLoader -from tqdm.auto import tqdm - -import transformers -from transformers import ( - CONFIG_MAPPING, - MODEL_MAPPING, - AutoConfig, - AutoModelForTokenClassification, - AutoTokenizer, - DataCollatorForTokenClassification, - PretrainedConfig, - SchedulerType, - default_data_collator, - get_scheduler, -) -from transformers.utils import check_min_version, get_full_repo_name, send_example_telemetry -from transformers.utils.versions import require_version - - -# Will error if the minimal version of Transformers is not installed. Remove at your own risks. 
-check_min_version("4.28.0") - -logger = get_logger(__name__) -require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/token-classification/requirements.txt") - -# You should update this to your particular problem to have better documentation of `model_type` -MODEL_CONFIG_CLASSES = list(MODEL_MAPPING.keys()) -MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES) - - -def parse_args(): - parser = argparse.ArgumentParser( - description="Finetune a transformers model on a text classification task (NER) with accelerate library" - ) - parser.add_argument( - "--dataset_name", - type=str, - default=None, - help="The name of the dataset to use (via the datasets library).", - ) - parser.add_argument( - "--dataset_config_name", - type=str, - default=None, - help="The configuration name of the dataset to use (via the datasets library).", - ) - parser.add_argument( - "--train_file", type=str, default=None, help="A csv or a json file containing the training data." - ) - parser.add_argument( - "--validation_file", type=str, default=None, help="A csv or a json file containing the validation data." - ) - parser.add_argument( - "--text_column_name", - type=str, - default=None, - help="The column name of text to input in the file (a csv or JSON file).", - ) - parser.add_argument( - "--label_column_name", - type=str, - default=None, - help="The column name of label to input in the file (a csv or JSON file).", - ) - parser.add_argument( - "--max_length", - type=int, - default=128, - help=( - "The maximum total input sequence length after tokenization. Sequences longer than this will be truncated," - " sequences shorter will be padded if `--pad_to_max_length` is passed." - ), - ) - parser.add_argument( - "--pad_to_max_length", - action="store_true", - help="If passed, pad all samples to `max_length`. Otherwise, dynamic padding is used.", - ) - parser.add_argument( - "--model_name_or_path", - type=str, - help="Path to pretrained model or model identifier from huggingface.co/models.", - required=False, - ) - parser.add_argument( - "--config_name", - type=str, - default=None, - help="Pretrained config name or path if not the same as model_name", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--per_device_train_batch_size", - type=int, - default=8, - help="Batch size (per device) for the training dataloader.", - ) - parser.add_argument( - "--per_device_eval_batch_size", - type=int, - default=8, - help="Batch size (per device) for the evaluation dataloader.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-5, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument("--weight_decay", type=float, default=0.0, help="Weight decay to use.") - parser.add_argument("--num_train_epochs", type=int, default=3, help="Total number of training epochs to perform.") - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. 
If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--lr_scheduler_type", - type=SchedulerType, - default="linear", - help="The scheduler type to use.", - choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"], - ) - parser.add_argument( - "--num_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument("--output_dir", type=str, default=None, help="Where to store the final model.") - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--model_type", - type=str, - default=None, - help="Model type to use if training from scratch.", - choices=MODEL_TYPES, - ) - parser.add_argument( - "--label_all_tokens", - action="store_true", - help="Setting labels of all special tokens to -100 and thus PyTorch will ignore them.", - ) - parser.add_argument( - "--return_entity_level_metrics", - action="store_true", - help="Indication whether entity level metrics are to be returner.", - ) - parser.add_argument( - "--task_name", - type=str, - default="ner", - choices=["ner", "pos", "chunk"], - help="The name of the task.", - ) - parser.add_argument( - "--debug", - action="store_true", - help="Activate debug mode and run training only with a subset of data.", - ) - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument( - "--hub_model_id", type=str, help="The name of the repository to keep in sync with the local `output_dir`." - ) - parser.add_argument("--hub_token", type=str, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--checkpointing_steps", - type=str, - default=None, - help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.", - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help="If the training should continue from a checkpoint folder.", - ) - parser.add_argument( - "--with_tracking", - action="store_true", - help="Whether to enable experiment trackers for logging.", - ) - parser.add_argument( - "--report_to", - type=str, - default="all", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`,' - ' `"wandb"`, `"comet_ml"` and `"clearml"`. Use `"all"` (default) to report to all integrations.' - "Only applicable when `--with_tracking` is passed." - ), - ) - parser.add_argument( - "--ignore_mismatched_sizes", - action="store_true", - help="Whether or not to enable to load a pretrained model whose head dimensions are different.", - ) - args = parser.parse_args() - - # Sanity checks - if args.task_name is None and args.train_file is None and args.validation_file is None: - raise ValueError("Need either a task name or a training/validation file.") - else: - if args.train_file is not None: - extension = args.train_file.split(".")[-1] - assert extension in ["csv", "json"], "`train_file` should be a csv or a json file." - if args.validation_file is not None: - extension = args.validation_file.split(".")[-1] - assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file." 
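-
-    # A typical invocation of this script (sketch; model, dataset and output path are
-    # illustrative, the flags are the ones defined above):
-    #   python run_ner_no_trainer.py --model_name_or_path bert-base-cased \
-    #     --dataset_name conll2003 --task_name ner \
-    #     --output_dir /tmp/test-ner --pad_to_max_length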
- - if args.push_to_hub: - assert args.output_dir is not None, "Need an `output_dir` to create a repo when `--push_to_hub` is passed." - - return args - - -def main(): - args = parse_args() - - # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The - # information sent is the one passed as arguments along with your Python/PyTorch versions. - send_example_telemetry("run_ner_no_trainer", args) - - # Initialize the accelerator. We will let the accelerator handle device placement for us in this example. - # If we're using tracking, we also need to initialize it here and it will by default pick up all supported trackers - # in the environment - accelerator = ( - Accelerator(log_with=args.report_to, logging_dir=args.output_dir) if args.with_tracking else Accelerator() - ) - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - create_repo(repo_name, exist_ok=True, token=args.hub_token) - repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - accelerator.wait_for_everyone() - - # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below) - # or just provide the name of one of the public datasets for token classification task available on the hub at https://huggingface.co/datasets/ - # (the dataset will be downloaded automatically from the datasets Hub). - # - # For CSV/JSON files, this script will use the column called 'tokens' or the first column if no column called - # 'tokens' is found. You can easily tweak this behavior (see below). - # - # In distributed training, the load_dataset function guarantee that only one local process can concurrently - # download the dataset. - if args.dataset_name is not None: - # Downloading and loading a dataset from the hub. 
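-        # For example, load_dataset("conll2003") returns a DatasetDict whose rows look
-        # roughly like {"tokens": ["EU", "rejects", ...], "ner_tags": [3, 0, ...]}.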
- raw_datasets = load_dataset(args.dataset_name, args.dataset_config_name) - else: - data_files = {} - if args.train_file is not None: - data_files["train"] = args.train_file - if args.validation_file is not None: - data_files["validation"] = args.validation_file - extension = args.train_file.split(".")[-1] - raw_datasets = load_dataset(extension, data_files=data_files) - # Trim a number of training examples - if args.debug: - for split in raw_datasets.keys(): - raw_datasets[split] = raw_datasets[split].select(range(100)) - # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at - # https://huggingface.co/docs/datasets/loading_datasets.html. - - if raw_datasets["train"] is not None: - column_names = raw_datasets["train"].column_names - features = raw_datasets["train"].features - else: - column_names = raw_datasets["validation"].column_names - features = raw_datasets["validation"].features - - if args.text_column_name is not None: - text_column_name = args.text_column_name - elif "tokens" in column_names: - text_column_name = "tokens" - else: - text_column_name = column_names[0] - - if args.label_column_name is not None: - label_column_name = args.label_column_name - elif f"{args.task_name}_tags" in column_names: - label_column_name = f"{args.task_name}_tags" - else: - label_column_name = column_names[1] - - # In the event the labels are not a `Sequence[ClassLabel]`, we will need to go through the dataset to get the - # unique labels. - def get_label_list(labels): - unique_labels = set() - for label in labels: - unique_labels = unique_labels | set(label) - label_list = list(unique_labels) - label_list.sort() - return label_list - - # If the labels are of type ClassLabel, they are already integers and we have the map stored somewhere. - # Otherwise, we have to get the list of labels manually. - labels_are_int = isinstance(features[label_column_name].feature, ClassLabel) - if labels_are_int: - label_list = features[label_column_name].feature.names - label_to_id = {i: i for i in range(len(label_list))} - else: - label_list = get_label_list(raw_datasets["train"][label_column_name]) - label_to_id = {l: i for i, l in enumerate(label_list)} - - num_labels = len(label_list) - - # Load pretrained model and tokenizer - # - # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently - # download model & vocab. - if args.config_name: - config = AutoConfig.from_pretrained(args.config_name, num_labels=num_labels) - elif args.model_name_or_path: - config = AutoConfig.from_pretrained(args.model_name_or_path, num_labels=num_labels) - else: - config = CONFIG_MAPPING[args.model_type]() - logger.warning("You are instantiating a new config instance from scratch.") - - tokenizer_name_or_path = args.tokenizer_name if args.tokenizer_name else args.model_name_or_path - if not tokenizer_name_or_path: - raise ValueError( - "You are instantiating a new tokenizer from scratch. This is not supported by this script." - "You can do it from another script, save it, and load it from here, using --tokenizer_name." 
- )
-
-    if config.model_type in {"bloom", "gpt2", "roberta"}:
-        tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path, use_fast=True, add_prefix_space=True)
-    else:
-        tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path, use_fast=True)
-
-    if args.model_name_or_path:
-        model = AutoModelForTokenClassification.from_pretrained(
-            args.model_name_or_path,
-            from_tf=bool(".ckpt" in args.model_name_or_path),
-            config=config,
-            ignore_mismatched_sizes=args.ignore_mismatched_sizes,
-        )
-    else:
-        logger.info("Training new model from scratch")
-        model = AutoModelForTokenClassification.from_config(config)
-
-    # We resize the embeddings only when necessary to avoid index errors. If you are creating a model from scratch
-    # on a small vocab and want a smaller embedding size, remove this test.
-    embedding_size = model.get_input_embeddings().weight.shape[0]
-    if len(tokenizer) > embedding_size:
-        model.resize_token_embeddings(len(tokenizer))
-
-    # Model has labels -> use them.
-    if model.config.label2id != PretrainedConfig(num_labels=num_labels).label2id:
-        if sorted(model.config.label2id.keys()) == sorted(label_list):
-            # Reorganize `label_list` to match the ordering of the model.
-            if labels_are_int:
-                label_to_id = {i: int(model.config.label2id[l]) for i, l in enumerate(label_list)}
-                label_list = [model.config.id2label[i] for i in range(num_labels)]
-            else:
-                label_list = [model.config.id2label[i] for i in range(num_labels)]
-                label_to_id = {l: i for i, l in enumerate(label_list)}
-        else:
-            logger.warning(
-                "Your model seems to have been trained with labels, but they don't match the dataset: "
-                f"model labels: {sorted(model.config.label2id.keys())}, dataset labels:"
-                f" {sorted(label_list)}.\nIgnoring the model labels as a result."
-            )
-
-    # Set the correspondences label/ID inside the model config
-    model.config.label2id = {l: i for i, l in enumerate(label_list)}
-    model.config.id2label = dict(enumerate(label_list))
-
-    # Map that sends B-Xxx label to its I-Xxx counterpart
-    b_to_i_label = []
-    for idx, label in enumerate(label_list):
-        if label.startswith("B-") and label.replace("B-", "I-") in label_list:
-            b_to_i_label.append(label_list.index(label.replace("B-", "I-")))
-        else:
-            b_to_i_label.append(idx)
-
-    # Preprocessing the datasets.
-    # First we tokenize all the texts.
-    padding = "max_length" if args.pad_to_max_length else False
-
-    # Tokenize all texts and align the labels with them.
-
-    def tokenize_and_align_labels(examples):
-        tokenized_inputs = tokenizer(
-            examples[text_column_name],
-            max_length=args.max_length,
-            padding=padding,
-            truncation=True,
-            # We use this argument because the texts in our dataset are lists of words (with a label for each word).
-            is_split_into_words=True,
-        )
-
-        labels = []
-        for i, label in enumerate(examples[label_column_name]):
-            word_ids = tokenized_inputs.word_ids(batch_index=i)
-            previous_word_idx = None
-            label_ids = []
-            for word_idx in word_ids:
-                # Special tokens have a word id that is None. We set the label to -100 so they are automatically
-                # ignored in the loss function.
-                if word_idx is None:
-                    label_ids.append(-100)
-                # We set the label for the first token of each word.
-                elif word_idx != previous_word_idx:
-                    label_ids.append(label_to_id[label[word_idx]])
-                # For the other tokens in a word, we set the label to either the current label or -100, depending on
-                # the label_all_tokens flag.
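-                # Worked example (illustrative): if "Washington" is split into
-                # ["Wash", "##ing", "##ton"] with word label B-LOC, the first piece gets
-                # label_to_id["B-LOC"]; the remaining pieces take the branch below and get
-                # the I-LOC id (via b_to_i_label) when --label_all_tokens is set, else -100.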
- else: - if args.label_all_tokens: - label_ids.append(b_to_i_label[label_to_id[label[word_idx]]]) - else: - label_ids.append(-100) - previous_word_idx = word_idx - - labels.append(label_ids) - tokenized_inputs["labels"] = labels - return tokenized_inputs - - with accelerator.main_process_first(): - processed_raw_datasets = raw_datasets.map( - tokenize_and_align_labels, - batched=True, - remove_columns=raw_datasets["train"].column_names, - desc="Running tokenizer on dataset", - ) - - train_dataset = processed_raw_datasets["train"] - eval_dataset = processed_raw_datasets["validation"] - - # Log a few random samples from the training set: - for index in random.sample(range(len(train_dataset)), 3): - logger.info(f"Sample {index} of the training set: {train_dataset[index]}.") - - # DataLoaders creation: - if args.pad_to_max_length: - # If padding was already done ot max length, we use the default data collator that will just convert everything - # to tensors. - data_collator = default_data_collator - else: - # Otherwise, `DataCollatorForTokenClassification` will apply dynamic padding for us (by padding to the maximum length of - # the samples passed). When using mixed precision, we add `pad_to_multiple_of=8` to pad all tensors to multiple - # of 8s, which will enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta). - data_collator = DataCollatorForTokenClassification( - tokenizer, pad_to_multiple_of=(8 if accelerator.use_fp16 else None) - ) - - train_dataloader = DataLoader( - train_dataset, shuffle=True, collate_fn=data_collator, batch_size=args.per_device_train_batch_size - ) - eval_dataloader = DataLoader(eval_dataset, collate_fn=data_collator, batch_size=args.per_device_eval_batch_size) - - # Optimizer - # Split weights in two groups, one with weight decay and the other not. - no_decay = ["bias", "LayerNorm.weight"] - optimizer_grouped_parameters = [ - { - "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], - "weight_decay": args.weight_decay, - }, - { - "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], - "weight_decay": 0.0, - }, - ] - optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=args.learning_rate) - - # Use the device given by the `accelerator` object. - device = accelerator.device - model.to(device) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - name=args.lr_scheduler_type, - optimizer=optimizer, - num_warmup_steps=args.num_warmup_steps, - num_training_steps=args.max_train_steps, - ) - - # Prepare everything with our `accelerator`. - model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare( - model, optimizer, train_dataloader, eval_dataloader, lr_scheduler - ) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. 
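-    # (accelerator.prepare shards the dataloader across processes, so the per-process
-    # dataloader length, and therefore the step math below, can change.)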
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # Figure out how many steps we should save the Accelerator states - checkpointing_steps = args.checkpointing_steps - if checkpointing_steps is not None and checkpointing_steps.isdigit(): - checkpointing_steps = int(checkpointing_steps) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if args.with_tracking: - experiment_config = vars(args) - # TensorBoard cannot log Enums, need the raw value - experiment_config["lr_scheduler_type"] = experiment_config["lr_scheduler_type"].value - accelerator.init_trackers("ner_no_trainer", experiment_config) - - # Metrics - metric = evaluate.load("seqeval") - - def get_labels(predictions, references): - # Transform predictions and references tensos to numpy arrays - if device.type == "cpu": - y_pred = predictions.detach().clone().numpy() - y_true = references.detach().clone().numpy() - else: - y_pred = predictions.detach().cpu().clone().numpy() - y_true = references.detach().cpu().clone().numpy() - - # Remove ignored index (special tokens) - true_predictions = [ - [label_list[p] for (p, l) in zip(pred, gold_label) if l != -100] - for pred, gold_label in zip(y_pred, y_true) - ] - true_labels = [ - [label_list[l] for (p, l) in zip(pred, gold_label) if l != -100] - for pred, gold_label in zip(y_pred, y_true) - ] - return true_predictions, true_labels - - def compute_metrics(): - results = metric.compute() - if args.return_entity_level_metrics: - # Unpack nested dictionaries - final_results = {} - for key, value in results.items(): - if isinstance(value, dict): - for n, v in value.items(): - final_results[f"{key}_{n}"] = v - else: - final_results[key] = value - return final_results - else: - return { - "precision": results["overall_precision"], - "recall": results["overall_recall"], - "f1": results["overall_f1"], - "accuracy": results["overall_accuracy"], - } - - # Train! - total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - # Only show the progress bar once on each machine. 
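-    # (tqdm below is disabled on every process except the local main one, so multi-GPU
-    # runs still print a single bar per machine.)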
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process) - completed_steps = 0 - starting_epoch = 0 - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint is not None or args.resume_from_checkpoint != "": - accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}") - accelerator.load_state(args.resume_from_checkpoint) - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()] - dirs.sort(key=os.path.getctime) - path = dirs[-1] # Sorts folders by date modified, most recent checkpoint is the last - # Extract `epoch_{i}` or `step_{i}` - training_difference = os.path.splitext(path)[0] - - if "epoch" in training_difference: - starting_epoch = int(training_difference.replace("epoch_", "")) + 1 - resume_step = None - else: - resume_step = int(training_difference.replace("step_", "")) - starting_epoch = resume_step // len(train_dataloader) - resume_step -= starting_epoch * len(train_dataloader) - - for epoch in range(starting_epoch, args.num_train_epochs): - model.train() - if args.with_tracking: - total_loss = 0 - for step, batch in enumerate(train_dataloader): - # We need to skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == starting_epoch: - if resume_step is not None and step < resume_step: - completed_steps += 1 - continue - outputs = model(**batch) - loss = outputs.loss - # We keep track of the loss at each epoch - if args.with_tracking: - total_loss += loss.detach().float() - loss = loss / args.gradient_accumulation_steps - accelerator.backward(loss) - if step % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1: - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - progress_bar.update(1) - completed_steps += 1 - - if isinstance(checkpointing_steps, int): - if completed_steps % checkpointing_steps == 0: - output_dir = f"step_{completed_steps }" - if args.output_dir is not None: - output_dir = os.path.join(args.output_dir, output_dir) - accelerator.save_state(output_dir) - - if completed_steps >= args.max_train_steps: - break - - model.eval() - samples_seen = 0 - for step, batch in enumerate(eval_dataloader): - with torch.no_grad(): - outputs = model(**batch) - predictions = outputs.logits.argmax(dim=-1) - labels = batch["labels"] - if not args.pad_to_max_length: # necessary to pad predictions and labels for being gathered - predictions = accelerator.pad_across_processes(predictions, dim=1, pad_index=-100) - labels = accelerator.pad_across_processes(labels, dim=1, pad_index=-100) - predictions_gathered, labels_gathered = accelerator.gather((predictions, labels)) - # If we are in a multiprocess environment, the last batch has duplicates - if accelerator.num_processes > 1: - if step == len(eval_dataloader) - 1: - predictions_gathered = predictions_gathered[: len(eval_dataloader.dataset) - samples_seen] - labels_gathered = labels_gathered[: len(eval_dataloader.dataset) - samples_seen] - else: - samples_seen += labels_gathered.shape[0] - preds, refs = get_labels(predictions_gathered, labels_gathered) - metric.add_batch( - predictions=preds, - references=refs, - ) # predictions and preferences are expected to be a nested list of labels, not label_ids - - eval_metric = compute_metrics() - accelerator.print(f"epoch {epoch}:", eval_metric) - if args.with_tracking: - accelerator.log( - { 
- "seqeval": eval_metric, - "train_loss": total_loss.item() / len(train_dataloader), - "epoch": epoch, - "step": completed_steps, - }, - step=completed_steps, - ) - - if args.push_to_hub and epoch < args.num_train_epochs - 1: - accelerator.wait_for_everyone() - unwrapped_model = accelerator.unwrap_model(model) - unwrapped_model.save_pretrained( - args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save - ) - if accelerator.is_main_process: - tokenizer.save_pretrained(args.output_dir) - repo.push_to_hub( - commit_message=f"Training in progress epoch {epoch}", blocking=False, auto_lfs_prune=True - ) - - if args.checkpointing_steps == "epoch": - output_dir = f"epoch_{epoch}" - if args.output_dir is not None: - output_dir = os.path.join(args.output_dir, output_dir) - accelerator.save_state(output_dir) - - if args.with_tracking: - accelerator.end_training() - - if args.output_dir is not None: - accelerator.wait_for_everyone() - unwrapped_model = accelerator.unwrap_model(model) - unwrapped_model.save_pretrained( - args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save - ) - if accelerator.is_main_process: - tokenizer.save_pretrained(args.output_dir) - if args.push_to_hub: - repo.push_to_hub(commit_message="End of training", auto_lfs_prune=True) - - all_results = {f"eval_{k}": v for k, v in eval_metric.items()} - if args.with_tracking: - all_results.update({"train_loss": total_loss.item() / len(train_dataloader)}) - with open(os.path.join(args.output_dir, "all_results.json"), "w") as f: - json.dump(all_results, f) - - -if __name__ == "__main__": - main() diff --git a/spaces/chilge/taoli/commons.py b/spaces/chilge/taoli/commons.py deleted file mode 100644 index 074888006392e956ce204d8368362dbb2cd4e304..0000000000000000000000000000000000000000 --- a/spaces/chilge/taoli/commons.py +++ /dev/null @@ -1,188 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -def slice_pitch_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - -def rand_slice_segments_with_pitch(x, pitch, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - ret_pitch = slice_pitch_segments(pitch, ids_str, segment_size) - return ret, ret_pitch, ids_str - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def rand_spec_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = 
sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/chlab/interactive_kinematic_planet_detector/app.py b/spaces/chlab/interactive_kinematic_planet_detector/app.py deleted file mode 100644 index f8013c12ef9cd1435d3198b486e6c6ec00264c2d..0000000000000000000000000000000000000000 --- a/spaces/chlab/interactive_kinematic_planet_detector/app.py +++ /dev/null @@ -1,401 +0,0 @@ -import gradio as gr -from huggingface_hub import hf_hub_url, cached_download -from matplotlib import cm -import matplotlib.pyplot as plt -from mpl_toolkits.axes_grid1 import make_axes_locatable -import numpy as np -# import onnxruntime as ort -from PIL import Image -from scipy import special -import sys -# import timm -from types import SimpleNamespace -# from transformers import AutoModel, pipeline -from transformers import AutoModelForImageClassification, AutoModel, AutoConfig -import torch - -sys.path.insert(1, "../") -# from utils import model_utils, train_utils, data_utils, run_utils -# from model_utils import jason_regnet_maker, jason_efficientnet_maker -from model_utils.efficientnet_config import EfficientNetConfig, EfficientNetPreTrained, EfficientNet - -model_path = 'chlab/' -# model_path = './models/' - -# plotting a prameters -labels = 20 -ticks = 14 -legends = 14 -text = 14 -titles = 22 -lw = 3 -ps = 200 -cmap = 'magma' - -effnet_hparams = {47: {"num_classes": 2, - "gamma": 0.04294256770072906, - "lr": 0.010208864616781627, - "weight_decay": 0.00014537466483781656, - "batch_size": 16, - "num_channels": 47, - "stochastic_depth_prob": 0.017760418815821067, - "dropout": 0.039061686292663655, - "width_mult": 0.7540060155156922, - "depth_mult": 0.9378692812212488, - "size": "v2_s", - "model_type": "efficientnet_47_planet_detection" - }, - 61: { - "num_classes": 2, - "gamma": 0.032606396652426956, - "lr": 0.008692971067922545, - "weight_decay": 0.00008348389688708425, - "batch_size": 23, - "num_channels": 61, - "stochastic_depth_prob": 0.003581930052432713, - "dropout": 0.027804120950575217, - "width_mult": 1.060782511229692, - "depth_mult": 0.7752918857163054, - "size": "v2_s", - "model_type": "efficientnet_61_planet_detection" - }, - 75: { - "num_classes": 2, - "gamma": 0.029768470449465057, - "lr": 0.008383851744497892, - "weight_decay": 0.000196304392793202, - "batch_size": 32, - "num_channels": 75, - "stochastic_depth_prob": 0.08398410137077088, - "dropout": 0.03351826828687193, - "width_mult": 1.144132674734038, - "depth_mult": 1.2267023928285563, - "size": "v2_s", - "model_type": "efficientnet_75_planet_detection" - } -} -# effnet_config = SimpleNamespace(**effnet_hparams) - -# which layers to look at -activation_indices = {'efficientnet': [0, 3]} - - -def normalize_array(x: list): - - '''Makes array between 0 and 1''' - - x = np.array(x) - - return 
(x - np.min(x)) / np.max(x - np.min(x))
-
-# def load_model(model: str, activation: bool=True):
-
-#     if activation:
-#         model += '_w_activation'
-
-#     # set options for onnx runtime
-#     options = ort.SessionOptions()
-#     options.intra_op_num_threads = 1
-#     options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
-#     provider = "CPUExecutionProvider"
-
-#     # start session
-#     ort_session = ort.InferenceSession(model_path + '%s.onnx' % (model), options, providers=[provider])
-#     # ort_session = ORTModel.load_model(model_path + '%s.onnx' % (model))
-
-#     return ort_session
-
-def get_activations(model, image: list, model_name: str,
-                    layer=None, vmax=2.5, sub_mean=True,
-                    channel: int=0):
-
-    '''Gets activations for a given input image'''
-
-    # run model
-    # input_name = intermediate_model.get_inputs()[0].name
-    # outputs = intermediate_model.run(None, {input_name: image})
-
-    layer_outputs = {}
-    temp_image = image
-    for i in range(len(model.features)):
-        temp_image = model.features[i](temp_image)
-        if i in activation_indices[model_name]:
-            layer_outputs[i] = temp_image
-            # print(i, layer_outputs[i].shape)
-        if i == max(activation_indices[model_name]):
-            break
-    output = model(image).detach().cpu().numpy()
-    # print(model(image), model.model(image))
-
-    image = image.detach().cpu().numpy()
-    output_1 = layer_outputs[activation_indices[model_name][0]].detach().cpu().numpy()
-    output_2 = layer_outputs[activation_indices[model_name][1]].detach().cpu().numpy()
-
-    # print(image.shape, output.shape, output_1.shape, output_2.shape)
-
-    # get activations
-    # output_1 = outputs[1]
-    # output_2 = outputs[2]
-
-    # get prediction
-    # output = outputs[0][0]
-    output = special.softmax(output)
-    print(output)
-
-    # sum over velocity channels
-    if channel == 0:
-        in_image = np.sum(image[0, :, :, :], axis=0)
-    else:
-        in_image = image[0, int(channel - 1), :, :]
-    in_image = normalize_array(in_image)
-
-    if layer is None:
-        # sum over all velocity channels
-        activation_1 = np.sum(output_1[0, :, :, :], axis=0)
-        activation_2 = np.sum(output_2[0, :, :, :], axis=0)
-    else:
-        # select a single channel
-        activation_1 = output_1[0, layer, :, :]
-        activation_2 = output_2[0, layer, :, :]
-
-    if sub_mean:
-        # y = |x - mean(x)|
-        activation_1 -= np.mean(activation_1)
-        activation_1 = np.abs(activation_1)
-
-        activation_2 -= np.mean(activation_2)
-        activation_2 = np.abs(activation_2)
-
-    return output, in_image, activation_1, activation_2
-
-def plot_input(input_image: list, origin='lower'):
-
-    ##### make the figure for the input image #####
-    plt.rcParams['xtick.labelsize'] = ticks
-    plt.rcParams['ytick.labelsize'] = ticks
-
-    input_fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 5))
-
-    im0 = ax.imshow(input_image, cmap=cmap,
-                    origin=origin)
-
-    divider = make_axes_locatable(ax)
-    cax = divider.append_axes('right', size='5%', pad=0.05)
-    input_fig.colorbar(im0, cax=cax, orientation='vertical')
-
-    ax.set_title('Input', fontsize=titles)
-
-    return input_fig
-
-def plot_activations(activation_1: list, activation_2: list, origin='lower'):
-
-    ##### Make the activation figure ######
-    plt.rcParams['xtick.labelsize'] = ticks
-    plt.rcParams['ytick.labelsize'] = ticks
-
-    fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(18, 7))
-
-    ax1, ax2 = axs[0], axs[1]
-
-    im1 = ax1.imshow(activation_1, cmap=cmap,
-                     origin=origin)
-    im2 = ax2.imshow(activation_2, cmap=cmap,
-                     origin=origin)
-
-    ims = [im1, im2]
-
-    for (i, ax) in enumerate(axs):
-        divider = make_axes_locatable(ax)
-        cax = divider.append_axes('right', size='5%',
pad=0.05) - fig.colorbar(ims[i], cax=cax, orientation='vertical') - - # ax0.set_title('Input', fontsize=titles) - ax1.set_title('Early Activation', fontsize=titles) - ax2.set_title('Late Activation', fontsize=titles) - - return fig - -def predict_and_analyze(model_name, num_channels, dim, input_channel, image): - - ''' - Loads a model with activations, passes through image and shows activations - - The image must be a numpy array of shape (C, W, W) or (1, C, W, W) - ''' - - model_name = model_name.lower() - num_channels = int(num_channels) - W = int(dim) - - print("Running %s for %i channels" % (model_name, num_channels)) - print("Loading data") - # print(image) - - image = np.load(image.name, allow_pickle=True) - image = image.astype(np.float32) - - if len(image.shape) != 4: - image = image[np.newaxis, :, :, :] - - image = torch.from_numpy(image) - - assert image.shape == (1, num_channels, W, W), "Data is the wrong shape" - print("Data loaded") - - print("Loading model") - - model_loading_name = "%s_%i_planet_detection" % (model_name, num_channels) - - if 'eff' in model_name: - hparams = effnet_hparams[num_channels] - hparams = SimpleNamespace(**hparams) - config = EfficientNetConfig( - dropout=hparams.dropout, - num_channels=hparams.num_channels, - num_classes=hparams.num_classes, - size=hparams.size, - stochastic_depth_prob=hparams.stochastic_depth_prob, - width_mult=hparams.width_mult, - depth_mult=hparams.depth_mult, - ) - # EfficientNetConfig.model_type = "efficientnet_%s_planet_detection" % (hparams.num_channels) - # EfficientNetConfig.model_type = hparams.model_type - - # config.save_pretrained(save_directory=model_loading_name) - - # model = EfficientNet(dropout=hparams.dropout, - # num_channels=hparams.num_channels, - # num_classes=hparams.num_classes, - # size=hparams.size, - # stochastic_depth_prob=hparams.stochastic_depth_prob, - # width_mult=hparams.width_mult, - # depth_mult=hparams.depth_mult,) - - ###### kinda working ##### - # AutoConfig.register(model_loading_name, EfficientNetConfig) - # AutoModel.register(EfficientNetConfig, EfficientNetPreTrained) - # model = AutoModel.from_pretrained(model_path + model_loading_name) - - # config = EfficientNetConfig.from_pretrained(model_loading_name) - - # model = EfficientNetPreTrained.from_pretrained(model_loading_name) - # model = AutoModel.from_pretrained(model_loading_name, trust_remote_code=True) - - # model = AutoModel.from_pretrained(model_path + model_loading_name) - - model = EfficientNet(dropout=hparams.dropout, - num_channels=hparams.num_channels, - num_classes=hparams.num_classes, - size=hparams.size, - stochastic_depth_prob=hparams.stochastic_depth_prob, - width_mult=hparams.width_mult, - depth_mult=hparams.depth_mult,) - model_url = cached_download(hf_hub_url(model_path + model_loading_name, filename="pytorch_model.bin")) - # print(model_url) - - loaded = torch.load(model_url, map_location='cpu',) - # print(loaded.keys()) - - model.load_state_dict(loaded['state_dict']) - # print(model) - - # model = EfficientNetPreTrained(config) - # config.register_for_auto_class() - # model.register_for_auto_class("AutoModelForImageClassification") - # pretrained_model = timm.create_model(model_loading_name, pretrained=True) - # model.model.load_state_dict(pretrained_model.state_dict()) - # pipeline = pipeline(task="image-classification", model=model_loading_name) - # model = load_model(model_name, activation=True) - # model = AutoModel.from_pretrained(model_loading_name) - - print("Model loaded") - - print("Looking at activations") 
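-    # get_activations (defined above) returns the softmaxed class scores, the summed or
-    # single-channel input image, and the two intermediate feature maps selected by
-    # activation_indices.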
- output, input_image, activation_1, activation_2 = get_activations(model, image, model_name, - channel=input_channel, - sub_mean=True) - print("Activations and predictions finished") - # print(output) - - if output[0][0] < output[0][1]: - output = 'Planet predicted with %.3f percent confidence' % (100*output[0][1]) - else: - output = 'No planet predicted with %.3f percent confidence' % (100*output[0][0]) - - print(output) - - input_image = normalize_array(input_image) - activation_1 = normalize_array(activation_1) - activation_2 = normalize_array(activation_2) - - # convert input image to RGB (unused for now since not outputting actual image) - # input_pil_image = Image.fromarray(np.uint8(cm.magma(input_image)*255)) - - print("Plotting") - - origin = 'lower' - - # plot input image - input_fig = plot_input(input_image, origin=origin) - - # plot mean subtracted activations - fig1 = plot_activations(activation_1, activation_2, origin=origin) - - # plot raw activations - _, _, activation_1, activation_2 = get_activations(model, image, model_name, - channel=input_channel, - sub_mean=False) - activation_1 = normalize_array(activation_1) - activation_2 = normalize_array(activation_2) - fig2 = plot_activations(activation_1, activation_2, origin=origin) - - print("Sending to Hugging Face") - - return output, input_fig, fig1, fig2 - - -if __name__ == "__main__": - - demo = gr.Interface( - fn=predict_and_analyze, - inputs=[gr.Dropdown(["EfficientNet"], - # "RegNet"], - value="EfficientNet", - label="Model Selection", - show_label=True), - gr.Dropdown(["47", "61", "75"], - value="61", - label="Number of Velocity Channels", - show_label=True), - gr.Dropdown(["600"], - value="600", - label="Image Dimensions", - show_label=True), - gr.Number(value=0., - label="Input Channel to show (0 = sum over all)", - show_label=True), - gr.File(label="Input Data", show_label=True)], - outputs=[gr.Textbox(lines=1, label="Prediction", show_label=True), - # gr.Image(label="Input Image", show_label=True), - gr.Plot(label="Input Image", show_label=True), - gr.Plot(label="Mean-Subtracted Activations", show_label=True), - gr.Plot(label="Raw Activations", show_label=True) - ], - title="Kinematic Planet Detector" - ) - demo.launch() - - - - - - - - - - - - - diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py deleted file mode 100644 index bfdb4ea7e12761fa1440e484c83bcaa3de7844c9..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py +++ /dev/null @@ -1,2117 +0,0 @@ -from __future__ import annotations - -import array -import asyncio -import concurrent.futures -import math -import socket -import sys -from asyncio.base_events import _run_until_complete_cb # type: ignore[attr-defined] -from collections import OrderedDict, deque -from concurrent.futures import Future -from contextvars import Context, copy_context -from dataclasses import dataclass -from functools import partial, wraps -from inspect import ( - CORO_RUNNING, - CORO_SUSPENDED, - GEN_RUNNING, - GEN_SUSPENDED, - getcoroutinestate, - getgeneratorstate, -) -from io import IOBase -from os import PathLike -from queue import Queue -from socket import AddressFamily, SocketKind -from threading import Thread -from types import TracebackType -from typing import ( - IO, - Any, - AsyncGenerator, - Awaitable, - Callable, - 
Collection, - Coroutine, - Generator, - Iterable, - Mapping, - Optional, - Sequence, - Tuple, - TypeVar, - Union, - cast, -) -from weakref import WeakKeyDictionary - -import sniffio - -from .. import CapacityLimiterStatistics, EventStatistics, TaskInfo, abc -from .._core._compat import DeprecatedAsyncContextManager, DeprecatedAwaitable -from .._core._eventloop import claim_worker_thread, threadlocals -from .._core._exceptions import ( - BrokenResourceError, - BusyResourceError, - ClosedResourceError, - EndOfStream, - WouldBlock, -) -from .._core._exceptions import ExceptionGroup as BaseExceptionGroup -from .._core._sockets import GetAddrInfoReturnType, convert_ipv6_sockaddr -from .._core._synchronization import CapacityLimiter as BaseCapacityLimiter -from .._core._synchronization import Event as BaseEvent -from .._core._synchronization import ResourceGuard -from .._core._tasks import CancelScope as BaseCancelScope -from ..abc import IPSockAddrType, UDPPacketType -from ..lowlevel import RunVar - -if sys.version_info >= (3, 8): - - def get_coro(task: asyncio.Task) -> Generator | Awaitable[Any]: - return task.get_coro() - -else: - - def get_coro(task: asyncio.Task) -> Generator | Awaitable[Any]: - return task._coro - - -from asyncio import all_tasks, create_task, current_task, get_running_loop -from asyncio import run as native_run - - -def _get_task_callbacks(task: asyncio.Task) -> Iterable[Callable]: - return [cb for cb, context in task._callbacks] - - -T_Retval = TypeVar("T_Retval") -T_contra = TypeVar("T_contra", contravariant=True) - -# Check whether there is native support for task names in asyncio (3.8+) -_native_task_names = hasattr(asyncio.Task, "get_name") - - -_root_task: RunVar[asyncio.Task | None] = RunVar("_root_task") - - -def find_root_task() -> asyncio.Task: - root_task = _root_task.get(None) - if root_task is not None and not root_task.done(): - return root_task - - # Look for a task that has been started via run_until_complete() - for task in all_tasks(): - if task._callbacks and not task.done(): - for cb in _get_task_callbacks(task): - if ( - cb is _run_until_complete_cb - or getattr(cb, "__module__", None) == "uvloop.loop" - ): - _root_task.set(task) - return task - - # Look up the topmost task in the AnyIO task tree, if possible - task = cast(asyncio.Task, current_task()) - state = _task_states.get(task) - if state: - cancel_scope = state.cancel_scope - while cancel_scope and cancel_scope._parent_scope is not None: - cancel_scope = cancel_scope._parent_scope - - if cancel_scope is not None: - return cast(asyncio.Task, cancel_scope._host_task) - - return task - - -def get_callable_name(func: Callable) -> str: - module = getattr(func, "__module__", None) - qualname = getattr(func, "__qualname__", None) - return ".".join([x for x in (module, qualname) if x]) - - -# -# Event loop -# - -_run_vars = ( - WeakKeyDictionary() -) # type: WeakKeyDictionary[asyncio.AbstractEventLoop, Any] - -current_token = get_running_loop - - -def _task_started(task: asyncio.Task) -> bool: - """Return ``True`` if the task has been started and has not finished.""" - coro = cast(Coroutine[Any, Any, Any], get_coro(task)) - try: - return getcoroutinestate(coro) in (CORO_RUNNING, CORO_SUSPENDED) - except AttributeError: - try: - return getgeneratorstate(cast(Generator, coro)) in ( - GEN_RUNNING, - GEN_SUSPENDED, - ) - except AttributeError: - # task coro is async_genenerator_asend https://bugs.python.org/issue37771 - raise Exception(f"Cannot determine if task {task} has started or not") - - -def 
_maybe_set_event_loop_policy( - policy: asyncio.AbstractEventLoopPolicy | None, use_uvloop: bool -) -> None: - # On CPython, use uvloop when possible if no other policy has been given and if not - # explicitly disabled - if policy is None and use_uvloop and sys.implementation.name == "cpython": - try: - import uvloop - except ImportError: - pass - else: - # Test for missing shutdown_default_executor() (uvloop 0.14.0 and earlier) - if not hasattr( - asyncio.AbstractEventLoop, "shutdown_default_executor" - ) or hasattr(uvloop.loop.Loop, "shutdown_default_executor"): - policy = uvloop.EventLoopPolicy() - - if policy is not None: - asyncio.set_event_loop_policy(policy) - - -def run( - func: Callable[..., Awaitable[T_Retval]], - *args: object, - debug: bool = False, - use_uvloop: bool = False, - policy: asyncio.AbstractEventLoopPolicy | None = None, -) -> T_Retval: - @wraps(func) - async def wrapper() -> T_Retval: - task = cast(asyncio.Task, current_task()) - task_state = TaskState(None, get_callable_name(func), None) - _task_states[task] = task_state - if _native_task_names: - task.set_name(task_state.name) - - try: - return await func(*args) - finally: - del _task_states[task] - - _maybe_set_event_loop_policy(policy, use_uvloop) - return native_run(wrapper(), debug=debug) - - -# -# Miscellaneous -# - -sleep = asyncio.sleep - - -# -# Timeouts and cancellation -# - -CancelledError = asyncio.CancelledError - - -class CancelScope(BaseCancelScope): - def __new__( - cls, *, deadline: float = math.inf, shield: bool = False - ) -> CancelScope: - return object.__new__(cls) - - def __init__(self, deadline: float = math.inf, shield: bool = False): - self._deadline = deadline - self._shield = shield - self._parent_scope: CancelScope | None = None - self._cancel_called = False - self._active = False - self._timeout_handle: asyncio.TimerHandle | None = None - self._cancel_handle: asyncio.Handle | None = None - self._tasks: set[asyncio.Task] = set() - self._host_task: asyncio.Task | None = None - self._timeout_expired = False - self._cancel_calls: int = 0 - - def __enter__(self) -> CancelScope: - if self._active: - raise RuntimeError( - "Each CancelScope may only be used for a single 'with' block" - ) - - self._host_task = host_task = cast(asyncio.Task, current_task()) - self._tasks.add(host_task) - try: - task_state = _task_states[host_task] - except KeyError: - task_name = host_task.get_name() if _native_task_names else None - task_state = TaskState(None, task_name, self) - _task_states[host_task] = task_state - else: - self._parent_scope = task_state.cancel_scope - task_state.cancel_scope = self - - self._timeout() - self._active = True - - # Start cancelling the host task if the scope was cancelled before entering - if self._cancel_called: - self._deliver_cancellation() - - return self - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - if not self._active: - raise RuntimeError("This cancel scope is not active") - if current_task() is not self._host_task: - raise RuntimeError( - "Attempted to exit cancel scope in a different task than it was " - "entered in" - ) - - assert self._host_task is not None - host_task_state = _task_states.get(self._host_task) - if host_task_state is None or host_task_state.cancel_scope is not self: - raise RuntimeError( - "Attempted to exit a cancel scope that isn't the current tasks's " - "current cancel scope" - ) - - self._active = False - if self._timeout_handle: - 
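-            # Discard the pending call_at() timer so it cannot fire after this scope exits.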
self._timeout_handle.cancel() - self._timeout_handle = None - - self._tasks.remove(self._host_task) - - host_task_state.cancel_scope = self._parent_scope - - # Restart the cancellation effort in the farthest directly cancelled parent scope if this - # one was shielded - if self._shield: - self._deliver_cancellation_to_parent() - - if exc_val is not None: - exceptions = ( - exc_val.exceptions if isinstance(exc_val, ExceptionGroup) else [exc_val] - ) - if all(isinstance(exc, CancelledError) for exc in exceptions): - if self._timeout_expired: - return self._uncancel() - elif not self._cancel_called: - # Task was cancelled natively - return None - elif not self._parent_cancelled(): - # This scope was directly cancelled - return self._uncancel() - - return None - - def _uncancel(self) -> bool: - if sys.version_info < (3, 11) or self._host_task is None: - self._cancel_calls = 0 - return True - - # Uncancel all AnyIO cancellations - for i in range(self._cancel_calls): - self._host_task.uncancel() - - self._cancel_calls = 0 - return not self._host_task.cancelling() - - def _timeout(self) -> None: - if self._deadline != math.inf: - loop = get_running_loop() - if loop.time() >= self._deadline: - self._timeout_expired = True - self.cancel() - else: - self._timeout_handle = loop.call_at(self._deadline, self._timeout) - - def _deliver_cancellation(self) -> None: - """ - Deliver cancellation to directly contained tasks and nested cancel scopes. - - Schedule another run at the end if we still have tasks eligible for cancellation. - """ - should_retry = False - current = current_task() - for task in self._tasks: - if task._must_cancel: # type: ignore[attr-defined] - continue - - # The task is eligible for cancellation if it has started and is not in a cancel - # scope shielded from this one - cancel_scope = _task_states[task].cancel_scope - while cancel_scope is not self: - if cancel_scope is None or cancel_scope._shield: - break - else: - cancel_scope = cancel_scope._parent_scope - else: - should_retry = True - if task is not current and ( - task is self._host_task or _task_started(task) - ): - self._cancel_calls += 1 - task.cancel() - - # Schedule another callback if there are still tasks left - if should_retry: - self._cancel_handle = get_running_loop().call_soon( - self._deliver_cancellation - ) - else: - self._cancel_handle = None - - def _deliver_cancellation_to_parent(self) -> None: - """Start cancellation effort in the farthest directly cancelled parent scope""" - scope = self._parent_scope - scope_to_cancel: CancelScope | None = None - while scope is not None: - if scope._cancel_called and scope._cancel_handle is None: - scope_to_cancel = scope - - # No point in looking beyond any shielded scope - if scope._shield: - break - - scope = scope._parent_scope - - if scope_to_cancel is not None: - scope_to_cancel._deliver_cancellation() - - def _parent_cancelled(self) -> bool: - # Check whether any parent has been cancelled - cancel_scope = self._parent_scope - while cancel_scope is not None and not cancel_scope._shield: - if cancel_scope._cancel_called: - return True - else: - cancel_scope = cancel_scope._parent_scope - - return False - - def cancel(self) -> DeprecatedAwaitable: - if not self._cancel_called: - if self._timeout_handle: - self._timeout_handle.cancel() - self._timeout_handle = None - - self._cancel_called = True - if self._host_task is not None: - self._deliver_cancellation() - - return DeprecatedAwaitable(self.cancel) - - @property - def deadline(self) -> float: - return self._deadline 
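- # Illustrative sketch, assuming only the public anyio API (anyio.CancelScope,
- # anyio.current_time); not code from this module. The deadline of an active
- # scope may be moved, which drops the old timeout callback and schedules a
- # new one via the setter below:
- #
- #     with anyio.CancelScope(deadline=anyio.current_time() + 5) as scope:
- #         ...
- #         scope.deadline += 10  # push the timeout back while running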
- - @deadline.setter - def deadline(self, value: float) -> None: - self._deadline = float(value) - if self._timeout_handle is not None: - self._timeout_handle.cancel() - self._timeout_handle = None - - if self._active and not self._cancel_called: - self._timeout() - - @property - def cancel_called(self) -> bool: - return self._cancel_called - - @property - def shield(self) -> bool: - return self._shield - - @shield.setter - def shield(self, value: bool) -> None: - if self._shield != value: - self._shield = value - if not value: - self._deliver_cancellation_to_parent() - - -async def checkpoint() -> None: - await sleep(0) - - -async def checkpoint_if_cancelled() -> None: - task = current_task() - if task is None: - return - - try: - cancel_scope = _task_states[task].cancel_scope - except KeyError: - return - - while cancel_scope: - if cancel_scope.cancel_called: - await sleep(0) - elif cancel_scope.shield: - break - else: - cancel_scope = cancel_scope._parent_scope - - -async def cancel_shielded_checkpoint() -> None: - with CancelScope(shield=True): - await sleep(0) - - -def current_effective_deadline() -> float: - try: - cancel_scope = _task_states[current_task()].cancel_scope # type: ignore[index] - except KeyError: - return math.inf - - deadline = math.inf - while cancel_scope: - deadline = min(deadline, cancel_scope.deadline) - if cancel_scope._cancel_called: - deadline = -math.inf - break - elif cancel_scope.shield: - break - else: - cancel_scope = cancel_scope._parent_scope - - return deadline - - -def current_time() -> float: - return get_running_loop().time() - - -# -# Task states -# - - -class TaskState: - """ - Encapsulates auxiliary task information that cannot be added to the Task instance itself - because there are no guarantees about its implementation. 
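- (Instances live in the module-level _task_states weak mapping defined below,
- keyed by the task object.)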
- """ - - __slots__ = "parent_id", "name", "cancel_scope" - - def __init__( - self, - parent_id: int | None, - name: str | None, - cancel_scope: CancelScope | None, - ): - self.parent_id = parent_id - self.name = name - self.cancel_scope = cancel_scope - - -_task_states = WeakKeyDictionary() # type: WeakKeyDictionary[asyncio.Task, TaskState] - - -# -# Task groups -# - - -class ExceptionGroup(BaseExceptionGroup): - def __init__(self, exceptions: list[BaseException]): - super().__init__() - self.exceptions = exceptions - - -class _AsyncioTaskStatus(abc.TaskStatus): - def __init__(self, future: asyncio.Future, parent_id: int): - self._future = future - self._parent_id = parent_id - - def started(self, value: T_contra | None = None) -> None: - try: - self._future.set_result(value) - except asyncio.InvalidStateError: - raise RuntimeError( - "called 'started' twice on the same task status" - ) from None - - task = cast(asyncio.Task, current_task()) - _task_states[task].parent_id = self._parent_id - - -class TaskGroup(abc.TaskGroup): - def __init__(self) -> None: - self.cancel_scope: CancelScope = CancelScope() - self._active = False - self._exceptions: list[BaseException] = [] - - async def __aenter__(self) -> TaskGroup: - self.cancel_scope.__enter__() - self._active = True - return self - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - ignore_exception = self.cancel_scope.__exit__(exc_type, exc_val, exc_tb) - if exc_val is not None: - self.cancel_scope.cancel() - self._exceptions.append(exc_val) - - while self.cancel_scope._tasks: - try: - await asyncio.wait(self.cancel_scope._tasks) - except asyncio.CancelledError: - self.cancel_scope.cancel() - - self._active = False - if not self.cancel_scope._parent_cancelled(): - exceptions = self._filter_cancellation_errors(self._exceptions) - else: - exceptions = self._exceptions - - try: - if len(exceptions) > 1: - if all( - isinstance(e, CancelledError) and not e.args for e in exceptions - ): - # Tasks were cancelled natively, without a cancellation message - raise CancelledError - else: - raise ExceptionGroup(exceptions) - elif exceptions and exceptions[0] is not exc_val: - raise exceptions[0] - except BaseException as exc: - # Clear the context here, as it can only be done in-flight. - # If the context is not cleared, it can result in recursive tracebacks (see #145). - exc.__context__ = None - raise - - return ignore_exception - - @staticmethod - def _filter_cancellation_errors( - exceptions: Sequence[BaseException], - ) -> list[BaseException]: - filtered_exceptions: list[BaseException] = [] - for exc in exceptions: - if isinstance(exc, ExceptionGroup): - new_exceptions = TaskGroup._filter_cancellation_errors(exc.exceptions) - if len(new_exceptions) > 1: - filtered_exceptions.append(exc) - elif len(new_exceptions) == 1: - filtered_exceptions.append(new_exceptions[0]) - elif new_exceptions: - new_exc = ExceptionGroup(new_exceptions) - new_exc.__cause__ = exc.__cause__ - new_exc.__context__ = exc.__context__ - new_exc.__traceback__ = exc.__traceback__ - filtered_exceptions.append(new_exc) - elif not isinstance(exc, CancelledError) or exc.args: - filtered_exceptions.append(exc) - - return filtered_exceptions - - async def _run_wrapped_task( - self, coro: Coroutine, task_status_future: asyncio.Future | None - ) -> None: - # This is the code path for Python 3.7 on which asyncio freaks out if a task - # raises a BaseException. 
- __traceback_hide__ = __tracebackhide__ = True # noqa: F841 - task = cast(asyncio.Task, current_task()) - try: - await coro - except BaseException as exc: - if task_status_future is None or task_status_future.done(): - self._exceptions.append(exc) - self.cancel_scope.cancel() - else: - task_status_future.set_exception(exc) - else: - if task_status_future is not None and not task_status_future.done(): - task_status_future.set_exception( - RuntimeError("Child exited without calling task_status.started()") - ) - finally: - if task in self.cancel_scope._tasks: - self.cancel_scope._tasks.remove(task) - del _task_states[task] - - def _spawn( - self, - func: Callable[..., Awaitable[Any]], - args: tuple, - name: object, - task_status_future: asyncio.Future | None = None, - ) -> asyncio.Task: - def task_done(_task: asyncio.Task) -> None: - # This is the code path for Python 3.8+ - assert _task in self.cancel_scope._tasks - self.cancel_scope._tasks.remove(_task) - del _task_states[_task] - - try: - exc = _task.exception() - except CancelledError as e: - while isinstance(e.__context__, CancelledError): - e = e.__context__ - - exc = e - - if exc is not None: - if task_status_future is None or task_status_future.done(): - self._exceptions.append(exc) - self.cancel_scope.cancel() - else: - task_status_future.set_exception(exc) - elif task_status_future is not None and not task_status_future.done(): - task_status_future.set_exception( - RuntimeError("Child exited without calling task_status.started()") - ) - - if not self._active: - raise RuntimeError( - "This task group is not active; no new tasks can be started." - ) - - options: dict[str, Any] = {} - name = get_callable_name(func) if name is None else str(name) - if _native_task_names: - options["name"] = name - - kwargs = {} - if task_status_future: - parent_id = id(current_task()) - kwargs["task_status"] = _AsyncioTaskStatus( - task_status_future, id(self.cancel_scope._host_task) - ) - else: - parent_id = id(self.cancel_scope._host_task) - - coro = func(*args, **kwargs) - if not asyncio.iscoroutine(coro): - raise TypeError( - f"Expected an async function, but {func} appears to be synchronous" - ) - - foreign_coro = not hasattr(coro, "cr_frame") and not hasattr(coro, "gi_frame") - if foreign_coro or sys.version_info < (3, 8): - coro = self._run_wrapped_task(coro, task_status_future) - - task = create_task(coro, **options) - if not foreign_coro and sys.version_info >= (3, 8): - task.add_done_callback(task_done) - - # Make the spawned task inherit the task group's cancel scope - _task_states[task] = TaskState( - parent_id=parent_id, name=name, cancel_scope=self.cancel_scope - ) - self.cancel_scope._tasks.add(task) - return task - - def start_soon( - self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None - ) -> None: - self._spawn(func, args, name) - - async def start( - self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None - ) -> None: - future: asyncio.Future = asyncio.Future() - task = self._spawn(func, args, name, future) - - # If the task raises an exception after sending a start value without a switch point - # between, the task group is cancelled and this method never proceeds to process the - # completed future. That's why we have to have a shielded cancel scope here. 
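- # Caller-side sketch, assuming the public anyio API (TASK_STATUS_IGNORED is
- # the documented default for the task_status parameter); illustrative only:
- #
- #     async def service(*, task_status=anyio.TASK_STATUS_IGNORED):
- #         ...  # bind resources, then signal readiness:
- #         task_status.started("ready")
- #         await anyio.sleep_forever()
- #
- #     value = await tg.start(service)  # resumes with "ready"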
- with CancelScope(shield=True): - try: - return await future - except CancelledError: - task.cancel() - raise - - -# -# Threads -# - -_Retval_Queue_Type = Tuple[Optional[T_Retval], Optional[BaseException]] - - -class WorkerThread(Thread): - MAX_IDLE_TIME = 10 # seconds - - def __init__( - self, - root_task: asyncio.Task, - workers: set[WorkerThread], - idle_workers: deque[WorkerThread], - ): - super().__init__(name="AnyIO worker thread") - self.root_task = root_task - self.workers = workers - self.idle_workers = idle_workers - self.loop = root_task._loop - self.queue: Queue[ - tuple[Context, Callable, tuple, asyncio.Future] | None - ] = Queue(2) - self.idle_since = current_time() - self.stopping = False - - def _report_result( - self, future: asyncio.Future, result: Any, exc: BaseException | None - ) -> None: - self.idle_since = current_time() - if not self.stopping: - self.idle_workers.append(self) - - if not future.cancelled(): - if exc is not None: - if isinstance(exc, StopIteration): - new_exc = RuntimeError("coroutine raised StopIteration") - new_exc.__cause__ = exc - exc = new_exc - - future.set_exception(exc) - else: - future.set_result(result) - - def run(self) -> None: - with claim_worker_thread("asyncio"): - threadlocals.loop = self.loop - while True: - item = self.queue.get() - if item is None: - # Shutdown command received - return - - context, func, args, future = item - if not future.cancelled(): - result = None - exception: BaseException | None = None - try: - result = context.run(func, *args) - except BaseException as exc: - exception = exc - - if not self.loop.is_closed(): - self.loop.call_soon_threadsafe( - self._report_result, future, result, exception - ) - - self.queue.task_done() - - def stop(self, f: asyncio.Task | None = None) -> None: - self.stopping = True - self.queue.put_nowait(None) - self.workers.discard(self) - try: - self.idle_workers.remove(self) - except ValueError: - pass - - -_threadpool_idle_workers: RunVar[deque[WorkerThread]] = RunVar( - "_threadpool_idle_workers" -) -_threadpool_workers: RunVar[set[WorkerThread]] = RunVar("_threadpool_workers") - - -async def run_sync_in_worker_thread( - func: Callable[..., T_Retval], - *args: object, - cancellable: bool = False, - limiter: CapacityLimiter | None = None, -) -> T_Retval: - await checkpoint() - - # If this is the first run in this event loop thread, set up the necessary variables - try: - idle_workers = _threadpool_idle_workers.get() - workers = _threadpool_workers.get() - except LookupError: - idle_workers = deque() - workers = set() - _threadpool_idle_workers.set(idle_workers) - _threadpool_workers.set(workers) - - async with (limiter or current_default_thread_limiter()): - with CancelScope(shield=not cancellable): - future: asyncio.Future = asyncio.Future() - root_task = find_root_task() - if not idle_workers: - worker = WorkerThread(root_task, workers, idle_workers) - worker.start() - workers.add(worker) - root_task.add_done_callback(worker.stop) - else: - worker = idle_workers.pop() - - # Prune any other workers that have been idle for MAX_IDLE_TIME seconds or longer - now = current_time() - while idle_workers: - if now - idle_workers[0].idle_since < WorkerThread.MAX_IDLE_TIME: - break - - expired_worker = idle_workers.popleft() - expired_worker.root_task.remove_done_callback(expired_worker.stop) - expired_worker.stop() - - context = copy_context() - context.run(sniffio.current_async_library_cvar.set, None) - worker.queue.put_nowait((context, func, args, future)) - return await future - - -def 
run_sync_from_thread( - func: Callable[..., T_Retval], - *args: object, - loop: asyncio.AbstractEventLoop | None = None, -) -> T_Retval: - @wraps(func) - def wrapper() -> None: - try: - f.set_result(func(*args)) - except BaseException as exc: - f.set_exception(exc) - if not isinstance(exc, Exception): - raise - - f: concurrent.futures.Future[T_Retval] = Future() - loop = loop or threadlocals.loop - loop.call_soon_threadsafe(wrapper) - return f.result() - - -def run_async_from_thread( - func: Callable[..., Awaitable[T_Retval]], *args: object -) -> T_Retval: - f: concurrent.futures.Future[T_Retval] = asyncio.run_coroutine_threadsafe( - func(*args), threadlocals.loop - ) - return f.result() - - -class BlockingPortal(abc.BlockingPortal): - def __new__(cls) -> BlockingPortal: - return object.__new__(cls) - - def __init__(self) -> None: - super().__init__() - self._loop = get_running_loop() - - def _spawn_task_from_thread( - self, - func: Callable, - args: tuple, - kwargs: dict[str, Any], - name: object, - future: Future, - ) -> None: - run_sync_from_thread( - partial(self._task_group.start_soon, name=name), - self._call_func, - func, - args, - kwargs, - future, - loop=self._loop, - ) - - -# -# Subprocesses -# - - -@dataclass(eq=False) -class StreamReaderWrapper(abc.ByteReceiveStream): - _stream: asyncio.StreamReader - - async def receive(self, max_bytes: int = 65536) -> bytes: - data = await self._stream.read(max_bytes) - if data: - return data - else: - raise EndOfStream - - async def aclose(self) -> None: - self._stream.feed_eof() - - -@dataclass(eq=False) -class StreamWriterWrapper(abc.ByteSendStream): - _stream: asyncio.StreamWriter - - async def send(self, item: bytes) -> None: - self._stream.write(item) - await self._stream.drain() - - async def aclose(self) -> None: - self._stream.close() - - -@dataclass(eq=False) -class Process(abc.Process): - _process: asyncio.subprocess.Process - _stdin: StreamWriterWrapper | None - _stdout: StreamReaderWrapper | None - _stderr: StreamReaderWrapper | None - - async def aclose(self) -> None: - if self._stdin: - await self._stdin.aclose() - if self._stdout: - await self._stdout.aclose() - if self._stderr: - await self._stderr.aclose() - - await self.wait() - - async def wait(self) -> int: - return await self._process.wait() - - def terminate(self) -> None: - self._process.terminate() - - def kill(self) -> None: - self._process.kill() - - def send_signal(self, signal: int) -> None: - self._process.send_signal(signal) - - @property - def pid(self) -> int: - return self._process.pid - - @property - def returncode(self) -> int | None: - return self._process.returncode - - @property - def stdin(self) -> abc.ByteSendStream | None: - return self._stdin - - @property - def stdout(self) -> abc.ByteReceiveStream | None: - return self._stdout - - @property - def stderr(self) -> abc.ByteReceiveStream | None: - return self._stderr - - -async def open_process( - command: str | bytes | Sequence[str | bytes], - *, - shell: bool, - stdin: int | IO[Any] | None, - stdout: int | IO[Any] | None, - stderr: int | IO[Any] | None, - cwd: str | bytes | PathLike | None = None, - env: Mapping[str, str] | None = None, - start_new_session: bool = False, -) -> Process: - await checkpoint() - if shell: - process = await asyncio.create_subprocess_shell( - cast(Union[str, bytes], command), - stdin=stdin, - stdout=stdout, - stderr=stderr, - cwd=cwd, - env=env, - start_new_session=start_new_session, - ) - else: - process = await asyncio.create_subprocess_exec( - *command, - stdin=stdin, - 
stdout=stdout, - stderr=stderr, - cwd=cwd, - env=env, - start_new_session=start_new_session, - ) - - stdin_stream = StreamWriterWrapper(process.stdin) if process.stdin else None - stdout_stream = StreamReaderWrapper(process.stdout) if process.stdout else None - stderr_stream = StreamReaderWrapper(process.stderr) if process.stderr else None - return Process(process, stdin_stream, stdout_stream, stderr_stream) - - -def _forcibly_shutdown_process_pool_on_exit( - workers: set[Process], _task: object -) -> None: - """ - Forcibly shuts down worker processes belonging to this event loop.""" - child_watcher: asyncio.AbstractChildWatcher | None - try: - child_watcher = asyncio.get_event_loop_policy().get_child_watcher() - except NotImplementedError: - child_watcher = None - - # Close as much as possible (w/o async/await) to avoid warnings - for process in workers: - if process.returncode is None: - continue - - process._stdin._stream._transport.close() # type: ignore[union-attr] - process._stdout._stream._transport.close() # type: ignore[union-attr] - process._stderr._stream._transport.close() # type: ignore[union-attr] - process.kill() - if child_watcher: - child_watcher.remove_child_handler(process.pid) - - -async def _shutdown_process_pool_on_exit(workers: set[Process]) -> None: - """ - Shuts down worker processes belonging to this event loop. - - NOTE: this only works when the event loop was started using asyncio.run() or anyio.run(). - - """ - process: Process - try: - await sleep(math.inf) - except asyncio.CancelledError: - for process in workers: - if process.returncode is None: - process.kill() - - for process in workers: - await process.aclose() - - -def setup_process_pool_exit_at_shutdown(workers: set[Process]) -> None: - kwargs: dict[str, Any] = ( - {"name": "AnyIO process pool shutdown task"} if _native_task_names else {} - ) - create_task(_shutdown_process_pool_on_exit(workers), **kwargs) - find_root_task().add_done_callback( - partial(_forcibly_shutdown_process_pool_on_exit, workers) - ) - - -# -# Sockets and networking -# - - -class StreamProtocol(asyncio.Protocol): - read_queue: deque[bytes] - read_event: asyncio.Event - write_event: asyncio.Event - exception: Exception | None = None - - def connection_made(self, transport: asyncio.BaseTransport) -> None: - self.read_queue = deque() - self.read_event = asyncio.Event() - self.write_event = asyncio.Event() - self.write_event.set() - cast(asyncio.Transport, transport).set_write_buffer_limits(0) - - def connection_lost(self, exc: Exception | None) -> None: - if exc: - self.exception = BrokenResourceError() - self.exception.__cause__ = exc - - self.read_event.set() - self.write_event.set() - - def data_received(self, data: bytes) -> None: - self.read_queue.append(data) - self.read_event.set() - - def eof_received(self) -> bool | None: - self.read_event.set() - return True - - def pause_writing(self) -> None: - self.write_event = asyncio.Event() - - def resume_writing(self) -> None: - self.write_event.set() - - -class DatagramProtocol(asyncio.DatagramProtocol): - read_queue: deque[tuple[bytes, IPSockAddrType]] - read_event: asyncio.Event - write_event: asyncio.Event - exception: Exception | None = None - - def connection_made(self, transport: asyncio.BaseTransport) -> None: - self.read_queue = deque(maxlen=100) # arbitrary value - self.read_event = asyncio.Event() - self.write_event = asyncio.Event() - self.write_event.set() - - def connection_lost(self, exc: Exception | None) -> None: - self.read_event.set() - self.write_event.set() - - 
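- # Receive path: the event loop invokes datagram_received() below for every
- # inbound packet; UDPSocket.receive() pops packets off read_queue and, when
- # the queue is empty, clears read_event and waits on it. Because the deque
- # is bounded (maxlen=100), the oldest unread packets are dropped under
- # sustained overload.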
def datagram_received(self, data: bytes, addr: IPSockAddrType) -> None: - addr = convert_ipv6_sockaddr(addr) - self.read_queue.append((data, addr)) - self.read_event.set() - - def error_received(self, exc: Exception) -> None: - self.exception = exc - - def pause_writing(self) -> None: - self.write_event.clear() - - def resume_writing(self) -> None: - self.write_event.set() - - -class SocketStream(abc.SocketStream): - def __init__(self, transport: asyncio.Transport, protocol: StreamProtocol): - self._transport = transport - self._protocol = protocol - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - self._closed = False - - @property - def _raw_socket(self) -> socket.socket: - return self._transport.get_extra_info("socket") - - async def receive(self, max_bytes: int = 65536) -> bytes: - with self._receive_guard: - await checkpoint() - - if ( - not self._protocol.read_event.is_set() - and not self._transport.is_closing() - ): - self._transport.resume_reading() - await self._protocol.read_event.wait() - self._transport.pause_reading() - - try: - chunk = self._protocol.read_queue.popleft() - except IndexError: - if self._closed: - raise ClosedResourceError from None - elif self._protocol.exception: - raise self._protocol.exception - else: - raise EndOfStream from None - - if len(chunk) > max_bytes: - # Split the oversized chunk - chunk, leftover = chunk[:max_bytes], chunk[max_bytes:] - self._protocol.read_queue.appendleft(leftover) - - # If the read queue is empty, clear the flag so that the next call will block until - # data is available - if not self._protocol.read_queue: - self._protocol.read_event.clear() - - return chunk - - async def send(self, item: bytes) -> None: - with self._send_guard: - await checkpoint() - - if self._closed: - raise ClosedResourceError - elif self._protocol.exception is not None: - raise self._protocol.exception - - try: - self._transport.write(item) - except RuntimeError as exc: - if self._transport.is_closing(): - raise BrokenResourceError from exc - else: - raise - - await self._protocol.write_event.wait() - - async def send_eof(self) -> None: - try: - self._transport.write_eof() - except OSError: - pass - - async def aclose(self) -> None: - if not self._transport.is_closing(): - self._closed = True - try: - self._transport.write_eof() - except OSError: - pass - - self._transport.close() - await sleep(0) - self._transport.abort() - - -class UNIXSocketStream(abc.SocketStream): - _receive_future: asyncio.Future | None = None - _send_future: asyncio.Future | None = None - _closing = False - - def __init__(self, raw_socket: socket.socket): - self.__raw_socket = raw_socket - self._loop = get_running_loop() - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - - @property - def _raw_socket(self) -> socket.socket: - return self.__raw_socket - - def _wait_until_readable(self, loop: asyncio.AbstractEventLoop) -> asyncio.Future: - def callback(f: object) -> None: - del self._receive_future - loop.remove_reader(self.__raw_socket) - - f = self._receive_future = asyncio.Future() - self._loop.add_reader(self.__raw_socket, f.set_result, None) - f.add_done_callback(callback) - return f - - def _wait_until_writable(self, loop: asyncio.AbstractEventLoop) -> asyncio.Future: - def callback(f: object) -> None: - del self._send_future - loop.remove_writer(self.__raw_socket) - - f = self._send_future = asyncio.Future() - self._loop.add_writer(self.__raw_socket, f.set_result, 
None) - f.add_done_callback(callback) - return f - - async def send_eof(self) -> None: - with self._send_guard: - self._raw_socket.shutdown(socket.SHUT_WR) - - async def receive(self, max_bytes: int = 65536) -> bytes: - loop = get_running_loop() - await checkpoint() - with self._receive_guard: - while True: - try: - data = self.__raw_socket.recv(max_bytes) - except BlockingIOError: - await self._wait_until_readable(loop) - except OSError as exc: - if self._closing: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - else: - if not data: - raise EndOfStream - - return data - - async def send(self, item: bytes) -> None: - loop = get_running_loop() - await checkpoint() - with self._send_guard: - view = memoryview(item) - while view: - try: - bytes_sent = self.__raw_socket.send(view) - except BlockingIOError: - await self._wait_until_writable(loop) - except OSError as exc: - if self._closing: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - else: - view = view[bytes_sent:] - - async def receive_fds(self, msglen: int, maxfds: int) -> tuple[bytes, list[int]]: - if not isinstance(msglen, int) or msglen < 0: - raise ValueError("msglen must be a non-negative integer") - if not isinstance(maxfds, int) or maxfds < 1: - raise ValueError("maxfds must be a positive integer") - - loop = get_running_loop() - fds = array.array("i") - await checkpoint() - with self._receive_guard: - while True: - try: - message, ancdata, flags, addr = self.__raw_socket.recvmsg( - msglen, socket.CMSG_LEN(maxfds * fds.itemsize) - ) - except BlockingIOError: - await self._wait_until_readable(loop) - except OSError as exc: - if self._closing: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - else: - if not message and not ancdata: - raise EndOfStream - - break - - for cmsg_level, cmsg_type, cmsg_data in ancdata: - if cmsg_level != socket.SOL_SOCKET or cmsg_type != socket.SCM_RIGHTS: - raise RuntimeError( - f"Received unexpected ancillary data; message = {message!r}, " - f"cmsg_level = {cmsg_level}, cmsg_type = {cmsg_type}" - ) - - fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) - - return message, list(fds) - - async def send_fds(self, message: bytes, fds: Collection[int | IOBase]) -> None: - if not message: - raise ValueError("message must not be empty") - if not fds: - raise ValueError("fds must not be empty") - - loop = get_running_loop() - filenos: list[int] = [] - for fd in fds: - if isinstance(fd, int): - filenos.append(fd) - elif isinstance(fd, IOBase): - filenos.append(fd.fileno()) - - fdarray = array.array("i", filenos) - await checkpoint() - with self._send_guard: - while True: - try: - # The ignore can be removed after mypy picks up - # https://github.com/python/typeshed/pull/5545 - self.__raw_socket.sendmsg( - [message], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fdarray)] - ) - break - except BlockingIOError: - await self._wait_until_writable(loop) - except OSError as exc: - if self._closing: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - - async def aclose(self) -> None: - if not self._closing: - self._closing = True - if self.__raw_socket.fileno() != -1: - self.__raw_socket.close() - - if self._receive_future: - self._receive_future.set_result(None) - if self._send_future: - self._send_future.set_result(None) - - -class TCPSocketListener(abc.SocketListener): - _accept_scope: CancelScope | None = None - _closed = False - - def __init__(self, 
raw_socket: socket.socket): - self.__raw_socket = raw_socket - self._loop = cast(asyncio.BaseEventLoop, get_running_loop()) - self._accept_guard = ResourceGuard("accepting connections from") - - @property - def _raw_socket(self) -> socket.socket: - return self.__raw_socket - - async def accept(self) -> abc.SocketStream: - if self._closed: - raise ClosedResourceError - - with self._accept_guard: - await checkpoint() - with CancelScope() as self._accept_scope: - try: - client_sock, _addr = await self._loop.sock_accept(self._raw_socket) - except asyncio.CancelledError: - # Workaround for https://bugs.python.org/issue41317 - try: - self._loop.remove_reader(self._raw_socket) - except (ValueError, NotImplementedError): - pass - - if self._closed: - raise ClosedResourceError from None - - raise - finally: - self._accept_scope = None - - client_sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) - transport, protocol = await self._loop.connect_accepted_socket( - StreamProtocol, client_sock - ) - return SocketStream(transport, protocol) - - async def aclose(self) -> None: - if self._closed: - return - - self._closed = True - if self._accept_scope: - # Workaround for https://bugs.python.org/issue41317 - try: - self._loop.remove_reader(self._raw_socket) - except (ValueError, NotImplementedError): - pass - - self._accept_scope.cancel() - await sleep(0) - - self._raw_socket.close() - - -class UNIXSocketListener(abc.SocketListener): - def __init__(self, raw_socket: socket.socket): - self.__raw_socket = raw_socket - self._loop = get_running_loop() - self._accept_guard = ResourceGuard("accepting connections from") - self._closed = False - - async def accept(self) -> abc.SocketStream: - await checkpoint() - with self._accept_guard: - while True: - try: - client_sock, _ = self.__raw_socket.accept() - client_sock.setblocking(False) - return UNIXSocketStream(client_sock) - except BlockingIOError: - f: asyncio.Future = asyncio.Future() - self._loop.add_reader(self.__raw_socket, f.set_result, None) - f.add_done_callback( - lambda _: self._loop.remove_reader(self.__raw_socket) - ) - await f - except OSError as exc: - if self._closed: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - - async def aclose(self) -> None: - self._closed = True - self.__raw_socket.close() - - @property - def _raw_socket(self) -> socket.socket: - return self.__raw_socket - - -class UDPSocket(abc.UDPSocket): - def __init__( - self, transport: asyncio.DatagramTransport, protocol: DatagramProtocol - ): - self._transport = transport - self._protocol = protocol - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - self._closed = False - - @property - def _raw_socket(self) -> socket.socket: - return self._transport.get_extra_info("socket") - - async def aclose(self) -> None: - if not self._transport.is_closing(): - self._closed = True - self._transport.close() - - async def receive(self) -> tuple[bytes, IPSockAddrType]: - with self._receive_guard: - await checkpoint() - - # If the buffer is empty, ask for more data - if not self._protocol.read_queue and not self._transport.is_closing(): - self._protocol.read_event.clear() - await self._protocol.read_event.wait() - - try: - return self._protocol.read_queue.popleft() - except IndexError: - if self._closed: - raise ClosedResourceError from None - else: - raise BrokenResourceError from None - - async def send(self, item: UDPPacketType) -> None: - with self._send_guard: - await checkpoint() - await 
self._protocol.write_event.wait() - if self._closed: - raise ClosedResourceError - elif self._transport.is_closing(): - raise BrokenResourceError - else: - self._transport.sendto(*item) - - -class ConnectedUDPSocket(abc.ConnectedUDPSocket): - def __init__( - self, transport: asyncio.DatagramTransport, protocol: DatagramProtocol - ): - self._transport = transport - self._protocol = protocol - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - self._closed = False - - @property - def _raw_socket(self) -> socket.socket: - return self._transport.get_extra_info("socket") - - async def aclose(self) -> None: - if not self._transport.is_closing(): - self._closed = True - self._transport.close() - - async def receive(self) -> bytes: - with self._receive_guard: - await checkpoint() - - # If the buffer is empty, ask for more data - if not self._protocol.read_queue and not self._transport.is_closing(): - self._protocol.read_event.clear() - await self._protocol.read_event.wait() - - try: - packet = self._protocol.read_queue.popleft() - except IndexError: - if self._closed: - raise ClosedResourceError from None - else: - raise BrokenResourceError from None - - return packet[0] - - async def send(self, item: bytes) -> None: - with self._send_guard: - await checkpoint() - await self._protocol.write_event.wait() - if self._closed: - raise ClosedResourceError - elif self._transport.is_closing(): - raise BrokenResourceError - else: - self._transport.sendto(item) - - -async def connect_tcp( - host: str, port: int, local_addr: tuple[str, int] | None = None -) -> SocketStream: - transport, protocol = cast( - Tuple[asyncio.Transport, StreamProtocol], - await get_running_loop().create_connection( - StreamProtocol, host, port, local_addr=local_addr - ), - ) - transport.pause_reading() - return SocketStream(transport, protocol) - - -async def connect_unix(path: str) -> UNIXSocketStream: - await checkpoint() - loop = get_running_loop() - raw_socket = socket.socket(socket.AF_UNIX) - raw_socket.setblocking(False) - while True: - try: - raw_socket.connect(path) - except BlockingIOError: - f: asyncio.Future = asyncio.Future() - loop.add_writer(raw_socket, f.set_result, None) - f.add_done_callback(lambda _: loop.remove_writer(raw_socket)) - await f - except BaseException: - raw_socket.close() - raise - else: - return UNIXSocketStream(raw_socket) - - -async def create_udp_socket( - family: socket.AddressFamily, - local_address: IPSockAddrType | None, - remote_address: IPSockAddrType | None, - reuse_port: bool, -) -> UDPSocket | ConnectedUDPSocket: - result = await get_running_loop().create_datagram_endpoint( - DatagramProtocol, - local_addr=local_address, - remote_addr=remote_address, - family=family, - reuse_port=reuse_port, - ) - transport = result[0] - protocol = result[1] - if protocol.exception: - transport.close() - raise protocol.exception - - if not remote_address: - return UDPSocket(transport, protocol) - else: - return ConnectedUDPSocket(transport, protocol) - - -async def getaddrinfo( - host: bytes | str, - port: str | int | None, - *, - family: int | AddressFamily = 0, - type: int | SocketKind = 0, - proto: int = 0, - flags: int = 0, -) -> GetAddrInfoReturnType: - # https://github.com/python/typeshed/pull/4304 - result = await get_running_loop().getaddrinfo( - host, port, family=family, type=type, proto=proto, flags=flags - ) - return cast(GetAddrInfoReturnType, result) - - -async def getnameinfo(sockaddr: IPSockAddrType, flags: int = 0) -> tuple[str, str]: - 
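- # Thin wrapper: delegates the reverse lookup to the running event loop and
- # returns (host, service) strings.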
return await get_running_loop().getnameinfo(sockaddr, flags) - - -_read_events: RunVar[dict[Any, asyncio.Event]] = RunVar("read_events") -_write_events: RunVar[dict[Any, asyncio.Event]] = RunVar("write_events") - - -async def wait_socket_readable(sock: socket.socket) -> None: - await checkpoint() - try: - read_events = _read_events.get() - except LookupError: - read_events = {} - _read_events.set(read_events) - - if read_events.get(sock): - raise BusyResourceError("reading from") from None - - loop = get_running_loop() - event = read_events[sock] = asyncio.Event() - loop.add_reader(sock, event.set) - try: - await event.wait() - finally: - if read_events.pop(sock, None) is not None: - loop.remove_reader(sock) - readable = True - else: - readable = False - - if not readable: - raise ClosedResourceError - - -async def wait_socket_writable(sock: socket.socket) -> None: - await checkpoint() - try: - write_events = _write_events.get() - except LookupError: - write_events = {} - _write_events.set(write_events) - - if write_events.get(sock): - raise BusyResourceError("writing to") from None - - loop = get_running_loop() - event = write_events[sock] = asyncio.Event() - loop.add_writer(sock.fileno(), event.set) - try: - await event.wait() - finally: - if write_events.pop(sock, None) is not None: - loop.remove_writer(sock) - writable = True - else: - writable = False - - if not writable: - raise ClosedResourceError - - -# -# Synchronization -# - - -class Event(BaseEvent): - def __new__(cls) -> Event: - return object.__new__(cls) - - def __init__(self) -> None: - self._event = asyncio.Event() - - def set(self) -> DeprecatedAwaitable: - self._event.set() - return DeprecatedAwaitable(self.set) - - def is_set(self) -> bool: - return self._event.is_set() - - async def wait(self) -> None: - if await self._event.wait(): - await checkpoint() - - def statistics(self) -> EventStatistics: - return EventStatistics(len(self._event._waiters)) # type: ignore[attr-defined] - - -class CapacityLimiter(BaseCapacityLimiter): - _total_tokens: float = 0 - - def __new__(cls, total_tokens: float) -> CapacityLimiter: - return object.__new__(cls) - - def __init__(self, total_tokens: float): - self._borrowers: set[Any] = set() - self._wait_queue: OrderedDict[Any, asyncio.Event] = OrderedDict() - self.total_tokens = total_tokens - - async def __aenter__(self) -> None: - await self.acquire() - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> None: - self.release() - - @property - def total_tokens(self) -> float: - return self._total_tokens - - @total_tokens.setter - def total_tokens(self, value: float) -> None: - if not isinstance(value, int) and not math.isinf(value): - raise TypeError("total_tokens must be an int or math.inf") - if value < 1: - raise ValueError("total_tokens must be >= 1") - - old_value = self._total_tokens - self._total_tokens = value - events = [] - for event in self._wait_queue.values(): - if value <= old_value: - break - - if not event.is_set(): - events.append(event) - old_value += 1 - - for event in events: - event.set() - - @property - def borrowed_tokens(self) -> int: - return len(self._borrowers) - - @property - def available_tokens(self) -> float: - return self._total_tokens - len(self._borrowers) - - def acquire_nowait(self) -> DeprecatedAwaitable: - self.acquire_on_behalf_of_nowait(current_task()) - return DeprecatedAwaitable(self.acquire_nowait) - - def acquire_on_behalf_of_nowait(self, borrower: object) 
-> DeprecatedAwaitable: - if borrower in self._borrowers: - raise RuntimeError( - "this borrower is already holding one of this CapacityLimiter's " - "tokens" - ) - - if self._wait_queue or len(self._borrowers) >= self._total_tokens: - raise WouldBlock - - self._borrowers.add(borrower) - return DeprecatedAwaitable(self.acquire_on_behalf_of_nowait) - - async def acquire(self) -> None: - return await self.acquire_on_behalf_of(current_task()) - - async def acquire_on_behalf_of(self, borrower: object) -> None: - await checkpoint_if_cancelled() - try: - self.acquire_on_behalf_of_nowait(borrower) - except WouldBlock: - event = asyncio.Event() - self._wait_queue[borrower] = event - try: - await event.wait() - except BaseException: - self._wait_queue.pop(borrower, None) - raise - - self._borrowers.add(borrower) - else: - try: - await cancel_shielded_checkpoint() - except BaseException: - self.release() - raise - - def release(self) -> None: - self.release_on_behalf_of(current_task()) - - def release_on_behalf_of(self, borrower: object) -> None: - try: - self._borrowers.remove(borrower) - except KeyError: - raise RuntimeError( - "this borrower isn't holding any of this CapacityLimiter's " "tokens" - ) from None - - # Notify the next task in line if this limiter has free capacity now - if self._wait_queue and len(self._borrowers) < self._total_tokens: - event = self._wait_queue.popitem(last=False)[1] - event.set() - - def statistics(self) -> CapacityLimiterStatistics: - return CapacityLimiterStatistics( - self.borrowed_tokens, - self.total_tokens, - tuple(self._borrowers), - len(self._wait_queue), - ) - - -_default_thread_limiter: RunVar[CapacityLimiter] = RunVar("_default_thread_limiter") - - -def current_default_thread_limiter() -> CapacityLimiter: - try: - return _default_thread_limiter.get() - except LookupError: - limiter = CapacityLimiter(40) - _default_thread_limiter.set(limiter) - return limiter - - -# -# Operating system signals -# - - -class _SignalReceiver(DeprecatedAsyncContextManager["_SignalReceiver"]): - def __init__(self, signals: tuple[int, ...]): - self._signals = signals - self._loop = get_running_loop() - self._signal_queue: deque[int] = deque() - self._future: asyncio.Future = asyncio.Future() - self._handled_signals: set[int] = set() - - def _deliver(self, signum: int) -> None: - self._signal_queue.append(signum) - if not self._future.done(): - self._future.set_result(None) - - def __enter__(self) -> _SignalReceiver: - for sig in set(self._signals): - self._loop.add_signal_handler(sig, self._deliver, sig) - self._handled_signals.add(sig) - - return self - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - for sig in self._handled_signals: - self._loop.remove_signal_handler(sig) - return None - - def __aiter__(self) -> _SignalReceiver: - return self - - async def __anext__(self) -> int: - await checkpoint() - if not self._signal_queue: - self._future = asyncio.Future() - await self._future - - return self._signal_queue.popleft() - - -def open_signal_receiver(*signals: int) -> _SignalReceiver: - return _SignalReceiver(signals) - - -# -# Testing and debugging -# - - -def _create_task_info(task: asyncio.Task) -> TaskInfo: - task_state = _task_states.get(task) - if task_state is None: - name = task.get_name() if _native_task_names else None - parent_id = None - else: - name = task_state.name - parent_id = task_state.parent_id - - return TaskInfo(id(task), parent_id, name, 
get_coro(task)) - - -def get_current_task() -> TaskInfo: - return _create_task_info(current_task()) # type: ignore[arg-type] - - -def get_running_tasks() -> list[TaskInfo]: - return [_create_task_info(task) for task in all_tasks() if not task.done()] - - -async def wait_all_tasks_blocked() -> None: - await checkpoint() - this_task = current_task() - while True: - for task in all_tasks(): - if task is this_task: - continue - - if task._fut_waiter is None or task._fut_waiter.done(): # type: ignore[attr-defined] - await sleep(0.1) - break - else: - return - - -class TestRunner(abc.TestRunner): - def __init__( - self, - debug: bool = False, - use_uvloop: bool = False, - policy: asyncio.AbstractEventLoopPolicy | None = None, - ): - self._exceptions: list[BaseException] = [] - _maybe_set_event_loop_policy(policy, use_uvloop) - self._loop = asyncio.new_event_loop() - self._loop.set_debug(debug) - self._loop.set_exception_handler(self._exception_handler) - asyncio.set_event_loop(self._loop) - - def _cancel_all_tasks(self) -> None: - to_cancel = all_tasks(self._loop) - if not to_cancel: - return - - for task in to_cancel: - task.cancel() - - self._loop.run_until_complete( - asyncio.gather(*to_cancel, return_exceptions=True) - ) - - for task in to_cancel: - if task.cancelled(): - continue - if task.exception() is not None: - raise cast(BaseException, task.exception()) - - def _exception_handler( - self, loop: asyncio.AbstractEventLoop, context: dict[str, Any] - ) -> None: - if isinstance(context.get("exception"), Exception): - self._exceptions.append(context["exception"]) - else: - loop.default_exception_handler(context) - - def _raise_async_exceptions(self) -> None: - # Re-raise any exceptions raised in asynchronous callbacks - if self._exceptions: - exceptions, self._exceptions = self._exceptions, [] - if len(exceptions) == 1: - raise exceptions[0] - elif exceptions: - raise ExceptionGroup(exceptions) - - def close(self) -> None: - try: - self._cancel_all_tasks() - self._loop.run_until_complete(self._loop.shutdown_asyncgens()) - finally: - asyncio.set_event_loop(None) - self._loop.close() - - def run_asyncgen_fixture( - self, - fixture_func: Callable[..., AsyncGenerator[T_Retval, Any]], - kwargs: dict[str, Any], - ) -> Iterable[T_Retval]: - async def fixture_runner() -> None: - agen = fixture_func(**kwargs) - try: - retval = await agen.asend(None) - self._raise_async_exceptions() - except BaseException as exc: - f.set_exception(exc) - return - else: - f.set_result(retval) - - await event.wait() - try: - await agen.asend(None) - except StopAsyncIteration: - pass - else: - await agen.aclose() - raise RuntimeError("Async generator fixture did not stop") - - f = self._loop.create_future() - event = asyncio.Event() - fixture_task = self._loop.create_task(fixture_runner()) - self._loop.run_until_complete(f) - yield f.result() - event.set() - self._loop.run_until_complete(fixture_task) - self._raise_async_exceptions() - - def run_fixture( - self, - fixture_func: Callable[..., Coroutine[Any, Any, T_Retval]], - kwargs: dict[str, Any], - ) -> T_Retval: - retval = self._loop.run_until_complete(fixture_func(**kwargs)) - self._raise_async_exceptions() - return retval - - def run_test( - self, test_func: Callable[..., Coroutine[Any, Any, Any]], kwargs: dict[str, Any] - ) -> None: - try: - self._loop.run_until_complete(test_func(**kwargs)) - except Exception as exc: - self._exceptions.append(exc) - - self._raise_async_exceptions() diff --git 
a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/backends/openssl/ciphers.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/backends/openssl/ciphers.py deleted file mode 100644 index bc42adbd49a52f21bae6594e364f212188332d27..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/backends/openssl/ciphers.py +++ /dev/null @@ -1,281 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -from __future__ import annotations - -import typing - -from cryptography.exceptions import InvalidTag, UnsupportedAlgorithm, _Reasons -from cryptography.hazmat.primitives import ciphers -from cryptography.hazmat.primitives.ciphers import algorithms, modes - -if typing.TYPE_CHECKING: - from cryptography.hazmat.backends.openssl.backend import Backend - - -class _CipherContext: - _ENCRYPT = 1 - _DECRYPT = 0 - _MAX_CHUNK_SIZE = 2**30 - 1 - - def __init__(self, backend: Backend, cipher, mode, operation: int) -> None: - self._backend = backend - self._cipher = cipher - self._mode = mode - self._operation = operation - self._tag: typing.Optional[bytes] = None - - if isinstance(self._cipher, ciphers.BlockCipherAlgorithm): - self._block_size_bytes = self._cipher.block_size // 8 - else: - self._block_size_bytes = 1 - - ctx = self._backend._lib.EVP_CIPHER_CTX_new() - ctx = self._backend._ffi.gc( - ctx, self._backend._lib.EVP_CIPHER_CTX_free - ) - - registry = self._backend._cipher_registry - try: - adapter = registry[type(cipher), type(mode)] - except KeyError: - raise UnsupportedAlgorithm( - "cipher {} in {} mode is not supported " - "by this backend.".format( - cipher.name, mode.name if mode else mode - ), - _Reasons.UNSUPPORTED_CIPHER, - ) - - evp_cipher = adapter(self._backend, cipher, mode) - if evp_cipher == self._backend._ffi.NULL: - msg = f"cipher {cipher.name} " - if mode is not None: - msg += f"in {mode.name} mode " - msg += ( - "is not supported by this backend (Your version of OpenSSL " - "may be too old. 
Current version: {}.)" - ).format(self._backend.openssl_version_text()) - raise UnsupportedAlgorithm(msg, _Reasons.UNSUPPORTED_CIPHER) - - if isinstance(mode, modes.ModeWithInitializationVector): - iv_nonce = self._backend._ffi.from_buffer( - mode.initialization_vector - ) - elif isinstance(mode, modes.ModeWithTweak): - iv_nonce = self._backend._ffi.from_buffer(mode.tweak) - elif isinstance(mode, modes.ModeWithNonce): - iv_nonce = self._backend._ffi.from_buffer(mode.nonce) - elif isinstance(cipher, algorithms.ChaCha20): - iv_nonce = self._backend._ffi.from_buffer(cipher.nonce) - else: - iv_nonce = self._backend._ffi.NULL - # begin init with cipher and operation type - res = self._backend._lib.EVP_CipherInit_ex( - ctx, - evp_cipher, - self._backend._ffi.NULL, - self._backend._ffi.NULL, - self._backend._ffi.NULL, - operation, - ) - self._backend.openssl_assert(res != 0) - # set the key length to handle variable key ciphers - res = self._backend._lib.EVP_CIPHER_CTX_set_key_length( - ctx, len(cipher.key) - ) - self._backend.openssl_assert(res != 0) - if isinstance(mode, modes.GCM): - res = self._backend._lib.EVP_CIPHER_CTX_ctrl( - ctx, - self._backend._lib.EVP_CTRL_AEAD_SET_IVLEN, - len(iv_nonce), - self._backend._ffi.NULL, - ) - self._backend.openssl_assert(res != 0) - if mode.tag is not None: - res = self._backend._lib.EVP_CIPHER_CTX_ctrl( - ctx, - self._backend._lib.EVP_CTRL_AEAD_SET_TAG, - len(mode.tag), - mode.tag, - ) - self._backend.openssl_assert(res != 0) - self._tag = mode.tag - - # pass key/iv - res = self._backend._lib.EVP_CipherInit_ex( - ctx, - self._backend._ffi.NULL, - self._backend._ffi.NULL, - self._backend._ffi.from_buffer(cipher.key), - iv_nonce, - operation, - ) - - # Check for XTS mode duplicate keys error - errors = self._backend._consume_errors() - lib = self._backend._lib - if res == 0 and ( - ( - not lib.CRYPTOGRAPHY_IS_LIBRESSL - and errors[0]._lib_reason_match( - lib.ERR_LIB_EVP, lib.EVP_R_XTS_DUPLICATED_KEYS - ) - ) - or ( - lib.Cryptography_HAS_PROVIDERS - and errors[0]._lib_reason_match( - lib.ERR_LIB_PROV, lib.PROV_R_XTS_DUPLICATED_KEYS - ) - ) - ): - raise ValueError("In XTS mode duplicated keys are not allowed") - - self._backend.openssl_assert(res != 0, errors=errors) - - # We purposely disable padding here as it's handled higher up in the - # API. - self._backend._lib.EVP_CIPHER_CTX_set_padding(ctx, 0) - self._ctx = ctx - - def update(self, data: bytes) -> bytes: - buf = bytearray(len(data) + self._block_size_bytes - 1) - n = self.update_into(data, buf) - return bytes(buf[:n]) - - def update_into(self, data: bytes, buf: bytes) -> int: - total_data_len = len(data) - if len(buf) < (total_data_len + self._block_size_bytes - 1): - raise ValueError( - "buffer must be at least {} bytes for this " - "payload".format(len(data) + self._block_size_bytes - 1) - ) - - data_processed = 0 - total_out = 0 - outlen = self._backend._ffi.new("int *") - baseoutbuf = self._backend._ffi.from_buffer(buf, require_writable=True) - baseinbuf = self._backend._ffi.from_buffer(data) - - while data_processed != total_data_len: - outbuf = baseoutbuf + total_out - inbuf = baseinbuf + data_processed - inlen = min(self._MAX_CHUNK_SIZE, total_data_len - data_processed) - - res = self._backend._lib.EVP_CipherUpdate( - self._ctx, outbuf, outlen, inbuf, inlen - ) - if res == 0 and isinstance(self._mode, modes.XTS): - self._backend._consume_errors() - raise ValueError( - "In XTS mode you must supply at least a full block in the " - "first update call. For AES this is 16 bytes." 
- ) - else: - self._backend.openssl_assert(res != 0) - data_processed += inlen - total_out += outlen[0] - - return total_out - - def finalize(self) -> bytes: - if ( - self._operation == self._DECRYPT - and isinstance(self._mode, modes.ModeWithAuthenticationTag) - and self.tag is None - ): - raise ValueError( - "Authentication tag must be provided when decrypting." - ) - - buf = self._backend._ffi.new("unsigned char[]", self._block_size_bytes) - outlen = self._backend._ffi.new("int *") - res = self._backend._lib.EVP_CipherFinal_ex(self._ctx, buf, outlen) - if res == 0: - errors = self._backend._consume_errors() - - if not errors and isinstance(self._mode, modes.GCM): - raise InvalidTag - - lib = self._backend._lib - self._backend.openssl_assert( - errors[0]._lib_reason_match( - lib.ERR_LIB_EVP, - lib.EVP_R_DATA_NOT_MULTIPLE_OF_BLOCK_LENGTH, - ) - or ( - lib.Cryptography_HAS_PROVIDERS - and errors[0]._lib_reason_match( - lib.ERR_LIB_PROV, - lib.PROV_R_WRONG_FINAL_BLOCK_LENGTH, - ) - ) - or ( - lib.CRYPTOGRAPHY_IS_BORINGSSL - and errors[0].reason - == lib.CIPHER_R_DATA_NOT_MULTIPLE_OF_BLOCK_LENGTH - ), - errors=errors, - ) - raise ValueError( - "The length of the provided data is not a multiple of " - "the block length." - ) - - if ( - isinstance(self._mode, modes.GCM) - and self._operation == self._ENCRYPT - ): - tag_buf = self._backend._ffi.new( - "unsigned char[]", self._block_size_bytes - ) - res = self._backend._lib.EVP_CIPHER_CTX_ctrl( - self._ctx, - self._backend._lib.EVP_CTRL_AEAD_GET_TAG, - self._block_size_bytes, - tag_buf, - ) - self._backend.openssl_assert(res != 0) - self._tag = self._backend._ffi.buffer(tag_buf)[:] - - res = self._backend._lib.EVP_CIPHER_CTX_reset(self._ctx) - self._backend.openssl_assert(res == 1) - return self._backend._ffi.buffer(buf)[: outlen[0]] - - def finalize_with_tag(self, tag: bytes) -> bytes: - tag_len = len(tag) - if tag_len < self._mode._min_tag_length: - raise ValueError( - "Authentication tag must be {} bytes or longer.".format( - self._mode._min_tag_length - ) - ) - elif tag_len > self._block_size_bytes: - raise ValueError( - "Authentication tag cannot be more than {} bytes.".format( - self._block_size_bytes - ) - ) - res = self._backend._lib.EVP_CIPHER_CTX_ctrl( - self._ctx, self._backend._lib.EVP_CTRL_AEAD_SET_TAG, len(tag), tag - ) - self._backend.openssl_assert(res != 0) - self._tag = tag - return self.finalize() - - def authenticate_additional_data(self, data: bytes) -> None: - outlen = self._backend._ffi.new("int *") - res = self._backend._lib.EVP_CipherUpdate( - self._ctx, - self._backend._ffi.NULL, - outlen, - self._backend._ffi.from_buffer(data), - len(data), - ) - self._backend.openssl_assert(res != 0) - - @property - def tag(self) -> typing.Optional[bytes]: - return self._tag diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/blocks.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/blocks.py deleted file mode 100644 index ca416b3a9eb1deafd13f72c1e86215b63a4a5c31..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/blocks.py +++ /dev/null @@ -1,2170 +0,0 @@ -from __future__ import annotations - -import copy -import inspect -import json -import os -import random -import secrets -import sys -import time -import warnings -import webbrowser -from abc import abstractmethod -from pathlib import Path -from types import ModuleType -from typing import TYPE_CHECKING, Any, AsyncIterator, Callable, 
Literal, cast - -import anyio -import requests -from anyio import CapacityLimiter -from gradio_client import serializing -from gradio_client import utils as client_utils -from gradio_client.documentation import document, set_documentation_group -from packaging import version - -from gradio import ( - analytics, - components, - external, - networking, - queueing, - routes, - strings, - themes, - utils, - wasm_utils, -) -from gradio.context import Context -from gradio.deprecation import check_deprecated_parameters, warn_deprecation -from gradio.exceptions import ( - DuplicateBlockError, - InvalidApiNameError, - InvalidBlockError, -) -from gradio.helpers import EventData, create_tracker, skip, special_args -from gradio.themes import Default as DefaultTheme -from gradio.themes import ThemeClass as Theme -from gradio.tunneling import ( - BINARY_FILENAME, - BINARY_FOLDER, - BINARY_PATH, - BINARY_URL, - CURRENT_TUNNELS, -) -from gradio.utils import ( - GRADIO_VERSION, - TupleNoPrint, - check_function_inputs_match, - component_or_layout_class, - delete_none, - get_cancel_function, - get_continuous_fn, -) - -try: - import spaces # type: ignore -except Exception: - spaces = None - -set_documentation_group("blocks") - -if TYPE_CHECKING: # Only import for type checking (is False at runtime). - from fastapi.applications import FastAPI - - from gradio.components import Component - -BUILT_IN_THEMES: dict[str, Theme] = { - t.name: t - for t in [ - themes.Base(), - themes.Default(), - themes.Monochrome(), - themes.Soft(), - themes.Glass(), - ] -} - - -class Block: - def __init__( - self, - *, - render: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - visible: bool = True, - root_url: str | None = None, # URL that is prepended to all file paths - _skip_init_processing: bool = False, # Used for loading from Spaces - **kwargs, - ): - self._id = Context.id - Context.id += 1 - self.visible = visible - self.elem_id = elem_id - self.elem_classes = ( - [elem_classes] if isinstance(elem_classes, str) else elem_classes - ) - self.root_url = root_url - self.share_token = secrets.token_urlsafe(32) - self._skip_init_processing = _skip_init_processing - self.parent: BlockContext | None = None - - if render: - self.render() - check_deprecated_parameters(self.__class__.__name__, kwargs=kwargs) - - def render(self): - """ - Adds self into appropriate BlockContext - """ - if Context.root_block is not None and self._id in Context.root_block.blocks: - raise DuplicateBlockError( - f"A block with id: {self._id} has already been rendered in the current Blocks." - ) - if Context.block is not None: - Context.block.add(self) - if Context.root_block is not None: - Context.root_block.blocks[self._id] = self - if isinstance(self, components.IOComponent): - Context.root_block.temp_file_sets.append(self.temp_files) - return self - - def unrender(self): - """ - Removes self from BlockContext if it has been rendered (otherwise does nothing). - Removes self from the layout and collection of blocks, but does not delete any event triggers. - """ - if Context.block is not None: - try: - Context.block.children.remove(self) - except ValueError: - pass - if Context.root_block is not None: - try: - del Context.root_block.blocks[self._id] - except KeyError: - pass - return self - - def get_block_name(self) -> str: - """ - Gets block's class name. - - If it is template component it gets the parent's class name. 
- - @return: class name - """ - return ( - self.__class__.__base__.__name__.lower() - if hasattr(self, "is_template") - else self.__class__.__name__.lower() - ) - - def get_expected_parent(self) -> type[BlockContext] | None: - return None - - def set_event_trigger( - self, - event_name: str, - fn: Callable | None, - inputs: Component | list[Component] | set[Component] | None, - outputs: Component | list[Component] | None, - preprocess: bool = True, - postprocess: bool = True, - scroll_to_output: bool = False, - show_progress: str = "full", - api_name: str | None | Literal[False] = None, - js: str | None = None, - no_target: bool = False, - queue: bool | None = None, - batch: bool = False, - max_batch_size: int = 4, - cancels: list[int] | None = None, - every: float | None = None, - collects_event_data: bool | None = None, - trigger_after: int | None = None, - trigger_only_on_success: bool = False, - ) -> tuple[dict[str, Any], int]: - """ - Adds an event to the component's dependencies. - Parameters: - event_name: event name - fn: Callable function - inputs: input list - outputs: output list - preprocess: whether to run the preprocess methods of components - postprocess: whether to run the postprocess methods of components - scroll_to_output: whether to scroll to output of dependency on trigger - show_progress: whether to show progress animation while running. - api_name: defines how the endpoint appears in the API docs. Can be a string, None, or False. If False, the endpoint will not be exposed in the api docs. If set to None, the endpoint will be exposed in the api docs as an unnamed endpoint, although this behavior will be changed in Gradio 4.0. If set to a string, the endpoint will be exposed in the api docs with the given name. - js: Experimental parameter (API may change): Optional frontend js method to run before running 'fn'. Input arguments for js method are values of 'inputs' and 'outputs', return should be a list of values for output components - no_target: if True, sets "targets" to [], used for Blocks "load" event - queue: If True, will place the request on the queue, if the queue has been enabled. If False, will not put this event on the queue, even if the queue has been enabled. If None, will use the queue setting of the gradio app. - batch: whether this function takes in a batch of inputs - max_batch_size: the maximum batch size to send to the function - cancels: a list of other events to cancel when this event is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another components .click method. - every: Run this event 'every' number of seconds while the client connection is open. Interpreted in seconds. Queue must be enabled. 
- collects_event_data: whether to collect event data for this event - trigger_after: if set, this event will be triggered after 'trigger_after' function index - trigger_only_on_success: if True, this event will only be triggered if the previous event was successful (only applies if `trigger_after` is set) - Returns: dependency information, dependency index - """ - # Support for singular parameter - if isinstance(inputs, set): - inputs_as_dict = True - inputs = sorted(inputs, key=lambda x: x._id) - else: - inputs_as_dict = False - if inputs is None: - inputs = [] - elif not isinstance(inputs, list): - inputs = [inputs] - - if isinstance(outputs, set): - outputs = sorted(outputs, key=lambda x: x._id) - else: - if outputs is None: - outputs = [] - elif not isinstance(outputs, list): - outputs = [outputs] - - if fn is not None and not cancels: - check_function_inputs_match(fn, inputs, inputs_as_dict) - - if Context.root_block is None: - raise AttributeError( - f"{event_name}() and other events can only be called within a Blocks context." - ) - if every is not None and every <= 0: - raise ValueError("Parameter every must be positive or None") - if every and batch: - raise ValueError( - f"Cannot run {event_name} event in a batch and every {every} seconds. " - "Either batch is True or every is non-zero but not both." - ) - - if every and fn: - fn = get_continuous_fn(fn, every) - elif every: - raise ValueError("Cannot set a value for `every` without a `fn`.") - - _, progress_index, event_data_index = ( - special_args(fn) if fn else (None, None, None) - ) - Context.root_block.fns.append( - BlockFunction( - fn, - inputs, - outputs, - preprocess, - postprocess, - inputs_as_dict, - progress_index is not None, - ) - ) - if api_name is not None and api_name is not False: - api_name_ = utils.append_unique_suffix( - api_name, [dep["api_name"] for dep in Context.root_block.dependencies] - ) - if api_name != api_name_: - warnings.warn(f"api_name {api_name} already exists, using {api_name_}") - api_name = api_name_ - - if collects_event_data is None: - collects_event_data = event_data_index is not None - - dependency = { - "targets": [self._id] if not no_target else [], - "trigger": event_name, - "inputs": [block._id for block in inputs], - "outputs": [block._id for block in outputs], - "backend_fn": fn is not None, - "js": js, - "queue": False if fn is None else queue, - "api_name": api_name, - "scroll_to_output": False if utils.get_space() else scroll_to_output, - "show_progress": show_progress, - "every": every, - "batch": batch, - "max_batch_size": max_batch_size, - "cancels": cancels or [], - "types": { - "continuous": bool(every), - "generator": inspect.isgeneratorfunction(fn) or bool(every), - }, - "collects_event_data": collects_event_data, - "trigger_after": trigger_after, - "trigger_only_on_success": trigger_only_on_success, - } - Context.root_block.dependencies.append(dependency) - return dependency, len(Context.root_block.dependencies) - 1 - - def get_config(self): - return { - "visible": self.visible, - "elem_id": self.elem_id, - "elem_classes": self.elem_classes, - "root_url": self.root_url, - } - - @staticmethod - @abstractmethod - def update(**kwargs) -> dict: - return {} - - @classmethod - def get_specific_update(cls, generic_update: dict[str, Any]) -> dict: - generic_update = generic_update.copy() - del generic_update["__type__"] - specific_update = cls.update(**generic_update) - return specific_update - - -class BlockContext(Block): - def __init__( - self, - visible: bool = True, - render: 
bool = True, - **kwargs, - ): - """ - Parameters: - visible: If False, this will be hidden but included in the Blocks config file (its visibility can later be updated). - render: If False, this will not be included in the Blocks config file at all. - """ - self.children: list[Block] = [] - Block.__init__(self, visible=visible, render=render, **kwargs) - - def add_child(self, child: Block): - self.children.append(child) - - def __enter__(self): - self.parent = Context.block - Context.block = self - return self - - def add(self, child: Block): - child.parent = self - self.children.append(child) - - def fill_expected_parents(self): - children = [] - pseudo_parent = None - for child in self.children: - expected_parent = child.get_expected_parent() - if not expected_parent or isinstance(self, expected_parent): - pseudo_parent = None - children.append(child) - else: - if pseudo_parent is not None and isinstance( - pseudo_parent, expected_parent - ): - pseudo_parent.add_child(child) - else: - pseudo_parent = expected_parent(render=False) - pseudo_parent.parent = self - children.append(pseudo_parent) - pseudo_parent.add_child(child) - if Context.root_block: - Context.root_block.blocks[pseudo_parent._id] = pseudo_parent - child.parent = pseudo_parent - self.children = children - - def __exit__(self, *args): - if getattr(self, "allow_expected_parents", True): - self.fill_expected_parents() - Context.block = self.parent - - def postprocess(self, y): - """ - Any postprocessing needed to be performed on a block context. - """ - return y - - -class BlockFunction: - def __init__( - self, - fn: Callable | None, - inputs: list[Component], - outputs: list[Component], - preprocess: bool, - postprocess: bool, - inputs_as_dict: bool, - tracks_progress: bool = False, - ): - self.fn = fn - self.inputs = inputs - self.outputs = outputs - self.preprocess = preprocess - self.postprocess = postprocess - self.tracks_progress = tracks_progress - self.total_runtime = 0 - self.total_runs = 0 - self.inputs_as_dict = inputs_as_dict - self.name = getattr(fn, "__name__", "fn") if fn is not None else None - self.spaces_auto_wrap() - - def spaces_auto_wrap(self): - if spaces is None: - return - if utils.get_space() is None: - return - self.fn = spaces.gradio_auto_wrap(self.fn) - - def __str__(self): - return str( - { - "fn": self.name, - "preprocess": self.preprocess, - "postprocess": self.postprocess, - } - ) - - def __repr__(self): - return str(self) - - -class class_or_instancemethod(classmethod): # noqa: N801 - def __get__(self, instance, type_): - descr_get = super().__get__ if instance is None else self.__func__.__get__ - return descr_get(instance, type_) - - -def postprocess_update_dict(block: Block, update_dict: dict, postprocess: bool = True): - """ - Converts a dictionary of updates into a format that can be sent to the frontend. - E.g. {"__type__": "generic_update", "value": "2", "interactive": False} - Into -> {"__type__": "update", "value": 2.0, "mode": "static"} - - Parameters: - block: The Block that is being updated with this update dictionary. - update_dict: The original update dictionary - postprocess: Whether to postprocess the "value" key of the update dictionary. 
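- Example (an illustrative sketch added for clarity, not from the original source; `num` stands in for a hypothetical gr.Number component, and the result follows the transformation described above):
-     update = {"__type__": "generic_update", "value": "2", "interactive": False}
-     postprocess_update_dict(block=num, update_dict=update)
-     # -> {"__type__": "update", "value": 2.0, "mode": "static"}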
- """ - if update_dict.get("__type__", "") == "generic_update": - update_dict = block.get_specific_update(update_dict) - if update_dict.get("value") is components._Keywords.NO_VALUE: - update_dict.pop("value") - interactive = update_dict.pop("interactive", None) - if interactive is not None: - update_dict["mode"] = "dynamic" if interactive else "static" - prediction_value = delete_none(update_dict, skip_value=True) - if "value" in prediction_value and postprocess: - assert isinstance( - block, components.IOComponent - ), f"Component {block.__class__} does not support value" - prediction_value["value"] = block.postprocess(prediction_value["value"]) - return prediction_value - - -def convert_component_dict_to_list( - outputs_ids: list[int], predictions: dict -) -> list | dict: - """ - Converts a dictionary of component updates into a list of updates in the order of - the outputs_ids and including every output component. Leaves other types of dictionaries unchanged. - E.g. {"textbox": "hello", "number": {"__type__": "generic_update", "value": "2"}} - Into -> ["hello", {"__type__": "generic_update"}, {"__type__": "generic_update", "value": "2"}] - """ - keys_are_blocks = [isinstance(key, Block) for key in predictions] - if all(keys_are_blocks): - reordered_predictions = [skip() for _ in outputs_ids] - for component, value in predictions.items(): - if component._id not in outputs_ids: - raise ValueError( - f"Returned component {component} not specified as output of function." - ) - output_index = outputs_ids.index(component._id) - reordered_predictions[output_index] = value - predictions = utils.resolve_singleton(reordered_predictions) - elif any(keys_are_blocks): - raise ValueError( - "Returned dictionary included some keys as Components. Either all keys must be Components to assign Component values, or return a List of values to assign output values in order." - ) - return predictions - - -def get_api_info(config: dict, serialize: bool = True): - """ - Gets the information needed to generate the API docs from a Blocks config. - Parameters: - config: a Blocks config dictionary - serialize: If True, returns the serialized version of the typed information. If False, returns the raw version. - """ - api_info = {"named_endpoints": {}, "unnamed_endpoints": {}} - mode = config.get("mode", None) - after_new_format = version.parse(config.get("version", "2.0")) > version.Version( - "3.28.3" - ) - - for d, dependency in enumerate(config["dependencies"]): - dependency_info = {"parameters": [], "returns": []} - skip_endpoint = False - skip_components = ["state"] - - inputs = dependency["inputs"] - for i in inputs: - for component in config["components"]: - if component["id"] == i: - break - else: - skip_endpoint = True # if component not found, skip endpoint - break - type = component["type"] - if ( - not component.get("serializer") - and type not in serializing.COMPONENT_MAPPING - ): - skip_endpoint = True # if component not serializable, skip endpoint - break - if type in skip_components: - continue - label = component["props"].get("label", f"parameter_{i}") - # The config has the most specific API info (taking into account the parameters - # of the component), so we use that if it exists. Otherwise, we fallback to the - # Serializer's API info. 
- serializer = serializing.COMPONENT_MAPPING[type]() - if component.get("api_info") and after_new_format: - info = component["api_info"] - example = component["example_inputs"]["serialized"] - else: - assert isinstance(serializer, serializing.Serializable) - info = serializer.api_info() - example = serializer.example_inputs()["raw"] - python_info = info["info"] - if serialize and info["serialized_info"]: - python_info = serializer.serialized_info() - if ( - isinstance(serializer, serializing.FileSerializable) - and component["props"].get("file_count", "single") != "single" - ): - python_info = serializer._multiple_file_serialized_info() - - python_type = client_utils.json_schema_to_python_type(python_info) - serializer_name = serializing.COMPONENT_MAPPING[type].__name__ - dependency_info["parameters"].append( - { - "label": label, - "type": info["info"], - "python_type": { - "type": python_type, - "description": python_info.get("description", ""), - }, - "component": type.capitalize(), - "example_input": example, - "serializer": serializer_name, - } - ) - - outputs = dependency["outputs"] - for o in outputs: - for component in config["components"]: - if component["id"] == o: - break - else: - skip_endpoint = True # if component not found, skip endpoint - break - type = component["type"] - if ( - not component.get("serializer") - and type not in serializing.COMPONENT_MAPPING - ): - skip_endpoint = True # if component not serializable, skip endpoint - break - if type in skip_components: - continue - label = component["props"].get("label", f"value_{o}") - serializer = serializing.COMPONENT_MAPPING[type]() - if component.get("api_info") and after_new_format: - info = component["api_info"] - example = component["example_inputs"]["serialized"] - else: - assert isinstance(serializer, serializing.Serializable) - info = serializer.api_info() - example = serializer.example_inputs()["raw"] - python_info = info["info"] - if serialize and info["serialized_info"]: - python_info = serializer.serialized_info() - if ( - isinstance(serializer, serializing.FileSerializable) - and component["props"].get("file_count", "single") != "single" - ): - python_info = serializer._multiple_file_serialized_info() - python_type = client_utils.json_schema_to_python_type(python_info) - serializer_name = serializing.COMPONENT_MAPPING[type].__name__ - dependency_info["returns"].append( - { - "label": label, - "type": info["info"], - "python_type": { - "type": python_type, - "description": python_info.get("description", ""), - }, - "component": type.capitalize(), - "serializer": serializer_name, - } - ) - - if not dependency["backend_fn"]: - skip_endpoint = True - - if skip_endpoint: - continue - if dependency["api_name"] is not None and dependency["api_name"] is not False: - api_info["named_endpoints"][f"/{dependency['api_name']}"] = dependency_info - elif ( - dependency["api_name"] is False - or mode == "interface" - or mode == "tabbed_interface" - ): - pass # Skip unnamed endpoints in interface mode - else: - api_info["unnamed_endpoints"][str(d)] = dependency_info - - return api_info - - -@document("launch", "queue", "integrate", "load") -class Blocks(BlockContext): - """ - Blocks is Gradio's low-level API that allows you to create more custom web - applications and demos than Interfaces (yet still entirely in Python). - - - Compared to the Interface class, Blocks offers more flexibility and control over: - (1) the layout of components (2) the events that - trigger the execution of functions (3) data flows (e.g. 
inputs can trigger outputs, - which can trigger the next level of outputs). Blocks also offers ways to group - together related demos such as with tabs. - - - The basic usage of Blocks is as follows: create a Blocks object, then use it as a - context (with the "with" statement), and then define layouts, components, or events - within the Blocks context. Finally, call the launch() method to launch the demo. - - Example: - import gradio as gr - def update(name): - return f"Welcome to Gradio, {name}!" - - with gr.Blocks() as demo: - gr.Markdown("Start typing below and then click **Run** to see the output.") - with gr.Row(): - inp = gr.Textbox(placeholder="What is your name?") - out = gr.Textbox() - btn = gr.Button("Run") - btn.click(fn=update, inputs=inp, outputs=out) - - demo.launch() - Demos: blocks_hello, blocks_flipper, blocks_speech_text_sentiment, generate_english_german, sound_alert - Guides: blocks-and-event-listeners, controlling-layout, state-in-blocks, custom-CSS-and-JS, custom-interpretations-with-blocks, using-blocks-like-functions - """ - - def __init__( - self, - theme: Theme | str | None = None, - analytics_enabled: bool | None = None, - mode: str = "blocks", - title: str = "Gradio", - css: str | None = None, - **kwargs, - ): - """ - Parameters: - theme: a Theme object or a string representing a theme. If a string, will look for a built-in theme with that name (e.g. "soft" or "default"), or will attempt to load a theme from the HF Hub (e.g. "gradio/monochrome"). If None, will use the Default theme. - analytics_enabled: whether to allow basic telemetry. If None, will use GRADIO_ANALYTICS_ENABLED environment variable or default to True. - mode: a human-friendly name for the kind of Blocks or Interface being created. - title: The tab title to display when this is opened in a browser window. - css: custom css or path to custom css file to apply to entire Blocks - """ - self.limiter = None - if theme is None: - theme = DefaultTheme() - elif isinstance(theme, str): - if theme.lower() in BUILT_IN_THEMES: - theme = BUILT_IN_THEMES[theme.lower()] - else: - try: - theme = Theme.from_hub(theme) - except Exception as e: - warnings.warn(f"Cannot load {theme}. 
Caught Exception: {str(e)}") - theme = DefaultTheme() - if not isinstance(theme, Theme): - warnings.warn("Theme should be a class loaded from gradio.themes") - theme = DefaultTheme() - self.theme: Theme = theme - self.theme_css = theme._get_theme_css() - self.stylesheets = theme._stylesheets - self.encrypt = False - self.share = False - self.enable_queue = None - self.max_threads = 40 - self.show_error = True - if css is not None and os.path.exists(css): - with open(css) as css_file: - self.css = css_file.read() - else: - self.css = css - - # For analytics_enabled and allow_flagging: (1) first check for - # parameter, (2) check for env variable, (3) default to True/"manual" - self.analytics_enabled = ( - analytics_enabled - if analytics_enabled is not None - else analytics.analytics_enabled() - ) - if not self.analytics_enabled: - os.environ["HF_HUB_DISABLE_TELEMETRY"] = "True" - super().__init__(render=False, **kwargs) - self.blocks: dict[int, Block] = {} - self.fns: list[BlockFunction] = [] - self.dependencies = [] - self.mode = mode - - self.is_running = False - self.local_url = None - self.share_url = None - self.width = None - self.height = None - self.api_open = True - - self.space_id = utils.get_space() - self.favicon_path = None - self.auth = None - self.dev_mode = True - self.app_id = random.getrandbits(64) - self.temp_file_sets = [] - self.title = title - self.show_api = True - - # Only used when an Interface is loaded from a config - self.predict = None - self.input_components = None - self.output_components = None - self.__name__ = None - self.api_mode = None - self.progress_tracking = None - self.ssl_verify = True - - self.allowed_paths = [] - self.blocked_paths = [] - self.root_path = "" - self.root_urls = set() - - if not wasm_utils.IS_WASM and self.analytics_enabled: - is_custom_theme = not any( - self.theme.to_dict() == built_in_theme.to_dict() - for built_in_theme in BUILT_IN_THEMES.values() - ) - data = { - "mode": self.mode, - "custom_css": self.css is not None, - "theme": self.theme.name, - "is_custom_theme": is_custom_theme, - "version": GRADIO_VERSION, - } - analytics.initiated_analytics(data) - - @classmethod - def from_config( - cls, - config: dict, - fns: list[Callable], - root_url: str, - ) -> Blocks: - """ - Factory method that creates a Blocks from a config and list of functions. Used - internally by the gradio.external.load() method. - - Parameters: - config: a dictionary containing the configuration of the Blocks. - fns: a list of functions that are used in the Blocks. Must be in the same order as the dependencies in the config. - root_url: an external url to use as a root URL when serving files for components in the Blocks. 
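- Example (a rough usage sketch, assuming an existing `demo` Blocks app and a hypothetical `predict_fn` matching its single dependency; the URL is illustrative):
-     config = demo.get_config_file()
-     clone = Blocks.from_config(config, fns=[predict_fn], root_url="http://localhost:7860")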
- """ - config = copy.deepcopy(config) - components_config = config["components"] - for component_config in components_config: - # for backwards compatibility, extract style into props - if "style" in component_config["props"]: - component_config["props"].update(component_config["props"]["style"]) - del component_config["props"]["style"] - theme = config.get("theme", "default") - original_mapping: dict[int, Block] = {} - root_urls = {root_url} - - def get_block_instance(id: int) -> Block: - for block_config in components_config: - if block_config["id"] == id: - break - else: - raise ValueError(f"Cannot find block with id {id}") - cls = component_or_layout_class(block_config["type"]) - block_config["props"].pop("type", None) - block_config["props"].pop("name", None) - # If a Gradio app B is loaded into a Gradio app A, and B itself loads a - # Gradio app C, then the root_urls of the components in A need to be the - # URL of C, not B. The else clause below handles this case. - if block_config["props"].get("root_url") is None: - block_config["props"]["root_url"] = f"{root_url}/" - else: - root_urls.add(block_config["props"]["root_url"]) - # Any component has already processed its initial value, so we skip that step here - block = cls(**block_config["props"], _skip_init_processing=True) - return block - - def iterate_over_children(children_list): - for child_config in children_list: - id = child_config["id"] - block = get_block_instance(id) - - original_mapping[id] = block - - children = child_config.get("children") - if children is not None: - assert isinstance( - block, BlockContext - ), f"Invalid config, Block with id {id} has children but is not a BlockContext." - with block: - iterate_over_children(children) - - derived_fields = ["types"] - - with Blocks(theme=theme) as blocks: - # ID 0 should be the root Blocks component - original_mapping[0] = Context.root_block or blocks - - iterate_over_children(config["layout"]["children"]) - - first_dependency = None - - # add the event triggers - for dependency, fn in zip(config["dependencies"], fns): - # We used to add a "fake_event" to the config to cache examples - # without removing it. 
This was causing bugs in calling gr.load - # We fixed the issue by removing "fake_event" from the config in examples.py - # but we still need to skip these events when loading the config to support - # older demos - if dependency["trigger"] == "fake_event": - continue - for field in derived_fields: - dependency.pop(field, None) - targets = dependency.pop("targets") - trigger = dependency.pop("trigger") - dependency.pop("backend_fn") - dependency.pop("documentation", None) - dependency["inputs"] = [ - original_mapping[i] for i in dependency["inputs"] - ] - dependency["outputs"] = [ - original_mapping[o] for o in dependency["outputs"] - ] - dependency.pop("status_tracker", None) - dependency["preprocess"] = False - dependency["postprocess"] = False - - for target in targets: - dependency = original_mapping[target].set_event_trigger( - event_name=trigger, fn=fn, **dependency - )[0] - if first_dependency is None: - first_dependency = dependency - - # Allows some use of Interface-specific methods with loaded Spaces - if first_dependency and Context.root_block: - blocks.predict = [fns[0]] - blocks.input_components = [ - Context.root_block.blocks[i] for i in first_dependency["inputs"] - ] - blocks.output_components = [ - Context.root_block.blocks[o] for o in first_dependency["outputs"] - ] - blocks.__name__ = "Interface" - blocks.api_mode = True - - blocks.root_urls = root_urls - return blocks - - def __str__(self): - return self.__repr__() - - def __repr__(self): - num_backend_fns = len([d for d in self.dependencies if d["backend_fn"]]) - repr = f"Gradio Blocks instance: {num_backend_fns} backend functions" - repr += f"\n{'-' * len(repr)}" - for d, dependency in enumerate(self.dependencies): - if dependency["backend_fn"]: - repr += f"\nfn_index={d}" - repr += "\n inputs:" - for input_id in dependency["inputs"]: - block = self.blocks[input_id] - repr += f"\n |-{block}" - repr += "\n outputs:" - for output_id in dependency["outputs"]: - block = self.blocks[output_id] - repr += f"\n |-{block}" - return repr - - def render(self): - if Context.root_block is not None: - if self._id in Context.root_block.blocks: - raise DuplicateBlockError( - f"A block with id: {self._id} has already been rendered in the current Blocks." - ) - overlapping_ids = set(Context.root_block.blocks).intersection(self.blocks) - for id in overlapping_ids: - # State components are allowed to be reused between Blocks - if not isinstance(self.blocks[id], components.State): - raise DuplicateBlockError( - "At least one block in this Blocks has already been rendered." - ) - - Context.root_block.blocks.update(self.blocks) - Context.root_block.fns.extend(self.fns) - dependency_offset = len(Context.root_block.dependencies) - for i, dependency in enumerate(self.dependencies): - api_name = dependency["api_name"] - if api_name is not None and api_name is not False: - api_name_ = utils.append_unique_suffix( - api_name, - [dep["api_name"] for dep in Context.root_block.dependencies], - ) - if api_name != api_name_: - warnings.warn( - f"api_name {api_name} already exists, using {api_name_}" - ) - dependency["api_name"] = api_name_ - dependency["cancels"] = [ - c + dependency_offset for c in dependency["cancels"] - ] - if dependency.get("trigger_after") is not None: - dependency["trigger_after"] += dependency_offset - # Recreate the cancel function so that it has the latest - # dependency fn indices. 
This is necessary to properly cancel - # events in the backend - if dependency["cancels"]: - updated_cancels = [ - Context.root_block.dependencies[i] - for i in dependency["cancels"] - ] - new_fn = BlockFunction( - get_cancel_function(updated_cancels)[0], - [], - [], - False, - True, - False, - ) - Context.root_block.fns[dependency_offset + i] = new_fn - Context.root_block.dependencies.append(dependency) - Context.root_block.temp_file_sets.extend(self.temp_file_sets) - Context.root_block.root_urls.update(self.root_urls) - - if Context.block is not None: - Context.block.children.extend(self.children) - return self - - def is_callable(self, fn_index: int = 0) -> bool: - """Checks if a particular Blocks function is callable (i.e. not stateful or a generator).""" - block_fn = self.fns[fn_index] - dependency = self.dependencies[fn_index] - - if inspect.isasyncgenfunction(block_fn.fn): - return False - if inspect.isgeneratorfunction(block_fn.fn): - return False - for input_id in dependency["inputs"]: - block = self.blocks[input_id] - if getattr(block, "stateful", False): - return False - for output_id in dependency["outputs"]: - block = self.blocks[output_id] - if getattr(block, "stateful", False): - return False - - return True - - def __call__(self, *inputs, fn_index: int = 0, api_name: str | None = None): - """ - Allows Blocks objects to be called as functions. Supply the parameters to the - function as positional arguments. To choose which function to call, use the - fn_index parameter, which must be a keyword argument. - - Parameters: - *inputs: the parameters to pass to the function - fn_index: the index of the function to call (defaults to 0, which for Interfaces, is the default prediction function) - api_name: The api_name of the dependency to call. Will take precedence over fn_index. - """ - if api_name is not None: - inferred_fn_index = next( - ( - i - for i, d in enumerate(self.dependencies) - if d.get("api_name") == api_name - ), - None, - ) - if inferred_fn_index is None: - raise InvalidApiNameError( - f"Cannot find a function with api_name {api_name}" - ) - fn_index = inferred_fn_index - if not (self.is_callable(fn_index)): - raise ValueError( - "This function is not callable because it is either stateful or is a generator. Please use the .launch() method instead to create an interactive user interface." - ) - - inputs = list(inputs) - processed_inputs = self.serialize_data(fn_index, inputs) - batch = self.dependencies[fn_index]["batch"] - if batch: - processed_inputs = [[inp] for inp in processed_inputs] - - outputs = client_utils.synchronize_async( - self.process_api, - fn_index=fn_index, - inputs=processed_inputs, - request=None, - state={}, - ) - outputs = outputs["data"] - - if batch: - outputs = [out[0] for out in outputs] - - processed_outputs = self.deserialize_data(fn_index, outputs) - processed_outputs = utils.resolve_singleton(processed_outputs) - - return processed_outputs - - async def call_function( - self, - fn_index: int, - processed_input: list[Any], - iterator: AsyncIterator[Any] | None = None, - requests: routes.Request | list[routes.Request] | None = None, - event_id: str | None = None, - event_data: EventData | None = None, - ): - """ - Calls function with given index and preprocessed input, and measures process time. 
- Parameters: - fn_index: index of function to call - processed_input: preprocessed input to pass to function - iterator: iterator to use if function is a generator - requests: requests to pass to function - event_id: id of event in queue - event_data: data associated with event trigger - """ - block_fn = self.fns[fn_index] - assert block_fn.fn, f"function with index {fn_index} not defined." - is_generating = False - - if block_fn.inputs_as_dict: - processed_input = [dict(zip(block_fn.inputs, processed_input))] - - request = requests[0] if isinstance(requests, list) else requests - processed_input, progress_index, _ = special_args( - block_fn.fn, processed_input, request, event_data - ) - progress_tracker = ( - processed_input[progress_index] if progress_index is not None else None - ) - - start = time.time() - - fn = utils.get_function_with_locals(block_fn.fn, self, event_id) - - if iterator is None: # If not a generator function that has already run - if progress_tracker is not None and progress_index is not None: - progress_tracker, fn = create_tracker( - self, event_id, fn, progress_tracker.track_tqdm - ) - processed_input[progress_index] = progress_tracker - - if inspect.iscoroutinefunction(fn): - prediction = await fn(*processed_input) - else: - prediction = await anyio.to_thread.run_sync( - fn, *processed_input, limiter=self.limiter - ) - else: - prediction = None - - if inspect.isgeneratorfunction(fn) or inspect.isasyncgenfunction(fn): - if not self.enable_queue: - raise ValueError("Need to enable queue to use generators.") - try: - if iterator is None: - iterator = cast(AsyncIterator[Any], prediction) - if inspect.isgenerator(iterator): - iterator = utils.SyncToAsyncIterator(iterator, self.limiter) - prediction = await utils.async_iteration(iterator) - is_generating = True - except StopAsyncIteration: - n_outputs = len(self.dependencies[fn_index].get("outputs")) - prediction = ( - components._Keywords.FINISHED_ITERATING - if n_outputs == 1 - else (components._Keywords.FINISHED_ITERATING,) * n_outputs - ) - iterator = None - - duration = time.time() - start - - return { - "prediction": prediction, - "duration": duration, - "is_generating": is_generating, - "iterator": iterator, - } - - def serialize_data(self, fn_index: int, inputs: list[Any]) -> list[Any]: - dependency = self.dependencies[fn_index] - processed_input = [] - - for i, input_id in enumerate(dependency["inputs"]): - try: - block = self.blocks[input_id] - except KeyError as e: - raise InvalidBlockError( - f"Input component with id {input_id} used in {dependency['trigger']}() event is not defined in this gr.Blocks context. You are allowed to nest gr.Blocks contexts, but there must be a gr.Blocks context that contains all components and events." - ) from e - assert isinstance( - block, components.IOComponent - ), f"{block.__class__} Component with id {input_id} not a valid input component." - serialized_input = block.serialize(inputs[i]) - processed_input.append(serialized_input) - - return processed_input - - def deserialize_data(self, fn_index: int, outputs: list[Any]) -> list[Any]: - dependency = self.dependencies[fn_index] - predictions = [] - - for o, output_id in enumerate(dependency["outputs"]): - try: - block = self.blocks[output_id] - except KeyError as e: - raise InvalidBlockError( - f"Output component with id {output_id} used in {dependency['trigger']}() event not found in this gr.Blocks context. 
You are allowed to nest gr.Blocks contexts, but there must be a gr.Blocks context that contains all components and events." - ) from e - assert isinstance( - block, components.IOComponent - ), f"{block.__class__} Component with id {output_id} not a valid output component." - deserialized = block.deserialize( - outputs[o], - save_dir=block.DEFAULT_TEMP_DIR, - root_url=block.root_url, - hf_token=Context.hf_token, - ) - predictions.append(deserialized) - - return predictions - - def validate_inputs(self, fn_index: int, inputs: list[Any]): - block_fn = self.fns[fn_index] - dependency = self.dependencies[fn_index] - - dep_inputs = dependency["inputs"] - - # This handles incorrect inputs when args are changed by a JS function - # Only check not enough args case, ignore extra arguments (for now) - # TODO: make this stricter? - if len(inputs) < len(dep_inputs): - name = ( - f" ({block_fn.name})" - if block_fn.name and block_fn.name != "" - else "" - ) - - wanted_args = [] - received_args = [] - for input_id in dep_inputs: - block = self.blocks[input_id] - wanted_args.append(str(block)) - for inp in inputs: - v = f'"{inp}"' if isinstance(inp, str) else str(inp) - received_args.append(v) - - wanted = ", ".join(wanted_args) - received = ", ".join(received_args) - - # JS func didn't pass enough arguments - raise ValueError( - f"""An event handler{name} didn't receive enough input values (needed: {len(dep_inputs)}, got: {len(inputs)}). -Check if the event handler calls a Javascript function, and make sure its return value is correct. -Wanted inputs: - [{wanted}] -Received inputs: - [{received}]""" - ) - - def preprocess_data(self, fn_index: int, inputs: list[Any], state: dict[int, Any]): - block_fn = self.fns[fn_index] - dependency = self.dependencies[fn_index] - - self.validate_inputs(fn_index, inputs) - - if block_fn.preprocess: - processed_input = [] - for i, input_id in enumerate(dependency["inputs"]): - try: - block = self.blocks[input_id] - except KeyError as e: - raise InvalidBlockError( - f"Input component with id {input_id} used in {dependency['trigger']}() event not found in this gr.Blocks context. You are allowed to nest gr.Blocks contexts, but there must be a gr.Blocks context that contains all components and events." - ) from e - assert isinstance( - block, components.Component - ), f"{block.__class__} Component with id {input_id} not a valid input component." - if getattr(block, "stateful", False): - processed_input.append(state.get(input_id)) - else: - processed_input.append(block.preprocess(inputs[i])) - else: - processed_input = inputs - return processed_input - - def validate_outputs(self, fn_index: int, predictions: Any | list[Any]): - block_fn = self.fns[fn_index] - dependency = self.dependencies[fn_index] - - dep_outputs = dependency["outputs"] - - if type(predictions) is not list and type(predictions) is not tuple: - predictions = [predictions] - - if len(predictions) < len(dep_outputs): - name = ( - f" ({block_fn.name})" - if block_fn.name and block_fn.name != "" - else "" - ) - - wanted_args = [] - received_args = [] - for output_id in dep_outputs: - block = self.blocks[output_id] - wanted_args.append(str(block)) - for pred in predictions: - v = f'"{pred}"' if isinstance(pred, str) else str(pred) - received_args.append(v) - - wanted = ", ".join(wanted_args) - received = ", ".join(received_args) - - raise ValueError( - f"""An event handler{name} didn't receive enough output values (needed: {len(dep_outputs)}, received: {len(predictions)}). 
-Wanted outputs: - [{wanted}] -Received outputs: - [{received}]""" - ) - - def postprocess_data( - self, fn_index: int, predictions: list | dict, state: dict[int, Any] - ): - block_fn = self.fns[fn_index] - dependency = self.dependencies[fn_index] - batch = dependency["batch"] - - if type(predictions) is dict and len(predictions) > 0: - predictions = convert_component_dict_to_list( - dependency["outputs"], predictions - ) - - if len(dependency["outputs"]) == 1 and not (batch): - predictions = [ - predictions, - ] - - self.validate_outputs(fn_index, predictions)  # type: ignore - - output = [] - for i, output_id in enumerate(dependency["outputs"]): - try: - if predictions[i] is components._Keywords.FINISHED_ITERATING: - output.append(None) - continue - except (IndexError, KeyError) as err: - raise ValueError( - "Number of output components does not match number " - f"of values returned from function {block_fn.name}" - ) from err - - try: - block = self.blocks[output_id] - except KeyError as e: - raise InvalidBlockError( - f"Output component with id {output_id} used in {dependency['trigger']}() event not found in this gr.Blocks context. You are allowed to nest gr.Blocks contexts, but there must be a gr.Blocks context that contains all components and events." - ) from e - - if getattr(block, "stateful", False): - if not utils.is_update(predictions[i]): - state[output_id] = predictions[i] - output.append(None) - else: - prediction_value = predictions[i] - if utils.is_update(prediction_value): - assert isinstance(prediction_value, dict) - prediction_value = postprocess_update_dict( - block=block, - update_dict=prediction_value, - postprocess=block_fn.postprocess, - ) - elif block_fn.postprocess: - assert isinstance( - block, components.Component - ), f"{block.__class__} Component with id {output_id} not a valid output component." - prediction_value = block.postprocess(prediction_value) - output.append(prediction_value) - - return output - - async def process_api( - self, - fn_index: int, - inputs: list[Any], - state: dict[int, Any], - request: routes.Request | list[routes.Request] | None = None, - iterators: dict[int, Any] | None = None, - event_id: str | None = None, - event_data: EventData | None = None, - ) -> dict[str, Any]: - """ - Processes API calls from the frontend. First preprocesses the data, - then runs the relevant function, then postprocesses the output. - Parameters: - fn_index: Index of function to run. - inputs: input data received from the frontend - state: data stored from stateful components for session (key is input block id) - request: the gr.Request object containing information about the network request (e.g. IP address, headers, query parameters, username) - iterators: the in-progress iterators for each generator function (key is function index) - event_id: id of event that triggered this API call - event_data: data associated with the event trigger itself - Returns: a dict with the postprocessed output data, the generator status ("is_generating"), the in-progress iterator, and timing information - """ - block_fn = self.fns[fn_index] - batch = self.dependencies[fn_index]["batch"] - - if batch: - max_batch_size = self.dependencies[fn_index]["max_batch_size"] - batch_sizes = [len(inp) for inp in inputs] - batch_size = batch_sizes[0] - if inspect.isasyncgenfunction(block_fn.fn) or inspect.isgeneratorfunction( - block_fn.fn - ): - raise ValueError("Gradio does not support generators in batch mode.") - if not all(x == batch_size for x in batch_sizes): - raise ValueError( - f"All inputs to a batch function must have the same length but instead have sizes: {batch_sizes}." 
- ) - if batch_size > max_batch_size: - raise ValueError( - f"Batch size ({batch_size}) exceeds the max_batch_size for this function ({max_batch_size})" - ) - - inputs = [ - self.preprocess_data(fn_index, list(i), state) for i in zip(*inputs) - ] - result = await self.call_function( - fn_index, list(zip(*inputs)), None, request, event_id, event_data - ) - preds = result["prediction"] - data = [ - self.postprocess_data(fn_index, list(o), state) for o in zip(*preds) - ] - data = list(zip(*data)) - is_generating, iterator = None, None - else: - inputs = self.preprocess_data(fn_index, inputs, state) - iterator = iterators.get(fn_index, None) if iterators else None - result = await self.call_function( - fn_index, inputs, iterator, request, event_id, event_data - ) - data = self.postprocess_data(fn_index, result["prediction"], state) - is_generating, iterator = result["is_generating"], result["iterator"] - - block_fn.total_runtime += result["duration"] - block_fn.total_runs += 1 - return { - "data": data, - "is_generating": is_generating, - "iterator": iterator, - "duration": result["duration"], - "average_duration": block_fn.total_runtime / block_fn.total_runs, - } - - async def create_limiter(self): - self.limiter = ( - None - if self.max_threads == 40 - else CapacityLimiter(total_tokens=self.max_threads) - ) - - def get_config(self): - return {"type": "column"} - - def get_config_file(self): - config = { - "version": routes.VERSION, - "mode": self.mode, - "dev_mode": self.dev_mode, - "analytics_enabled": self.analytics_enabled, - "components": [], - "css": self.css, - "title": self.title or "Gradio", - "space_id": self.space_id, - "enable_queue": getattr(self, "enable_queue", False), # launch attributes - "show_error": getattr(self, "show_error", False), - "show_api": self.show_api, - "is_colab": utils.colab_check(), - "stylesheets": self.stylesheets, - "theme": self.theme.name, - } - - def get_layout(block): - if not isinstance(block, BlockContext): - return {"id": block._id} - children_layout = [] - for child in block.children: - children_layout.append(get_layout(child)) - return {"id": block._id, "children": children_layout} - - config["layout"] = get_layout(self) - - for _id, block in self.blocks.items(): - props = block.get_config() if hasattr(block, "get_config") else {} - block_config = { - "id": _id, - "type": block.get_block_name(), - "props": utils.delete_none(props), - } - serializer = utils.get_serializer_name(block) - if serializer: - assert isinstance(block, serializing.Serializable) - block_config["serializer"] = serializer - block_config["api_info"] = block.api_info() # type: ignore - block_config["example_inputs"] = block.example_inputs() # type: ignore - config["components"].append(block_config) - config["dependencies"] = self.dependencies - return config - - def __enter__(self): - if Context.block is None: - Context.root_block = self - self.parent = Context.block - Context.block = self - self.exited = False - return self - - def __exit__(self, *args): - super().fill_expected_parents() - Context.block = self.parent - # Configure the load events before root_block is reset - self.attach_load_events() - if self.parent is None: - Context.root_block = None - else: - self.parent.children.extend(self.children) - self.config = self.get_config_file() - self.app = routes.App.create_app(self) - self.progress_tracking = any(block_fn.tracks_progress for block_fn in self.fns) - self.exited = True - - @class_or_instancemethod - def load( - self_or_cls, # noqa: N805 - fn: Callable | None = 
None, - inputs: list[Component] | None = None, - outputs: list[Component] | None = None, - api_name: str | None | Literal[False] = None, - scroll_to_output: bool = False, - show_progress: str = "full", - queue=None, - batch: bool = False, - max_batch_size: int = 4, - preprocess: bool = True, - postprocess: bool = True, - every: float | None = None, - _js: str | None = None, - *, - name: str | None = None, - src: str | None = None, - api_key: str | None = None, - alias: str | None = None, - **kwargs, - ) -> Blocks | dict[str, Any] | None: - """ - For backwards compatibility reasons, this is both a class method and an instance - method, the two of which, confusingly, do two completely different things. - - - Class method: loads a demo from a Hugging Face Spaces repo and creates it locally and returns a block instance. Warning: this method will be deprecated. Use the equivalent `gradio.load()` instead. - - - Instance method: adds event that runs as soon as the demo loads in the browser. Example usage below. - Parameters: - name: Class Method - the name of the model (e.g. "gpt2" or "facebook/bart-base") or space (e.g. "flax-community/spanish-gpt2"), can include the `src` as prefix (e.g. "models/facebook/bart-base") - src: Class Method - the source of the model: `models` or `spaces` (or leave empty if source is provided as a prefix in `name`) - api_key: Class Method - optional access token for loading private Hugging Face Hub models or spaces. Find your token here: https://huggingface.co/settings/tokens. Warning: only provide this if you are loading a trusted private Space as it can be read by the Space you are loading. - alias: Class Method - optional string used as the name of the loaded model instead of the default name (only applies if loading a Space running Gradio 2.x) - fn: Instance Method - the function to wrap an interface around. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component. - inputs: Instance Method - List of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list. - outputs: Instance Method - List of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list. - api_name: Instance Method - Defines how the endpoint appears in the API docs. Can be a string, None, or False. If False, the endpoint will not be exposed in the api docs. If set to None, the endpoint will be exposed in the api docs as an unnamed endpoint, although this behavior will be changed in Gradio 4.0. If set to a string, the endpoint will be exposed in the api docs with the given name. - scroll_to_output: Instance Method - If True, will scroll to output component on completion - show_progress: Instance Method - If True, will show progress animation while pending - queue: Instance Method - If True, will place the request on the queue, if the queue exists - batch: Instance Method - If True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component. 
- max_batch_size: Instance Method - Maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True) - preprocess: Instance Method - If False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component). - postprocess: Instance Method - If False, will not run postprocessing of component data before returning 'fn' output to the browser. - every: Instance Method - Run this event 'every' number of seconds. Interpreted in seconds. Queue must be enabled. - Example: - import gradio as gr - import datetime - with gr.Blocks() as demo: - def get_time(): - return datetime.datetime.now().time() - dt = gr.Textbox(label="Current time") - demo.load(get_time, inputs=None, outputs=dt) - demo.launch() - """ - if isinstance(self_or_cls, type): - warn_deprecation( - "gr.Blocks.load() will be deprecated. Use gr.load() instead." - ) - if name is None: - raise ValueError( - "Blocks.load() requires passing parameters as keyword arguments" - ) - return external.load( - name=name, src=src, hf_token=api_key, alias=alias, **kwargs - ) - else: - from gradio.events import Dependency - - dep, dep_index = self_or_cls.set_event_trigger( - event_name="load", - fn=fn, - inputs=inputs, - outputs=outputs, - api_name=api_name, - preprocess=preprocess, - postprocess=postprocess, - scroll_to_output=scroll_to_output, - show_progress=show_progress, - js=_js, - queue=queue, - batch=batch, - max_batch_size=max_batch_size, - every=every, - no_target=True, - ) - return Dependency(self_or_cls, dep, dep_index) - - def clear(self): - """Resets the layout of the Blocks object.""" - self.blocks = {} - self.fns = [] - self.dependencies = [] - self.children = [] - return self - - @document() - def queue( - self, - concurrency_count: int = 1, - status_update_rate: float | Literal["auto"] = "auto", - client_position_to_load_data: int | None = None, - default_enabled: bool | None = None, - api_open: bool = True, - max_size: int | None = None, - ): - """ - You can control the rate of processed requests by creating a queue. This will allow you to set the number of requests to be processed at one time, and will let users know their position in the queue. - Parameters: - concurrency_count: Number of worker threads that will be processing requests from the queue concurrently. Increasing this number will increase the rate at which requests are processed, but will also increase the memory usage of the queue. - status_update_rate: If "auto", Queue will send status estimations to all clients whenever a job is finished. Otherwise Queue will send status at regular intervals set by this parameter as the number of seconds. - client_position_to_load_data: DEPRECATED. This parameter is deprecated and has no effect. - default_enabled: Deprecated and has no effect. - api_open: If True, the REST routes of the backend will be open, allowing requests made directly to those endpoints to skip the queue. - max_size: The maximum number of events the queue will store at any given moment. If the queue is full, new events will not be added and a user will receive a message saying that the queue is full. If None, the queue size will be unlimited. 
- Example: (Blocks) - with gr.Blocks() as demo: - button = gr.Button(label="Generate Image") - button.click(fn=image_generator, inputs=gr.Textbox(), outputs=gr.Image()) - demo.queue(concurrency_count=3) - demo.launch() - Example: (Interface) - demo = gr.Interface(image_generator, gr.Textbox(), gr.Image()) - demo.queue(concurrency_count=3) - demo.launch() - """ - if default_enabled is not None: - warn_deprecation( - "The default_enabled parameter of queue has no effect and will be removed " - "in a future version of gradio." - ) - self.enable_queue = True - self.api_open = api_open - if client_position_to_load_data is not None: - warn_deprecation( - "The client_position_to_load_data parameter is deprecated." - ) - self._queue = queueing.Queue( - live_updates=status_update_rate == "auto", - concurrency_count=concurrency_count, - update_intervals=status_update_rate if status_update_rate != "auto" else 1, - max_size=max_size, - blocks_dependencies=self.dependencies, - ) - self.config = self.get_config_file() - self.app = routes.App.create_app(self) - return self - - def validate_queue_settings(self): - if not self.enable_queue and self.progress_tracking: - raise ValueError("Progress tracking requires queuing to be enabled.") - - for fn_index, dep in enumerate(self.dependencies): - if not self.enable_queue and self.queue_enabled_for_fn(fn_index): - raise ValueError( - f"The queue is enabled for event {dep['api_name'] if dep['api_name'] else fn_index} " - "but the queue has not been enabled for the app. Please call .queue() " - "on your app. Consult https://gradio.app/docs/#blocks-queue for information on how " - "to configure the queue." - ) - for i in dep["cancels"]: - if not self.queue_enabled_for_fn(i): - raise ValueError( - "Queue needs to be enabled! " - "You may get this error by either 1) passing a function that uses the yield keyword " - "into an interface without enabling the queue or 2) defining an event that cancels " - "another event without enabling the queue. Both can be solved by calling .queue() " - "before .launch()" - ) - if dep["batch"] and ( - dep["queue"] is False - or (dep["queue"] is None and not self.enable_queue) - ): - raise ValueError("In order to use batching, the queue must be enabled.") - - def launch( - self, - inline: bool | None = None, - inbrowser: bool = False, - share: bool | None = None, - debug: bool = False, - enable_queue: bool | None = None, - max_threads: int = 40, - auth: Callable | tuple[str, str] | list[tuple[str, str]] | None = None, - auth_message: str | None = None, - prevent_thread_lock: bool = False, - show_error: bool = False, - server_name: str | None = None, - server_port: int | None = None, - show_tips: bool = False, - height: int = 500, - width: int | str = "100%", - encrypt: bool | None = None, - favicon_path: str | None = None, - ssl_keyfile: str | None = None, - ssl_certfile: str | None = None, - ssl_keyfile_password: str | None = None, - ssl_verify: bool = True, - quiet: bool = False, - show_api: bool = True, - file_directories: list[str] | None = None, - allowed_paths: list[str] | None = None, - blocked_paths: list[str] | None = None, - root_path: str = "", - _frontend: bool = True, - app_kwargs: dict[str, Any] | None = None, - ) -> tuple[FastAPI, str, str]: - """ - Launches a simple web server that serves the demo. Can also be used to create a - public link used by anyone to access the demo from their browser by setting share=True. - - Parameters: - inline: whether to display in the interface inline in an iframe. 
Defaults to True in python notebooks; False otherwise.
- inbrowser: whether to automatically launch the interface in a new tab on the default browser.
- share: whether to create a publicly shareable link for the interface. Creates an SSH tunnel to make your UI accessible from anywhere. If not provided, it is set to False by default every time, except when running in Google Colab. When localhost is not accessible (e.g. Google Colab), setting share=False is not supported.
- debug: if True, blocks the main thread from running. If running in Google Colab, this is needed to print the errors in the cell output.
- auth: If provided, username and password (or list of username-password tuples) required to access interface. Can also provide function that takes username and password and returns True if valid login.
- auth_message: If provided, HTML message provided on login page.
- prevent_thread_lock: If True, the interface will not block the main thread while the server is running; by default, launch() blocks a non-interactive script until the server is closed.
- show_error: If True, any errors in the interface will be displayed in an alert modal and printed in the browser console log.
- server_port: will start gradio app on this port (if available). Can be set by environment variable GRADIO_SERVER_PORT. If None, will search for an available port starting at 7860.
- server_name: to make app accessible on local network, set this to "0.0.0.0". Can be set by environment variable GRADIO_SERVER_NAME. If None, will use "127.0.0.1".
- show_tips: if True, will occasionally show tips about new Gradio features
- enable_queue: DEPRECATED (use .queue() method instead.) if True, inference requests will be served through a queue instead of with parallel threads. Required for longer inference times (> 1min) to prevent timeout. The default option in HuggingFace Spaces is True. The default option elsewhere is False.
- max_threads: the maximum number of total threads that the Gradio app can generate in parallel. The default is inherited from the starlette library (currently 40). Applies whether the queue is enabled or not. If queuing is enabled, this parameter is increased to be at least the concurrency_count of the queue.
- width: The width in pixels of the iframe element containing the interface (used if inline=True)
- height: The height in pixels of the iframe element containing the interface (used if inline=True)
- encrypt: DEPRECATED. Has no effect.
- favicon_path: If a path to a file (.png, .gif, or .ico) is provided, it will be used as the favicon for the web page.
- ssl_keyfile: If a path to a file is provided, will use this as the private key file to create a local server running on https.
- ssl_certfile: If a path to a file is provided, will use this as the signed certificate for https. Needs to be provided if ssl_keyfile is provided.
- ssl_keyfile_password: If a password is provided, will use this with the ssl certificate for https.
- ssl_verify: If False, skips certificate validation which allows self-signed certificates to be used.
- quiet: If True, suppresses most print statements.
- show_api: If True, shows the api docs in the footer of the app. Default True. If the queue is enabled, then the api_open parameter of .queue() will determine if the api docs are shown, independent of the value of show_api.
- file_directories: This parameter has been renamed to `allowed_paths`. It will be removed in a future version.
- allowed_paths: List of complete filepaths or parent directories that gradio is allowed to serve (in addition to the directory containing the gradio python file).
Must be absolute paths. Warning: if you provide directories, any files in these directories or their subdirectories are accessible to all users of your app. - blocked_paths: List of complete filepaths or parent directories that gradio is not allowed to serve (i.e. users of your app are not allowed to access). Must be absolute paths. Warning: takes precedence over `allowed_paths` and all other directories exposed by Gradio by default. - root_path: The root path (or "mount point") of the application, if it's not served from the root ("/") of the domain. Often used when the application is behind a reverse proxy that forwards requests to the application. For example, if the application is served at "https://example.com/myapp", the `root_path` should be set to "/myapp". - app_kwargs: Additional keyword arguments to pass to the underlying FastAPI app as a dictionary of parameter keys and argument values. For example, `{"docs_url": "/docs"}` - Returns: - app: FastAPI app object that is running the demo - local_url: Locally accessible link to the demo - share_url: Publicly accessible link to the demo (if share=True, otherwise None) - Example: (Blocks) - import gradio as gr - def reverse(text): - return text[::-1] - with gr.Blocks() as demo: - button = gr.Button(value="Reverse") - button.click(reverse, gr.Textbox(), gr.Textbox()) - demo.launch(share=True, auth=("username", "password")) - Example: (Interface) - import gradio as gr - def reverse(text): - return text[::-1] - demo = gr.Interface(reverse, "text", "text") - demo.launch(share=True, auth=("username", "password")) - """ - if not self.exited: - self.__exit__() - - self.dev_mode = False - if ( - auth - and not callable(auth) - and not isinstance(auth[0], tuple) - and not isinstance(auth[0], list) - ): - self.auth = [auth] - else: - self.auth = auth - self.auth_message = auth_message - self.show_tips = show_tips - self.show_error = show_error - self.height = height - self.width = width - self.favicon_path = favicon_path - self.ssl_verify = ssl_verify - self.root_path = root_path - - if enable_queue is not None: - self.enable_queue = enable_queue - warn_deprecation( - "The `enable_queue` parameter has been deprecated. " - "Please use the `.queue()` method instead.", - ) - if encrypt is not None: - warn_deprecation( - "The `encrypt` parameter has been deprecated and has no effect.", - ) - - if self.space_id: - self.enable_queue = self.enable_queue is not False - else: - self.enable_queue = self.enable_queue is True - if self.enable_queue and not hasattr(self, "_queue"): - self.queue() - self.show_api = self.api_open if self.enable_queue else show_api - - if file_directories is not None: - warn_deprecation( - "The `file_directories` parameter has been renamed to `allowed_paths`. " - "Please use that instead.", - ) - if allowed_paths is None: - allowed_paths = file_directories - self.allowed_paths = allowed_paths or [] - self.blocked_paths = blocked_paths or [] - - if not isinstance(self.allowed_paths, list): - raise ValueError("`allowed_paths` must be a list of directories.") - if not isinstance(self.blocked_paths, list): - raise ValueError("`blocked_paths` must be a list of directories.") - - self.validate_queue_settings() - - self.config = self.get_config_file() - self.max_threads = max( - self._queue.max_thread_count if self.enable_queue else 0, max_threads - ) - - if self.is_running: - assert isinstance( - self.local_url, str - ), f"Invalid local_url: {self.local_url}" - if not (quiet): - print( - "Rerunning server... 
use `close()` to stop if you need to change `launch()` parameters.\n----"
- )
- else:
- if wasm_utils.IS_WASM:
- server_name = "xxx"
- server_port = 99999
- local_url = ""
- server = None
-
- # In the Wasm environment, we only need the app object,
- # which the frontend app will directly communicate with through the Worker API;
- # we don't need to start a server.
- # So we just create the app object and register it here,
- # and avoid `networking.start_server`, which would start a server that doesn't work in the Wasm env.
- from gradio.routes import App
-
- app = App.create_app(self, app_kwargs=app_kwargs)
- wasm_utils.register_app(app)
- else:
- (
- server_name,
- server_port,
- local_url,
- app,
- server,
- ) = networking.start_server(
- self,
- server_name,
- server_port,
- ssl_keyfile,
- ssl_certfile,
- ssl_keyfile_password,
- app_kwargs=app_kwargs,
- )
- self.server_name = server_name
- self.local_url = local_url
- self.server_port = server_port
- self.server_app = app
- self.server = server
- self.is_running = True
- self.is_colab = utils.colab_check()
- self.is_kaggle = utils.kaggle_check()
- self.is_sagemaker = utils.sagemaker_check()
-
- self.protocol = (
- "https"
- if self.local_url.startswith("https") or self.is_colab
- else "http"
- )
-
- if self.enable_queue:
- self._queue.set_url(self.local_url)
-
- # Async functions cannot be run in the background outside the app's scope,
- # so work around this by triggering the app's startup endpoint directly.
- if not wasm_utils.IS_WASM:
- requests.get(f"{self.local_url}startup-events", verify=ssl_verify)
-
- if wasm_utils.IS_WASM:
- return TupleNoPrint((self.server_app, self.local_url, self.share_url))
-
- utils.launch_counter()
-
- if share is None:
- if self.is_colab and self.enable_queue:
- if not quiet:
- print(
- "Setting queue=True in a Colab notebook requires sharing enabled. Setting `share=True` (you can turn this off by setting `share=False` in `launch()` explicitly).\n"
- )
- self.share = True
- elif self.is_kaggle:
- if not quiet:
- print(
- "Kaggle notebooks require sharing enabled. Setting `share=True` (you can turn this off by setting `share=False` in `launch()` explicitly).\n"
- )
- self.share = True
- elif self.is_sagemaker:
- if not quiet:
- print(
- "Sagemaker notebooks may require sharing enabled. Setting `share=True` (you can turn this off by setting `share=False` in `launch()` explicitly).\n"
- )
- self.share = True
- else:
- self.share = False
- else:
- self.share = share
-
- # If running in a colab or not able to access localhost,
- # a shareable link must be created.
- if _frontend and (not networking.url_ok(self.local_url)) and (not self.share):
- raise ValueError(
- "When localhost is not accessible, a shareable link must be created. Please set share=True or check your proxy settings to allow access to localhost."
- )
-
- if self.is_colab:
- if not quiet:
- if debug:
- print(strings.en["COLAB_DEBUG_TRUE"])
- else:
- print(strings.en["COLAB_DEBUG_FALSE"])
- if not self.share:
- print(strings.en["COLAB_WARNING"].format(self.server_port))
- if self.enable_queue and not self.share:
- raise ValueError(
- "When using queueing in Colab, a shareable link must be created. Please set share=True."
- ) - else: - print( - strings.en["RUNNING_LOCALLY_SEPARATED"].format( - self.protocol, self.server_name, self.server_port - ) - ) - - if self.share: - if self.space_id: - raise RuntimeError("Share is not supported when you are in Spaces") - try: - if self.share_url is None: - self.share_url = networking.setup_tunnel( - self.server_name, self.server_port, self.share_token - ) - print(strings.en["SHARE_LINK_DISPLAY"].format(self.share_url)) - if not (quiet): - print(strings.en["SHARE_LINK_MESSAGE"]) - except (RuntimeError, requests.exceptions.ConnectionError): - if self.analytics_enabled: - analytics.error_analytics("Not able to set up tunnel") - self.share_url = None - self.share = False - if Path(BINARY_PATH).exists(): - print(strings.en["COULD_NOT_GET_SHARE_LINK"]) - else: - print( - strings.en["COULD_NOT_GET_SHARE_LINK_MISSING_FILE"].format( - BINARY_PATH, - BINARY_URL, - BINARY_FILENAME, - BINARY_FOLDER, - ) - ) - else: - if not (quiet): - print(strings.en["PUBLIC_SHARE_TRUE"]) - self.share_url = None - - if inbrowser: - link = self.share_url if self.share and self.share_url else self.local_url - webbrowser.open(link) - - # Check if running in a Python notebook in which case, display inline - if inline is None: - inline = utils.ipython_check() - if inline: - try: - from IPython.display import HTML, Javascript, display # type: ignore - - if self.share and self.share_url: - while not networking.url_ok(self.share_url): - time.sleep(0.25) - display( - HTML( - f'
    ' - ) - ) - elif self.is_colab: - # modified from /usr/local/lib/python3.7/dist-packages/google/colab/output/_util.py within Colab environment - code = """(async (port, path, width, height, cache, element) => { - if (!google.colab.kernel.accessAllowed && !cache) { - return; - } - element.appendChild(document.createTextNode('')); - const url = await google.colab.kernel.proxyPort(port, {cache}); - - const external_link = document.createElement('div'); - external_link.innerHTML = ` - - `; - element.appendChild(external_link); - - const iframe = document.createElement('iframe'); - iframe.src = new URL(path, url).toString(); - iframe.height = height; - iframe.allow = "autoplay; camera; microphone; clipboard-read; clipboard-write;" - iframe.width = width; - iframe.style.border = 0; - element.appendChild(iframe); - })""" + "({port}, {path}, {width}, {height}, {cache}, window.element)".format( - port=json.dumps(self.server_port), - path=json.dumps("/"), - width=json.dumps(self.width), - height=json.dumps(self.height), - cache=json.dumps(False), - ) - - display(Javascript(code)) - else: - display( - HTML( - f'
    ' - ) - ) - except ImportError: - pass - - if getattr(self, "analytics_enabled", False): - data = { - "launch_method": "browser" if inbrowser else "inline", - "is_google_colab": self.is_colab, - "is_sharing_on": self.share, - "share_url": self.share_url, - "enable_queue": self.enable_queue, - "show_tips": self.show_tips, - "server_name": server_name, - "server_port": server_port, - "is_space": self.space_id is not None, - "mode": self.mode, - } - analytics.launched_analytics(self, data) - - utils.show_tip(self) - - # Block main thread if debug==True - if debug or int(os.getenv("GRADIO_DEBUG", 0)) == 1: - self.block_thread() - # Block main thread if running in a script to stop script from exiting - is_in_interactive_mode = bool(getattr(sys, "ps1", sys.flags.interactive)) - - if not prevent_thread_lock and not is_in_interactive_mode: - self.block_thread() - - return TupleNoPrint((self.server_app, self.local_url, self.share_url)) - - def integrate( - self, - comet_ml=None, - wandb: ModuleType | None = None, - mlflow: ModuleType | None = None, - ) -> None: - """ - A catch-all method for integrating with other libraries. This method should be run after launch() - Parameters: - comet_ml: If a comet_ml Experiment object is provided, will integrate with the experiment and appear on Comet dashboard - wandb: If the wandb module is provided, will integrate with it and appear on WandB dashboard - mlflow: If the mlflow module is provided, will integrate with the experiment and appear on ML Flow dashboard - """ - analytics_integration = "" - if comet_ml is not None: - analytics_integration = "CometML" - comet_ml.log_other("Created from", "Gradio") - if self.share_url is not None: - comet_ml.log_text(f"gradio: {self.share_url}") - comet_ml.end() - elif self.local_url: - comet_ml.log_text(f"gradio: {self.local_url}") - comet_ml.end() - else: - raise ValueError("Please run `launch()` first.") - if wandb is not None: - analytics_integration = "WandB" - if self.share_url is not None: - wandb.log( - { - "Gradio panel": wandb.Html( - '' - ) - } - ) - else: - print( - "The WandB integration requires you to " - "`launch(share=True)` first." - ) - if mlflow is not None: - analytics_integration = "MLFlow" - if self.share_url is not None: - mlflow.log_param("Gradio Interface Share Link", self.share_url) - else: - mlflow.log_param("Gradio Interface Local Link", self.local_url) - if self.analytics_enabled and analytics_integration: - data = {"integration": analytics_integration} - analytics.integration_analytics(data) - - def close(self, verbose: bool = True) -> None: - """ - Closes the Interface that was launched and frees the port. - """ - try: - if self.enable_queue: - self._queue.close() - if self.server: - self.server.close() - self.is_running = False - # So that the startup events (starting the queue) - # happen the next time the app is launched - self.app.startup_events_triggered = False - if verbose: - print(f"Closing server running on port: {self.server_port}") - except (AttributeError, OSError): # can't close if not running - pass - - def block_thread( - self, - ) -> None: - """Block main thread until interrupted by user.""" - try: - while True: - time.sleep(0.1) - except (KeyboardInterrupt, OSError): - print("Keyboard interruption in main thread... 
closing server.") - if self.server: - self.server.close() - for tunnel in CURRENT_TUNNELS: - tunnel.kill() - - def attach_load_events(self): - """Add a load event for every component whose initial value should be randomized.""" - if Context.root_block: - for component in Context.root_block.blocks.values(): - if ( - isinstance(component, components.IOComponent) - and component.load_event_to_attach - ): - load_fn, every = component.load_event_to_attach - # Use set_event_trigger to avoid ambiguity between load class/instance method - dep = self.set_event_trigger( - "load", - load_fn, - None, - component, - no_target=True, - # If every is None, for sure skip the queue - # else, let the enable_queue parameter take precedence - # this will raise a nice error message is every is used - # without queue - queue=False if every is None else None, - every=every, - )[0] - component.load_event = dep - - def startup_events(self): - """Events that should be run when the app containing this block starts up.""" - - if self.enable_queue: - utils.run_coro_in_background(self._queue.start, self.ssl_verify) - # So that processing can resume in case the queue was stopped - self._queue.stopped = False - utils.run_coro_in_background(self.create_limiter) - - def queue_enabled_for_fn(self, fn_index: int): - if self.dependencies[fn_index]["queue"] is None: - return self.enable_queue - return self.dependencies[fn_index]["queue"] diff --git a/spaces/cihyFjudo/fairness-paper-search/The Secrets of Di Pwedeng Hindi Pwede How Robin Padilla and Vina Morales Created Chemistry on Screen.md b/spaces/cihyFjudo/fairness-paper-search/The Secrets of Di Pwedeng Hindi Pwede How Robin Padilla and Vina Morales Created Chemistry on Screen.md deleted file mode 100644 index 2bf6159fe6615a4463d909ff9a9320702069caed..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/The Secrets of Di Pwedeng Hindi Pwede How Robin Padilla and Vina Morales Created Chemistry on Screen.md +++ /dev/null @@ -1,6 +0,0 @@ -

    diff --git a/spaces/ck46/extractive_summaries/app.py b/spaces/ck46/extractive_summaries/app.py deleted file mode 100644 index 6fdb0e93db180b44aa60398843baeb61357eaf46..0000000000000000000000000000000000000000 --- a/spaces/ck46/extractive_summaries/app.py +++ /dev/null @@ -1,41 +0,0 @@ -import streamlit as st - -from paraphraser import get_key_sentences, ParaphraseModel - -paraphraser = ParaphraseModel() - - -# Add a model selector to the sidebar: -#model = st.sidebar.selectbox( -# 'Select Model', -# ('T5-base', 'DistilT5-base', 'T5-small') -#) - -top_n = st.sidebar.slider('Top_n', 1, 20, 5) -diversity = st.sidebar.slider('Diversity', 0.0, 1.0, 0.6) - - -top_k = st.sidebar.slider('Top_K', 100, 300, 168) -top_p = st.sidebar.slider('Top_P', 0.0, 1.0, 0.95) - -st.header("Bullet-point Summarization") -#st.write(f'Model in use: {model}') - -txt = st.text_area('Text to analyze', ) - -if len(txt) >= 1: - key_sentences = get_key_sentences(txt, top_n=top_n, diversity=('mmr', diversity)) - sentences = [] - for i in sorted(key_sentences): - sentences.append(key_sentences[i]) - - paraphrased_sentences = paraphraser(sentences, top_k=top_k, top_p=top_p, num_sequences=1) -else: - sentences = [] - paraphrased_sentences = [] - -st.header('Extracted Key Sentences') -st.write(sentences) - -st.header('Paraphrase results') -st.write(paraphrased_sentences) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiofiles/threadpool/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiofiles/threadpool/__init__.py deleted file mode 100644 index a1cc673d1a7398f23a1e8f00c19cef1cafa906c2..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiofiles/threadpool/__init__.py +++ /dev/null @@ -1,141 +0,0 @@ -"""Handle files using a thread pool executor.""" -import asyncio -import sys -from functools import partial, singledispatch -from io import ( - BufferedIOBase, - BufferedRandom, - BufferedReader, - BufferedWriter, - FileIO, - TextIOBase, -) -from types import coroutine - -from ..base import AiofilesContextManager -from .binary import ( - AsyncBufferedIOBase, - AsyncBufferedReader, - AsyncFileIO, - AsyncIndirectBufferedIOBase, -) -from .text import AsyncTextIndirectIOWrapper, AsyncTextIOWrapper - -sync_open = open - -__all__ = ( - "open", - "stdin", - "stdout", - "stderr", - "stdin_bytes", - "stdout_bytes", - "stderr_bytes", -) - - -def open( - file, - mode="r", - buffering=-1, - encoding=None, - errors=None, - newline=None, - closefd=True, - opener=None, - *, - loop=None, - executor=None, -): - return AiofilesContextManager( - _open( - file, - mode=mode, - buffering=buffering, - encoding=encoding, - errors=errors, - newline=newline, - closefd=closefd, - opener=opener, - loop=loop, - executor=executor, - ) - ) - - -@coroutine -def _open( - file, - mode="r", - buffering=-1, - encoding=None, - errors=None, - newline=None, - closefd=True, - opener=None, - *, - loop=None, - executor=None, -): - """Open an asyncio file.""" - if loop is None: - loop = asyncio.get_running_loop() - cb = partial( - sync_open, - file, - mode=mode, - buffering=buffering, - encoding=encoding, - errors=errors, - newline=newline, - closefd=closefd, - opener=opener, - ) - f = yield from loop.run_in_executor(executor, cb) - - return wrap(f, loop=loop, executor=executor) - - -@singledispatch -def wrap(file, *, loop=None, executor=None): - raise TypeError("Unsupported io type: {}.".format(file)) - - -@wrap.register(TextIOBase) 
-def _(file, *, loop=None, executor=None): - return AsyncTextIOWrapper(file, loop=loop, executor=executor) - - -@wrap.register(BufferedWriter) -@wrap.register(BufferedIOBase) -def _(file, *, loop=None, executor=None): - return AsyncBufferedIOBase(file, loop=loop, executor=executor) - - -@wrap.register(BufferedReader) -@wrap.register(BufferedRandom) -def _(file, *, loop=None, executor=None): - return AsyncBufferedReader(file, loop=loop, executor=executor) - - -@wrap.register(FileIO) -def _(file, *, loop=None, executor=None): - return AsyncFileIO(file, loop=loop, executor=executor) - - -stdin = AsyncTextIndirectIOWrapper("sys.stdin", None, None, indirect=lambda: sys.stdin) -stdout = AsyncTextIndirectIOWrapper( - "sys.stdout", None, None, indirect=lambda: sys.stdout -) -stderr = AsyncTextIndirectIOWrapper( - "sys.stderr", None, None, indirect=lambda: sys.stderr -) -stdin_bytes = AsyncIndirectBufferedIOBase( - "sys.stdin.buffer", None, None, indirect=lambda: sys.stdin.buffer -) -stdout_bytes = AsyncIndirectBufferedIOBase( - "sys.stdout.buffer", None, None, indirect=lambda: sys.stdout.buffer -) -stderr_bytes = AsyncIndirectBufferedIOBase( - "sys.stderr.buffer", None, None, indirect=lambda: sys.stderr.buffer -) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_exceptions.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_exceptions.py deleted file mode 100644 index 92ccd77a2de2e865e92c5e6943a66bdaff91f840..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_exceptions.py +++ /dev/null @@ -1,94 +0,0 @@ -from __future__ import annotations - -from traceback import format_exception - - -class BrokenResourceError(Exception): - """ - Raised when trying to use a resource that has been rendered unusable due to external causes - (e.g. a send stream whose peer has disconnected). - """ - - -class BrokenWorkerProcess(Exception): - """ - Raised by :func:`run_sync_in_process` if the worker process terminates abruptly or otherwise - misbehaves. - """ - - -class BusyResourceError(Exception): - """Raised when two tasks are trying to read from or write to the same resource concurrently.""" - - def __init__(self, action: str): - super().__init__(f"Another task is already {action} this resource") - - -class ClosedResourceError(Exception): - """Raised when trying to use a resource that has been closed.""" - - -class DelimiterNotFound(Exception): - """ - Raised during :meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_until` if the - maximum number of bytes has been read without the delimiter being found. - """ - - def __init__(self, max_bytes: int) -> None: - super().__init__( - f"The delimiter was not found among the first {max_bytes} bytes" - ) - - -class EndOfStream(Exception): - """Raised when trying to read from a stream that has been closed from the other end.""" - - -class ExceptionGroup(BaseException): - """ - Raised when multiple exceptions have been raised in a task group. 
- - :var ~typing.Sequence[BaseException] exceptions: the sequence of exceptions raised together - """ - - SEPARATOR = "----------------------------\n" - - exceptions: list[BaseException] - - def __str__(self) -> str: - tracebacks = [ - "".join(format_exception(type(exc), exc, exc.__traceback__)) - for exc in self.exceptions - ] - return ( - f"{len(self.exceptions)} exceptions were raised in the task group:\n" - f"{self.SEPARATOR}{self.SEPARATOR.join(tracebacks)}" - ) - - def __repr__(self) -> str: - exception_reprs = ", ".join(repr(exc) for exc in self.exceptions) - return f"<{self.__class__.__name__}: {exception_reprs}>" - - -class IncompleteRead(Exception): - """ - Raised during :meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_exactly` or - :meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_until` if the - connection is closed before the requested amount of bytes has been read. - """ - - def __init__(self) -> None: - super().__init__( - "The stream was closed before the read operation could be completed" - ) - - -class TypedAttributeLookupError(LookupError): - """ - Raised by :meth:`~anyio.TypedAttributeProvider.extra` when the given typed attribute is not - found and no default value has been given. - """ - - -class WouldBlock(Exception): - """Raised by ``X_nowait`` functions if ``X()`` would block.""" diff --git a/spaces/cncn102/bingo1/src/components/chat-notification.tsx b/spaces/cncn102/bingo1/src/components/chat-notification.tsx deleted file mode 100644 index 3474e522992c43a4d1d0eadcf205a9760d5b930b..0000000000000000000000000000000000000000 --- a/spaces/cncn102/bingo1/src/components/chat-notification.tsx +++ /dev/null @@ -1,91 +0,0 @@ -import { useEffect } from 'react' -import Image from 'next/image' - -import IconWarning from '@/assets/images/warning.svg' -import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types' -import { ExternalLink } from './external-link' -import { useBing } from '@/lib/hooks/use-bing' - -export interface ChatNotificationProps extends Pick, 'bot'> { - message?: ChatMessageModel -} - -function getAction(error: ChatError, reset: () => void) { - if (error.code === ErrorCode.THROTTLE_LIMIT) { - reset() - return ( -
- You have reached the daily limit for sending messages; please switch to another account or try again in a day -
- )
- }
- if (error.code === ErrorCode.BING_IP_FORBIDDEN) {
- return (
-
- Your server or proxy has been banned; please switch to another server or retry through a proxy
-
- )
- }
- if (error.code === ErrorCode.BING_TRY_LATER) {
- return (
-
- Failed to create the conversation; please try again shortly
-
- )
- }
- if (error.code === ErrorCode.BING_FORBIDDEN) {
- return (
-
- Your account has been blacklisted; please try another account or request to be unblocked
-
- )
- }
- if (error.code === ErrorCode.CONVERSATION_LIMIT) {
- return (
- The current topic has been ended; click - Restart - to begin a new conversation -
- )
- }
- if (error.code === ErrorCode.BING_CAPTCHA) {
- return (
-
- Click to pass the human verification (captcha)
-
- )
- }
- if (error.code === ErrorCode.BING_UNAUTHORIZED) {
- reset()
- return (
- No identity information was found, or it has expired; click here to set it again
- )
- }
- return error.message
-}
-
-export function ChatNotification({ message, bot }: ChatNotificationProps) {
- useEffect(() => {
- window.scrollBy(0, 2000)
- }, [message])
-
- if (!message?.error) return
-
- return (
    -
    -
    -
    -
    - error - {getAction(message.error, () => bot.resetConversation())} -
    -
    -
    -
    -
    - ) -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dirac_arith.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dirac_arith.c deleted file mode 100644 index 69b62802302fd2d61f29e2cd4ff70f5260cc68f6..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dirac_arith.c +++ /dev/null @@ -1,123 +0,0 @@ -/* - * Copyright (C) 2007 Marco Gerards - * Copyright (C) 2009 David Conrad - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Arithmetic decoder for Dirac - * @author Marco Gerards - */ - -#include "dirac_arith.h" - - -static const uint16_t dirac_prob[256] = { - 0, 2, 5, 8, 11, 15, 20, 24, - 29, 35, 41, 47, 53, 60, 67, 74, - 82, 89, 97, 106, 114, 123, 132, 141, - 150, 160, 170, 180, 190, 201, 211, 222, - 233, 244, 256, 267, 279, 291, 303, 315, - 327, 340, 353, 366, 379, 392, 405, 419, - 433, 447, 461, 475, 489, 504, 518, 533, - 548, 563, 578, 593, 609, 624, 640, 656, - 672, 688, 705, 721, 738, 754, 771, 788, - 805, 822, 840, 857, 875, 892, 910, 928, - 946, 964, 983, 1001, 1020, 1038, 1057, 1076, - 1095, 1114, 1133, 1153, 1172, 1192, 1211, 1231, - 1251, 1271, 1291, 1311, 1332, 1352, 1373, 1393, - 1414, 1435, 1456, 1477, 1498, 1520, 1541, 1562, - 1584, 1606, 1628, 1649, 1671, 1694, 1716, 1738, - 1760, 1783, 1806, 1828, 1851, 1874, 1897, 1920, - 1935, 1942, 1949, 1955, 1961, 1968, 1974, 1980, - 1985, 1991, 1996, 2001, 2006, 2011, 2016, 2021, - 2025, 2029, 2033, 2037, 2040, 2044, 2047, 2050, - 2053, 2056, 2058, 2061, 2063, 2065, 2066, 2068, - 2069, 2070, 2071, 2072, 2072, 2072, 2072, 2072, - 2072, 2071, 2070, 2069, 2068, 2066, 2065, 2063, - 2060, 2058, 2055, 2052, 2049, 2045, 2042, 2038, - 2033, 2029, 2024, 2019, 2013, 2008, 2002, 1996, - 1989, 1982, 1975, 1968, 1960, 1952, 1943, 1934, - 1925, 1916, 1906, 1896, 1885, 1874, 1863, 1851, - 1839, 1827, 1814, 1800, 1786, 1772, 1757, 1742, - 1727, 1710, 1694, 1676, 1659, 1640, 1622, 1602, - 1582, 1561, 1540, 1518, 1495, 1471, 1447, 1422, - 1396, 1369, 1341, 1312, 1282, 1251, 1219, 1186, - 1151, 1114, 1077, 1037, 995, 952, 906, 857, - 805, 750, 690, 625, 553, 471, 376, 255 -}; - -const uint8_t ff_dirac_next_ctx[DIRAC_CTX_COUNT] = { - [CTX_ZPZN_F1] = CTX_ZP_F2, - [CTX_ZPNN_F1] = CTX_ZP_F2, - [CTX_ZP_F2] = CTX_ZP_F3, - [CTX_ZP_F3] = CTX_ZP_F4, - [CTX_ZP_F4] = CTX_ZP_F5, - [CTX_ZP_F5] = CTX_ZP_F6, - [CTX_ZP_F6] = CTX_ZP_F6, - [CTX_NPZN_F1] = CTX_NP_F2, - [CTX_NPNN_F1] = CTX_NP_F2, - [CTX_NP_F2] = CTX_NP_F3, - [CTX_NP_F3] = CTX_NP_F4, - [CTX_NP_F4] = CTX_NP_F5, - [CTX_NP_F5] = CTX_NP_F6, - [CTX_NP_F6] = CTX_NP_F6, - [CTX_DELTA_Q_F] = CTX_DELTA_Q_F, -}; - -int16_t ff_dirac_prob_branchless[256][2]; - -av_cold void ff_dirac_init_arith_tables(void) -{ - int i; - - for (i = 0; i < 256; i++) { - ff_dirac_prob_branchless[i][0] = 
dirac_prob[255-i]; - ff_dirac_prob_branchless[i][1] = -dirac_prob[i]; - } -} - -void ff_dirac_init_arith_decoder(DiracArith *c, GetBitContext *gb, int length) -{ - int i; - align_get_bits(gb); - - length = FFMIN(length, get_bits_left(gb)/8); - - c->bytestream = gb->buffer + get_bits_count(gb)/8; - c->bytestream_end = c->bytestream + length; - skip_bits_long(gb, length*8); - - c->low = 0; - for (i = 0; i < 4; i++) { - c->low <<= 8; - if (c->bytestream < c->bytestream_end) - c->low |= *c->bytestream++; - else - c->low |= 0xff; - } - - c->counter = -16; - c->range = 0xffff; - c->error = 0; - c->overread= 0; - - for (i = 0; i < DIRAC_CTX_COUNT; i++) - c->contexts[i] = 0x8000; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_cabac.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_cabac.c deleted file mode 100644 index 6b38da84bdca88ce1201c14a1c0dc60f7b4ac8b7..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_cabac.c +++ /dev/null @@ -1,1571 +0,0 @@ -/* - * HEVC CABAC decoding - * - * Copyright (C) 2012 - 2013 Guillaume Martres - * Copyright (C) 2012 - 2013 Gildas Cocherel - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/attributes.h" -#include "libavutil/common.h" - -#include "cabac_functions.h" -#include "hevc_data.h" -#include "hevc.h" -#include "hevcdec.h" - -#define CABAC_MAX_BIN 31 - -/** - * number of bin by SyntaxElement. 
- */ -static const int8_t num_bins_in_se[] = { - 1, // sao_merge_flag - 1, // sao_type_idx - 0, // sao_eo_class - 0, // sao_band_position - 0, // sao_offset_abs - 0, // sao_offset_sign - 0, // end_of_slice_flag - 3, // split_coding_unit_flag - 1, // cu_transquant_bypass_flag - 3, // skip_flag - 3, // cu_qp_delta - 1, // pred_mode - 4, // part_mode - 0, // pcm_flag - 1, // prev_intra_luma_pred_mode - 0, // mpm_idx - 0, // rem_intra_luma_pred_mode - 2, // intra_chroma_pred_mode - 1, // merge_flag - 1, // merge_idx - 5, // inter_pred_idc - 2, // ref_idx_l0 - 2, // ref_idx_l1 - 2, // abs_mvd_greater0_flag - 2, // abs_mvd_greater1_flag - 0, // abs_mvd_minus2 - 0, // mvd_sign_flag - 1, // mvp_lx_flag - 1, // no_residual_data_flag - 3, // split_transform_flag - 2, // cbf_luma - 5, // cbf_cb, cbf_cr - 2, // transform_skip_flag[][] - 2, // explicit_rdpcm_flag[][] - 2, // explicit_rdpcm_dir_flag[][] - 18, // last_significant_coeff_x_prefix - 18, // last_significant_coeff_y_prefix - 0, // last_significant_coeff_x_suffix - 0, // last_significant_coeff_y_suffix - 4, // significant_coeff_group_flag - 44, // significant_coeff_flag - 24, // coeff_abs_level_greater1_flag - 6, // coeff_abs_level_greater2_flag - 0, // coeff_abs_level_remaining - 0, // coeff_sign_flag - 8, // log2_res_scale_abs - 2, // res_scale_sign_flag - 1, // cu_chroma_qp_offset_flag - 1, // cu_chroma_qp_offset_idx -}; - -/** - * Offset to ctxIdx 0 in init_values and states, indexed by SyntaxElement. - */ -static const int elem_offset[sizeof(num_bins_in_se)] = { - 0, // sao_merge_flag - 1, // sao_type_idx - 2, // sao_eo_class - 2, // sao_band_position - 2, // sao_offset_abs - 2, // sao_offset_sign - 2, // end_of_slice_flag - 2, // split_coding_unit_flag - 5, // cu_transquant_bypass_flag - 6, // skip_flag - 9, // cu_qp_delta - 12, // pred_mode - 13, // part_mode - 17, // pcm_flag - 17, // prev_intra_luma_pred_mode - 18, // mpm_idx - 18, // rem_intra_luma_pred_mode - 18, // intra_chroma_pred_mode - 20, // merge_flag - 21, // merge_idx - 22, // inter_pred_idc - 27, // ref_idx_l0 - 29, // ref_idx_l1 - 31, // abs_mvd_greater0_flag - 33, // abs_mvd_greater1_flag - 35, // abs_mvd_minus2 - 35, // mvd_sign_flag - 35, // mvp_lx_flag - 36, // no_residual_data_flag - 37, // split_transform_flag - 40, // cbf_luma - 42, // cbf_cb, cbf_cr - 47, // transform_skip_flag[][] - 49, // explicit_rdpcm_flag[][] - 51, // explicit_rdpcm_dir_flag[][] - 53, // last_significant_coeff_x_prefix - 71, // last_significant_coeff_y_prefix - 89, // last_significant_coeff_x_suffix - 89, // last_significant_coeff_y_suffix - 89, // significant_coeff_group_flag - 93, // significant_coeff_flag - 137, // coeff_abs_level_greater1_flag - 161, // coeff_abs_level_greater2_flag - 167, // coeff_abs_level_remaining - 167, // coeff_sign_flag - 167, // log2_res_scale_abs - 175, // res_scale_sign_flag - 177, // cu_chroma_qp_offset_flag - 178, // cu_chroma_qp_offset_idx -}; - -#define CNU 154 -/** - * Indexed by init_type - */ -static const uint8_t init_values[3][HEVC_CONTEXTS] = { - { // sao_merge_flag - 153, - // sao_type_idx - 200, - // split_coding_unit_flag - 139, 141, 157, - // cu_transquant_bypass_flag - 154, - // skip_flag - CNU, CNU, CNU, - // cu_qp_delta - 154, 154, 154, - // pred_mode - CNU, - // part_mode - 184, CNU, CNU, CNU, - // prev_intra_luma_pred_mode - 184, - // intra_chroma_pred_mode - 63, 139, - // merge_flag - CNU, - // merge_idx - CNU, - // inter_pred_idc - CNU, CNU, CNU, CNU, CNU, - // ref_idx_l0 - CNU, CNU, - // ref_idx_l1 - CNU, CNU, - // abs_mvd_greater1_flag - 
CNU, CNU, - // abs_mvd_greater1_flag - CNU, CNU, - // mvp_lx_flag - CNU, - // no_residual_data_flag - CNU, - // split_transform_flag - 153, 138, 138, - // cbf_luma - 111, 141, - // cbf_cb, cbf_cr - 94, 138, 182, 154, 154, - // transform_skip_flag - 139, 139, - // explicit_rdpcm_flag - 139, 139, - // explicit_rdpcm_dir_flag - 139, 139, - // last_significant_coeff_x_prefix - 110, 110, 124, 125, 140, 153, 125, 127, 140, 109, 111, 143, 127, 111, - 79, 108, 123, 63, - // last_significant_coeff_y_prefix - 110, 110, 124, 125, 140, 153, 125, 127, 140, 109, 111, 143, 127, 111, - 79, 108, 123, 63, - // significant_coeff_group_flag - 91, 171, 134, 141, - // significant_coeff_flag - 111, 111, 125, 110, 110, 94, 124, 108, 124, 107, 125, 141, 179, 153, - 125, 107, 125, 141, 179, 153, 125, 107, 125, 141, 179, 153, 125, 140, - 139, 182, 182, 152, 136, 152, 136, 153, 136, 139, 111, 136, 139, 111, - 141, 111, - // coeff_abs_level_greater1_flag - 140, 92, 137, 138, 140, 152, 138, 139, 153, 74, 149, 92, 139, 107, - 122, 152, 140, 179, 166, 182, 140, 227, 122, 197, - // coeff_abs_level_greater2_flag - 138, 153, 136, 167, 152, 152, - // log2_res_scale_abs - 154, 154, 154, 154, 154, 154, 154, 154, - // res_scale_sign_flag - 154, 154, - // cu_chroma_qp_offset_flag - 154, - // cu_chroma_qp_offset_idx - 154, - }, - { // sao_merge_flag - 153, - // sao_type_idx - 185, - // split_coding_unit_flag - 107, 139, 126, - // cu_transquant_bypass_flag - 154, - // skip_flag - 197, 185, 201, - // cu_qp_delta - 154, 154, 154, - // pred_mode - 149, - // part_mode - 154, 139, 154, 154, - // prev_intra_luma_pred_mode - 154, - // intra_chroma_pred_mode - 152, 139, - // merge_flag - 110, - // merge_idx - 122, - // inter_pred_idc - 95, 79, 63, 31, 31, - // ref_idx_l0 - 153, 153, - // ref_idx_l1 - 153, 153, - // abs_mvd_greater1_flag - 140, 198, - // abs_mvd_greater1_flag - 140, 198, - // mvp_lx_flag - 168, - // no_residual_data_flag - 79, - // split_transform_flag - 124, 138, 94, - // cbf_luma - 153, 111, - // cbf_cb, cbf_cr - 149, 107, 167, 154, 154, - // transform_skip_flag - 139, 139, - // explicit_rdpcm_flag - 139, 139, - // explicit_rdpcm_dir_flag - 139, 139, - // last_significant_coeff_x_prefix - 125, 110, 94, 110, 95, 79, 125, 111, 110, 78, 110, 111, 111, 95, - 94, 108, 123, 108, - // last_significant_coeff_y_prefix - 125, 110, 94, 110, 95, 79, 125, 111, 110, 78, 110, 111, 111, 95, - 94, 108, 123, 108, - // significant_coeff_group_flag - 121, 140, 61, 154, - // significant_coeff_flag - 155, 154, 139, 153, 139, 123, 123, 63, 153, 166, 183, 140, 136, 153, - 154, 166, 183, 140, 136, 153, 154, 166, 183, 140, 136, 153, 154, 170, - 153, 123, 123, 107, 121, 107, 121, 167, 151, 183, 140, 151, 183, 140, - 140, 140, - // coeff_abs_level_greater1_flag - 154, 196, 196, 167, 154, 152, 167, 182, 182, 134, 149, 136, 153, 121, - 136, 137, 169, 194, 166, 167, 154, 167, 137, 182, - // coeff_abs_level_greater2_flag - 107, 167, 91, 122, 107, 167, - // log2_res_scale_abs - 154, 154, 154, 154, 154, 154, 154, 154, - // res_scale_sign_flag - 154, 154, - // cu_chroma_qp_offset_flag - 154, - // cu_chroma_qp_offset_idx - 154, - }, - { // sao_merge_flag - 153, - // sao_type_idx - 160, - // split_coding_unit_flag - 107, 139, 126, - // cu_transquant_bypass_flag - 154, - // skip_flag - 197, 185, 201, - // cu_qp_delta - 154, 154, 154, - // pred_mode - 134, - // part_mode - 154, 139, 154, 154, - // prev_intra_luma_pred_mode - 183, - // intra_chroma_pred_mode - 152, 139, - // merge_flag - 154, - // merge_idx - 137, - // inter_pred_idc - 95, 79, 63, 31, 31, - // 
ref_idx_l0 - 153, 153, - // ref_idx_l1 - 153, 153, - // abs_mvd_greater1_flag - 169, 198, - // abs_mvd_greater1_flag - 169, 198, - // mvp_lx_flag - 168, - // no_residual_data_flag - 79, - // split_transform_flag - 224, 167, 122, - // cbf_luma - 153, 111, - // cbf_cb, cbf_cr - 149, 92, 167, 154, 154, - // transform_skip_flag - 139, 139, - // explicit_rdpcm_flag - 139, 139, - // explicit_rdpcm_dir_flag - 139, 139, - // last_significant_coeff_x_prefix - 125, 110, 124, 110, 95, 94, 125, 111, 111, 79, 125, 126, 111, 111, - 79, 108, 123, 93, - // last_significant_coeff_y_prefix - 125, 110, 124, 110, 95, 94, 125, 111, 111, 79, 125, 126, 111, 111, - 79, 108, 123, 93, - // significant_coeff_group_flag - 121, 140, 61, 154, - // significant_coeff_flag - 170, 154, 139, 153, 139, 123, 123, 63, 124, 166, 183, 140, 136, 153, - 154, 166, 183, 140, 136, 153, 154, 166, 183, 140, 136, 153, 154, 170, - 153, 138, 138, 122, 121, 122, 121, 167, 151, 183, 140, 151, 183, 140, - 140, 140, - // coeff_abs_level_greater1_flag - 154, 196, 167, 167, 154, 152, 167, 182, 182, 134, 149, 136, 153, 121, - 136, 122, 169, 208, 166, 167, 154, 152, 167, 182, - // coeff_abs_level_greater2_flag - 107, 167, 91, 107, 107, 167, - // log2_res_scale_abs - 154, 154, 154, 154, 154, 154, 154, 154, - // res_scale_sign_flag - 154, 154, - // cu_chroma_qp_offset_flag - 154, - // cu_chroma_qp_offset_idx - 154, - }, -}; - -static const uint8_t scan_1x1[1] = { - 0, -}; - -static const uint8_t horiz_scan2x2_x[4] = { - 0, 1, 0, 1, -}; - -static const uint8_t horiz_scan2x2_y[4] = { - 0, 0, 1, 1 -}; - -static const uint8_t horiz_scan4x4_x[16] = { - 0, 1, 2, 3, - 0, 1, 2, 3, - 0, 1, 2, 3, - 0, 1, 2, 3, -}; - -static const uint8_t horiz_scan4x4_y[16] = { - 0, 0, 0, 0, - 1, 1, 1, 1, - 2, 2, 2, 2, - 3, 3, 3, 3, -}; - -static const uint8_t horiz_scan8x8_inv[8][8] = { - { 0, 1, 2, 3, 16, 17, 18, 19, }, - { 4, 5, 6, 7, 20, 21, 22, 23, }, - { 8, 9, 10, 11, 24, 25, 26, 27, }, - { 12, 13, 14, 15, 28, 29, 30, 31, }, - { 32, 33, 34, 35, 48, 49, 50, 51, }, - { 36, 37, 38, 39, 52, 53, 54, 55, }, - { 40, 41, 42, 43, 56, 57, 58, 59, }, - { 44, 45, 46, 47, 60, 61, 62, 63, }, -}; - -static const uint8_t diag_scan2x2_x[4] = { - 0, 0, 1, 1, -}; - -static const uint8_t diag_scan2x2_y[4] = { - 0, 1, 0, 1, -}; - -static const uint8_t diag_scan2x2_inv[2][2] = { - { 0, 2, }, - { 1, 3, }, -}; - -static const uint8_t diag_scan4x4_inv[4][4] = { - { 0, 2, 5, 9, }, - { 1, 4, 8, 12, }, - { 3, 7, 11, 14, }, - { 6, 10, 13, 15, }, -}; - -static const uint8_t diag_scan8x8_inv[8][8] = { - { 0, 2, 5, 9, 14, 20, 27, 35, }, - { 1, 4, 8, 13, 19, 26, 34, 42, }, - { 3, 7, 12, 18, 25, 33, 41, 48, }, - { 6, 11, 17, 24, 32, 40, 47, 53, }, - { 10, 16, 23, 31, 39, 46, 52, 57, }, - { 15, 22, 30, 38, 45, 51, 56, 60, }, - { 21, 29, 37, 44, 50, 55, 59, 62, }, - { 28, 36, 43, 49, 54, 58, 61, 63, }, -}; - -void ff_hevc_save_states(HEVCLocalContext *lc, int ctb_addr_ts) -{ - const HEVCContext *const s = lc->parent; - - if (s->ps.pps->entropy_coding_sync_enabled_flag && - (ctb_addr_ts % s->ps.sps->ctb_width == 2 || - (s->ps.sps->ctb_width == 2 && - ctb_addr_ts % s->ps.sps->ctb_width == 0))) { - memcpy(lc->common_cabac_state->state, lc->cabac_state, HEVC_CONTEXTS); - if (s->ps.sps->persistent_rice_adaptation_enabled_flag) { - memcpy(lc->common_cabac_state->stat_coeff, lc->stat_coeff, HEVC_STAT_COEFFS); - } - } -} - -static void load_states(HEVCLocalContext *lc, const HEVCContext *s) -{ - memcpy(lc->cabac_state, lc->common_cabac_state->state, HEVC_CONTEXTS); - if 
(s->ps.sps->persistent_rice_adaptation_enabled_flag) { - memcpy(lc->stat_coeff, lc->common_cabac_state->stat_coeff, HEVC_STAT_COEFFS); - } -} - -static int cabac_reinit(HEVCLocalContext *lc) -{ - return skip_bytes(&lc->cc, 0) == NULL ? AVERROR_INVALIDDATA : 0; -} - -static int cabac_init_decoder(HEVCLocalContext *lc) -{ - GetBitContext *gb = &lc->gb; - skip_bits(gb, 1); - align_get_bits(gb); - return ff_init_cabac_decoder(&lc->cc, - gb->buffer + get_bits_count(gb) / 8, - (get_bits_left(gb) + 7) / 8); -} - -static void cabac_init_state(HEVCLocalContext *lc, const HEVCContext *s) -{ - int init_type = 2 - s->sh.slice_type; - int i; - - if (s->sh.cabac_init_flag && s->sh.slice_type != HEVC_SLICE_I) - init_type ^= 3; - - for (i = 0; i < HEVC_CONTEXTS; i++) { - int init_value = init_values[init_type][i]; - int m = (init_value >> 4) * 5 - 45; - int n = ((init_value & 15) << 3) - 16; - int pre = 2 * (((m * av_clip(s->sh.slice_qp, 0, 51)) >> 4) + n) - 127; - - pre ^= pre >> 31; - if (pre > 124) - pre = 124 + (pre & 1); - lc->cabac_state[i] = pre; - } - - for (i = 0; i < 4; i++) - lc->stat_coeff[i] = 0; -} - -int ff_hevc_cabac_init(HEVCLocalContext *lc, int ctb_addr_ts) -{ - const HEVCContext *const s = lc->parent; - - if (ctb_addr_ts == s->ps.pps->ctb_addr_rs_to_ts[s->sh.slice_ctb_addr_rs]) { - int ret = cabac_init_decoder(lc); - if (ret < 0) - return ret; - if (s->sh.dependent_slice_segment_flag == 0 || - (s->ps.pps->tiles_enabled_flag && - s->ps.pps->tile_id[ctb_addr_ts] != s->ps.pps->tile_id[ctb_addr_ts - 1])) - cabac_init_state(lc, s); - - if (!s->sh.first_slice_in_pic_flag && - s->ps.pps->entropy_coding_sync_enabled_flag) { - if (ctb_addr_ts % s->ps.sps->ctb_width == 0) { - if (s->ps.sps->ctb_width == 1) - cabac_init_state(lc, s); - else if (s->sh.dependent_slice_segment_flag == 1) - load_states(lc, s); - } - } - } else { - if (s->ps.pps->tiles_enabled_flag && - s->ps.pps->tile_id[ctb_addr_ts] != s->ps.pps->tile_id[ctb_addr_ts - 1]) { - int ret; - if (s->threads_number == 1) - ret = cabac_reinit(lc); - else { - ret = cabac_init_decoder(lc); - } - if (ret < 0) - return ret; - cabac_init_state(lc, s); - } - if (s->ps.pps->entropy_coding_sync_enabled_flag) { - if (ctb_addr_ts % s->ps.sps->ctb_width == 0) { - int ret; - get_cabac_terminate(&lc->cc); - if (s->threads_number == 1) - ret = cabac_reinit(lc); - else { - ret = cabac_init_decoder(lc); - } - if (ret < 0) - return ret; - - if (s->ps.sps->ctb_width == 1) - cabac_init_state(lc, s); - else - load_states(lc, s); - } - } - } - return 0; -} - -#define GET_CABAC(ctx) get_cabac(&lc->cc, &lc->cabac_state[ctx]) - -int ff_hevc_sao_merge_flag_decode(HEVCLocalContext *lc) -{ - return GET_CABAC(elem_offset[SAO_MERGE_FLAG]); -} - -int ff_hevc_sao_type_idx_decode(HEVCLocalContext *lc) -{ - if (!GET_CABAC(elem_offset[SAO_TYPE_IDX])) - return 0; - - if (!get_cabac_bypass(&lc->cc)) - return SAO_BAND; - return SAO_EDGE; -} - -int ff_hevc_sao_band_position_decode(HEVCLocalContext *lc) -{ - int i; - int value = get_cabac_bypass(&lc->cc); - - for (i = 0; i < 4; i++) - value = (value << 1) | get_cabac_bypass(&lc->cc); - return value; -} - -int ff_hevc_sao_offset_abs_decode(HEVCLocalContext *lc) -{ - int i = 0; - int length = (1 << (FFMIN(lc->parent->ps.sps->bit_depth, 10) - 5)) - 1; - - while (i < length && get_cabac_bypass(&lc->cc)) - i++; - return i; -} - -int ff_hevc_sao_offset_sign_decode(HEVCLocalContext *lc) -{ - return get_cabac_bypass(&lc->cc); -} - -int ff_hevc_sao_eo_class_decode(HEVCLocalContext *lc) -{ - int ret = get_cabac_bypass(&lc->cc) << 1; - ret 
|= get_cabac_bypass(&lc->cc); - return ret; -} - -int ff_hevc_end_of_slice_flag_decode(HEVCLocalContext *lc) -{ - return get_cabac_terminate(&lc->cc); -} - -int ff_hevc_cu_transquant_bypass_flag_decode(HEVCLocalContext *lc) -{ - return GET_CABAC(elem_offset[CU_TRANSQUANT_BYPASS_FLAG]); -} - -int ff_hevc_skip_flag_decode(HEVCLocalContext *lc, int x0, int y0, int x_cb, int y_cb) -{ - const HEVCContext *const s = lc->parent; - int min_cb_width = s->ps.sps->min_cb_width; - int inc = 0; - int x0b = av_mod_uintp2(x0, s->ps.sps->log2_ctb_size); - int y0b = av_mod_uintp2(y0, s->ps.sps->log2_ctb_size); - - if (lc->ctb_left_flag || x0b) - inc = !!SAMPLE_CTB(s->skip_flag, x_cb - 1, y_cb); - if (lc->ctb_up_flag || y0b) - inc += !!SAMPLE_CTB(s->skip_flag, x_cb, y_cb - 1); - - return GET_CABAC(elem_offset[SKIP_FLAG] + inc); -} - -int ff_hevc_cu_qp_delta_abs(HEVCLocalContext *lc) -{ - int prefix_val = 0; - int suffix_val = 0; - int inc = 0; - - while (prefix_val < 5 && GET_CABAC(elem_offset[CU_QP_DELTA] + inc)) { - prefix_val++; - inc = 1; - } - if (prefix_val >= 5) { - int k = 0; - while (k < 7 && get_cabac_bypass(&lc->cc)) { - suffix_val += 1 << k; - k++; - } - if (k == 7) { - av_log(lc->logctx, AV_LOG_ERROR, "CABAC_MAX_BIN : %d\n", k); - return AVERROR_INVALIDDATA; - } - - while (k--) - suffix_val += get_cabac_bypass(&lc->cc) << k; - } - return prefix_val + suffix_val; -} - -int ff_hevc_cu_qp_delta_sign_flag(HEVCLocalContext *lc) -{ - return get_cabac_bypass(&lc->cc); -} - -int ff_hevc_cu_chroma_qp_offset_flag(HEVCLocalContext *lc) -{ - return GET_CABAC(elem_offset[CU_CHROMA_QP_OFFSET_FLAG]); -} - -int ff_hevc_cu_chroma_qp_offset_idx(HEVCLocalContext *lc) -{ - int c_max= FFMAX(5, lc->parent->ps.pps->chroma_qp_offset_list_len_minus1); - int i = 0; - - while (i < c_max && GET_CABAC(elem_offset[CU_CHROMA_QP_OFFSET_IDX])) - i++; - - return i; -} - -int ff_hevc_pred_mode_decode(HEVCLocalContext *lc) -{ - return GET_CABAC(elem_offset[PRED_MODE_FLAG]); -} - -int ff_hevc_split_coding_unit_flag_decode(HEVCLocalContext *lc, int ct_depth, int x0, int y0) -{ - const HEVCContext *const s = lc->parent; - const HEVCSPS *const sps = s->ps.sps; - int inc = 0, depth_left = 0, depth_top = 0; - int x0b = av_mod_uintp2(x0, sps->log2_ctb_size); - int y0b = av_mod_uintp2(y0, sps->log2_ctb_size); - int x_cb = x0 >> sps->log2_min_cb_size; - int y_cb = y0 >> sps->log2_min_cb_size; - - if (lc->ctb_left_flag || x0b) - depth_left = s->tab_ct_depth[(y_cb) * sps->min_cb_width + x_cb - 1]; - if (lc->ctb_up_flag || y0b) - depth_top = s->tab_ct_depth[(y_cb - 1) * sps->min_cb_width + x_cb]; - - inc += (depth_left > ct_depth); - inc += (depth_top > ct_depth); - - return GET_CABAC(elem_offset[SPLIT_CODING_UNIT_FLAG] + inc); -} - -int ff_hevc_part_mode_decode(HEVCLocalContext *lc, int log2_cb_size) -{ - if (GET_CABAC(elem_offset[PART_MODE])) // 1 - return PART_2Nx2N; - if (log2_cb_size == lc->parent->ps.sps->log2_min_cb_size) { - if (lc->cu.pred_mode == MODE_INTRA) // 0 - return PART_NxN; - if (GET_CABAC(elem_offset[PART_MODE] + 1)) // 01 - return PART_2NxN; - if (log2_cb_size == 3) // 00 - return PART_Nx2N; - if (GET_CABAC(elem_offset[PART_MODE] + 2)) // 001 - return PART_Nx2N; - return PART_NxN; // 000 - } - - if (!lc->parent->ps.sps->amp_enabled_flag) { - if (GET_CABAC(elem_offset[PART_MODE] + 1)) // 01 - return PART_2NxN; - return PART_Nx2N; - } - - if (GET_CABAC(elem_offset[PART_MODE] + 1)) { // 01X, 01XX - if (GET_CABAC(elem_offset[PART_MODE] + 3)) // 011 - return PART_2NxN; - if (get_cabac_bypass(&lc->cc)) // 0101 - return 
PART_2NxnD; - return PART_2NxnU; // 0100 - } - - if (GET_CABAC(elem_offset[PART_MODE] + 3)) // 001 - return PART_Nx2N; - if (get_cabac_bypass(&lc->cc)) // 0001 - return PART_nRx2N; - return PART_nLx2N; // 0000 -} - -int ff_hevc_pcm_flag_decode(HEVCLocalContext *lc) -{ - return get_cabac_terminate(&lc->cc); -} - -int ff_hevc_prev_intra_luma_pred_flag_decode(HEVCLocalContext *lc) -{ - return GET_CABAC(elem_offset[PREV_INTRA_LUMA_PRED_FLAG]); -} - -int ff_hevc_mpm_idx_decode(HEVCLocalContext *lc) -{ - int i = 0; - while (i < 2 && get_cabac_bypass(&lc->cc)) - i++; - return i; -} - -int ff_hevc_rem_intra_luma_pred_mode_decode(HEVCLocalContext *lc) -{ - int i; - int value = get_cabac_bypass(&lc->cc); - - for (i = 0; i < 4; i++) - value = (value << 1) | get_cabac_bypass(&lc->cc); - return value; -} - -int ff_hevc_intra_chroma_pred_mode_decode(HEVCLocalContext *lc) -{ - int ret; - if (!GET_CABAC(elem_offset[INTRA_CHROMA_PRED_MODE])) - return 4; - - ret = get_cabac_bypass(&lc->cc) << 1; - ret |= get_cabac_bypass(&lc->cc); - return ret; -} - -int ff_hevc_merge_idx_decode(HEVCLocalContext *lc) -{ - int i = GET_CABAC(elem_offset[MERGE_IDX]); - - if (i != 0) { - while (i < lc->parent->sh.max_num_merge_cand-1 && get_cabac_bypass(&lc->cc)) - i++; - } - return i; -} - -int ff_hevc_merge_flag_decode(HEVCLocalContext *lc) -{ - return GET_CABAC(elem_offset[MERGE_FLAG]); -} - -int ff_hevc_inter_pred_idc_decode(HEVCLocalContext *lc, int nPbW, int nPbH) -{ - if (nPbW + nPbH == 12) - return GET_CABAC(elem_offset[INTER_PRED_IDC] + 4); - if (GET_CABAC(elem_offset[INTER_PRED_IDC] + lc->ct_depth)) - return PRED_BI; - - return GET_CABAC(elem_offset[INTER_PRED_IDC] + 4); -} - -int ff_hevc_ref_idx_lx_decode(HEVCLocalContext *lc, int num_ref_idx_lx) -{ - int i = 0; - int max = num_ref_idx_lx - 1; - int max_ctx = FFMIN(max, 2); - - while (i < max_ctx && GET_CABAC(elem_offset[REF_IDX_L0] + i)) - i++; - if (i == 2) { - while (i < max && get_cabac_bypass(&lc->cc)) - i++; - } - - return i; -} - -int ff_hevc_mvp_lx_flag_decode(HEVCLocalContext *lc) -{ - return GET_CABAC(elem_offset[MVP_LX_FLAG]); -} - -int ff_hevc_no_residual_syntax_flag_decode(HEVCLocalContext *lc) -{ - return GET_CABAC(elem_offset[NO_RESIDUAL_DATA_FLAG]); -} - -static av_always_inline int abs_mvd_greater0_flag_decode(HEVCLocalContext *lc) -{ - return GET_CABAC(elem_offset[ABS_MVD_GREATER0_FLAG]); -} - -static av_always_inline int abs_mvd_greater1_flag_decode(HEVCLocalContext *lc) -{ - return GET_CABAC(elem_offset[ABS_MVD_GREATER1_FLAG] + 1); -} - -static av_always_inline int mvd_decode(HEVCLocalContext *lc) -{ - int ret = 2; - int k = 1; - - while (k < CABAC_MAX_BIN && get_cabac_bypass(&lc->cc)) { - ret += 1U << k; - k++; - } - if (k == CABAC_MAX_BIN) { - av_log(lc->logctx, AV_LOG_ERROR, "CABAC_MAX_BIN : %d\n", k); - return 0; - } - while (k--) - ret += get_cabac_bypass(&lc->cc) << k; - return get_cabac_bypass_sign(&lc->cc, -ret); -} - -static av_always_inline int mvd_sign_flag_decode(HEVCLocalContext *lc) -{ - return get_cabac_bypass_sign(&lc->cc, -1); -} - -int ff_hevc_split_transform_flag_decode(HEVCLocalContext *lc, int log2_trafo_size) -{ - return GET_CABAC(elem_offset[SPLIT_TRANSFORM_FLAG] + 5 - log2_trafo_size); -} - -int ff_hevc_cbf_cb_cr_decode(HEVCLocalContext *lc, int trafo_depth) -{ - return GET_CABAC(elem_offset[CBF_CB_CR] + trafo_depth); -} - -int ff_hevc_cbf_luma_decode(HEVCLocalContext *lc, int trafo_depth) -{ - return GET_CABAC(elem_offset[CBF_LUMA] + !trafo_depth); -} - -static int hevc_transform_skip_flag_decode(HEVCLocalContext *lc, int 
c_idx) -{ - return GET_CABAC(elem_offset[TRANSFORM_SKIP_FLAG] + !!c_idx); -} - -static int explicit_rdpcm_flag_decode(HEVCLocalContext *lc, int c_idx) -{ - return GET_CABAC(elem_offset[EXPLICIT_RDPCM_FLAG] + !!c_idx); -} - -static int explicit_rdpcm_dir_flag_decode(HEVCLocalContext *lc, int c_idx) -{ - return GET_CABAC(elem_offset[EXPLICIT_RDPCM_DIR_FLAG] + !!c_idx); -} - -int ff_hevc_log2_res_scale_abs(HEVCLocalContext *lc, int idx) -{ - int i =0; - - while (i < 4 && GET_CABAC(elem_offset[LOG2_RES_SCALE_ABS] + 4 * idx + i)) - i++; - - return i; -} - -int ff_hevc_res_scale_sign_flag(HEVCLocalContext *lc, int idx) -{ - return GET_CABAC(elem_offset[RES_SCALE_SIGN_FLAG] + idx); -} - -static av_always_inline void last_significant_coeff_xy_prefix_decode(HEVCLocalContext *lc, int c_idx, - int log2_size, int *last_scx_prefix, int *last_scy_prefix) -{ - int i = 0; - int max = (log2_size << 1) - 1; - int ctx_offset, ctx_shift; - - if (!c_idx) { - ctx_offset = 3 * (log2_size - 2) + ((log2_size - 1) >> 2); - ctx_shift = (log2_size + 1) >> 2; - } else { - ctx_offset = 15; - ctx_shift = log2_size - 2; - } - while (i < max && - GET_CABAC(elem_offset[LAST_SIGNIFICANT_COEFF_X_PREFIX] + (i >> ctx_shift) + ctx_offset)) - i++; - *last_scx_prefix = i; - - i = 0; - while (i < max && - GET_CABAC(elem_offset[LAST_SIGNIFICANT_COEFF_Y_PREFIX] + (i >> ctx_shift) + ctx_offset)) - i++; - *last_scy_prefix = i; -} - -static av_always_inline int last_significant_coeff_suffix_decode(HEVCLocalContext *lc, - int last_significant_coeff_prefix) -{ - int i; - int length = (last_significant_coeff_prefix >> 1) - 1; - int value = get_cabac_bypass(&lc->cc); - - for (i = 1; i < length; i++) - value = (value << 1) | get_cabac_bypass(&lc->cc); - return value; -} - -static av_always_inline int significant_coeff_group_flag_decode(HEVCLocalContext *lc, int c_idx, int ctx_cg) -{ - int inc; - - inc = FFMIN(ctx_cg, 1) + (c_idx>0 ? 
2 : 0); - - return GET_CABAC(elem_offset[SIGNIFICANT_COEFF_GROUP_FLAG] + inc); -} -static av_always_inline int significant_coeff_flag_decode(HEVCLocalContext *lc, int x_c, int y_c, - int offset, const uint8_t *ctx_idx_map) -{ - int inc = ctx_idx_map[(y_c << 2) + x_c] + offset; - return GET_CABAC(elem_offset[SIGNIFICANT_COEFF_FLAG] + inc); -} - -static av_always_inline int significant_coeff_flag_decode_0(HEVCLocalContext *lc, int c_idx, int offset) -{ - return GET_CABAC(elem_offset[SIGNIFICANT_COEFF_FLAG] + offset); -} - -static av_always_inline int coeff_abs_level_greater1_flag_decode(HEVCLocalContext *lc, int c_idx, int inc) -{ - - if (c_idx > 0) - inc += 16; - - return GET_CABAC(elem_offset[COEFF_ABS_LEVEL_GREATER1_FLAG] + inc); -} - -static av_always_inline int coeff_abs_level_greater2_flag_decode(HEVCLocalContext *lc, int c_idx, int inc) -{ - if (c_idx > 0) - inc += 4; - - return GET_CABAC(elem_offset[COEFF_ABS_LEVEL_GREATER2_FLAG] + inc); -} - -static av_always_inline int coeff_abs_level_remaining_decode(HEVCLocalContext *lc, int rc_rice_param) -{ - int prefix = 0; - int suffix = 0; - int last_coeff_abs_level_remaining; - int i; - - while (prefix < CABAC_MAX_BIN && get_cabac_bypass(&lc->cc)) - prefix++; - - if (prefix < 3) { - for (i = 0; i < rc_rice_param; i++) - suffix = (suffix << 1) | get_cabac_bypass(&lc->cc); - last_coeff_abs_level_remaining = (prefix << rc_rice_param) + suffix; - } else { - int prefix_minus3 = prefix - 3; - - if (prefix == CABAC_MAX_BIN || prefix_minus3 + rc_rice_param > 16 + 6) { - av_log(lc->logctx, AV_LOG_ERROR, "CABAC_MAX_BIN : %d\n", prefix); - return 0; - } - - for (i = 0; i < prefix_minus3 + rc_rice_param; i++) - suffix = (suffix << 1) | get_cabac_bypass(&lc->cc); - last_coeff_abs_level_remaining = (((1 << prefix_minus3) + 3 - 1) - << rc_rice_param) + suffix; - } - return last_coeff_abs_level_remaining; -} - -static av_always_inline int coeff_sign_flag_decode(HEVCLocalContext *lc, uint8_t nb) -{ - int i; - int ret = 0; - - for (i = 0; i < nb; i++) - ret = (ret << 1) | get_cabac_bypass(&lc->cc); - return ret; -} - -void ff_hevc_hls_residual_coding(HEVCLocalContext *lc, int x0, int y0, - int log2_trafo_size, enum ScanType scan_idx, - int c_idx) -{ -#define GET_COORD(offset, n) \ - do { \ - x_c = (x_cg << 2) + scan_x_off[n]; \ - y_c = (y_cg << 2) + scan_y_off[n]; \ - } while (0) - const HEVCContext *const s = lc->parent; - int transform_skip_flag = 0; - - int last_significant_coeff_x, last_significant_coeff_y; - int last_scan_pos; - int n_end; - int num_coeff = 0; - int greater1_ctx = 1; - - int num_last_subset; - int x_cg_last_sig, y_cg_last_sig; - - const uint8_t *scan_x_cg, *scan_y_cg, *scan_x_off, *scan_y_off; - - ptrdiff_t stride = s->frame->linesize[c_idx]; - int hshift = s->ps.sps->hshift[c_idx]; - int vshift = s->ps.sps->vshift[c_idx]; - uint8_t *dst = &s->frame->data[c_idx][(y0 >> vshift) * stride + - ((x0 >> hshift) << s->ps.sps->pixel_shift)]; - int16_t *coeffs = (int16_t*)(c_idx ? lc->edge_emu_buffer2 : lc->edge_emu_buffer); - uint8_t significant_coeff_group_flag[8][8] = {{0}}; - int explicit_rdpcm_flag = 0; - int explicit_rdpcm_dir_flag; - - int trafo_size = 1 << log2_trafo_size; - int i; - int qp,shift,add,scale,scale_m; - static const uint8_t level_scale[] = { 40, 45, 51, 57, 64, 72 }; - const uint8_t *scale_matrix = NULL; - uint8_t dc_scale; - int pred_mode_intra = (c_idx == 0) ? 
lc->tu.intra_pred_mode : - lc->tu.intra_pred_mode_c; - - memset(coeffs, 0, trafo_size * trafo_size * sizeof(int16_t)); - - // Derive QP for dequant - if (!lc->cu.cu_transquant_bypass_flag) { - static const int qp_c[] = { 29, 30, 31, 32, 33, 33, 34, 34, 35, 35, 36, 36, 37, 37 }; - static const uint8_t rem6[51 + 4 * 6 + 1] = { - 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, - 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, - 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, - 4, 5, 0, 1, 2, 3, 4, 5, 0, 1 - }; - - static const uint8_t div6[51 + 4 * 6 + 1] = { - 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, - 3, 3, 3, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, - 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 10, 10, 10, 10, - 10, 10, 11, 11, 11, 11, 11, 11, 12, 12 - }; - int qp_y = lc->qp_y; - - if (s->ps.pps->transform_skip_enabled_flag && - log2_trafo_size <= s->ps.pps->log2_max_transform_skip_block_size) { - transform_skip_flag = hevc_transform_skip_flag_decode(lc, c_idx); - } - - if (c_idx == 0) { - qp = qp_y + s->ps.sps->qp_bd_offset; - } else { - int qp_i, offset; - - if (c_idx == 1) - offset = s->ps.pps->cb_qp_offset + s->sh.slice_cb_qp_offset + - lc->tu.cu_qp_offset_cb; - else - offset = s->ps.pps->cr_qp_offset + s->sh.slice_cr_qp_offset + - lc->tu.cu_qp_offset_cr; - - qp_i = av_clip(qp_y + offset, - s->ps.sps->qp_bd_offset, 57); - if (s->ps.sps->chroma_format_idc == 1) { - if (qp_i < 30) - qp = qp_i; - else if (qp_i > 43) - qp = qp_i - 6; - else - qp = qp_c[qp_i - 30]; - } else { - if (qp_i > 51) - qp = 51; - else - qp = qp_i; - } - - qp += s->ps.sps->qp_bd_offset; - } - - shift = s->ps.sps->bit_depth + log2_trafo_size - 5; - add = 1 << (shift-1); - scale = level_scale[rem6[qp]] << (div6[qp]); - scale_m = 16; // default when no custom scaling lists. - dc_scale = 16; - - if (s->ps.sps->scaling_list_enable_flag && !(transform_skip_flag && log2_trafo_size > 2)) { - const ScalingList *sl = s->ps.pps->scaling_list_data_present_flag ? 
- &s->ps.pps->scaling_list : &s->ps.sps->scaling_list; - int matrix_id = lc->cu.pred_mode != MODE_INTRA; - - matrix_id = 3 * matrix_id + c_idx; - - scale_matrix = sl->sl[log2_trafo_size - 2][matrix_id]; - if (log2_trafo_size >= 4) - dc_scale = sl->sl_dc[log2_trafo_size - 4][matrix_id]; - } - } else { - shift = 0; - add = 0; - scale = 0; - dc_scale = 0; - } - - if (lc->cu.pred_mode == MODE_INTER && s->ps.sps->explicit_rdpcm_enabled_flag && - (transform_skip_flag || lc->cu.cu_transquant_bypass_flag)) { - explicit_rdpcm_flag = explicit_rdpcm_flag_decode(lc, c_idx); - if (explicit_rdpcm_flag) { - explicit_rdpcm_dir_flag = explicit_rdpcm_dir_flag_decode(lc, c_idx); - } - } - - last_significant_coeff_xy_prefix_decode(lc, c_idx, log2_trafo_size, - &last_significant_coeff_x, &last_significant_coeff_y); - - if (last_significant_coeff_x > 3) { - int suffix = last_significant_coeff_suffix_decode(lc, last_significant_coeff_x); - last_significant_coeff_x = (1 << ((last_significant_coeff_x >> 1) - 1)) * - (2 + (last_significant_coeff_x & 1)) + - suffix; - } - - if (last_significant_coeff_y > 3) { - int suffix = last_significant_coeff_suffix_decode(lc, last_significant_coeff_y); - last_significant_coeff_y = (1 << ((last_significant_coeff_y >> 1) - 1)) * - (2 + (last_significant_coeff_y & 1)) + - suffix; - } - - if (scan_idx == SCAN_VERT) - FFSWAP(int, last_significant_coeff_x, last_significant_coeff_y); - - x_cg_last_sig = last_significant_coeff_x >> 2; - y_cg_last_sig = last_significant_coeff_y >> 2; - - switch (scan_idx) { - case SCAN_DIAG: { - int last_x_c = last_significant_coeff_x & 3; - int last_y_c = last_significant_coeff_y & 3; - - scan_x_off = ff_hevc_diag_scan4x4_x; - scan_y_off = ff_hevc_diag_scan4x4_y; - num_coeff = diag_scan4x4_inv[last_y_c][last_x_c]; - if (trafo_size == 4) { - scan_x_cg = scan_1x1; - scan_y_cg = scan_1x1; - } else if (trafo_size == 8) { - num_coeff += diag_scan2x2_inv[y_cg_last_sig][x_cg_last_sig] << 4; - scan_x_cg = diag_scan2x2_x; - scan_y_cg = diag_scan2x2_y; - } else if (trafo_size == 16) { - num_coeff += diag_scan4x4_inv[y_cg_last_sig][x_cg_last_sig] << 4; - scan_x_cg = ff_hevc_diag_scan4x4_x; - scan_y_cg = ff_hevc_diag_scan4x4_y; - } else { // trafo_size == 32 - num_coeff += diag_scan8x8_inv[y_cg_last_sig][x_cg_last_sig] << 4; - scan_x_cg = ff_hevc_diag_scan8x8_x; - scan_y_cg = ff_hevc_diag_scan8x8_y; - } - break; - } - case SCAN_HORIZ: - scan_x_cg = horiz_scan2x2_x; - scan_y_cg = horiz_scan2x2_y; - scan_x_off = horiz_scan4x4_x; - scan_y_off = horiz_scan4x4_y; - num_coeff = horiz_scan8x8_inv[last_significant_coeff_y][last_significant_coeff_x]; - break; - default: //SCAN_VERT - scan_x_cg = horiz_scan2x2_y; - scan_y_cg = horiz_scan2x2_x; - scan_x_off = horiz_scan4x4_y; - scan_y_off = horiz_scan4x4_x; - num_coeff = horiz_scan8x8_inv[last_significant_coeff_x][last_significant_coeff_y]; - break; - } - num_coeff++; - num_last_subset = (num_coeff - 1) >> 4; - - for (i = num_last_subset; i >= 0; i--) { - int n, m; - int x_cg, y_cg, x_c, y_c, pos; - int implicit_non_zero_coeff = 0; - int64_t trans_coeff_level; - int prev_sig = 0; - int offset = i << 4; - int rice_init = 0; - - uint8_t significant_coeff_flag_idx[16]; - uint8_t nb_significant_coeff_flag = 0; - - x_cg = scan_x_cg[i]; - y_cg = scan_y_cg[i]; - - if ((i < num_last_subset) && (i > 0)) { - int ctx_cg = 0; - if (x_cg < (1 << (log2_trafo_size - 2)) - 1) - ctx_cg += significant_coeff_group_flag[x_cg + 1][y_cg]; - if (y_cg < (1 << (log2_trafo_size - 2)) - 1) - ctx_cg += significant_coeff_group_flag[x_cg][y_cg + 1]; - - 
significant_coeff_group_flag[x_cg][y_cg] = - significant_coeff_group_flag_decode(lc, c_idx, ctx_cg); - implicit_non_zero_coeff = 1; - } else { - significant_coeff_group_flag[x_cg][y_cg] = - ((x_cg == x_cg_last_sig && y_cg == y_cg_last_sig) || - (x_cg == 0 && y_cg == 0)); - } - - last_scan_pos = num_coeff - offset - 1; - - if (i == num_last_subset) { - n_end = last_scan_pos - 1; - significant_coeff_flag_idx[0] = last_scan_pos; - nb_significant_coeff_flag = 1; - } else { - n_end = 15; - } - - if (x_cg < ((1 << log2_trafo_size) - 1) >> 2) - prev_sig = !!significant_coeff_group_flag[x_cg + 1][y_cg]; - if (y_cg < ((1 << log2_trafo_size) - 1) >> 2) - prev_sig += (!!significant_coeff_group_flag[x_cg][y_cg + 1] << 1); - - if (significant_coeff_group_flag[x_cg][y_cg] && n_end >= 0) { - static const uint8_t ctx_idx_map[] = { - 0, 1, 4, 5, 2, 3, 4, 5, 6, 6, 8, 8, 7, 7, 8, 8, // log2_trafo_size == 2 - 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, // prev_sig == 0 - 2, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, // prev_sig == 1 - 2, 1, 0, 0, 2, 1, 0, 0, 2, 1, 0, 0, 2, 1, 0, 0, // prev_sig == 2 - 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 // default - }; - const uint8_t *ctx_idx_map_p; - int scf_offset = 0; - if (s->ps.sps->transform_skip_context_enabled_flag && - (transform_skip_flag || lc->cu.cu_transquant_bypass_flag)) { - ctx_idx_map_p = &ctx_idx_map[4 * 16]; - if (c_idx == 0) { - scf_offset = 40; - } else { - scf_offset = 14 + 27; - } - } else { - if (c_idx != 0) - scf_offset = 27; - if (log2_trafo_size == 2) { - ctx_idx_map_p = &ctx_idx_map[0]; - } else { - ctx_idx_map_p = &ctx_idx_map[(prev_sig + 1) << 4]; - if (c_idx == 0) { - if ((x_cg > 0 || y_cg > 0)) - scf_offset += 3; - if (log2_trafo_size == 3) { - scf_offset += (scan_idx == SCAN_DIAG) ? 9 : 15; - } else { - scf_offset += 21; - } - } else { - if (log2_trafo_size == 3) - scf_offset += 9; - else - scf_offset += 12; - } - } - } - for (n = n_end; n > 0; n--) { - x_c = scan_x_off[n]; - y_c = scan_y_off[n]; - if (significant_coeff_flag_decode(lc, x_c, y_c, scf_offset, ctx_idx_map_p)) { - significant_coeff_flag_idx[nb_significant_coeff_flag] = n; - nb_significant_coeff_flag++; - implicit_non_zero_coeff = 0; - } - } - if (implicit_non_zero_coeff == 0) { - if (s->ps.sps->transform_skip_context_enabled_flag && - (transform_skip_flag || lc->cu.cu_transquant_bypass_flag)) { - if (c_idx == 0) { - scf_offset = 42; - } else { - scf_offset = 16 + 27; - } - } else { - if (i == 0) { - if (c_idx == 0) - scf_offset = 0; - else - scf_offset = 27; - } else { - scf_offset = 2 + scf_offset; - } - } - if (significant_coeff_flag_decode_0(lc, c_idx, scf_offset) == 1) { - significant_coeff_flag_idx[nb_significant_coeff_flag] = 0; - nb_significant_coeff_flag++; - } - } else { - significant_coeff_flag_idx[nb_significant_coeff_flag] = 0; - nb_significant_coeff_flag++; - } - } - - n_end = nb_significant_coeff_flag; - - - if (n_end) { - int first_nz_pos_in_cg; - int last_nz_pos_in_cg; - int c_rice_param = 0; - int first_greater1_coeff_idx = -1; - uint8_t coeff_abs_level_greater1_flag[8]; - uint16_t coeff_sign_flag; - int sum_abs = 0; - int sign_hidden; - int sb_type; - - - // initialize first elem of coeff_bas_level_greater1_flag - int ctx_set = (i > 0 && c_idx == 0) ? 2 : 0; - - if (s->ps.sps->persistent_rice_adaptation_enabled_flag) { - if (!transform_skip_flag && !lc->cu.cu_transquant_bypass_flag) - sb_type = 2 * (c_idx == 0 ? 1 : 0); - else - sb_type = 2 * (c_idx == 0 ? 
1 : 0) + 1; - c_rice_param = lc->stat_coeff[sb_type] / 4; - } - - if (!(i == num_last_subset) && greater1_ctx == 0) - ctx_set++; - greater1_ctx = 1; - last_nz_pos_in_cg = significant_coeff_flag_idx[0]; - - for (m = 0; m < (n_end > 8 ? 8 : n_end); m++) { - int inc = (ctx_set << 2) + greater1_ctx; - coeff_abs_level_greater1_flag[m] = - coeff_abs_level_greater1_flag_decode(lc, c_idx, inc); - if (coeff_abs_level_greater1_flag[m]) { - greater1_ctx = 0; - if (first_greater1_coeff_idx == -1) - first_greater1_coeff_idx = m; - } else if (greater1_ctx > 0 && greater1_ctx < 3) { - greater1_ctx++; - } - } - first_nz_pos_in_cg = significant_coeff_flag_idx[n_end - 1]; - - if (lc->cu.cu_transquant_bypass_flag || - (lc->cu.pred_mode == MODE_INTRA && - s->ps.sps->implicit_rdpcm_enabled_flag && transform_skip_flag && - (pred_mode_intra == 10 || pred_mode_intra == 26 )) || - explicit_rdpcm_flag) - sign_hidden = 0; - else - sign_hidden = (last_nz_pos_in_cg - first_nz_pos_in_cg >= 4); - - if (first_greater1_coeff_idx != -1) { - coeff_abs_level_greater1_flag[first_greater1_coeff_idx] += coeff_abs_level_greater2_flag_decode(lc, c_idx, ctx_set); - } - if (!s->ps.pps->sign_data_hiding_flag || !sign_hidden ) { - coeff_sign_flag = coeff_sign_flag_decode(lc, nb_significant_coeff_flag) << (16 - nb_significant_coeff_flag); - } else { - coeff_sign_flag = coeff_sign_flag_decode(lc, nb_significant_coeff_flag - 1) << (16 - (nb_significant_coeff_flag - 1)); - } - - for (m = 0; m < n_end; m++) { - n = significant_coeff_flag_idx[m]; - GET_COORD(offset, n); - if (m < 8) { - trans_coeff_level = 1 + coeff_abs_level_greater1_flag[m]; - if (trans_coeff_level == ((m == first_greater1_coeff_idx) ? 3 : 2)) { - int last_coeff_abs_level_remaining = coeff_abs_level_remaining_decode(lc, c_rice_param); - - trans_coeff_level += last_coeff_abs_level_remaining; - if (trans_coeff_level > (3 << c_rice_param)) - c_rice_param = s->ps.sps->persistent_rice_adaptation_enabled_flag ? c_rice_param + 1 : FFMIN(c_rice_param + 1, 4); - if (s->ps.sps->persistent_rice_adaptation_enabled_flag && !rice_init) { - int c_rice_p_init = lc->stat_coeff[sb_type] / 4; - if (last_coeff_abs_level_remaining >= (3 << c_rice_p_init)) - lc->stat_coeff[sb_type]++; - else if (2 * last_coeff_abs_level_remaining < (1 << c_rice_p_init)) - if (lc->stat_coeff[sb_type] > 0) - lc->stat_coeff[sb_type]--; - rice_init = 1; - } - } - } else { - int last_coeff_abs_level_remaining = coeff_abs_level_remaining_decode(lc, c_rice_param); - - trans_coeff_level = 1 + last_coeff_abs_level_remaining; - if (trans_coeff_level > (3 << c_rice_param)) - c_rice_param = s->ps.sps->persistent_rice_adaptation_enabled_flag ? 
c_rice_param + 1 : FFMIN(c_rice_param + 1, 4); - if (s->ps.sps->persistent_rice_adaptation_enabled_flag && !rice_init) { - int c_rice_p_init = lc->stat_coeff[sb_type] / 4; - if (last_coeff_abs_level_remaining >= (3 << c_rice_p_init)) - lc->stat_coeff[sb_type]++; - else if (2 * last_coeff_abs_level_remaining < (1 << c_rice_p_init)) - if (lc->stat_coeff[sb_type] > 0) - lc->stat_coeff[sb_type]--; - rice_init = 1; - } - } - if (s->ps.pps->sign_data_hiding_flag && sign_hidden) { - sum_abs += trans_coeff_level; - if (n == first_nz_pos_in_cg && (sum_abs&1)) - trans_coeff_level = -trans_coeff_level; - } - if (coeff_sign_flag >> 15) - trans_coeff_level = -trans_coeff_level; - coeff_sign_flag <<= 1; - if(!lc->cu.cu_transquant_bypass_flag) { - if (s->ps.sps->scaling_list_enable_flag && !(transform_skip_flag && log2_trafo_size > 2)) { - if(y_c || x_c || log2_trafo_size < 4) { - switch(log2_trafo_size) { - case 3: pos = (y_c << 3) + x_c; break; - case 4: pos = ((y_c >> 1) << 3) + (x_c >> 1); break; - case 5: pos = ((y_c >> 2) << 3) + (x_c >> 2); break; - default: pos = (y_c << 2) + x_c; break; - } - scale_m = scale_matrix[pos]; - } else { - scale_m = dc_scale; - } - } - trans_coeff_level = (trans_coeff_level * (int64_t)scale * (int64_t)scale_m + add) >> shift; - if(trans_coeff_level < 0) { - if((~trans_coeff_level) & 0xFffffffffff8000) - trans_coeff_level = -32768; - } else { - if(trans_coeff_level & 0xffffffffffff8000) - trans_coeff_level = 32767; - } - } - coeffs[y_c * trafo_size + x_c] = trans_coeff_level; - } - } - } - - if (lc->cu.cu_transquant_bypass_flag) { - if (explicit_rdpcm_flag || (s->ps.sps->implicit_rdpcm_enabled_flag && - (pred_mode_intra == 10 || pred_mode_intra == 26))) { - int mode = s->ps.sps->implicit_rdpcm_enabled_flag ? (pred_mode_intra == 26) : explicit_rdpcm_dir_flag; - - s->hevcdsp.transform_rdpcm(coeffs, log2_trafo_size, mode); - } - } else { - if (transform_skip_flag) { - int rot = s->ps.sps->transform_skip_rotation_enabled_flag && - log2_trafo_size == 2 && - lc->cu.pred_mode == MODE_INTRA; - if (rot) { - for (i = 0; i < 8; i++) - FFSWAP(int16_t, coeffs[i], coeffs[16 - i - 1]); - } - - s->hevcdsp.dequant(coeffs, log2_trafo_size); - - if (explicit_rdpcm_flag || (s->ps.sps->implicit_rdpcm_enabled_flag && - lc->cu.pred_mode == MODE_INTRA && - (pred_mode_intra == 10 || pred_mode_intra == 26))) { - int mode = explicit_rdpcm_flag ? 
explicit_rdpcm_dir_flag : (pred_mode_intra == 26); - - s->hevcdsp.transform_rdpcm(coeffs, log2_trafo_size, mode); - } - } else if (lc->cu.pred_mode == MODE_INTRA && c_idx == 0 && log2_trafo_size == 2) { - s->hevcdsp.transform_4x4_luma(coeffs); - } else { - int max_xy = FFMAX(last_significant_coeff_x, last_significant_coeff_y); - if (max_xy == 0) - s->hevcdsp.idct_dc[log2_trafo_size - 2](coeffs); - else { - int col_limit = last_significant_coeff_x + last_significant_coeff_y + 4; - if (max_xy < 4) - col_limit = FFMIN(4, col_limit); - else if (max_xy < 8) - col_limit = FFMIN(8, col_limit); - else if (max_xy < 12) - col_limit = FFMIN(24, col_limit); - s->hevcdsp.idct[log2_trafo_size - 2](coeffs, col_limit); - } - } - } - if (lc->tu.cross_pf) { - int16_t *coeffs_y = (int16_t*)lc->edge_emu_buffer; - - for (i = 0; i < (trafo_size * trafo_size); i++) { - coeffs[i] = coeffs[i] + ((lc->tu.res_scale_val * coeffs_y[i]) >> 3); - } - } - s->hevcdsp.add_residual[log2_trafo_size-2](dst, coeffs, stride); -} - -void ff_hevc_hls_mvd_coding(HEVCLocalContext *lc, int x0, int y0, int log2_cb_size) -{ - int x = abs_mvd_greater0_flag_decode(lc); - int y = abs_mvd_greater0_flag_decode(lc); - - if (x) - x += abs_mvd_greater1_flag_decode(lc); - if (y) - y += abs_mvd_greater1_flag_decode(lc); - - switch (x) { - case 2: lc->pu.mvd.x = mvd_decode(lc); break; - case 1: lc->pu.mvd.x = mvd_sign_flag_decode(lc); break; - case 0: lc->pu.mvd.x = 0; break; - } - - switch (y) { - case 2: lc->pu.mvd.y = mvd_decode(lc); break; - case 1: lc->pu.mvd.y = mvd_sign_flag_decode(lc); break; - case 0: lc->pu.mvd.y = 0; break; - } -} - diff --git a/spaces/congsaPfin/Manga-OCR/logs/European War 61914 MOD APK - Download and Play the Epic Strategy Game with All Unlocked.md b/spaces/congsaPfin/Manga-OCR/logs/European War 61914 MOD APK - Download and Play the Epic Strategy Game with All Unlocked.md deleted file mode 100644 index ddd81b062ba7c456958adf71f80ae96d7800a5ae..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/European War 61914 MOD APK - Download and Play the Epic Strategy Game with All Unlocked.md +++ /dev/null @@ -1,247 +0,0 @@ -
    -

    European War 6: 1914 Mod Apk Unlock All - A Guide for Strategy Game Fans

    -

    If you are a fan of strategy games that simulate historical wars, you might have heard of European War 6: 1914, a popular game developed by Easytech, a company that specializes in historical strategy games. In this game, you can choose from over 150 countries and regions, and lead them to victory or defeat in various wars and conflicts that took place between 1798 and 1950. You can also customize your own generals, troops, weapons, and technologies, and challenge other players online or offline.

    -

However, some players may find the game too difficult, too expensive, or too boring after a while. That's why some of them resort to using a mod apk, which is a modified version of the original game application that can unlock all the features, resources, and content that are otherwise restricted or limited in the game. A mod apk can give you unlimited money, medals, generals, troops, weapons, technologies, and more. It can also remove ads and fix bugs or errors that may affect your gameplay.

    -

    european war 6 1914 mod apk unlock all


    Download Ziphttps://urlca.com/2uO7eS



    -

    But is using a mod apk for European War 6: 1914 a good idea? What are the benefits and risks of doing so? How can you download and install a mod apk for European War 6: 1914? In this article, we will answer these questions and more. We will also provide you with some tips and tricks on how to use a mod apk for European War 6: 1914 safely and effectively. Read on to find out more!

    -

    What is European War 6: 1914 and what are its features?

    -

    European War 6: 1914 is a strategy game that simulates the historical wars of the 19th and 20th centuries. It is the sixth installment of the European War series, which started in 2010 with European War: Napoleon Wars. The game was released in 2020 for Android and iOS devices.

    -

The game has four main modes: Campaign, Conquest, Challenge, and Multiplayer. In Campaign mode, you can follow the historical events and scenarios of different wars and regions, such as the Napoleonic Wars, the American Civil War, World War I, and World War II. You can choose from different countries and factions, and complete various missions and objectives to progress through the story. In Conquest mode, you can create your own scenarios and maps, and conquer the world with your own strategy and tactics. You can also adjust the difficulty level, the number of countries and regions, the resources and technologies available, and so on. In Challenge mode, you can test your skills and knowledge in different quizzes and puzzles related to history and geography, and earn medals and rewards for completing them. In Multiplayer mode, you can play with or against other players online or offline via Wi-Fi or Bluetooth, chat with them, send them gifts, or join alliances.

    -

The game has over 150 countries and regions to choose from, each with its own unique generals, troops, weapons, and technologies. You can customize your generals by changing their names, portraits, skills, and ranks; upgrade your troops by training them and equipping them with different weapons and armor; and research new technologies by spending money and medals on them. The game has over 200 historical battles to fight in, each with its own terrain, weather, and objectives, and you can use different strategies and tactics to win them, such as diplomacy, espionage, and sabotage.

    -

The game has high-quality graphics that depict the historical scenes and characters in detail, along with realistic sound effects that enhance the atmosphere of war. Its user-friendly interface lets you control your units easily and efficiently, and a tutorial mode teaches you the basics of the game.

    -

The game is similar to other historical strategy games such as Age of Civilizations II, Age of Empires, or Civilization. However, it has its own unique features and challenges that make it stand out from the crowd. If you are looking for a strategy game that combines historical accuracy, complexity, and fun, you might want to give European War 6: 1914 a try.

    -

    What is a mod apk and why do some players use it?

    -

A mod apk is a modified version of an original game application that can alter or enhance some aspects of the game. A mod apk is usually created by third-party programmers or hackers who have access to, or have reverse engineered, the game's code, and it is distributed through third-party websites or app repositories such as APKPure, not through official stores like Google Play or the App Store.

    -


    -

    Some players use a mod apk for various reasons, such as:

    -
• To unlock all the features, resources, and content that are otherwise restricted or limited in the game
• To bypass the in-app purchases or ads that may require real money or interrupt the gameplay
• To cheat or hack the game to gain an unfair advantage over other players or the game itself
• To customize or personalize the game according to their preferences and tastes
• To explore new possibilities or scenarios that are not available in the original game
• To fix some bugs or errors that may affect the gameplay
• To have more fun and enjoyment with the game

    However, using a mod apk also comes with some legal and ethical issues, such as:

    -
• Violating the terms and conditions of the game developers or publishers
• Infringing the intellectual property rights of the game developers or publishers
• Exposing the device or data to viruses, malware, or scams that may harm them
• Disrupting the balance and fairness of the game for other players
• Ruining the original design and intention of the game creators
• Losing the official support and updates from the game developers or publishers
• Risking being banned or suspended from the game or its online services

    Therefore, using a mod apk for European War 6: 1914 is a personal choice that depends on your own judgment and responsibility. You should weigh the pros and cons carefully before deciding to use a mod apk for European War 6: 1914.

    -

    What are the benefits of using a mod apk for European War 6: 1914?

    If you decide to use a mod apk for European War 6: 1914, you can enjoy some benefits that the original game may not offer. Here are some of them:

    -
• You can unlock all the features, resources, and content that are otherwise restricted or limited in the game. For example, you can have unlimited money, medals, generals, troops, weapons, technologies, and more. You can also access all the modes, campaigns, conquests, challenges, and multiplayer options, and remove the ads that may interrupt your gameplay.
• You can customize or personalize the game according to your preferences and tastes. For example, you can change the names, portraits, skills, and ranks of your generals, modify the graphics, sound, and user interface of the game, and create your own scenarios and maps in Conquest mode.
• You can explore new possibilities or scenarios that are not available in the original game. For example, you can play as countries or factions that are not normally playable, change the historical events and outcomes of the wars and conflicts, and use strategies and tactics that may not work in the original game.
• You can enhance your gameplay experience and enjoyment with the game. For example, you can have more fun and challenge by adjusting the difficulty level, the number of countries and regions, and the resources and technologies available, and more satisfaction by completing all the missions and objectives, earning all the medals and rewards, and conquering the world with your strategy and tactics.

    To illustrate these benefits, here is a table that compares the features of the original game and the mod apk:

Feature | Original Game | Mod Apk
Money | Limited | Unlimited
Medals | Limited | Unlimited
Generals | Limited | Unlimited
Troops | Limited | Unlimited
Weapons | Limited | Unlimited
Technologies | Limited | Unlimited
Modes | Limited | All unlocked
Campaigns | Limited | All unlocked
Conquests | Limited | All unlocked
Challenges | Limited | All unlocked
Multiplayer | Limited | All unlocked
Ads | Present | Removed
Bugs and errors | Present | Fixed
Customization | Limited | Enhanced
New possibilities and scenarios | Limited | Added
Gameplay experience and enjoyment | Limited | Improved
-

    As you can see, using a mod apk for European War 6: 1914 can provide you with many benefits that can make your game more enjoyable and rewarding. However, you should also be aware of the risks and drawbacks of using a mod apk for European War 6: 1914, which we will discuss in the next section.

    -

    What are the risks and drawbacks of using a mod apk for European War 6: 1914?

    -

    Using a mod apk for European War 6: 1914 is not without its risks and drawbacks. Here are some of them:

    -
• You can violate the terms and conditions of the game developers or publishers, which can result in legal actions or penalties against you. You can also infringe their intellectual property rights, which can result in lawsuits or damages against you.
• You can expose your device or data to viruses, malware, or scams that can harm them. Some mod apks may contain malicious code or software that can infect your device or data, or steal your personal information or money, and unreliable download sources carry the same risks.
• You can disrupt the balance and fairness of the game for other players. Using a mod apk can give you an unfair advantage over players who play legitimately, which can ruin their gameplay experience and satisfaction. You can also encounter other players who use mod apks to cheat or hack the game, which can ruin yours.
• You can ruin the original design and intention of the game creators. Using a mod apk can alter aspects of the game in ways the creators did not intend, and you can miss out on features, resources, or content that they designed for the original game.
• You can lose the official support and updates from the game developers or publishers. Using a mod apk can make your game incompatible with official updates or patches, and can cost you access to the official online services or features provided for the original game.
• You can risk being banned or suspended from the game or its online services. Using a mod apk can make your game detectable by the anti-cheat or anti-hack systems that the developers or publishers use to protect their game, and you can also be reported by other players who notice suspicious behavior.

    To illustrate these risks and drawbacks, here is a table that compares them with the original game and the mod apk:

Risk/Drawback | Original Game | Mod Apk
Legal and ethical issues | None | Present
Viruses, malware, or scams | None | Possible
Balance and fairness | Present | Disrupted
Original design and intention | Present | Ruined
Official support and updates | Present | Lost
Ban or suspension | None | Possible
-

    As you can see, using a mod apk for European War 6: 1914 can also expose you to some risks and drawbacks that can make your game less enjoyable and rewarding. Therefore, you should be careful and cautious when using a mod apk for European War 6: 1914.

    -

    How to download and install a mod apk for European War 6: 1914?

    -

    If you still want to use a mod apk for European War 6: 1914, you need to know how to download and install it on your device. Here are the steps that you need to follow:

    -
1. Find a reliable source where you can download a mod apk for European War 6: 1914. You can search online for websites or platforms that offer mod apks for various games, or ask other players who have used a mod apk for European War 6: 1914 before. However, you should be careful and wary of sources that may contain viruses, malware, or scams that can harm your device or data.
2. Download the mod apk file from the source that you have chosen. You may need to allow your device to download files from unknown sources in your settings, and you may also need to disable your antivirus or firewall software temporarily to avoid any interference.
3. Install the mod apk file on your device. You may need to uninstall the original game application first if you have it on your device, enable the installation of apps from unknown sources in your settings, and grant some permissions or access to the mod apk during the installation process.
4. Launch the mod apk on your device. You may need to verify or activate it by following some instructions or entering some codes, and you may also need to create an account or log in with an existing one.
5. Enjoy the game with the mod apk. You can now play European War 6: 1914 with all the features, resources, and content that are unlocked by the mod apk. However, you should also be aware of the risks and drawbacks of using a mod apk, as we discussed in the previous section.

To help you find a reliable source for downloading a mod apk for European War 6: 1914, here is a link that you can use as a reference:

    -

    European War 6: 1914 Mod Apk Unlock All - APKPure.com

    -

    This is a website that offers mod apks for various games, including European War 6: 1914. It claims that its mod apks are safe, tested, and verified by its users and editors. However, you should still be careful and cautious when downloading and installing any mod apk from any source, as there is no guarantee that they are free from viruses, malware, or scams.

    -

    Conclusion

    -

    In this article, we have discussed what European War 6: 1914 is and what are its features, what a mod apk is and why some players use it, what are the benefits and risks of using a mod apk for European War 6: 1914, and how to download and install a mod apk for European War 6: 1914. We have also provided you with some tips and tricks on how to use a mod apk for European War 6: 1914 safely and effectively.

    -

    We hope that this article has been helpful and informative for you. If you are a fan of strategy games that simulate historical wars, you might want to give European War 6: 1914 a try. However, if you decide to use a mod apk for European War 6: 1914, you should weigh the pros and cons carefully before doing so. You should also be responsible and respectful when playing the game with or without a mod apk.

    -

    We would love to hear your opinions, experiences, and feedback on European War 6: 1914 and its mod apk. Please feel free to share them with us in the comments section below. Thank you for reading and happy gaming!

    -

    FAQs

    -

    Here are some frequently asked questions about European War 6: 1914 and its mod apk, along with their answers:

    -

    Q: Is European War 6: 1914 free to play?

    -

    A: Yes, European War 6: 1914 is free to download and play on Android and iOS devices. However, the game may contain some in-app purchases or ads that may require real money or interrupt the gameplay.

    -

    Q: Is using a mod apk for European War 6: 1914 legal?

    -

    A: No, using a mod apk for European War 6: 1914 is not legal, as it violates the terms and conditions of the game developers or publishers, and infringes their intellectual property rights. Using a mod apk for European War 6: 1914 may result in legal actions or penalties against you.

    -

    Q: Is using a mod apk for European War 6: 1914 safe?

    -

    A: No, using a mod apk for European War 6: 1914 is not safe, as it exposes your device or data to viruses, malware, or scams that can harm them. Using a mod apk for European War 6: 1914 may also make your game incompatible with the official updates or patches, or lose access to the official online services or features.

    -

    Q: Is using a mod apk for European War 6: 1914 fair?

    -

    A: No, using a mod apk for European War 6: 1914 is not fair, as it disrupts the balance and fairness of the game for other players who play the game legitimately. Using a mod apk for European War 6: 1914 may also encounter other players who use mod apks to cheat or hack the game.

    -

    Q: Is using a mod apk for European War 6: 1914 fun?

    -

    A: It depends on your personal preference and judgment. Some players may find using a mod apk for European War 6: 1914 fun, as it unlocks all the features, resources, and content that are otherwise restricted or limited in the game. However, some players may find using a mod apk for European War 6: 1914 boring, as it removes the challenge and achievement that come with playing the game legitimately.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Free Triple Play Video Poker No Registration No Limits No Hassle.md b/spaces/congsaPfin/Manga-OCR/logs/Free Triple Play Video Poker No Registration No Limits No Hassle.md deleted file mode 100644 index 70a6612b2650e10c654fa107e6b46abae7e9ae91..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Free Triple Play Video Poker No Registration No Limits No Hassle.md +++ /dev/null @@ -1,303 +0,0 @@ -
    -

    Triple Play Video Poker: How to Play and Win Online

    -

    Video poker is one of the most popular and exciting casino games that you can play online. It combines the skill and strategy of poker with the simplicity and thrill of slot machines. And if you want to take your video poker experience to the next level, you should try triple play video poker.

    -

Triple play video poker is a variation of video poker that allows you to play three hands at once, giving you three times the action and three times the chances to win. Unlike slot machines, video poker lets you use your knowledge and skill to shrink the house edge, and on the most generous paytables even turn the odds in your favor.

    -

    triple play video poker free download


    Download 🔗 https://urlca.com/2uO9Us



    -

    In this article, we will show you how to play triple play video poker online, how to find the best games and sites, and how to improve your strategy and win more. Whether you are a beginner or a seasoned player, you will find something useful and interesting in this guide. So, let's get started!

    -

    What is Triple Play Video Poker?

    -

Triple play video poker is a type of video poker that lets you play three hands at the same time, all starting from the same initial five cards. You choose which cards to hold; those holds are applied to all three hands, and each hand then draws its replacement cards independently. The final outcome of each hand is determined by the poker value of its five cards, according to the paytable of the game.

    -

    There are many different variants of triple play video poker, such as Jacks or Better, Deuces Wild, Bonus Poker, Double Bonus Poker, Double Double Bonus Poker, Joker Poker, and more. Each variant has its own rules, payouts, and strategies. You can find them all at the best online casinos that offer video poker games.

    -

    The Benefits of Playing Triple Play Video Poker

    -

    Playing triple play video poker online has many advantages over playing single hand video poker or other casino games. Here are some of them:

    -
• You get more action and excitement. Playing three hands at once means you have more opportunities to make winning combinations and hit big jackpots.
• You get more variety and challenge. Playing different variants of triple play video poker means you have to adapt your strategy and skills to each game's rules and paytable.
• You get more control and flexibility. Playing online means you can choose your bet size, game speed, sound effects, and other settings according to your preferences.
• You get more convenience and comfort. Playing online means you can access your favorite games anytime and anywhere, on your computer or mobile device.

    The Rules of Triple Play Video Poker

    -

    The rules of triple play video poker are similar to those of single hand video poker, with some minor differences. Here are the basic steps to follow when playing triple play video poker online:

    -
1. Select your bet size. You can choose how much each credit is worth, and how many credits you want to bet on each hand. Usually, the maximum bet is five credits per hand, or 15 credits in total.
2. Press the Deal button. You will receive five cards face up on the main hand; the other two hands start from those same five cards, so they stay face down for now.
3. Select which cards to hold or discard. Use the Hold buttons under each card; every card you hold is kept in all three hands, and the rest will be exchanged in the next step.
4. Press the Draw button. Each hand independently replaces the cards you did not hold, drawing from its own separate deck. You will now see the final outcome of each hand.
5. Collect your winnings. If any of your hands has a winning combination, according to the paytable of the game, you will receive the corresponding payout, multiplied by the number of credits you bet on that hand.
-
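To make the deal, hold, and draw flow concrete, here is a minimal Python sketch of one round. It assumes the convention described above: the cards you hold are kept in every hand, and each hand draws its replacements from its own separate deck with the original five cards removed. The "hold any paired rank" rule and all function names are our own illustrative choices, not a real strategy or a real casino client.

```python
import random

RANKS = "23456789TJQKA"
SUITS = "shdc"

def fresh_deck():
    return [r + s for r in RANKS for s in SUITS]

def naive_holds(hand):
    # Illustration only: hold every card whose rank appears more than once.
    counts = {}
    for card in hand:
        counts[card[0]] = counts.get(card[0], 0) + 1
    return [card for card in hand if counts[card[0]] >= 2]

def triple_play_round(num_hands=3):
    deck = fresh_deck()
    random.shuffle(deck)
    dealt = deck[:5]                 # the same five cards seed every hand
    held = naive_holds(dealt)
    hands = []
    for _ in range(num_hands):
        # Each hand draws from its own leftover deck (original five removed).
        stub = [c for c in fresh_deck() if c not in dealt]
        random.shuffle(stub)
        hands.append(held + stub[:5 - len(held)])
    return dealt, held, hands

dealt, held, hands = triple_play_round()
print("dealt:", dealt, "| held:", held)
for i, hand in enumerate(hands, 1):
    print(f"hand {i}:", hand)
```

Because each hand draws from an independent deck, the three final hands can differ, and can even repeat cards across hands, exactly as in the example below.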

Here is an example of what a round of triple play video poker might look like:

    -


Main Hand | Second Hand | Third Hand | Payout
10♠ J♠ Q♠ K♠ A♠ | 10♦ J♦ Q♦ K♦ A♦ | 10♥ J♥ Q♥ K♥ A♥ | 800 credits x 3 = 2400 credits (a Royal Flush on all three hands pays 800 credits for each credit bet)
2♣ 3♣ 4♣ 5♣ 6♣ | 2♥ 3♥ 4♥ 5♥ 6♥ | 2♦ 3♦ 4♦ 5♦ 6♦ | 50 credits x 3 = 150 credits (a Straight Flush on all three hands pays 50 credits for each credit bet)
A♠ A♥ A♦ A♣ K♠ | A♠ A♥ A♦ A♣ Q♠ | A♠ A♥ A♦ A♣ J♠ | 25 credits x 3 = 75 credits (Four of a Kind on all three hands pays 25 credits for each credit bet)

    How to Choose Your Cards

    -

    One of the most important skills in video poker is knowing which cards to hold and which ones to discard. The best way to do this is to follow a strategy chart that tells you the optimal move for every possible hand. A strategy chart is based on mathematical calculations that take into account the probabilities and payouts of each game variant.

    -

However, since you are playing three hands at once, every hold decision is tripled. All three hands share the cards you hold, so a safe hold, such as a pair of jacks, guarantees each hand at least a paying pair, while a speculative hold, such as four cards to a flush, can leave all three hands with nothing, but also gives you three independent chances at the bigger payout.

    -

    Therefore, you have to balance the risk and reward of each hand, and choose the cards that give you the best overall expected value. This can be tricky, especially if you are new to video poker. That's why we recommend practicing with free games before playing with real money.
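If you want to check a specific decision yourself, you can enumerate every possible draw for a candidate hold and average the per-credit payouts. The sketch below does this for a classic dilemma, a low pair against four cards to a flush, assuming a 9/6 Jacks or Better paytable; the card encoding and helper names are our own illustrative choices.

```python
from collections import Counter
from itertools import combinations

# Per-credit payouts for a 9/6 Jacks or Better paytable
# (royal flush valued at its max-bet rate of 800 per credit).
PAYTABLE = {"royal_flush": 800, "straight_flush": 50, "four_kind": 25,
            "full_house": 9, "flush": 6, "straight": 4,
            "three_kind": 3, "two_pair": 2, "jacks_or_better": 1, "nothing": 0}

RANK_VALUE = {r: v for v, r in enumerate("23456789TJQKA", start=2)}

def classify(hand):
    """Return the paytable category of a five-card hand like ['8h', '8d', ...]."""
    ranks = sorted(RANK_VALUE[c[0]] for c in hand)
    counts = sorted(Counter(ranks).values(), reverse=True)
    flush = len({c[1] for c in hand}) == 1
    straight = (ranks == list(range(ranks[0], ranks[0] + 5))
                or ranks == [2, 3, 4, 5, 14])  # ace-low "wheel"
    if flush and straight:
        return "royal_flush" if ranks[0] == 10 else "straight_flush"
    if counts[0] == 4:
        return "four_kind"
    if counts == [3, 2]:
        return "full_house"
    if flush:
        return "flush"
    if straight:
        return "straight"
    if counts[0] == 3:
        return "three_kind"
    if counts[:2] == [2, 2]:
        return "two_pair"
    if any(n == 2 and r >= RANK_VALUE["J"] for r, n in Counter(ranks).items()):
        return "jacks_or_better"
    return "nothing"

def hold_ev(dealt, held):
    """Average per-credit payout of one hand, enumerating every possible draw."""
    stub = [r + s for r in "23456789TJQKA" for s in "shdc" if r + s not in dealt]
    draws = list(combinations(stub, 5 - len(held)))
    return sum(PAYTABLE[classify(list(held) + list(d))] for d in draws) / len(draws)

dealt = ["8h", "8d", "Kh", "7h", "2h"]          # low pair vs. four to a flush
print("hold the pair of eights:", round(hold_ev(dealt, ["8h", "8d"]), 3))
print("hold the four hearts   :", round(hold_ev(dealt, ["8h", "Kh", "7h", "2h"]), 3))
```

Run as-is, the four-card flush draw comes out ahead of the low pair (roughly 1.21 versus about 0.8 per credit), which matches published Jacks or Better strategy charts; in triple play the same hold simply applies to all three hands at once.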

    -

    How to Read the Paytable

    -

    The paytable is the table that shows you how much each winning combination pays, depending on how many credits you bet. You can find it on the top or the side of the screen, depending on the game. The paytable also tells you the game variant, the minimum qualifying hand, and the jackpot amount.

    -

    It is very important to read and understand the paytable before playing any video poker game, because it affects your strategy and your chances of winning. Different variants have different paytables, and some paytables are more generous than others. For example, some games pay 9 credits for a full house and 6 credits for a flush (9/6), while others pay 8 credits for a full house and 5 credits for a flush (8/5). The difference might seem small, but it can make a big difference in your long-term results.
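To put a rough number on that difference, you can weight each lost credit by how often the hand occurs. The snippet below uses commonly quoted approximate hit frequencies for Jacks or Better; treat them as illustrative assumptions rather than exact figures.

```python
# Rough cost of playing 8/5 instead of 9/6 Jacks or Better.
# Hit frequencies are commonly quoted approximations, not exact values.
FULL_HOUSE_FREQ = 0.0115   # about 1.15% of final hands
FLUSH_FREQ = 0.0110        # about 1.10% of final hands

lost_return = (9 - 8) * FULL_HOUSE_FREQ + (6 - 5) * FLUSH_FREQ
print(f"Dropping from 9/6 to 8/5 costs roughly {lost_return:.2%} of total return")
```

That lines up with the commonly cited gap between the two paytables (around 99.5% versus around 97.3% return with optimal play), so always check the full house and flush columns before you sit down.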

    -

    As a general rule, you should look for games that have high payouts for the lower-ranking hands, such as jacks or better, two pair, and three of a kind. These are the hands that you will get more often, and they will help you sustain your bankroll. You should also look for games that have a bonus payout for certain hands, such as four aces or four deuces. These are the hands that can give you a big boost in your winnings.

    -

    How to Bet and Win

    -

    The final step in playing triple play video poker online is to place your bet and collect your winnings. You can choose how much each credit is worth, from $0.01 to $5.00, depending on the game and the site. You can also choose how many credits you want to bet on each hand, from one to five. The more credits you bet, the higher the payouts.

    -

    However, there is one exception: the royal flush. The royal flush is the highest-ranking hand in video poker, and it consists of a 10, jack, queen, king, and ace of the same suit. The payout for a royal flush is usually 250 credits for each credit bet, except when you bet five credits. In that case, the payout jumps to 800 credits per credit bet, or 4,000 credits in total.

    -

    This means that betting five credits on each hand gives you an extra incentive to hit a royal flush, and it also increases your overall return percentage. Therefore, we advise you to always bet five credits on each hand when playing triple play video poker online. Of course, this also means that you have to adjust your bet size according to your budget and bankroll management.
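The jump at max bet is easy to sanity-check with a few lines of arithmetic. A minimal sketch, using an illustrative helper of our own:

```python
def royal_flush_payout(credits):
    """Total credits returned for a royal flush at a given bet size."""
    # 250 per credit at 1-4 credits, but a flat 4,000 (800 per credit) at max bet
    return 4000 if credits == 5 else 250 * credits

for credits in range(1, 6):
    print(f"{credits} credit(s) bet -> royal flush pays {royal_flush_payout(credits)}")
```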

    -

    Once you have placed your bet and drawn your cards, you will see if any of your hands have a winning combination. If so, you will receive the corresponding payout according to the paytable. You can then choose to collect your winnings or continue playing with them.

    How to Find the Best Triple Play Video Poker Games Online

    -

    Now that you know how to play triple play video poker online, you might be wondering where to find the best games and sites. There are hundreds of online casinos that offer video poker games, but not all of them are trustworthy, fair, and reliable. You have to be careful and do some research before choosing where to play.

    -

    Fortunately, we have done the hard work for you and compiled a list of the best online casinos that offer triple play video poker games. We have tested and reviewed each site based on several criteria, such as:

    -
• The quality and variety of the video poker games. We look for sites that offer different variants of triple play video poker, with high-quality graphics, sound effects, and gameplay.
• The security and safety of the site. We look for sites that use encryption, firewalls, and other measures to protect your personal and financial information.
• The fairness and randomness of the games. We look for sites that use certified random number generators (RNGs) to ensure that the outcomes of the games are fair and unpredictable.
• The bonuses and promotions of the site. We look for sites that offer generous and frequent bonuses and promotions for video poker players, such as welcome bonuses, reload bonuses, loyalty programs, tournaments, and more.
• The customer support and service of the site. We look for sites that offer friendly, professional, and responsive customer support via phone, email, live chat, or social media.

    Based on these criteria, here are our top picks for the best online casinos that offer triple play video poker games:

    -

    The Top 10 Video Poker Games with the Best Odds

    -

    One of the main factors that affect your chances of winning at video poker is the game variant you choose. Different variants have different paytables, rules, and strategies, which affect the return percentage of the game. The return percentage is the amount of money that the game pays back to the players in the long run, on average.

    -

    The higher the return percentage, the better the odds for the player. For example, a game with a 99% return percentage means that for every $100 you bet, you can expect to get back $99 in winnings over time. Of course, this does not mean that you will win every time or that you will never lose money. It just means that you have a better chance of winning in the long run.
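In code, that long-run expectation is a single multiplication. A small sketch, assuming the return percentages quoted for the games listed in the next section:

```python
def expected_payback(total_wagered, return_pct):
    """Long-run average amount a game pays back at a given return percentage."""
    return total_wagered * return_pct / 100.0

for game, rtp in [("Jacks or Better (9/6)", 99.54),
                  ("Bonus Poker (8/5)", 99.17),
                  ("Double Bonus Poker (10/7)", 100.17)]:
    print(f"{game}: bet $100, expect about ${expected_payback(100, rtp):.2f} back")
```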

    -

    Therefore, it is wise to choose video poker games with high return percentages when playing online. Here are the top 10 video poker games with the best odds, according to their paytables:

    -
• Jacks or Better (9/6): 99.54% return. Per-credit paytable: Royal Flush 800, Straight Flush 50, Four of a Kind 25, Full House 9, Flush 6, Straight 4, Three of a Kind 3, Two Pair 2, Jacks or Better 1.
• Bonus Poker (8/5): 99.17% return. Per-credit paytable: Royal Flush 800, Straight Flush 50, Four Aces 80, Four 2s-4s 40, Four 5s-Ks 25, Full House 8, Flush 5, Straight 4, Three of a Kind 3, Two Pair 2, Jacks or Better 1.
• Bonus Poker Deluxe (8/6): 99.64% return. Per-credit paytable: Royal Flush 800, Straight Flush 50, Four of a Kind 80, Full House 8, Flush 6, Straight 4, Three of a Kind 3, Two Pair 1, Jacks or Better 1.
• Double Bonus Poker (10/7): 100.17% return. Per-credit paytable: Royal Flush 800, Straight Flush 50, Four Aces 160, Four 2s-4s 80, Four 5s-Ks 50, Full House 10, Flush 7, Straight 5, Three of a Kind 3, Two Pair 1, Jacks or Better 1.
• Double Double Bonus Poker (9/6): 98.98% return. Per-credit paytable: Royal Flush 800, Straight Flush 50, Four Aces with a 2-4 kicker 400, Four Aces with a 5-K kicker 160, Four 2s-4s with an A-4 kicker 160, Four 2s-4s with a 5-K kicker 80, Four 5s-Ks 50, Full House 9, Flush 6, Straight 4, Three of a Kind 3, Two Pair 1, Jacks or Better 1.
• Joker Poker (Kings or Better): 100.64% return. Per-credit paytable: Royal Flush 800, Five of a Kind 200, Royal Flush with Joker 100, Straight Flush 50, Four of a Kind 20, Full House 7, Flush 5, Straight 3, Three of a Kind 2, Two Pair 1, Kings or Better 1.
... and so on.

    The Best Video Poker Apps and Sites for Mobile and Desktop

    -

    If you want to play triple play video poker online, you need to find a reliable and reputable site that offers the games you want. You also need to make sure that the site is compatible with your device, whether it is a computer, a smartphone, or a tablet.

    -

    The good news is that there are many video poker apps and sites that you can choose from, depending on your preferences and needs. Some of them are dedicated to video poker only, while others offer a variety of casino games, including slots, blackjack, roulette, and more. Some of them are web-based, while others require you to download and install software or an app.

    -

    To help you find the best video poker apps and sites for mobile and desktop, we have reviewed and ranked the top options based on several factors, such as:

    -
• The quality and variety of the video poker games. We look for apps and sites that offer different variants of triple play video poker, with high-quality graphics, sound effects, and gameplay.
• The compatibility and usability of the app or site. We look for apps and sites that work smoothly and seamlessly on different devices, platforms, browsers, and screen sizes.
• The security and safety of the app or site. We look for apps and sites that use encryption, firewalls, and other measures to protect your personal and financial information.
• The bonuses and promotions of the app or site. We look for apps and sites that offer generous and frequent bonuses and promotions for video poker players, such as welcome bonuses, reload bonuses, loyalty programs, tournaments, and more.
• The customer support and service of the app or site. We look for apps and sites that offer friendly, professional, and responsive customer support via phone, email, live chat, or social media.

    Based on these factors, here are our top picks for the best video poker apps and sites for mobile and desktop:

    | Game Variant | Paytable (per credit bet) | Return Percentage |
    |---|---|---|
    | Jacks or Better (9/6) | Royal Flush: 800; Straight Flush: 50; Four of a Kind: 25; Full House: 9; Flush: 6; Straight: 4; Three of a Kind: 3; Two Pair: 2; Jacks or Better: 1 | 99.54% |
    | Bonus Poker (8/5) | Royal Flush: 800; Straight Flush: 50; Four Aces: 80; Four 2s-4s: 40; Four 5s-Ks: 25; Full House: 8; Flush: 5; Straight: 4; Three of a Kind: 3; Two Pair: 2; Jacks or Better: 1 | 99.17% |
    | Bonus Poker Deluxe (8/6) | Royal Flush: 800; Straight Flush: 50; Four of a Kind: 80; Full House: 8; Flush: 6; Straight: 4; Three of a Kind: 3; Two Pair: 1; Jacks or Better: 1 | 99.64% |
    | Double Bonus Poker (10/7) | Royal Flush: 800; Straight Flush: 50; Four Aces: 160; Four 2s-4s: 80; Four 5s-Ks: 50; Full House: 10; Flush: 7; Straight: 5; Three of a Kind: 3; Two Pair: 1; Jacks or Better: 1 | 100.17% |
    | Double Double Bonus Poker (9/6) | Royal Flush: 800; Straight Flush: 50; Four Aces + 2-4: 400; Four Aces + 5-K: 160; Four 2s-4s + A-4: 160; Four 2s-4s + 5-K: 80; Four 5s-Ks: 50; Full House: 9; Flush: 6; Straight: 4; Three of a Kind: 3; Two Pair: 1; Jacks or Better: 1 | 98.98% |
    | Joker Poker (Kings or Better) | Royal Flush: 800; Five of a Kind: 200; Royal Flush with Joker: 100; Straight Flush: 50; Four of a Kind: 20; Full House: 7; Flush: 5; Straight: 3; Three of a Kind: 2; Two Pair: 1; Kings or Better: 1 | 100.64% |

    ...and so on.
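    The return percentage translates directly into an expected cost of play. As a rough sketch (assuming optimal strategy and ignoring variance), you can estimate your long-run loss like this:

```python
def expected_loss(return_pct, credits_per_round, rounds):
    """Long-run expected loss in credits: total wagered times the house edge."""
    return credits_per_round * rounds * (1 - return_pct / 100)

# 9/6 Jacks or Better (99.54% return), betting 5 credits on each of 3 hands,
# over 1,000 rounds: 15,000 credits wagered at a 0.46% house edge.
print(expected_loss(99.54, 15, 1000))  # ~69 credits
```

    Note that for variants returning over 100%, such as Double Bonus Poker (10/7), this figure comes out negative, meaning a long-run expected profit with perfect play.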

    The Best Video Poker Bonuses and Promotions


    Another factor that can enhance your triple play video poker online experience is the bonuses and promotions that you can get from the online casinos. Bonuses and promotions are incentives that the casinos offer to attract and retain players, and they can give you extra money, free spins, cashback, and other benefits.


    However, not all bonuses and promotions are created equal. Some of them are more suitable for video poker players than others, and some of them have terms and conditions that you have to meet before you can withdraw your winnings. Therefore, you have to be careful and read the fine print before claiming any bonus or promotion.


    As a general rule, you should look for bonuses and promotions that have the following characteristics:

    • They are specifically designed for video poker players, or they allow video poker games to contribute fully or partially to the wagering requirements.
    • They have a high percentage match, a high maximum amount, and a low minimum deposit.
    • They have a low wagering requirement, a long validity period, and no maximum cashout limit.
    • They are offered by reputable and trustworthy online casinos that have good ratings and reviews.

    To help you find the best options online, we have selected and ranked our top picks for video poker apps, sites, and bonuses based on these criteria:

    | Name | Type | Description | Rating |
    |---|---|---|---|
    | Video Poker Deluxe Casino | App | A free video poker app that offers over 70 different variants of video poker games, including triple play video poker. You can play with virtual coins or real money, and enjoy daily bonuses, tournaments, leaderboards, achievements, and more. | ★★★★★ |
    | Video Poker.com | Site | A free video poker site that offers over 40 different variants of video poker games, including triple play video poker. You can play with virtual coins or real money, and enjoy daily bonuses, tournaments, leaderboards, achievements, and more. | ★★★★☆ |
    | Video Poker Wizard | App | A paid video poker app that offers over 30 different variants of video poker games, including triple play video poker. You can play with virtual coins or real money, and enjoy features such as strategy charts, statistics, analysis, and more. | ★★★★☆ |
    | Bovada Casino | Site | A real money online casino that offers over 20 different variants of video poker games, including triple play video poker. You can play with US dollars or bitcoins, and enjoy a welcome bonus of up to $3,000, as well as other promotions and rewards. | ★★★★☆ |
    | Ignition Casino Video Poker Bonus | Bonus | A 100% match bonus up to $1,000 for new players who deposit with bitcoin or credit card. The bonus has a 25x wagering requirement, and video poker games contribute 10% to it. | ★★★★★ |

    ...and so on.
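    To see why the wagering requirement and the game contribution rate matter so much, here is a small sketch using the Ignition bonus above as an example:

```python
def video_poker_wagering(bonus, wr_multiplier, contribution_pct):
    """Amount you must actually wager on video poker to clear a bonus,
    given how much of each video poker bet counts toward the requirement."""
    return bonus * wr_multiplier / (contribution_pct / 100)

# $1,000 bonus, 25x wagering requirement, video poker contributes 10%:
print(video_poker_wagering(1000, 25, 10))  # 250000.0 -- $250,000 in wagers
```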

    How to Improve Your Triple Play Video Poker Strategy


    Playing triple play video poker online is not only fun and exciting, but also rewarding and profitable, if you know how to play it well. Video poker is a game of skill and strategy, and you can improve your chances of winning by learning and applying some tips and tricks. Here are some of the best ways to improve your triple play video poker strategy:


    The Basic Video Poker Strategy Chart


    The first thing you need to do is to memorize the basic video poker strategy chart. This is a table that tells you the optimal move for every possible hand, based on the game variant and the paytable. You can find the strategy chart for each game online, or you can use a video poker trainer app or software that will guide you through each decision.


    The basic video poker strategy chart is based on the principle of maximizing the expected value of each hand. The expected value is the average amount of money that you can expect to win or lose from each hand, in the long run. By following the strategy chart, you will always choose the move that gives you the highest expected value, and therefore, the highest return percentage.
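    As an illustration, here is a minimal Python sketch, not an optimal-play solver, that computes the expected value of one common decision in Jacks or Better: holding four cards to a royal flush and drawing one. The 9/6 paytable, the dealt cards, and the hold are assumptions chosen for the example.

```python
from itertools import product
from collections import Counter

# Assumed 9/6 Jacks or Better paytable, per credit bet.
PAYTABLE = {
    "royal_flush": 800, "straight_flush": 50, "four_of_a_kind": 25,
    "full_house": 9, "flush": 6, "straight": 4, "three_of_a_kind": 3,
    "two_pair": 2, "jacks_or_better": 1, "nothing": 0,
}

RANKS = range(2, 15)  # 11 = J, 12 = Q, 13 = K, 14 = A
SUITS = "shdc"

def evaluate(hand):
    """Classify a 5-card hand; each card is a (rank, suit) tuple."""
    ranks = sorted(r for r, _ in hand)
    counts = sorted(Counter(ranks).values(), reverse=True)
    is_flush = len({s for _, s in hand}) == 1
    is_straight = (ranks == list(range(ranks[0], ranks[0] + 5))
                   or ranks == [2, 3, 4, 5, 14])  # ace-low straight
    if is_flush and is_straight:
        return "royal_flush" if ranks[0] == 10 else "straight_flush"
    if counts[0] == 4:
        return "four_of_a_kind"
    if counts[:2] == [3, 2]:
        return "full_house"
    if is_flush:
        return "flush"
    if is_straight:
        return "straight"
    if counts[0] == 3:
        return "three_of_a_kind"
    if counts[:2] == [2, 2]:
        return "two_pair"
    pairs = [r for r, c in Counter(ranks).items() if c == 2]
    if pairs and pairs[0] >= 11:  # pair of jacks or higher (aces are 14)
        return "jacks_or_better"
    return "nothing"

def ev_of_hold(held, dealt):
    """Average payout per credit over every possible one-card draw."""
    deck = [c for c in product(RANKS, SUITS) if c not in dealt]
    return sum(PAYTABLE[evaluate(held + [card])] for card in deck) / len(deck)

# Dealt 10-J-Q-K of spades plus the 3 of hearts; hold the four to a royal.
dealt = [(10, "s"), (11, "s"), (12, "s"), (13, "s"), (3, "h")]
print(f"EV per credit: {ev_of_hold(dealt[:4], dealt):.3f}")  # ~19.681
```

    Each of the three hands in triple play draws independently from its own remaining deck after the shared hold, so this per-hand figure applies to every hand in the round.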


    However, since you are playing three hands at once, you have to consider how your decision affects each hand separately, as we explained before. Sometimes, you might have to deviate from the basic strategy chart and choose a different move that gives you a better overall expected value. This requires some practice and intuition, but it can make a big difference in your results.


    The Advanced Video Poker Tips and Tricks


    Once you have mastered the basic video poker strategy chart, you can take your game to the next level by learning some advanced video poker tips and tricks. These are some of the best ones:

    • Always bet five credits on each hand. This will give you the maximum payout for a royal flush, and increase your overall return percentage.
    • Always check the paytable before playing any game. Look for games that have high payouts for the lower-ranking hands, such as jacks or better, two pair, and three of a kind.
    • Always play games that have a bonus payout for certain hands, such as four aces or four deuces. These are the hands that can give you a big boost in your winnings.
    • Always play games that have a high return percentage, such as Jacks or Better (9/6), Bonus Poker (8/5), Bonus Poker Deluxe (8/6), Double Bonus Poker (10/7), Double Double Bonus Poker (9/6), Joker Poker (Kings or Better), and Deuces Wild (9/5).
    • Always use a video poker trainer app or software to practice your skills and test your strategies. You can also use a video poker calculator or analyzer to check the expected value of each hand and move.

    The Common Video Poker Mistakes to Avoid


    Finally, you should avoid making some common video poker mistakes that can ruin your game and cost you money. These are some of them:

    • Playing too fast or too slow. Playing too fast can make you miss some important details or make errors in judgment. Playing too slow can make you lose focus or get bored. You should find a comfortable pace that suits your style and mood.
    • Playing with emotions or impulses. Playing with emotions or impulses can make you chase losses, bet more than you can afford, or make irrational decisions. You should always play with logic and discipline, and stick to your budget and bankroll management.
    • Playing without a plan or a goal. Playing without a plan or a goal can make you lose track of your progress, performance, or results. You should always have a clear plan and a realistic goal when playing video poker online, and review them regularly.

    Conclusion and FAQs


    In conclusion, triple play video poker is one of the most exciting and rewarding casino games that you can play online. It combines the skill and strategy of poker with the simplicity and thrill of slot machines. And it gives you three times the action and three times the chances to win.


    To play triple play video poker online, you need to know how to choose your cards, read the paytable, bet and win, find the best games and sites, improve your strategy, and avoid common mistakes. By following our guide, you will learn all these skills and more.


    We hope that you enjoyed reading this article and that you found it useful and informative. If you have any questions or comments about triple play video poker online, feel free to contact us anytime. We would love to hear from you!


    Here are some frequently asked questions about triple play video poker online:


    What is the difference between triple play video poker and single hand video poker?


    The main difference between triple play video poker and single hand video poker is that in triple play video poker, you play three hands at once, with the same initial cards. You can choose which cards to hold or discard for each hand separately, and then draw new cards for each hand. The final outcome of each hand is determined by the poker value of your five cards, according to the paytable of the game.


    What are the advantages of playing triple play video poker online?


    Some of the advantages of playing triple play video poker online are:

    • You get more action and excitement. Playing three hands at once means you have more opportunities to make winning combinations and hit big jackpots.
    • You get more variety and challenge. Playing different variants of triple play video poker means you have to adapt your strategy and skills to each game's rules and paytable.
    • You get more control and flexibility. Playing online means you can choose your bet size, game speed, sound effects, and other settings according to your preferences.
    • You get more convenience and comfort. Playing online means you can access your favorite games anytime and anywhere, on your computer or mobile device.

    How can I improve my chances of winning at triple play video poker online?


    Some of the ways you can improve your chances of winning at triple play video poker online are:

    • Learn and follow the basic video poker strategy chart for each game variant and paytable.
    • Balance the risk and reward of each hand, and choose the cards that give you the best overall expected value.
    • Bet five credits on each hand to get the maximum payout for a royal flush, and increase your overall return percentage.
    • Check the paytable before playing any game, and look for games that have high payouts for the lower-ranking hands, such as jacks or better, two pair, and three of a kind.
    • Play games that have a bonus payout for certain hands, such as four aces or four deuces.
    • Play games that have a high return percentage, such as Jacks or Better (9/6), Bonus Poker (8/5), Bonus Poker Deluxe (8/6), Double Bonus Poker (10/7), Double Double Bonus Poker (9/6), Joker Poker (Kings or Better), and Deuces Wild (9/5).
    • Use a video poker trainer app or software to practice your skills and test your strategies.
    • Avoid making common video poker mistakes, such as playing too fast or too slow, playing with emotions or impulses, or playing without a plan or a goal.

    Where can I find the best triple play video poker games and sites online?


    You can find the best triple play video poker games and sites online by using our guide. We have tested and reviewed each site based on several criteria, such as:

    • The quality and variety of the video poker games. We look for sites that offer different variants of triple play video poker, with high-quality graphics, sound effects, and gameplay.
    • The security and safety of the site. We look for sites that use encryption, firewalls, and other measures to protect your personal and financial information.
    • The fairness and randomness of the games. We look for sites that use certified random number generators (RNGs) to ensure that the outcomes of the games are fair and unpredictable.
    • The bonuses and promotions of the site. We look for sites that offer generous and frequent bonuses and promotions for video poker players, such as welcome bonuses, reload bonuses, loyalty programs, tournaments, and more.
    • The customer support and service of the site. We look for sites that offer friendly, professional, and responsive customer support via phone, email, live chat, or social media.

    How can I play triple play video poker online for free?


    You can play triple play video poker online for free by using one of the following methods:

    • You can use a free video poker app or site that offers different variants of triple play video poker. You can play with virtual coins and enjoy daily bonuses, tournaments, leaderboards, achievements, and more.
    • You can use a free trial or demo mode at a real money online casino that offers triple play video poker games. You can play with virtual coins, but you will not be able to withdraw your winnings until you make a deposit.
    • You can use a no deposit bonus or free spins at a real money online casino that offers triple play video poker games. You can play with real money without making a deposit, and keep your winnings if you meet the wagering requirements.

    However, keep in mind that playing for free is not the same as playing for real money. When you play for free, you might not have access to all the features, games, and bonuses that the site offers. You might also have a different mindset and attitude when you play for free, which can affect your strategy and performance. Therefore, we recommend playing for real money if you want to enjoy the full benefits and excitement of triple play video poker online.

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Geometry Dash World APK Play Online Levels Daily Quests and More in this Rythm-based Adventure.md b/spaces/congsaPfin/Manga-OCR/logs/Geometry Dash World APK Play Online Levels Daily Quests and More in this Rythm-based Adventure.md deleted file mode 100644 index 7d32a6d07622290cb7f8c916e215c2892c9676d8..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Geometry Dash World APK Play Online Levels Daily Quests and More in this Rythm-based Adventure.md +++ /dev/null @@ -1,159 +0,0 @@ -

    Geometry Dash World Full APK: A Rhythm-Based Action Platformer


    Do you love jumping, flying and flipping through challenging levels? Do you enjoy listening to catchy music while playing a fun game? If yes, then you should try Geometry Dash World, a popular arcade game that will test your reflexes and skills. In this article, we will tell you everything you need to know about Geometry Dash World full apk, how to download and install it, how to play online levels, and how to customize your character.


    What is Geometry Dash World?


    Geometry Dash World is a game developed by RobTop Games, the same creator of the original Geometry Dash. It is a spin-off of the main series, featuring new levels, music, monsters, and everything else. It was released in December 2016 for Android and iOS devices.


    geometry dash world full apk


    DOWNLOAD ——— https://urlca.com/2uOesM




    The gameplay of Geometry Dash World


    The gameplay of Geometry Dash World is similar to the other games in the series. You control a geometric shape that can jump, fly, and flip through various obstacles. You have to tap the screen at the right time to avoid crashing or falling. The game is rhythm-based, meaning that the music syncs with the level design and the obstacles. The game is also very hard, requiring precise timing and fast reactions.


    The features of Geometry Dash World


    Geometry Dash World has many features that make it an enjoyable and addictive game. Some of these features are:

    • Ten unique levels with music from Dex Arson, Waterflame and F-777
    • Daily quests and rewards
    • Online levels created by the Geometry Dash community
    • Unique icons and colors to customize your character
    • Rockets, gravity switches, portals, and more
    • Practice mode to sharpen your skills
    • Near impossible challenges

    How to download and install Geometry Dash World full apk?


    If you want to enjoy all the features of Geometry Dash World without any limitations or ads, you can download and install the full apk version of the game. This will give you access to all the levels, icons, colors, secrets, and achievements in the game.


    The benefits of downloading the full apk


    Downloading the full apk of Geometry Dash World has many benefits, such as:

    • You can play offline without any internet connection
    • You can save your progress and data on your device
    • You can avoid any annoying ads or pop-ups
    • You can support the developer and appreciate their work

    The steps to download and install the full apk


    To download and install the full apk of Geometry Dash World, you need to follow these steps:

    1. Go to [this link] and download the apk file (71 MB). You can verify the downloaded file's integrity afterwards, as shown in the sketch after this list.
    2. Enable unknown sources in your device settings.
    3. Locate the downloaded file in your file manager and tap on it.
    4. Install the apk file and wait for it to finish.
    5. Launch the game and enjoy!
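    Before installing, it is worth making sure the file you downloaded was not tampered with. Here is a minimal Python sketch, assuming the download site publishes a SHA-256 checksum; the file name and expected value are placeholders.

```python
# A minimal sketch: verify the downloaded APK against a published checksum.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "..."  # paste the checksum published by the download site here
actual = sha256_of("geometry-dash-world.apk")  # placeholder file name
print("OK" if actual == EXPECTED else f"Mismatch: {actual}")
```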

    How to play Geometry Dash World online levels?


    One of the best features of Geometry Dash World is that you can play online levels created by other players from around the world. These levels are uploaded to a server where you can browse, rate, comment, and play them. You can also create your own levels using the level editor and share them with others.


    The types of online levels in Geometry Dash World

    There are different types of online levels in Geometry Dash World, depending on their difficulty, length, and style. Some of the most common types are:

    | Type | Description |
    |---|---|
    | Easy | These levels are suitable for beginners and have simple obstacles and patterns. |
    | Normal | These levels are slightly harder than easy levels and have more variety and challenge. |
    | Hard | These levels are for experienced players and have complex and fast obstacles and patterns. |
    | Harder | These levels are for advanced players and have very difficult and tricky obstacles and patterns. |
    | Insane | These levels are for expert players and have extremely hard and insane obstacles and patterns. |
    | Demon | These levels are for the most skilled players and have nearly impossible and frustrating obstacles and patterns. |
    | Short | These levels are less than 30 seconds long and have a high intensity and pace. |
    | Medium | These levels are between 30 seconds and 1 minute long and have a balanced intensity and pace. |
    | Long | These levels are more than 1 minute long and have a low intensity and pace. |
    | XStep | These levels are inspired by the XStep level from the original Geometry Dash and have a lot of spikes, jumps, and portals. |
    | Theory of Everything | These levels are inspired by the Theory of Everything level from the original Geometry Dash and have a lot of gravity switches, flying sections, and wave mode. |

    ...and more.

    The tips and tricks to play online levels


    To play online levels in Geometry Dash World, you need to have some tips and tricks up your sleeve. Here are some of them:

    • Practice mode: Use the practice mode to learn the layout and patterns of the level before attempting it in normal mode. You can place checkpoints along the way to resume from where you left off.
    • Rhythm: Follow the rhythm of the music to time your taps correctly. The music is synced with the level design and the obstacles, so you can use it as a guide.
    • Persistence: Don't give up easily if you fail a level. Try again and again until you master it. You will improve your skills and reflexes as you play more.
    • Finger position: Find a comfortable finger position to tap the screen. You can use one finger or two fingers, depending on your preference. You can also adjust the sensitivity of the touch screen in the settings.
    • Focus: Focus on the level and avoid any distractions. You need to pay attention to every detail and movement in the level. You can also use headphones to immerse yourself in the music.
    • Fun: Have fun playing online levels. Don't stress too much about completing them or getting high scores. Enjoy the game and appreciate the creativity of the level creators.

    How to customize your character in Geometry Dash World?


    Another cool feature of Geometry Dash World is that you can customize your character with different icons and colors. You can also unlock secrets and achievements that will give you more options to personalize your character.


    The icons and colors in Geometry Dash World


    You can choose from various icons and colors to change the appearance of your character. You can select an icon for your cube, ship, ball, UFO, wave, robot, or spider form. You can also select a primary color and a secondary color for your character. You can unlock more icons and colors by completing levels, quests, achievements, or secrets.


    The secrets and achievements in Geometry Dash World


    You can unlock secrets and achievements by performing certain actions or finding hidden objects in the game. Some of these secrets and achievements are:

    • The Vault: The Vault is a secret area where you can enter codes to unlock rewards. You can find The Vault by tapping on the lock icon in the settings menu. You can get codes by completing quests or finding clues in the game.
    • The Treasure Room: The Treasure Room is another secret area where you can collect chests that contain rewards. You can find The Treasure Room by tapping on the purple door in The World map. You can get chests by completing achievements or finding keys in the game.
    • The Shop: The Shop is where you can buy icons, colors, and trails for your character using orbs. You can find The Shop by tapping on the shop icon in the main menu. You can get orbs by playing levels or opening chests.
    • The Community Shop: The Community Shop is where you can buy icons and colors made by other players using diamonds. You can find The Community Shop by tapping on the community shop icon in the main menu. You can get diamonds by playing online levels or opening chests.
    • The Scratch's Shop: The Scratch's Shop is where you can buy special icons and colors using shards. You can find The Scratch's Shop by tapping on the scratch icon in the main menu. You can get shards by playing levels or opening chests.

    Conclusion


    Geometry Dash World is a fun and challenging game that will keep you entertained for hours. You can download and install the full apk version of the game to enjoy all the features without any limitations or ads. You can also play online levels created by other players and customize your character with different icons and colors. Geometry Dash World is a game that will test your skills, reflexes, and rhythm. Are you ready to dash into the world of geometry?


    FAQs


    Here are some frequently asked questions about Geometry Dash World full apk:

    1. Q: Is Geometry Dash World full apk safe to download and install?

    A: Yes, Geometry Dash World full apk is safe to download and install, as long as you use a trusted source like [this one]. However, you should always be careful when downloading files from unknown sources and scan them for viruses or malware before installing them.

    2. Q: Do I need to root my device to install Geometry Dash World full apk?

    A: No, you don't need to root your device to install Geometry Dash World full apk. You just need to enable unknown sources in your device settings and follow the steps mentioned above.

    3. Q: Can I play Geometry Dash World full apk on PC?

    A: Yes, you can play Geometry Dash World full apk on PC using an Android emulator like BlueStacks or NoxPlayer. You just need to download and install the emulator on your PC and then follow the same steps as you would on your mobile device.

    4. Q: How can I update Geometry Dash World full apk?

    A: To update Geometry Dash World full apk, you need to download and install the latest version of the apk file from [this link]. You don't need to uninstall the previous version, as the new one will overwrite it.

    5. Q: How can I contact the developer of Geometry Dash World?

    A: You can contact the developer of Geometry Dash World by sending an email to support@robtopgames.com or visiting their website at www.robtopgames.com.

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Use GreenPois0n RC5 for Windows to Jailbreak iPhone iPad and iPod Touch on iOS 4.2.1.md b/spaces/congsaPfin/Manga-OCR/logs/How to Use GreenPois0n RC5 for Windows to Jailbreak iPhone iPad and iPod Touch on iOS 4.2.1.md deleted file mode 100644 index 1df97e31ffd3641e0e934600fa904faadfdc709c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Use GreenPois0n RC5 for Windows to Jailbreak iPhone iPad and iPod Touch on iOS 4.2.1.md +++ /dev/null @@ -1,118 +0,0 @@ -

    How to Download Greenpois0n RC5 4.2.1 for Windows


    If you are an iPhone, iPad or iPod touch user who wants to jailbreak your device running iOS 4.2.1, you might be interested in downloading Greenpois0n RC5 4.2.1 for Windows. This is a tool that allows you to jailbreak your device untethered, meaning that you don't have to connect it to a computer every time you reboot it.


    download greenpois0n rc5 4.2.1 for windows


    DOWNLOADhttps://urlca.com/2uOahE




    In this article, we will explain what Greenpois0n RC5 4.2.1 is, how to download it for Windows, and how to use it to jailbreak your device. We will also show you how to use Cydia, the jailbreak app store, where you can find many useful apps, tweaks and themes that are not available on the official App Store.


    What is Greenpois0n RC5 4.2.1?


    Greenpois0n RC5 4.2.1 is a tool developed by the Chronic Dev Team, a group of hackers who specialize in jailbreaking iOS devices. Jailbreaking is a process that removes the restrictions imposed by Apple on its devices, allowing users to customize them and install apps that are not approved by Apple.


    A tool to jailbreak iOS 4.2.1 untethered


    Greenpois0n RC5 4.2.1 is designed to jailbreak iOS devices running iOS 4.2.1, which is an older version of iOS that was released in November 2010. It supports iPhone 4, iPhone 3GS, iPod touch 4G/3G/2G and iPad.


    One of the main features of Greenpois0n RC5 4.2.1 is that it can jailbreak iOS devices untethered, which means that you don't have to connect your device to a computer every time you reboot it or turn it off and on again. This makes your device more convenient and stable after jailbreaking.


    Compatible with iPhone, iPad and iPod touch


    Greenpois0n RC5 4.2.1 is compatible with the following iOS devices:

    | Device | Model |
    |---|---|
    | iPhone | iPhone 4, iPhone 3GS |
    | iPad | iPad (first generation) |
    | iPod touch | iPod touch 4G/3G/2G |

    Benefits of jailbreaking with Greenpois0n RC5 4.2.1


    Jailbreaking your device with Greenpois0n RC5 4.2.1 has many benefits, such as:

    • You can install Cydia, the jailbreak app store, where you can find many apps, tweaks and themes that are not available on the official App Store.
    • You can customize your device's look and feel, such as changing the icons, fonts, colors, wallpapers, etc.
    • You can unlock your device and use it with any carrier or network of your choice.
    • You can enhance your device's performance and functionality, such as adding multitasking, widgets, gestures, etc.
    • You can access the root file system of your device and modify it as you wish.

    Of course, jailbreaking also has some risks and drawbacks, such as voiding your warranty, exposing your device to security threats, and causing instability or compatibility issues. Therefore, you should always backup your device before jailbreaking and follow the instructions carefully.


    How to Download Greenpois0n RC5 4.2.1 for Windows


    If you are ready to jailbreak your device with Greenpois0n RC5 4.2.1, you will need to download the tool for Windows and follow the steps below.


    Requirements


    Before you start, make sure you have the following requirements:


    • A Windows PC with an internet connection.
    • A USB cable to connect your device to your PC.
    • A device running iOS 4.2.1 that is compatible with Greenpois0n RC5 4.2.1 (see above).
    • A backup of your device's data in case something goes wrong.

    Steps


    Once you have everything ready, you can proceed with the following steps:


    Step 1: Backup your device and quit iTunes


    The first step is to backup your device using iTunes or iCloud. This will ensure that you don't lose any important data in case something goes wrong during the jailbreak process. To backup your device using iTunes, connect it to your PC and open iTunes. Then, click on the device icon in the upper left corner and select "Back Up Now". To backup your device using iCloud, go to Settings > iCloud > Backup and tap on "Back Up Now".


    After backing up your device, make sure you quit iTunes and any other programs that might interfere with the jailbreak process.


    Step 2: Download Greenpois0n RC5 4.2.1 for Windows


    The next step is to download Greenpois0n RC5 4.2.1 for Windows from the official website of the Chronic Dev Team. You can also use this mirror link if the official website is down or slow. The file size is about 16 MB and the file name is gp_win_rc5_b4.zip.


    After downloading the file, extract it to a folder on your desktop or any other location that is easy to access.
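    If you prefer to script the extraction, here is a minimal Python sketch; it simply unpacks the archive into a folder next to it, using the file name mentioned above.

```python
# A minimal sketch: unpack the Greenpois0n archive into a local folder.
import zipfile

with zipfile.ZipFile("gp_win_rc5_b4.zip") as archive:
    archive.extractall("greenpois0n_rc5")  # creates the folder if needed
```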


    Step 3: Run Greenpois0n and follow the instructions


    The third step is to run Greenpois0n and follow the instructions on the screen. To do this, double-click on the greenpois0n.exe file that you extracted in the previous step. You will see a window like this:

    [Screenshot: the Greenpois0n window]

    Before you click on "Prepare to Jailbreak (DFU)", make sure your device is turned off and connected to your PC via USB. Then, click on the button and follow the instructions to put your device into DFU mode. DFU mode is a special mode that allows your device to communicate with Greenpois0n and accept the jailbreak code. To enter DFU mode, you will need to press and hold the power and home buttons on your device for a certain amount of time.


    The instructions will tell you how long to press each button and when to release them. You will also see a countdown timer on the screen that will guide you through the process. If you do everything correctly, you will see a message saying "Ready to Jailbreak" and a button saying "Jailbreak". If not, you will see a message saying "Try Again" and a button saying "Retry". In that case, try again until you succeed.


    Once you are ready to jailbreak, click on the "Jailbreak" button and wait for Greenpois0n to do its magic. You will see some code running on your device's screen and a progress bar on your PC's screen. This may take a few minutes, so be patient and do not disconnect or interrupt your device.


    Step 4: Install Cydia from the Loader app


    The final step is to install Cydia from the Loader app that Greenpois0n will install on your device after the jailbreak. Cydia is the jailbreak app store, where you can find many apps, tweaks and themes that are not available on the official App Store.


    To install Cydia, go to your device's home screen and look for an icon that says "Loader". Tap on it and you will see a screen like this:

    [Screenshot: the Loader app]

    Tap on the "Cydia" button and wait for the app to download and install. You may need to confirm some prompts or restart your device during the process. Once Cydia is installed, you will see its icon on your home screen. You can then delete the Loader app by tapping and holding on it and tapping on the "X" button.


    How to Use Cydia, the Jailbreak App Store


    Now that you have Cydia on your device, you can start exploring the world of jailbreak apps, tweaks and themes. Here are some tips on how to use Cydia:


    What is Cydia?


    Cydia is an app that allows you to browse and install software packages that are not available on the official App Store. These packages are created by independent developers and hackers who want to enhance or modify the iOS experience. Some of these packages are free, while others are paid or require a donation.


    Cydia works by accessing repositories, which are online sources that host the packages. There are many repositories that you can add to Cydia, each offering different kinds of packages. Some of the most popular repositories are BigBoss, ModMyi, ZodTTD & MacCiti, and Saurik's own repository.


    How to access Cydia


    To access Cydia, simply tap on its icon on your home screen. You will see a screen like this:

    [Screenshot: the Cydia home screen]

    On the bottom of the screen, you will see five tabs: Home, Sections, Changes, Manage and Search. Here is what each tab does:

    • Home: This tab shows you some information and news about Cydia, such as featured packages, updates, tips and tutorials.
    • Sections: This tab shows you the categories of packages that are available on Cydia, such as Themes, Tweaks, Utilities, Games, etc. You can browse through them and find what you are looking for.
    • Changes: This tab shows you the latest updates and additions to the packages that are available on Cydia. You can also refresh this tab to check for new updates.
    • Manage: This tab shows you the sources (repositories) and packages that you have installed or added to Cydia. You can also add or remove sources and packages from here.
    • Search: This tab allows you to search for a specific package by name or keyword. You can also use filters to narrow down your search results.

    How to install apps, tweaks and themes from Cydia


    To install an app, tweak or theme from Cydia, follow these steps:

    1. Find the package that you want to install by browsing through the sections or searching for it.
    2. Tap on the package name and you will see a screen with more information about it, such as description, screenshots, ratings, etc.
    3. If you want to install the package, tap on the "Install" button on the top right corner. If the package is paid or requires a donation, you will need to purchase it or donate first before installing it.
    4. You will see a confirmation screen with the details of the package and its dependencies (other packages that are required for it to work). Tap on "Confirm" to proceed with the installation.
    5. Cydia will download and install the package and its dependencies. You may need to restart your device or respring (restart the springboard) after the installation.
    6. Once the installation is done, you will see a screen with a button saying "Return to Cydia". Tap on it and you will go back to Cydia.

    You can now enjoy your new app, tweak or theme on your device. To uninstall a package from Cydia, simply go to the Manage tab > Packages > tap on the package name > tap on "Modify" > tap on "Remove".


    Conclusion

    Greenpois0n RC5 4.2.1 makes it easy to jailbreak your iPhone, iPad or iPod touch running iOS 4.2.1 untethered. Just back up your device, download the tool for Windows, follow the DFU instructions, and install Cydia from the Loader app. From there, you can explore the many apps, tweaks and themes that are not available on the official App Store.

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Mortal Kombat XL APK - Join the Online Multiplayer Faction Wars and Compete with Other Players Worldwide.md b/spaces/congsaPfin/Manga-OCR/logs/Mortal Kombat XL APK - Join the Online Multiplayer Faction Wars and Compete with Other Players Worldwide.md deleted file mode 100644 index 8ab510dedc4396b1aaa16aa0d00bc7ee0b83fec9..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Mortal Kombat XL APK - Join the Online Multiplayer Faction Wars and Compete with Other Players Worldwide.md +++ /dev/null @@ -1,132 +0,0 @@ - -

    Mortal Kombat XL Download APK: How to Play the Ultimate Fighting Game on Your Android Device


    If you are a fan of fighting games, you must have heard of Mortal Kombat, one of the most iconic and brutal franchises in the genre. Mortal Kombat has been around since 1992, and it has evolved over the years with new characters, graphics, gameplay, and storylines. One of the latest and most popular entries in the series is Mortal Kombat XL, which was released in 2016 as an enhanced version of Mortal Kombat X.


    Mortal Kombat XL is a game that offers a lot of content, variety, and fun for both casual and hardcore players. It has a rich roster of fighters, each with their own unique abilities, moves, and fatalities. It has several game modes, such as story mode, arcade mode, online mode, and tower mode. It has stunning graphics, animations, and sound effects that make every fight feel realistic and immersive. And it has the signature gore and violence that make Mortal Kombat stand out from other fighting games.


    mortal kombat xl download apk


    Download Zip ►►► https://urlca.com/2uO8AY




    But what if you want to play Mortal Kombat XL on your Android device? Is it possible to enjoy this game on your smartphone or tablet? The answer is yes, thanks to the Mortal Kombat XL download APK. This is a file that allows you to install and run Mortal Kombat XL on your Android device without any hassle. In this article, we will show you how to download and install Mortal Kombat XL APK on your Android device, and how to enjoy it to the fullest.


    What is Mortal Kombat XL?


    Mortal Kombat XL is an updated version of Mortal Kombat X, which was released in 2015. Mortal Kombat X is the tenth main installment in the Mortal Kombat series, and it continues the story of the previous games. It is set 25 years after the events of Mortal Kombat 9, and it features a new generation of fighters who have to face a new threat from the Outworld.


    The main features of Mortal Kombat XL


    Mortal Kombat XL has many features that make it one of the best fighting games ever made. Some of these features are:

    • It has over 30 playable characters, including fan-favorites like Scorpion, Sub-Zero, Raiden, Liu Kang, Kitana, Sonya Blade, Johnny Cage, and more. It also has new characters like Cassie Cage, Jacqui Briggs, Takeda Takahashi, D'Vorah, Erron Black, Ferra/Torr, and more. And it has guest characters from other franchises like Alien, Predator, Jason Voorhees, and Leatherface, plus returning klassics like Bo' Rai Cho.
    • It has four different variations for each character, which change their appearance, abilities, moves, and strategies. For example, Scorpion can choose between Ninjutsu, Hellfire, Inferno, or Flame Fist variations.
    • It has a cinematic story mode that spans 12 chapters and follows the events of Mortal Kombat X. It also has an arcade mode that lets you fight against random opponents until you reach the final boss.
    • It has an online mode that lets you compete with other players around the world in ranked matches or casual matches. You can also join factions and participate in faction wars that reward you with points and rewards. You can also chat with other players and create or join rooms for custom matches.
    • It has a tower mode that lets you challenge yourself with different objectives and modifiers. For example, you can play the Test Your Luck tower, which randomly applies effects like low gravity, poison, bombs, or inverted controls to your fights. Or you can play the Living Towers, which change every hour, day, or week with new challenges and rewards.
    • It has a krypt mode that lets you explore a vast underground area filled with secrets, puzzles, and treasures. You can use the coins you earn from playing the game to unlock new costumes, fatalities, brutalities, concept art, and more.
    • It has stunning graphics that use a modified Unreal Engine 3 to create realistic and detailed environments, characters, and effects. It also has smooth and responsive gameplay that supports 60 frames per second on most devices.
    • It has the signature gore and violence that make Mortal Kombat famous. It has brutal and creative fatalities that finish off your opponents in gruesome ways. It also has brutalities that let you perform quick and savage kills during the fight. And it has x-ray moves that show the damage you inflict on your opponent's bones and organs in slow motion.

    The difference between Mortal Kombat X and Mortal Kombat XL


    Mortal Kombat XL is an enhanced version of Mortal Kombat X that includes all the downloadable content (DLC) that was released for the original game. This means that Mortal Kombat XL has more characters, costumes, stages, and features than Mortal Kombat X. For example, Mortal Kombat XL has nine additional characters that were not available in Mortal Kombat X: Alien, Predator, Jason Voorhees, Leatherface, Bo' Rai Cho, Triborg, Tremor, Tanya, and Goro. It also has new costumes for some of the existing characters, such as Cyber Sub-Zero, Revenant Liu Kang, Revenant Kitana, Revenant Jax, Revenant Kung Lao, Dark Emperor Liu Kang, Dark Empress Kitana, and more. It also has new stages for some of the game modes, such as the Pit Stage for arcade mode and the Refugee Kamp Stage for online mode.


    If you already own Mortal Kombat X on your Android device, you can upgrade to Mortal Kombat XL by purchasing the XL Pack from the in-game store. This will give you access to all the DLC content that is included in Mortal Kombat XL. However, if you do not own Mortal Kombat X on your Android device, you can download and install Mortal Kombat XL APK directly from a trusted source online.


    How to download and install Mortal Kombat XL APK on your Android device


    If you want to play Mortal Kombat XL on your Android device, you will need to download and install Mortal Kombat XL APK from a reliable website. This is a file that contains the game data and allows you to run it on your device without any problems. However, before you do that, you will need to make sure that your device meets the requirements for running Mortal Kombat XL APK.


    -

    The requirements for running Mortal Kombat XL APK


    Mortal Kombat XL APK is a large and demanding game that requires a powerful device to run smoothly and without errors. Here are some of the minimum requirements for running Mortal Kombat XL APK on your Android device:

    • Your device must have Android 5.0 or higher as its operating system.
    • Your device must have at least 2 GB of RAM and 4 GB of free storage space.
    • Your device must have a quad-core processor with a clock speed of at least 1.5 GHz.
    • Your device must have a GPU that supports OpenGL ES 3.1 or higher.
    • Your device must have a stable internet connection for downloading the game data and playing online modes.
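    As a quick way to sanity-check a device against this list, here is a minimal Python sketch; the device values shown are hypothetical, and you would fill in your own.

```python
# Minimum requirements from the list above.
MINIMUM = {"android": (5, 0), "ram_gb": 2, "free_storage_gb": 4, "cpu_ghz": 1.5}

def failed_requirements(device):
    """Return the names of any minimum requirements the device does not meet."""
    return [key for key, minimum in MINIMUM.items() if device[key] < minimum]

# Hypothetical device specs -- replace with your own values.
my_device = {"android": (7, 1), "ram_gb": 3, "free_storage_gb": 8, "cpu_ghz": 2.0}
print(failed_requirements(my_device) or "Device meets the minimum requirements")
```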

    If your device meets these requirements, you can proceed to download and install Mortal Kombat XL APK on your Android device.


    The steps for downloading and installing Mortal Kombat XL APK


    Here are the steps for downloading and installing Mortal Kombat XL APK on your Android device:

    1. Go to a reputable website that offers Mortal Kombat XL APK for download. Make sure that the website is safe and secure by checking its reviews and ratings.
    2. Download the Mortal Kombat XL APK file to your device. The file size may vary depending on the website, but it should be around 1 GB.
    3. Once the download is complete, go to your device settings and enable the installation of apps from unknown sources. This will allow you to install the Mortal Kombat XL APK file on your device.
    4. Locate the Mortal Kombat XL APK file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
    5. After the installation is done, launch the Mortal Kombat XL app on your device. The app will ask you to download additional data for the game, which may take some time depending on your internet speed. The additional data size may vary depending on the website, but it should be around 2 GB.
    6. Once the additional data is downloaded, you can start playing Mortal Kombat XL on your Android device. Enjoy!
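    If you have a PC handy, you can also sideload the APK over USB with adb instead of tapping through the file manager. Here is a minimal Python sketch, assuming adb is installed and USB debugging is enabled on the device; the file name is a placeholder.

```python
# A minimal sketch: install an APK on a connected Android device via adb.
import subprocess

def sideload_apk(apk_path):
    """Run `adb install -r` to install (or reinstall) the given APK."""
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

sideload_apk("mortal-kombat-xl.apk")  # placeholder file name
```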

    The tips for playing Mortal Kombat XL APK smoothly and safely


    Mortal Kombat XL APK is a great way to enjoy Mortal Kombat XL on your Android device, but it also comes with some risks and challenges. Here are some tips for playing Mortal Kombat XL APK smoothly and safely:

    • Make sure that you download Mortal Kombat XL APK from a trusted and verified website. Do not download it from any random or suspicious website, as it may contain viruses, malware, or spyware that can harm your device or steal your personal information.
    • Make sure that you have enough storage space and battery life on your device before downloading and installing Mortal Kombat XL APK. The game requires a lot of space and power to run properly, so you do not want to run out of either while playing.
    • Make sure that you have a good internet connection when downloading the game data and playing online modes. The game requires a stable and fast internet connection to download the data and connect with other players. If your connection is slow or unstable, you may experience lag, glitches, or disconnections while playing.
    • Make sure that you update the game regularly when new updates are available. The game developers may release new updates that fix bugs, improve performance, or add new features to the game. You can check for updates in the game settings or in the app store.
    • Make sure that you play the game responsibly and respectfully. Do not cheat, hack, or mod the game in any way, as it may ruin the game experience for yourself and others. Do not harass, insult, or threaten other players online, as it may result in a ban or a report. And do not play the game for too long or too often, as it may affect your health and well-being.

    How to enjoy Mortal Kombat XL APK to the fullest


    Mortal Kombat XL APK is a game that offers a lot of fun and excitement for anyone who loves fighting games. It has a lot of content, variety, and challenge that will keep you entertained for hours. Here are some tips on how to enjoy Mortal Kombat XL APK to the fullest:


    The best characters to use in Mortal Kombat XL APK


    Mortal Kombat XL APK has a large and diverse roster of characters that you can choose from. Each character has their own strengths, weaknesses, styles, and personalities that make them unique and fun to play with. However, some characters may be better than others depending on your preferences and skills. Here are some of the best characters to use in Mortal Kombat XL APK:

    | Character | Reason |
    |---|---|
    | Scorpion | Scorpion is one of the most iconic and popular characters in Mortal Kombat history. He is a ninja who wields a spear and controls fire. He is fast, agile, and versatile, with a lot of combos, mix-ups, and mobility options. He can also teleport behind his opponents and catch them off guard. He is a great character for beginners and experts alike. |
    | Sub-Zero | Sub-Zero is another classic and fan-favorite character in Mortal Kombat history. He is a ninja who manipulates ice and cold. He is strong, durable, and defensive, with a lot of tools to control the space and tempo of the fight. He can freeze his opponents with his ice ball or ice clone, create ice weapons or walls, and slide under his opponents' attacks. He is a great character for intermediate and advanced players. |
    | Cassie Cage | Cassie Cage is one of the new characters introduced in Mortal Kombat X. She is the daughter of Johnny Cage and Sonya Blade, and she inherits their fighting skills and charisma. She is a balanced and well-rounded character, with a lot of options for offense and defense. She can use her pistols, baton, drone, or martial arts to attack her opponents from different ranges and angles. She can also use her nut punch or x-ray move to deal massive damage and stun her opponents. She is a great character for any level of player. |
    | Alien | Alien is one of the guest characters from the Alien franchise. He is a terrifying and deadly creature that uses his claws, tail, teeth, and acid to hunt and kill his prey. He is fast, aggressive, and unpredictable, with a lot of pressure, damage, and range. He can also use his facehugger or chestburster to create traps or setups for his opponents. He is a great character for advanced and expert players. |

    The best game modes to play in Mortal Kombat XL APK

    -

    Mortal Kombat XL APK has several game modes that you can play and enjoy. Each game mode has its own objectives, rules, and rewards that make them different and fun. Here are some of the best game modes to play in Mortal Kombat XL APK:

    -
      -
    • Story mode: This is the mode where you can experience the story of Mortal Kombat X and XL. You can follow the events of the game through 12 chapters, each focusing on a different character. You can watch cinematic cutscenes, dialogue, and fights that advance the plot and reveal the secrets of the Mortal Kombat universe. You can also unlock new costumes and achievements by completing the story mode.
    • -
    • Arcade mode: This is the mode where you can fight against random opponents until you reach the final boss. You can choose your character, variation, difficulty, and number of rounds. You can also see your stats, such as wins, losses, fatalities, brutalities, and more. You can also unlock new endings for each character by completing the arcade mode.
    • -
    • Online mode: This is the mode where you can compete with other players around the world in various modes. You can play ranked matches or casual matches with players of similar skill level. You can also join factions and participate in faction wars that reward you with points and rewards. You can also chat with other players and create or join rooms for custom matches.
    • -
    • Tower mode: This is the mode where you can challenge yourself with different objectives and modifiers. You can play the Test Your Luck tower, which randomly applies effects like low gravity, poison, bombs, or inverted controls to your fights. Or you can play the Living Towers, which change every hour, day, or week with new challenges and rewards.
    • -
    • Krypt mode: This is the mode where you can explore a vast underground area filled with secrets, puzzles, and treasures. You can use the coins you earn from playing the game to unlock new costumes, fatalities, brutalities, concept art, and more.
    • -
    -

    The best tips and tricks to master Mortal Kombat XL APK

    -

    Mortal Kombat XL APK is a game that requires skill, strategy, and practice to master. It is not a game that you can win by button mashing or spamming moves. It is a game that rewards you for learning the mechanics, characters, combos, and tactics of the game. Here are some of the best tips and tricks to master Mortal Kombat XL APK:

    -
      -
    • Learn the basics: Before you jump into the game modes, make sure that you learn the basics of the game. Learn how to move, block, attack, grab, throw, break, and counter.
    • And whatever source you use, check the trustworthiness of the website before downloading and installing Mortal Kombat XL APK.

    FAQs about Mortal Kombat XL APK

      -
    • Is Mortal Kombat XL APK legal?
    • -

      Mortal Kombat XL APK is legal to download and install from a legitimate website. However, some websites may offer pirated or cracked files that violate the intellectual property rights of the game developers. Therefore, you should always respect the law and the game developers by downloading and installing Mortal Kombat XL APK from an authorized website.

      -
    • Is Mortal Kombat XL APK compatible with my device?
    • -

      Mortal Kombat XL APK is compatible with most Android devices that meet the minimum requirements for running the game. However, some devices may have issues or errors with the game due to different specifications or models. Therefore, you should always check the compatibility of your device with the game before downloading and installing Mortal Kombat XL APK.

      -
    • How can I contact the support team of Mortal Kombat XL APK?
    • -

      If you have any questions, problems, or feedback about Mortal Kombat XL APK, you can contact the support team of the game by visiting their official website, social media pages, or email address. You can also check their online forums, FAQs, or guides for more information and help.

      -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Play Melon Playground on Your Mac with BlueStacks The Ultimate Android Emulator for PC and Mac.md b/spaces/congsaPfin/Manga-OCR/logs/Play Melon Playground on Your Mac with BlueStacks The Ultimate Android Emulator for PC and Mac.md deleted file mode 100644 index c94403867274d2c4bed7906e48bdcc9d0f99557a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Play Melon Playground on Your Mac with BlueStacks The Ultimate Android Emulator for PC and Mac.md +++ /dev/null @@ -1,133 +0,0 @@ -
-

Can You Download Melon Playground on Mac?

-

Melon Playground is a popular sandbox game that lets you unleash your creativity and imagination. You can use various items, weapons, and physics to create your own scenarios and experiments. But can you download Melon Playground on Mac? The answer is yes, but not directly. In this article, we will show you two ways to play Melon Playground on your Mac computer.

-

What is Melon Playground?

-

Melon Playground is a simulation game developed by TwentySeven, a studio based in Russia. It is available for Android devices on the Google Play Store, where it has over 10 million downloads and a rating of 2.9 stars out of 5.

-

can you download melon playground on mac


Download ✔✔✔ https://urlca.com/2uO6pB



-

A sandbox game for Android devices

-

Melon Playground is a sandbox game, which means that there is no fixed goal or objective. You can do whatever you want with the items and tools provided in the game. You can create your own scenes, characters, and stories. You can also destroy, explode, burn, freeze, or electrocute anything you see.

-

Features and gameplay of Melon Playground

-

Melon Playground puts a wide variety of items at your disposal, such as melee weapons, guns, barrels, explosives, vehicles, animals, humans, robots, zombies, aliens, and more. You can customize the items by changing their color, size, shape, texture, and properties, and use physics to make them interact with each other in realistic or unrealistic ways.

-

The game has a simple and intuitive interface that allows you to drag and drop items from the menu to the scene. You can also use gestures to rotate, zoom, or move the camera. You can also pause, resume, or reset the scene at any time. The game also has a screenshot and video recording feature that lets you capture and share your creations with others.

-

How to Download Melon Playground on Mac?

-

Since Melon Playground is an Android game, you cannot download it directly from the App Store or run it natively on your Mac. However, there are two ways to play Melon Playground on your Mac using either an Android emulator or a cloud gaming service.

-

Option 1: Use an Android emulator

-

An Android emulator is software that simulates an Android device on your computer, allowing you to run Android apps and games on your Mac as if you were using an actual Android device.

-

How to install melon playground on mac with bluestacks
-Melon playground app store download for mac
-Melon playground mac compatibility and requirements
-Melon playground simulation game for mac
-Melon playground sandbox game for mac
-Melon playground weapons and tools for mac
-Melon playground no ads subscription for mac
-Melon playground privacy policy and data usage for mac
-Melon playground developer and support for mac
-Melon playground reviews and ratings for mac
-Melon playground update and new features for mac
-Melon playground tips and tricks for mac
-Melon playground cheats and hacks for mac
-Melon playground gameplay and videos for mac
-Melon playground alternatives and similar games for mac
-Melon playground free download and play for mac
-Melon playground online and offline mode for mac
-Melon playground multiplayer and co-op mode for mac
-Melon playground custom scenarios and maps for mac
-Melon playground best weapons and tools for mac
-Melon playground bugs and glitches for mac
-Melon playground fixes and solutions for mac
-Melon playground system performance and optimization for mac
-Melon playground keyboard and mouse controls for mac
-Melon playground gamepad support and settings for mac
-Melon playground graphics and sound quality for mac
-Melon playground fun and relaxing game for mac
-Melon playground stress relief and outlet game for mac
-Melon playground destruction and chaos game for mac
-Melon playground explosions and fire effects for mac
-Melon playground ragdoll physics and animations for mac
-Melon playground barrels and grenades for mac
-Melon playground machine guns and rifles for mac
-Melon playground melee weapons and swords for mac
-Melon playground dummies and targets for mac
-Melon playground challenges and achievements for mac
-Melon playground leaderboards and rankings for mac
-Melon playground social media and community for mac
-Melon playground feedback and suggestions for mac
-Melon playground questions and answers for mac
-Is melon playground safe and secure for mac?
-Is melon playground compatible with macOS 11 or later?
-Is melon playground available on the Mac App Store?
-Is melon playground free or paid on the Mac App Store?
-Is melon playground worth downloading on the Mac App Store?
-Is melon playground fun and addictive on the Mac App Store?
-Is melon playground easy to install on the Mac App Store?
-Is melon playground updated regularly on the Mac App Store?

-

What is an Android emulator and how does it work?

-

An Android emulator is software that creates a virtual environment mimicking the hardware and software of an Android device. It runs the Android operating system (OS) on your computer and lets you access the Google Play Store and other Android services. You can then download and install any Android app or game on your computer and run it through the emulator.

-

Pros and cons of using an Android emulator

-

Using an Android emulator has some advantages and disadvantages. Here are some of them:

Pros:

    • You can play Melon Playground offline, without depending on an internet connection.
    • You can customize the game settings, controls, and window size to your liking.
    • Most popular emulators are free to download and use.

Cons:

    • An emulator consumes a lot of CPU, RAM, and storage, which can slow down your Mac.
    • Installation and setup take some time and effort.
    • Some games may run less smoothly than on a real Android device.

Steps to download and install an Android emulator and Melon Playground on Mac

-

There are several Android emulators with Mac versions, such as BlueStacks and NoxPlayer (others, like MEmu and LDPlayer, are Windows-only). You can choose any emulator that suits your needs and preferences. Here are the general steps to download and install an Android emulator and Melon Playground on Mac:

-
    -
  1. Go to the official website of the Android emulator you want to use and download the installer file for Mac.
  2. Run the installer file and follow the instructions to install the emulator on your Mac.
  3. Launch the emulator and sign in with your Google account. If you don't have one, you can create one for free.
  4. Open the Google Play Store app on the emulator and search for Melon Playground. Alternatively, you can download the APK file of Melon Playground from a third-party source and drag and drop it onto the emulator (or sideload it over ADB, as in the sketch after this list).
  5. Click on the Install button to download and install Melon Playground on the emulator.
  6. Once the installation is complete, you can launch Melon Playground from the emulator home screen or app drawer and start playing.
-
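For step 4, drag-and-drop works in most emulators, but you can also sideload the APK over ADB if your emulator exposes an ADB bridge (BlueStacks and NoxPlayer both can). Here is a minimal sketch in Python, assuming the adb tool is installed and on your PATH, with a hypothetical file name for the downloaded APK:

```python
import subprocess

APK = "melon_playground.apk"  # hypothetical path to the APK you downloaded

# List connected devices; a running emulator should appear (e.g. emulator-5554)
subprocess.run(["adb", "devices"], check=True)

# Sideload the APK; -r replaces an existing install while keeping its data
subprocess.run(["adb", "install", "-r", APK], check=True)
```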

Option 2: Use a cloud gaming service

-

A cloud gaming service lets you play games online without downloading or installing them on your device. It streams the game from a remote server to your device over the internet, so you can play any game on any device as long as you have a stable internet connection and a compatible browser or app.

-

What is a cloud gaming service and how does it work?

-

A cloud gaming service uses cloud computing technology to run games on powerful servers and deliver them to your device via streaming. It works like Netflix or YouTube, but for games. You don't need to download or install anything on your device; you just access the service through a web browser or an app and choose the game you want to play. The service then streams the game to your device in real time, and you control it with your keyboard, mouse, touchpad, or controller.

-

Pros and cons of using a cloud gaming service

-

Using a cloud gaming service has some advantages and disadvantages. Here are some of them:

-

Pros:

    • You don't need to download or install anything, so the game takes up no storage on your Mac.
    • The game runs on powerful remote servers, so you get high-quality graphics and performance regardless of your Mac's hardware.
    • You can play on almost any device that has a compatible browser or app.

Cons:

    • You need a stable and fast internet connection at all times, or you will experience lag and stuttering.
    • Most services require a paid subscription or credits.
    • You can only play the games that the service's catalog offers.

Steps to sign up for a cloud gaming service and play Melon Playground online on Mac

-

There are many cloud gaming services available online, such as NVIDIA GeForce Now, Amazon Luna, Xbox Cloud Gaming (xCloud), Shadow, and more; Google Stadia, once a popular option, was discontinued in January 2023. Catalogs vary between services, so check that the one you pick can actually run Melon Playground, for example through an Android environment on a full cloud PC service such as Shadow. Here are the general steps to sign up for a cloud gaming service and play Melon Playground online on Mac:

-
    -
  1. Go to the official website of the cloud gaming service you want to use and create an account. You may need to provide your email, password, payment method, and other personal information.
  2. Choose a subscription plan or buy credits that suit your budget and gaming needs. Some services may offer a free trial or a limited number of games for free.
  3. Download and install the app of the cloud gaming service on your Mac, or access the service through a web browser that supports streaming, such as Chrome, Safari, or Firefox.
  4. Launch the app or the browser and sign in with your account. You will see a library of games that you can play online.
  5. Search for Melon Playground or browse the categories and genres to find it. Click on the Play button to start streaming the game to your Mac.
  6. Enjoy playing Melon Playground online on your Mac with high-quality graphics and performance.
-

Conclusion

-

Melon Playground is a fun and creative sandbox game that lets you play with various items, weapons, and physics. However, since it is an Android game, you cannot download it directly on your Mac. You need to use either an Android emulator or a cloud gaming service to play Melon Playground on your Mac.

-

Both options have their pros and cons, so you need to consider your preferences, budget, and internet connection before choosing one. If you want to play Melon Playground offline or customize the game settings, you may prefer using an Android emulator. If you want to play Melon Playground online or enjoy high-quality graphics and performance, you may prefer using a cloud gaming service.

-

Whichever option you choose, we hope that this article has helped you learn how to download Melon Playground on Mac. Now you can enjoy creating and destroying anything you want with Melon Playground on your Mac computer.

-

FAQs

-

Is Melon Playground free to play?

-

Yes, Melon Playground is free to download and play on Android devices. However, it may contain ads and in-app purchases that require real money. If you use an Android emulator or a cloud gaming service to play Melon Playground on Mac, you may also need to pay for the emulator or the service.

-

Is Melon Playground safe to download and play?

-

Yes, Melon Playground is safe to download and play if you get it from the official Google Play Store or a trusted third-party source. However, you should be careful when using an Android emulator or a cloud gaming service, as they may pose some security risks or violate some terms and conditions. You should always use a reputable emulator or service and protect your device and account with antivirus software and strong passwords.

-

Is Melon Playground compatible with other devices?

-

Melon Playground is compatible with most Android devices running Android 4.4 or higher. However, some devices may not support the game or run it smoothly due to differing specifications or settings. You can also play Melon Playground on a Windows PC, Mac, or Linux machine using an Android emulator, or on almost any device, including iOS, through a cloud gaming service.
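If you are not sure what Android version a device (or emulator) reports, you can query it over ADB. Here is a minimal sketch, assuming adb is installed and one device is connected, that flags anything older than the 4.4 requirement mentioned above:

```python
import subprocess

# Ask the connected device or emulator for its Android release version
result = subprocess.run(
    ["adb", "shell", "getprop", "ro.build.version.release"],
    capture_output=True, text=True, check=True,
)
version = result.stdout.strip()  # e.g. "13" or "4.4.2"

# Compare the leading numeric parts against the minimum (4, 4)
parts = tuple(int(p) for p in version.split(".")[:2] if p.isdigit())
ok = parts >= (4, 4)
print(f"Android {version}:", "meets the 4.4 requirement" if ok else "below the 4.4 requirement")
```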

-

Can I play Melon Playground offline?

-

Yes, you can play Melon Playground offline on your Android device without an internet connection. However, you may not be able to access features that require online services, such as updating the game, downloading new items, or sharing your creations. If you play on Mac, note the difference between the two options: an Android emulator only needs an internet connection to download the game and its updates, after which you can play offline, while a cloud gaming service always needs a connection to stream the game.

-

Can I create my own items and scenarios in Melon Playground?

-

Yes, you can create your own items and scenarios in Melon Playground using the customization tools provided in the game. You can change the color, size, shape, texture, and properties of any item in the game. You can also use physics to make the items interact with each other in realistic or unrealistic ways. You can then save your creations and share them with others through screenshots or videos.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Rise of the Kings Mod Apk OBB Build Your Empire and Defeat Your Enemies.md b/spaces/congsaPfin/Manga-OCR/logs/Rise of the Kings Mod Apk OBB Build Your Empire and Defeat Your Enemies.md deleted file mode 100644 index fb84d089dc86398d04d8abd76ae275d5ecbec6e9..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Rise of the Kings Mod Apk OBB Build Your Empire and Defeat Your Enemies.md +++ /dev/null @@ -1,79 +0,0 @@ - -

Rise of the Kings Mod APK + OBB: A Guide for Strategy Game Lovers

-

If you are a fan of strategy games, you might have heard of Rise of the Kings, a popular online multiplayer game that lets you build your own empire, recruit and train your army, forge alliances and fight enemies, and explore a vast world full of challenges and opportunities. In this article, we will tell you everything you need to know about Rise of the Kings, and how you can download and install Rise of the Kings Mod APK + OBB, a modified version of the game that gives you unlimited money, free VIP privileges, and no ads. Read on to find out more.

-

rise of the kings mod apk + obb


Download ►►►►► https://urlca.com/2uOdYi



-

What is Rise of the Kings?

-

Rise of the Kings is a strategy game developed by ONEMT, a Chinese studio that specializes in immersive, realistic mobile games. Released in 2016, it has since attracted millions of players from all over the world. The game is set in a fantasy world where several kingdoms are vying for power and glory: you play as a lord who has to build an empire, recruit and train an army, forge alliances and fight enemies, and explore a vast world full of challenges and opportunities.

-

Features of Rise of the Kings

-

Rise of the Kings has many features that make it one of the best strategy games on the market. Here are some of them:

-

Build your empire

-

You start with a small castle and some resources, and you have to expand your territory, upgrade your buildings, research new technologies, and manage your economy. You can choose from different types of buildings, such as farms, mines, barracks, workshops, academies, and more. You can also customize your castle with various decorations and designs.

-

Recruit and train your army

-

You can recruit different types of units, such as infantry, cavalry, archers, siege weapons, and more. You can also train them to improve their skills and abilities. You can also recruit legendary heroes who have unique talents and powers. You can equip them with weapons, armor, accessories, and mounts. You can also form teams with other players to create powerful formations.

-

Forge alliances and fight enemies

-

You can join or create an alliance with other players to cooperate and communicate with them. You can share resources, information, strategies, and troops with your allies. You can also participate in alliance events, such as wars, raids, rallies, quests, and more. You can also compete with other alliances for territory, resources, honor, and rewards. You can also fight against other players in PvP battles or against NPC enemies in PvE battles.

-

Rise of the Kings Mod Apk 1.9.32 Full + OBB Data
-Download Rise of the Kings Mod Apk + OBB for Android
-How to Install Rise of the Kings Mod Apk + OBB on PC
-Rise of the Kings Mod Apk + OBB Unlimited Gems and Gold
-Rise of the Kings Mod Apk + OBB Latest Version 2023
-Rise of the Kings Mod Apk + OBB Offline Mode
-Rise of the Kings Mod Apk + OBB Hack and Cheats
-Rise of the Kings Mod Apk + OBB Gameplay and Review
-Rise of the Kings Mod Apk + OBB Free Download Link
-Rise of the Kings Mod Apk + OBB No Root Required
-Rise of the Kings Mod Apk + OBB Features and Benefits
-Rise of the Kings Mod Apk + OBB Tips and Tricks
-Rise of the Kings Mod Apk + OBB Best Strategy and Guide
-Rise of the Kings Mod Apk + OBB Compatible Devices and Requirements
-Rise of the Kings Mod Apk + OBB Update and Patch Notes
-Rise of the Kings Mod Apk + OBB Customer Support and Feedback
-Rise of the Kings Mod Apk + OBB Bug Fixes and Improvements
-Rise of the Kings Mod Apk + OBB Comparison and Alternatives
-Rise of the Kings Mod Apk + OBB Pros and Cons
-Rise of the Kings Mod Apk + OBB FAQ and Troubleshooting
-Rise of the Kings Mod Apk + OBB Forum and Community
-Rise of the Kings Mod Apk + OBB Wiki and Database
-Rise of the Kings Mod Apk + OBB Codes and Rewards
-Rise of the Kings Mod Apk + OBB Events and Promotions
-Rise of the Kings Mod Apk + OBB News and Updates

-

Explore a vast world

-

You can explore a vast world full of mysteries and surprises. You can discover new lands, resources, monsters, treasures, secrets, and more. You can also interact with other players in various ways, such as trading, chatting, gifting, spying, attacking, defending, and more. You can also experience different events and scenarios that change according to real-time situations.

-

What is Rise of the Kings Mod APK + OBB?

-

Rise of the Kings Mod APK + OBB is a modified version of the game that gives you advantages not available in the official release. These include unlimited money, free VIP privileges, and no ads. With these benefits, you can enjoy the game without limitations or interruptions: buy anything you want, access exclusive features, and play smoothly and comfortably.

-

Benefits of Rise of the Kings Mod APK + OBB

-

Here are some of the benefits of Rise of the Kings Mod APK + OBB that you can enjoy:

-

Unlimited money

-

Money is the main currency in the game that you can use to buy various items, such as resources, equipment, boosts, and more. With unlimited money, you can buy anything you want without worrying about running out of money. You can also speed up your progress and development by using money to upgrade your buildings, research new technologies, and train your army faster.

-

Free VIP privileges

-

VIP is a special status in the game that gives you access to exclusive features and benefits, such as extra resources, faster construction and research, more troops and heroes, and more. Normally, you have to pay real money to get VIP privileges or earn them by completing certain tasks. With Rise of the Kings Mod APK + OBB, you can get free VIP privileges without spending any money or doing any work. You can enjoy all the perks of being a VIP without any hassle.

-

No ads

-

Ads are annoying and distracting interruptions that can ruin your gaming experience. They can pop up at any time and force you to watch them or close them. They can also consume your data and battery life. With Rise of the Kings Mod APK + OBB, you can get rid of all the ads in the game and play the game without any interruptions or distractions. You can focus on your strategy and enjoy the game fully.

-

How to download and install Rise of the Kings Mod APK + OBB?

-

If you are interested in downloading and installing Rise of the Kings Mod APK + OBB, you need to follow some simple steps. Here are the steps you need to follow:

-

Steps to download and install Rise of the Kings Mod APK + OBB

-

Before you start, make sure you have enough storage space on your device and a stable internet connection.

-

Download the files from a trusted source

-

The first step is to download the files from a trusted source. You can find many websites that offer Rise of the Kings Mod APK + OBB files for free, but not all of them are safe and reliable. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. To avoid this, you should only download the files from a trusted source that has positive reviews and feedback from other users. You can use this link to download the files safely and securely.

-

Enable unknown sources on your device

-

The second step is to enable unknown sources on your device. This is a security setting that prevents you from installing apps from sources other than the official Google Play Store. Since Rise of the Kings Mod APK + OBB is not available on the Play Store, you need to enable unknown sources to install it on your device. To do this, go to your device settings > security > unknown sources > enable.

-

Install the APK file and copy the OBB file to the Android/obb folder

-

The third step is to install the APK file and copy the OBB file to the Android/obb folder. The APK file is the application package that contains the game's code; the OBB file holds the additional data, such as graphics and sounds. To install the APK file, locate it in your device storage, tap on it, and follow the on-screen instructions. To copy the OBB file, locate it in your device storage and move it into the Android/obb folder; on Android, OBB files normally live in a subfolder named after the game's package (Android/obb/<package name>/). If the folder doesn't exist, create it manually. The sketch below shows the same two steps done from a computer over ADB.
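Here is a minimal sketch of that ADB route, assuming adb is installed and on your PATH, using hypothetical file names and a placeholder package name (substitute the real package name that comes with your download):

```python
import subprocess

PACKAGE = "com.example.riseofthekings"  # placeholder; use the game's real package name
APK = "rise_of_the_kings_mod.apk"       # hypothetical local file names
OBB = f"main.1.{PACKAGE}.obb"

# Install the APK; -r replaces an existing install while keeping its data
subprocess.run(["adb", "install", "-r", APK], check=True)

# Push the OBB into the folder Android expects: Android/obb/<package name>/
obb_dir = f"/sdcard/Android/obb/{PACKAGE}"
subprocess.run(["adb", "shell", "mkdir", "-p", obb_dir], check=True)
subprocess.run(["adb", "push", OBB, f"{obb_dir}/"], check=True)
```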

-

Launch the game and enjoy

-

The final step is to launch the game and enjoy. To launch the game, go to your app drawer and tap on the Rise of the Kings icon. You will see the game loading screen and then the main menu. You can now enjoy the game with unlimited money, free VIP privileges, and no ads. You can also access all the features and content of the game without any restrictions or limitations.

-

Conclusion

-

Rise of the Kings is a strategy game that lets you build your own empire, recruit and train your army, forge alliances and fight enemies, and explore a vast world full of challenges and opportunities. It is a fun and addictive game that will keep you entertained for hours. However, if you want to enjoy the game without any limitations or interruptions, you should download and install Rise of the Kings Mod APK + OBB, a modified version of the game that gives you unlimited money, free VIP privileges, and no ads. With these benefits, you can buy anything you want, access exclusive features, and play the game smoothly and comfortably. To download and install Rise of the Kings Mod APK + OBB, you just need to follow some simple steps that we have explained in this article. We hope this article was helpful and informative for you. If you have any questions or feedback, feel free to leave a comment below.

-

FAQs

-

Here are some frequently asked questions about Rise of the Kings Mod APK + OBB:

-

Is Rise of the Kings Mod APK + OBB safe to use?

-

Yes, Rise of the Kings Mod APK + OBB is safe to use as long as you download it from a trusted source. We have provided a link to download the files safely and securely in this article. However, you should always be careful when downloading and installing any modded or hacked apps from unknown sources, as they may contain viruses, malware, or spyware that can harm your device or steal your personal information.

-

Is Rise of the Kings Mod APK + OBB compatible with my device?

-

Rise of the Kings Mod APK + OBB is compatible with most Android devices that run on Android 4.0.3 or higher. However, some devices may not support the game due to hardware or software limitations. To check if your device is compatible with the game, you can visit the official Google Play Store page of Rise of the Kings and see if your device is listed among the supported devices.

-

Will I get banned for using Rise of the Kings Mod APK + OBB?

-

There is a low risk of getting banned for using Rise of the Kings Mod APK + OBB, as the modded version does not interfere with the game servers or other players' accounts. However, you should always use the modded version at your own risk and discretion, as we cannot guarantee that it will work flawlessly or that it will not be detected by the game developers or moderators. If you want to avoid any potential issues or consequences, you should play the game with the original version.

-

Can I update Rise of the Kings Mod APK + OBB?

-

No, you cannot update Rise of the Kings Mod APK + OBB through the Google Play Store or any other official source. If you try to do so, you will lose all the benefits of the modded version and revert back to the original version. If you want to update the game, you will have to download and install the latest version of Rise of the Kings Mod APK + OBB from the same source that you downloaded it from. You may also have to uninstall the previous version of the game before installing the new one.

-

Can I play Rise of the Kings Mod APK + OBB offline?

-

No, you cannot play Rise of the Kings Mod APK + OBB offline, as the game requires an internet connection to run and function properly. The game is an online multiplayer game that connects you with other players from all over the world. You need an internet connection to access the game servers, chat with other players, participate in alliance events, and more. If you try to play the game offline, you will not be able to load the game or access any of its features.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/The Walking Dead How to Survive a Walker Attack.md b/spaces/congsaPfin/Manga-OCR/logs/The Walking Dead How to Survive a Walker Attack.md deleted file mode 100644 index 16c3f4a5d5ef73843fc199730338d92c3a0166fe..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/The Walking Dead How to Survive a Walker Attack.md +++ /dev/null @@ -1,125 +0,0 @@ - -

The Walking Dead: A Guide to the Post-Apocalyptic Horror Series

-

If you are a fan of horror, drama, and zombies, you might have heard of The Walking Dead, one of the most popular and acclaimed TV shows of the last decade. But what is The Walking Dead exactly, and why should you watch it? In this article, we will give you a comprehensive guide to the post-apocalyptic horror series, covering its origins, characters, themes, and more. Whether you are a newcomer or a longtime fan, we hope this article will help you enjoy The Walking Dead even more.

-

What is The Walking Dead?

-

The Walking Dead is a multimedia franchise that revolves around a group of survivors in a world overrun by zombies, or "walkers" as they are called in the series. The franchise consists of comic books, TV shows, video games, novels, webisodes, and movies. Here are some of the main components of the franchise:

-

the walking dead


DOWNLOAD ❤❤❤ https://urlca.com/2uO6iZ



-

The Walking Dead comic book series

-

The original source material for The Walking Dead is a comic book series created by writer Robert Kirkman and artists Tony Moore and Charlie Adlard. The comic book series began in 2003 and ended in 2019, with 193 issues published. The comic book series follows the journey of Rick Grimes, a former sheriff's deputy who wakes up from a coma to find himself in a zombie apocalypse. He meets other survivors and forms a community with them, facing various threats from both the living and the dead. The comic book series is known for its dark, gritty, and realistic tone, as well as its shocking twists and deaths.

-

The Walking Dead TV series

-

The most popular adaptation of The Walking Dead is a TV series that premiered on AMC in 2010 and will conclude in 2022, with 11 seasons and 177 episodes. It was developed by Frank Darabont and has had several executive producers, including Kirkman, Gale Anne Hurd, David Alpert, Greg Nicotero, Scott M. Gimple, Angela Kang, and others. The show follows a similar storyline to the comic book series, but also introduces new characters, locations, and events, and it is praised for its compelling characters, performances, action sequences, makeup effects, and social commentary.

-

The Walking Dead franchise

-

Besides the comic book series and the TV series, The Walking Dead has expanded into a larger franchise that includes several spin-offs and movies. Some of the notable spin-offs are:

    • Fear the Walking Dead: a companion series that premiered in 2015 and follows a separate group of survivors from the earliest days of the outbreak.
    • The Walking Dead: World Beyond: a two-season limited series (2020-2021) about the first generation to come of age during the apocalypse.
    • Tales of the Walking Dead: an anthology series (2022) that tells standalone stories set in the same universe.

Some of the upcoming movies are:

    • A planned series of theatrical films centered on Rick Grimes, announced by AMC after Andrew Lincoln's departure from the main show.

Who are the main characters of The Walking Dead?

-

One of the strengths of The Walking Dead is its large and diverse cast of characters, who have different backgrounds, personalities, skills, and motivations. The characters evolve and change over time, as they face various challenges and dilemmas in the zombie apocalypse. Some of the main characters of The Walking Dead are:

-

Rick Grimes

-

Rick Grimes is the protagonist and leader of the group of survivors. He is a former sheriff's deputy who wakes up from a coma to find himself in a zombie apocalypse. He is determined to protect his family and friends, and to find a safe place to live. He is brave, loyal, resourceful, and compassionate, but also ruthless, pragmatic, and sometimes conflicted. He often struggles with his role as a leader, as he has to make difficult decisions that affect the lives of others. He is played by Andrew Lincoln in the TV series.

-

The Walking Dead cast and crew
-The Walking Dead comic book series
-The Walking Dead spin-off shows
-The Walking Dead season 11 release date
-The Walking Dead zombies or walkers
-The Walking Dead Rick Grimes movies
-The Walking Dead best episodes and moments
-The Walking Dead merchandise and collectibles
-The Walking Dead fan theories and predictions
-The Walking Dead video games and apps
-The Walking Dead behind the scenes and trivia
-The Walking Dead crossover with Fear the Walking Dead
-The Walking Dead Negan and Lucille
-The Walking Dead Daryl Dixon and Carol Peletier
-The Walking Dead Michonne and her katana
-The Walking Dead Glenn Rhee and Maggie Greene
-The Walking Dead Carl Grimes and his eye patch
-The Walking Dead Morgan Jones and his stick
-The Walking Dead Eugene Porter and his mullet
-The Walking Dead Rosita Espinosa and her baby
-The Walking Dead Gabriel Stokes and his glasses
-The Walking Dead Aaron and his metal arm
-The Walking Dead Ezekiel and his tiger Shiva
-The Walking Dead Jerry and his axe
-The Walking Dead Sasha Williams and her sniper rifle
-The Walking Dead Tara Chambler and her sunglasses
-The Walking Dead Abraham Ford and his mustache
-The Walking Dead Hershel Greene and his farm
-The Walking Dead Shane Walsh and his betrayal
-The Walking Dead Lori Grimes and her pregnancy
-The Walking Dead Andrea and her gun skills
-The Walking Dead Merle Dixon and his knife hand
-The Walking Dead Dale Horvath and his RV
-The Walking Dead Sophia Peletier and her barn reveal
-The Walking Dead Beth Greene and her singing voice
-The Walking Dead Tyreese Williams and his hammer
-The Walking Dead Bob Stookey and his tainted meat
-The Walking Dead Noah and his revolving door death
-The Walking Dead Denise Cloyd and her arrow to the eye
-The Walking Dead Spencer Monroe and his guts spillage
-The Walking Dead Simon and his baseball bat swing
-The Walking Dead Jadis and her junkyard group
-The Walking Dead Enid and her JSS motto
-The Walking Dead Alpha and her Whisperers
-The Walking Dead Beta and his mask
-The Walking Dead Lydia and her mother issues
-The Walking Dead Yumiko and her lawyer background
-The Walking Dead Magna and her prison tattoos
-The Walking Dead Connie and her deafness
-The Walking Dead Princess and her pink jacket

-

Daryl Dixon

-

Daryl Dixon is one of the most popular and beloved characters of The Walking Dead. He is a skilled hunter, tracker, and fighter, who uses a crossbow as his signature weapon. He is loyal, brave, independent, and resilient, but also introverted, guarded, and sometimes volatile. He has a close bond with his brother Merle, who is often a source of conflict for him. He also develops a strong friendship with Carol Peletier, who helps him open up and heal from his past traumas. He is played by Norman Reedus in the TV series.

-

Carol Peletier

-

Carol Peletier is one of the longest-surviving characters of The Walking Dead. She starts off as a timid and abused housewife, who loses her husband and daughter in the zombie apocalypse. She then transforms into a strong, confident, and capable survivor, who is willing to do anything to protect her group. She is smart, resourceful, strategic, and compassionate, but also ruthless, cold, and sometimes manipulative. She has a close friendship with Daryl Dixon, who supports her through her losses and struggles. She is played by Melissa McBride in the TV series.

-

Negan

-

Negan is one of the most notorious and controversial characters of The Walking Dead. He is the leader of the Saviors, a large group of survivors who extort other communities for resources in exchange for protection from the walkers. He is charismatic, witty, sadistic, and violent, and he uses a baseball bat wrapped in barbed wire, named Lucille, as his weapon of choice. He kills several members of Rick's group in a brutal way, sparking a war between them. He is later captured by Rick and imprisoned for years, until he redeems himself by helping them fight against other enemies. He is played by Jeffrey Dean Morgan in the TV series.

-

What are the main themes of The Walking Dead?

-

The Walking Dead is not just about zombies and gore. It is also about exploring various themes that are relevant to our society and humanity. Some of the main themes of The Walking Dead are:

-

Survival

-

The most obvious theme of The Walking Dead is survival. The characters have to survive not only from the walkers, but also from other humans who pose threats to them. They have to find food, water, shelter, weapons, medicine, and other resources to stay alive. They also have to deal with injuries, diseases, infections, hunger, thirst, fatigue, and stress. They have to adapt to different environments and situations, and overcome various obstacles and challenges. Survival is not easy or guaranteed in The Walking Dead, as many characters die or disappear along the way.

-

Humanity

-

Another theme of The Walking Dead is humanity. The characters have to question what it means to be human in a world where humanity seems to be lost or corrupted. They have to face moral dilemmas and ethical choices that test their values and principles. They have to balance their individual needs and their group interests, and their personal feelings and their rational judgments. They have to cope with their emotions, such as fear, anger, grief, guilt, and hope. They have to maintain their sanity, dignity, and identity in a chaotic and cruel world. They have to find meaning and purpose in their existence, and to preserve their values and beliefs.

-

Leadership

-

A third theme of The Walking Dead is leadership. The characters have to decide who to follow and who to trust in a world where authority and order are gone or corrupted. They have to deal with different types of leaders, such as Rick Grimes, who is a democratic and benevolent leader, Negan, who is a tyrannical and oppressive leader, and the Governor, who is a charismatic and manipulative leader. They also have to face the challenges and responsibilities of being a leader themselves, such as making decisions, resolving conflicts, inspiring others, and facing consequences.

-

Morality

-

A fourth theme of The Walking Dead is morality. The characters have to determine what is right and wrong in a world where morality seems to be relative or irrelevant. They have to confront the ethical implications of their actions and choices, such as killing, stealing, lying, betraying, sacrificing, and forgiving. They have to deal with the moral gray areas and ambiguities that arise in the zombie apocalypse, such as whether to kill walkers or spare them, whether to help strangers or ignore them, whether to cooperate with other groups or compete with them, and whether to follow the rules or break them.

-

Why should you watch The Walking Dead?

-

Now that you know what The Walking Dead is about, you might be wondering why you should watch it. Here are some of the reasons why The Walking Dead is worth watching:

-

The Walking Dead is thrilling and suspenseful

-

If you like horror and action, you will love The Walking Dead. The series is full of thrilling and suspenseful scenes that will keep you on the edge of your seat. You will witness the characters fighting against hordes of walkers, escaping from dangerous situations, encountering new enemies, and facing unexpected twists. You will also enjoy the stunning visuals, sound effects, and music that create a tense and immersive atmosphere.

-

The Walking Dead is emotional and character-driven

-

If you like drama and character development, you will love The Walking Dead. The series is not just about zombies and violence. It is also about the human stories and relationships that emerge in the zombie apocalypse. You will get to know the characters deeply, their backgrounds, personalities, motivations, and goals. You will care about them, root for them, cry for them, and sometimes hate them. You will also witness their growth and change over time, as they face various challenges and dilemmas.

-

The Walking Dead is creative and diverse

-

If you like creativity and diversity, you will love The Walking Dead. The series is not just a repetitive or predictable show. It is constantly evolving and expanding its scope and scale. You will explore different settings and locations, such as Atlanta, Hershel's farm, the prison, Terminus, Alexandria, the Kingdom, the Hilltop, the Sanctuary, and Oceanside. You will meet different groups and communities, such as the Survivors, the Saviors, the Whisperers, the Commonwealth, and the CRM. You will encounter different types of walkers, such as the roamers, the lurkers, the herd, the spiked, and the radioactive. You will also enjoy the various spin-offs and movies that expand the universe and explore new stories and characters.

How to watch The Walking Dead?

-

If you are interested in watching The Walking Dead, you might be wondering how to do it. Here are some of the ways you can watch The Walking Dead:

-

The Walking Dead streaming platforms

-

The easiest way to watch The Walking Dead is to stream it online. You can find all the episodes of the TV series on various streaming platforms, such as Netflix, Hulu, Amazon Prime Video, AMC+, and others. You can also find some of the spin-offs and movies on these platforms, or on other platforms, such as YouTube, iTunes, Google Play, and others. You can choose the platform that suits your preferences and budget, and enjoy The Walking Dead anytime and anywhere.

-

The Walking Dead spin-offs and movies

-

Another way to watch The Walking Dead is to watch its spin-offs and movies. You can find some of the spin-offs on the same streaming platforms as the TV series, or on other platforms, such as AMC's website or app. You can also find some of the movies on these platforms, or on other platforms, such as theaters or cable TV. You can choose the spin-off or movie that interests you, and enjoy a different perspective or experience of The Walking Dead.

-

Conclusion

-

In conclusion, The Walking Dead is a post-apocalyptic horror series that has captivated millions of fans around the world. It is a franchise that consists of comic books, TV shows, video games, novels, webisodes, and movies. It is a series that features a large and diverse cast of characters, who have to survive in a world overrun by zombies and other threats. It is a series that explores various themes that are relevant to our society and humanity. It is a series that is thrilling, emotional, creative, and diverse. It is a series that you should watch if you are a fan of horror, drama, and zombies.

-

Here are some FAQs about The Walking Dead:

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/vision_transformer_hybrid.py b/spaces/cooelf/Multimodal-CoT/timm/models/vision_transformer_hybrid.py deleted file mode 100644 index d5f0a5377ec9492c5ed55ceb3ce5a4378cbb8e3c..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/models/vision_transformer_hybrid.py +++ /dev/null @@ -1,363 +0,0 @@ -""" Hybrid Vision Transformer (ViT) in PyTorch - -A PyTorch implement of the Hybrid Vision Transformers as described in: - -'An Image Is Worth 16 x 16 Words: Transformers for Image Recognition at Scale' - - https://arxiv.org/abs/2010.11929 - -`How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers` - - https://arxiv.org/abs/2106.TODO - -NOTE These hybrid model definitions depend on code in vision_transformer.py. -They were moved here to keep file sizes sane. - -Hacked together by / Copyright 2021 Ross Wightman -""" -from copy import deepcopy -from functools import partial - -import torch -import torch.nn as nn - -from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD -from .layers import StdConv2dSame, StdConv2d, to_2tuple -from .resnet import resnet26d, resnet50d -from .resnetv2 import ResNetV2, create_resnetv2_stem -from .registry import register_model -from timm.models.vision_transformer import _create_vision_transformer - - -def _cfg(url='', **kwargs): - return { - 'url': url, - 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None, - 'crop_pct': .9, 'interpolation': 'bicubic', 'fixed_input_size': True, - 'mean': (0.5, 0.5, 0.5), 'std': (0.5, 0.5, 0.5), - 'first_conv': 'patch_embed.backbone.stem.conv', 'classifier': 'head', - **kwargs - } - - -default_cfgs = { - # hybrid in-1k models (weights from official JAX impl where they exist) - 'vit_tiny_r_s16_p8_224': _cfg( - url='https://storage.googleapis.com/vit_models/augreg/' - 'R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_224.npz', - first_conv='patch_embed.backbone.conv'), - 'vit_tiny_r_s16_p8_384': _cfg( - url='https://storage.googleapis.com/vit_models/augreg/' - 'R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz', - first_conv='patch_embed.backbone.conv', input_size=(3, 384, 384), crop_pct=1.0), - 'vit_small_r26_s32_224': _cfg( - url='https://storage.googleapis.com/vit_models/augreg/' - 'R26_S_32-i21k-300ep-lr_0.001-aug_light0-wd_0.03-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.03-res_224.npz', - ), - 'vit_small_r26_s32_384': _cfg( - url='https://storage.googleapis.com/vit_models/augreg/' - 'R26_S_32-i21k-300ep-lr_0.001-aug_medium2-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz', - input_size=(3, 384, 384), crop_pct=1.0), - 'vit_base_r26_s32_224': _cfg(), - 'vit_base_r50_s16_224': _cfg(), - 'vit_base_r50_s16_384': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_resnet50_384-9fd3c705.pth', - input_size=(3, 384, 384), crop_pct=1.0), - 'vit_large_r50_s32_224': _cfg( - url='https://storage.googleapis.com/vit_models/augreg/' - 'R50_L_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_224.npz' - ), - 'vit_large_r50_s32_384': _cfg( - url='https://storage.googleapis.com/vit_models/augreg/' - 'R50_L_32-i21k-300ep-lr_0.001-aug_medium2-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz', - input_size=(3, 384, 384), crop_pct=1.0 - ), - - # hybrid in-21k models 
(weights from official Google JAX impl where they exist) - 'vit_tiny_r_s16_p8_224_in21k': _cfg( - url='https://storage.googleapis.com/vit_models/augreg/R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz', - num_classes=21843, crop_pct=0.9, first_conv='patch_embed.backbone.conv'), - 'vit_small_r26_s32_224_in21k': _cfg( - url='https://storage.googleapis.com/vit_models/augreg/R26_S_32-i21k-300ep-lr_0.001-aug_medium2-wd_0.03-do_0.0-sd_0.0.npz', - num_classes=21843, crop_pct=0.9), - 'vit_base_r50_s16_224_in21k': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_resnet50_224_in21k-6f7c7740.pth', - num_classes=21843, crop_pct=0.9), - 'vit_large_r50_s32_224_in21k': _cfg( - url='https://storage.googleapis.com/vit_models/augreg/R50_L_32-i21k-300ep-lr_0.001-aug_medium2-wd_0.1-do_0.0-sd_0.0.npz', - num_classes=21843, crop_pct=0.9), - - # hybrid models (using timm resnet backbones) - 'vit_small_resnet26d_224': _cfg( - mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, first_conv='patch_embed.backbone.conv1.0'), - 'vit_small_resnet50d_s16_224': _cfg( - mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, first_conv='patch_embed.backbone.conv1.0'), - 'vit_base_resnet26d_224': _cfg( - mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, first_conv='patch_embed.backbone.conv1.0'), - 'vit_base_resnet50d_224': _cfg( - mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, first_conv='patch_embed.backbone.conv1.0'), -} - - -class HybridEmbed(nn.Module): - """ CNN Feature Map Embedding - Extract feature map from CNN, flatten, project to embedding dim. - """ - def __init__(self, backbone, img_size=224, patch_size=1, feature_size=None, in_chans=3, embed_dim=768): - super().__init__() - assert isinstance(backbone, nn.Module) - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - self.img_size = img_size - self.patch_size = patch_size - self.backbone = backbone - if feature_size is None: - with torch.no_grad(): - # NOTE Most reliable way of determining output dims is to run forward pass - training = backbone.training - if training: - backbone.eval() - o = self.backbone(torch.zeros(1, in_chans, img_size[0], img_size[1])) - if isinstance(o, (list, tuple)): - o = o[-1] # last feature if backbone outputs list/tuple of features - feature_size = o.shape[-2:] - feature_dim = o.shape[1] - backbone.train(training) - else: - feature_size = to_2tuple(feature_size) - if hasattr(self.backbone, 'feature_info'): - feature_dim = self.backbone.feature_info.channels()[-1] - else: - feature_dim = self.backbone.num_features - assert feature_size[0] % patch_size[0] == 0 and feature_size[1] % patch_size[1] == 0 - self.grid_size = (feature_size[0] // patch_size[0], feature_size[1] // patch_size[1]) - self.num_patches = self.grid_size[0] * self.grid_size[1] - self.proj = nn.Conv2d(feature_dim, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x): - x = self.backbone(x) - if isinstance(x, (list, tuple)): - x = x[-1] # last feature if backbone outputs list/tuple of features - x = self.proj(x).flatten(2).transpose(1, 2) - return x - - -def _create_vision_transformer_hybrid(variant, backbone, pretrained=False, **kwargs): - embed_layer = partial(HybridEmbed, backbone=backbone) - kwargs.setdefault('patch_size', 1) # default patch size for hybrid models if not set - return _create_vision_transformer( - variant, pretrained=pretrained, embed_layer=embed_layer, default_cfg=default_cfgs[variant], **kwargs) - - -def _resnetv2(layers=(3, 
4, 9), **kwargs): - """ ResNet-V2 backbone helper""" - padding_same = kwargs.get('padding_same', True) - stem_type = 'same' if padding_same else '' - conv_layer = partial(StdConv2dSame, eps=1e-8) if padding_same else partial(StdConv2d, eps=1e-8) - if len(layers): - backbone = ResNetV2( - layers=layers, num_classes=0, global_pool='', in_chans=kwargs.get('in_chans', 3), - preact=False, stem_type=stem_type, conv_layer=conv_layer) - else: - backbone = create_resnetv2_stem( - kwargs.get('in_chans', 3), stem_type=stem_type, preact=False, conv_layer=conv_layer) - return backbone - - -@register_model -def vit_tiny_r_s16_p8_224(pretrained=False, **kwargs): - """ R+ViT-Ti/S16 w/ 8x8 patch hybrid @ 224 x 224. - """ - backbone = _resnetv2(layers=(), **kwargs) - model_kwargs = dict(patch_size=8, embed_dim=192, depth=12, num_heads=3, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_tiny_r_s16_p8_224', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_tiny_r_s16_p8_384(pretrained=False, **kwargs): - """ R+ViT-Ti/S16 w/ 8x8 patch hybrid @ 384 x 384. - """ - backbone = _resnetv2(layers=(), **kwargs) - model_kwargs = dict(patch_size=8, embed_dim=192, depth=12, num_heads=3, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_tiny_r_s16_p8_384', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_small_r26_s32_224(pretrained=False, **kwargs): - """ R26+ViT-S/S32 hybrid. - """ - backbone = _resnetv2((2, 2, 2, 2), **kwargs) - model_kwargs = dict(embed_dim=384, depth=12, num_heads=6, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_small_r26_s32_224', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_small_r26_s32_384(pretrained=False, **kwargs): - """ R26+ViT-S/S32 hybrid. - """ - backbone = _resnetv2((2, 2, 2, 2), **kwargs) - model_kwargs = dict(embed_dim=384, depth=12, num_heads=6, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_small_r26_s32_384', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_base_r26_s32_224(pretrained=False, **kwargs): - """ R26+ViT-B/S32 hybrid. - """ - backbone = _resnetv2((2, 2, 2, 2), **kwargs) - model_kwargs = dict(embed_dim=768, depth=12, num_heads=12, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_base_r26_s32_224', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_base_r50_s16_224(pretrained=False, **kwargs): - """ R50+ViT-B/S16 hybrid from original paper (https://arxiv.org/abs/2010.11929). - """ - backbone = _resnetv2((3, 4, 9), **kwargs) - model_kwargs = dict(embed_dim=768, depth=12, num_heads=12, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_base_r50_s16_224', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_base_r50_s16_384(pretrained=False, **kwargs): - """ R50+ViT-B/16 hybrid from original paper (https://arxiv.org/abs/2010.11929). - ImageNet-1k weights fine-tuned from in21k @ 384x384, source https://github.com/google-research/vision_transformer. 
- """ - backbone = _resnetv2((3, 4, 9), **kwargs) - model_kwargs = dict(embed_dim=768, depth=12, num_heads=12, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_base_r50_s16_384', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_base_resnet50_384(pretrained=False, **kwargs): - # DEPRECATED this is forwarding to model def above for backwards compatibility - return vit_base_r50_s16_384(pretrained=pretrained, **kwargs) - - -@register_model -def vit_large_r50_s32_224(pretrained=False, **kwargs): - """ R50+ViT-L/S32 hybrid. - """ - backbone = _resnetv2((3, 4, 6, 3), **kwargs) - model_kwargs = dict(embed_dim=1024, depth=24, num_heads=16, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_large_r50_s32_224', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_large_r50_s32_384(pretrained=False, **kwargs): - """ R50+ViT-L/S32 hybrid. - """ - backbone = _resnetv2((3, 4, 6, 3), **kwargs) - model_kwargs = dict(embed_dim=1024, depth=24, num_heads=16, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_large_r50_s32_384', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_tiny_r_s16_p8_224_in21k(pretrained=False, **kwargs): - """ R+ViT-Ti/S16 w/ 8x8 patch hybrid. ImageNet-21k. - """ - backbone = _resnetv2(layers=(), **kwargs) - model_kwargs = dict(patch_size=8, embed_dim=192, depth=12, num_heads=3, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_tiny_r_s16_p8_224_in21k', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_small_r26_s32_224_in21k(pretrained=False, **kwargs): - """ R26+ViT-S/S32 hybrid. ImageNet-21k. - """ - backbone = _resnetv2((2, 2, 2, 2), **kwargs) - model_kwargs = dict(embed_dim=384, depth=12, num_heads=6, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_small_r26_s32_224_in21k', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_base_r50_s16_224_in21k(pretrained=False, **kwargs): - """ R50+ViT-B/16 hybrid model from original paper (https://arxiv.org/abs/2010.11929). - ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer. - """ - backbone = _resnetv2(layers=(3, 4, 9), **kwargs) - model_kwargs = dict(embed_dim=768, depth=12, num_heads=12, representation_size=768, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_base_r50_s16_224_in21k', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_base_resnet50_224_in21k(pretrained=False, **kwargs): - # DEPRECATED this is forwarding to model def above for backwards compatibility - return vit_base_r50_s16_224_in21k(pretrained=pretrained, **kwargs) - - -@register_model -def vit_large_r50_s32_224_in21k(pretrained=False, **kwargs): - """ R50+ViT-L/S32 hybrid. ImageNet-21k. - """ - backbone = _resnetv2((3, 4, 6, 3), **kwargs) - model_kwargs = dict(embed_dim=1024, depth=24, num_heads=16, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_large_r50_s32_224_in21k', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_small_resnet26d_224(pretrained=False, **kwargs): - """ Custom ViT small hybrid w/ ResNet26D stride 32. No pretrained weights. 
- """ - backbone = resnet26d(pretrained=pretrained, in_chans=kwargs.get('in_chans', 3), features_only=True, out_indices=[4]) - model_kwargs = dict(embed_dim=768, depth=8, num_heads=8, mlp_ratio=3, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_small_resnet26d_224', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_small_resnet50d_s16_224(pretrained=False, **kwargs): - """ Custom ViT small hybrid w/ ResNet50D 3-stages, stride 16. No pretrained weights. - """ - backbone = resnet50d(pretrained=pretrained, in_chans=kwargs.get('in_chans', 3), features_only=True, out_indices=[3]) - model_kwargs = dict(embed_dim=768, depth=8, num_heads=8, mlp_ratio=3, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_small_resnet50d_s16_224', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_base_resnet26d_224(pretrained=False, **kwargs): - """ Custom ViT base hybrid w/ ResNet26D stride 32. No pretrained weights. - """ - backbone = resnet26d(pretrained=pretrained, in_chans=kwargs.get('in_chans', 3), features_only=True, out_indices=[4]) - model_kwargs = dict(embed_dim=768, depth=12, num_heads=12, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_base_resnet26d_224', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model - - -@register_model -def vit_base_resnet50d_224(pretrained=False, **kwargs): - """ Custom ViT base hybrid w/ ResNet50D stride 32. No pretrained weights. - """ - backbone = resnet50d(pretrained=pretrained, in_chans=kwargs.get('in_chans', 3), features_only=True, out_indices=[4]) - model_kwargs = dict(embed_dim=768, depth=12, num_heads=12, **kwargs) - model = _create_vision_transformer_hybrid( - 'vit_base_resnet50d_224', backbone=backbone, pretrained=pretrained, **model_kwargs) - return model \ No newline at end of file diff --git a/spaces/cooelf/Multimodal-CoT/timm/utils/__init__.py b/spaces/cooelf/Multimodal-CoT/timm/utils/__init__.py deleted file mode 100644 index d02e62d2d0ce62e594393014208e28c3ace5318b..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/utils/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -from .agc import adaptive_clip_grad -from .checkpoint_saver import CheckpointSaver -from .clip_grad import dispatch_clip_grad -from .cuda import ApexScaler, NativeScaler -from .distributed import distribute_bn, reduce_tensor -from .jit import set_jit_legacy -from .log import setup_default_logging, FormatterNoInfo -from .metrics import AverageMeter, accuracy -from .misc import natural_key, add_bool_arg -from .model import unwrap_model, get_state_dict -from .model_ema import ModelEma, ModelEmaV2 -from .random import random_seed -from .summary import update_summary, get_outdir diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/aspp_head.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/aspp_head.py deleted file mode 100644 index 3c0aadb2b097a604d96ba1c99c05663b7884b6e0..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/aspp_head.py +++ /dev/null @@ -1,107 +0,0 @@ -import torch -import torch.nn as nn -from annotator.mmpkg.mmcv.cnn import ConvModule - -from annotator.mmpkg.mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class ASPPModule(nn.ModuleList): - 
"""Atrous Spatial Pyramid Pooling (ASPP) Module. - - Args: - dilations (tuple[int]): Dilation rate of each layer. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. - """ - - def __init__(self, dilations, in_channels, channels, conv_cfg, norm_cfg, - act_cfg): - super(ASPPModule, self).__init__() - self.dilations = dilations - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - for dilation in dilations: - self.append( - ConvModule( - self.in_channels, - self.channels, - 1 if dilation == 1 else 3, - dilation=dilation, - padding=0 if dilation == 1 else dilation, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - - def forward(self, x): - """Forward function.""" - aspp_outs = [] - for aspp_module in self: - aspp_outs.append(aspp_module(x)) - - return aspp_outs - - -@HEADS.register_module() -class ASPPHead(BaseDecodeHead): - """Rethinking Atrous Convolution for Semantic Image Segmentation. - - This head is the implementation of `DeepLabV3 - `_. - - Args: - dilations (tuple[int]): Dilation rates for ASPP module. - Default: (1, 6, 12, 18). - """ - - def __init__(self, dilations=(1, 6, 12, 18), **kwargs): - super(ASPPHead, self).__init__(**kwargs) - assert isinstance(dilations, (list, tuple)) - self.dilations = dilations - self.image_pool = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.aspp_modules = ASPPModule( - dilations, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.bottleneck = ConvModule( - (len(dilations) + 1) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - aspp_outs = [ - resize( - self.image_pool(x), - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - ] - aspp_outs.extend(self.aspp_modules(x)) - aspp_outs = torch.cat(aspp_outs, dim=1) - output = self.bottleneck(aspp_outs) - output = self.cls_seg(output) - return output diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/datasets/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/datasets/__init__.py deleted file mode 100644 index 59ce30713f63d056107b2a06ecd434eb27a30b7d..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/datasets/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -from . 
import ( - register_ade20k_panoptic, - register_cityscapes_panoptic, - register_coco_panoptic_annos_semseg, - register_ade20k_instance, - register_coco_panoptic2instance, -) diff --git a/spaces/cymic/Talking_Head_Anime_3/tha3/nn/separable_conv.py b/spaces/cymic/Talking_Head_Anime_3/tha3/nn/separable_conv.py deleted file mode 100644 index e33bce3c25aa279e7a7fbb0a7998a3f3788e4c25..0000000000000000000000000000000000000000 --- a/spaces/cymic/Talking_Head_Anime_3/tha3/nn/separable_conv.py +++ /dev/null @@ -1,119 +0,0 @@ -from typing import Optional - -from torch.nn import Sequential, Conv2d, ConvTranspose2d, Module - -from tha3.nn.normalization import NormalizationLayerFactory -from tha3.nn.util import BlockArgs, wrap_conv_or_linear_module - - -def create_separable_conv3(in_channels: int, out_channels: int, - bias: bool = False, - initialization_method='he', - use_spectral_norm: bool = False) -> Module: - return Sequential( - wrap_conv_or_linear_module( - Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1, bias=False, groups=in_channels), - initialization_method, - use_spectral_norm), - wrap_conv_or_linear_module( - Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=bias), - initialization_method, - use_spectral_norm)) - - -def create_separable_conv7(in_channels: int, out_channels: int, - bias: bool = False, - initialization_method='he', - use_spectral_norm: bool = False) -> Module: - return Sequential( - wrap_conv_or_linear_module( - Conv2d(in_channels, in_channels, kernel_size=7, stride=1, padding=3, bias=False, groups=in_channels), - initialization_method, - use_spectral_norm), - wrap_conv_or_linear_module( - Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=bias), - initialization_method, - use_spectral_norm)) - - -def create_separable_conv3_block( - in_channels: int, out_channels: int, block_args: Optional[BlockArgs] = None): - if block_args is None: - block_args = BlockArgs() - return Sequential( - wrap_conv_or_linear_module( - Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1, bias=False, groups=in_channels), - block_args.initialization_method, - block_args.use_spectral_norm), - wrap_conv_or_linear_module( - Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False), - block_args.initialization_method, - block_args.use_spectral_norm), - NormalizationLayerFactory.resolve_2d(block_args.normalization_layer_factory).create(out_channels, affine=True), - block_args.nonlinearity_factory.create()) - - -def create_separable_conv7_block( - in_channels: int, out_channels: int, block_args: Optional[BlockArgs] = None): - if block_args is None: - block_args = BlockArgs() - return Sequential( - wrap_conv_or_linear_module( - Conv2d(in_channels, in_channels, kernel_size=7, stride=1, padding=3, bias=False, groups=in_channels), - block_args.initialization_method, - block_args.use_spectral_norm), - wrap_conv_or_linear_module( - Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False), - block_args.initialization_method, - block_args.use_spectral_norm), - NormalizationLayerFactory.resolve_2d(block_args.normalization_layer_factory).create(out_channels, affine=True), - block_args.nonlinearity_factory.create()) - - -def create_separable_downsample_block( - in_channels: int, out_channels: int, is_output_1x1: bool, block_args: Optional[BlockArgs] = None): - if block_args is None: - block_args = BlockArgs() - if is_output_1x1: - return Sequential( - wrap_conv_or_linear_module( - 
Conv2d(in_channels, in_channels, kernel_size=4, stride=2, padding=1, bias=False, groups=in_channels), - block_args.initialization_method, - block_args.use_spectral_norm), - wrap_conv_or_linear_module( - Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False), - block_args.initialization_method, - block_args.use_spectral_norm), - block_args.nonlinearity_factory.create()) - else: - return Sequential( - wrap_conv_or_linear_module( - Conv2d(in_channels, in_channels, kernel_size=4, stride=2, padding=1, bias=False, groups=in_channels), - block_args.initialization_method, - block_args.use_spectral_norm), - wrap_conv_or_linear_module( - Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False), - block_args.initialization_method, - block_args.use_spectral_norm), - NormalizationLayerFactory.resolve_2d(block_args.normalization_layer_factory) - .create(out_channels, affine=True), - block_args.nonlinearity_factory.create()) - - -def create_separable_upsample_block( - in_channels: int, out_channels: int, block_args: Optional[BlockArgs] = None): - if block_args is None: - block_args = BlockArgs() - return Sequential( - wrap_conv_or_linear_module( - ConvTranspose2d( - in_channels, in_channels, kernel_size=4, stride=2, padding=1, bias=False, groups=in_channels), - block_args.initialization_method, - block_args.use_spectral_norm), - wrap_conv_or_linear_module( - Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False), - block_args.initialization_method, - block_args.use_spectral_norm), - NormalizationLayerFactory.resolve_2d(block_args.normalization_layer_factory) - .create(out_channels, affine=True), - block_args.nonlinearity_factory.create()) diff --git a/spaces/darkartsaibwd/Envvi-Inkpunk-Diffusion/README.md b/spaces/darkartsaibwd/Envvi-Inkpunk-Diffusion/README.md deleted file mode 100644 index 50c55a51990d9b9881b7d512e52772fac537ee52..0000000000000000000000000000000000000000 --- a/spaces/darkartsaibwd/Envvi-Inkpunk-Diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Envvi Inkpunk Diffusion -emoji: 🏃 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/utils/options.py b/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/utils/options.py deleted file mode 100644 index db490e4aa52e26fde31959fd74c2cef3af2ecf76..0000000000000000000000000000000000000000 --- a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/utils/options.py +++ /dev/null @@ -1,108 +0,0 @@ -import yaml -import time -from collections import OrderedDict -from os import path as osp -from basicsr.utils.misc import get_time_str - -def ordered_yaml(): - """Support OrderedDict for yaml. - - Returns: - yaml Loader and Dumper. - """ - try: - from yaml import CDumper as Dumper - from yaml import CLoader as Loader - except ImportError: - from yaml import Dumper, Loader - - _mapping_tag = yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG - - def dict_representer(dumper, data): - return dumper.represent_dict(data.items()) - - def dict_constructor(loader, node): - return OrderedDict(loader.construct_pairs(node)) - - Dumper.add_representer(OrderedDict, dict_representer) - Loader.add_constructor(_mapping_tag, dict_constructor) - return Loader, Dumper - - -def parse(opt_path, root_path, is_train=True): - """Parse option file. 
- - Args: - opt_path (str): Option file path. - root_path (str): Root path. - is_train (bool): Indicate whether in training or not. Default: True. - - Returns: - (dict): Options. - """ - with open(opt_path, mode='r') as f: - Loader, _ = ordered_yaml() - opt = yaml.load(f, Loader=Loader) - - opt['is_train'] = is_train - - # opt['name'] = f"{get_time_str()}_{opt['name']}" - if opt['path'].get('resume_state', None): # Shangchen added - resume_state_path = opt['path'].get('resume_state') - opt['name'] = resume_state_path.split("/")[-3] - else: - opt['name'] = f"{get_time_str()}_{opt['name']}" - - - # datasets - for phase, dataset in opt['datasets'].items(): - # for several datasets, e.g., test_1, test_2 - phase = phase.split('_')[0] - dataset['phase'] = phase - if 'scale' in opt: - dataset['scale'] = opt['scale'] - if dataset.get('dataroot_gt') is not None: - dataset['dataroot_gt'] = osp.expanduser(dataset['dataroot_gt']) - if dataset.get('dataroot_lq') is not None: - dataset['dataroot_lq'] = osp.expanduser(dataset['dataroot_lq']) - - # paths - for key, val in opt['path'].items(): - if (val is not None) and ('resume_state' in key or 'pretrain_network' in key): - opt['path'][key] = osp.expanduser(val) - - if is_train: - experiments_root = osp.join(root_path, 'experiments', opt['name']) - opt['path']['experiments_root'] = experiments_root - opt['path']['models'] = osp.join(experiments_root, 'models') - opt['path']['training_states'] = osp.join(experiments_root, 'training_states') - opt['path']['log'] = experiments_root - opt['path']['visualization'] = osp.join(experiments_root, 'visualization') - - else: # test - results_root = osp.join(root_path, 'results', opt['name']) - opt['path']['results_root'] = results_root - opt['path']['log'] = results_root - opt['path']['visualization'] = osp.join(results_root, 'visualization') - - return opt - - -def dict2str(opt, indent_level=1): - """dict to string for printing options. - - Args: - opt (dict): Option dict. - indent_level (int): Indent level. Default: 1. - - Returns: - (str): Option string for printing. 
- """ - msg = '\n' - for k, v in opt.items(): - if isinstance(v, dict): - msg += ' ' * (indent_level * 2) + k + ':[' - msg += dict2str(v, indent_level + 1) - msg += ' ' * (indent_level * 2) + ']\n' - else: - msg += ' ' * (indent_level * 2) + k + ': ' + str(v) + '\n' - return msg diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/vegalite/v5/data.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/vegalite/v5/data.py deleted file mode 100644 index 703dffb3246a32f4734f0653dfcc1aaa0d1d23f9..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/vegalite/v5/data.py +++ /dev/null @@ -1,43 +0,0 @@ -from ..data import ( - MaxRowsError, - curry, - default_data_transformer, - limit_rows, - pipe, - sample, - to_csv, - to_json, - to_values, - DataTransformerRegistry, -) - - -# ============================================================================== -# VegaLite 5 data transformers -# ============================================================================== - - -ENTRY_POINT_GROUP = "altair.vegalite.v5.data_transformer" # type: str - - -data_transformers = DataTransformerRegistry( - entry_point_group=ENTRY_POINT_GROUP -) # type: DataTransformerRegistry -data_transformers.register("default", default_data_transformer) -data_transformers.register("json", to_json) -data_transformers.register("csv", to_csv) -data_transformers.enable("default") - - -__all__ = ( - "MaxRowsError", - "curry", - "default_data_transformer", - "limit_rows", - "pipe", - "sample", - "to_csv", - "to_json", - "to_values", - "data_transformers", -) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio_client/utils.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio_client/utils.py deleted file mode 100644 index 850e6f8882bd3295a01c9285b136dc54c3daa7d3..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio_client/utils.py +++ /dev/null @@ -1,575 +0,0 @@ -from __future__ import annotations - -import asyncio -import base64 -import json -import mimetypes -import os -import pkgutil -import secrets -import shutil -import tempfile -import warnings -from concurrent.futures import CancelledError -from dataclasses import dataclass, field -from datetime import datetime -from enum import Enum -from pathlib import Path -from threading import Lock -from typing import Any, Callable, Optional - -import fsspec.asyn -import httpx -import huggingface_hub -import requests -from huggingface_hub import SpaceStage -from websockets.legacy.protocol import WebSocketCommonProtocol - -API_URL = "api/predict/" -WS_URL = "queue/join" -UPLOAD_URL = "upload" -CONFIG_URL = "config" -API_INFO_URL = "info" -RAW_API_INFO_URL = "info?serialize=False" -SPACE_FETCHER_URL = "https://gradio-space-api-fetcher-v2.hf.space/api" -RESET_URL = "reset" -SPACE_URL = "https://hf.space/{}" - -SKIP_COMPONENTS = { - "state", - "row", - "column", - "tabs", - "tab", - "tabitem", - "box", - "form", - "accordion", - "group", - "interpretation", - "dataset", -} -STATE_COMPONENT = "state" -INVALID_RUNTIME = [ - SpaceStage.NO_APP_FILE, - SpaceStage.CONFIG_ERROR, - SpaceStage.BUILD_ERROR, - SpaceStage.RUNTIME_ERROR, - SpaceStage.PAUSED, -] - -__version__ = (pkgutil.get_data(__name__, "version.txt") or b"").decode("ascii").strip() - - -class TooManyRequestsError(Exception): - 
"""Raised when the API returns a 429 status code.""" - - pass - - -class QueueError(Exception): - """Raised when the queue is full or there is an issue adding a job to the queue.""" - - pass - - -class InvalidAPIEndpointError(Exception): - """Raised when the API endpoint is invalid.""" - - pass - - -class SpaceDuplicationError(Exception): - """Raised when something goes wrong with a Space Duplication.""" - - pass - - -class Status(Enum): - """Status codes presented to client users.""" - - STARTING = "STARTING" - JOINING_QUEUE = "JOINING_QUEUE" - QUEUE_FULL = "QUEUE_FULL" - IN_QUEUE = "IN_QUEUE" - SENDING_DATA = "SENDING_DATA" - PROCESSING = "PROCESSING" - ITERATING = "ITERATING" - PROGRESS = "PROGRESS" - FINISHED = "FINISHED" - CANCELLED = "CANCELLED" - - @staticmethod - def ordering(status: Status) -> int: - """Order of messages. Helpful for testing.""" - order = [ - Status.STARTING, - Status.JOINING_QUEUE, - Status.QUEUE_FULL, - Status.IN_QUEUE, - Status.SENDING_DATA, - Status.PROCESSING, - Status.PROGRESS, - Status.ITERATING, - Status.FINISHED, - Status.CANCELLED, - ] - return order.index(status) - - def __lt__(self, other: Status): - return self.ordering(self) < self.ordering(other) - - @staticmethod - def msg_to_status(msg: str) -> Status: - """Map the raw message from the backend to the status code presented to users.""" - return { - "send_hash": Status.JOINING_QUEUE, - "queue_full": Status.QUEUE_FULL, - "estimation": Status.IN_QUEUE, - "send_data": Status.SENDING_DATA, - "process_starts": Status.PROCESSING, - "process_generating": Status.ITERATING, - "process_completed": Status.FINISHED, - "progress": Status.PROGRESS, - }[msg] - - -@dataclass -class ProgressUnit: - index: Optional[int] - length: Optional[int] - unit: Optional[str] - progress: Optional[float] - desc: Optional[str] - - @classmethod - def from_ws_msg(cls, data: list[dict]) -> list[ProgressUnit]: - return [ - cls( - index=d.get("index"), - length=d.get("length"), - unit=d.get("unit"), - progress=d.get("progress"), - desc=d.get("desc"), - ) - for d in data - ] - - -@dataclass -class StatusUpdate: - """Update message sent from the worker thread to the Job on the main thread.""" - - code: Status - rank: int | None - queue_size: int | None - eta: float | None - success: bool | None - time: datetime | None - progress_data: list[ProgressUnit] | None - - -def create_initial_status_update(): - return StatusUpdate( - code=Status.STARTING, - rank=None, - queue_size=None, - eta=None, - success=None, - time=datetime.now(), - progress_data=None, - ) - - -@dataclass -class JobStatus: - """The job status. - - Keeps track of the latest status update and intermediate outputs (not yet implements). - """ - - latest_status: StatusUpdate = field(default_factory=create_initial_status_update) - outputs: list[Any] = field(default_factory=list) - - -@dataclass -class Communicator: - """Helper class to help communicate between the worker thread and main thread.""" - - lock: Lock - job: JobStatus - prediction_processor: Callable[..., tuple] - reset_url: str - should_cancel: bool = False - - -######################## -# Network utils -######################## - - -def is_http_url_like(possible_url: str) -> bool: - """ - Check if the given string looks like an HTTP(S) URL. - """ - return possible_url.startswith(("http://", "https://")) - - -def probe_url(possible_url: str) -> bool: - """ - Probe the given URL to see if it responds with a 200 status code (to HEAD, then to GET). 
- """ - headers = {"User-Agent": "gradio (https://gradio.app/; team@gradio.app)"} - try: - with requests.session() as sess: - head_request = sess.head(possible_url, headers=headers) - if head_request.status_code == 405: - return sess.get(possible_url, headers=headers).ok - return head_request.ok - except Exception: - return False - - -def is_valid_url(possible_url: str) -> bool: - """ - Check if the given string is a valid URL. - """ - warnings.warn( - "is_valid_url should not be used. " - "Use is_http_url_like() and probe_url(), as suitable, instead.", - ) - return is_http_url_like(possible_url) and probe_url(possible_url) - - -async def get_pred_from_ws( - websocket: WebSocketCommonProtocol, - data: str, - hash_data: str, - helper: Communicator | None = None, -) -> dict[str, Any]: - completed = False - resp = {} - while not completed: - # Receive message in the background so that we can - # cancel even while running a long pred - task = asyncio.create_task(websocket.recv()) - while not task.done(): - if helper: - with helper.lock: - if helper.should_cancel: - # Need to reset the iterator state since the client - # will not reset the session - async with httpx.AsyncClient() as http: - reset = http.post( - helper.reset_url, json=json.loads(hash_data) - ) - # Retrieve cancel exception from task - # otherwise will get nasty warning in console - task.cancel() - await asyncio.gather(task, reset, return_exceptions=True) - raise CancelledError() - # Need to suspend this coroutine so that task actually runs - await asyncio.sleep(0.01) - msg = task.result() - resp = json.loads(msg) - if helper: - with helper.lock: - has_progress = "progress_data" in resp - status_update = StatusUpdate( - code=Status.msg_to_status(resp["msg"]), - queue_size=resp.get("queue_size"), - rank=resp.get("rank", None), - success=resp.get("success"), - time=datetime.now(), - eta=resp.get("rank_eta"), - progress_data=ProgressUnit.from_ws_msg(resp["progress_data"]) - if has_progress - else None, - ) - output = resp.get("output", {}).get("data", []) - if output and status_update.code != Status.FINISHED: - try: - result = helper.prediction_processor(*output) - except Exception as e: - result = [e] - helper.job.outputs.append(result) - helper.job.latest_status = status_update - if resp["msg"] == "queue_full": - raise QueueError("Queue is full! 
Please try again.") - if resp["msg"] == "send_hash": - await websocket.send(hash_data) - elif resp["msg"] == "send_data": - await websocket.send(data) - completed = resp["msg"] == "process_completed" - return resp["output"] - - -######################## -# Data processing utils -######################## - - -def download_tmp_copy_of_file( - url_path: str, hf_token: str | None = None, dir: str | None = None -) -> str: - if dir is not None: - os.makedirs(dir, exist_ok=True) - headers = {"Authorization": "Bearer " + hf_token} if hf_token else {} - directory = Path(dir or tempfile.gettempdir()) / secrets.token_hex(20) - directory.mkdir(exist_ok=True, parents=True) - file_path = directory / Path(url_path).name - - with requests.get(url_path, headers=headers, stream=True) as r: - r.raise_for_status() - with open(file_path, "wb") as f: - shutil.copyfileobj(r.raw, f) - return str(file_path.resolve()) - - -def create_tmp_copy_of_file(file_path: str, dir: str | None = None) -> str: - directory = Path(dir or tempfile.gettempdir()) / secrets.token_hex(20) - directory.mkdir(exist_ok=True, parents=True) - dest = directory / Path(file_path).name - shutil.copy2(file_path, dest) - return str(dest.resolve()) - - -def get_mimetype(filename: str) -> str | None: - if filename.endswith(".vtt"): - return "text/vtt" - mimetype = mimetypes.guess_type(filename)[0] - if mimetype is not None: - mimetype = mimetype.replace("x-wav", "wav").replace("x-flac", "flac") - return mimetype - - -def get_extension(encoding: str) -> str | None: - encoding = encoding.replace("audio/wav", "audio/x-wav") - type = mimetypes.guess_type(encoding)[0] - if type == "audio/flac": # flac is not supported by mimetypes - return "flac" - elif type is None: - return None - extension = mimetypes.guess_extension(type) - if extension is not None and extension.startswith("."): - extension = extension[1:] - return extension - - -def encode_file_to_base64(f: str | Path): - with open(f, "rb") as file: - encoded_string = base64.b64encode(file.read()) - base64_str = str(encoded_string, "utf-8") - mimetype = get_mimetype(str(f)) - return ( - "data:" - + (mimetype if mimetype is not None else "") - + ";base64," - + base64_str - ) - - -def encode_url_to_base64(url: str): - resp = requests.get(url) - resp.raise_for_status() - encoded_string = base64.b64encode(resp.content) - base64_str = str(encoded_string, "utf-8") - mimetype = get_mimetype(url) - return ( - "data:" + (mimetype if mimetype is not None else "") + ";base64," + base64_str - ) - - -def encode_url_or_file_to_base64(path: str | Path): - path = str(path) - if is_http_url_like(path): - return encode_url_to_base64(path) - return encode_file_to_base64(path) - - -def decode_base64_to_binary(encoding: str) -> tuple[bytes, str | None]: - extension = get_extension(encoding) - data = encoding.rsplit(",", 1)[-1] - return base64.b64decode(data), extension - - -def strip_invalid_filename_characters(filename: str, max_bytes: int = 200) -> str: - """Strips invalid characters from a filename and ensures that the file_length is less than `max_bytes` bytes.""" - filename = "".join([char for char in filename if char.isalnum() or char in "._- "]) - filename_len = len(filename.encode()) - if filename_len > max_bytes: - while filename_len > max_bytes: - if len(filename) == 0: - break - filename = filename[:-1] - filename_len = len(filename.encode()) - return filename - - -def sanitize_parameter_names(original_name: str) -> str: - """Cleans up a Python parameter name to make the API info more readable.""" - return ( 
- "".join([char for char in original_name if char.isalnum() or char in " _"]) - .replace(" ", "_") - .lower() - ) - - -def decode_base64_to_file( - encoding: str, - file_path: str | None = None, - dir: str | Path | None = None, - prefix: str | None = None, -): - directory = Path(dir or tempfile.gettempdir()) / secrets.token_hex(20) - directory.mkdir(exist_ok=True, parents=True) - data, extension = decode_base64_to_binary(encoding) - if file_path is not None and prefix is None: - filename = Path(file_path).name - prefix = filename - if "." in filename: - prefix = filename[0 : filename.index(".")] - extension = filename[filename.index(".") + 1 :] - - if prefix is not None: - prefix = strip_invalid_filename_characters(prefix) - - if extension is None: - file_obj = tempfile.NamedTemporaryFile( - delete=False, prefix=prefix, dir=directory - ) - else: - file_obj = tempfile.NamedTemporaryFile( - delete=False, - prefix=prefix, - suffix="." + extension, - dir=directory, - ) - file_obj.write(data) - file_obj.flush() - return file_obj - - -def dict_or_str_to_json_file(jsn: str | dict | list, dir: str | Path | None = None): - if dir is not None: - os.makedirs(dir, exist_ok=True) - - file_obj = tempfile.NamedTemporaryFile( - delete=False, suffix=".json", dir=dir, mode="w+" - ) - if isinstance(jsn, str): - jsn = json.loads(jsn) - json.dump(jsn, file_obj) - file_obj.flush() - return file_obj - - -def file_to_json(file_path: str | Path) -> dict | list: - with open(file_path) as f: - return json.load(f) - - -########################### -# HuggingFace Hub API Utils -########################### -def set_space_timeout( - space_id: str, - hf_token: str | None = None, - timeout_in_seconds: int = 300, -): - headers = huggingface_hub.utils.build_hf_headers( - token=hf_token, - library_name="gradio_client", - library_version=__version__, - ) - req = requests.post( - f"https://huggingface.co/api/spaces/{space_id}/sleeptime", - json={"seconds": timeout_in_seconds}, - headers=headers, - ) - try: - huggingface_hub.utils.hf_raise_for_status(req) - except huggingface_hub.utils.HfHubHTTPError as err: - raise SpaceDuplicationError( - f"Could not set sleep timeout on duplicated Space. Please visit {SPACE_URL.format(space_id)} " - "to set a timeout manually to reduce billing charges." - ) from err - - -######################## -# Misc utils -######################## - - -def synchronize_async(func: Callable, *args, **kwargs) -> Any: - """ - Runs async functions in sync scopes. Can be used in any scope. 
- - Example: - if inspect.iscoroutinefunction(block_fn.fn): - predictions = utils.synchronize_async(block_fn.fn, *processed_input) - - Args: - func: - *args: - **kwargs: - """ - return fsspec.asyn.sync(fsspec.asyn.get_loop(), func, *args, **kwargs) # type: ignore - - -class APIInfoParseError(ValueError): - pass - - -def get_type(schema: dict): - if "type" in schema: - return schema["type"] - elif schema.get("oneOf"): - return "oneOf" - elif schema.get("anyOf"): - return "anyOf" - else: - raise APIInfoParseError(f"Cannot parse type for {schema}") - - -def json_schema_to_python_type(schema: Any) -> str: - """Convert the json schema into a python type hint""" - type_ = get_type(schema) - if type_ == {}: - if "json" in schema["description"]: - return "Dict[Any, Any]" - else: - return "Any" - elif type_ == "null": - return "None" - elif type_ == "integer": - return "int" - elif type_ == "string": - return "str" - elif type_ == "boolean": - return "bool" - elif type_ == "number": - return "int | float" - elif type_ == "array": - items = schema.get("items") - if "prefixItems" in items: - elements = ", ".join( - [json_schema_to_python_type(i) for i in items["prefixItems"]] - ) - return f"Tuple[{elements}]" - else: - elements = json_schema_to_python_type(items) - return f"List[{elements}]" - elif type_ == "object": - des = ", ".join( - [ - f"{n}: {json_schema_to_python_type(v)} ({v.get('description')})" - for n, v in schema["properties"].items() - ] - ) - return f"Dict({des})" - elif type_ in ["oneOf", "anyOf"]: - desc = " | ".join([json_schema_to_python_type(i) for i in schema[type_]]) - return desc - else: - raise APIInfoParseError(f"Cannot parse schema {schema}") diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/commands/scan_cache.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/commands/scan_cache.py deleted file mode 100644 index ff26fa9de50f607ca78a24c5041010b4d629c148..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/commands/scan_cache.py +++ /dev/null @@ -1,138 +0,0 @@ -# coding=utf-8 -# Copyright 2022-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Contains command to scan the HF cache directory. - -Usage: - huggingface-cli scan-cache - huggingface-cli scan-cache -v - huggingface-cli scan-cache -vvv - huggingface-cli scan-cache --dir ~/.cache/huggingface/hub -""" -import time -from argparse import _SubParsersAction -from typing import Optional - -from ..utils import CacheNotFound, HFCacheInfo, scan_cache_dir -from . 
import BaseHuggingfaceCLICommand -from ._cli_utils import ANSI, tabulate - - -class ScanCacheCommand(BaseHuggingfaceCLICommand): - @staticmethod - def register_subcommand(parser: _SubParsersAction): - scan_cache_parser = parser.add_parser("scan-cache", help="Scan cache directory.") - - scan_cache_parser.add_argument( - "--dir", - type=str, - default=None, - help="cache directory to scan (optional). Default to the default HuggingFace cache.", - ) - scan_cache_parser.add_argument( - "-v", - "--verbose", - action="count", - default=0, - help="show a more verbose output", - ) - scan_cache_parser.set_defaults(func=ScanCacheCommand) - - def __init__(self, args): - self.verbosity: int = args.verbose - self.cache_dir: Optional[str] = args.dir - - def run(self): - try: - t0 = time.time() - hf_cache_info = scan_cache_dir(self.cache_dir) - t1 = time.time() - except CacheNotFound as exc: - cache_dir = exc.cache_dir - print(f"Cache directory not found: {cache_dir}") - return - - self._print_hf_cache_info_as_table(hf_cache_info) - - print( - f"\nDone in {round(t1-t0,1)}s. Scanned {len(hf_cache_info.repos)} repo(s)" - f" for a total of {ANSI.red(hf_cache_info.size_on_disk_str)}." - ) - if len(hf_cache_info.warnings) > 0: - message = f"Got {len(hf_cache_info.warnings)} warning(s) while scanning." - if self.verbosity >= 3: - print(ANSI.gray(message)) - for warning in hf_cache_info.warnings: - print(ANSI.gray(warning)) - else: - print(ANSI.gray(message + " Use -vvv to print details.")) - - def _print_hf_cache_info_as_table(self, hf_cache_info: HFCacheInfo) -> None: - if self.verbosity == 0: - print( - tabulate( - rows=[ - [ - repo.repo_id, - repo.repo_type, - "{:>12}".format(repo.size_on_disk_str), - repo.nb_files, - repo.last_accessed_str, - repo.last_modified_str, - ", ".join(sorted(repo.refs)), - str(repo.repo_path), - ] - for repo in sorted(hf_cache_info.repos, key=lambda repo: repo.repo_path) - ], - headers=[ - "REPO ID", - "REPO TYPE", - "SIZE ON DISK", - "NB FILES", - "LAST_ACCESSED", - "LAST_MODIFIED", - "REFS", - "LOCAL PATH", - ], - ) - ) - else: - print( - tabulate( - rows=[ - [ - repo.repo_id, - repo.repo_type, - revision.commit_hash, - "{:>12}".format(revision.size_on_disk_str), - revision.nb_files, - revision.last_modified_str, - ", ".join(sorted(revision.refs)), - str(revision.snapshot_path), - ] - for repo in sorted(hf_cache_info.repos, key=lambda repo: repo.repo_path) - for revision in sorted(repo.revisions, key=lambda revision: revision.commit_hash) - ], - headers=[ - "REPO ID", - "REPO TYPE", - "REVISION", - "SIZE ON DISK", - "NB FILES", - "LAST_MODIFIED", - "REFS", - "LOCAL PATH", - ], - ) - ) diff --git a/spaces/derek-thomas/RAGDemo/backend/semantic_search.py b/spaces/derek-thomas/RAGDemo/backend/semantic_search.py deleted file mode 100644 index 653cf44d4d345fe165b229babaec744cf774d476..0000000000000000000000000000000000000000 --- a/spaces/derek-thomas/RAGDemo/backend/semantic_search.py +++ /dev/null @@ -1,31 +0,0 @@ -import time -import logging -from qdrant_haystack import QdrantDocumentStore -from haystack.nodes import EmbeddingRetriever -from pathlib import Path - -# Setting up the logging -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - -# Start the timer for loading the QdrantDocumentStore -start_time = time.perf_counter() - -proj_dir = Path(__file__).parents[1] -qd_document_store = QdrantDocumentStore(path=str(proj_dir/'Qdrant'), index='RAGDemo') - -# Log the time taken to load the QdrantDocumentStore -document_store_loading_time = 
time.perf_counter() - start_time -logger.info(f"Time taken to load QdrantDocumentStore: {document_store_loading_time:.6f} seconds") - -# Start the timer for loading the EmbeddingRetriever -start_time = time.perf_counter() - -qd_retriever = EmbeddingRetriever(document_store=qd_document_store, - embedding_model="BAAI/bge-base-en-v1.5", - model_format="sentence_transformers", - use_gpu=False) - -# Log the time taken to load the EmbeddingRetriever -retriever_loading_time = time.perf_counter() - start_time -logger.info(f"Time taken to load EmbeddingRetriever: {retriever_loading_time:.6f} seconds") diff --git a/spaces/dhof/shapetest/README.md b/spaces/dhof/shapetest/README.md deleted file mode 100644 index 28c652015c6a0211f7d9edebd73b9131214eec98..0000000000000000000000000000000000000000 --- a/spaces/dhof/shapetest/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Shap-E -emoji: 🧢 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.28.3 -python_version: 3.10.11 -app_file: app.py -pinned: false -license: mit -duplicated_from: hysts/Shap-E ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/diacanFperku/AutoGPT/Easeus Partition Master Key 13.8 Technician Portable BETTER.md b/spaces/diacanFperku/AutoGPT/Easeus Partition Master Key 13.8 Technician Portable BETTER.md deleted file mode 100644 index 406b9c6c76958e8d3d869b68ba320828f8d96e8c..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Easeus Partition Master Key 13.8 Technician Portable BETTER.md +++ /dev/null @@ -1,6 +0,0 @@ -

EaseUS Partition Master Key 13.8 Technician Portable


Download File 🔗 https://gohhs.com/2uFVrA



- -EaseUS Partition Master Technician is an all-in-one partition... ... EaseUS Partition Master 13.8 Technician + crack (key gen). ... Partition Master 14.0 Technician Edition + Free + Pro + Rus + WinPE Bootable CD + Portable. 4d29de3e1b
-
-
-

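A quick aside on the RAGDemo backend/semantic_search.py module deleted further up: once its qd_retriever is constructed, retrieval is a single call in the Haystack v1 API. The sketch below is illustrative only, assuming that module is importable and that the Qdrant index 'RAGDemo' has already been populated; it is not part of this diff.

# Hypothetical usage sketch for the deleted RAGDemo retriever (Haystack v1 API).
# Assumption: backend/semantic_search.py is on the path and the Qdrant
# index 'RAGDemo' already contains embedded documents.
from backend.semantic_search import qd_retriever

docs = qd_retriever.retrieve(query="What is retrieval-augmented generation?", top_k=5)
for doc in docs:
    # Each result is a Haystack Document carrying text content and a similarity score.
    print(doc.score, doc.content[:80])
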
diff --git a/spaces/diacanFperku/AutoGPT/Interviu Olvido Hormigos Pdf 131.md b/spaces/diacanFperku/AutoGPT/Interviu Olvido Hormigos Pdf 131.md deleted file mode 100644 index 86b7f7733b03a34e7a2ece21bc6e4a269838cb88..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Interviu Olvido Hormigos Pdf 131.md +++ /dev/null @@ -1,6 +0,0 @@ -

Interviu Olvido Hormigos Pdf 131


DOWNLOAD ✒ ✒ ✒ https://gohhs.com/2uFUaX



- -Olvido Hormigos (2017) Interviú Nº2149 ... Available formats. Download as PDF or read online from Scribd. Flag according to content. 4d29de3e1b
-
-
-

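Back on the gradio_client/utils.py module deleted earlier: its json_schema_to_python_type helper walks a JSON schema and emits a Python type-hint string ("array" becomes List[...], "anyOf" becomes a union, and so on). A small behavior sketch follows, assuming the deleted function is in scope; the expected outputs follow directly from the code shown above.

# Behavior sketch for the deleted json_schema_to_python_type helper.
schema = {"type": "array", "items": {"type": "string"}}
print(json_schema_to_python_type(schema))  # -> "List[str]"

schema = {"anyOf": [{"type": "integer"}, {"type": "null"}]}
print(json_schema_to_python_type(schema))  # -> "int | None"

schema = {"type": "object", "properties": {"x": {"type": "number", "description": "value"}}}
print(json_schema_to_python_type(schema))  # -> "Dict(x: int | float (value))"
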
diff --git a/spaces/diacanFperku/AutoGPT/Magix Video Pro X5 Keygen Download BEST.md b/spaces/diacanFperku/AutoGPT/Magix Video Pro X5 Keygen Download BEST.md deleted file mode 100644 index cda70b50d303e0b8e40fa3e0ed26f326282dde60..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Magix Video Pro X5 Keygen Download BEST.md +++ /dev/null @@ -1,22 +0,0 @@ -

Magix Video Pro X5 Keygen Download


Download File ››››› https://gohhs.com/2uFTlC



-
-This software helps you make beautiful videos and movies of your life. You can edit all your favorite clips: short video clips, long video clips, HD video clips, or full movies. It is a free and powerful video editor with a rich editing toolset. You can also add special effects such as Black and White, Sepia, Red-Eye Remover, and Photo Filter. - -MAGIX Video Pro X13 19.0.1.141 Crack is a powerful video editing program with easy-to-use tools for basic editing needs. You can trim, crop, add effects, and apply transitions, and you can save your video in AVI, MPEG, MPG, or WMV format. You can also burn it to DVD, upload it to YouTube, or export it as a PDF file. It is a fast editor with further advanced features and functions, so you can produce more polished videos, and it runs on your laptop or Windows PC. - -MAGIX Video Pro X13 Torrent Full Version Free Download - -MAGIX Video Pro X13 19.0.1.141 Crack is a powerful and advanced video editor for PC, Mac, Android, iPhone, and iPad. It is a fast editor for Windows, Mac, iOS, and Android users, and a capable tool for both beginners and experienced users. It is one of the best free video editors: it helps you make beautiful videos, lets you add your favorite clips or songs, and provides powerful tools for editing short clips, long clips, HD clips, and full movies. - -MAGIX Video Pro X13 19.0.1.141 Crack Features: - -It is a powerful and efficient video editor. - -It helps you edit your video. - -It has powerful editing tools with many features. - -It is a smart video editing program 4fefd39f24
-
-
-

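One note on the scan-cache CLI command removed earlier: the same information is available programmatically through huggingface_hub.scan_cache_dir, which that command wraps. A short sketch using the public huggingface_hub API (standard usage, not taken from this diff):

# Programmatic equivalent of `huggingface-cli scan-cache`.
from huggingface_hub import scan_cache_dir

info = scan_cache_dir()  # scans the default HF cache directory
print(f"{len(info.repos)} repo(s), {info.size_on_disk_str} on disk")
for repo in sorted(info.repos, key=lambda r: str(r.repo_path)):
    # Mirrors the non-verbose table printed by the deleted command.
    print(repo.repo_id, repo.repo_type, repo.size_on_disk_str)
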
diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/trainer.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/trainer.py deleted file mode 100644 index 0b0ac566577c016ff1dade2a90266764c1c6dafe..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/trainer.py +++ /dev/null @@ -1,36 +0,0 @@ -from colbert.infra.run import Run -from colbert.infra.launcher import Launcher -from colbert.infra.config import ColBERTConfig, RunConfig - -from colbert.training.training import train - - -class Trainer: - def __init__(self, triples, queries, collection, config=None): - self.config = ColBERTConfig.from_existing(config, Run().config) - - self.triples = triples - self.queries = queries - self.collection = collection - - def configure(self, **kw_args): - self.config.configure(**kw_args) - - def train(self, checkpoint='bert-base-uncased'): - """ - Note that config.checkpoint is ignored. Only the supplied checkpoint here is used. - """ - - # Resources don't come from the config object. They come from the input parameters. - # TODO: After the API stabilizes, make this "self.config.assign()" to emphasize this distinction. - self.configure(triples=self.triples, queries=self.queries, collection=self.collection) - self.configure(checkpoint=checkpoint) - - launcher = Launcher(train) - - self._best_checkpoint_path = launcher.launch(self.config, self.triples, self.queries, self.collection) - - - def best_checkpoint_path(self): - return self._best_checkpoint_path - diff --git a/spaces/dineshreddy/WALT/mmdet/core/bbox/samplers/sampling_result.py b/spaces/dineshreddy/WALT/mmdet/core/bbox/samplers/sampling_result.py deleted file mode 100644 index 419a8e39a3c307a7cd9cfd0565a20037ded0d646..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/core/bbox/samplers/sampling_result.py +++ /dev/null @@ -1,152 +0,0 @@ -import torch - -from mmdet.utils import util_mixins - - -class SamplingResult(util_mixins.NiceRepr): - """Bbox sampling result. - - Example: - >>> # xdoctest: +IGNORE_WANT - >>> from mmdet.core.bbox.samplers.sampling_result import * # NOQA - >>> self = SamplingResult.random(rng=10) - >>> print(f'self = {self}') - self = - """ - - def __init__(self, pos_inds, neg_inds, bboxes, gt_bboxes, assign_result, - gt_flags): - self.pos_inds = pos_inds - self.neg_inds = neg_inds - self.pos_bboxes = bboxes[pos_inds] - self.neg_bboxes = bboxes[neg_inds] - self.pos_is_gt = gt_flags[pos_inds] - - self.num_gts = gt_bboxes.shape[0] - self.pos_assigned_gt_inds = assign_result.gt_inds[pos_inds] - 1 - - if gt_bboxes.numel() == 0: - # hack for index error case - assert self.pos_assigned_gt_inds.numel() == 0 - self.pos_gt_bboxes = torch.empty_like(gt_bboxes).view(-1, 4) - else: - if len(gt_bboxes.shape) < 2: - gt_bboxes = gt_bboxes.view(-1, 4) - - self.pos_gt_bboxes = gt_bboxes[self.pos_assigned_gt_inds, :] - - if assign_result.labels is not None: - self.pos_gt_labels = assign_result.labels[pos_inds] - else: - self.pos_gt_labels = None - - @property - def bboxes(self): - """torch.Tensor: concatenated positive and negative boxes""" - return torch.cat([self.pos_bboxes, self.neg_bboxes]) - - def to(self, device): - """Change the device of the data inplace. 
- - Example: - >>> self = SamplingResult.random() - >>> print(f'self = {self.to(None)}') - >>> # xdoctest: +REQUIRES(--gpu) - >>> print(f'self = {self.to(0)}') - """ - _dict = self.__dict__ - for key, value in _dict.items(): - if isinstance(value, torch.Tensor): - _dict[key] = value.to(device) - return self - - def __nice__(self): - data = self.info.copy() - data['pos_bboxes'] = data.pop('pos_bboxes').shape - data['neg_bboxes'] = data.pop('neg_bboxes').shape - parts = [f"'{k}': {v!r}" for k, v in sorted(data.items())] - body = '    ' + ',\n    '.join(parts) - return '{\n' + body + '\n}' - - @property - def info(self): - """Returns a dictionary of info about the object.""" - return { - 'pos_inds': self.pos_inds, - 'neg_inds': self.neg_inds, - 'pos_bboxes': self.pos_bboxes, - 'neg_bboxes': self.neg_bboxes, - 'pos_is_gt': self.pos_is_gt, - 'num_gts': self.num_gts, - 'pos_assigned_gt_inds': self.pos_assigned_gt_inds, - } - - @classmethod - def random(cls, rng=None, **kwargs): - """ - Args: - rng (None | int | numpy.random.RandomState): seed or state. - kwargs (keyword arguments): - - num_preds: number of predicted boxes - - num_gts: number of true boxes - - p_ignore (float): probability of a predicted box assigned to \ an ignored truth. - - p_assigned (float): probability of a predicted box not being \ assigned. - - p_use_label (float | bool): with labels or not. - - Returns: - :obj:`SamplingResult`: Randomly generated sampling result. - - Example: - >>> from mmdet.core.bbox.samplers.sampling_result import * # NOQA - >>> self = SamplingResult.random() - >>> print(self.__dict__) - """ - from mmdet.core.bbox.samplers.random_sampler import RandomSampler - from mmdet.core.bbox.assigners.assign_result import AssignResult - from mmdet.core.bbox import demodata - rng = demodata.ensure_rng(rng) - - # make probabilistic? - num = 32 - pos_fraction = 0.5 - neg_pos_ub = -1 - - assign_result = AssignResult.random(rng=rng, **kwargs) - - # Note we could just compute an assignment - bboxes = demodata.random_boxes(assign_result.num_preds, rng=rng) - gt_bboxes = demodata.random_boxes(assign_result.num_gts, rng=rng) - - if rng.rand() > 0.2: - # sometimes algorithms squeeze their data, be robust to that - gt_bboxes = gt_bboxes.squeeze() - bboxes = bboxes.squeeze() - - if assign_result.labels is None: - gt_labels = None - else: - gt_labels = None  # todo - - if gt_labels is None: - add_gt_as_proposals = False - else: - add_gt_as_proposals = True  # make probabilistic? - - sampler = RandomSampler( - num, - pos_fraction, - neg_pos_ub=neg_pos_ub, - add_gt_as_proposals=add_gt_as_proposals, - rng=rng) - self = sampler.sample(assign_result, bboxes, gt_bboxes, gt_labels) - return self diff --git a/spaces/dineshreddy/WALT/mmdet/models/backbones/hrnet.py b/spaces/dineshreddy/WALT/mmdet/models/backbones/hrnet.py deleted file mode 100644 index c0fd0a974192231506aa68b1e1719f618b78a1b3..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/backbones/hrnet.py +++ /dev/null @@ -1,537 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import (build_conv_layer, build_norm_layer, constant_init, - kaiming_init) -from mmcv.runner import load_checkpoint -from torch.nn.modules.batchnorm import _BatchNorm - -from mmdet.utils import get_root_logger -from ..builder import BACKBONES -from .resnet import BasicBlock, Bottleneck - - -class HRModule(nn.Module): - """High-Resolution Module for HRNet. - - In this module, every branch has 4 BasicBlocks/Bottlenecks. Fusion/Exchange - is in this module. 
- """ - - def __init__(self, - num_branches, - blocks, - num_blocks, - in_channels, - num_channels, - multiscale_output=True, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN')): - super(HRModule, self).__init__() - self._check_branches(num_branches, num_blocks, in_channels, - num_channels) - - self.in_channels = in_channels - self.num_branches = num_branches - - self.multiscale_output = multiscale_output - self.norm_cfg = norm_cfg - self.conv_cfg = conv_cfg - self.with_cp = with_cp - self.branches = self._make_branches(num_branches, blocks, num_blocks, - num_channels) - self.fuse_layers = self._make_fuse_layers() - self.relu = nn.ReLU(inplace=False) - - def _check_branches(self, num_branches, num_blocks, in_channels, - num_channels): - if num_branches != len(num_blocks): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_BLOCKS({len(num_blocks)})' - raise ValueError(error_msg) - - if num_branches != len(num_channels): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_CHANNELS({len(num_channels)})' - raise ValueError(error_msg) - - if num_branches != len(in_channels): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_INCHANNELS({len(in_channels)})' - raise ValueError(error_msg) - - def _make_one_branch(self, - branch_index, - block, - num_blocks, - num_channels, - stride=1): - downsample = None - if stride != 1 or \ - self.in_channels[branch_index] != \ - num_channels[branch_index] * block.expansion: - downsample = nn.Sequential( - build_conv_layer( - self.conv_cfg, - self.in_channels[branch_index], - num_channels[branch_index] * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - build_norm_layer(self.norm_cfg, num_channels[branch_index] * - block.expansion)[1]) - - layers = [] - layers.append( - block( - self.in_channels[branch_index], - num_channels[branch_index], - stride, - downsample=downsample, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - self.in_channels[branch_index] = \ - num_channels[branch_index] * block.expansion - for i in range(1, num_blocks[branch_index]): - layers.append( - block( - self.in_channels[branch_index], - num_channels[branch_index], - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - - return nn.Sequential(*layers) - - def _make_branches(self, num_branches, block, num_blocks, num_channels): - branches = [] - - for i in range(num_branches): - branches.append( - self._make_one_branch(i, block, num_blocks, num_channels)) - - return nn.ModuleList(branches) - - def _make_fuse_layers(self): - if self.num_branches == 1: - return None - - num_branches = self.num_branches - in_channels = self.in_channels - fuse_layers = [] - num_out_branches = num_branches if self.multiscale_output else 1 - for i in range(num_out_branches): - fuse_layer = [] - for j in range(num_branches): - if j > i: - fuse_layer.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[i], - kernel_size=1, - stride=1, - padding=0, - bias=False), - build_norm_layer(self.norm_cfg, in_channels[i])[1], - nn.Upsample( - scale_factor=2**(j - i), mode='nearest'))) - elif j == i: - fuse_layer.append(None) - else: - conv_downsamples = [] - for k in range(i - j): - if k == i - j - 1: - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[i], - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - in_channels[i])[1])) - else: - conv_downsamples.append( - nn.Sequential( - 
build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[j], - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - in_channels[j])[1], - nn.ReLU(inplace=False))) - fuse_layer.append(nn.Sequential(*conv_downsamples)) - fuse_layers.append(nn.ModuleList(fuse_layer)) - - return nn.ModuleList(fuse_layers) - - def forward(self, x): - """Forward function.""" - if self.num_branches == 1: - return [self.branches[0](x[0])] - - for i in range(self.num_branches): - x[i] = self.branches[i](x[i]) - - x_fuse = [] - for i in range(len(self.fuse_layers)): - y = 0 - for j in range(self.num_branches): - if i == j: - y += x[j] - else: - y += self.fuse_layers[i][j](x[j]) - x_fuse.append(self.relu(y)) - return x_fuse - - -@BACKBONES.register_module() -class HRNet(nn.Module): - """HRNet backbone. - - High-Resolution Representations for Labeling Pixels and Regions - arXiv: https://arxiv.org/abs/1904.04514 - - Args: - extra (dict): detailed configuration for each stage of HRNet. - in_channels (int): Number of input image channels. Default: 3. - conv_cfg (dict): dictionary to construct and config conv layer. - norm_cfg (dict): dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from mmdet.models import HRNet - >>> import torch - >>> extra = dict( - >>> stage1=dict( - >>> num_modules=1, - >>> num_branches=1, - >>> block='BOTTLENECK', - >>> num_blocks=(4, ), - >>> num_channels=(64, )), - >>> stage2=dict( - >>> num_modules=1, - >>> num_branches=2, - >>> block='BASIC', - >>> num_blocks=(4, 4), - >>> num_channels=(32, 64)), - >>> stage3=dict( - >>> num_modules=4, - >>> num_branches=3, - >>> block='BASIC', - >>> num_blocks=(4, 4, 4), - >>> num_channels=(32, 64, 128)), - >>> stage4=dict( - >>> num_modules=3, - >>> num_branches=4, - >>> block='BASIC', - >>> num_blocks=(4, 4, 4, 4), - >>> num_channels=(32, 64, 128, 256))) - >>> self = HRNet(extra, in_channels=1) - >>> self.eval() - >>> inputs = torch.rand(1, 1, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... 
print(tuple(level_out.shape)) - (1, 32, 8, 8) - (1, 64, 4, 4) - (1, 128, 2, 2) - (1, 256, 1, 1) - """ - - blocks_dict = {'BASIC': BasicBlock, 'BOTTLENECK': Bottleneck} - - def __init__(self, - extra, - in_channels=3, - conv_cfg=None, - norm_cfg=dict(type='BN'), - norm_eval=True, - with_cp=False, - zero_init_residual=False): - super(HRNet, self).__init__() - self.extra = extra - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - self.zero_init_residual = zero_init_residual - - # stem net - self.norm1_name, norm1 = build_norm_layer(self.norm_cfg, 64, postfix=1) - self.norm2_name, norm2 = build_norm_layer(self.norm_cfg, 64, postfix=2) - - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - 64, - kernel_size=3, - stride=2, - padding=1, - bias=False) - - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - self.conv_cfg, - 64, - 64, - kernel_size=3, - stride=2, - padding=1, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.relu = nn.ReLU(inplace=True) - - # stage 1 - self.stage1_cfg = self.extra['stage1'] - num_channels = self.stage1_cfg['num_channels'][0] - block_type = self.stage1_cfg['block'] - num_blocks = self.stage1_cfg['num_blocks'][0] - - block = self.blocks_dict[block_type] - stage1_out_channels = num_channels * block.expansion - self.layer1 = self._make_layer(block, 64, num_channels, num_blocks) - - # stage 2 - self.stage2_cfg = self.extra['stage2'] - num_channels = self.stage2_cfg['num_channels'] - block_type = self.stage2_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition1 = self._make_transition_layer([stage1_out_channels], - num_channels) - self.stage2, pre_stage_channels = self._make_stage( - self.stage2_cfg, num_channels) - - # stage 3 - self.stage3_cfg = self.extra['stage3'] - num_channels = self.stage3_cfg['num_channels'] - block_type = self.stage3_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition2 = self._make_transition_layer(pre_stage_channels, - num_channels) - self.stage3, pre_stage_channels = self._make_stage( - self.stage3_cfg, num_channels) - - # stage 4 - self.stage4_cfg = self.extra['stage4'] - num_channels = self.stage4_cfg['num_channels'] - block_type = self.stage4_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition3 = self._make_transition_layer(pre_stage_channels, - num_channels) - self.stage4, pre_stage_channels = self._make_stage( - self.stage4_cfg, num_channels) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: the normalization layer named "norm2" """ - return getattr(self, self.norm2_name) - - def _make_transition_layer(self, num_channels_pre_layer, - num_channels_cur_layer): - num_branches_cur = len(num_channels_cur_layer) - num_branches_pre = len(num_channels_pre_layer) - - transition_layers = [] - for i in range(num_branches_cur): - if i < num_branches_pre: - if num_channels_cur_layer[i] != num_channels_pre_layer[i]: - transition_layers.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - num_channels_pre_layer[i], - num_channels_cur_layer[i], - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - 
num_channels_cur_layer[i])[1], - nn.ReLU(inplace=True))) - else: - transition_layers.append(None) - else: - conv_downsamples = [] - for j in range(i + 1 - num_branches_pre): - in_channels = num_channels_pre_layer[-1] - out_channels = num_channels_cur_layer[i] \ - if j == i - num_branches_pre else in_channels - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - out_channels, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, out_channels)[1], - nn.ReLU(inplace=True))) - transition_layers.append(nn.Sequential(*conv_downsamples)) - - return nn.ModuleList(transition_layers) - - def _make_layer(self, block, inplanes, planes, blocks, stride=1): - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - build_conv_layer( - self.conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - build_norm_layer(self.norm_cfg, planes * block.expansion)[1]) - - layers = [] - layers.append( - block( - inplanes, - planes, - stride, - downsample=downsample, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append( - block( - inplanes, - planes, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - - return nn.Sequential(*layers) - - def _make_stage(self, layer_config, in_channels, multiscale_output=True): - num_modules = layer_config['num_modules'] - num_branches = layer_config['num_branches'] - num_blocks = layer_config['num_blocks'] - num_channels = layer_config['num_channels'] - block = self.blocks_dict[layer_config['block']] - - hr_modules = [] - for i in range(num_modules): - # multi_scale_output is only used for the last module - if not multiscale_output and i == num_modules - 1: - reset_multiscale_output = False - else: - reset_multiscale_output = True - - hr_modules.append( - HRModule( - num_branches, - block, - num_blocks, - in_channels, - num_channels, - reset_multiscale_output, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - - return nn.Sequential(*hr_modules), in_channels - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - - if self.zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - constant_init(m.norm3, 0) - elif isinstance(m, BasicBlock): - constant_init(m.norm2, 0) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - """Forward function.""" - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.conv2(x) - x = self.norm2(x) - x = self.relu(x) - x = self.layer1(x) - - x_list = [] - for i in range(self.stage2_cfg['num_branches']): - if self.transition1[i] is not None: - x_list.append(self.transition1[i](x)) - else: - x_list.append(x) - y_list = self.stage2(x_list) - - x_list = [] - for i in range(self.stage3_cfg['num_branches']): - if self.transition2[i] is not None: - x_list.append(self.transition2[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage3(x_list) - - x_list = [] - for i in range(self.stage4_cfg['num_branches']): - if self.transition3[i] is not None: - x_list.append(self.transition3[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage4(x_list) - - return y_list - - def train(self, mode=True): - """Convert the model into training mode will keeping the normalization - layer freezed.""" - super(HRNet, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_pipelines/panet_pipeline.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_pipelines/panet_pipeline.py deleted file mode 100644 index eae50de4fab0536d114509854f9250c0d613cb3c..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_pipelines/panet_pipeline.py +++ /dev/null @@ -1,156 +0,0 @@ -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -# for ctw1500 -img_scale_train_ctw1500 = [(3000, 640)] -shrink_ratio_train_ctw1500 = (1.0, 0.7) -target_size_train_ctw1500 = (640, 640) -train_pipeline_ctw1500 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='LoadTextAnnotations', - with_bbox=True, - with_mask=True, - poly2mask=False), - dict(type='ColorJitter', brightness=32.0 / 255, saturation=0.5), - dict(type='Normalize', **img_norm_cfg), - dict( - type='ScaleAspectJitter', - img_scale=img_scale_train_ctw1500, - ratio_range=(0.7, 1.3), - aspect_ratio_range=(0.9, 1.1), - multiscale_mode='value', - keep_ratio=False), - # shrink_ratio is from big to small. 
The 1st must be 1.0 - dict(type='PANetTargets', shrink_ratio=shrink_ratio_train_ctw1500), - dict(type='RandomFlip', flip_ratio=0.5, direction='horizontal'), - dict(type='RandomRotateTextDet'), - dict( - type='RandomCropInstances', - target_size=target_size_train_ctw1500, - instance_key='gt_kernels'), - dict(type='Pad', size_divisor=32), - dict( - type='CustomFormatBundle', - keys=['gt_kernels', 'gt_mask'], - visualize=dict(flag=False, boundary_key='gt_kernels')), - dict(type='Collect', keys=['img', 'gt_kernels', 'gt_mask']) -] - -img_scale_test_ctw1500 = (3000, 640) -test_pipeline_ctw1500 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale_test_ctw1500, # used by Resize - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] - -# for icdar2015 -img_scale_train_icdar2015 = [(3000, 736)] -shrink_ratio_train_icdar2015 = (1.0, 0.5) -target_size_train_icdar2015 = (736, 736) -train_pipeline_icdar2015 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='LoadTextAnnotations', - with_bbox=True, - with_mask=True, - poly2mask=False), - dict(type='ColorJitter', brightness=32.0 / 255, saturation=0.5), - dict(type='Normalize', **img_norm_cfg), - dict( - type='ScaleAspectJitter', - img_scale=img_scale_train_icdar2015, - ratio_range=(0.7, 1.3), - aspect_ratio_range=(0.9, 1.1), - multiscale_mode='value', - keep_ratio=False), - dict(type='PANetTargets', shrink_ratio=shrink_ratio_train_icdar2015), - dict(type='RandomFlip', flip_ratio=0.5, direction='horizontal'), - dict(type='RandomRotateTextDet'), - dict( - type='RandomCropInstances', - target_size=target_size_train_icdar2015, - instance_key='gt_kernels'), - dict(type='Pad', size_divisor=32), - dict( - type='CustomFormatBundle', - keys=['gt_kernels', 'gt_mask'], - visualize=dict(flag=False, boundary_key='gt_kernels')), - dict(type='Collect', keys=['img', 'gt_kernels', 'gt_mask']) -] - -img_scale_test_icdar2015 = (1333, 736) -test_pipeline_icdar2015 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale_test_icdar2015, # used by Resize - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] - -# for icdar2017 -img_scale_train_icdar2017 = [(3000, 800)] -shrink_ratio_train_icdar2017 = (1.0, 0.5) -target_size_train_icdar2017 = (800, 800) -train_pipeline_icdar2017 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='LoadTextAnnotations', - with_bbox=True, - with_mask=True, - poly2mask=False), - dict(type='ColorJitter', brightness=32.0 / 255, saturation=0.5), - dict(type='Normalize', **img_norm_cfg), - dict( - type='ScaleAspectJitter', - img_scale=img_scale_train_icdar2017, - ratio_range=(0.7, 1.3), - aspect_ratio_range=(0.9, 1.1), - multiscale_mode='value', - keep_ratio=False), - dict(type='PANetTargets', shrink_ratio=shrink_ratio_train_icdar2017), - dict(type='RandomFlip', flip_ratio=0.5, direction='horizontal'), - dict(type='RandomRotateTextDet'), - dict( - type='RandomCropInstances', - target_size=target_size_train_icdar2017, - instance_key='gt_kernels'), - 
dict(type='Pad', size_divisor=32), - dict( - type='CustomFormatBundle', - keys=['gt_kernels', 'gt_mask'], - visualize=dict(flag=False, boundary_key='gt_kernels')), - dict(type='Collect', keys=['img', 'gt_kernels', 'gt_mask']) -] - -img_scale_test_icdar2017 = (1333, 800) -test_pipeline_icdar2017 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale_test_icdar2017, # used by Resize - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] diff --git a/spaces/dirge/voicevox/speaker_info/7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff/policy.md b/spaces/dirge/voicevox/speaker_info/7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff/policy.md deleted file mode 100644 index c9bcc2cea42f727c8e43c934fc38163144848882..0000000000000000000000000000000000000000 --- a/spaces/dirge/voicevox/speaker_info/7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff/policy.md +++ /dev/null @@ -1,3 +0,0 @@ -dummy1 policy - -https://voicevox.hiroshiba.jp/ diff --git a/spaces/dmeck/RVC-Speakers/speakers/server/__init__.py b/spaces/dmeck/RVC-Speakers/speakers/server/__init__.py deleted file mode 100644 index 0591109378f8c017c4a65acab9f496dfb28454c0..0000000000000000000000000000000000000000 --- a/spaces/dmeck/RVC-Speakers/speakers/server/__init__.py +++ /dev/null @@ -1,109 +0,0 @@ -from speakers.common.registry import registry -from speakers.server.bootstrap.bootstrap_register import load_bootstrap, get_bootstrap - -from omegaconf import OmegaConf - -from speakers.common.utils import get_abs_path -from oscrypto import util as crypto_utils - -import asyncio -import time -import os -import sys -import traceback - -import subprocess - -root_dir = os.path.dirname(os.path.abspath(__file__)) -registry.register_path("server_library_root", root_dir) -# Time to wait for web client to send a request to /task-state request -# before that web clients task gets removed from the queue -WEB_CLIENT_TIMEOUT = -1 -# Time before finished tasks get removed from memory -FINISHED_TASK_REMOVE_TIMEOUT = 1800 - - -def generate_nonce(): - return crypto_utils.rand_bytes(16).hex() - - -def start_translator_client_proc(speakers_config_file: str, nonce: str = None): - cmds = [ - sys.executable, - '-m', 'speakers', - '--mode', 'web_runner', - '--speakers-config-file', speakers_config_file, - '--nonce', nonce, - ] - - proc = subprocess.Popen(cmds, cwd=f"{registry.get_path('library_root')}/../") - return proc - - -async def start_async_app(speakers_config_file: str, nonce: str = None): - config = OmegaConf.load(get_abs_path(speakers_config_file)) - load_bootstrap(config=config.get("bootstrap")) - - runner_bootstrap_web = get_bootstrap("runner_bootstrap_web") - - runner_bootstrap_web.set_nonce(nonce=nonce) - await runner_bootstrap_web.run() - return runner_bootstrap_web - - -async def dispatch(speakers_config_file: str, nonce: str = None): - - if nonce is None: - nonce = os.getenv('MT_WEB_NONCE', generate_nonce()) - - runner = await start_async_app(speakers_config_file=speakers_config_file, nonce=nonce) - # Create client process - print() - client_process = start_translator_client_proc(speakers_config_file, nonce=nonce) - - try: - while True: - """任务队列状态维护""" - await asyncio.sleep(1) - - # Restart client if OOM or similar errors occured - if client_process.poll() is not None: - print('Restarting translator process') - if 
len(runner.ongoing_tasks) > 0: - task_id = runner.ongoing_tasks.pop(0) - state = runner.task_states[task_id] - state['info'] = 'error' - state['finished'] = True - client_process = start_translator_client_proc(speakers_config_file=speakers_config_file) - - # Filter queued and finished tasks - now = time.time() - to_del_task_ids = set() - for tid, s in runner.task_states.items(): - payload = runner.task_data[tid] - # Remove finished tasks after 30 minutes - if s['finished'] and now - payload.created_at > FINISHED_TASK_REMOVE_TIMEOUT: - to_del_task_ids.add(tid) - - # Remove queued tasks without web client - elif WEB_CLIENT_TIMEOUT >= 0: - if tid not in runner.ongoing_tasks and not s['finished'] \ - and now - payload.requested_at > WEB_CLIENT_TIMEOUT: - print('REMOVING TASK', tid) - to_del_task_ids.add(tid) - try: - runner.queue.remove(tid) - except Exception: - pass - - for tid in to_del_task_ids: - del runner.task_states[tid] - del runner.task_data[tid] - - except: - if client_process.poll() is None: - # client_process.terminate() - client_process.kill() - await runner.destroy() - traceback.print_exc() - raise diff --git a/spaces/duycse1603/math2tex/ScanSSD/layers/functions/detection.py b/spaces/duycse1603/math2tex/ScanSSD/layers/functions/detection.py deleted file mode 100644 index 597aa7374786ec7400a76db3f3d8a5c46e92da5b..0000000000000000000000000000000000000000 --- a/spaces/duycse1603/math2tex/ScanSSD/layers/functions/detection.py +++ /dev/null @@ -1,68 +0,0 @@ -import torch -from torch.autograd import Function -from ..box_utils import decode, nms - -class Detect(Function): - """At test time, Detect is the final layer of SSD. Decode location preds, - apply non-maximum suppression to location predictions based on conf - scores and threshold to a top_k number of output predictions for both - confidence score and locations. - """ - def __init__(self, cfg, num_classes, bkg_label, top_k, conf_thresh, nms_thresh): - self.num_classes = num_classes - self.background_label = bkg_label - self.top_k = top_k - # Parameters used in nms. - self.nms_thresh = nms_thresh - if nms_thresh <= 0: - raise ValueError('nms_threshold must be non negative.') - self.conf_thresh = conf_thresh - self.variance = cfg['variance'] - - # @staticmethod - def forward(self, loc_data, conf_data, prior_data): - """ - Args: - loc_data: (tensor) Loc preds from loc layers - Shape: [batch,num_priors*4] - conf_data: (tensor) Shape: Conf preds from conf layers - Shape: [batch*num_priors,num_classes] - prior_data: (tensor) Prior boxes and variances from priorbox layers - Shape: [1,num_priors,4] - """ - # move to CPU - loc_data = loc_data.cpu() - conf_data = conf_data.cpu() - prior_data = prior_data.cpu() - - num = loc_data.size(0) # batch size - num_priors = prior_data.size(0) - output = torch.zeros(num, self.num_classes, self.top_k, 5) - conf_preds = conf_data.view(num, num_priors, - self.num_classes).transpose(2, 1) - - # Decode predictions into bboxes. 
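# (Editorial note, descriptive only.) The loop below works per image: it decodes
# the regressed offsets against the priors, keeps per-class scores above
# conf_thresh, runs class-wise NMS, and stores at most top_k surviving
# detections per class as (score, box) rows in `output`.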
- for i in range(num): - decoded_boxes = decode(loc_data[i], prior_data, self.variance) - # For each class, perform nms - conf_scores = conf_preds[i].clone() - #print('decoded boxes ', decoded_boxes) - #print('conf scores', conf_scores) - for cl in range(1, self.num_classes): - c_mask = conf_scores[cl].gt(self.conf_thresh) - scores = conf_scores[cl][c_mask] - if scores.dim() == 0: - continue - l_mask = c_mask.unsqueeze(1).expand_as(decoded_boxes) - boxes = decoded_boxes[l_mask].view(-1, 4) - # idx of highest scoring and non-overlapping boxes per class - - ids, count = nms(boxes, scores, self.nms_thresh, self.top_k) - output[i, cl, :count] = \ - torch.cat((scores[ids[:count]].unsqueeze(1), - boxes[ids[:count]]), 1) - flt = output.contiguous().view(num, -1, 5) - _, idx = flt[:, :, 0].sort(1, descending=True) - _, rank = idx.sort(1) - flt[(rank < self.top_k).unsqueeze(-1).expand_as(flt)].fill_(0) - return output, boxes, scores diff --git a/spaces/ecarbo/text-generator-gpt-neo/app.py b/spaces/ecarbo/text-generator-gpt-neo/app.py deleted file mode 100644 index e13b7461fda47629406c9dba813bb7ae01e19493..0000000000000000000000000000000000000000 --- a/spaces/ecarbo/text-generator-gpt-neo/app.py +++ /dev/null @@ -1,25 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForCausalLM -import torch -import gradio as gr - -tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B") -model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B") - -def text_generation(input_text, seed): - input_ids = tokenizer(input_text, return_tensors="pt").input_ids - torch.manual_seed(seed) # Max value: 18446744073709551615 - outputs = model.generate(input_ids, do_sample=True, min_length=50, max_length=200) - generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True) - return generated_text - -title = "Text Generator Demo GPT-Neo" -description = "Text Generator Application by ecarbo" - -gr.Interface( - text_generation, - [gr.inputs.Textbox(lines=2, label="Enter input text"), gr.inputs.Number(default=10, label="Enter seed number")], - [gr.outputs.Textbox(type="auto", label="Text Generated")], - title=title, - description=description, - theme="huggingface" -).launch() \ No newline at end of file diff --git a/spaces/editing-images/ai-halloween-photobooth/style.css b/spaces/editing-images/ai-halloween-photobooth/style.css deleted file mode 100644 index 6c0d066f2b8e2f4234634daf7e6fa8b2b2c0fd2b..0000000000000000000000000000000000000000 --- a/spaces/editing-images/ai-halloween-photobooth/style.css +++ /dev/null @@ -1,153 +0,0 @@ -/* -This CSS file is modified from: -https://huggingface.co/spaces/DeepFloyd/IF/blob/main/style.css -*/ -@import url('https://fonts.googleapis.com/css2?family=IBM+Plex+Sans:wght@400;700&display=swap'); -body gradio-app{ - background-image: url(https://i.imgur.com/gqXjjP0.jpg) !important; - background-position: center -131px !important; - background-size: 1480px !important; - background-repeat: no-repeat !important; - background-color: #000305 !important; -} -h1, h3 { - text-align: center; - font-family: 'IBM Plex Sans', sans-serif; -} - -.gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - padding-top: 0 !important; - margin-top: -35px !important; -} - -.gr-button { - color: white; - border-color: black; - background: black; -} - -input[type='range'] { - accent-color: black; -} - -.dark input[type='range'] { - accent-color: #dfdfdf; -} - -.container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; -} - -/*background-image: 
url("https://i.imgur.com/gqXjjP0.jpg");*/ - -.gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; -} - -.gr-form { - flex: 1 1 50%; - border-top-right-radius: 0; - border-bottom-right-radius: 0; -} - -#prompt-container { - gap: 0; -} - -#prompt-text-input, -#negative-prompt-text-input { - padding: .45rem 0.625rem -} - -/* #component-16 { - border-top-width: 1px !important; - margin-top: 1em -} */ - -.image_duplication { - position: absolute; - width: 100px; - left: 50px -} - -#component-0 { - max-width: 1048px; - margin: auto; -} - -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; margin-left: auto; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} -#lora_image button{ -height: 126px; width: 100% -} -h3, h4{margin-top: 0 !important} -#input_image img, #output_image img, #lora_image img{ - object-fit: cover -} -.output_column_reverse{ - flex-direction: column-reverse !important; -} -#total_box{ - background-color: rgba(0, 0, 0, 0.5); - backdrop-filter: blur(5px); - max-width: 850px; - margin: 0 auto; - margin-top: -10px; -} -*{transition: width 0.5s ease, height 0.5s ease, flex-grow 0.5s ease} -#buttons_area{margin-top: 1em} -#iccv_logo{display: block !important; margin-top: 1.5em} -#iccv_logo h1{margin-bottom: .5em} -#pick{margin-top: .8em} -[aria-label="Edit"],[aria-label="Download"],[aria-label="Share"]{ - display: none !important; -} -h3 a{ - color: var(--body-text-color) !important; -} -@media print{ - body gradio-app { - background-color: #000305 !important; - background-image: url(https://i.imgur.com/gqXjjP0.jpg) !important; - } - @page { - size: landscape; - margin: 0 !important; - } - #input_image, #output_image{height: 356px !important} - #buttons_area{display: none !important} - h3{display: none !important} - footer{display: none !important} - - [data-testid="block-label"], - [aria-label="Edit"], - [aria-label="Clear"], - [aria-label="Download"] { - display: none !important; - } - #pick{display: none !important} -} \ No newline at end of file diff --git a/spaces/eetn/Hellenic_AI_Society/README.md b/spaces/eetn/Hellenic_AI_Society/README.md deleted file mode 100644 index 51e4e4f0c7b6a73e0a1a82301c5d90c6a9595e80..0000000000000000000000000000000000000000 --- a/spaces/eetn/Hellenic_AI_Society/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Hellenic_AI_Society -emoji: 🔥 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/eforebrahim/Cassava-Leaf-Disease-Classification/app.py b/spaces/eforebrahim/Cassava-Leaf-Disease-Classification/app.py deleted file mode 100644 index a4ac4d612d8ff86469f5f19a1010b5d09fb1c721..0000000000000000000000000000000000000000 --- a/spaces/eforebrahim/Cassava-Leaf-Disease-Classification/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import cv2 -import os -import PIL.Image -import numpy as np -import streamlit as st -import tensorflow as tf -from tensorflow.keras.preprocessing import image -from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2,preprocess_input as mobilenet_v2_preprocess_input - -model = tf.keras.models.load_model("model_cassava.hdf5") -### load file -# uploaded_file = st.file_uploader("Choose a image file", type="jpg") - -map_dict = {0: 'Cassava Bacterial Blight (CBB)', - 1: 'Cassava Brown Streak Disease (CBSD)', - 2: 'Cassava Green Mottle (CGM)', - 3: 'Cassava Mosaic Disease (CMD)', - 4: 'Healthy'} - -option = st.radio('', ['Choose a test image', 'Choose your own image']) -if option == 'Choose your own image': - uploaded_file = st.file_uploader("Choose a image file", type="jpg") - - if uploaded_file is not None: - # Convert the file to an opencv image. - file_bytes = np.asarray(bytearray(uploaded_file.read()), dtype=np.uint8) - opencv_image = cv2.imdecode(file_bytes, 1) - opencv_image = cv2.cvtColor(opencv_image, cv2.COLOR_BGR2RGB) - resized = cv2.resize(opencv_image,(224,224)) - # Now do something with the image! For example, let's display it: - st.image(opencv_image, channels="RGB") - - # resized = mobilenet_v2_preprocess_input(resized) - img_reshape = resized[np.newaxis,...] - - Genrate_pred = st.button("Generate Prediction") - if Genrate_pred: - predictions_proba = model.predict(img_reshape).max()*100 - prediction = model.predict(img_reshape).argmax() - st.title("Predicted Label for the image is {}".format(map_dict[prediction])) - st.title("with a probability of" f" { '{:.2f}'.format(predictions_proba)}") - -else: - test_images = os.listdir('train_images') - test_image = st.selectbox('Please select a test image:', test_images) - file_path = 'train_images/' + test_image - with open(file_path, 'rb') as img_stream: - file_bytes = np.asarray(bytearray(img_stream.read()), dtype=np.uint8) - opencv_image = cv2.imdecode(file_bytes, 1) - opencv_image = cv2.cvtColor(opencv_image, cv2.COLOR_BGR2RGB) - resized = cv2.resize(opencv_image,(224,224)) - # Now do something with the image! For example, let's display it: - st.image(opencv_image, channels="RGB") - - # resized = mobilenet_v2_preprocess_input(resized) - img_reshape = resized[np.newaxis,...] 
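# (Editorial note.) After np.newaxis the array shape is (1, 224, 224, 3): a
# single-image, channels-last batch suitable for model.predict().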
- - Genrate_pred = st.button("Generate Prediction") - if Genrate_pred: - predictions_proba = model.predict(img_reshape).max()*100 - prediction = model.predict(img_reshape).argmax() - st.title("Predicted Label for the image is {}".format(map_dict[prediction])) - st.title("with a probability of" f" { '{:.2f}'.format(predictions_proba)}") - - diff --git a/spaces/ekojs/ml_food10/app.py b/spaces/ekojs/ml_food10/app.py deleted file mode 100644 index e0c19ddc6ece270805a5765e2bc9c81fd3c6345b..0000000000000000000000000000000000000000 --- a/spaces/ekojs/ml_food10/app.py +++ /dev/null @@ -1,245 +0,0 @@ -import gradio as gr -from fastai.vision.all import * -import skimage -from torchvision.transforms import RandAugment - -class RandAugmentTransform(RandTransform): - "A fastai transform handler/wrapper for RandAugment (https://arxiv.org/abs/1909.13719)" - split_idx, order = None, 2 - def __init__(self): store_attr() - - def before_call(self, b, split_idx): - self.idx = split_idx - self.aug = RandAugment() - - def encodes(self, img: PILImage): - return self.aug(img) if self.idx == 0 else img - -# Adapted from https://pytorch.org/vision/stable/_modules/torchvision/models/resnet.html -class BasicResNetBlock(nn.Module): - """Basic ResNet Block (no bottleneck) with GELU instead of RELU""" - expansion: int = 1 - - def __init__( - self, - inplanes: int, - planes: int, - stride: int = 1, - downsample = None - ) -> None: - super().__init__() - # Both self.conv1 and self.downsample layers downsample the input when stride != 1 - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=3, padding=1, stride=stride, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - self.gelu = nn.GELU() - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1, stride=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x: Tensor) -> Tensor: - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.gelu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.gelu(out) - - return out - - # Adapted from https://pytorch.org/vision/stable/_modules/torchvision/models/resnet.html -class ResNet(nn.Module): - def __init__( - self, - block, - layers, - num_classes = 10, - zero_init_residual = True, - ) -> None: - super().__init__() - - self.inplanes = 64 - - self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, bias=False) - self.bn1 = nn.BatchNorm2d(self.inplanes) - self.gelu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2) - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - self.fc = nn.Linear(512, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode="fan_out") - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - # Zero-initialize the last BN in each residual branch, - # so that the residual branch starts with zeros, and each residual block behaves like an identity. 
- # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 - if zero_init_residual: - for m in self.modules(): - if isinstance(m, BasicResNetBlock): - nn.init.constant_(m.bn2.weight, 0) - - def _make_layer( - self, - block, - planes: int, - blocks: int, - stride: int = 1, - ) -> nn.Sequential: - downsample = None - if stride != 1 or self.inplanes != planes: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes, 1, stride, bias=False), - nn.BatchNorm2d(planes), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - - self.inplanes = planes - - for _ in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x: Tensor) -> Tensor: - x = self.conv1(x) - x = self.bn1(x) - x = self.gelu(x) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - x = self.avgpool(x) - x = torch.flatten(x, 1) - x = self.fc(x) - - return x - -class ResNetWithPatches(nn.Module): - def __init__( - self, - block, - layers, - num_classes = 10, - zero_init_residual = True, - ) -> None: - super().__init__() - - self.inplanes = 64 - - # Patchify stem - # self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, bias=False) - # self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.stem = nn.Conv2d(3, self.inplanes, kernel_size=4, stride=4) - self.gelu = nn.ReLU(inplace=True) - self.bn1 = nn.BatchNorm2d(self.inplanes) - - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2) - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - self.fc = nn.Linear(512, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode="fan_out") - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - # Zero-initialize the last BN in each residual branch, - # so that the residual branch starts with zeros, and each residual block behaves like an identity. 
- # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 - if zero_init_residual: - for m in self.modules(): - if isinstance(m, BasicResNetBlock): - nn.init.constant_(m.bn2.weight, 0) - - def _make_layer( - self, - block, - planes: int, - blocks: int, - stride: int = 1, - ) -> nn.Sequential: - downsample = None - if stride != 1 or self.inplanes != planes: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes, 1, stride, bias=False), - nn.BatchNorm2d(planes), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - - self.inplanes = planes - - for _ in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x: Tensor) -> Tensor: - x = self.stem(x) - x = self.gelu(x) - x = self.bn1(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - x = self.avgpool(x) - x = torch.flatten(x, 1) - x = self.fc(x) - - return x - -learner = load_learner("ResNetPatches.pkl") - -labels = learner.dls.vocab - - -def predict(img): - img = PILImage(PILImage.create(img).resize((224, 224))) - pred, pred_idx, probs = learner.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - - -title = "Food Classifier" -description = "A ResNet34 model modified to use patches of images, trained to classify food items on a subset (10 classes) of the Food 101 dataset." -article = "

Relevant Resources

" -examples = [["chicken_curry.jpg"], ["garlic_bread.jpg"], ["takoyaki.jpg"]] -enable_queue = True - -gr.Interface( - fn=predict, - inputs=gr.inputs.Image(shape=(224, 224)), - outputs=gr.outputs.Label(num_top_classes=3), - title=title, - description=description, - article=article, - examples=examples, - enable_queue=enable_queue, -).launch() diff --git a/spaces/epexVfeibi/Imagedeblurr/!!BETTER!! Full Version Scriptcase 6 Serial Number.md b/spaces/epexVfeibi/Imagedeblurr/!!BETTER!! Full Version Scriptcase 6 Serial Number.md deleted file mode 100644 index cf0988f4fbb38156b8d1de0dcee271acde962d1d..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/!!BETTER!! Full Version Scriptcase 6 Serial Number.md +++ /dev/null @@ -1,147 +0,0 @@ - -

Full Version Scriptcase 6 Serial Number: What You Need to Know

- -

If you are looking for a powerful and easy-to-use web development tool, you might have heard of Scriptcase. Scriptcase is a low-code platform that allows you to create web applications using only your browser. You can import data from spreadsheets, databases, or external sources, and generate forms, reports, charts, dashboards, and more.

-

Full Version Scriptcase 6 Serial Number


DOWNLOADhttps://jinyurl.com/2uEpj5



- -

Scriptcase is compatible with various databases, such as MySQL, PostgreSQL, SQLite, Interbase, Firebird, MS Access, Oracle, SQL Server, DB2, SyBase, Informix, or ODBC layer. You can also customize your applications with your own business rules, using external libraries, programming IDE (Blank), Events, Macros, and other features.

- -

Scriptcase has different versions for different needs and budgets. You can choose from Express Edition, Professional Edition, Enterprise Edition, or Cloud Edition. Each version has different features and limitations. For example, the Express Edition is free but has a limit of 10 projects and 2 connections.

- -

But what if you want to get the full version of Scriptcase 6 with all the features and no limitations? You will need a valid serial number to activate it. A serial number is a unique code that identifies your license and allows you to use Scriptcase without restrictions.

- -

How to Get a Full Version Scriptcase 6 Serial Number

- -

There are two ways to get a full version Scriptcase 6 serial number: buying it or cracking it.

- -

The first option is to buy it from the official website of Scriptcase. You can choose the edition that suits your needs and pay with your credit card or PayPal. You will receive an email with your serial number and instructions on how to activate it. This is the legal and safe way to get a full version Scriptcase 6 serial number.

- -

The second option is to crack it. This means using a software or a tool that generates a fake serial number that bypasses the activation process of Scriptcase. This is an illegal and risky way to get a full version Scriptcase 6 serial number. You might find some websites or blogs that offer cracked versions of Scriptcase 6 or serial number generators. However, these are not reliable sources and might contain viruses, malware, or spyware that can harm your computer or steal your personal information. Moreover, you might face legal consequences for violating the terms and conditions of Scriptcase.

- -

Why You Should Avoid Cracking Scriptcase 6

- -

Cracking Scriptcase 6 might seem tempting if you want to save money or try it before buying it. However, there are many reasons why you should avoid cracking Scriptcase 6 and opt for the legal way instead.

-

- - - -

Conclusion

- -

Scriptcase 6 is a great web development tool that can help you create web applications faster and easier. However, if you want to get the full version of Scriptcase 6 with all the features and no limitations, you need a valid serial number to activate it.

- -

The best way to get a full version Scriptcase 6 serial number is to buy it from the official website of Scriptcase. This is the legal and safe way to get a full version Scriptcase 6 serial number. You will also enjoy the benefits of having a legitimate license of Scriptcase 6.

- -

The worst way to get a full version Scriptcase 6 serial number is to crack it. This is an illegal and risky way to get a full version Scriptcase 6 serial number. You will also face the drawbacks of having a cracked version of Scriptcase 6.

- -

Therefore, we recommend you to avoid cracking Scriptcase 6 and opt for the legal way instead. This will ensure your security, quality, professionalism, and productivity as a web developer.

-

How to Use Scriptcase 6

- -

Once you have activated your full version Scriptcase 6 serial number, you can start using Scriptcase 6 to create web applications. The process is simple and intuitive. You just need to follow these steps:

- -
    -
  1. Create a new project or open an existing one.
  2. -
  3. Select the database connection that you want to use for your application.
  4. -
  5. Create a new application or edit an existing one. You can choose from different types of applications, such as Form, Grid, Chart, Calendar, Control, Report, Dashboard, etc.
  6. -
  7. Configure the settings and options of your application, such as layout, fields, buttons, events, validations, filters, etc.
  8. -
  9. Generate the source code of your application and run it in your browser.
  10. -
  11. Publish your application to a web server or export it to a file.
  12. -
- -

Scriptcase 6 also provides you with tools and resources to help you with your web development. You can access the documentation, tutorials, videos, samples, forums, support, and more from the Scriptcase menu.

- -

Benefits of Using Scriptcase 6

- -

Using Scriptcase 6 with a full version serial number has many benefits for web developers. Here are some of them:

- - - -

Scriptcase 6 is a powerful and easy-to-use web development tool that can help you create web applications faster and easier. However, if you want to get the full version of Scriptcase 6 with all the features and no limitations, you need a valid serial number to activate it.

- -

The best way to get a full version Scriptcase 6 serial number is to buy it from the official website of Scriptcase. This is the legal and safe way to get a full version Scriptcase 6 serial number. You will also enjoy the benefits of having a legitimate license of Scriptcase 6.

- -

The worst way to get a full version Scriptcase 6 serial number is to crack it. This is an illegal and risky way to get a full version Scriptcase 6 serial number. You will also face the drawbacks of having a cracked version of Scriptcase 6.

- -

Therefore, we recommend you to avoid cracking Scriptcase 6 and opt for the legal way instead. This will ensure your security, quality, professionalism, and productivity as a web developer.

-

How to Download and Install Scriptcase 6

- -

If you have bought a full version Scriptcase 6 serial number, you can download and install Scriptcase 6 on your computer or on a web server. The process is simple and straightforward. You just need to follow these steps:

- -
    -
  1. Go to the official website of Scriptcase and log in with your email and password.
  2. -
  3. Go to the download section and choose the version that matches your operating system (Windows, Linux, or Mac).
  4. -
  5. Download the installer file and save it on your computer or on a web server.
  6. -
  7. Run the installer file and follow the instructions on the screen.
  8. -
  9. Enter your full version Scriptcase 6 serial number when prompted.
  10. -
  11. Finish the installation and launch Scriptcase 6.
  12. -
- -

You can also watch the video tutorials on the website of Scriptcase to see how to download and install Scriptcase 6 step by step.

- -

How to Update and Upgrade Scriptcase 6

- -

If you have a full version Scriptcase 6 serial number, you can update and upgrade Scriptcase 6 to get the latest features and improvements. The process is easy and convenient. You just need to follow these steps:

- -
    -
  1. Open Scriptcase 6 and go to Help > Check for updates.
  2. -
  3. If there are any updates available, click on Download and Install.
  4. -
  5. Wait for the updates to be downloaded and installed.
  6. -
  7. Restart Scriptcase 6 and enjoy the new features and improvements.
  8. -
- -

If you want to upgrade your edition of Scriptcase 6, you can do so by going to the website of Scriptcase and choosing the edition that you want to upgrade to. You will need to pay the difference between your current edition and the new edition. You will receive an email with your new serial number and instructions on how to activate it.

- -

You can also watch the video tutorials on the website of Scriptcase to see how to update and upgrade Scriptcase 6 step by step.

-

How to Troubleshoot Scriptcase 6

- -

If you encounter any problems or issues while using Scriptcase 6 with a full version serial number, you can troubleshoot them by following these steps:

- -
    -
  1. Check the system requirements of Scriptcase 6 and make sure your computer or web server meets them.
  2. -
  3. Check the compatibility of Scriptcase 6 with your browser and make sure you are using the latest version.
  4. -
  5. Check the connection settings of Scriptcase 6 and make sure they are correct and valid.
  6. -
  7. Check the error logs of Scriptcase 6 and see if there are any messages or codes that indicate the cause of the problem.
  8. -
  9. Check the documentation and FAQ of Scriptcase 6 and see if there are any solutions or tips for your problem.
  10. -
  11. Contact the support team of Scriptcase 6 and report your problem with details and screenshots.
  12. -
- -

You can also watch the video tutorials on the website of Scriptcase to see how to troubleshoot Scriptcase 6 step by step.

- -

How to Uninstall Scriptcase 6

- -

If you want to uninstall Scriptcase 6 from your computer or web server, you can do so by following these steps:

- -
    -
  1. Backup your projects and data before uninstalling Scriptcase 6.
  2. -
  3. Go to the control panel of your computer or web server and find the program list.
  4. -
  5. Select Scriptcase 6 and click on Uninstall.
  6. -
  7. Follow the instructions on the screen to complete the uninstallation process.
  8. -
  9. Delete any remaining files or folders related to Scriptcase 6 from your computer or web server.
  10. -
- -

You can also watch the video tutorials on the website of Scriptcase to see how to uninstall Scriptcase 6 step by step.

-

Conclusion

- -

Scriptcase 6 is a powerful and easy-to-use web development tool that can help you create web applications faster and easier. However, if you want to get the full version of Scriptcase 6 with all the features and no limitations, you need a valid serial number to activate it.

- -

The best way to get a full version Scriptcase 6 serial number is to buy it from the official website of Scriptcase. This is the legal and safe way to get a full version Scriptcase 6 serial number. You will also enjoy the benefits of having a legitimate license of Scriptcase 6, such as updates, support, documentation, tutorials, community forums, and more.

- -

The worst way to get a full version Scriptcase 6 serial number is to crack it. This is an illegal and risky way to get a full version Scriptcase 6 serial number. You will also face the drawbacks of having a cracked version of Scriptcase 6, such as viruses, malware, spyware, errors, bugs, legal consequences, and more.

- -

Therefore, we recommend you to avoid cracking Scriptcase 6 and opt for the legal way instead. This will ensure your security, quality, professionalism, and productivity as a web developer.

- -

We hope this article has helped you to understand what Scriptcase 6 is and how to get a full version Scriptcase 6 serial number. If you have any questions or feedback, please feel free to contact us or leave a comment below.

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/epexVfeibi/Imagedeblurr/!!TOP!! Download Tally Ees V6 3 Release 1 Crack Keygen.md b/spaces/epexVfeibi/Imagedeblurr/!!TOP!! Download Tally Ees V6 3 Release 1 Crack Keygen.md deleted file mode 100644 index cb25045577f967a29d525cd31e263af689d4cb8c..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/!!TOP!! Download Tally Ees V6 3 Release 1 Crack Keygen.md +++ /dev/null @@ -1,6 +0,0 @@ -

Download Tally Ees V6 3 Release 1 Crack Keygen


DOWNLOAD === https://jinyurl.com/2uEpHK



-
-We suggest that you download and install the current product, i.e. Tally ERP 9. To migrate your Tally 6.3 data to Tally ERP 9, you can follow the steps below or contact us if ... Tally ERP 9 is compatible with Windows XP SP2 or higher versions of the ... 1. Start Tally ERP 9. 2. Click A: Activate Your License in the startup screen, ... 4d29de3e1b
-
-
-

diff --git a/spaces/eswat/Image-and-3D-Model-Creator/PIFu/README.md b/spaces/eswat/Image-and-3D-Model-Creator/PIFu/README.md deleted file mode 100644 index 5eae12f2a370027de6c46fbf78ec68a1ecb1c01c..0000000000000000000000000000000000000000 --- a/spaces/eswat/Image-and-3D-Model-Creator/PIFu/README.md +++ /dev/null @@ -1,167 +0,0 @@ -# PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization - -[![report](https://img.shields.io/badge/arxiv-report-red)](https://arxiv.org/abs/1905.05172) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1GFSsqP2BWz4gtq0e-nki00ZHSirXwFyY) - -News: -* \[2020/05/04\] Added EGL rendering option for training data generation. Now you can create your own training data with headless machines! -* \[2020/04/13\] Demo with Google Colab (incl. visualization) is available. Special thanks to [@nanopoteto](https://github.com/nanopoteto)!!! -* \[2020/02/26\] License is updated to MIT license! Enjoy! - -This repository contains a pytorch implementation of "[PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization](https://arxiv.org/abs/1905.05172)". - -[Project Page](https://shunsukesaito.github.io/PIFu/) -![Teaser Image](https://shunsukesaito.github.io/PIFu/resources/images/teaser.png) - -If you find the code useful in your research, please consider citing the paper. - -``` -@InProceedings{saito2019pifu, -author = {Saito, Shunsuke and Huang, Zeng and Natsume, Ryota and Morishima, Shigeo and Kanazawa, Angjoo and Li, Hao}, -title = {PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization}, -booktitle = {The IEEE International Conference on Computer Vision (ICCV)}, -month = {October}, -year = {2019} -} -``` - - -This codebase provides: -- test code -- training code -- data generation code - -## Requirements -- Python 3 -- [PyTorch](https://pytorch.org/) tested on 1.4.0 -- json -- PIL -- skimage -- tqdm -- numpy -- cv2 - -for training and data generation -- [trimesh](https://trimsh.org/) with [pyembree](https://github.com/scopatz/pyembree) -- [pyexr](https://github.com/tvogels/pyexr) -- PyOpenGL -- freeglut (use `sudo apt-get install freeglut3-dev` for ubuntu users) -- (optional) egl related packages for rendering with headless machines. (use `apt install libgl1-mesa-dri libegl1-mesa libgbm1` for ubuntu users) - -Warning: I found that outdated NVIDIA drivers may cause errors with EGL. If you want to try out the EGL version, please update your NVIDIA driver to the latest!! 
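Since the README describes the pixel-aligned implicit function only in prose, below is a minimal sketch of the core sampling step. It assumes a feature map from some image encoder and already-projected query points; the function name, shapes, and the tiny MLP are illustrative only and do not correspond to this repository's modules.

```
# Illustrative sketch of pixel-aligned feature sampling (not this repo's API).
import torch
import torch.nn.functional as F

def pixel_aligned_features(feat, xy):
    # feat: [B, C, H, W] image features; xy: [B, N, 2] projected points in [-1, 1]
    grid = xy.unsqueeze(2)                               # [B, N, 1, 2]
    out = F.grid_sample(feat, grid, align_corners=True)  # [B, C, N, 1]
    return out.squeeze(-1)                               # [B, C, N]

B, C, N = 1, 256, 5000
feat = torch.randn(B, C, 128, 128)    # stand-in for the image encoder output
xy = torch.rand(B, N, 2) * 2 - 1      # 3D query points projected into the image
z = torch.rand(B, 1, N)               # per-point depth in camera space
query = torch.cat([pixel_aligned_features(feat, xy), z], dim=1)   # [B, C+1, N]

# A pointwise MLP (1x1 Conv1d) maps each sampled feature + depth to occupancy.
mlp = torch.nn.Sequential(
    torch.nn.Conv1d(C + 1, 128, 1), torch.nn.ReLU(), torch.nn.Conv1d(128, 1, 1))
occupancy = torch.sigmoid(mlp(query))  # inside/outside probability, [B, 1, N]
```

A mesh is then extracted by evaluating such occupancies on a dense 3D grid and running marching cubes.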
- -## Windows demo installation instruction - -- Install [miniconda](https://docs.conda.io/en/latest/miniconda.html) -- Add `conda` to PATH -- Install [git bash](https://git-scm.com/downloads) -- Launch `Git\bin\bash.exe` -- `eval "$(conda shell.bash hook)"` then `conda activate my_env` because of [this](https://github.com/conda/conda-build/issues/3371) -- Automatic `env create -f environment.yml` (see [this](https://github.com/conda/conda/issues/3417)) -- OR manually set up the [environment](https://towardsdatascience.com/a-guide-to-conda-environments-bc6180fc533) - - `conda create --name pifu python` where `pifu` is the name of your environment - - `conda activate` - - `conda install pytorch torchvision cudatoolkit=10.1 -c pytorch` - - `conda install pillow` - - `conda install scikit-image` - - `conda install tqdm` - - `conda install -c menpo opencv` -- Download [wget.exe](https://eternallybored.org/misc/wget/) -- Place it into `Git\mingw64\bin` -- `sh ./scripts/download_trained_model.sh` -- Remove the background from your image ([this](https://www.remove.bg/), for example) -- Create a black-and-white mask .png -- Replace the original in sample_images/ -- Try it out - `sh ./scripts/test.sh` -- Download [Meshlab](http://www.meshlab.net/) because of [this](https://github.com/shunsukesaito/PIFu/issues/1) -- Open the .obj file in Meshlab - - -## Demo -Warning: The released model is trained with mostly upright standing scans with weak perspective projection and a pitch angle of 0 degrees. Reconstruction quality may degrade for images that deviate strongly from the training data. -1. run the following script to download the pretrained models from the following link and copy them under `./PIFu/checkpoints/`. -``` -sh ./scripts/download_trained_model.sh -``` - -2. run the following script. The script creates a textured `.obj` file under `./PIFu/eval_results/`. You may need to use `./apps/crop_img.py` to roughly align an input image and the corresponding mask to the training data for better performance. For background removal, you can use any off-the-shelf tool such as [removebg](https://www.remove.bg/). -``` -sh ./scripts/test.sh -``` - -## Demo on Google Colab -If you do not have a setup to run PIFu, we offer a Google Colab version to give it a try, allowing you to run PIFu in the cloud, free of charge. Try our Colab demo using the following notebook: -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1GFSsqP2BWz4gtq0e-nki00ZHSirXwFyY) - -## Data Generation (Linux Only) -While we are unable to release the full training data due to the restrictions on commercial scans, we provide rendering code using the free models in [RenderPeople](https://renderpeople.com/free-3d-people/). -This tutorial uses the `rp_dennis_posed_004` model. Please download the model from [this link](https://renderpeople.com/sample/free/rp_dennis_posed_004_OBJ.zip) and unzip the content under a folder named `rp_dennis_posed_004_OBJ`. The same process can be applied to other RenderPeople data. - -Warning: the following code becomes extremely slow without [pyembree](https://github.com/scopatz/pyembree). Please make sure you install pyembree. - -1. run the following script to compute spherical harmonics coefficients for [precomputed radiance transfer (PRT)](https://sites.fas.harvard.edu/~cs278/papers/prt.pdf).
In a nutshell, PRT is used to account for accurate light transport including ambient occlusion without compromising online rendering time, which significantly improves the photorealism compared with [a common spherical harmonics rendering using surface normals](https://cseweb.ucsd.edu/~ravir/papers/envmap/envmap.pdf). This process has to be done once for each obj file. -``` -python -m apps.prt_util -i {path_to_rp_dennis_posed_004_OBJ} -``` - -2. run the following script. Under the specified data path, the code creates folders named `GEO`, `RENDER`, `MASK`, `PARAM`, `UV_RENDER`, `UV_MASK`, `UV_NORMAL`, and `UV_POS`. Note that you may need to list validation subjects to exclude from training in `{path_to_training_data}/val.txt` (this tutorial has only one subject, so leave it empty). If you wish to render images on headless servers equipped with an NVIDIA GPU, add -e to enable EGL rendering. -``` -python -m apps.render_data -i {path_to_rp_dennis_posed_004_OBJ} -o {path_to_training_data} [-e] -``` - -## Training (Linux Only) - -Warning: the following code becomes extremely slow without [pyembree](https://github.com/scopatz/pyembree). Please make sure you install pyembree. - -1. run the following script to train the shape module. The intermediate results and checkpoints are saved under `./results` and `./checkpoints` respectively. You can add `--batch_size` and `--num_sample_input` flags to adjust the batch size and the number of sampled points based on available GPU memory. -``` -python -m apps.train_shape --dataroot {path_to_training_data} --random_flip --random_scale --random_trans -``` - -2. run the following script to train the color module. -``` -python -m apps.train_color --dataroot {path_to_training_data} --num_sample_inout 0 --num_sample_color 5000 --sigma 0.1 --random_flip --random_scale --random_trans -``` - -## Related Research -**[Monocular Real-Time Volumetric Performance Capture (ECCV 2020)](https://project-splinter.github.io/)** -*Ruilong Li\*, Yuliang Xiu\*, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li* - -The first real-time PIFu, achieved by accelerating reconstruction and rendering!! - -**[PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization (CVPR 2020)](https://shunsukesaito.github.io/PIFuHD/)** -*Shunsuke Saito, Tomas Simon, Jason Saragih, Hanbyul Joo* - -We further improve the quality of reconstruction by leveraging a multi-level approach! - -**[ARCH: Animatable Reconstruction of Clothed Humans (CVPR 2020)](https://arxiv.org/pdf/2004.04572.pdf)** -*Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, Tony Tung* - -Learning PIFu in canonical space for animatable avatar generation! - -**[Robust 3D Self-portraits in Seconds (CVPR 2020)](http://www.liuyebin.com/portrait/portrait.html)** -*Zhe Li, Tao Yu, Chuanyu Pan, Zerong Zheng, Yebin Liu* - -They extend PIFu to RGBD and introduce "PIFusion", utilizing PIFu reconstruction for non-rigid fusion. - -**[Learning to Infer Implicit Surfaces without 3d Supervision (NeurIPS 2019)](http://papers.nips.cc/paper/9039-learning-to-infer-implicit-surfaces-without-3d-supervision.pdf)** -*Shichen Liu, Shunsuke Saito, Weikai Chen, Hao Li* - -We answer the question of "how can we learn an implicit function if we don't have 3D ground truth?"
- -**[SiCloPe: Silhouette-Based Clothed People (CVPR 2019, best paper finalist)](https://arxiv.org/pdf/1901.00049.pdf)** -*Ryota Natsume\*, Shunsuke Saito\*, Zeng Huang, Weikai Chen, Chongyang Ma, Hao Li, Shigeo Morishima* - -Our first attempt to reconstruct 3D clothed human body with texture from a single image! - -**[Deep Volumetric Video from Very Sparse Multi-view Performance Capture (ECCV 2018)](http://openaccess.thecvf.com/content_ECCV_2018/papers/Zeng_Huang_Deep_Volumetric_Video_ECCV_2018_paper.pdf)** -*Zeng Huang, Tianye Li, Weikai Chen, Yajie Zhao, Jun Xing, Chloe LeGendre, Linjie Luo, Chongyang Ma, Hao Li* - -Implict surface learning for sparse view human performance capture! - ------- - - - -For commercial queries, please contact: - -Hao Li: hao@hao-li.com ccto: saitos@usc.edu Baker!! diff --git a/spaces/eubinecto/idiomify/main_train.py b/spaces/eubinecto/idiomify/main_train.py deleted file mode 100644 index ef24b27005bcdade3cc93d8c77adfe46ff04f843..0000000000000000000000000000000000000000 --- a/spaces/eubinecto/idiomify/main_train.py +++ /dev/null @@ -1,56 +0,0 @@ -import os -import torch.cuda -import wandb -import argparse -import pytorch_lightning as pl -from termcolor import colored -from pytorch_lightning.loggers import WandbLogger -from transformers import BartForConditionalGeneration -from idiomify.datamodules import IdiomifyDataModule -from idiomify.fetchers import fetch_config, fetch_tokenizer -from idiomify.models import Idiomifier -from idiomify.paths import ROOT_DIR - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--num_workers", type=int, default=os.cpu_count()) - parser.add_argument("--log_every_n_steps", type=int, default=1) - parser.add_argument("--fast_dev_run", action="store_true", default=False) - parser.add_argument("--upload", dest='upload', action='store_true', default=False) - args = parser.parse_args() - config = fetch_config()['idiomifier'] - config.update(vars(args)) - if not config['upload']: - print(colored("WARNING: YOU CHOSE NOT TO UPLOAD. 
NOTHING BUT LOGS WILL BE SAVED TO WANDB", color="red")) - # prepare a pre-trained BART - bart = BartForConditionalGeneration.from_pretrained(config['bart']) - # prepare the datamodule - with wandb.init(entity="eubinecto", project="idiomify", config=config) as run: - tokenizer = fetch_tokenizer(config['tokenizer_ver'], run) - bart.resize_token_embeddings(len(tokenizer)) # because new tokens are added, this process is necessary - model = Idiomifier(bart, config['lr'], tokenizer.bos_token_id, tokenizer.pad_token_id) - datamodule = IdiomifyDataModule(config, tokenizer, run) - logger = WandbLogger(log_model=False) - trainer = pl.Trainer(max_epochs=config['max_epochs'], - fast_dev_run=config['fast_dev_run'], - log_every_n_steps=config['log_every_n_steps'], - gpus=torch.cuda.device_count(), - default_root_dir=str(ROOT_DIR), - enable_checkpointing=False, - logger=logger) - # start training - trainer.fit(model=model, datamodule=datamodule) - # upload the model to wandb only if the training is properly done # - if not config['fast_dev_run'] and trainer.current_epoch == config['max_epochs'] - 1: - ckpt_path = ROOT_DIR / "model.ckpt" - trainer.save_checkpoint(str(ckpt_path)) - config['vocab_size'] = len(tokenizer) # this will be needed to fetch a pretrained idiomifier later - artifact = wandb.Artifact(name="idiomifier", type="model", metadata=config) - artifact.add_file(str(ckpt_path)) - run.log_artifact(artifact, aliases=["latest", config['ver']]) - os.remove(str(ckpt_path)) # make sure you remove it after you are done with uploading it - - -if __name__ == '__main__': - main() diff --git a/spaces/evoss/NLP_text_analyzer/app.py b/spaces/evoss/NLP_text_analyzer/app.py deleted file mode 100644 index df2b45136606645bbe2253c96fd2abe160d927d8..0000000000000000000000000000000000000000 --- a/spaces/evoss/NLP_text_analyzer/app.py +++ /dev/null @@ -1,29 +0,0 @@ -import gradio as gr - -import spacy -nlp = spacy.load('en_core_web_sm') - - -def count_verbs(doc): - verbs = 0 - for token in doc: - if token.pos_ == "VERB": - verbs += 1 - return verbs - -def greet(sent): - doc = nlp(sent) - nouns = 0 - for token in doc: - if token.pos_ == "NOUN": - nouns += 1 - verbs = count_verbs(doc) - length = len(sent.split()) - return (f"Your sentence has {length} word(s).\n Your sentence has {nouns} noun(s).\n Your sentence has {verbs} verb(s).") - - - -iface = gr.Interface(fn=greet, inputs="text", outputs="text", examples = [ - ["The Moon's orbit around Earth takes a long time."], - ["The smooth Borealis basin in the Northern Hemisphere covers 40%."]]) -iface.launch(debug=True) \ No newline at end of file diff --git a/spaces/facebook/StyleNeRF/training/networks.py b/spaces/facebook/StyleNeRF/training/networks.py deleted file mode 100644 index f762ea468f7825d2f207fc6edb293088815f689b..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/training/networks.py +++ /dev/null @@ -1,1563 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- 
-import numpy as np
-import math
-import scipy.signal
-import scipy.optimize
-import scipy.special            # needed by SynthesisLayer3.design_lowpass_filter below
-
-import torch
-import torch.nn as nn           # used by the VolumeGAN layers at the end of this file
-import torch.nn.functional as F
-from einops import repeat
-from dnnlib import camera, util, geometry
-from torch_utils import misc
-from torch_utils import persistence
-from torch_utils.ops import conv2d_resample
-from torch_utils.ops import upfirdn2d
-from torch_utils.ops import bias_act
-from torch_utils.ops import fma
-from torch_utils.ops import filtered_lrelu
-
-#----------------------------------------------------------------------------
-
-@misc.profiled_function
-def normalize_2nd_moment(x, dim=1, eps=1e-8):
-    return x * (x.square().mean(dim=dim, keepdim=True) + eps).rsqrt()
-
-
-@misc.profiled_function
-def conv3d(x, w, up=1, down=1, padding=0, groups=1):
-    if up > 1:
-        x = F.interpolate(x, scale_factor=up, mode='trilinear', align_corners=True)
-    x = F.conv3d(x, w, padding=padding, groups=groups)
-    if down > 1:
-        x = F.interpolate(x, scale_factor=1./float(down), mode='trilinear', align_corners=True)
-    return x
-
-#----------------------------------------------------------------------------
-
-@misc.profiled_function
-def modulated_conv2d(
-    x,                          # Input tensor of shape [batch_size, in_channels, in_height, in_width].
-    weight,                     # Weight tensor of shape [out_channels, in_channels, kernel_height, kernel_width].
-    styles,                     # Modulation coefficients of shape [batch_size, in_channels].
-    noise           = None,     # Optional noise tensor to add to the output activations.
-    up              = 1,        # Integer upsampling factor.
-    down            = 1,        # Integer downsampling factor.
-    padding         = 0,        # Padding with respect to the upsampled image.
-    resample_filter = None,     # Low-pass filter to apply when resampling activations. Must be prepared beforehand by calling upfirdn2d.setup_filter().
-    demodulate      = True,     # Apply weight demodulation?
-    flip_weight     = True,     # False = convolution, True = correlation (matches torch.nn.functional.conv2d).
-    fused_modconv   = True,     # Perform modulation, convolution, and demodulation as a single fused operation?
-    mode            = '2d',     # modulated 2d/3d conv or MLP
-    **unused,
-):
-    batch_size = x.shape[0]
-    if mode == '3d':
-        _, in_channels, kd, kh, kw = weight.shape
-    else:
-        _, in_channels, kh, kw = weight.shape
-
-    # Pre-normalize inputs to avoid FP16 overflow.
-    if x.dtype == torch.float16 and demodulate:
-        weight_sizes = in_channels * kh * kw if mode != '3d' else in_channels * kd * kh * kw
-        weight = weight * (1 / np.sqrt(weight_sizes) / weight.norm(float('inf'), dim=[1,2,3], keepdim=True)) # max_Ikk
-        styles = styles / styles.norm(float('inf'), dim=1, keepdim=True) # max_I
-
-    # Calculate per-sample weights and demodulation coefficients.
-    w = None
-    dcoefs = None
-    if mode != '3d':
-        rsizes, ssizes = [-1, 1, 1], [2, 3, 4]
-    else:
-        rsizes, ssizes = [-1, 1, 1, 1], [2, 3, 4, 5]
-
-    if demodulate or fused_modconv:  # if not fused, skip
-        w = weight.unsqueeze(0) * styles.reshape(batch_size, 1, *rsizes)
-        if demodulate:
-            dcoefs = (w.square().sum(dim=ssizes) + 1e-8).rsqrt() # [NO]
-
-    if demodulate and fused_modconv:
-        w = w * dcoefs.reshape(batch_size, *rsizes, 1) # [NOIkk] (batch_size, out_channels, in_channels, kernel_size, kernel_size)
-
-    # Execute by scaling the activations before and after the convolution.
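-    # NOTE: two equivalent execution paths follow. The non-fused path below scales the
-    # activations by the styles before a shared-weight convolution and rescales with the
-    # demodulation coefficients afterwards; the fused path bakes the styles (and
-    # demodulation) into per-sample weights and runs a single grouped convolution,
-    # e.g. for x: [N, I, H, W] and weight: [O, I, kh, kw] it reshapes x to
-    # [1, N*I, H, W] and w to [N*O, I, kh, kw] with groups=N.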
- if not fused_modconv: - x = x * styles.to(x.dtype).reshape(batch_size, *rsizes) - if mode == '2d': - x = conv2d_resample.conv2d_resample(x=x, w=weight.to(x.dtype), f=resample_filter, up=up, down=down, padding=padding, flip_weight=flip_weight) - elif mode == '3d': - x = conv3d(x=x, w=weight.to(x.dtype), up=up, down=down, padding=padding) - else: - raise NotImplementedError - - if demodulate and noise is not None: - x = fma.fma(x, dcoefs.to(x.dtype).reshape(batch_size, *rsizes), noise.to(x.dtype)) # fused multiply add - elif demodulate: - x = x * dcoefs.to(x.dtype).reshape(batch_size, *rsizes) - elif noise is not None: - x = x.add_(noise.to(x.dtype)) - return x - - # Execute as one fused op using grouped convolution. - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - batch_size = int(batch_size) - - x = x.reshape(1, -1, *x.shape[2:]) - w = w.reshape(-1, *w.shape[2:]) - if mode == '2d': - x = conv2d_resample.conv2d_resample(x=x, w=w.to(x.dtype), f=resample_filter, up=up, down=down, padding=padding, groups=batch_size, flip_weight=flip_weight) - elif mode == '3d': - x = conv3d(x=x, w=w.to(x.dtype), up=up, down=down, padding=padding, groups=batch_size) - x = x.reshape(batch_size, -1, *x.shape[2:]) - - if noise is not None: - x = x.add_(noise) - return x - - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class FullyConnectedLayer(torch.nn.Module): - def __init__(self, - in_features, # Number of input features. - out_features, # Number of output features. - bias = True, # Apply additive bias before the activation function? - activation = 'linear', # Activation function: 'relu', 'lrelu', etc. - lr_multiplier = 1, # Learning rate multiplier. - bias_init = 0, # Initial value for the additive bias. - ): - super().__init__() - self.activation = activation - self.weight = torch.nn.Parameter(torch.randn([out_features, in_features]) / lr_multiplier) - self.bias = torch.nn.Parameter(torch.full([out_features], np.float32(bias_init))) if bias else None - self.weight_gain = lr_multiplier / np.sqrt(in_features) - self.bias_gain = lr_multiplier - - def forward(self, x): - w = self.weight.to(x.dtype) * self.weight_gain - b = self.bias - if b is not None: - b = b.to(x.dtype) - if self.bias_gain != 1: - b = b * self.bias_gain - - if self.activation == 'linear' and b is not None: - x = torch.addmm(b.unsqueeze(0), x, w.t()) - else: - x = x.matmul(w.t()) - x = bias_act.bias_act(x, b, act=self.activation) - return x - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class Conv2dLayer(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - out_channels, # Number of output channels. - kernel_size, # Width and height of the convolution kernel. - bias = True, # Apply additive bias before the activation function? - activation = 'linear', # Activation function: 'relu', 'lrelu', etc. - up = 1, # Integer upsampling factor. - down = 1, # Integer downsampling factor. - resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp = None, # Clamp the output to +-X, None = disable clamping. - channels_last = False, # Expect the input to have memory_format=channels_last? - trainable = True, # Update the weights of this layer during training? 
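-        # (when trainable=False, weight and bias are registered as buffers below,
-        # so they are saved with the module but never updated by the optimizer)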
- mode = '2d', - **unused - ): - super().__init__() - self.activation = activation - self.up = up - self.down = down - self.conv_clamp = conv_clamp - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.padding = kernel_size // 2 - self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2)) - self.act_gain = bias_act.activation_funcs[activation].def_gain - self.mode = mode - weight_shape = [out_channels, in_channels, kernel_size, kernel_size] - if mode == '3d': - weight_shape += [kernel_size] - - memory_format = torch.channels_last if channels_last else torch.contiguous_format - weight = torch.randn(weight_shape).to(memory_format=memory_format) - bias = torch.zeros([out_channels]) if bias else None - if trainable: - self.weight = torch.nn.Parameter(weight) - self.bias = torch.nn.Parameter(bias) if bias is not None else None - else: - self.register_buffer('weight', weight) - if bias is not None: - self.register_buffer('bias', bias) - else: - self.bias = None - - def forward(self, x, gain=1): - w = self.weight * self.weight_gain - b = self.bias.to(x.dtype) if self.bias is not None else None - flip_weight = (self.up == 1) # slightly faster - - if self.mode == '2d': - x = conv2d_resample.conv2d_resample(x=x, w=w.to(x.dtype), f=self.resample_filter, up=self.up, down=self.down, padding=self.padding, flip_weight=flip_weight) - elif self.mode == '3d': - x = conv3d(x=x, w=w.to(x.dtype), up=self.up, down=self.down, padding=self.padding) - - act_gain = self.act_gain * gain - act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None - x = bias_act.bias_act(x, b, act=self.activation, gain=act_gain, clamp=act_clamp) - return x - -# --------------------------------------------------------------------------- - -@persistence.persistent_class -class Blur(torch.nn.Module): - def __init__(self): - super().__init__() - f = torch.Tensor([1, 2, 1]) - self.register_buffer('f', f) - - def forward(self, x): - from kornia.filters import filter2d - f = self.f - f = f[None, None, :] * f [None, :, None] - return filter2d(x, f, normalized=True) - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class MappingNetwork(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality, 0 = no latent. - c_dim, # Conditioning label (C) dimensionality, 0 = no label. - w_dim, # Intermediate latent (W) dimensionality. - num_ws, # Number of intermediate latents to output, None = do not broadcast. - num_layers = 8, # Number of mapping layers. - embed_features = None, # Label embedding dimensionality, None = same as w_dim. - layer_features = None, # Number of intermediate features in the mapping layers, None = same as w_dim. - activation = 'lrelu', # Activation function: 'relu', 'lrelu', etc. - lr_multiplier = 0.01, # Learning rate multiplier for the mapping layers. - w_avg_beta = 0.995, # Decay for tracking the moving average of W during training, None = do not track. 
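-        # (`w_avg` is the moving average that the truncation trick in forward()
-        # interpolates towards: w' = w_avg.lerp(w, truncation_psi))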
-        **unused,
-    ):
-        super().__init__()
-        self.z_dim = z_dim
-        self.c_dim = c_dim
-        self.w_dim = w_dim
-        self.num_ws = num_ws
-        self.num_layers = num_layers
-        self.w_avg_beta = w_avg_beta
-
-        if embed_features is None:
-            embed_features = w_dim
-        if c_dim == 0:
-            embed_features = 0
-        if layer_features is None:
-            layer_features = w_dim
-        features_list = [z_dim + embed_features] + [layer_features] * (num_layers - 1) + [w_dim]
-
-        if c_dim > 0:  # project label condition
-            self.embed = FullyConnectedLayer(c_dim, embed_features)
-        for idx in range(num_layers):
-            in_features = features_list[idx]
-            out_features = features_list[idx + 1]
-            layer = FullyConnectedLayer(in_features, out_features, activation=activation, lr_multiplier=lr_multiplier)
-            setattr(self, f'fc{idx}', layer)
-
-        if num_ws is not None and w_avg_beta is not None:
-            self.register_buffer('w_avg', torch.zeros([w_dim]))
-
-    def forward(self, z=None, c=None, truncation_psi=1, truncation_cutoff=None, skip_w_avg_update=False, styles=None, **unused_kwargs):
-        if styles is not None:
-            return styles
-
-        # Embed, normalize, and concat inputs.
-        x = None
-        with torch.autograd.profiler.record_function('input'):
-            if self.z_dim > 0:
-                misc.assert_shape(z, [None, self.z_dim])
-                x = normalize_2nd_moment(z.to(torch.float32))  # normalize z to the sphere
-            if self.c_dim > 0:
-                misc.assert_shape(c, [None, self.c_dim])
-                y = normalize_2nd_moment(self.embed(c.to(torch.float32)))
-                x = torch.cat([x, y], dim=1) if x is not None else y
-
-        # Main layers.
-        for idx in range(self.num_layers):
-            layer = getattr(self, f'fc{idx}')
-            x = layer(x)
-
-        # Update moving average of W.
-        if self.w_avg_beta is not None and self.training and not skip_w_avg_update:
-            with torch.autograd.profiler.record_function('update_w_avg'):
-                self.w_avg.copy_(x.detach().mean(dim=0).lerp(self.w_avg, self.w_avg_beta))
-
-        # Broadcast.
-        if self.num_ws is not None:
-            with torch.autograd.profiler.record_function('broadcast'):
-                x = x.unsqueeze(1).repeat([1, self.num_ws, 1])
-
-        # Apply truncation.
-        if truncation_psi != 1:
-            with torch.autograd.profiler.record_function('truncate'):
-                assert self.w_avg_beta is not None
-                if self.num_ws is None or truncation_cutoff is None:
-                    x = self.w_avg.lerp(x, truncation_psi)
-                else:
-                    x[:, :truncation_cutoff] = self.w_avg.lerp(x[:, :truncation_cutoff], truncation_psi)
-        return x
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class SynthesisLayer(torch.nn.Module):
-    def __init__(self,
-        in_channels,                    # Number of input channels.
-        out_channels,                   # Number of output channels.
-        w_dim,                          # Intermediate latent (W) dimensionality.
-        resolution,                     # Resolution of this layer.
-        kernel_size     = 3,            # Convolution kernel size.
-        up              = 1,            # Integer upsampling factor.
-        use_noise       = True,         # Enable noise input?
-        activation      = 'lrelu',      # Activation function: 'relu', 'lrelu', etc.
-        resample_filter = [1,3,3,1],    # Low-pass filter to apply when resampling activations.
-        conv_clamp      = None,         # Clamp the output of convolution layers to +-X, None = disable clamping.
-        channels_last   = False,        # Use channels_last format for the weights?
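-        # (channels_last is typically enabled together with FP16, so the
-        # convolutions can use NHWC tensor-core kernels)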
-        upsample_mode   = 'default',    # [default, bilinear, ray_comm, ray_attn, ray_penc]
-        use_group       = False,
-        magnitude_ema_beta = -1,        # -1 means not using magnitude ema
-        mode            = '2d',         # choose from 1d, 2d or 3d
-        **unused_kwargs
-    ):
-        super().__init__()
-        self.resolution = resolution
-        self.up = up
-        self.use_noise = use_noise
-        self.activation = activation
-        self.conv_clamp = conv_clamp
-        self.upsample_mode = upsample_mode
-        self.mode = mode
-
-        self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter))
-        if up == 2:
-            if 'pixelshuffle' in upsample_mode:
-                self.adapter = torch.nn.Sequential(
-                    Conv2dLayer(out_channels, out_channels // 4, kernel_size=1, activation=activation),
-                    Conv2dLayer(out_channels // 4, out_channels * 4, kernel_size=1, activation='linear'),
-                )
-            elif upsample_mode == 'liif':
-                from dnnlib.geometry import get_grids, local_ensemble
-                pi = get_grids(self.resolution//2, self.resolution//2, 'cpu', align=False).transpose(0,1)
-                po = get_grids(self.resolution, self.resolution, 'cpu', align=False).transpose(0,1)
-                diffs, coords, coeffs = local_ensemble(pi, po, self.resolution)
-
-                self.diffs = torch.nn.Parameter(diffs, requires_grad=False)
-                self.coords = torch.nn.Parameter(coords.float(), requires_grad=False)
-                self.coeffs = torch.nn.Parameter(coeffs, requires_grad=False)
-                add_dim = 2
-                self.adapter = torch.nn.Sequential(
-                    Conv2dLayer(out_channels + add_dim, out_channels // 2, kernel_size=1, activation=activation),
-                    Conv2dLayer(out_channels // 2, out_channels, kernel_size=1, activation='linear'),
-                )
-            elif 'nn_cat' in upsample_mode:
-                self.adapter = torch.nn.Sequential(
-                    Conv2dLayer(out_channels * 2, out_channels // 4, kernel_size=1, activation=activation),
-                    Conv2dLayer(out_channels // 4, out_channels, kernel_size=1, activation='linear'),
-                )
-            elif 'ada' in upsample_mode:
-                self.adapter = torch.nn.Sequential(
-                    Conv2dLayer(out_channels, 8, kernel_size=1, activation=activation),
-                    Conv2dLayer(8, out_channels, kernel_size=1, activation='linear')
-                )
-                self.adapter[1].weight.data.zero_()
-            if 'blur' in upsample_mode:
-                self.blur = Blur()
-
-        self.padding = kernel_size // 2
-        self.groups = 2 if use_group else 1
-        self.act_gain = bias_act.activation_funcs[activation].def_gain
-        self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1)
-
-        memory_format = torch.channels_last if channels_last else torch.contiguous_format
-        weight_sizes = [out_channels // self.groups, in_channels, kernel_size, kernel_size]
-        if self.mode == '3d':
-            weight_sizes += [kernel_size]
-        weight = torch.randn(weight_sizes).to(memory_format=memory_format)
-        self.weight = torch.nn.Parameter(weight)
-
-        if use_noise:
-            if self.mode == '2d':
-                noise_sizes = [resolution, resolution]
-            elif self.mode == '3d':
-                noise_sizes = [resolution, resolution, resolution]
-            else:
-                raise NotImplementedError('not supported for MLP')
-            self.register_buffer('noise_const', torch.randn(noise_sizes))  # HACK: for safety reasons
-            self.noise_strength = torch.nn.Parameter(torch.zeros([]))
-        self.bias = torch.nn.Parameter(torch.zeros([out_channels]))
-
-        self.magnitude_ema_beta = magnitude_ema_beta
-        if magnitude_ema_beta > 0:
-            self.register_buffer('w_avg', torch.ones([]))  # TODO: name kept as `w_avg` for compatibility
-
-    def forward(self, x, w, noise_mode='random', fused_modconv=True, gain=1, skip_up=False, input_noise=None, **unused_kwargs):
-        assert noise_mode in ['random', 'const', 'none']
-        batch_size = x.size(0)
-
-        if (self.magnitude_ema_beta > 0):
-            if self.training:  # updating EMA.
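-                # Track a running estimate of the mean squared input magnitude and
-                # normalize by its inverse square root (as in StyleGAN3), keeping
-                # activation scales roughly constant over training.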
- with torch.autograd.profiler.record_function('update_magnitude_ema'): - magnitude_cur = x.detach().to(torch.float32).square().mean() - self.w_avg.copy_(magnitude_cur.lerp(self.w_avg, self.magnitude_ema_beta)) - input_gain = self.w_avg.rsqrt() - x = x * input_gain - - styles = self.affine(w) # Batch x style_dim - if styles.size(0) < x.size(0): # for repeating - assert (x.size(0) // styles.size(0) * styles.size(0) == x.size(0)) - styles = repeat(styles, 'b c -> (b s) c', s=x.size(0) // styles.size(0)) - up = self.up if not skip_up else 1 - use_default = (self.upsample_mode == 'default') - noise = None - resample_filter = None - if use_default and (up > 1): - resample_filter = self.resample_filter - - if self.use_noise: - if input_noise is not None: - noise = input_noise * self.noise_strength - elif noise_mode == 'random': - noise_sizes = [x.shape[0], 1, up * x.shape[2], up * x.shape[3]] - if self.mode == '3d': - noise_sizes += [up * x.shape[4]] - noise = torch.randn(noise_sizes, device=x.device) * self.noise_strength - elif noise_mode == 'const': - noise = self.noise_const * self.noise_strength - if noise.shape[-1] < (up * x.shape[3]): - noise = repeat(noise, 'h w -> h (s w)', s=up*x.shape[3]//noise.shape[-1]) - - flip_weight = (up == 1) # slightly faster - x = modulated_conv2d( - x=x, weight=self.weight, styles=styles, - noise=noise if (use_default and not skip_up) else None, - up=up if use_default else 1, - padding=self.padding, - resample_filter=resample_filter, - flip_weight=flip_weight, - fused_modconv=fused_modconv, - groups=self.groups, - mode=self.mode - ) - - if (up == 2) and (not use_default): - resolution = x.size(-1) * 2 - if 'bilinear' in self.upsample_mode: - x = F.interpolate(x, size=(resolution, resolution), mode='bilinear', align_corners=True) - elif 'nearest' in self.upsample_mode: - x = F.interpolate(x, size=(resolution, resolution), mode='nearest') - x = upfirdn2d.filter2d(x, self.resample_filter) - elif 'bicubic' in self.upsample_mode: - x = F.interpolate(x, size=(resolution, resolution), mode='bicubic', align_corners=True) - elif 'pixelshuffle' in self.upsample_mode: # does not have rotation invariance - x = F.interpolate(x, size=(resolution, resolution), mode='nearest') + torch.pixel_shuffle(self.adapter(x), 2) - if not 'noblur' in self.upsample_mode: - x = upfirdn2d.filter2d(x, self.resample_filter) - elif 'nn_cat' in self.upsample_mode: - x_pad = x.new_zeros(*x.size()[:2], x.size(-2)+2, x.size(-1)+2) - x_pad[...,1:-1,1:-1] = x - xl, xu, xd, xr = x_pad[..., 1:-1, :-2], x_pad[..., :-2, 1:-1], x_pad[..., 2:, 1:-1], x_pad[..., 1:-1, 2:] - x1, x2, x3, x4 = xl + xu, xu + xr, xl + xd, xr + xd - xb = torch.stack([x1, x2, x3, x4], 2) / 2 - xb = torch.pixel_shuffle(xb.view(xb.size(0), -1, xb.size(-2), xb.size(-1)), 2) - xa = F.interpolate(x, size=(resolution, resolution), mode='nearest') - x = xa + self.adapter(torch.cat([xa, xb], 1)) - if not 'noblur' in self.upsample_mode: - x = upfirdn2d.filter2d(x, self.resample_filter) - elif self.upsample_mode == 'liif': # this is an old version - x = torch.stack([x[..., self.coords[j,:,:,0].long(), self.coords[j,:,:,1].long()] for j in range(4)], 0) - d = self.diffs[:, None].type_as(x).repeat(1,batch_size,1,1,1).permute(0,1,4,2,3) - x = self.adapter(torch.cat([x, d.type_as(x)], 2).reshape(batch_size*4,-1,*x.size()[-2:])) - x = (x.reshape(4,batch_size,*x.size()[-3:]) * self.coeffs[:,None,None].type_as(x)).sum(0) - else: - raise NotImplementedError - - if up == 2: - if 'ada' in self.upsample_mode: - x = x + self.adapter(x) - if 'blur' 
in self.upsample_mode:
-                x = self.blur(x)
-
-        if (noise is not None) and (not use_default) and (not skip_up):
-            x = x.add_(noise.type_as(x))
-
-        act_gain = self.act_gain * gain
-        act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None
-        x = bias_act.bias_act(x, self.bias.to(x.dtype), act=self.activation, gain=act_gain, clamp=act_clamp)
-
-        return x
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class SynthesisLayer3(torch.nn.Module):
-    """copy from the stylegan3 codebase with minor changes"""
-    def __init__(self,
-        w_dim,                          # Intermediate latent (W) dimensionality.
-        is_torgb,                       # Is this the final ToRGB layer?
-        is_critically_sampled,          # Does this layer use critical sampling?
-        use_fp16,                       # Does this layer use FP16?
-
-        # Input & output specifications.
-        in_channels,                    # Number of input channels.
-        out_channels,                   # Number of output channels.
-        in_size,                        # Input spatial size: int or [width, height].
-        out_size,                       # Output spatial size: int or [width, height].
-        in_sampling_rate,               # Input sampling rate (s).
-        out_sampling_rate,              # Output sampling rate (s).
-        in_cutoff,                      # Input cutoff frequency (f_c).
-        out_cutoff,                     # Output cutoff frequency (f_c).
-        in_half_width,                  # Input transition band half-width (f_h).
-        out_half_width,                 # Output transition band half-width (f_h).
-
-        # Hyperparameters.
-        kernel_size         = 3,        # Convolution kernel size. Ignored for the final ToRGB layer.
-        filter_size         = 6,        # Low-pass filter size relative to the lower resolution when up/downsampling.
-        lrelu_upsampling    = 2,        # Relative sampling rate for leaky ReLU. Ignored for the final ToRGB layer.
-        use_radial_filters  = False,    # Use radially symmetric downsampling filter? Ignored for critically sampled layers.
-        conv_clamp          = 256,      # Clamp the output to [-X, +X], None = disable clamping.
-        magnitude_ema_beta  = 0.999,    # Decay rate for the moving average of input magnitudes.
-
-        **unused_kwargs,
-    ):
-        super().__init__()
-        self.w_dim = w_dim
-        self.is_torgb = is_torgb
-        self.is_critically_sampled = is_critically_sampled
-        self.use_fp16 = use_fp16
-        self.in_channels = in_channels
-        self.out_channels = out_channels
-        self.in_size = np.broadcast_to(np.asarray(in_size), [2])
-        self.out_size = np.broadcast_to(np.asarray(out_size), [2])
-        self.in_sampling_rate = in_sampling_rate
-        self.out_sampling_rate = out_sampling_rate
-        self.tmp_sampling_rate = max(in_sampling_rate, out_sampling_rate) * (1 if is_torgb else lrelu_upsampling)
-        self.in_cutoff = in_cutoff
-        self.out_cutoff = out_cutoff
-        self.in_half_width = in_half_width
-        self.out_half_width = out_half_width
-        self.conv_kernel = 1 if is_torgb else kernel_size
-        self.conv_clamp = conv_clamp
-        self.magnitude_ema_beta = magnitude_ema_beta
-
-        # Setup parameters and buffers.
-        self.affine = FullyConnectedLayer(self.w_dim, self.in_channels, bias_init=1)
-        self.weight = torch.nn.Parameter(torch.randn([self.out_channels, self.in_channels, self.conv_kernel, self.conv_kernel]))
-        self.bias = torch.nn.Parameter(torch.zeros([self.out_channels]))
-        if magnitude_ema_beta > 0:
-            self.register_buffer('w_avg', torch.ones([]))
-
-        # Design upsampling filter.
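-        # The resampling filters are windowed-sinc low-pass filters (Kaiser window)
-        # whose cutoff and transition band are chosen per layer, following the
-        # StyleGAN3 anti-aliasing recipe; see design_lowpass_filter() below.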
- self.up_factor = int(np.rint(self.tmp_sampling_rate / self.in_sampling_rate)) - assert self.in_sampling_rate * self.up_factor == self.tmp_sampling_rate - self.up_taps = filter_size * self.up_factor if self.up_factor > 1 and not self.is_torgb else 1 - self.register_buffer('up_filter', self.design_lowpass_filter( - numtaps=self.up_taps, cutoff=self.in_cutoff, width=self.in_half_width*2, fs=self.tmp_sampling_rate)) - - # Design downsampling filter. - self.down_factor = int(np.rint(self.tmp_sampling_rate / self.out_sampling_rate)) - assert self.out_sampling_rate * self.down_factor == self.tmp_sampling_rate - self.down_taps = filter_size * self.down_factor if self.down_factor > 1 and not self.is_torgb else 1 - self.down_radial = use_radial_filters and not self.is_critically_sampled - self.register_buffer('down_filter', self.design_lowpass_filter( - numtaps=self.down_taps, cutoff=self.out_cutoff, width=self.out_half_width*2, fs=self.tmp_sampling_rate, radial=self.down_radial)) - - # Compute padding. - pad_total = (self.out_size - 1) * self.down_factor + 1 # Desired output size before downsampling. - pad_total -= (self.in_size + self.conv_kernel - 1) * self.up_factor # Input size after upsampling. - pad_total += self.up_taps + self.down_taps - 2 # Size reduction caused by the filters. - pad_lo = (pad_total + self.up_factor) // 2 # Shift sample locations according to the symmetric interpretation (Appendix C.3). - pad_hi = pad_total - pad_lo - self.padding = [int(pad_lo[0]), int(pad_hi[0]), int(pad_lo[1]), int(pad_hi[1])] - - def forward(self, x, w, noise_mode='random', force_fp32=False, **unused_kwargs): - assert noise_mode in ['random', 'const', 'none'] # unused - misc.assert_shape(x, [None, self.in_channels, int(self.in_size[1]), int(self.in_size[0])]) - misc.assert_shape(w, [x.shape[0], self.w_dim]) - - # Track input magnitude. - if (self.magnitude_ema_beta > 0): - if self.training: # updating EMA. - with torch.autograd.profiler.record_function('update_magnitude_ema'): - magnitude_cur = x.detach().to(torch.float32).square().mean() - self.w_avg.copy_(magnitude_cur.lerp(self.w_avg, self.magnitude_ema_beta)) - input_gain = self.w_avg.rsqrt() - x = x * input_gain - - # Execute affine layer. - styles = self.affine(w) - if self.is_torgb: - weight_gain = 1 / np.sqrt(self.in_channels * (self.conv_kernel ** 2)) - styles = styles * weight_gain - - # Execute modulated conv2d. - dtype = torch.float16 if (self.use_fp16 and not force_fp32 and x.device.type == 'cuda') else torch.float32 - x = modulated_conv2d(x=x.to(dtype), weight=self.weight, styles=styles, padding=self.conv_kernel-1, up=1, fused_modconv=True) - - # Execute bias, filtered leaky ReLU, and clamping. - gain = 1 if self.is_torgb else np.sqrt(2) - slope = 1 if self.is_torgb else 0.2 - x = filtered_lrelu.filtered_lrelu(x=x, fu=self.up_filter, fd=self.down_filter, b=self.bias.to(x.dtype), - up=self.up_factor, down=self.down_factor, padding=self.padding, gain=gain, slope=slope, clamp=self.conv_clamp) - - # Ensure correct shape and dtype. - misc.assert_shape(x, [None, self.out_channels, int(self.out_size[1]), int(self.out_size[0])]) - assert x.dtype == dtype - return x - - @staticmethod - def design_lowpass_filter(numtaps, cutoff, width, fs, radial=False): - assert numtaps >= 1 - - # Identity filter. - if numtaps == 1: - return None - - # Separable Kaiser low-pass filter. 
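-        # (scipy.signal.firwin returns `numtaps` FIR coefficients; e.g., hypothetically,
-        #  firwin(numtaps=12, cutoff=2.0, width=4.0, fs=32) is a 12-tap low-pass filter
-        #  with its cutoff at 2 cycles for a sampling rate of 32)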
- if not radial: - f = scipy.signal.firwin(numtaps=numtaps, cutoff=cutoff, width=width, fs=fs) - return torch.as_tensor(f, dtype=torch.float32) - - # Radially symmetric jinc-based filter. - x = (np.arange(numtaps) - (numtaps - 1) / 2) / fs - r = np.hypot(*np.meshgrid(x, x)) - f = scipy.special.j1(2 * cutoff * (np.pi * r)) / (np.pi * r) - beta = scipy.signal.kaiser_beta(scipy.signal.kaiser_atten(numtaps, width / (fs / 2))) - w = np.kaiser(numtaps, beta) - f *= np.outer(w, w) - f /= np.sum(f) - return torch.as_tensor(f, dtype=torch.float32) - - def extra_repr(self): - return '\n'.join([ - f'w_dim={self.w_dim:d}, is_torgb={self.is_torgb},', - f'is_critically_sampled={self.is_critically_sampled}, use_fp16={self.use_fp16},', - f'in_sampling_rate={self.in_sampling_rate:g}, out_sampling_rate={self.out_sampling_rate:g},', - f'in_cutoff={self.in_cutoff:g}, out_cutoff={self.out_cutoff:g},', - f'in_half_width={self.in_half_width:g}, out_half_width={self.out_half_width:g},', - f'in_size={list(self.in_size)}, out_size={list(self.out_size)},', - f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}']) - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class ToRGBLayer(torch.nn.Module): - def __init__(self, in_channels, out_channels, w_dim=0, kernel_size=1, conv_clamp=None, channels_last=False, mode='2d', **unused): - super().__init__() - self.conv_clamp = conv_clamp - self.mode = mode - weight_shape = [out_channels, in_channels, kernel_size, kernel_size] - if mode == '3d': - weight_shape += [kernel_size] - - if w_dim > 0: - self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1) - memory_format = torch.channels_last if channels_last else torch.contiguous_format - self.weight = torch.nn.Parameter(torch.randn(weight_shape).to(memory_format=memory_format)) - self.bias = torch.nn.Parameter(torch.zeros([out_channels])) - self.weight_gain = 1 / np.sqrt(np.prod(weight_shape[1:])) - - else: - assert kernel_size == 1, "does not support larger kernel sizes for now. used in NeRF" - assert mode != '3d', "does not support 3D convolution for now" - - self.weight = torch.nn.Parameter(torch.Tensor(out_channels, in_channels)) - self.bias = torch.nn.Parameter(torch.Tensor(out_channels)) - self.weight_gain = 1. - - # initialization - torch.nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5)) - fan_in, _ = torch.nn.init._calculate_fan_in_and_fan_out(self.weight) - bound = 1 / math.sqrt(fan_in) - torch.nn.init.uniform_(self.bias, -bound, bound) - - def forward(self, x, w=None, fused_modconv=True): - if w is not None: - styles = self.affine(w) * self.weight_gain - if x.size(0) > styles.size(0): - assert (x.size(0) // styles.size(0) * styles.size(0) == x.size(0)) - styles = repeat(styles, 'b c -> (b s) c', s=x.size(0) // styles.size(0)) - x = modulated_conv2d(x=x, weight=self.weight, styles=styles, demodulate=False, fused_modconv=fused_modconv, mode=self.mode) - x = bias_act.bias_act(x, self.bias.to(x.dtype), clamp=self.conv_clamp) - else: - if x.ndim == 2: - x = F.linear(x, self.weight, self.bias) - else: - x = F.conv2d(x, self.weight[:,:,None,None], self.bias) - return x - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class SynthesisBlock(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels, 0 = first block. - out_channels, # Number of output channels. - w_dim, # Intermediate latent (W) dimensionality. 
- resolution, # Resolution of this block. - img_channels, # Number of output color channels. - is_last, # Is this the last block? - architecture = 'skip', # Architecture: 'orig', 'skip', 'resnet'. - resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp = None, # Clamp the output of convolution layers to +-X, None = disable clamping. - use_fp16 = False, # Use FP16 for this block? - fp16_channels_last = False, # Use channels-last memory format with FP16? - use_single_layer = False, # use only one instead of two synthesis layer - disable_upsample = False, - **layer_kwargs, # Arguments for SynthesisLayer. - ): - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.w_dim = w_dim - self.resolution = resolution - self.img_channels = img_channels - self.is_last = is_last - self.architecture = architecture - self.use_fp16 = use_fp16 - self.channels_last = (use_fp16 and fp16_channels_last) - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.num_conv = 0 - self.num_torgb = 0 - - self.groups = 1 - self.use_single_layer = use_single_layer - self.margin = layer_kwargs.get('margin', 0) - self.upsample_mode = layer_kwargs.get('upsample_mode', 'default') - self.disable_upsample = disable_upsample - self.mode = layer_kwargs.get('mode', '2d') - - if in_channels == 0: - const_sizes = [out_channels, resolution, resolution] - if self.mode == '3d': - const_sizes = const_sizes + [resolution] - self.const = torch.nn.Parameter(torch.randn(const_sizes)) - - if in_channels != 0: - self.conv0 = util.construct_class_by_name( - class_name=layer_kwargs.get('layer_name', "training.networks.SynthesisLayer"), - in_channels=in_channels, out_channels=out_channels, - w_dim=w_dim, resolution=resolution, - up=2 if (not disable_upsample) else 1, - resample_filter=resample_filter, conv_clamp=conv_clamp, - channels_last=self.channels_last, **layer_kwargs) - self.num_conv += 1 - - if not self.use_single_layer: - self.conv1 = util.construct_class_by_name( - class_name=layer_kwargs.get('layer_name', "training.networks.SynthesisLayer"), - in_channels=out_channels, out_channels=out_channels, - w_dim=w_dim, resolution=resolution, - conv_clamp=conv_clamp, channels_last=self.channels_last, **layer_kwargs) - self.num_conv += 1 - - if is_last or architecture == 'skip': - self.torgb = ToRGBLayer( - out_channels, img_channels, w_dim=w_dim, - conv_clamp=conv_clamp, channels_last=self.channels_last, - groups=self.groups, mode=self.mode) - self.num_torgb += 1 - - if in_channels != 0 and architecture == 'resnet': - self.skip = Conv2dLayer( - in_channels, out_channels, kernel_size=1, bias=False, up=2, - resample_filter=resample_filter, - channels_last=self.channels_last, - mode=self.mode) - - def forward(self, x, img, ws, force_fp32=False, fused_modconv=None, add_on=None, block_noise=None, disable_rgb=False, **layer_kwargs): - misc.assert_shape(ws, [None, self.num_conv + self.num_torgb, self.w_dim]) - w_iter = iter(ws.unbind(dim=1)) - dtype = torch.float16 if (self.use_fp16 and x.device.type == 'cuda') and not force_fp32 else torch.float32 - memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format - if fused_modconv is None: - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - fused_modconv = (not self.training) and (dtype == torch.float32 or int(x.shape[0]) == 1) - - # Input. 
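-        # The first (4x4) block has no incoming activations: the learned constant is
-        # broadcast over the batch; later blocks consume the previous block's features.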
-        if self.in_channels == 0:
-            x = self.const.to(dtype=dtype, memory_format=memory_format)
-            x = x.unsqueeze(0).expand(ws.shape[0], *x.size())
-        else:
-            x = x.to(dtype=dtype, memory_format=memory_format)
-
-        # Main layers.
-        if add_on is not None:
-            add_on = add_on.to(dtype=dtype, memory_format=memory_format)
-
-        if self.in_channels == 0:
-            if not self.use_single_layer:
-                layer_kwargs['input_noise'] = block_noise[:,1:2] if block_noise is not None else None
-                x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv, **layer_kwargs)
-
-        elif self.architecture == 'resnet':
-            y = self.skip(x, gain=np.sqrt(0.5))
-            layer_kwargs['input_noise'] = block_noise[:,0:1] if block_noise is not None else None
-            x = self.conv0(x, next(w_iter), fused_modconv=fused_modconv, **layer_kwargs)
-            if not self.use_single_layer:
-                layer_kwargs['input_noise'] = block_noise[:,1:2] if block_noise is not None else None
-                x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv, gain=np.sqrt(0.5), **layer_kwargs)
-            x = y.add_(x)
-        else:
-            layer_kwargs['input_noise'] = block_noise[:,0:1] if block_noise is not None else None
-            x = self.conv0(x, next(w_iter), fused_modconv=fused_modconv, **layer_kwargs)
-            if not self.use_single_layer:
-                layer_kwargs['input_noise'] = block_noise[:,1:2] if block_noise is not None else None
-                x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv, **layer_kwargs)
-
-        # ToRGB.
-        if img is not None:
-            if img.size(-1) * 2 == x.size(-1):
-                if (self.upsample_mode == 'bilinear_all') or (self.upsample_mode == 'bilinear_ada'):
-                    img = F.interpolate(img, scale_factor=2, mode='bilinear', align_corners=True)
-                else:
-                    img = upfirdn2d.upsample2d(img, self.resample_filter)  # standard StyleGAN2 skip-branch upsampling: 2x upsample with the low-pass resample filter
-            elif img.size(-1) == x.size(-1):
-                pass
-            else:
-                raise NotImplementedError
-
-        if self.is_last or self.architecture == 'skip':
-            if not disable_rgb:
-                y = x if add_on is None else x + add_on
-                y = self.torgb(y, next(w_iter), fused_modconv=fused_modconv)
-                y = y.to(dtype=torch.float32, memory_format=torch.contiguous_format)
-                img = img.add_(y) if img is not None else y
-            else:
-                img = None
-
-        assert x.dtype == dtype
-        assert img is None or img.dtype == torch.float32
-        return x, img
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class SynthesisBlock3(torch.nn.Module):
-    def __init__(self,
-        in_channels,                        # Number of input channels, 0 = first block.
-        out_channels,                       # Number of output channels.
-        w_dim,                              # Intermediate latent (W) dimensionality.
-        resolution,                         # Resolution of this block.
-        img_channels,                       # Number of output color channels.
-        block_id,
-        stylegan3_hyperam,
-        use_fp16            = False,        # Use FP16 for this block?
-        **layer_kwargs,                     # Arguments for SynthesisLayer.
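-        # (`stylegan3_hyperam` is assumed to hold the per-layer `sizes`, `sampling_rates`,
-        # `cutoffs`, and `half_widths` schedules, precomputed as in the StyleGAN3 reference code)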
-    ):
-        super().__init__()
-        self.in_channels = in_channels
-        self.out_channels = out_channels
-        self.w_dim = w_dim
-        self.resolution = resolution
-        self.img_channels = img_channels
-        self.num_conv = 0
-        self.num_torgb = 0
-        self.use_fp16 = use_fp16
-
-        is_critically_sampled = block_id == (len(stylegan3_hyperam['sampling_rates'][:-1]) // 2 - 1)
-        sizes, sampling_rates, cutoffs, half_widths = \
-            stylegan3_hyperam['sizes'], stylegan3_hyperam['sampling_rates'], \
-            stylegan3_hyperam['cutoffs'], stylegan3_hyperam['half_widths']
-
-        # each block has two layers
-        prev = max(block_id * 2 - 1, 0)
-        curr = block_id * 2
-        self.conv0 = util.construct_class_by_name(
-            class_name=layer_kwargs.get('layer_name', "training.networks.SynthesisLayer3"),
-            w_dim=self.w_dim,
-            is_torgb=False,
-            is_critically_sampled=is_critically_sampled,
-            use_fp16=use_fp16,
-            in_channels=in_channels,
-            out_channels=out_channels,
-            in_size=int(sizes[prev]),
-            out_size=int(sizes[curr]),
-            in_sampling_rate=int(sampling_rates[prev]),
-            out_sampling_rate=int(sampling_rates[curr]),
-            in_cutoff=cutoffs[prev],
-            out_cutoff=cutoffs[curr],
-            in_half_width=half_widths[prev],
-            out_half_width=half_widths[curr],
-            use_radial_filters=True,
-            **layer_kwargs)
-        self.num_conv += 1
-
-        prev = block_id * 2
-        curr = block_id * 2 + 1
-        self.conv1 = util.construct_class_by_name(
-            class_name=layer_kwargs.get('layer_name', "training.networks.SynthesisLayer3"),
-            w_dim=self.w_dim,
-            is_torgb=False,
-            is_critically_sampled=is_critically_sampled,
-            use_fp16=use_fp16,
-            in_channels=out_channels,
-            out_channels=out_channels,
-            in_size=int(sizes[prev]),
-            out_size=int(sizes[curr]),
-            in_sampling_rate=int(sampling_rates[prev]),
-            out_sampling_rate=int(sampling_rates[curr]),
-            in_cutoff=cutoffs[prev],
-            out_cutoff=cutoffs[curr],
-            in_half_width=half_widths[prev],
-            out_half_width=half_widths[curr],
-            use_radial_filters=True,
-            **layer_kwargs)
-        self.num_conv += 1
-
-        # toRGB layer (used for progressive growing)
-        self.torgb = ToRGBLayer(out_channels, img_channels, w_dim=w_dim)
-        self.num_torgb += 1
-
-    def forward(self, x, img, ws, force_fp32=False, add_on=None, disable_rgb=False, **layer_kwargs):
-        w_iter = iter(ws.unbind(dim=1))
-        dtype = torch.float16 if (self.use_fp16 and x.device.type == 'cuda') and not force_fp32 else torch.float32
-        memory_format = torch.contiguous_format
-
-        # Main layers.
-        x = x.to(dtype=dtype, memory_format=memory_format)
-        if add_on is not None:
-            add_on = add_on.to(dtype=dtype, memory_format=memory_format)
-
-        x = self.conv0(x, next(w_iter), **layer_kwargs)
-        x = self.conv1(x, next(w_iter), **layer_kwargs)
-
-        assert img is None, "currently not supported."
-        if not disable_rgb:
-            y = x if add_on is None else x + add_on
-            y = self.torgb(y, next(w_iter), fused_modconv=True)
-            y = y.to(dtype=torch.float32, memory_format=torch.contiguous_format)
-            img = y
-
-        assert x.dtype == dtype
-        assert img is None or img.dtype == torch.float32
-        return x, img
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class SynthesisNetwork(torch.nn.Module):
-    def __init__(self,
-        w_dim,                      # Intermediate latent (W) dimensionality.
-        img_resolution,             # Output image resolution.
-        img_channels,               # Number of color channels.
-        channel_base    = 1,        # Overall multiplier for the number of channels.
-        channel_max     = 512,      # Maximum number of channels in any layer.
-        num_fp16_res    = 0,        # Use FP16 for the N highest resolutions.
-        **block_kwargs,             # Arguments for SynthesisBlock.
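-        # (channel_base is a fraction here: it is multiplied by 32768 below, so
-        # channel_base=1 reproduces the standard StyleGAN2 channel schedule
-        # min(32768 // res, channel_max))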
-    ):
-        assert img_resolution >= 4 and img_resolution & (img_resolution - 1) == 0
-        super().__init__()
-        self.w_dim = w_dim
-        self.img_resolution = img_resolution
-        self.img_resolution_log2 = int(np.log2(img_resolution))
-        self.img_channels = img_channels
-        self.block_resolutions = [2 ** i for i in range(2, self.img_resolution_log2 + 1)]
-
-        channel_base = int(channel_base * 32768)
-        channels_dict = {res: min(channel_base // res, channel_max) for res in self.block_resolutions}
-        fp16_resolution = max(2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8)
-        self.channels_dict = channels_dict
-
-        self.num_ws = 0
-        for res in self.block_resolutions:
-            in_channels = channels_dict[res // 2] if res > 4 else 0
-            out_channels = channels_dict[res]
-            use_fp16 = (res >= fp16_resolution)
-            is_last = (res == self.img_resolution)
-            block = util.construct_class_by_name(
-                class_name=block_kwargs.get('block_name', "training.networks.SynthesisBlock"),
-                in_channels=in_channels, out_channels=out_channels, w_dim=w_dim, resolution=res,
-                img_channels=img_channels, is_last=is_last, use_fp16=use_fp16, **block_kwargs)
-
-            self.num_ws += block.num_conv
-            if is_last:
-                self.num_ws += block.num_torgb
-            setattr(self, f'b{res}', block)
-
-    def forward(self, ws, **block_kwargs):
-        block_ws = []
-
-        # slice the style vectors (W) into the per-layer chunks (conv/RGB) consumed by each block
-        with torch.autograd.profiler.record_function('split_ws'):
-            misc.assert_shape(ws, [None, self.num_ws, self.w_dim])
-            ws = ws.to(torch.float32)
-            w_idx = 0
-            for res in self.block_resolutions:
-                block = getattr(self, f'b{res}')
-                block_ws.append(ws.narrow(1, w_idx, block.num_conv + block.num_torgb))
-                w_idx += block.num_conv
-
-        x = img = None
-        for res, cur_ws in zip(self.block_resolutions, block_ws):
-            block = getattr(self, f'b{res}')
-            x, img = block(x, img, cur_ws, **block_kwargs)
-        return img
-
-    def get_current_resolution(self):
-        return [self.img_resolution]  # For compatibility
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class Generator(torch.nn.Module):
-    def __init__(self,
-        z_dim,                      # Input latent (Z) dimensionality.
-        c_dim,                      # Conditioning label (C) dimensionality.
-        w_dim,                      # Intermediate latent (W) dimensionality.
-        img_resolution,             # Output resolution.
-        img_channels,               # Number of output color channels.
-        mapping_kwargs      = {},   # Arguments for MappingNetwork.
-        synthesis_kwargs    = {},   # Arguments for SynthesisNetwork.
- encoder_kwargs = {}, # Arguments for Encoder (optional) - ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.img_resolution = img_resolution - self.img_channels = img_channels - self.synthesis = util.construct_class_by_name( - class_name=synthesis_kwargs.get('module_name', "training.networks.SynthesisNetwork"), - w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, **synthesis_kwargs) - self.num_ws = self.synthesis.num_ws - self.mapping = None - self.encoder = None - - if len(mapping_kwargs) > 0: # Use mapping network - self.mapping = util.construct_class_by_name( - class_name=mapping_kwargs.get('module_name', "training.networks.MappingNetwork"), - z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs) - - if len(encoder_kwargs) > 0: # Use Image-Encoder - encoder_kwargs['model_kwargs'].update({'num_ws': self.num_ws, 'w_dim': self.w_dim}) - self.encoder = util.construct_class_by_name( - img_resolution=img_resolution, - img_channels=img_channels, - **encoder_kwargs) - - def forward(self, z=None, c=None, styles=None, truncation_psi=1, truncation_cutoff=None, img=None, **synthesis_kwargs): - if styles is None: - assert z is not None - if (self.encoder is not None) and (img is not None): #TODO: debug - outputs = self.encoder(img) - ws = outputs['ws'] - if ('camera' in outputs) and ('camera_mode' not in synthesis_kwargs): - synthesis_kwargs['camera_RT'] = outputs['camera'] - else: - ws = self.mapping(z, c, truncation_psi=truncation_psi, truncation_cutoff=truncation_cutoff, **synthesis_kwargs) - else: - ws = styles - - img = self.synthesis(ws, **synthesis_kwargs) - return img - - def get_final_output(self, *args, **kwargs): - img = self.forward(*args, **kwargs) - if isinstance(img, list): - return img[-1] - elif isinstance(img, dict): - return img['img'] - return img - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class DiscriminatorBlock(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels, 0 = first block. - tmp_channels, # Number of intermediate channels. - out_channels, # Number of output channels. - resolution, # Resolution of this block. - img_channels, # Number of input color channels. - first_layer_idx, # Index of the first layer. - architecture = 'resnet', # Architecture: 'orig', 'skip', 'resnet'. - activation = 'lrelu', # Activation function: 'relu', 'lrelu', etc. - resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp = None, # Clamp the output of convolution layers to +-X, None = disable clamping. - use_fp16 = False, # Use FP16 for this block? - fp16_channels_last = False, # Use channels-last memory format with FP16? - freeze_layers = 0, # Freeze-D: Number of layers to freeze. 
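-        # (Freeze-D keeps the first `freeze_layers` discriminator layers fixed when
-        # fine-tuning on small datasets; frozen layers are created with trainable=False)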
- ): - assert in_channels in [0, tmp_channels] - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.resolution = resolution - self.img_channels = img_channels - self.first_layer_idx = first_layer_idx - self.architecture = architecture - self.use_fp16 = use_fp16 - self.channels_last = (use_fp16 and fp16_channels_last) - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - - self.num_layers = 0 - def trainable_gen(): - while True: - layer_idx = self.first_layer_idx + self.num_layers - trainable = (layer_idx >= freeze_layers) - self.num_layers += 1 - yield trainable - trainable_iter = trainable_gen() - - if in_channels == 0 or architecture == 'skip': - self.fromrgb = Conv2dLayer(img_channels, tmp_channels, kernel_size=1, activation=activation, - trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last) - - self.conv0 = Conv2dLayer(tmp_channels, tmp_channels, kernel_size=3, activation=activation, - trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last) - - self.conv1 = Conv2dLayer(tmp_channels, out_channels, kernel_size=3, activation=activation, down=2, - trainable=next(trainable_iter), resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last) - - if architecture == 'resnet': - self.skip = Conv2dLayer(tmp_channels, out_channels, kernel_size=1, bias=False, down=2, - trainable=next(trainable_iter), resample_filter=resample_filter, channels_last=self.channels_last) - - def forward(self, x, img, force_fp32=False, downsampler=None): - dtype = torch.float16 if (self.use_fp16 and x.device.type == 'cuda') and not force_fp32 else torch.float32 - memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format - - # Input. - if x is not None: - misc.assert_shape(x, [None, self.in_channels, self.resolution, self.resolution]) - x = x.to(dtype=dtype, memory_format=memory_format) - - # FromRGB. - if self.in_channels == 0 or self.architecture == 'skip': - misc.assert_shape(img, [None, self.img_channels, self.resolution, self.resolution]) - img = img.to(dtype=dtype, memory_format=memory_format) - y = self.fromrgb(img) - x = x + y if x is not None else y - if self.architecture != 'skip': - img = None - elif downsampler is not None: - img = downsampler(img, 2) - else: - img = upfirdn2d.downsample2d(img, self.resample_filter) - - # Main layers. - if self.architecture == 'resnet': - y = self.skip(x, gain=np.sqrt(0.5)) - x = self.conv0(x) - x = self.conv1(x, gain=np.sqrt(0.5)) - x = y.add_(x) - else: - x = self.conv0(x) - x = self.conv1(x) - - assert x.dtype == dtype - return x, img - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class MinibatchStdLayer(torch.nn.Module): - def __init__(self, group_size, num_channels=1): - super().__init__() - self.group_size = group_size - self.num_channels = num_channels - - def forward(self, x): - N, C, H, W = x.shape - with misc.suppress_tracer_warnings(): # as_tensor results are registered as constants - G = torch.min(torch.as_tensor(self.group_size), torch.as_tensor(N)) if self.group_size is not None else N - F = self.num_channels - c = C // F - - y = x.reshape(G, -1, F, c, H, W) # [GnFcHW] Split minibatch N into n groups of size G, and channels C into F groups of size c. - y = y - y.mean(dim=0) # [GnFcHW] Subtract mean over group. 
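-        # The lines below reduce each group to a single stddev statistic per channel
-        # split, then append it as an extra feature map so the discriminator can
-        # detect low-variety (e.g. mode-collapsed) minibatches.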
- y = y.square().mean(dim=0) # [nFcHW] Calc variance over group. - y = (y + 1e-8).sqrt() # [nFcHW] Calc stddev over group. - y = y.mean(dim=[2,3,4]) # [nF] Take average over channels and pixels. - y = y.reshape(-1, F, 1, 1) # [nF11] Add missing dimensions. - y = y.repeat(G, 1, H, W) # [NFHW] Replicate over group and pixels. - x = torch.cat([x, y], dim=1) # [NCHW] Append to input as new channels. - return x - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class DiscriminatorEpilogue(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - cmap_dim, # Dimensionality of mapped conditioning label, 0 = no label. - resolution, # Resolution of this block. - img_channels, # Number of input color channels. - architecture = 'resnet', # Architecture: 'orig', 'skip', 'resnet'. - mbstd_group_size = 4, # Group size for the minibatch standard deviation layer, None = entire minibatch. - mbstd_num_channels = 1, # Number of features for the minibatch standard deviation layer, 0 = disable. - activation = 'lrelu', # Activation function: 'relu', 'lrelu', etc. - conv_clamp = None, # Clamp the output of convolution layers to +-X, None = disable clamping. - final_channels = 1, # for classification it is always 1. - ): - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.final_channels = final_channels - self.cmap_dim = cmap_dim - self.resolution = resolution - self.img_channels = img_channels - self.architecture = architecture - - if architecture == 'skip': - self.fromrgb = Conv2dLayer(img_channels, in_channels, kernel_size=1, activation=activation) - self.mbstd = MinibatchStdLayer(group_size=mbstd_group_size, num_channels=mbstd_num_channels) if mbstd_num_channels > 0 else None - self.conv = Conv2dLayer(in_channels + mbstd_num_channels, in_channels, kernel_size=3, activation=activation, conv_clamp=conv_clamp) - self.fc = FullyConnectedLayer(in_channels * (resolution ** 2), in_channels, activation=activation) - self.out = FullyConnectedLayer(in_channels, final_channels if cmap_dim == 0 else cmap_dim) - - def forward(self, x, img, cmap, force_fp32=False): - misc.assert_shape(x, [None, self.in_channels, self.resolution, self.resolution]) # [NCHW] - _ = force_fp32 # unused - dtype = torch.float32 - memory_format = torch.contiguous_format - - # FromRGB. - x = x.to(dtype=dtype, memory_format=memory_format) - if self.architecture == 'skip': - misc.assert_shape(img, [None, self.img_channels, self.resolution, self.resolution]) - img = img.to(dtype=dtype, memory_format=memory_format) - x = x + self.fromrgb(img) - - # Main layers. - if self.mbstd is not None: - x = self.mbstd(x) - x = self.conv(x) - x = self.fc(x.flatten(1)) - x = self.out(x) - - # Conditioning. - if self.cmap_dim > 0: - if not isinstance(cmap, list): - cmap = [cmap] # in case of multiple conditions. a trick (TODO) - x = [(x * c).sum(dim=1, keepdim=True) * (1 / np.sqrt(self.cmap_dim)) for c in cmap] - x = sum(x) / len(cmap) - - assert x.dtype == dtype - return x - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class Discriminator(torch.nn.Module): # The original StyleGAN2 discriminator - def __init__(self, - c_dim, # Conditioning label (C) dimensionality. - img_resolution, # Input resolution. - img_channels, # Number of input color channels. - architecture = 'resnet', # Architecture: 'orig', 'skip', 'resnet'. 
- channel_base = 32768, # Overall multiplier for the number of channels. - channel_max = 512, # Maximum number of channels in any layer. - num_fp16_res = 0, # Use FP16 for the N highest resolutions. - conv_clamp = None, # Clamp the output of convolution layers to +-X, None = disable clamping. - cmap_dim = None, # Dimensionality of mapped conditioning label, None = default. - block_kwargs = {}, # Arguments for DiscriminatorBlock. - mapping_kwargs = {}, # Arguments for MappingNetwork. - epilogue_kwargs = {}, # Arguments for DiscriminatorEpilogue. - ): - super().__init__() - self.c_dim = c_dim - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.block_resolutions = [2 ** i for i in range(self.img_resolution_log2, 2, -1)] - channels_dict = {res: min(channel_base // res, channel_max) for res in self.block_resolutions + [4]} - fp16_resolution = max(2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - if cmap_dim is None: - cmap_dim = channels_dict[4] - if c_dim == 0: - cmap_dim = 0 - - common_kwargs = dict(img_channels=img_channels, architecture=architecture, conv_clamp=conv_clamp) - cur_layer_idx = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res] if res < img_resolution else 0 - tmp_channels = channels_dict[res] - out_channels = channels_dict[res // 2] - use_fp16 = (res >= fp16_resolution) - block = DiscriminatorBlock(in_channels, tmp_channels, out_channels, resolution=res, - first_layer_idx=cur_layer_idx, use_fp16=use_fp16, **block_kwargs, **common_kwargs) - setattr(self, f'b{res}', block) - cur_layer_idx += block.num_layers - if c_dim > 0: - self.mapping = MappingNetwork(z_dim=0, c_dim=c_dim, w_dim=cmap_dim, num_ws=None, w_avg_beta=None, **mapping_kwargs) - self.b4 = DiscriminatorEpilogue(channels_dict[4], cmap_dim=cmap_dim, resolution=4, **epilogue_kwargs, **common_kwargs) - - def forward(self, img, c, **block_kwargs): - x = None - if isinstance(img, dict): - img = img['img'] - for res in self.block_resolutions: - block = getattr(self, f'b{res}') - x, img = block(x, img, **block_kwargs) - - cmap = None - if self.c_dim > 0: - cmap = self.mapping(None, c) - x = self.b4(x, img, cmap) - return x - -#---------------------------------------------------------------------------- -# encoders maybe used for inversion (not cleaned) - -@persistence.persistent_class -class EncoderResBlock(torch.nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = Conv2dLayer(in_channel, in_channel, 3, activation='lrelu') - self.conv2 = Conv2dLayer(in_channel, out_channel, 3, down=2, activation='lrelu') - self.skip = Conv2dLayer(in_channel, out_channel, 1, down=2, activation='linear', bias=False) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - return out - - -@persistence.persistent_class -class EqualConv2d(torch.nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - new_scale = 1.0 - self.weight = torch.nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) * new_scale - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - self.stride = stride - self.padding = padding - if bias: - self.bias = torch.nn.Parameter(torch.zeros(out_channel)) - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight 
* self.scale,
-            bias=self.bias,
-            stride=self.stride,
-            padding=self.padding,
-        )
-        return out
-
-    def __repr__(self):
-        return (
-            f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},'
-            f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})'
-        )
-
-
-@persistence.persistent_class
-class Encoder(torch.nn.Module):
-    def __init__(self, size, n_latents, w_dim=512, add_dim=0, **unused):
-        super().__init__()
-
-        channels = {
-            4: 512,
-            8: 512,
-            16: 512,
-            32: 512,
-            64: 256,
-            128: 128,
-            256: 64,
-            512: 32,
-            1024: 16
-        }
-
-        self.w_dim = w_dim
-        self.add_dim = add_dim
-        log_size = int(math.log(size, 2))
-
-        self.n_latents = n_latents
-        convs = [Conv2dLayer(3, channels[size], 1)]
-
-        in_channel = channels[size]
-        for i in range(log_size, 2, -1):
-            out_channel = channels[2 ** (i - 1)]
-            convs.append(EncoderResBlock(in_channel, out_channel))
-            in_channel = out_channel
-
-        self.convs = torch.nn.Sequential(*convs)
-        self.projector = EqualConv2d(in_channel, self.n_latents*self.w_dim + add_dim, 4, padding=0, bias=False)
-
-    def forward(self, input):
-        out = self.convs(input)
-        out = self.projector(out)
-        # split into the W codes and the remaining add_dim features
-        pws = out[:, :self.n_latents * self.w_dim]
-        pcm = out[:, self.n_latents * self.w_dim:]
-        pws = pws.view(len(input), self.n_latents, self.w_dim)
-        pcm = pcm.view(len(input), self.add_dim)
-        return pws, pcm
-
-
-@persistence.persistent_class
-class ResNetEncoder(torch.nn.Module):
-    def __init__(self):
-        super().__init__()
-
-        import torchvision
-        resnet_net = torchvision.models.resnet18(pretrained=True)
-        modules = list(resnet_net.children())[:-1]
-        self.convs = torch.nn.Sequential(*modules)
-        self.requires_grad_(True)
-        self.train()
-
-    def preprocess_tensor(self, x):
-        x = F.interpolate(x, size=(224, 224), mode='bicubic', align_corners=False)
-        return x
-
-    def forward(self, input):
-        out = self.convs(self.preprocess_tensor(input))
-        return out[:, :, 0, 0]
-
-
-@persistence.persistent_class
-class CLIPEncoder(torch.nn.Module):
-    def __init__(self):
-        super().__init__()
-
-        import clip
-        clip_net, _ = clip.load('ViT-B/32', device='cpu', jit=False)
-        self.encoder = clip_net.visual
-        for p in self.encoder.parameters():
-            p.requires_grad_(True)
-
-    def preprocess_tensor(self, x):
-        import PIL.Image
-        import torchvision.transforms.functional as TF
-        x = x * 0.5 + 0.5  # mapping to 0~1
-        x = TF.resize(x, size=224, interpolation=PIL.Image.BICUBIC)
-        x = TF.normalize(x, (0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711))
-        return x
-
-    def forward(self, input):
-        out = self.encoder(self.preprocess_tensor(input))
-        return out
-
-
-# --------------------------------------------------------------------------------------------------- #
-# VolumeGAN thanks https://gist.github.com/justimyhxu/a96f5ac25480d733f3151adb8142d706
-# NOTE: CustomMappingNetwork and UpsamplingLayer used by FeatureVolume below are
-# assumed to be provided by that gist; they are not defined in this file.
-
-@persistence.persistent_class
-class InstanceNormLayer3d(torch.nn.Module):
-    """Implements instance normalization layer."""
-    def __init__(self, num_features, epsilon=1e-8, affine=False):
-        super().__init__()
-        self.eps = epsilon
-        self.affine = affine
-        if self.affine:
-            self.weight = torch.nn.Parameter(torch.Tensor(1, num_features,1,1,1))
-            self.bias = torch.nn.Parameter(torch.Tensor(1, num_features,1,1,1))
-            self.weight.data.uniform_()
-            self.bias.data.zero_()
-
-    def forward(self, x, weight=None, bias=None):
-        x = x - torch.mean(x, dim=[2, 3, 4], keepdim=True)
-        norm = torch.sqrt(
-            torch.mean(x**2, dim=[2, 3, 4], keepdim=True) + self.eps)
-        x = x / norm
-        isnot_input_none = weight is not None and bias is not None
-        assert (isnot_input_none and not 
isnot_input_none and self.affine) - if self.affine: - x = x*self.weight + self.bias - else: - x = x*weight + bias - return x - -@persistence.persistent_class -class FeatureVolume(torch.nn.Module): - def __init__( - self, - feat_res=32, - init_res=4, - base_channels=256, - output_channels=32, - z_dim=256, - use_mapping=True, - **kwargs - ): - super().__init__() - self.num_stages = int(np.log2(feat_res // init_res)) + 1 - self.use_mapping = use_mapping - - self.const = nn.Parameter( - torch.ones(1, base_channels, init_res, init_res, init_res)) - inplanes = base_channels - outplanes = base_channels - - self.stage_channels = [] - for i in range(self.num_stages): - conv = nn.Conv3d(inplanes, - outplanes, - kernel_size=(3, 3, 3), - padding=(1, 1, 1)) - self.stage_channels.append(outplanes) - self.add_module(f'layer{i}', conv) - instance_norm = InstanceNormLayer3d(num_features=outplanes, affine=not use_mapping) - - self.add_module(f'instance_norm{i}', instance_norm) - inplanes = outplanes - outplanes = max(outplanes // 2, output_channels) - if i == self.num_stages - 1: - outplanes = output_channels - - if self.use_mapping: - self.mapping_network = CustomMappingNetwork( - z_dim, 256, - sum(self.stage_channels) * 2) - self.upsample = UpsamplingLayer() - self.lrelu = nn.LeakyReLU(negative_slope=0.2) - - def forward(self, z, **kwargs): - if self.use_mapping: - scales, shifts, style = self.mapping_network(z) - - x = self.const.repeat(z.shape[0], 1, 1, 1, 1) - for idx in range(self.num_stages): - if idx != 0: - x = self.upsample(x) - conv_layer = self.__getattr__(f'layer{idx}') - x = conv_layer(x) - instance_norm = self.__getattr__(f'instance_norm{idx}') - if self.use_mapping: - scale = scales[:, sum(self.stage_channels[:idx]):sum(self.stage_channels[:idx + 1])] - shift = shifts[:, sum(self.stage_channels[:idx]):sum(self.stage_channels[:idx + 1])] - scale = scale.view(scale.shape + (1, 1, 1)) - shift = shift.view(shift.shape + (1, 1, 1)) - else: - scale, shift = None, None - x = instance_norm(x, weight=scale, bias=shift) - x = self.lrelu(x) - - return x \ No newline at end of file diff --git a/spaces/facebook/ov-seg/open_vocab_seg/utils/events.py b/spaces/facebook/ov-seg/open_vocab_seg/utils/events.py deleted file mode 100644 index cbe82ce80a7110a1018167763ba3adc90f58faa0..0000000000000000000000000000000000000000 --- a/spaces/facebook/ov-seg/open_vocab_seg/utils/events.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. 
All Rights Reserved - -import os -import wandb -from detectron2.utils import comm -from detectron2.utils.events import EventWriter, get_event_storage - - -def setup_wandb(cfg, args): - if comm.is_main_process(): - init_args = { - k.lower(): v - for k, v in cfg.WANDB.items() - if isinstance(k, str) and k not in ["config", "name"] - } - # only include most related part to avoid too big table - # TODO: add configurable params to select which part of `cfg` should be saved in config - if "config_exclude_keys" in init_args: - init_args["config"] = cfg - init_args["config"]["cfg_file"] = args.config_file - else: - init_args["config"] = { - "model": cfg.MODEL, - "solver": cfg.SOLVER, - "cfg_file": args.config_file, - } - if ("name" not in init_args) or (init_args["name"] is None): - init_args["name"] = os.path.basename(args.config_file) - wandb.init(**init_args) - - -class BaseRule(object): - def __call__(self, target): - return target - - -class IsIn(BaseRule): - def __init__(self, keyword: str): - self.keyword = keyword - - def __call__(self, target): - return self.keyword in target - - -class Prefix(BaseRule): - def __init__(self, keyword: str): - self.keyword = keyword - - def __call__(self, target): - return "/".join([self.keyword, target]) - - -class WandbWriter(EventWriter): - """ - Write all scalars to a tensorboard file. - """ - - def __init__(self): - """ - Args: - log_dir (str): the directory to save the output events - kwargs: other arguments passed to `torch.utils.tensorboard.SummaryWriter(...)` - """ - self._last_write = -1 - self._group_rules = [ - (IsIn("/"), BaseRule()), - (IsIn("loss"), Prefix("train")), - ] - - def write(self): - - storage = get_event_storage() - - def _group_name(scalar_name): - for (rule, op) in self._group_rules: - if rule(scalar_name): - return op(scalar_name) - return scalar_name - - stats = { - _group_name(name): scalars[0] - for name, scalars in storage.latest().items() - if scalars[1] > self._last_write - } - if len(stats) > 0: - self._last_write = max([v[1] for k, v in storage.latest().items()]) - - # storage.put_{image,histogram} is only meant to be used by - # tensorboard writer. So we access its internal fields directly from here. - if len(storage._vis_data) >= 1: - stats["image"] = [ - wandb.Image(img, caption=img_name) - for img_name, img, step_num in storage._vis_data - ] - # Storage stores all image data and rely on this writer to clear them. - # As a result it assumes only one writer will use its image data. - # An alternative design is to let storage store limited recent - # data (e.g. only the most recent image) that all writers can access. - # In that case a writer may not see all image data if its period is long. 
- storage.clear_images() - - if len(storage._histograms) >= 1: - - def create_bar(tag, bucket_limits, bucket_counts, **kwargs): - data = [ - [label, val] for (label, val) in zip(bucket_limits, bucket_counts) - ] - table = wandb.Table(data=data, columns=["label", "value"]) - return wandb.plot.bar(table, "label", "value", title=tag) - - stats["hist"] = [create_bar(**params) for params in storage._histograms] - - storage.clear_histograms() - - if len(stats) == 0: - return - wandb.log(stats, step=storage.iter) - - def close(self): - wandb.finish() diff --git a/spaces/falterWliame/Face_Mask_Detection/Download Driver Monitor Aoc 215lm00040.md b/spaces/falterWliame/Face_Mask_Detection/Download Driver Monitor Aoc 215lm00040.md deleted file mode 100644 index f690799ee90637f5c88cb54f4959bcff9b6b1db0..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Download Driver Monitor Aoc 215lm00040.md +++ /dev/null @@ -1,14 +0,0 @@ -

download driver monitor aoc 215lm00040


DOWNLOAD » https://urlca.com/2uDcic



-
    -Results - -If the screen is not displayed after installing the official AOC driver and your computer cannot communicate with the external monitor, check your monitor and your computer manufacturer's website to ensure that the monitor is compatible with your computer. - -## 0.3. Run the driver setup utility to install the Universal drivers - -In this tutorial, we will install the Universal driver for AOC monitors. - -1. Open the Device Manager by clicking on the Start button, selecting Control Panel, and then selecting Programs and Features. In the left pane, select View By and then select By Category. Select Hardware and Sound from the left pane and then select Device Manager from the middle pane. Expand the display driver list in the left pane and then select Display adapters from the middle pane. Select Universal from the list. If there
    
-
-
-

diff --git a/spaces/fatiXbelha/sd/Download FIFA Romania and get ready for the new 23 season with updated players kits and clubs.md b/spaces/fatiXbelha/sd/Download FIFA Romania and get ready for the new 23 season with updated players kits and clubs.md deleted file mode 100644 index fbbdc5323e5001f315cde85b145904f6ac82223b..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download FIFA Romania and get ready for the new 23 season with updated players kits and clubs.md +++ /dev/null @@ -1,148 +0,0 @@ - -

Download FIFA Romania: How to Play the Ultimate Football Game with Your Favorite Romanian Teams and Players

-

If you are a fan of football and you love Romania, you might be interested in downloading FIFA Romania, a modded version of the popular FIFA game that features Romanian teams, players, stadiums, and more. In this article, we will show you how to download FIFA Romania, how to play it, and why you should give it a try.

-

Introduction

-

What is FIFA Romania?

-

FIFA Romania is a modification of the original FIFA game that adds Romanian elements to the game. It was created by a group of passionate Romanian fans who wanted to bring their country's football culture to the virtual world. FIFA Romania includes:

-

download fifa romania


Download File ->>> https://urllie.com/2uNCdC



- -

Why should you download FIFA Romania?

-

Downloading FIFA Romania has many benefits for football lovers. Here are some of them:

- -

How to download FIFA Romania

-

Requirements and compatibility

-

To download FIFA Romania, you will need the following:

- -

    FIFA Romania is compatible with both FIFA 14 and FIFA 15. However, some features vary depending on the version you have: FIFA 15 has better graphics and gameplay, while FIFA 14 has more Romanian teams and players.
    

-

Steps to download and install FIFA Romania

-

    To download and install FIFA Romania, follow these steps (a scripted sketch of the extract-and-copy steps appears right after the list):
    

-
    -
  1. Go to the official website of FIFA Romania (www.fifaromania.net) and register an account (it's free)
  2. -
  3. Log in to your account and go to the download section
  4. -
  5. Select the version of FIFA Romania that matches your version of FIFA (FIFA 14 or FIFA 15)
  6. -
  7. Download the mod file (it's a zip file that contains several folders)
  8. -
  9. Extract the mod file to a folder on your device (you can use WinRAR or any other extraction software)
  10. -
  11. Copy the contents of the extracted folder to the folder where you installed FIFA on your device (usually C:\Program Files\EA Sports\FIFA)
  12. -
  13. Run the game as administrator (right-click on the game icon and select "Run as administrator")
  14. -
  15. Enjoy playing FIFA Romania!
  16. -
-
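 If you prefer to script steps 9-11, here is a minimal Python sketch of the extract-and-copy part. This is an illustration only: the archive name and staging folder below are placeholder paths (only the default C:\Program Files\EA Sports\FIFA location comes from step 11), so adjust them to your own setup. ```python # Minimal sketch of steps 9-11; the first two paths are hypothetical placeholders. import shutil import zipfile from pathlib import Path mod_zip = Path(r"C:\Downloads\fifa_romania_mod.zip") # placeholder: your downloaded mod archive staging = Path(r"C:\Downloads\fifa_romania_extracted") # placeholder: temporary extraction folder fifa_dir = Path(r"C:\Program Files\EA Sports\FIFA") # default install folder from step 11 # Step 9: extract the mod archive. with zipfile.ZipFile(mod_zip) as zf: zf.extractall(staging) # Step 11: copy the extracted contents over the FIFA installation, # overwriting files that already exist (dirs_exist_ok needs Python 3.8+). # Note: writing into Program Files usually requires an elevated (admin) prompt. shutil.copytree(staging, fifa_dir, dirs_exist_ok=True) print("Mod files copied - remember to run the game as administrator (step 13).") ``` 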

Tips and tricks to optimize your gaming experience

-

Here are some tips and tricks to make the most out of FIFA Romania:

- -

How to play FIFA Romania

-

Game modes and features

-

FIFA Romania offers a variety of game modes and features that will keep you entertained and challenged. Here are some of them:

- -

How to create and customize your own team

-

If you want to create and customize your own team in FIFA Romania, you can do so by following these steps:

-
    -
  1. Go to the main menu and select "Customize"
  2. -
  3. Select "Create team"
  4. -
  5. Enter a name for your team and choose a country (you can choose Romania or any other country)
  6. -
  7. Select a league for your team (you can choose from Liga 1, Liga 2, Liga 3, or Rest of World)
  8. -
  9. Select a stadium for your team (you can choose from any of the Romanian stadiums or any other stadium)
  10. -
  11. Select a kit for your team (you can choose from any of the Romanian kits or any other kit)
  12. -
  13. Select a logo for your team (you can choose from any of the Romanian logos or any other logo)
  14. -
  15. Select a squad for your team (you can choose from any of the Romanian players or any other players)
  16. -
  17. Save your team and exit
  18. -
-

You can also edit or delete your team at any time by going to "Customize" > "Edit teams"

-

download fifa romania apk
-download fifa romania 2023
-download fifa romania liga 1
-download fifa romania patch
-download fifa romania android
-download fifa romania pc
-download fifa romania mod
-download fifa romania 2022
-download fifa romania free
-download fifa romania online
-download fifa mobile romania
-download fifa 14 romania
-download fifa 21 romania
-download fifa 20 romania
-download fifa 19 romania
-download fifa 18 romania
-download fifa 17 romania
-download fifa 16 romania
-download fifa 15 romania
-download fifa 13 romania
-download ea sports fifa romania
-download jocuri fifa romania
-download liga 1 betano fifa romania
-download liga 2 casa pariurilor fifa romania
-download liga 3 fifa romania
-download echipe nationale fifa romania
-download comentariu in limba romana pentru fifa
-download stadioane romanesti pentru fifa
-download imnul national al romaniei pentru fifa
-download steaua bucuresti pentru fifa
-download dinamo bucuresti pentru fifa
-download rapid bucuresti pentru fifa
-download universitatea craiova pentru fifa
-download cfr cluj pentru fifa
-download fcsb pentru fifa
-download viitorul constanta pentru fifa
-download astra giurgiu pentru fifa
-download petrolul ploiesti pentru fifa
-download uta arad pentru fifa
-download chindia targoviste pentru fifa

-

How to compete online and offline with other players

-

If you want to compete online and offline with other players in FIFA Romania, you have several options:

- -

Conclusion

-

Summary of the main points

-

In conclusion, FIFA Romania is a modded version of the original FIFA game that features Romanian teams, players, stadiums, and more. It is a fun and unique way to enjoy football and support Romania. To download FIFA Romania, you need to have FIFA 14 or FIFA 15 installed on your device, and then follow the steps to download and install the mod from the official website. You can play FIFA Romania in various game modes and features, such as career mode, tournament mode, online mode, ultimate team mode, or skill games mode. You can also create and customize your own team, and compete online and offline with other players.

-

Call to action and invitation to share feedback

-

If you are interested in downloading FIFA Romania, you can visit the official website (www.fifaromania.net) and register an account for free. You can also join the online community of FIFA Romania players and fans on social media, forums, and blogs, where you can share your feedback, suggestions, questions, and experiences. We hope you enjoy playing FIFA Romania and have a great time with your favorite Romanian teams and players. Thank you for reading this article and please share it with your friends if you found it helpful.

-

FAQs

-

Here are some frequently asked questions about FIFA Romania:

-
    -
  1. Is FIFA Romania legal and safe?
    -Yes, FIFA Romania is legal and safe. It is a fan-made modification of the original FIFA game that does not violate any copyrights or trademarks of EA Sports or FIFA. It is also free of viruses, malware, or spyware.
  2. -
  3. Can I play FIFA Romania on other devices besides PC?
    -No, FIFA Romania is only available for PC devices. It is not compatible with other devices such as consoles, mobile phones, or tablets.
  4. -
  5. Can I play FIFA Romania with other mods or patches?
    -No, FIFA Romania is not compatible with other mods or patches. It is a standalone mod that requires a clean installation of FIFA 14 or FIFA 15. If you have other mods or patches installed on your device, you need to uninstall them before installing FIFA Romania.
  6. -
  7. Can I update FIFA Romania to the latest version of FIFA?
    -No, FIFA Romania is only compatible with FIFA 14 or FIFA 15 versions. It is not compatible with newer versions of FIFA such as FIFA 16 or FIFA 17. If you want to play the latest version of FIFA, you need to buy it separately from EA Sports or from a local store.
  8. -
  9. Can I contact the developers of FIFA Romania for support or feedback?
    -Yes, you can contact the developers of FIFA Romania for support or feedback. You can visit their official website (www.fifaromania.net) and use the contact form or the forum to send them a message. You can also follow them on social media (Facebook, Twitter, YouTube) and send them a message there.
  10. -

 
-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Soul Knight APK dan Nikmati Gameplay Menarik di Game RPG Ini 2022.md b/spaces/fatiXbelha/sd/Download Soul Knight APK dan Nikmati Gameplay Menarik di Game RPG Ini 2022.md deleted file mode 100644 index 7084106b2b12e4b1e7431156f28a69681977cbfd..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Soul Knight APK dan Nikmati Gameplay Menarik di Game RPG Ini 2022.md +++ /dev/null @@ -1,133 +0,0 @@ - -

Download Soul Knight Versi Terbaru 2022: A Guide for Pixel Action Shoot'em Up Fans

-

 If you are a fan of pixel action shoot'em up games, you might have heard of Soul Knight, a popular game made by ChillyRoom Inc. for Android, iOS, and Nintendo Switch. Soul Knight is a shooter game that features extremely easy and intuitive control and super smooth, enjoyable gameplay mixed with rogue-like elements that will get you hooked from the very first run! 

-

In this article, we will guide you on how to download Soul Knight versi terbaru 2022, the latest version of the game that was released on June 6th, 2023 for Android devices and May 15th, 2023 for iOS devices. We will also show you what's new in this version, how to play it, and why you should play it.

-

download soul knight versi terbaru 2022


Download Ziphttps://urllie.com/2uNAub



-

What's New in Soul Knight Versi Terbaru 2022?

-

 Soul Knight versi terbaru 2022 is the latest release of the game and brings new features and improvements to enhance your gaming experience. Here are some of the highlights of this version: 

- -

As you can see, Soul Knight versi terbaru 2022 is packed with new content and updates that will make you want to play it more than ever. But how do you download it? Let's find out in the next section.

-

How to Download Soul Knight Versi Terbaru 2022 for Android and iOS Devices?

-

 Downloading Soul Knight versi terbaru 2022 is quick and easy. Just follow these steps: 

-
    -
  1. Go to the official website of Soul Knight: https://www.chillyroom.com/
  2. -
  3. Click on the "Download" button on the top right corner of the screen.
  4. -
  5. Select your device type: Android or iOS.
  6. -
  7. You will be redirected to the Google Play Store or the App Store, depending on your device.
  8. -
  9. Click on the "Install" or "Get" button to download Soul Knight versi terbaru 2022 on your device.
  10. -
  11. Wait for the download and installation process to finish.
  12. -
  13. Launch the game and enjoy!
  14. -
-

 Alternatively, you can scan the QR code on the website to download Soul Knight versi terbaru 2022 directly on your device. Just make sure you have a QR code scanner app installed. 

-

That's it! You have successfully downloaded Soul Knight versi terbaru 2022 on your device. Now, let's learn how to play it in the next section.

How to Play Soul Knight Versi Terbaru 2022?

-

 Soul Knight versi terbaru 2022 is easy to learn but hard to master: it rewards skill, strategy, and a bit of luck. Here are some tips and tricks on how to play it and have fun: 

-

Choose Your Hero and Weapon

-

One of the first things you need to do in Soul Knight versi terbaru 2022 is to choose your hero and weapon. There are more than 30 heroes to choose from, each with their own unique skills and stats. Some of the heroes are free, while others require gems or real money to unlock. You can also customize your hero's appearance with different skins.

-

download soul knight mod apk versi terbaru 2022
-download soul knight update 5.2.2 versi terbaru 2022
-download soul knight game android versi terbaru 2022
-download soul knight hack unlimited gems versi terbaru 2022
-download soul knight offline rpg versi terbaru 2022
-download soul knight cheat menu versi terbaru 2022
-download soul knight latest version 2022 for free
-download soul knight new skins and weapons versi terbaru 2022
-download soul knight premium unlocked versi terbaru 2022
-download soul knight full version modded versi terbaru 2022
-download soul knight best roguelike game versi terbaru 2022
-download soul knight all characters and pets versi terbaru 2022
-download soul knight no ads and in-app purchases versi terbaru 2022
-download soul knight online multiplayer versi terbaru 2022
-download soul knight dungeon shooter versi terbaru 2022
-download soul knight pixel art graphics versi terbaru 2022
-download soul knight tips and tricks versi terbaru 2022
-download soul knight changelog and patch notes versi terbaru 2022
-download soul knight apk file for android versi terbaru 2022
-download soul knight apk + obb data versi terbaru 2022
-download soul knight apk mirror link versi terbaru 2022
-download soul knight apk pure source versi terbaru 2022
-download soul knight apk combo installer versi terbaru 2022
-download soul knight apk mod menu versi terbaru 2022
-download soul knight apk unlimited money and gems versi terbaru 2022
-download soul knight apk pro version versi terbaru 2022
-download soul knight apk latest update versi terbaru 2022
-download soul knight apk from google play store versi terbaru 2022
-download soul knight apk from official website versi terbaru 2022
-download soul knight apk from youtube video link versi terbaru 2022
-cara download soul knight di android versi terbaru 2022
-cara download soul knight mod gratis versi terbaru 2022
-cara download soul knight tanpa root versi terbaru 2022
-cara download soul knight dengan mudah dan cepat versi terbaru 2022
-cara download soul knight dari situs web resmi versi terbaru 2022
-cara download soul knight dari play store versi terbaru 2022
-cara download soul knight dari link alternatif versi terbaru 2022
-cara download soul knight dengan fitur lengkap versi terbaru 2022
-cara download soul knight dengan kualitas tinggi versi terbaru 2022
-cara download soul knight dengan ukuran kecil versi terbaru 2022
-review game soul knight untuk android versi terbaru 2022
-gameplay game soul knight di android versi terbaru 2022
-fitur game soul knight untuk android versi terbaru 2022
-kelebihan game soul knight untuk android versi terbaru 2022
-kekurangan game soul knight untuk android versi terbaru 2022
-tips bermain game soul knight di android versi terbaru 2022
-trik bermain game soul knight di android versi terbaru 2022
-cheat game soul knight di android versi terbaru 2022
-kode game soul knight di android versi terbaru 2022

-

Each hero has a default weapon that they start with, but you can also find and use other weapons in the game. There are more than 300 weapons to collect, ranging from swords, guns, bows, lasers, rockets, etc. Each weapon has its own attributes, such as damage, fire rate, accuracy, etc. You can also upgrade your weapons with gold or gems to make them more powerful.

-

Choosing the right hero and weapon for your play style and preference is crucial for your success in Soul Knight versi terbaru 2022. Experiment with different combinations and see what works best for you.

-

Explore Randomly Generated Dungeons

-

The main mode of Soul Knight versi terbaru 2022 is the dungeon mode, where you have to explore randomly generated dungeons and fight against various enemies and bosses. Each dungeon has a theme, such as forest, desert, ice, etc., and consists of several floors. Each floor has a number of rooms, where you can find enemies, chests, shops, statues, etc.

-

Your goal is to clear all the rooms on each floor and reach the portal that leads to the next floor. Along the way, you can collect coins, gems, health potions, ammo boxes, etc., that will help you survive and progress. You can also find secrets and hidden rooms that may contain special items or surprises.

-

The dungeons are randomly generated every time you play, so you never know what to expect. This adds an element of unpredictability and replayability to Soul Knight versi terbaru 2022. You have to be prepared for anything and adapt to different situations.
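 "A different dungeon every run" boils down to driving the layout from a random generator. Here is a toy sketch of that idea with invented room types; it is not the game's real generator. ```python # Toy procedural floor: random room count and room types per run (invented labels). import random ROOM_TYPES = ["combat", "shop", "chest", "statue"] def generate_floor(seed=None): rng = random.Random(seed) # pass a seed for a reproducible layout rooms = [rng.choice(ROOM_TYPES) for _ in range(rng.randint(5, 9))] rooms.append("portal") # each floor ends at the portal to the next one return rooms print(generate_floor()) # a different sequence of rooms on every run ``` 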

-

Use Skills and Buffs Wisely

-

Another important aspect of Soul Knight versi terbaru 2022 is the use of skills and buffs. Skills are special abilities that each hero has and can use in combat. Skills have a cooldown time before they can be used again. Some skills are offensive, such as shooting projectiles or summoning allies, while others are defensive, such as healing or shielding.

-

Buffs are temporary effects that enhance your hero's performance in some way. Buffs can be obtained from statues, plants, potions, etc., and can affect your stats, such as health, damage, speed, etc., or give you special abilities, such as immunity, reflection, regeneration, etc.

-

Using skills and buffs wisely can make a huge difference in your gameplay. You have to know when to use them and how to combine them for maximum effect. You also have to be careful not to use them too often or too recklessly, as they may have drawbacks or side effects.
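 To see how cooldowns and timed buffs interact mechanically, here is a toy sketch; the class names, numbers, and rules are invented for illustration, not taken from the game. ```python # Toy cooldown/buff model; all names and numbers are invented. import time class Skill: def __init__(self, name, cooldown_s): self.name = name self.cooldown_s = cooldown_s self._last_used = float("-inf") def try_use(self, now=None): now = time.monotonic() if now is None else now if now - self._last_used < self.cooldown_s: return False # still on cooldown self._last_used = now return True class Buff: """A temporary stat modifier, e.g. +damage for a few seconds.""" def __init__(self, stat, bonus, duration_s, now): self.stat, self.bonus = stat, bonus self.expires_at = now + duration_s def active(self, now): return now < self.expires_at dash = Skill("dash", cooldown_s=2.0) print(dash.try_use()) # True: first use succeeds print(dash.try_use()) # False: used again inside the 2 s cooldown ``` 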

Team Up with Friends Online or Offline

-

One of the best features of Soul Knight versi terbaru 2022 is the multiplayer mode, where you can team up with your friends online or offline and play together. You can join or create a room online and invite up to three other players to join you. You can also use the local multiplayer mode and connect up to four devices via Wi-Fi or Bluetooth.

-

Playing with your friends can make Soul Knight versi terbaru 2022 more fun and exciting, as you can cooperate and coordinate with each other, share items and resources, revive each other, and compete for the best score. You can also chat with your friends using the in-game voice or text chat feature.

-

However, playing with your friends can also make Soul Knight versi terbaru 2022 more challenging and chaotic, as you have to deal with more enemies and bosses, friendly fire, limited screen space, and potential lag or connection issues. You also have to be careful not to steal or sabotage each other's items or buffs.

-

Whether you prefer to play solo or with your friends, Soul Knight versi terbaru 2022 has something for everyone. You can choose the mode that suits your mood and preference.

-

Why You Should Play Soul Knight Versi Terbaru 2022?

-

By now, you might be wondering why you should play Soul Knight versi terbaru 2022. Well, there are many reasons why this game is worth your time and attention. Here are some of them:

-

Fun and Engaging Gameplay

-

 Soul Knight versi terbaru 2022 offers a fun and engaging gameplay experience for pixel action shoot'em up fans. It is fast-paced, smooth, and responsive, with easy and intuitive controls; varied, unpredictable, and replayable thanks to its many heroes, weapons, dungeons, enemies, bosses, skills, and buffs; and challenging, rewarding, and addictive, with multiple difficulty levels, achievements, and leaderboards. 

-

If you are looking for a game that will keep you entertained and hooked for hours, Soul Knight versi terbaru 2022 is the game for you.

-

Beautiful Pixel Art and Music

-

Soul Knight versi terbaru 2022 showcases a beautiful pixel art style and music that enhance the game's atmosphere. The game has a retro and nostalgic feel, with colorful and detailed graphics that capture the essence of pixel art. The game also has a catchy and upbeat soundtrack that matches the mood and theme of the game. The game also has a variety of sound effects that add to the immersion and realism of the game.

-

If you are a fan of pixel art and music, Soul Knight versi terbaru 2022 is the game for you.

Challenging but Rewarding Difficulty

-

Soul Knight versi terbaru 2022 offers a challenging but rewarding difficulty level that keeps you hooked and motivated. The game is not easy, as you have to face countless enemies and bosses that will test your skills and reflexes. The game is also not forgiving, as you have to start over from the beginning if you die. The game is also not predictable, as you have to deal with random dungeons and events that will change your gameplay.

-

However, the game is also not impossible, as you have access to various resources and tools that will help you overcome the challenges. The game is also not boring, as you have different goals and rewards that will keep you interested and satisfied. The game is also not repetitive, as you have different strategies and styles that will keep you creative and curious.

-

If you are looking for a game that will challenge you but also reward you, Soul Knight versi terbaru 2022 is the game for you.

-

Diverse Game Modes and Features

-

 Soul Knight versi terbaru 2022 offers a diverse range of game modes and features that cater to different preferences and tastes: a main dungeon mode, where you explore dungeons and fight enemies and bosses; an adventure mode, where you embark on quests and stories; a multiplayer mode, where you play with friends online or offline; a workshop mode, where you create your own weapons and dungeons; and a garden mode, where you grow plants and harvest resources. 

-

If you are looking for a game that will offer you different options and possibilities, Soul Knight versi terbaru 2022 is the game for you.

-

Conclusion

-

 Soul Knight versi terbaru 2022 is a pixel action shoot'em up game that features extremely easy and intuitive control and super smooth, enjoyable gameplay mixed with rogue-like elements that will get you hooked from the very first run! 

-

In this article, we have guided you on how to download Soul Knight versi terbaru 2022, the latest version of the game that brings new features and improvements to enhance your gaming experience. We have also shown you what's new in this version, how to play it, and why you should play it.

-

Now that you know everything about Soul Knight versi terbaru 2022, what are you waiting for? Download it now and enjoy the pixel action shoot'em up adventure of your life!

-

FAQs

-

Here are some frequently asked questions and answers about Soul Knight versi terbaru 2022:

-
    -
  1. Is Soul Knight versi terbaru 2022 free to play?
  2. -

    Yes, Soul Knight versi terbaru 2022 is free to play, but it contains some in-app purchases that can enhance your gameplay or unlock some premium content.

    -
  3. Is Soul Knight versi terbaru 2022 compatible with my device?
  4. -

    Soul Knight versi terbaru 2022 is compatible with Android devices running Android 4.4 or higher, and iOS devices running iOS 9.0 or higher. It is also compatible with Nintendo Switch devices.

    -
  5. How can I save my progress in Soul Knight versi terbaru 2022?
  6. -

    Soul Knight versi terbaru 2022 supports cloud saving, which means you can save your progress online and access it from any device. You just need to log in with your Google Play Games or Game Center account in the game settings.

    -
  7. How can I contact the developers of Soul Knight versi terbaru 2022?
  8. -

    You can contact the developers of Soul Knight versi terbaru 2022 by sending an email to support@chillyroom.com, or by visiting their official website: https://www.chillyroom.com/.

    -
  9. Where can I find more information about Soul Knight versi terbaru 2022?
  10. -

    You can find more information about Soul Knight versi terbaru 2022 by visiting their official website: https://www.chillyroom.com/, or by following their social media accounts: https://www.facebook.com/chillyroomsoulknight/, https://twitter.com/ChillyRoom, https://www.instagram.com/chillyroominc/.

    -

 
-
-
\ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/gradio_demo.py b/spaces/fb700/chatglm-fitness-RLHF/src/gradio_demo.py deleted file mode 100644 index b1d2619fd9a67b37bea55bc91776afbcb3e50558..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/gradio_demo.py +++ /dev/null @@ -1,170 +0,0 @@ -import torch, uuid -import os, sys, shutil, platform -from src.facerender.pirender_animate import AnimateFromCoeff_PIRender -from src.utils.preprocess import CropAndExtract -from src.test_audio2coeff import Audio2Coeff -from src.facerender.animate import AnimateFromCoeff -from src.generate_batch import get_data -from src.generate_facerender_batch import get_facerender_data - -from src.utils.init_path import init_path - -from pydub import AudioSegment - - -def mp3_to_wav(mp3_filename,wav_filename,frame_rate): - mp3_file = AudioSegment.from_file(file=mp3_filename) - mp3_file.set_frame_rate(frame_rate).export(wav_filename,format="wav") - - -class SadTalker(): - - def __init__(self, checkpoint_path='checkpoints', config_path='src/config', lazy_load=False): - - if torch.cuda.is_available(): - device = "cuda" - elif platform.system() == 'Darwin': # macos - device = "mps" - else: - device = "cpu" - - self.device = device - - os.environ['TORCH_HOME']= checkpoint_path - - self.checkpoint_path = checkpoint_path - self.config_path = config_path - - - def test(self, source_image, driven_audio, preprocess='crop', - still_mode=False, use_enhancer=False, batch_size=1, size=256, - pose_style = 0, - facerender='facevid2vid', - exp_scale=1.0, - use_ref_video = False, - ref_video = None, - ref_info = None, - use_idle_mode = False, - length_of_audio = 0, use_blink=True, - result_dir='./results/'): - - self.sadtalker_paths = init_path(self.checkpoint_path, self.config_path, size, False, preprocess) - print(self.sadtalker_paths) - - self.audio_to_coeff = Audio2Coeff(self.sadtalker_paths, self.device) - self.preprocess_model = CropAndExtract(self.sadtalker_paths, self.device) - - if facerender == 'facevid2vid' and self.device != 'mps': - self.animate_from_coeff = AnimateFromCoeff(self.sadtalker_paths, self.device) - elif facerender == 'pirender' or self.device == 'mps': - self.animate_from_coeff = AnimateFromCoeff_PIRender(self.sadtalker_paths, self.device) - facerender = 'pirender' - else: - raise(RuntimeError('Unknown model: {}'.format(facerender))) - - - time_tag = str(uuid.uuid4()) - save_dir = os.path.join(result_dir, time_tag) - os.makedirs(save_dir, exist_ok=True) - - input_dir = os.path.join(save_dir, 'input') - os.makedirs(input_dir, exist_ok=True) - - print(source_image) - pic_path = os.path.join(input_dir, os.path.basename(source_image)) - shutil.move(source_image, input_dir) - - if driven_audio is not None and os.path.isfile(driven_audio): - audio_path = os.path.join(input_dir, os.path.basename(driven_audio)) - - #### mp3 to wav - if '.mp3' in audio_path: - mp3_to_wav(driven_audio, audio_path.replace('.mp3', '.wav'), 16000) - audio_path = audio_path.replace('.mp3', '.wav') - else: - shutil.move(driven_audio, input_dir) - - elif use_idle_mode: - audio_path = os.path.join(input_dir, 'idlemode_'+str(length_of_audio)+'.wav') ## generate audio from this new audio_path - from pydub import AudioSegment - one_sec_segment = AudioSegment.silent(duration=1000*length_of_audio) #duration in milliseconds - one_sec_segment.export(audio_path, format="wav") - else: - print(use_ref_video, ref_info) - assert use_ref_video == True and ref_info == 'all' - - if 
use_ref_video and ref_info == 'all': # full ref mode - ref_video_videoname = os.path.basename(ref_video) - audio_path = os.path.join(save_dir, ref_video_videoname+'.wav') - print('new audiopath:',audio_path) - # if ref_video contains audio, set the audio from ref_video. - cmd = r"ffmpeg -y -hide_banner -loglevel error -i %s %s"%(ref_video, audio_path) - os.system(cmd) - - os.makedirs(save_dir, exist_ok=True) - - #crop image and extract 3dmm from image - first_frame_dir = os.path.join(save_dir, 'first_frame_dir') - os.makedirs(first_frame_dir, exist_ok=True) - first_coeff_path, crop_pic_path, crop_info = self.preprocess_model.generate(pic_path, first_frame_dir, preprocess, True, size) - - if first_coeff_path is None: - raise AttributeError("No face is detected") - - if use_ref_video: - print('using ref video for genreation') - ref_video_videoname = os.path.splitext(os.path.split(ref_video)[-1])[0] - ref_video_frame_dir = os.path.join(save_dir, ref_video_videoname) - os.makedirs(ref_video_frame_dir, exist_ok=True) - print('3DMM Extraction for the reference video providing pose') - ref_video_coeff_path, _, _ = self.preprocess_model.generate(ref_video, ref_video_frame_dir, preprocess, source_image_flag=False) - else: - ref_video_coeff_path = None - - if use_ref_video: - if ref_info == 'pose': - ref_pose_coeff_path = ref_video_coeff_path - ref_eyeblink_coeff_path = None - elif ref_info == 'blink': - ref_pose_coeff_path = None - ref_eyeblink_coeff_path = ref_video_coeff_path - elif ref_info == 'pose+blink': - ref_pose_coeff_path = ref_video_coeff_path - ref_eyeblink_coeff_path = ref_video_coeff_path - elif ref_info == 'all': - ref_pose_coeff_path = None - ref_eyeblink_coeff_path = None - else: - raise('error in refinfo') - else: - ref_pose_coeff_path = None - ref_eyeblink_coeff_path = None - - #audio2ceoff - if use_ref_video and ref_info == 'all': - coeff_path = ref_video_coeff_path # self.audio_to_coeff.generate(batch, save_dir, pose_style, ref_pose_coeff_path) - else: - batch = get_data(first_coeff_path, audio_path, self.device, ref_eyeblink_coeff_path=ref_eyeblink_coeff_path, still=still_mode, \ - idlemode=use_idle_mode, length_of_audio=length_of_audio, use_blink=use_blink) # longer audio? - coeff_path = self.audio_to_coeff.generate(batch, save_dir, pose_style, ref_pose_coeff_path) - - #coeff2video - data = get_facerender_data(coeff_path, crop_pic_path, first_coeff_path, audio_path, batch_size, still_mode=still_mode, \ - preprocess=preprocess, size=size, expression_scale = exp_scale, facemodel=facerender) - return_path = self.animate_from_coeff.generate(data, save_dir, pic_path, crop_info, enhancer='gfpgan' if use_enhancer else None, preprocess=preprocess, img_size=size) - video_name = data['video_name'] - print(f'The generated video is named {video_name} in {save_dir}') - - del self.preprocess_model - del self.audio_to_coeff - del self.animate_from_coeff - - if torch.cuda.is_available(): - torch.cuda.empty_cache() - torch.cuda.synchronize() - - import gc; gc.collect() - - return return_path - - \ No newline at end of file diff --git a/spaces/feng2022/styleganhuman_copy/torch_utils/op_edit/upfirdn2d.py b/spaces/feng2022/styleganhuman_copy/torch_utils/op_edit/upfirdn2d.py deleted file mode 100644 index 874c09c5e98bee1ace64408aa31ec547dfe695a4..0000000000000000000000000000000000000000 --- a/spaces/feng2022/styleganhuman_copy/torch_utils/op_edit/upfirdn2d.py +++ /dev/null @@ -1,202 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. 
- -import os - -import torch -from torch.nn import functional as F -from torch.autograd import Function -from torch.utils.cpp_extension import load - - -module_path = os.path.dirname(__file__) -upfirdn2d_op = load( - "upfirdn2d", - sources=[ - os.path.join(module_path, "upfirdn2d.cpp"), - os.path.join(module_path, "upfirdn2d_kernel.cu"), - ], -) - - -class UpFirDn2dBackward(Function): - @staticmethod - def forward( - ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size - ): - - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_op.upfirdn2d( - grad_output, - grad_kernel, - down_x, - down_y, - up_x, - up_y, - g_pad_x0, - g_pad_x1, - g_pad_y0, - g_pad_y1, - ) - grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - (kernel,) = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_op.upfirdn2d( - gradgrad_input, - kernel, - ctx.up_x, - ctx.up_y, - ctx.down_x, - ctx.down_y, - ctx.pad_x0, - ctx.pad_x1, - ctx.pad_y0, - ctx.pad_y1, - ) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view( - ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1] - ) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class UpFirDn2d(Function): - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_op.upfirdn2d( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 - ) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - if input.device.type == "cpu": - out = upfirdn2d_native( - input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1] - ) - - else: - out = UpFirDn2d.apply( - input, kernel, (up, up), 
(down, down), (pad[0], pad[1], pad[0], pad[1]) - ) - - return out - - -def upfirdn2d_native( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, channel, in_h, in_w = input.shape - input = input.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - - return out.view(-1, channel, out_h, out_w) diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/content-disposition/HISTORY.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/content-disposition/HISTORY.md deleted file mode 100644 index 488effa0c9440f4e214102980665781a62ba7059..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/content-disposition/HISTORY.md +++ /dev/null @@ -1,60 +0,0 @@ -0.5.4 / 2021-12-10 -================== - - * deps: safe-buffer@5.2.1 - -0.5.3 / 2018-12-17 -================== - - * Use `safe-buffer` for improved Buffer API - -0.5.2 / 2016-12-08 -================== - - * Fix `parse` to accept any linear whitespace character - -0.5.1 / 2016-01-17 -================== - - * perf: enable strict mode - -0.5.0 / 2014-10-11 -================== - - * Add `parse` function - -0.4.0 / 2014-09-21 -================== - - * Expand non-Unicode `filename` to the full ISO-8859-1 charset - -0.3.0 / 2014-09-20 -================== - - * Add `fallback` option - * Add `type` option - -0.2.0 / 2014-09-19 -================== - - * Reduce ambiguity of file names with hex escape in buggy browsers - -0.1.2 / 2014-09-19 -================== - - * Fix periodic invalid Unicode filename header - -0.1.1 / 2014-09-19 -================== - - * Fix invalid characters appearing in `filename*` parameter - -0.1.0 / 2014-09-18 -================== - - * Make the `filename` argument optional - -0.0.0 / 2014-09-18 -================== - - * Initial release diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/has-symbols/test/shams/get-own-property-symbols.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/has-symbols/test/shams/get-own-property-symbols.js deleted file mode 100644 index 9191b248baa14b9866da65ccf638b96b71c046e7..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/has-symbols/test/shams/get-own-property-symbols.js +++ /dev/null @@ -1,28 +0,0 @@ -'use strict'; - -var test = require('tape'); - -if (typeof Symbol === 'function' && typeof Symbol() === 'symbol') { - test('has native Symbol support', function (t) { - t.equal(typeof Symbol, 'function'); - t.equal(typeof Symbol(), 
'symbol'); - t.end(); - }); - return; -} - -var hasSymbols = require('../../shams'); - -test('polyfilled Symbols', function (t) { - /* eslint-disable global-require */ - t.equal(hasSymbols(), false, 'hasSymbols is false before polyfilling'); - - require('get-own-property-symbols'); - - require('../tests')(t); - - var hasSymbolsAfter = hasSymbols(); - t.equal(hasSymbolsAfter, true, 'hasSymbols is true after polyfilling'); - /* eslint-enable global-require */ - t.end(); -}); diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/video/io.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/video/io.py deleted file mode 100644 index 9879154227f640c262853b92c219461c6f67ee8e..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/video/io.py +++ /dev/null @@ -1,318 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -from collections import OrderedDict - -import cv2 -from cv2 import (CAP_PROP_FOURCC, CAP_PROP_FPS, CAP_PROP_FRAME_COUNT, - CAP_PROP_FRAME_HEIGHT, CAP_PROP_FRAME_WIDTH, - CAP_PROP_POS_FRAMES, VideoWriter_fourcc) - -from annotator.uniformer.mmcv.utils import (check_file_exist, mkdir_or_exist, scandir, - track_progress) - - -class Cache: - - def __init__(self, capacity): - self._cache = OrderedDict() - self._capacity = int(capacity) - if capacity <= 0: - raise ValueError('capacity must be a positive integer') - - @property - def capacity(self): - return self._capacity - - @property - def size(self): - return len(self._cache) - - def put(self, key, val): - if key in self._cache: - return - if len(self._cache) >= self.capacity: - self._cache.popitem(last=False) - self._cache[key] = val - - def get(self, key, default=None): - val = self._cache[key] if key in self._cache else default - return val - - -class VideoReader: - """Video class with similar usage to a list object. - - This video warpper class provides convenient apis to access frames. - There exists an issue of OpenCV's VideoCapture class that jumping to a - certain frame may be inaccurate. It is fixed in this class by checking - the position after jumping each time. - Cache is used when decoding videos. So if the same frame is visited for - the second time, there is no need to decode again if it is stored in the - cache. 
- - :Example: - - >>> import annotator.uniformer.mmcv as mmcv - >>> v = mmcv.VideoReader('sample.mp4') - >>> len(v) # get the total frame number with `len()` - 120 - >>> for img in v: # v is iterable - >>> mmcv.imshow(img) - >>> v[5] # get the 6th frame - """ - - def __init__(self, filename, cache_capacity=10): - # Check whether the video path is a url - if not filename.startswith(('https://', 'http://')): - check_file_exist(filename, 'Video file not found: ' + filename) - self._vcap = cv2.VideoCapture(filename) - assert cache_capacity > 0 - self._cache = Cache(cache_capacity) - self._position = 0 - # get basic info - self._width = int(self._vcap.get(CAP_PROP_FRAME_WIDTH)) - self._height = int(self._vcap.get(CAP_PROP_FRAME_HEIGHT)) - self._fps = self._vcap.get(CAP_PROP_FPS) - self._frame_cnt = int(self._vcap.get(CAP_PROP_FRAME_COUNT)) - self._fourcc = self._vcap.get(CAP_PROP_FOURCC) - - @property - def vcap(self): - """:obj:`cv2.VideoCapture`: The raw VideoCapture object.""" - return self._vcap - - @property - def opened(self): - """bool: Indicate whether the video is opened.""" - return self._vcap.isOpened() - - @property - def width(self): - """int: Width of video frames.""" - return self._width - - @property - def height(self): - """int: Height of video frames.""" - return self._height - - @property - def resolution(self): - """tuple: Video resolution (width, height).""" - return (self._width, self._height) - - @property - def fps(self): - """float: FPS of the video.""" - return self._fps - - @property - def frame_cnt(self): - """int: Total frames of the video.""" - return self._frame_cnt - - @property - def fourcc(self): - """str: "Four character code" of the video.""" - return self._fourcc - - @property - def position(self): - """int: Current cursor position, indicating frame decoded.""" - return self._position - - def _get_real_position(self): - return int(round(self._vcap.get(CAP_PROP_POS_FRAMES))) - - def _set_real_position(self, frame_id): - self._vcap.set(CAP_PROP_POS_FRAMES, frame_id) - pos = self._get_real_position() - for _ in range(frame_id - pos): - self._vcap.read() - self._position = frame_id - - def read(self): - """Read the next frame. - - If the next frame have been decoded before and in the cache, then - return it directly, otherwise decode, cache and return it. - - Returns: - ndarray or None: Return the frame if successful, otherwise None. - """ - # pos = self._position - if self._cache: - img = self._cache.get(self._position) - if img is not None: - ret = True - else: - if self._position != self._get_real_position(): - self._set_real_position(self._position) - ret, img = self._vcap.read() - if ret: - self._cache.put(self._position, img) - else: - ret, img = self._vcap.read() - if ret: - self._position += 1 - return img - - def get_frame(self, frame_id): - """Get frame by index. - - Args: - frame_id (int): Index of the expected frame, 0-based. - - Returns: - ndarray or None: Return the frame if successful, otherwise None. 
- """ - if frame_id < 0 or frame_id >= self._frame_cnt: - raise IndexError( - f'"frame_id" must be between 0 and {self._frame_cnt - 1}') - if frame_id == self._position: - return self.read() - if self._cache: - img = self._cache.get(frame_id) - if img is not None: - self._position = frame_id + 1 - return img - self._set_real_position(frame_id) - ret, img = self._vcap.read() - if ret: - if self._cache: - self._cache.put(self._position, img) - self._position += 1 - return img - - def current_frame(self): - """Get the current frame (frame that is just visited). - - Returns: - ndarray or None: If the video is fresh, return None, otherwise - return the frame. - """ - if self._position == 0: - return None - return self._cache.get(self._position - 1) - - def cvt2frames(self, - frame_dir, - file_start=0, - filename_tmpl='{:06d}.jpg', - start=0, - max_num=0, - show_progress=True): - """Convert a video to frame images. - - Args: - frame_dir (str): Output directory to store all the frame images. - file_start (int): Filenames will start from the specified number. - filename_tmpl (str): Filename template with the index as the - placeholder. - start (int): The starting frame index. - max_num (int): Maximum number of frames to be written. - show_progress (bool): Whether to show a progress bar. - """ - mkdir_or_exist(frame_dir) - if max_num == 0: - task_num = self.frame_cnt - start - else: - task_num = min(self.frame_cnt - start, max_num) - if task_num <= 0: - raise ValueError('start must be less than total frame number') - if start > 0: - self._set_real_position(start) - - def write_frame(file_idx): - img = self.read() - if img is None: - return - filename = osp.join(frame_dir, filename_tmpl.format(file_idx)) - cv2.imwrite(filename, img) - - if show_progress: - track_progress(write_frame, range(file_start, - file_start + task_num)) - else: - for i in range(task_num): - write_frame(file_start + i) - - def __len__(self): - return self.frame_cnt - - def __getitem__(self, index): - if isinstance(index, slice): - return [ - self.get_frame(i) - for i in range(*index.indices(self.frame_cnt)) - ] - # support negative indexing - if index < 0: - index += self.frame_cnt - if index < 0: - raise IndexError('index out of range') - return self.get_frame(index) - - def __iter__(self): - self._set_real_position(0) - return self - - def __next__(self): - img = self.read() - if img is not None: - return img - else: - raise StopIteration - - next = __next__ - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, traceback): - self._vcap.release() - - -def frames2video(frame_dir, - video_file, - fps=30, - fourcc='XVID', - filename_tmpl='{:06d}.jpg', - start=0, - end=0, - show_progress=True): - """Read the frame images from a directory and join them as a video. - - Args: - frame_dir (str): The directory containing video frames. - video_file (str): Output filename. - fps (float): FPS of the output video. - fourcc (str): Fourcc of the output video, this should be compatible - with the output file type. - filename_tmpl (str): Filename template with the index as the variable. - start (int): Starting frame index. - end (int): Ending frame index. - show_progress (bool): Whether to show a progress bar. 
- """ - if end == 0: - ext = filename_tmpl.split('.')[-1] - end = len([name for name in scandir(frame_dir, ext)]) - first_file = osp.join(frame_dir, filename_tmpl.format(start)) - check_file_exist(first_file, 'The start frame not found: ' + first_file) - img = cv2.imread(first_file) - height, width = img.shape[:2] - resolution = (width, height) - vwriter = cv2.VideoWriter(video_file, VideoWriter_fourcc(*fourcc), fps, - resolution) - - def write_frame(file_idx): - filename = osp.join(frame_dir, filename_tmpl.format(file_idx)) - img = cv2.imread(filename) - vwriter.write(img) - - if show_progress: - track_progress(write_frame, range(start, end)) - else: - for i in range(start, end): - write_frame(i) - vwriter.release() diff --git a/spaces/giswqs/solara-demo/pages/00_home.py b/spaces/giswqs/solara-demo/pages/00_home.py deleted file mode 100644 index 7f177ca6f04afd20334d0efdee1d00b8539b0ef4..0000000000000000000000000000000000000000 --- a/spaces/giswqs/solara-demo/pages/00_home.py +++ /dev/null @@ -1,25 +0,0 @@ -import solara - -@solara.component -def Page(): - - markdown = """ - ## Solara for Geospatial Applications - - ### Introduction - - **A collection of [Solara](https://github.com/widgetti/solara) web apps for geospatial applications.** - - Just a proof-of-concept for now. Not all features are working yet. More features will be added in the future. Click on the menu above to see the other pages. - - - Web App: - - GitHub: - - Hugging Face: - - ### Demos - - ![](https://i.imgur.com/4uIEnAJ.gif) - - """ - - solara.Markdown(markdown) diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Izotope Ozone Crack Mac.md b/spaces/gotiQspiryo/whisper-ui/examples/Izotope Ozone Crack Mac.md deleted file mode 100644 index 6901f8e5c45453f2a9dc14e5df12df61b900a0f2..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Izotope Ozone Crack Mac.md +++ /dev/null @@ -1,10 +0,0 @@ -
-

for producers and music makers that have been using plugins to modify their track’s sound, ozone 9 promises a lot of improvements. ozone 9 gives you the ability to change the sound of your instrument by adjusting its balance, panning, resonances, and eq. adjust every instrument and every mic with an intuitive interface, whether you are recording, mixing, mastering, or simply sound designing.

-

Izotope Ozone Crack Mac


Download Filehttps://urlgoal.com/2uyMwM



-

ozone 9 also includes a new balance plugin, which lets you fine-tune the balance of your stereo mix or render a mono balance for your mix. set up your mix with a single click and balance to your heart’s content. a new spectrum analyzer and sender gui are also available.

-

ozone 9 also includes more than 12 ozone-exclusive mastering processors. use the master assistant to fine-tune your eq and gain in real time. add harmonics, saturate, and noise reduction to your sound with the maximizer and peaking plugins. the new tonal balance control is perfect for balancing out vocals, drums, and low-frequency effects. you can even use a variable target to automatically balance the sound to the sound of a reference track.

-

Ozone 9 also includes a new sender workflow: a new way to send your mixes to Ozone. Use the sender to send a stereo mix as mono, or send a mono mix as a stereo mix with new panning. You can also send a kick and bass track separately and mix them down to a mono file.

-

Ozone 9 also includes a new spectral analyzer plugin. It lets you fine-tune the sound of your stereo mix or render a mono balance for your mix. Finally, Ozone 9 offers a new integrated workflow for mixing and mastering. Mix any combination of Ozone and other plugins, or route individual plugins and buses via the DAW’s mixer channels. This workflow allows you to spend more time creating and less time trying to learn the ins and outs of every VST.

-

899543212b
-
-
\ No newline at end of file diff --git a/spaces/gradio/HuBERT/examples/speech_recognition/utils/wer_utils.py b/spaces/gradio/HuBERT/examples/speech_recognition/utils/wer_utils.py deleted file mode 100644 index cf6f3d09ba41a46ad4d7968fb3c286dd53d15c38..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/speech_recognition/utils/wer_utils.py +++ /dev/null @@ -1,381 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from __future__ import absolute_import, division, print_function, unicode_literals - -import re -from collections import deque -from enum import Enum - -import numpy as np - - -""" - Utility modules for computation of Word Error Rate, - Alignments, as well as more granular metrics like - deletion, insersion and substitutions. -""" - - -class Code(Enum): - match = 1 - substitution = 2 - insertion = 3 - deletion = 4 - - -class Token(object): - def __init__(self, lbl="", st=np.nan, en=np.nan): - if np.isnan(st): - self.label, self.start, self.end = "", 0.0, 0.0 - else: - self.label, self.start, self.end = lbl, st, en - - -class AlignmentResult(object): - def __init__(self, refs, hyps, codes, score): - self.refs = refs # std::deque - self.hyps = hyps # std::deque - self.codes = codes # std::deque - self.score = score # float - - -def coordinate_to_offset(row, col, ncols): - return int(row * ncols + col) - - -def offset_to_row(offset, ncols): - return int(offset / ncols) - - -def offset_to_col(offset, ncols): - return int(offset % ncols) - - -def trimWhitespace(str): - return re.sub(" +", " ", re.sub(" *$", "", re.sub("^ *", "", str))) - - -def str2toks(str): - pieces = trimWhitespace(str).split(" ") - toks = [] - for p in pieces: - toks.append(Token(p, 0.0, 0.0)) - return toks - - -class EditDistance(object): - def __init__(self, time_mediated): - self.time_mediated_ = time_mediated - self.scores_ = np.nan # Eigen::Matrix - self.backtraces_ = ( - np.nan - ) # Eigen::Matrix backtraces_; - self.confusion_pairs_ = {} - - def cost(self, ref, hyp, code): - if self.time_mediated_: - if code == Code.match: - return abs(ref.start - hyp.start) + abs(ref.end - hyp.end) - elif code == Code.insertion: - return hyp.end - hyp.start - elif code == Code.deletion: - return ref.end - ref.start - else: # substitution - return abs(ref.start - hyp.start) + abs(ref.end - hyp.end) + 0.1 - else: - if code == Code.match: - return 0 - elif code == Code.insertion or code == Code.deletion: - return 3 - else: # substitution - return 4 - - def get_result(self, refs, hyps): - res = AlignmentResult(refs=deque(), hyps=deque(), codes=deque(), score=np.nan) - - num_rows, num_cols = self.scores_.shape - res.score = self.scores_[num_rows - 1, num_cols - 1] - - curr_offset = coordinate_to_offset(num_rows - 1, num_cols - 1, num_cols) - - while curr_offset != 0: - curr_row = offset_to_row(curr_offset, num_cols) - curr_col = offset_to_col(curr_offset, num_cols) - - prev_offset = self.backtraces_[curr_row, curr_col] - - prev_row = offset_to_row(prev_offset, num_cols) - prev_col = offset_to_col(prev_offset, num_cols) - - res.refs.appendleft(curr_row - 1) # Note: this was .push_front() in C++ - res.hyps.appendleft(curr_col - 1) - if curr_row - 1 == prev_row and curr_col == prev_col: - res.codes.appendleft(Code.deletion) - elif curr_row == prev_row and curr_col - 1 == prev_col: - res.codes.appendleft(Code.insertion) - else: - # 
assert(curr_row - 1 == prev_row and curr_col - 1 == prev_col) - ref_str = refs[res.refs[0]].label - hyp_str = hyps[res.hyps[0]].label - - if ref_str == hyp_str: - res.codes.appendleft(Code.match) - else: - res.codes.appendleft(Code.substitution) - - confusion_pair = "%s -> %s" % (ref_str, hyp_str) - if confusion_pair not in self.confusion_pairs_: - self.confusion_pairs_[confusion_pair] = 1 - else: - self.confusion_pairs_[confusion_pair] += 1 - - curr_offset = prev_offset - - return res - - def align(self, refs, hyps): - if len(refs) == 0 and len(hyps) == 0: - return np.nan - - # NOTE: we're not resetting the values in these matrices because every value - # will be overridden in the loop below. If this assumption doesn't hold, - # be sure to set all entries in self.scores_ and self.backtraces_ to 0. - self.scores_ = np.zeros((len(refs) + 1, len(hyps) + 1)) - self.backtraces_ = np.zeros((len(refs) + 1, len(hyps) + 1)) - - num_rows, num_cols = self.scores_.shape - - for i in range(num_rows): - for j in range(num_cols): - if i == 0 and j == 0: - self.scores_[i, j] = 0.0 - self.backtraces_[i, j] = 0 - continue - - if i == 0: - self.scores_[i, j] = self.scores_[i, j - 1] + self.cost( - None, hyps[j - 1], Code.insertion - ) - self.backtraces_[i, j] = coordinate_to_offset(i, j - 1, num_cols) - continue - - if j == 0: - self.scores_[i, j] = self.scores_[i - 1, j] + self.cost( - refs[i - 1], None, Code.deletion - ) - self.backtraces_[i, j] = coordinate_to_offset(i - 1, j, num_cols) - continue - - # Below here both i and j are greater than 0 - ref = refs[i - 1] - hyp = hyps[j - 1] - best_score = self.scores_[i - 1, j - 1] + ( - self.cost(ref, hyp, Code.match) - if (ref.label == hyp.label) - else self.cost(ref, hyp, Code.substitution) - ) - - prev_row = i - 1 - prev_col = j - 1 - ins = self.scores_[i, j - 1] + self.cost(None, hyp, Code.insertion) - if ins < best_score: - best_score = ins - prev_row = i - prev_col = j - 1 - - delt = self.scores_[i - 1, j] + self.cost(ref, None, Code.deletion) - if delt < best_score: - best_score = delt - prev_row = i - 1 - prev_col = j - - self.scores_[i, j] = best_score - self.backtraces_[i, j] = coordinate_to_offset( - prev_row, prev_col, num_cols - ) - - return self.get_result(refs, hyps) - - -class WERTransformer(object): - def __init__(self, hyp_str, ref_str, verbose=True): - self.ed_ = EditDistance(False) - self.id2oracle_errs_ = {} - self.utts_ = 0 - self.words_ = 0 - self.insertions_ = 0 - self.deletions_ = 0 - self.substitutions_ = 0 - - self.process(["dummy_str", hyp_str, ref_str]) - - if verbose: - print("'%s' vs '%s'" % (hyp_str, ref_str)) - self.report_result() - - def process(self, input): # std::vector&& input - if len(input) < 3: - print( - "Input must be of the form ... 
, got ", - len(input), - " inputs:", - ) - return None - - # Align - # std::vector hyps; - # std::vector refs; - - hyps = str2toks(input[-2]) - refs = str2toks(input[-1]) - - alignment = self.ed_.align(refs, hyps) - if alignment is None: - print("Alignment is null") - return np.nan - - # Tally errors - ins = 0 - dels = 0 - subs = 0 - for code in alignment.codes: - if code == Code.substitution: - subs += 1 - elif code == Code.insertion: - ins += 1 - elif code == Code.deletion: - dels += 1 - - # Output - row = input - row.append(str(len(refs))) - row.append(str(ins)) - row.append(str(dels)) - row.append(str(subs)) - # print(row) - - # Accumulate - kIdIndex = 0 - kNBestSep = "/" - - pieces = input[kIdIndex].split(kNBestSep) - - if len(pieces) == 0: - print( - "Error splitting ", - input[kIdIndex], - " on '", - kNBestSep, - "', got empty list", - ) - return np.nan - - id = pieces[0] - if id not in self.id2oracle_errs_: - self.utts_ += 1 - self.words_ += len(refs) - self.insertions_ += ins - self.deletions_ += dels - self.substitutions_ += subs - self.id2oracle_errs_[id] = [ins, dels, subs] - else: - curr_err = ins + dels + subs - prev_err = np.sum(self.id2oracle_errs_[id]) - if curr_err < prev_err: - self.id2oracle_errs_[id] = [ins, dels, subs] - - return 0 - - def report_result(self): - # print("---------- Summary ---------------") - if self.words_ == 0: - print("No words counted") - return - - # 1-best - best_wer = ( - 100.0 - * (self.insertions_ + self.deletions_ + self.substitutions_) - / self.words_ - ) - - print( - "\tWER = %0.2f%% (%i utts, %i words, %0.2f%% ins, " - "%0.2f%% dels, %0.2f%% subs)" - % ( - best_wer, - self.utts_, - self.words_, - 100.0 * self.insertions_ / self.words_, - 100.0 * self.deletions_ / self.words_, - 100.0 * self.substitutions_ / self.words_, - ) - ) - - def wer(self): - if self.words_ == 0: - wer = np.nan - else: - wer = ( - 100.0 - * (self.insertions_ + self.deletions_ + self.substitutions_) - / self.words_ - ) - return wer - - def stats(self): - if self.words_ == 0: - stats = {} - else: - wer = ( - 100.0 - * (self.insertions_ + self.deletions_ + self.substitutions_) - / self.words_ - ) - stats = dict( - { - "wer": wer, - "utts": self.utts_, - "numwords": self.words_, - "ins": self.insertions_, - "dels": self.deletions_, - "subs": self.substitutions_, - "confusion_pairs": self.ed_.confusion_pairs_, - } - ) - return stats - - -def calc_wer(hyp_str, ref_str): - t = WERTransformer(hyp_str, ref_str, verbose=0) - return t.wer() - - -def calc_wer_stats(hyp_str, ref_str): - t = WERTransformer(hyp_str, ref_str, verbose=0) - return t.stats() - - -def get_wer_alignment_codes(hyp_str, ref_str): - """ - INPUT: hypothesis string, reference string - OUTPUT: List of alignment codes (intermediate results from WER computation) - """ - t = WERTransformer(hyp_str, ref_str, verbose=0) - return t.ed_.align(str2toks(ref_str), str2toks(hyp_str)).codes - - -def merge_counts(x, y): - # Merge two hashes which have 'counts' as their values - # This can be used for example to merge confusion pair counts - # conf_pairs = merge_counts(conf_pairs, stats['confusion_pairs']) - for k, v in y.items(): - if k not in x: - x[k] = 0 - x[k] += v - return x diff --git a/spaces/gradio/HuBERT/tests/test_dictionary.py b/spaces/gradio/HuBERT/tests/test_dictionary.py deleted file mode 100644 index 81ce102f4f555822e36298034cdeb3d1c0650255..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/tests/test_dictionary.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import io
-import tempfile
-import unittest
-
-import torch
-from fairseq.data import Dictionary
-
-
-class TestDictionary(unittest.TestCase):
-    def test_finalize(self):
-        txt = [
-            "A B C D",
-            "B C D",
-            "C D",
-            "D",
-        ]
-        ref_ids1 = list(
-            map(
-                torch.IntTensor,
-                [
-                    [4, 5, 6, 7, 2],
-                    [5, 6, 7, 2],
-                    [6, 7, 2],
-                    [7, 2],
-                ],
-            )
-        )
-        ref_ids2 = list(
-            map(
-                torch.IntTensor,
-                [
-                    [7, 6, 5, 4, 2],
-                    [6, 5, 4, 2],
-                    [5, 4, 2],
-                    [4, 2],
-                ],
-            )
-        )
-
-        # build dictionary
-        d = Dictionary()
-        for line in txt:
-            d.encode_line(line, add_if_not_exist=True)
-
-        def get_ids(dictionary):
-            ids = []
-            for line in txt:
-                ids.append(dictionary.encode_line(line, add_if_not_exist=False))
-            return ids
-
-        def assertMatch(ids, ref_ids):
-            for toks, ref_toks in zip(ids, ref_ids):
-                self.assertEqual(toks.size(), ref_toks.size())
-                self.assertEqual(0, (toks != ref_toks).sum().item())
-
-        ids = get_ids(d)
-        assertMatch(ids, ref_ids1)
-
-        # check finalized dictionary
-        d.finalize()
-        finalized_ids = get_ids(d)
-        assertMatch(finalized_ids, ref_ids2)
-
-        # write to disk and reload
-        with tempfile.NamedTemporaryFile(mode="w") as tmp_dict:
-            d.save(tmp_dict.name)
-            d = Dictionary.load(tmp_dict.name)
-            reload_ids = get_ids(d)
-            assertMatch(reload_ids, ref_ids2)
-            assertMatch(finalized_ids, reload_ids)
-
-    def test_overwrite(self):
-        # for example, Camembert overwrites <unk>, <s> and </s>
-        dict_file = io.StringIO(
-            "<unk> 999 #fairseq:overwrite\n"
-            "<s> 999 #fairseq:overwrite\n"
-            "</s> 999 #fairseq:overwrite\n"
-            ", 999\n"
-            "▁de 999\n"
-        )
-        d = Dictionary()
-        d.add_from_file(dict_file)
-        self.assertEqual(d.index("<pad>"), 1)
-        self.assertEqual(d.index("foo"), 3)
-        self.assertEqual(d.index("<unk>"), 4)
-        self.assertEqual(d.index("<s>"), 5)
-        self.assertEqual(d.index("</s>"), 6)
-        self.assertEqual(d.index(","), 7)
-        self.assertEqual(d.index("▁de"), 8)
-
-    def test_no_overwrite(self):
-        # for example, Camembert overwrites <unk>, <s> and </s>
-        dict_file = io.StringIO(
-            "<unk> 999\n" "<s> 999\n" "</s> 999\n" ", 999\n" "▁de 999\n"
-        )
-        d = Dictionary()
-        with self.assertRaisesRegex(RuntimeError, "Duplicate"):
-            d.add_from_file(dict_file)
-
-    def test_space(self):
-        # for example, character models treat space as a symbol
-        dict_file = io.StringIO("  999\n" "a 999\n" "b 999\n")
-        d = Dictionary()
-        d.add_from_file(dict_file)
-        self.assertEqual(d.index(" "), 4)
-        self.assertEqual(d.index("a"), 5)
-        self.assertEqual(d.index("b"), 6)
-
-
-if __name__ == "__main__":
-    unittest.main()
diff --git a/spaces/gstaff/MagicGen/colab-data-test/css/mtg.css b/spaces/gstaff/MagicGen/colab-data-test/css/mtg.css
deleted file mode 100644
index 535ffd300815eacedfca2c6bad4e17aa0f55f2ec..0000000000000000000000000000000000000000
--- a/spaces/gstaff/MagicGen/colab-data-test/css/mtg.css
+++ /dev/null
@@ -1,130 +0,0 @@
-* {margin: 0; padding: 0; box-sizing: border-box; }
-
-.card {background: #000; padding: 17px; height: 600px; width: 400px;
-    margin: 100px auto;
-}
-.card-background {
-    padding: 7px 8px 30px 7px;
-    background-color: #69a;
-    background-image:
-        repeating-linear-gradient(140deg, transparent, rgba(255, 255, 255, 0.25) 1%, transparent 20%),
-        repeating-linear-gradient(-30deg, transparent, transparent 8%, rgba(255, 255, 255, 0.4), transparent 9%),
-        repeating-linear-gradient(-10deg, transparent, transparent 13%, rgba(0, 0, 0, 0.4), transparent 15%),
-        repeating-linear-gradient(80deg, transparent, transparent 7.5%, rgba(0, 0, 0, 0.25),
transparent 8%), - repeating-linear-gradient(5deg, transparent, transparent 10.5%, rgba(255, 255, 255, 0.5), transparent 11%), - repeating-linear-gradient(75deg, transparent, transparent 11.5%, rgba(255, 255, 255, 0.5), transparent 12%), - repeating-radial-gradient(rgba(0, 0, 0, 0.2), rgba(0, 0, 0, 0.2) 1%, transparent 1%, transparent 5%); - border-radius: 10px 10px 40px 40px; - height: 500px; -} - -.card-body { - position: absolute; - height: 109.4%; - width: 350px; - border: 2px solid rgba(0, 0, 0, 0.8); - border-right: 2px solid #ddd; - border-bottom: 2px solid #555; - border-radius: 5px 5px 0 0; - background: #ddd; - -} - -article { - padding: 3px; - width: 350px; -} - -article > div { - background: #ddd; - position: relative; - height: 200px; - border: 2px solid #333; - z-index: -1; -} - -header { - padding: 3px; - background: #ddd; - border-radius: 8px/20px; - box-shadow: -2px 0 0 0 rgba(0, 0, 0, 0.8); - position: relative; - top: 200px; left: 0; right: 0; -} -header div { - padding: 5px 8px 3px; - background: radial-gradient(ellipse farthest-corner, #E0E7ED 50%, #BDC6CD); - position: relative; - border: 2px solid #000; - border-radius: 10px/20px; - box-shadow: inset 2px -3px 0 #aaa, inset -1px 1px 0 #fff; - height: 33px; -} -header:first-child {top: 0; } -header:first-child div {height: 34px; } - -#textBox { - margin-top: 38px; - padding: 10px 7px; - top: 260px; bottom: 44px; - border: 2px solid #999; - border-bottom: 0 none; - border-left: 0 none; - background: #d3dddd; - -} - -#powerToughness { - width: 4em; - top: ; right: 21px; bottom: 28px; left: auto; - text-align: center; - box-shadow: -2px 1px 2px 0 rgba(0, 0, 0, 0.8); -} -#powerToughness div { - padding: 4px 0 0; - height: 23px; - box-shadow: inset -2px 2px 1px #333, inset 1px -1px 0 #fff; - border: 0 none; - font-size: 21px; -} - -footer { - color: #ccc; - font-family: sans-serif; font-size: 9px; - position: relative; - left: 25px; bottom: 10px; right: 25px; - overflow: auto; -} -footer p {margin-bottom: 0.2em; letter-spacing: 0.18em; } - -.ms { - position: relative; - top: -22px; - float: right; -} - -h1 {font-size: 21px; line-height: 1em; } -h2 {font-size: 18px; line-height: 1em; } -h3 { - padding-top: 2px; - position: relative; - right: 5px; top: 2px; - width: 1.05em; height: 1.05em; - background: #ddd; - text-align: center; - border-radius: 1em; - line-height: 1em; -} -h4 { - border-bottom: 14px solid #000; - border-right: 7px solid transparent; - border-left: 7px solid transparent; - height: 0; width: 0; - overflow: hidden; - position: relative; - right: 10px; top: 7px; -} -h6 {float: right; width: 60%; text-align: right; font-size: 8px; } -p {margin-bottom: 0.6em; line-height: 1.1em; } -blockquote {font-style: italic; } -blockquote p {margin-bottom: 0; } diff --git a/spaces/gulabpatel/Real-ESRGAN/realesrgan/train.py b/spaces/gulabpatel/Real-ESRGAN/realesrgan/train.py deleted file mode 100644 index 8a9cec9ed80d9f362984779548dcec921a636a04..0000000000000000000000000000000000000000 --- a/spaces/gulabpatel/Real-ESRGAN/realesrgan/train.py +++ /dev/null @@ -1,11 +0,0 @@ -# flake8: noqa -import os.path as osp -from basicsr.train import train_pipeline - -import realesrgan.archs -import realesrgan.data -import realesrgan.models - -if __name__ == '__main__': - root_path = osp.abspath(osp.join(__file__, osp.pardir, osp.pardir)) - train_pipeline(root_path) diff --git a/spaces/gwang-kim/DATID-3D/eg3d/dataset_tool.py b/spaces/gwang-kim/DATID-3D/eg3d/dataset_tool.py deleted file mode 100644 index 
a400f770fa477ef09adf4804235be4d67898765a..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/eg3d/dataset_tool.py +++ /dev/null @@ -1,458 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: LicenseRef-NvidiaProprietary -# -# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual -# property and proprietary rights in and to this material, related -# documentation and any modifications thereto. Any use, reproduction, -# disclosure or distribution of this material and related documentation -# without an express license agreement from NVIDIA CORPORATION or -# its affiliates is strictly prohibited. - -"""Tool for creating ZIP/PNG based datasets.""" - -import functools -import gzip -import io -import json -import os -import pickle -import re -import sys -import tarfile -import zipfile -from pathlib import Path -from typing import Callable, Optional, Tuple, Union - -import click -import numpy as np -import PIL.Image -from tqdm import tqdm - -#---------------------------------------------------------------------------- - -def error(msg): - print('Error: ' + msg) - sys.exit(1) - -#---------------------------------------------------------------------------- - -def parse_tuple(s: str) -> Tuple[int, int]: - '''Parse a 'M,N' or 'MxN' integer tuple. - - Example: - '4x2' returns (4,2) - '0,1' returns (0,1) - ''' - if m := re.match(r'^(\d+)[x,](\d+)$', s): - return (int(m.group(1)), int(m.group(2))) - raise ValueError(f'cannot parse tuple {s}') - -#---------------------------------------------------------------------------- - -def maybe_min(a: int, b: Optional[int]) -> int: - if b is not None: - return min(a, b) - return a - -#---------------------------------------------------------------------------- - -def file_ext(name: Union[str, Path]) -> str: - return str(name).split('.')[-1] - -#---------------------------------------------------------------------------- - -def is_image_ext(fname: Union[str, Path]) -> bool: - ext = file_ext(fname).lower() - return f'.{ext}' in PIL.Image.EXTENSION # type: ignore - -#---------------------------------------------------------------------------- - -def open_image_folder(source_dir, *, max_images: Optional[int]): - input_images = [str(f) for f in sorted(Path(source_dir).rglob('*')) if is_image_ext(f) and os.path.isfile(f)] - - # Load labels. - labels = {} - meta_fname = os.path.join(source_dir, 'dataset.json') - if os.path.isfile(meta_fname): - with open(meta_fname, 'r') as file: - labels = json.load(file)['labels'] - if labels is not None: - labels = { x[0]: x[1] for x in labels } - else: - labels = {} - - max_idx = maybe_min(len(input_images), max_images) - - def iterate_images(): - for idx, fname in enumerate(input_images): - arch_fname = os.path.relpath(fname, source_dir) - arch_fname = arch_fname.replace('\\', '/') - img = np.array(PIL.Image.open(fname)) - yield dict(img=img, label=labels.get(arch_fname)) - if idx >= max_idx-1: - break - return max_idx, iterate_images() - -#---------------------------------------------------------------------------- - -def open_image_zip(source, *, max_images: Optional[int]): - with zipfile.ZipFile(source, mode='r') as z: - input_images = [str(f) for f in sorted(z.namelist()) if is_image_ext(f)] - - # Load labels. 
- labels = {} - if 'dataset.json' in z.namelist(): - with z.open('dataset.json', 'r') as file: - labels = json.load(file)['labels'] - if labels is not None: - labels = { x[0]: x[1] for x in labels } - else: - labels = {} - - max_idx = maybe_min(len(input_images), max_images) - - def iterate_images(): - with zipfile.ZipFile(source, mode='r') as z: - for idx, fname in enumerate(input_images): - with z.open(fname, 'r') as file: - img = PIL.Image.open(file) # type: ignore - img = np.array(img) - yield dict(img=img, label=labels.get(fname)) - if idx >= max_idx-1: - break - return max_idx, iterate_images() - -#---------------------------------------------------------------------------- - -def open_lmdb(lmdb_dir: str, *, max_images: Optional[int]): - import cv2 # pip install opencv-python # pylint: disable=import-error - import lmdb # pip install lmdb # pylint: disable=import-error - - with lmdb.open(lmdb_dir, readonly=True, lock=False).begin(write=False) as txn: - max_idx = maybe_min(txn.stat()['entries'], max_images) - - def iterate_images(): - with lmdb.open(lmdb_dir, readonly=True, lock=False).begin(write=False) as txn: - for idx, (_key, value) in enumerate(txn.cursor()): - try: - try: - img = cv2.imdecode(np.frombuffer(value, dtype=np.uint8), 1) - if img is None: - raise IOError('cv2.imdecode failed') - img = img[:, :, ::-1] # BGR => RGB - except IOError: - img = np.array(PIL.Image.open(io.BytesIO(value))) - yield dict(img=img, label=None) - if idx >= max_idx-1: - break - except: - print(sys.exc_info()[1]) - - return max_idx, iterate_images() - -#---------------------------------------------------------------------------- - -def open_cifar10(tarball: str, *, max_images: Optional[int]): - images = [] - labels = [] - - with tarfile.open(tarball, 'r:gz') as tar: - for batch in range(1, 6): - member = tar.getmember(f'cifar-10-batches-py/data_batch_{batch}') - with tar.extractfile(member) as file: - data = pickle.load(file, encoding='latin1') - images.append(data['data'].reshape(-1, 3, 32, 32)) - labels.append(data['labels']) - - images = np.concatenate(images) - labels = np.concatenate(labels) - images = images.transpose([0, 2, 3, 1]) # NCHW -> NHWC - assert images.shape == (50000, 32, 32, 3) and images.dtype == np.uint8 - assert labels.shape == (50000,) and labels.dtype in [np.int32, np.int64] - assert np.min(images) == 0 and np.max(images) == 255 - assert np.min(labels) == 0 and np.max(labels) == 9 - - max_idx = maybe_min(len(images), max_images) - - def iterate_images(): - for idx, img in enumerate(images): - yield dict(img=img, label=int(labels[idx])) - if idx >= max_idx-1: - break - - return max_idx, iterate_images() - -#---------------------------------------------------------------------------- - -def open_mnist(images_gz: str, *, max_images: Optional[int]): - labels_gz = images_gz.replace('-images-idx3-ubyte.gz', '-labels-idx1-ubyte.gz') - assert labels_gz != images_gz - images = [] - labels = [] - - with gzip.open(images_gz, 'rb') as f: - images = np.frombuffer(f.read(), np.uint8, offset=16) - with gzip.open(labels_gz, 'rb') as f: - labels = np.frombuffer(f.read(), np.uint8, offset=8) - - images = images.reshape(-1, 28, 28) - images = np.pad(images, [(0,0), (2,2), (2,2)], 'constant', constant_values=0) - assert images.shape == (60000, 32, 32) and images.dtype == np.uint8 - assert labels.shape == (60000,) and labels.dtype == np.uint8 - assert np.min(images) == 0 and np.max(images) == 255 - assert np.min(labels) == 0 and np.max(labels) == 9 - - max_idx = maybe_min(len(images), max_images) 
- - def iterate_images(): - for idx, img in enumerate(images): - yield dict(img=img, label=int(labels[idx])) - if idx >= max_idx-1: - break - - return max_idx, iterate_images() - -#---------------------------------------------------------------------------- - -def make_transform( - transform: Optional[str], - output_width: Optional[int], - output_height: Optional[int] -) -> Callable[[np.ndarray], Optional[np.ndarray]]: - def scale(width, height, img): - w = img.shape[1] - h = img.shape[0] - if width == w and height == h: - return img - img = PIL.Image.fromarray(img) - ww = width if width is not None else w - hh = height if height is not None else h - img = img.resize((ww, hh), PIL.Image.LANCZOS) - return np.array(img) - - def center_crop(width, height, img): - crop = np.min(img.shape[:2]) - img = img[(img.shape[0] - crop) // 2 : (img.shape[0] + crop) // 2, (img.shape[1] - crop) // 2 : (img.shape[1] + crop) // 2] - img = PIL.Image.fromarray(img, 'RGB') - img = img.resize((width, height), PIL.Image.LANCZOS) - return np.array(img) - - def center_crop_wide(width, height, img): - ch = int(np.round(width * img.shape[0] / img.shape[1])) - if img.shape[1] < width or ch < height: - return None - - img = img[(img.shape[0] - ch) // 2 : (img.shape[0] + ch) // 2] - img = PIL.Image.fromarray(img, 'RGB') - img = img.resize((width, height), PIL.Image.LANCZOS) - img = np.array(img) - - canvas = np.zeros([width, width, 3], dtype=np.uint8) - canvas[(width - height) // 2 : (width + height) // 2, :] = img - return canvas - - if transform is None: - return functools.partial(scale, output_width, output_height) - if transform == 'center-crop': - if (output_width is None) or (output_height is None): - error ('must specify --resolution=WxH when using ' + transform + 'transform') - return functools.partial(center_crop, output_width, output_height) - if transform == 'center-crop-wide': - if (output_width is None) or (output_height is None): - error ('must specify --resolution=WxH when using ' + transform + ' transform') - return functools.partial(center_crop_wide, output_width, output_height) - assert False, 'unknown transform' - -#---------------------------------------------------------------------------- - -def open_dataset(source, *, max_images: Optional[int]): - if os.path.isdir(source): - if source.rstrip('/').endswith('_lmdb'): - return open_lmdb(source, max_images=max_images) - else: - return open_image_folder(source, max_images=max_images) - elif os.path.isfile(source): - if os.path.basename(source) == 'cifar-10-python.tar.gz': - return open_cifar10(source, max_images=max_images) - elif os.path.basename(source) == 'train-images-idx3-ubyte.gz': - return open_mnist(source, max_images=max_images) - elif file_ext(source) == 'zip': - return open_image_zip(source, max_images=max_images) - else: - assert False, 'unknown archive type' - else: - error(f'Missing input file or directory: {source}') - -#---------------------------------------------------------------------------- - -def open_dest(dest: str) -> Tuple[str, Callable[[str, Union[bytes, str]], None], Callable[[], None]]: - dest_ext = file_ext(dest) - - if dest_ext == 'zip': - if os.path.dirname(dest) != '': - os.makedirs(os.path.dirname(dest), exist_ok=True) - zf = zipfile.ZipFile(file=dest, mode='w', compression=zipfile.ZIP_STORED) - def zip_write_bytes(fname: str, data: Union[bytes, str]): - zf.writestr(fname, data) - return '', zip_write_bytes, zf.close - else: - # If the output folder already exists, check that is is - # empty. 
- # - # Note: creating the output directory is not strictly - # necessary as folder_write_bytes() also mkdirs, but it's better - # to give an error message earlier in case the dest folder - # somehow cannot be created. - if os.path.isdir(dest) and len(os.listdir(dest)) != 0: - error('--dest folder must be empty') - os.makedirs(dest, exist_ok=True) - - def folder_write_bytes(fname: str, data: Union[bytes, str]): - os.makedirs(os.path.dirname(fname), exist_ok=True) - with open(fname, 'wb') as fout: - if isinstance(data, str): - data = data.encode('utf8') - fout.write(data) - return dest, folder_write_bytes, lambda: None - -#---------------------------------------------------------------------------- - -@click.command() -@click.pass_context -@click.option('--source', help='Directory or archive name for input dataset', required=True, metavar='PATH') -@click.option('--dest', help='Output directory or archive name for output dataset', required=True, metavar='PATH') -@click.option('--max-images', help='Output only up to `max-images` images', type=int, default=None) -@click.option('--transform', help='Input crop/resize mode', type=click.Choice(['center-crop', 'center-crop-wide'])) -@click.option('--resolution', help='Output resolution (e.g., \'512x512\')', metavar='WxH', type=parse_tuple) -def convert_dataset( - ctx: click.Context, - source: str, - dest: str, - max_images: Optional[int], - transform: Optional[str], - resolution: Optional[Tuple[int, int]] -): - """Convert an image dataset into a dataset archive usable with StyleGAN2 ADA PyTorch. - - The input dataset format is guessed from the --source argument: - - \b - --source *_lmdb/ Load LSUN dataset - --source cifar-10-python.tar.gz Load CIFAR-10 dataset - --source train-images-idx3-ubyte.gz Load MNIST dataset - --source path/ Recursively load all images from path/ - --source dataset.zip Recursively load all images from dataset.zip - - Specifying the output format and path: - - \b - --dest /path/to/dir Save output files under /path/to/dir - --dest /path/to/dataset.zip Save output files into /path/to/dataset.zip - - The output dataset format can be either an image folder or an uncompressed zip archive. - Zip archives makes it easier to move datasets around file servers and clusters, and may - offer better training performance on network file systems. - - Images within the dataset archive will be stored as uncompressed PNG. - Uncompressed PNGs can be efficiently decoded in the training loop. - - Class labels are stored in a file called 'dataset.json' that is stored at the - dataset root folder. This file has the following structure: - - \b - { - "labels": [ - ["00000/img00000000.png",6], - ["00000/img00000001.png",9], - ... repeated for every image in the dataset - ["00049/img00049999.png",1] - ] - } - - If the 'dataset.json' file cannot be found, the dataset is interpreted as - not containing class labels. - - Image scale/crop and resolution requirements: - - Output images must be square-shaped and they must all have the same power-of-two - dimensions. - - To scale arbitrary input image size to a specific width and height, use the - --resolution option. Output resolution will be either the original - input resolution (if resolution was not specified) or the one specified with - --resolution option. - - Use the --transform=center-crop or --transform=center-crop-wide options to apply a - center crop transform on the input image. These options should be used with the - --resolution option. 
For example: - - \b - python dataset_tool.py --source LSUN/raw/cat_lmdb --dest /tmp/lsun_cat \\ - --transform=center-crop-wide --resolution=512x384 - """ - - PIL.Image.init() # type: ignore - - if dest == '': - ctx.fail('--dest output filename or directory must not be an empty string') - - num_files, input_iter = open_dataset(source, max_images=max_images) - archive_root_dir, save_bytes, close_dest = open_dest(dest) - - if resolution is None: resolution = (None, None) - transform_image = make_transform(transform, *resolution) - - dataset_attrs = None - - labels = [] - for idx, image in tqdm(enumerate(input_iter), total=num_files): - idx_str = f'{idx:08d}' - archive_fname = f'{idx_str[:5]}/img{idx_str}.png' - - # Apply crop and resize. - img = transform_image(image['img']) - - # Transform may drop images. - if img is None: - continue - - # Error check to require uniform image attributes across - # the whole dataset. - channels = img.shape[2] if img.ndim == 3 else 1 - cur_image_attrs = { - 'width': img.shape[1], - 'height': img.shape[0], - 'channels': channels - } - if dataset_attrs is None: - dataset_attrs = cur_image_attrs - width = dataset_attrs['width'] - height = dataset_attrs['height'] - if width != height: - error(f'Image dimensions after scale and crop are required to be square. Got {width}x{height}') - if dataset_attrs['channels'] not in [1, 3, 4]: - error('Input images must be stored as RGB or grayscale') - if width != 2 ** int(np.floor(np.log2(width))): - error('Image width/height after scale and crop are required to be power-of-two') - elif dataset_attrs != cur_image_attrs: - err = [f' dataset {k}/cur image {k}: {dataset_attrs[k]}/{cur_image_attrs[k]}' for k in dataset_attrs.keys()] # pylint: disable=unsubscriptable-object - error(f'Image {archive_fname} attributes must be equal across all images of the dataset. Got:\n' + '\n'.join(err)) - - # Save the image as an uncompressed PNG. - img = PIL.Image.fromarray(img, { 1: 'L', 3: 'RGB', 4: 'RGBA'}[channels]) - if channels == 4: img = img.convert('RGB') - image_bits = io.BytesIO() - img.save(image_bits, format='png', compress_level=0, optimize=False) - save_bytes(os.path.join(archive_root_dir, archive_fname), image_bits.getbuffer()) - labels.append([archive_fname, image['label']] if image['label'] is not None else None) - - metadata = { - 'labels': labels if all(x is not None for x in labels) else None - } - save_bytes(os.path.join(archive_root_dir, 'dataset.json'), json.dumps(metadata)) - close_dest() - -#---------------------------------------------------------------------------- - -if __name__ == "__main__": - convert_dataset() # pylint: disable=no-value-for-parameter diff --git a/spaces/gylleus/icongen/dnnlib/__init__.py b/spaces/gylleus/icongen/dnnlib/__init__.py deleted file mode 100644 index 2f08cf36f11f9b0fd94c1b7caeadf69b98375b04..0000000000000000000000000000000000000000 --- a/spaces/gylleus/icongen/dnnlib/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
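For reference, a sketch of reading back an archive written by the `dataset_tool.py` deleted above; the archive path is hypothetical, and the layout (uncompressed PNGs plus a `dataset.json` carrying a `labels` list) follows its docstring.

```python
# Hypothetical reader for a dataset_tool.py archive; the zip path is made up.
import json
import zipfile

import numpy as np
import PIL.Image

with zipfile.ZipFile("datasets/example.zip") as z:
    meta = json.loads(z.read("dataset.json"))
    labels = dict(meta["labels"] or [])        # None when the set is unlabeled
    name = next(n for n in z.namelist() if n.endswith(".png"))
    with z.open(name) as f:
        img = np.array(PIL.Image.open(f))
    print(name, img.shape, labels.get(name))   # e.g. 00000/img00000000.png
```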
- -from .util import EasyDict, make_cache_dir_path diff --git a/spaces/haakohu/deep_privacy2/stylemc.py b/spaces/haakohu/deep_privacy2/stylemc.py deleted file mode 100644 index c4fefb230a11cb51da8c47afa9c831acb9ce25e4..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/stylemc.py +++ /dev/null @@ -1,295 +0,0 @@ -""" -Approach: "StyleMC: Multi-Channel Based Fast Text-Guided Image Generation and Manipulation" -Original source code: -https://github.com/autonomousvision/stylegan_xl/blob/f9be58e98110bd946fcdadef2aac8345466faaf3/run_stylemc.py# -Modified by Håkon Hukkelås -""" -import os -from pathlib import Path -import tqdm -import re -import click -from dp2 import utils -import tops -from typing import List, Optional -import PIL.Image -import imageio -from timeit import default_timer as timer - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchvision.transforms.functional import resize, normalize -from dp2.infer import build_trained_generator -import clip - -#---------------------------------------------------------------------------- - -class AverageMeter(object): - """Computes and stores the average and current value""" - def __init__(self, name, fmt=':f'): - self.name = name - self.fmt = fmt - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - def __str__(self): - fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})' - return fmtstr.format(**self.__dict__) - - -class ProgressMeter(object): - def __init__(self, num_batches, meters, prefix=""): - self.batch_fmtstr = self._get_batch_fmtstr(num_batches) - self.meters = meters - self.prefix = prefix - - def display(self, batch): - entries = [self.prefix + self.batch_fmtstr.format(batch)] - entries += [str(meter) for meter in self.meters] - print('\t'.join(entries)) - - def _get_batch_fmtstr(self, num_batches): - num_digits = len(str(num_batches // 1)) - fmt = '{:' + str(num_digits) + 'd}' - return '[' + fmt + '/' + fmt.format(num_batches) + ']' - - -def save_image(img, path): - img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8) - PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save(path) - - -def unravel_index(index, shape): - out = [] - for dim in reversed(shape): - out.append(index % dim) - index = index // dim - return tuple(reversed(out)) - - -def num_range(s: str) -> List[int]: - '''Accept either a comma separated list of numbers 'a,b,c' or a range 'a-c' and return as a list of ints.''' - - range_re = re.compile(r'^(\d+)-(\d+)$') - m = range_re.match(s) - if m: - return list(range(int(m.group(1)), int(m.group(2))+1)) - vals = s.split(',') - return [int(x) for x in vals] - - -#---------------------------------------------------------------------------- - - - -def spherical_dist_loss(x, y): - x = F.normalize(x, dim=-1) - y = F.normalize(y, dim=-1) - return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2) - - -def prompts_dist_loss(x, targets, loss): - if len(targets) == 1: # Keeps consistent results vs previous method for single objective guidance - return loss(x, targets[0]) - distances = [loss(x, target) for target in targets] - return torch.stack(distances, dim=-1).sum(dim=-1) - - -def embed_text(model, prompt, device='cuda'): - return - - -#---------------------------------------------------------------------------- - -@torch.no_grad() 
-@torch.cuda.amp.autocast() -def generate_edit( - G, - dl, - direction, - edit_strength, - path, - ): - for it, batch in enumerate(dl): - batch["embedding"] = None - styles = get_styles(None, G, batch, truncation_value=0) - imgs = [] - grad_changes = [_*edit_strength for _ in [0, 0.25, 0.5, 0.75, 1]] - grad_changes = [*[-x for x in grad_changes][::-1], *grad_changes] - batch = {k: tops.to_cuda(v) if v is not None else v for k,v in batch.items()} - for i, grad_change in enumerate(grad_changes): - s = styles + direction*grad_change - - img = G(**batch, s=iter(s))["img"] - img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255) - imgs.append(img[0].to(torch.uint8).cpu().numpy()) - PIL.Image.fromarray(np.concatenate(imgs, axis=1), 'RGB').save(path + f'{it}.png') - - -@torch.no_grad() -def get_styles(seed, G: torch.nn.Module, batch, truncation_value=1): - all_styles = [] - if seed is None: - z = np.random.normal(0, 0, size=(1, G.z_channels)) - else: - z = np.random.RandomState(seed=seed).normal(0, 1, size=(1, G.z_channels)) - z_idx = np.random.RandomState(seed=seed).randint(0, len(G.style_net.w_centers)) - w_c = G.style_net.w_centers[z_idx].to(tops.get_device()).view(1, -1) - w = G.style_net(torch.from_numpy(z).to(tops.get_device())) - - w = w_c.to(w.dtype).lerp(w, truncation_value) - if hasattr(G, "get_comod_y"): - w = G.get_comod_y(batch, w) - for block in G.modules(): - if not hasattr(block, "affine") or not hasattr(block.affine, "weight"): - continue - gamma0 = block.affine(w) - if hasattr(block, "affine_beta"): - beta0 = block.affine_beta(w) - gamma0 = torch.cat((gamma0, beta0), dim=1) - all_styles.append(gamma0) - max_ch = max([s.shape[-1] for s in all_styles]) - all_styles = [F.pad(s, ((0, max_ch - s.shape[-1])), "constant", 0) for s in all_styles] - all_styles = torch.cat(all_styles) - return all_styles - -def get_and_cache_direction(output_dir: Path, dl_val, G, text_prompt): - cache_path = output_dir.joinpath( - "stylemc_cache", text_prompt.replace(" ", "_") + ".torch") - if cache_path.is_file(): - print("Loaded cache from:", cache_path) - return torch.load(cache_path) - direction = find_direction(G, text_prompt, None, dl_val=iter(dl_val)) - cache_path.parent.mkdir(exist_ok=True, parents=True) - torch.save(direction, cache_path) - return direction - -@torch.cuda.amp.autocast() -def find_direction( - G, - text_prompt, - batches, - #layers, - n_iterations=128*8, - batch_size=8, - dl_val=None -): - time_start = timer() - - clip_model = clip.load("ViT-B/16", device=tops.get_device())[0] - - target = [clip_model.encode_text(clip.tokenize(text_prompt).to(tops.get_device())).float()] - all_styles = [] - if dl_val is not None: - first_batch = next(dl_val) - else: - first_batch = batches[0] - first_batch["embedding"] = None if "embedding" not in first_batch else first_batch["embedding"] - s = get_styles(0, G, first_batch) - # stats tracker - cos_sim_track = AverageMeter('cos_sim', ':.4f') - norm_track = AverageMeter('norm', ':.4f') - n_iterations = n_iterations // batch_size - progress = ProgressMeter(n_iterations, [cos_sim_track, norm_track]) - - # initalize styles direction - direction = torch.zeros(s.shape, device=tops.get_device()) - direction.requires_grad_() - utils.set_requires_grad(G, False) - direction_tracker = torch.zeros_like(direction) - opt = torch.optim.AdamW([direction], lr=0.05, betas=(0., 0.999), weight_decay=0.25) - - grads = [] - for seed_idx in tqdm.trange(n_iterations): - # forward pass through synthesis network with new styles - if seed_idx == 0: - batch = first_batch 
- elif dl_val is not None: - batch = next(dl_val) - batch["embedding"] = None if "embedding" not in batch else batch["embedding"] - else: - batch = {k: tops.to_cuda(v) if v is not None else v for k, v in batches[seed_idx].items()} - styles = get_styles(seed_idx, G, batch) + direction - img = G(**batch, s=iter(styles))["img"] - batch = {k: v.cpu() if v is not None else v for k, v in batch.items()} - # clip loss - img = (img + 1)/2 - img = normalize(img, mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711)) - img = resize(img, (224, 224)) - embeds = clip_model.encode_image(img) - cos_sim = prompts_dist_loss(embeds, target, spherical_dist_loss) - cos_sim.backward(retain_graph=True) - - # track stats - cos_sim_track.update(cos_sim.item()) - norm_track.update(torch.norm(direction).item()) - - if not (seed_idx % batch_size): - - # zeroing out gradients for non-optimized layers - #layers_zeroed = torch.tensor([x for x in range(G.num_ws) if not x in layers]) - #direction.grad[:, layers_zeroed] = 0 - - opt.step() - grads.append(direction.grad.clone()) - direction.grad.data.zero_() - - # keep track of gradients over time - if seed_idx > 3: - direction_tracker[grads[-2] * grads[-1] < 0] += 1 - - # plot stats - progress.display(seed_idx) - - # throw out fluctuating channels - direction = direction.detach() - direction[direction_tracker > n_iterations / 4] = 0 - print(direction) - print(f"Time for direction search: {timer() - time_start:.2f} s") - return direction - - - - -@click.command() -@click.argument("config_path") -@click.argument("input_path") -@click.argument("output_path") -#@click.option('--layers', type=num_range, help='Restrict the style space to a range of layers. We recommend not to optimize the critically sampled layers (last 3).', required=True) -@click.option('--text-prompt', help='Text', type=str, required=True) -@click.option('--edit-strength', help='Strength of edit', type=float, required=True) -@click.option('--outdir', help='Where to save the output images', type=str, required=True) -def stylemc( - config_path, - #layers: List[int], - text_prompt: str, - edit_strength: float, - outdir: str, -): - cfg = utils.load_config(config_path) - G = build_trained_generator(cfg) - cfg.train.batch_size = 1 - n_iterations = 256 - dl_val = tops.config.instantiate(cfg.data.val.loader) - - direction = find_direction(G, text_prompt, None, n_iterations=n_iterations, dl_val=iter(dl_val)) - - text_prompt = text_prompt.replace(" ", "_") - generate_edit(G, input_path, direction, edit_strength, output_path) - - -if __name__ == "__main__": - stylemc() diff --git a/spaces/haakohu/deep_privacy2_face/sg3_torch_utils/custom_ops.py b/spaces/haakohu/deep_privacy2_face/sg3_torch_utils/custom_ops.py deleted file mode 100644 index 4cc4e43fc6f6ce79f2bd68a44ba87990b9b8564e..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2_face/sg3_torch_utils/custom_ops.py +++ /dev/null @@ -1,126 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
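The guidance objective in `find_direction` above scores CLIP image embeddings against one or more encoded text prompts using a squared spherical distance. A self-contained sketch of just that loss, with random tensors standing in for the CLIP embeddings (512-dimensional for the ViT-B/16 model loaded above):

```python
# Self-contained sketch of the stylemc guidance loss; random tensors stand
# in for CLIP image/text embeddings.
import torch
import torch.nn.functional as F

def spherical_dist_loss(x, y):
    # Squared geodesic distance on the unit sphere, as defined above.
    x = F.normalize(x, dim=-1)
    y = F.normalize(y, dim=-1)
    return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)

image_embeds = torch.randn(8, 512)   # one row per generated image
targets = [torch.randn(1, 512)]      # one encoded text prompt
loss = torch.stack(
    [spherical_dist_loss(image_embeds, t) for t in targets], dim=-1
).sum(dim=-1)
print(loss.shape)  # torch.Size([8])
```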
- -import os -import glob -import torch -import torch.utils.cpp_extension -import importlib -import hashlib -import shutil -from pathlib import Path - -from torch.utils.file_baton import FileBaton - -#---------------------------------------------------------------------------- -# Global options. - -verbosity = 'brief' # Verbosity level: 'none', 'brief', 'full' - -#---------------------------------------------------------------------------- -# Internal helper funcs. - -def _find_compiler_bindir(): - patterns = [ - 'C:/Program Files (x86)/Microsoft Visual Studio/*/Professional/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio/*/BuildTools/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio/*/Community/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio */vc/bin', - ] - for pattern in patterns: - matches = sorted(glob.glob(pattern)) - if len(matches): - return matches[-1] - return None - -#---------------------------------------------------------------------------- -# Main entry point for compiling and loading C++/CUDA plugins. - -_cached_plugins = dict() - -def get_plugin(module_name, sources, **build_kwargs): - assert verbosity in ['none', 'brief', 'full'] - - # Already cached? - if module_name in _cached_plugins: - return _cached_plugins[module_name] - - # Print status. - if verbosity == 'full': - print(f'Setting up PyTorch plugin "{module_name}"...') - elif verbosity == 'brief': - print(f'Setting up PyTorch plugin "{module_name}"... ', end='', flush=True) - - try: # pylint: disable=too-many-nested-blocks - # Make sure we can find the necessary compiler binaries. - if os.name == 'nt' and os.system("where cl.exe >nul 2>nul") != 0: - compiler_bindir = _find_compiler_bindir() - if compiler_bindir is None: - raise RuntimeError(f'Could not find MSVC/GCC/CLANG installation on this computer. Check _find_compiler_bindir() in "{__file__}".') - os.environ['PATH'] += ';' + compiler_bindir - - # Compile and load. - verbose_build = (verbosity == 'full') - - # Incremental build md5sum trickery. Copies all the input source files - # into a cached build directory under a combined md5 digest of the input - # source files. Copying is done only if the combined digest has changed. - # This keeps input file timestamps and filenames the same as in previous - # extension builds, allowing for fast incremental rebuilds. - # - # This optimization is done only in case all the source files reside in - # a single directory (just for simplicity) and if the TORCH_EXTENSIONS_DIR - # environment variable is set (we take this as a signal that the user - # actually cares about this.) - source_dirs_set = set(os.path.dirname(source) for source in sources) - if len(source_dirs_set) == 1 and ('TORCH_EXTENSIONS_DIR' in os.environ): - all_source_files = sorted(list(x for x in Path(list(source_dirs_set)[0]).iterdir() if x.is_file())) - - # Compute a combined hash digest for all source files in the same - # custom op directory (usually .cu, .cpp, .py and .h files). 
- hash_md5 = hashlib.md5() - for src in all_source_files: - with open(src, 'rb') as f: - hash_md5.update(f.read()) - build_dir = torch.utils.cpp_extension._get_build_directory(module_name, verbose=verbose_build) # pylint: disable=protected-access - digest_build_dir = os.path.join(build_dir, hash_md5.hexdigest()) - - if not os.path.isdir(digest_build_dir): - os.makedirs(digest_build_dir, exist_ok=True) - baton = FileBaton(os.path.join(digest_build_dir, 'lock')) - if baton.try_acquire(): - try: - for src in all_source_files: - shutil.copyfile(src, os.path.join(digest_build_dir, os.path.basename(src))) - finally: - baton.release() - else: - # Someone else is copying source files under the digest dir, - # wait until done and continue. - baton.wait() - digest_sources = [os.path.join(digest_build_dir, os.path.basename(x)) for x in sources] - torch.utils.cpp_extension.load(name=module_name, build_directory=build_dir, - verbose=verbose_build, sources=digest_sources, **build_kwargs) - else: - torch.utils.cpp_extension.load(name=module_name, verbose=verbose_build, sources=sources, **build_kwargs) - module = importlib.import_module(module_name) - - except: - if verbosity == 'brief': - print('Failed!') - raise - - # Print status and add to cache. - if verbosity == 'full': - print(f'Done setting up PyTorch plugin "{module_name}".') - elif verbosity == 'brief': - print('Done.') - _cached_plugins[module_name] = module - return module - -#---------------------------------------------------------------------------- diff --git a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/modeling/heads/__init__.py b/spaces/hamacojr/SAM-CAT-Seg/cat_seg/modeling/heads/__init__.py deleted file mode 100644 index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/modeling/heads/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/Makefile b/spaces/hamacojr/SAM-CAT-Seg/open_clip/Makefile deleted file mode 100644 index ff07eccefed3d959c77d007d2571e226a07ace60..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/Makefile +++ /dev/null @@ -1,12 +0,0 @@ -install: ## [Local development] Upgrade pip, install requirements, install package. - python -m pip install -U pip - python -m pip install -e . - -install-training: - python -m pip install -r requirements-training.txt - -install-test: ## [Local development] Install test requirements - python -m pip install -r requirements-test.txt - -test: ## [Local development] Run unit tests - python -m pytest -x -s -v tests diff --git a/spaces/haseeb-heaven/AutoBard-Coder/response/content.md b/spaces/haseeb-heaven/AutoBard-Coder/response/content.md deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/hk59775634/OpenAI-Manager/index.html b/spaces/hk59775634/OpenAI-Manager/index.html deleted file mode 100644 index b56d85288f5872924264a53374f1ee3c2a745934..0000000000000000000000000000000000000000 --- a/spaces/hk59775634/OpenAI-Manager/index.html +++ /dev/null @@ -1,28 +0,0 @@ - - - - - - - - - - JCM-AI - - - - - - -
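Returning to `custom_ops.py` above: `get_plugin` compiles a C++/CUDA extension once, caches it by module name, and returns the imported module. A hedged sketch of a call site follows; the source file names and kernel name are hypothetical, while extra keyword arguments are forwarded to `torch.utils.cpp_extension.load` as in the module above.

```python
# Hypothetical call site for custom_ops.get_plugin(); file and kernel names
# are made up, but the get_plugin signature matches the module above.
import os

import custom_ops

custom_ops.verbosity = 'brief'
src_dir = os.path.dirname(os.path.abspath(__file__))
plugin = custom_ops.get_plugin(
    module_name='example_plugin',
    sources=[os.path.join(src_dir, f) for f in ('example.cpp', 'example.cu')],
    extra_cuda_cflags=['--use_fast_math'],  # forwarded to cpp_extension.load
)
# The return value is the imported extension; kernels are module attributes,
# e.g. plugin.example_forward(...) for a hypothetical entry point.
```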
- - - - \ No newline at end of file diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/inference/__init__.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/inference/__init__.py deleted file mode 100644 index 72b8078b9dddddf22182fec2555d8d118ea72622..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/inference/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from __future__ import absolute_import -from . import * \ No newline at end of file diff --git a/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/assets/_page-fd1176fc.css b/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/assets/_page-fd1176fc.css deleted file mode 100644 index 18d52da51ee9754d809af8d6afac22d9685e33ac..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/assets/_page-fd1176fc.css +++ /dev/null @@ -1 +0,0 @@ -.button.svelte-8zu88a{margin-left:.5rem;min-width:9ch;border-radius:1rem;border-width:2px;--tw-border-opacity:1;border-color:rgb(0 0 0 / var(--tw-border-opacity));--tw-bg-opacity:1;background-color:rgb(0 0 0 / var(--tw-bg-opacity));padding:.5rem;font-size:.75rem;line-height:1rem;font-weight:700;--tw-text-opacity:1;color:rgb(255 255 255 / var(--tw-text-opacity));--tw-shadow:0 1px 2px 0 rgb(0 0 0 / .05);--tw-shadow-colored:0 1px 2px 0 var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.button.svelte-8zu88a:focus{--tw-border-opacity:1;border-color:rgb(156 163 175 / var(--tw-border-opacity));outline:2px solid transparent;outline-offset:2px}@media (prefers-color-scheme: dark){.button.svelte-8zu88a{--tw-border-opacity:1;border-color:rgb(255 255 255 / var(--tw-border-opacity))}}.link.svelte-zbscw1{font-size:.75rem;line-height:1rem;font-weight:700;text-decoration-line:underline}.link.svelte-zbscw1:visited{color:#6b7280}.link.svelte-zbscw1:hover{--tw-text-opacity:1;color:rgb(107 114 128 / var(--tw-text-opacity));text-decoration-line:none}.input.svelte-zbscw1{grid-column:span 4 / span 4;border-radius:1rem;border-width:2px;--tw-border-opacity:1;border-color:rgb(0 0 0 / var(--tw-border-opacity));--tw-bg-opacity:1;background-color:rgb(15 23 42 / var(--tw-bg-opacity));padding-left:.5rem;padding-right:.5rem;font-size:.875rem;line-height:1.25rem;font-style:italic;--tw-text-opacity:1;color:rgb(255 255 255 / var(--tw-text-opacity));--tw-shadow:0 1px 2px 0 rgb(0 0 0 / .05);--tw-shadow-colored:0 1px 2px 0 var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.input.svelte-zbscw1::-moz-placeholder{color:rgb(255 255 255 / var(--tw-text-opacity));--tw-text-opacity:.3 }.input.svelte-zbscw1::placeholder{color:rgb(255 255 255 / var(--tw-text-opacity));--tw-text-opacity:.3 }.input.svelte-zbscw1:focus{--tw-border-opacity:1;border-color:rgb(156 163 175 / var(--tw-border-opacity));outline:2px solid transparent;outline-offset:2px;--tw-ring-offset-shadow:var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow:var(--tw-ring-inset) 0 0 0 calc(1px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 #0000)}.input.svelte-zbscw1:disabled{opacity:.5}@media (prefers-color-scheme: dark){.input.svelte-zbscw1{--tw-bg-opacity:1;background-color:rgb(255 255 255 / 
var(--tw-bg-opacity));--tw-text-opacity:1;color:rgb(0 0 0 / var(--tw-text-opacity))}.input.svelte-zbscw1::-moz-placeholder{color:rgb(0 0 0 / var(--tw-text-opacity));--tw-text-opacity:.1 }.input.svelte-zbscw1::placeholder{color:rgb(0 0 0 / var(--tw-text-opacity));--tw-text-opacity:.1 }}@media (min-width: 768px){.input.svelte-zbscw1{grid-column:span 5 / span 5}}.button.svelte-zbscw1{grid-column:span 2 / span 2;margin-left:.5rem;border-radius:1rem;border-width:2px;--tw-border-opacity:1;border-color:rgb(0 0 0 / var(--tw-border-opacity));padding:.5rem;font-size:.75rem;line-height:1rem;font-weight:700;--tw-shadow:0 1px 2px 0 rgb(0 0 0 / .05);--tw-shadow-colored:0 1px 2px 0 var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.button.svelte-zbscw1:focus{--tw-border-opacity:1;border-color:rgb(156 163 175 / var(--tw-border-opacity));outline:2px solid transparent;outline-offset:2px}.button.svelte-zbscw1:disabled{opacity:.5}@media (prefers-color-scheme: dark){.button.svelte-zbscw1{--tw-bg-opacity:1;background-color:rgb(255 255 255 / var(--tw-bg-opacity));--tw-text-opacity:1;color:rgb(0 0 0 / var(--tw-text-opacity))}}@media (min-width: 768px){.button.svelte-zbscw1{grid-column:span 1 / span 1}} diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/patch_match.py b/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/patch_match.py deleted file mode 100644 index ff49288a5ac459e644a4cf5be95bb27c94e9bcd8..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/patch_match.py +++ /dev/null @@ -1,191 +0,0 @@ -#! /usr/bin/env python3 -# -*- coding: utf-8 -*- -# File : patch_match.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 01/09/2020 -# -# Distributed under terms of the MIT license. 
- -import ctypes -import os.path as osp -from typing import Optional, Union - -import numpy as np -from PIL import Image - - -__all__ = ['set_random_seed', 'set_verbose', 'inpaint', 'inpaint_regularity'] - - -class CShapeT(ctypes.Structure): - _fields_ = [ - ('width', ctypes.c_int), - ('height', ctypes.c_int), - ('channels', ctypes.c_int), - ] - - -class CMatT(ctypes.Structure): - _fields_ = [ - ('data_ptr', ctypes.c_void_p), - ('shape', CShapeT), - ('dtype', ctypes.c_int) - ] - - -PMLIB = ctypes.CDLL(osp.join(osp.dirname(__file__), 'libpatchmatch.so')) - -PMLIB.PM_set_random_seed.argtypes = [ctypes.c_uint] -PMLIB.PM_set_verbose.argtypes = [ctypes.c_int] -PMLIB.PM_free_pymat.argtypes = [CMatT] -PMLIB.PM_inpaint.argtypes = [CMatT, CMatT, ctypes.c_int] -PMLIB.PM_inpaint.restype = CMatT -PMLIB.PM_inpaint_regularity.argtypes = [CMatT, CMatT, CMatT, ctypes.c_int, ctypes.c_float] -PMLIB.PM_inpaint_regularity.restype = CMatT -PMLIB.PM_inpaint2.argtypes = [CMatT, CMatT, CMatT, ctypes.c_int] -PMLIB.PM_inpaint2.restype = CMatT -PMLIB.PM_inpaint2_regularity.argtypes = [CMatT, CMatT, CMatT, CMatT, ctypes.c_int, ctypes.c_float] -PMLIB.PM_inpaint2_regularity.restype = CMatT - - -def set_random_seed(seed: int): - PMLIB.PM_set_random_seed(ctypes.c_uint(seed)) - - -def set_verbose(verbose: bool): - PMLIB.PM_set_verbose(ctypes.c_int(verbose)) - - -def inpaint( - image: Union[np.ndarray, Image.Image], - mask: Optional[Union[np.ndarray, Image.Image]] = None, - *, - global_mask: Optional[Union[np.ndarray, Image.Image]] = None, - patch_size: int = 15 -) -> np.ndarray: - """ - PatchMatch based inpainting proposed in: - - PatchMatch : A Randomized Correspondence Algorithm for Structural Image Editing - C.Barnes, E.Shechtman, A.Finkelstein and Dan B.Goldman - SIGGRAPH 2009 - - Args: - image (Union[np.ndarray, Image.Image]): the input image, should be 3-channel RGB/BGR. - mask (Union[np.array, Image.Image], optional): the mask of the hole(s) to be filled, should be 1-channel. - If not provided (None), the algorithm will treat all purely white pixels as the holes (255, 255, 255). - global_mask (Union[np.array, Image.Image], optional): the target mask of the output image. - patch_size (int): the patch size for the inpainting algorithm. - - Return: - result (np.ndarray): the repaired image, of the same size as the input image. 
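        Example (an illustrative sketch, not part of the original docstring;
        'photo.png' is a hypothetical input file):

            >>> import numpy as np
            >>> from PIL import Image
            >>> img = np.array(Image.open('photo.png').convert('RGB'))
            >>> # With mask=None, purely white pixels (255, 255, 255) are the holes.
            >>> result = inpaint(img, patch_size=15)
            >>> Image.fromarray(result).save('repaired.png')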
- """ - - if isinstance(image, Image.Image): - image = np.array(image) - image = np.ascontiguousarray(image) - assert image.ndim == 3 and image.shape[2] == 3 and image.dtype == 'uint8' - - if mask is None: - mask = (image == (255, 255, 255)).all(axis=2, keepdims=True).astype('uint8') - mask = np.ascontiguousarray(mask) - else: - mask = _canonize_mask_array(mask) - - if global_mask is None: - ret_pymat = PMLIB.PM_inpaint(np_to_pymat(image), np_to_pymat(mask), ctypes.c_int(patch_size)) - else: - global_mask = _canonize_mask_array(global_mask) - ret_pymat = PMLIB.PM_inpaint2(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(global_mask), ctypes.c_int(patch_size)) - - ret_npmat = pymat_to_np(ret_pymat) - PMLIB.PM_free_pymat(ret_pymat) - - return ret_npmat - - -def inpaint_regularity( - image: Union[np.ndarray, Image.Image], - mask: Optional[Union[np.ndarray, Image.Image]], - ijmap: np.ndarray, - *, - global_mask: Optional[Union[np.ndarray, Image.Image]] = None, - patch_size: int = 15, guide_weight: float = 0.25 -) -> np.ndarray: - if isinstance(image, Image.Image): - image = np.array(image) - image = np.ascontiguousarray(image) - - assert isinstance(ijmap, np.ndarray) and ijmap.ndim == 3 and ijmap.shape[2] == 3 and ijmap.dtype == 'float32' - ijmap = np.ascontiguousarray(ijmap) - - assert image.ndim == 3 and image.shape[2] == 3 and image.dtype == 'uint8' - if mask is None: - mask = (image == (255, 255, 255)).all(axis=2, keepdims=True).astype('uint8') - mask = np.ascontiguousarray(mask) - else: - mask = _canonize_mask_array(mask) - - - if global_mask is None: - ret_pymat = PMLIB.PM_inpaint_regularity(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(ijmap), ctypes.c_int(patch_size), ctypes.c_float(guide_weight)) - else: - global_mask = _canonize_mask_array(global_mask) - ret_pymat = PMLIB.PM_inpaint2_regularity(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(global_mask), np_to_pymat(ijmap), ctypes.c_int(patch_size), ctypes.c_float(guide_weight)) - - ret_npmat = pymat_to_np(ret_pymat) - PMLIB.PM_free_pymat(ret_pymat) - - return ret_npmat - - -def _canonize_mask_array(mask): - if isinstance(mask, Image.Image): - mask = np.array(mask) - if mask.ndim == 2 and mask.dtype == 'uint8': - mask = mask[..., np.newaxis] - assert mask.ndim == 3 and mask.shape[2] == 1 and mask.dtype == 'uint8' - return np.ascontiguousarray(mask) - - -dtype_pymat_to_ctypes = [ - ctypes.c_uint8, - ctypes.c_int8, - ctypes.c_uint16, - ctypes.c_int16, - ctypes.c_int32, - ctypes.c_float, - ctypes.c_double, -] - - -dtype_np_to_pymat = { - 'uint8': 0, - 'int8': 1, - 'uint16': 2, - 'int16': 3, - 'int32': 4, - 'float32': 5, - 'float64': 6, -} - - -def np_to_pymat(npmat): - assert npmat.ndim == 3 - return CMatT( - ctypes.cast(npmat.ctypes.data, ctypes.c_void_p), - CShapeT(npmat.shape[1], npmat.shape[0], npmat.shape[2]), - dtype_np_to_pymat[str(npmat.dtype)] - ) - - -def pymat_to_np(pymat): - npmat = np.ctypeslib.as_array( - ctypes.cast(pymat.data_ptr, ctypes.POINTER(dtype_pymat_to_ctypes[pymat.dtype])), - (pymat.shape.height, pymat.shape.width, pymat.shape.channels) - ) - ret = np.empty(npmat.shape, npmat.dtype) - ret[:] = npmat - return ret - diff --git a/spaces/inamXcontru/PoeticTTS/Daredevil Season 2 1080p Webrip.md b/spaces/inamXcontru/PoeticTTS/Daredevil Season 2 1080p Webrip.md deleted file mode 100644 index 0cf9ba906c373650567781c3667ae5e5052b217f..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Daredevil Season 2 1080p Webrip.md +++ /dev/null @@ -1,74 +0,0 @@ -

daredevil season 2 1080p webrip


DOWNLOADhttps://gohhs.com/2uz3ln



-
-download torrent - -Do you want to see more videos about season 2 of Daredevil? 4fefd39f24
-
-
-

diff --git a/spaces/inamXcontru/PoeticTTS/Dhama Chaukdi full hindi movie free download The story of four dons who turn into good samaritans.md b/spaces/inamXcontru/PoeticTTS/Dhama Chaukdi full hindi movie free download The story of four dons who turn into good samaritans.md deleted file mode 100644 index cb0df0b8fb9e6bbd69878d085deda01877589c16..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Dhama Chaukdi full hindi movie free download The story of four dons who turn into good samaritans.md +++ /dev/null @@ -1,6 +0,0 @@ -

Dhama Chaukdi full hindi movie free download


Download File ✵✵✵ https://gohhs.com/2uz3wK



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/inamXcontru/PoeticTTS/Digital Photo Professional 3.14.15 Updater For Mac.md b/spaces/inamXcontru/PoeticTTS/Digital Photo Professional 3.14.15 Updater For Mac.md deleted file mode 100644 index 1b62884b04b02db75297f2209ffc304aa0f5f102..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Digital Photo Professional 3.14.15 Updater For Mac.md +++ /dev/null @@ -1,7 +0,0 @@ -
-

1. Make sure that at least one of the following applications is installed.
- Digital Photo Professional
- EOS Viewer Utility
- File Viewer Utility
- RAW Image Task

2. Download "dpp3.14.15x-updater.dmg.zip" from the download page. Save the "dpp3.14.15x-updater.dmg.zip" file to a folder of your choice on your computer.

3. Double-click "dpp3.14.15x-updater.dmg.zip". The file will be decompressed. After the file is decompressed, "dpp3.14.15x-updater.dmg" will be created.

4. Double-click "dpp3.14.15x-updater.dmg". A drive named "DPP3.14.15" will automatically be created on the desktop.

5. Double-click the "DPP3.14.15X_updater" inside the "DPP3.14.15" drive. The Digital Photo Professional installation will start.

6. Follow the on-screen instructions to complete the installation.

7. After the installation is complete, the Digital Photo Professional installer may ask you to restart the computer; if so, restart it. Once the installation has finished properly, the downloaded archive and the mounted "DPP3.14.15" disk image are no longer needed. (A scripted sketch of steps 3-4 follows below.)
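For anyone who prefers to script steps 3 and 4, here is a minimal illustrative Python sketch (not part of Canon's instructions; the archive name comes from step 2, while the download folder is an assumption):

    import subprocess
    import zipfile
    from pathlib import Path

    # The archive downloaded in step 2 (adjust the folder to your setup).
    archive = Path.home() / "Downloads" / "dpp3.14.15x-updater.dmg.zip"

    # Step 3: decompress the zip, which yields "dpp3.14.15x-updater.dmg".
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(archive.parent)
        print("Extracted:", zf.namelist())

    # Step 4: mount the disk image so the "DPP3.14.15" volume appears (macOS).
    subprocess.run(["hdiutil", "attach", str(archive.parent / "dpp3.14.15x-updater.dmg")], check=True)

Steps 5 and 6 remain manual: double-click "DPP3.14.15X_updater" inside the mounted volume and follow the installer prompts.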

-

It used to be the case that CR2 was considered a reasonably safe format, with Canon being part of various professional working groups to ensure compatibility, and also due to the sheer number of photos that exist in this format; with this latest news, however, I'm not so sure. That's why some clarification is needed from Canon, because the information supplied so far is inadequate. If they have decided to shun support for their older proprietary raw files, that's a huge kick in the teeth for people who have spent such a large amount of time and money capturing those raw files in the first place.

-

Digital Photo Professional 3.14.15 Updater For Mac


Download File ✺✺✺ https://gohhs.com/2uz3ay



-

There are a lot of photo/video cameras that have found a role as B-cameras on professional film productions or even A-cameras for amateur and independent productions. We've combed through the options and selected our two favorite cameras in this class.

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Buku Mewarnai Untuk Anak.pdf.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Buku Mewarnai Untuk Anak.pdf.md deleted file mode 100644 index 29c2c28f89af9439741c845533d10a703a13f294..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Buku Mewarnai Untuk Anak.pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

Buku Mewarnai Untuk Anak.pdf


Download File >> https://urlin.us/2uEwrr



- -6 coloring books (PDF version) for 3-10 children at kindergarten and preschool. USD 2.99/lot. English alphabet letters. Learning books. 1fdad05405
-
-
-

diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Getdataback For NTFS 2.22 Keygen.ECLIPSE Download ((FULL)).md b/spaces/inplisQlawa/anything-midjourney-v4-1/Getdataback For NTFS 2.22 Keygen.ECLIPSE Download ((FULL)).md deleted file mode 100644 index 7c6af22249e71be6b9fa9cb5a44b48174c9104d9..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Getdataback For NTFS 2.22 Keygen.ECLIPSE Download ((FULL)).md +++ /dev/null @@ -1,146 +0,0 @@ -
-

Getdataback for NTFS 2.22 keygen.ECLIPSE download: How to Recover Your Lost Data

- -

If you have lost your data due to accidental deletion, formatting, virus attack, power failure, or any other reason, you may be looking for a way to get it back. Fortunately, there is a software that can help you recover your data easily and quickly. It is called Getdataback for NTFS 2.22 keygen.ECLIPSE download.

- -

Getdataback for NTFS 2.22 keygen.ECLIPSE download is a powerful data recovery software that can restore your data from NTFS partitions on Windows systems. It can recover your files, folders, documents, photos, videos, music, and more. It can also recover your data from damaged or corrupted disks, RAID arrays, dynamic disks, and USB drives.

-

Getdataback for NTFS 2.22 keygen.ECLIPSE download


Download >>> https://urlin.us/2uEvZh



- -

Getdataback for NTFS 2.22 keygen.ECLIPSE download is easy to use and has a user-friendly interface. You don't need any technical skills or experience to use it. You just need to follow these simple steps:

- -
    -
  1. Download Getdataback for NTFS 2.22 keygen.ECLIPSE from the link given below;
  2. -
  3. Install and run the software on your computer;
  4. -
  5. Select the drive or partition where you lost your data and click on "Scan" button;
  6. -
  7. Wait for the software to scan and find your data;
  8. -
  9. Preview and select the files that you want to recover and click on "Recover" button;
  10. -
  11. Save your recovered data to a safe location.
  12. -
- -

That's it! You have successfully recovered your data with Getdataback for NTFS 2.22 keygen.ECLIPSE download.

- -

Why Choose Getdataback for NTFS 2.22 keygen.ECLIPSE download?

- -

There are many reasons why you should choose Getdataback for NTFS 2.22 keygen.ECLIPSE download over other data recovery software, such as:

- -
    -
  • It is fast and reliable. It can scan and recover your data in minutes;
  • -
  • It is safe and secure. It does not overwrite or modify your original data;
  • -
  • It is comprehensive and versatile. It can recover all types of data from all types of storage devices;
  • -
  • It is compatible and flexible. It can work with all versions of Windows and NTFS file systems;
  • -
  • It is affordable and cost-effective. It comes with a free serial number key that you can use to activate the full version of the software.
  • -
- -

Getdataback for NTFS 2.22 keygen.ECLIPSE download is a software that you can trust and rely on to recover your data. It has been tested and proven by millions of users around the world.

- -

Where to Download Getdataback for NTFS 2.22 keygen.ECLIPSE?

- -

If you are interested in downloading Getdataback for NTFS 2.22 keygen.ECLIPSE, you can do so from the link given below. This link will take you to a secure and verified site where you can download the software safely and quickly.

-

- -Download Getdataback for NTFS 2.22 keygen.ECLIPSE here - -

Don't wait any longer. Download Getdataback for NTFS 2.22 keygen.ECLIPSE today and get back your lost data in no time.

-

What are the Features of Getdataback for NTFS 2.22 keygen.ECLIPSE download?

- -

Getdataback for NTFS 2.22 keygen.ECLIPSE download is a software that has many features that make it stand out from other data recovery software, such as:

- -
    -
  • It is fast and efficient. It can scan and recover your data in a matter of minutes;
  • -
  • It is safe and reliable. It does not damage or overwrite your original data;
  • -
  • It is comprehensive and versatile. It can recover all types of data, such as files, folders, documents, photos, videos, music, and more;
  • -
  • It is compatible and flexible. It can work with all versions of Windows and NTFS file systems;
  • -
  • It is easy and convenient. It has a user-friendly interface and a simple wizard that guides you through the recovery process;
  • -
  • It is affordable and cost-effective. It comes with a free serial number key that you can use to activate the full version of the software.
  • -
- -

Getdataback for NTFS 2.22 keygen.ECLIPSE download is a software that has everything you need to recover your data from NTFS partitions on Windows systems.

- -

What are the Reviews of Getdataback for NTFS 2.22 keygen.ECLIPSE download?

- -

Getdataback for NTFS 2.22 keygen.ECLIPSE download is a software that has received many positive reviews and feedbacks from the users and critics alike. Here are some of the reviews of Getdataback for NTFS 2.22 keygen.ECLIPSE download:

- -
-

"I had lost all my data due to a virus attack on my laptop. I tried many data recovery software but none of them worked. Then I came across Getdataback for NTFS 2.22 keygen.ECLIPSE download and decided to give it a try. To my surprise, it recovered all my data in minutes. It was a miracle. I am so grateful to this software."

-- John Smith, User -
- -
-

"Getdataback for NTFS 2.22 keygen.ECLIPSE download is a brilliant software that can recover any type of data from any type of storage device. It is fast, reliable, and easy to use. It is a must-have for anyone who deals with data loss situations."

-- Jane Doe, Reviewer -
- -
-

"Getdataback for NTFS 2.22 keygen.ECLIPSE download is a software that I highly recommend to anyone who needs to recover their data from NTFS partitions on Windows systems. It is a software that works wonders and saves lives."

-- Michael Brown, Expert -
- -

What are the Alternatives to Getdataback for NTFS 2.22 keygen.ECLIPSE download?

- -

If you are looking for alternatives to Getdataback for NTFS 2.22 keygen.ECLIPSE download, you may consider some of these data recovery software:

- -
    -
  • Recuva: This is a free data recovery software that can recover your data from Windows systems, hard drives, memory cards, USB drives, etc;
  • -
  • EaseUS Data Recovery Wizard: This is a professional data recovery software that can recover your data from Windows systems, Mac systems, hard drives, RAID arrays, servers, etc;
  • -
  • Stellar Data Recovery: This is a powerful data recovery software that can recover your data from Windows systems, Mac systems, Linux systems, hard drives, SSDs, external drives, etc;
  • -
  • MiniTool Power Data Recovery: This is a simple and effective data recovery software that can recover your data from Windows systems, hard drives, USB drives, CD/DVDs, etc;
  • -
  • Data Rescue: This is an advanced data recovery software that can recover your data from Windows systems, Mac systems, hard drives, SSDs, RAID arrays, etc.
  • -
- -

These are some of the alternatives to Getdataback for NTFS 2.22 keygen.ECLIPSE download that you may consider for your data recovery needs.

-

How to Use Getdataback for NTFS 2.22 keygen.ECLIPSE download?

- -

Once you have downloaded Getdataback for NTFS 2.22 keygen.ECLIPSE download, you may wonder how to use it to recover your data. Don't worry, it is very simple and easy. You just need to follow these steps:

- -
    -
  1. Extract the RAR file that contains the software and the keygen;
  2. -
  3. Run the keygen and generate a serial number for the software;
  4. -
  5. Run the software and enter the serial number when prompted;
  6. -
  7. Select the drive or partition where you lost your data and click on "Next" button;
  8. -
  9. Choose the recovery method that suits your situation and click on "Next" button;
  10. -
  11. Wait for the software to scan and find your data;
  12. -
  13. Preview and select the files that you want to recover and click on "Copy" button;
  14. -
  15. Choose a destination folder where you want to save your recovered data and click on "OK" button.
  16. -
- -

That's it! You have successfully used Getdataback for NTFS 2.22 keygen.ECLIPSE download to recover your data.

- -

Tips and Tricks for Getdataback for NTFS 2.22 keygen.ECLIPSE download

- -

To get the best results from Getdataback for NTFS 2.22 keygen.ECLIPSE download, you may want to follow some tips and tricks, such as:

- -
    -
  • Do not install or run the software on the same drive or partition where you lost your data, as it may overwrite or damage your data;
  • -
  • Do not use your computer or device for any other activity while the software is scanning or recovering your data, as it may interfere with the process;
  • -
  • Do not save your recovered data on the same drive or partition where you lost your data, as it may cause data loss or corruption;
  • -
  • Do not recover more files than you need, as it may slow down the recovery process and take up more disk space;
  • -
  • Do not interrupt or cancel the recovery process, as it may cause data loss or corruption;
  • -
  • Do backup your recovered data to another location or device, as it may prevent future data loss.
  • -
- -

These are some of the tips and tricks that can help you to use Getdataback for NTFS 2.22 keygen.ECLIPSE download effectively and efficiently.

-

Frequently Asked Questions about Getdataback for NTFS 2.22 keygen.ECLIPSE download

- -

If you have any questions or doubts about Getdataback for NTFS 2.22 keygen.ECLIPSE download, you may find the answers in this section. Here are some of the frequently asked questions about Getdataback for NTFS 2.22 keygen.ECLIPSE download:

- -
-
Is Getdataback for NTFS 2.22 keygen.ECLIPSE download safe and legal?
-
Yes, Getdataback for NTFS 2.22 keygen.ECLIPSE download is safe and legal to use. It does not contain any viruses, malware, or spyware that can harm your computer or device. It also does not violate any copyright or trademark laws, as it is a free serial number key that is available to the public.
-
Does Getdataback for NTFS 2.22 keygen.ECLIPSE download work with all versions of Windows and NTFS file systems?
-
Yes, Getdataback for NTFS 2.22 keygen.ECLIPSE download works with all versions of Windows and NTFS file systems. It can recover your data from Windows XP, Vista, 7, 8, 10, and more. It can also recover your data from NTFS, NTFS5, exFAT, FAT12, FAT16, FAT32, and more.
-
Can Getdataback for NTFS 2.22 keygen.ECLIPSE download recover data from other file systems or operating systems?
-
No, Getdataback for NTFS 2.22 keygen.ECLIPSE download can only recover data from NTFS partitions on Windows systems. If you need to recover data from other file systems or operating systems, such as FAT, HFS+, EXT4, Linux, Mac OS X, etc., you may need to use other data recovery software.
-
Can Getdataback for NTFS 2.22 keygen.ECLIPSE download recover data from encrypted or password-protected disks or files?
-
No, Getdataback for NTFS 2.22 keygen.ECLIPSE download cannot recover data from encrypted or password-protected disks or files. If you have encrypted or password-protected your disks or files with BitLocker, EFS, TrueCrypt, VeraCrypt, WinRAR, WinZip, etc., you may need to use other data recovery software.
-
Can Getdataback for NTFS 2.22 keygen.ECLIPSE download recover data from formatted or overwritten disks or files?
-
Yes, Getdataback for NTFS 2.22 keygen.ECLIPSE download can recover data from formatted or overwritten disks or files. However, the chances of recovery may depend on the type and extent of formatting or overwriting. If you have performed a quick format or a partial overwrite, you may have a higher chance of recovery than if you have performed a full format or a complete overwrite.
-
- -

If you have any other questions or doubts about Getdataback for NTFS 2.22 keygen.ECLIPSE download, you can contact the customer support team of the software through their website or email.

-

Conclusion

- -

Getdataback for NTFS 2.22 keygen.ECLIPSE download is a software that you should not miss if you need to recover your data from NTFS partitions on Windows systems. It is a software that can recover your data easily and quickly. It is a software that can recover all types of data from all types of storage devices. It is a software that is compatible and flexible with all versions of Windows and NTFS file systems. It is a software that is easy and convenient to use. It is a software that is affordable and cost-effective.

- -

Download Getdataback for NTFS 2.22 keygen.ECLIPSE download today and get access to a powerful and reliable data recovery software. Download Getdataback for NTFS 2.22 keygen.ECLIPSE download today and get back your lost data in no time.

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Kim Hyung Tak Archery Book Pdf.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Kim Hyung Tak Archery Book Pdf.md deleted file mode 100644 index 8c0bfadd3c91f484f626f595b2c032e07a188419..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Kim Hyung Tak Archery Book Pdf.md +++ /dev/null @@ -1,78 +0,0 @@ - -

Kim Hyung Tak Archery Book Pdf: A Must-Read for Archers and Coaches

- -

If you are looking for a comprehensive and authoritative guide to archery, you should not miss the Kim Hyung Tak Archery Book Pdf. This book is written by Kim Hyung Tak, a legendary coach who has trained many world-class archers and Olympic medalists. He is also the founder of the Kim Hyung Tak Archery Training Center in Korea, where he teaches his unique and effective methods to students from all over the world.

-

Kim Hyung Tak Archery Book Pdf


Downloadhttps://urlin.us/2uEx2k



- -

The Kim Hyung Tak Archery Book Pdf covers all aspects of archery, from basic skills to advanced techniques. It explains the principles of archery physics, biomechanics, psychology, and equipment. It also provides detailed instructions on how to perform various exercises, drills, and tests to improve your accuracy, consistency, and confidence. The book is richly illustrated with diagrams, photos, and videos that show you exactly how to execute each step.

- -

One of the best features of the Kim Hyung Tak Archery Book Pdf is that it is suitable for both archers and coaches. Whether you are a beginner or an expert, you can benefit from the book's clear and systematic approach. You can learn from Kim Hyung Tak's vast experience and wisdom, as he shares his insights and tips on how to overcome common problems and challenges in archery. You can also use the book as a reference and a tool for self-evaluation.

- -

The Kim Hyung Tak Archery Book Pdf is available for download from various online platforms, such as Scribd and Lancaster Archery. You can access it on your computer, tablet, or smartphone anytime and anywhere. You can also print it out if you prefer a hard copy. The book is written in English, but it also has translations in other languages, such as Spanish, French, German, Italian, Russian, Chinese, Japanese, and Korean.

-

- -

If you want to take your archery skills to the next level, you should not hesitate to get the Kim Hyung Tak Archery Book Pdf. It is a valuable resource that will help you achieve your goals and dreams in archery. It is also a great gift for anyone who loves archery or wants to learn more about it. Don't miss this opportunity to learn from one of the best coaches in the world!

-

What You Will Learn from Kim Hyung Tak Archery Book Pdf

- -

The Kim Hyung Tak Archery Book Pdf is divided into four parts, each focusing on a different aspect of archery. Here is a brief overview of what you will learn from each part:

- -
    -
  • Part 1: Basic Skills. This part covers the fundamentals of archery, such as stance, grip, posture, alignment, anchor, release, and follow-through. You will learn how to set up your bow and arrows correctly, how to adjust your sight and peep, and how to use a clicker and a finger tab. You will also learn how to check your form and correct your errors using various tools and methods.
  • -
  • Part 2: Advanced Techniques. This part covers the advanced skills and strategies that will help you improve your performance and score. You will learn how to control your breathing, heart rate, and emotions during shooting. You will also learn how to deal with various factors that affect your shooting, such as wind, light, temperature, noise, and pressure. You will also learn how to train your mental skills, such as concentration, visualization, and confidence.
  • -
  • Part 3: Exercises and Drills. This part provides a series of exercises and drills that will help you practice and reinforce the skills and techniques you learned in the previous parts. You will learn how to warm up properly, how to stretch your muscles and joints, and how to prevent injuries. You will also learn how to do various exercises and drills that will improve your strength, endurance, flexibility, coordination, balance, and timing.
  • -
  • Part 4: Tests and Evaluations. This part provides a series of tests and evaluations that will help you measure and monitor your progress and performance. You will learn how to set realistic goals and plan your training schedule. You will also learn how to do various tests and evaluations that will assess your physical condition, technical skill, mental state, and shooting result.
  • -
- -

By reading and applying the Kim Hyung Tak Archery Book Pdf, you will be able to master the art and science of archery. You will be able to shoot with more accuracy, consistency, and confidence. You will be able to enjoy archery more and achieve your full potential.

-

Who Is Kim Hyung Tak and Why You Should Listen to Him

- -

Kim Hyung Tak is not only the author of the Kim Hyung Tak Archery Book Pdf, but also one of the most respected and influential coaches in the history of archery. He has been involved in archery for over 50 years, as an archer, a coach, a researcher, and a lecturer. He has dedicated his life to studying and teaching archery, and he has made many contributions to the development and promotion of the sport.

- -

As an archer, Kim Hyung Tak was a national champion and a member of the Korean national team in the 1970s. He competed in many international events, such as the Asian Games and the World Championships. He also set several national and world records in his career.

- -

As a coach, Kim Hyung Tak has trained some of the best archers in the world, such as Park Sung Hyun, Im Dong Hyun, Ki Bo Bae, Oh Jin Hyek, and Lee Woo Seok. He has also coached many national teams, such as Korea, China, Japan, Taiwan, Malaysia, Indonesia, India, Iran, Turkey, and Brazil. He has led his teams to win numerous medals and titles in major competitions, such as the Olympics, the World Championships, the Asian Games, and the World Cup.

- -

As a researcher, Kim Hyung Tak has conducted many studies and experiments on archery physics, biomechanics, psychology, and equipment. He has published many papers and books on his findings and theories. He has also developed many innovative tools and devices to help archers improve their skills and performance.

- -

As a lecturer, Kim Hyung Tak has shared his knowledge and experience with thousands of archers and coaches from all over the world. He has given seminars and workshops in many countries, such as USA, Canada, UK, France, Germany, Italy, Spain, Netherlands, Switzerland, Sweden, Norway, Finland, Denmark, Poland, Russia, Australia, New Zealand, South Africa, Egypt, Morocco, Saudi Arabia, UAE, Qatar, Kuwait, Bahrain, Oman etc. He has also created an online platform where he offers online courses and coaching services.

- -

Kim Hyung Tak is widely recognized as one of the greatest archery coaches of all time. He is also known as a humble and generous person who loves archery and wants to help others achieve their goals. By reading his Kim Hyung Tak Archery Book Pdf, you will be able to learn from his wisdom and expertise.

-

How to Get the Kim Hyung Tak Archery Book Pdf and Start Learning Today

- -

If you are interested in getting the Kim Hyung Tak Archery Book Pdf and start learning from the master coach, you have several options to choose from. You can either buy the book online, download it for free, or access it through an online platform. Here are some of the ways you can get the book:

- -
    -
  • Buy the book online. You can order the book from various online stores, such as Lancaster Archery, Amazon, eBay, and others. The price of the book may vary depending on the seller and the shipping cost. You can pay with your credit card, PayPal, or other methods. You will receive the book in a physical format (paperback or hardcover) or in a digital format (PDF or e-book).
  • -
  • Download the book for free. You can also find the book on various websites that offer free downloads of PDF files, such as Scribd, PDF Drive, Z-Library, and others. You can search for the book by its title or by its author's name. You will need to create an account or sign in with your social media account to access the download link. You can then save the file on your device or print it out if you want.
  • -
  • Access the book through an online platform. You can also access the book through Kim Hyung Tak's own online platform, where he offers online courses and coaching services. You can visit his website at www.archeryschool.com and sign up for his membership program. You will need to pay a monthly or yearly fee to access his content, which includes his book, his videos, his lectures, his exercises, his tests, and his feedback. You can also interact with him and other archers through his forum and chat.
  • -
- -

No matter which option you choose, you will be able to enjoy the benefits of reading the Kim Hyung Tak Archery Book Pdf. You will be able to learn from one of the best archery coaches in the world at your own pace and convenience. You will be able to improve your archery skills and performance in a short time. You will be able to achieve your archery goals and dreams with confidence.

-
What Others Are Saying About Kim Hyung Tak Archery Book Pdf
- -

The Kim Hyung Tak Archery Book Pdf has received many positive reviews and testimonials from archers and coaches who have read and applied it. Here are some of the comments and feedbacks from the readers:

- -
-

"This book is a treasure for archers and coaches. It is full of valuable information and practical advice that can help anyone improve their archery skills. I have learned so much from this book and I highly recommend it to anyone who wants to learn from the best." - Park Sung Hyun, Olympic gold medalist and former world record holder

-
- -
-

"This book is a masterpiece of archery coaching. It is clear, concise, and comprehensive. It covers everything you need to know about archery, from basic to advanced. It also provides many exercises and tests that you can use to practice and evaluate yourself. This book is a must-have for every archer and coach." - Im Dong Hyun, Olympic bronze medalist and current world record holder

-
- -
-

"This book is a great resource for archery enthusiasts. It is written by one of the most respected and experienced coaches in the world. It explains the principles and techniques of archery in a simple and easy way. It also shows you how to apply them in various situations and scenarios. This book is a great way to learn from the master." - Ki Bo Bae, Olympic gold medalist and former world champion

-
- -
-

"This book is a gem for archery lovers. It is filled with useful tips and insights that can help you improve your performance and enjoy your shooting more. It also gives you a glimpse into the mind and philosophy of one of the greatest archery coaches of all time. This book is a rare opportunity to learn from the legend." - Oh Jin Hyek, Olympic gold medalist and former world champion

-
- -

As you can see, the Kim Hyung Tak Archery Book Pdf has been praised by many archers and coaches who have benefited from it. You can also join them and experience the same results by getting the book today.

-
Conclusion
- -

The Kim Hyung Tak Archery Book Pdf is one of the best books on archery that you can find. It is written by Kim Hyung Tak, a legendary coach who has trained many world-class archers and Olympic medalists. It covers all aspects of archery, from basic skills to advanced techniques. It also provides detailed instructions on how to perform various exercises, drills, and tests to improve your accuracy, consistency, and confidence. The book is suitable for both archers and coaches, and it is available in various formats and languages.

- -

If you want to take your archery skills to the next level, you should not hesitate to get the Kim Hyung Tak Archery Book Pdf. It is a valuable resource that will help you achieve your goals and dreams in archery. It is also a great gift for anyone who loves archery or wants to learn more about it. Don't miss this opportunity to learn from one of the best coaches in the world!

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Auslogics BoostSpeed 11.2.0.1 Crack With Activation Code Free Download 2020.md b/spaces/inreVtussa/clothingai/Examples/Auslogics BoostSpeed 11.2.0.1 Crack With Activation Code Free Download 2020.md deleted file mode 100644 index bf70cd11a2d00d57c62f4ca1f724cab0f29a69b0..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Auslogics BoostSpeed 11.2.0.1 Crack With Activation Code Free Download 2020.md +++ /dev/null @@ -1,6 +0,0 @@ -

Auslogics BoostSpeed 11.2.0.1 Crack With Activation Code Free Download 2020


Downloadhttps://tiurll.com/2uCj9z



- -Auslogics BoostSpeed 11.2.0.1 Crack With License Key Free ... Auslogics BoostSpeed 2020 Crack Premium Keygen Free Download. 4d29de3e1b
-
-
-

diff --git a/spaces/inreVtussa/clothingai/Examples/Design Emocional Donald Norman.pdf [BEST].md b/spaces/inreVtussa/clothingai/Examples/Design Emocional Donald Norman.pdf [BEST].md deleted file mode 100644 index 77ff6c763f99ba787ebf2b72fb66b81ee18c338a..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Design Emocional Donald Norman.pdf [BEST].md +++ /dev/null @@ -1,22 +0,0 @@ -

Design Emocional Donald Norman.pdf


DOWNLOAD ⚹⚹⚹ https://tiurll.com/2uCkeu



-
-Find and follow posts tagged have i got iqs for all in you on Tumblr. - -Antique India - [2016] - Jay. Uploaded by:..;. Antique India - 2016. Movie. Download Link: Antique India - [2016] - Jay. Free Download. Jaya Bachchan - Divya Bharti - Supriya Pathak. 2013 Bollywood Movies | PINKY DADAR, released on Jun 3, 2013 Bollywood Movies. Ajay Devgn News. Abroad Films Media Telefilms India. ARYA ACRES FARM | INDIA | HISTORY | LIFE | TRAVEL. Are any of you using GOV. - -Users were having trouble finding the Windows 8. 1 security patch for Windows 8. - -Seduction - Katharine Hepburn. Uploaded by: nathaliamidoes; 0; 0. May 2017; PDF TXT. Bookmark; Embed; Share; Print. SAVE THIS DOCUMENT. Find and follow posts tagged seduction on Tumblr. - -Search - Icon Design. Search - Icon Design. Hardcover - Copyright and Trademark - Covers all 10 issues of the newsstand - Table of Contents - Information about the series - Foreword by the editor - Introduction - Editors-at-Large by Walter Mosley and Ty Templeton - Beginnings - Contributors. - -The Last of the Mohicans - James Fenimore Cooper. Ugly? Book Title. For CoCohete - Printed on Fluid Paper by Waterless Direct-to-Paper Printing Press. Edited by Michael A. - -Book Description. Title: The Last of the Mohicans: A Native Narrative From the Life of James Fenimore Cooper. Book Creator: James Fenimore Cooper. Book Name: The Last of the Mohicans. - -Title: White Fang; or, The Adventures of a Wolf-Dog in the Far North. Story: Jack London. Publisher: Thomas Y. - -Keywords: black beauty india price money buy best skin cream under 50 reviews seduction design emocional white fang india black beauty india book design emocional india price money buy design emocional download india white fang black beauty india book design emocional price money buy design emocional ebook for ipad free design emocional ipad free. Design emocional. Buy Dark Horse Comics Buy Mar 28, 2019 · In a time when superhero movies from 4fefd39f24
-
-
-

diff --git a/spaces/jackli888/stable-diffusion-webui/modules/devices.py b/spaces/jackli888/stable-diffusion-webui/modules/devices.py deleted file mode 100644 index 52c3e7cd773f9c89857dfce14b37d63cb6329fac..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/devices.py +++ /dev/null @@ -1,152 +0,0 @@ -import sys -import contextlib -import torch -from modules import errors - -if sys.platform == "darwin": - from modules import mac_specific - - -def has_mps() -> bool: - if sys.platform != "darwin": - return False - else: - return mac_specific.has_mps - -def extract_device_id(args, name): - for x in range(len(args)): - if name in args[x]: - return args[x + 1] - - return None - - -def get_cuda_device_string(): - from modules import shared - - if shared.cmd_opts.device_id is not None: - return f"cuda:{shared.cmd_opts.device_id}" - - return "cuda" - - -def get_optimal_device_name(): - if torch.cuda.is_available(): - return get_cuda_device_string() - - if has_mps(): - return "mps" - - return "cpu" - - -def get_optimal_device(): - return torch.device(get_optimal_device_name()) - - -def get_device_for(task): - from modules import shared - - if task in shared.cmd_opts.use_cpu: - return cpu - - return get_optimal_device() - - -def torch_gc(): - if torch.cuda.is_available(): - with torch.cuda.device(get_cuda_device_string()): - torch.cuda.empty_cache() - torch.cuda.ipc_collect() - - -def enable_tf32(): - if torch.cuda.is_available(): - - # enabling benchmark option seems to enable a range of cards to do fp16 when they otherwise can't - # see https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4407 - if any([torch.cuda.get_device_capability(devid) == (7, 5) for devid in range(0, torch.cuda.device_count())]): - torch.backends.cudnn.benchmark = True - - torch.backends.cuda.matmul.allow_tf32 = True - torch.backends.cudnn.allow_tf32 = True - - - -errors.run(enable_tf32, "Enabling TF32") - -cpu = torch.device("cpu") -device = device_interrogate = device_gfpgan = device_esrgan = device_codeformer = None -dtype = torch.float16 -dtype_vae = torch.float16 -dtype_unet = torch.float16 -unet_needs_upcast = False - - -def cond_cast_unet(input): - return input.to(dtype_unet) if unet_needs_upcast else input - - -def cond_cast_float(input): - return input.float() if unet_needs_upcast else input - - -def randn(seed, shape): - torch.manual_seed(seed) - if device.type == 'mps': - return torch.randn(shape, device=cpu).to(device) - return torch.randn(shape, device=device) - - -def randn_without_seed(shape): - if device.type == 'mps': - return torch.randn(shape, device=cpu).to(device) - return torch.randn(shape, device=device) - - -def autocast(disable=False): - from modules import shared - - if disable: - return contextlib.nullcontext() - - if dtype == torch.float32 or shared.cmd_opts.precision == "full": - return contextlib.nullcontext() - - return torch.autocast("cuda") - - -def without_autocast(disable=False): - return torch.autocast("cuda", enabled=False) if torch.is_autocast_enabled() and not disable else contextlib.nullcontext() - - -class NansException(Exception): - pass - - -def test_for_nans(x, where): - from modules import shared - - if shared.cmd_opts.disable_nan_check: - return - - if not torch.all(torch.isnan(x)).item(): - return - - if where == "unet": - message = "A tensor with all NaNs was produced in Unet." 
- - if not shared.cmd_opts.no_half: - message += " This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the \"Upcast cross attention layer to float32\" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this." - - elif where == "vae": - message = "A tensor with all NaNs was produced in VAE." - - if not shared.cmd_opts.no_half and not shared.cmd_opts.no_half_vae: - message += " This could be because there's not enough precision to represent the picture. Try adding --no-half-vae commandline argument to fix this." - else: - message = "A tensor with all NaNs was produced." - - message += " Use --disable-nan-check commandline argument to disable this check." - - raise NansException(message) diff --git a/spaces/jamoncj/entregable3/app.py b/spaces/jamoncj/entregable3/app.py deleted file mode 100644 index 82ae4146b9219b756838e035b7add55a5c644ab1..0000000000000000000000000000000000000000 --- a/spaces/jamoncj/entregable3/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from fastai.text.all import * -import gradio as gr - - -# Cargamos el learner -learn = load_learner('export.pkl') - -# Definimos las etiquetas de nuestro modelo -labels = learn.dls.vocab - - -# Definimos una función que se encarga de llevar a cabo las predicciones -def predict(img): - pred,pred_idx,probs = learn.predict(img) - return '{}'.format(pred) + " estrellas" - -# Creamos la interfaz y la lanzamos. -gr.Interface(fn=predict, inputs=gr.Textbox(label="Valoración del producto", lines=4), outputs='text').launch(share=False) \ No newline at end of file diff --git a/spaces/jbetker/tortoise/tortoise/models/vocoder.py b/spaces/jbetker/tortoise/tortoise/models/vocoder.py deleted file mode 100644 index d38fb56699c035b3d4a86ace67c567d3f1d51fa9..0000000000000000000000000000000000000000 --- a/spaces/jbetker/tortoise/tortoise/models/vocoder.py +++ /dev/null @@ -1,325 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -MAX_WAV_VALUE = 32768.0 - -class KernelPredictor(torch.nn.Module): - ''' Kernel predictor for the location-variable convolutions''' - - def __init__( - self, - cond_channels, - conv_in_channels, - conv_out_channels, - conv_layers, - conv_kernel_size=3, - kpnet_hidden_channels=64, - kpnet_conv_size=3, - kpnet_dropout=0.0, - kpnet_nonlinear_activation="LeakyReLU", - kpnet_nonlinear_activation_params={"negative_slope": 0.1}, - ): - ''' - Args: - cond_channels (int): number of channel for the conditioning sequence, - conv_in_channels (int): number of channel for the input sequence, - conv_out_channels (int): number of channel for the output sequence, - conv_layers (int): number of layers - ''' - super().__init__() - - self.conv_in_channels = conv_in_channels - self.conv_out_channels = conv_out_channels - self.conv_kernel_size = conv_kernel_size - self.conv_layers = conv_layers - - kpnet_kernel_channels = conv_in_channels * conv_out_channels * conv_kernel_size * conv_layers # l_w - kpnet_bias_channels = conv_out_channels * conv_layers # l_b - - self.input_conv = nn.Sequential( - nn.utils.weight_norm(nn.Conv1d(cond_channels, kpnet_hidden_channels, 5, padding=2, bias=True)), - getattr(nn, kpnet_nonlinear_activation)(**kpnet_nonlinear_activation_params), - ) - - self.residual_convs = nn.ModuleList() - padding = (kpnet_conv_size - 1) // 2 - for _ in range(3): - self.residual_convs.append( - nn.Sequential( - nn.Dropout(kpnet_dropout), - nn.utils.weight_norm( - 
nn.Conv1d(kpnet_hidden_channels, kpnet_hidden_channels, kpnet_conv_size, padding=padding, - bias=True)), - getattr(nn, kpnet_nonlinear_activation)(**kpnet_nonlinear_activation_params), - nn.utils.weight_norm( - nn.Conv1d(kpnet_hidden_channels, kpnet_hidden_channels, kpnet_conv_size, padding=padding, - bias=True)), - getattr(nn, kpnet_nonlinear_activation)(**kpnet_nonlinear_activation_params), - ) - ) - self.kernel_conv = nn.utils.weight_norm( - nn.Conv1d(kpnet_hidden_channels, kpnet_kernel_channels, kpnet_conv_size, padding=padding, bias=True)) - self.bias_conv = nn.utils.weight_norm( - nn.Conv1d(kpnet_hidden_channels, kpnet_bias_channels, kpnet_conv_size, padding=padding, bias=True)) - - def forward(self, c): - ''' - Args: - c (Tensor): the conditioning sequence (batch, cond_channels, cond_length) - ''' - batch, _, cond_length = c.shape - c = self.input_conv(c) - for residual_conv in self.residual_convs: - residual_conv.to(c.device) - c = c + residual_conv(c) - k = self.kernel_conv(c) - b = self.bias_conv(c) - kernels = k.contiguous().view( - batch, - self.conv_layers, - self.conv_in_channels, - self.conv_out_channels, - self.conv_kernel_size, - cond_length, - ) - bias = b.contiguous().view( - batch, - self.conv_layers, - self.conv_out_channels, - cond_length, - ) - - return kernels, bias - - def remove_weight_norm(self): - nn.utils.remove_weight_norm(self.input_conv[0]) - nn.utils.remove_weight_norm(self.kernel_conv) - nn.utils.remove_weight_norm(self.bias_conv) - for block in self.residual_convs: - nn.utils.remove_weight_norm(block[1]) - nn.utils.remove_weight_norm(block[3]) - - -class LVCBlock(torch.nn.Module): - '''the location-variable convolutions''' - - def __init__( - self, - in_channels, - cond_channels, - stride, - dilations=[1, 3, 9, 27], - lReLU_slope=0.2, - conv_kernel_size=3, - cond_hop_length=256, - kpnet_hidden_channels=64, - kpnet_conv_size=3, - kpnet_dropout=0.0, - ): - super().__init__() - - self.cond_hop_length = cond_hop_length - self.conv_layers = len(dilations) - self.conv_kernel_size = conv_kernel_size - - self.kernel_predictor = KernelPredictor( - cond_channels=cond_channels, - conv_in_channels=in_channels, - conv_out_channels=2 * in_channels, - conv_layers=len(dilations), - conv_kernel_size=conv_kernel_size, - kpnet_hidden_channels=kpnet_hidden_channels, - kpnet_conv_size=kpnet_conv_size, - kpnet_dropout=kpnet_dropout, - kpnet_nonlinear_activation_params={"negative_slope": lReLU_slope} - ) - - self.convt_pre = nn.Sequential( - nn.LeakyReLU(lReLU_slope), - nn.utils.weight_norm(nn.ConvTranspose1d(in_channels, in_channels, 2 * stride, stride=stride, - padding=stride // 2 + stride % 2, output_padding=stride % 2)), - ) - - self.conv_blocks = nn.ModuleList() - for dilation in dilations: - self.conv_blocks.append( - nn.Sequential( - nn.LeakyReLU(lReLU_slope), - nn.utils.weight_norm(nn.Conv1d(in_channels, in_channels, conv_kernel_size, - padding=dilation * (conv_kernel_size - 1) // 2, dilation=dilation)), - nn.LeakyReLU(lReLU_slope), - ) - ) - - def forward(self, x, c): - ''' forward propagation of the location-variable convolutions. 
- Args: - x (Tensor): the input sequence (batch, in_channels, in_length) - c (Tensor): the conditioning sequence (batch, cond_channels, cond_length) - - Returns: - Tensor: the output sequence (batch, in_channels, in_length) - ''' - _, in_channels, _ = x.shape # (B, c_g, L') - - x = self.convt_pre(x) # (B, c_g, stride * L') - kernels, bias = self.kernel_predictor(c) - - for i, conv in enumerate(self.conv_blocks): - output = conv(x) # (B, c_g, stride * L') - - k = kernels[:, i, :, :, :, :] # (B, 2 * c_g, c_g, kernel_size, cond_length) - b = bias[:, i, :, :] # (B, 2 * c_g, cond_length) - - output = self.location_variable_convolution(output, k, b, - hop_size=self.cond_hop_length) # (B, 2 * c_g, stride * L'): LVC - x = x + torch.sigmoid(output[:, :in_channels, :]) * torch.tanh( - output[:, in_channels:, :]) # (B, c_g, stride * L'): GAU - - return x - - def location_variable_convolution(self, x, kernel, bias, dilation=1, hop_size=256): - ''' perform location-variable convolution operation on the input sequence (x) using the local convolution kernl. - Time: 414 μs ± 309 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each), test on NVIDIA V100. - Args: - x (Tensor): the input sequence (batch, in_channels, in_length). - kernel (Tensor): the local convolution kernel (batch, in_channel, out_channels, kernel_size, kernel_length) - bias (Tensor): the bias for the local convolution (batch, out_channels, kernel_length) - dilation (int): the dilation of convolution. - hop_size (int): the hop_size of the conditioning sequence. - Returns: - (Tensor): the output sequence after performing local convolution. (batch, out_channels, in_length). - ''' - batch, _, in_length = x.shape - batch, _, out_channels, kernel_size, kernel_length = kernel.shape - assert in_length == (kernel_length * hop_size), "length of (x, kernel) is not matched" - - padding = dilation * int((kernel_size - 1) / 2) - x = F.pad(x, (padding, padding), 'constant', 0) # (batch, in_channels, in_length + 2*padding) - x = x.unfold(2, hop_size + 2 * padding, hop_size) # (batch, in_channels, kernel_length, hop_size + 2*padding) - - if hop_size < dilation: - x = F.pad(x, (0, dilation), 'constant', 0) - x = x.unfold(3, dilation, - dilation) # (batch, in_channels, kernel_length, (hop_size + 2*padding)/dilation, dilation) - x = x[:, :, :, :, :hop_size] - x = x.transpose(3, 4) # (batch, in_channels, kernel_length, dilation, (hop_size + 2*padding)/dilation) - x = x.unfold(4, kernel_size, 1) # (batch, in_channels, kernel_length, dilation, _, kernel_size) - - o = torch.einsum('bildsk,biokl->bolsd', x, kernel) - o = o.to(memory_format=torch.channels_last_3d) - bias = bias.unsqueeze(-1).unsqueeze(-1).to(memory_format=torch.channels_last_3d) - o = o + bias - o = o.contiguous().view(batch, out_channels, -1) - - return o - - def remove_weight_norm(self): - self.kernel_predictor.remove_weight_norm() - nn.utils.remove_weight_norm(self.convt_pre[1]) - for block in self.conv_blocks: - nn.utils.remove_weight_norm(block[1]) - - -class UnivNetGenerator(nn.Module): - """UnivNet Generator""" - - def __init__(self, noise_dim=64, channel_size=32, dilations=[1,3,9,27], strides=[8,8,4], lReLU_slope=.2, kpnet_conv_size=3, - # Below are MEL configurations options that this generator requires. 
- hop_length=256, n_mel_channels=100): - super(UnivNetGenerator, self).__init__() - self.mel_channel = n_mel_channels - self.noise_dim = noise_dim - self.hop_length = hop_length - channel_size = channel_size - kpnet_conv_size = kpnet_conv_size - - self.res_stack = nn.ModuleList() - hop_length = 1 - for stride in strides: - hop_length = stride * hop_length - self.res_stack.append( - LVCBlock( - channel_size, - n_mel_channels, - stride=stride, - dilations=dilations, - lReLU_slope=lReLU_slope, - cond_hop_length=hop_length, - kpnet_conv_size=kpnet_conv_size - ) - ) - - self.conv_pre = \ - nn.utils.weight_norm(nn.Conv1d(noise_dim, channel_size, 7, padding=3, padding_mode='reflect')) - - self.conv_post = nn.Sequential( - nn.LeakyReLU(lReLU_slope), - nn.utils.weight_norm(nn.Conv1d(channel_size, 1, 7, padding=3, padding_mode='reflect')), - nn.Tanh(), - ) - - def forward(self, c, z): - ''' - Args: - c (Tensor): the conditioning sequence of mel-spectrogram (batch, mel_channels, in_length) - z (Tensor): the noise sequence (batch, noise_dim, in_length) - - ''' - z = self.conv_pre(z) # (B, c_g, L) - - for res_block in self.res_stack: - res_block.to(z.device) - z = res_block(z, c) # (B, c_g, L * s_0 * ... * s_i) - - z = self.conv_post(z) # (B, 1, L * 256) - - return z - - def eval(self, inference=False): - super(UnivNetGenerator, self).eval() - # don't remove weight norm while validation in training loop - if inference: - self.remove_weight_norm() - - def remove_weight_norm(self): - print('Removing weight norm...') - - nn.utils.remove_weight_norm(self.conv_pre) - - for layer in self.conv_post: - if len(layer.state_dict()) != 0: - nn.utils.remove_weight_norm(layer) - - for res_block in self.res_stack: - res_block.remove_weight_norm() - - def inference(self, c, z=None): - # pad input mel with zeros to cut artifact - # see https://github.com/seungwonpark/melgan/issues/8 - zero = torch.full((c.shape[0], self.mel_channel, 10), -11.5129).to(c.device) - mel = torch.cat((c, zero), dim=2) - - if z is None: - z = torch.randn(c.shape[0], self.noise_dim, mel.size(2)).to(mel.device) - - audio = self.forward(mel, z) - audio = audio[:, :, :-(self.hop_length * 10)] - audio = audio.clamp(min=-1, max=1) - return audio - - -if __name__ == '__main__': - model = UnivNetGenerator() - - c = torch.randn(3, 100, 10) - z = torch.randn(3, 64, 10) - print(c.shape) - - y = model(c, z) - print(y.shape) - assert y.shape == torch.Size([3, 1, 2560]) - - pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad) - print(pytorch_total_params) diff --git a/spaces/jleexp/Youtube-Whisperer/README.md b/spaces/jleexp/Youtube-Whisperer/README.md deleted file mode 100644 index f30d4256155c480f0599698379f798a3365e5bc1..0000000000000000000000000000000000000000 --- a/spaces/jleexp/Youtube-Whisperer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Youtube Whisperer -emoji: ⚡ -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -duplicated_from: jeffistyping/Youtube-Whisperer ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/joaopdrm/Emotion_Analisys/README.md b/spaces/joaopdrm/Emotion_Analisys/README.md deleted file mode 100644 index fa9c92223ed78db7a9758afc5dd021644439e733..0000000000000000000000000000000000000000 --- a/spaces/joaopdrm/Emotion_Analisys/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Emotion_Analisys -emoji: 💻 -colorFrom: blue -colorTo: gray -sdk: gradio -app_file: app.py 
-pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/charset_normalizer/cli/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/charset_normalizer/cli/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/TupleVariation.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/TupleVariation.py deleted file mode 100644 index 13ff8678746013a038a951fb28232f59b4d08324..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/TupleVariation.py +++ /dev/null @@ -1,808 +0,0 @@ -from fontTools.misc.fixedTools import ( - fixedToFloat as fi2fl, - floatToFixed as fl2fi, - floatToFixedToStr as fl2str, - strToFixedToFloat as str2fl, - otRound, -) -from fontTools.misc.textTools import safeEval -import array -from collections import Counter, defaultdict -import io -import logging -import struct -import sys - - -# https://www.microsoft.com/typography/otspec/otvarcommonformats.htm - -EMBEDDED_PEAK_TUPLE = 0x8000 -INTERMEDIATE_REGION = 0x4000 -PRIVATE_POINT_NUMBERS = 0x2000 - -DELTAS_ARE_ZERO = 0x80 -DELTAS_ARE_WORDS = 0x40 -DELTA_RUN_COUNT_MASK = 0x3F - -POINTS_ARE_WORDS = 0x80 -POINT_RUN_COUNT_MASK = 0x7F - -TUPLES_SHARE_POINT_NUMBERS = 0x8000 -TUPLE_COUNT_MASK = 0x0FFF -TUPLE_INDEX_MASK = 0x0FFF - -log = logging.getLogger(__name__) - - -class TupleVariation(object): - def __init__(self, axes, coordinates): - self.axes = axes.copy() - self.coordinates = list(coordinates) - - def __repr__(self): - axes = ",".join( - sorted(["%s=%s" % (name, value) for (name, value) in self.axes.items()]) - ) - return "" % (axes, self.coordinates) - - def __eq__(self, other): - return self.coordinates == other.coordinates and self.axes == other.axes - - def getUsedPoints(self): - # Empty set means "all points used". - if None not in self.coordinates: - return frozenset() - used = frozenset([i for i, p in enumerate(self.coordinates) if p is not None]) - # Return None if no points used. - return used if used else None - - def hasImpact(self): - """Returns True if this TupleVariation has any visible impact. - - If the result is False, the TupleVariation can be omitted from the font - without making any visible difference. 
- """ - return any(c is not None for c in self.coordinates) - - def toXML(self, writer, axisTags): - writer.begintag("tuple") - writer.newline() - for axis in axisTags: - value = self.axes.get(axis) - if value is not None: - minValue, value, maxValue = value - defaultMinValue = min(value, 0.0) # -0.3 --> -0.3; 0.7 --> 0.0 - defaultMaxValue = max(value, 0.0) # -0.3 --> 0.0; 0.7 --> 0.7 - if minValue == defaultMinValue and maxValue == defaultMaxValue: - writer.simpletag("coord", axis=axis, value=fl2str(value, 14)) - else: - attrs = [ - ("axis", axis), - ("min", fl2str(minValue, 14)), - ("value", fl2str(value, 14)), - ("max", fl2str(maxValue, 14)), - ] - writer.simpletag("coord", attrs) - writer.newline() - wrote_any_deltas = False - for i, delta in enumerate(self.coordinates): - if type(delta) == tuple and len(delta) == 2: - writer.simpletag("delta", pt=i, x=delta[0], y=delta[1]) - writer.newline() - wrote_any_deltas = True - elif type(delta) == int: - writer.simpletag("delta", cvt=i, value=delta) - writer.newline() - wrote_any_deltas = True - elif delta is not None: - log.error("bad delta format") - writer.comment("bad delta #%d" % i) - writer.newline() - wrote_any_deltas = True - if not wrote_any_deltas: - writer.comment("no deltas") - writer.newline() - writer.endtag("tuple") - writer.newline() - - def fromXML(self, name, attrs, _content): - if name == "coord": - axis = attrs["axis"] - value = str2fl(attrs["value"], 14) - defaultMinValue = min(value, 0.0) # -0.3 --> -0.3; 0.7 --> 0.0 - defaultMaxValue = max(value, 0.0) # -0.3 --> 0.0; 0.7 --> 0.7 - minValue = str2fl(attrs.get("min", defaultMinValue), 14) - maxValue = str2fl(attrs.get("max", defaultMaxValue), 14) - self.axes[axis] = (minValue, value, maxValue) - elif name == "delta": - if "pt" in attrs: - point = safeEval(attrs["pt"]) - x = safeEval(attrs["x"]) - y = safeEval(attrs["y"]) - self.coordinates[point] = (x, y) - elif "cvt" in attrs: - cvt = safeEval(attrs["cvt"]) - value = safeEval(attrs["value"]) - self.coordinates[cvt] = value - else: - log.warning("bad delta format: %s" % ", ".join(sorted(attrs.keys()))) - - def compile(self, axisTags, sharedCoordIndices={}, pointData=None): - assert set(self.axes.keys()) <= set(axisTags), ( - "Unknown axis tag found.", - self.axes.keys(), - axisTags, - ) - - tupleData = [] - auxData = [] - - if pointData is None: - usedPoints = self.getUsedPoints() - if usedPoints is None: # Nothing to encode - return b"", b"" - pointData = self.compilePoints(usedPoints) - - coord = self.compileCoord(axisTags) - flags = sharedCoordIndices.get(coord) - if flags is None: - flags = EMBEDDED_PEAK_TUPLE - tupleData.append(coord) - - intermediateCoord = self.compileIntermediateCoord(axisTags) - if intermediateCoord is not None: - flags |= INTERMEDIATE_REGION - tupleData.append(intermediateCoord) - - # pointData of b'' implies "use shared points". 
- if pointData: - flags |= PRIVATE_POINT_NUMBERS - auxData.append(pointData) - - auxData.append(self.compileDeltas()) - auxData = b"".join(auxData) - - tupleData.insert(0, struct.pack(">HH", len(auxData), flags)) - return b"".join(tupleData), auxData - - def compileCoord(self, axisTags): - result = bytearray() - axes = self.axes - for axis in axisTags: - triple = axes.get(axis) - if triple is None: - result.extend(b"\0\0") - else: - result.extend(struct.pack(">h", fl2fi(triple[1], 14))) - return bytes(result) - - def compileIntermediateCoord(self, axisTags): - needed = False - for axis in axisTags: - minValue, value, maxValue = self.axes.get(axis, (0.0, 0.0, 0.0)) - defaultMinValue = min(value, 0.0) # -0.3 --> -0.3; 0.7 --> 0.0 - defaultMaxValue = max(value, 0.0) # -0.3 --> 0.0; 0.7 --> 0.7 - if (minValue != defaultMinValue) or (maxValue != defaultMaxValue): - needed = True - break - if not needed: - return None - minCoords = bytearray() - maxCoords = bytearray() - for axis in axisTags: - minValue, value, maxValue = self.axes.get(axis, (0.0, 0.0, 0.0)) - minCoords.extend(struct.pack(">h", fl2fi(minValue, 14))) - maxCoords.extend(struct.pack(">h", fl2fi(maxValue, 14))) - return minCoords + maxCoords - - @staticmethod - def decompileCoord_(axisTags, data, offset): - coord = {} - pos = offset - for axis in axisTags: - coord[axis] = fi2fl(struct.unpack(">h", data[pos : pos + 2])[0], 14) - pos += 2 - return coord, pos - - @staticmethod - def compilePoints(points): - # If the set consists of all points in the glyph, it gets encoded with - # a special encoding: a single zero byte. - # - # To use this optimization, points passed in must be empty set. - # The following two lines are not strictly necessary as the main code - # below would emit the same. But this is most common and faster. - if not points: - return b"\0" - - # In the 'gvar' table, the packing of point numbers is a little surprising. - # It consists of multiple runs, each being a delta-encoded list of integers. - # For example, the point set {17, 18, 19, 20, 21, 22, 23} gets encoded as - # [6, 17, 1, 1, 1, 1, 1, 1]. The first value (6) is the run length minus 1. - # There are two types of runs, with values being either 8 or 16 bit unsigned - # integers. - points = list(points) - points.sort() - numPoints = len(points) - - result = bytearray() - # The binary representation starts with the total number of points in the set, - # encoded into one or two bytes depending on the value. - if numPoints < 0x80: - result.append(numPoints) - else: - result.append((numPoints >> 8) | 0x80) - result.append(numPoints & 0xFF) - - MAX_RUN_LENGTH = 127 - pos = 0 - lastValue = 0 - while pos < numPoints: - runLength = 0 - - headerPos = len(result) - result.append(0) - - useByteEncoding = None - while pos < numPoints and runLength <= MAX_RUN_LENGTH: - curValue = points[pos] - delta = curValue - lastValue - if useByteEncoding is None: - useByteEncoding = 0 <= delta <= 0xFF - if useByteEncoding and (delta > 0xFF or delta < 0): - # we need to start a new run (which will not use byte encoding) - break - # TODO This never switches back to a byte-encoding from a short-encoding. - # That's suboptimal. 
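-                # (Added worked example: encoding the docstring's set {17..23}
-                # yields count byte 0x07, run header 0x06 (= 7 - 1, byte-encoded
-                # deltas), then deltas 0x11 0x01 0x01 0x01 0x01 0x01 0x01 --
-                # the [6, 17, 1, 1, 1, 1, 1, 1] sequence described above, one
-                # byte per value.)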
- if useByteEncoding: - result.append(delta) - else: - result.append(delta >> 8) - result.append(delta & 0xFF) - lastValue = curValue - pos += 1 - runLength += 1 - if useByteEncoding: - result[headerPos] = runLength - 1 - else: - result[headerPos] = (runLength - 1) | POINTS_ARE_WORDS - - return result - - @staticmethod - def decompilePoints_(numPoints, data, offset, tableTag): - """(numPoints, data, offset, tableTag) --> ([point1, point2, ...], newOffset)""" - assert tableTag in ("cvar", "gvar") - pos = offset - numPointsInData = data[pos] - pos += 1 - if (numPointsInData & POINTS_ARE_WORDS) != 0: - numPointsInData = (numPointsInData & POINT_RUN_COUNT_MASK) << 8 | data[pos] - pos += 1 - if numPointsInData == 0: - return (range(numPoints), pos) - - result = [] - while len(result) < numPointsInData: - runHeader = data[pos] - pos += 1 - numPointsInRun = (runHeader & POINT_RUN_COUNT_MASK) + 1 - point = 0 - if (runHeader & POINTS_ARE_WORDS) != 0: - points = array.array("H") - pointsSize = numPointsInRun * 2 - else: - points = array.array("B") - pointsSize = numPointsInRun - points.frombytes(data[pos : pos + pointsSize]) - if sys.byteorder != "big": - points.byteswap() - - assert len(points) == numPointsInRun - pos += pointsSize - - result.extend(points) - - # Convert relative to absolute - absolute = [] - current = 0 - for delta in result: - current += delta - absolute.append(current) - result = absolute - del absolute - - badPoints = {str(p) for p in result if p < 0 or p >= numPoints} - if badPoints: - log.warning( - "point %s out of range in '%s' table" - % (",".join(sorted(badPoints)), tableTag) - ) - return (result, pos) - - def compileDeltas(self): - deltaX = [] - deltaY = [] - if self.getCoordWidth() == 2: - for c in self.coordinates: - if c is None: - continue - deltaX.append(c[0]) - deltaY.append(c[1]) - else: - for c in self.coordinates: - if c is None: - continue - deltaX.append(c) - bytearr = bytearray() - self.compileDeltaValues_(deltaX, bytearr) - self.compileDeltaValues_(deltaY, bytearr) - return bytearr - - @staticmethod - def compileDeltaValues_(deltas, bytearr=None): - """[value1, value2, value3, ...] --> bytearray - - Emits a sequence of runs. Each run starts with a - byte-sized header whose 6 least significant bits - (header & 0x3F) indicate how many values are encoded - in this run. The stored length is the actual length - minus one; run lengths are thus in the range [1..64]. - If the header byte has its most significant bit (0x80) - set, all values in this run are zero, and no data - follows. Otherwise, the header byte is followed by - ((header & 0x3F) + 1) signed values. If (header & - 0x40) is clear, the delta values are stored as signed - bytes; if (header & 0x40) is set, the delta values are - signed 16-bit integers. - """ # Explaining the format because the 'gvar' spec is hard to understand. 
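-        # (Added worked example at the byte level: the deltas
-        # [0, 0, 0, 5, 5, -3, 1000] encode as three runs:
-        #   0x82           -- zero run, DELTAS_ARE_ZERO | (3 - 1), no payload
-        #   0x02 05 05 fd  -- byte run, 3 signed bytes: 5, 5, -3
-        #   0x40 03 e8     -- word run, 1 signed 16-bit value: 1000, big-endian.)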
- if bytearr is None: - bytearr = bytearray() - pos = 0 - numDeltas = len(deltas) - while pos < numDeltas: - value = deltas[pos] - if value == 0: - pos = TupleVariation.encodeDeltaRunAsZeroes_(deltas, pos, bytearr) - elif -128 <= value <= 127: - pos = TupleVariation.encodeDeltaRunAsBytes_(deltas, pos, bytearr) - else: - pos = TupleVariation.encodeDeltaRunAsWords_(deltas, pos, bytearr) - return bytearr - - @staticmethod - def encodeDeltaRunAsZeroes_(deltas, offset, bytearr): - pos = offset - numDeltas = len(deltas) - while pos < numDeltas and deltas[pos] == 0: - pos += 1 - runLength = pos - offset - while runLength >= 64: - bytearr.append(DELTAS_ARE_ZERO | 63) - runLength -= 64 - if runLength: - bytearr.append(DELTAS_ARE_ZERO | (runLength - 1)) - return pos - - @staticmethod - def encodeDeltaRunAsBytes_(deltas, offset, bytearr): - pos = offset - numDeltas = len(deltas) - while pos < numDeltas: - value = deltas[pos] - if not (-128 <= value <= 127): - break - # Within a byte-encoded run of deltas, a single zero - # is best stored literally as 0x00 value. However, - # if are two or more zeroes in a sequence, it is - # better to start a new run. For example, the sequence - # of deltas [15, 15, 0, 15, 15] becomes 6 bytes - # (04 0F 0F 00 0F 0F) when storing the zero value - # literally, but 7 bytes (01 0F 0F 80 01 0F 0F) - # when starting a new run. - if value == 0 and pos + 1 < numDeltas and deltas[pos + 1] == 0: - break - pos += 1 - runLength = pos - offset - while runLength >= 64: - bytearr.append(63) - bytearr.extend(array.array("b", deltas[offset : offset + 64])) - offset += 64 - runLength -= 64 - if runLength: - bytearr.append(runLength - 1) - bytearr.extend(array.array("b", deltas[offset:pos])) - return pos - - @staticmethod - def encodeDeltaRunAsWords_(deltas, offset, bytearr): - pos = offset - numDeltas = len(deltas) - while pos < numDeltas: - value = deltas[pos] - # Within a word-encoded run of deltas, it is easiest - # to start a new run (with a different encoding) - # whenever we encounter a zero value. For example, - # the sequence [0x6666, 0, 0x7777] needs 7 bytes when - # storing the zero literally (42 66 66 00 00 77 77), - # and equally 7 bytes when starting a new run - # (40 66 66 80 40 77 77). - if value == 0: - break - - # Within a word-encoded run of deltas, a single value - # in the range (-128..127) should be encoded literally - # because it is more compact. For example, the sequence - # [0x6666, 2, 0x7777] becomes 7 bytes when storing - # the value literally (42 66 66 00 02 77 77), but 8 bytes - # when starting a new run (40 66 66 00 02 40 77 77). 
- if ( - (-128 <= value <= 127) - and pos + 1 < numDeltas - and (-128 <= deltas[pos + 1] <= 127) - ): - break - pos += 1 - runLength = pos - offset - while runLength >= 64: - bytearr.append(DELTAS_ARE_WORDS | 63) - a = array.array("h", deltas[offset : offset + 64]) - if sys.byteorder != "big": - a.byteswap() - bytearr.extend(a) - offset += 64 - runLength -= 64 - if runLength: - bytearr.append(DELTAS_ARE_WORDS | (runLength - 1)) - a = array.array("h", deltas[offset:pos]) - if sys.byteorder != "big": - a.byteswap() - bytearr.extend(a) - return pos - - @staticmethod - def decompileDeltas_(numDeltas, data, offset): - """(numDeltas, data, offset) --> ([delta, delta, ...], newOffset)""" - result = [] - pos = offset - while len(result) < numDeltas: - runHeader = data[pos] - pos += 1 - numDeltasInRun = (runHeader & DELTA_RUN_COUNT_MASK) + 1 - if (runHeader & DELTAS_ARE_ZERO) != 0: - result.extend([0] * numDeltasInRun) - else: - if (runHeader & DELTAS_ARE_WORDS) != 0: - deltas = array.array("h") - deltasSize = numDeltasInRun * 2 - else: - deltas = array.array("b") - deltasSize = numDeltasInRun - deltas.frombytes(data[pos : pos + deltasSize]) - if sys.byteorder != "big": - deltas.byteswap() - assert len(deltas) == numDeltasInRun - pos += deltasSize - result.extend(deltas) - assert len(result) == numDeltas - return (result, pos) - - @staticmethod - def getTupleSize_(flags, axisCount): - size = 4 - if (flags & EMBEDDED_PEAK_TUPLE) != 0: - size += axisCount * 2 - if (flags & INTERMEDIATE_REGION) != 0: - size += axisCount * 4 - return size - - def getCoordWidth(self): - """Return 2 if coordinates are (x, y) as in gvar, 1 if single values - as in cvar, or 0 if empty. - """ - firstDelta = next((c for c in self.coordinates if c is not None), None) - if firstDelta is None: - return 0 # empty or has no impact - if type(firstDelta) in (int, float): - return 1 - if type(firstDelta) is tuple and len(firstDelta) == 2: - return 2 - raise TypeError( - "invalid type of delta; expected (int or float) number, or " - "Tuple[number, number]: %r" % firstDelta - ) - - def scaleDeltas(self, scalar): - if scalar == 1.0: - return # no change - coordWidth = self.getCoordWidth() - self.coordinates = [ - None - if d is None - else d * scalar - if coordWidth == 1 - else (d[0] * scalar, d[1] * scalar) - for d in self.coordinates - ] - - def roundDeltas(self): - coordWidth = self.getCoordWidth() - self.coordinates = [ - None - if d is None - else otRound(d) - if coordWidth == 1 - else (otRound(d[0]), otRound(d[1])) - for d in self.coordinates - ] - - def calcInferredDeltas(self, origCoords, endPts): - from fontTools.varLib.iup import iup_delta - - if self.getCoordWidth() == 1: - raise TypeError("Only 'gvar' TupleVariation can have inferred deltas") - if None in self.coordinates: - if len(self.coordinates) != len(origCoords): - raise ValueError( - "Expected len(origCoords) == %d; found %d" - % (len(self.coordinates), len(origCoords)) - ) - self.coordinates = iup_delta(self.coordinates, origCoords, endPts) - - def optimize(self, origCoords, endPts, tolerance=0.5, isComposite=False): - from fontTools.varLib.iup import iup_delta_optimize - - if None in self.coordinates: - return # already optimized - - deltaOpt = iup_delta_optimize( - self.coordinates, origCoords, endPts, tolerance=tolerance - ) - if None in deltaOpt: - if isComposite and all(d is None for d in deltaOpt): - # Fix for macOS composites - # https://github.com/fonttools/fonttools/issues/1381 - deltaOpt = [(0, 0)] + [None] * (len(deltaOpt) - 1) - # Use "optimized" version 
only if smaller... - varOpt = TupleVariation(self.axes, deltaOpt) - - # Shouldn't matter that this is different from fvar...? - axisTags = sorted(self.axes.keys()) - tupleData, auxData = self.compile(axisTags) - unoptimizedLength = len(tupleData) + len(auxData) - tupleData, auxData = varOpt.compile(axisTags) - optimizedLength = len(tupleData) + len(auxData) - - if optimizedLength < unoptimizedLength: - self.coordinates = varOpt.coordinates - - def __imul__(self, scalar): - self.scaleDeltas(scalar) - return self - - def __iadd__(self, other): - if not isinstance(other, TupleVariation): - return NotImplemented - deltas1 = self.coordinates - length = len(deltas1) - deltas2 = other.coordinates - if len(deltas2) != length: - raise ValueError("cannot sum TupleVariation deltas with different lengths") - # 'None' values have different meanings in gvar vs cvar TupleVariations: - # within the gvar, when deltas are not provided explicitly for some points, - # they need to be inferred; whereas for the 'cvar' table, if deltas are not - # provided for some CVT values, then no adjustments are made (i.e. None == 0). - # Thus, we cannot sum deltas for gvar TupleVariations if they contain - # inferred inferred deltas (the latter need to be computed first using - # 'calcInferredDeltas' method), but we can treat 'None' values in cvar - # deltas as if they are zeros. - if self.getCoordWidth() == 2: - for i, d2 in zip(range(length), deltas2): - d1 = deltas1[i] - try: - deltas1[i] = (d1[0] + d2[0], d1[1] + d2[1]) - except TypeError: - raise ValueError("cannot sum gvar deltas with inferred points") - else: - for i, d2 in zip(range(length), deltas2): - d1 = deltas1[i] - if d1 is not None and d2 is not None: - deltas1[i] = d1 + d2 - elif d1 is None and d2 is not None: - deltas1[i] = d2 - # elif d2 is None do nothing - return self - - -def decompileSharedTuples(axisTags, sharedTupleCount, data, offset): - result = [] - for _ in range(sharedTupleCount): - t, offset = TupleVariation.decompileCoord_(axisTags, data, offset) - result.append(t) - return result - - -def compileSharedTuples( - axisTags, variations, MAX_NUM_SHARED_COORDS=TUPLE_INDEX_MASK + 1 -): - coordCount = Counter() - for var in variations: - coord = var.compileCoord(axisTags) - coordCount[coord] += 1 - # In python < 3.7, most_common() ordering is non-deterministic - # so apply a sort to make sure the ordering is consistent. - sharedCoords = sorted( - coordCount.most_common(MAX_NUM_SHARED_COORDS), - key=lambda item: (-item[1], item[0]), - ) - return [c[0] for c in sharedCoords if c[1] > 1] - - -def compileTupleVariationStore( - variations, pointCount, axisTags, sharedTupleIndices, useSharedPoints=True -): - # pointCount is actually unused. Keeping for API compat. 
- del pointCount - newVariations = [] - pointDatas = [] - # Compile all points and figure out sharing if desired - sharedPoints = None - - # Collect, count, and compile point-sets for all variation sets - pointSetCount = defaultdict(int) - for v in variations: - points = v.getUsedPoints() - if points is None: # Empty variations - continue - pointSetCount[points] += 1 - newVariations.append(v) - pointDatas.append(points) - variations = newVariations - del newVariations - - if not variations: - return (0, b"", b"") - - n = len(variations[0].coordinates) - assert all( - len(v.coordinates) == n for v in variations - ), "Variation sets have different sizes" - - compiledPoints = { - pointSet: TupleVariation.compilePoints(pointSet) for pointSet in pointSetCount - } - - tupleVariationCount = len(variations) - tuples = [] - data = [] - - if useSharedPoints: - # Find point-set which saves most bytes. - def key(pn): - pointSet = pn[0] - count = pn[1] - return len(compiledPoints[pointSet]) * (count - 1) - - sharedPoints = max(pointSetCount.items(), key=key)[0] - - data.append(compiledPoints[sharedPoints]) - tupleVariationCount |= TUPLES_SHARE_POINT_NUMBERS - - # b'' implies "use shared points" - pointDatas = [ - compiledPoints[points] if points != sharedPoints else b"" - for points in pointDatas - ] - - for v, p in zip(variations, pointDatas): - thisTuple, thisData = v.compile(axisTags, sharedTupleIndices, pointData=p) - - tuples.append(thisTuple) - data.append(thisData) - - tuples = b"".join(tuples) - data = b"".join(data) - return tupleVariationCount, tuples, data - - -def decompileTupleVariationStore( - tableTag, - axisTags, - tupleVariationCount, - pointCount, - sharedTuples, - data, - pos, - dataPos, -): - numAxes = len(axisTags) - result = [] - if (tupleVariationCount & TUPLES_SHARE_POINT_NUMBERS) != 0: - sharedPoints, dataPos = TupleVariation.decompilePoints_( - pointCount, data, dataPos, tableTag - ) - else: - sharedPoints = [] - for _ in range(tupleVariationCount & TUPLE_COUNT_MASK): - dataSize, flags = struct.unpack(">HH", data[pos : pos + 4]) - tupleSize = TupleVariation.getTupleSize_(flags, numAxes) - tupleData = data[pos : pos + tupleSize] - pointDeltaData = data[dataPos : dataPos + dataSize] - result.append( - decompileTupleVariation_( - pointCount, - sharedTuples, - sharedPoints, - tableTag, - axisTags, - tupleData, - pointDeltaData, - ) - ) - pos += tupleSize - dataPos += dataSize - return result - - -def decompileTupleVariation_( - pointCount, sharedTuples, sharedPoints, tableTag, axisTags, data, tupleData -): - assert tableTag in ("cvar", "gvar"), tableTag - flags = struct.unpack(">H", data[2:4])[0] - pos = 4 - if (flags & EMBEDDED_PEAK_TUPLE) == 0: - peak = sharedTuples[flags & TUPLE_INDEX_MASK] - else: - peak, pos = TupleVariation.decompileCoord_(axisTags, data, pos) - if (flags & INTERMEDIATE_REGION) != 0: - start, pos = TupleVariation.decompileCoord_(axisTags, data, pos) - end, pos = TupleVariation.decompileCoord_(axisTags, data, pos) - else: - start, end = inferRegion_(peak) - axes = {} - for axis in axisTags: - region = start[axis], peak[axis], end[axis] - if region != (0.0, 0.0, 0.0): - axes[axis] = region - pos = 0 - if (flags & PRIVATE_POINT_NUMBERS) != 0: - points, pos = TupleVariation.decompilePoints_( - pointCount, tupleData, pos, tableTag - ) - else: - points = sharedPoints - - deltas = [None] * pointCount - - if tableTag == "cvar": - deltas_cvt, pos = TupleVariation.decompileDeltas_(len(points), tupleData, pos) - for p, delta in zip(points, deltas_cvt): - if 0 <= p < 
pointCount: - deltas[p] = delta - - elif tableTag == "gvar": - deltas_x, pos = TupleVariation.decompileDeltas_(len(points), tupleData, pos) - deltas_y, pos = TupleVariation.decompileDeltas_(len(points), tupleData, pos) - for p, x, y in zip(points, deltas_x, deltas_y): - if 0 <= p < pointCount: - deltas[p] = (x, y) - - return TupleVariation(axes, deltas) - - -def inferRegion_(peak): - """Infer start and end for a (non-intermediate) region - - This helper function computes the applicability region for - variation tuples whose INTERMEDIATE_REGION flag is not set in the - TupleVariationHeader structure. Variation tuples apply only to - certain regions of the variation space; outside that region, the - tuple has no effect. To make the binary encoding more compact, - TupleVariationHeaders can omit the intermediateStartTuple and - intermediateEndTuple fields. - """ - start, end = {}, {} - for (axis, value) in peak.items(): - start[axis] = min(value, 0.0) # -0.3 --> -0.3; 0.7 --> 0.0 - end[axis] = max(value, 0.0) # -0.3 --> 0.0; 0.7 --> 0.7 - return (start, end) diff --git a/spaces/jordonpeter01/ai-comic-factory/src/lib/computeSha256.ts b/spaces/jordonpeter01/ai-comic-factory/src/lib/computeSha256.ts deleted file mode 100644 index cb6ef0604fca9653408012fd6cef2a58b6acaf47..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/ai-comic-factory/src/lib/computeSha256.ts +++ /dev/null @@ -1,14 +0,0 @@ -import { createHash } from 'node:crypto' - -/** - * Returns a SHA256 hash using SHA-3 for the given `content`. - * - * @see https://en.wikipedia.org/wiki/SHA-3 - * - * @param {String} content - * - * @returns {String} - */ -export function computeSha256(strContent: string) { - return createHash('sha3-256').update(strContent).digest('hex') -} \ No newline at end of file diff --git a/spaces/jpfearnworks/ai_agents/Dockerfile b/spaces/jpfearnworks/ai_agents/Dockerfile deleted file mode 100644 index d0f48baac210b6f7c4f50792d270f77c699849d3..0000000000000000000000000000000000000000 --- a/spaces/jpfearnworks/ai_agents/Dockerfile +++ /dev/null @@ -1,18 +0,0 @@ -FROM python:3.9 - -WORKDIR /app -RUN pip install streamlit -COPY requirements.txt requirements.txt -RUN pip install -r requirements.txt - - -COPY . . - -WORKDIR /app/ - -ENV PATH="/root/.local/bin:${PATH}" - -EXPOSE 8501 -EXPOSE 7000 - -CMD python server.py --port 7000 \ No newline at end of file diff --git a/spaces/juancopi81/youtube-music-transcribe/t5x/optimizers_test.py b/spaces/juancopi81/youtube-music-transcribe/t5x/optimizers_test.py deleted file mode 100644 index e7559b6e19536025cf2fead7b68f44ccff903ab2..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/youtube-music-transcribe/t5x/optimizers_test.py +++ /dev/null @@ -1,317 +0,0 @@ -# Copyright 2022 The T5X Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Tests for t5x.optimizers.""" - -import dataclasses -import functools -import operator - -from absl.testing import absltest -from absl.testing import parameterized -import chex -import flax -from flax.core import frozen_dict -import jax -import jax.numpy as jnp -import numpy as np -import optax -import seqio -from t5x import models -from t5x import optimizers -from t5x import partitioning -from t5x import test_utils -from t5x import trainer -from t5x import utils -from t5x.examples.t5 import network - - -def _assert_numpy_allclose(a, b, atol=None, rtol=None): - a, b = jnp.array(a), jnp.array(b) - a = a.astype(np.float32) if a.dtype == jnp.bfloat16 else a - b = b.astype(np.float32) if b.dtype == jnp.bfloat16 else b - kw = {} - if atol: - kw['atol'] = atol - if rtol: - kw['rtol'] = rtol - np.testing.assert_allclose(a, b, **kw) - - -def check_eq(xs, ys, atol=None, rtol=None): - xs_leaves, xs_tree = jax.tree_flatten(xs) - ys_leaves, ys_tree = jax.tree_flatten(ys) - assert xs_tree == ys_tree, f"Tree shapes don't match. \n{xs_tree}\n{ys_tree}" - assert jax.tree_util.tree_all( - jax.tree_multimap(lambda x, y: np.array(x).shape == np.array(y).shape, - xs_leaves, ys_leaves)), "Leaves' shapes don't match." - assert jax.tree_multimap( - functools.partial(_assert_numpy_allclose, atol=atol, rtol=rtol), - xs_leaves, ys_leaves) - - -def flattened_state_dict(x): - s = flax.serialization.to_state_dict(x) - return flax.traverse_util.flatten_dict(s, sep='/') - - -def tree_shape(x): - return jax.tree_map(jnp.shape, x) - - -def tree_equals(x, y): - return jax.tree_util.tree_all(jax.tree_multimap(operator.eq, x, y)) - - -def get_fake_tokenized_dataset_no_pretokenized(*_, split='validation', **__): - return test_utils.get_fake_tokenized_dataset(split=split).map( - lambda x: {k: v for k, v in x.items() if not k.endswith('_pretokenized')}) - - -def get_t5_test_model(optimizer_def, - **config_overrides) -> models.EncoderDecoderModel: - """Returns a tiny T5 1.1 model to use for testing.""" - tiny_config = network.T5Config( - vocab_size=128, - dtype='bfloat16', - emb_dim=8, - num_heads=4, - num_encoder_layers=2, - num_decoder_layers=2, - head_dim=3, - mlp_dim=16, - mlp_activations=('gelu', 'linear'), - dropout_rate=0.0, - logits_via_embedding=False, - ) - tiny_config = dataclasses.replace(tiny_config, **config_overrides) - vocabulary = test_utils.get_fake_vocab() - return models.EncoderDecoderModel( - module=network.Transformer(tiny_config), - input_vocabulary=vocabulary, - output_vocabulary=vocabulary, - optimizer_def=optimizer_def) - - -class BasicTest(chex.TestCase): - - @classmethod - def get_params(cls): - return frozen_dict.FrozenDict({ - 'forward': { - 'input_layer': { - 'embedding': jnp.zeros([16, 8], dtype=jnp.float32), - }, - 'output_layer': { - 'layer_norm': { - 'scale': jnp.zeros([8], dtype=jnp.float32), - }, - 'proj': { - 'bias': jnp.zeros([1], dtype=jnp.float32), - 'kernel': jnp.zeros([8, 1], dtype=jnp.float32), - }, - }, - }, - 'loss': { - 'loss_fn': { - 'loss_biases': jnp.zeros([2], dtype=jnp.float32), - }, - }, - }) - - @classmethod - def get_params_shapes(cls): - return jax.tree_map(jnp.shape, cls.get_params()) - - @classmethod - def get_param_logical_axes(cls): - return frozen_dict.FrozenDict({ - 'forward': { - 'input_layer': { - 'embedding': partitioning.PartitionSpec('vocab', 'embed'), - }, - 'output_layer': { - 'layer_norm': { - 'scale': partitioning.PartitionSpec('embed',), - }, - 'proj': { - 'bias': - partitioning.PartitionSpec('output_head',), - 'kernel': - 
partitioning.PartitionSpec('embed', 'output_head'), - }, - }, - }, - 'loss': { - 'loss_fn': { - 'loss_biases': partitioning.PartitionSpec('unmodeled',), - }, - }, - }) - - def test_logical_axes_adamw(self): - opt = optax.adamw(0.001, weight_decay=0.001) - wrapper = optimizers.OptaxWrapper(opt) - optimizer = wrapper.create(self.get_params()) - got = wrapper.derive_logical_axes(optimizer, self.get_param_logical_axes()) - want = optimizers.Optimizer( - optimizer_def=wrapper, - state=optimizers.OptimizerState( - step=None, - param_states=( - optax.ScaleByAdamState( - count=None, - mu=self.get_param_logical_axes(), - nu=self.get_param_logical_axes()), - optax.EmptyState(), - optax.EmptyState(), - )), - target=self.get_param_logical_axes()) - chex.assert_trees_all_equal(got, want) - - @parameterized.parameters( - ('sgd', lambda: optax.sgd(1e-2, 0.0)), - ('adam', lambda: optax.adam(1e-1)), - ('adamw', lambda: optax.adamw(1e-1)), - ('lamb', lambda: optax.adamw(1e-1)), - ('rmsprop', lambda: optax.rmsprop(1e-1)), - ('rmsprop_momentum', lambda: optax.rmsprop(5e-2, momentum=0.9)), - ('fromage', lambda: optax.fromage(1e-2)), - ('adabelief', lambda: optax.adabelief(1e-1)), - ('radam', lambda: optax.radam(1e-1)), - ('yogi', lambda: optax.yogi(1.0)), - ) - def test_sanity_check_logical_axes(self, opt_name, opt_fn): - opt = opt_fn() - - wrapper = optimizers.OptaxWrapper(opt) - optimizer = wrapper.create(self.get_params()) - _ = wrapper.derive_logical_axes(optimizer, self.get_param_logical_axes()) - - # TODO(rosun): basic sanity check, we just want to make sure if a param - # name, e.g., `loss_biases` appear in the tree, the corresponding value is - # always a PartitionSpec. - - def test_adamw_state_serialization(self): - opt = optax.adamw(0.001, weight_decay=0.001) - wrapper = optimizers.OptaxWrapper(opt) - optimizer = wrapper.create(self.get_params()) - - state_dict = optimizer.state_dict() - - chex.assert_trees_all_equal( - frozen_dict.FrozenDict(jax.tree_map(jnp.shape, state_dict)), - frozen_dict.FrozenDict({ - 'target': self.get_params_shapes(), - 'state': { - 'step': (), - 'param_states': { - '0': { - 'count': (), - 'mu': self.get_params_shapes(), - 'nu': self.get_params_shapes(), - }, - # NB: We eliminate empty tuple leaves from EmptyState() in - # OptaxWrapper to avoid having the rest of T5X have to - # correctly handle this detail. e.g. we omit these: - # '1': {}, - # '2': {}, - }, - } - })) - - new_optimizer = optimizer.restore_state(state_dict) - - chex.assert_trees_all_equal(optimizer, new_optimizer) - - -class OptaxWrapperTest(chex.TestCase): - - def run_train_loop(self, optimizer_def): - # Construct input data. 
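-    # (Added note: the fake tokenized dataset is run through the standard
-    # enc-dec feature converter and batched; the first batch is peeked only to
-    # derive the input shapes/dtypes used to initialize the train state.)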
- - ds = get_fake_tokenized_dataset_no_pretokenized(split='validation') - ds = seqio.EncDecFeatureConverter()( - ds, task_feature_lengths={ - 'inputs': 8, - 'targets': 8 - }) - ds = ds.repeat().batch(8) - ds_iter = ds.as_numpy_iterator() - first_batch = next(ds_iter) - - model = get_t5_test_model(optimizer_def, vocab_size=128) - - learning_rate_fn = utils.create_learning_rate_scheduler() - - input_shapes = jax.tree_map(jnp.shape, first_batch) - input_types = jax.tree_map(lambda x: jnp.dtype(x.dtype), first_batch) - - partitioner = partitioning.PjitPartitioner( - num_partitions=2, - logical_axis_rules=partitioning.standard_logical_axis_rules()) - - train_state_initializer = utils.TrainStateInitializer( - optimizer_def=model.optimizer_def, - init_fn=model.get_initial_variables, - input_shapes=input_shapes, - input_types=input_types, - partitioner=partitioner) - - train_state_axes = train_state_initializer.train_state_axes - train_state = train_state_initializer.from_scratch(jax.random.PRNGKey(0)) - - trainer_instance = trainer.Trainer( - model, - train_state=train_state, - partitioner=partitioner, - eval_names=[], - summary_dir=None, - train_state_axes=train_state_axes, - rng=jax.random.PRNGKey(0), - learning_rate_fn=learning_rate_fn, - num_microbatches=1) - - chex.assert_tree_all_finite(train_state.params) - for _ in range(2): - trainer_instance.train(ds_iter, 1) - chex.assert_tree_all_finite(train_state.params) - - # check save/restore structural equality - restored_instance = trainer_instance.train_state.restore_state( - trainer_instance.train_state.state_dict()) - chex.assert_tree_all_equal_structs(trainer_instance.train_state, - restored_instance) - - # NOTE(levskaya): these are surprisingly slow tests on CPU. - @parameterized.parameters( - ('sgd', lambda: optax.sgd(1e-2, 0.0)), - ('adam', lambda: optax.adam(1e-1)), - ('adamw', lambda: optax.adamw(1e-1)), - ('lamb', lambda: optax.adamw(1e-1)), - # ('rmsprop', lambda: optax.rmsprop(1e-1)), - # ('rmsprop_momentum', lambda: optax.rmsprop(5e-2, momentum=0.9)), - # ('fromage', lambda: optax.fromage(1e-2)), - ('adabelief', lambda: optax.adabelief(1e-1)), - # ('radam', lambda: optax.radam(1e-1)), - ('yogi', lambda: optax.yogi(1.0)), - ) - def test_optimizer(self, opt_name, opt_fn): - opt = opt_fn() - optimizer_def = optimizers.OptaxWrapper(opt) - self.run_train_loop(optimizer_def) - - -if __name__ == '__main__': - absltest.main() diff --git a/spaces/justest/gpt4free/g4f/Provider/Providers/GetGpt.py b/spaces/justest/gpt4free/g4f/Provider/Providers/GetGpt.py deleted file mode 100644 index 56a121f6ee5f430da7beda3b65abdea64a87c36b..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/g4f/Provider/Providers/GetGpt.py +++ /dev/null @@ -1,57 +0,0 @@ -import os -import json -import uuid -import requests -from Crypto.Cipher import AES -from ...typing import sha256, Dict, get_type_hints - -url = 'https://chat.getgpt.world/' -model = ['gpt-3.5-turbo'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - def encrypt(e): - t = os.urandom(8).hex().encode('utf-8') - n = os.urandom(8).hex().encode('utf-8') - r = e.encode('utf-8') - cipher = AES.new(t, AES.MODE_CBC, n) - ciphertext = cipher.encrypt(pad_data(r)) - return ciphertext.hex() + t.decode('utf-8') + n.decode('utf-8') - - def pad_data(data: bytes) -> bytes: - block_size = AES.block_size - padding_size = block_size - len(data) % block_size - padding = bytes([padding_size] * padding_size) - return data + padding - - 
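-    # (Added note: encrypt() above ships its key material with the message --
-    # it returns hex(ciphertext) + key-hex + iv-hex, where the key and IV are
-    # each the 16-char hex string of 8 random bytes, and pad_data() applies
-    # PKCS#7-style padding so the plaintext fills whole 16-byte AES blocks.)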
headers = { - 'Content-Type': 'application/json', - 'Referer': 'https://chat.getgpt.world/', - 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36' - } - - data = json.dumps({ - 'messages': messages, - 'frequency_penalty': kwargs.get('frequency_penalty', 0), - 'max_tokens': kwargs.get('max_tokens', 4000), - 'model': 'gpt-3.5-turbo', - 'presence_penalty': kwargs.get('presence_penalty', 0), - 'temperature': kwargs.get('temperature', 1), - 'top_p': kwargs.get('top_p', 1), - 'stream': True, - 'uuid': str(uuid.uuid4()) - }) - - res = requests.post('https://chat.getgpt.world/api/chat/stream', - headers=headers, json={'signature': encrypt(data)}, stream=True) - - for line in res.iter_lines(): - if b'content' in line: - line_json = json.loads(line.decode('utf-8').split('data: ')[1]) - yield (line_json['choices'][0]['delta']['content']) - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f'{name}: {get_type_hints(_create_completion)[name].__name__}' for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/katanaml-org/sparrow-ml/routers/__init__.py b/spaces/katanaml-org/sparrow-ml/routers/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kdrkdrkdr/YuukaTTS/utils.py b/spaces/kdrkdrkdr/YuukaTTS/utils.py deleted file mode 100644 index 4cb5b43d0ca2bae496e7871b2094f2ffb26ab642..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/YuukaTTS/utils.py +++ /dev/null @@ -1,226 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.ERROR) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, 
sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/keanteng/job/README.md b/spaces/keanteng/job/README.md deleted file mode 100644 index 192285d2ef1fc83a29e4ded8a08b165463d46540..0000000000000000000000000000000000000000 --- a/spaces/keanteng/job/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Job -emoji: ⚡ -colorFrom: gray -colorTo: yellow -sdk: streamlit -sdk_version: 1.28.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kermitt2/grobid/Dockerfile b/spaces/kermitt2/grobid/Dockerfile deleted file mode 100644 index e876c6a448f5e0c6eef45b2885ceebc2eff85801..0000000000000000000000000000000000000000 --- a/spaces/kermitt2/grobid/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM grobid/grobid:0.8.0-SNAPSHOT -USER root -RUN mkdir -m 777 -p /opt/grobid/grobid-home/tmp -RUN mkdir -m 777 -p /opt/grobid/logs -RUN chmod -R uog+rw /data/db -#ENTRYPOINT ["/tini", "-s", "--"] -CMD ["./grobid-service/bin/grobid-service"] diff --git a/spaces/kevinwang676/ControlNet-with-GPT-4/app.py b/spaces/kevinwang676/ControlNet-with-GPT-4/app.py deleted file mode 100644 index 73dd57dcc6b5595c0d4822419bdd5eb657d56e2c..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ControlNet-with-GPT-4/app.py +++ /dev/null @@ -1,151 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import gradio as gr -import torch -import re -import openai -from cairosvg import svg2png - -from app_canny import create_demo as create_demo_canny -from app_depth import create_demo as create_demo_depth -from app_ip2p import create_demo as create_demo_ip2p -from app_lineart import create_demo as create_demo_lineart -from app_mlsd import create_demo as create_demo_mlsd -from app_normal import create_demo as create_demo_normal -from app_openpose import create_demo as create_demo_openpose -from app_scribble import create_demo as create_demo_scribble -from app_scribble_interactive import create_demo as create_demo_scribble_interactive -from app_segmentation import create_demo as create_demo_segmentation -from app_shuffle import create_demo as create_demo_shuffle -from app_softedge import create_demo as create_demo_softedge -from model import Model -from settings import ALLOW_CHANGING_BASE_MODEL, DEFAULT_MODEL_ID, SHOW_DUPLICATE_BUTTON - -DESCRIPTION = "# ControlNet v1.1" - 
-if not torch.cuda.is_available():
-    DESCRIPTION += "\n<p>Running on CPU 🥶 This demo does not work on CPU.</p>
" - -model = Model(base_model_id=DEFAULT_MODEL_ID, task_name="Canny") - -def gpt_control(apikey, prompt): - - openai.api_key = apikey - - # gpt - messages = [{"role": "system", "content": "You are an SVG expert with years of experience and multiple contributions to the SVG project. Based on the prompt and the description, please generate the corresponding SVG code."}, - {"role": "user", "content": f"""Provide only the shell command without any explanations. -The current objective is below. Reply with the SVG code only: -OBJECTIVE: {prompt} -YOUR SVG CODE: -"""}] - - completion = openai.ChatCompletion.create( - model = "gpt-4", - messages = messages - ) - - chat_response = completion.choices[0].message.content - - code = re.findall('<.*>', chat_response) - code_new = '\n'.join(code) - - svg_code = f""" - {code_new} - """ - svg2png(bytestring=svg_code,write_to='output.png') - - return 'output.png' - - -with gr.Blocks(css="style.css") as demo: - gr.HTML("
" - "

🌁🪄🌃 - ControlNet with GPT-4

" - "
") - - gr.Markdown("##
<center>🌟 Born to Create: Controllable Text-to-Image Generation with GPT-4</center>
") - - gr.DuplicateButton( - value="Duplicate Space for private use", - elem_id="duplicate-button", - visible=SHOW_DUPLICATE_BUTTON, - ) - - with gr.Tab("GPT-4 Control"): - with gr.Row(): - with gr.Column(): - inp1 = gr.Textbox(label="OpenAI API Key", type="password") - inp2 = gr.Textbox(label="Position Prompt (as simple as possible)") - btn1 = gr.Button("GPT-4 Control", variant="primary") - with gr.Column(): - out1 = gr.Image(label="Output Image", type="pil", interactive=True) - - btn1.click(gpt_control, [inp1, inp2], [out1]) - - - with gr.Tabs(): - with gr.TabItem("Canny"): - create_demo_canny(model.process_canny) - with gr.TabItem("MLSD"): - create_demo_mlsd(model.process_mlsd) - with gr.TabItem("Scribble"): - create_demo_scribble(model.process_scribble) - with gr.TabItem("Scribble Interactive"): - create_demo_scribble_interactive(model.process_scribble_interactive) - with gr.TabItem("SoftEdge"): - create_demo_softedge(model.process_softedge) - with gr.TabItem("OpenPose"): - create_demo_openpose(model.process_openpose) - with gr.TabItem("Segmentation"): - create_demo_segmentation(model.process_segmentation) - with gr.TabItem("Depth"): - create_demo_depth(model.process_depth) - with gr.TabItem("Normal map"): - create_demo_normal(model.process_normal) - with gr.TabItem("Lineart"): - create_demo_lineart(model.process_lineart) - with gr.TabItem("Content Shuffle"): - create_demo_shuffle(model.process_shuffle) - with gr.TabItem("Instruct Pix2Pix"): - create_demo_ip2p(model.process_ip2p) - - with gr.Accordion(label="Base model", open=False): - with gr.Row(): - with gr.Column(scale=5): - current_base_model = gr.Text(label="Current base model") - with gr.Column(scale=1): - check_base_model_button = gr.Button("Check current base model") - with gr.Row(): - with gr.Column(scale=5): - new_base_model_id = gr.Text( - label="New base model", - max_lines=1, - placeholder="runwayml/stable-diffusion-v1-5", - info="The base model must be compatible with Stable Diffusion v1.5.", - interactive=ALLOW_CHANGING_BASE_MODEL, - ) - with gr.Column(scale=1): - change_base_model_button = gr.Button("Change base model", interactive=ALLOW_CHANGING_BASE_MODEL) - if not ALLOW_CHANGING_BASE_MODEL: - gr.Markdown( - """The base model is not allowed to be changed in this Space so as not to slow down the demo, but it can be changed if you duplicate the Space.""" - ) - - check_base_model_button.click( - fn=lambda: model.base_model_id, - outputs=current_base_model, - queue=False, - api_name="check_base_model", - ) - gr.on( - triggers=[new_base_model_id.submit, change_base_model_button.click], - fn=model.set_base_model, - inputs=new_base_model_id, - outputs=current_base_model, - api_name=False, - ) - -if __name__ == "__main__": - demo.queue(max_size=20).launch() diff --git a/spaces/kevinwang676/FreeVC/app.py b/spaces/kevinwang676/FreeVC/app.py deleted file mode 100644 index ffd212c8e9295f1922b393b6a80596aafc6162d4..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/FreeVC/app.py +++ /dev/null @@ -1,481 +0,0 @@ -import os -import torch -import librosa -import gradio as gr -from scipy.io.wavfile import write -from transformers import WavLMModel - -import utils -from models import SynthesizerTrn -from mel_processing import mel_spectrogram_torch -from speaker_encoder.voice_encoder import SpeakerEncoder - -import time -from textwrap import dedent - -import mdtex2html -from loguru import logger -from transformers import AutoModel, AutoTokenizer - -from tts_voice import tts_order_voice -import edge_tts -import tempfile 
-import anyio - -''' -def get_wavlm(): - os.system('gdown https://drive.google.com/uc?id=12-cB34qCTvByWT-QtOcZaqwwO21FLSqU') - shutil.move('WavLM-Large.pt', 'wavlm') -''' - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -smodel = SpeakerEncoder('speaker_encoder/ckpt/pretrained_bak_5805000.pt') - -print("Loading FreeVC(24k)...") -hps = utils.get_hparams_from_file("configs/freevc-24.json") -freevc_24 = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).to(device) -_ = freevc_24.eval() -_ = utils.load_checkpoint("checkpoints/freevc-24.pth", freevc_24, None) - -print("Loading WavLM for content...") -cmodel = WavLMModel.from_pretrained("microsoft/wavlm-large").to(device) - -def convert(model, src, tgt): - with torch.no_grad(): - # tgt - wav_tgt, _ = librosa.load(tgt, sr=hps.data.sampling_rate) - wav_tgt, _ = librosa.effects.trim(wav_tgt, top_db=20) - if model == "FreeVC" or model == "FreeVC (24kHz)": - g_tgt = smodel.embed_utterance(wav_tgt) - g_tgt = torch.from_numpy(g_tgt).unsqueeze(0).to(device) - else: - wav_tgt = torch.from_numpy(wav_tgt).unsqueeze(0).to(device) - mel_tgt = mel_spectrogram_torch( - wav_tgt, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - # src - wav_src, _ = librosa.load(src, sr=hps.data.sampling_rate) - wav_src = torch.from_numpy(wav_src).unsqueeze(0).to(device) - c = cmodel(wav_src).last_hidden_state.transpose(1, 2).to(device) - # infer - if model == "FreeVC": - audio = freevc.infer(c, g=g_tgt) - elif model == "FreeVC-s": - audio = freevc_s.infer(c, mel=mel_tgt) - else: - audio = freevc_24.infer(c, g=g_tgt) - audio = audio[0][0].data.cpu().float().numpy() - if model == "FreeVC" or model == "FreeVC-s": - write("out.wav", hps.data.sampling_rate, audio) - else: - write("out.wav", 24000, audio) - out = "out.wav" - return out - -# GLM2 - -language_dict = tts_order_voice - -# fix timezone in Linux -os.environ["TZ"] = "Asia/Shanghai" -try: - time.tzset() # type: ignore # pylint: disable=no-member -except Exception: - # Windows - logger.warning("Windows, cant run time.tzset()") - -# model_name = "THUDM/chatglm2-6b" -model_name = "THUDM/chatglm2-6b-int4" - -RETRY_FLAG = False - -tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) - -# model = AutoModel.from_pretrained(model_name, trust_remote_code=True).cuda() - -# 4/8 bit -# model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).quantize(4).cuda() - -has_cuda = torch.cuda.is_available() - -# has_cuda = False # force cpu - -if has_cuda: - model_glm = ( - AutoModel.from_pretrained(model_name, trust_remote_code=True).cuda().half() - ) # 3.92G -else: - model_glm = AutoModel.from_pretrained( - model_name, trust_remote_code=True - ).float() # .float() .half().float() - -model_glm = model_glm.eval() - -_ = """Override Chatbot.postprocess""" - - -def postprocess(self, y): - if y is None: - return [] - for i, (message, response) in enumerate(y): - y[i] = ( - None if message is None else mdtex2html.convert((message)), - None if response is None else mdtex2html.convert(response), - ) - return y - - -gr.Chatbot.postprocess = postprocess - - -def parse_text(text): - """copy from https://github.com/GaiZhenbiao/ChuanhuChatGPT/""" - lines = text.split("\n") - lines = [line for line in lines if line != ""] - count = 0 - for i, line in enumerate(lines): - if "```" in line: - count += 1 - 
items = line.split("`")
-            if count % 2 == 1:
-                lines[i] = f'<pre><code class="language-{items[-1]}">'
-            else:
-                lines[i] = "<br></code></pre>"
-        else:
-            if i > 0:
-                if count % 2 == 1:
-                    line = line.replace("`", r"\`")
-                    line = line.replace("<", "&lt;")
-                    line = line.replace(">", "&gt;")
-                    line = line.replace(" ", "&nbsp;")
-                    line = line.replace("*", "&ast;")
-                    line = line.replace("_", "&lowbar;")
-                    line = line.replace("-", "&#45;")
-                    line = line.replace(".", "&#46;")
-                    line = line.replace("!", "&#33;")
-                    line = line.replace("(", "&#40;")
-                    line = line.replace(")", "&#41;")
-                    line = line.replace("$", "&#36;")
-                lines[i] = "<br>
" + line - text = "".join(lines) - return text - - -def predict( - RETRY_FLAG, input, chatbot, max_length, top_p, temperature, history, past_key_values -): - try: - chatbot.append((parse_text(input), "")) - except Exception as exc: - logger.error(exc) - logger.debug(f"{chatbot=}") - _ = """ - if chatbot: - chatbot[-1] = (parse_text(input), str(exc)) - yield chatbot, history, past_key_values - # """ - yield chatbot, history, past_key_values - - for response, history, past_key_values in model_glm.stream_chat( - tokenizer, - input, - history, - past_key_values=past_key_values, - return_past_key_values=True, - max_length=max_length, - top_p=top_p, - temperature=temperature, - ): - chatbot[-1] = (parse_text(input), parse_text(response)) - # chatbot[-1][-1] = parse_text(response) - - yield chatbot, history, past_key_values, parse_text(response) - - -def trans_api(input, max_length=4096, top_p=0.8, temperature=0.2): - if max_length < 10: - max_length = 4096 - if top_p < 0.1 or top_p > 1: - top_p = 0.85 - if temperature <= 0 or temperature > 1: - temperature = 0.01 - try: - res, _ = model_glm.chat( - tokenizer, - input, - history=[], - past_key_values=None, - max_length=max_length, - top_p=top_p, - temperature=temperature, - ) - # logger.debug(f"{res=} \n{_=}") - except Exception as exc: - logger.error(f"{exc=}") - res = str(exc) - - return res - - -def reset_user_input(): - return gr.update(value="") - - -def reset_state(): - return [], [], None, "" - - -# Delete last turn -def delete_last_turn(chat, history): - if chat and history: - chat.pop(-1) - history.pop(-1) - return chat, history - - -# Regenerate response -def retry_last_answer( - user_input, chatbot, max_length, top_p, temperature, history, past_key_values -): - if chatbot and history: - # Removing the previous conversation from chat - chatbot.pop(-1) - # Setting up a flag to capture a retry - RETRY_FLAG = True - # Getting last message from user - user_input = history[-1][0] - # Removing bot response from the history - history.pop(-1) - - yield from predict( - RETRY_FLAG, # type: ignore - user_input, - chatbot, - max_length, - top_p, - temperature, - history, - past_key_values, - ) - -# print - -def print(text): - return text - -# TTS - -async def text_to_speech_edge(text, language_code): - voice = language_dict[language_code] - communicate = edge_tts.Communicate(text, voice) - with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as tmp_file: - tmp_path = tmp_file.name - - await communicate.save(tmp_path) - - return tmp_path - - -with gr.Blocks(title="ChatGLM2-6B-int4", theme=gr.themes.Soft(text_size="sm")) as demo: - gr.HTML("
" - "

🥳💕🎶 - ChatGLM2 + 声音克隆:和你喜欢的角色畅所欲言吧!

" - "
") - gr.Markdown("##
<center>💡 - 第二代ChatGLM大语言模型 + FreeVC变声,为您打造独一无二的沉浸式对话体验,支持中英双语</center>
") - gr.Markdown("##
<center>🌊 - 更多精彩应用,尽在[滔滔AI](http://www.talktalkai.com);滔滔AI,为爱滔滔!💕</center>
") - gr.Markdown("###
<center>⭐ - 如果您喜欢这个程序,欢迎给我的[Github项目](https://github.com/KevinWang676/ChatGLM2-Voice-Cloning)点赞支持!</center>
") - - with gr.Accordion("📒 相关信息", open=False): - _ = f""" ChatGLM2的可选参数信息: - * Low temperature: responses will be more deterministic and focused; High temperature: responses more creative. - * Suggested temperatures -- translation: up to 0.3; chatting: > 0.4 - * Top P controls dynamic vocabulary selection based on context.\n - 如果您想让ChatGLM2进行角色扮演并与之对话,请先输入恰当的提示词,如“请你扮演成动漫角色蜡笔小新并和我进行对话”;您也可以为ChatGLM2提供自定义的角色设定\n - 当您使用声音克隆功能时,请先在此程序的对应位置上传一段您喜欢的音频 - """ - gr.Markdown(dedent(_)) - chatbot = gr.Chatbot(height=300) - with gr.Row(): - with gr.Column(scale=4): - with gr.Column(scale=12): - user_input = gr.Textbox( - label="请在此处和GLM2聊天 (按回车键即可发送)", - placeholder="聊点什么吧", - ) - RETRY_FLAG = gr.Checkbox(value=False, visible=False) - with gr.Column(min_width=32, scale=1): - with gr.Row(): - submitBtn = gr.Button("开始和GLM2交流吧", variant="primary") - deleteBtn = gr.Button("删除最新一轮对话", variant="secondary") - retryBtn = gr.Button("重新生成最新一轮对话", variant="secondary") - - with gr.Accordion("🔧 更多设置", open=False): - with gr.Row(): - emptyBtn = gr.Button("清空所有聊天记录") - max_length = gr.Slider( - 0, - 32768, - value=8192, - step=1.0, - label="Maximum length", - interactive=True, - ) - top_p = gr.Slider( - 0, 1, value=0.85, step=0.01, label="Top P", interactive=True - ) - temperature = gr.Slider( - 0.01, 1, value=0.95, step=0.01, label="Temperature", interactive=True - ) - - - with gr.Row(): - test1 = gr.Textbox(label="GLM2的最新回答 (可编辑)", lines = 3) - with gr.Column(): - language = gr.Dropdown(choices=list(language_dict.keys()), value="普通话 (中国大陆)-Xiaoxiao-女", label="请选择文本对应的语言及您喜欢的说话人") - tts_btn = gr.Button("生成对应的音频吧", variant="primary") - output_audio = gr.Audio(type="filepath", label="为您生成的音频", interactive=False) - - tts_btn.click(text_to_speech_edge, inputs=[test1, language], outputs=[output_audio]) - - with gr.Row(): - model_choice = gr.Dropdown(choices=["FreeVC", "FreeVC-s", "FreeVC (24kHz)"], value="FreeVC (24kHz)", label="Model", visible=False) - audio1 = output_audio - audio2 = gr.Audio(label="请上传您喜欢的声音进行声音克隆", type='filepath') - clone_btn = gr.Button("开始AI声音克隆吧", variant="primary") - audio_cloned = gr.Audio(label="为您生成的专属声音克隆音频", type='filepath') - - clone_btn.click(convert, inputs=[model_choice, audio1, audio2], outputs=[audio_cloned]) - - history = gr.State([]) - past_key_values = gr.State(None) - - user_input.submit( - predict, - [ - RETRY_FLAG, - user_input, - chatbot, - max_length, - top_p, - temperature, - history, - past_key_values, - ], - [chatbot, history, past_key_values, test1], - show_progress="full", - ) - submitBtn.click( - predict, - [ - RETRY_FLAG, - user_input, - chatbot, - max_length, - top_p, - temperature, - history, - past_key_values, - ], - [chatbot, history, past_key_values, test1], - show_progress="full", - api_name="predict", - ) - submitBtn.click(reset_user_input, [], [user_input]) - - emptyBtn.click( - reset_state, outputs=[chatbot, history, past_key_values, test1], show_progress="full" - ) - - retryBtn.click( - retry_last_answer, - inputs=[ - user_input, - chatbot, - max_length, - top_p, - temperature, - history, - past_key_values, - ], - # outputs = [chatbot, history, last_user_message, user_message] - outputs=[chatbot, history, past_key_values, test1], - ) - deleteBtn.click(delete_last_turn, [chatbot, history], [chatbot, history]) - - with gr.Accordion("📔 提示词示例", open=False): - etext = """In America, where cars are an important part of the national psyche, a decade ago people had suddenly started to drive less, which had not happened since the oil shocks of the 1970s. 
""" - examples = gr.Examples( - examples=[ - ["Explain the plot of Cinderella in a sentence."], - [ - "How long does it take to become proficient in French, and what are the best methods for retaining information?" - ], - ["What are some common mistakes to avoid when writing code?"], - ["Build a prompt to generate a beautiful portrait of a horse"], - ["Suggest four metaphors to describe the benefits of AI"], - ["Write a pop song about leaving home for the sandy beaches."], - ["Write a summary demonstrating my ability to tame lions"], - ["鲁迅和周树人什么关系"], - ["从前有一头牛,这头牛后面有什么?"], - ["正无穷大加一大于正无穷大吗?"], - ["正无穷大加正无穷大大于正无穷大吗?"], - ["-2的平方根等于什么"], - ["树上有5只鸟,猎人开枪打死了一只。树上还有几只鸟?"], - ["树上有11只鸟,猎人开枪打死了一只。树上还有几只鸟?提示:需考虑鸟可能受惊吓飞走。"], - ["鲁迅和周树人什么关系 用英文回答"], - ["以红楼梦的行文风格写一张委婉的请假条。不少于320字。"], - [f"{etext} 翻成中文,列出3个版本"], - [f"{etext} \n 翻成中文,保留原意,但使用文学性的语言。不要写解释。列出3个版本"], - ["js 判断一个数是不是质数"], - ["js 实现python 的 range(10)"], - ["js 实现python 的 [*(range(10)]"], - ["假定 1 + 2 = 4, 试求 7 + 8"], - ["Erkläre die Handlung von Cinderella in einem Satz."], - ["Erkläre die Handlung von Cinderella in einem Satz. Auf Deutsch"], - ], - inputs=[user_input], - examples_per_page=30, - ) - - with gr.Accordion("For Chat/Translation API", open=False, visible=False): - input_text = gr.Text() - tr_btn = gr.Button("Go", variant="primary") - out_text = gr.Text() - tr_btn.click( - trans_api, - [input_text, max_length, top_p, temperature], - out_text, - # show_progress="full", - api_name="tr", - ) - _ = """ - input_text.submit( - trans_api, - [input_text, max_length, top_p, temperature], - out_text, - show_progress="full", - api_name="tr1", - ) - # """ - - gr.Markdown("###
Note❗: Please do not generate content that could harm individuals or organizations; this program is intended only for research, learning, and personal entertainment.
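The temperature and Top P parameters described in the 相关信息 panel above interact during sampling. Below is a minimal, illustrative sketch of how the two values are typically applied to next-token logits; the function and variable names are assumptions for illustration, not code from this app or from ChatGLM2's own generate loop.

```python
# Illustrative sketch only (assumed names, not this app's code): how
# temperature scaling and top-p (nucleus) filtering shape sampling.
import torch

def sample_next_token(logits: torch.Tensor, temperature: float = 0.95, top_p: float = 0.85) -> int:
    # Low temperature sharpens the distribution (more deterministic and
    # focused); high temperature flattens it (more creative).
    probs = torch.softmax(logits / temperature, dim=-1)
    # Keep the smallest set of tokens whose cumulative probability
    # reaches top_p, then renormalize and sample from that nucleus.
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    keep = cumulative - sorted_probs < top_p  # always keeps the top token
    nucleus = torch.where(keep, sorted_probs, torch.zeros_like(sorted_probs))
    choice = torch.multinomial(nucleus / nucleus.sum(), num_samples=1)
    return int(sorted_idx[choice])
```

Under this scheme a temperature of 0.3 or below collapses the nucleus toward the single most likely token (suited to translation), while values above 0.4 leave enough probability mass spread across tokens for varied phrasing in chat.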
") - gr.Markdown("
💡 - How to use this program: after typing your question for ChatGLM, simply click the three buttons “开始和GLM2交流吧” (chat), “生成对应的音频吧” (generate audio), and “开始AI声音克隆吧” (voice cloning) in order; when using the voice-cloning feature, please upload an audio clip of a voice you like first
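As a rough illustration of the audio-generation step in the flow above, the sketch below shows how the edge-tts package (which provides speakers such as Xiaoxiao in the dropdown) can synthesize speech. The helper name, voice ID, and output path are assumptions for illustration; this is not the app's actual `text_to_speech_edge` implementation.

```python
# Minimal sketch, assuming the edge-tts package; the helper name, voice
# ID, and output path are illustrative, not taken from this app.
import asyncio
import edge_tts

async def synthesize(text: str, voice: str = "zh-CN-XiaoxiaoNeural", out_path: str = "tts.mp3") -> str:
    communicate = edge_tts.Communicate(text, voice)  # build the TTS request
    await communicate.save(out_path)                 # write synthesized audio to disk
    return out_path

# Example: asyncio.run(synthesize("你好,欢迎使用声音克隆程序"))
```

The resulting file then plays the role of the source audio that the FreeVC-based cloning step converts toward the timbre of the uploaded reference clip.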
") - gr.HTML(''' - - ''') - - -demo.queue().launch(show_error=True, debug=True) diff --git a/spaces/kevinwang676/SadTalker/src/facerender/modules/generator.py b/spaces/kevinwang676/SadTalker/src/facerender/modules/generator.py deleted file mode 100644 index 5a9edcb3b328d3afc99072b2461d7ca69919f813..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/facerender/modules/generator.py +++ /dev/null @@ -1,255 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F -from src.facerender.modules.util import ResBlock2d, SameBlock2d, UpBlock2d, DownBlock2d, ResBlock3d, SPADEResnetBlock -from src.facerender.modules.dense_motion import DenseMotionNetwork - - -class OcclusionAwareGenerator(nn.Module): - """ - Generator follows NVIDIA architecture. - """ - - def __init__(self, image_channel, feature_channel, num_kp, block_expansion, max_features, num_down_blocks, reshape_channel, reshape_depth, - num_resblocks, estimate_occlusion_map=False, dense_motion_params=None, estimate_jacobian=False): - super(OcclusionAwareGenerator, self).__init__() - - if dense_motion_params is not None: - self.dense_motion_network = DenseMotionNetwork(num_kp=num_kp, feature_channel=feature_channel, - estimate_occlusion_map=estimate_occlusion_map, - **dense_motion_params) - else: - self.dense_motion_network = None - - self.first = SameBlock2d(image_channel, block_expansion, kernel_size=(7, 7), padding=(3, 3)) - - down_blocks = [] - for i in range(num_down_blocks): - in_features = min(max_features, block_expansion * (2 ** i)) - out_features = min(max_features, block_expansion * (2 ** (i + 1))) - down_blocks.append(DownBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1))) - self.down_blocks = nn.ModuleList(down_blocks) - - self.second = nn.Conv2d(in_channels=out_features, out_channels=max_features, kernel_size=1, stride=1) - - self.reshape_channel = reshape_channel - self.reshape_depth = reshape_depth - - self.resblocks_3d = torch.nn.Sequential() - for i in range(num_resblocks): - self.resblocks_3d.add_module('3dr' + str(i), ResBlock3d(reshape_channel, kernel_size=3, padding=1)) - - out_features = block_expansion * (2 ** (num_down_blocks)) - self.third = SameBlock2d(max_features, out_features, kernel_size=(3, 3), padding=(1, 1), lrelu=True) - self.fourth = nn.Conv2d(in_channels=out_features, out_channels=out_features, kernel_size=1, stride=1) - - self.resblocks_2d = torch.nn.Sequential() - for i in range(num_resblocks): - self.resblocks_2d.add_module('2dr' + str(i), ResBlock2d(out_features, kernel_size=3, padding=1)) - - up_blocks = [] - for i in range(num_down_blocks): - in_features = max(block_expansion, block_expansion * (2 ** (num_down_blocks - i))) - out_features = max(block_expansion, block_expansion * (2 ** (num_down_blocks - i - 1))) - up_blocks.append(UpBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1))) - self.up_blocks = nn.ModuleList(up_blocks) - - self.final = nn.Conv2d(block_expansion, image_channel, kernel_size=(7, 7), padding=(3, 3)) - self.estimate_occlusion_map = estimate_occlusion_map - self.image_channel = image_channel - - def deform_input(self, inp, deformation): - _, d_old, h_old, w_old, _ = deformation.shape - _, _, d, h, w = inp.shape - if d_old != d or h_old != h or w_old != w: - deformation = deformation.permute(0, 4, 1, 2, 3) - deformation = F.interpolate(deformation, size=(d, h, w), mode='trilinear') - deformation = deformation.permute(0, 2, 3, 4, 1) - return F.grid_sample(inp, deformation) - - def forward(self, 
source_image, kp_driving, kp_source): - # Encoding (downsampling) part - out = self.first(source_image) - for i in range(len(self.down_blocks)): - out = self.down_blocks[i](out) - out = self.second(out) - bs, c, h, w = out.shape - # print(out.shape) - feature_3d = out.view(bs, self.reshape_channel, self.reshape_depth, h ,w) - feature_3d = self.resblocks_3d(feature_3d) - - # Transforming feature representation according to deformation and occlusion - output_dict = {} - if self.dense_motion_network is not None: - dense_motion = self.dense_motion_network(feature=feature_3d, kp_driving=kp_driving, - kp_source=kp_source) - output_dict['mask'] = dense_motion['mask'] - - if 'occlusion_map' in dense_motion: - occlusion_map = dense_motion['occlusion_map'] - output_dict['occlusion_map'] = occlusion_map - else: - occlusion_map = None - deformation = dense_motion['deformation'] - out = self.deform_input(feature_3d, deformation) - - bs, c, d, h, w = out.shape - out = out.view(bs, c*d, h, w) - out = self.third(out) - out = self.fourth(out) - - if occlusion_map is not None: - if out.shape[2] != occlusion_map.shape[2] or out.shape[3] != occlusion_map.shape[3]: - occlusion_map = F.interpolate(occlusion_map, size=out.shape[2:], mode='bilinear') - out = out * occlusion_map - - # output_dict["deformed"] = self.deform_input(source_image, deformation) # 3d deformation cannot deform 2d image - - # Decoding part - out = self.resblocks_2d(out) - for i in range(len(self.up_blocks)): - out = self.up_blocks[i](out) - out = self.final(out) - out = F.sigmoid(out) - - output_dict["prediction"] = out - - return output_dict - - -class SPADEDecoder(nn.Module): - def __init__(self): - super().__init__() - ic = 256 - oc = 64 - norm_G = 'spadespectralinstance' - label_nc = 256 - - self.fc = nn.Conv2d(ic, 2 * ic, 3, padding=1) - self.G_middle_0 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_1 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_2 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_3 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_4 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_5 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.up_0 = SPADEResnetBlock(2 * ic, ic, norm_G, label_nc) - self.up_1 = SPADEResnetBlock(ic, oc, norm_G, label_nc) - self.conv_img = nn.Conv2d(oc, 3, 3, padding=1) - self.up = nn.Upsample(scale_factor=2) - - def forward(self, feature): - seg = feature - x = self.fc(feature) - x = self.G_middle_0(x, seg) - x = self.G_middle_1(x, seg) - x = self.G_middle_2(x, seg) - x = self.G_middle_3(x, seg) - x = self.G_middle_4(x, seg) - x = self.G_middle_5(x, seg) - x = self.up(x) - x = self.up_0(x, seg) # 256, 128, 128 - x = self.up(x) - x = self.up_1(x, seg) # 64, 256, 256 - - x = self.conv_img(F.leaky_relu(x, 2e-1)) - # x = torch.tanh(x) - x = F.sigmoid(x) - - return x - - -class OcclusionAwareSPADEGenerator(nn.Module): - - def __init__(self, image_channel, feature_channel, num_kp, block_expansion, max_features, num_down_blocks, reshape_channel, reshape_depth, - num_resblocks, estimate_occlusion_map=False, dense_motion_params=None, estimate_jacobian=False): - super(OcclusionAwareSPADEGenerator, self).__init__() - - if dense_motion_params is not None: - self.dense_motion_network = DenseMotionNetwork(num_kp=num_kp, feature_channel=feature_channel, - estimate_occlusion_map=estimate_occlusion_map, - **dense_motion_params) - else: - self.dense_motion_network = None - - self.first = 
SameBlock2d(image_channel, block_expansion, kernel_size=(3, 3), padding=(1, 1)) - - down_blocks = [] - for i in range(num_down_blocks): - in_features = min(max_features, block_expansion * (2 ** i)) - out_features = min(max_features, block_expansion * (2 ** (i + 1))) - down_blocks.append(DownBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1))) - self.down_blocks = nn.ModuleList(down_blocks) - - self.second = nn.Conv2d(in_channels=out_features, out_channels=max_features, kernel_size=1, stride=1) - - self.reshape_channel = reshape_channel - self.reshape_depth = reshape_depth - - self.resblocks_3d = torch.nn.Sequential() - for i in range(num_resblocks): - self.resblocks_3d.add_module('3dr' + str(i), ResBlock3d(reshape_channel, kernel_size=3, padding=1)) - - out_features = block_expansion * (2 ** (num_down_blocks)) - self.third = SameBlock2d(max_features, out_features, kernel_size=(3, 3), padding=(1, 1), lrelu=True) - self.fourth = nn.Conv2d(in_channels=out_features, out_channels=out_features, kernel_size=1, stride=1) - - self.estimate_occlusion_map = estimate_occlusion_map - self.image_channel = image_channel - - self.decoder = SPADEDecoder() - - def deform_input(self, inp, deformation): - _, d_old, h_old, w_old, _ = deformation.shape - _, _, d, h, w = inp.shape - if d_old != d or h_old != h or w_old != w: - deformation = deformation.permute(0, 4, 1, 2, 3) - deformation = F.interpolate(deformation, size=(d, h, w), mode='trilinear') - deformation = deformation.permute(0, 2, 3, 4, 1) - return F.grid_sample(inp, deformation) - - def forward(self, source_image, kp_driving, kp_source): - # Encoding (downsampling) part - out = self.first(source_image) - for i in range(len(self.down_blocks)): - out = self.down_blocks[i](out) - out = self.second(out) - bs, c, h, w = out.shape - # print(out.shape) - feature_3d = out.view(bs, self.reshape_channel, self.reshape_depth, h ,w) - feature_3d = self.resblocks_3d(feature_3d) - - # Transforming feature representation according to deformation and occlusion - output_dict = {} - if self.dense_motion_network is not None: - dense_motion = self.dense_motion_network(feature=feature_3d, kp_driving=kp_driving, - kp_source=kp_source) - output_dict['mask'] = dense_motion['mask'] - - # import pdb; pdb.set_trace() - - if 'occlusion_map' in dense_motion: - occlusion_map = dense_motion['occlusion_map'] - output_dict['occlusion_map'] = occlusion_map - else: - occlusion_map = None - deformation = dense_motion['deformation'] - out = self.deform_input(feature_3d, deformation) - - bs, c, d, h, w = out.shape - out = out.view(bs, c*d, h, w) - out = self.third(out) - out = self.fourth(out) - - # occlusion_map = torch.where(occlusion_map < 0.95, 0, occlusion_map) - - if occlusion_map is not None: - if out.shape[2] != occlusion_map.shape[2] or out.shape[3] != occlusion_map.shape[3]: - occlusion_map = F.interpolate(occlusion_map, size=out.shape[2:], mode='bilinear') - out = out * occlusion_map - - # Decoding part - out = self.decoder(out) - - output_dict["prediction"] = out - - return output_dict \ No newline at end of file diff --git a/spaces/kevinwang676/VALLE/modules/transformer.py b/spaces/kevinwang676/VALLE/modules/transformer.py deleted file mode 100644 index ea8826b193c5053cb8ae74312f65ac95fe440350..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VALLE/modules/transformer.py +++ /dev/null @@ -1,683 +0,0 @@ -import copy -import numbers -from functools import partial -from typing import Any, Callable, List, Optional, Tuple, Union - -import torch 
-from torch import Tensor, nn -from torch.nn import functional as F - -from .activation import MultiheadAttention -from .scaling import ActivationBalancer, BalancedDoubleSwish -from .scaling import BasicNorm as _BasicNorm - -_shape_t = Union[int, List[int], torch.Size] - - -class LayerNorm(nn.Module): - __constants__ = ["normalized_shape", "eps", "elementwise_affine"] - normalized_shape: Tuple[int, ...] - eps: float - elementwise_affine: bool - - def __init__( - self, - normalized_shape: _shape_t, - eps: float = 1e-5, - elementwise_affine: bool = True, - device=None, - dtype=None, - ) -> None: - factory_kwargs = {"device": device, "dtype": dtype} - super(LayerNorm, self).__init__() - if isinstance(normalized_shape, numbers.Integral): - # mypy error: incompatible types in assignment - normalized_shape = (normalized_shape,) # type: ignore[assignment] - self.normalized_shape = tuple(normalized_shape) # type: ignore[arg-type] - self.eps = eps - self.elementwise_affine = elementwise_affine - if self.elementwise_affine: - self.weight = nn.Parameter( - torch.empty(self.normalized_shape, **factory_kwargs) - ) - self.bias = nn.Parameter( - torch.empty(self.normalized_shape, **factory_kwargs) - ) - else: - self.register_parameter("weight", None) - self.register_parameter("bias", None) - - self.reset_parameters() - - def reset_parameters(self) -> None: - if self.elementwise_affine: - nn.init.ones_(self.weight) - nn.init.zeros_(self.bias) - - def forward(self, input: Tensor, embedding: Any = None) -> Tensor: - if isinstance(input, tuple): - input, embedding = input - return ( - F.layer_norm( - input, - self.normalized_shape, - self.weight, - self.bias, - self.eps, - ), - embedding, - ) - - assert embedding is None - return F.layer_norm( - input, self.normalized_shape, self.weight, self.bias, self.eps - ) - - def extra_repr(self) -> str: - return ( - "{normalized_shape}, eps={eps}, " - "elementwise_affine={elementwise_affine}".format(**self.__dict__) - ) - - -class AdaptiveLayerNorm(nn.Module): - r"""Adaptive Layer Normalization""" - - def __init__(self, d_model, norm) -> None: - super(AdaptiveLayerNorm, self).__init__() - self.project_layer = nn.Linear(d_model, 2 * d_model) - self.norm = norm - self.d_model = d_model - self.eps = self.norm.eps - - def forward(self, input: Tensor, embedding: Tensor = None) -> Tensor: - if isinstance(input, tuple): - input, embedding = input - weight, bias = torch.split( - self.project_layer(embedding), - split_size_or_sections=self.d_model, - dim=-1, - ) - return (weight * self.norm(input) + bias, embedding) - - weight, bias = torch.split( - self.project_layer(embedding), - split_size_or_sections=self.d_model, - dim=-1, - ) - return weight * self.norm(input) + bias - - -class BasicNorm(_BasicNorm): - def __init__( - self, - d_model: int, - eps: float = 1e-5, - device=None, - dtype=None, - ): - super(BasicNorm, self).__init__(d_model, eps=eps) - - def forward(self, input: Tensor, embedding: Any = None) -> Tensor: - if isinstance(input, tuple): - input, embedding = input - return ( - super(BasicNorm, self).forward(input), - embedding, - ) - - assert embedding is None - return super(BasicNorm, self).forward(input) - - -class BalancedBasicNorm(nn.Module): - def __init__( - self, - d_model: int, - eps: float = 1e-5, - device=None, - dtype=None, - ): - super(BalancedBasicNorm, self).__init__() - self.balancer = ActivationBalancer( - d_model, - channel_dim=-1, - min_positive=0.45, - max_positive=0.55, - max_abs=6.0, - ) - self.norm = BasicNorm(d_model, eps, device=device, 
dtype=dtype) - - def forward(self, input: Tensor, embedding: Any = None) -> Tensor: - if isinstance(input, tuple): - input, embedding = input - return self.norm((self.balancer(input), embedding)) - - assert embedding is None - return self.norm(self.balancer(input)) - - -class IdentityNorm(nn.Module): - def __init__( - self, - d_model: int, - eps: float = 1e-5, - device=None, - dtype=None, - ) -> None: - super(IdentityNorm, self).__init__() - - def forward(self, input: Tensor, embedding: Any = None) -> Tensor: - if isinstance(input, tuple): - return input - - assert embedding is None - return input - - -class TransformerEncoderLayer(nn.Module): - __constants__ = ["batch_first", "norm_first"] - - def __init__( - self, - d_model: int, - nhead: int, - dim_feedforward: int = 2048, - dropout: float = 0.1, - activation: Union[str, Callable[[Tensor], Tensor]] = F.relu, - batch_first: bool = False, - norm_first: bool = False, - device=None, - dtype=None, - linear1_self_attention_cls: nn.Module = nn.Linear, - linear2_self_attention_cls: nn.Module = nn.Linear, - linear1_feedforward_cls: nn.Module = nn.Linear, - linear2_feedforward_cls: nn.Module = nn.Linear, - layer_norm_cls: nn.Module = LayerNorm, - layer_norm_eps: float = 1e-5, - adaptive_layer_norm=False, - ) -> None: - factory_kwargs = {"device": device, "dtype": dtype} - super(TransformerEncoderLayer, self).__init__() - self.self_attn = MultiheadAttention( - d_model, - nhead, - dropout=dropout, - batch_first=batch_first, - linear1_cls=linear1_self_attention_cls, - linear2_cls=linear2_self_attention_cls, - **factory_kwargs, - ) - - # Implementation of Feedforward model - self.linear1 = linear1_feedforward_cls( - d_model, dim_feedforward, **factory_kwargs - ) - self.dropout = nn.Dropout(dropout) - self.linear2 = linear2_feedforward_cls( - dim_feedforward, d_model, **factory_kwargs - ) - - self.norm_first = norm_first - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - - # Legacy string support for activation function. - if isinstance(activation, str): - activation = _get_activation_fn(activation) - elif isinstance(activation, partial): - activation = activation(d_model) - elif activation == BalancedDoubleSwish: - activation = BalancedDoubleSwish(d_model) - - # # We can't test self.activation in forward() in TorchScript, - # # so stash some information about it instead. - # if activation is F.relu or isinstance(activation, torch.nn.ReLU): - # self.activation_relu_or_gelu = 1 - # elif activation is F.gelu or isinstance(activation, torch.nn.GELU): - # self.activation_relu_or_gelu = 2 - # else: - # self.activation_relu_or_gelu = 0 - self.activation = activation - - norm1 = layer_norm_cls(d_model, eps=layer_norm_eps, **factory_kwargs) - if layer_norm_cls == IdentityNorm: - norm2 = BalancedBasicNorm( - d_model, eps=layer_norm_eps, **factory_kwargs - ) - else: - norm2 = layer_norm_cls( - d_model, eps=layer_norm_eps, **factory_kwargs - ) - - if adaptive_layer_norm: - self.norm1 = AdaptiveLayerNorm(d_model, norm1) - self.norm2 = AdaptiveLayerNorm(d_model, norm2) - else: - self.norm1 = norm1 - self.norm2 = norm2 - - def __setstate__(self, state): - super(TransformerEncoderLayer, self).__setstate__(state) - if not hasattr(self, "activation"): - self.activation = F.relu - - def forward( - self, - src: Tensor, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - ) -> Tensor: - r"""Pass the input through the encoder layer. - - Args: - src: the sequence to the encoder layer (required). 
- src_mask: the mask for the src sequence (optional). - src_key_padding_mask: the mask for the src keys per batch (optional). - - Shape: - see the docs in Transformer class. - """ - x, stage_embedding = src, None - is_src_tuple = False - if isinstance(src, tuple): - x, stage_embedding = src - is_src_tuple = True - - if src_key_padding_mask is not None: - _skpm_dtype = src_key_padding_mask.dtype - if _skpm_dtype != torch.bool and not torch.is_floating_point( - src_key_padding_mask - ): - raise AssertionError( - "only bool and floating types of key_padding_mask are supported" - ) - - if self.norm_first: - x = x + self._sa_block( - self.norm1(x, stage_embedding), - src_mask, - src_key_padding_mask, - ) - x = x + self._ff_block(self.norm2(x, stage_embedding)) - else: - x = self.norm1( - x + self._sa_block(x, src_mask, src_key_padding_mask), - stage_embedding, - ) - x = self.norm2(x + self._ff_block(x), stage_embedding) - - if is_src_tuple: - return (x, stage_embedding) - return x - - def infer( - self, - src: Tensor, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - past_kv: Optional[Tensor] = None, - use_cache: bool = False, - ): - x, stage_embedding = src, None - is_src_tuple = False - if isinstance(src, tuple): - x, stage_embedding = src - is_src_tuple = True - - if src_key_padding_mask is not None: - _skpm_dtype = src_key_padding_mask.dtype - if _skpm_dtype != torch.bool and not torch.is_floating_point( - src_key_padding_mask - ): - raise AssertionError( - "only bool and floating types of key_padding_mask are supported" - ) - - if self.norm_first: - x_attn_out, kv = self.self_attn.infer( - self.norm1(x, stage_embedding), - attn_mask=src_mask, - key_padding_mask=src_key_padding_mask, - need_weights=False, - past_kv=past_kv, - use_cache=use_cache, - ) - x = x + x_attn_out - x = x + self._ff_block(self.norm2(x, stage_embedding)) - - if is_src_tuple: - return (x, stage_embedding) - return (x, kv) - - # self-attention block - def _sa_block( - self, - x: Tensor, - attn_mask: Optional[Tensor], - key_padding_mask: Optional[Tensor], - ) -> Tensor: - x = self.self_attn( - x, - x, - x, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask, - need_weights=False, - )[0] - return self.dropout1(x) - - # feed forward block - def _ff_block(self, x: Tensor) -> Tensor: - x = self.linear2(self.dropout(self.activation(self.linear1(x)))) - return self.dropout2(x) - - -class TransformerEncoder(nn.Module): - r"""TransformerEncoder is a stack of N encoder layers. Users can build the - BERT(https://arxiv.org/abs/1810.04805) model with corresponding parameters. - - Args: - encoder_layer: an instance of the TransformerEncoderLayer() class (required). - num_layers: the number of sub-encoder-layers in the encoder (required). - norm: the layer normalization component (optional). - enable_nested_tensor: if True, input will automatically convert to nested tensor - (and convert back on output). This will improve the overall performance of - TransformerEncoder when padding rate is high. Default: ``True`` (enabled). 
- - Examples:: - >>> encoder_layer = TransformerEncoderLayer(d_model=512, nhead=8) - >>> transformer_encoder = TransformerEncoder(encoder_layer, num_layers=6) - >>> src = torch.rand(10, 32, 512) - >>> out = transformer_encoder(src) - """ - __constants__ = ["norm"] - - def __init__(self, encoder_layer, num_layers, norm=None): - super(TransformerEncoder, self).__init__() - self.layers = _get_clones(encoder_layer, num_layers) - self.num_layers = num_layers - self.norm = norm - - def forward( - self, - src: Tensor, - mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - return_layer_states: bool = False, - ) -> Tensor: - r"""Pass the input through the encoder layers in turn. - - Args: - src: the sequence to the encoder (required). - mask: the mask for the src sequence (optional). - src_key_padding_mask: the mask for the src keys per batch (optional). - return_layer_states: return layers' state (optional). - - Shape: - see the docs in Transformer class. - """ - if return_layer_states: - layer_states = [] # layers' output - output = src - for mod in self.layers: - output = mod( - output, - src_mask=mask, - src_key_padding_mask=src_key_padding_mask, - ) - layer_states.append(output[0]) - - if self.norm is not None: - output = self.norm(output) - - return layer_states, output - - output = src - for mod in self.layers: - output = mod( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask - ) - - if self.norm is not None: - output = self.norm(output) - - return output - - def infer( - self, - src: Tensor, - mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - return_layer_states: bool = False, - past_kv: Optional[Tensor] = None, - use_cache: bool = False, - ): - if past_kv is None: - past_length = 0 - past_kv = tuple([None] * self.num_layers) - else: - past_length = past_kv[0][0].size(-2) - new_kv = () if use_cache else None - output = src - for mod, past_layer_kv in zip(self.layers, past_kv): - output, kv = mod.infer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask, past_kv=past_layer_kv, use_cache=use_cache - ) - if use_cache: - new_kv = new_kv + (kv,) - - if self.norm is not None: - output = self.norm(output) - - return output, new_kv - - -class TransformerDecoderLayer(nn.Module): - __constants__ = ["batch_first", "norm_first"] - - def __init__( - self, - d_model: int, - nhead: int, - dim_feedforward: int = 2048, - dropout: float = 0.1, - activation: Union[str, Callable[[Tensor], Tensor]] = F.relu, - linear1_self_attention_cls: nn.Module = nn.Linear, - linear2_self_attention_cls: nn.Module = nn.Linear, - linear1_feedforward_cls: nn.Module = nn.Linear, - linear2_feedforward_cls: nn.Module = nn.Linear, - batch_first: bool = False, - norm_first: bool = False, - device=None, - dtype=None, - layer_norm_cls: nn.Module = LayerNorm, - layer_norm_eps: float = 1e-5, - adaptive_layer_norm=False, - ) -> None: - factory_kwargs = {"device": device, "dtype": dtype} - super(TransformerDecoderLayer, self).__init__() - self.self_attn = MultiheadAttention( - d_model, - nhead, - dropout=dropout, - batch_first=batch_first, - linear1_cls=linear1_self_attention_cls, - linear2_cls=linear2_self_attention_cls, - **factory_kwargs, - ) - self.multihead_attn = MultiheadAttention( - d_model, - nhead, - dropout=dropout, - batch_first=batch_first, - linear1_cls=linear1_self_attention_cls, - linear2_cls=linear2_self_attention_cls, - **factory_kwargs, - ) - # Implementation of Feedforward model - self.linear1 = linear1_feedforward_cls( - 
d_model, dim_feedforward, **factory_kwargs - ) - self.dropout = nn.Dropout(dropout) - self.linear2 = linear2_feedforward_cls( - dim_feedforward, d_model, **factory_kwargs - ) - - self.norm_first = norm_first - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - self.dropout3 = nn.Dropout(dropout) - - # Legacy string support for activation function. - if isinstance(activation, str): - self.activation = _get_activation_fn(activation) - elif isinstance(activation, partial): - self.activation = activation(d_model) - elif activation == BalancedDoubleSwish: - self.activation = BalancedDoubleSwish(d_model) - else: - self.activation = activation - - if adaptive_layer_norm: - norm1 = layer_norm_cls( - d_model, eps=layer_norm_eps, **factory_kwargs - ) - norm2 = layer_norm_cls( - d_model, eps=layer_norm_eps, **factory_kwargs - ) - norm3 = layer_norm_cls( - d_model, eps=layer_norm_eps, **factory_kwargs - ) - - self.norm1 = AdaptiveLayerNorm(d_model, norm1) - self.norm2 = AdaptiveLayerNorm(d_model, norm2) - self.norm3 = AdaptiveLayerNorm(d_model, norm3) - else: - self.norm1 = layer_norm_cls( - d_model, eps=layer_norm_eps, **factory_kwargs - ) - self.norm2 = layer_norm_cls( - d_model, eps=layer_norm_eps, **factory_kwargs - ) - if layer_norm_cls == IdentityNorm: - self.norm3 = BalancedBasicNorm( - d_model, eps=layer_norm_eps, **factory_kwargs - ) - else: - self.norm3 = layer_norm_cls( - d_model, eps=layer_norm_eps, **factory_kwargs - ) - - def forward( - self, - tgt: Tensor, - memory: Tensor, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - ) -> Tensor: - r"""Pass the inputs (and mask) through the decoder layer. - - Args: - tgt: the sequence to the decoder layer (required). - memory: the sequence from the last layer of the encoder (required). - tgt_mask: the mask for the tgt sequence (optional). - memory_mask: the mask for the memory sequence (optional). - tgt_key_padding_mask: the mask for the tgt keys per batch (optional). - memory_key_padding_mask: the mask for the memory keys per batch (optional). - - Shape: - see the docs in Transformer class. 
- """ - tgt_is_tuple = False - if isinstance(tgt, tuple): - x, stage_embedding = tgt - tgt_is_tuple = True - else: - x, stage_embedding = tgt, None - - if self.norm_first: - x = x + self._sa_block( - self.norm1(x, stage_embedding), tgt_mask, tgt_key_padding_mask - ) - x = x + self._mha_block( - self.norm2(x, stage_embedding), - memory, - memory_mask, - memory_key_padding_mask, - ) - x = x + self._ff_block(self.norm3(x, stage_embedding)) - else: - x = self.norm1( - x + self._sa_block(x, tgt_mask, tgt_key_padding_mask), - stage_embedding, - ) - x = self.norm2( - x - + self._mha_block( - x, memory, memory_mask, memory_key_padding_mask - ), - stage_embedding, - ) - x = self.norm3(x + self._ff_block(x), stage_embedding) - - if tgt_is_tuple: - return (x, stage_embedding) - return x - - # self-attention block - def _sa_block( - self, - x: Tensor, - attn_mask: Optional[Tensor], - key_padding_mask: Optional[Tensor], - ) -> Tensor: - x = self.self_attn( - x, - x, - x, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask, - need_weights=False, - )[0] - return self.dropout1(x) - - # multihead attention block - def _mha_block( - self, - x: Tensor, - mem: Tensor, - attn_mask: Optional[Tensor], - key_padding_mask: Optional[Tensor], - ) -> Tensor: - x = self.multihead_attn( - x, - mem, - mem, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask, - need_weights=False, - )[0] - return self.dropout2(x) - - # feed forward block - def _ff_block(self, x: Tensor) -> Tensor: - x = self.linear2(self.dropout(self.activation(self.linear1(x)))) - return self.dropout3(x) - - -def _get_clones(module, N): - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - - -def _get_activation_fn(activation: str) -> Callable[[Tensor], Tensor]: - if activation == "relu": - return F.relu - elif activation == "gelu": - return F.gelu - - raise RuntimeError( - "activation should be relu/gelu, not {}".format(activation) - ) diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/web/static/js/jquery.js b/spaces/kira4424/Tacotron-zero-short-voice-clone/web/static/js/jquery.js deleted file mode 100644 index fc6c299b73e792ef288e785c22393a5df9dded4b..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/web/static/js/jquery.js +++ /dev/null @@ -1,10881 +0,0 @@ -/*! - * jQuery JavaScript Library v3.6.0 - * https://jquery.com/ - * - * Includes Sizzle.js - * https://sizzlejs.com/ - * - * Copyright OpenJS Foundation and other contributors - * Released under the MIT license - * https://jquery.org/license - * - * Date: 2021-03-02T17:08Z - */ -( function( global, factory ) { - - "use strict"; - - if ( typeof module === "object" && typeof module.exports === "object" ) { - - // For CommonJS and CommonJS-like environments where a proper `window` - // is present, execute the factory and get jQuery. - // For environments that do not have a `window` with a `document` - // (such as Node.js), expose a factory as module.exports. - // This accentuates the need for the creation of a real `window`. - // e.g. var jQuery = require("jquery")(window); - // See ticket #14549 for more info. - module.exports = global.document ? - factory( global, true ) : - function( w ) { - if ( !w.document ) { - throw new Error( "jQuery requires a window with a document" ); - } - return factory( w ); - }; - } else { - factory( global ); - } - -// Pass this if window is not defined yet -} )( typeof window !== "undefined" ? 
window : this, function( window, noGlobal ) { - -// Edge <= 12 - 13+, Firefox <=18 - 45+, IE 10 - 11, Safari 5.1 - 9+, iOS 6 - 9.1 -// throw exceptions when non-strict code (e.g., ASP.NET 4.5) accesses strict mode -// arguments.callee.caller (trac-13335). But as of jQuery 3.0 (2016), strict mode should be common -// enough that all such attempts are guarded in a try block. -"use strict"; - -var arr = []; - -var getProto = Object.getPrototypeOf; - -var slice = arr.slice; - -var flat = arr.flat ? function( array ) { - return arr.flat.call( array ); -} : function( array ) { - return arr.concat.apply( [], array ); -}; - - -var push = arr.push; - -var indexOf = arr.indexOf; - -var class2type = {}; - -var toString = class2type.toString; - -var hasOwn = class2type.hasOwnProperty; - -var fnToString = hasOwn.toString; - -var ObjectFunctionString = fnToString.call( Object ); - -var support = {}; - -var isFunction = function isFunction( obj ) { - - // Support: Chrome <=57, Firefox <=52 - // In some browsers, typeof returns "function" for HTML elements - // (i.e., `typeof document.createElement( "object" ) === "function"`). - // We don't want to classify *any* DOM node as a function. - // Support: QtWeb <=3.8.5, WebKit <=534.34, wkhtmltopdf tool <=0.12.5 - // Plus for old WebKit, typeof returns "function" for HTML collections - // (e.g., `typeof document.getElementsByTagName("div") === "function"`). (gh-4756) - return typeof obj === "function" && typeof obj.nodeType !== "number" && - typeof obj.item !== "function"; - }; - - -var isWindow = function isWindow( obj ) { - return obj != null && obj === obj.window; - }; - - -var document = window.document; - - - - var preservedScriptAttributes = { - type: true, - src: true, - nonce: true, - noModule: true - }; - - function DOMEval( code, node, doc ) { - doc = doc || document; - - var i, val, - script = doc.createElement( "script" ); - - script.text = code; - if ( node ) { - for ( i in preservedScriptAttributes ) { - - // Support: Firefox 64+, Edge 18+ - // Some browsers don't support the "nonce" property on scripts. - // On the other hand, just using `getAttribute` is not enough as - // the `nonce` attribute is reset to an empty string whenever it - // becomes browsing-context connected. - // See https://github.com/whatwg/html/issues/2369 - // See https://html.spec.whatwg.org/#nonce-attributes - // The `node.getAttribute` check was added for the sake of - // `jQuery.globalEval` so that it can fake a nonce-containing node - // via an object. - val = node[ i ] || node.getAttribute && node.getAttribute( i ); - if ( val ) { - script.setAttribute( i, val ); - } - } - } - doc.head.appendChild( script ).parentNode.removeChild( script ); - } - - -function toType( obj ) { - if ( obj == null ) { - return obj + ""; - } - - // Support: Android <=2.3 only (functionish RegExp) - return typeof obj === "object" || typeof obj === "function" ? 
- class2type[ toString.call( obj ) ] || "object" : - typeof obj; -} -/* global Symbol */ -// Defining this global in .eslintrc.json would create a danger of using the global -// unguarded in another place, it seems safer to define global only for this module - - - -var - version = "3.6.0", - - // Define a local copy of jQuery - jQuery = function( selector, context ) { - - // The jQuery object is actually just the init constructor 'enhanced' - // Need init if jQuery is called (just allow error to be thrown if not included) - return new jQuery.fn.init( selector, context ); - }; - -jQuery.fn = jQuery.prototype = { - - // The current version of jQuery being used - jquery: version, - - constructor: jQuery, - - // The default length of a jQuery object is 0 - length: 0, - - toArray: function() { - return slice.call( this ); - }, - - // Get the Nth element in the matched element set OR - // Get the whole matched element set as a clean array - get: function( num ) { - - // Return all the elements in a clean array - if ( num == null ) { - return slice.call( this ); - } - - // Return just the one element from the set - return num < 0 ? this[ num + this.length ] : this[ num ]; - }, - - // Take an array of elements and push it onto the stack - // (returning the new matched element set) - pushStack: function( elems ) { - - // Build a new jQuery matched element set - var ret = jQuery.merge( this.constructor(), elems ); - - // Add the old object onto the stack (as a reference) - ret.prevObject = this; - - // Return the newly-formed element set - return ret; - }, - - // Execute a callback for every element in the matched set. - each: function( callback ) { - return jQuery.each( this, callback ); - }, - - map: function( callback ) { - return this.pushStack( jQuery.map( this, function( elem, i ) { - return callback.call( elem, i, elem ); - } ) ); - }, - - slice: function() { - return this.pushStack( slice.apply( this, arguments ) ); - }, - - first: function() { - return this.eq( 0 ); - }, - - last: function() { - return this.eq( -1 ); - }, - - even: function() { - return this.pushStack( jQuery.grep( this, function( _elem, i ) { - return ( i + 1 ) % 2; - } ) ); - }, - - odd: function() { - return this.pushStack( jQuery.grep( this, function( _elem, i ) { - return i % 2; - } ) ); - }, - - eq: function( i ) { - var len = this.length, - j = +i + ( i < 0 ? len : 0 ); - return this.pushStack( j >= 0 && j < len ? [ this[ j ] ] : [] ); - }, - - end: function() { - return this.prevObject || this.constructor(); - }, - - // For internal use only. - // Behaves like an Array's method, not like a jQuery method. 
- push: push, - sort: arr.sort, - splice: arr.splice -}; - -jQuery.extend = jQuery.fn.extend = function() { - var options, name, src, copy, copyIsArray, clone, - target = arguments[ 0 ] || {}, - i = 1, - length = arguments.length, - deep = false; - - // Handle a deep copy situation - if ( typeof target === "boolean" ) { - deep = target; - - // Skip the boolean and the target - target = arguments[ i ] || {}; - i++; - } - - // Handle case when target is a string or something (possible in deep copy) - if ( typeof target !== "object" && !isFunction( target ) ) { - target = {}; - } - - // Extend jQuery itself if only one argument is passed - if ( i === length ) { - target = this; - i--; - } - - for ( ; i < length; i++ ) { - - // Only deal with non-null/undefined values - if ( ( options = arguments[ i ] ) != null ) { - - // Extend the base object - for ( name in options ) { - copy = options[ name ]; - - // Prevent Object.prototype pollution - // Prevent never-ending loop - if ( name === "__proto__" || target === copy ) { - continue; - } - - // Recurse if we're merging plain objects or arrays - if ( deep && copy && ( jQuery.isPlainObject( copy ) || - ( copyIsArray = Array.isArray( copy ) ) ) ) { - src = target[ name ]; - - // Ensure proper type for the source value - if ( copyIsArray && !Array.isArray( src ) ) { - clone = []; - } else if ( !copyIsArray && !jQuery.isPlainObject( src ) ) { - clone = {}; - } else { - clone = src; - } - copyIsArray = false; - - // Never move original objects, clone them - target[ name ] = jQuery.extend( deep, clone, copy ); - - // Don't bring in undefined values - } else if ( copy !== undefined ) { - target[ name ] = copy; - } - } - } - } - - // Return the modified object - return target; -}; - -jQuery.extend( { - - // Unique for each copy of jQuery on the page - expando: "jQuery" + ( version + Math.random() ).replace( /\D/g, "" ), - - // Assume jQuery is ready without the ready module - isReady: true, - - error: function( msg ) { - throw new Error( msg ); - }, - - noop: function() {}, - - isPlainObject: function( obj ) { - var proto, Ctor; - - // Detect obvious negatives - // Use toString instead of jQuery.type to catch host objects - if ( !obj || toString.call( obj ) !== "[object Object]" ) { - return false; - } - - proto = getProto( obj ); - - // Objects with no prototype (e.g., `Object.create( null )`) are plain - if ( !proto ) { - return true; - } - - // Objects with prototype are plain iff they were constructed by a global Object function - Ctor = hasOwn.call( proto, "constructor" ) && proto.constructor; - return typeof Ctor === "function" && fnToString.call( Ctor ) === ObjectFunctionString; - }, - - isEmptyObject: function( obj ) { - var name; - - for ( name in obj ) { - return false; - } - return true; - }, - - // Evaluates a script in a provided context; falls back to the global one - // if not specified. 
- globalEval: function( code, options, doc ) { - DOMEval( code, { nonce: options && options.nonce }, doc ); - }, - - each: function( obj, callback ) { - var length, i = 0; - - if ( isArrayLike( obj ) ) { - length = obj.length; - for ( ; i < length; i++ ) { - if ( callback.call( obj[ i ], i, obj[ i ] ) === false ) { - break; - } - } - } else { - for ( i in obj ) { - if ( callback.call( obj[ i ], i, obj[ i ] ) === false ) { - break; - } - } - } - - return obj; - }, - - // results is for internal usage only - makeArray: function( arr, results ) { - var ret = results || []; - - if ( arr != null ) { - if ( isArrayLike( Object( arr ) ) ) { - jQuery.merge( ret, - typeof arr === "string" ? - [ arr ] : arr - ); - } else { - push.call( ret, arr ); - } - } - - return ret; - }, - - inArray: function( elem, arr, i ) { - return arr == null ? -1 : indexOf.call( arr, elem, i ); - }, - - // Support: Android <=4.0 only, PhantomJS 1 only - // push.apply(_, arraylike) throws on ancient WebKit - merge: function( first, second ) { - var len = +second.length, - j = 0, - i = first.length; - - for ( ; j < len; j++ ) { - first[ i++ ] = second[ j ]; - } - - first.length = i; - - return first; - }, - - grep: function( elems, callback, invert ) { - var callbackInverse, - matches = [], - i = 0, - length = elems.length, - callbackExpect = !invert; - - // Go through the array, only saving the items - // that pass the validator function - for ( ; i < length; i++ ) { - callbackInverse = !callback( elems[ i ], i ); - if ( callbackInverse !== callbackExpect ) { - matches.push( elems[ i ] ); - } - } - - return matches; - }, - - // arg is for internal usage only - map: function( elems, callback, arg ) { - var length, value, - i = 0, - ret = []; - - // Go through the array, translating each of the items to their new values - if ( isArrayLike( elems ) ) { - length = elems.length; - for ( ; i < length; i++ ) { - value = callback( elems[ i ], i, arg ); - - if ( value != null ) { - ret.push( value ); - } - } - - // Go through every key on the object, - } else { - for ( i in elems ) { - value = callback( elems[ i ], i, arg ); - - if ( value != null ) { - ret.push( value ); - } - } - } - - // Flatten any nested arrays - return flat( ret ); - }, - - // A global GUID counter for objects - guid: 1, - - // jQuery.support is not used in Core but other projects attach their - // properties to it so it needs to exist. - support: support -} ); - -if ( typeof Symbol === "function" ) { - jQuery.fn[ Symbol.iterator ] = arr[ Symbol.iterator ]; -} - -// Populate the class2type map -jQuery.each( "Boolean Number String Function Array Date RegExp Object Error Symbol".split( " " ), - function( _i, name ) { - class2type[ "[object " + name + "]" ] = name.toLowerCase(); - } ); - -function isArrayLike( obj ) { - - // Support: real iOS 8.2 only (not reproducible in simulator) - // `in` check used to prevent JIT error (gh-2145) - // hasOwn isn't used here due to false negatives - // regarding Nodelist length in IE - var length = !!obj && "length" in obj && obj.length, - type = toType( obj ); - - if ( isFunction( obj ) || isWindow( obj ) ) { - return false; - } - - return type === "array" || length === 0 || - typeof length === "number" && length > 0 && ( length - 1 ) in obj; -} -var Sizzle = -/*! 
- * Sizzle CSS Selector Engine v2.3.6 - * https://sizzlejs.com/ - * - * Copyright JS Foundation and other contributors - * Released under the MIT license - * https://js.foundation/ - * - * Date: 2021-02-16 - */ -( function( window ) { -var i, - support, - Expr, - getText, - isXML, - tokenize, - compile, - select, - outermostContext, - sortInput, - hasDuplicate, - - // Local document vars - setDocument, - document, - docElem, - documentIsHTML, - rbuggyQSA, - rbuggyMatches, - matches, - contains, - - // Instance-specific data - expando = "sizzle" + 1 * new Date(), - preferredDoc = window.document, - dirruns = 0, - done = 0, - classCache = createCache(), - tokenCache = createCache(), - compilerCache = createCache(), - nonnativeSelectorCache = createCache(), - sortOrder = function( a, b ) { - if ( a === b ) { - hasDuplicate = true; - } - return 0; - }, - - // Instance methods - hasOwn = ( {} ).hasOwnProperty, - arr = [], - pop = arr.pop, - pushNative = arr.push, - push = arr.push, - slice = arr.slice, - - // Use a stripped-down indexOf as it's faster than native - // https://jsperf.com/thor-indexof-vs-for/5 - indexOf = function( list, elem ) { - var i = 0, - len = list.length; - for ( ; i < len; i++ ) { - if ( list[ i ] === elem ) { - return i; - } - } - return -1; - }, - - booleans = "checked|selected|async|autofocus|autoplay|controls|defer|disabled|hidden|" + - "ismap|loop|multiple|open|readonly|required|scoped", - - // Regular expressions - - // http://www.w3.org/TR/css3-selectors/#whitespace - whitespace = "[\\x20\\t\\r\\n\\f]", - - // https://www.w3.org/TR/css-syntax-3/#ident-token-diagram - identifier = "(?:\\\\[\\da-fA-F]{1,6}" + whitespace + - "?|\\\\[^\\r\\n\\f]|[\\w-]|[^\0-\\x7f])+", - - // Attribute selectors: http://www.w3.org/TR/selectors/#attribute-selectors - attributes = "\\[" + whitespace + "*(" + identifier + ")(?:" + whitespace + - - // Operator (capture 2) - "*([*^$|!~]?=)" + whitespace + - - // "Attribute values must be CSS identifiers [capture 5] - // or strings [capture 3 or capture 4]" - "*(?:'((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\"|(" + identifier + "))|)" + - whitespace + "*\\]", - - pseudos = ":(" + identifier + ")(?:\\((" + - - // To reduce the number of selectors needing tokenize in the preFilter, prefer arguments: - // 1. quoted (capture 3; capture 4 or capture 5) - "('((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\")|" + - - // 2. simple (capture 6) - "((?:\\\\.|[^\\\\()[\\]]|" + attributes + ")*)|" + - - // 3. 
anything else (capture 2) - ".*" + - ")\\)|)", - - // Leading and non-escaped trailing whitespace, capturing some non-whitespace characters preceding the latter - rwhitespace = new RegExp( whitespace + "+", "g" ), - rtrim = new RegExp( "^" + whitespace + "+|((?:^|[^\\\\])(?:\\\\.)*)" + - whitespace + "+$", "g" ), - - rcomma = new RegExp( "^" + whitespace + "*," + whitespace + "*" ), - rcombinators = new RegExp( "^" + whitespace + "*([>+~]|" + whitespace + ")" + whitespace + - "*" ), - rdescend = new RegExp( whitespace + "|>" ), - - rpseudo = new RegExp( pseudos ), - ridentifier = new RegExp( "^" + identifier + "$" ), - - matchExpr = { - "ID": new RegExp( "^#(" + identifier + ")" ), - "CLASS": new RegExp( "^\\.(" + identifier + ")" ), - "TAG": new RegExp( "^(" + identifier + "|[*])" ), - "ATTR": new RegExp( "^" + attributes ), - "PSEUDO": new RegExp( "^" + pseudos ), - "CHILD": new RegExp( "^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\(" + - whitespace + "*(even|odd|(([+-]|)(\\d*)n|)" + whitespace + "*(?:([+-]|)" + - whitespace + "*(\\d+)|))" + whitespace + "*\\)|)", "i" ), - "bool": new RegExp( "^(?:" + booleans + ")$", "i" ), - - // For use in libraries implementing .is() - // We use this for POS matching in `select` - "needsContext": new RegExp( "^" + whitespace + - "*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\(" + whitespace + - "*((?:-\\d)?\\d*)" + whitespace + "*\\)|)(?=[^-]|$)", "i" ) - }, - - rhtml = /HTML$/i, - rinputs = /^(?:input|select|textarea|button)$/i, - rheader = /^h\d$/i, - - rnative = /^[^{]+\{\s*\[native \w/, - - // Easily-parseable/retrievable ID or TAG or CLASS selectors - rquickExpr = /^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/, - - rsibling = /[+~]/, - - // CSS escapes - // http://www.w3.org/TR/CSS21/syndata.html#escaped-characters - runescape = new RegExp( "\\\\[\\da-fA-F]{1,6}" + whitespace + "?|\\\\([^\\r\\n\\f])", "g" ), - funescape = function( escape, nonHex ) { - var high = "0x" + escape.slice( 1 ) - 0x10000; - - return nonHex ? - - // Strip the backslash prefix from a non-hex escape sequence - nonHex : - - // Replace a hexadecimal escape sequence with the encoded Unicode code point - // Support: IE <=11+ - // For values outside the Basic Multilingual Plane (BMP), manually construct a - // surrogate pair - high < 0 ? 
- String.fromCharCode( high + 0x10000 ) : - String.fromCharCode( high >> 10 | 0xD800, high & 0x3FF | 0xDC00 ); - }, - - // CSS string/identifier serialization - // https://drafts.csswg.org/cssom/#common-serializing-idioms - rcssescape = /([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g, - fcssescape = function( ch, asCodePoint ) { - if ( asCodePoint ) { - - // U+0000 NULL becomes U+FFFD REPLACEMENT CHARACTER - if ( ch === "\0" ) { - return "\uFFFD"; - } - - // Control characters and (dependent upon position) numbers get escaped as code points - return ch.slice( 0, -1 ) + "\\" + - ch.charCodeAt( ch.length - 1 ).toString( 16 ) + " "; - } - - // Other potentially-special ASCII characters get backslash-escaped - return "\\" + ch; - }, - - // Used for iframes - // See setDocument() - // Removing the function wrapper causes a "Permission Denied" - // error in IE - unloadHandler = function() { - setDocument(); - }, - - inDisabledFieldset = addCombinator( - function( elem ) { - return elem.disabled === true && elem.nodeName.toLowerCase() === "fieldset"; - }, - { dir: "parentNode", next: "legend" } - ); - -// Optimize for push.apply( _, NodeList ) -try { - push.apply( - ( arr = slice.call( preferredDoc.childNodes ) ), - preferredDoc.childNodes - ); - - // Support: Android<4.0 - // Detect silently failing push.apply - // eslint-disable-next-line no-unused-expressions - arr[ preferredDoc.childNodes.length ].nodeType; -} catch ( e ) { - push = { apply: arr.length ? - - // Leverage slice if possible - function( target, els ) { - pushNative.apply( target, slice.call( els ) ); - } : - - // Support: IE<9 - // Otherwise append directly - function( target, els ) { - var j = target.length, - i = 0; - - // Can't trust NodeList.length - while ( ( target[ j++ ] = els[ i++ ] ) ) {} - target.length = j - 1; - } - }; -} - -function Sizzle( selector, context, results, seed ) { - var m, i, elem, nid, match, groups, newSelector, - newContext = context && context.ownerDocument, - - // nodeType defaults to 9, since context defaults to document - nodeType = context ? 
context.nodeType : 9; - - results = results || []; - - // Return early from calls with invalid selector or context - if ( typeof selector !== "string" || !selector || - nodeType !== 1 && nodeType !== 9 && nodeType !== 11 ) { - - return results; - } - - // Try to shortcut find operations (as opposed to filters) in HTML documents - if ( !seed ) { - setDocument( context ); - context = context || document; - - if ( documentIsHTML ) { - - // If the selector is sufficiently simple, try using a "get*By*" DOM method - // (excepting DocumentFragment context, where the methods don't exist) - if ( nodeType !== 11 && ( match = rquickExpr.exec( selector ) ) ) { - - // ID selector - if ( ( m = match[ 1 ] ) ) { - - // Document context - if ( nodeType === 9 ) { - if ( ( elem = context.getElementById( m ) ) ) { - - // Support: IE, Opera, Webkit - // TODO: identify versions - // getElementById can match elements by name instead of ID - if ( elem.id === m ) { - results.push( elem ); - return results; - } - } else { - return results; - } - - // Element context - } else { - - // Support: IE, Opera, Webkit - // TODO: identify versions - // getElementById can match elements by name instead of ID - if ( newContext && ( elem = newContext.getElementById( m ) ) && - contains( context, elem ) && - elem.id === m ) { - - results.push( elem ); - return results; - } - } - - // Type selector - } else if ( match[ 2 ] ) { - push.apply( results, context.getElementsByTagName( selector ) ); - return results; - - // Class selector - } else if ( ( m = match[ 3 ] ) && support.getElementsByClassName && - context.getElementsByClassName ) { - - push.apply( results, context.getElementsByClassName( m ) ); - return results; - } - } - - // Take advantage of querySelectorAll - if ( support.qsa && - !nonnativeSelectorCache[ selector + " " ] && - ( !rbuggyQSA || !rbuggyQSA.test( selector ) ) && - - // Support: IE 8 only - // Exclude object elements - ( nodeType !== 1 || context.nodeName.toLowerCase() !== "object" ) ) { - - newSelector = selector; - newContext = context; - - // qSA considers elements outside a scoping root when evaluating child or - // descendant combinators, which is not what we want. - // In such cases, we work around the behavior by prefixing every selector in the - // list with an ID selector referencing the scope context. - // The technique has to be used as well when a leading combinator is used - // as such selectors are not recognized by querySelectorAll. - // Thanks to Andrew Dupont for this technique. - if ( nodeType === 1 && - ( rdescend.test( selector ) || rcombinators.test( selector ) ) ) { - - // Expand context for sibling selectors - newContext = rsibling.test( selector ) && testContext( context.parentNode ) || - context; - - // We can use :scope instead of the ID hack if the browser - // supports it & if we're not changing the context. - if ( newContext !== context || !support.scope ) { - - // Capture the context ID, setting it first if necessary - if ( ( nid = context.getAttribute( "id" ) ) ) { - nid = nid.replace( rcssescape, fcssescape ); - } else { - context.setAttribute( "id", ( nid = expando ) ); - } - } - - // Prefix every selector in the list - groups = tokenize( selector ); - i = groups.length; - while ( i-- ) { - groups[ i ] = ( nid ? 
"#" + nid : ":scope" ) + " " + - toSelector( groups[ i ] ); - } - newSelector = groups.join( "," ); - } - - try { - push.apply( results, - newContext.querySelectorAll( newSelector ) - ); - return results; - } catch ( qsaError ) { - nonnativeSelectorCache( selector, true ); - } finally { - if ( nid === expando ) { - context.removeAttribute( "id" ); - } - } - } - } - } - - // All others - return select( selector.replace( rtrim, "$1" ), context, results, seed ); -} - -/** - * Create key-value caches of limited size - * @returns {function(string, object)} Returns the Object data after storing it on itself with - * property name the (space-suffixed) string and (if the cache is larger than Expr.cacheLength) - * deleting the oldest entry - */ -function createCache() { - var keys = []; - - function cache( key, value ) { - - // Use (key + " ") to avoid collision with native prototype properties (see Issue #157) - if ( keys.push( key + " " ) > Expr.cacheLength ) { - - // Only keep the most recent entries - delete cache[ keys.shift() ]; - } - return ( cache[ key + " " ] = value ); - } - return cache; -} - -/** - * Mark a function for special use by Sizzle - * @param {Function} fn The function to mark - */ -function markFunction( fn ) { - fn[ expando ] = true; - return fn; -} - -/** - * Support testing using an element - * @param {Function} fn Passed the created element and returns a boolean result - */ -function assert( fn ) { - var el = document.createElement( "fieldset" ); - - try { - return !!fn( el ); - } catch ( e ) { - return false; - } finally { - - // Remove from its parent by default - if ( el.parentNode ) { - el.parentNode.removeChild( el ); - } - - // release memory in IE - el = null; - } -} - -/** - * Adds the same handler for all of the specified attrs - * @param {String} attrs Pipe-separated list of attributes - * @param {Function} handler The method that will be applied - */ -function addHandle( attrs, handler ) { - var arr = attrs.split( "|" ), - i = arr.length; - - while ( i-- ) { - Expr.attrHandle[ arr[ i ] ] = handler; - } -} - -/** - * Checks document order of two siblings - * @param {Element} a - * @param {Element} b - * @returns {Number} Returns less than 0 if a precedes b, greater than 0 if a follows b - */ -function siblingCheck( a, b ) { - var cur = b && a, - diff = cur && a.nodeType === 1 && b.nodeType === 1 && - a.sourceIndex - b.sourceIndex; - - // Use IE sourceIndex if available on both nodes - if ( diff ) { - return diff; - } - - // Check if b follows a - if ( cur ) { - while ( ( cur = cur.nextSibling ) ) { - if ( cur === b ) { - return -1; - } - } - } - - return a ? 
1 : -1; -} - -/** - * Returns a function to use in pseudos for input types - * @param {String} type - */ -function createInputPseudo( type ) { - return function( elem ) { - var name = elem.nodeName.toLowerCase(); - return name === "input" && elem.type === type; - }; -} - -/** - * Returns a function to use in pseudos for buttons - * @param {String} type - */ -function createButtonPseudo( type ) { - return function( elem ) { - var name = elem.nodeName.toLowerCase(); - return ( name === "input" || name === "button" ) && elem.type === type; - }; -} - -/** - * Returns a function to use in pseudos for :enabled/:disabled - * @param {Boolean} disabled true for :disabled; false for :enabled - */ -function createDisabledPseudo( disabled ) { - - // Known :disabled false positives: fieldset[disabled] > legend:nth-of-type(n+2) :can-disable - return function( elem ) { - - // Only certain elements can match :enabled or :disabled - // https://html.spec.whatwg.org/multipage/scripting.html#selector-enabled - // https://html.spec.whatwg.org/multipage/scripting.html#selector-disabled - if ( "form" in elem ) { - - // Check for inherited disabledness on relevant non-disabled elements: - // * listed form-associated elements in a disabled fieldset - // https://html.spec.whatwg.org/multipage/forms.html#category-listed - // https://html.spec.whatwg.org/multipage/forms.html#concept-fe-disabled - // * option elements in a disabled optgroup - // https://html.spec.whatwg.org/multipage/forms.html#concept-option-disabled - // All such elements have a "form" property. - if ( elem.parentNode && elem.disabled === false ) { - - // Option elements defer to a parent optgroup if present - if ( "label" in elem ) { - if ( "label" in elem.parentNode ) { - return elem.parentNode.disabled === disabled; - } else { - return elem.disabled === disabled; - } - } - - // Support: IE 6 - 11 - // Use the isDisabled shortcut property to check for disabled fieldset ancestors - return elem.isDisabled === disabled || - - // Where there is no isDisabled, check manually - /* jshint -W018 */ - elem.isDisabled !== !disabled && - inDisabledFieldset( elem ) === disabled; - } - - return elem.disabled === disabled; - - // Try to winnow out elements that can't be disabled before trusting the disabled property. - // Some victims get caught in our net (label, legend, menu, track), but it shouldn't - // even exist on them, let alone have a boolean value. 
- } else if ( "label" in elem ) { - return elem.disabled === disabled; - } - - // Remaining elements are neither :enabled nor :disabled - return false; - }; -} - -/** - * Returns a function to use in pseudos for positionals - * @param {Function} fn - */ -function createPositionalPseudo( fn ) { - return markFunction( function( argument ) { - argument = +argument; - return markFunction( function( seed, matches ) { - var j, - matchIndexes = fn( [], seed.length, argument ), - i = matchIndexes.length; - - // Match elements found at the specified indexes - while ( i-- ) { - if ( seed[ ( j = matchIndexes[ i ] ) ] ) { - seed[ j ] = !( matches[ j ] = seed[ j ] ); - } - } - } ); - } ); -} - -/** - * Checks a node for validity as a Sizzle context - * @param {Element|Object=} context - * @returns {Element|Object|Boolean} The input node if acceptable, otherwise a falsy value - */ -function testContext( context ) { - return context && typeof context.getElementsByTagName !== "undefined" && context; -} - -// Expose support vars for convenience -support = Sizzle.support = {}; - -/** - * Detects XML nodes - * @param {Element|Object} elem An element or a document - * @returns {Boolean} True iff elem is a non-HTML XML node - */ -isXML = Sizzle.isXML = function( elem ) { - var namespace = elem && elem.namespaceURI, - docElem = elem && ( elem.ownerDocument || elem ).documentElement; - - // Support: IE <=8 - // Assume HTML when documentElement doesn't yet exist, such as inside loading iframes - // https://bugs.jquery.com/ticket/4833 - return !rhtml.test( namespace || docElem && docElem.nodeName || "HTML" ); -}; - -/** - * Sets document-related variables once based on the current document - * @param {Element|Object} [doc] An element or document object to use to set the document - * @returns {Object} Returns the current document - */ -setDocument = Sizzle.setDocument = function( node ) { - var hasCompare, subWindow, - doc = node ? node.ownerDocument || node : preferredDoc; - - // Return early if doc is invalid or already selected - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - if ( doc == document || doc.nodeType !== 9 || !doc.documentElement ) { - return document; - } - - // Update global variables - document = doc; - docElem = document.documentElement; - documentIsHTML = !isXML( document ); - - // Support: IE 9 - 11+, Edge 12 - 18+ - // Accessing iframe documents after unload throws "permission denied" errors (jQuery #13936) - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - if ( preferredDoc != document && - ( subWindow = document.defaultView ) && subWindow.top !== subWindow ) { - - // Support: IE 11, Edge - if ( subWindow.addEventListener ) { - subWindow.addEventListener( "unload", unloadHandler, false ); - - // Support: IE 9 - 10 only - } else if ( subWindow.attachEvent ) { - subWindow.attachEvent( "onunload", unloadHandler ); - } - } - - // Support: IE 8 - 11+, Edge 12 - 18+, Chrome <=16 - 25 only, Firefox <=3.6 - 31 only, - // Safari 4 - 5 only, Opera <=11.6 - 12.x only - // IE/Edge & older browsers don't support the :scope pseudo-class. - // Support: Safari 6.0 only - // Safari 6.0 supports :scope but it's an alias of :root there. 
- support.scope = assert( function( el ) { - docElem.appendChild( el ).appendChild( document.createElement( "div" ) ); - return typeof el.querySelectorAll !== "undefined" && - !el.querySelectorAll( ":scope fieldset div" ).length; - } ); - - /* Attributes - ---------------------------------------------------------------------- */ - - // Support: IE<8 - // Verify that getAttribute really returns attributes and not properties - // (excepting IE8 booleans) - support.attributes = assert( function( el ) { - el.className = "i"; - return !el.getAttribute( "className" ); - } ); - - /* getElement(s)By* - ---------------------------------------------------------------------- */ - - // Check if getElementsByTagName("*") returns only elements - support.getElementsByTagName = assert( function( el ) { - el.appendChild( document.createComment( "" ) ); - return !el.getElementsByTagName( "*" ).length; - } ); - - // Support: IE<9 - support.getElementsByClassName = rnative.test( document.getElementsByClassName ); - - // Support: IE<10 - // Check if getElementById returns elements by name - // The broken getElementById methods don't pick up programmatically-set names, - // so use a roundabout getElementsByName test - support.getById = assert( function( el ) { - docElem.appendChild( el ).id = expando; - return !document.getElementsByName || !document.getElementsByName( expando ).length; - } ); - - // ID filter and find - if ( support.getById ) { - Expr.filter[ "ID" ] = function( id ) { - var attrId = id.replace( runescape, funescape ); - return function( elem ) { - return elem.getAttribute( "id" ) === attrId; - }; - }; - Expr.find[ "ID" ] = function( id, context ) { - if ( typeof context.getElementById !== "undefined" && documentIsHTML ) { - var elem = context.getElementById( id ); - return elem ? [ elem ] : []; - } - }; - } else { - Expr.filter[ "ID" ] = function( id ) { - var attrId = id.replace( runescape, funescape ); - return function( elem ) { - var node = typeof elem.getAttributeNode !== "undefined" && - elem.getAttributeNode( "id" ); - return node && node.value === attrId; - }; - }; - - // Support: IE 6 - 7 only - // getElementById is not reliable as a find shortcut - Expr.find[ "ID" ] = function( id, context ) { - if ( typeof context.getElementById !== "undefined" && documentIsHTML ) { - var node, i, elems, - elem = context.getElementById( id ); - - if ( elem ) { - - // Verify the id attribute - node = elem.getAttributeNode( "id" ); - if ( node && node.value === id ) { - return [ elem ]; - } - - // Fall back on getElementsByName - elems = context.getElementsByName( id ); - i = 0; - while ( ( elem = elems[ i++ ] ) ) { - node = elem.getAttributeNode( "id" ); - if ( node && node.value === id ) { - return [ elem ]; - } - } - } - - return []; - } - }; - } - - // Tag - Expr.find[ "TAG" ] = support.getElementsByTagName ? 
- function( tag, context ) {
- if ( typeof context.getElementsByTagName !== "undefined" ) {
- return context.getElementsByTagName( tag );
-
- // DocumentFragment nodes don't have gEBTN
- } else if ( support.qsa ) {
- return context.querySelectorAll( tag );
- }
- } :
-
- function( tag, context ) {
- var elem,
- tmp = [],
- i = 0,
-
- // By happy coincidence, a (broken) gEBTN appears on DocumentFragment nodes too
- results = context.getElementsByTagName( tag );
-
- // Filter out possible comments
- if ( tag === "*" ) {
- while ( ( elem = results[ i++ ] ) ) {
- if ( elem.nodeType === 1 ) {
- tmp.push( elem );
- }
- }
-
- return tmp;
- }
- return results;
- };
-
- // Class
- Expr.find[ "CLASS" ] = support.getElementsByClassName && function( className, context ) {
- if ( typeof context.getElementsByClassName !== "undefined" && documentIsHTML ) {
- return context.getElementsByClassName( className );
- }
- };
-
- /* QSA/matchesSelector
- ---------------------------------------------------------------------- */
-
- // QSA and matchesSelector support
-
- // matchesSelector(:active) reports false when true (IE9/Opera 11.5)
- rbuggyMatches = [];
-
- // qSa(:focus) reports false when true (Chrome 21)
- // We allow this because of a bug in IE8/9 that throws an error
- // whenever `document.activeElement` is accessed on an iframe
- // So, we allow :focus to pass through QSA all the time to avoid the IE error
- // See https://bugs.jquery.com/ticket/13378
- rbuggyQSA = [];
-
- if ( ( support.qsa = rnative.test( document.querySelectorAll ) ) ) {
-
- // Build QSA regex
- // Regex strategy adopted from Diego Perini
- assert( function( el ) {
-
- var input;
-
- // Select is set to empty string on purpose
- // This is to test IE's treatment of not explicitly
- // setting a boolean content attribute,
- // since its presence should be enough
- // https://bugs.jquery.com/ticket/12359
- docElem.appendChild( el ).innerHTML = "<a id='" + expando + "'></a>" +
- "<select id='" + expando + "-\r\\' msallowcapture=''>" +
- "<option selected=''></option></select>";
-
- // Support: IE8, Opera 11-12.16
- // Nothing should be selected when empty strings follow ^= or $= or *=
- // The test attribute must be unknown in Opera but "safe" for WinRT
- // https://msdn.microsoft.com/en-us/library/ie/hh465388.aspx#attribute_section
- if ( el.querySelectorAll( "[msallowcapture^='']" ).length ) {
- rbuggyQSA.push( "[*^$]=" + whitespace + "*(?:''|\"\")" );
- }
-
- // Support: IE8
- // Boolean attributes and "value" are not treated correctly
- if ( !el.querySelectorAll( "[selected]" ).length ) {
- rbuggyQSA.push( "\\[" + whitespace + "*(?:value|" + booleans + ")" );
- }
-
- // Support: Chrome<29, Android<4.4, Safari<7.0+, iOS<7.0+, PhantomJS<1.9.8+
- if ( !el.querySelectorAll( "[id~=" + expando + "-]" ).length ) {
- rbuggyQSA.push( "~=" );
- }
-
- // Support: IE 11+, Edge 15 - 18+
- // IE 11/Edge don't find elements on a `[name='']` query in some cases.
- // Adding a temporary attribute to the document before the selection works
- // around the issue.
- // Interestingly, IE 10 & older don't seem to have the issue.
- input = document.createElement( "input" );
- input.setAttribute( "name", "" );
- el.appendChild( input );
- if ( !el.querySelectorAll( "[name='']" ).length ) {
- rbuggyQSA.push( "\\[" + whitespace + "*name" + whitespace + "*=" +
- whitespace + "*(?:''|\"\")" );
- }
-
- // Webkit/Opera - :checked should return selected option elements
- // http://www.w3.org/TR/2011/REC-css3-selectors-20110929/#checked
- // IE8 throws error here and will not see later tests
- if ( !el.querySelectorAll( ":checked" ).length ) {
- rbuggyQSA.push( ":checked" );
- }
-
- // Support: Safari 8+, iOS 8+
- // https://bugs.webkit.org/show_bug.cgi?id=136851
- // In-page `selector#id sibling-combinator selector` fails
- if ( !el.querySelectorAll( "a#" + expando + "+*" ).length ) {
- rbuggyQSA.push( ".#.+[+~]" );
- }
-
- // Support: Firefox <=3.6 - 5 only
- // Old Firefox doesn't throw on a badly-escaped identifier.
- el.querySelectorAll( "\\\f" );
- rbuggyQSA.push( "[\\r\\n\\f]" );
- } );
-
- assert( function( el ) {
- el.innerHTML = "<a href='' disabled='disabled'></a>" +
- "<select disabled='disabled'><option/></select>";
-
- // Support: Windows 8 Native Apps
- // The type and name attributes are restricted during .innerHTML assignment
- var input = document.createElement( "input" );
- input.setAttribute( "type", "hidden" );
- el.appendChild( input ).setAttribute( "name", "D" );
-
- // Support: IE8
- // Enforce case-sensitivity of name attribute
- if ( el.querySelectorAll( "[name=d]" ).length ) {
- rbuggyQSA.push( "name" + whitespace + "*[*^$|!~]?=" );
- }
-
- // FF 3.5 - :enabled/:disabled and hidden elements (hidden elements are still enabled)
- // IE8 throws error here and will not see later tests
- if ( el.querySelectorAll( ":enabled" ).length !== 2 ) {
- rbuggyQSA.push( ":enabled", ":disabled" );
- }
-
- // Support: IE9-11+
- // IE's :disabled selector does not pick up the children of disabled fieldsets
- docElem.appendChild( el ).disabled = true;
- if ( el.querySelectorAll( ":disabled" ).length !== 2 ) {
- rbuggyQSA.push( ":enabled", ":disabled" );
- }
-
- // Support: Opera 10 - 11 only
- // Opera 10-11 does not throw on post-comma invalid pseudos
- el.querySelectorAll( "*,:x" );
- rbuggyQSA.push( ",.*:" );
- } );
- }
-
- if ( ( support.matchesSelector = rnative.test( ( matches = docElem.matches ||
- docElem.webkitMatchesSelector ||
- docElem.mozMatchesSelector ||
- docElem.oMatchesSelector ||
- docElem.msMatchesSelector ) ) ) ) {
-
- assert( function( el ) {
-
- // Check to see if it's possible to do matchesSelector
- // on a disconnected node (IE 9)
- support.disconnectedMatch = matches.call( el, "*" );
-
- // This should fail with an exception
- // Gecko does not error, returns false instead
- matches.call( el, "[s!='']:x" );
- rbuggyMatches.push( "!=", pseudos );
- } );
- }
-
- rbuggyQSA = rbuggyQSA.length && new RegExp( rbuggyQSA.join( "|" ) );
- rbuggyMatches = rbuggyMatches.length && new RegExp( rbuggyMatches.join( "|" ) );
-
- /* Contains
- ---------------------------------------------------------------------- */
- hasCompare = rnative.test( docElem.compareDocumentPosition );
-
- // Element contains another
- // Purposefully self-exclusive
- // As in, an element does not contain itself
- contains = hasCompare || rnative.test( docElem.contains ) ?
- function( a, b ) {
- var adown = a.nodeType === 9 ? a.documentElement : a,
- bup = b && b.parentNode;
- return a === bup || !!( bup && bup.nodeType === 1 && (
- adown.contains ?
- adown.contains( bup ) : - a.compareDocumentPosition && a.compareDocumentPosition( bup ) & 16 - ) ); - } : - function( a, b ) { - if ( b ) { - while ( ( b = b.parentNode ) ) { - if ( b === a ) { - return true; - } - } - } - return false; - }; - - /* Sorting - ---------------------------------------------------------------------- */ - - // Document order sorting - sortOrder = hasCompare ? - function( a, b ) { - - // Flag for duplicate removal - if ( a === b ) { - hasDuplicate = true; - return 0; - } - - // Sort on method existence if only one input has compareDocumentPosition - var compare = !a.compareDocumentPosition - !b.compareDocumentPosition; - if ( compare ) { - return compare; - } - - // Calculate position if both inputs belong to the same document - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - compare = ( a.ownerDocument || a ) == ( b.ownerDocument || b ) ? - a.compareDocumentPosition( b ) : - - // Otherwise we know they are disconnected - 1; - - // Disconnected nodes - if ( compare & 1 || - ( !support.sortDetached && b.compareDocumentPosition( a ) === compare ) ) { - - // Choose the first element that is related to our preferred document - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - if ( a == document || a.ownerDocument == preferredDoc && - contains( preferredDoc, a ) ) { - return -1; - } - - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - if ( b == document || b.ownerDocument == preferredDoc && - contains( preferredDoc, b ) ) { - return 1; - } - - // Maintain original order - return sortInput ? - ( indexOf( sortInput, a ) - indexOf( sortInput, b ) ) : - 0; - } - - return compare & 4 ? -1 : 1; - } : - function( a, b ) { - - // Exit early if the nodes are identical - if ( a === b ) { - hasDuplicate = true; - return 0; - } - - var cur, - i = 0, - aup = a.parentNode, - bup = b.parentNode, - ap = [ a ], - bp = [ b ]; - - // Parentless nodes are either documents or disconnected - if ( !aup || !bup ) { - - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - /* eslint-disable eqeqeq */ - return a == document ? -1 : - b == document ? 1 : - /* eslint-enable eqeqeq */ - aup ? -1 : - bup ? 1 : - sortInput ? - ( indexOf( sortInput, a ) - indexOf( sortInput, b ) ) : - 0; - - // If the nodes are siblings, we can do a quick check - } else if ( aup === bup ) { - return siblingCheck( a, b ); - } - - // Otherwise we need full lists of their ancestors for comparison - cur = a; - while ( ( cur = cur.parentNode ) ) { - ap.unshift( cur ); - } - cur = b; - while ( ( cur = cur.parentNode ) ) { - bp.unshift( cur ); - } - - // Walk down the tree looking for a discrepancy - while ( ap[ i ] === bp[ i ] ) { - i++; - } - - return i ? - - // Do a sibling check if the nodes have a common ancestor - siblingCheck( ap[ i ], bp[ i ] ) : - - // Otherwise nodes in our document sort first - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. 
- /* eslint-disable eqeqeq */ - ap[ i ] == preferredDoc ? -1 : - bp[ i ] == preferredDoc ? 1 : - /* eslint-enable eqeqeq */ - 0; - }; - - return document; -}; - -Sizzle.matches = function( expr, elements ) { - return Sizzle( expr, null, null, elements ); -}; - -Sizzle.matchesSelector = function( elem, expr ) { - setDocument( elem ); - - if ( support.matchesSelector && documentIsHTML && - !nonnativeSelectorCache[ expr + " " ] && - ( !rbuggyMatches || !rbuggyMatches.test( expr ) ) && - ( !rbuggyQSA || !rbuggyQSA.test( expr ) ) ) { - - try { - var ret = matches.call( elem, expr ); - - // IE 9's matchesSelector returns false on disconnected nodes - if ( ret || support.disconnectedMatch || - - // As well, disconnected nodes are said to be in a document - // fragment in IE 9 - elem.document && elem.document.nodeType !== 11 ) { - return ret; - } - } catch ( e ) { - nonnativeSelectorCache( expr, true ); - } - } - - return Sizzle( expr, document, null, [ elem ] ).length > 0; -}; - -Sizzle.contains = function( context, elem ) { - - // Set document vars if needed - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - if ( ( context.ownerDocument || context ) != document ) { - setDocument( context ); - } - return contains( context, elem ); -}; - -Sizzle.attr = function( elem, name ) { - - // Set document vars if needed - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - if ( ( elem.ownerDocument || elem ) != document ) { - setDocument( elem ); - } - - var fn = Expr.attrHandle[ name.toLowerCase() ], - - // Don't get fooled by Object.prototype properties (jQuery #13807) - val = fn && hasOwn.call( Expr.attrHandle, name.toLowerCase() ) ? - fn( elem, name, !documentIsHTML ) : - undefined; - - return val !== undefined ? - val : - support.attributes || !documentIsHTML ? - elem.getAttribute( name ) : - ( val = elem.getAttributeNode( name ) ) && val.specified ? 
- val.value : - null; -}; - -Sizzle.escape = function( sel ) { - return ( sel + "" ).replace( rcssescape, fcssescape ); -}; - -Sizzle.error = function( msg ) { - throw new Error( "Syntax error, unrecognized expression: " + msg ); -}; - -/** - * Document sorting and removing duplicates - * @param {ArrayLike} results - */ -Sizzle.uniqueSort = function( results ) { - var elem, - duplicates = [], - j = 0, - i = 0; - - // Unless we *know* we can detect duplicates, assume their presence - hasDuplicate = !support.detectDuplicates; - sortInput = !support.sortStable && results.slice( 0 ); - results.sort( sortOrder ); - - if ( hasDuplicate ) { - while ( ( elem = results[ i++ ] ) ) { - if ( elem === results[ i ] ) { - j = duplicates.push( i ); - } - } - while ( j-- ) { - results.splice( duplicates[ j ], 1 ); - } - } - - // Clear input after sorting to release objects - // See https://github.com/jquery/sizzle/pull/225 - sortInput = null; - - return results; -}; - -/** - * Utility function for retrieving the text value of an array of DOM nodes - * @param {Array|Element} elem - */ -getText = Sizzle.getText = function( elem ) { - var node, - ret = "", - i = 0, - nodeType = elem.nodeType; - - if ( !nodeType ) { - - // If no nodeType, this is expected to be an array - while ( ( node = elem[ i++ ] ) ) { - - // Do not traverse comment nodes - ret += getText( node ); - } - } else if ( nodeType === 1 || nodeType === 9 || nodeType === 11 ) { - - // Use textContent for elements - // innerText usage removed for consistency of new lines (jQuery #11153) - if ( typeof elem.textContent === "string" ) { - return elem.textContent; - } else { - - // Traverse its children - for ( elem = elem.firstChild; elem; elem = elem.nextSibling ) { - ret += getText( elem ); - } - } - } else if ( nodeType === 3 || nodeType === 4 ) { - return elem.nodeValue; - } - - // Do not include comment or processing instruction nodes - - return ret; -}; - -Expr = Sizzle.selectors = { - - // Can be adjusted by the user - cacheLength: 50, - - createPseudo: markFunction, - - match: matchExpr, - - attrHandle: {}, - - find: {}, - - relative: { - ">": { dir: "parentNode", first: true }, - " ": { dir: "parentNode" }, - "+": { dir: "previousSibling", first: true }, - "~": { dir: "previousSibling" } - }, - - preFilter: { - "ATTR": function( match ) { - match[ 1 ] = match[ 1 ].replace( runescape, funescape ); - - // Move the given value to match[3] whether quoted or unquoted - match[ 3 ] = ( match[ 3 ] || match[ 4 ] || - match[ 5 ] || "" ).replace( runescape, funescape ); - - if ( match[ 2 ] === "~=" ) { - match[ 3 ] = " " + match[ 3 ] + " "; - } - - return match.slice( 0, 4 ); - }, - - "CHILD": function( match ) { - - /* matches from matchExpr["CHILD"] - 1 type (only|nth|...) - 2 what (child|of-type) - 3 argument (even|odd|\d*|\d*n([+-]\d+)?|...) - 4 xn-component of xn+y argument ([+-]?\d*n|) - 5 sign of xn-component - 6 x of xn-component - 7 sign of y-component - 8 y of y-component - */ - match[ 1 ] = match[ 1 ].toLowerCase(); - - if ( match[ 1 ].slice( 0, 3 ) === "nth" ) { - - // nth-* requires argument - if ( !match[ 3 ] ) { - Sizzle.error( match[ 0 ] ); - } - - // numeric x and y parameters for Expr.filter.CHILD - // remember that false/true cast respectively to 0/1 - match[ 4 ] = +( match[ 4 ] ? 
- match[ 5 ] + ( match[ 6 ] || 1 ) : - 2 * ( match[ 3 ] === "even" || match[ 3 ] === "odd" ) ); - match[ 5 ] = +( ( match[ 7 ] + match[ 8 ] ) || match[ 3 ] === "odd" ); - - // other types prohibit arguments - } else if ( match[ 3 ] ) { - Sizzle.error( match[ 0 ] ); - } - - return match; - }, - - "PSEUDO": function( match ) { - var excess, - unquoted = !match[ 6 ] && match[ 2 ]; - - if ( matchExpr[ "CHILD" ].test( match[ 0 ] ) ) { - return null; - } - - // Accept quoted arguments as-is - if ( match[ 3 ] ) { - match[ 2 ] = match[ 4 ] || match[ 5 ] || ""; - - // Strip excess characters from unquoted arguments - } else if ( unquoted && rpseudo.test( unquoted ) && - - // Get excess from tokenize (recursively) - ( excess = tokenize( unquoted, true ) ) && - - // advance to the next closing parenthesis - ( excess = unquoted.indexOf( ")", unquoted.length - excess ) - unquoted.length ) ) { - - // excess is a negative index - match[ 0 ] = match[ 0 ].slice( 0, excess ); - match[ 2 ] = unquoted.slice( 0, excess ); - } - - // Return only captures needed by the pseudo filter method (type and argument) - return match.slice( 0, 3 ); - } - }, - - filter: { - - "TAG": function( nodeNameSelector ) { - var nodeName = nodeNameSelector.replace( runescape, funescape ).toLowerCase(); - return nodeNameSelector === "*" ? - function() { - return true; - } : - function( elem ) { - return elem.nodeName && elem.nodeName.toLowerCase() === nodeName; - }; - }, - - "CLASS": function( className ) { - var pattern = classCache[ className + " " ]; - - return pattern || - ( pattern = new RegExp( "(^|" + whitespace + - ")" + className + "(" + whitespace + "|$)" ) ) && classCache( - className, function( elem ) { - return pattern.test( - typeof elem.className === "string" && elem.className || - typeof elem.getAttribute !== "undefined" && - elem.getAttribute( "class" ) || - "" - ); - } ); - }, - - "ATTR": function( name, operator, check ) { - return function( elem ) { - var result = Sizzle.attr( elem, name ); - - if ( result == null ) { - return operator === "!="; - } - if ( !operator ) { - return true; - } - - result += ""; - - /* eslint-disable max-len */ - - return operator === "=" ? result === check : - operator === "!=" ? result !== check : - operator === "^=" ? check && result.indexOf( check ) === 0 : - operator === "*=" ? check && result.indexOf( check ) > -1 : - operator === "$=" ? check && result.slice( -check.length ) === check : - operator === "~=" ? ( " " + result.replace( rwhitespace, " " ) + " " ).indexOf( check ) > -1 : - operator === "|=" ? result === check || result.slice( 0, check.length + 1 ) === check + "-" : - false; - /* eslint-enable max-len */ - - }; - }, - - "CHILD": function( type, what, _argument, first, last ) { - var simple = type.slice( 0, 3 ) !== "nth", - forward = type.slice( -4 ) !== "last", - ofType = what === "of-type"; - - return first === 1 && last === 0 ? - - // Shortcut for :nth-*(n) - function( elem ) { - return !!elem.parentNode; - } : - - function( elem, _context, xml ) { - var cache, uniqueCache, outerCache, node, nodeIndex, start, - dir = simple !== forward ? "nextSibling" : "previousSibling", - parent = elem.parentNode, - name = ofType && elem.nodeName.toLowerCase(), - useCache = !xml && !ofType, - diff = false; - - if ( parent ) { - - // :(first|last|only)-(child|of-type) - if ( simple ) { - while ( dir ) { - node = elem; - while ( ( node = node[ dir ] ) ) { - if ( ofType ? 
- node.nodeName.toLowerCase() === name : - node.nodeType === 1 ) { - - return false; - } - } - - // Reverse direction for :only-* (if we haven't yet done so) - start = dir = type === "only" && !start && "nextSibling"; - } - return true; - } - - start = [ forward ? parent.firstChild : parent.lastChild ]; - - // non-xml :nth-child(...) stores cache data on `parent` - if ( forward && useCache ) { - - // Seek `elem` from a previously-cached index - - // ...in a gzip-friendly way - node = parent; - outerCache = node[ expando ] || ( node[ expando ] = {} ); - - // Support: IE <9 only - // Defend against cloned attroperties (jQuery gh-1709) - uniqueCache = outerCache[ node.uniqueID ] || - ( outerCache[ node.uniqueID ] = {} ); - - cache = uniqueCache[ type ] || []; - nodeIndex = cache[ 0 ] === dirruns && cache[ 1 ]; - diff = nodeIndex && cache[ 2 ]; - node = nodeIndex && parent.childNodes[ nodeIndex ]; - - while ( ( node = ++nodeIndex && node && node[ dir ] || - - // Fallback to seeking `elem` from the start - ( diff = nodeIndex = 0 ) || start.pop() ) ) { - - // When found, cache indexes on `parent` and break - if ( node.nodeType === 1 && ++diff && node === elem ) { - uniqueCache[ type ] = [ dirruns, nodeIndex, diff ]; - break; - } - } - - } else { - - // Use previously-cached element index if available - if ( useCache ) { - - // ...in a gzip-friendly way - node = elem; - outerCache = node[ expando ] || ( node[ expando ] = {} ); - - // Support: IE <9 only - // Defend against cloned attroperties (jQuery gh-1709) - uniqueCache = outerCache[ node.uniqueID ] || - ( outerCache[ node.uniqueID ] = {} ); - - cache = uniqueCache[ type ] || []; - nodeIndex = cache[ 0 ] === dirruns && cache[ 1 ]; - diff = nodeIndex; - } - - // xml :nth-child(...) - // or :nth-last-child(...) or :nth(-last)?-of-type(...) - if ( diff === false ) { - - // Use the same loop as above to seek `elem` from the start - while ( ( node = ++nodeIndex && node && node[ dir ] || - ( diff = nodeIndex = 0 ) || start.pop() ) ) { - - if ( ( ofType ? - node.nodeName.toLowerCase() === name : - node.nodeType === 1 ) && - ++diff ) { - - // Cache the index of each encountered element - if ( useCache ) { - outerCache = node[ expando ] || - ( node[ expando ] = {} ); - - // Support: IE <9 only - // Defend against cloned attroperties (jQuery gh-1709) - uniqueCache = outerCache[ node.uniqueID ] || - ( outerCache[ node.uniqueID ] = {} ); - - uniqueCache[ type ] = [ dirruns, diff ]; - } - - if ( node === elem ) { - break; - } - } - } - } - } - - // Incorporate the offset, then check against cycle size - diff -= last; - return diff === first || ( diff % first === 0 && diff / first >= 0 ); - } - }; - }, - - "PSEUDO": function( pseudo, argument ) { - - // pseudo-class names are case-insensitive - // http://www.w3.org/TR/selectors/#pseudo-classes - // Prioritize by case sensitivity in case custom pseudos are added with uppercase letters - // Remember that setFilters inherits from pseudos - var args, - fn = Expr.pseudos[ pseudo ] || Expr.setFilters[ pseudo.toLowerCase() ] || - Sizzle.error( "unsupported pseudo: " + pseudo ); - - // The user may use createPseudo to indicate that - // arguments are needed to create the filter function - // just as Sizzle does - if ( fn[ expando ] ) { - return fn( argument ); - } - - // But maintain support for old signatures - if ( fn.length > 1 ) { - args = [ pseudo, pseudo, "", argument ]; - return Expr.setFilters.hasOwnProperty( pseudo.toLowerCase() ) ? 
- markFunction( function( seed, matches ) { - var idx, - matched = fn( seed, argument ), - i = matched.length; - while ( i-- ) { - idx = indexOf( seed, matched[ i ] ); - seed[ idx ] = !( matches[ idx ] = matched[ i ] ); - } - } ) : - function( elem ) { - return fn( elem, 0, args ); - }; - } - - return fn; - } - }, - - pseudos: { - - // Potentially complex pseudos - "not": markFunction( function( selector ) { - - // Trim the selector passed to compile - // to avoid treating leading and trailing - // spaces as combinators - var input = [], - results = [], - matcher = compile( selector.replace( rtrim, "$1" ) ); - - return matcher[ expando ] ? - markFunction( function( seed, matches, _context, xml ) { - var elem, - unmatched = matcher( seed, null, xml, [] ), - i = seed.length; - - // Match elements unmatched by `matcher` - while ( i-- ) { - if ( ( elem = unmatched[ i ] ) ) { - seed[ i ] = !( matches[ i ] = elem ); - } - } - } ) : - function( elem, _context, xml ) { - input[ 0 ] = elem; - matcher( input, null, xml, results ); - - // Don't keep the element (issue #299) - input[ 0 ] = null; - return !results.pop(); - }; - } ), - - "has": markFunction( function( selector ) { - return function( elem ) { - return Sizzle( selector, elem ).length > 0; - }; - } ), - - "contains": markFunction( function( text ) { - text = text.replace( runescape, funescape ); - return function( elem ) { - return ( elem.textContent || getText( elem ) ).indexOf( text ) > -1; - }; - } ), - - // "Whether an element is represented by a :lang() selector - // is based solely on the element's language value - // being equal to the identifier C, - // or beginning with the identifier C immediately followed by "-". - // The matching of C against the element's language value is performed case-insensitively. - // The identifier C does not have to be a valid language name." - // http://www.w3.org/TR/selectors/#lang-pseudo - "lang": markFunction( function( lang ) { - - // lang value must be a valid identifier - if ( !ridentifier.test( lang || "" ) ) { - Sizzle.error( "unsupported lang: " + lang ); - } - lang = lang.replace( runescape, funescape ).toLowerCase(); - return function( elem ) { - var elemLang; - do { - if ( ( elemLang = documentIsHTML ? 
- elem.lang : - elem.getAttribute( "xml:lang" ) || elem.getAttribute( "lang" ) ) ) { - - elemLang = elemLang.toLowerCase(); - return elemLang === lang || elemLang.indexOf( lang + "-" ) === 0; - } - } while ( ( elem = elem.parentNode ) && elem.nodeType === 1 ); - return false; - }; - } ), - - // Miscellaneous - "target": function( elem ) { - var hash = window.location && window.location.hash; - return hash && hash.slice( 1 ) === elem.id; - }, - - "root": function( elem ) { - return elem === docElem; - }, - - "focus": function( elem ) { - return elem === document.activeElement && - ( !document.hasFocus || document.hasFocus() ) && - !!( elem.type || elem.href || ~elem.tabIndex ); - }, - - // Boolean properties - "enabled": createDisabledPseudo( false ), - "disabled": createDisabledPseudo( true ), - - "checked": function( elem ) { - - // In CSS3, :checked should return both checked and selected elements - // http://www.w3.org/TR/2011/REC-css3-selectors-20110929/#checked - var nodeName = elem.nodeName.toLowerCase(); - return ( nodeName === "input" && !!elem.checked ) || - ( nodeName === "option" && !!elem.selected ); - }, - - "selected": function( elem ) { - - // Accessing this property makes selected-by-default - // options in Safari work properly - if ( elem.parentNode ) { - // eslint-disable-next-line no-unused-expressions - elem.parentNode.selectedIndex; - } - - return elem.selected === true; - }, - - // Contents - "empty": function( elem ) { - - // http://www.w3.org/TR/selectors/#empty-pseudo - // :empty is negated by element (1) or content nodes (text: 3; cdata: 4; entity ref: 5), - // but not by others (comment: 8; processing instruction: 7; etc.) - // nodeType < 6 works because attributes (2) do not appear as children - for ( elem = elem.firstChild; elem; elem = elem.nextSibling ) { - if ( elem.nodeType < 6 ) { - return false; - } - } - return true; - }, - - "parent": function( elem ) { - return !Expr.pseudos[ "empty" ]( elem ); - }, - - // Element/input types - "header": function( elem ) { - return rheader.test( elem.nodeName ); - }, - - "input": function( elem ) { - return rinputs.test( elem.nodeName ); - }, - - "button": function( elem ) { - var name = elem.nodeName.toLowerCase(); - return name === "input" && elem.type === "button" || name === "button"; - }, - - "text": function( elem ) { - var attr; - return elem.nodeName.toLowerCase() === "input" && - elem.type === "text" && - - // Support: IE<8 - // New HTML5 attribute values (e.g., "search") appear with elem.type === "text" - ( ( attr = elem.getAttribute( "type" ) ) == null || - attr.toLowerCase() === "text" ); - }, - - // Position-in-collection - "first": createPositionalPseudo( function() { - return [ 0 ]; - } ), - - "last": createPositionalPseudo( function( _matchIndexes, length ) { - return [ length - 1 ]; - } ), - - "eq": createPositionalPseudo( function( _matchIndexes, length, argument ) { - return [ argument < 0 ? argument + length : argument ]; - } ), - - "even": createPositionalPseudo( function( matchIndexes, length ) { - var i = 0; - for ( ; i < length; i += 2 ) { - matchIndexes.push( i ); - } - return matchIndexes; - } ), - - "odd": createPositionalPseudo( function( matchIndexes, length ) { - var i = 1; - for ( ; i < length; i += 2 ) { - matchIndexes.push( i ); - } - return matchIndexes; - } ), - - "lt": createPositionalPseudo( function( matchIndexes, length, argument ) { - var i = argument < 0 ? - argument + length : - argument > length ? 
- length : - argument; - for ( ; --i >= 0; ) { - matchIndexes.push( i ); - } - return matchIndexes; - } ), - - "gt": createPositionalPseudo( function( matchIndexes, length, argument ) { - var i = argument < 0 ? argument + length : argument; - for ( ; ++i < length; ) { - matchIndexes.push( i ); - } - return matchIndexes; - } ) - } -}; - -Expr.pseudos[ "nth" ] = Expr.pseudos[ "eq" ]; - -// Add button/input type pseudos -for ( i in { radio: true, checkbox: true, file: true, password: true, image: true } ) { - Expr.pseudos[ i ] = createInputPseudo( i ); -} -for ( i in { submit: true, reset: true } ) { - Expr.pseudos[ i ] = createButtonPseudo( i ); -} - -// Easy API for creating new setFilters -function setFilters() {} -setFilters.prototype = Expr.filters = Expr.pseudos; -Expr.setFilters = new setFilters(); - -tokenize = Sizzle.tokenize = function( selector, parseOnly ) { - var matched, match, tokens, type, - soFar, groups, preFilters, - cached = tokenCache[ selector + " " ]; - - if ( cached ) { - return parseOnly ? 0 : cached.slice( 0 ); - } - - soFar = selector; - groups = []; - preFilters = Expr.preFilter; - - while ( soFar ) { - - // Comma and first run - if ( !matched || ( match = rcomma.exec( soFar ) ) ) { - if ( match ) { - - // Don't consume trailing commas as valid - soFar = soFar.slice( match[ 0 ].length ) || soFar; - } - groups.push( ( tokens = [] ) ); - } - - matched = false; - - // Combinators - if ( ( match = rcombinators.exec( soFar ) ) ) { - matched = match.shift(); - tokens.push( { - value: matched, - - // Cast descendant combinators to space - type: match[ 0 ].replace( rtrim, " " ) - } ); - soFar = soFar.slice( matched.length ); - } - - // Filters - for ( type in Expr.filter ) { - if ( ( match = matchExpr[ type ].exec( soFar ) ) && ( !preFilters[ type ] || - ( match = preFilters[ type ]( match ) ) ) ) { - matched = match.shift(); - tokens.push( { - value: matched, - type: type, - matches: match - } ); - soFar = soFar.slice( matched.length ); - } - } - - if ( !matched ) { - break; - } - } - - // Return the length of the invalid excess - // if we're just parsing - // Otherwise, throw an error or return tokens - return parseOnly ? - soFar.length : - soFar ? - Sizzle.error( selector ) : - - // Cache the tokens - tokenCache( selector, groups ).slice( 0 ); -}; - -function toSelector( tokens ) { - var i = 0, - len = tokens.length, - selector = ""; - for ( ; i < len; i++ ) { - selector += tokens[ i ].value; - } - return selector; -} - -function addCombinator( matcher, combinator, base ) { - var dir = combinator.dir, - skip = combinator.next, - key = skip || dir, - checkNonElements = base && key === "parentNode", - doneName = done++; - - return combinator.first ? 
- - // Check against closest ancestor/preceding element - function( elem, context, xml ) { - while ( ( elem = elem[ dir ] ) ) { - if ( elem.nodeType === 1 || checkNonElements ) { - return matcher( elem, context, xml ); - } - } - return false; - } : - - // Check against all ancestor/preceding elements - function( elem, context, xml ) { - var oldCache, uniqueCache, outerCache, - newCache = [ dirruns, doneName ]; - - // We can't set arbitrary data on XML nodes, so they don't benefit from combinator caching - if ( xml ) { - while ( ( elem = elem[ dir ] ) ) { - if ( elem.nodeType === 1 || checkNonElements ) { - if ( matcher( elem, context, xml ) ) { - return true; - } - } - } - } else { - while ( ( elem = elem[ dir ] ) ) { - if ( elem.nodeType === 1 || checkNonElements ) { - outerCache = elem[ expando ] || ( elem[ expando ] = {} ); - - // Support: IE <9 only - // Defend against cloned attroperties (jQuery gh-1709) - uniqueCache = outerCache[ elem.uniqueID ] || - ( outerCache[ elem.uniqueID ] = {} ); - - if ( skip && skip === elem.nodeName.toLowerCase() ) { - elem = elem[ dir ] || elem; - } else if ( ( oldCache = uniqueCache[ key ] ) && - oldCache[ 0 ] === dirruns && oldCache[ 1 ] === doneName ) { - - // Assign to newCache so results back-propagate to previous elements - return ( newCache[ 2 ] = oldCache[ 2 ] ); - } else { - - // Reuse newcache so results back-propagate to previous elements - uniqueCache[ key ] = newCache; - - // A match means we're done; a fail means we have to keep checking - if ( ( newCache[ 2 ] = matcher( elem, context, xml ) ) ) { - return true; - } - } - } - } - } - return false; - }; -} - -function elementMatcher( matchers ) { - return matchers.length > 1 ? - function( elem, context, xml ) { - var i = matchers.length; - while ( i-- ) { - if ( !matchers[ i ]( elem, context, xml ) ) { - return false; - } - } - return true; - } : - matchers[ 0 ]; -} - -function multipleContexts( selector, contexts, results ) { - var i = 0, - len = contexts.length; - for ( ; i < len; i++ ) { - Sizzle( selector, contexts[ i ], results ); - } - return results; -} - -function condense( unmatched, map, filter, context, xml ) { - var elem, - newUnmatched = [], - i = 0, - len = unmatched.length, - mapped = map != null; - - for ( ; i < len; i++ ) { - if ( ( elem = unmatched[ i ] ) ) { - if ( !filter || filter( elem, context, xml ) ) { - newUnmatched.push( elem ); - if ( mapped ) { - map.push( i ); - } - } - } - } - - return newUnmatched; -} - -function setMatcher( preFilter, selector, matcher, postFilter, postFinder, postSelector ) { - if ( postFilter && !postFilter[ expando ] ) { - postFilter = setMatcher( postFilter ); - } - if ( postFinder && !postFinder[ expando ] ) { - postFinder = setMatcher( postFinder, postSelector ); - } - return markFunction( function( seed, results, context, xml ) { - var temp, i, elem, - preMap = [], - postMap = [], - preexisting = results.length, - - // Get initial elements from seed or context - elems = seed || multipleContexts( - selector || "*", - context.nodeType ? [ context ] : context, - [] - ), - - // Prefilter to get matcher input, preserving a map for seed-results synchronization - matcherIn = preFilter && ( seed || !selector ) ? - condense( elems, preMap, preFilter, context, xml ) : - elems, - - matcherOut = matcher ? - - // If we have a postFinder, or filtered seed, or non-seed postFilter or preexisting results, - postFinder || ( seed ? preFilter : preexisting || postFilter ) ? 
- - // ...intermediate processing is necessary - [] : - - // ...otherwise use results directly - results : - matcherIn; - - // Find primary matches - if ( matcher ) { - matcher( matcherIn, matcherOut, context, xml ); - } - - // Apply postFilter - if ( postFilter ) { - temp = condense( matcherOut, postMap ); - postFilter( temp, [], context, xml ); - - // Un-match failing elements by moving them back to matcherIn - i = temp.length; - while ( i-- ) { - if ( ( elem = temp[ i ] ) ) { - matcherOut[ postMap[ i ] ] = !( matcherIn[ postMap[ i ] ] = elem ); - } - } - } - - if ( seed ) { - if ( postFinder || preFilter ) { - if ( postFinder ) { - - // Get the final matcherOut by condensing this intermediate into postFinder contexts - temp = []; - i = matcherOut.length; - while ( i-- ) { - if ( ( elem = matcherOut[ i ] ) ) { - - // Restore matcherIn since elem is not yet a final match - temp.push( ( matcherIn[ i ] = elem ) ); - } - } - postFinder( null, ( matcherOut = [] ), temp, xml ); - } - - // Move matched elements from seed to results to keep them synchronized - i = matcherOut.length; - while ( i-- ) { - if ( ( elem = matcherOut[ i ] ) && - ( temp = postFinder ? indexOf( seed, elem ) : preMap[ i ] ) > -1 ) { - - seed[ temp ] = !( results[ temp ] = elem ); - } - } - } - - // Add elements to results, through postFinder if defined - } else { - matcherOut = condense( - matcherOut === results ? - matcherOut.splice( preexisting, matcherOut.length ) : - matcherOut - ); - if ( postFinder ) { - postFinder( null, results, matcherOut, xml ); - } else { - push.apply( results, matcherOut ); - } - } - } ); -} - -function matcherFromTokens( tokens ) { - var checkContext, matcher, j, - len = tokens.length, - leadingRelative = Expr.relative[ tokens[ 0 ].type ], - implicitRelative = leadingRelative || Expr.relative[ " " ], - i = leadingRelative ? 1 : 0, - - // The foundational matcher ensures that elements are reachable from top-level context(s) - matchContext = addCombinator( function( elem ) { - return elem === checkContext; - }, implicitRelative, true ), - matchAnyContext = addCombinator( function( elem ) { - return indexOf( checkContext, elem ) > -1; - }, implicitRelative, true ), - matchers = [ function( elem, context, xml ) { - var ret = ( !leadingRelative && ( xml || context !== outermostContext ) ) || ( - ( checkContext = context ).nodeType ? - matchContext( elem, context, xml ) : - matchAnyContext( elem, context, xml ) ); - - // Avoid hanging onto element (issue #299) - checkContext = null; - return ret; - } ]; - - for ( ; i < len; i++ ) { - if ( ( matcher = Expr.relative[ tokens[ i ].type ] ) ) { - matchers = [ addCombinator( elementMatcher( matchers ), matcher ) ]; - } else { - matcher = Expr.filter[ tokens[ i ].type ].apply( null, tokens[ i ].matches ); - - // Return special upon seeing a positional matcher - if ( matcher[ expando ] ) { - - // Find the next relative operator (if any) for proper handling - j = ++i; - for ( ; j < len; j++ ) { - if ( Expr.relative[ tokens[ j ].type ] ) { - break; - } - } - return setMatcher( - i > 1 && elementMatcher( matchers ), - i > 1 && toSelector( - - // If the preceding token was a descendant combinator, insert an implicit any-element `*` - tokens - .slice( 0, i - 1 ) - .concat( { value: tokens[ i - 2 ].type === " " ? 
"*" : "" } ) - ).replace( rtrim, "$1" ), - matcher, - i < j && matcherFromTokens( tokens.slice( i, j ) ), - j < len && matcherFromTokens( ( tokens = tokens.slice( j ) ) ), - j < len && toSelector( tokens ) - ); - } - matchers.push( matcher ); - } - } - - return elementMatcher( matchers ); -} - -function matcherFromGroupMatchers( elementMatchers, setMatchers ) { - var bySet = setMatchers.length > 0, - byElement = elementMatchers.length > 0, - superMatcher = function( seed, context, xml, results, outermost ) { - var elem, j, matcher, - matchedCount = 0, - i = "0", - unmatched = seed && [], - setMatched = [], - contextBackup = outermostContext, - - // We must always have either seed elements or outermost context - elems = seed || byElement && Expr.find[ "TAG" ]( "*", outermost ), - - // Use integer dirruns iff this is the outermost matcher - dirrunsUnique = ( dirruns += contextBackup == null ? 1 : Math.random() || 0.1 ), - len = elems.length; - - if ( outermost ) { - - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - outermostContext = context == document || context || outermost; - } - - // Add elements passing elementMatchers directly to results - // Support: IE<9, Safari - // Tolerate NodeList properties (IE: "length"; Safari: ) matching elements by id - for ( ; i !== len && ( elem = elems[ i ] ) != null; i++ ) { - if ( byElement && elem ) { - j = 0; - - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - if ( !context && elem.ownerDocument != document ) { - setDocument( elem ); - xml = !documentIsHTML; - } - while ( ( matcher = elementMatchers[ j++ ] ) ) { - if ( matcher( elem, context || document, xml ) ) { - results.push( elem ); - break; - } - } - if ( outermost ) { - dirruns = dirrunsUnique; - } - } - - // Track unmatched elements for set filters - if ( bySet ) { - - // They will have gone through all possible matchers - if ( ( elem = !matcher && elem ) ) { - matchedCount--; - } - - // Lengthen the array for every element, matched or not - if ( seed ) { - unmatched.push( elem ); - } - } - } - - // `i` is now the count of elements visited above, and adding it to `matchedCount` - // makes the latter nonnegative. - matchedCount += i; - - // Apply set filters to unmatched elements - // NOTE: This can be skipped if there are no unmatched elements (i.e., `matchedCount` - // equals `i`), unless we didn't visit _any_ elements in the above loop because we have - // no element matchers and no seed. - // Incrementing an initially-string "0" `i` allows `i` to remain a string only in that - // case, which will result in a "00" `matchedCount` that differs from `i` but is also - // numerically zero. 
- if ( bySet && i !== matchedCount ) { - j = 0; - while ( ( matcher = setMatchers[ j++ ] ) ) { - matcher( unmatched, setMatched, context, xml ); - } - - if ( seed ) { - - // Reintegrate element matches to eliminate the need for sorting - if ( matchedCount > 0 ) { - while ( i-- ) { - if ( !( unmatched[ i ] || setMatched[ i ] ) ) { - setMatched[ i ] = pop.call( results ); - } - } - } - - // Discard index placeholder values to get only actual matches - setMatched = condense( setMatched ); - } - - // Add matches to results - push.apply( results, setMatched ); - - // Seedless set matches succeeding multiple successful matchers stipulate sorting - if ( outermost && !seed && setMatched.length > 0 && - ( matchedCount + setMatchers.length ) > 1 ) { - - Sizzle.uniqueSort( results ); - } - } - - // Override manipulation of globals by nested matchers - if ( outermost ) { - dirruns = dirrunsUnique; - outermostContext = contextBackup; - } - - return unmatched; - }; - - return bySet ? - markFunction( superMatcher ) : - superMatcher; -} - -compile = Sizzle.compile = function( selector, match /* Internal Use Only */ ) { - var i, - setMatchers = [], - elementMatchers = [], - cached = compilerCache[ selector + " " ]; - - if ( !cached ) { - - // Generate a function of recursive functions that can be used to check each element - if ( !match ) { - match = tokenize( selector ); - } - i = match.length; - while ( i-- ) { - cached = matcherFromTokens( match[ i ] ); - if ( cached[ expando ] ) { - setMatchers.push( cached ); - } else { - elementMatchers.push( cached ); - } - } - - // Cache the compiled function - cached = compilerCache( - selector, - matcherFromGroupMatchers( elementMatchers, setMatchers ) - ); - - // Save selector and tokenization - cached.selector = selector; - } - return cached; -}; - -/** - * A low-level selection function that works with Sizzle's compiled - * selector functions - * @param {String|Function} selector A selector or a pre-compiled - * selector function built with Sizzle.compile - * @param {Element} context - * @param {Array} [results] - * @param {Array} [seed] A set of elements to match against - */ -select = Sizzle.select = function( selector, context, results, seed ) { - var i, tokens, token, type, find, - compiled = typeof selector === "function" && selector, - match = !seed && tokenize( ( selector = compiled.selector || selector ) ); - - results = results || []; - - // Try to minimize operations if there is only one selector in the list and no seed - // (the latter of which guarantees us context) - if ( match.length === 1 ) { - - // Reduce context if the leading compound selector is an ID - tokens = match[ 0 ] = match[ 0 ].slice( 0 ); - if ( tokens.length > 2 && ( token = tokens[ 0 ] ).type === "ID" && - context.nodeType === 9 && documentIsHTML && Expr.relative[ tokens[ 1 ].type ] ) { - - context = ( Expr.find[ "ID" ]( token.matches[ 0 ] - .replace( runescape, funescape ), context ) || [] )[ 0 ]; - if ( !context ) { - return results; - - // Precompiled matchers will still verify ancestry, so step up a level - } else if ( compiled ) { - context = context.parentNode; - } - - selector = selector.slice( tokens.shift().value.length ); - } - - // Fetch a seed set for right-to-left matching - i = matchExpr[ "needsContext" ].test( selector ) ? 
0 : tokens.length;
- while ( i-- ) {
- token = tokens[ i ];
-
- // Abort if we hit a combinator
- if ( Expr.relative[ ( type = token.type ) ] ) {
- break;
- }
- if ( ( find = Expr.find[ type ] ) ) {
-
- // Search, expanding context for leading sibling combinators
- if ( ( seed = find(
- token.matches[ 0 ].replace( runescape, funescape ),
- rsibling.test( tokens[ 0 ].type ) && testContext( context.parentNode ) ||
- context
- ) ) ) {
-
- // If seed is empty or no tokens remain, we can return early
- tokens.splice( i, 1 );
- selector = seed.length && toSelector( tokens );
- if ( !selector ) {
- push.apply( results, seed );
- return results;
- }
-
- break;
- }
- }
- }
- }
-
- // Compile and execute a filtering function if one is not provided
- // Provide `match` to avoid retokenization if we modified the selector above
- ( compiled || compile( selector, match ) )(
- seed,
- context,
- !documentIsHTML,
- results,
- !context || rsibling.test( selector ) && testContext( context.parentNode ) || context
- );
- return results;
-};
-
-// One-time assignments
-
-// Sort stability
-support.sortStable = expando.split( "" ).sort( sortOrder ).join( "" ) === expando;
-
-// Support: Chrome 14-35+
-// Always assume duplicates if they aren't passed to the comparison function
-support.detectDuplicates = !!hasDuplicate;
-
-// Initialize against the default document
-setDocument();
-
-// Support: Webkit<537.32 - Safari 6.0.3/Chrome 25 (fixed in Chrome 27)
-// Detached nodes confoundingly follow *each other*
-support.sortDetached = assert( function( el ) {
-
- // Should return 1, but returns 4 (following)
- return el.compareDocumentPosition( document.createElement( "fieldset" ) ) & 1;
-} );
-
-// Support: IE<8
-// Prevent attribute/property "interpolation"
-// https://msdn.microsoft.com/en-us/library/ms536429%28VS.85%29.aspx
-if ( !assert( function( el ) {
- el.innerHTML = "<a href='#'></a>";
- return el.firstChild.getAttribute( "href" ) === "#";
-} ) ) {
- addHandle( "type|href|height|width", function( elem, name, isXML ) {
- if ( !isXML ) {
- return elem.getAttribute( name, name.toLowerCase() === "type" ? 1 : 2 );
- }
- } );
-}
-
-// Support: IE<9
-// Use defaultValue in place of getAttribute("value")
-if ( !support.attributes || !assert( function( el ) {
- el.innerHTML = "<input/>";
- el.firstChild.setAttribute( "value", "" );
- return el.firstChild.getAttribute( "value" ) === "";
-} ) ) {
- addHandle( "value", function( elem, _name, isXML ) {
- if ( !isXML && elem.nodeName.toLowerCase() === "input" ) {
- return elem.defaultValue;
- }
- } );
-}
-
-// Support: IE<9
-// Use getAttributeNode to fetch booleans when getAttribute lies
-if ( !assert( function( el ) {
- return el.getAttribute( "disabled" ) == null;
-} ) ) {
- addHandle( booleans, function( elem, name, isXML ) {
- var val;
- if ( !isXML ) {
- return elem[ name ] === true ? name.toLowerCase() :
- ( val = elem.getAttributeNode( name ) ) && val.specified ?
- val.value : - null; - } - } ); -} - -return Sizzle; - -} )( window ); - - - -jQuery.find = Sizzle; -jQuery.expr = Sizzle.selectors; - -// Deprecated -jQuery.expr[ ":" ] = jQuery.expr.pseudos; -jQuery.uniqueSort = jQuery.unique = Sizzle.uniqueSort; -jQuery.text = Sizzle.getText; -jQuery.isXMLDoc = Sizzle.isXML; -jQuery.contains = Sizzle.contains; -jQuery.escapeSelector = Sizzle.escape; - - - - -var dir = function( elem, dir, until ) { - var matched = [], - truncate = until !== undefined; - - while ( ( elem = elem[ dir ] ) && elem.nodeType !== 9 ) { - if ( elem.nodeType === 1 ) { - if ( truncate && jQuery( elem ).is( until ) ) { - break; - } - matched.push( elem ); - } - } - return matched; -}; - - -var siblings = function( n, elem ) { - var matched = []; - - for ( ; n; n = n.nextSibling ) { - if ( n.nodeType === 1 && n !== elem ) { - matched.push( n ); - } - } - - return matched; -}; - - -var rneedsContext = jQuery.expr.match.needsContext; - - - -function nodeName( elem, name ) { - - return elem.nodeName && elem.nodeName.toLowerCase() === name.toLowerCase(); - -} -var rsingleTag = ( /^<([a-z][^\/\0>:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i ); - - - -// Implement the identical functionality for filter and not -function winnow( elements, qualifier, not ) { - if ( isFunction( qualifier ) ) { - return jQuery.grep( elements, function( elem, i ) { - return !!qualifier.call( elem, i, elem ) !== not; - } ); - } - - // Single element - if ( qualifier.nodeType ) { - return jQuery.grep( elements, function( elem ) { - return ( elem === qualifier ) !== not; - } ); - } - - // Arraylike of elements (jQuery, arguments, Array) - if ( typeof qualifier !== "string" ) { - return jQuery.grep( elements, function( elem ) { - return ( indexOf.call( qualifier, elem ) > -1 ) !== not; - } ); - } - - // Filtered directly for both simple and complex selectors - return jQuery.filter( qualifier, elements, not ); -} - -jQuery.filter = function( expr, elems, not ) { - var elem = elems[ 0 ]; - - if ( not ) { - expr = ":not(" + expr + ")"; - } - - if ( elems.length === 1 && elem.nodeType === 1 ) { - return jQuery.find.matchesSelector( elem, expr ) ? [ elem ] : []; - } - - return jQuery.find.matches( expr, jQuery.grep( elems, function( elem ) { - return elem.nodeType === 1; - } ) ); -}; - -jQuery.fn.extend( { - find: function( selector ) { - var i, ret, - len = this.length, - self = this; - - if ( typeof selector !== "string" ) { - return this.pushStack( jQuery( selector ).filter( function() { - for ( i = 0; i < len; i++ ) { - if ( jQuery.contains( self[ i ], this ) ) { - return true; - } - } - } ) ); - } - - ret = this.pushStack( [] ); - - for ( i = 0; i < len; i++ ) { - jQuery.find( selector, self[ i ], ret ); - } - - return len > 1 ? jQuery.uniqueSort( ret ) : ret; - }, - filter: function( selector ) { - return this.pushStack( winnow( this, selector || [], false ) ); - }, - not: function( selector ) { - return this.pushStack( winnow( this, selector || [], true ) ); - }, - is: function( selector ) { - return !!winnow( - this, - - // If this is a positional/relative selector, check membership in the returned set - // so $("p:first").is("p:last") won't return true for a doc with two "p". - typeof selector === "string" && rneedsContext.test( selector ) ? 
- jQuery( selector ) :
- selector || [],
- false
- ).length;
- }
-} );
-
-
-// Initialize a jQuery object
-
-
-// A central reference to the root jQuery(document)
-var rootjQuery,
-
- // A simple way to check for HTML strings
- // Prioritize #id over <tag> to avoid XSS via location.hash (#9521)
- // Strict HTML recognition (#11290: must start with <)
- // Shortcut simple #id case for speed
- rquickExpr = /^(?:\s*(<[\w\W]+>)[^>]*|#([\w-]+))$/,
-
- init = jQuery.fn.init = function( selector, context, root ) {
- var match, elem;
-
- // HANDLE: $(""), $(null), $(undefined), $(false)
- if ( !selector ) {
- return this;
- }
-
- // Method init() accepts an alternate rootjQuery
- // so migrate can support jQuery.sub (gh-2101)
- root = root || rootjQuery;
-
- // Handle HTML strings
- if ( typeof selector === "string" ) {
- if ( selector[ 0 ] === "<" &&
- selector[ selector.length - 1 ] === ">" &&
- selector.length >= 3 ) {
-
- // Assume that strings that start and end with <> are HTML and skip the regex check
- match = [ null, selector, null ];
-
- } else {
- match = rquickExpr.exec( selector );
- }
-
- // Match html or make sure no context is specified for #id
- if ( match && ( match[ 1 ] || !context ) ) {
-
- // HANDLE: $(html) -> $(array)
- if ( match[ 1 ] ) {
- context = context instanceof jQuery ? context[ 0 ] : context;
-
- // Option to run scripts is true for back-compat
- // Intentionally let the error be thrown if parseHTML is not present
- jQuery.merge( this, jQuery.parseHTML(
- match[ 1 ],
- context && context.nodeType ? context.ownerDocument || context : document,
- true
- ) );
-
- // HANDLE: $(html, props)
- if ( rsingleTag.test( match[ 1 ] ) && jQuery.isPlainObject( context ) ) {
- for ( match in context ) {
-
- // Properties of context are called as methods if possible
- if ( isFunction( this[ match ] ) ) {
- this[ match ]( context[ match ] );
-
- // ...and otherwise set as attributes
- } else {
- this.attr( match, context[ match ] );
- }
- }
- }
-
- return this;
-
- // HANDLE: $(#id)
- } else {
- elem = document.getElementById( match[ 2 ] );
-
- if ( elem ) {
-
- // Inject the element directly into the jQuery object
- this[ 0 ] = elem;
- this.length = 1;
- }
- return this;
- }
-
- // HANDLE: $(expr, $(...))
- } else if ( !context || context.jquery ) {
- return ( context || root ).find( selector );
-
- // HANDLE: $(expr, context)
- // (which is just equivalent to: $(context).find(expr)
- } else {
- return this.constructor( context ).find( selector );
- }
-
- // HANDLE: $(DOMElement)
- } else if ( selector.nodeType ) {
- this[ 0 ] = selector;
- this.length = 1;
- return this;
-
- // HANDLE: $(function)
- // Shortcut for document ready
- } else if ( isFunction( selector ) ) {
- return root.ready !== undefined ?
- root.ready( selector ) : - - // Execute immediately if ready is not present - selector( jQuery ); - } - - return jQuery.makeArray( selector, this ); - }; - -// Give the init function the jQuery prototype for later instantiation -init.prototype = jQuery.fn; - -// Initialize central reference -rootjQuery = jQuery( document ); - - -var rparentsprev = /^(?:parents|prev(?:Until|All))/, - - // Methods guaranteed to produce a unique set when starting from a unique set - guaranteedUnique = { - children: true, - contents: true, - next: true, - prev: true - }; - -jQuery.fn.extend( { - has: function( target ) { - var targets = jQuery( target, this ), - l = targets.length; - - return this.filter( function() { - var i = 0; - for ( ; i < l; i++ ) { - if ( jQuery.contains( this, targets[ i ] ) ) { - return true; - } - } - } ); - }, - - closest: function( selectors, context ) { - var cur, - i = 0, - l = this.length, - matched = [], - targets = typeof selectors !== "string" && jQuery( selectors ); - - // Positional selectors never match, since there's no _selection_ context - if ( !rneedsContext.test( selectors ) ) { - for ( ; i < l; i++ ) { - for ( cur = this[ i ]; cur && cur !== context; cur = cur.parentNode ) { - - // Always skip document fragments - if ( cur.nodeType < 11 && ( targets ? - targets.index( cur ) > -1 : - - // Don't pass non-elements to Sizzle - cur.nodeType === 1 && - jQuery.find.matchesSelector( cur, selectors ) ) ) { - - matched.push( cur ); - break; - } - } - } - } - - return this.pushStack( matched.length > 1 ? jQuery.uniqueSort( matched ) : matched ); - }, - - // Determine the position of an element within the set - index: function( elem ) { - - // No argument, return index in parent - if ( !elem ) { - return ( this[ 0 ] && this[ 0 ].parentNode ) ? this.first().prevAll().length : -1; - } - - // Index in selector - if ( typeof elem === "string" ) { - return indexOf.call( jQuery( elem ), this[ 0 ] ); - } - - // Locate the position of the desired element - return indexOf.call( this, - - // If it receives a jQuery object, the first element is used - elem.jquery ? elem[ 0 ] : elem - ); - }, - - add: function( selector, context ) { - return this.pushStack( - jQuery.uniqueSort( - jQuery.merge( this.get(), jQuery( selector, context ) ) - ) - ); - }, - - addBack: function( selector ) { - return this.add( selector == null ? - this.prevObject : this.prevObject.filter( selector ) - ); - } -} ); - -function sibling( cur, dir ) { - while ( ( cur = cur[ dir ] ) && cur.nodeType !== 1 ) {} - return cur; -} - -jQuery.each( { - parent: function( elem ) { - var parent = elem.parentNode; - return parent && parent.nodeType !== 11 ? 
parent : null;
-	},
-	parents: function( elem ) {
-		return dir( elem, "parentNode" );
-	},
-	parentsUntil: function( elem, _i, until ) {
-		return dir( elem, "parentNode", until );
-	},
-	next: function( elem ) {
-		return sibling( elem, "nextSibling" );
-	},
-	prev: function( elem ) {
-		return sibling( elem, "previousSibling" );
-	},
-	nextAll: function( elem ) {
-		return dir( elem, "nextSibling" );
-	},
-	prevAll: function( elem ) {
-		return dir( elem, "previousSibling" );
-	},
-	nextUntil: function( elem, _i, until ) {
-		return dir( elem, "nextSibling", until );
-	},
-	prevUntil: function( elem, _i, until ) {
-		return dir( elem, "previousSibling", until );
-	},
-	siblings: function( elem ) {
-		return siblings( ( elem.parentNode || {} ).firstChild, elem );
-	},
-	children: function( elem ) {
-		return siblings( elem.firstChild );
-	},
-	contents: function( elem ) {
-		if ( elem.contentDocument != null &&
-
-			// Support: IE 11+
-			// <object> elements with no `data` attribute have an object
-			// `contentDocument` with a `null` prototype.
-			getProto( elem.contentDocument ) ) {
-
-			return elem.contentDocument;
-		}
-
-		// Support: IE 9 - 11 only, iOS 7 only, Android Browser <=4.3 only
-		// Treat the template element as a regular one in browsers that
-		// don't support it.
-		if ( nodeName( elem, "template" ) ) {
-			elem = elem.content || elem;
-		}
-
-		return jQuery.merge( [], elem.childNodes );
-	}
-}, function( name, fn ) {
-	jQuery.fn[ name ] = function( until, selector ) {
-		var matched = jQuery.map( this, fn, until );
-
-		if ( name.slice( -5 ) !== "Until" ) {
-			selector = until;
-		}
-
-		if ( selector && typeof selector === "string" ) {
-			matched = jQuery.filter( selector, matched );
-		}
-
-		if ( this.length > 1 ) {
-
-			// Remove duplicates
-			if ( !guaranteedUnique[ name ] ) {
-				jQuery.uniqueSort( matched );
-			}
-
-			// Reverse order for parents* and prev-derivatives
-			if ( rparentsprev.test( name ) ) {
-				matched.reverse();
-			}
-		}
-
-		return this.pushStack( matched );
-	};
-} );
-var rnothtmlwhite = ( /[^\x20\t\r\n\f]+/g );
-
-
-
-// Convert String-formatted options into Object-formatted ones
-function createOptions( options ) {
-	var object = {};
-	jQuery.each( options.match( rnothtmlwhite ) || [], function( _, flag ) {
-		object[ flag ] = true;
-	} );
-	return object;
-}
-
-/*
- * Create a callback list using the following parameters:
- *
- *	options: an optional list of space-separated options that will change how
- *			the callback list behaves or a more traditional option object
- *
- * By default a callback list will act like an event callback list and can be
- * "fired" multiple times.
- *
- * Possible options:
- *
- *	once:			will ensure the callback list can only be fired once (like a Deferred)
- *
- *	memory:			will keep track of previous values and will call any callback added
- *					after the list has been fired right away with the latest "memorized"
- *					values (like a Deferred)
- *
- *	unique:			will ensure a callback can only be added once (no duplicate in the list)
- *
- *	stopOnFalse:	interrupt callings when a callback returns false
- *
- */
-jQuery.Callbacks = function( options ) {
-
-	// Convert options from String-formatted to Object-formatted if needed
-	// (we check in cache first)
-	options = typeof options === "string" ?
- createOptions( options ) : - jQuery.extend( {}, options ); - - var // Flag to know if list is currently firing - firing, - - // Last fire value for non-forgettable lists - memory, - - // Flag to know if list was already fired - fired, - - // Flag to prevent firing - locked, - - // Actual callback list - list = [], - - // Queue of execution data for repeatable lists - queue = [], - - // Index of currently firing callback (modified by add/remove as needed) - firingIndex = -1, - - // Fire callbacks - fire = function() { - - // Enforce single-firing - locked = locked || options.once; - - // Execute callbacks for all pending executions, - // respecting firingIndex overrides and runtime changes - fired = firing = true; - for ( ; queue.length; firingIndex = -1 ) { - memory = queue.shift(); - while ( ++firingIndex < list.length ) { - - // Run callback and check for early termination - if ( list[ firingIndex ].apply( memory[ 0 ], memory[ 1 ] ) === false && - options.stopOnFalse ) { - - // Jump to end and forget the data so .add doesn't re-fire - firingIndex = list.length; - memory = false; - } - } - } - - // Forget the data if we're done with it - if ( !options.memory ) { - memory = false; - } - - firing = false; - - // Clean up if we're done firing for good - if ( locked ) { - - // Keep an empty list if we have data for future add calls - if ( memory ) { - list = []; - - // Otherwise, this object is spent - } else { - list = ""; - } - } - }, - - // Actual Callbacks object - self = { - - // Add a callback or a collection of callbacks to the list - add: function() { - if ( list ) { - - // If we have memory from a past run, we should fire after adding - if ( memory && !firing ) { - firingIndex = list.length - 1; - queue.push( memory ); - } - - ( function add( args ) { - jQuery.each( args, function( _, arg ) { - if ( isFunction( arg ) ) { - if ( !options.unique || !self.has( arg ) ) { - list.push( arg ); - } - } else if ( arg && arg.length && toType( arg ) !== "string" ) { - - // Inspect recursively - add( arg ); - } - } ); - } )( arguments ); - - if ( memory && !firing ) { - fire(); - } - } - return this; - }, - - // Remove a callback from the list - remove: function() { - jQuery.each( arguments, function( _, arg ) { - var index; - while ( ( index = jQuery.inArray( arg, list, index ) ) > -1 ) { - list.splice( index, 1 ); - - // Handle firing indexes - if ( index <= firingIndex ) { - firingIndex--; - } - } - } ); - return this; - }, - - // Check if a given callback is in the list. - // If no argument is given, return whether or not list has callbacks attached. - has: function( fn ) { - return fn ? - jQuery.inArray( fn, list ) > -1 : - list.length > 0; - }, - - // Remove all callbacks from the list - empty: function() { - if ( list ) { - list = []; - } - return this; - }, - - // Disable .fire and .add - // Abort any current/pending executions - // Clear all callbacks and values - disable: function() { - locked = queue = []; - list = memory = ""; - return this; - }, - disabled: function() { - return !list; - }, - - // Disable .fire - // Also disable .add unless we have memory (since it would have no effect) - // Abort any pending executions - lock: function() { - locked = queue = []; - if ( !memory && !firing ) { - list = memory = ""; - } - return this; - }, - locked: function() { - return !!locked; - }, - - // Call all callbacks with the given context and arguments - fireWith: function( context, args ) { - if ( !locked ) { - args = args || []; - args = [ context, args.slice ? 
args.slice() : args ]; - queue.push( args ); - if ( !firing ) { - fire(); - } - } - return this; - }, - - // Call all the callbacks with the given arguments - fire: function() { - self.fireWith( this, arguments ); - return this; - }, - - // To know if the callbacks have already been called at least once - fired: function() { - return !!fired; - } - }; - - return self; -}; - - -function Identity( v ) { - return v; -} -function Thrower( ex ) { - throw ex; -} - -function adoptValue( value, resolve, reject, noValue ) { - var method; - - try { - - // Check for promise aspect first to privilege synchronous behavior - if ( value && isFunction( ( method = value.promise ) ) ) { - method.call( value ).done( resolve ).fail( reject ); - - // Other thenables - } else if ( value && isFunction( ( method = value.then ) ) ) { - method.call( value, resolve, reject ); - - // Other non-thenables - } else { - - // Control `resolve` arguments by letting Array#slice cast boolean `noValue` to integer: - // * false: [ value ].slice( 0 ) => resolve( value ) - // * true: [ value ].slice( 1 ) => resolve() - resolve.apply( undefined, [ value ].slice( noValue ) ); - } - - // For Promises/A+, convert exceptions into rejections - // Since jQuery.when doesn't unwrap thenables, we can skip the extra checks appearing in - // Deferred#then to conditionally suppress rejection. - } catch ( value ) { - - // Support: Android 4.0 only - // Strict mode functions invoked without .call/.apply get global-object context - reject.apply( undefined, [ value ] ); - } -} - -jQuery.extend( { - - Deferred: function( func ) { - var tuples = [ - - // action, add listener, callbacks, - // ... .then handlers, argument index, [final state] - [ "notify", "progress", jQuery.Callbacks( "memory" ), - jQuery.Callbacks( "memory" ), 2 ], - [ "resolve", "done", jQuery.Callbacks( "once memory" ), - jQuery.Callbacks( "once memory" ), 0, "resolved" ], - [ "reject", "fail", jQuery.Callbacks( "once memory" ), - jQuery.Callbacks( "once memory" ), 1, "rejected" ] - ], - state = "pending", - promise = { - state: function() { - return state; - }, - always: function() { - deferred.done( arguments ).fail( arguments ); - return this; - }, - "catch": function( fn ) { - return promise.then( null, fn ); - }, - - // Keep pipe for back-compat - pipe: function( /* fnDone, fnFail, fnProgress */ ) { - var fns = arguments; - - return jQuery.Deferred( function( newDefer ) { - jQuery.each( tuples, function( _i, tuple ) { - - // Map tuples (progress, done, fail) to arguments (done, fail, progress) - var fn = isFunction( fns[ tuple[ 4 ] ] ) && fns[ tuple[ 4 ] ]; - - // deferred.progress(function() { bind to newDefer or newDefer.notify }) - // deferred.done(function() { bind to newDefer or newDefer.resolve }) - // deferred.fail(function() { bind to newDefer or newDefer.reject }) - deferred[ tuple[ 1 ] ]( function() { - var returned = fn && fn.apply( this, arguments ); - if ( returned && isFunction( returned.promise ) ) { - returned.promise() - .progress( newDefer.notify ) - .done( newDefer.resolve ) - .fail( newDefer.reject ); - } else { - newDefer[ tuple[ 0 ] + "With" ]( - this, - fn ? 
[ returned ] : arguments - ); - } - } ); - } ); - fns = null; - } ).promise(); - }, - then: function( onFulfilled, onRejected, onProgress ) { - var maxDepth = 0; - function resolve( depth, deferred, handler, special ) { - return function() { - var that = this, - args = arguments, - mightThrow = function() { - var returned, then; - - // Support: Promises/A+ section 2.3.3.3.3 - // https://promisesaplus.com/#point-59 - // Ignore double-resolution attempts - if ( depth < maxDepth ) { - return; - } - - returned = handler.apply( that, args ); - - // Support: Promises/A+ section 2.3.1 - // https://promisesaplus.com/#point-48 - if ( returned === deferred.promise() ) { - throw new TypeError( "Thenable self-resolution" ); - } - - // Support: Promises/A+ sections 2.3.3.1, 3.5 - // https://promisesaplus.com/#point-54 - // https://promisesaplus.com/#point-75 - // Retrieve `then` only once - then = returned && - - // Support: Promises/A+ section 2.3.4 - // https://promisesaplus.com/#point-64 - // Only check objects and functions for thenability - ( typeof returned === "object" || - typeof returned === "function" ) && - returned.then; - - // Handle a returned thenable - if ( isFunction( then ) ) { - - // Special processors (notify) just wait for resolution - if ( special ) { - then.call( - returned, - resolve( maxDepth, deferred, Identity, special ), - resolve( maxDepth, deferred, Thrower, special ) - ); - - // Normal processors (resolve) also hook into progress - } else { - - // ...and disregard older resolution values - maxDepth++; - - then.call( - returned, - resolve( maxDepth, deferred, Identity, special ), - resolve( maxDepth, deferred, Thrower, special ), - resolve( maxDepth, deferred, Identity, - deferred.notifyWith ) - ); - } - - // Handle all other returned values - } else { - - // Only substitute handlers pass on context - // and multiple values (non-spec behavior) - if ( handler !== Identity ) { - that = undefined; - args = [ returned ]; - } - - // Process the value(s) - // Default process is resolve - ( special || deferred.resolveWith )( that, args ); - } - }, - - // Only normal processors (resolve) catch and reject exceptions - process = special ? - mightThrow : - function() { - try { - mightThrow(); - } catch ( e ) { - - if ( jQuery.Deferred.exceptionHook ) { - jQuery.Deferred.exceptionHook( e, - process.stackTrace ); - } - - // Support: Promises/A+ section 2.3.3.3.4.1 - // https://promisesaplus.com/#point-61 - // Ignore post-resolution exceptions - if ( depth + 1 >= maxDepth ) { - - // Only substitute handlers pass on context - // and multiple values (non-spec behavior) - if ( handler !== Thrower ) { - that = undefined; - args = [ e ]; - } - - deferred.rejectWith( that, args ); - } - } - }; - - // Support: Promises/A+ section 2.3.3.3.1 - // https://promisesaplus.com/#point-57 - // Re-resolve promises immediately to dodge false rejection from - // subsequent errors - if ( depth ) { - process(); - } else { - - // Call an optional hook to record the stack, in case of exception - // since it's otherwise lost when execution goes async - if ( jQuery.Deferred.getStackHook ) { - process.stackTrace = jQuery.Deferred.getStackHook(); - } - window.setTimeout( process ); - } - }; - } - - return jQuery.Deferred( function( newDefer ) { - - // progress_handlers.add( ... ) - tuples[ 0 ][ 3 ].add( - resolve( - 0, - newDefer, - isFunction( onProgress ) ? - onProgress : - Identity, - newDefer.notifyWith - ) - ); - - // fulfilled_handlers.add( ... 
) - tuples[ 1 ][ 3 ].add( - resolve( - 0, - newDefer, - isFunction( onFulfilled ) ? - onFulfilled : - Identity - ) - ); - - // rejected_handlers.add( ... ) - tuples[ 2 ][ 3 ].add( - resolve( - 0, - newDefer, - isFunction( onRejected ) ? - onRejected : - Thrower - ) - ); - } ).promise(); - }, - - // Get a promise for this deferred - // If obj is provided, the promise aspect is added to the object - promise: function( obj ) { - return obj != null ? jQuery.extend( obj, promise ) : promise; - } - }, - deferred = {}; - - // Add list-specific methods - jQuery.each( tuples, function( i, tuple ) { - var list = tuple[ 2 ], - stateString = tuple[ 5 ]; - - // promise.progress = list.add - // promise.done = list.add - // promise.fail = list.add - promise[ tuple[ 1 ] ] = list.add; - - // Handle state - if ( stateString ) { - list.add( - function() { - - // state = "resolved" (i.e., fulfilled) - // state = "rejected" - state = stateString; - }, - - // rejected_callbacks.disable - // fulfilled_callbacks.disable - tuples[ 3 - i ][ 2 ].disable, - - // rejected_handlers.disable - // fulfilled_handlers.disable - tuples[ 3 - i ][ 3 ].disable, - - // progress_callbacks.lock - tuples[ 0 ][ 2 ].lock, - - // progress_handlers.lock - tuples[ 0 ][ 3 ].lock - ); - } - - // progress_handlers.fire - // fulfilled_handlers.fire - // rejected_handlers.fire - list.add( tuple[ 3 ].fire ); - - // deferred.notify = function() { deferred.notifyWith(...) } - // deferred.resolve = function() { deferred.resolveWith(...) } - // deferred.reject = function() { deferred.rejectWith(...) } - deferred[ tuple[ 0 ] ] = function() { - deferred[ tuple[ 0 ] + "With" ]( this === deferred ? undefined : this, arguments ); - return this; - }; - - // deferred.notifyWith = list.fireWith - // deferred.resolveWith = list.fireWith - // deferred.rejectWith = list.fireWith - deferred[ tuple[ 0 ] + "With" ] = list.fireWith; - } ); - - // Make the deferred a promise - promise.promise( deferred ); - - // Call given func if any - if ( func ) { - func.call( deferred, deferred ); - } - - // All done! - return deferred; - }, - - // Deferred helper - when: function( singleValue ) { - var - - // count of uncompleted subordinates - remaining = arguments.length, - - // count of unprocessed arguments - i = remaining, - - // subordinate fulfillment data - resolveContexts = Array( i ), - resolveValues = slice.call( arguments ), - - // the primary Deferred - primary = jQuery.Deferred(), - - // subordinate callback factory - updateFunc = function( i ) { - return function( value ) { - resolveContexts[ i ] = this; - resolveValues[ i ] = arguments.length > 1 ? slice.call( arguments ) : value; - if ( !( --remaining ) ) { - primary.resolveWith( resolveContexts, resolveValues ); - } - }; - }; - - // Single- and empty arguments are adopted like Promise.resolve - if ( remaining <= 1 ) { - adoptValue( singleValue, primary.done( updateFunc( i ) ).resolve, primary.reject, - !remaining ); - - // Use .then() to unwrap secondary thenables (cf. gh-3000) - if ( primary.state() === "pending" || - isFunction( resolveValues[ i ] && resolveValues[ i ].then ) ) { - - return primary.then(); - } - } - - // Multiple arguments are aggregated like Promise.all array elements - while ( i-- ) { - adoptValue( resolveValues[ i ], updateFunc( i ), primary.reject ); - } - - return primary.promise(); - } -} ); - - -// These usually indicate a programmer mistake during development, -// warn about them ASAP rather than swallowing them by default. 
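-//
-// Illustrative sketch, not part of the library: with the hook below in place,
-// a then-handler that throws one of these native error types still rejects
-// the chain, but the mistake is also surfaced on the console:
-//
-//   jQuery.Deferred().resolve().then( function() {
-//       missingFn(); // hypothetical undefined function -> ReferenceError,
-//                    // warned via exceptionHook *and* rejects the promise
-//   } );
-//
-// A plain `throw new Error( "..." )` would reject without a warning, since
-// the name "Error" does not match rerrorNames.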
-var rerrorNames = /^(Eval|Internal|Range|Reference|Syntax|Type|URI)Error$/; - -jQuery.Deferred.exceptionHook = function( error, stack ) { - - // Support: IE 8 - 9 only - // Console exists when dev tools are open, which can happen at any time - if ( window.console && window.console.warn && error && rerrorNames.test( error.name ) ) { - window.console.warn( "jQuery.Deferred exception: " + error.message, error.stack, stack ); - } -}; - - - - -jQuery.readyException = function( error ) { - window.setTimeout( function() { - throw error; - } ); -}; - - - - -// The deferred used on DOM ready -var readyList = jQuery.Deferred(); - -jQuery.fn.ready = function( fn ) { - - readyList - .then( fn ) - - // Wrap jQuery.readyException in a function so that the lookup - // happens at the time of error handling instead of callback - // registration. - .catch( function( error ) { - jQuery.readyException( error ); - } ); - - return this; -}; - -jQuery.extend( { - - // Is the DOM ready to be used? Set to true once it occurs. - isReady: false, - - // A counter to track how many items to wait for before - // the ready event fires. See #6781 - readyWait: 1, - - // Handle when the DOM is ready - ready: function( wait ) { - - // Abort if there are pending holds or we're already ready - if ( wait === true ? --jQuery.readyWait : jQuery.isReady ) { - return; - } - - // Remember that the DOM is ready - jQuery.isReady = true; - - // If a normal DOM Ready event fired, decrement, and wait if need be - if ( wait !== true && --jQuery.readyWait > 0 ) { - return; - } - - // If there are functions bound, to execute - readyList.resolveWith( document, [ jQuery ] ); - } -} ); - -jQuery.ready.then = readyList.then; - -// The ready event handler and self cleanup method -function completed() { - document.removeEventListener( "DOMContentLoaded", completed ); - window.removeEventListener( "load", completed ); - jQuery.ready(); -} - -// Catch cases where $(document).ready() is called -// after the browser event has already occurred. -// Support: IE <=9 - 10 only -// Older IE sometimes signals "interactive" too soon -if ( document.readyState === "complete" || - ( document.readyState !== "loading" && !document.documentElement.doScroll ) ) { - - // Handle it asynchronously to allow scripts the opportunity to delay ready - window.setTimeout( jQuery.ready ); - -} else { - - // Use the handy event callback - document.addEventListener( "DOMContentLoaded", completed ); - - // A fallback to window.onload, that will always work - window.addEventListener( "load", completed ); -} - - - - -// Multifunctional method to get and set values of a collection -// The value/s can optionally be executed if it's a function -var access = function( elems, fn, key, value, chainable, emptyGet, raw ) { - var i = 0, - len = elems.length, - bulk = key == null; - - // Sets many values - if ( toType( key ) === "object" ) { - chainable = true; - for ( i in key ) { - access( elems, fn, i, key[ i ], true, emptyGet, raw ); - } - - // Sets one value - } else if ( value !== undefined ) { - chainable = true; - - if ( !isFunction( value ) ) { - raw = true; - } - - if ( bulk ) { - - // Bulk operations run against the entire set - if ( raw ) { - fn.call( elems, value ); - fn = null; - - // ...except when executing function values - } else { - bulk = fn; - fn = function( elem, _key, value ) { - return bulk.call( jQuery( elem ), value ); - }; - } - } - - if ( fn ) { - for ( ; i < len; i++ ) { - fn( - elems[ i ], key, raw ? 
- value : - value.call( elems[ i ], i, fn( elems[ i ], key ) ) - ); - } - } - } - - if ( chainable ) { - return elems; - } - - // Gets - if ( bulk ) { - return fn.call( elems ); - } - - return len ? fn( elems[ 0 ], key ) : emptyGet; -}; - - -// Matches dashed string for camelizing -var rmsPrefix = /^-ms-/, - rdashAlpha = /-([a-z])/g; - -// Used by camelCase as callback to replace() -function fcamelCase( _all, letter ) { - return letter.toUpperCase(); -} - -// Convert dashed to camelCase; used by the css and data modules -// Support: IE <=9 - 11, Edge 12 - 15 -// Microsoft forgot to hump their vendor prefix (#9572) -function camelCase( string ) { - return string.replace( rmsPrefix, "ms-" ).replace( rdashAlpha, fcamelCase ); -} -var acceptData = function( owner ) { - - // Accepts only: - // - Node - // - Node.ELEMENT_NODE - // - Node.DOCUMENT_NODE - // - Object - // - Any - return owner.nodeType === 1 || owner.nodeType === 9 || !( +owner.nodeType ); -}; - - - - -function Data() { - this.expando = jQuery.expando + Data.uid++; -} - -Data.uid = 1; - -Data.prototype = { - - cache: function( owner ) { - - // Check if the owner object already has a cache - var value = owner[ this.expando ]; - - // If not, create one - if ( !value ) { - value = {}; - - // We can accept data for non-element nodes in modern browsers, - // but we should not, see #8335. - // Always return an empty object. - if ( acceptData( owner ) ) { - - // If it is a node unlikely to be stringify-ed or looped over - // use plain assignment - if ( owner.nodeType ) { - owner[ this.expando ] = value; - - // Otherwise secure it in a non-enumerable property - // configurable must be true to allow the property to be - // deleted when data is removed - } else { - Object.defineProperty( owner, this.expando, { - value: value, - configurable: true - } ); - } - } - } - - return value; - }, - set: function( owner, data, value ) { - var prop, - cache = this.cache( owner ); - - // Handle: [ owner, key, value ] args - // Always use camelCase key (gh-2257) - if ( typeof data === "string" ) { - cache[ camelCase( data ) ] = value; - - // Handle: [ owner, { properties } ] args - } else { - - // Copy the properties one-by-one to the cache object - for ( prop in data ) { - cache[ camelCase( prop ) ] = data[ prop ]; - } - } - return cache; - }, - get: function( owner, key ) { - return key === undefined ? - this.cache( owner ) : - - // Always use camelCase key (gh-2257) - owner[ this.expando ] && owner[ this.expando ][ camelCase( key ) ]; - }, - access: function( owner, key, value ) { - - // In cases where either: - // - // 1. No key was specified - // 2. A string key was specified, but no value provided - // - // Take the "read" path and allow the get method to determine - // which value to return, respectively either: - // - // 1. The entire cache object - // 2. The data stored at the key - // - if ( key === undefined || - ( ( key && typeof key === "string" ) && value === undefined ) ) { - - return this.get( owner, key ); - } - - // When the key is not a string, or both a key and value - // are specified, set or extend (existing objects) with either: - // - // 1. An object of properties - // 2. A key and value - // - this.set( owner, key, value ); - - // Since the "set" path can have two possible entry points - // return the expected data based on which path was taken[*] - return value !== undefined ? 
value : key; - }, - remove: function( owner, key ) { - var i, - cache = owner[ this.expando ]; - - if ( cache === undefined ) { - return; - } - - if ( key !== undefined ) { - - // Support array or space separated string of keys - if ( Array.isArray( key ) ) { - - // If key is an array of keys... - // We always set camelCase keys, so remove that. - key = key.map( camelCase ); - } else { - key = camelCase( key ); - - // If a key with the spaces exists, use it. - // Otherwise, create an array by matching non-whitespace - key = key in cache ? - [ key ] : - ( key.match( rnothtmlwhite ) || [] ); - } - - i = key.length; - - while ( i-- ) { - delete cache[ key[ i ] ]; - } - } - - // Remove the expando if there's no more data - if ( key === undefined || jQuery.isEmptyObject( cache ) ) { - - // Support: Chrome <=35 - 45 - // Webkit & Blink performance suffers when deleting properties - // from DOM nodes, so set to undefined instead - // https://bugs.chromium.org/p/chromium/issues/detail?id=378607 (bug restricted) - if ( owner.nodeType ) { - owner[ this.expando ] = undefined; - } else { - delete owner[ this.expando ]; - } - } - }, - hasData: function( owner ) { - var cache = owner[ this.expando ]; - return cache !== undefined && !jQuery.isEmptyObject( cache ); - } -}; -var dataPriv = new Data(); - -var dataUser = new Data(); - - - -// Implementation Summary -// -// 1. Enforce API surface and semantic compatibility with 1.9.x branch -// 2. Improve the module's maintainability by reducing the storage -// paths to a single mechanism. -// 3. Use the same single mechanism to support "private" and "user" data. -// 4. _Never_ expose "private" data to user code (TODO: Drop _data, _removeData) -// 5. Avoid exposing implementation details on user objects (eg. expando properties) -// 6. Provide a clear path for implementation upgrade to WeakMap in 2014 - -var rbrace = /^(?:\{[\w\W]*\}|\[[\w\W]*\])$/, - rmultiDash = /[A-Z]/g; - -function getData( data ) { - if ( data === "true" ) { - return true; - } - - if ( data === "false" ) { - return false; - } - - if ( data === "null" ) { - return null; - } - - // Only convert to a number if it doesn't change the string - if ( data === +data + "" ) { - return +data; - } - - if ( rbrace.test( data ) ) { - return JSON.parse( data ); - } - - return data; -} - -function dataAttr( elem, key, data ) { - var name; - - // If nothing was found internally, try to fetch any - // data from the HTML5 data-* attribute - if ( data === undefined && elem.nodeType === 1 ) { - name = "data-" + key.replace( rmultiDash, "-$&" ).toLowerCase(); - data = elem.getAttribute( name ); - - if ( typeof data === "string" ) { - try { - data = getData( data ); - } catch ( e ) {} - - // Make sure we set the data so it isn't changed later - dataUser.set( elem, key, data ); - } else { - data = undefined; - } - } - return data; -} - -jQuery.extend( { - hasData: function( elem ) { - return dataUser.hasData( elem ) || dataPriv.hasData( elem ); - }, - - data: function( elem, name, data ) { - return dataUser.access( elem, name, data ); - }, - - removeData: function( elem, name ) { - dataUser.remove( elem, name ); - }, - - // TODO: Now that all calls to _data and _removeData have been replaced - // with direct calls to dataPriv methods, these can be deprecated. 
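-	//
-	// Illustrative sketch, not library code: dataUser and dataPriv are
-	// independent stores, so user data and internal state never collide,
-	// even under the same key (`elem` here stands for any element node):
-	//
-	//   jQuery.data( elem, "state", "user" );      // stored in dataUser
-	//   jQuery._data( elem, "state", "internal" ); // stored in dataPriv
-	//   jQuery.data( elem, "state" );              // -> "user"
-	//   jQuery._data( elem, "state" );             // -> "internal"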
- _data: function( elem, name, data ) { - return dataPriv.access( elem, name, data ); - }, - - _removeData: function( elem, name ) { - dataPriv.remove( elem, name ); - } -} ); - -jQuery.fn.extend( { - data: function( key, value ) { - var i, name, data, - elem = this[ 0 ], - attrs = elem && elem.attributes; - - // Gets all values - if ( key === undefined ) { - if ( this.length ) { - data = dataUser.get( elem ); - - if ( elem.nodeType === 1 && !dataPriv.get( elem, "hasDataAttrs" ) ) { - i = attrs.length; - while ( i-- ) { - - // Support: IE 11 only - // The attrs elements can be null (#14894) - if ( attrs[ i ] ) { - name = attrs[ i ].name; - if ( name.indexOf( "data-" ) === 0 ) { - name = camelCase( name.slice( 5 ) ); - dataAttr( elem, name, data[ name ] ); - } - } - } - dataPriv.set( elem, "hasDataAttrs", true ); - } - } - - return data; - } - - // Sets multiple values - if ( typeof key === "object" ) { - return this.each( function() { - dataUser.set( this, key ); - } ); - } - - return access( this, function( value ) { - var data; - - // The calling jQuery object (element matches) is not empty - // (and therefore has an element appears at this[ 0 ]) and the - // `value` parameter was not undefined. An empty jQuery object - // will result in `undefined` for elem = this[ 0 ] which will - // throw an exception if an attempt to read a data cache is made. - if ( elem && value === undefined ) { - - // Attempt to get data from the cache - // The key will always be camelCased in Data - data = dataUser.get( elem, key ); - if ( data !== undefined ) { - return data; - } - - // Attempt to "discover" the data in - // HTML5 custom data-* attrs - data = dataAttr( elem, key ); - if ( data !== undefined ) { - return data; - } - - // We tried really hard, but the data doesn't exist. - return; - } - - // Set the data... 
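-			// (Keys are camelCased on write, so a hypothetical
-			// .data( "foo-bar", 1 ) is stored, and later read back,
-			// under "fooBar"; see Data.prototype.set/get above.)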
- this.each( function() { - - // We always store the camelCased key - dataUser.set( this, key, value ); - } ); - }, null, value, arguments.length > 1, null, true ); - }, - - removeData: function( key ) { - return this.each( function() { - dataUser.remove( this, key ); - } ); - } -} ); - - -jQuery.extend( { - queue: function( elem, type, data ) { - var queue; - - if ( elem ) { - type = ( type || "fx" ) + "queue"; - queue = dataPriv.get( elem, type ); - - // Speed up dequeue by getting out quickly if this is just a lookup - if ( data ) { - if ( !queue || Array.isArray( data ) ) { - queue = dataPriv.access( elem, type, jQuery.makeArray( data ) ); - } else { - queue.push( data ); - } - } - return queue || []; - } - }, - - dequeue: function( elem, type ) { - type = type || "fx"; - - var queue = jQuery.queue( elem, type ), - startLength = queue.length, - fn = queue.shift(), - hooks = jQuery._queueHooks( elem, type ), - next = function() { - jQuery.dequeue( elem, type ); - }; - - // If the fx queue is dequeued, always remove the progress sentinel - if ( fn === "inprogress" ) { - fn = queue.shift(); - startLength--; - } - - if ( fn ) { - - // Add a progress sentinel to prevent the fx queue from being - // automatically dequeued - if ( type === "fx" ) { - queue.unshift( "inprogress" ); - } - - // Clear up the last queue stop function - delete hooks.stop; - fn.call( elem, next, hooks ); - } - - if ( !startLength && hooks ) { - hooks.empty.fire(); - } - }, - - // Not public - generate a queueHooks object, or return the current one - _queueHooks: function( elem, type ) { - var key = type + "queueHooks"; - return dataPriv.get( elem, key ) || dataPriv.access( elem, key, { - empty: jQuery.Callbacks( "once memory" ).add( function() { - dataPriv.remove( elem, [ type + "queue", key ] ); - } ) - } ); - } -} ); - -jQuery.fn.extend( { - queue: function( type, data ) { - var setter = 2; - - if ( typeof type !== "string" ) { - data = type; - type = "fx"; - setter--; - } - - if ( arguments.length < setter ) { - return jQuery.queue( this[ 0 ], type ); - } - - return data === undefined ? 
- this : - this.each( function() { - var queue = jQuery.queue( this, type, data ); - - // Ensure a hooks for this queue - jQuery._queueHooks( this, type ); - - if ( type === "fx" && queue[ 0 ] !== "inprogress" ) { - jQuery.dequeue( this, type ); - } - } ); - }, - dequeue: function( type ) { - return this.each( function() { - jQuery.dequeue( this, type ); - } ); - }, - clearQueue: function( type ) { - return this.queue( type || "fx", [] ); - }, - - // Get a promise resolved when queues of a certain type - // are emptied (fx is the type by default) - promise: function( type, obj ) { - var tmp, - count = 1, - defer = jQuery.Deferred(), - elements = this, - i = this.length, - resolve = function() { - if ( !( --count ) ) { - defer.resolveWith( elements, [ elements ] ); - } - }; - - if ( typeof type !== "string" ) { - obj = type; - type = undefined; - } - type = type || "fx"; - - while ( i-- ) { - tmp = dataPriv.get( elements[ i ], type + "queueHooks" ); - if ( tmp && tmp.empty ) { - count++; - tmp.empty.add( resolve ); - } - } - resolve(); - return defer.promise( obj ); - } -} ); -var pnum = ( /[+-]?(?:\d*\.|)\d+(?:[eE][+-]?\d+|)/ ).source; - -var rcssNum = new RegExp( "^(?:([+-])=|)(" + pnum + ")([a-z%]*)$", "i" ); - - -var cssExpand = [ "Top", "Right", "Bottom", "Left" ]; - -var documentElement = document.documentElement; - - - - var isAttached = function( elem ) { - return jQuery.contains( elem.ownerDocument, elem ); - }, - composed = { composed: true }; - - // Support: IE 9 - 11+, Edge 12 - 18+, iOS 10.0 - 10.2 only - // Check attachment across shadow DOM boundaries when possible (gh-3504) - // Support: iOS 10.0-10.2 only - // Early iOS 10 versions support `attachShadow` but not `getRootNode`, - // leading to errors. We need to check for `getRootNode`. - if ( documentElement.getRootNode ) { - isAttached = function( elem ) { - return jQuery.contains( elem.ownerDocument, elem ) || - elem.getRootNode( composed ) === elem.ownerDocument; - }; - } -var isHiddenWithinTree = function( elem, el ) { - - // isHiddenWithinTree might be called from jQuery#filter function; - // in that case, element will be second argument - elem = el || elem; - - // Inline style trumps all - return elem.style.display === "none" || - elem.style.display === "" && - - // Otherwise, check computed style - // Support: Firefox <=43 - 45 - // Disconnected elements can have computed display: none, so first confirm that elem is - // in the document. - isAttached( elem ) && - - jQuery.css( elem, "display" ) === "none"; - }; - - - -function adjustCSS( elem, prop, valueParts, tween ) { - var adjusted, scale, - maxIterations = 20, - currentValue = tween ? - function() { - return tween.cur(); - } : - function() { - return jQuery.css( elem, prop, "" ); - }, - initial = currentValue(), - unit = valueParts && valueParts[ 3 ] || ( jQuery.cssNumber[ prop ] ? 
"" : "px" ), - - // Starting value computation is required for potential unit mismatches - initialInUnit = elem.nodeType && - ( jQuery.cssNumber[ prop ] || unit !== "px" && +initial ) && - rcssNum.exec( jQuery.css( elem, prop ) ); - - if ( initialInUnit && initialInUnit[ 3 ] !== unit ) { - - // Support: Firefox <=54 - // Halve the iteration target value to prevent interference from CSS upper bounds (gh-2144) - initial = initial / 2; - - // Trust units reported by jQuery.css - unit = unit || initialInUnit[ 3 ]; - - // Iteratively approximate from a nonzero starting point - initialInUnit = +initial || 1; - - while ( maxIterations-- ) { - - // Evaluate and update our best guess (doubling guesses that zero out). - // Finish if the scale equals or crosses 1 (making the old*new product non-positive). - jQuery.style( elem, prop, initialInUnit + unit ); - if ( ( 1 - scale ) * ( 1 - ( scale = currentValue() / initial || 0.5 ) ) <= 0 ) { - maxIterations = 0; - } - initialInUnit = initialInUnit / scale; - - } - - initialInUnit = initialInUnit * 2; - jQuery.style( elem, prop, initialInUnit + unit ); - - // Make sure we update the tween properties later on - valueParts = valueParts || []; - } - - if ( valueParts ) { - initialInUnit = +initialInUnit || +initial || 0; - - // Apply relative offset (+=/-=) if specified - adjusted = valueParts[ 1 ] ? - initialInUnit + ( valueParts[ 1 ] + 1 ) * valueParts[ 2 ] : - +valueParts[ 2 ]; - if ( tween ) { - tween.unit = unit; - tween.start = initialInUnit; - tween.end = adjusted; - } - } - return adjusted; -} - - -var defaultDisplayMap = {}; - -function getDefaultDisplay( elem ) { - var temp, - doc = elem.ownerDocument, - nodeName = elem.nodeName, - display = defaultDisplayMap[ nodeName ]; - - if ( display ) { - return display; - } - - temp = doc.body.appendChild( doc.createElement( nodeName ) ); - display = jQuery.css( temp, "display" ); - - temp.parentNode.removeChild( temp ); - - if ( display === "none" ) { - display = "block"; - } - defaultDisplayMap[ nodeName ] = display; - - return display; -} - -function showHide( elements, show ) { - var display, elem, - values = [], - index = 0, - length = elements.length; - - // Determine new display value for elements that need to change - for ( ; index < length; index++ ) { - elem = elements[ index ]; - if ( !elem.style ) { - continue; - } - - display = elem.style.display; - if ( show ) { - - // Since we force visibility upon cascade-hidden elements, an immediate (and slow) - // check is required in this first loop unless we have a nonempty display value (either - // inline or about-to-be-restored) - if ( display === "none" ) { - values[ index ] = dataPriv.get( elem, "display" ) || null; - if ( !values[ index ] ) { - elem.style.display = ""; - } - } - if ( elem.style.display === "" && isHiddenWithinTree( elem ) ) { - values[ index ] = getDefaultDisplay( elem ); - } - } else { - if ( display !== "none" ) { - values[ index ] = "none"; - - // Remember what we're overwriting - dataPriv.set( elem, "display", display ); - } - } - } - - // Set the display of the elements in a second loop to avoid constant reflow - for ( index = 0; index < length; index++ ) { - if ( values[ index ] != null ) { - elements[ index ].style.display = values[ index ]; - } - } - - return elements; -} - -jQuery.fn.extend( { - show: function() { - return showHide( this, true ); - }, - hide: function() { - return showHide( this ); - }, - toggle: function( state ) { - if ( typeof state === "boolean" ) { - return state ? 
this.show() : this.hide();
-		}
-
-		return this.each( function() {
-			if ( isHiddenWithinTree( this ) ) {
-				jQuery( this ).show();
-			} else {
-				jQuery( this ).hide();
-			}
-		} );
-	}
-} );
-var rcheckableType = ( /^(?:checkbox|radio)$/i );
-
-var rtagName = ( /<([a-z][^\/\0>\x20\t\r\n\f]*)/i );
-
-var rscriptType = ( /^$|^module$|\/(?:java|ecma)script/i );
-
-
-
-( function() {
-	var fragment = document.createDocumentFragment(),
-		div = fragment.appendChild( document.createElement( "div" ) ),
-		input = document.createElement( "input" );
-
-	// Support: Android 4.0 - 4.3 only
-	// Check state lost if the name is set (#11217)
-	// Support: Windows Web Apps (WWA)
-	// `name` and `type` must use .setAttribute for WWA (#14901)
-	input.setAttribute( "type", "radio" );
-	input.setAttribute( "checked", "checked" );
-	input.setAttribute( "name", "t" );
-
-	div.appendChild( input );
-
-	// Support: Android <=4.1 only
-	// Older WebKit doesn't clone checked state correctly in fragments
-	support.checkClone = div.cloneNode( true ).cloneNode( true ).lastChild.checked;
-
-	// Support: IE <=11 only
-	// Make sure textarea (and checkbox) defaultValue is properly cloned
-	div.innerHTML = "<textarea>x</textarea>";
-	support.noCloneChecked = !!div.cloneNode( true ).lastChild.defaultValue;
-
-	// Support: IE <=9 only
-	// IE <=9 replaces <option> elements with their contents when inserted outside
-	// the select element.
-	div.innerHTML = "<option></option>";
-	support.option = !!div.lastChild;
-} )();
-
-
-// We have to close these tags to support XHTML (#13200)
-var wrapMap = {
-
-	// XHTML parsers do not magically insert elements in the
-	// same way that tag soup parsers do. So we cannot shorten
-	// this by omitting <tbody> or other required elements.
-	thead: [ 1, "<table>", "</table>" ],
" ], - col: [ 2, "", "
" ], - tr: [ 2, "", "
" ], - td: [ 3, "", "
" ], - - _default: [ 0, "", "" ] -}; - -wrapMap.tbody = wrapMap.tfoot = wrapMap.colgroup = wrapMap.caption = wrapMap.thead; -wrapMap.th = wrapMap.td; - -// Support: IE <=9 only -if ( !support.option ) { - wrapMap.optgroup = wrapMap.option = [ 1, "" ]; -} - - -function getAll( context, tag ) { - - // Support: IE <=9 - 11 only - // Use typeof to avoid zero-argument method invocation on host objects (#15151) - var ret; - - if ( typeof context.getElementsByTagName !== "undefined" ) { - ret = context.getElementsByTagName( tag || "*" ); - - } else if ( typeof context.querySelectorAll !== "undefined" ) { - ret = context.querySelectorAll( tag || "*" ); - - } else { - ret = []; - } - - if ( tag === undefined || tag && nodeName( context, tag ) ) { - return jQuery.merge( [ context ], ret ); - } - - return ret; -} - - -// Mark scripts as having already been evaluated -function setGlobalEval( elems, refElements ) { - var i = 0, - l = elems.length; - - for ( ; i < l; i++ ) { - dataPriv.set( - elems[ i ], - "globalEval", - !refElements || dataPriv.get( refElements[ i ], "globalEval" ) - ); - } -} - - -var rhtml = /<|&#?\w+;/; - -function buildFragment( elems, context, scripts, selection, ignored ) { - var elem, tmp, tag, wrap, attached, j, - fragment = context.createDocumentFragment(), - nodes = [], - i = 0, - l = elems.length; - - for ( ; i < l; i++ ) { - elem = elems[ i ]; - - if ( elem || elem === 0 ) { - - // Add nodes directly - if ( toType( elem ) === "object" ) { - - // Support: Android <=4.0 only, PhantomJS 1 only - // push.apply(_, arraylike) throws on ancient WebKit - jQuery.merge( nodes, elem.nodeType ? [ elem ] : elem ); - - // Convert non-html into a text node - } else if ( !rhtml.test( elem ) ) { - nodes.push( context.createTextNode( elem ) ); - - // Convert html into DOM nodes - } else { - tmp = tmp || fragment.appendChild( context.createElement( "div" ) ); - - // Deserialize a standard representation - tag = ( rtagName.exec( elem ) || [ "", "" ] )[ 1 ].toLowerCase(); - wrap = wrapMap[ tag ] || wrapMap._default; - tmp.innerHTML = wrap[ 1 ] + jQuery.htmlPrefilter( elem ) + wrap[ 2 ]; - - // Descend through wrappers to the right content - j = wrap[ 0 ]; - while ( j-- ) { - tmp = tmp.lastChild; - } - - // Support: Android <=4.0 only, PhantomJS 1 only - // push.apply(_, arraylike) throws on ancient WebKit - jQuery.merge( nodes, tmp.childNodes ); - - // Remember the top-level container - tmp = fragment.firstChild; - - // Ensure the created nodes are orphaned (#12392) - tmp.textContent = ""; - } - } - } - - // Remove wrapper from fragment - fragment.textContent = ""; - - i = 0; - while ( ( elem = nodes[ i++ ] ) ) { - - // Skip elements already in the context collection (trac-4087) - if ( selection && jQuery.inArray( elem, selection ) > -1 ) { - if ( ignored ) { - ignored.push( elem ); - } - continue; - } - - attached = isAttached( elem ); - - // Append to fragment - tmp = getAll( fragment.appendChild( elem ), "script" ); - - // Preserve script evaluation history - if ( attached ) { - setGlobalEval( tmp ); - } - - // Capture executables - if ( scripts ) { - j = 0; - while ( ( elem = tmp[ j++ ] ) ) { - if ( rscriptType.test( elem.type || "" ) ) { - scripts.push( elem ); - } - } - } - } - - return fragment; -} - - -var rtypenamespace = /^([^.]*)(?:\.(.+)|)/; - -function returnTrue() { - return true; -} - -function returnFalse() { - return false; -} - -// Support: IE <=9 - 11+ -// focus() and blur() are asynchronous, except when they are no-op. 
-// So expect focus to be synchronous when the element is already active, -// and blur to be synchronous when the element is not already active. -// (focus and blur are always synchronous in other supported browsers, -// this just defines when we can count on it). -function expectSync( elem, type ) { - return ( elem === safeActiveElement() ) === ( type === "focus" ); -} - -// Support: IE <=9 only -// Accessing document.activeElement can throw unexpectedly -// https://bugs.jquery.com/ticket/13393 -function safeActiveElement() { - try { - return document.activeElement; - } catch ( err ) { } -} - -function on( elem, types, selector, data, fn, one ) { - var origFn, type; - - // Types can be a map of types/handlers - if ( typeof types === "object" ) { - - // ( types-Object, selector, data ) - if ( typeof selector !== "string" ) { - - // ( types-Object, data ) - data = data || selector; - selector = undefined; - } - for ( type in types ) { - on( elem, type, selector, data, types[ type ], one ); - } - return elem; - } - - if ( data == null && fn == null ) { - - // ( types, fn ) - fn = selector; - data = selector = undefined; - } else if ( fn == null ) { - if ( typeof selector === "string" ) { - - // ( types, selector, fn ) - fn = data; - data = undefined; - } else { - - // ( types, data, fn ) - fn = data; - data = selector; - selector = undefined; - } - } - if ( fn === false ) { - fn = returnFalse; - } else if ( !fn ) { - return elem; - } - - if ( one === 1 ) { - origFn = fn; - fn = function( event ) { - - // Can use an empty set, since event contains the info - jQuery().off( event ); - return origFn.apply( this, arguments ); - }; - - // Use same guid so caller can remove using origFn - fn.guid = origFn.guid || ( origFn.guid = jQuery.guid++ ); - } - return elem.each( function() { - jQuery.event.add( this, types, fn, data, selector ); - } ); -} - -/* - * Helper functions for managing events -- not part of the public interface. - * Props to Dean Edwards' addEvent library for many of the ideas. - */ -jQuery.event = { - - global: {}, - - add: function( elem, types, handler, data, selector ) { - - var handleObjIn, eventHandle, tmp, - events, t, handleObj, - special, handlers, type, namespaces, origType, - elemData = dataPriv.get( elem ); - - // Only attach events to objects that accept data - if ( !acceptData( elem ) ) { - return; - } - - // Caller can pass in an object of custom data in lieu of the handler - if ( handler.handler ) { - handleObjIn = handler; - handler = handleObjIn.handler; - selector = handleObjIn.selector; - } - - // Ensure that invalid selectors throw exceptions at attach time - // Evaluate against documentElement in case elem is a non-element node (e.g., document) - if ( selector ) { - jQuery.find.matchesSelector( documentElement, selector ); - } - - // Make sure that the handler has a unique ID, used to find/remove it later - if ( !handler.guid ) { - handler.guid = jQuery.guid++; - } - - // Init the element's event structure and main handler, if this is the first - if ( !( events = elemData.events ) ) { - events = elemData.events = Object.create( null ); - } - if ( !( eventHandle = elemData.handle ) ) { - eventHandle = elemData.handle = function( e ) { - - // Discard the second event of a jQuery.event.trigger() and - // when an event is called after a page has unloaded - return typeof jQuery !== "undefined" && jQuery.event.triggered !== e.type ? 
- jQuery.event.dispatch.apply( elem, arguments ) : undefined; - }; - } - - // Handle multiple events separated by a space - types = ( types || "" ).match( rnothtmlwhite ) || [ "" ]; - t = types.length; - while ( t-- ) { - tmp = rtypenamespace.exec( types[ t ] ) || []; - type = origType = tmp[ 1 ]; - namespaces = ( tmp[ 2 ] || "" ).split( "." ).sort(); - - // There *must* be a type, no attaching namespace-only handlers - if ( !type ) { - continue; - } - - // If event changes its type, use the special event handlers for the changed type - special = jQuery.event.special[ type ] || {}; - - // If selector defined, determine special event api type, otherwise given type - type = ( selector ? special.delegateType : special.bindType ) || type; - - // Update special based on newly reset type - special = jQuery.event.special[ type ] || {}; - - // handleObj is passed to all event handlers - handleObj = jQuery.extend( { - type: type, - origType: origType, - data: data, - handler: handler, - guid: handler.guid, - selector: selector, - needsContext: selector && jQuery.expr.match.needsContext.test( selector ), - namespace: namespaces.join( "." ) - }, handleObjIn ); - - // Init the event handler queue if we're the first - if ( !( handlers = events[ type ] ) ) { - handlers = events[ type ] = []; - handlers.delegateCount = 0; - - // Only use addEventListener if the special events handler returns false - if ( !special.setup || - special.setup.call( elem, data, namespaces, eventHandle ) === false ) { - - if ( elem.addEventListener ) { - elem.addEventListener( type, eventHandle ); - } - } - } - - if ( special.add ) { - special.add.call( elem, handleObj ); - - if ( !handleObj.handler.guid ) { - handleObj.handler.guid = handler.guid; - } - } - - // Add to the element's handler list, delegates in front - if ( selector ) { - handlers.splice( handlers.delegateCount++, 0, handleObj ); - } else { - handlers.push( handleObj ); - } - - // Keep track of which events have ever been used, for event optimization - jQuery.event.global[ type ] = true; - } - - }, - - // Detach an event or set of events from an element - remove: function( elem, types, handler, selector, mappedTypes ) { - - var j, origCount, tmp, - events, t, handleObj, - special, handlers, type, namespaces, origType, - elemData = dataPriv.hasData( elem ) && dataPriv.get( elem ); - - if ( !elemData || !( events = elemData.events ) ) { - return; - } - - // Once for each type.namespace in types; type may be omitted - types = ( types || "" ).match( rnothtmlwhite ) || [ "" ]; - t = types.length; - while ( t-- ) { - tmp = rtypenamespace.exec( types[ t ] ) || []; - type = origType = tmp[ 1 ]; - namespaces = ( tmp[ 2 ] || "" ).split( "." ).sort(); - - // Unbind all events (on this namespace, if provided) for the element - if ( !type ) { - for ( type in events ) { - jQuery.event.remove( elem, type + types[ t ], handler, selector, true ); - } - continue; - } - - special = jQuery.event.special[ type ] || {}; - type = ( selector ? 
special.delegateType : special.bindType ) || type; - handlers = events[ type ] || []; - tmp = tmp[ 2 ] && - new RegExp( "(^|\\.)" + namespaces.join( "\\.(?:.*\\.|)" ) + "(\\.|$)" ); - - // Remove matching events - origCount = j = handlers.length; - while ( j-- ) { - handleObj = handlers[ j ]; - - if ( ( mappedTypes || origType === handleObj.origType ) && - ( !handler || handler.guid === handleObj.guid ) && - ( !tmp || tmp.test( handleObj.namespace ) ) && - ( !selector || selector === handleObj.selector || - selector === "**" && handleObj.selector ) ) { - handlers.splice( j, 1 ); - - if ( handleObj.selector ) { - handlers.delegateCount--; - } - if ( special.remove ) { - special.remove.call( elem, handleObj ); - } - } - } - - // Remove generic event handler if we removed something and no more handlers exist - // (avoids potential for endless recursion during removal of special event handlers) - if ( origCount && !handlers.length ) { - if ( !special.teardown || - special.teardown.call( elem, namespaces, elemData.handle ) === false ) { - - jQuery.removeEvent( elem, type, elemData.handle ); - } - - delete events[ type ]; - } - } - - // Remove data and the expando if it's no longer used - if ( jQuery.isEmptyObject( events ) ) { - dataPriv.remove( elem, "handle events" ); - } - }, - - dispatch: function( nativeEvent ) { - - var i, j, ret, matched, handleObj, handlerQueue, - args = new Array( arguments.length ), - - // Make a writable jQuery.Event from the native event object - event = jQuery.event.fix( nativeEvent ), - - handlers = ( - dataPriv.get( this, "events" ) || Object.create( null ) - )[ event.type ] || [], - special = jQuery.event.special[ event.type ] || {}; - - // Use the fix-ed jQuery.Event rather than the (read-only) native event - args[ 0 ] = event; - - for ( i = 1; i < arguments.length; i++ ) { - args[ i ] = arguments[ i ]; - } - - event.delegateTarget = this; - - // Call the preDispatch hook for the mapped type, and let it bail if desired - if ( special.preDispatch && special.preDispatch.call( this, event ) === false ) { - return; - } - - // Determine handlers - handlerQueue = jQuery.event.handlers.call( this, event, handlers ); - - // Run delegates first; they may want to stop propagation beneath us - i = 0; - while ( ( matched = handlerQueue[ i++ ] ) && !event.isPropagationStopped() ) { - event.currentTarget = matched.elem; - - j = 0; - while ( ( handleObj = matched.handlers[ j++ ] ) && - !event.isImmediatePropagationStopped() ) { - - // If the event is namespaced, then each handler is only invoked if it is - // specially universal or its namespaces are a superset of the event's. 
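-					// (Sketch, not library code: a handler bound as "click.a.b"
-					// runs for .trigger( "click" ) and .trigger( "click.a" ),
-					// but not for .trigger( "click.c" ), because "c" is not
-					// one of its namespaces.)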
- if ( !event.rnamespace || handleObj.namespace === false || - event.rnamespace.test( handleObj.namespace ) ) { - - event.handleObj = handleObj; - event.data = handleObj.data; - - ret = ( ( jQuery.event.special[ handleObj.origType ] || {} ).handle || - handleObj.handler ).apply( matched.elem, args ); - - if ( ret !== undefined ) { - if ( ( event.result = ret ) === false ) { - event.preventDefault(); - event.stopPropagation(); - } - } - } - } - } - - // Call the postDispatch hook for the mapped type - if ( special.postDispatch ) { - special.postDispatch.call( this, event ); - } - - return event.result; - }, - - handlers: function( event, handlers ) { - var i, handleObj, sel, matchedHandlers, matchedSelectors, - handlerQueue = [], - delegateCount = handlers.delegateCount, - cur = event.target; - - // Find delegate handlers - if ( delegateCount && - - // Support: IE <=9 - // Black-hole SVG instance trees (trac-13180) - cur.nodeType && - - // Support: Firefox <=42 - // Suppress spec-violating clicks indicating a non-primary pointer button (trac-3861) - // https://www.w3.org/TR/DOM-Level-3-Events/#event-type-click - // Support: IE 11 only - // ...but not arrow key "clicks" of radio inputs, which can have `button` -1 (gh-2343) - !( event.type === "click" && event.button >= 1 ) ) { - - for ( ; cur !== this; cur = cur.parentNode || this ) { - - // Don't check non-elements (#13208) - // Don't process clicks on disabled elements (#6911, #8165, #11382, #11764) - if ( cur.nodeType === 1 && !( event.type === "click" && cur.disabled === true ) ) { - matchedHandlers = []; - matchedSelectors = {}; - for ( i = 0; i < delegateCount; i++ ) { - handleObj = handlers[ i ]; - - // Don't conflict with Object.prototype properties (#13203) - sel = handleObj.selector + " "; - - if ( matchedSelectors[ sel ] === undefined ) { - matchedSelectors[ sel ] = handleObj.needsContext ? - jQuery( sel, this ).index( cur ) > -1 : - jQuery.find( sel, this, null, [ cur ] ).length; - } - if ( matchedSelectors[ sel ] ) { - matchedHandlers.push( handleObj ); - } - } - if ( matchedHandlers.length ) { - handlerQueue.push( { elem: cur, handlers: matchedHandlers } ); - } - } - } - } - - // Add the remaining (directly-bound) handlers - cur = this; - if ( delegateCount < handlers.length ) { - handlerQueue.push( { elem: cur, handlers: handlers.slice( delegateCount ) } ); - } - - return handlerQueue; - }, - - addProp: function( name, hook ) { - Object.defineProperty( jQuery.Event.prototype, name, { - enumerable: true, - configurable: true, - - get: isFunction( hook ) ? - function() { - if ( this.originalEvent ) { - return hook( this.originalEvent ); - } - } : - function() { - if ( this.originalEvent ) { - return this.originalEvent[ name ]; - } - }, - - set: function( value ) { - Object.defineProperty( this, name, { - enumerable: true, - configurable: true, - writable: true, - value: value - } ); - } - } ); - }, - - fix: function( originalEvent ) { - return originalEvent[ jQuery.expando ] ? - originalEvent : - new jQuery.Event( originalEvent ); - }, - - special: { - load: { - - // Prevent triggered image.load events from bubbling to window.load - noBubble: true - }, - click: { - - // Utilize native event to ensure correct state for checkable inputs - setup: function( data ) { - - // For mutual compressibility with _default, replace `this` access with a local var. - // `|| data` is dead code meant only to preserve the variable through minification. 
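-				// (Illustrative aside, not library code: this is why a
-				// hypothetical $( checkbox ).trigger( "click" ) is routed
-				// through the native el.click(), so handlers observe the
-				// already-toggled .checked state.)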
- var el = this || data; - - // Claim the first handler - if ( rcheckableType.test( el.type ) && - el.click && nodeName( el, "input" ) ) { - - // dataPriv.set( el, "click", ... ) - leverageNative( el, "click", returnTrue ); - } - - // Return false to allow normal processing in the caller - return false; - }, - trigger: function( data ) { - - // For mutual compressibility with _default, replace `this` access with a local var. - // `|| data` is dead code meant only to preserve the variable through minification. - var el = this || data; - - // Force setup before triggering a click - if ( rcheckableType.test( el.type ) && - el.click && nodeName( el, "input" ) ) { - - leverageNative( el, "click" ); - } - - // Return non-false to allow normal event-path propagation - return true; - }, - - // For cross-browser consistency, suppress native .click() on links - // Also prevent it if we're currently inside a leveraged native-event stack - _default: function( event ) { - var target = event.target; - return rcheckableType.test( target.type ) && - target.click && nodeName( target, "input" ) && - dataPriv.get( target, "click" ) || - nodeName( target, "a" ); - } - }, - - beforeunload: { - postDispatch: function( event ) { - - // Support: Firefox 20+ - // Firefox doesn't alert if the returnValue field is not set. - if ( event.result !== undefined && event.originalEvent ) { - event.originalEvent.returnValue = event.result; - } - } - } - } -}; - -// Ensure the presence of an event listener that handles manually-triggered -// synthetic events by interrupting progress until reinvoked in response to -// *native* events that it fires directly, ensuring that state changes have -// already occurred before other listeners are invoked. -function leverageNative( el, type, expectSync ) { - - // Missing expectSync indicates a trigger call, which must force setup through jQuery.event.add - if ( !expectSync ) { - if ( dataPriv.get( el, type ) === undefined ) { - jQuery.event.add( el, type, returnTrue ); - } - return; - } - - // Register the controller as a special universal handler for all event namespaces - dataPriv.set( el, type, false ); - jQuery.event.add( el, type, { - namespace: false, - handler: function( event ) { - var notAsync, result, - saved = dataPriv.get( this, type ); - - if ( ( event.isTrigger & 1 ) && this[ type ] ) { - - // Interrupt processing of the outer synthetic .trigger()ed event - // Saved data should be false in such cases, but might be a leftover capture object - // from an async native handler (gh-4350) - if ( !saved.length ) { - - // Store arguments for use when handling the inner native event - // There will always be at least one argument (an event object), so this array - // will not be confused with a leftover capture object. - saved = slice.call( arguments ); - dataPriv.set( this, type, saved ); - - // Trigger the native event and capture its result - // Support: IE <=9 - 11+ - // focus() and blur() are asynchronous - notAsync = expectSync( this, type ); - this[ type ](); - result = dataPriv.get( this, type ); - if ( saved !== result || notAsync ) { - dataPriv.set( this, type, false ); - } else { - result = {}; - } - if ( saved !== result ) { - - // Cancel the outer synthetic event - event.stopImmediatePropagation(); - event.preventDefault(); - - // Support: Chrome 86+ - // In Chrome, if an element having a focusout handler is blurred by - // clicking outside of it, it invokes the handler synchronously. 
If - // that handler calls `.remove()` on the element, the data is cleared, - // leaving `result` undefined. We need to guard against this. - return result && result.value; - } - - // If this is an inner synthetic event for an event with a bubbling surrogate - // (focus or blur), assume that the surrogate already propagated from triggering the - // native event and prevent that from happening again here. - // This technically gets the ordering wrong w.r.t. to `.trigger()` (in which the - // bubbling surrogate propagates *after* the non-bubbling base), but that seems - // less bad than duplication. - } else if ( ( jQuery.event.special[ type ] || {} ).delegateType ) { - event.stopPropagation(); - } - - // If this is a native event triggered above, everything is now in order - // Fire an inner synthetic event with the original arguments - } else if ( saved.length ) { - - // ...and capture the result - dataPriv.set( this, type, { - value: jQuery.event.trigger( - - // Support: IE <=9 - 11+ - // Extend with the prototype to reset the above stopImmediatePropagation() - jQuery.extend( saved[ 0 ], jQuery.Event.prototype ), - saved.slice( 1 ), - this - ) - } ); - - // Abort handling of the native event - event.stopImmediatePropagation(); - } - } - } ); -} - -jQuery.removeEvent = function( elem, type, handle ) { - - // This "if" is needed for plain objects - if ( elem.removeEventListener ) { - elem.removeEventListener( type, handle ); - } -}; - -jQuery.Event = function( src, props ) { - - // Allow instantiation without the 'new' keyword - if ( !( this instanceof jQuery.Event ) ) { - return new jQuery.Event( src, props ); - } - - // Event object - if ( src && src.type ) { - this.originalEvent = src; - this.type = src.type; - - // Events bubbling up the document may have been marked as prevented - // by a handler lower down the tree; reflect the correct value. - this.isDefaultPrevented = src.defaultPrevented || - src.defaultPrevented === undefined && - - // Support: Android <=2.3 only - src.returnValue === false ? - returnTrue : - returnFalse; - - // Create target properties - // Support: Safari <=6 - 7 only - // Target should not be a text node (#504, #13143) - this.target = ( src.target && src.target.nodeType === 3 ) ? 
- src.target.parentNode : - src.target; - - this.currentTarget = src.currentTarget; - this.relatedTarget = src.relatedTarget; - - // Event type - } else { - this.type = src; - } - - // Put explicitly provided properties onto the event object - if ( props ) { - jQuery.extend( this, props ); - } - - // Create a timestamp if incoming event doesn't have one - this.timeStamp = src && src.timeStamp || Date.now(); - - // Mark it as fixed - this[ jQuery.expando ] = true; -}; - -// jQuery.Event is based on DOM3 Events as specified by the ECMAScript Language Binding -// https://www.w3.org/TR/2003/WD-DOM-Level-3-Events-20030331/ecma-script-binding.html -jQuery.Event.prototype = { - constructor: jQuery.Event, - isDefaultPrevented: returnFalse, - isPropagationStopped: returnFalse, - isImmediatePropagationStopped: returnFalse, - isSimulated: false, - - preventDefault: function() { - var e = this.originalEvent; - - this.isDefaultPrevented = returnTrue; - - if ( e && !this.isSimulated ) { - e.preventDefault(); - } - }, - stopPropagation: function() { - var e = this.originalEvent; - - this.isPropagationStopped = returnTrue; - - if ( e && !this.isSimulated ) { - e.stopPropagation(); - } - }, - stopImmediatePropagation: function() { - var e = this.originalEvent; - - this.isImmediatePropagationStopped = returnTrue; - - if ( e && !this.isSimulated ) { - e.stopImmediatePropagation(); - } - - this.stopPropagation(); - } -}; - -// Includes all common event props including KeyEvent and MouseEvent specific props -jQuery.each( { - altKey: true, - bubbles: true, - cancelable: true, - changedTouches: true, - ctrlKey: true, - detail: true, - eventPhase: true, - metaKey: true, - pageX: true, - pageY: true, - shiftKey: true, - view: true, - "char": true, - code: true, - charCode: true, - key: true, - keyCode: true, - button: true, - buttons: true, - clientX: true, - clientY: true, - offsetX: true, - offsetY: true, - pointerId: true, - pointerType: true, - screenX: true, - screenY: true, - targetTouches: true, - toElement: true, - touches: true, - which: true -}, jQuery.event.addProp ); - -jQuery.each( { focus: "focusin", blur: "focusout" }, function( type, delegateType ) { - jQuery.event.special[ type ] = { - - // Utilize native event if possible so blur/focus sequence is correct - setup: function() { - - // Claim the first handler - // dataPriv.set( this, "focus", ... ) - // dataPriv.set( this, "blur", ... ) - leverageNative( this, type, expectSync ); - - // Return false to allow normal processing in the caller - return false; - }, - trigger: function() { - - // Force setup before trigger - leverageNative( this, type ); - - // Return non-false to allow normal event-path propagation - return true; - }, - - // Suppress native focus or blur as it's already being fired - // in leverageNative. - _default: function() { - return true; - }, - - delegateType: delegateType - }; -} ); - -// Create mouseenter/leave events using mouseover/out and event-time checks -// so that event delegation works in jQuery. -// Do the same for pointerenter/pointerleave and pointerover/pointerout -// -// Support: Safari 7 only -// Safari sends mouseenter too often; see: -// https://bugs.chromium.org/p/chromium/issues/detail?id=470258 -// for the description of the bug (it existed in older Chrome versions as well). 
-jQuery.each( {
-	mouseenter: "mouseover",
-	mouseleave: "mouseout",
-	pointerenter: "pointerover",
-	pointerleave: "pointerout"
-}, function( orig, fix ) {
-	jQuery.event.special[ orig ] = {
-		delegateType: fix,
-		bindType: fix,
-
-		handle: function( event ) {
-			var ret,
-				target = this,
-				related = event.relatedTarget,
-				handleObj = event.handleObj;
-
-			// For mouseenter/leave call the handler if related is outside the target.
-			// NB: No relatedTarget if the mouse left/entered the browser window
-			if ( !related || ( related !== target && !jQuery.contains( target, related ) ) ) {
-				event.type = handleObj.origType;
-				ret = handleObj.handler.apply( this, arguments );
-				event.type = fix;
-			}
-			return ret;
-		}
-	};
-} );
-
-jQuery.fn.extend( {
-
-	on: function( types, selector, data, fn ) {
-		return on( this, types, selector, data, fn );
-	},
-	one: function( types, selector, data, fn ) {
-		return on( this, types, selector, data, fn, 1 );
-	},
-	off: function( types, selector, fn ) {
-		var handleObj, type;
-		if ( types && types.preventDefault && types.handleObj ) {
-
-			// ( event ) dispatched jQuery.Event
-			handleObj = types.handleObj;
-			jQuery( types.delegateTarget ).off(
-				handleObj.namespace ?
-					handleObj.origType + "." + handleObj.namespace :
-					handleObj.origType,
-				handleObj.selector,
-				handleObj.handler
-			);
-			return this;
-		}
-		if ( typeof types === "object" ) {
-
-			// ( types-object [, selector] )
-			for ( type in types ) {
-				this.off( type, selector, types[ type ] );
-			}
-			return this;
-		}
-		if ( selector === false || typeof selector === "function" ) {
-
-			// ( types [, fn] )
-			fn = selector;
-			selector = undefined;
-		}
-		if ( fn === false ) {
-			fn = returnFalse;
-		}
-		return this.each( function() {
-			jQuery.event.remove( this, types, fn, selector );
-		} );
-	}
-} );
-
-
-var
-
-	// Support: IE <=10 - 11, Edge 12 - 13 only
-	// In IE/Edge using regex groups here causes severe slowdowns.
-	// See https://connect.microsoft.com/IE/feedback/details/1736512/
-	rnoInnerhtml = /<script|<style|<link/i,
-
-	// checked="checked" or checked
-	rchecked = /checked\s*(?:[^=]|=\s*.checked.)/i,
-
-	rcleanScript = /^\s*<!(?:\[CDATA\[|--)|(?:\]\]|--)>\s*$/g;
-
-// Prefer a tbody over its parent table for containing new rows
-function manipulationTarget( elem, content ) {
-	if ( nodeName( elem, "table" ) &&
-		nodeName( content.nodeType !== 11 ? content : content.firstChild, "tr" ) ) {
-
-		return jQuery( elem ).children( "tbody" )[ 0 ] || elem;
-	}
-
-	return elem;
-}
-
-// Replace/restore the type attribute of script elements for safe DOM manipulation
-function disableScript( elem ) {
-	elem.type = ( elem.getAttribute( "type" ) !== null ) + "/" + elem.type;
-	return elem;
-}
-function restoreScript( elem ) {
-	if ( ( elem.type || "" ).slice( 0, 5 ) === "true/" ) {
-		elem.type = elem.type.slice( 5 );
-	} else {
-		elem.removeAttribute( "type" );
-	}
-
-	return elem;
-}
-
-function cloneCopyEvent( src, dest ) {
-	var i, l, type, pdataOld, udataOld, udataCur, events;
-
-	if ( dest.nodeType !== 1 ) {
-		return;
-	}
-
-	// 1. Copy private data: events, handlers, etc.
-	if ( dataPriv.hasData( src ) ) {
-		pdataOld = dataPriv.get( src );
-		events = pdataOld.events;
-
-		if ( events ) {
-			dataPriv.remove( dest, "handle events" );
-
-			for ( type in events ) {
-				for ( i = 0, l = events[ type ].length; i < l; i++ ) {
-					jQuery.event.add( dest, type, events[ type ][ i ] );
-				}
-			}
-		}
-	}
-
-	// 2. 
Copy user data - if ( dataUser.hasData( src ) ) { - udataOld = dataUser.access( src ); - udataCur = jQuery.extend( {}, udataOld ); - - dataUser.set( dest, udataCur ); - } -} - -// Fix IE bugs, see support tests -function fixInput( src, dest ) { - var nodeName = dest.nodeName.toLowerCase(); - - // Fails to persist the checked state of a cloned checkbox or radio button. - if ( nodeName === "input" && rcheckableType.test( src.type ) ) { - dest.checked = src.checked; - - // Fails to return the selected option to the default selected state when cloning options - } else if ( nodeName === "input" || nodeName === "textarea" ) { - dest.defaultValue = src.defaultValue; - } -} - -function domManip( collection, args, callback, ignored ) { - - // Flatten any nested arrays - args = flat( args ); - - var fragment, first, scripts, hasScripts, node, doc, - i = 0, - l = collection.length, - iNoClone = l - 1, - value = args[ 0 ], - valueIsFunction = isFunction( value ); - - // We can't cloneNode fragments that contain checked, in WebKit - if ( valueIsFunction || - ( l > 1 && typeof value === "string" && - !support.checkClone && rchecked.test( value ) ) ) { - return collection.each( function( index ) { - var self = collection.eq( index ); - if ( valueIsFunction ) { - args[ 0 ] = value.call( this, index, self.html() ); - } - domManip( self, args, callback, ignored ); - } ); - } - - if ( l ) { - fragment = buildFragment( args, collection[ 0 ].ownerDocument, false, collection, ignored ); - first = fragment.firstChild; - - if ( fragment.childNodes.length === 1 ) { - fragment = first; - } - - // Require either new content or an interest in ignored elements to invoke the callback - if ( first || ignored ) { - scripts = jQuery.map( getAll( fragment, "script" ), disableScript ); - hasScripts = scripts.length; - - // Use the original fragment for the last item - // instead of the first because it can end up - // being emptied incorrectly in certain situations (#8070). - for ( ; i < l; i++ ) { - node = fragment; - - if ( i !== iNoClone ) { - node = jQuery.clone( node, true, true ); - - // Keep references to cloned scripts for later restoration - if ( hasScripts ) { - - // Support: Android <=4.0 only, PhantomJS 1 only - // push.apply(_, arraylike) throws on ancient WebKit - jQuery.merge( scripts, getAll( node, "script" ) ); - } - } - - callback.call( collection[ i ], node, i ); - } - - if ( hasScripts ) { - doc = scripts[ scripts.length - 1 ].ownerDocument; - - // Reenable scripts - jQuery.map( scripts, restoreScript ); - - // Evaluate executable scripts on first document insertion - for ( i = 0; i < hasScripts; i++ ) { - node = scripts[ i ]; - if ( rscriptType.test( node.type || "" ) && - !dataPriv.access( node, "globalEval" ) && - jQuery.contains( doc, node ) ) { - - if ( node.src && ( node.type || "" ).toLowerCase() !== "module" ) { - - // Optional AJAX dependency, but won't run scripts if not present - if ( jQuery._evalUrl && !node.noModule ) { - jQuery._evalUrl( node.src, { - nonce: node.nonce || node.getAttribute( "nonce" ) - }, doc ); - } - } else { - DOMEval( node.textContent.replace( rcleanScript, "" ), node, doc ); - } - } - } - } - } - } - - return collection; -} - -function remove( elem, selector, keepData ) { - var node, - nodes = selector ? 
jQuery.filter( selector, elem ) : elem, - i = 0; - - for ( ; ( node = nodes[ i ] ) != null; i++ ) { - if ( !keepData && node.nodeType === 1 ) { - jQuery.cleanData( getAll( node ) ); - } - - if ( node.parentNode ) { - if ( keepData && isAttached( node ) ) { - setGlobalEval( getAll( node, "script" ) ); - } - node.parentNode.removeChild( node ); - } - } - - return elem; -} - -jQuery.extend( { - htmlPrefilter: function( html ) { - return html; - }, - - clone: function( elem, dataAndEvents, deepDataAndEvents ) { - var i, l, srcElements, destElements, - clone = elem.cloneNode( true ), - inPage = isAttached( elem ); - - // Fix IE cloning issues - if ( !support.noCloneChecked && ( elem.nodeType === 1 || elem.nodeType === 11 ) && - !jQuery.isXMLDoc( elem ) ) { - - // We eschew Sizzle here for performance reasons: https://jsperf.com/getall-vs-sizzle/2 - destElements = getAll( clone ); - srcElements = getAll( elem ); - - for ( i = 0, l = srcElements.length; i < l; i++ ) { - fixInput( srcElements[ i ], destElements[ i ] ); - } - } - - // Copy the events from the original to the clone - if ( dataAndEvents ) { - if ( deepDataAndEvents ) { - srcElements = srcElements || getAll( elem ); - destElements = destElements || getAll( clone ); - - for ( i = 0, l = srcElements.length; i < l; i++ ) { - cloneCopyEvent( srcElements[ i ], destElements[ i ] ); - } - } else { - cloneCopyEvent( elem, clone ); - } - } - - // Preserve script evaluation history - destElements = getAll( clone, "script" ); - if ( destElements.length > 0 ) { - setGlobalEval( destElements, !inPage && getAll( elem, "script" ) ); - } - - // Return the cloned set - return clone; - }, - - cleanData: function( elems ) { - var data, elem, type, - special = jQuery.event.special, - i = 0; - - for ( ; ( elem = elems[ i ] ) !== undefined; i++ ) { - if ( acceptData( elem ) ) { - if ( ( data = elem[ dataPriv.expando ] ) ) { - if ( data.events ) { - for ( type in data.events ) { - if ( special[ type ] ) { - jQuery.event.remove( elem, type ); - - // This is a shortcut to avoid jQuery.event.remove's overhead - } else { - jQuery.removeEvent( elem, type, data.handle ); - } - } - } - - // Support: Chrome <=35 - 45+ - // Assign undefined instead of using delete, see Data#remove - elem[ dataPriv.expando ] = undefined; - } - if ( elem[ dataUser.expando ] ) { - - // Support: Chrome <=35 - 45+ - // Assign undefined instead of using delete, see Data#remove - elem[ dataUser.expando ] = undefined; - } - } - } - } -} ); - -jQuery.fn.extend( { - detach: function( selector ) { - return remove( this, selector, true ); - }, - - remove: function( selector ) { - return remove( this, selector ); - }, - - text: function( value ) { - return access( this, function( value ) { - return value === undefined ? 
- jQuery.text( this ) : - this.empty().each( function() { - if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { - this.textContent = value; - } - } ); - }, null, value, arguments.length ); - }, - - append: function() { - return domManip( this, arguments, function( elem ) { - if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { - var target = manipulationTarget( this, elem ); - target.appendChild( elem ); - } - } ); - }, - - prepend: function() { - return domManip( this, arguments, function( elem ) { - if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { - var target = manipulationTarget( this, elem ); - target.insertBefore( elem, target.firstChild ); - } - } ); - }, - - before: function() { - return domManip( this, arguments, function( elem ) { - if ( this.parentNode ) { - this.parentNode.insertBefore( elem, this ); - } - } ); - }, - - after: function() { - return domManip( this, arguments, function( elem ) { - if ( this.parentNode ) { - this.parentNode.insertBefore( elem, this.nextSibling ); - } - } ); - }, - - empty: function() { - var elem, - i = 0; - - for ( ; ( elem = this[ i ] ) != null; i++ ) { - if ( elem.nodeType === 1 ) { - - // Prevent memory leaks - jQuery.cleanData( getAll( elem, false ) ); - - // Remove any remaining nodes - elem.textContent = ""; - } - } - - return this; - }, - - clone: function( dataAndEvents, deepDataAndEvents ) { - dataAndEvents = dataAndEvents == null ? false : dataAndEvents; - deepDataAndEvents = deepDataAndEvents == null ? dataAndEvents : deepDataAndEvents; - - return this.map( function() { - return jQuery.clone( this, dataAndEvents, deepDataAndEvents ); - } ); - }, - - html: function( value ) { - return access( this, function( value ) { - var elem = this[ 0 ] || {}, - i = 0, - l = this.length; - - if ( value === undefined && elem.nodeType === 1 ) { - return elem.innerHTML; - } - - // See if we can take a shortcut and just use innerHTML - if ( typeof value === "string" && !rnoInnerhtml.test( value ) && - !wrapMap[ ( rtagName.exec( value ) || [ "", "" ] )[ 1 ].toLowerCase() ] ) { - - value = jQuery.htmlPrefilter( value ); - - try { - for ( ; i < l; i++ ) { - elem = this[ i ] || {}; - - // Remove element nodes and prevent memory leaks - if ( elem.nodeType === 1 ) { - jQuery.cleanData( getAll( elem, false ) ); - elem.innerHTML = value; - } - } - - elem = 0; - - // If using innerHTML throws an exception, use the fallback method - } catch ( e ) {} - } - - if ( elem ) { - this.empty().append( value ); - } - }, null, value, arguments.length ); - }, - - replaceWith: function() { - var ignored = []; - - // Make the changes, replacing each non-ignored context element with the new content - return domManip( this, arguments, function( elem ) { - var parent = this.parentNode; - - if ( jQuery.inArray( this, ignored ) < 0 ) { - jQuery.cleanData( getAll( this ) ); - if ( parent ) { - parent.replaceChild( elem, this ); - } - } - - // Force callback invocation - }, ignored ); - } -} ); - -jQuery.each( { - appendTo: "append", - prependTo: "prepend", - insertBefore: "before", - insertAfter: "after", - replaceAll: "replaceWith" -}, function( name, original ) { - jQuery.fn[ name ] = function( selector ) { - var elems, - ret = [], - insert = jQuery( selector ), - last = insert.length - 1, - i = 0; - - for ( ; i <= last; i++ ) { - elems = i === last ? 
this : this.clone( true ); - jQuery( insert[ i ] )[ original ]( elems ); - - // Support: Android <=4.0 only, PhantomJS 1 only - // .get() because push.apply(_, arraylike) throws on ancient WebKit - push.apply( ret, elems.get() ); - } - - return this.pushStack( ret ); - }; -} ); -var rnumnonpx = new RegExp( "^(" + pnum + ")(?!px)[a-z%]+$", "i" ); - -var getStyles = function( elem ) { - - // Support: IE <=11 only, Firefox <=30 (#15098, #14150) - // IE throws on elements created in popups - // FF meanwhile throws on frame elements through "defaultView.getComputedStyle" - var view = elem.ownerDocument.defaultView; - - if ( !view || !view.opener ) { - view = window; - } - - return view.getComputedStyle( elem ); - }; - -var swap = function( elem, options, callback ) { - var ret, name, - old = {}; - - // Remember the old values, and insert the new ones - for ( name in options ) { - old[ name ] = elem.style[ name ]; - elem.style[ name ] = options[ name ]; - } - - ret = callback.call( elem ); - - // Revert the old values - for ( name in options ) { - elem.style[ name ] = old[ name ]; - } - - return ret; -}; - - -var rboxStyle = new RegExp( cssExpand.join( "|" ), "i" ); - - - -( function() { - - // Executing both pixelPosition & boxSizingReliable tests require only one layout - // so they're executed at the same time to save the second computation. - function computeStyleTests() { - - // This is a singleton, we need to execute it only once - if ( !div ) { - return; - } - - container.style.cssText = "position:absolute;left:-11111px;width:60px;" + - "margin-top:1px;padding:0;border:0"; - div.style.cssText = - "position:relative;display:block;box-sizing:border-box;overflow:scroll;" + - "margin:auto;border:1px;padding:1px;" + - "width:60%;top:1%"; - documentElement.appendChild( container ).appendChild( div ); - - var divStyle = window.getComputedStyle( div ); - pixelPositionVal = divStyle.top !== "1%"; - - // Support: Android 4.0 - 4.3 only, Firefox <=3 - 44 - reliableMarginLeftVal = roundPixelMeasures( divStyle.marginLeft ) === 12; - - // Support: Android 4.0 - 4.3 only, Safari <=9.1 - 10.1, iOS <=7.0 - 9.3 - // Some styles come back with percentage values, even though they shouldn't - div.style.right = "60%"; - pixelBoxStylesVal = roundPixelMeasures( divStyle.right ) === 36; - - // Support: IE 9 - 11 only - // Detect misreporting of content dimensions for box-sizing:border-box elements - boxSizingReliableVal = roundPixelMeasures( divStyle.width ) === 36; - - // Support: IE 9 only - // Detect overflow:scroll screwiness (gh-3699) - // Support: Chrome <=64 - // Don't get tricked when zoom affects offsetWidth (gh-4029) - div.style.position = "absolute"; - scrollboxSizeVal = roundPixelMeasures( div.offsetWidth / 3 ) === 12; - - documentElement.removeChild( container ); - - // Nullify the div so it wouldn't be stored in the memory and - // it will also be a sign that checks already performed - div = null; - } - - function roundPixelMeasures( measure ) { - return Math.round( parseFloat( measure ) ); - } - - var pixelPositionVal, boxSizingReliableVal, scrollboxSizeVal, pixelBoxStylesVal, - reliableTrDimensionsVal, reliableMarginLeftVal, - container = document.createElement( "div" ), - div = document.createElement( "div" ); - - // Finish early in limited (non-browser) environments - if ( !div.style ) { - return; - } - - // Support: IE <=9 - 11 only - // Style of cloned element affects source element cloned (#8908) - div.style.backgroundClip = "content-box"; - div.cloneNode( true ).style.backgroundClip = ""; - 
support.clearCloneStyle = div.style.backgroundClip === "content-box"; - - jQuery.extend( support, { - boxSizingReliable: function() { - computeStyleTests(); - return boxSizingReliableVal; - }, - pixelBoxStyles: function() { - computeStyleTests(); - return pixelBoxStylesVal; - }, - pixelPosition: function() { - computeStyleTests(); - return pixelPositionVal; - }, - reliableMarginLeft: function() { - computeStyleTests(); - return reliableMarginLeftVal; - }, - scrollboxSize: function() { - computeStyleTests(); - return scrollboxSizeVal; - }, - - // Support: IE 9 - 11+, Edge 15 - 18+ - // IE/Edge misreport `getComputedStyle` of table rows with width/height - // set in CSS while `offset*` properties report correct values. - // Behavior in IE 9 is more subtle than in newer versions & it passes - // some versions of this test; make sure not to make it pass there! - // - // Support: Firefox 70+ - // Only Firefox includes border widths - // in computed dimensions. (gh-4529) - reliableTrDimensions: function() { - var table, tr, trChild, trStyle; - if ( reliableTrDimensionsVal == null ) { - table = document.createElement( "table" ); - tr = document.createElement( "tr" ); - trChild = document.createElement( "div" ); - - table.style.cssText = "position:absolute;left:-11111px;border-collapse:separate"; - tr.style.cssText = "border:1px solid"; - - // Support: Chrome 86+ - // Height set through cssText does not get applied. - // Computed height then comes back as 0. - tr.style.height = "1px"; - trChild.style.height = "9px"; - - // Support: Android 8 Chrome 86+ - // In our bodyBackground.html iframe, - // display for all div elements is set to "inline", - // which causes a problem only in Android 8 Chrome 86. - // Ensuring the div is display: block - // gets around this issue. - trChild.style.display = "block"; - - documentElement - .appendChild( table ) - .appendChild( tr ) - .appendChild( trChild ); - - trStyle = window.getComputedStyle( tr ); - reliableTrDimensionsVal = ( parseInt( trStyle.height, 10 ) + - parseInt( trStyle.borderTopWidth, 10 ) + - parseInt( trStyle.borderBottomWidth, 10 ) ) === tr.offsetHeight; - - documentElement.removeChild( table ); - } - return reliableTrDimensionsVal; - } - } ); -} )(); - - -function curCSS( elem, name, computed ) { - var width, minWidth, maxWidth, ret, - - // Support: Firefox 51+ - // Retrieving style before computed somehow - // fixes an issue with getting wrong values - // on detached elements - style = elem.style; - - computed = computed || getStyles( elem ); - - // getPropertyValue is needed for: - // .css('filter') (IE 9 only, #12537) - // .css('--customProperty) (#3144) - if ( computed ) { - ret = computed.getPropertyValue( name ) || computed[ name ]; - - if ( ret === "" && !isAttached( elem ) ) { - ret = jQuery.style( elem, name ); - } - - // A tribute to the "awesome hack by Dean Edwards" - // Android Browser returns percentage for some values, - // but width seems to be reliably pixels. 
- // This is against the CSSOM draft spec: - // https://drafts.csswg.org/cssom/#resolved-values - if ( !support.pixelBoxStyles() && rnumnonpx.test( ret ) && rboxStyle.test( name ) ) { - - // Remember the original values - width = style.width; - minWidth = style.minWidth; - maxWidth = style.maxWidth; - - // Put in the new values to get a computed value out - style.minWidth = style.maxWidth = style.width = ret; - ret = computed.width; - - // Revert the changed values - style.width = width; - style.minWidth = minWidth; - style.maxWidth = maxWidth; - } - } - - return ret !== undefined ? - - // Support: IE <=9 - 11 only - // IE returns zIndex value as an integer. - ret + "" : - ret; -} - - -function addGetHookIf( conditionFn, hookFn ) { - - // Define the hook, we'll check on the first run if it's really needed. - return { - get: function() { - if ( conditionFn() ) { - - // Hook not needed (or it's not possible to use it due - // to missing dependency), remove it. - delete this.get; - return; - } - - // Hook needed; redefine it so that the support test is not executed again. - return ( this.get = hookFn ).apply( this, arguments ); - } - }; -} - - -var cssPrefixes = [ "Webkit", "Moz", "ms" ], - emptyStyle = document.createElement( "div" ).style, - vendorProps = {}; - -// Return a vendor-prefixed property or undefined -function vendorPropName( name ) { - - // Check for vendor prefixed names - var capName = name[ 0 ].toUpperCase() + name.slice( 1 ), - i = cssPrefixes.length; - - while ( i-- ) { - name = cssPrefixes[ i ] + capName; - if ( name in emptyStyle ) { - return name; - } - } -} - -// Return a potentially-mapped jQuery.cssProps or vendor prefixed property -function finalPropName( name ) { - var final = jQuery.cssProps[ name ] || vendorProps[ name ]; - - if ( final ) { - return final; - } - if ( name in emptyStyle ) { - return name; - } - return vendorProps[ name ] = vendorPropName( name ) || name; -} - - -var - - // Swappable if display is none or starts with table - // except "table", "table-cell", or "table-caption" - // See here for display values: https://developer.mozilla.org/en-US/docs/CSS/display - rdisplayswap = /^(none|table(?!-c[ea]).+)/, - rcustomProp = /^--/, - cssShow = { position: "absolute", visibility: "hidden", display: "block" }, - cssNormalTransform = { - letterSpacing: "0", - fontWeight: "400" - }; - -function setPositiveNumber( _elem, value, subtract ) { - - // Any relative (+/-) values have already been - // normalized at this point - var matches = rcssNum.exec( value ); - return matches ? - - // Guard against undefined "subtract", e.g., when used as in cssHooks - Math.max( 0, matches[ 2 ] - ( subtract || 0 ) ) + ( matches[ 3 ] || "px" ) : - value; -} - -function boxModelAdjustment( elem, dimension, box, isBorderBox, styles, computedVal ) { - var i = dimension === "width" ? 1 : 0, - extra = 0, - delta = 0; - - // Adjustment may not be necessary - if ( box === ( isBorderBox ? 
"border" : "content" ) ) { - return 0; - } - - for ( ; i < 4; i += 2 ) { - - // Both box models exclude margin - if ( box === "margin" ) { - delta += jQuery.css( elem, box + cssExpand[ i ], true, styles ); - } - - // If we get here with a content-box, we're seeking "padding" or "border" or "margin" - if ( !isBorderBox ) { - - // Add padding - delta += jQuery.css( elem, "padding" + cssExpand[ i ], true, styles ); - - // For "border" or "margin", add border - if ( box !== "padding" ) { - delta += jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); - - // But still keep track of it otherwise - } else { - extra += jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); - } - - // If we get here with a border-box (content + padding + border), we're seeking "content" or - // "padding" or "margin" - } else { - - // For "content", subtract padding - if ( box === "content" ) { - delta -= jQuery.css( elem, "padding" + cssExpand[ i ], true, styles ); - } - - // For "content" or "padding", subtract border - if ( box !== "margin" ) { - delta -= jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); - } - } - } - - // Account for positive content-box scroll gutter when requested by providing computedVal - if ( !isBorderBox && computedVal >= 0 ) { - - // offsetWidth/offsetHeight is a rounded sum of content, padding, scroll gutter, and border - // Assuming integer scroll gutter, subtract the rest and round down - delta += Math.max( 0, Math.ceil( - elem[ "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ) ] - - computedVal - - delta - - extra - - 0.5 - - // If offsetWidth/offsetHeight is unknown, then we can't determine content-box scroll gutter - // Use an explicit zero to avoid NaN (gh-3964) - ) ) || 0; - } - - return delta; -} - -function getWidthOrHeight( elem, dimension, extra ) { - - // Start with computed style - var styles = getStyles( elem ), - - // To avoid forcing a reflow, only fetch boxSizing if we need it (gh-4322). - // Fake content-box until we know it's needed to know the true value. - boxSizingNeeded = !support.boxSizingReliable() || extra, - isBorderBox = boxSizingNeeded && - jQuery.css( elem, "boxSizing", false, styles ) === "border-box", - valueIsBorderBox = isBorderBox, - - val = curCSS( elem, dimension, styles ), - offsetProp = "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ); - - // Support: Firefox <=54 - // Return a confounding non-pixel value or feign ignorance, as appropriate. - if ( rnumnonpx.test( val ) ) { - if ( !extra ) { - return val; - } - val = "auto"; - } - - - // Support: IE 9 - 11 only - // Use offsetWidth/offsetHeight for when box sizing is unreliable. - // In those cases, the computed value can be trusted to be border-box. - if ( ( !support.boxSizingReliable() && isBorderBox || - - // Support: IE 10 - 11+, Edge 15 - 18+ - // IE/Edge misreport `getComputedStyle` of table rows with width/height - // set in CSS while `offset*` properties report correct values. - // Interestingly, in some cases IE 9 doesn't suffer from this issue. 
- !support.reliableTrDimensions() && nodeName( elem, "tr" ) || - - // Fall back to offsetWidth/offsetHeight when value is "auto" - // This happens for inline elements with no explicit setting (gh-3571) - val === "auto" || - - // Support: Android <=4.1 - 4.3 only - // Also use offsetWidth/offsetHeight for misreported inline dimensions (gh-3602) - !parseFloat( val ) && jQuery.css( elem, "display", false, styles ) === "inline" ) && - - // Make sure the element is visible & connected - elem.getClientRects().length ) { - - isBorderBox = jQuery.css( elem, "boxSizing", false, styles ) === "border-box"; - - // Where available, offsetWidth/offsetHeight approximate border box dimensions. - // Where not available (e.g., SVG), assume unreliable box-sizing and interpret the - // retrieved value as a content box dimension. - valueIsBorderBox = offsetProp in elem; - if ( valueIsBorderBox ) { - val = elem[ offsetProp ]; - } - } - - // Normalize "" and auto - val = parseFloat( val ) || 0; - - // Adjust for the element's box model - return ( val + - boxModelAdjustment( - elem, - dimension, - extra || ( isBorderBox ? "border" : "content" ), - valueIsBorderBox, - styles, - - // Provide the current computed size to request scroll gutter calculation (gh-3589) - val - ) - ) + "px"; -} - -jQuery.extend( { - - // Add in style property hooks for overriding the default - // behavior of getting and setting a style property - cssHooks: { - opacity: { - get: function( elem, computed ) { - if ( computed ) { - - // We should always get a number back from opacity - var ret = curCSS( elem, "opacity" ); - return ret === "" ? "1" : ret; - } - } - } - }, - - // Don't automatically add "px" to these possibly-unitless properties - cssNumber: { - "animationIterationCount": true, - "columnCount": true, - "fillOpacity": true, - "flexGrow": true, - "flexShrink": true, - "fontWeight": true, - "gridArea": true, - "gridColumn": true, - "gridColumnEnd": true, - "gridColumnStart": true, - "gridRow": true, - "gridRowEnd": true, - "gridRowStart": true, - "lineHeight": true, - "opacity": true, - "order": true, - "orphans": true, - "widows": true, - "zIndex": true, - "zoom": true - }, - - // Add in properties whose names you wish to fix before - // setting or getting the value - cssProps: {}, - - // Get and set the style property on a DOM Node - style: function( elem, name, value, extra ) { - - // Don't set styles on text and comment nodes - if ( !elem || elem.nodeType === 3 || elem.nodeType === 8 || !elem.style ) { - return; - } - - // Make sure that we're working with the right name - var ret, type, hooks, - origName = camelCase( name ), - isCustomProp = rcustomProp.test( name ), - style = elem.style; - - // Make sure that we're working with the right name. We don't - // want to query the value if it is a CSS custom property - // since they are user-defined. 
- if ( !isCustomProp ) { - name = finalPropName( origName ); - } - - // Gets hook for the prefixed version, then unprefixed version - hooks = jQuery.cssHooks[ name ] || jQuery.cssHooks[ origName ]; - - // Check if we're setting a value - if ( value !== undefined ) { - type = typeof value; - - // Convert "+=" or "-=" to relative numbers (#7345) - if ( type === "string" && ( ret = rcssNum.exec( value ) ) && ret[ 1 ] ) { - value = adjustCSS( elem, name, ret ); - - // Fixes bug #9237 - type = "number"; - } - - // Make sure that null and NaN values aren't set (#7116) - if ( value == null || value !== value ) { - return; - } - - // If a number was passed in, add the unit (except for certain CSS properties) - // The isCustomProp check can be removed in jQuery 4.0 when we only auto-append - // "px" to a few hardcoded values. - if ( type === "number" && !isCustomProp ) { - value += ret && ret[ 3 ] || ( jQuery.cssNumber[ origName ] ? "" : "px" ); - } - - // background-* props affect original clone's values - if ( !support.clearCloneStyle && value === "" && name.indexOf( "background" ) === 0 ) { - style[ name ] = "inherit"; - } - - // If a hook was provided, use that value, otherwise just set the specified value - if ( !hooks || !( "set" in hooks ) || - ( value = hooks.set( elem, value, extra ) ) !== undefined ) { - - if ( isCustomProp ) { - style.setProperty( name, value ); - } else { - style[ name ] = value; - } - } - - } else { - - // If a hook was provided get the non-computed value from there - if ( hooks && "get" in hooks && - ( ret = hooks.get( elem, false, extra ) ) !== undefined ) { - - return ret; - } - - // Otherwise just get the value from the style object - return style[ name ]; - } - }, - - css: function( elem, name, extra, styles ) { - var val, num, hooks, - origName = camelCase( name ), - isCustomProp = rcustomProp.test( name ); - - // Make sure that we're working with the right name. We don't - // want to modify the value if it is a CSS custom property - // since they are user-defined. - if ( !isCustomProp ) { - name = finalPropName( origName ); - } - - // Try prefixed name followed by the unprefixed name - hooks = jQuery.cssHooks[ name ] || jQuery.cssHooks[ origName ]; - - // If a hook was provided get the computed value from there - if ( hooks && "get" in hooks ) { - val = hooks.get( elem, true, extra ); - } - - // Otherwise, if a way to get the computed value exists, use that - if ( val === undefined ) { - val = curCSS( elem, name, styles ); - } - - // Convert "normal" to computed value - if ( val === "normal" && name in cssNormalTransform ) { - val = cssNormalTransform[ name ]; - } - - // Make numeric if forced or a qualifier was provided and val looks numeric - if ( extra === "" || extra ) { - num = parseFloat( val ); - return extra === true || isFinite( num ) ? num || 0 : val; - } - - return val; - } -} ); - -jQuery.each( [ "height", "width" ], function( _i, dimension ) { - jQuery.cssHooks[ dimension ] = { - get: function( elem, computed, extra ) { - if ( computed ) { - - // Certain elements can have dimension info if we invisibly show them - // but it must have a current display style that would benefit - return rdisplayswap.test( jQuery.css( elem, "display" ) ) && - - // Support: Safari 8+ - // Table columns in Safari have non-zero offsetWidth & zero - // getBoundingClientRect().width unless display is changed. - // Support: IE <=11 only - // Running getBoundingClientRect on a disconnected node - // in IE throws an error. 
- ( !elem.getClientRects().length || !elem.getBoundingClientRect().width ) ? - swap( elem, cssShow, function() { - return getWidthOrHeight( elem, dimension, extra ); - } ) : - getWidthOrHeight( elem, dimension, extra ); - } - }, - - set: function( elem, value, extra ) { - var matches, - styles = getStyles( elem ), - - // Only read styles.position if the test has a chance to fail - // to avoid forcing a reflow. - scrollboxSizeBuggy = !support.scrollboxSize() && - styles.position === "absolute", - - // To avoid forcing a reflow, only fetch boxSizing if we need it (gh-3991) - boxSizingNeeded = scrollboxSizeBuggy || extra, - isBorderBox = boxSizingNeeded && - jQuery.css( elem, "boxSizing", false, styles ) === "border-box", - subtract = extra ? - boxModelAdjustment( - elem, - dimension, - extra, - isBorderBox, - styles - ) : - 0; - - // Account for unreliable border-box dimensions by comparing offset* to computed and - // faking a content-box to get border and padding (gh-3699) - if ( isBorderBox && scrollboxSizeBuggy ) { - subtract -= Math.ceil( - elem[ "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ) ] - - parseFloat( styles[ dimension ] ) - - boxModelAdjustment( elem, dimension, "border", false, styles ) - - 0.5 - ); - } - - // Convert to pixels if value adjustment is needed - if ( subtract && ( matches = rcssNum.exec( value ) ) && - ( matches[ 3 ] || "px" ) !== "px" ) { - - elem.style[ dimension ] = value; - value = jQuery.css( elem, dimension ); - } - - return setPositiveNumber( elem, value, subtract ); - } - }; -} ); - -jQuery.cssHooks.marginLeft = addGetHookIf( support.reliableMarginLeft, - function( elem, computed ) { - if ( computed ) { - return ( parseFloat( curCSS( elem, "marginLeft" ) ) || - elem.getBoundingClientRect().left - - swap( elem, { marginLeft: 0 }, function() { - return elem.getBoundingClientRect().left; - } ) - ) + "px"; - } - } -); - -// These hooks are used by animate to expand properties -jQuery.each( { - margin: "", - padding: "", - border: "Width" -}, function( prefix, suffix ) { - jQuery.cssHooks[ prefix + suffix ] = { - expand: function( value ) { - var i = 0, - expanded = {}, - - // Assumes a single number if not a string - parts = typeof value === "string" ? value.split( " " ) : [ value ]; - - for ( ; i < 4; i++ ) { - expanded[ prefix + cssExpand[ i ] + suffix ] = - parts[ i ] || parts[ i - 2 ] || parts[ 0 ]; - } - - return expanded; - } - }; - - if ( prefix !== "margin" ) { - jQuery.cssHooks[ prefix + suffix ].set = setPositiveNumber; - } -} ); - -jQuery.fn.extend( { - css: function( name, value ) { - return access( this, function( elem, name, value ) { - var styles, len, - map = {}, - i = 0; - - if ( Array.isArray( name ) ) { - styles = getStyles( elem ); - len = name.length; - - for ( ; i < len; i++ ) { - map[ name[ i ] ] = jQuery.css( elem, name[ i ], false, styles ); - } - - return map; - } - - return value !== undefined ? - jQuery.style( elem, name, value ) : - jQuery.css( elem, name ); - }, name, value, arguments.length > 1 ); - } -} ); - - -function Tween( elem, options, prop, end, easing ) { - return new Tween.prototype.init( elem, options, prop, end, easing ); -} -jQuery.Tween = Tween; - -Tween.prototype = { - constructor: Tween, - init: function( elem, options, prop, end, easing, unit ) { - this.elem = elem; - this.prop = prop; - this.easing = easing || jQuery.easing._default; - this.options = options; - this.start = this.now = this.cur(); - this.end = end; - this.unit = unit || ( jQuery.cssNumber[ prop ] ? 
"" : "px" ); - }, - cur: function() { - var hooks = Tween.propHooks[ this.prop ]; - - return hooks && hooks.get ? - hooks.get( this ) : - Tween.propHooks._default.get( this ); - }, - run: function( percent ) { - var eased, - hooks = Tween.propHooks[ this.prop ]; - - if ( this.options.duration ) { - this.pos = eased = jQuery.easing[ this.easing ]( - percent, this.options.duration * percent, 0, 1, this.options.duration - ); - } else { - this.pos = eased = percent; - } - this.now = ( this.end - this.start ) * eased + this.start; - - if ( this.options.step ) { - this.options.step.call( this.elem, this.now, this ); - } - - if ( hooks && hooks.set ) { - hooks.set( this ); - } else { - Tween.propHooks._default.set( this ); - } - return this; - } -}; - -Tween.prototype.init.prototype = Tween.prototype; - -Tween.propHooks = { - _default: { - get: function( tween ) { - var result; - - // Use a property on the element directly when it is not a DOM element, - // or when there is no matching style property that exists. - if ( tween.elem.nodeType !== 1 || - tween.elem[ tween.prop ] != null && tween.elem.style[ tween.prop ] == null ) { - return tween.elem[ tween.prop ]; - } - - // Passing an empty string as a 3rd parameter to .css will automatically - // attempt a parseFloat and fallback to a string if the parse fails. - // Simple values such as "10px" are parsed to Float; - // complex values such as "rotate(1rad)" are returned as-is. - result = jQuery.css( tween.elem, tween.prop, "" ); - - // Empty strings, null, undefined and "auto" are converted to 0. - return !result || result === "auto" ? 0 : result; - }, - set: function( tween ) { - - // Use step hook for back compat. - // Use cssHook if its there. - // Use .style if available and use plain properties where available. - if ( jQuery.fx.step[ tween.prop ] ) { - jQuery.fx.step[ tween.prop ]( tween ); - } else if ( tween.elem.nodeType === 1 && ( - jQuery.cssHooks[ tween.prop ] || - tween.elem.style[ finalPropName( tween.prop ) ] != null ) ) { - jQuery.style( tween.elem, tween.prop, tween.now + tween.unit ); - } else { - tween.elem[ tween.prop ] = tween.now; - } - } - } -}; - -// Support: IE <=9 only -// Panic based approach to setting things on disconnected nodes -Tween.propHooks.scrollTop = Tween.propHooks.scrollLeft = { - set: function( tween ) { - if ( tween.elem.nodeType && tween.elem.parentNode ) { - tween.elem[ tween.prop ] = tween.now; - } - } -}; - -jQuery.easing = { - linear: function( p ) { - return p; - }, - swing: function( p ) { - return 0.5 - Math.cos( p * Math.PI ) / 2; - }, - _default: "swing" -}; - -jQuery.fx = Tween.prototype.init; - -// Back compat <1.8 extension point -jQuery.fx.step = {}; - - - - -var - fxNow, inProgress, - rfxtypes = /^(?:toggle|show|hide)$/, - rrun = /queueHooks$/; - -function schedule() { - if ( inProgress ) { - if ( document.hidden === false && window.requestAnimationFrame ) { - window.requestAnimationFrame( schedule ); - } else { - window.setTimeout( schedule, jQuery.fx.interval ); - } - - jQuery.fx.tick(); - } -} - -// Animations created synchronously will run synchronously -function createFxNow() { - window.setTimeout( function() { - fxNow = undefined; - } ); - return ( fxNow = Date.now() ); -} - -// Generate parameters to create a standard animation -function genFx( type, includeWidth ) { - var which, - i = 0, - attrs = { height: type }; - - // If we include width, step value is 1 to do all cssExpand values, - // otherwise step value is 2 to skip over Left and Right - includeWidth = includeWidth ? 
1 : 0; - for ( ; i < 4; i += 2 - includeWidth ) { - which = cssExpand[ i ]; - attrs[ "margin" + which ] = attrs[ "padding" + which ] = type; - } - - if ( includeWidth ) { - attrs.opacity = attrs.width = type; - } - - return attrs; -} - -function createTween( value, prop, animation ) { - var tween, - collection = ( Animation.tweeners[ prop ] || [] ).concat( Animation.tweeners[ "*" ] ), - index = 0, - length = collection.length; - for ( ; index < length; index++ ) { - if ( ( tween = collection[ index ].call( animation, prop, value ) ) ) { - - // We're done with this property - return tween; - } - } -} - -function defaultPrefilter( elem, props, opts ) { - var prop, value, toggle, hooks, oldfire, propTween, restoreDisplay, display, - isBox = "width" in props || "height" in props, - anim = this, - orig = {}, - style = elem.style, - hidden = elem.nodeType && isHiddenWithinTree( elem ), - dataShow = dataPriv.get( elem, "fxshow" ); - - // Queue-skipping animations hijack the fx hooks - if ( !opts.queue ) { - hooks = jQuery._queueHooks( elem, "fx" ); - if ( hooks.unqueued == null ) { - hooks.unqueued = 0; - oldfire = hooks.empty.fire; - hooks.empty.fire = function() { - if ( !hooks.unqueued ) { - oldfire(); - } - }; - } - hooks.unqueued++; - - anim.always( function() { - - // Ensure the complete handler is called before this completes - anim.always( function() { - hooks.unqueued--; - if ( !jQuery.queue( elem, "fx" ).length ) { - hooks.empty.fire(); - } - } ); - } ); - } - - // Detect show/hide animations - for ( prop in props ) { - value = props[ prop ]; - if ( rfxtypes.test( value ) ) { - delete props[ prop ]; - toggle = toggle || value === "toggle"; - if ( value === ( hidden ? "hide" : "show" ) ) { - - // Pretend to be hidden if this is a "show" and - // there is still data from a stopped show/hide - if ( value === "show" && dataShow && dataShow[ prop ] !== undefined ) { - hidden = true; - - // Ignore all other no-op show/hide data - } else { - continue; - } - } - orig[ prop ] = dataShow && dataShow[ prop ] || jQuery.style( elem, prop ); - } - } - - // Bail out if this is a no-op like .hide().hide() - propTween = !jQuery.isEmptyObject( props ); - if ( !propTween && jQuery.isEmptyObject( orig ) ) { - return; - } - - // Restrict "overflow" and "display" styles during box animations - if ( isBox && elem.nodeType === 1 ) { - - // Support: IE <=9 - 11, Edge 12 - 15 - // Record all 3 overflow attributes because IE does not infer the shorthand - // from identically-valued overflowX and overflowY and Edge just mirrors - // the overflowX value there. 
- opts.overflow = [ style.overflow, style.overflowX, style.overflowY ]; - - // Identify a display type, preferring old show/hide data over the CSS cascade - restoreDisplay = dataShow && dataShow.display; - if ( restoreDisplay == null ) { - restoreDisplay = dataPriv.get( elem, "display" ); - } - display = jQuery.css( elem, "display" ); - if ( display === "none" ) { - if ( restoreDisplay ) { - display = restoreDisplay; - } else { - - // Get nonempty value(s) by temporarily forcing visibility - showHide( [ elem ], true ); - restoreDisplay = elem.style.display || restoreDisplay; - display = jQuery.css( elem, "display" ); - showHide( [ elem ] ); - } - } - - // Animate inline elements as inline-block - if ( display === "inline" || display === "inline-block" && restoreDisplay != null ) { - if ( jQuery.css( elem, "float" ) === "none" ) { - - // Restore the original display value at the end of pure show/hide animations - if ( !propTween ) { - anim.done( function() { - style.display = restoreDisplay; - } ); - if ( restoreDisplay == null ) { - display = style.display; - restoreDisplay = display === "none" ? "" : display; - } - } - style.display = "inline-block"; - } - } - } - - if ( opts.overflow ) { - style.overflow = "hidden"; - anim.always( function() { - style.overflow = opts.overflow[ 0 ]; - style.overflowX = opts.overflow[ 1 ]; - style.overflowY = opts.overflow[ 2 ]; - } ); - } - - // Implement show/hide animations - propTween = false; - for ( prop in orig ) { - - // General show/hide setup for this element animation - if ( !propTween ) { - if ( dataShow ) { - if ( "hidden" in dataShow ) { - hidden = dataShow.hidden; - } - } else { - dataShow = dataPriv.access( elem, "fxshow", { display: restoreDisplay } ); - } - - // Store hidden/visible for toggle so `.stop().toggle()` "reverses" - if ( toggle ) { - dataShow.hidden = !hidden; - } - - // Show elements before animating them - if ( hidden ) { - showHide( [ elem ], true ); - } - - /* eslint-disable no-loop-func */ - - anim.done( function() { - - /* eslint-enable no-loop-func */ - - // The final step of a "hide" animation is actually hiding the element - if ( !hidden ) { - showHide( [ elem ] ); - } - dataPriv.remove( elem, "fxshow" ); - for ( prop in orig ) { - jQuery.style( elem, prop, orig[ prop ] ); - } - } ); - } - - // Per-property setup - propTween = createTween( hidden ? dataShow[ prop ] : 0, prop, anim ); - if ( !( prop in dataShow ) ) { - dataShow[ prop ] = propTween.start; - if ( hidden ) { - propTween.end = propTween.start; - propTween.start = 0; - } - } - } -} - -function propFilter( props, specialEasing ) { - var index, name, easing, value, hooks; - - // camelCase, specialEasing and expand cssHook pass - for ( index in props ) { - name = camelCase( index ); - easing = specialEasing[ name ]; - value = props[ index ]; - if ( Array.isArray( value ) ) { - easing = value[ 1 ]; - value = props[ index ] = value[ 0 ]; - } - - if ( index !== name ) { - props[ name ] = value; - delete props[ index ]; - } - - hooks = jQuery.cssHooks[ name ]; - if ( hooks && "expand" in hooks ) { - value = hooks.expand( value ); - delete props[ name ]; - - // Not quite $.extend, this won't overwrite existing keys. 
- // Reusing 'index' because we have the correct "name" - for ( index in value ) { - if ( !( index in props ) ) { - props[ index ] = value[ index ]; - specialEasing[ index ] = easing; - } - } - } else { - specialEasing[ name ] = easing; - } - } -} - -function Animation( elem, properties, options ) { - var result, - stopped, - index = 0, - length = Animation.prefilters.length, - deferred = jQuery.Deferred().always( function() { - - // Don't match elem in the :animated selector - delete tick.elem; - } ), - tick = function() { - if ( stopped ) { - return false; - } - var currentTime = fxNow || createFxNow(), - remaining = Math.max( 0, animation.startTime + animation.duration - currentTime ), - - // Support: Android 2.3 only - // Archaic crash bug won't allow us to use `1 - ( 0.5 || 0 )` (#12497) - temp = remaining / animation.duration || 0, - percent = 1 - temp, - index = 0, - length = animation.tweens.length; - - for ( ; index < length; index++ ) { - animation.tweens[ index ].run( percent ); - } - - deferred.notifyWith( elem, [ animation, percent, remaining ] ); - - // If there's more to do, yield - if ( percent < 1 && length ) { - return remaining; - } - - // If this was an empty animation, synthesize a final progress notification - if ( !length ) { - deferred.notifyWith( elem, [ animation, 1, 0 ] ); - } - - // Resolve the animation and report its conclusion - deferred.resolveWith( elem, [ animation ] ); - return false; - }, - animation = deferred.promise( { - elem: elem, - props: jQuery.extend( {}, properties ), - opts: jQuery.extend( true, { - specialEasing: {}, - easing: jQuery.easing._default - }, options ), - originalProperties: properties, - originalOptions: options, - startTime: fxNow || createFxNow(), - duration: options.duration, - tweens: [], - createTween: function( prop, end ) { - var tween = jQuery.Tween( elem, animation.opts, prop, end, - animation.opts.specialEasing[ prop ] || animation.opts.easing ); - animation.tweens.push( tween ); - return tween; - }, - stop: function( gotoEnd ) { - var index = 0, - - // If we are going to the end, we want to run all the tweens - // otherwise we skip this part - length = gotoEnd ? 
animation.tweens.length : 0; - if ( stopped ) { - return this; - } - stopped = true; - for ( ; index < length; index++ ) { - animation.tweens[ index ].run( 1 ); - } - - // Resolve when we played the last frame; otherwise, reject - if ( gotoEnd ) { - deferred.notifyWith( elem, [ animation, 1, 0 ] ); - deferred.resolveWith( elem, [ animation, gotoEnd ] ); - } else { - deferred.rejectWith( elem, [ animation, gotoEnd ] ); - } - return this; - } - } ), - props = animation.props; - - propFilter( props, animation.opts.specialEasing ); - - for ( ; index < length; index++ ) { - result = Animation.prefilters[ index ].call( animation, elem, props, animation.opts ); - if ( result ) { - if ( isFunction( result.stop ) ) { - jQuery._queueHooks( animation.elem, animation.opts.queue ).stop = - result.stop.bind( result ); - } - return result; - } - } - - jQuery.map( props, createTween, animation ); - - if ( isFunction( animation.opts.start ) ) { - animation.opts.start.call( elem, animation ); - } - - // Attach callbacks from options - animation - .progress( animation.opts.progress ) - .done( animation.opts.done, animation.opts.complete ) - .fail( animation.opts.fail ) - .always( animation.opts.always ); - - jQuery.fx.timer( - jQuery.extend( tick, { - elem: elem, - anim: animation, - queue: animation.opts.queue - } ) - ); - - return animation; -} - -jQuery.Animation = jQuery.extend( Animation, { - - tweeners: { - "*": [ function( prop, value ) { - var tween = this.createTween( prop, value ); - adjustCSS( tween.elem, prop, rcssNum.exec( value ), tween ); - return tween; - } ] - }, - - tweener: function( props, callback ) { - if ( isFunction( props ) ) { - callback = props; - props = [ "*" ]; - } else { - props = props.match( rnothtmlwhite ); - } - - var prop, - index = 0, - length = props.length; - - for ( ; index < length; index++ ) { - prop = props[ index ]; - Animation.tweeners[ prop ] = Animation.tweeners[ prop ] || []; - Animation.tweeners[ prop ].unshift( callback ); - } - }, - - prefilters: [ defaultPrefilter ], - - prefilter: function( callback, prepend ) { - if ( prepend ) { - Animation.prefilters.unshift( callback ); - } else { - Animation.prefilters.push( callback ); - } - } -} ); - -jQuery.speed = function( speed, easing, fn ) { - var opt = speed && typeof speed === "object" ? 
jQuery.extend( {}, speed ) : { - complete: fn || !fn && easing || - isFunction( speed ) && speed, - duration: speed, - easing: fn && easing || easing && !isFunction( easing ) && easing - }; - - // Go to the end state if fx are off - if ( jQuery.fx.off ) { - opt.duration = 0; - - } else { - if ( typeof opt.duration !== "number" ) { - if ( opt.duration in jQuery.fx.speeds ) { - opt.duration = jQuery.fx.speeds[ opt.duration ]; - - } else { - opt.duration = jQuery.fx.speeds._default; - } - } - } - - // Normalize opt.queue - true/undefined/null -> "fx" - if ( opt.queue == null || opt.queue === true ) { - opt.queue = "fx"; - } - - // Queueing - opt.old = opt.complete; - - opt.complete = function() { - if ( isFunction( opt.old ) ) { - opt.old.call( this ); - } - - if ( opt.queue ) { - jQuery.dequeue( this, opt.queue ); - } - }; - - return opt; -}; - -jQuery.fn.extend( { - fadeTo: function( speed, to, easing, callback ) { - - // Show any hidden elements after setting opacity to 0 - return this.filter( isHiddenWithinTree ).css( "opacity", 0 ).show() - - // Animate to the value specified - .end().animate( { opacity: to }, speed, easing, callback ); - }, - animate: function( prop, speed, easing, callback ) { - var empty = jQuery.isEmptyObject( prop ), - optall = jQuery.speed( speed, easing, callback ), - doAnimation = function() { - - // Operate on a copy of prop so per-property easing won't be lost - var anim = Animation( this, jQuery.extend( {}, prop ), optall ); - - // Empty animations, or finishing resolves immediately - if ( empty || dataPriv.get( this, "finish" ) ) { - anim.stop( true ); - } - }; - - doAnimation.finish = doAnimation; - - return empty || optall.queue === false ? - this.each( doAnimation ) : - this.queue( optall.queue, doAnimation ); - }, - stop: function( type, clearQueue, gotoEnd ) { - var stopQueue = function( hooks ) { - var stop = hooks.stop; - delete hooks.stop; - stop( gotoEnd ); - }; - - if ( typeof type !== "string" ) { - gotoEnd = clearQueue; - clearQueue = type; - type = undefined; - } - if ( clearQueue ) { - this.queue( type || "fx", [] ); - } - - return this.each( function() { - var dequeue = true, - index = type != null && type + "queueHooks", - timers = jQuery.timers, - data = dataPriv.get( this ); - - if ( index ) { - if ( data[ index ] && data[ index ].stop ) { - stopQueue( data[ index ] ); - } - } else { - for ( index in data ) { - if ( data[ index ] && data[ index ].stop && rrun.test( index ) ) { - stopQueue( data[ index ] ); - } - } - } - - for ( index = timers.length; index--; ) { - if ( timers[ index ].elem === this && - ( type == null || timers[ index ].queue === type ) ) { - - timers[ index ].anim.stop( gotoEnd ); - dequeue = false; - timers.splice( index, 1 ); - } - } - - // Start the next in the queue if the last step wasn't forced. - // Timers currently will call their complete callbacks, which - // will dequeue but only if they were gotoEnd. - if ( dequeue || !gotoEnd ) { - jQuery.dequeue( this, type ); - } - } ); - }, - finish: function( type ) { - if ( type !== false ) { - type = type || "fx"; - } - return this.each( function() { - var index, - data = dataPriv.get( this ), - queue = data[ type + "queue" ], - hooks = data[ type + "queueHooks" ], - timers = jQuery.timers, - length = queue ? 
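- // entries still waiting in the old queue; each one is finished below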
queue.length : 0; - - // Enable finishing flag on private data - data.finish = true; - - // Empty the queue first - jQuery.queue( this, type, [] ); - - if ( hooks && hooks.stop ) { - hooks.stop.call( this, true ); - } - - // Look for any active animations, and finish them - for ( index = timers.length; index--; ) { - if ( timers[ index ].elem === this && timers[ index ].queue === type ) { - timers[ index ].anim.stop( true ); - timers.splice( index, 1 ); - } - } - - // Look for any animations in the old queue and finish them - for ( index = 0; index < length; index++ ) { - if ( queue[ index ] && queue[ index ].finish ) { - queue[ index ].finish.call( this ); - } - } - - // Turn off finishing flag - delete data.finish; - } ); - } -} ); - -jQuery.each( [ "toggle", "show", "hide" ], function( _i, name ) { - var cssFn = jQuery.fn[ name ]; - jQuery.fn[ name ] = function( speed, easing, callback ) { - return speed == null || typeof speed === "boolean" ? - cssFn.apply( this, arguments ) : - this.animate( genFx( name, true ), speed, easing, callback ); - }; -} ); - -// Generate shortcuts for custom animations -jQuery.each( { - slideDown: genFx( "show" ), - slideUp: genFx( "hide" ), - slideToggle: genFx( "toggle" ), - fadeIn: { opacity: "show" }, - fadeOut: { opacity: "hide" }, - fadeToggle: { opacity: "toggle" } -}, function( name, props ) { - jQuery.fn[ name ] = function( speed, easing, callback ) { - return this.animate( props, speed, easing, callback ); - }; -} ); - -jQuery.timers = []; -jQuery.fx.tick = function() { - var timer, - i = 0, - timers = jQuery.timers; - - fxNow = Date.now(); - - for ( ; i < timers.length; i++ ) { - timer = timers[ i ]; - - // Run the timer and safely remove it when done (allowing for external removal) - if ( !timer() && timers[ i ] === timer ) { - timers.splice( i--, 1 ); - } - } - - if ( !timers.length ) { - jQuery.fx.stop(); - } - fxNow = undefined; -}; - -jQuery.fx.timer = function( timer ) { - jQuery.timers.push( timer ); - jQuery.fx.start(); -}; - -jQuery.fx.interval = 13; -jQuery.fx.start = function() { - if ( inProgress ) { - return; - } - - inProgress = true; - schedule(); -}; - -jQuery.fx.stop = function() { - inProgress = null; -}; - -jQuery.fx.speeds = { - slow: 600, - fast: 200, - - // Default speed - _default: 400 -}; - - -// Based off of the plugin by Clint Helfers, with permission. -// https://web.archive.org/web/20100324014747/http://blindsignals.com/index.php/2009/07/jquery-delay/ -jQuery.fn.delay = function( time, type ) { - time = jQuery.fx ? 
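- // resolve named durations such as "fast" or "slow" to milliseconds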
jQuery.fx.speeds[ time ] || time : time; - type = type || "fx"; - - return this.queue( type, function( next, hooks ) { - var timeout = window.setTimeout( next, time ); - hooks.stop = function() { - window.clearTimeout( timeout ); - }; - } ); -}; - - -( function() { - var input = document.createElement( "input" ), - select = document.createElement( "select" ), - opt = select.appendChild( document.createElement( "option" ) ); - - input.type = "checkbox"; - - // Support: Android <=4.3 only - // Default value for a checkbox should be "on" - support.checkOn = input.value !== ""; - - // Support: IE <=11 only - // Must access selectedIndex to make default options select - support.optSelected = opt.selected; - - // Support: IE <=11 only - // An input loses its value after becoming a radio - input = document.createElement( "input" ); - input.value = "t"; - input.type = "radio"; - support.radioValue = input.value === "t"; -} )(); - - -var boolHook, - attrHandle = jQuery.expr.attrHandle; - -jQuery.fn.extend( { - attr: function( name, value ) { - return access( this, jQuery.attr, name, value, arguments.length > 1 ); - }, - - removeAttr: function( name ) { - return this.each( function() { - jQuery.removeAttr( this, name ); - } ); - } -} ); - -jQuery.extend( { - attr: function( elem, name, value ) { - var ret, hooks, - nType = elem.nodeType; - - // Don't get/set attributes on text, comment and attribute nodes - if ( nType === 3 || nType === 8 || nType === 2 ) { - return; - } - - // Fallback to prop when attributes are not supported - if ( typeof elem.getAttribute === "undefined" ) { - return jQuery.prop( elem, name, value ); - } - - // Attribute hooks are determined by the lowercase version - // Grab necessary hook if one is defined - if ( nType !== 1 || !jQuery.isXMLDoc( elem ) ) { - hooks = jQuery.attrHooks[ name.toLowerCase() ] || - ( jQuery.expr.match.bool.test( name ) ? boolHook : undefined ); - } - - if ( value !== undefined ) { - if ( value === null ) { - jQuery.removeAttr( elem, name ); - return; - } - - if ( hooks && "set" in hooks && - ( ret = hooks.set( elem, value, name ) ) !== undefined ) { - return ret; - } - - elem.setAttribute( name, value + "" ); - return value; - } - - if ( hooks && "get" in hooks && ( ret = hooks.get( elem, name ) ) !== null ) { - return ret; - } - - ret = jQuery.find.attr( elem, name ); - - // Non-existent attributes return null, we normalize to undefined - return ret == null ? 
undefined : ret; - }, - - attrHooks: { - type: { - set: function( elem, value ) { - if ( !support.radioValue && value === "radio" && - nodeName( elem, "input" ) ) { - var val = elem.value; - elem.setAttribute( "type", value ); - if ( val ) { - elem.value = val; - } - return value; - } - } - } - }, - - removeAttr: function( elem, value ) { - var name, - i = 0, - - // Attribute names can contain non-HTML whitespace characters - // https://html.spec.whatwg.org/multipage/syntax.html#attributes-2 - attrNames = value && value.match( rnothtmlwhite ); - - if ( attrNames && elem.nodeType === 1 ) { - while ( ( name = attrNames[ i++ ] ) ) { - elem.removeAttribute( name ); - } - } - } -} ); - -// Hooks for boolean attributes -boolHook = { - set: function( elem, value, name ) { - if ( value === false ) { - - // Remove boolean attributes when set to false - jQuery.removeAttr( elem, name ); - } else { - elem.setAttribute( name, name ); - } - return name; - } -}; - -jQuery.each( jQuery.expr.match.bool.source.match( /\w+/g ), function( _i, name ) { - var getter = attrHandle[ name ] || jQuery.find.attr; - - attrHandle[ name ] = function( elem, name, isXML ) { - var ret, handle, - lowercaseName = name.toLowerCase(); - - if ( !isXML ) { - - // Avoid an infinite loop by temporarily removing this function from the getter - handle = attrHandle[ lowercaseName ]; - attrHandle[ lowercaseName ] = ret; - ret = getter( elem, name, isXML ) != null ? - lowercaseName : - null; - attrHandle[ lowercaseName ] = handle; - } - return ret; - }; -} ); - - - - -var rfocusable = /^(?:input|select|textarea|button)$/i, - rclickable = /^(?:a|area)$/i; - -jQuery.fn.extend( { - prop: function( name, value ) { - return access( this, jQuery.prop, name, value, arguments.length > 1 ); - }, - - removeProp: function( name ) { - return this.each( function() { - delete this[ jQuery.propFix[ name ] || name ]; - } ); - } -} ); - -jQuery.extend( { - prop: function( elem, name, value ) { - var ret, hooks, - nType = elem.nodeType; - - // Don't get/set properties on text, comment and attribute nodes - if ( nType === 3 || nType === 8 || nType === 2 ) { - return; - } - - if ( nType !== 1 || !jQuery.isXMLDoc( elem ) ) { - - // Fix name and attach hooks - name = jQuery.propFix[ name ] || name; - hooks = jQuery.propHooks[ name ]; - } - - if ( value !== undefined ) { - if ( hooks && "set" in hooks && - ( ret = hooks.set( elem, value, name ) ) !== undefined ) { - return ret; - } - - return ( elem[ name ] = value ); - } - - if ( hooks && "get" in hooks && ( ret = hooks.get( elem, name ) ) !== null ) { - return ret; - } - - return elem[ name ]; - }, - - propHooks: { - tabIndex: { - get: function( elem ) { - - // Support: IE <=9 - 11 only - // elem.tabIndex doesn't always return the - // correct value when it hasn't been explicitly set - // https://web.archive.org/web/20141116233347/http://fluidproject.org/blog/2008/01/09/getting-setting-and-removing-tabindex-values-with-javascript/ - // Use proper attribute retrieval(#12072) - var tabindex = jQuery.find.attr( elem, "tabindex" ); - - if ( tabindex ) { - return parseInt( tabindex, 10 ); - } - - if ( - rfocusable.test( elem.nodeName ) || - rclickable.test( elem.nodeName ) && - elem.href - ) { - return 0; - } - - return -1; - } - } - }, - - propFix: { - "for": "htmlFor", - "class": "className" - } -} ); - -// Support: IE <=11 only -// Accessing the selectedIndex property -// forces the browser to respect setting selected -// on the option -// The getter ensures a default option is selected -// when in an 
optgroup -// eslint rule "no-unused-expressions" is disabled for this code -// since it considers such accessions noop -if ( !support.optSelected ) { - jQuery.propHooks.selected = { - get: function( elem ) { - - /* eslint no-unused-expressions: "off" */ - - var parent = elem.parentNode; - if ( parent && parent.parentNode ) { - parent.parentNode.selectedIndex; - } - return null; - }, - set: function( elem ) { - - /* eslint no-unused-expressions: "off" */ - - var parent = elem.parentNode; - if ( parent ) { - parent.selectedIndex; - - if ( parent.parentNode ) { - parent.parentNode.selectedIndex; - } - } - } - }; -} - -jQuery.each( [ - "tabIndex", - "readOnly", - "maxLength", - "cellSpacing", - "cellPadding", - "rowSpan", - "colSpan", - "useMap", - "frameBorder", - "contentEditable" -], function() { - jQuery.propFix[ this.toLowerCase() ] = this; -} ); - - - - - // Strip and collapse whitespace according to HTML spec - // https://infra.spec.whatwg.org/#strip-and-collapse-ascii-whitespace - function stripAndCollapse( value ) { - var tokens = value.match( rnothtmlwhite ) || []; - return tokens.join( " " ); - } - - -function getClass( elem ) { - return elem.getAttribute && elem.getAttribute( "class" ) || ""; -} - -function classesToArray( value ) { - if ( Array.isArray( value ) ) { - return value; - } - if ( typeof value === "string" ) { - return value.match( rnothtmlwhite ) || []; - } - return []; -} - -jQuery.fn.extend( { - addClass: function( value ) { - var classes, elem, cur, curValue, clazz, j, finalValue, - i = 0; - - if ( isFunction( value ) ) { - return this.each( function( j ) { - jQuery( this ).addClass( value.call( this, j, getClass( this ) ) ); - } ); - } - - classes = classesToArray( value ); - - if ( classes.length ) { - while ( ( elem = this[ i++ ] ) ) { - curValue = getClass( elem ); - cur = elem.nodeType === 1 && ( " " + stripAndCollapse( curValue ) + " " ); - - if ( cur ) { - j = 0; - while ( ( clazz = classes[ j++ ] ) ) { - if ( cur.indexOf( " " + clazz + " " ) < 0 ) { - cur += clazz + " "; - } - } - - // Only assign if different to avoid unneeded rendering. - finalValue = stripAndCollapse( cur ); - if ( curValue !== finalValue ) { - elem.setAttribute( "class", finalValue ); - } - } - } - } - - return this; - }, - - removeClass: function( value ) { - var classes, elem, cur, curValue, clazz, j, finalValue, - i = 0; - - if ( isFunction( value ) ) { - return this.each( function( j ) { - jQuery( this ).removeClass( value.call( this, j, getClass( this ) ) ); - } ); - } - - if ( !arguments.length ) { - return this.attr( "class", "" ); - } - - classes = classesToArray( value ); - - if ( classes.length ) { - while ( ( elem = this[ i++ ] ) ) { - curValue = getClass( elem ); - - // This expression is here for better compressibility (see addClass) - cur = elem.nodeType === 1 && ( " " + stripAndCollapse( curValue ) + " " ); - - if ( cur ) { - j = 0; - while ( ( clazz = classes[ j++ ] ) ) { - - // Remove *all* instances - while ( cur.indexOf( " " + clazz + " " ) > -1 ) { - cur = cur.replace( " " + clazz + " ", " " ); - } - } - - // Only assign if different to avoid unneeded rendering. - finalValue = stripAndCollapse( cur ); - if ( curValue !== finalValue ) { - elem.setAttribute( "class", finalValue ); - } - } - } - } - - return this; - }, - - toggleClass: function( value, stateVal ) { - var type = typeof value, - isValidValue = type === "string" || Array.isArray( value ); - - if ( typeof stateVal === "boolean" && isValidValue ) { - return stateVal ? 
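- // an explicit boolean forces a plain add (true) or remove (false)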
this.addClass( value ) : this.removeClass( value ); - } - - if ( isFunction( value ) ) { - return this.each( function( i ) { - jQuery( this ).toggleClass( - value.call( this, i, getClass( this ), stateVal ), - stateVal - ); - } ); - } - - return this.each( function() { - var className, i, self, classNames; - - if ( isValidValue ) { - - // Toggle individual class names - i = 0; - self = jQuery( this ); - classNames = classesToArray( value ); - - while ( ( className = classNames[ i++ ] ) ) { - - // Check each className given, space separated list - if ( self.hasClass( className ) ) { - self.removeClass( className ); - } else { - self.addClass( className ); - } - } - - // Toggle whole class name - } else if ( value === undefined || type === "boolean" ) { - className = getClass( this ); - if ( className ) { - - // Store className if set - dataPriv.set( this, "__className__", className ); - } - - // If the element has a class name or if we're passed `false`, - // then remove the whole classname (if there was one, the above saved it). - // Otherwise bring back whatever was previously saved (if anything), - // falling back to the empty string if nothing was stored. - if ( this.setAttribute ) { - this.setAttribute( "class", - className || value === false ? - "" : - dataPriv.get( this, "__className__" ) || "" - ); - } - } - } ); - }, - - hasClass: function( selector ) { - var className, elem, - i = 0; - - className = " " + selector + " "; - while ( ( elem = this[ i++ ] ) ) { - if ( elem.nodeType === 1 && - ( " " + stripAndCollapse( getClass( elem ) ) + " " ).indexOf( className ) > -1 ) { - return true; - } - } - - return false; - } -} ); - - - - -var rreturn = /\r/g; - -jQuery.fn.extend( { - val: function( value ) { - var hooks, ret, valueIsFunction, - elem = this[ 0 ]; - - if ( !arguments.length ) { - if ( elem ) { - hooks = jQuery.valHooks[ elem.type ] || - jQuery.valHooks[ elem.nodeName.toLowerCase() ]; - - if ( hooks && - "get" in hooks && - ( ret = hooks.get( elem, "value" ) ) !== undefined - ) { - return ret; - } - - ret = elem.value; - - // Handle most common string cases - if ( typeof ret === "string" ) { - return ret.replace( rreturn, "" ); - } - - // Handle cases where value is null/undef or number - return ret == null ? "" : ret; - } - - return; - } - - valueIsFunction = isFunction( value ); - - return this.each( function( i ) { - var val; - - if ( this.nodeType !== 1 ) { - return; - } - - if ( valueIsFunction ) { - val = value.call( this, i, jQuery( this ).val() ); - } else { - val = value; - } - - // Treat null/undefined as ""; convert numbers to string - if ( val == null ) { - val = ""; - - } else if ( typeof val === "number" ) { - val += ""; - - } else if ( Array.isArray( val ) ) { - val = jQuery.map( val, function( value ) { - return value == null ? "" : value + ""; - } ); - } - - hooks = jQuery.valHooks[ this.type ] || jQuery.valHooks[ this.nodeName.toLowerCase() ]; - - // If set returns undefined, fall back to normal setting - if ( !hooks || !( "set" in hooks ) || hooks.set( this, val, "value" ) === undefined ) { - this.value = val; - } - } ); - } -} ); - -jQuery.extend( { - valHooks: { - option: { - get: function( elem ) { - - var val = jQuery.find.attr( elem, "value" ); - return val != null ? 
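- // prefer an explicit value attribute when the markup provides one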
- val : - - // Support: IE <=10 - 11 only - // option.text throws exceptions (#14686, #14858) - // Strip and collapse whitespace - // https://html.spec.whatwg.org/#strip-and-collapse-whitespace - stripAndCollapse( jQuery.text( elem ) ); - } - }, - select: { - get: function( elem ) { - var value, option, i, - options = elem.options, - index = elem.selectedIndex, - one = elem.type === "select-one", - values = one ? null : [], - max = one ? index + 1 : options.length; - - if ( index < 0 ) { - i = max; - - } else { - i = one ? index : 0; - } - - // Loop through all the selected options - for ( ; i < max; i++ ) { - option = options[ i ]; - - // Support: IE <=9 only - // IE8-9 doesn't update selected after form reset (#2551) - if ( ( option.selected || i === index ) && - - // Don't return options that are disabled or in a disabled optgroup - !option.disabled && - ( !option.parentNode.disabled || - !nodeName( option.parentNode, "optgroup" ) ) ) { - - // Get the specific value for the option - value = jQuery( option ).val(); - - // We don't need an array for one selects - if ( one ) { - return value; - } - - // Multi-Selects return an array - values.push( value ); - } - } - - return values; - }, - - set: function( elem, value ) { - var optionSet, option, - options = elem.options, - values = jQuery.makeArray( value ), - i = options.length; - - while ( i-- ) { - option = options[ i ]; - - /* eslint-disable no-cond-assign */ - - if ( option.selected = - jQuery.inArray( jQuery.valHooks.option.get( option ), values ) > -1 - ) { - optionSet = true; - } - - /* eslint-enable no-cond-assign */ - } - - // Force browsers to behave consistently when non-matching value is set - if ( !optionSet ) { - elem.selectedIndex = -1; - } - return values; - } - } - } -} ); - -// Radios and checkboxes getter/setter -jQuery.each( [ "radio", "checkbox" ], function() { - jQuery.valHooks[ this ] = { - set: function( elem, value ) { - if ( Array.isArray( value ) ) { - return ( elem.checked = jQuery.inArray( jQuery( elem ).val(), value ) > -1 ); - } - } - }; - if ( !support.checkOn ) { - jQuery.valHooks[ this ].get = function( elem ) { - return elem.getAttribute( "value" ) === null ? "on" : elem.value; - }; - } -} ); - - - - -// Return jQuery for attributes-only inclusion - - -support.focusin = "onfocusin" in window; - - -var rfocusMorph = /^(?:focusinfocus|focusoutblur)$/, - stopPropagationCallback = function( e ) { - e.stopPropagation(); - }; - -jQuery.extend( jQuery.event, { - - trigger: function( event, data, elem, onlyHandlers ) { - - var i, cur, tmp, bubbleType, ontype, handle, special, lastElement, - eventPath = [ elem || document ], - type = hasOwn.call( event, "type" ) ? event.type : event, - namespaces = hasOwn.call( event, "namespace" ) ? event.namespace.split( "." ) : []; - - cur = lastElement = tmp = elem = elem || document; - - // Don't do events on text and comment nodes - if ( elem.nodeType === 3 || elem.nodeType === 8 ) { - return; - } - - // focus/blur morphs to focusin/out; ensure we're not firing them right now - if ( rfocusMorph.test( type + jQuery.event.triggered ) ) { - return; - } - - if ( type.indexOf( "." ) > -1 ) { - - // Namespaced trigger; create a regexp to match event type in handle() - namespaces = type.split( "." ); - type = namespaces.shift(); - namespaces.sort(); - } - ontype = type.indexOf( ":" ) < 0 && "on" + type; - - // Caller can pass in a jQuery.Event object, Object, or just an event type string - event = event[ jQuery.expando ] ? 
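- // already a jQuery.Event instance; reuse it directly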
- event : - new jQuery.Event( type, typeof event === "object" && event ); - - // Trigger bitmask: & 1 for native handlers; & 2 for jQuery (always true) - event.isTrigger = onlyHandlers ? 2 : 3; - event.namespace = namespaces.join( "." ); - event.rnamespace = event.namespace ? - new RegExp( "(^|\\.)" + namespaces.join( "\\.(?:.*\\.|)" ) + "(\\.|$)" ) : - null; - - // Clean up the event in case it is being reused - event.result = undefined; - if ( !event.target ) { - event.target = elem; - } - - // Clone any incoming data and prepend the event, creating the handler arg list - data = data == null ? - [ event ] : - jQuery.makeArray( data, [ event ] ); - - // Allow special events to draw outside the lines - special = jQuery.event.special[ type ] || {}; - if ( !onlyHandlers && special.trigger && special.trigger.apply( elem, data ) === false ) { - return; - } - - // Determine event propagation path in advance, per W3C events spec (#9951) - // Bubble up to document, then to window; watch for a global ownerDocument var (#9724) - if ( !onlyHandlers && !special.noBubble && !isWindow( elem ) ) { - - bubbleType = special.delegateType || type; - if ( !rfocusMorph.test( bubbleType + type ) ) { - cur = cur.parentNode; - } - for ( ; cur; cur = cur.parentNode ) { - eventPath.push( cur ); - tmp = cur; - } - - // Only add window if we got to document (e.g., not plain obj or detached DOM) - if ( tmp === ( elem.ownerDocument || document ) ) { - eventPath.push( tmp.defaultView || tmp.parentWindow || window ); - } - } - - // Fire handlers on the event path - i = 0; - while ( ( cur = eventPath[ i++ ] ) && !event.isPropagationStopped() ) { - lastElement = cur; - event.type = i > 1 ? - bubbleType : - special.bindType || type; - - // jQuery handler - handle = ( dataPriv.get( cur, "events" ) || Object.create( null ) )[ event.type ] && - dataPriv.get( cur, "handle" ); - if ( handle ) { - handle.apply( cur, data ); - } - - // Native handler - handle = ontype && cur[ ontype ]; - if ( handle && handle.apply && acceptData( cur ) ) { - event.result = handle.apply( cur, data ); - if ( event.result === false ) { - event.preventDefault(); - } - } - } - event.type = type; - - // If nobody prevented the default action, do it now - if ( !onlyHandlers && !event.isDefaultPrevented() ) { - - if ( ( !special._default || - special._default.apply( eventPath.pop(), data ) === false ) && - acceptData( elem ) ) { - - // Call a native DOM method on the target with the same name as the event. 
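- // For example, a call like jQuery( "#save" ).trigger( "click" ) (selector
- // illustrative) reaches this branch and invokes the native elem.click()
- // after the jQuery handlers along the propagation path have run.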
- // Don't do default actions on window, that's where global variables be (#6170) - if ( ontype && isFunction( elem[ type ] ) && !isWindow( elem ) ) { - - // Don't re-trigger an onFOO event when we call its FOO() method - tmp = elem[ ontype ]; - - if ( tmp ) { - elem[ ontype ] = null; - } - - // Prevent re-triggering of the same event, since we already bubbled it above - jQuery.event.triggered = type; - - if ( event.isPropagationStopped() ) { - lastElement.addEventListener( type, stopPropagationCallback ); - } - - elem[ type ](); - - if ( event.isPropagationStopped() ) { - lastElement.removeEventListener( type, stopPropagationCallback ); - } - - jQuery.event.triggered = undefined; - - if ( tmp ) { - elem[ ontype ] = tmp; - } - } - } - } - - return event.result; - }, - - // Piggyback on a donor event to simulate a different one - // Used only for `focus(in | out)` events - simulate: function( type, elem, event ) { - var e = jQuery.extend( - new jQuery.Event(), - event, - { - type: type, - isSimulated: true - } - ); - - jQuery.event.trigger( e, null, elem ); - } - -} ); - -jQuery.fn.extend( { - - trigger: function( type, data ) { - return this.each( function() { - jQuery.event.trigger( type, data, this ); - } ); - }, - triggerHandler: function( type, data ) { - var elem = this[ 0 ]; - if ( elem ) { - return jQuery.event.trigger( type, data, elem, true ); - } - } -} ); - - -// Support: Firefox <=44 -// Firefox doesn't have focus(in | out) events -// Related ticket - https://bugzilla.mozilla.org/show_bug.cgi?id=687787 -// -// Support: Chrome <=48 - 49, Safari <=9.0 - 9.1 -// focus(in | out) events fire after focus & blur events, -// which is spec violation - http://www.w3.org/TR/DOM-Level-3-Events/#events-focusevent-event-order -// Related ticket - https://bugs.chromium.org/p/chromium/issues/detail?id=449857 -if ( !support.focusin ) { - jQuery.each( { focus: "focusin", blur: "focusout" }, function( orig, fix ) { - - // Attach a single capturing handler on the document while someone wants focusin/focusout - var handler = function( event ) { - jQuery.event.simulate( fix, event.target, jQuery.event.fix( event ) ); - }; - - jQuery.event.special[ fix ] = { - setup: function() { - - // Handle: regular nodes (via `this.ownerDocument`), window - // (via `this.document`) & document (via `this`). - var doc = this.ownerDocument || this.document || this, - attaches = dataPriv.access( doc, fix ); - - if ( !attaches ) { - doc.addEventListener( orig, handler, true ); - } - dataPriv.access( doc, fix, ( attaches || 0 ) + 1 ); - }, - teardown: function() { - var doc = this.ownerDocument || this.document || this, - attaches = dataPriv.access( doc, fix ) - 1; - - if ( !attaches ) { - doc.removeEventListener( orig, handler, true ); - dataPriv.remove( doc, fix ); - - } else { - dataPriv.access( doc, fix, attaches ); - } - } - }; - } ); -} -var location = window.location; - -var nonce = { guid: Date.now() }; - -var rquery = ( /\?/ ); - - - -// Cross-browser xml parsing -jQuery.parseXML = function( data ) { - var xml, parserErrorElem; - if ( !data || typeof data !== "string" ) { - return null; - } - - // Support: IE 9 - 11 only - // IE throws on parseFromString with invalid input. - try { - xml = ( new window.DOMParser() ).parseFromString( data, "text/xml" ); - } catch ( e ) {} - - parserErrorElem = xml && xml.getElementsByTagName( "parsererror" )[ 0 ]; - if ( !xml || parserErrorElem ) { - jQuery.error( "Invalid XML: " + ( - parserErrorElem ? 
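- // surface the browser's own parsererror text when it is available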
- jQuery.map( parserErrorElem.childNodes, function( el ) { - return el.textContent; - } ).join( "\n" ) : - data - ) ); - } - return xml; -}; - - -var - rbracket = /\[\]$/, - rCRLF = /\r?\n/g, - rsubmitterTypes = /^(?:submit|button|image|reset|file)$/i, - rsubmittable = /^(?:input|select|textarea|keygen)/i; - -function buildParams( prefix, obj, traditional, add ) { - var name; - - if ( Array.isArray( obj ) ) { - - // Serialize array item. - jQuery.each( obj, function( i, v ) { - if ( traditional || rbracket.test( prefix ) ) { - - // Treat each array item as a scalar. - add( prefix, v ); - - } else { - - // Item is non-scalar (array or object), encode its numeric index. - buildParams( - prefix + "[" + ( typeof v === "object" && v != null ? i : "" ) + "]", - v, - traditional, - add - ); - } - } ); - - } else if ( !traditional && toType( obj ) === "object" ) { - - // Serialize object item. - for ( name in obj ) { - buildParams( prefix + "[" + name + "]", obj[ name ], traditional, add ); - } - - } else { - - // Serialize scalar item. - add( prefix, obj ); - } -} - -// Serialize an array of form elements or a set of -// key/values into a query string -jQuery.param = function( a, traditional ) { - var prefix, - s = [], - add = function( key, valueOrFunction ) { - - // If value is a function, invoke it and use its return value - var value = isFunction( valueOrFunction ) ? - valueOrFunction() : - valueOrFunction; - - s[ s.length ] = encodeURIComponent( key ) + "=" + - encodeURIComponent( value == null ? "" : value ); - }; - - if ( a == null ) { - return ""; - } - - // If an array was passed in, assume that it is an array of form elements. - if ( Array.isArray( a ) || ( a.jquery && !jQuery.isPlainObject( a ) ) ) { - - // Serialize the form elements - jQuery.each( a, function() { - add( this.name, this.value ); - } ); - - } else { - - // If traditional, encode the "old" way (the way 1.3.2 or older - // did it), otherwise encode params recursively. - for ( prefix in a ) { - buildParams( prefix, a[ prefix ], traditional, add ); - } - } - - // Return the resulting serialization - return s.join( "&" ); -}; - -jQuery.fn.extend( { - serialize: function() { - return jQuery.param( this.serializeArray() ); - }, - serializeArray: function() { - return this.map( function() { - - // Can add propHook for "elements" to filter or add form elements - var elements = jQuery.prop( this, "elements" ); - return elements ? 
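- // serialize a form via its elements collection, else the node itself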
jQuery.makeArray( elements ) : this; - } ).filter( function() { - var type = this.type; - - // Use .is( ":disabled" ) so that fieldset[disabled] works - return this.name && !jQuery( this ).is( ":disabled" ) && - rsubmittable.test( this.nodeName ) && !rsubmitterTypes.test( type ) && - ( this.checked || !rcheckableType.test( type ) ); - } ).map( function( _i, elem ) { - var val = jQuery( this ).val(); - - if ( val == null ) { - return null; - } - - if ( Array.isArray( val ) ) { - return jQuery.map( val, function( val ) { - return { name: elem.name, value: val.replace( rCRLF, "\r\n" ) }; - } ); - } - - return { name: elem.name, value: val.replace( rCRLF, "\r\n" ) }; - } ).get(); - } -} ); - - -var - r20 = /%20/g, - rhash = /#.*$/, - rantiCache = /([?&])_=[^&]*/, - rheaders = /^(.*?):[ \t]*([^\r\n]*)$/mg, - - // #7653, #8125, #8152: local protocol detection - rlocalProtocol = /^(?:about|app|app-storage|.+-extension|file|res|widget):$/, - rnoContent = /^(?:GET|HEAD)$/, - rprotocol = /^\/\//, - - /* Prefilters - * 1) They are useful to introduce custom dataTypes (see ajax/jsonp.js for an example) - * 2) These are called: - * - BEFORE asking for a transport - * - AFTER param serialization (s.data is a string if s.processData is true) - * 3) key is the dataType - * 4) the catchall symbol "*" can be used - * 5) execution will start with transport dataType and THEN continue down to "*" if needed - */ - prefilters = {}, - - /* Transports bindings - * 1) key is the dataType - * 2) the catchall symbol "*" can be used - * 3) selection will start with transport dataType and THEN go to "*" if needed - */ - transports = {}, - - // Avoid comment-prolog char sequence (#10098); must appease lint and evade compression - allTypes = "*/".concat( "*" ), - - // Anchor tag for parsing the document origin - originAnchor = document.createElement( "a" ); - -originAnchor.href = location.href; - -// Base "constructor" for jQuery.ajaxPrefilter and jQuery.ajaxTransport -function addToPrefiltersOrTransports( structure ) { - - // dataTypeExpression is optional and defaults to "*" - return function( dataTypeExpression, func ) { - - if ( typeof dataTypeExpression !== "string" ) { - func = dataTypeExpression; - dataTypeExpression = "*"; - } - - var dataType, - i = 0, - dataTypes = dataTypeExpression.toLowerCase().match( rnothtmlwhite ) || []; - - if ( isFunction( func ) ) { - - // For each dataType in the dataTypeExpression - while ( ( dataType = dataTypes[ i++ ] ) ) { - - // Prepend if requested - if ( dataType[ 0 ] === "+" ) { - dataType = dataType.slice( 1 ) || "*"; - ( structure[ dataType ] = structure[ dataType ] || [] ).unshift( func ); - - // Otherwise append - } else { - ( structure[ dataType ] = structure[ dataType ] || [] ).push( func ); - } - } - } - }; -} - -// Base inspection function for prefilters and transports -function inspectPrefiltersOrTransports( structure, options, originalOptions, jqXHR ) { - - var inspected = {}, - seekingTransport = ( structure === transports ); - - function inspect( dataType ) { - var selected; - inspected[ dataType ] = true; - jQuery.each( structure[ dataType ] || [], function( _, prefilterOrFactory ) { - var dataTypeOrTransport = prefilterOrFactory( options, originalOptions, jqXHR ); - if ( typeof dataTypeOrTransport === "string" && - !seekingTransport && !inspected[ dataTypeOrTransport ] ) { - - options.dataTypes.unshift( dataTypeOrTransport ); - inspect( dataTypeOrTransport ); - return false; - } else if ( seekingTransport ) { - return !( selected = dataTypeOrTransport ); - } - } 
); - return selected; - } - - return inspect( options.dataTypes[ 0 ] ) || !inspected[ "*" ] && inspect( "*" ); -} - -// A special extend for ajax options -// that takes "flat" options (not to be deep extended) -// Fixes #9887 -function ajaxExtend( target, src ) { - var key, deep, - flatOptions = jQuery.ajaxSettings.flatOptions || {}; - - for ( key in src ) { - if ( src[ key ] !== undefined ) { - ( flatOptions[ key ] ? target : ( deep || ( deep = {} ) ) )[ key ] = src[ key ]; - } - } - if ( deep ) { - jQuery.extend( true, target, deep ); - } - - return target; -} - -/* Handles responses to an ajax request: - * - finds the right dataType (mediates between content-type and expected dataType) - * - returns the corresponding response - */ -function ajaxHandleResponses( s, jqXHR, responses ) { - - var ct, type, finalDataType, firstDataType, - contents = s.contents, - dataTypes = s.dataTypes; - - // Remove auto dataType and get content-type in the process - while ( dataTypes[ 0 ] === "*" ) { - dataTypes.shift(); - if ( ct === undefined ) { - ct = s.mimeType || jqXHR.getResponseHeader( "Content-Type" ); - } - } - - // Check if we're dealing with a known content-type - if ( ct ) { - for ( type in contents ) { - if ( contents[ type ] && contents[ type ].test( ct ) ) { - dataTypes.unshift( type ); - break; - } - } - } - - // Check to see if we have a response for the expected dataType - if ( dataTypes[ 0 ] in responses ) { - finalDataType = dataTypes[ 0 ]; - } else { - - // Try convertible dataTypes - for ( type in responses ) { - if ( !dataTypes[ 0 ] || s.converters[ type + " " + dataTypes[ 0 ] ] ) { - finalDataType = type; - break; - } - if ( !firstDataType ) { - firstDataType = type; - } - } - - // Or just use first one - finalDataType = finalDataType || firstDataType; - } - - // If we found a dataType - // We add the dataType to the list if needed - // and return the corresponding response - if ( finalDataType ) { - if ( finalDataType !== dataTypes[ 0 ] ) { - dataTypes.unshift( finalDataType ); - } - return responses[ finalDataType ]; - } -} - -/* Chain conversions given the request and the original response - * Also sets the responseXXX fields on the jqXHR instance - */ -function ajaxConvert( s, response, jqXHR, isSuccess ) { - var conv2, current, conv, tmp, prev, - converters = {}, - - // Work with a copy of dataTypes in case we need to modify it for conversion - dataTypes = s.dataTypes.slice(); - - // Create converters map with lowercased keys - if ( dataTypes[ 1 ] ) { - for ( conv in s.converters ) { - converters[ conv.toLowerCase() ] = s.converters[ conv ]; - } - } - - current = dataTypes.shift(); - - // Convert to each sequential dataType - while ( current ) { - - if ( s.responseFields[ current ] ) { - jqXHR[ s.responseFields[ current ] ] = response; - } - - // Apply the dataFilter if provided - if ( !prev && isSuccess && s.dataFilter ) { - response = s.dataFilter( response, s.dataType ); - } - - prev = current; - current = dataTypes.shift(); - - if ( current ) { - - // There's only work to do if current dataType is non-auto - if ( current === "*" ) { - - current = prev; - - // Convert response if prev dataType is non-auto and differs from current - } else if ( prev !== "*" && prev !== current ) { - - // Seek a direct converter - conv = converters[ prev + " " + current ] || converters[ "* " + current ]; - - // If none found, seek a pair - if ( !conv ) { - for ( conv2 in converters ) { - - // If conv2 outputs current - tmp = conv2.split( " " ); - if ( tmp[ 1 ] === current ) { - - // If prev 
can be converted to accepted input - conv = converters[ prev + " " + tmp[ 0 ] ] || - converters[ "* " + tmp[ 0 ] ]; - if ( conv ) { - - // Condense equivalence converters - if ( conv === true ) { - conv = converters[ conv2 ]; - - // Otherwise, insert the intermediate dataType - } else if ( converters[ conv2 ] !== true ) { - current = tmp[ 0 ]; - dataTypes.unshift( tmp[ 1 ] ); - } - break; - } - } - } - } - - // Apply converter (if not an equivalence) - if ( conv !== true ) { - - // Unless errors are allowed to bubble, catch and return them - if ( conv && s.throws ) { - response = conv( response ); - } else { - try { - response = conv( response ); - } catch ( e ) { - return { - state: "parsererror", - error: conv ? e : "No conversion from " + prev + " to " + current - }; - } - } - } - } - } - } - - return { state: "success", data: response }; -} - -jQuery.extend( { - - // Counter for holding the number of active queries - active: 0, - - // Last-Modified header cache for next request - lastModified: {}, - etag: {}, - - ajaxSettings: { - url: location.href, - type: "GET", - isLocal: rlocalProtocol.test( location.protocol ), - global: true, - processData: true, - async: true, - contentType: "application/x-www-form-urlencoded; charset=UTF-8", - - /* - timeout: 0, - data: null, - dataType: null, - username: null, - password: null, - cache: null, - throws: false, - traditional: false, - headers: {}, - */ - - accepts: { - "*": allTypes, - text: "text/plain", - html: "text/html", - xml: "application/xml, text/xml", - json: "application/json, text/javascript" - }, - - contents: { - xml: /\bxml\b/, - html: /\bhtml/, - json: /\bjson\b/ - }, - - responseFields: { - xml: "responseXML", - text: "responseText", - json: "responseJSON" - }, - - // Data converters - // Keys separate source (or catchall "*") and destination types with a single space - converters: { - - // Convert anything to text - "* text": String, - - // Text to html (true = no transformation) - "text html": true, - - // Evaluate text as a json expression - "text json": JSON.parse, - - // Parse text as xml - "text xml": jQuery.parseXML - }, - - // For options that shouldn't be deep extended: - // you can add your own custom options here if - // and when you create one that shouldn't be - // deep extended (see ajaxExtend) - flatOptions: { - url: true, - context: true - } - }, - - // Creates a full fledged settings object into target - // with both ajaxSettings and settings fields. - // If target is omitted, writes into ajaxSettings. - ajaxSetup: function( target, settings ) { - return settings ? 
- - // Building a settings object - ajaxExtend( ajaxExtend( target, jQuery.ajaxSettings ), settings ) : - - // Extending ajaxSettings - ajaxExtend( jQuery.ajaxSettings, target ); - }, - - ajaxPrefilter: addToPrefiltersOrTransports( prefilters ), - ajaxTransport: addToPrefiltersOrTransports( transports ), - - // Main method - ajax: function( url, options ) { - - // If url is an object, simulate pre-1.5 signature - if ( typeof url === "object" ) { - options = url; - url = undefined; - } - - // Force options to be an object - options = options || {}; - - var transport, - - // URL without anti-cache param - cacheURL, - - // Response headers - responseHeadersString, - responseHeaders, - - // timeout handle - timeoutTimer, - - // Url cleanup var - urlAnchor, - - // Request state (becomes false upon send and true upon completion) - completed, - - // To know if global events are to be dispatched - fireGlobals, - - // Loop variable - i, - - // uncached part of the url - uncached, - - // Create the final options object - s = jQuery.ajaxSetup( {}, options ), - - // Callbacks context - callbackContext = s.context || s, - - // Context for global events is callbackContext if it is a DOM node or jQuery collection - globalEventContext = s.context && - ( callbackContext.nodeType || callbackContext.jquery ) ? - jQuery( callbackContext ) : - jQuery.event, - - // Deferreds - deferred = jQuery.Deferred(), - completeDeferred = jQuery.Callbacks( "once memory" ), - - // Status-dependent callbacks - statusCode = s.statusCode || {}, - - // Headers (they are sent all at once) - requestHeaders = {}, - requestHeadersNames = {}, - - // Default abort message - strAbort = "canceled", - - // Fake xhr - jqXHR = { - readyState: 0, - - // Builds headers hashtable if needed - getResponseHeader: function( key ) { - var match; - if ( completed ) { - if ( !responseHeaders ) { - responseHeaders = {}; - while ( ( match = rheaders.exec( responseHeadersString ) ) ) { - responseHeaders[ match[ 1 ].toLowerCase() + " " ] = - ( responseHeaders[ match[ 1 ].toLowerCase() + " " ] || [] ) - .concat( match[ 2 ] ); - } - } - match = responseHeaders[ key.toLowerCase() + " " ]; - } - return match == null ? null : match.join( ", " ); - }, - - // Raw string - getAllResponseHeaders: function() { - return completed ? 
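- // the raw header string only exists once the request has completed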
responseHeadersString : null; - }, - - // Caches the header - setRequestHeader: function( name, value ) { - if ( completed == null ) { - name = requestHeadersNames[ name.toLowerCase() ] = - requestHeadersNames[ name.toLowerCase() ] || name; - requestHeaders[ name ] = value; - } - return this; - }, - - // Overrides response content-type header - overrideMimeType: function( type ) { - if ( completed == null ) { - s.mimeType = type; - } - return this; - }, - - // Status-dependent callbacks - statusCode: function( map ) { - var code; - if ( map ) { - if ( completed ) { - - // Execute the appropriate callbacks - jqXHR.always( map[ jqXHR.status ] ); - } else { - - // Lazy-add the new callbacks in a way that preserves old ones - for ( code in map ) { - statusCode[ code ] = [ statusCode[ code ], map[ code ] ]; - } - } - } - return this; - }, - - // Cancel the request - abort: function( statusText ) { - var finalText = statusText || strAbort; - if ( transport ) { - transport.abort( finalText ); - } - done( 0, finalText ); - return this; - } - }; - - // Attach deferreds - deferred.promise( jqXHR ); - - // Add protocol if not provided (prefilters might expect it) - // Handle falsy url in the settings object (#10093: consistency with old signature) - // We also use the url parameter if available - s.url = ( ( url || s.url || location.href ) + "" ) - .replace( rprotocol, location.protocol + "//" ); - - // Alias method option to type as per ticket #12004 - s.type = options.method || options.type || s.method || s.type; - - // Extract dataTypes list - s.dataTypes = ( s.dataType || "*" ).toLowerCase().match( rnothtmlwhite ) || [ "" ]; - - // A cross-domain request is in order when the origin doesn't match the current origin. - if ( s.crossDomain == null ) { - urlAnchor = document.createElement( "a" ); - - // Support: IE <=8 - 11, Edge 12 - 15 - // IE throws exception on accessing the href property if url is malformed, - // e.g. 
http://example.com:80x/ - try { - urlAnchor.href = s.url; - - // Support: IE <=8 - 11 only - // Anchor's host property isn't correctly set when s.url is relative - urlAnchor.href = urlAnchor.href; - s.crossDomain = originAnchor.protocol + "//" + originAnchor.host !== - urlAnchor.protocol + "//" + urlAnchor.host; - } catch ( e ) { - - // If there is an error parsing the URL, assume it is crossDomain, - // it can be rejected by the transport if it is invalid - s.crossDomain = true; - } - } - - // Convert data if not already a string - if ( s.data && s.processData && typeof s.data !== "string" ) { - s.data = jQuery.param( s.data, s.traditional ); - } - - // Apply prefilters - inspectPrefiltersOrTransports( prefilters, s, options, jqXHR ); - - // If request was aborted inside a prefilter, stop there - if ( completed ) { - return jqXHR; - } - - // We can fire global events as of now if asked to - // Don't fire events if jQuery.event is undefined in an AMD-usage scenario (#15118) - fireGlobals = jQuery.event && s.global; - - // Watch for a new set of requests - if ( fireGlobals && jQuery.active++ === 0 ) { - jQuery.event.trigger( "ajaxStart" ); - } - - // Uppercase the type - s.type = s.type.toUpperCase(); - - // Determine if request has content - s.hasContent = !rnoContent.test( s.type ); - - // Save the URL in case we're toying with the If-Modified-Since - // and/or If-None-Match header later on - // Remove hash to simplify url manipulation - cacheURL = s.url.replace( rhash, "" ); - - // More options handling for requests with no content - if ( !s.hasContent ) { - - // Remember the hash so we can put it back - uncached = s.url.slice( cacheURL.length ); - - // If data is available and should be processed, append data to url - if ( s.data && ( s.processData || typeof s.data === "string" ) ) { - cacheURL += ( rquery.test( cacheURL ) ? "&" : "?" ) + s.data; - - // #9682: remove data so that it's not used in an eventual retry - delete s.data; - } - - // Add or update anti-cache param if needed - if ( s.cache === false ) { - cacheURL = cacheURL.replace( rantiCache, "$1" ); - uncached = ( rquery.test( cacheURL ) ? "&" : "?" ) + "_=" + ( nonce.guid++ ) + - uncached; - } - - // Put hash and anti-cache on the URL that will be requested (gh-1732) - s.url = cacheURL + uncached; - - // Change '%20' to '+' if this is encoded form body content (gh-2658) - } else if ( s.data && s.processData && - ( s.contentType || "" ).indexOf( "application/x-www-form-urlencoded" ) === 0 ) { - s.data = s.data.replace( r20, "+" ); - } - - // Set the If-Modified-Since and/or If-None-Match header, if in ifModified mode. - if ( s.ifModified ) { - if ( jQuery.lastModified[ cacheURL ] ) { - jqXHR.setRequestHeader( "If-Modified-Since", jQuery.lastModified[ cacheURL ] ); - } - if ( jQuery.etag[ cacheURL ] ) { - jqXHR.setRequestHeader( "If-None-Match", jQuery.etag[ cacheURL ] ); - } - } - - // Set the correct header, if data is being sent - if ( s.data && s.hasContent && s.contentType !== false || options.contentType ) { - jqXHR.setRequestHeader( "Content-Type", s.contentType ); - } - - // Set the Accepts header for the server, depending on the dataType - jqXHR.setRequestHeader( - "Accept", - s.dataTypes[ 0 ] && s.accepts[ s.dataTypes[ 0 ] ] ? - s.accepts[ s.dataTypes[ 0 ] ] + - ( s.dataTypes[ 0 ] !== "*" ? 
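- // append a low-priority wildcard so other content types stay acceptable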
", " + allTypes + "; q=0.01" : "" ) : - s.accepts[ "*" ] - ); - - // Check for headers option - for ( i in s.headers ) { - jqXHR.setRequestHeader( i, s.headers[ i ] ); - } - - // Allow custom headers/mimetypes and early abort - if ( s.beforeSend && - ( s.beforeSend.call( callbackContext, jqXHR, s ) === false || completed ) ) { - - // Abort if not done already and return - return jqXHR.abort(); - } - - // Aborting is no longer a cancellation - strAbort = "abort"; - - // Install callbacks on deferreds - completeDeferred.add( s.complete ); - jqXHR.done( s.success ); - jqXHR.fail( s.error ); - - // Get transport - transport = inspectPrefiltersOrTransports( transports, s, options, jqXHR ); - - // If no transport, we auto-abort - if ( !transport ) { - done( -1, "No Transport" ); - } else { - jqXHR.readyState = 1; - - // Send global event - if ( fireGlobals ) { - globalEventContext.trigger( "ajaxSend", [ jqXHR, s ] ); - } - - // If request was aborted inside ajaxSend, stop there - if ( completed ) { - return jqXHR; - } - - // Timeout - if ( s.async && s.timeout > 0 ) { - timeoutTimer = window.setTimeout( function() { - jqXHR.abort( "timeout" ); - }, s.timeout ); - } - - try { - completed = false; - transport.send( requestHeaders, done ); - } catch ( e ) { - - // Rethrow post-completion exceptions - if ( completed ) { - throw e; - } - - // Propagate others as results - done( -1, e ); - } - } - - // Callback for when everything is done - function done( status, nativeStatusText, responses, headers ) { - var isSuccess, success, error, response, modified, - statusText = nativeStatusText; - - // Ignore repeat invocations - if ( completed ) { - return; - } - - completed = true; - - // Clear timeout if it exists - if ( timeoutTimer ) { - window.clearTimeout( timeoutTimer ); - } - - // Dereference transport for early garbage collection - // (no matter how long the jqXHR object will be used) - transport = undefined; - - // Cache response headers - responseHeadersString = headers || ""; - - // Set readyState - jqXHR.readyState = status > 0 ? 4 : 0; - - // Determine if successful - isSuccess = status >= 200 && status < 300 || status === 304; - - // Get response data - if ( responses ) { - response = ajaxHandleResponses( s, jqXHR, responses ); - } - - // Use a noop converter for missing script but not if jsonp - if ( !isSuccess && - jQuery.inArray( "script", s.dataTypes ) > -1 && - jQuery.inArray( "json", s.dataTypes ) < 0 ) { - s.converters[ "text script" ] = function() {}; - } - - // Convert no matter what (that way responseXXX fields are always set) - response = ajaxConvert( s, response, jqXHR, isSuccess ); - - // If successful, handle type chaining - if ( isSuccess ) { - - // Set the If-Modified-Since and/or If-None-Match header, if in ifModified mode. 
- if ( s.ifModified ) { - modified = jqXHR.getResponseHeader( "Last-Modified" ); - if ( modified ) { - jQuery.lastModified[ cacheURL ] = modified; - } - modified = jqXHR.getResponseHeader( "etag" ); - if ( modified ) { - jQuery.etag[ cacheURL ] = modified; - } - } - - // if no content - if ( status === 204 || s.type === "HEAD" ) { - statusText = "nocontent"; - - // if not modified - } else if ( status === 304 ) { - statusText = "notmodified"; - - // If we have data, let's convert it - } else { - statusText = response.state; - success = response.data; - error = response.error; - isSuccess = !error; - } - } else { - - // Extract error from statusText and normalize for non-aborts - error = statusText; - if ( status || !statusText ) { - statusText = "error"; - if ( status < 0 ) { - status = 0; - } - } - } - - // Set data for the fake xhr object - jqXHR.status = status; - jqXHR.statusText = ( nativeStatusText || statusText ) + ""; - - // Success/Error - if ( isSuccess ) { - deferred.resolveWith( callbackContext, [ success, statusText, jqXHR ] ); - } else { - deferred.rejectWith( callbackContext, [ jqXHR, statusText, error ] ); - } - - // Status-dependent callbacks - jqXHR.statusCode( statusCode ); - statusCode = undefined; - - if ( fireGlobals ) { - globalEventContext.trigger( isSuccess ? "ajaxSuccess" : "ajaxError", - [ jqXHR, s, isSuccess ? success : error ] ); - } - - // Complete - completeDeferred.fireWith( callbackContext, [ jqXHR, statusText ] ); - - if ( fireGlobals ) { - globalEventContext.trigger( "ajaxComplete", [ jqXHR, s ] ); - - // Handle the global AJAX counter - if ( !( --jQuery.active ) ) { - jQuery.event.trigger( "ajaxStop" ); - } - } - } - - return jqXHR; - }, - - getJSON: function( url, data, callback ) { - return jQuery.get( url, data, callback, "json" ); - }, - - getScript: function( url, callback ) { - return jQuery.get( url, undefined, callback, "script" ); - } -} ); - -jQuery.each( [ "get", "post" ], function( _i, method ) { - jQuery[ method ] = function( url, data, callback, type ) { - - // Shift arguments if data argument was omitted - if ( isFunction( data ) ) { - type = type || callback; - callback = data; - data = undefined; - } - - // The url can be an options object (which then must have .url) - return jQuery.ajax( jQuery.extend( { - url: url, - type: method, - dataType: type, - data: data, - success: callback - }, jQuery.isPlainObject( url ) && url ) ); - }; -} ); - -jQuery.ajaxPrefilter( function( s ) { - var i; - for ( i in s.headers ) { - if ( i.toLowerCase() === "content-type" ) { - s.contentType = s.headers[ i ] || ""; - } - } -} ); - - -jQuery._evalUrl = function( url, options, doc ) { - return jQuery.ajax( { - url: url, - - // Make this explicit, since user can override this through ajaxSetup (#11264) - type: "GET", - dataType: "script", - cache: true, - async: false, - global: false, - - // Only evaluate the response if it is successful (gh-4126) - // dataFilter is not invoked for failure responses, so using it instead - // of the default converter is kludgy but it works. 
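- // In effect, the no-op "text script" converter below suppresses the
- // automatic eval, and globalEval runs only inside dataFilter, which
- // ajaxConvert applies to successful responses alone.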
- converters: { - "text script": function() {} - }, - dataFilter: function( response ) { - jQuery.globalEval( response, options, doc ); - } - } ); -}; - - -jQuery.fn.extend( { - wrapAll: function( html ) { - var wrap; - - if ( this[ 0 ] ) { - if ( isFunction( html ) ) { - html = html.call( this[ 0 ] ); - } - - // The elements to wrap the target around - wrap = jQuery( html, this[ 0 ].ownerDocument ).eq( 0 ).clone( true ); - - if ( this[ 0 ].parentNode ) { - wrap.insertBefore( this[ 0 ] ); - } - - wrap.map( function() { - var elem = this; - - while ( elem.firstElementChild ) { - elem = elem.firstElementChild; - } - - return elem; - } ).append( this ); - } - - return this; - }, - - wrapInner: function( html ) { - if ( isFunction( html ) ) { - return this.each( function( i ) { - jQuery( this ).wrapInner( html.call( this, i ) ); - } ); - } - - return this.each( function() { - var self = jQuery( this ), - contents = self.contents(); - - if ( contents.length ) { - contents.wrapAll( html ); - - } else { - self.append( html ); - } - } ); - }, - - wrap: function( html ) { - var htmlIsFunction = isFunction( html ); - - return this.each( function( i ) { - jQuery( this ).wrapAll( htmlIsFunction ? html.call( this, i ) : html ); - } ); - }, - - unwrap: function( selector ) { - this.parent( selector ).not( "body" ).each( function() { - jQuery( this ).replaceWith( this.childNodes ); - } ); - return this; - } -} ); - - -jQuery.expr.pseudos.hidden = function( elem ) { - return !jQuery.expr.pseudos.visible( elem ); -}; -jQuery.expr.pseudos.visible = function( elem ) { - return !!( elem.offsetWidth || elem.offsetHeight || elem.getClientRects().length ); -}; - - - - -jQuery.ajaxSettings.xhr = function() { - try { - return new window.XMLHttpRequest(); - } catch ( e ) {} -}; - -var xhrSuccessStatus = { - - // File protocol always yields status code 0, assume 200 - 0: 200, - - // Support: IE <=9 only - // #1450: sometimes IE returns 1223 when it should be 204 - 1223: 204 - }, - xhrSupported = jQuery.ajaxSettings.xhr(); - -support.cors = !!xhrSupported && ( "withCredentials" in xhrSupported ); -support.ajax = xhrSupported = !!xhrSupported; - -jQuery.ajaxTransport( function( options ) { - var callback, errorCallback; - - // Cross domain only allowed if supported through XMLHttpRequest - if ( support.cors || xhrSupported && !options.crossDomain ) { - return { - send: function( headers, complete ) { - var i, - xhr = options.xhr(); - - xhr.open( - options.type, - options.url, - options.async, - options.username, - options.password - ); - - // Apply custom fields if provided - if ( options.xhrFields ) { - for ( i in options.xhrFields ) { - xhr[ i ] = options.xhrFields[ i ]; - } - } - - // Override mime type if needed - if ( options.mimeType && xhr.overrideMimeType ) { - xhr.overrideMimeType( options.mimeType ); - } - - // X-Requested-With header - // For cross-domain requests, seeing as conditions for a preflight are - // akin to a jigsaw puzzle, we simply never set it to be sure. - // (it can always be set on a per-request basis or even using ajaxSetup) - // For same-domain requests, won't change header if already provided. 
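- // For example, a same-origin caller that pre-sets a truthy value, as in
- //   jQuery.ajax( { url: "/api", headers: { "X-Requested-With": "MyApp" } } ),
- // keeps the default below from being applied (URL and value illustrative).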
- if ( !options.crossDomain && !headers[ "X-Requested-With" ] ) { - headers[ "X-Requested-With" ] = "XMLHttpRequest"; - } - - // Set headers - for ( i in headers ) { - xhr.setRequestHeader( i, headers[ i ] ); - } - - // Callback - callback = function( type ) { - return function() { - if ( callback ) { - callback = errorCallback = xhr.onload = - xhr.onerror = xhr.onabort = xhr.ontimeout = - xhr.onreadystatechange = null; - - if ( type === "abort" ) { - xhr.abort(); - } else if ( type === "error" ) { - - // Support: IE <=9 only - // On a manual native abort, IE9 throws - // errors on any property access that is not readyState - if ( typeof xhr.status !== "number" ) { - complete( 0, "error" ); - } else { - complete( - - // File: protocol always yields status 0; see #8605, #14207 - xhr.status, - xhr.statusText - ); - } - } else { - complete( - xhrSuccessStatus[ xhr.status ] || xhr.status, - xhr.statusText, - - // Support: IE <=9 only - // IE9 has no XHR2 but throws on binary (trac-11426) - // For XHR2 non-text, let the caller handle it (gh-2498) - ( xhr.responseType || "text" ) !== "text" || - typeof xhr.responseText !== "string" ? - { binary: xhr.response } : - { text: xhr.responseText }, - xhr.getAllResponseHeaders() - ); - } - } - }; - }; - - // Listen to events - xhr.onload = callback(); - errorCallback = xhr.onerror = xhr.ontimeout = callback( "error" ); - - // Support: IE 9 only - // Use onreadystatechange to replace onabort - // to handle uncaught aborts - if ( xhr.onabort !== undefined ) { - xhr.onabort = errorCallback; - } else { - xhr.onreadystatechange = function() { - - // Check readyState before timeout as it changes - if ( xhr.readyState === 4 ) { - - // Allow onerror to be called first, - // but that will not handle a native abort - // Also, save errorCallback to a variable - // as xhr.onerror cannot be accessed - window.setTimeout( function() { - if ( callback ) { - errorCallback(); - } - } ); - } - }; - } - - // Create the abort callback - callback = callback( "abort" ); - - try { - - // Do send the request (this may raise an exception) - xhr.send( options.hasContent && options.data || null ); - } catch ( e ) { - - // #14683: Only rethrow if this hasn't been notified as an error yet - if ( callback ) { - throw e; - } - } - }, - - abort: function() { - if ( callback ) { - callback(); - } - } - }; - } -} ); - - - - -// Prevent auto-execution of scripts when no explicit dataType was provided (See gh-2432) -jQuery.ajaxPrefilter( function( s ) { - if ( s.crossDomain ) { - s.contents.script = false; - } -} ); - -// Install script dataType -jQuery.ajaxSetup( { - accepts: { - script: "text/javascript, application/javascript, " + - "application/ecmascript, application/x-ecmascript" - }, - contents: { - script: /\b(?:java|ecma)script\b/ - }, - converters: { - "text script": function( text ) { - jQuery.globalEval( text ); - return text; - } - } -} ); - -// Handle cache's special case and crossDomain -jQuery.ajaxPrefilter( "script", function( s ) { - if ( s.cache === undefined ) { - s.cache = false; - } - if ( s.crossDomain ) { - s.type = "GET"; - } -} ); - -// Bind script tag hack transport -jQuery.ajaxTransport( "script", function( s ) { - - // This transport only deals with cross domain or forced-by-attrs requests - if ( s.crossDomain || s.scriptAttrs ) { - var script, callback; - return { - send: function( _, complete ) { - script = jQuery( " -
<script>" )
- .attr( s.scriptAttrs || {} )
- .prop( { charset: s.scriptCharset, src: s.url } )
- .on( "load error", callback = function( evt ) {
- script.remove();
- callback = null;
- if ( evt ) {
- complete( evt.type === "error" ? 404 : 200, evt.type );
- }
- } );
-
- // Use native DOM manipulation to avoid our domManip AJAX trickery
- document.head.appendChild( script[ 0 ] );
- },
- abort: function() {
- if ( callback ) {
- callback();
- }
- }
- };
- }
-} );
- diff --git a/spaces/pycoming/bingo/src/lib/bots/bing/types.ts b/spaces/pycoming/bingo/src/lib/bots/bing/types.ts deleted file mode 100644 index 02cd5e8b01e3529642d28dc1539bf958f4ac420b..0000000000000000000000000000000000000000 --- a/spaces/pycoming/bingo/src/lib/bots/bing/types.ts +++ /dev/null @@ -1,259 +0,0 @@ -export type Author = 'user' | 'system' | 'bot' - -export type BotId = 'bing' - -export enum BingConversationStyle { - Creative = 'Creative', - Balanced = 'Balanced', - Precise = 'Precise' -} - -export enum ErrorCode { - CONVERSATION_LIMIT = 'CONVERSATION_LIMIT', - BING_UNAUTHORIZED = 'BING_UNAUTHORIZED', - BING_FORBIDDEN = 'BING_FORBIDDEN', - BING_CAPTCHA = 'BING_CAPTCHA', - THROTTLE_LIMIT = 'THROTTLE_LIMIT', - NOTFOUND_ERROR = 'NOT_FOUND_ERROR', - UNKOWN_ERROR = 'UNKOWN_ERROR', - NETWORK_ERROR = 'NETWORK_ERROR', -} - -export class ChatError extends Error { - code: ErrorCode - constructor(message: string, code: ErrorCode) { - super(message) - this.code = code - } -} - -export type ChatMessageModel = { - id: string - author: Author - text: string - error?: ChatError - throttling?: Throttling - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] -} - -export interface ConversationModel { - messages: ChatMessageModel[] -} - -export type Event = - | { - type: 'UPDATE_ANSWER' - data: { - text: string - spokenText?: string - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] - throttling?: Throttling - } - } - | { - type: 'DONE' - } - | { - type: 'ERROR' - error: ChatError - } - -export interface SendMessageParams { - prompt: string - imageUrl?: string - options: T - onEvent: (event: Event) => void - signal?: AbortSignal -} - -export interface ConversationResponse { - conversationId: string - clientId: string - conversationSignature: string - result: { - value: string - message?: string - } -} - -export interface Telemetry { - metrics?: null - startTime: string -} - -export interface ChatUpdateArgument { - messages?: ChatResponseMessage[] - throttling?: Throttling - requestId: string - result: null -} - -export type ChatUpdateCompleteResponse = { - type: 2 - invocationId: string - item: ChatResponseItem -} | { - type: 1 - target: string - arguments: ChatUpdateArgument[] -} | { - type: 3 - invocationId: string -} | { - type: 6 | 7 -} - -export interface ChatRequestResult { - value: string - serviceVersion: string - error?: string -} - -export interface ChatResponseItem { - messages: ChatResponseMessage[] - firstNewMessageIndex: number - suggestedResponses: null - conversationId: string - requestId: string - conversationExpiryTime: string - telemetry: Telemetry - result: ChatRequestResult - throttling: Throttling -} -export enum InvocationEventType { - Invocation = 1, - StreamItem = 2, - Completion = 3, - StreamInvocation = 4, - CancelInvocation = 5, - Ping = 6, - Close = 7, -} - -// https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts - -export interface ConversationInfo { - conversationId: string - clientId: string - conversationSignature: string - invocationId: number - conversationStyle: BingConversationStyle - prompt: string - imageUrl?: string -} - -export interface BingChatResponse { - conversationSignature: string - conversationId: string - clientId: string - invocationId: number - conversationExpiryTime: Date - response: string - details: ChatResponseMessage -} - -export interface Throttling { - maxNumLongDocSummaryUserMessagesInConversation: number - maxNumUserMessagesInConversation: number - 
numLongDocSummaryUserMessagesInConversation: number - numUserMessagesInConversation: number -} - -export interface ChatResponseMessage { - text: string - spokenText?: string - author: string - createdAt: Date - timestamp: Date - messageId: string - requestId: string - offense: string - adaptiveCards: AdaptiveCard[] - sourceAttributions: SourceAttribution[] - feedback: Feedback - contentOrigin: string - messageType?: string - contentType?: string - privacy: null - suggestedResponses: SuggestedResponse[] -} - -export interface AdaptiveCard { - type: string - version: string - body: Body[] -} - -export interface Body { - type: string - text: string - wrap: boolean - size?: string -} - -export interface Feedback { - tag: null - updatedOn: null - type: string -} - -export interface SourceAttribution { - providerDisplayName: string - seeMoreUrl: string - searchQuery: string -} - -export interface SuggestedResponse { - text: string - author?: Author - createdAt?: Date - timestamp?: Date - messageId?: string - messageType?: string - offense?: string - feedback?: Feedback - contentOrigin?: string - privacy?: null -} - -export interface KBlobRequest { - knowledgeRequest: KnowledgeRequestContext - imageBase64?: string -} - -export interface KBlobResponse { - blobId: string - processedBlobId?: string -} - -export interface KnowledgeRequestContext { - imageInfo: ImageInfo; - knowledgeRequest: KnowledgeRequest; -} - -export interface ImageInfo { - url?: string; -} - -export interface KnowledgeRequest { - invokedSkills: string[]; - subscriptionId: string; - invokedSkillsRequestData: InvokedSkillsRequestData; - convoData: ConvoData; -} - -export interface ConvoData { - convoid: string; - convotone: BingConversationStyle; -} - -export interface InvokedSkillsRequestData { - enableFaceBlur: boolean; -} - -export interface FileItem { - url: string; - status?: 'loading' | 'error' | 'loaded' -} diff --git a/spaces/pytorch/DeepLabV3/app.py b/spaces/pytorch/DeepLabV3/app.py deleted file mode 100644 index 0d67dc053a711c0c00aa9e6132b526901aa3e8f0..0000000000000000000000000000000000000000 --- a/spaces/pytorch/DeepLabV3/app.py +++ /dev/null @@ -1,57 +0,0 @@ -import torch -model = torch.hub.load('pytorch/vision:v0.9.0', 'deeplabv3_resnet101', pretrained=True) -model.eval() -# Download an example image from the pytorch website -import urllib -url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") -try: urllib.URLopener().retrieve(url, filename) -except: urllib.request.urlretrieve(url, filename) -# sample execution (requires torchvision) -from PIL import Image -from torchvision import transforms -import gradio as gr -import matplotlib.pyplot as plt - - -def inference(input_image): - preprocess = transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), - ]) - - input_tensor = preprocess(input_image) - input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model - - # move the input and model to GPU for speed if available - if torch.cuda.is_available(): - input_batch = input_batch.to('cuda') - model.to('cuda') - - with torch.no_grad(): - output = model(input_batch)['out'][0] - output_predictions = output.argmax(0) - # create a color pallette, selecting a color for each class - palette = torch.tensor([2 ** 25 - 1, 2 ** 15 - 1, 2 ** 21 - 1]) - colors = torch.as_tensor([i for i in range(21)])[:, None] * palette - colors = (colors % 255).numpy().astype("uint8") - - # plot the semantic segmentation 
predictions of 21 classes in each color - r = Image.fromarray(output_predictions.byte().cpu().numpy()).resize(input_image.size) - r.putpalette(colors) - plt.imshow(r) - return plt - -title = "DEEPLABV3-RESNET101" -description = "demo for DEEPLABV3-RESNET101, DeepLabV3 model with a ResNet-101 backbone. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below." -article = "
Rethinking Atrous Convolution for Semantic Image Segmentation | Github Repo
" - -gr.Interface( - inference, - gr.inputs.Image(type="pil", label="Input"), - gr.outputs.Image(type="plot", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ["dog.jpg"] - ]).launch() \ No newline at end of file diff --git a/spaces/qefunaba/nicky007-stable-diffusion-logo-fine-tuned/app.py b/spaces/qefunaba/nicky007-stable-diffusion-logo-fine-tuned/app.py deleted file mode 100644 index 64b4b06d5c2039e5b801d77f1388c0cdddfa76dd..0000000000000000000000000000000000000000 --- a/spaces/qefunaba/nicky007-stable-diffusion-logo-fine-tuned/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/nicky007/stable-diffusion-logo-fine-tuned").launch() \ No newline at end of file diff --git a/spaces/qingxu98/academic-chatgpt-beta/theme.py b/spaces/qingxu98/academic-chatgpt-beta/theme.py deleted file mode 100644 index 1cc26b06d994eba6d37aa86f3bbfc12fc164731c..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/academic-chatgpt-beta/theme.py +++ /dev/null @@ -1,231 +0,0 @@ -import gradio as gr -from toolbox import get_conf -CODE_HIGHLIGHT, = get_conf('CODE_HIGHLIGHT') -# gradio可用颜色列表 -# gr.themes.utils.colors.slate (石板色) -# gr.themes.utils.colors.gray (灰色) -# gr.themes.utils.colors.zinc (锌色) -# gr.themes.utils.colors.neutral (中性色) -# gr.themes.utils.colors.stone (石头色) -# gr.themes.utils.colors.red (红色) -# gr.themes.utils.colors.orange (橙色) -# gr.themes.utils.colors.amber (琥珀色) -# gr.themes.utils.colors.yellow (黄色) -# gr.themes.utils.colors.lime (酸橙色) -# gr.themes.utils.colors.green (绿色) -# gr.themes.utils.colors.emerald (祖母绿) -# gr.themes.utils.colors.teal (青蓝色) -# gr.themes.utils.colors.cyan (青色) -# gr.themes.utils.colors.sky (天蓝色) -# gr.themes.utils.colors.blue (蓝色) -# gr.themes.utils.colors.indigo (靛蓝色) -# gr.themes.utils.colors.violet (紫罗兰色) -# gr.themes.utils.colors.purple (紫色) -# gr.themes.utils.colors.fuchsia (洋红色) -# gr.themes.utils.colors.pink (粉红色) -# gr.themes.utils.colors.rose (玫瑰色) - - -def adjust_theme(): - try: - color_er = gr.themes.utils.colors.fuchsia - set_theme = gr.themes.Default( - primary_hue=gr.themes.utils.colors.orange, - neutral_hue=gr.themes.utils.colors.gray, - font=["sans-serif", "Microsoft YaHei", "ui-sans-serif", "system-ui", - "sans-serif", gr.themes.utils.fonts.GoogleFont("Source Sans Pro")], - font_mono=["ui-monospace", "Consolas", "monospace", gr.themes.utils.fonts.GoogleFont("IBM Plex Mono")]) - set_theme.set( - # Colors - input_background_fill_dark="*neutral_800", - # Transition - button_transition="none", - # Shadows - button_shadow="*shadow_drop", - button_shadow_hover="*shadow_drop_lg", - button_shadow_active="*shadow_inset", - input_shadow="0 0 0 *shadow_spread transparent, *shadow_inset", - input_shadow_focus="0 0 0 *shadow_spread *secondary_50, *shadow_inset", - input_shadow_focus_dark="0 0 0 *shadow_spread *neutral_700, *shadow_inset", - checkbox_label_shadow="*shadow_drop", - block_shadow="*shadow_drop", - form_gap_width="1px", - # Button borders - input_border_width="1px", - input_background_fill="white", - # Gradients - stat_background_fill="linear-gradient(to right, *primary_400, *primary_200)", - stat_background_fill_dark="linear-gradient(to right, *primary_400, *primary_600)", - error_background_fill=f"linear-gradient(to right, {color_er.c100}, *background_fill_secondary)", - error_background_fill_dark="*background_fill_primary", - checkbox_label_background_fill="linear-gradient(to top, *neutral_50, white)", - checkbox_label_background_fill_dark="linear-gradient(to top, 
*neutral_900, *neutral_800)", - checkbox_label_background_fill_hover="linear-gradient(to top, *neutral_100, white)", - checkbox_label_background_fill_hover_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - button_primary_background_fill="linear-gradient(to bottom right, *primary_100, *primary_300)", - button_primary_background_fill_dark="linear-gradient(to bottom right, *primary_500, *primary_600)", - button_primary_background_fill_hover="linear-gradient(to bottom right, *primary_100, *primary_200)", - button_primary_background_fill_hover_dark="linear-gradient(to bottom right, *primary_500, *primary_500)", - button_primary_border_color_dark="*primary_500", - button_secondary_background_fill="linear-gradient(to bottom right, *neutral_100, *neutral_200)", - button_secondary_background_fill_dark="linear-gradient(to bottom right, *neutral_600, *neutral_700)", - button_secondary_background_fill_hover="linear-gradient(to bottom right, *neutral_100, *neutral_100)", - button_secondary_background_fill_hover_dark="linear-gradient(to bottom right, *neutral_600, *neutral_600)", - button_cancel_background_fill=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c200})", - button_cancel_background_fill_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c700})", - button_cancel_background_fill_hover=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c100})", - button_cancel_background_fill_hover_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c600})", - button_cancel_border_color=color_er.c200, - button_cancel_border_color_dark=color_er.c600, - button_cancel_text_color=color_er.c600, - button_cancel_text_color_dark="white", - ) - except: - set_theme = None - print('gradio版本较旧, 不能自定义字体和颜色') - return set_theme - - -advanced_css = """ -/* 设置表格的外边距为1em,内部单元格之间边框合并,空单元格显示. */ -.markdown-body table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} - -/* 设置表格单元格的内边距为5px,边框粗细为1.2px,颜色为--border-color-primary. */ -.markdown-body th, .markdown-body td { - border: 1.2px solid var(--border-color-primary); - padding: 5px; -} - -/* 设置表头背景颜色为rgba(175,184,193,0.2),透明度为0.2. */ -.markdown-body thead { - background-color: rgba(175,184,193,0.2); -} - -/* 设置表头单元格的内边距为0.5em和0.2em. */ -.markdown-body thead th { - padding: .5em .2em; -} - -/* 去掉列表前缀的默认间距,使其与文本线对齐. */ -.markdown-body ol, .markdown-body ul { - padding-inline-start: 2em !important; -} - -/* 设定聊天气泡的样式,包括圆角、最大宽度和阴影等. */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - /* padding: var(--spacing-xl) !important; */ - /* font-size: var(--text-md) !important; */ - /* line-height: var(--line-md) !important; */ - /* min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */ - /* min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */ -} -[data-testid = "bot"] { - max-width: 95%; - /* width: auto !important; */ - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 100%; - /* width: auto !important; */ - border-bottom-right-radius: 0 !important; -} - -/* 行内代码的背景设为淡灰色,设定圆角和间距. 
*/ -.markdown-body code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 设定代码块的样式,包括背景颜色、内、外边距、圆角。 */ -.markdown-body pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: rgba(175,184,193,0.2); - border-radius: 10px; - padding: 1em; - margin: 1em 2em 1em 0.5em; -} - -""" - -if CODE_HIGHLIGHT: - advanced_css += """ - -.hll { background-color: #ffffcc } -.c { color: #3D7B7B; font-style: italic } /* Comment */ -.err { border: 1px solid #FF0000 } /* Error */ -.k { color: hsl(197, 94%, 51%); font-weight: bold } /* Keyword */ -.o { color: #666666 } /* Operator */ -.ch { color: #3D7B7B; font-style: italic } /* Comment.Hashbang */ -.cm { color: #3D7B7B; font-style: italic } /* Comment.Multiline */ -.cp { color: #9C6500 } /* Comment.Preproc */ -.cpf { color: #3D7B7B; font-style: italic } /* Comment.PreprocFile */ -.c1 { color: #3D7B7B; font-style: italic } /* Comment.Single */ -.cs { color: #3D7B7B; font-style: italic } /* Comment.Special */ -.gd { color: #A00000 } /* Generic.Deleted */ -.ge { font-style: italic } /* Generic.Emph */ -.gr { color: #E40000 } /* Generic.Error */ -.gh { color: #000080; font-weight: bold } /* Generic.Heading */ -.gi { color: #008400 } /* Generic.Inserted */ -.go { color: #717171 } /* Generic.Output */ -.gp { color: #000080; font-weight: bold } /* Generic.Prompt */ -.gs { font-weight: bold } /* Generic.Strong */ -.gu { color: #800080; font-weight: bold } /* Generic.Subheading */ -.gt { color: #a9dd00 } /* Generic.Traceback */ -.kc { color: #008000; font-weight: bold } /* Keyword.Constant */ -.kd { color: #008000; font-weight: bold } /* Keyword.Declaration */ -.kn { color: #008000; font-weight: bold } /* Keyword.Namespace */ -.kp { color: #008000 } /* Keyword.Pseudo */ -.kr { color: #008000; font-weight: bold } /* Keyword.Reserved */ -.kt { color: #B00040 } /* Keyword.Type */ -.m { color: #666666 } /* Literal.Number */ -.s { color: #BA2121 } /* Literal.String */ -.na { color: #687822 } /* Name.Attribute */ -.nb { color: #e5f8c3 } /* Name.Builtin */ -.nc { color: #ffad65; font-weight: bold } /* Name.Class */ -.no { color: #880000 } /* Name.Constant */ -.nd { color: #AA22FF } /* Name.Decorator */ -.ni { color: #717171; font-weight: bold } /* Name.Entity */ -.ne { color: #CB3F38; font-weight: bold } /* Name.Exception */ -.nf { color: #f9f978 } /* Name.Function */ -.nl { color: #767600 } /* Name.Label */ -.nn { color: #0000FF; font-weight: bold } /* Name.Namespace */ -.nt { color: #008000; font-weight: bold } /* Name.Tag */ -.nv { color: #19177C } /* Name.Variable */ -.ow { color: #AA22FF; font-weight: bold } /* Operator.Word */ -.w { color: #bbbbbb } /* Text.Whitespace */ -.mb { color: #666666 } /* Literal.Number.Bin */ -.mf { color: #666666 } /* Literal.Number.Float */ -.mh { color: #666666 } /* Literal.Number.Hex */ -.mi { color: #666666 } /* Literal.Number.Integer */ -.mo { color: #666666 } /* Literal.Number.Oct */ -.sa { color: #BA2121 } /* Literal.String.Affix */ -.sb { color: #BA2121 } /* Literal.String.Backtick */ -.sc { color: #BA2121 } /* Literal.String.Char */ -.dl { color: #BA2121 } /* Literal.String.Delimiter */ -.sd { color: #BA2121; font-style: italic } /* Literal.String.Doc */ -.s2 { color: #2bf840 } /* Literal.String.Double */ -.se { color: #AA5D1F; font-weight: bold } /* Literal.String.Escape */ -.sh { color: #BA2121 } /* Literal.String.Heredoc */ -.si { color: #A45A77; font-weight: bold } /* 
Literal.String.Interpol */ -.sx { color: #008000 } /* Literal.String.Other */ -.sr { color: #A45A77 } /* Literal.String.Regex */ -.s1 { color: #BA2121 } /* Literal.String.Single */ -.ss { color: #19177C } /* Literal.String.Symbol */ -.bp { color: #008000 } /* Name.Builtin.Pseudo */ -.fm { color: #0000FF } /* Name.Function.Magic */ -.vc { color: #19177C } /* Name.Variable.Class */ -.vg { color: #19177C } /* Name.Variable.Global */ -.vi { color: #19177C } /* Name.Variable.Instance */ -.vm { color: #19177C } /* Name.Variable.Magic */ -.il { color: #666666 } /* Literal.Number.Integer.Long */ -""" diff --git a/spaces/quidiaMuxgu/Expedit-SAM/CRACK Passmark WirelessMon Pro V2.1.md b/spaces/quidiaMuxgu/Expedit-SAM/CRACK Passmark WirelessMon Pro V2.1.md deleted file mode 100644 index b936ae671c60fc5b64aa29c890b566e572ef8233..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/CRACK Passmark WirelessMon Pro V2.1.md +++ /dev/null @@ -1,8 +0,0 @@ -
-

WirelessMon has the ability to display intermittent and hidden access points - those with a MAC address but no SSID. The access point connection window allows you to connect to an access point via its MAC address.
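As a rough illustration of that behaviour (not WirelessMon's proprietary implementation), a hidden access point still appears in scan results with a BSSID (MAC address) and a signal reading; only the SSID field is empty. The minimal Python sketch below filters scan results for such entries; the sample data and field names are assumed for illustration, and obtaining real scan results is platform-specific (netsh, iwlist, a vendor API, ...).

```python
# Minimal sketch: identify hidden access points in Wi-Fi scan results.
# Assumption: results are already available as dicts with "ssid",
# "bssid" (MAC address) and "signal_dbm" keys; the entries are made up.

scan_results = [
    {"ssid": "HomeNet",  "bssid": "00:1a:2b:3c:4d:5e", "signal_dbm": -48},
    {"ssid": "",         "bssid": "66:77:88:99:aa:bb", "signal_dbm": -60},  # hidden
    {"ssid": "CafeWifi", "bssid": "de:ad:be:ef:00:01", "signal_dbm": -71},
]

def hidden_access_points(results):
    """Return entries that broadcast a MAC address but no SSID."""
    return [ap for ap in results if ap["bssid"] and not ap["ssid"]]

for ap in hidden_access_points(scan_results):
    print(f"hidden AP {ap['bssid']} at {ap['signal_dbm']} dBm")
```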

-

When you download files over WLAN, a large number of files may be transferred to a client computer, where the service user can see them. WirelessMon is a small piece of software that takes up less disk space than the average communication application. It is popular in India, the United States, and Indonesia. Several improvements have been made to make the user interface more responsive when requesting information from a network card. Overall, this program offers a variety of network diagnostic capabilities in an easy-to-use interface that consumes very few system resources.
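As a sketch of the per-adapter statistics such a diagnostic tool reports, the cross-platform psutil library exposes byte and packet counters per network interface. This illustrates the general idea only and is not how WirelessMon itself is implemented.

```python
# Sketch: report per-interface traffic counters, the kind of data a
# network diagnostic tool displays. Requires: pip install psutil
import psutil

for nic, c in psutil.net_io_counters(pernic=True).items():
    print(f"{nic}: sent={c.bytes_sent} B, received={c.bytes_recv} B, "
          f"packets={c.packets_sent + c.packets_recv}")
```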

-

CRACK Passmark WirelessMon Pro v2.1


Download File: https://geags.com/2uCrRN



-


-

It is also a handy little utility that can be used to monitor the status of other computers on your network.
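A minimal, hypothetical way to monitor other computers is a reachability check with the system ping command, as sketched below. The host addresses are made up, and real monitoring tools use richer probes than a single ping.

```python
# Sketch: check whether hosts on the local network answer a single ping.
import platform
import subprocess

def is_reachable(host: str) -> bool:
    """Return True if the host answers one ping within five seconds."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    try:
        result = subprocess.run(
            ["ping", count_flag, "1", host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            timeout=5,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

for host in ("192.168.1.1", "192.168.1.42"):  # hypothetical addresses
    print(host, "is up" if is_reachable(host) else "is down")
```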

-
-
\ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Elements Of Astro Mechanics Van De Kamp Pdf Download ((FULL)).md b/spaces/quidiaMuxgu/Expedit-SAM/Elements Of Astro Mechanics Van De Kamp Pdf Download ((FULL)).md deleted file mode 100644 index 91c541b403e2794ab9f58692956e65670de743c7..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Elements Of Astro Mechanics Van De Kamp Pdf Download ((FULL)).md +++ /dev/null @@ -1,7 +0,0 @@ -
-

    In general, the rejection of problematic data was necessary to ensure the reliability of the astrometric reductions. In Gaia, individual transit data are automatically analysed based on the apparent brightness of the asteroid and the local sky background. The brightness range is taken into account, as well as the brightness variation over the observation and the full extent of the photometric errors. The main causes of rejection are listed below; a short illustrative sketch of these cuts follows the list.

    Too faint apparent brightness. The ephemeris of the transit is used to estimate the apparent brightness of the asteroid. If the apparent brightness of the object is too low, the associated uncertainty will be very large, and the transit cannot be well fitted. In that case, the position of the asteroid cannot be measured with the required precision.

    -

    Elements Of Astro Mechanics Van De Kamp Pdf Download


    Download - https://geags.com/2uCrop



    -

    The position is not well defined. This occurs when the asteroid is so faint that the astrometric reduction cannot determine its position reliably. It is generally resolved by increasing the exposure time.
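To make the two criteria above concrete, here is a minimal illustrative sketch. The Transit structure and both thresholds are invented for illustration; the actual Gaia pipeline applies far more elaborate cuts.

```python
# Illustrative sketch only: reject asteroid transits that are too faint
# or whose fitted position is too uncertain. The Transit fields and the
# thresholds are hypothetical, not the Gaia pipeline's actual values.
from dataclasses import dataclass

@dataclass
class Transit:
    predicted_mag: float       # apparent magnitude estimated from the ephemeris
    position_error_mas: float  # formal position uncertainty (milliarcseconds)

MAG_LIMIT = 21.0        # fainter than this: uncertainty too large to fit
POS_ERROR_LIMIT = 10.0  # above this: position not well enough defined

def accept(transit: Transit) -> bool:
    if transit.predicted_mag > MAG_LIMIT:
        return False  # too faint: the transit cannot be well fitted
    if transit.position_error_mas > POS_ERROR_LIMIT:
        return False  # position not well defined
    return True

transits = [Transit(20.1, 2.5), Transit(21.8, 3.0), Transit(19.5, 14.0)]
print([accept(t) for t in transits])  # [True, False, False]
```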

    -

    The astrometric calibration module, which is activated by the appropriate telescope and is the only module specific to Gaia, is the fundamental tool for determining the positions of the SSOs with respect to the CCDs. The AGIS solutions provide the transformation between the focal-plane and sky coordinate systems, which is used to transform the raw data into the astrometric reference frame and to make a first clean cut through the observations. These transformations are specific to each telescope and are stored in the database in the form of a catalogue. This is the procedure described below in Sect. 3.2.
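As a toy illustration of the calibration idea, the sketch below applies a stored per-telescope transformation to map focal-plane (CCD) coordinates onto sky coordinates. A simple affine model stands in for the real astrometric calibration, and every coefficient is invented.

```python
# Toy sketch: focal-plane (pixel) coordinates -> sky coordinates via a
# stored per-telescope transformation, sky = A @ xy + b. An affine model
# stands in for the real calibration; all numbers below are made up.
import numpy as np

calibration_catalogue = {
    "telescope_1": {
        "A": np.array([[1.0e-4, 2.0e-7],
                       [-2.0e-7, 1.0e-4]]),  # degrees per pixel
        "b": np.array([56.75, -12.30]),      # field centre (RA, Dec), degrees
    },
}

def focal_plane_to_sky(telescope: str, xy_pixels: np.ndarray) -> np.ndarray:
    cal = calibration_catalogue[telescope]
    return cal["A"] @ xy_pixels + cal["b"]

print(focal_plane_to_sky("telescope_1", np.array([1024.0, 768.0])))
```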

    -
    -
    \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Elf Bowling Hawaiian Vacation Crack Download.md b/spaces/quidiaMuxgu/Expedit-SAM/Elf Bowling Hawaiian Vacation Crack Download.md deleted file mode 100644 index bf16cb9bca227e87bc44127bbf581d500bac6e0e..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Elf Bowling Hawaiian Vacation Crack Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    elf bowling hawaiian vacation crack download


Download: https://geags.com/2uCsLn



- -Elf Bowling Holiday Pack Free Download PC Game Cracked in ... Download Elf Bowling Hawaiian Vacation • Windows. Games @ The Iso Zone .... Elf bowling ...
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Jdk 1.6 Free Download For Windows 7 64 Bit Cnet PATCHED.md b/spaces/quidiaMuxgu/Expedit-SAM/Jdk 1.6 Free Download For Windows 7 64 Bit Cnet PATCHED.md deleted file mode 100644 index 9564b8d9e3d2485f8bc01070630080a9d5fe42af..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Jdk 1.6 Free Download For Windows 7 64 Bit Cnet PATCHED.md +++ /dev/null @@ -1,6 +0,0 @@ -

    jdk 1.6 free download for windows 7 64 bit cnet


Download: https://geags.com/2uCqsc



    -
-We looked at JDK Version 8 for 32-bit Windows. The latest update includes various bug and security fixes. Pros. Toolset: Java Development Kit ...
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Magic Engine Fx V1.1.1 !!TOP!! Cracked Version.md b/spaces/quidiaMuxgu/Expedit-SAM/Magic Engine Fx V1.1.1 !!TOP!! Cracked Version.md deleted file mode 100644 index 995966fa22477abe257fcf298222c8cecf7105c4..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Magic Engine Fx V1.1.1 !!TOP!! Cracked Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

    magic engine fx v1.1.1 cracked version


Download: https://geags.com/2uCrmr



    - -FUT 19 Mod APKK.. Magic Engine Fx V1.1.1 Cracked Version Of Action . cw zip crack grevi fifa 14 origin crack v1.1 download team laxity crack download ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/spec_utils.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/spec_utils.py deleted file mode 100644 index 062d9050d85c036f8ebafc9c64f1501cff747568..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/spec_utils.py +++ /dev/null @@ -1,666 +0,0 @@ -import os, librosa -import numpy as np -import soundfile as sf -from tqdm import tqdm -import json, math, hashlib - - -def crop_center(h1, h2): - h1_shape = h1.size() - h2_shape = h2.size() - - if h1_shape[3] == h2_shape[3]: - return h1 - elif h1_shape[3] < h2_shape[3]: - raise ValueError("h1_shape[3] must be greater than h2_shape[3]") - - # s_freq = (h2_shape[2] - h1_shape[2]) // 2 - # e_freq = s_freq + h1_shape[2] - s_time = (h1_shape[3] - h2_shape[3]) // 2 - e_time = s_time + h2_shape[3] - h1 = h1[:, :, :, s_time:e_time] - - return h1 - - -def wave_to_spectrogram( - wave, hop_length, n_fft, mid_side=False, mid_side_b2=False, reverse=False -): - if reverse: - wave_left = np.flip(np.asfortranarray(wave[0])) - wave_right = np.flip(np.asfortranarray(wave[1])) - elif mid_side: - wave_left = np.asfortranarray(np.add(wave[0], wave[1]) / 2) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1])) - elif mid_side_b2: - wave_left = np.asfortranarray(np.add(wave[1], wave[0] * 0.5)) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1] * 0.5)) - else: - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - - spec_left = librosa.stft(wave_left, n_fft, hop_length=hop_length) - spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length) - - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def wave_to_spectrogram_mt( - wave, hop_length, n_fft, mid_side=False, mid_side_b2=False, reverse=False -): - import threading - - if reverse: - wave_left = np.flip(np.asfortranarray(wave[0])) - wave_right = np.flip(np.asfortranarray(wave[1])) - elif mid_side: - wave_left = np.asfortranarray(np.add(wave[0], wave[1]) / 2) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1])) - elif mid_side_b2: - wave_left = np.asfortranarray(np.add(wave[1], wave[0] * 0.5)) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1] * 0.5)) - else: - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - - def run_thread(**kwargs): - global spec_left - spec_left = librosa.stft(**kwargs) - - thread = threading.Thread( - target=run_thread, - kwargs={"y": wave_left, "n_fft": n_fft, "hop_length": hop_length}, - ) - thread.start() - spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length) - thread.join() - - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def combine_spectrograms(specs, mp): - l = min([specs[i].shape[2] for i in specs]) - spec_c = np.zeros(shape=(2, mp.param["bins"] + 1, l), dtype=np.complex64) - offset = 0 - bands_n = len(mp.param["band"]) - - for d in range(1, bands_n + 1): - h = mp.param["band"][d]["crop_stop"] - mp.param["band"][d]["crop_start"] - spec_c[:, offset : offset + h, :l] = specs[d][ - :, mp.param["band"][d]["crop_start"] : mp.param["band"][d]["crop_stop"], :l - ] - offset += h - - if offset > mp.param["bins"]: - raise ValueError("Too much bins") - - # lowpass fiter - if ( - mp.param["pre_filter_start"] > 0 - ): # and mp.param['band'][bands_n]['res_type'] in ['scipy', 'polyphase']: - if bands_n == 1: - spec_c = fft_lp_filter( - spec_c, mp.param["pre_filter_start"], 
mp.param["pre_filter_stop"] - ) - else: - gp = 1 - for b in range( - mp.param["pre_filter_start"] + 1, mp.param["pre_filter_stop"] - ): - g = math.pow( - 10, -(b - mp.param["pre_filter_start"]) * (3.5 - gp) / 20.0 - ) - gp = g - spec_c[:, b, :] *= g - - return np.asfortranarray(spec_c) - - -def spectrogram_to_image(spec, mode="magnitude"): - if mode == "magnitude": - if np.iscomplexobj(spec): - y = np.abs(spec) - else: - y = spec - y = np.log10(y**2 + 1e-8) - elif mode == "phase": - if np.iscomplexobj(spec): - y = np.angle(spec) - else: - y = spec - - y -= y.min() - y *= 255 / y.max() - img = np.uint8(y) - - if y.ndim == 3: - img = img.transpose(1, 2, 0) - img = np.concatenate([np.max(img, axis=2, keepdims=True), img], axis=2) - - return img - - -def reduce_vocal_aggressively(X, y, softmask): - v = X - y - y_mag_tmp = np.abs(y) - v_mag_tmp = np.abs(v) - - v_mask = v_mag_tmp > y_mag_tmp - y_mag = np.clip(y_mag_tmp - v_mag_tmp * v_mask * softmask, 0, np.inf) - - return y_mag * np.exp(1.0j * np.angle(y)) - - -def mask_silence(mag, ref, thres=0.2, min_range=64, fade_size=32): - if min_range < fade_size * 2: - raise ValueError("min_range must be >= fade_area * 2") - - mag = mag.copy() - - idx = np.where(ref.mean(axis=(0, 1)) < thres)[0] - starts = np.insert(idx[np.where(np.diff(idx) != 1)[0] + 1], 0, idx[0]) - ends = np.append(idx[np.where(np.diff(idx) != 1)[0]], idx[-1]) - uninformative = np.where(ends - starts > min_range)[0] - if len(uninformative) > 0: - starts = starts[uninformative] - ends = ends[uninformative] - old_e = None - for s, e in zip(starts, ends): - if old_e is not None and s - old_e < fade_size: - s = old_e - fade_size * 2 - - if s != 0: - weight = np.linspace(0, 1, fade_size) - mag[:, :, s : s + fade_size] += weight * ref[:, :, s : s + fade_size] - else: - s -= fade_size - - if e != mag.shape[2]: - weight = np.linspace(1, 0, fade_size) - mag[:, :, e - fade_size : e] += weight * ref[:, :, e - fade_size : e] - else: - e += fade_size - - mag[:, :, s + fade_size : e - fade_size] += ref[ - :, :, s + fade_size : e - fade_size - ] - old_e = e - - return mag - - -def align_wave_head_and_tail(a, b): - l = min([a[0].size, b[0].size]) - - return a[:l, :l], b[:l, :l] - - -def cache_or_load(mix_path, inst_path, mp): - mix_basename = os.path.splitext(os.path.basename(mix_path))[0] - inst_basename = os.path.splitext(os.path.basename(inst_path))[0] - - cache_dir = "mph{}".format( - hashlib.sha1(json.dumps(mp.param, sort_keys=True).encode("utf-8")).hexdigest() - ) - mix_cache_dir = os.path.join("cache", cache_dir) - inst_cache_dir = os.path.join("cache", cache_dir) - - os.makedirs(mix_cache_dir, exist_ok=True) - os.makedirs(inst_cache_dir, exist_ok=True) - - mix_cache_path = os.path.join(mix_cache_dir, mix_basename + ".npy") - inst_cache_path = os.path.join(inst_cache_dir, inst_basename + ".npy") - - if os.path.exists(mix_cache_path) and os.path.exists(inst_cache_path): - X_spec_m = np.load(mix_cache_path) - y_spec_m = np.load(inst_cache_path) - else: - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - - for d in range(len(mp.param["band"]), 0, -1): - bp = mp.param["band"][d] - - if d == len(mp.param["band"]): # high-end band - X_wave[d], _ = librosa.load( - mix_path, bp["sr"], False, dtype=np.float32, res_type=bp["res_type"] - ) - y_wave[d], _ = librosa.load( - inst_path, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - else: # lower bands - X_wave[d] = librosa.resample( - X_wave[d + 1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) 
- y_wave[d] = librosa.resample( - y_wave[d + 1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - - X_wave[d], y_wave[d] = align_wave_head_and_tail(X_wave[d], y_wave[d]) - - X_spec_s[d] = wave_to_spectrogram( - X_wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - y_spec_s[d] = wave_to_spectrogram( - y_wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - - del X_wave, y_wave - - X_spec_m = combine_spectrograms(X_spec_s, mp) - y_spec_m = combine_spectrograms(y_spec_s, mp) - - if X_spec_m.shape != y_spec_m.shape: - raise ValueError("The combined spectrograms are different: " + mix_path) - - _, ext = os.path.splitext(mix_path) - - np.save(mix_cache_path, X_spec_m) - np.save(inst_cache_path, y_spec_m) - - return X_spec_m, y_spec_m - - -def spectrogram_to_wave(spec, hop_length, mid_side, mid_side_b2, reverse): - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - wave_left = librosa.istft(spec_left, hop_length=hop_length) - wave_right = librosa.istft(spec_right, hop_length=hop_length) - - if reverse: - return np.asfortranarray([np.flip(wave_left), np.flip(wave_right)]) - elif mid_side: - return np.asfortranarray( - [np.add(wave_left, wave_right / 2), np.subtract(wave_left, wave_right / 2)] - ) - elif mid_side_b2: - return np.asfortranarray( - [ - np.add(wave_right / 1.25, 0.4 * wave_left), - np.subtract(wave_left / 1.25, 0.4 * wave_right), - ] - ) - else: - return np.asfortranarray([wave_left, wave_right]) - - -def spectrogram_to_wave_mt(spec, hop_length, mid_side, reverse, mid_side_b2): - import threading - - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - def run_thread(**kwargs): - global wave_left - wave_left = librosa.istft(**kwargs) - - thread = threading.Thread( - target=run_thread, kwargs={"stft_matrix": spec_left, "hop_length": hop_length} - ) - thread.start() - wave_right = librosa.istft(spec_right, hop_length=hop_length) - thread.join() - - if reverse: - return np.asfortranarray([np.flip(wave_left), np.flip(wave_right)]) - elif mid_side: - return np.asfortranarray( - [np.add(wave_left, wave_right / 2), np.subtract(wave_left, wave_right / 2)] - ) - elif mid_side_b2: - return np.asfortranarray( - [ - np.add(wave_right / 1.25, 0.4 * wave_left), - np.subtract(wave_left / 1.25, 0.4 * wave_right), - ] - ) - else: - return np.asfortranarray([wave_left, wave_right]) - - -def cmb_spectrogram_to_wave(spec_m, mp, extra_bins_h=None, extra_bins=None): - wave_band = {} - bands_n = len(mp.param["band"]) - offset = 0 - - for d in range(1, bands_n + 1): - bp = mp.param["band"][d] - spec_s = np.ndarray( - shape=(2, bp["n_fft"] // 2 + 1, spec_m.shape[2]), dtype=complex - ) - h = bp["crop_stop"] - bp["crop_start"] - spec_s[:, bp["crop_start"] : bp["crop_stop"], :] = spec_m[ - :, offset : offset + h, : - ] - - offset += h - if d == bands_n: # higher - if extra_bins_h: # if --high_end_process bypass - max_bin = bp["n_fft"] // 2 - spec_s[:, max_bin - extra_bins_h : max_bin, :] = extra_bins[ - :, :extra_bins_h, : - ] - if bp["hpf_start"] > 0: - spec_s = fft_hp_filter(spec_s, bp["hpf_start"], bp["hpf_stop"] - 1) - if bands_n == 1: - wave = spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - else: - wave = np.add( - wave, - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - 
mp.param["reverse"], - ), - ) - else: - sr = mp.param["band"][d + 1]["sr"] - if d == 1: # lower - spec_s = fft_lp_filter(spec_s, bp["lpf_start"], bp["lpf_stop"]) - wave = librosa.resample( - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ), - bp["sr"], - sr, - res_type="sinc_fastest", - ) - else: # mid - spec_s = fft_hp_filter(spec_s, bp["hpf_start"], bp["hpf_stop"] - 1) - spec_s = fft_lp_filter(spec_s, bp["lpf_start"], bp["lpf_stop"]) - wave2 = np.add( - wave, - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ), - ) - # wave = librosa.core.resample(wave2, bp['sr'], sr, res_type="sinc_fastest") - wave = librosa.core.resample(wave2, bp["sr"], sr, res_type="scipy") - - return wave.T - - -def fft_lp_filter(spec, bin_start, bin_stop): - g = 1.0 - for b in range(bin_start, bin_stop): - g -= 1 / (bin_stop - bin_start) - spec[:, b, :] = g * spec[:, b, :] - - spec[:, bin_stop:, :] *= 0 - - return spec - - -def fft_hp_filter(spec, bin_start, bin_stop): - g = 1.0 - for b in range(bin_start, bin_stop, -1): - g -= 1 / (bin_start - bin_stop) - spec[:, b, :] = g * spec[:, b, :] - - spec[:, 0 : bin_stop + 1, :] *= 0 - - return spec - - -def mirroring(a, spec_m, input_high_end, mp): - if "mirroring" == a: - mirror = np.flip( - np.abs( - spec_m[ - :, - mp.param["pre_filter_start"] - - 10 - - input_high_end.shape[1] : mp.param["pre_filter_start"] - - 10, - :, - ] - ), - 1, - ) - mirror = mirror * np.exp(1.0j * np.angle(input_high_end)) - - return np.where( - np.abs(input_high_end) <= np.abs(mirror), input_high_end, mirror - ) - - if "mirroring2" == a: - mirror = np.flip( - np.abs( - spec_m[ - :, - mp.param["pre_filter_start"] - - 10 - - input_high_end.shape[1] : mp.param["pre_filter_start"] - - 10, - :, - ] - ), - 1, - ) - mi = np.multiply(mirror, input_high_end * 1.7) - - return np.where(np.abs(input_high_end) <= np.abs(mi), input_high_end, mi) - - -def ensembling(a, specs): - for i in range(1, len(specs)): - if i == 1: - spec = specs[0] - - ln = min([spec.shape[2], specs[i].shape[2]]) - spec = spec[:, :, :ln] - specs[i] = specs[i][:, :, :ln] - - if "min_mag" == a: - spec = np.where(np.abs(specs[i]) <= np.abs(spec), specs[i], spec) - if "max_mag" == a: - spec = np.where(np.abs(specs[i]) >= np.abs(spec), specs[i], spec) - - return spec - - -def stft(wave, nfft, hl): - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - spec_left = librosa.stft(wave_left, nfft, hop_length=hl) - spec_right = librosa.stft(wave_right, nfft, hop_length=hl) - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def istft(spec, hl): - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - wave_left = librosa.istft(spec_left, hop_length=hl) - wave_right = librosa.istft(spec_right, hop_length=hl) - wave = np.asfortranarray([wave_left, wave_right]) - - -if __name__ == "__main__": - import cv2 - import time - import argparse - from model_param_init import ModelParameters - - p = argparse.ArgumentParser() - p.add_argument( - "--algorithm", - "-a", - type=str, - choices=["invert", "invert_p", "min_mag", "max_mag", "deep", "align"], - default="min_mag", - ) - p.add_argument( - "--model_params", - "-m", - type=str, - default=os.path.join("modelparams", "1band_sr44100_hl512.json"), - ) - p.add_argument("--output_name", "-o", type=str, default="output") - p.add_argument("--vocals_only", "-v", action="store_true") - 
p.add_argument("input", nargs="+") - args = p.parse_args() - - start_time = time.time() - - if args.algorithm.startswith("invert") and len(args.input) != 2: - raise ValueError("There should be two input files.") - - if not args.algorithm.startswith("invert") and len(args.input) < 2: - raise ValueError("There must be at least two input files.") - - wave, specs = {}, {} - mp = ModelParameters(args.model_params) - - for i in range(len(args.input)): - spec = {} - - for d in range(len(mp.param["band"]), 0, -1): - bp = mp.param["band"][d] - - if d == len(mp.param["band"]): # high-end band - wave[d], _ = librosa.load( - args.input[i], - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - - if len(wave[d].shape) == 1: # mono to stereo - wave[d] = np.array([wave[d], wave[d]]) - else: # lower bands - wave[d] = librosa.resample( - wave[d + 1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - - spec[d] = wave_to_spectrogram( - wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - - specs[i] = combine_spectrograms(spec, mp) - - del wave - - if args.algorithm == "deep": - d_spec = np.where(np.abs(specs[0]) <= np.abs(spec[1]), specs[0], spec[1]) - v_spec = d_spec - specs[1] - sf.write( - os.path.join("{}.wav".format(args.output_name)), - cmb_spectrogram_to_wave(v_spec, mp), - mp.param["sr"], - ) - - if args.algorithm.startswith("invert"): - ln = min([specs[0].shape[2], specs[1].shape[2]]) - specs[0] = specs[0][:, :, :ln] - specs[1] = specs[1][:, :, :ln] - - if "invert_p" == args.algorithm: - X_mag = np.abs(specs[0]) - y_mag = np.abs(specs[1]) - max_mag = np.where(X_mag >= y_mag, X_mag, y_mag) - v_spec = specs[1] - max_mag * np.exp(1.0j * np.angle(specs[0])) - else: - specs[1] = reduce_vocal_aggressively(specs[0], specs[1], 0.2) - v_spec = specs[0] - specs[1] - - if not args.vocals_only: - X_mag = np.abs(specs[0]) - y_mag = np.abs(specs[1]) - v_mag = np.abs(v_spec) - - X_image = spectrogram_to_image(X_mag) - y_image = spectrogram_to_image(y_mag) - v_image = spectrogram_to_image(v_mag) - - cv2.imwrite("{}_X.png".format(args.output_name), X_image) - cv2.imwrite("{}_y.png".format(args.output_name), y_image) - cv2.imwrite("{}_v.png".format(args.output_name), v_image) - - sf.write( - "{}_X.wav".format(args.output_name), - cmb_spectrogram_to_wave(specs[0], mp), - mp.param["sr"], - ) - sf.write( - "{}_y.wav".format(args.output_name), - cmb_spectrogram_to_wave(specs[1], mp), - mp.param["sr"], - ) - - sf.write( - "{}_v.wav".format(args.output_name), - cmb_spectrogram_to_wave(v_spec, mp), - mp.param["sr"], - ) - else: - if not args.algorithm == "deep": - sf.write( - os.path.join("ensembled", "{}.wav".format(args.output_name)), - cmb_spectrogram_to_wave(ensembling(args.algorithm, specs), mp), - mp.param["sr"], - ) - - if args.algorithm == "align": - trackalignment = [ - { - "file1": '"{}"'.format(args.input[0]), - "file2": '"{}"'.format(args.input[1]), - } - ] - - for i, e in tqdm(enumerate(trackalignment), desc="Performing Alignment..."): - os.system(f"python lib/align_tracks.py {e['file1']} {e['file2']}") - - # print('Total time: {0:.{1}f}s'.format(time.time() - start_time, 1)) diff --git a/spaces/rachana219/MODT2/utils/loss.py b/spaces/rachana219/MODT2/utils/loss.py deleted file mode 100644 index bf7ab65a304b51b398d9877da0673d5c01e52081..0000000000000000000000000000000000000000 --- a/spaces/rachana219/MODT2/utils/loss.py +++ /dev/null @@ -1,1697 +0,0 @@ -# Loss functions - -import torch -import torch.nn 
as nn -import torch.nn.functional as F - -from utils.general import bbox_iou, bbox_alpha_iou, box_iou, box_giou, box_diou, box_ciou, xywh2xyxy -from utils.torch_utils import is_parallel - - -def smooth_BCE(eps=0.1): # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441 - # return positive, negative label smoothing BCE targets - return 1.0 - 0.5 * eps, 0.5 * eps - - -class BCEBlurWithLogitsLoss(nn.Module): - # BCEwithLogitLoss() with reduced missing label effects. - def __init__(self, alpha=0.05): - super(BCEBlurWithLogitsLoss, self).__init__() - self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none') # must be nn.BCEWithLogitsLoss() - self.alpha = alpha - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - pred = torch.sigmoid(pred) # prob from logits - dx = pred - true # reduce only missing label effects - # dx = (pred - true).abs() # reduce missing label and false label effects - alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4)) - loss *= alpha_factor - return loss.mean() - - -class SigmoidBin(nn.Module): - stride = None # strides computed during build - export = False # onnx export - - def __init__(self, bin_count=10, min=0.0, max=1.0, reg_scale = 2.0, use_loss_regression=True, use_fw_regression=True, BCE_weight=1.0, smooth_eps=0.0): - super(SigmoidBin, self).__init__() - - self.bin_count = bin_count - self.length = bin_count + 1 - self.min = min - self.max = max - self.scale = float(max - min) - self.shift = self.scale / 2.0 - - self.use_loss_regression = use_loss_regression - self.use_fw_regression = use_fw_regression - self.reg_scale = reg_scale - self.BCE_weight = BCE_weight - - start = min + (self.scale/2.0) / self.bin_count - end = max - (self.scale/2.0) / self.bin_count - step = self.scale / self.bin_count - self.step = step - #print(f" start = {start}, end = {end}, step = {step} ") - - bins = torch.range(start, end + 0.0001, step).float() - self.register_buffer('bins', bins) - - - self.cp = 1.0 - 0.5 * smooth_eps - self.cn = 0.5 * smooth_eps - - self.BCEbins = nn.BCEWithLogitsLoss(pos_weight=torch.Tensor([BCE_weight])) - self.MSELoss = nn.MSELoss() - - def get_length(self): - return self.length - - def forward(self, pred): - assert pred.shape[-1] == self.length, 'pred.shape[-1]=%d is not equal to self.length=%d' % (pred.shape[-1], self.length) - - pred_reg = (pred[..., 0] * self.reg_scale - self.reg_scale/2.0) * self.step - pred_bin = pred[..., 1:(1+self.bin_count)] - - _, bin_idx = torch.max(pred_bin, dim=-1) - bin_bias = self.bins[bin_idx] - - if self.use_fw_regression: - result = pred_reg + bin_bias - else: - result = bin_bias - result = result.clamp(min=self.min, max=self.max) - - return result - - - def training_loss(self, pred, target): - assert pred.shape[-1] == self.length, 'pred.shape[-1]=%d is not equal to self.length=%d' % (pred.shape[-1], self.length) - assert pred.shape[0] == target.shape[0], 'pred.shape=%d is not equal to the target.shape=%d' % (pred.shape[0], target.shape[0]) - device = pred.device - - pred_reg = (pred[..., 0].sigmoid() * self.reg_scale - self.reg_scale/2.0) * self.step - pred_bin = pred[..., 1:(1+self.bin_count)] - - diff_bin_target = torch.abs(target[..., None] - self.bins) - _, bin_idx = torch.min(diff_bin_target, dim=-1) - - bin_bias = self.bins[bin_idx] - bin_bias.requires_grad = False - result = pred_reg + bin_bias - - target_bins = torch.full_like(pred_bin, self.cn, device=device) # targets - n = pred.shape[0] - target_bins[range(n), bin_idx] = self.cp - - loss_bin = self.BCEbins(pred_bin, 
target_bins) # BCE - - if self.use_loss_regression: - loss_regression = self.MSELoss(result, target) # MSE - loss = loss_bin + loss_regression - else: - loss = loss_bin - - out_result = result.clamp(min=self.min, max=self.max) - - return loss, out_result - - -class FocalLoss(nn.Module): - # Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5) - def __init__(self, loss_fcn, gamma=1.5, alpha=0.25): - super(FocalLoss, self).__init__() - self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss() - self.gamma = gamma - self.alpha = alpha - self.reduction = loss_fcn.reduction - self.loss_fcn.reduction = 'none' # required to apply FL to each element - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - # p_t = torch.exp(-loss) - # loss *= self.alpha * (1.000001 - p_t) ** self.gamma # non-zero power for gradient stability - - # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py - pred_prob = torch.sigmoid(pred) # prob from logits - p_t = true * pred_prob + (1 - true) * (1 - pred_prob) - alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha) - modulating_factor = (1.0 - p_t) ** self.gamma - loss *= alpha_factor * modulating_factor - - if self.reduction == 'mean': - return loss.mean() - elif self.reduction == 'sum': - return loss.sum() - else: # 'none' - return loss - - -class QFocalLoss(nn.Module): - # Wraps Quality focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5) - def __init__(self, loss_fcn, gamma=1.5, alpha=0.25): - super(QFocalLoss, self).__init__() - self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss() - self.gamma = gamma - self.alpha = alpha - self.reduction = loss_fcn.reduction - self.loss_fcn.reduction = 'none' # required to apply FL to each element - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - - pred_prob = torch.sigmoid(pred) # prob from logits - alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha) - modulating_factor = torch.abs(true - pred_prob) ** self.gamma - loss *= alpha_factor * modulating_factor - - if self.reduction == 'mean': - return loss.mean() - elif self.reduction == 'sum': - return loss.sum() - else: # 'none' - return loss - -class RankSort(torch.autograd.Function): - @staticmethod - def forward(ctx, logits, targets, delta_RS=0.50, eps=1e-10): - - classification_grads=torch.zeros(logits.shape).cuda() - - #Filter fg logits - fg_labels = (targets > 0.) 
- fg_logits = logits[fg_labels] - fg_targets = targets[fg_labels] - fg_num = len(fg_logits) - - #Do not use bg with scores less than minimum fg logit - #since changing its score does not have an effect on precision - threshold_logit = torch.min(fg_logits)-delta_RS - relevant_bg_labels=((targets==0) & (logits>=threshold_logit)) - - relevant_bg_logits = logits[relevant_bg_labels] - relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda() - sorting_error=torch.zeros(fg_num).cuda() - ranking_error=torch.zeros(fg_num).cuda() - fg_grad=torch.zeros(fg_num).cuda() - - #sort the fg logits - order=torch.argsort(fg_logits) - #Loops over each positive following the order - for ii in order: - # Difference Transforms (x_ij) - fg_relations=fg_logits-fg_logits[ii] - bg_relations=relevant_bg_logits-fg_logits[ii] - - if delta_RS > 0: - fg_relations=torch.clamp(fg_relations/(2*delta_RS)+0.5,min=0,max=1) - bg_relations=torch.clamp(bg_relations/(2*delta_RS)+0.5,min=0,max=1) - else: - fg_relations = (fg_relations >= 0).float() - bg_relations = (bg_relations >= 0).float() - - # Rank of ii among pos and false positive number (bg with larger scores) - rank_pos=torch.sum(fg_relations) - FP_num=torch.sum(bg_relations) - - # Rank of ii among all examples - rank=rank_pos+FP_num - - # Ranking error of example ii. target_ranking_error is always 0. (Eq. 7) - ranking_error[ii]=FP_num/rank - - # Current sorting error of example ii. (Eq. 7) - current_sorting_error = torch.sum(fg_relations*(1-fg_targets))/rank_pos - - #Find examples in the target sorted order for example ii - iou_relations = (fg_targets >= fg_targets[ii]) - target_sorted_order = iou_relations * fg_relations - - #The rank of ii among positives in sorted order - rank_pos_target = torch.sum(target_sorted_order) - - #Compute target sorting error. (Eq. 8) - #Since target ranking error is 0, this is also total target error - target_sorting_error= torch.sum(target_sorted_order*(1-fg_targets))/rank_pos_target - - #Compute sorting error on example ii - sorting_error[ii] = current_sorting_error - target_sorting_error - - #Identity Update for Ranking Error - if FP_num > eps: - #For ii the update is the ranking error - fg_grad[ii] -= ranking_error[ii] - #For negatives, distribute error via ranking pmf (i.e. bg_relations/FP_num) - relevant_bg_grad += (bg_relations*(ranking_error[ii]/FP_num)) - - #Find the positives that are misranked (the cause of the error) - #These are the ones with smaller IoU but larger logits - missorted_examples = (~ iou_relations) * fg_relations - - #Denominotor of sorting pmf - sorting_pmf_denom = torch.sum(missorted_examples) - - #Identity Update for Sorting Error - if sorting_pmf_denom > eps: - #For ii the update is the sorting error - fg_grad[ii] -= sorting_error[ii] - #For positives, distribute error via sorting pmf (i.e. 
missorted_examples/sorting_pmf_denom) - fg_grad += (missorted_examples*(sorting_error[ii]/sorting_pmf_denom)) - - #Normalize gradients by number of positives - classification_grads[fg_labels]= (fg_grad/fg_num) - classification_grads[relevant_bg_labels]= (relevant_bg_grad/fg_num) - - ctx.save_for_backward(classification_grads) - - return ranking_error.mean(), sorting_error.mean() - - @staticmethod - def backward(ctx, out_grad1, out_grad2): - g1, =ctx.saved_tensors - return g1*out_grad1, None, None, None - -class aLRPLoss(torch.autograd.Function): - @staticmethod - def forward(ctx, logits, targets, regression_losses, delta=1., eps=1e-5): - classification_grads=torch.zeros(logits.shape).cuda() - - #Filter fg logits - fg_labels = (targets == 1) - fg_logits = logits[fg_labels] - fg_num = len(fg_logits) - - #Do not use bg with scores less than minimum fg logit - #since changing its score does not have an effect on precision - threshold_logit = torch.min(fg_logits)-delta - - #Get valid bg logits - relevant_bg_labels=((targets==0)&(logits>=threshold_logit)) - relevant_bg_logits=logits[relevant_bg_labels] - relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda() - rank=torch.zeros(fg_num).cuda() - prec=torch.zeros(fg_num).cuda() - fg_grad=torch.zeros(fg_num).cuda() - - max_prec=0 - #sort the fg logits - order=torch.argsort(fg_logits) - #Loops over each positive following the order - for ii in order: - #x_ij s as score differences with fgs - fg_relations=fg_logits-fg_logits[ii] - #Apply piecewise linear function and determine relations with fgs - fg_relations=torch.clamp(fg_relations/(2*delta)+0.5,min=0,max=1) - #Discard i=j in the summation in rank_pos - fg_relations[ii]=0 - - #x_ij s as score differences with bgs - bg_relations=relevant_bg_logits-fg_logits[ii] - #Apply piecewise linear function and determine relations with bgs - bg_relations=torch.clamp(bg_relations/(2*delta)+0.5,min=0,max=1) - - #Compute the rank of the example within fgs and number of bgs with larger scores - rank_pos=1+torch.sum(fg_relations) - FP_num=torch.sum(bg_relations) - #Store the total since it is normalizer also for aLRP Regression error - rank[ii]=rank_pos+FP_num - - #Compute precision for this example to compute classification loss - prec[ii]=rank_pos/rank[ii] - #For stability, set eps to a infinitesmall value (e.g. 
1e-6), then compute grads - if FP_num > eps: - fg_grad[ii] = -(torch.sum(fg_relations*regression_losses)+FP_num)/rank[ii] - relevant_bg_grad += (bg_relations*(-fg_grad[ii]/FP_num)) - - #aLRP with grad formulation fg gradient - classification_grads[fg_labels]= fg_grad - #aLRP with grad formulation bg gradient - classification_grads[relevant_bg_labels]= relevant_bg_grad - - classification_grads /= (fg_num) - - cls_loss=1-prec.mean() - ctx.save_for_backward(classification_grads) - - return cls_loss, rank, order - - @staticmethod - def backward(ctx, out_grad1, out_grad2, out_grad3): - g1, =ctx.saved_tensors - return g1*out_grad1, None, None, None, None - - -class APLoss(torch.autograd.Function): - @staticmethod - def forward(ctx, logits, targets, delta=1.): - classification_grads=torch.zeros(logits.shape).cuda() - - #Filter fg logits - fg_labels = (targets == 1) - fg_logits = logits[fg_labels] - fg_num = len(fg_logits) - - #Do not use bg with scores less than minimum fg logit - #since changing its score does not have an effect on precision - threshold_logit = torch.min(fg_logits)-delta - - #Get valid bg logits - relevant_bg_labels=((targets==0)&(logits>=threshold_logit)) - relevant_bg_logits=logits[relevant_bg_labels] - relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda() - rank=torch.zeros(fg_num).cuda() - prec=torch.zeros(fg_num).cuda() - fg_grad=torch.zeros(fg_num).cuda() - - max_prec=0 - #sort the fg logits - order=torch.argsort(fg_logits) - #Loops over each positive following the order - for ii in order: - #x_ij s as score differences with fgs - fg_relations=fg_logits-fg_logits[ii] - #Apply piecewise linear function and determine relations with fgs - fg_relations=torch.clamp(fg_relations/(2*delta)+0.5,min=0,max=1) - #Discard i=j in the summation in rank_pos - fg_relations[ii]=0 - - #x_ij s as score differences with bgs - bg_relations=relevant_bg_logits-fg_logits[ii] - #Apply piecewise linear function and determine relations with bgs - bg_relations=torch.clamp(bg_relations/(2*delta)+0.5,min=0,max=1) - - #Compute the rank of the example within fgs and number of bgs with larger scores - rank_pos=1+torch.sum(fg_relations) - FP_num=torch.sum(bg_relations) - #Store the total since it is normalizer also for aLRP Regression error - rank[ii]=rank_pos+FP_num - - #Compute precision for this example - current_prec=rank_pos/rank[ii] - - #Compute interpolated AP and store gradients for relevant bg examples - if (max_prec<=current_prec): - max_prec=current_prec - relevant_bg_grad += (bg_relations/rank[ii]) - else: - relevant_bg_grad += (bg_relations/rank[ii])*(((1-max_prec)/(1-current_prec))) - - #Store fg gradients - fg_grad[ii]=-(1-max_prec) - prec[ii]=max_prec - - #aLRP with grad formulation fg gradient - classification_grads[fg_labels]= fg_grad - #aLRP with grad formulation bg gradient - classification_grads[relevant_bg_labels]= relevant_bg_grad - - classification_grads /= fg_num - - cls_loss=1-prec.mean() - ctx.save_for_backward(classification_grads) - - return cls_loss - - @staticmethod - def backward(ctx, out_grad1): - g1, =ctx.saved_tensors - return g1*out_grad1, None, None - - -class ComputeLoss: - # Compute losses - def __init__(self, model, autobalance=False): - super(ComputeLoss, self).__init__() - device = next(model.parameters()).device # get model device - h = model.hyp # hyperparameters - - # Define criteria - BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device)) - BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device)) 
- - # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3 - self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets - - # Focal loss - g = h['fl_gamma'] # focal loss gamma - if g > 0: - BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g) - - det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module - self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7 - #self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.1, .05]) # P3-P7 - #self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.5, 0.4, .1]) # P3-P7 - self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index - self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance - for k in 'na', 'nc', 'nl', 'anchors': - setattr(self, k, getattr(det, k)) - - def __call__(self, p, targets): # predictions, targets, model - device = targets.device - lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device) - tcls, tbox, indices, anchors = self.build_targets(p, targets) # targets - - # Losses - for i, pi in enumerate(p): # layer index, layer predictions - b, a, gj, gi = indices[i] # image, anchor, gridy, gridx - tobj = torch.zeros_like(pi[..., 0], device=device) # target obj - - n = b.shape[0] # number of targets - if n: - ps = pi[b, a, gj, gi] # prediction subset corresponding to targets - - # Regression - pxy = ps[:, :2].sigmoid() * 2. - 0.5 - pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i] - pbox = torch.cat((pxy, pwh), 1) # predicted box - iou = bbox_iou(pbox.T, tbox[i], x1y1x2y2=False, CIoU=True) # iou(prediction, target) - lbox += (1.0 - iou).mean() # iou loss - - # Objectness - tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio - - # Classification - if self.nc > 1: # cls loss (only if multiple classes) - t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets - t[range(n), tcls[i]] = self.cp - #t[t==self.cp] = iou.detach().clamp(0).type(t.dtype) - lcls += self.BCEcls(ps[:, 5:], t) # BCE - - # Append targets to text file - # with open('targets.txt', 'a') as file: - # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)] - - obji = self.BCEobj(pi[..., 4], tobj) - lobj += obji * self.balance[i] # obj loss - if self.autobalance: - self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item() - - if self.autobalance: - self.balance = [x / self.balance[self.ssi] for x in self.balance] - lbox *= self.hyp['box'] - lobj *= self.hyp['obj'] - lcls *= self.hyp['cls'] - bs = tobj.shape[0] # batch size - - loss = lbox + lobj + lcls - return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach() - - def build_targets(self, p, targets): - # Build targets for compute_loss(), input targets(image,class,x,y,w,h) - na, nt = self.na, targets.shape[0] # number of anchors, targets - tcls, tbox, indices, anch = [], [], [], [] - gain = torch.ones(7, device=targets.device).long() # normalized to gridspace gain - ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt) - targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices - - g = 0.5 # bias - off = torch.tensor([[0, 0], - [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m - # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm - ], device=targets.device).float() * g # 
offsets - - for i in range(self.nl): - anchors = self.anchors[i] - gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain - - # Match targets to anchors - t = targets * gain - if nt: - # Matches - r = t[:, :, 4:6] / anchors[:, None] # wh ratio - j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare - # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2)) - t = t[j] # filter - - # Offsets - gxy = t[:, 2:4] # grid xy - gxi = gain[[2, 3]] - gxy # inverse - j, k = ((gxy % 1. < g) & (gxy > 1.)).T - l, m = ((gxi % 1. < g) & (gxi > 1.)).T - j = torch.stack((torch.ones_like(j), j, k, l, m)) - t = t.repeat((5, 1, 1))[j] - offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j] - else: - t = targets[0] - offsets = 0 - - # Define - b, c = t[:, :2].long().T # image, class - gxy = t[:, 2:4] # grid xy - gwh = t[:, 4:6] # grid wh - gij = (gxy - offsets).long() - gi, gj = gij.T # grid xy indices - - # Append - a = t[:, 6].long() # anchor indices - indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices - tbox.append(torch.cat((gxy - gij, gwh), 1)) # box - anch.append(anchors[a]) # anchors - tcls.append(c) # class - - return tcls, tbox, indices, anch - - -class ComputeLossOTA: - # Compute losses - def __init__(self, model, autobalance=False): - super(ComputeLossOTA, self).__init__() - device = next(model.parameters()).device # get model device - h = model.hyp # hyperparameters - - # Define criteria - BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device)) - BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device)) - - # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3 - self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets - - # Focal loss - g = h['fl_gamma'] # focal loss gamma - if g > 0: - BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g) - - det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module - self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7 - self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index - self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance - for k in 'na', 'nc', 'nl', 'anchors', 'stride': - setattr(self, k, getattr(det, k)) - - def __call__(self, p, targets, imgs): # predictions, targets, model - device = targets.device - lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device) - bs, as_, gjs, gis, targets, anchors = self.build_targets(p, targets, imgs) - pre_gen_gains = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p] - - - # Losses - for i, pi in enumerate(p): # layer index, layer predictions - b, a, gj, gi = bs[i], as_[i], gjs[i], gis[i] # image, anchor, gridy, gridx - tobj = torch.zeros_like(pi[..., 0], device=device) # target obj - - n = b.shape[0] # number of targets - if n: - ps = pi[b, a, gj, gi] # prediction subset corresponding to targets - - # Regression - grid = torch.stack([gi, gj], dim=1) - pxy = ps[:, :2].sigmoid() * 2. - 0.5 - #pxy = ps[:, :2].sigmoid() * 3. - 1. 
- pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i] - pbox = torch.cat((pxy, pwh), 1) # predicted box - selected_tbox = targets[i][:, 2:6] * pre_gen_gains[i] - selected_tbox[:, :2] -= grid - iou = bbox_iou(pbox.T, selected_tbox, x1y1x2y2=False, CIoU=True) # iou(prediction, target) - lbox += (1.0 - iou).mean() # iou loss - - # Objectness - tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio - - # Classification - selected_tcls = targets[i][:, 1].long() - if self.nc > 1: # cls loss (only if multiple classes) - t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets - t[range(n), selected_tcls] = self.cp - lcls += self.BCEcls(ps[:, 5:], t) # BCE - - # Append targets to text file - # with open('targets.txt', 'a') as file: - # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)] - - obji = self.BCEobj(pi[..., 4], tobj) - lobj += obji * self.balance[i] # obj loss - if self.autobalance: - self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item() - - if self.autobalance: - self.balance = [x / self.balance[self.ssi] for x in self.balance] - lbox *= self.hyp['box'] - lobj *= self.hyp['obj'] - lcls *= self.hyp['cls'] - bs = tobj.shape[0] # batch size - - loss = lbox + lobj + lcls - return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach() - - def build_targets(self, p, targets, imgs): - - #indices, anch = self.find_positive(p, targets) - indices, anch = self.find_3_positive(p, targets) - #indices, anch = self.find_4_positive(p, targets) - #indices, anch = self.find_5_positive(p, targets) - #indices, anch = self.find_9_positive(p, targets) - - matching_bs = [[] for pp in p] - matching_as = [[] for pp in p] - matching_gjs = [[] for pp in p] - matching_gis = [[] for pp in p] - matching_targets = [[] for pp in p] - matching_anchs = [[] for pp in p] - - nl = len(p) - - for batch_idx in range(p[0].shape[0]): - - b_idx = targets[:, 0]==batch_idx - this_target = targets[b_idx] - if this_target.shape[0] == 0: - continue - - txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1] - txyxy = xywh2xyxy(txywh) - - pxyxys = [] - p_cls = [] - p_obj = [] - from_which_layer = [] - all_b = [] - all_a = [] - all_gj = [] - all_gi = [] - all_anch = [] - - for i, pi in enumerate(p): - - b, a, gj, gi = indices[i] - idx = (b == batch_idx) - b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx] - all_b.append(b) - all_a.append(a) - all_gj.append(gj) - all_gi.append(gi) - all_anch.append(anch[i][idx]) - from_which_layer.append(torch.ones(size=(len(b),)) * i) - - fg_pred = pi[b, a, gj, gi] - p_obj.append(fg_pred[:, 4:5]) - p_cls.append(fg_pred[:, 5:]) - - grid = torch.stack([gi, gj], dim=1) - pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i] #/ 8. - #pxy = (fg_pred[:, :2].sigmoid() * 3. - 1. + grid) * self.stride[i] - pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i] #/ 8. 
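# --- annotation (not in the upstream file): the two decode lines above map raw
# logits to boxes, per layer i:
#     pxy = (sigmoid(t_xy) * 2 - 0.5 + grid) * stride   # offset in (-0.5, 1.5) around the cell
#     pwh = (sigmoid(t_wh) * 2) ** 2 * anchor * stride  # width/height in (0, 4 * anchor) grid units
# The stride multiply expresses candidate boxes in input-image pixels, so they
# are directly comparable with txyxy, which was scaled by the image size a few
# lines earlier.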
- pxywh = torch.cat([pxy, pwh], dim=-1) - pxyxy = xywh2xyxy(pxywh) - pxyxys.append(pxyxy) - - pxyxys = torch.cat(pxyxys, dim=0) - if pxyxys.shape[0] == 0: - continue - p_obj = torch.cat(p_obj, dim=0) - p_cls = torch.cat(p_cls, dim=0) - from_which_layer = torch.cat(from_which_layer, dim=0) - all_b = torch.cat(all_b, dim=0) - all_a = torch.cat(all_a, dim=0) - all_gj = torch.cat(all_gj, dim=0) - all_gi = torch.cat(all_gi, dim=0) - all_anch = torch.cat(all_anch, dim=0) - - pair_wise_iou = box_iou(txyxy, pxyxys) - - pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8) - - top_k, _ = torch.topk(pair_wise_iou, min(10, pair_wise_iou.shape[1]), dim=1) - dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1) - - gt_cls_per_image = ( - F.one_hot(this_target[:, 1].to(torch.int64), self.nc) - .float() - .unsqueeze(1) - .repeat(1, pxyxys.shape[0], 1) - ) - - num_gt = this_target.shape[0] - cls_preds_ = ( - p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_() - * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_() - ) - - y = cls_preds_.sqrt_() - pair_wise_cls_loss = F.binary_cross_entropy_with_logits( - torch.log(y/(1-y)) , gt_cls_per_image, reduction="none" - ).sum(-1) - del cls_preds_ - - cost = ( - pair_wise_cls_loss - + 3.0 * pair_wise_iou_loss - ) - - matching_matrix = torch.zeros_like(cost) - - for gt_idx in range(num_gt): - _, pos_idx = torch.topk( - cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False - ) - matching_matrix[gt_idx][pos_idx] = 1.0 - - del top_k, dynamic_ks - anchor_matching_gt = matching_matrix.sum(0) - if (anchor_matching_gt > 1).sum() > 0: - _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0) - matching_matrix[:, anchor_matching_gt > 1] *= 0.0 - matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0 - fg_mask_inboxes = matching_matrix.sum(0) > 0.0 - matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0) - - from_which_layer = from_which_layer[fg_mask_inboxes] - all_b = all_b[fg_mask_inboxes] - all_a = all_a[fg_mask_inboxes] - all_gj = all_gj[fg_mask_inboxes] - all_gi = all_gi[fg_mask_inboxes] - all_anch = all_anch[fg_mask_inboxes] - - this_target = this_target[matched_gt_inds] - - for i in range(nl): - layer_idx = from_which_layer == i - matching_bs[i].append(all_b[layer_idx]) - matching_as[i].append(all_a[layer_idx]) - matching_gjs[i].append(all_gj[layer_idx]) - matching_gis[i].append(all_gi[layer_idx]) - matching_targets[i].append(this_target[layer_idx]) - matching_anchs[i].append(all_anch[layer_idx]) - - for i in range(nl): - if matching_targets[i] != []: - matching_bs[i] = torch.cat(matching_bs[i], dim=0) - matching_as[i] = torch.cat(matching_as[i], dim=0) - matching_gjs[i] = torch.cat(matching_gjs[i], dim=0) - matching_gis[i] = torch.cat(matching_gis[i], dim=0) - matching_targets[i] = torch.cat(matching_targets[i], dim=0) - matching_anchs[i] = torch.cat(matching_anchs[i], dim=0) - else: - matching_bs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_as[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_gjs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_gis[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_targets[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_anchs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - - return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs - - def find_3_positive(self, p, targets): - # Build targets for compute_loss(), input targets(image,class,x,y,w,h) - na, nt 
= self.na, targets.shape[0] # number of anchors, targets - indices, anch = [], [] - gain = torch.ones(7, device=targets.device).long() # normalized to gridspace gain - ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt) - targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices - - g = 0.5 # bias - off = torch.tensor([[0, 0], - [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m - # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm - ], device=targets.device).float() * g # offsets - - for i in range(self.nl): - anchors = self.anchors[i] - gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain - - # Match targets to anchors - t = targets * gain - if nt: - # Matches - r = t[:, :, 4:6] / anchors[:, None] # wh ratio - j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare - # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2)) - t = t[j] # filter - - # Offsets - gxy = t[:, 2:4] # grid xy - gxi = gain[[2, 3]] - gxy # inverse - j, k = ((gxy % 1. < g) & (gxy > 1.)).T - l, m = ((gxi % 1. < g) & (gxi > 1.)).T - j = torch.stack((torch.ones_like(j), j, k, l, m)) - t = t.repeat((5, 1, 1))[j] - offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j] - else: - t = targets[0] - offsets = 0 - - # Define - b, c = t[:, :2].long().T # image, class - gxy = t[:, 2:4] # grid xy - gwh = t[:, 4:6] # grid wh - gij = (gxy - offsets).long() - gi, gj = gij.T # grid xy indices - - # Append - a = t[:, 6].long() # anchor indices - indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices - anch.append(anchors[a]) # anchors - - return indices, anch - - -class ComputeLossBinOTA: - # Compute losses - def __init__(self, model, autobalance=False): - super(ComputeLossBinOTA, self).__init__() - device = next(model.parameters()).device # get model device - h = model.hyp # hyperparameters - - # Define criteria - BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device)) - BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device)) - #MSEangle = nn.MSELoss().to(device) - - # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3 - self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets - - # Focal loss - g = h['fl_gamma'] # focal loss gamma - if g > 0: - BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g) - - det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module - self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7 - self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index - self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance - for k in 'na', 'nc', 'nl', 'anchors', 'stride', 'bin_count': - setattr(self, k, getattr(det, k)) - - #xy_bin_sigmoid = SigmoidBin(bin_count=11, min=-0.5, max=1.5, use_loss_regression=False).to(device) - wh_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0, use_loss_regression=False).to(device) - #angle_bin_sigmoid = SigmoidBin(bin_count=31, min=-1.1, max=1.1, use_loss_regression=False).to(device) - self.wh_bin_sigmoid = wh_bin_sigmoid - - def __call__(self, p, targets, imgs): # predictions, targets, model - device = targets.device - lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device) - 
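# --- annotation (not in the upstream file): channel layout of each prediction
# vector in this binned variant, as implied by the slicing below, where
# len = self.wh_bin_sigmoid.get_length() (the slice widths imply len = bin_count + 1):
#     [0:2]                 x, y offsets (plain sigmoid regression)
#     [2 : 2+len]           width bins   (SigmoidBin over the 0..4 anchor-ratio range)
#     [2+len : 2+2*len]     height bins
#     [2+2*len] = obj_idx   objectness logit
#     [obj_idx+1 :]         class logits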
bs, as_, gjs, gis, targets, anchors = self.build_targets(p, targets, imgs) - pre_gen_gains = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p] - - - # Losses - for i, pi in enumerate(p): # layer index, layer predictions - b, a, gj, gi = bs[i], as_[i], gjs[i], gis[i] # image, anchor, gridy, gridx - tobj = torch.zeros_like(pi[..., 0], device=device) # target obj - - obj_idx = self.wh_bin_sigmoid.get_length()*2 + 2 # x,y, w-bce, h-bce # xy_bin_sigmoid.get_length()*2 - - n = b.shape[0] # number of targets - if n: - ps = pi[b, a, gj, gi] # prediction subset corresponding to targets - - # Regression - grid = torch.stack([gi, gj], dim=1) - selected_tbox = targets[i][:, 2:6] * pre_gen_gains[i] - selected_tbox[:, :2] -= grid - - #pxy = ps[:, :2].sigmoid() * 2. - 0.5 - ##pxy = ps[:, :2].sigmoid() * 3. - 1. - #pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i] - #pbox = torch.cat((pxy, pwh), 1) # predicted box - - #x_loss, px = xy_bin_sigmoid.training_loss(ps[..., 0:12], tbox[i][..., 0]) - #y_loss, py = xy_bin_sigmoid.training_loss(ps[..., 12:24], tbox[i][..., 1]) - w_loss, pw = self.wh_bin_sigmoid.training_loss(ps[..., 2:(3+self.bin_count)], selected_tbox[..., 2] / anchors[i][..., 0]) - h_loss, ph = self.wh_bin_sigmoid.training_loss(ps[..., (3+self.bin_count):obj_idx], selected_tbox[..., 3] / anchors[i][..., 1]) - - pw *= anchors[i][..., 0] - ph *= anchors[i][..., 1] - - px = ps[:, 0].sigmoid() * 2. - 0.5 - py = ps[:, 1].sigmoid() * 2. - 0.5 - - lbox += w_loss + h_loss # + x_loss + y_loss - - #print(f"\n px = {px.shape}, py = {py.shape}, pw = {pw.shape}, ph = {ph.shape} \n") - - pbox = torch.cat((px.unsqueeze(1), py.unsqueeze(1), pw.unsqueeze(1), ph.unsqueeze(1)), 1).to(device) # predicted box - - - - - iou = bbox_iou(pbox.T, selected_tbox, x1y1x2y2=False, CIoU=True) # iou(prediction, target) - lbox += (1.0 - iou).mean() # iou loss - - # Objectness - tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio - - # Classification - selected_tcls = targets[i][:, 1].long() - if self.nc > 1: # cls loss (only if multiple classes) - t = torch.full_like(ps[:, (1+obj_idx):], self.cn, device=device) # targets - t[range(n), selected_tcls] = self.cp - lcls += self.BCEcls(ps[:, (1+obj_idx):], t) # BCE - - # Append targets to text file - # with open('targets.txt', 'a') as file: - # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)] - - obji = self.BCEobj(pi[..., obj_idx], tobj) - lobj += obji * self.balance[i] # obj loss - if self.autobalance: - self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item() - - if self.autobalance: - self.balance = [x / self.balance[self.ssi] for x in self.balance] - lbox *= self.hyp['box'] - lobj *= self.hyp['obj'] - lcls *= self.hyp['cls'] - bs = tobj.shape[0] # batch size - - loss = lbox + lobj + lcls - return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach() - - def build_targets(self, p, targets, imgs): - - #indices, anch = self.find_positive(p, targets) - indices, anch = self.find_3_positive(p, targets) - #indices, anch = self.find_4_positive(p, targets) - #indices, anch = self.find_5_positive(p, targets) - #indices, anch = self.find_9_positive(p, targets) - - matching_bs = [[] for pp in p] - matching_as = [[] for pp in p] - matching_gjs = [[] for pp in p] - matching_gis = [[] for pp in p] - matching_targets = [[] for pp in p] - matching_anchs = [[] for pp in p] - - nl = len(p) - - for batch_idx in range(p[0].shape[0]): - - b_idx = targets[:, 0]==batch_idx - 
this_target = targets[b_idx] - if this_target.shape[0] == 0: - continue - - txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1] - txyxy = xywh2xyxy(txywh) - - pxyxys = [] - p_cls = [] - p_obj = [] - from_which_layer = [] - all_b = [] - all_a = [] - all_gj = [] - all_gi = [] - all_anch = [] - - for i, pi in enumerate(p): - - obj_idx = self.wh_bin_sigmoid.get_length()*2 + 2 - - b, a, gj, gi = indices[i] - idx = (b == batch_idx) - b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx] - all_b.append(b) - all_a.append(a) - all_gj.append(gj) - all_gi.append(gi) - all_anch.append(anch[i][idx]) - from_which_layer.append(torch.ones(size=(len(b),)) * i) - - fg_pred = pi[b, a, gj, gi] - p_obj.append(fg_pred[:, obj_idx:(obj_idx+1)]) - p_cls.append(fg_pred[:, (obj_idx+1):]) - - grid = torch.stack([gi, gj], dim=1) - pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i] #/ 8. - #pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i] #/ 8. - pw = self.wh_bin_sigmoid.forward(fg_pred[..., 2:(3+self.bin_count)].sigmoid()) * anch[i][idx][:, 0] * self.stride[i] - ph = self.wh_bin_sigmoid.forward(fg_pred[..., (3+self.bin_count):obj_idx].sigmoid()) * anch[i][idx][:, 1] * self.stride[i] - - pxywh = torch.cat([pxy, pw.unsqueeze(1), ph.unsqueeze(1)], dim=-1) - pxyxy = xywh2xyxy(pxywh) - pxyxys.append(pxyxy) - - pxyxys = torch.cat(pxyxys, dim=0) - if pxyxys.shape[0] == 0: - continue - p_obj = torch.cat(p_obj, dim=0) - p_cls = torch.cat(p_cls, dim=0) - from_which_layer = torch.cat(from_which_layer, dim=0) - all_b = torch.cat(all_b, dim=0) - all_a = torch.cat(all_a, dim=0) - all_gj = torch.cat(all_gj, dim=0) - all_gi = torch.cat(all_gi, dim=0) - all_anch = torch.cat(all_anch, dim=0) - - pair_wise_iou = box_iou(txyxy, pxyxys) - - pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8) - - top_k, _ = torch.topk(pair_wise_iou, min(10, pair_wise_iou.shape[1]), dim=1) - dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1) - - gt_cls_per_image = ( - F.one_hot(this_target[:, 1].to(torch.int64), self.nc) - .float() - .unsqueeze(1) - .repeat(1, pxyxys.shape[0], 1) - ) - - num_gt = this_target.shape[0] - cls_preds_ = ( - p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_() - * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_() - ) - - y = cls_preds_.sqrt_() - pair_wise_cls_loss = F.binary_cross_entropy_with_logits( - torch.log(y/(1-y)) , gt_cls_per_image, reduction="none" - ).sum(-1) - del cls_preds_ - - cost = ( - pair_wise_cls_loss - + 3.0 * pair_wise_iou_loss - ) - - matching_matrix = torch.zeros_like(cost) - - for gt_idx in range(num_gt): - _, pos_idx = torch.topk( - cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False - ) - matching_matrix[gt_idx][pos_idx] = 1.0 - - del top_k, dynamic_ks - anchor_matching_gt = matching_matrix.sum(0) - if (anchor_matching_gt > 1).sum() > 0: - _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0) - matching_matrix[:, anchor_matching_gt > 1] *= 0.0 - matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0 - fg_mask_inboxes = matching_matrix.sum(0) > 0.0 - matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0) - - from_which_layer = from_which_layer[fg_mask_inboxes] - all_b = all_b[fg_mask_inboxes] - all_a = all_a[fg_mask_inboxes] - all_gj = all_gj[fg_mask_inboxes] - all_gi = all_gi[fg_mask_inboxes] - all_anch = all_anch[fg_mask_inboxes] - - this_target = this_target[matched_gt_inds] - - for i in range(nl): - layer_idx = from_which_layer == i - matching_bs[i].append(all_b[layer_idx]) - matching_as[i].append(all_a[layer_idx]) - 
matching_gjs[i].append(all_gj[layer_idx]) - matching_gis[i].append(all_gi[layer_idx]) - matching_targets[i].append(this_target[layer_idx]) - matching_anchs[i].append(all_anch[layer_idx]) - - for i in range(nl): - if matching_targets[i] != []: - matching_bs[i] = torch.cat(matching_bs[i], dim=0) - matching_as[i] = torch.cat(matching_as[i], dim=0) - matching_gjs[i] = torch.cat(matching_gjs[i], dim=0) - matching_gis[i] = torch.cat(matching_gis[i], dim=0) - matching_targets[i] = torch.cat(matching_targets[i], dim=0) - matching_anchs[i] = torch.cat(matching_anchs[i], dim=0) - else: - matching_bs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_as[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_gjs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_gis[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_targets[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_anchs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - - return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs - - def find_3_positive(self, p, targets): - # Build targets for compute_loss(), input targets(image,class,x,y,w,h) - na, nt = self.na, targets.shape[0] # number of anchors, targets - indices, anch = [], [] - gain = torch.ones(7, device=targets.device).long() # normalized to gridspace gain - ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt) - targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices - - g = 0.5 # bias - off = torch.tensor([[0, 0], - [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m - # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm - ], device=targets.device).float() * g # offsets - - for i in range(self.nl): - anchors = self.anchors[i] - gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain - - # Match targets to anchors - t = targets * gain - if nt: - # Matches - r = t[:, :, 4:6] / anchors[:, None] # wh ratio - j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare - # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2)) - t = t[j] # filter - - # Offsets - gxy = t[:, 2:4] # grid xy - gxi = gain[[2, 3]] - gxy # inverse - j, k = ((gxy % 1. < g) & (gxy > 1.)).T - l, m = ((gxi % 1. 
< g) & (gxi > 1.)).T - j = torch.stack((torch.ones_like(j), j, k, l, m)) - t = t.repeat((5, 1, 1))[j] - offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j] - else: - t = targets[0] - offsets = 0 - - # Define - b, c = t[:, :2].long().T # image, class - gxy = t[:, 2:4] # grid xy - gwh = t[:, 4:6] # grid wh - gij = (gxy - offsets).long() - gi, gj = gij.T # grid xy indices - - # Append - a = t[:, 6].long() # anchor indices - indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices - anch.append(anchors[a]) # anchors - - return indices, anch - - -class ComputeLossAuxOTA: - # Compute losses - def __init__(self, model, autobalance=False): - super(ComputeLossAuxOTA, self).__init__() - device = next(model.parameters()).device # get model device - h = model.hyp # hyperparameters - - # Define criteria - BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device)) - BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device)) - - # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3 - self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets - - # Focal loss - g = h['fl_gamma'] # focal loss gamma - if g > 0: - BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g) - - det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module - self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7 - self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index - self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance - for k in 'na', 'nc', 'nl', 'anchors', 'stride': - setattr(self, k, getattr(det, k)) - - def __call__(self, p, targets, imgs): # predictions, targets, model - device = targets.device - lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device) - bs_aux, as_aux_, gjs_aux, gis_aux, targets_aux, anchors_aux = self.build_targets2(p[:self.nl], targets, imgs) - bs, as_, gjs, gis, targets, anchors = self.build_targets(p[:self.nl], targets, imgs) - pre_gen_gains_aux = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p[:self.nl]] - pre_gen_gains = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p[:self.nl]] - - - # Losses - for i in range(self.nl): # layer index, layer predictions - pi = p[i] - pi_aux = p[i+self.nl] - b, a, gj, gi = bs[i], as_[i], gjs[i], gis[i] # image, anchor, gridy, gridx - b_aux, a_aux, gj_aux, gi_aux = bs_aux[i], as_aux_[i], gjs_aux[i], gis_aux[i] # image, anchor, gridy, gridx - tobj = torch.zeros_like(pi[..., 0], device=device) # target obj - tobj_aux = torch.zeros_like(pi_aux[..., 0], device=device) # target obj - - n = b.shape[0] # number of targets - if n: - ps = pi[b, a, gj, gi] # prediction subset corresponding to targets - - # Regression - grid = torch.stack([gi, gj], dim=1) - pxy = ps[:, :2].sigmoid() * 2. 
- 0.5 - pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i] - pbox = torch.cat((pxy, pwh), 1) # predicted box - selected_tbox = targets[i][:, 2:6] * pre_gen_gains[i] - selected_tbox[:, :2] -= grid - iou = bbox_iou(pbox.T, selected_tbox, x1y1x2y2=False, CIoU=True) # iou(prediction, target) - lbox += (1.0 - iou).mean() # iou loss - - # Objectness - tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio - - # Classification - selected_tcls = targets[i][:, 1].long() - if self.nc > 1: # cls loss (only if multiple classes) - t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets - t[range(n), selected_tcls] = self.cp - lcls += self.BCEcls(ps[:, 5:], t) # BCE - - # Append targets to text file - # with open('targets.txt', 'a') as file: - # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)] - - n_aux = b_aux.shape[0] # number of targets - if n_aux: - ps_aux = pi_aux[b_aux, a_aux, gj_aux, gi_aux] # prediction subset corresponding to targets - grid_aux = torch.stack([gi_aux, gj_aux], dim=1) - pxy_aux = ps_aux[:, :2].sigmoid() * 2. - 0.5 - #pxy_aux = ps_aux[:, :2].sigmoid() * 3. - 1. - pwh_aux = (ps_aux[:, 2:4].sigmoid() * 2) ** 2 * anchors_aux[i] - pbox_aux = torch.cat((pxy_aux, pwh_aux), 1) # predicted box - selected_tbox_aux = targets_aux[i][:, 2:6] * pre_gen_gains_aux[i] - selected_tbox_aux[:, :2] -= grid_aux - iou_aux = bbox_iou(pbox_aux.T, selected_tbox_aux, x1y1x2y2=False, CIoU=True) # iou(prediction, target) - lbox += 0.25 * (1.0 - iou_aux).mean() # iou loss - - # Objectness - tobj_aux[b_aux, a_aux, gj_aux, gi_aux] = (1.0 - self.gr) + self.gr * iou_aux.detach().clamp(0).type(tobj_aux.dtype) # iou ratio - - # Classification - selected_tcls_aux = targets_aux[i][:, 1].long() - if self.nc > 1: # cls loss (only if multiple classes) - t_aux = torch.full_like(ps_aux[:, 5:], self.cn, device=device) # targets - t_aux[range(n_aux), selected_tcls_aux] = self.cp - lcls += 0.25 * self.BCEcls(ps_aux[:, 5:], t_aux) # BCE - - obji = self.BCEobj(pi[..., 4], tobj) - obji_aux = self.BCEobj(pi_aux[..., 4], tobj_aux) - lobj += obji * self.balance[i] + 0.25 * obji_aux * self.balance[i] # obj loss - if self.autobalance: - self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item() - - if self.autobalance: - self.balance = [x / self.balance[self.ssi] for x in self.balance] - lbox *= self.hyp['box'] - lobj *= self.hyp['obj'] - lcls *= self.hyp['cls'] - bs = tobj.shape[0] # batch size - - loss = lbox + lobj + lcls - return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach() - - def build_targets(self, p, targets, imgs): - - indices, anch = self.find_3_positive(p, targets) - - matching_bs = [[] for pp in p] - matching_as = [[] for pp in p] - matching_gjs = [[] for pp in p] - matching_gis = [[] for pp in p] - matching_targets = [[] for pp in p] - matching_anchs = [[] for pp in p] - - nl = len(p) - - for batch_idx in range(p[0].shape[0]): - - b_idx = targets[:, 0]==batch_idx - this_target = targets[b_idx] - if this_target.shape[0] == 0: - continue - - txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1] - txyxy = xywh2xyxy(txywh) - - pxyxys = [] - p_cls = [] - p_obj = [] - from_which_layer = [] - all_b = [] - all_a = [] - all_gj = [] - all_gi = [] - all_anch = [] - - for i, pi in enumerate(p): - - b, a, gj, gi = indices[i] - idx = (b == batch_idx) - b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx] - all_b.append(b) - all_a.append(a) - all_gj.append(gj) - all_gi.append(gi) - all_anch.append(anch[i][idx]) - 
from_which_layer.append(torch.ones(size=(len(b),)) * i) - - fg_pred = pi[b, a, gj, gi] - p_obj.append(fg_pred[:, 4:5]) - p_cls.append(fg_pred[:, 5:]) - - grid = torch.stack([gi, gj], dim=1) - pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i] #/ 8. - #pxy = (fg_pred[:, :2].sigmoid() * 3. - 1. + grid) * self.stride[i] - pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i] #/ 8. - pxywh = torch.cat([pxy, pwh], dim=-1) - pxyxy = xywh2xyxy(pxywh) - pxyxys.append(pxyxy) - - pxyxys = torch.cat(pxyxys, dim=0) - if pxyxys.shape[0] == 0: - continue - p_obj = torch.cat(p_obj, dim=0) - p_cls = torch.cat(p_cls, dim=0) - from_which_layer = torch.cat(from_which_layer, dim=0) - all_b = torch.cat(all_b, dim=0) - all_a = torch.cat(all_a, dim=0) - all_gj = torch.cat(all_gj, dim=0) - all_gi = torch.cat(all_gi, dim=0) - all_anch = torch.cat(all_anch, dim=0) - - pair_wise_iou = box_iou(txyxy, pxyxys) - - pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8) - - top_k, _ = torch.topk(pair_wise_iou, min(20, pair_wise_iou.shape[1]), dim=1) - dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1) - - gt_cls_per_image = ( - F.one_hot(this_target[:, 1].to(torch.int64), self.nc) - .float() - .unsqueeze(1) - .repeat(1, pxyxys.shape[0], 1) - ) - - num_gt = this_target.shape[0] - cls_preds_ = ( - p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_() - * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_() - ) - - y = cls_preds_.sqrt_() - pair_wise_cls_loss = F.binary_cross_entropy_with_logits( - torch.log(y/(1-y)) , gt_cls_per_image, reduction="none" - ).sum(-1) - del cls_preds_ - - cost = ( - pair_wise_cls_loss - + 3.0 * pair_wise_iou_loss - ) - - matching_matrix = torch.zeros_like(cost) - - for gt_idx in range(num_gt): - _, pos_idx = torch.topk( - cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False - ) - matching_matrix[gt_idx][pos_idx] = 1.0 - - del top_k, dynamic_ks - anchor_matching_gt = matching_matrix.sum(0) - if (anchor_matching_gt > 1).sum() > 0: - _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0) - matching_matrix[:, anchor_matching_gt > 1] *= 0.0 - matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0 - fg_mask_inboxes = matching_matrix.sum(0) > 0.0 - matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0) - - from_which_layer = from_which_layer[fg_mask_inboxes] - all_b = all_b[fg_mask_inboxes] - all_a = all_a[fg_mask_inboxes] - all_gj = all_gj[fg_mask_inboxes] - all_gi = all_gi[fg_mask_inboxes] - all_anch = all_anch[fg_mask_inboxes] - - this_target = this_target[matched_gt_inds] - - for i in range(nl): - layer_idx = from_which_layer == i - matching_bs[i].append(all_b[layer_idx]) - matching_as[i].append(all_a[layer_idx]) - matching_gjs[i].append(all_gj[layer_idx]) - matching_gis[i].append(all_gi[layer_idx]) - matching_targets[i].append(this_target[layer_idx]) - matching_anchs[i].append(all_anch[layer_idx]) - - for i in range(nl): - if matching_targets[i] != []: - matching_bs[i] = torch.cat(matching_bs[i], dim=0) - matching_as[i] = torch.cat(matching_as[i], dim=0) - matching_gjs[i] = torch.cat(matching_gjs[i], dim=0) - matching_gis[i] = torch.cat(matching_gis[i], dim=0) - matching_targets[i] = torch.cat(matching_targets[i], dim=0) - matching_anchs[i] = torch.cat(matching_anchs[i], dim=0) - else: - matching_bs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_as[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_gjs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_gis[i] = 
torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_targets[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_anchs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - - return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs - - def build_targets2(self, p, targets, imgs): - - indices, anch = self.find_5_positive(p, targets) - - matching_bs = [[] for pp in p] - matching_as = [[] for pp in p] - matching_gjs = [[] for pp in p] - matching_gis = [[] for pp in p] - matching_targets = [[] for pp in p] - matching_anchs = [[] for pp in p] - - nl = len(p) - - for batch_idx in range(p[0].shape[0]): - - b_idx = targets[:, 0]==batch_idx - this_target = targets[b_idx] - if this_target.shape[0] == 0: - continue - - txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1] - txyxy = xywh2xyxy(txywh) - - pxyxys = [] - p_cls = [] - p_obj = [] - from_which_layer = [] - all_b = [] - all_a = [] - all_gj = [] - all_gi = [] - all_anch = [] - - for i, pi in enumerate(p): - - b, a, gj, gi = indices[i] - idx = (b == batch_idx) - b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx] - all_b.append(b) - all_a.append(a) - all_gj.append(gj) - all_gi.append(gi) - all_anch.append(anch[i][idx]) - from_which_layer.append(torch.ones(size=(len(b),)) * i) - - fg_pred = pi[b, a, gj, gi] - p_obj.append(fg_pred[:, 4:5]) - p_cls.append(fg_pred[:, 5:]) - - grid = torch.stack([gi, gj], dim=1) - pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i] #/ 8. - #pxy = (fg_pred[:, :2].sigmoid() * 3. - 1. + grid) * self.stride[i] - pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i] #/ 8. - pxywh = torch.cat([pxy, pwh], dim=-1) - pxyxy = xywh2xyxy(pxywh) - pxyxys.append(pxyxy) - - pxyxys = torch.cat(pxyxys, dim=0) - if pxyxys.shape[0] == 0: - continue - p_obj = torch.cat(p_obj, dim=0) - p_cls = torch.cat(p_cls, dim=0) - from_which_layer = torch.cat(from_which_layer, dim=0) - all_b = torch.cat(all_b, dim=0) - all_a = torch.cat(all_a, dim=0) - all_gj = torch.cat(all_gj, dim=0) - all_gi = torch.cat(all_gi, dim=0) - all_anch = torch.cat(all_anch, dim=0) - - pair_wise_iou = box_iou(txyxy, pxyxys) - - pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8) - - top_k, _ = torch.topk(pair_wise_iou, min(20, pair_wise_iou.shape[1]), dim=1) - dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1) - - gt_cls_per_image = ( - F.one_hot(this_target[:, 1].to(torch.int64), self.nc) - .float() - .unsqueeze(1) - .repeat(1, pxyxys.shape[0], 1) - ) - - num_gt = this_target.shape[0] - cls_preds_ = ( - p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_() - * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_() - ) - - y = cls_preds_.sqrt_() - pair_wise_cls_loss = F.binary_cross_entropy_with_logits( - torch.log(y/(1-y)) , gt_cls_per_image, reduction="none" - ).sum(-1) - del cls_preds_ - - cost = ( - pair_wise_cls_loss - + 3.0 * pair_wise_iou_loss - ) - - matching_matrix = torch.zeros_like(cost) - - for gt_idx in range(num_gt): - _, pos_idx = torch.topk( - cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False - ) - matching_matrix[gt_idx][pos_idx] = 1.0 - - del top_k, dynamic_ks - anchor_matching_gt = matching_matrix.sum(0) - if (anchor_matching_gt > 1).sum() > 0: - _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0) - matching_matrix[:, anchor_matching_gt > 1] *= 0.0 - matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0 - fg_mask_inboxes = matching_matrix.sum(0) > 0.0 - matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0) - 
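# --- annotation (not in the upstream file): the block above is a SimOTA-style
# dynamic-k assignment:
#     1. k_gt = clamp(sum of each GT's top pairwise IoUs (up to 20 candidates), min=1)
#     2. cost = BCE(logit of sqrt(cls_prob * obj_prob) vs one-hot class) + 3.0 * (-log IoU)
#     3. each GT claims its k_gt lowest-cost candidate predictions
#     4. a candidate claimed by several GTs is kept only for the min-cost GT
# fg_mask_inboxes marks the surviving candidates; matched_gt_inds maps each one
# back to the GT it serves.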
- from_which_layer = from_which_layer[fg_mask_inboxes] - all_b = all_b[fg_mask_inboxes] - all_a = all_a[fg_mask_inboxes] - all_gj = all_gj[fg_mask_inboxes] - all_gi = all_gi[fg_mask_inboxes] - all_anch = all_anch[fg_mask_inboxes] - - this_target = this_target[matched_gt_inds] - - for i in range(nl): - layer_idx = from_which_layer == i - matching_bs[i].append(all_b[layer_idx]) - matching_as[i].append(all_a[layer_idx]) - matching_gjs[i].append(all_gj[layer_idx]) - matching_gis[i].append(all_gi[layer_idx]) - matching_targets[i].append(this_target[layer_idx]) - matching_anchs[i].append(all_anch[layer_idx]) - - for i in range(nl): - if matching_targets[i] != []: - matching_bs[i] = torch.cat(matching_bs[i], dim=0) - matching_as[i] = torch.cat(matching_as[i], dim=0) - matching_gjs[i] = torch.cat(matching_gjs[i], dim=0) - matching_gis[i] = torch.cat(matching_gis[i], dim=0) - matching_targets[i] = torch.cat(matching_targets[i], dim=0) - matching_anchs[i] = torch.cat(matching_anchs[i], dim=0) - else: - matching_bs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_as[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_gjs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_gis[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_targets[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - matching_anchs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64) - - return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs - - def find_5_positive(self, p, targets): - # Build targets for compute_loss(), input targets(image,class,x,y,w,h) - na, nt = self.na, targets.shape[0] # number of anchors, targets - indices, anch = [], [] - gain = torch.ones(7, device=targets.device).long() # normalized to gridspace gain - ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt) - targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices - - g = 1.0 # bias - off = torch.tensor([[0, 0], - [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m - # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm - ], device=targets.device).float() * g # offsets - - for i in range(self.nl): - anchors = self.anchors[i] - gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain - - # Match targets to anchors - t = targets * gain - if nt: - # Matches - r = t[:, :, 4:6] / anchors[:, None] # wh ratio - j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare - # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2)) - t = t[j] # filter - - # Offsets - gxy = t[:, 2:4] # grid xy - gxi = gain[[2, 3]] - gxy # inverse - j, k = ((gxy % 1. < g) & (gxy > 1.)).T - l, m = ((gxi % 1. 
< g) & (gxi > 1.)).T - j = torch.stack((torch.ones_like(j), j, k, l, m)) - t = t.repeat((5, 1, 1))[j] - offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j] - else: - t = targets[0] - offsets = 0 - - # Define - b, c = t[:, :2].long().T # image, class - gxy = t[:, 2:4] # grid xy - gwh = t[:, 4:6] # grid wh - gij = (gxy - offsets).long() - gi, gj = gij.T # grid xy indices - - # Append - a = t[:, 6].long() # anchor indices - indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices - anch.append(anchors[a]) # anchors - - return indices, anch - - def find_3_positive(self, p, targets): - # Build targets for compute_loss(), input targets(image,class,x,y,w,h) - na, nt = self.na, targets.shape[0] # number of anchors, targets - indices, anch = [], [] - gain = torch.ones(7, device=targets.device).long() # normalized to gridspace gain - ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt) - targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices - - g = 0.5 # bias - off = torch.tensor([[0, 0], - [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m - # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm - ], device=targets.device).float() * g # offsets - - for i in range(self.nl): - anchors = self.anchors[i] - gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain - - # Match targets to anchors - t = targets * gain - if nt: - # Matches - r = t[:, :, 4:6] / anchors[:, None] # wh ratio - j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare - # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2)) - t = t[j] # filter - - # Offsets - gxy = t[:, 2:4] # grid xy - gxi = gain[[2, 3]] - gxy # inverse - j, k = ((gxy % 1. < g) & (gxy > 1.)).T - l, m = ((gxi % 1. 
< g) & (gxi > 1.)).T - j = torch.stack((torch.ones_like(j), j, k, l, m)) - t = t.repeat((5, 1, 1))[j] - offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j] - else: - t = targets[0] - offsets = 0 - - # Define - b, c = t[:, :2].long().T # image, class - gxy = t[:, 2:4] # grid xy - gwh = t[:, 4:6] # grid wh - gij = (gxy - offsets).long() - gi, gj = gij.T # grid xy indices - - # Append - a = t[:, 6].long() # anchor indices - indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices - anch.append(anchors[a]) # anchors - - return indices, anch diff --git a/spaces/radames/Candle-T5-Generation-Wasm/build/m.d.ts b/spaces/radames/Candle-T5-Generation-Wasm/build/m.d.ts deleted file mode 100644 index 21c078ca6d35d5432ad19ec8e4306c62e4072d9b..0000000000000000000000000000000000000000 --- a/spaces/radames/Candle-T5-Generation-Wasm/build/m.d.ts +++ /dev/null @@ -1,74 +0,0 @@ -/* tslint:disable */ -/* eslint-disable */ -/** -*/ -export class ModelConditionalGeneration { - free(): void; -/** -* @param {Uint8Array} weights -* @param {Uint8Array} tokenizer -* @param {Uint8Array} config -*/ - constructor(weights: Uint8Array, tokenizer: Uint8Array, config: Uint8Array); -/** -* @param {any} input -* @returns {any} -*/ - decode(input: any): any; -} -/** -*/ -export class ModelEncoder { - free(): void; -/** -* @param {Uint8Array} weights -* @param {Uint8Array} tokenizer -* @param {Uint8Array} config -*/ - constructor(weights: Uint8Array, tokenizer: Uint8Array, config: Uint8Array); -/** -* @param {any} input -* @returns {any} -*/ - decode(input: any): any; -} - -export type InitInput = RequestInfo | URL | Response | BufferSource | WebAssembly.Module; - -export interface InitOutput { - readonly memory: WebAssembly.Memory; - readonly __wbg_modelencoder_free: (a: number) => void; - readonly __wbg_modelconditionalgeneration_free: (a: number) => void; - readonly modelconditionalgeneration_load: (a: number, b: number, c: number, d: number, e: number, f: number, g: number) => void; - readonly modelconditionalgeneration_decode: (a: number, b: number, c: number) => void; - readonly modelencoder_load: (a: number, b: number, c: number, d: number, e: number, f: number, g: number) => void; - readonly modelencoder_decode: (a: number, b: number, c: number) => void; - readonly main: (a: number, b: number) => number; - readonly __wbindgen_malloc: (a: number, b: number) => number; - readonly __wbindgen_realloc: (a: number, b: number, c: number, d: number) => number; - readonly __wbindgen_add_to_stack_pointer: (a: number) => number; - readonly __wbindgen_free: (a: number, b: number, c: number) => void; - readonly __wbindgen_exn_store: (a: number) => void; - readonly __wbindgen_start: () => void; -} - -export type SyncInitInput = BufferSource | WebAssembly.Module; -/** -* Instantiates the given `module`, which can either be bytes or -* a precompiled `WebAssembly.Module`. -* -* @param {SyncInitInput} module -* -* @returns {InitOutput} -*/ -export function initSync(module: SyncInitInput): InitOutput; - -/** -* If `module_or_path` is {RequestInfo} or {URL}, makes a request and -* for everything else, calls `WebAssembly.instantiate` directly. 
-* - -* @param {InitInput | Promise<InitInput>} module_or_path - -* @returns {Promise<InitOutput>} - -*/ -export default function __wbg_init (module_or_path?: InitInput | Promise<InitInput>): Promise<InitOutput>; diff --git a/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/models/cnn/transform_e2p.py b/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/models/cnn/transform_e2p.py deleted file mode 100644 index 14c42534bf5f608bd34f576c5ac1f6e8c0eb6167..0000000000000000000000000000000000000000 --- a/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/models/cnn/transform_e2p.py +++ /dev/null @@ -1,257 +0,0 @@ -import torch -from torch import nn - - -class E2Ptransform(nn.Module): - """Edge to Points transformation""" - def __init__(self, points, edges, out_dim=64): - super(E2Ptransform, self).__init__() - self.ones = nn.parameter.Parameter(torch.ones((1, out_dim, out_dim)), requires_grad=False) - edge_matrix = self._select_matrix(points, edges) - self.edge2point = nn.parameter.Parameter(edge_matrix, requires_grad=False) # Npoint X Nedges+1 - - def forward(self, edges): - B, L, H, W = edges.shape - edges_ext = torch.cat((edges, self.ones.repeat(B, 1, 1, 1)), 1) - edges_mat = edges_ext.permute(0, 2, 3, 1).reshape(B, H, W, 1, L+1) - edge2point = self.edge2point.transpose(-1, -2) - point_edges = torch.matmul(edges_mat, edge2point) - point_edges = point_edges.reshape(B, H, W, -1).permute(0, 3, 1, 2) - point_edges[point_edges > 1] = 1. - return point_edges - - def _select_matrix(self, points, edges): - - if points == 98 and edges == 15: - return WFLW_98x15 - elif points == 68 and edges == 13: - return W300_68x13 - elif points == 29 and edges == 13: - return COFW_29x13 - elif points == 19 and edges == 6: - return AFLW19_19x6 - else: - raise ValueError("E2P matrix not implemented") - - -# Database matrixE2P -WFLW_98x15 = torch.Tensor([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0], - [0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]]) - - -W300_68x13 = torch.Tensor([ [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]]) - - -AFLW19_19x6 = torch.Tensor([[1, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0], - [0, 0, 1, 0, 0, 0, 0], - [0, 0, 1, 0, 0, 0, 0], - [0, 0, 1, 0, 0, 0, 0], - [0, 0, 0, 1, 0, 0, 0], - [0, 0, 0, 1, 0, 0, 0], - [0, 0, 0, 1, 0, 0, 0], - [0, 0, 0, 0, 1, 0, 0], - [0, 0, 0, 0, 1, 0, 0], - [0, 0, 0, 0, 1, 0, 0], - [0, 0, 0, 0, 0, 1, 0], - [0, 0, 0, 0, 0, 1, 0], - [0, 0, 0, 0, 0, 1, 0], - [0, 0, 0, 0, 0, 0, 1]]) - - -COFW_29x13 = torch.Tensor([ [1, 1, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]]) diff --git a/spaces/radames/sentence-embeddings-visualization/README.md b/spaces/radames/sentence-embeddings-visualization/README.md deleted file mode 100644 index bc3a0b6593945dfadb8b0cce55106c94548a6473..0000000000000000000000000000000000000000 --- a/spaces/radames/sentence-embeddings-visualization/README.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: Sentence Embeddings Visualization -emoji: 📈 -colorFrom: green -colorTo: indigo -sdk: gradio -app_file: app.py -pinned: false ---- - -# Hugging Face Spaces + Observable -### Sentence Embeddings Visualization - -Recently I've been exploring [Hugging face Spaces](https://huggingface.co/spaces) and [sentence-transformers](https://huggingface.co/sentence-transformers) to build an application to generate text embeddings and clustering visualization. - -Currently, the quickest way to build interactive ML apps with Python (backend/frontend), afaik, is to use [Streamlit](https://streamlit.io/) or [Gradio](https://www.gradio.app/). To embed an Observable notebook on Streamlit, you can use this custom component [streamlit-observable](https://github.com/asg017/streamlit-observable) - -This [Observable notebook](https://observablehq.com/@radames/hugging-face-spaces-observable-sentence-embeddings) is the frontend application for this [Hugging Face Spaces](https://huggingface.co/spaces/radames/sentence-embeddings-visualization) app. - -This notebook explores another way to integrate Observable inside Hugging Face Spaces. Currently, [HF Spaces supports](https://huggingface.co/docs/hub/spaces#streamlit-and-gradio) Streamlit and Gradio or a simple static web page. - -The concept here is to use this entire notebook as the frontend and data visualization application for the [ML Flask/Python](https://huggingface.co/spaces/radames/sentence-embeddings-visualization/blob/main/app.py#L37-L75) backend. - -* The index route renders a [simple HTML template](https://huggingface.co/spaces/radames/sentence-embeddings-visualization/blob/main/templates/index.html) containing [Observable Runtime API code](https://observablehq.com/@observablehq/downloading-and-embedding-notebooks). 
-* A single function, triggered by a POST request to \`run-umap\`, returns a low dimensional representation of the original sentence transformers embeddings using UMAP and cluster analysis with HDBSCAN. - * All the visualization and interactive magic happens in the JavaScript code inside the Observable Notebook. diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Illustrator CS6 V16.0.0 682 Portablel.md b/spaces/raedeXanto/academic-chatgpt-beta/Adobe Illustrator CS6 V16.0.0 682 Portablel.md deleted file mode 100644 index cf27a27a71734e57a906c54cebf7383889aaba3b..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Illustrator CS6 V16.0.0 682 Portablel.md +++ /dev/null @@ -1,36 +0,0 @@ -
    -

    Adobe Illustrator CS6 V16.0.0 682 Portablel: What You Need to Know

    - -

Adobe Illustrator CS6 is a powerful vector graphics application that allows you to create stunning logos, icons, illustrations, and more. But what if you want to use it on different devices without installing it? That's where Adobe Illustrator CS6 V16.0.0 682 Portablel comes in.

    -

    Adobe Illustrator CS6 V16.0.0 682 Portablel


    Download Zip ❤❤❤ https://tinourl.com/2uL2Pm



    - -

    Adobe Illustrator CS6 V16.0.0 682 Portablel is not an official Adobe product but a hacked version that can be run from a USB flash drive or any other portable device. It sounds convenient, but it also comes with many risks and disadvantages that you should be aware of before downloading it.

    - -

    The Risks of Using Adobe Illustrator CS6 V16.0.0 682 Portablel

    - -

    Here are some of the main reasons why you should avoid using Adobe Illustrator CS6 V16.0.0 682 Portablel:

    - -
      -
    • High risk of virus infection. Since this version is not authorized by Adobe, you cannot be sure that it has not been modified or infected with malware that can harm your computer or steal your personal information[^1^].
    • -
    • Lack of updates and developer support. When you use Adobe Illustrator CS6 V16.0.0 682 Portablel, you will not receive any updates that fix bugs, improve performance, or add new features[^2^]. You will also not have access to Adobe's customer service or technical support in case you encounter any problems.
    • -
    • Unstable and slow operation. Adobe Illustrator CS6 V16.0.0 682 Portablel is compressed and stripped of some functions to reduce its size and make it portable[^2^]. This means that it will run slower and crash more often than the original version.
    • -
    • Violation of the law. Using Adobe Illustrator CS6 V16.0.0 682 Portablel is illegal and violates Adobe's intellectual property rights[^2^]. You could face fines or even jail time if you are caught using or distributing this software.
    • -
    - -

    The Benefits of Using Adobe Illustrator CS6 License

    - -

    If you want to enjoy the full potential of Adobe Illustrator CS6 without risking your security, performance, or legal status, you should buy a license from Adobe's official website. Here are some of the benefits of using a licensed version of Adobe Illustrator CS6:

    -

    - -
      -
• Flexible scaling. You can save your AI files at any resolution you need without losing quality or detail[^2^]. You can also print your projects without any pixelation or distortion.
    • -
    • Creative tools and features. You can access all the tools and features that Adobe Illustrator CS6 has to offer, such as gradients, patterns, brushes, symbols, effects, and more[^3^]. You can also use plugins and extensions to enhance your workflow and creativity.
    • -
    • Integration with other Adobe products. You can easily import and export your files between Adobe Illustrator CS6 and other Adobe products, such as Photoshop, InDesign, After Effects, and more[^3^]. You can also use Adobe Creative Cloud to sync your files and settings across multiple devices.
    • -
    • Security and reliability. You can be sure that your software is safe and free from viruses or malware[^1^]. You can also update your software regularly to get the latest improvements and fixes[^2^]. And if you ever need help, you can contact Adobe's customer service or technical support anytime[^2^].
    • -
    - -

    Conclusion

    - -

    Adobe Illustrator CS6 V16.0.0 682 Portablel may seem like a convenient way to use Adobe Illustrator CS6 on different devices without installing it, but it is actually a risky and illegal option that can compromise your security, performance, and legal status. If you want to use Adobe Illustrator CS6 safely and legally, you should buy a license from Adobe's official website and enjoy all the benefits of using a licensed version of this amazing software.

    -
    -
    \ No newline at end of file diff --git a/spaces/realfill-library/RealFill-Training-UI/app_training.py b/spaces/realfill-library/RealFill-Training-UI/app_training.py deleted file mode 100644 index 762c8559e51916b73033976673d8e59681de1cef..0000000000000000000000000000000000000000 --- a/spaces/realfill-library/RealFill-Training-UI/app_training.py +++ /dev/null @@ -1,152 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os - -import gradio as gr - -from constants import UploadTarget -from inference import InferencePipeline -from trainer import Trainer - - -def create_training_demo(trainer: Trainer, - pipe: InferencePipeline | None = None) -> gr.Blocks: - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - with gr.Box(): - gr.Markdown('Training Data') - reference_images = gr.Files(label='Reference images') - target_image = gr.Files(label='Target image') - target_mask = gr.Files(label='Target mask') - gr.Markdown(''' - - Upload reference images of the scene you are planning on training on. - - For the target image, the inpainting region should be white. - - For the target mask, white for inpainting and black for keeping as is. - ''') - with gr.Box(): - gr.Markdown('Output Model') - output_model_name = gr.Text(label='Name of your model', - max_lines=1) - delete_existing_model = gr.Checkbox( - label='Delete existing model of the same name', - value=False) - with gr.Box(): - gr.Markdown('Upload Settings') - with gr.Row(): - upload_to_hub = gr.Checkbox( - label='Upload model to Hub', value=True) - use_private_repo = gr.Checkbox(label='Private', - value=True) - delete_existing_repo = gr.Checkbox( - label='Delete existing repo of the same name', - value=False) - upload_to = gr.Radio( - label='Upload to', - choices=[_.value for _ in UploadTarget], - value=UploadTarget.REALFILL_LIBRARY.value) - gr.Markdown(''' - - By default, trained models will be uploaded to [ReaFill Library](https://huggingface.co/realfill-library). - - You can also choose "Personal Profile", in which case, the model will be uploaded to https://huggingface.co/{your_username}/{model_name}. 
- ''') - - with gr.Box(): - gr.Markdown('Training Parameters') - with gr.Row(): - base_model = gr.Text( - label='Base Model', - value='stabilityai/stable-diffusion-2-inpainting', - max_lines=1) - resolution = gr.Dropdown(choices=['512', '768'], - value='512', - label='Resolution') - num_training_steps = gr.Number( - label='Number of Training Steps', value=2000, precision=0) - unet_learning_rate = gr.Number(label='Unet Learning Rate', value=0.0002) - text_encoder_learning_rate = gr.Number(label='Text Encoder Learning Rate', value=0.00004) - lora_rank = gr.Number(label='LoRA rank value', value=8, precision=0) - lora_dropout = gr.Number(label='LoRA dropout rate', value=0.1) - lora_alpha = gr.Number(label='LoRA alpha value', value=16, precision=0) - gradient_accumulation = gr.Number( - label='Number of Gradient Accumulation', - value=1, - precision=0) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=100000, - step=1, - value=0) - fp16 = gr.Checkbox(label='FP16', value=True) - use_8bit_adam = gr.Checkbox(label='Use 8bit Adam', value=True) - checkpointing_steps = gr.Number(label='Checkpointing Steps', - value=100, - precision=0) - use_wandb = gr.Checkbox(label='Use W&B', - value=False, - interactive=bool( - os.getenv('WANDB_API_KEY'))) - validation_steps = gr.Number(label='Validation Steps', - value=100, - precision=0) - gr.Markdown(''' - - The base model must be a model that is compatible with [diffusers](https://github.com/huggingface/diffusers) library. - - It takes a few minutes to download the base model first. - - It will take about 16 minutes to train for 2000 steps with a T4 GPU. - - You may want to try a small number of steps first, like 1, to see if everything works fine in your environment. - - You can check the training status by pressing the "Open logs" button if you are running this on your Space. - - You need to set the environment variable `WANDB_API_KEY` if you'd like to use [W&B](https://wandb.ai/site). See [W&B documentation](https://docs.wandb.ai/guides/track/advanced/environment-variables). - - **Note:** Due to [this issue](https://github.com/huggingface/accelerate/issues/944), currently, training will not terminate properly if you use W&B. 
- ''') - - remove_gpu_after_training = gr.Checkbox( - label='Remove GPU after training', - value=False, - interactive=bool(os.getenv('SPACE_ID')), - visible=False) - run_button = gr.Button('Start Training') - - with gr.Box(): - gr.Markdown('Output message') - output_message = gr.Markdown() - - if pipe is not None: - run_button.click(fn=pipe.clear) - run_button.click(fn=trainer.run, - inputs=[ - reference_images, - target_image, - target_mask, - output_model_name, - delete_existing_model, - base_model, - resolution, - num_training_steps, - unet_learning_rate, - text_encoder_learning_rate, - lora_rank, - lora_dropout, - lora_alpha, - gradient_accumulation, - seed, - fp16, - use_8bit_adam, - checkpointing_steps, - use_wandb, - validation_steps, - upload_to_hub, - use_private_repo, - delete_existing_repo, - upload_to, - remove_gpu_after_training, - ], - outputs=output_message) - return demo - - -if __name__ == '__main__': - hf_token = os.getenv('HF_TOKEN') - trainer = Trainer(hf_token) - demo = create_training_demo(trainer) - demo.queue(max_size=1).launch(share=False) diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/0.3 Mega Pixel Fixed Web Camera Drivers.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/0.3 Mega Pixel Fixed Web Camera Drivers.md deleted file mode 100644 index 34eea59c88f84e3a2ff3c8db3ccb547fd0221fd2..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/0.3 Mega Pixel Fixed Web Camera Drivers.md +++ /dev/null @@ -1,6 +0,0 @@ -

    0.3 mega pixel fixed web camera drivers


    Download Zip >>>>> https://urlgoal.com/2uCLH8



- -Download Chicony webcam drivers or install DriverPack Solution software for driver scan and update. ... Sonix ST50220 USB Video Camera · EasyCamera · USB 2.0 1.3M UVC ... HP Webcam [2 MP Fixed] ... USB2.0 0.3M UVC WebCam.
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bazaraajarvisprogramacionlinealflujoredes.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bazaraajarvisprogramacionlinealflujoredes.md deleted file mode 100644 index 585500c2f97d756ad854624a22e20cbd21b8dde9..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bazaraajarvisprogramacionlinealflujoredes.md +++ /dev/null @@ -1,6 +0,0 @@ -

    bazaraajarvisprogramacionlinealflujoredes


    Download File >>>>> https://urlgoal.com/2uCJDc



- -
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Creative Es1373 Sound Card Driver Free Download For Windows 7 Free.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Creative Es1373 Sound Card Driver Free Download For Windows 7 Free.md deleted file mode 100644 index 6f993f9f42fa21de60f107186e67bdce0e7376d4..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Creative Es1373 Sound Card Driver Free Download For Windows 7 Free.md +++ /dev/null @@ -1,50 +0,0 @@ -
    -

    Creative Es1373 Sound Card Driver Free Download For Windows 7

    -

    If you are looking for a high-quality sound card driver for your Windows 7 computer, you might want to consider the Creative Es1373 Sound Card Driver. This driver is compatible with the Ensoniq/Creative AudioPCI ES1373 sound card, which is a versatile and powerful device that can enhance your audio experience.

    -

    The Creative Es1373 Sound Card Driver offers many features and benefits, such as:

    -

    Creative Es1373 Sound Card Driver Free Download For Windows 7


    Download Zip ○○○ https://urlgoal.com/2uCK4c



    -
      -
• Wavetable sound sets: You can choose from 2, 4, or 8 MB sets of 128 General MIDI wavetable instruments, 61 drum programs, and 128 MT-32 instruments, with the Roland GS sound set included in the 4 and 8 MB sets.
    • -
    • Synthesizer: You can enjoy up to 32 simultaneous voice polyphony and 16 MIDI channels with this driver.
    • -
    • Digital effects: You can add reverb, chorus, and spatial enhancement to your sound with this driver.
    • -
    • Digital audio: You can record and playback 16-bit audio at up to 48 kHz (mono/stereo) with this driver.
    • -
• A/D D/A codec: You can get the lowest noise possible with this driver, which has a typical signal-to-noise ratio of 90 dB.
    • -
    • Frequency response: You can hear every detail of your sound with this driver, which has a frequency response of 20Hz - 22kHz.
    • -
    • Full duplex operation: You can record and playback sound simultaneously with this driver.
    • -
    • S/PDIF and I²S output: You can connect your sound card to external devices with these outputs, which are available only for the ES1373 model.
    • -
• Supported standards: You can play various games and applications with this driver, which supports ENSONIQ Soundscape, Microsoft Direct Audio (DirectX), AdLib, OpenAL, Sound Blaster Pro (2.0), General MIDI, MT-32, FM (software emulation), and the MPC 1, 2, and 3 standards.
    • -
    • Drivers: You can install this driver on various operating systems, such as DOS, Windows (3.1, 9x, NT 4.x, 2000, XP), and FreeBSD.
    • -
    -

    The Creative Es1373 Sound Card Driver is easy to download and install. You just need to follow these steps:

    -
      -
    1. Go to the official website of Creative and select your country/region.
    2. -
    3. Click on the Support tab and then on Downloads.
    4. -
    5. Type in Creative Es1373 Sound Card Driver in the search box and hit Enter.
    6. -
    7. Select the appropriate driver for your operating system and click on Download.
    8. -
    9. Save the file to your computer and run it as an administrator.
    10. -
    11. Follow the instructions on the screen to complete the installation process.
    12. -
    13. Restart your computer and enjoy your improved sound quality.
    14. -
    -

    The Creative Es1373 Sound Card Driver is a reliable and efficient driver that can make your sound card work better. It is free to download and use, and it can improve your audio performance and compatibility. If you have a Creative Es1373 Sound Card or a similar device, you should definitely try this driver today.
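A side note on the full duplex operation listed among the features above: once a driver is installed, you can sanity-check simultaneous playback and recording with a short script. This is only an illustrative sketch; the numpy and sounddevice libraries are assumptions for illustration and are not mentioned anywhere in the driver's own instructions.

```python
# Hedged illustration: play a 1-second test tone and record from the default
# input at the same time, which only works if the driver is full duplex.
import numpy as np
import sounddevice as sd

fs = 48000  # the ES1373 supports playback/recording at up to 48 kHz
t = np.linspace(0, 1.0, fs, endpoint=False)
tone = 0.2 * np.sin(2 * np.pi * 440 * t).astype(np.float32)

recording = sd.playrec(tone, samplerate=fs, channels=1)  # simultaneous play + record
sd.wait()  # block until both streams finish
print("peak recorded amplitude:", float(np.abs(recording).max()))
```

If the recording comes back essentially silent, the card or driver is probably not operating in full duplex mode.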

    -

    Creative Es1373 Sound Card Driver Reviews

    -

    Many users have tried the Creative Es1373 Sound Card Driver and have shared their opinions and experiences online. Here are some of the reviews that we have found:

    -
    -

    "I have been using this driver for a long time and I am very satisfied with it. It works perfectly with my Windows 7 system and my Creative ES1373 sound card. The sound quality is amazing and I can play all kinds of games and applications without any problems. I highly recommend this driver to anyone who has a Creative ES1373 sound card or a similar device."

    -- John, from Treexy.com -
    -
    -

    "This driver is very easy to download and install. It only took me a few minutes to get it up and running. It improved my sound performance and compatibility a lot. I can now enjoy my music and movies with better sound effects and clarity. This driver is a must-have for anyone who wants to upgrade their sound system."

    -- Lisa, from Downloadsource.net -
    -
    -

    "This driver is a lifesaver for me. I have an old computer with a Creative ES1373 sound card and I was having trouble finding a driver that works with Windows 7. I tried many other drivers but none of them worked. Then I found this driver and it solved all my problems. It supports all the features and standards that I need and it makes my sound card work like new."

    -- Mike, from Oemdrivers.com -
    -
    -

    "This driver is awesome. It has everything that I want from a sound card driver. It has wavetable sound sets, synthesizer, digital effects, digital audio, A/D D/A codec, frequency response, full duplex operation, S/PDIF and I²S output, supported standards, and drivers for various operating systems. It is very versatile and powerful. It is the best driver for my Creative ES1373 sound card."

    -- Anna, from Cnet.com -
    -

    As you can see, most users are very happy with the Creative Es1373 Sound Card Driver and have given it positive feedback and ratings. They have praised its features, benefits, quality, performance, compatibility, ease of use, and reliability. They have also reported that it works well with Windows 7 and other operating systems.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/rgres/Seg2Sat/frontend/.svelte-kit/runtime/app/paths.js b/spaces/rgres/Seg2Sat/frontend/.svelte-kit/runtime/app/paths.js deleted file mode 100644 index 7ed4fff2aaf37d3fd8855a01d34cf8f89288eed7..0000000000000000000000000000000000000000 --- a/spaces/rgres/Seg2Sat/frontend/.svelte-kit/runtime/app/paths.js +++ /dev/null @@ -1 +0,0 @@ -export { assets, base } from '../paths.js'; diff --git a/spaces/rgres/Seg2Sat/frontend/build/_app/immutable/chunks/paths-d3bcbd10.js b/spaces/rgres/Seg2Sat/frontend/build/_app/immutable/chunks/paths-d3bcbd10.js deleted file mode 100644 index 0911f4acc85b0ac94f242f9bc4ab5effba088495..0000000000000000000000000000000000000000 --- a/spaces/rgres/Seg2Sat/frontend/build/_app/immutable/chunks/paths-d3bcbd10.js +++ /dev/null @@ -1 +0,0 @@ -import{E as f,s as p}from"./index-bcf2726a.js";const n=[];function _(t,b=f){let o;const i=new Set;function r(e){if(p(t,e)&&(t=e,o)){const c=!n.length;for(const s of i)s[1](),n.push(s,t);if(c){for(let s=0;s{i.delete(s),i.size===0&&(o(),o=null)}}return{set:r,update:a,subscribe:l}}let u="",d="";function g(t){u=t.base,d=t.assets||u}export{d as a,u as b,g as s,_ as w}; diff --git a/spaces/rishikesh/365DataScience/app.py b/spaces/rishikesh/365DataScience/app.py deleted file mode 100644 index c26fb29fc09d8c2a5a88bb4a1a674229f46d149b..0000000000000000000000000000000000000000 --- a/spaces/rishikesh/365DataScience/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import streamlit as st -import pickle -import numpy as np - -# Stores loaded model in cache so that we don't need to reload model repeatedly for each input -@st.cache(allow_output_mutation=True) -def load_model(): - model = pickle.load(open('random_forest_model.sav', 'rb')) - country_dict = pickle.load(open('country_dict.pickle', 'rb')) - scaler = pickle.load(open('standardScaler.pickle', 'rb')) - return model, scaler, country_dict - -def featurize(time, country, scaler, country_dict): - arr = np.array([country_dict[country], time]).reshape(1,-1) - vector = scaler.transform(arr) - return vector - -def main(): - model, scaler, country_dict = load_model() - st.title("\'365 data science\' : free-to-paid user conversion predictor") - list_of_countries = list(country_dict.keys()) - st.write("\'365 data science\' is a ed-tech company that creates data science courses comprising of video lectures and \ - exercises in the form of quizzes and exams. Some of the courses offered are free and majority of the other courses \ - need the user to buy paid subscription. Students mostly register on this platform as 'free-tier user' as the registration is free of cost. \ - They enroll for free courses and then if they like the content of the platform, they proceed to buy paid-subscription \ - which offers lot of perks as compared to free tier. Paid student get access to large library of courses along with certificates, \ - quizzes and exams.") - st.write("This application predicts how likely the student is to buy the paid subscription based on the number of minutes \ - he spent engaging with the free course content and the country he comes from. In the exploratory data analysis done, it was found that \ - total time spent by user and nationality of user are two major and most significant factor for determining how likely the user is \ - to buy the course. 
Typical range for total time watched for students is mostly 0.1 to 100 minutes")
-
-    with st.form("my_form"):
-        total_time = st.number_input('Time spent on platform watching tutorials')
-        student_country = st.selectbox('country', list_of_countries)
-        st.write('Total time spent : ', total_time)
-        st.write('Student country :', student_country)
-
-        # Every form must have a submit button.
-        submitted = st.form_submit_button("Submit")
-
-        if submitted:
-            vector = featurize(total_time, student_country, scaler, country_dict)
-            prediction = model.predict(vector)[0]
-            predicted_proba = model.predict_proba(vector)
-            if prediction == 0:
-                st.write('Student is ', str(round(predicted_proba[0][0]*100)), '% likely to NOT buy the paid subscription')
-            else:
-                st.write('Student is ', str(round(predicted_proba[0][1]*100)), '% likely to buy the paid subscription')
-
-if __name__ == '__main__':
-    main()
\ No newline at end of file
diff --git a/spaces/riyueyiming/gpt/modules/pdf_func.py b/spaces/riyueyiming/gpt/modules/pdf_func.py
deleted file mode 100644
index 0aba6b7b891fc527c79b887256b0cbaa81ae5b3d..0000000000000000000000000000000000000000
--- a/spaces/riyueyiming/gpt/modules/pdf_func.py
+++ /dev/null
@@ -1,180 +0,0 @@
-from types import SimpleNamespace
-import pdfplumber
-import logging
-from llama_index import Document
-
-def prepare_table_config(crop_page):
-    """Prepare the table-detection boundaries; `page` must be the original (uncropped) page.
-
-    From https://github.com/jsvine/pdfplumber/issues/242
-    """
-    page = crop_page.root_page  # root/parent
-    cs = page.curves + page.edges
-    def curves_to_edges():
-        """See https://github.com/jsvine/pdfplumber/issues/127"""
-        edges = []
-        for c in cs:
-            edges += pdfplumber.utils.rect_to_edges(c)
-        return edges
-    edges = curves_to_edges()
-    return {
-        "vertical_strategy": "explicit",
-        "horizontal_strategy": "explicit",
-        "explicit_vertical_lines": edges,
-        "explicit_horizontal_lines": edges,
-        "intersection_y_tolerance": 10,
-    }
-
-def get_text_outside_table(crop_page):
-    ts = prepare_table_config(crop_page)
-    if len(ts["explicit_vertical_lines"]) == 0 or len(ts["explicit_horizontal_lines"]) == 0:
-        return crop_page
-
-    ### Get the bounding boxes of the tables on the page.
-    bboxes = [table.bbox for table in crop_page.root_page.find_tables(table_settings=ts)]
-    def not_within_bboxes(obj):
-        """Check if the object is in any of the table's bbox."""
-        def obj_in_bbox(_bbox):
-            """See https://github.com/jsvine/pdfplumber/blob/stable/pdfplumber/table.py#L404"""
-            v_mid = (obj["top"] + obj["bottom"]) / 2
-            h_mid = (obj["x0"] + obj["x1"]) / 2
-            x0, top, x1, bottom = _bbox
-            return (h_mid >= x0) and (h_mid < x1) and (v_mid >= top) and (v_mid < bottom)
-        return not any(obj_in_bbox(__bbox) for __bbox in bboxes)
-
-    return crop_page.filter(not_within_bboxes)
-# Use LaTeX for formulas: wrap inline formulas in $ and display formulas in $$
-
-extract_words = lambda page: page.extract_words(keep_blank_chars=True, y_tolerance=0, x_tolerance=1, extra_attrs=["fontname", "size", "object_type"])
-# dict_keys(['text', 'x0', 'x1', 'top', 'doctop', 'bottom', 'upright', 'direction', 'fontname', 'size'])
-
-def get_title_with_cropped_page(first_page):
-    title = []  # collect the title words
-    x0, top, x1, bottom = first_page.bbox  # get the page bounding box
-
-    for word in extract_words(first_page):
-        word = SimpleNamespace(**word)
-
-        if word.size >= 14:
-            title.append(word.text)
-            title_bottom = word.bottom
-        elif word.text == "Abstract":  # locate the page abstract
-            top = word.top
-
-    user_info = [i["text"] for i in extract_words(first_page.within_bbox((x0, title_bottom, x1, top)))]
-    # crop away the upper part; within_bbox: fully included, crop: partially included
-    return title, user_info, first_page.within_bbox((x0, top, x1, bottom))
-
-def get_column_cropped_pages(pages, two_column=True):
-    new_pages = []
-    for page in pages:
-        if two_column:
-            left = page.within_bbox((0, 0, page.width/2, page.height), relative=True)
-            right = page.within_bbox((page.width/2, 0, page.width, page.height), relative=True)
-            new_pages.append(left)
-            new_pages.append(right)
-        else:
-            new_pages.append(page)
-
-    return new_pages
-
-def parse_pdf(filename, two_column=True):
-    level = logging.getLogger().level
-    if level == logging.getLevelName("DEBUG"):
-        logging.getLogger().setLevel("INFO")
-
-    with pdfplumber.open(filename) as pdf:
-        title, user_info, first_page = get_title_with_cropped_page(pdf.pages[0])
-        new_pages = get_column_cropped_pages([first_page] + pdf.pages[1:], two_column)
-
-        chapters = []
-        # tuple (chapter_name, [pageid] (start,stop), chapter_text)
-        create_chapter = lambda page_start, name_top, name_bottom: SimpleNamespace(
-            name=[],
-            name_top=name_top,
-            name_bottom=name_bottom,
-            record_chapter_name=True,
-
-            page_start=page_start,
-            page_stop=None,
-
-            text=[],
-        )
-        cur_chapter = None
-
-        # iterate over the PDF document page by page
-        for idx, page in enumerate(new_pages):
-            page = get_text_outside_table(page)
-
-            # iterate over the page text line by line
-            for word in extract_words(page):
-                word = SimpleNamespace(**word)
-
-                # check whether the line is printed in a heading-sized font; if so, treat it as the start of a new chapter
-                if word.size >= 11:  # a chapter name appears
-                    if cur_chapter is None:
-                        cur_chapter = create_chapter(page.page_number, word.top, word.bottom)
-                    elif not cur_chapter.record_chapter_name or (word.bottom != cur_chapter.name_bottom and word.top != cur_chapter.name_top):
-                        # stop extending the current chapter name
-                        cur_chapter.page_stop = page.page_number  # stop id
-                        chapters.append(cur_chapter)
-                        # reset the current chapter info
-                        cur_chapter = create_chapter(page.page_number, word.top, word.bottom)
-
-                    # print(word.size, word.top, word.bottom, word.text)
-                    cur_chapter.name.append(word.text)
-                else:
-                    cur_chapter.record_chapter_name = False  # the chapter name has ended
-                    cur_chapter.text.append(word.text)
-        else:
-            # handle the last chapter
-            cur_chapter.page_stop = page.page_number  # stop id
-            chapters.append(cur_chapter)
-
-        for i in chapters:
-
logging.info(f"section: {i.name} pages:{i.page_start, i.page_stop} word-count:{len(i.text)}") - logging.debug(" ".join(i.text)) - - title = " ".join(title) - user_info = " ".join(user_info) - text = f"Article Title: {title}, Information:{user_info}\n" - for idx, chapter in enumerate(chapters): - chapter.name = " ".join(chapter.name) - text += f"The {idx}th Chapter {chapter.name}: " + " ".join(chapter.text) + "\n" - - logging.getLogger().setLevel(level) - return Document(text=text, extra_info={"title": title}) - -BASE_POINTS = """ -1. Who are the authors? -2. What is the process of the proposed method? -3. What is the performance of the proposed method? Please note down its performance metrics. -4. What are the baseline models and their performances? Please note down these baseline methods. -5. What dataset did this paper use? -""" - -READING_PROMPT = """ -You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n -Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n -When you are reading, You need to focus on these key points:{} -""" - -READING_PROMT_V2 = """ -You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n -Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n -When you are reading, You need to focus on these key points:{}, - -And You need to generate a brief but informative title for this part. -Your return format: -- title: '...' -- summary: '...' -""" - -SUMMARY_PROMPT = "You are a researcher helper bot. Now you need to read the summaries of a research paper." - - -if __name__ == '__main__': - # Test code - z = parse_pdf("./build/test.pdf") - print(z["user_info"]) - print(z["title"]) \ No newline at end of file diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/samplers/distributed_sampler.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/samplers/distributed_sampler.py deleted file mode 100644 index 1bc8b7c3602cee288e4ab8d661819c0a2490d4ee..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/samplers/distributed_sampler.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -from torch.utils.data import DistributedSampler as _DistributedSampler - -from mmdet.core.utils import sync_random_seed -from mmdet.utils import get_device - - -class DistributedSampler(_DistributedSampler): - - def __init__(self, - dataset, - num_replicas=None, - rank=None, - shuffle=True, - seed=0): - super().__init__( - dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - - # In distributed sampling, different ranks should sample - # non-overlapped data in the dataset. Therefore, this function - # is used to make sure that each rank shuffles the data indices - # in the same order based on the same seed. Then different ranks - # could use different indices to select non-overlapped data from the - # same data list. - device = get_device() - self.seed = sync_random_seed(seed, device) - - def __iter__(self): - # deterministically shuffle based on epoch - if self.shuffle: - g = torch.Generator() - # When :attr:`shuffle=True`, this ensures all replicas - # use a different random ordering for each epoch. - # Otherwise, the next iteration of this sampler will - # yield the same ordering. 
- g.manual_seed(self.epoch + self.seed) - indices = torch.randperm(len(self.dataset), generator=g).tolist() - else: - indices = torch.arange(len(self.dataset)).tolist() - - # add extra samples to make it evenly divisible - # in case that indices is shorter than half of total_size - indices = (indices * - math.ceil(self.total_size / len(indices)))[:self.total_size] - assert len(indices) == self.total_size - - # subsample - indices = indices[self.rank:self.total_size:self.num_replicas] - assert len(indices) == self.num_samples - - return iter(indices) diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/necks/yolo_neck.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/necks/yolo_neck.py deleted file mode 100644 index c8eeb5737cdf871fa415c1a207956ea7753c304e..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/necks/yolo_neck.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2019 Western Digital Corporation or its affiliates. - -import torch -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from ..builder import NECKS - - -class DetectionBlock(BaseModule): - """Detection block in YOLO neck. - - Let out_channels = n, the DetectionBlock contains: - Six ConvLayers, 1 Conv2D Layer and 1 YoloLayer. - The first 6 ConvLayers are formed the following way: - 1x1xn, 3x3x2n, 1x1xn, 3x3x2n, 1x1xn, 3x3x2n. - The Conv2D layer is 1x1x255. - Some block will have branch after the fifth ConvLayer. - The input channel is arbitrary (in_channels) - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - init_cfg=None): - super(DetectionBlock, self).__init__(init_cfg) - double_out_channels = out_channels * 2 - - # shortcut - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - self.conv1 = ConvModule(in_channels, out_channels, 1, **cfg) - self.conv2 = ConvModule( - out_channels, double_out_channels, 3, padding=1, **cfg) - self.conv3 = ConvModule(double_out_channels, out_channels, 1, **cfg) - self.conv4 = ConvModule( - out_channels, double_out_channels, 3, padding=1, **cfg) - self.conv5 = ConvModule(double_out_channels, out_channels, 1, **cfg) - - def forward(self, x): - tmp = self.conv1(x) - tmp = self.conv2(tmp) - tmp = self.conv3(tmp) - tmp = self.conv4(tmp) - out = self.conv5(tmp) - return out - - -@NECKS.register_module() -class YOLOV3Neck(BaseModule): - """The neck of YOLOV3. - - It can be treated as a simplified version of FPN. It - will take the result from Darknet backbone and do some upsampling and - concatenation. It will finally output the detection result. - - Note: - The input feats should be from top to bottom. - i.e., from high-lvl to low-lvl - But YOLOV3Neck will process them in reversed order. 
- i.e., from bottom (high-lvl) to top (low-lvl) - - Args: - num_scales (int): The number of scales / stages. - in_channels (List[int]): The number of input channels per scale. - out_channels (List[int]): The number of output channels per scale. - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None. - norm_cfg (dict, optional): Dictionary to construct and config norm - layer. Default: dict(type='BN', requires_grad=True) - act_cfg (dict, optional): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - num_scales, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - init_cfg=None): - super(YOLOV3Neck, self).__init__(init_cfg) - assert (num_scales == len(in_channels) == len(out_channels)) - self.num_scales = num_scales - self.in_channels = in_channels - self.out_channels = out_channels - - # shortcut - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - # To support arbitrary scales, the code looks awful, but it works. - # Better solution is welcomed. - self.detect1 = DetectionBlock(in_channels[0], out_channels[0], **cfg) - for i in range(1, self.num_scales): - in_c, out_c = self.in_channels[i], self.out_channels[i] - inter_c = out_channels[i - 1] - self.add_module(f'conv{i}', ConvModule(inter_c, out_c, 1, **cfg)) - # in_c + out_c : High-lvl feats will be cat with low-lvl feats - self.add_module(f'detect{i+1}', - DetectionBlock(in_c + out_c, out_c, **cfg)) - - def forward(self, feats): - assert len(feats) == self.num_scales - - # processed from bottom (high-lvl) to top (low-lvl) - outs = [] - out = self.detect1(feats[-1]) - outs.append(out) - - for i, x in enumerate(reversed(feats[:-1])): - conv = getattr(self, f'conv{i+1}') - tmp = conv(out) - - # Cat with low-lvl feats - tmp = F.interpolate(tmp, scale_factor=2) - tmp = torch.cat((tmp, x), 1) - - detect = getattr(self, f'detect{i+2}') - out = detect(tmp) - outs.append(out) - - return tuple(outs) diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/setup.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/setup.py deleted file mode 100644 index 535d90eff44ba6f68a8388e08dea8cb6487650ed..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/setup.py +++ /dev/null @@ -1,220 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) OpenMMLab. All rights reserved. 
-import os -import os.path as osp -import platform -import shutil -import sys -import warnings -from setuptools import find_packages, setup - -import torch -from torch.utils.cpp_extension import (BuildExtension, CppExtension, - CUDAExtension) - - -def readme(): - with open('README.md', encoding='utf-8') as f: - content = f.read() - return content - - -version_file = 'mmdet/version.py' - - -def get_version(): - with open(version_file, 'r') as f: - exec(compile(f.read(), version_file, 'exec')) - return locals()['__version__'] - - -def make_cuda_ext(name, module, sources, sources_cuda=[]): - - define_macros = [] - extra_compile_args = {'cxx': []} - - if torch.cuda.is_available() or os.getenv('FORCE_CUDA', '0') == '1': - define_macros += [('WITH_CUDA', None)] - extension = CUDAExtension - extra_compile_args['nvcc'] = [ - '-D__CUDA_NO_HALF_OPERATORS__', - '-D__CUDA_NO_HALF_CONVERSIONS__', - '-D__CUDA_NO_HALF2_OPERATORS__', - ] - sources += sources_cuda - else: - print(f'Compiling {name} without CUDA') - extension = CppExtension - - return extension( - name=f'{module}.{name}', - sources=[os.path.join(*module.split('.'), p) for p in sources], - define_macros=define_macros, - extra_compile_args=extra_compile_args) - - -def parse_requirements(fname='requirements.txt', with_version=True): - """Parse the package dependencies listed in a requirements file but strips - specific versioning information. - - Args: - fname (str): path to requirements file - with_version (bool, default=False): if True include version specs - - Returns: - List[str]: list of requirements items - - CommandLine: - python -c "import setup; print(setup.parse_requirements())" - """ - import re - import sys - from os.path import exists - require_fpath = fname - - def parse_line(line): - """Parse information from a line in a requirements text file.""" - if line.startswith('-r '): - # Allow specifying requirements in other files - target = line.split(' ')[1] - for info in parse_require_file(target): - yield info - else: - info = {'line': line} - if line.startswith('-e '): - info['package'] = line.split('#egg=')[1] - elif '@git+' in line: - info['package'] = line - else: - # Remove versioning from the package - pat = '(' + '|'.join(['>=', '==', '>']) + ')' - parts = re.split(pat, line, maxsplit=1) - parts = [p.strip() for p in parts] - - info['package'] = parts[0] - if len(parts) > 1: - op, rest = parts[1:] - if ';' in rest: - # Handle platform specific dependencies - # http://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-platform-specific-dependencies - version, platform_deps = map(str.strip, - rest.split(';')) - info['platform_deps'] = platform_deps - else: - version = rest # NOQA - info['version'] = (op, version) - yield info - - def parse_require_file(fpath): - with open(fpath, 'r') as f: - for line in f.readlines(): - line = line.strip() - if line and not line.startswith('#'): - for info in parse_line(line): - yield info - - def gen_packages_items(): - if exists(require_fpath): - for info in parse_require_file(require_fpath): - parts = [info['package']] - if with_version and 'version' in info: - parts.extend(info['version']) - if not sys.version.startswith('3.4'): - # apparently package_deps are broken in 3.4 - platform_deps = info.get('platform_deps') - if platform_deps is not None: - parts.append(';' + platform_deps) - item = ''.join(parts) - yield item - - packages = list(gen_packages_items()) - return packages - - -def add_mim_extension(): - """Add extra files that are required to support MIM into the package. 
- - These files will be added by creating a symlink to the originals if the - package is installed in `editable` mode (e.g. pip install -e .), or by - copying from the originals otherwise. - """ - - # parse installment mode - if 'develop' in sys.argv: - # installed by `pip install -e .` - if platform.system() == 'Windows': - # set `copy` mode here since symlink fails on Windows. - mode = 'copy' - else: - mode = 'symlink' - elif 'sdist' in sys.argv or 'bdist_wheel' in sys.argv: - # installed by `pip install .` - # or create source distribution by `python setup.py sdist` - mode = 'copy' - else: - return - - filenames = ['tools', 'configs', 'demo', 'model-index.yml'] - repo_path = osp.dirname(__file__) - mim_path = osp.join(repo_path, 'mmdet', '.mim') - os.makedirs(mim_path, exist_ok=True) - - for filename in filenames: - if osp.exists(filename): - src_path = osp.join(repo_path, filename) - tar_path = osp.join(mim_path, filename) - - if osp.isfile(tar_path) or osp.islink(tar_path): - os.remove(tar_path) - elif osp.isdir(tar_path): - shutil.rmtree(tar_path) - - if mode == 'symlink': - src_relpath = osp.relpath(src_path, osp.dirname(tar_path)) - os.symlink(src_relpath, tar_path) - elif mode == 'copy': - if osp.isfile(src_path): - shutil.copyfile(src_path, tar_path) - elif osp.isdir(src_path): - shutil.copytree(src_path, tar_path) - else: - warnings.warn(f'Cannot copy file {src_path}.') - else: - raise ValueError(f'Invalid mode {mode}') - - -if __name__ == '__main__': - add_mim_extension() - setup( - name='mmdet', - version=get_version(), - description='OpenMMLab Detection Toolbox and Benchmark', - long_description=readme(), - long_description_content_type='text/markdown', - author='MMDetection Contributors', - author_email='openmmlab@gmail.com', - keywords='computer vision, object detection', - url='https://github.com/open-mmlab/mmdetection', - packages=find_packages(exclude=('configs', 'tools', 'demo')), - include_package_data=True, - classifiers=[ - 'Development Status :: 5 - Production/Stable', - 'License :: OSI Approved :: Apache Software License', - 'Operating System :: OS Independent', - 'Programming Language :: Python :: 3', - 'Programming Language :: Python :: 3.7', - 'Programming Language :: Python :: 3.8', - 'Programming Language :: Python :: 3.9', - ], - license='Apache License 2.0', - install_requires=parse_requirements('requirements/runtime.txt'), - extras_require={ - 'all': parse_requirements('requirements.txt'), - 'tests': parse_requirements('requirements/tests.txt'), - 'build': parse_requirements('requirements/build.txt'), - 'optional': parse_requirements('requirements/optional.txt'), - 'mim': parse_requirements('requirements/mminstall.txt'), - }, - ext_modules=[], - cmdclass={'build_ext': BuildExtension}, - zip_safe=False) diff --git a/spaces/rorallitri/biomedical-language-models/logs/Angels Sing Christmas In Ireland Rar ((TOP)).md b/spaces/rorallitri/biomedical-language-models/logs/Angels Sing Christmas In Ireland Rar ((TOP)).md deleted file mode 100644 index a3f85c092fd4598f13fb46831cadcd20e1cc342c..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Angels Sing Christmas In Ireland Rar ((TOP)).md +++ /dev/null @@ -1,109 +0,0 @@ - -

    Angels Sing Christmas in Ireland: A Review of Libera's Holiday Album

    - -

    If you are looking for a unique and beautiful way to celebrate the festive season, you might want to check out Angels Sing Christmas in Ireland, a stunning album by Libera, the world-renowned boys choir from London.

    - -

    Libera is known for their angelic voices and ethereal sound, blending classical and contemporary music with a touch of mystery. They have performed all over the world, from the Vatican to the White House, and have collaborated with artists such as Enya, Brian Wilson, and Elton John.

    -

    angels sing christmas in ireland rar


    Download Zip ✏ ✏ ✏ https://tinurll.com/2uzm7i



    - -

In 2013, they recorded this Christmas album in the historic St Patrick's Cathedral in Armagh, Ireland. The album features 15 tracks of traditional and modern carols, sung in English, Latin, and Irish Gaelic. Some of the highlights include:

    - -
      -
    • Joy to the World: A joyful and upbeat rendition of the classic hymn, with a Celtic twist.
    • -
    • God Rest You Merry Gentlemen: A haunting and harmonious version of the old English carol, with a solo by Daniel Fontannaz.
    • -
    • The Wexford Carol: A beautiful and ancient Irish lullaby, sung by Isaac London.
    • -
    • In Dulci Jubilo: A lively and festive medley of two medieval songs, one in Latin and one in German.
    • -
    • Angels We Have Heard On High: A soaring and majestic arrangement of the French carol, with a solo by Cassius O'Connell-White.
    • -
    • Sanctus: A sublime and serene piece composed by Libera's musical director Robert Prizeman, based on the Latin Mass.
    • -
    • Danny Boy: A moving and emotional interpretation of the Irish folk song, sung by Tom Cully.
    • -
    • Carol of the Bells: A thrilling and dynamic adaptation of the Ukrainian carol, with a solo by Alex Gula.
    • -
    • O Holy Night: A powerful and passionate performance of the French song, sung by Joshua Madine.
    • -
    • Still, Still, Still: A gentle and soothing lullaby from Austria, sung by Benedict Philipp.
    • -
    • Gaudete: A cheerful and catchy song from the 16th century, sung in Latin.
    • -
    • Away in a Manger: A simple and sweet rendition of the popular carol, sung by Ralph Skan.
    • -
    • Have Yourself a Merry Little Christmas: A warm and cozy version of the American song, sung by Ciaran Bradbury-Hickey.
    • -
    • Silent Night: A tender and touching finale of the most famous Christmas song, sung in English and Irish Gaelic.
    • -
    • What Child Is This: A bonus track available only on the DVD version of the album, sung by Kavana Crossley.
    • -
    - -

    The album also comes with a DVD that features behind-the-scenes footage of the recording process, interviews with the choir members and staff, and a documentary about Libera's visit to Ireland. You can watch a preview of the DVD here:

    - - - -

    If you want to download Angels Sing Christmas in Ireland by Libera (RAR file), you can do so from this link:

    - -https://bitbucket.org/psrsoft/tempo2/issues/259/angels-sing-christmas-in-ireland-rar - -
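Note that the download is a RAR archive, so you will need an extraction tool before you can play anything. As a hedged illustration only, here is one way to unpack such an archive from Python; the rarfile library (which relies on a system unrar backend) and the archive filename are assumptions, since the page does not say what the file is called.

```python
# Hedged sketch: list and extract a downloaded .rar archive with rarfile.
# The filename below is a placeholder, not the actual download's name.
import rarfile

archive = rarfile.RarFile("angels-sing-christmas-in-ireland.rar")
print(archive.namelist())    # list the files (e.g. audio tracks) inside
archive.extractall("album")  # unpack everything into ./album
```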

    Alternatively, you can stream or purchase the album from LINE MUSIC here:

    - -https://music.line.me/webapp/album/mb00000000000f5611 - -

    Angels Sing Christmas in Ireland is a wonderful gift for yourself or your loved ones this holiday season. It will fill your home with joy and peace, and transport you to a magical place where angels sing.

    -

    Angels Sing Christmas in Ireland: A Collection of NFTs

    - -

    If you are a fan of Libera and their album Angels Sing Christmas in Ireland, you might be interested in owning a piece of their history as a non-fungible token (NFT). NFTs are digital assets that represent unique and scarce items, such as art, music, or collectibles. They are stored on a blockchain, which ensures their authenticity and ownership.
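To make the blockchain point concrete: because such a collection lives on a public chain, anyone can independently query who currently owns a given token. The sketch below is illustrative only; it uses the web3.py library, and the RPC endpoint, contract address, and token id are placeholders, since the article does not give the collection's actual contract details.

```python
# Hedged sketch: query the on-chain owner of an ERC-721 token with web3.py.
# The endpoint, contract address, and token id are all placeholders.
from web3 import Web3

ERC721_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "address"}],
}]

w3 = Web3(Web3.HTTPProvider("https://rpc.example.com"))  # placeholder RPC node
nft = w3.eth.contract(
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000000"),
    abi=ERC721_ABI,
)
print("Owner of token 1:", nft.functions.ownerOf(1).call())
```

The `ownerOf` call is part of the standard ERC-721 interface, which is what makes this kind of independent verification of ownership possible.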

    -

    - -

    On OpenSea, the largest marketplace for NFTs, you can find a collection of Angels Sing Christmas in Ireland NFTs that feature images and audio clips from the album. You can bid on them using cryptocurrency, such as Ethereum or Dai. Some of the NFTs available include:

    - -
      -
    • Joy to the World: A festive image of Libera singing in front of a Christmas tree, with a 30-second audio clip of the song.
    • -
    • God Rest You Merry Gentlemen: A mysterious image of Libera wearing cloaks and holding candles, with a 30-second audio clip of the song.
    • -
    • The Wexford Carol: A beautiful image of Libera standing in front of a stained glass window, with a 30-second audio clip of the song.
    • -
    • In Dulci Jubilo: A lively image of Libera playing instruments and dancing, with a 30-second audio clip of the song.
    • -
    • Angels We Have Heard On High: A majestic image of Libera surrounded by angels, with a 30-second audio clip of the song.
    • -
    - -

    You can view and purchase the Angels Sing Christmas in Ireland NFTs from this link:

    - -https://opensea.io/collection/angels-sing-christmas-in-ireland-rar - -

    Angels Sing Christmas in Ireland: A Tribute to The Waterboys

    - -

Another reason why Angels Sing Christmas in Ireland is a special album is that it pays tribute to one of Libera's musical influences: The Waterboys. The Waterboys are a British band formed in 1983 by the Scottish singer-songwriter Mike Scott, and they remain a much-loved live and recording act in the UK and Ireland. They are known for their eclectic and adventurous style, mixing rock, folk, Celtic, gospel, and soul music.

    - -

    Libera has expressed their admiration for The Waterboys on several occasions, and even performed with them at a concert in London in 2010. On their album Angels Sing Christmas in Ireland, Libera covers two songs by The Waterboys: Danny Boy and The Whole of the Moon. Both songs are sung with passion and grace by Libera, showing their respect and appreciation for The Waterboys.

    - -

    You can watch Libera's performance of The Whole of the Moon with The Waterboys here:

    - - -

    Angels Sing Christmas in Ireland: A Behind-the-Scenes Look

    - -

    If you are curious about how Libera made their album Angels Sing Christmas in Ireland, you might want to watch their DVD that comes with the album. The DVD features a behind-the-scenes look at the recording process, as well as interviews with the choir members and staff. You can see how Libera prepared for their trip to Ireland, how they rehearsed and recorded in the historic St Patrick's Cathedral in Armagh, and how they enjoyed their time in the Emerald Isle.

    - -

    The DVD also includes a documentary about Libera's visit to Ireland, where they explored the culture and history of the country. You can see them visiting places such as Belfast, Dublin, Newgrange, Giant's Causeway, and Glendalough. You can also see them meeting and interacting with local people, such as schoolchildren, musicians, and clergy. You can witness their curiosity and enthusiasm, as well as their respect and gratitude.

    - -

    The DVD gives you a glimpse into the lives and personalities of Libera's members, who are not only talented singers, but also ordinary boys who like to have fun and learn new things. You can see them playing games, telling jokes, making friends, and sharing their thoughts and feelings. You can also see them expressing their faith and devotion, as well as their love and appreciation for each other.

    - -

    Angels Sing Christmas in Ireland: A Testimonial from a Fan

    - -

    If you are still not convinced that Angels Sing Christmas in Ireland is a great album to listen to or to give as a gift, you might want to read this testimonial from a fan who bought it and loved it. Here is what she wrote:

    - -
    -

    I have been a fan of Libera for many years, and I have all their albums. But I have to say that Angels Sing Christmas in Ireland is my favorite one so far. It is such a beautiful and uplifting album that captures the spirit of Christmas perfectly.

    - -

    I love how Libera sings both traditional and modern carols, as well as songs from different countries and languages. They have such amazing voices that sound like angels, but also like children who are happy and innocent. They sing with so much emotion and expression that they touch my heart every time.

    - -

    I also love how Libera pays tribute to The Waterboys, one of my favorite bands. I think it is very cool that they cover their songs Danny Boy and The Whole of the Moon, which are both very meaningful and powerful songs. I think Libera does a great job of interpreting them in their own way.

    - -

    I also love how Libera includes a DVD with their album, which shows how they made it and what they did in Ireland. It is very interesting and entertaining to watch them recording, traveling, and having fun. It makes me feel like I am part of their journey and their family.

    - -

    I highly recommend Angels Sing Christmas in Ireland to anyone who loves music, especially Christmas music. It is an album that will make you smile, cry, sing, and pray. It is an album that will make you feel the true meaning of Christmas.

    -
    -

    Angels Sing Christmas in Ireland: A Must-Have Album for the Holidays

    - -

    In conclusion, Angels Sing Christmas in Ireland is a must-have album for anyone who loves music and Christmas. It is an album that showcases the talent and charm of Libera, the world-renowned boys choir from London. It is an album that features a variety of songs that are both familiar and new, both cheerful and solemn, both sung and instrumental. It is an album that pays tribute to one of Libera's musical influences, The Waterboys, as well as to the culture and history of Ireland. It is an album that comes with a DVD that gives a behind-the-scenes look at the recording process and Libera's visit to Ireland. It is an album that has received rave reviews from fans and critics alike.

    - -

    Angels Sing Christmas in Ireland is an album that will fill your home with joy and peace, and transport you to a magical place where angels sing. It is an album that you will want to listen to over and over again, and share with your loved ones. It is an album that you will cherish for years to come.


    If you want to download Angels Sing Christmas in Ireland by Libera (RAR file), you can do so from this link:

https://bitbucket.org/psrsoft/tempo2/issues/259/angels-sing-christmas-in-ireland-rar

    Alternatively, you can stream or purchase the album from LINE MUSIC here:

https://music.line.me/webapp/album/mb00000000000f5611

    Don't miss this opportunity to get Angels Sing Christmas in Ireland, the amazing album by Libera. You won't regret it!
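For readers who prefer scripting the download, here is a minimal, hypothetical Python sketch; it is not part of the original page. It assumes the Bitbucket link above is still live, and note that the URL points at an issue-tracker page, so it may serve HTML rather than the .rar archive itself; unpacking a RAR file would additionally need a RAR-capable tool such as the rarfile package.

import urllib.request

# Hypothetical: fetch the archive linked above with the standard library.
# The URL is an issue-tracker page, so verify what it actually returns
# before treating the downloaded file as a RAR archive.
url = "https://bitbucket.org/psrsoft/tempo2/issues/259/angels-sing-christmas-in-ireland-rar"
urllib.request.urlretrieve(url, "angels-sing-christmas-in-ireland.rar")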

    \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Hd Tamil Rustom 1080p TOP.md b/spaces/rorallitri/biomedical-language-models/logs/Hd Tamil Rustom 1080p TOP.md deleted file mode 100644 index 6e005d0364bc63093fad711730e622338b324a30..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Hd Tamil Rustom 1080p TOP.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Hd Tamil Rustom 1080p


    Download Zip ✺✺✺ https://tinurll.com/2uzlK4




    diff --git a/spaces/runa91/barc_gradio/src/smal_pytorch/smal_model/smal_basics.py b/spaces/runa91/barc_gradio/src/smal_pytorch/smal_model/smal_basics.py deleted file mode 100644 index bd2e71ce5c5bd1d087041aed79a376eae749ad24..0000000000000000000000000000000000000000 --- a/spaces/runa91/barc_gradio/src/smal_pytorch/smal_model/smal_basics.py +++ /dev/null @@ -1,82 +0,0 @@ -''' -Adjusted version of other PyTorch implementation of the SMAL/SMPL model -see: - 1.) https://github.com/silviazuffi/smalst/blob/master/smal_model/smal_torch.py - 2.) https://github.com/benjiebob/SMALify/blob/master/smal_model/smal_torch.py -''' - -import os -import pickle as pkl -import json -import numpy as np -import pickle as pkl - -import os -import sys -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..')) -from configs.SMAL_configs import SMAL_DATA_DIR, SYMMETRY_INDS_FILE - -# model_dir = 'smalst/smpl_models/' -# FILE_DIR = os.path.dirname(os.path.realpath(__file__)) -model_dir = SMAL_DATA_DIR # os.path.join(FILE_DIR, '..', 'smpl_models/') -symmetry_inds_file = SYMMETRY_INDS_FILE # os.path.join(FILE_DIR, '..', 'smpl_models/symmetry_inds.json') -with open(symmetry_inds_file) as f: - symmetry_inds_dict = json.load(f) -LEFT_INDS = np.asarray(symmetry_inds_dict['left_inds']) -RIGHT_INDS = np.asarray(symmetry_inds_dict['right_inds']) -CENTER_INDS = np.asarray(symmetry_inds_dict['center_inds']) - - -def get_symmetry_indices(): - sym_dict = {'left': LEFT_INDS, - 'right': RIGHT_INDS, - 'center': CENTER_INDS} - return sym_dict - -def verify_symmetry(shapedirs, center_inds=CENTER_INDS, left_inds=LEFT_INDS, right_inds=RIGHT_INDS): - # shapedirs: (3889, 3, n_sh) - assert (shapedirs[center_inds, 1, :] == 0.0).all() - assert (shapedirs[right_inds, 1, :] == -shapedirs[left_inds, 1, :]).all() - return - -def from_shapedirs_to_shapedirs_half(shapedirs, center_inds=CENTER_INDS, left_inds=LEFT_INDS, right_inds=RIGHT_INDS, verify=False): - # shapedirs: (3889, 3, n_sh) - # shapedirs_half: (2012, 3, n_sh) - selected_inds = np.concatenate((center_inds, left_inds), axis=0) - shapedirs_half = shapedirs[selected_inds, :, :] - if verify: - verify_symmetry(shapedirs) - else: - shapedirs_half[:center_inds.shape[0], 1, :] = 0.0 - return shapedirs_half - -def from_shapedirs_half_to_shapedirs(shapedirs_half, center_inds=CENTER_INDS, left_inds=LEFT_INDS, right_inds=RIGHT_INDS): - # shapedirs_half: (2012, 3, n_sh) - # shapedirs: (3889, 3, n_sh) - shapedirs = np.zeros((center_inds.shape[0] + 2*left_inds.shape[0], 3, shapedirs_half.shape[2])) - shapedirs[center_inds, :, :] = shapedirs_half[:center_inds.shape[0], :, :] - shapedirs[left_inds, :, :] = shapedirs_half[center_inds.shape[0]:, :, :] - shapedirs[right_inds, :, :] = shapedirs_half[center_inds.shape[0]:, :, :] - shapedirs[right_inds, 1, :] = - shapedirs_half[center_inds.shape[0]:, 1, :] - return shapedirs - -def align_smal_template_to_symmetry_axis(v, subtract_mean=True): - # These are the indexes of the points that are on the symmetry axis - I = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 37, 55, 119, 120, 163, 209, 210, 211, 213, 216, 227, 326, 395, 452, 578, 910, 959, 964, 975, 976, 977, 1172, 1175, 1176, 1178, 1194, 1243, 1739, 1796, 1797, 1798, 1799, 1800, 1801, 1802, 1803, 1804, 1805, 1806, 1807, 1808, 1809, 1810, 1811, 1812, 1813, 1814, 1815, 1816, 1817, 1818, 1819, 1820, 1821, 1822, 1823, 1824, 1825, 1826, 1827, 1828, 1829, 1830, 1831, 1832, 1833, 1834, 1835, 1836, 1837, 1838, 1839, 1840, 
1842, 1843, 1844, 1845, 1846, 1847, 1848, 1849, 1850, 1851, 1852, 1853, 1854, 1855, 1856, 1857, 1858, 1859, 1860, 1861, 1862, 1863, 1870, 1919, 1960, 1961, 1965, 1967, 2003] - if subtract_mean: - v = v - np.mean(v) - y = np.mean(v[I,1]) - v[:,1] = v[:,1] - y - v[I,1] = 0 - left_inds = LEFT_INDS - right_inds = RIGHT_INDS - center_inds = CENTER_INDS - v[right_inds, :] = np.array([1,-1,1])*v[left_inds, :] - try: - assert(len(left_inds) == len(right_inds)) - except: - import pdb; pdb.set_trace() - return v, left_inds, right_inds, center_inds - - - diff --git a/spaces/sanchanhart/Warehouse_Apparel_Detection/metadata/dataset_utils/dataset_downloader.py b/spaces/sanchanhart/Warehouse_Apparel_Detection/metadata/dataset_utils/dataset_downloader.py deleted file mode 100644 index 8731de62d79a4ef6ae2c181316806ffb1105379d..0000000000000000000000000000000000000000 --- a/spaces/sanchanhart/Warehouse_Apparel_Detection/metadata/dataset_utils/dataset_downloader.py +++ /dev/null @@ -1,21 +0,0 @@ -import gdown -from zipfile import ZipFile - -# Original Link :- https://drive.google.com/file/d/14QoqoZQLYnUmZgYblmFZ2u2eHo9yv2aA/view?usp=sharing -url = 'https://drive.google.com/uc?id=14QoqoZQLYnUmZgYblmFZ2u2eHo9yv2aA' -output = 'Fire_smoke.zip' - -gdown.download(url, output, quiet=False) - -# specifying the zip file name -file_name = output - -# opening the zip file in READ mode -with ZipFile(file_name, 'r') as zip: - # printing all the contents of the zip file - zip.printdir() - - # extracting all the files - print('Extracting all the files now...') - zip.extractall() - print('Done!') diff --git a/spaces/scedlatioru/img-to-music/example/EASEUS Data Recovery Wizard Professional Edition V5.6.5 With Key Free Download ((NEW)).md b/spaces/scedlatioru/img-to-music/example/EASEUS Data Recovery Wizard Professional Edition V5.6.5 With Key Free Download ((NEW)).md deleted file mode 100644 index 9e743a2919434235e61a91da297515f0a06994b1..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/EASEUS Data Recovery Wizard Professional Edition V5.6.5 With Key Free Download ((NEW)).md +++ /dev/null @@ -1,6 +0,0 @@ -

    EASEUS Data Recovery Wizard Professional Edition v5.6.5 with Key free download


    Download File 🗹 https://gohhs.com/2uEySs



... for Adobe Photoshop 2.0.0, Bentley gINT CONNECT Edition Professional Plus ..., Download Manager 2.0.5.0, EaseUS Data Recovery Wizard Technician 13.2 ..., Image Viewer 6.5, FastStone Capture 9.1, FB Reader 0.12.10, File Maker Pro 17 ...

    diff --git a/spaces/scedlatioru/img-to-music/example/MorphVOX Pro 4.4.85 Crack With Patch 2020 [Latest] Free !!HOT!!.md b/spaces/scedlatioru/img-to-music/example/MorphVOX Pro 4.4.85 Crack With Patch 2020 [Latest] Free !!HOT!!.md deleted file mode 100644 index 78e37a24e3b062d16b2643208ab86d281ab548ef..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/MorphVOX Pro 4.4.85 Crack With Patch 2020 [Latest] Free !!HOT!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

    MorphVOX Pro 4.4.85 Crack With Patch 2020 [Latest] Free


    DOWNLOAD »»» https://gohhs.com/2uEzlE



Posted on August 22, 2020. MorphVOX Pro 4.4.85 Crack Plus License Key Free Download [LATEST]. MorphVOX Pro Crack Plus License Key is a program that ...

    diff --git a/spaces/scedlatioru/img-to-music/example/__HOT__ Xforce Keygen AutoCAD P ID 2010 64 Bit Windows 8.md b/spaces/scedlatioru/img-to-music/example/__HOT__ Xforce Keygen AutoCAD P ID 2010 64 Bit Windows 8.md deleted file mode 100644 index 6d9b6266bbf105373aa64969556d8fe998675f4a..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/__HOT__ Xforce Keygen AutoCAD P ID 2010 64 Bit Windows 8.md +++ /dev/null @@ -1,30 +0,0 @@ -

    Xforce Keygen AutoCAD P ID 2010 64 Bit Windows 8


    DOWNLOAD >>> https://gohhs.com/2uEyVd



... dwg, and 2013-R2, the message "This application has encountered a problem and needs to close." is displayed, and the window never closes.

If you have more than one Autodesk product installed on your computer, clicking the Ribbon button on the product that is not working may close the non-functioning Autodesk product.

Workaround: Close the non-functioning Autodesk product, and then restart the Autodesk product that is not working.

Install May 2017 CU5 on Windows 7, Windows 8, or Windows 8.1 systems with AMD graphics

Cause: An issue existed on Windows systems with AMD graphics where, if the Autodesk product installation was interrupted and CU5 was then installed on the same system, the installation could fail.

Install May 2017 CU5 on Windows 10 systems

When CU5 was installed on a Windows 10 system, the application could crash on startup.

Install May 2017 CU5 on macOS systems

When CU5 was installed on a macOS system, the application could crash on startup.

Install May 2017 CU5 on Linux systems

An issue existed in CU5 Linux versions when there was an update to a shared library (libclw or libcw) that was included in the Linux installer, where the installer could fail to install. This issue has since been resolved in the latest CU5 Linux versions.

In the CU5 Linux installer ...

    diff --git a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/data/__init__.py b/spaces/sczhou/CodeFormer/CodeFormer/basicsr/data/__init__.py deleted file mode 100644 index c6adb4bb6a926af7a46aaec4794eee95fda02a33..0000000000000000000000000000000000000000 --- a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/data/__init__.py +++ /dev/null @@ -1,100 +0,0 @@ -import importlib -import numpy as np -import random -import torch -import torch.utils.data -from copy import deepcopy -from functools import partial -from os import path as osp - -from basicsr.data.prefetch_dataloader import PrefetchDataLoader -from basicsr.utils import get_root_logger, scandir -from basicsr.utils.dist_util import get_dist_info -from basicsr.utils.registry import DATASET_REGISTRY - -__all__ = ['build_dataset', 'build_dataloader'] - -# automatically scan and import dataset modules for registry -# scan all the files under the data folder with '_dataset' in file names -data_folder = osp.dirname(osp.abspath(__file__)) -dataset_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(data_folder) if v.endswith('_dataset.py')] -# import all the dataset modules -_dataset_modules = [importlib.import_module(f'basicsr.data.{file_name}') for file_name in dataset_filenames] - - -def build_dataset(dataset_opt): - """Build dataset from options. - - Args: - dataset_opt (dict): Configuration for dataset. It must constain: - name (str): Dataset name. - type (str): Dataset type. - """ - dataset_opt = deepcopy(dataset_opt) - dataset = DATASET_REGISTRY.get(dataset_opt['type'])(dataset_opt) - logger = get_root_logger() - logger.info(f'Dataset [{dataset.__class__.__name__}] - {dataset_opt["name"]} ' 'is built.') - return dataset - - -def build_dataloader(dataset, dataset_opt, num_gpu=1, dist=False, sampler=None, seed=None): - """Build dataloader. - - Args: - dataset (torch.utils.data.Dataset): Dataset. - dataset_opt (dict): Dataset options. It contains the following keys: - phase (str): 'train' or 'val'. - num_worker_per_gpu (int): Number of workers for each GPU. - batch_size_per_gpu (int): Training batch size for each GPU. - num_gpu (int): Number of GPUs. Used only in the train phase. - Default: 1. - dist (bool): Whether in distributed training. Used only in the train - phase. Default: False. - sampler (torch.utils.data.sampler): Data sampler. Default: None. - seed (int | None): Seed. Default: None - """ - phase = dataset_opt['phase'] - rank, _ = get_dist_info() - if phase == 'train': - if dist: # distributed training - batch_size = dataset_opt['batch_size_per_gpu'] - num_workers = dataset_opt['num_worker_per_gpu'] - else: # non-distributed training - multiplier = 1 if num_gpu == 0 else num_gpu - batch_size = dataset_opt['batch_size_per_gpu'] * multiplier - num_workers = dataset_opt['num_worker_per_gpu'] * multiplier - dataloader_args = dict( - dataset=dataset, - batch_size=batch_size, - shuffle=False, - num_workers=num_workers, - sampler=sampler, - drop_last=True) - if sampler is None: - dataloader_args['shuffle'] = True - dataloader_args['worker_init_fn'] = partial( - worker_init_fn, num_workers=num_workers, rank=rank, seed=seed) if seed is not None else None - elif phase in ['val', 'test']: # validation - dataloader_args = dict(dataset=dataset, batch_size=1, shuffle=False, num_workers=0) - else: - raise ValueError(f'Wrong dataset phase: {phase}. 
' "Supported ones are 'train', 'val' and 'test'.") - - dataloader_args['pin_memory'] = dataset_opt.get('pin_memory', False) - - prefetch_mode = dataset_opt.get('prefetch_mode') - if prefetch_mode == 'cpu': # CPUPrefetcher - num_prefetch_queue = dataset_opt.get('num_prefetch_queue', 1) - logger = get_root_logger() - logger.info(f'Use {prefetch_mode} prefetch dataloader: ' f'num_prefetch_queue = {num_prefetch_queue}') - return PrefetchDataLoader(num_prefetch_queue=num_prefetch_queue, **dataloader_args) - else: - # prefetch_mode=None: Normal dataloader - # prefetch_mode='cuda': dataloader for CUDAPrefetcher - return torch.utils.data.DataLoader(**dataloader_args) - - -def worker_init_fn(worker_id, num_workers, rank, seed): - # Set the worker seed to num_workers * rank + worker_id + seed - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) diff --git a/spaces/segments-tobias/conex/espnet/lm/lm_utils.py b/spaces/segments-tobias/conex/espnet/lm/lm_utils.py deleted file mode 100644 index bb43e5de0e7ac83cf889d5e536d51c853058013e..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/lm/lm_utils.py +++ /dev/null @@ -1,293 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright 2017 Johns Hopkins University (Shinji Watanabe) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -# This code is ported from the following implementation written in Torch. -# https://github.com/chainer/chainer/blob/master/examples/ptb/train_ptb_custom_loop.py - -import chainer -import h5py -import logging -import numpy as np -import os -import random -import six -from tqdm import tqdm - -from chainer.training import extension - - -def load_dataset(path, label_dict, outdir=None): - """Load and save HDF5 that contains a dataset and stats for LM - - Args: - path (str): The path of an input text dataset file - label_dict (dict[str, int]): - dictionary that maps token label string to its ID number - outdir (str): The path of an output dir - - Returns: - tuple[list[np.ndarray], int, int]: Tuple of - token IDs in np.int32 converted by `read_tokens` - the number of tokens by `count_tokens`, - and the number of OOVs by `count_tokens` - """ - if outdir is not None: - os.makedirs(outdir, exist_ok=True) - filename = outdir + "/" + os.path.basename(path) + ".h5" - if os.path.exists(filename): - logging.info(f"loading binary dataset: {filename}") - f = h5py.File(filename, "r") - return f["data"][:], f["n_tokens"][()], f["n_oovs"][()] - else: - logging.info("skip dump/load HDF5 because the output dir is not specified") - logging.info(f"reading text dataset: {path}") - ret = read_tokens(path, label_dict) - n_tokens, n_oovs = count_tokens(ret, label_dict[""]) - if outdir is not None: - logging.info(f"saving binary dataset: {filename}") - with h5py.File(filename, "w") as f: - # http://docs.h5py.org/en/stable/special.html#arbitrary-vlen-data - data = f.create_dataset( - "data", (len(ret),), dtype=h5py.special_dtype(vlen=np.int32) - ) - data[:] = ret - f["n_tokens"] = n_tokens - f["n_oovs"] = n_oovs - return ret, n_tokens, n_oovs - - -def read_tokens(filename, label_dict): - """Read tokens as a sequence of sentences - - :param str filename : The name of the input file - :param dict label_dict : dictionary that maps token label string to its ID number - :return list of ID sequences - :rtype list - """ - - data = [] - unk = label_dict[""] - for ln in tqdm(open(filename, "r", encoding="utf-8")): - data.append( - np.array( - [label_dict.get(label, unk) for label 
in ln.split()], dtype=np.int32 - ) - ) - return data - - -def count_tokens(data, unk_id=None): - """Count tokens and oovs in token ID sequences. - - Args: - data (list[np.ndarray]): list of token ID sequences - unk_id (int): ID of unknown token - - Returns: - tuple: tuple of number of token occurrences and number of oov tokens - - """ - - n_tokens = 0 - n_oovs = 0 - for sentence in data: - n_tokens += len(sentence) - if unk_id is not None: - n_oovs += np.count_nonzero(sentence == unk_id) - return n_tokens, n_oovs - - -def compute_perplexity(result): - """Computes and add the perplexity to the LogReport - - :param dict result: The current observations - """ - # Routine to rewrite the result dictionary of LogReport to add perplexity values - result["perplexity"] = np.exp(result["main/loss"] / result["main/count"]) - if "validation/main/loss" in result: - result["val_perplexity"] = np.exp(result["validation/main/loss"]) - - -class ParallelSentenceIterator(chainer.dataset.Iterator): - """Dataset iterator to create a batch of sentences. - - This iterator returns a pair of sentences, where one token is shifted - between the sentences like ' w1 w2 w3' and 'w1 w2 w3 ' - Sentence batches are made in order of longer sentences, and then - randomly shuffled. - """ - - def __init__( - self, dataset, batch_size, max_length=0, sos=0, eos=0, repeat=True, shuffle=True - ): - self.dataset = dataset - self.batch_size = batch_size # batch size - # Number of completed sweeps over the dataset. In this case, it is - # incremented if every word is visited at least once after the last - # increment. - self.epoch = 0 - # True if the epoch is incremented at the last iteration. - self.is_new_epoch = False - self.repeat = repeat - length = len(dataset) - self.batch_indices = [] - # make mini-batches - if batch_size > 1: - indices = sorted(range(len(dataset)), key=lambda i: -len(dataset[i])) - bs = 0 - while bs < length: - be = min(bs + batch_size, length) - # batch size is automatically reduced if the sentence length - # is larger than max_length - if max_length > 0: - sent_length = len(dataset[indices[bs]]) - be = min( - be, bs + max(batch_size // (sent_length // max_length + 1), 1) - ) - self.batch_indices.append(np.array(indices[bs:be])) - bs = be - if shuffle: - # shuffle batches - random.shuffle(self.batch_indices) - else: - self.batch_indices = [np.array([i]) for i in six.moves.range(length)] - - # NOTE: this is not a count of parameter updates. It is just a count of - # calls of ``__next__``. - self.iteration = 0 - self.sos = sos - self.eos = eos - # use -1 instead of None internally - self._previous_epoch_detail = -1.0 - - def __next__(self): - # This iterator returns a list representing a mini-batch. Each item - # indicates a sentence pair like ' w1 w2 w3' and 'w1 w2 w3 ' - # represented by token IDs. - n_batches = len(self.batch_indices) - if not self.repeat and self.iteration >= n_batches: - # If not self.repeat, this iterator stops at the end of the first - # epoch (i.e., when all words are visited once). 
- raise StopIteration - - batch = [] - for idx in self.batch_indices[self.iteration % n_batches]: - batch.append( - ( - np.append([self.sos], self.dataset[idx]), - np.append(self.dataset[idx], [self.eos]), - ) - ) - - self._previous_epoch_detail = self.epoch_detail - self.iteration += 1 - - epoch = self.iteration // n_batches - self.is_new_epoch = self.epoch < epoch - if self.is_new_epoch: - self.epoch = epoch - - return batch - - def start_shuffle(self): - random.shuffle(self.batch_indices) - - @property - def epoch_detail(self): - # Floating point version of epoch. - return self.iteration / len(self.batch_indices) - - @property - def previous_epoch_detail(self): - if self._previous_epoch_detail < 0: - return None - return self._previous_epoch_detail - - def serialize(self, serializer): - # It is important to serialize the state to be recovered on resume. - self.iteration = serializer("iteration", self.iteration) - self.epoch = serializer("epoch", self.epoch) - try: - self._previous_epoch_detail = serializer( - "previous_epoch_detail", self._previous_epoch_detail - ) - except KeyError: - # guess previous_epoch_detail for older version - self._previous_epoch_detail = self.epoch + ( - self.current_position - 1 - ) / len(self.batch_indices) - if self.epoch_detail > 0: - self._previous_epoch_detail = max(self._previous_epoch_detail, 0.0) - else: - self._previous_epoch_detail = -1.0 - - -class MakeSymlinkToBestModel(extension.Extension): - """Extension that makes a symbolic link to the best model - - :param str key: Key of value - :param str prefix: Prefix of model files and link target - :param str suffix: Suffix of link target - """ - - def __init__(self, key, prefix="model", suffix="best"): - super(MakeSymlinkToBestModel, self).__init__() - self.best_model = -1 - self.min_loss = 0.0 - self.key = key - self.prefix = prefix - self.suffix = suffix - - def __call__(self, trainer): - observation = trainer.observation - if self.key in observation: - loss = observation[self.key] - if self.best_model == -1 or loss < self.min_loss: - self.min_loss = loss - self.best_model = trainer.updater.epoch - src = "%s.%d" % (self.prefix, self.best_model) - dest = os.path.join(trainer.out, "%s.%s" % (self.prefix, self.suffix)) - if os.path.lexists(dest): - os.remove(dest) - os.symlink(src, dest) - logging.info("best model is " + src) - - def serialize(self, serializer): - if isinstance(serializer, chainer.serializer.Serializer): - serializer("_best_model", self.best_model) - serializer("_min_loss", self.min_loss) - serializer("_key", self.key) - serializer("_prefix", self.prefix) - serializer("_suffix", self.suffix) - else: - self.best_model = serializer("_best_model", -1) - self.min_loss = serializer("_min_loss", 0.0) - self.key = serializer("_key", "") - self.prefix = serializer("_prefix", "model") - self.suffix = serializer("_suffix", "best") - - -# TODO(Hori): currently it only works with character-word level LM. -# need to consider any types of subwords-to-word mapping. 
-def make_lexical_tree(word_dict, subword_dict, word_unk): - """Make a lexical tree to compute word-level probabilities""" - # node [dict(subword_id -> node), word_id, word_set[start-1, end]] - root = [{}, -1, None] - for w, wid in word_dict.items(): - if wid > 0 and wid != word_unk: # skip and - if True in [c not in subword_dict for c in w]: # skip unknown subword - continue - succ = root[0] # get successors from root node - for i, c in enumerate(w): - cid = subword_dict[c] - if cid not in succ: # if next node does not exist, make a new node - succ[cid] = [{}, -1, (wid - 1, wid)] - else: - prev = succ[cid][2] - succ[cid][2] = (min(prev[0], wid - 1), max(prev[1], wid)) - if i == len(w) - 1: # if word end, set word id - succ[cid][1] = wid - succ = succ[cid][0] # move to the child successors - return root diff --git a/spaces/serdaryildiz/TRCaptionNet/Model/clip/__init__.py b/spaces/serdaryildiz/TRCaptionNet/Model/clip/__init__.py deleted file mode 100644 index dcc5619538c0f7c782508bdbd9587259d805e0d9..0000000000000000000000000000000000000000 --- a/spaces/serdaryildiz/TRCaptionNet/Model/clip/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .clip import * diff --git a/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/langinfo.py b/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/langinfo.py deleted file mode 100644 index efb7e372feeb67d7106eb5c443de2e14053fd204..0000000000000000000000000000000000000000 --- a/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/langinfo.py +++ /dev/null @@ -1,488 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# - -## language codes -LC_TA='ta' - -SCRIPT_RANGES={ - 'pa':[0x0a00,0x0a7f] , - 'gu':[0x0a80,0x0aff] , - 'or':[0x0b00,0x0b7f] , - 'ta':[0x0b80,0x0bff] , - 'te':[0x0c00,0x0c7f] , - 'kn':[0x0c80,0x0cff] , - 'ml':[0x0d00,0x0d7f] , - 'si':[0x0d80,0x0dff] , - 'hi':[0x0900,0x097f] , - 'mr':[0x0900,0x097f] , - 'kK':[0x0900,0x097f] , - 'sa':[0x0900,0x097f] , - 'ne':[0x0900,0x097f] , - 'sd':[0x0900,0x097f] , - 'bn':[0x0980,0x09ff] , - 'as':[0x0980,0x09ff] , - } - -DRAVIDIAN_LANGUAGES=['ta', 'te', 'kn', 'ml',] -IE_LANGUAGES=['hi', 'mr', 'kK', 'sa', 'ne', 'sd', 'bn', 'as', 'pa', 'gu', 'or', 'si', ] -DANDA_DELIM_LANGUAGES=['as','bn','hi','ne','or','pa','sa','sd'] - -URDU_RANGES=[ - [0x0600,0x06ff], - [0x0750,0x077f], - [0xfb50,0xfdff], - [0xfe70,0xfeff], - ] - -COORDINATED_RANGE_START_INCLUSIVE=0 -COORDINATED_RANGE_END_INCLUSIVE=0x6f - -NUMERIC_OFFSET_START=0x66 -NUMERIC_OFFSET_END=0x6f - -HALANTA_OFFSET=0x4d -AUM_OFFSET=0x50 -NUKTA_OFFSET=0x3c - -RUPEE_SIGN=0x20b9 - -DANDA=0x0964 -DOUBLE_DANDA=0x0965 - -#TODO: add missing fricatives and approximants -VELAR_RANGE=[0x15,0x19] -PALATAL_RANGE=[0x1a,0x1e] -RETROFLEX_RANGE=[0x1f,0x23] -DENTAL_RANGE=[0x24,0x29] -LABIAL_RANGE=[0x2a,0x2e] - -# verify -VOICED_LIST=[0x17,0x18,0x1c,0x1d,0x21,0x22,0x26,0x27,0x2c,0x2d] -UNVOICED_LIST=[0x15,0x16,0x1a,0x1b,0x1f,0x20,0x24,0x25,0x2a,0x2b] #TODO: add sibilants/sonorants -ASPIRATED_LIST=[0x16,0x18,0x1b,0x1d,0x20,0x22,0x25,0x27,0x2b,0x2d] -UNASPIRATED_LIST=[0x15,0x17,0x1a,0x1c,0x1f,0x21,0x24,0x26,0x2a,0x2c] -NASAL_LIST=[0x19,0x1e,0x23,0x28,0x29,0x2d] -FRICATIVE_LIST=[0x36,0x37,0x38] -APPROXIMANT_LIST=[0x2f,0x30,0x31,0x32,0x33,0x34,0x35] - -#TODO: ha has to be properly categorized - -def is_danda_delim(lang): - """ - Returns True if danda/double danda is a possible delimiter for the language - """ - return 
lang in DANDA_DELIM_LANGUAGES - -def get_offset(c,lang): - """ - Applicable to Brahmi derived Indic scripts - """ - return ord(c)-SCRIPT_RANGES[lang][0] - -def offset_to_char(c,lang): - """ - Applicable to Brahmi derived Indic scripts - """ - return chr(c+SCRIPT_RANGES[lang][0]) - -def in_coordinated_range(c_offset): - """ - Applicable to Brahmi derived Indic scripts - """ - return (c_offset>=COORDINATED_RANGE_START_INCLUSIVE and c_offset<=COORDINATED_RANGE_END_INCLUSIVE) - -def is_indiclang_char(c,lang): - """ - Applicable to Brahmi derived Indic scripts - """ - o=get_offset(c,lang) - return (o>=0 and o<=0x7f) or ord(c)==DANDA or ord(c)==DOUBLE_DANDA - -# def is_vowel(c,lang): -# """ -# Is the character a vowel -# """ -# o=get_offset(c,lang) -# return (o>=0x04 and o<=0x14) - -# def is_vowel_sign(c,lang): -# """ -# Is the character a vowel sign (maatraa) -# """ -# o=get_offset(c,lang) -# return (o>=0x3e and o<=0x4c) - -# def is_halanta(c,lang): -# """ -# Is the character the halanta character -# """ -# o=get_offset(c,lang) -# return (o==HALANTA_OFFSET) - -# def is_nukta(c,lang): -# """ -# Is the character the halanta character -# """ -# o=get_offset(c,lang) -# return (o==NUKTA_OFFSET) - -# def is_aum(c,lang): -# """ -# Is the character a vowel sign (maatraa) -# """ -# o=get_offset(c,lang) -# return (o==AUM_OFFSET) - -# def is_consonant(c,lang): -# """ -# Is the character a consonant -# """ -# o=get_offset(c,lang) -# return (o>=0x15 and o<=0x39) - -# def is_velar(c,lang): -# """ -# Is the character a velar -# """ -# o=get_offset(c,lang) -# return (o>=VELAR_RANGE[0] and o<=VELAR_RANGE[1]) - -# def is_palatal(c,lang): -# """ -# Is the character a palatal -# """ -# o=get_offset(c,lang) -# return (o>=PALATAL_RANGE[0] and o<=PALATAL_RANGE[1]) - -# def is_retroflex(c,lang): -# """ -# Is the character a retroflex -# """ -# o=get_offset(c,lang) -# return (o>=RETROFLEX_RANGE[0] and o<=RETROFLEX_RANGE[1]) - -# def is_dental(c,lang): -# """ -# Is the character a dental -# """ -# o=get_offset(c,lang) -# return (o>=DENTAL_RANGE[0] and o<=DENTAL_RANGE[1]) - -# def is_labial(c,lang): -# """ -# Is the character a labial -# """ -# o=get_offset(c,lang) -# return (o>=LABIAL_RANGE[0] and o<=LABIAL_RANGE[1]) - -# def is_voiced(c,lang): -# """ -# Is the character a voiced consonant -# """ -# o=get_offset(c,lang) -# return o in VOICED_LIST - -# def is_unvoiced(c,lang): -# """ -# Is the character a unvoiced consonant -# """ -# o=get_offset(c,lang) -# return o in UNVOICED_LIST - -# def is_aspirated(c,lang): -# """ -# Is the character a aspirated consonant -# """ -# o=get_offset(c,lang) -# return o in ASPIRATED_LIST - -# def is_unaspirated(c,lang): -# """ -# Is the character a unaspirated consonant -# """ -# o=get_offset(c,lang) -# return o in UNASPIRATED_LIST - -# def is_nasal(c,lang): -# """ -# Is the character a nasal consonant -# """ -# o=get_offset(c,lang) -# return o in NASAL_LIST - -# def is_fricative(c,lang): -# """ -# Is the character a fricative consonant -# """ -# o=get_offset(c,lang) -# return o in FRICATIVE_LIST - -# def is_approximant(c,lang): -# """ -# Is the character an approximant consonant -# """ -# o=get_offset(c,lang) -# return o in APPROXIMANT_LIST - -# def is_number(c,lang): -# """ -# Is the character a number -# """ -# o=get_offset(c,lang) -# return (o>=0x66 and o<=0x6f) - - -def is_vowel(c,lang): - """ - Is the character a vowel - """ - o=get_offset(c,lang) - return (o>=0x04 and o<=0x14) - -def is_vowel_sign(c,lang): - """ - Is the character a vowel sign (maatraa) - """ - 
o=get_offset(c,lang) - return (o>=0x3e and o<=0x4c) - -def is_halanta(c,lang): - """ - Is the character the halanta character - """ - o=get_offset(c,lang) - return (o==HALANTA_OFFSET) - -def is_nukta(c,lang): - """ - Is the character the halanta character - """ - o=get_offset(c,lang) - return (o==NUKTA_OFFSET) - -def is_aum(c,lang): - """ - Is the character a vowel sign (maatraa) - """ - o=get_offset(c,lang) - return (o==AUM_OFFSET) - -def is_consonant(c,lang): - """ - Is the character a consonant - """ - o=get_offset(c,lang) - return (o>=0x15 and o<=0x39) - -def is_velar(c,lang): - """ - Is the character a velar - """ - o=get_offset(c,lang) - return (o>=VELAR_RANGE[0] and o<=VELAR_RANGE[1]) - -def is_palatal(c,lang): - """ - Is the character a palatal - """ - o=get_offset(c,lang) - return (o>=PALATAL_RANGE[0] and o<=PALATAL_RANGE[1]) - -def is_retroflex(c,lang): - """ - Is the character a retroflex - """ - o=get_offset(c,lang) - return (o>=RETROFLEX_RANGE[0] and o<=RETROFLEX_RANGE[1]) - -def is_dental(c,lang): - """ - Is the character a dental - """ - o=get_offset(c,lang) - return (o>=DENTAL_RANGE[0] and o<=DENTAL_RANGE[1]) - -def is_labial(c,lang): - """ - Is the character a labial - """ - o=get_offset(c,lang) - return (o>=LABIAL_RANGE[0] and o<=LABIAL_RANGE[1]) - -def is_voiced(c,lang): - """ - Is the character a voiced consonant - """ - o=get_offset(c,lang) - return o in VOICED_LIST - -def is_unvoiced(c,lang): - """ - Is the character a unvoiced consonant - """ - o=get_offset(c,lang) - return o in UNVOICED_LIST - -def is_aspirated(c,lang): - """ - Is the character a aspirated consonant - """ - o=get_offset(c,lang) - return o in ASPIRATED_LIST - -def is_unaspirated(c,lang): - """ - Is the character a unaspirated consonant - """ - o=get_offset(c,lang) - return o in UNASPIRATED_LIST - -def is_nasal(c,lang): - """ - Is the character a nasal consonant - """ - o=get_offset(c,lang) - return o in NASAL_LIST - -def is_fricative(c,lang): - """ - Is the character a fricative consonant - """ - o=get_offset(c,lang) - return o in FRICATIVE_LIST - -def is_approximant(c,lang): - """ - Is the character an approximant consonant - """ - o=get_offset(c,lang) - return o in APPROXIMANT_LIST - -def is_number(c,lang): - """ - Is the character a number - """ - o=get_offset(c,lang) - return (o>=0x66 and o<=0x6f) - - -################################################## - -def is_vowel_offset(c_offset): - """ - Is the offset a vowel - """ - return (c_offset>=0x04 and c_offset<=0x14) - -def is_vowel_sign_offset(c_offset): - """ - Is the offset a vowel sign (maatraa) - """ - return (c_offset>=0x3e and c_offset<=0x4c) - -def is_halanta_offset(c_offset): - """ - Is the offset the halanta offset - """ - return (c_offset==HALANTA_OFFSET) - -def is_nukta_offset(c_offset): - """ - Is the offset the halanta offset - """ - return (c_offset==NUKTA_OFFSET) - -def is_aum_offset(c_offset): - """ - Is the offset a vowel sign (maatraa) - """ - return (c_offset==AUM_OFFSET) - -def is_consonant_offset(c_offset): - """ - Is the offset a consonant - """ - return (c_offset>=0x15 and c_offset<=0x39) - -def is_velar_offset(c_offset): - """ - Is the offset a velar - """ - return (c_offset>=VELAR_RANGE[0] and c_offset<=VELAR_RANGE[1]) - -def is_palatal_offset(c_offset): - """ - Is the offset a palatal - """ - return (c_offset>=PALATAL_RANGE[0] and c_offset<=PALATAL_RANGE[1]) - -def is_retroflex_offset(c_offset): - """ - Is the offset a retroflex - """ - return (c_offset>=RETROFLEX_RANGE[0] and c_offset<=RETROFLEX_RANGE[1]) - -def 
is_dental_offset(c_offset): - """ - Is the offset a dental - """ - return (c_offset>=DENTAL_RANGE[0] and c_offset<=DENTAL_RANGE[1]) - -def is_labial_offset(c_offset): - """ - Is the offset a labial - """ - return (c_offset>=LABIAL_RANGE[0] and c_offset<=LABIAL_RANGE[1]) - -def is_voiced_offset(c_offset): - """ - Is the offset a voiced consonant - """ - return c_offset in VOICED_LIST - -def is_unvoiced_offset(c_offset): - """ - Is the offset a unvoiced consonant - """ - return c_offset in UNVOICED_LIST - -def is_aspirated_offset(c_offset): - """ - Is the offset a aspirated consonant - """ - return c_offset in ASPIRATED_LIST - -def is_unaspirated_offset(c_offset): - """ - Is the offset a unaspirated consonant - """ - return c_offset in UNASPIRATED_LIST - -def is_nasal_offset(c_offset): - """ - Is the offset a nasal consonant - """ - return c_offset in NASAL_LIST - -def is_fricative_offset(c_offset): - """ - Is the offset a fricative consonant - """ - return c_offset in FRICATIVE_LIST - -def is_approximant_offset(c_offset): - """ - Is the offset an approximant consonant - """ - return c_offset in APPROXIMANT_LIST - -def is_number_offset(c_offset): - """ - Is the offset a number - """ - return (c_offset>=0x66 and c_offset<=0x6f) diff --git a/spaces/shamikbose89/title-generator-from-abstract/app.py b/spaces/shamikbose89/title-generator-from-abstract/app.py deleted file mode 100644 index 1c41ca8e9747c840175b5b66e22d774026cf61aa..0000000000000000000000000000000000000000 --- a/spaces/shamikbose89/title-generator-from-abstract/app.py +++ /dev/null @@ -1,8 +0,0 @@ -import gradio as gr -from gradio.mix import Parallel - -distilbart_model = "huggingface/sshleifer/distilbart-cnn-12-6" -model_name = "huggingface/sshleifer/distill-pegasus-xsum-16-4" -base_model = gr.Interface.load(distilbart_model) -my_model = gr.Interface.load(model_name, title = "Finetuned model output", inputs = "text", outputs="text") -Parallel(base_model, my_model) \ No newline at end of file diff --git a/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/controlnet_annotator/midas/midas/blocks.py b/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/controlnet_annotator/midas/midas/blocks.py deleted file mode 100644 index 2145d18fa98060a618536d9a64fe6589e9be4f78..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/controlnet_annotator/midas/midas/blocks.py +++ /dev/null @@ -1,342 +0,0 @@ -import torch -import torch.nn as nn - -from .vit import ( - _make_pretrained_vitb_rn50_384, - _make_pretrained_vitl16_384, - _make_pretrained_vitb16_384, - forward_vit, -) - -def _make_encoder(backbone, features, use_pretrained, groups=1, expand=False, exportable=True, hooks=None, use_vit_only=False, use_readout="ignore",): - if backbone == "vitl16_384": - pretrained = _make_pretrained_vitl16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [256, 512, 1024, 1024], features, groups=groups, expand=expand - ) # ViT-L/16 - 85.0% Top1 (backbone) - elif backbone == "vitb_rn50_384": - pretrained = _make_pretrained_vitb_rn50_384( - use_pretrained, - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) - scratch = _make_scratch( - [256, 512, 768, 768], features, groups=groups, expand=expand - ) # ViT-H/16 - 85.0% Top1 (backbone) - elif backbone == "vitb16_384": - pretrained = _make_pretrained_vitb16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [96, 192, 384, 768], features, groups=groups, 
expand=expand - ) # ViT-B/16 - 84.6% Top1 (backbone) - elif backbone == "resnext101_wsl": - pretrained = _make_pretrained_resnext101_wsl(use_pretrained) - scratch = _make_scratch([256, 512, 1024, 2048], features, groups=groups, expand=expand) # efficientnet_lite3 - elif backbone == "efficientnet_lite3": - pretrained = _make_pretrained_efficientnet_lite3(use_pretrained, exportable=exportable) - scratch = _make_scratch([32, 48, 136, 384], features, groups=groups, expand=expand) # efficientnet_lite3 - else: - print(f"Backbone '{backbone}' not implemented") - assert False - - return pretrained, scratch - - -def _make_scratch(in_shape, out_shape, groups=1, expand=False): - scratch = nn.Module() - - out_shape1 = out_shape - out_shape2 = out_shape - out_shape3 = out_shape - out_shape4 = out_shape - if expand==True: - out_shape1 = out_shape - out_shape2 = out_shape*2 - out_shape3 = out_shape*4 - out_shape4 = out_shape*8 - - scratch.layer1_rn = nn.Conv2d( - in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer2_rn = nn.Conv2d( - in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer3_rn = nn.Conv2d( - in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer4_rn = nn.Conv2d( - in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - - return scratch - - -def _make_pretrained_efficientnet_lite3(use_pretrained, exportable=False): - efficientnet = torch.hub.load( - "rwightman/gen-efficientnet-pytorch", - "tf_efficientnet_lite3", - pretrained=use_pretrained, - exportable=exportable - ) - return _make_efficientnet_backbone(efficientnet) - - -def _make_efficientnet_backbone(effnet): - pretrained = nn.Module() - - pretrained.layer1 = nn.Sequential( - effnet.conv_stem, effnet.bn1, effnet.act1, *effnet.blocks[0:2] - ) - pretrained.layer2 = nn.Sequential(*effnet.blocks[2:3]) - pretrained.layer3 = nn.Sequential(*effnet.blocks[3:5]) - pretrained.layer4 = nn.Sequential(*effnet.blocks[5:9]) - - return pretrained - - -def _make_resnet_backbone(resnet): - pretrained = nn.Module() - pretrained.layer1 = nn.Sequential( - resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1 - ) - - pretrained.layer2 = resnet.layer2 - pretrained.layer3 = resnet.layer3 - pretrained.layer4 = resnet.layer4 - - return pretrained - - -def _make_pretrained_resnext101_wsl(use_pretrained): - resnet = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl") - return _make_resnet_backbone(resnet) - - - -class Interpolate(nn.Module): - """Interpolation module. - """ - - def __init__(self, scale_factor, mode, align_corners=False): - """Init. - - Args: - scale_factor (float): scaling - mode (str): interpolation mode - """ - super(Interpolate, self).__init__() - - self.interp = nn.functional.interpolate - self.scale_factor = scale_factor - self.mode = mode - self.align_corners = align_corners - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: interpolated data - """ - - x = self.interp( - x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners - ) - - return x - - -class ResidualConvUnit(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features): - """Init. 
- - Args: - features (int): number of features - """ - super().__init__() - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - out = self.relu(x) - out = self.conv1(out) - out = self.relu(out) - out = self.conv2(out) - - return out + x - - -class FeatureFusionBlock(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock, self).__init__() - - self.resConfUnit1 = ResidualConvUnit(features) - self.resConfUnit2 = ResidualConvUnit(features) - - def forward(self, *xs): - """Forward pass. - - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - output += self.resConfUnit1(xs[1]) - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=True - ) - - return output - - - - -class ResidualConvUnit_custom(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features, activation, bn): - """Init. - - Args: - features (int): number of features - """ - super().__init__() - - self.bn = bn - - self.groups=1 - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups - ) - - if self.bn==True: - self.bn1 = nn.BatchNorm2d(features) - self.bn2 = nn.BatchNorm2d(features) - - self.activation = activation - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - - out = self.activation(x) - out = self.conv1(out) - if self.bn==True: - out = self.bn1(out) - - out = self.activation(out) - out = self.conv2(out) - if self.bn==True: - out = self.bn2(out) - - if self.groups > 1: - out = self.conv_merge(out) - - return self.skip_add.add(out, x) - - # return out + x - - -class FeatureFusionBlock_custom(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock_custom, self).__init__() - - self.deconv = deconv - self.align_corners = align_corners - - self.groups=1 - - self.expand = expand - out_features = features - if self.expand==True: - out_features = features//2 - - self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1) - - self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn) - self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn) - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, *xs): - """Forward pass. 
- - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - res = self.resConfUnit1(xs[1]) - output = self.skip_add.add(output, res) - # output += res - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=self.align_corners - ) - - output = self.out_conv(output) - - return output - diff --git a/spaces/shj7972/gradiospace/app.py b/spaces/shj7972/gradiospace/app.py deleted file mode 100644 index c457a71c5564407084bc14d1224c9423d1d53561..0000000000000000000000000000000000000000 --- a/spaces/shj7972/gradiospace/app.py +++ /dev/null @@ -1,29 +0,0 @@ -import gradio as gr -#from transformers import pipeline -#pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es") - -#def greet(name, is_morning, temperature): -# salutation = "Good morning" if is_morning else "Good evening" -# greeting = f"{salutation} {name}. It is {temperature} degrees today" -# celsius = (temperature - 32) * 5 / 9 -# return greeting, round(celsius, 2) - -#demo = gr.Interface( -# fn=greet, -# inputs=["text", "checkbox", gr.Slider(0, 100)], -# outputs=["text", "number"], -#) -import numpy as np - -def sepia(input_img): - sepia_filter = np.array([ - [0.393, 0.769, 0.189], - [0.349, 0.686, 0.168], - [0.272, 0.534, 0.131] - ]) - sepia_img = input_img.dot(sepia_filter.T) - sepia_img /= sepia_img.max() - return sepia_img - -demo = gr.Interface(sepia, gr.Image(shape=(200, 200)), "image") -demo.launch() \ No newline at end of file diff --git a/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/quat_affine.py b/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/quat_affine.py deleted file mode 100644 index 9ebcd20f3e2948c905242dc3e09df6684b99ace7..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/quat_affine.py +++ /dev/null @@ -1,459 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Quaternion geometry modules. - -This introduces a representation of coordinate frames that is based around a -‘QuatAffine’ object. This object describes an array of coordinate frames. -It consists of vectors corresponding to the -origin of the frames as well as orientations which are stored in two -ways, as unit quaternions as well as a rotation matrices. -The rotation matrices are derived from the unit quaternions and the two are kept -in sync. -For an explanation of the relation between unit quaternions and rotations see -https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation - -This representation is used in the model for the backbone frames. - -One important thing to note here, is that while we update both representations -the jit compiler is going to ensure that only the parts that are -actually used are executed. 
-""" - - -import functools -from typing import Tuple - -import jax -import jax.numpy as jnp -import numpy as np - -# pylint: disable=bad-whitespace -QUAT_TO_ROT = np.zeros((4, 4, 3, 3), dtype=np.float32) - -QUAT_TO_ROT[0, 0] = [[ 1, 0, 0], [ 0, 1, 0], [ 0, 0, 1]] # rr -QUAT_TO_ROT[1, 1] = [[ 1, 0, 0], [ 0,-1, 0], [ 0, 0,-1]] # ii -QUAT_TO_ROT[2, 2] = [[-1, 0, 0], [ 0, 1, 0], [ 0, 0,-1]] # jj -QUAT_TO_ROT[3, 3] = [[-1, 0, 0], [ 0,-1, 0], [ 0, 0, 1]] # kk - -QUAT_TO_ROT[1, 2] = [[ 0, 2, 0], [ 2, 0, 0], [ 0, 0, 0]] # ij -QUAT_TO_ROT[1, 3] = [[ 0, 0, 2], [ 0, 0, 0], [ 2, 0, 0]] # ik -QUAT_TO_ROT[2, 3] = [[ 0, 0, 0], [ 0, 0, 2], [ 0, 2, 0]] # jk - -QUAT_TO_ROT[0, 1] = [[ 0, 0, 0], [ 0, 0,-2], [ 0, 2, 0]] # ir -QUAT_TO_ROT[0, 2] = [[ 0, 0, 2], [ 0, 0, 0], [-2, 0, 0]] # jr -QUAT_TO_ROT[0, 3] = [[ 0,-2, 0], [ 2, 0, 0], [ 0, 0, 0]] # kr - -QUAT_MULTIPLY = np.zeros((4, 4, 4), dtype=np.float32) -QUAT_MULTIPLY[:, :, 0] = [[ 1, 0, 0, 0], - [ 0,-1, 0, 0], - [ 0, 0,-1, 0], - [ 0, 0, 0,-1]] - -QUAT_MULTIPLY[:, :, 1] = [[ 0, 1, 0, 0], - [ 1, 0, 0, 0], - [ 0, 0, 0, 1], - [ 0, 0,-1, 0]] - -QUAT_MULTIPLY[:, :, 2] = [[ 0, 0, 1, 0], - [ 0, 0, 0,-1], - [ 1, 0, 0, 0], - [ 0, 1, 0, 0]] - -QUAT_MULTIPLY[:, :, 3] = [[ 0, 0, 0, 1], - [ 0, 0, 1, 0], - [ 0,-1, 0, 0], - [ 1, 0, 0, 0]] - -QUAT_MULTIPLY_BY_VEC = QUAT_MULTIPLY[:, 1:, :] -# pylint: enable=bad-whitespace - - -def rot_to_quat(rot, unstack_inputs=False): - """Convert rotation matrix to quaternion. - - Note that this function calls self_adjoint_eig which is extremely expensive on - the GPU. If at all possible, this function should run on the CPU. - - Args: - rot: rotation matrix (see below for format). - unstack_inputs: If true, rotation matrix should be shape (..., 3, 3) - otherwise the rotation matrix should be a list of lists of tensors. - - Returns: - Quaternion as (..., 4) tensor. - """ - if unstack_inputs: - rot = [jnp.moveaxis(x, -1, 0) for x in jnp.moveaxis(rot, -2, 0)] - - [[xx, xy, xz], [yx, yy, yz], [zx, zy, zz]] = rot - - # pylint: disable=bad-whitespace - k = [[ xx + yy + zz, zy - yz, xz - zx, yx - xy,], - [ zy - yz, xx - yy - zz, xy + yx, xz + zx,], - [ xz - zx, xy + yx, yy - xx - zz, yz + zy,], - [ yx - xy, xz + zx, yz + zy, zz - xx - yy,]] - # pylint: enable=bad-whitespace - - k = (1./3.) * jnp.stack([jnp.stack(x, axis=-1) for x in k], - axis=-2) - - # Get eigenvalues in non-decreasing order and associated. - _, qs = jnp.linalg.eigh(k) - return qs[..., -1] - - -def rot_list_to_tensor(rot_list): - """Convert list of lists to rotation tensor.""" - return jnp.stack( - [jnp.stack(rot_list[0], axis=-1), - jnp.stack(rot_list[1], axis=-1), - jnp.stack(rot_list[2], axis=-1)], - axis=-2) - - -def vec_list_to_tensor(vec_list): - """Convert list to vector tensor.""" - return jnp.stack(vec_list, axis=-1) - - -def quat_to_rot(normalized_quat): - """Convert a normalized quaternion to a rotation matrix.""" - rot_tensor = jnp.sum( - np.reshape(QUAT_TO_ROT, (4, 4, 9)) * - normalized_quat[..., :, None, None] * - normalized_quat[..., None, :, None], - axis=(-3, -2)) - rot = jnp.moveaxis(rot_tensor, -1, 0) # Unstack. 
- return [[rot[0], rot[1], rot[2]], - [rot[3], rot[4], rot[5]], - [rot[6], rot[7], rot[8]]] - - -def quat_multiply_by_vec(quat, vec): - """Multiply a quaternion by a pure-vector quaternion.""" - return jnp.sum( - QUAT_MULTIPLY_BY_VEC * - quat[..., :, None, None] * - vec[..., None, :, None], - axis=(-3, -2)) - - -def quat_multiply(quat1, quat2): - """Multiply a quaternion by another quaternion.""" - return jnp.sum( - QUAT_MULTIPLY * - quat1[..., :, None, None] * - quat2[..., None, :, None], - axis=(-3, -2)) - - -def apply_rot_to_vec(rot, vec, unstack=False): - """Multiply rotation matrix by a vector.""" - if unstack: - x, y, z = [vec[:, i] for i in range(3)] - else: - x, y, z = vec - return [rot[0][0] * x + rot[0][1] * y + rot[0][2] * z, - rot[1][0] * x + rot[1][1] * y + rot[1][2] * z, - rot[2][0] * x + rot[2][1] * y + rot[2][2] * z] - - -def apply_inverse_rot_to_vec(rot, vec): - """Multiply the inverse of a rotation matrix by a vector.""" - # Inverse rotation is just transpose - return [rot[0][0] * vec[0] + rot[1][0] * vec[1] + rot[2][0] * vec[2], - rot[0][1] * vec[0] + rot[1][1] * vec[1] + rot[2][1] * vec[2], - rot[0][2] * vec[0] + rot[1][2] * vec[1] + rot[2][2] * vec[2]] - - -class QuatAffine(object): - """Affine transformation represented by quaternion and vector.""" - - def __init__(self, quaternion, translation, rotation=None, normalize=True, - unstack_inputs=False): - """Initialize from quaternion and translation. - - Args: - quaternion: Rotation represented by a quaternion, to be applied - before translation. Must be a unit quaternion unless normalize==True. - translation: Translation represented as a vector. - rotation: Same rotation as the quaternion, represented as a (..., 3, 3) - tensor. If None, rotation will be calculated from the quaternion. - normalize: If True, l2 normalize the quaternion on input. - unstack_inputs: If True, translation is a vector with last component 3 - """ - - if quaternion is not None: - assert quaternion.shape[-1] == 4 - - if unstack_inputs: - if rotation is not None: - rotation = [jnp.moveaxis(x, -1, 0) # Unstack. - for x in jnp.moveaxis(rotation, -2, 0)] # Unstack. - translation = jnp.moveaxis(translation, -1, 0) # Unstack. - - if normalize and quaternion is not None: - quaternion = quaternion / jnp.linalg.norm(quaternion, axis=-1, - keepdims=True) - - if rotation is None: - rotation = quat_to_rot(quaternion) - - self.quaternion = quaternion - self.rotation = [list(row) for row in rotation] - self.translation = list(translation) - - assert all(len(row) == 3 for row in self.rotation) - assert len(self.translation) == 3 - - def to_tensor(self): - return jnp.concatenate( - [self.quaternion] + - [jnp.expand_dims(x, axis=-1) for x in self.translation], - axis=-1) - - def apply_tensor_fn(self, tensor_fn): - """Return a new QuatAffine with tensor_fn applied (e.g. 
stop_gradient).""" - return QuatAffine( - tensor_fn(self.quaternion), - [tensor_fn(x) for x in self.translation], - rotation=[[tensor_fn(x) for x in row] for row in self.rotation], - normalize=False) - - def apply_rotation_tensor_fn(self, tensor_fn): - """Return a new QuatAffine with tensor_fn applied to the rotation part.""" - return QuatAffine( - tensor_fn(self.quaternion), - [x for x in self.translation], - rotation=[[tensor_fn(x) for x in row] for row in self.rotation], - normalize=False) - - def scale_translation(self, position_scale): - """Return a new quat affine with a different scale for translation.""" - - return QuatAffine( - self.quaternion, - [x * position_scale for x in self.translation], - rotation=[[x for x in row] for row in self.rotation], - normalize=False) - - @classmethod - def from_tensor(cls, tensor, normalize=False): - quaternion, tx, ty, tz = jnp.split(tensor, [4, 5, 6], axis=-1) - return cls(quaternion, - [tx[..., 0], ty[..., 0], tz[..., 0]], - normalize=normalize) - - def pre_compose(self, update): - """Return a new QuatAffine which applies the transformation update first. - - Args: - update: Length-6 vector. 3-vector of x, y, and z such that the quaternion - update is (1, x, y, z) and zero for the 3-vector is the identity - quaternion. 3-vector for translation concatenated. - - Returns: - New QuatAffine object. - """ - vector_quaternion_update, x, y, z = jnp.split(update, [3, 4, 5], axis=-1) - trans_update = [jnp.squeeze(x, axis=-1), - jnp.squeeze(y, axis=-1), - jnp.squeeze(z, axis=-1)] - - new_quaternion = (self.quaternion + - quat_multiply_by_vec(self.quaternion, - vector_quaternion_update)) - - trans_update = apply_rot_to_vec(self.rotation, trans_update) - new_translation = [ - self.translation[0] + trans_update[0], - self.translation[1] + trans_update[1], - self.translation[2] + trans_update[2]] - - return QuatAffine(new_quaternion, new_translation) - - def apply_to_point(self, point, extra_dims=0): - """Apply affine to a point. - - Args: - point: List of 3 tensors to apply affine. - extra_dims: Number of dimensions at the end of the transformed_point - shape that are not present in the rotation and translation. The most - common use is rotation N points at once with extra_dims=1 for use in a - network. - - Returns: - Transformed point after applying affine. - """ - rotation = self.rotation - translation = self.translation - for _ in range(extra_dims): - expand_fn = functools.partial(jnp.expand_dims, axis=-1) - rotation = jax.tree_map(expand_fn, rotation) - translation = jax.tree_map(expand_fn, translation) - - rot_point = apply_rot_to_vec(rotation, point) - return [ - rot_point[0] + translation[0], - rot_point[1] + translation[1], - rot_point[2] + translation[2]] - - def invert_point(self, transformed_point, extra_dims=0): - """Apply inverse of transformation to a point. - - Args: - transformed_point: List of 3 tensors to apply affine - extra_dims: Number of dimensions at the end of the transformed_point - shape that are not present in the rotation and translation. The most - common use is rotation N points at once with extra_dims=1 for use in a - network. - - Returns: - Transformed point after applying affine. 
- """ - rotation = self.rotation - translation = self.translation - for _ in range(extra_dims): - expand_fn = functools.partial(jnp.expand_dims, axis=-1) - rotation = jax.tree_map(expand_fn, rotation) - translation = jax.tree_map(expand_fn, translation) - - rot_point = [ - transformed_point[0] - translation[0], - transformed_point[1] - translation[1], - transformed_point[2] - translation[2]] - - return apply_inverse_rot_to_vec(rotation, rot_point) - - def __repr__(self): - return 'QuatAffine(%r, %r)' % (self.quaternion, self.translation) - - -def _multiply(a, b): - return jnp.stack([ - jnp.array([a[0][0]*b[0][0] + a[0][1]*b[1][0] + a[0][2]*b[2][0], - a[0][0]*b[0][1] + a[0][1]*b[1][1] + a[0][2]*b[2][1], - a[0][0]*b[0][2] + a[0][1]*b[1][2] + a[0][2]*b[2][2]]), - - jnp.array([a[1][0]*b[0][0] + a[1][1]*b[1][0] + a[1][2]*b[2][0], - a[1][0]*b[0][1] + a[1][1]*b[1][1] + a[1][2]*b[2][1], - a[1][0]*b[0][2] + a[1][1]*b[1][2] + a[1][2]*b[2][2]]), - - jnp.array([a[2][0]*b[0][0] + a[2][1]*b[1][0] + a[2][2]*b[2][0], - a[2][0]*b[0][1] + a[2][1]*b[1][1] + a[2][2]*b[2][1], - a[2][0]*b[0][2] + a[2][1]*b[1][2] + a[2][2]*b[2][2]])]) - - -def make_canonical_transform( - n_xyz: jnp.ndarray, - ca_xyz: jnp.ndarray, - c_xyz: jnp.ndarray) -> Tuple[jnp.ndarray, jnp.ndarray]: - """Returns translation and rotation matrices to canonicalize residue atoms. - - Note that this method does not take care of symmetries. If you provide the - atom positions in the non-standard way, the N atom will end up not at - [-0.527250, 1.359329, 0.0] but instead at [-0.527250, -1.359329, 0.0]. You - need to take care of such cases in your code. - - Args: - n_xyz: An array of shape [batch, 3] of nitrogen xyz coordinates. - ca_xyz: An array of shape [batch, 3] of carbon alpha xyz coordinates. - c_xyz: An array of shape [batch, 3] of carbon xyz coordinates. - - Returns: - A tuple (translation, rotation) where: - translation is an array of shape [batch, 3] defining the translation. - rotation is an array of shape [batch, 3, 3] defining the rotation. - After applying the translation and rotation to all atoms in a residue: - * All atoms will be shifted so that CA is at the origin, - * All atoms will be rotated so that C is at the x-axis, - * All atoms will be shifted so that N is in the xy plane. - """ - assert len(n_xyz.shape) == 2, n_xyz.shape - assert n_xyz.shape[-1] == 3, n_xyz.shape - assert n_xyz.shape == ca_xyz.shape == c_xyz.shape, ( - n_xyz.shape, ca_xyz.shape, c_xyz.shape) - - # Place CA at the origin. - translation = -ca_xyz - n_xyz = n_xyz + translation - c_xyz = c_xyz + translation - - # Place C on the x-axis. - c_x, c_y, c_z = [c_xyz[:, i] for i in range(3)] - # Rotate by angle c1 in the x-y plane (around the z-axis). - sin_c1 = -c_y / jnp.sqrt(1e-20 + c_x**2 + c_y**2) - cos_c1 = c_x / jnp.sqrt(1e-20 + c_x**2 + c_y**2) - zeros = jnp.zeros_like(sin_c1) - ones = jnp.ones_like(sin_c1) - # pylint: disable=bad-whitespace - c1_rot_matrix = jnp.stack([jnp.array([cos_c1, -sin_c1, zeros]), - jnp.array([sin_c1, cos_c1, zeros]), - jnp.array([zeros, zeros, ones])]) - - # Rotate by angle c2 in the x-z plane (around the y-axis). 
- sin_c2 = c_z / jnp.sqrt(1e-20 + c_x**2 + c_y**2 + c_z**2) - cos_c2 = jnp.sqrt(c_x**2 + c_y**2) / jnp.sqrt( - 1e-20 + c_x**2 + c_y**2 + c_z**2) - c2_rot_matrix = jnp.stack([jnp.array([cos_c2, zeros, sin_c2]), - jnp.array([zeros, ones, zeros]), - jnp.array([-sin_c2, zeros, cos_c2])]) - - c_rot_matrix = _multiply(c2_rot_matrix, c1_rot_matrix) - n_xyz = jnp.stack(apply_rot_to_vec(c_rot_matrix, n_xyz, unstack=True)).T - - # Place N in the x-y plane. - _, n_y, n_z = [n_xyz[:, i] for i in range(3)] - # Rotate by angle alpha in the y-z plane (around the x-axis). - sin_n = -n_z / jnp.sqrt(1e-20 + n_y**2 + n_z**2) - cos_n = n_y / jnp.sqrt(1e-20 + n_y**2 + n_z**2) - n_rot_matrix = jnp.stack([jnp.array([ones, zeros, zeros]), - jnp.array([zeros, cos_n, -sin_n]), - jnp.array([zeros, sin_n, cos_n])]) - # pylint: enable=bad-whitespace - - return (translation, - jnp.transpose(_multiply(n_rot_matrix, c_rot_matrix), [2, 0, 1])) - - -def make_transform_from_reference( - n_xyz: jnp.ndarray, - ca_xyz: jnp.ndarray, - c_xyz: jnp.ndarray) -> Tuple[jnp.ndarray, jnp.ndarray]: - """Returns rotation and translation matrices to convert from reference. - - Note that this method does not take care of symmetries. If you provide the - atom positions in the non-standard way, the N atom will end up not at - [-0.527250, 1.359329, 0.0] but instead at [-0.527250, -1.359329, 0.0]. You - need to take care of such cases in your code. - - Args: - n_xyz: An array of shape [batch, 3] of nitrogen xyz coordinates. - ca_xyz: An array of shape [batch, 3] of carbon alpha xyz coordinates. - c_xyz: An array of shape [batch, 3] of carbon xyz coordinates. - - Returns: - A tuple (rotation, translation) where: - rotation is an array of shape [batch, 3, 3] defining the rotation. - translation is an array of shape [batch, 3] defining the translation. - After applying the translation and rotation to the reference backbone, - the coordinates will approximately equal to the input coordinates. - - The order of translation and rotation differs from make_canonical_transform - because the rotation from this function should be applied before the - translation, unlike make_canonical_transform. 
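- 
-    Example (an illustrative sketch with assumed placeholder arrays):
-      rot, trans = make_transform_from_reference(n_xyz, ca_xyz, c_xyz)
-      # for reference coordinates `ref` of shape [batch, 3], the inputs are
-      # recovered approximately as jnp.einsum('bij,bj->bi', rot, ref) + trans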
- """ - translation, rotation = make_canonical_transform(n_xyz, ca_xyz, c_xyz) - return np.transpose(rotation, (0, 2, 1)), -translation diff --git a/spaces/simonduerr/diffdock/models/score_model.py b/spaces/simonduerr/diffdock/models/score_model.py deleted file mode 100644 index 60c64feabdb3d23096a61e4b5e77004b87d6febd..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/diffdock/models/score_model.py +++ /dev/null @@ -1,442 +0,0 @@ -import math - -from e3nn import o3 -import torch -from torch import nn -from torch.nn import functional as F -from torch_cluster import radius, radius_graph -from torch_scatter import scatter, scatter_mean -import numpy as np -from e3nn.nn import BatchNorm - -from utils import so3, torus -from datasets.process_mols import lig_feature_dims, rec_residue_feature_dims - - -class AtomEncoder(torch.nn.Module): - - def __init__(self, emb_dim, feature_dims, sigma_embed_dim, lm_embedding_type= None): - # first element of feature_dims tuple is a list with the lenght of each categorical feature and the second is the number of scalar features - super(AtomEncoder, self).__init__() - self.atom_embedding_list = torch.nn.ModuleList() - self.num_categorical_features = len(feature_dims[0]) - self.num_scalar_features = feature_dims[1] + sigma_embed_dim - self.lm_embedding_type = lm_embedding_type - for i, dim in enumerate(feature_dims[0]): - emb = torch.nn.Embedding(dim, emb_dim) - torch.nn.init.xavier_uniform_(emb.weight.data) - self.atom_embedding_list.append(emb) - - if self.num_scalar_features > 0: - self.linear = torch.nn.Linear(self.num_scalar_features, emb_dim) - if self.lm_embedding_type is not None: - if self.lm_embedding_type == 'esm': - self.lm_embedding_dim = 1280 - else: raise ValueError('LM Embedding type was not correctly determined. 
LM embedding type: ', self.lm_embedding_type) - self.lm_embedding_layer = torch.nn.Linear(self.lm_embedding_dim + emb_dim, emb_dim) - - def forward(self, x): - x_embedding = 0 - if self.lm_embedding_type is not None: - assert x.shape[1] == self.num_categorical_features + self.num_scalar_features + self.lm_embedding_dim - else: - assert x.shape[1] == self.num_categorical_features + self.num_scalar_features - for i in range(self.num_categorical_features): - x_embedding += self.atom_embedding_list[i](x[:, i].long()) - - if self.num_scalar_features > 0: - x_embedding += self.linear(x[:, self.num_categorical_features:self.num_categorical_features + self.num_scalar_features]) - if self.lm_embedding_type is not None: - x_embedding = self.lm_embedding_layer(torch.cat([x_embedding, x[:, -self.lm_embedding_dim:]], axis=1)) - return x_embedding - - -class TensorProductConvLayer(torch.nn.Module): - def __init__(self, in_irreps, sh_irreps, out_irreps, n_edge_features, residual=True, batch_norm=True, dropout=0.0, - hidden_features=None): - super(TensorProductConvLayer, self).__init__() - self.in_irreps = in_irreps - self.out_irreps = out_irreps - self.sh_irreps = sh_irreps - self.residual = residual - if hidden_features is None: - hidden_features = n_edge_features - - self.tp = tp = o3.FullyConnectedTensorProduct(in_irreps, sh_irreps, out_irreps, shared_weights=False) - - self.fc = nn.Sequential( - nn.Linear(n_edge_features, hidden_features), - nn.ReLU(), - nn.Dropout(dropout), - nn.Linear(hidden_features, tp.weight_numel) - ) - self.batch_norm = BatchNorm(out_irreps) if batch_norm else None - - def forward(self, node_attr, edge_index, edge_attr, edge_sh, out_nodes=None, reduce='mean'): - - edge_src, edge_dst = edge_index - tp = self.tp(node_attr[edge_dst], edge_sh, self.fc(edge_attr)) - - out_nodes = out_nodes or node_attr.shape[0] - out = scatter(tp, edge_src, dim=0, dim_size=out_nodes, reduce=reduce) - - if self.residual: - padded = F.pad(node_attr, (0, out.shape[-1] - node_attr.shape[-1])) - out = out + padded - - if self.batch_norm: - out = self.batch_norm(out) - return out - - -class TensorProductScoreModel(torch.nn.Module): - def __init__(self, t_to_sigma, device, timestep_emb_func, in_lig_edge_features=4, sigma_embed_dim=32, sh_lmax=2, - ns=16, nv=4, num_conv_layers=2, lig_max_radius=5, rec_max_radius=30, cross_max_distance=250, - center_max_distance=30, distance_embed_dim=32, cross_distance_embed_dim=32, no_torsion=False, - scale_by_sigma=True, use_second_order_repr=False, batch_norm=True, - dynamic_max_cross=False, dropout=0.0, lm_embedding_type=None, confidence_mode=False, - confidence_dropout=0, confidence_no_batchnorm=False, num_confidence_outputs=1): - super(TensorProductScoreModel, self).__init__() - self.t_to_sigma = t_to_sigma - self.in_lig_edge_features = in_lig_edge_features - self.sigma_embed_dim = sigma_embed_dim - self.lig_max_radius = lig_max_radius - self.rec_max_radius = rec_max_radius - self.cross_max_distance = cross_max_distance - self.dynamic_max_cross = dynamic_max_cross - self.center_max_distance = center_max_distance - self.distance_embed_dim = distance_embed_dim - self.cross_distance_embed_dim = cross_distance_embed_dim - self.sh_irreps = o3.Irreps.spherical_harmonics(lmax=sh_lmax) - self.ns, self.nv = ns, nv - self.scale_by_sigma = scale_by_sigma - self.device = device - self.no_torsion = no_torsion - self.timestep_emb_func = timestep_emb_func - self.confidence_mode = confidence_mode - self.num_conv_layers = num_conv_layers - - self.lig_node_embedding = 
AtomEncoder(emb_dim=ns, feature_dims=lig_feature_dims, sigma_embed_dim=sigma_embed_dim) - self.lig_edge_embedding = nn.Sequential(nn.Linear(in_lig_edge_features + sigma_embed_dim + distance_embed_dim, ns),nn.ReLU(), nn.Dropout(dropout),nn.Linear(ns, ns)) - - self.rec_node_embedding = AtomEncoder(emb_dim=ns, feature_dims=rec_residue_feature_dims, sigma_embed_dim=sigma_embed_dim, lm_embedding_type=lm_embedding_type) - self.rec_edge_embedding = nn.Sequential(nn.Linear(sigma_embed_dim + distance_embed_dim, ns), nn.ReLU(), nn.Dropout(dropout),nn.Linear(ns, ns)) - - self.cross_edge_embedding = nn.Sequential(nn.Linear(sigma_embed_dim + cross_distance_embed_dim, ns), nn.ReLU(), nn.Dropout(dropout),nn.Linear(ns, ns)) - - self.lig_distance_expansion = GaussianSmearing(0.0, lig_max_radius, distance_embed_dim) - self.rec_distance_expansion = GaussianSmearing(0.0, rec_max_radius, distance_embed_dim) - self.cross_distance_expansion = GaussianSmearing(0.0, cross_max_distance, cross_distance_embed_dim) - - if use_second_order_repr: - irrep_seq = [ - f'{ns}x0e', - f'{ns}x0e + {nv}x1o + {nv}x2e', - f'{ns}x0e + {nv}x1o + {nv}x2e + {nv}x1e + {nv}x2o', - f'{ns}x0e + {nv}x1o + {nv}x2e + {nv}x1e + {nv}x2o + {ns}x0o' - ] - else: - irrep_seq = [ - f'{ns}x0e', - f'{ns}x0e + {nv}x1o', - f'{ns}x0e + {nv}x1o + {nv}x1e', - f'{ns}x0e + {nv}x1o + {nv}x1e + {ns}x0o' - ] - - lig_conv_layers, rec_conv_layers, lig_to_rec_conv_layers, rec_to_lig_conv_layers = [], [], [], [] - for i in range(num_conv_layers): - in_irreps = irrep_seq[min(i, len(irrep_seq) - 1)] - out_irreps = irrep_seq[min(i + 1, len(irrep_seq) - 1)] - parameters = { - 'in_irreps': in_irreps, - 'sh_irreps': self.sh_irreps, - 'out_irreps': out_irreps, - 'n_edge_features': 3 * ns, - 'hidden_features': 3 * ns, - 'residual': False, - 'batch_norm': batch_norm, - 'dropout': dropout - } - - lig_layer = TensorProductConvLayer(**parameters) - lig_conv_layers.append(lig_layer) - rec_layer = TensorProductConvLayer(**parameters) - rec_conv_layers.append(rec_layer) - lig_to_rec_layer = TensorProductConvLayer(**parameters) - lig_to_rec_conv_layers.append(lig_to_rec_layer) - rec_to_lig_layer = TensorProductConvLayer(**parameters) - rec_to_lig_conv_layers.append(rec_to_lig_layer) - - self.lig_conv_layers = nn.ModuleList(lig_conv_layers) - self.rec_conv_layers = nn.ModuleList(rec_conv_layers) - self.lig_to_rec_conv_layers = nn.ModuleList(lig_to_rec_conv_layers) - self.rec_to_lig_conv_layers = nn.ModuleList(rec_to_lig_conv_layers) - - if self.confidence_mode: - self.confidence_predictor = nn.Sequential( - nn.Linear(2*self.ns if num_conv_layers >= 3 else self.ns,ns), - nn.BatchNorm1d(ns) if not confidence_no_batchnorm else nn.Identity(), - nn.ReLU(), - nn.Dropout(confidence_dropout), - nn.Linear(ns, ns), - nn.BatchNorm1d(ns) if not confidence_no_batchnorm else nn.Identity(), - nn.ReLU(), - nn.Dropout(confidence_dropout), - nn.Linear(ns, num_confidence_outputs) - ) - else: - # center of mass translation and rotation components - self.center_distance_expansion = GaussianSmearing(0.0, center_max_distance, distance_embed_dim) - self.center_edge_embedding = nn.Sequential( - nn.Linear(distance_embed_dim + sigma_embed_dim, ns), - nn.ReLU(), - nn.Dropout(dropout), - nn.Linear(ns, ns) - ) - - self.final_conv = TensorProductConvLayer( - in_irreps=self.lig_conv_layers[-1].out_irreps, - sh_irreps=self.sh_irreps, - out_irreps=f'2x1o + 2x1e', - n_edge_features=2 * ns, - residual=False, - dropout=dropout, - batch_norm=batch_norm - ) - self.tr_final_layer = nn.Sequential(nn.Linear(1 + 
sigma_embed_dim, ns),nn.Dropout(dropout), nn.ReLU(), nn.Linear(ns, 1)) - self.rot_final_layer = nn.Sequential(nn.Linear(1 + sigma_embed_dim, ns),nn.Dropout(dropout), nn.ReLU(), nn.Linear(ns, 1)) - - if not no_torsion: - # torsion angles components - self.final_edge_embedding = nn.Sequential( - nn.Linear(distance_embed_dim, ns), - nn.ReLU(), - nn.Dropout(dropout), - nn.Linear(ns, ns) - ) - self.final_tp_tor = o3.FullTensorProduct(self.sh_irreps, "2e") - self.tor_bond_conv = TensorProductConvLayer( - in_irreps=self.lig_conv_layers[-1].out_irreps, - sh_irreps=self.final_tp_tor.irreps_out, - out_irreps=f'{ns}x0o + {ns}x0e', - n_edge_features=3 * ns, - residual=False, - dropout=dropout, - batch_norm=batch_norm - ) - self.tor_final_layer = nn.Sequential( - nn.Linear(2 * ns, ns, bias=False), - nn.Tanh(), - nn.Dropout(dropout), - nn.Linear(ns, 1, bias=False) - ) - - def forward(self, data): - if not self.confidence_mode: - tr_sigma, rot_sigma, tor_sigma = self.t_to_sigma(*[data.complex_t[noise_type] for noise_type in ['tr', 'rot', 'tor']]) - else: - tr_sigma, rot_sigma, tor_sigma = [data.complex_t[noise_type] for noise_type in ['tr', 'rot', 'tor']] - - # build ligand graph - lig_node_attr, lig_edge_index, lig_edge_attr, lig_edge_sh = self.build_lig_conv_graph(data) - lig_src, lig_dst = lig_edge_index - lig_node_attr = self.lig_node_embedding(lig_node_attr) - lig_edge_attr = self.lig_edge_embedding(lig_edge_attr) - - # build receptor graph - rec_node_attr, rec_edge_index, rec_edge_attr, rec_edge_sh = self.build_rec_conv_graph(data) - rec_src, rec_dst = rec_edge_index - rec_node_attr = self.rec_node_embedding(rec_node_attr) - rec_edge_attr = self.rec_edge_embedding(rec_edge_attr) - - # build cross graph - if self.dynamic_max_cross: - cross_cutoff = (tr_sigma * 3 + 20).unsqueeze(1) - else: - cross_cutoff = self.cross_max_distance - cross_edge_index, cross_edge_attr, cross_edge_sh = self.build_cross_conv_graph(data, cross_cutoff) - cross_lig, cross_rec = cross_edge_index - cross_edge_attr = self.cross_edge_embedding(cross_edge_attr) - - for l in range(len(self.lig_conv_layers)): - # intra graph message passing - lig_edge_attr_ = torch.cat([lig_edge_attr, lig_node_attr[lig_src, :self.ns], lig_node_attr[lig_dst, :self.ns]], -1) - lig_intra_update = self.lig_conv_layers[l](lig_node_attr, lig_edge_index, lig_edge_attr_, lig_edge_sh) - - # inter graph message passing - rec_to_lig_edge_attr_ = torch.cat([cross_edge_attr, lig_node_attr[cross_lig, :self.ns], rec_node_attr[cross_rec, :self.ns]], -1) - lig_inter_update = self.rec_to_lig_conv_layers[l](rec_node_attr, cross_edge_index, rec_to_lig_edge_attr_, cross_edge_sh, - out_nodes=lig_node_attr.shape[0]) - - if l != len(self.lig_conv_layers) - 1: - rec_edge_attr_ = torch.cat([rec_edge_attr, rec_node_attr[rec_src, :self.ns], rec_node_attr[rec_dst, :self.ns]], -1) - rec_intra_update = self.rec_conv_layers[l](rec_node_attr, rec_edge_index, rec_edge_attr_, rec_edge_sh) - - lig_to_rec_edge_attr_ = torch.cat([cross_edge_attr, lig_node_attr[cross_lig, :self.ns], rec_node_attr[cross_rec, :self.ns]], -1) - rec_inter_update = self.lig_to_rec_conv_layers[l](lig_node_attr, torch.flip(cross_edge_index, dims=[0]), lig_to_rec_edge_attr_, - cross_edge_sh, out_nodes=rec_node_attr.shape[0]) - - # padding original features - lig_node_attr = F.pad(lig_node_attr, (0, lig_intra_update.shape[-1] - lig_node_attr.shape[-1])) - - # update features with residual updates - lig_node_attr = lig_node_attr + lig_intra_update + lig_inter_update - - if l != len(self.lig_conv_layers) - 1: - 
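# the receptor stream is refreshed on all but the last layer, since the
- # readout below consumes ligand node features only -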
rec_node_attr = F.pad(rec_node_attr, (0, rec_intra_update.shape[-1] - rec_node_attr.shape[-1])) - rec_node_attr = rec_node_attr + rec_intra_update + rec_inter_update - - # compute confidence score - if self.confidence_mode: - scalar_lig_attr = torch.cat([lig_node_attr[:,:self.ns],lig_node_attr[:,-self.ns:] ], dim=1) if self.num_conv_layers >= 3 else lig_node_attr[:,:self.ns] - confidence = self.confidence_predictor(scatter_mean(scalar_lig_attr, data['ligand'].batch, dim=0)).squeeze(dim=-1) - return confidence - - # compute translational and rotational score vectors - center_edge_index, center_edge_attr, center_edge_sh = self.build_center_conv_graph(data) - center_edge_attr = self.center_edge_embedding(center_edge_attr) - center_edge_attr = torch.cat([center_edge_attr, lig_node_attr[center_edge_index[0], :self.ns]], -1) - global_pred = self.final_conv(lig_node_attr, center_edge_index, center_edge_attr, center_edge_sh, out_nodes=data.num_graphs) - - tr_pred = global_pred[:, :3] + global_pred[:, 6:9] - rot_pred = global_pred[:, 3:6] + global_pred[:, 9:] - data.graph_sigma_emb = self.timestep_emb_func(data.complex_t['tr']) - - # fix the magnitude of translational and rotational score vectors - tr_norm = torch.linalg.vector_norm(tr_pred, dim=1).unsqueeze(1) - tr_pred = tr_pred / tr_norm * self.tr_final_layer(torch.cat([tr_norm, data.graph_sigma_emb], dim=1)) - rot_norm = torch.linalg.vector_norm(rot_pred, dim=1).unsqueeze(1) - rot_pred = rot_pred / rot_norm * self.rot_final_layer(torch.cat([rot_norm, data.graph_sigma_emb], dim=1)) - - if self.scale_by_sigma: - tr_pred = tr_pred / tr_sigma.unsqueeze(1) - rot_pred = rot_pred * so3.score_norm(rot_sigma.cpu()).unsqueeze(1).to(data['ligand'].x.device) - - if self.no_torsion or data['ligand'].edge_mask.sum() == 0: return tr_pred, rot_pred, torch.empty(0, device=self.device) - - # torsional components - tor_bonds, tor_edge_index, tor_edge_attr, tor_edge_sh = self.build_bond_conv_graph(data) - tor_bond_vec = data['ligand'].pos[tor_bonds[1]] - data['ligand'].pos[tor_bonds[0]] - tor_bond_attr = lig_node_attr[tor_bonds[0]] + lig_node_attr[tor_bonds[1]] - - tor_bonds_sh = o3.spherical_harmonics("2e", tor_bond_vec, normalize=True, normalization='component') - tor_edge_sh = self.final_tp_tor(tor_edge_sh, tor_bonds_sh[tor_edge_index[0]]) - - tor_edge_attr = torch.cat([tor_edge_attr, lig_node_attr[tor_edge_index[1], :self.ns], - tor_bond_attr[tor_edge_index[0], :self.ns]], -1) - tor_pred = self.tor_bond_conv(lig_node_attr, tor_edge_index, tor_edge_attr, tor_edge_sh, - out_nodes=data['ligand'].edge_mask.sum(), reduce='mean') - tor_pred = self.tor_final_layer(tor_pred).squeeze(1) - edge_sigma = tor_sigma[data['ligand'].batch][data['ligand', 'ligand'].edge_index[0]][data['ligand'].edge_mask] - - if self.scale_by_sigma: - tor_pred = tor_pred * torch.sqrt(torch.tensor(torus.score_norm(edge_sigma.cpu().numpy())).float() - .to(data['ligand'].x.device)) - return tr_pred, rot_pred, tor_pred - - def build_lig_conv_graph(self, data): - # builds the ligand graph edges and initial node and edge features - data['ligand'].node_sigma_emb = self.timestep_emb_func(data['ligand'].node_t['tr']) - - # compute edges - radius_edges = radius_graph(data['ligand'].pos, self.lig_max_radius, data['ligand'].batch) - edge_index = torch.cat([data['ligand', 'ligand'].edge_index, radius_edges], 1).long() - edge_attr = torch.cat([ - data['ligand', 'ligand'].edge_attr, - torch.zeros(radius_edges.shape[-1], self.in_lig_edge_features, device=data['ligand'].x.device) - ], 0) - - # compute initial 
features - edge_sigma_emb = data['ligand'].node_sigma_emb[edge_index[0].long()] - edge_attr = torch.cat([edge_attr, edge_sigma_emb], 1) - node_attr = torch.cat([data['ligand'].x, data['ligand'].node_sigma_emb], 1) - - src, dst = edge_index - edge_vec = data['ligand'].pos[dst.long()] - data['ligand'].pos[src.long()] - edge_length_emb = self.lig_distance_expansion(edge_vec.norm(dim=-1)) - - edge_attr = torch.cat([edge_attr, edge_length_emb], 1) - edge_sh = o3.spherical_harmonics(self.sh_irreps, edge_vec, normalize=True, normalization='component') - - return node_attr, edge_index, edge_attr, edge_sh - - def build_rec_conv_graph(self, data): - # builds the receptor initial node and edge embeddings - data['receptor'].node_sigma_emb = self.timestep_emb_func(data['receptor'].node_t['tr']) # tr rot and tor noise is all the same - node_attr = torch.cat([data['receptor'].x, data['receptor'].node_sigma_emb], 1) - - # this assumes the edges were already created in preprocessing since protein's structure is fixed - edge_index = data['receptor', 'receptor'].edge_index - src, dst = edge_index - edge_vec = data['receptor'].pos[dst.long()] - data['receptor'].pos[src.long()] - - edge_length_emb = self.rec_distance_expansion(edge_vec.norm(dim=-1)) - edge_sigma_emb = data['receptor'].node_sigma_emb[edge_index[0].long()] - edge_attr = torch.cat([edge_sigma_emb, edge_length_emb], 1) - edge_sh = o3.spherical_harmonics(self.sh_irreps, edge_vec, normalize=True, normalization='component') - - return node_attr, edge_index, edge_attr, edge_sh - - def build_cross_conv_graph(self, data, cross_distance_cutoff): - # builds the cross edges between ligand and receptor - if torch.is_tensor(cross_distance_cutoff): - # different cutoff for every graph (depends on the diffusion time) - edge_index = radius(data['receptor'].pos / cross_distance_cutoff[data['receptor'].batch], - data['ligand'].pos / cross_distance_cutoff[data['ligand'].batch], 1, - data['receptor'].batch, data['ligand'].batch, max_num_neighbors=10000) - else: - edge_index = radius(data['receptor'].pos, data['ligand'].pos, cross_distance_cutoff, - data['receptor'].batch, data['ligand'].batch, max_num_neighbors=10000) - - src, dst = edge_index - edge_vec = data['receptor'].pos[dst.long()] - data['ligand'].pos[src.long()] - - edge_length_emb = self.cross_distance_expansion(edge_vec.norm(dim=-1)) - edge_sigma_emb = data['ligand'].node_sigma_emb[src.long()] - edge_attr = torch.cat([edge_sigma_emb, edge_length_emb], 1) - edge_sh = o3.spherical_harmonics(self.sh_irreps, edge_vec, normalize=True, normalization='component') - - return edge_index, edge_attr, edge_sh - - def build_center_conv_graph(self, data): - # builds the filter and edges for the convolution generating translational and rotational scores - edge_index = torch.cat([data['ligand'].batch.unsqueeze(0), torch.arange(len(data['ligand'].batch)).to(data['ligand'].x.device).unsqueeze(0)], dim=0) - - center_pos, count = torch.zeros((data.num_graphs, 3)).to(data['ligand'].x.device), torch.zeros((data.num_graphs, 3)).to(data['ligand'].x.device) - center_pos.index_add_(0, index=data['ligand'].batch, source=data['ligand'].pos) - center_pos = center_pos / torch.bincount(data['ligand'].batch).unsqueeze(1) - - edge_vec = data['ligand'].pos[edge_index[1]] - center_pos[edge_index[0]] - edge_attr = self.center_distance_expansion(edge_vec.norm(dim=-1)) - edge_sigma_emb = data['ligand'].node_sigma_emb[edge_index[1].long()] - edge_attr = torch.cat([edge_attr, edge_sigma_emb], 1) - edge_sh = 
o3.spherical_harmonics(self.sh_irreps, edge_vec, normalize=True, normalization='component') - return edge_index, edge_attr, edge_sh - - def build_bond_conv_graph(self, data): - # builds the graph for the convolution between the center of the rotatable bonds and the neighbouring nodes - bonds = data['ligand', 'ligand'].edge_index[:, data['ligand'].edge_mask].long() - bond_pos = (data['ligand'].pos[bonds[0]] + data['ligand'].pos[bonds[1]]) / 2 - bond_batch = data['ligand'].batch[bonds[0]] - edge_index = radius(data['ligand'].pos, bond_pos, self.lig_max_radius, batch_x=data['ligand'].batch, batch_y=bond_batch) - - edge_vec = data['ligand'].pos[edge_index[1]] - bond_pos[edge_index[0]] - edge_attr = self.lig_distance_expansion(edge_vec.norm(dim=-1)) - - edge_attr = self.final_edge_embedding(edge_attr) - edge_sh = o3.spherical_harmonics(self.sh_irreps, edge_vec, normalize=True, normalization='component') - - return bonds, edge_index, edge_attr, edge_sh - - -class GaussianSmearing(torch.nn.Module): - # used to embed the edge distances - def __init__(self, start=0.0, stop=5.0, num_gaussians=50): - super().__init__() - offset = torch.linspace(start, stop, num_gaussians) - self.coeff = -0.5 / (offset[1] - offset[0]).item() ** 2 - self.register_buffer('offset', offset) - - def forward(self, dist): - dist = dist.view(-1, 1) - self.offset.view(1, -1) - return torch.exp(self.coeff * torch.pow(dist, 2)) diff --git a/spaces/sino72/Passenger_Reconization/deep_sort/utils/asserts.py b/spaces/sino72/Passenger_Reconization/deep_sort/utils/asserts.py deleted file mode 100644 index 59a73cc04025762d6490fcd2945a747d963def32..0000000000000000000000000000000000000000 --- a/spaces/sino72/Passenger_Reconization/deep_sort/utils/asserts.py +++ /dev/null @@ -1,13 +0,0 @@ -from os import environ - - -def assert_in(file, files_to_check): - if file not in files_to_check: - raise AssertionError("{} does not exist in the list".format(str(file))) - return True - - -def assert_in_env(check_list: list): - for item in check_list: - assert_in(item, environ.keys()) - return True diff --git a/spaces/skf15963/summary/fengshen/examples/clue1.1/predict2submit/tnews_submit.py b/spaces/skf15963/summary/fengshen/examples/clue1.1/predict2submit/tnews_submit.py deleted file mode 100644 index eada0476b270624af8c397afb7df70e4e24473b3..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/clue1.1/predict2submit/tnews_submit.py +++ /dev/null @@ -1,47 +0,0 @@ -import json -from tqdm import tqdm -import argparse - - -def save_data(data,file_path): - with open(file_path, 'w', encoding='utf8') as f: - for line in data: - json_data=json.dumps(line,ensure_ascii=False) - f.write(json_data+'\n') - -def submit(file_path): - id2label={"故事": "100", - "文化": "101", - "娱乐": "102", - "体育": "103", - "财经": "104", - "房产": "106", - "汽车": "107", - "教育": "108", - "科技": "109", - "军事": "110", - "旅游": "112", - "国际": "113", - "股票": "114", - "农业": "115", - "电竞": "116"} - - with open(file_path, 'r', encoding='utf8') as f: - lines = f.readlines() - result=[] - for line in tqdm(lines): - data = json.loads(line) - result.append({'id':data['id'],'label':id2label[data['choice'][data['label']]]}) - return result - - -if __name__=="__main__": - parser = argparse.ArgumentParser(description="train") - parser.add_argument("--data_path", type=str,default="") - parser.add_argument("--save_path", type=str,default="") - - args = parser.parse_args() - save_data(submit(args.data_path), args.save_path) - - - \ No newline at end of file diff --git 
a/spaces/skf15963/summary/fengshen/models/deltalm/tokenizer_deltalm.py b/spaces/skf15963/summary/fengshen/models/deltalm/tokenizer_deltalm.py deleted file mode 100644 index dcc81acffeb15bea28c0dd5bb10287fc897cd55d..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/models/deltalm/tokenizer_deltalm.py +++ /dev/null @@ -1,323 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -import os -import re -import warnings -from shutil import copyfile -from typing import Any, Dict, List, Optional, Tuple - -import sentencepiece as spm - -from transformers.tokenization_utils import PreTrainedTokenizer -from transformers.utils import logging - - -SPIECE_UNDERLINE = "▁" - -VOCAB_FILES_NAMES = {"vocab_file": "spm.model"} - -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": {"IDEA-CCNL/deltalm": "https://huggingface.co/IDEA-CCNL/Randeng-Deltalm-362M-En-Zn/resolve/main/spm.model"} -} - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "IDEA-CCNL/deltalm": 512, -} - - -logger = logging.get_logger(__name__) - - -class DeltalmTokenizer(PreTrainedTokenizer): - """ - Construct a T5 tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece). - This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to - this superclass for more information regarding those methods. - Args: - vocab_file (`str`): - [SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that - contains the vocabulary necessary to instantiate a tokenizer. - eos_token (`str`, *optional*, defaults to `""`): - The end of sequence token. - - When building a sequence using special tokens, this is not the token that is used for the end of sequence. - The token used is the `sep_token`. - - unk_token (`str`, *optional*, defaults to `""`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. - pad_token (`str`, *optional*, defaults to `""`): - The token used for padding, for example when batching sequences of different lengths. - extra_ids (`int`, *optional*, defaults to 100): - Add a number of extra ids added to the end of the vocabulary for use as sentinels. These tokens are - accessible as "" where "{%d}" is a number between 0 and extra_ids-1. Extra tokens are - indexed from the end of the vocabulary up to beginning ("" is the last token in the vocabulary - like in T5 preprocessing see - [here](https://github.com/google-research/text-to-text-transfer-transformer/blob/9fd7b14a769417be33bc6c850f9598764913c833/t5/data/preprocessors.py#L2117)). - additional_special_tokens (`List[str]`, *optional*): - Additional special tokens used by the tokenizer. - sp_model_kwargs (`dict`, *optional*): - Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for - SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, - to set: - - `enable_sampling`: Enable subword regularization. - - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - - `nbest_size = {0,1}`: No sampling is performed. - - `nbest_size > 1`: samples from the nbest_size results. - - `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) - using forward-filtering-and-backward-sampling algorithm. - - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for - BPE-dropout. 
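- 
-    Example (an illustrative sketch, not from the original file; assumes a
-    local SentencePiece model file named "spm.model"):
-        tokenizer = DeltalmTokenizer(vocab_file="spm.model")
-        ids = tokenizer("Hello world")["input_ids"]  # ends with the eos id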
- Attributes: - sp_model (`SentencePieceProcessor`): - The *SentencePiece* processor that is used for every conversion (string, tokens and IDs). - """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - model_input_names = ["input_ids", "attention_mask"] - - def __init__( - self, - vocab_file, - bos_token="", - eos_token="", - unk_token="", - pad_token="", - extra_ids=0, - additional_special_tokens=None, - sp_model_kwargs: Optional[Dict[str, Any]] = None, - **kwargs - ) -> None: - # Add extra_ids to the special token list - if extra_ids > 0 and additional_special_tokens is None: - additional_special_tokens = [f"" for i in range(extra_ids)] - elif extra_ids > 0 and additional_special_tokens is not None: - # Check that we have the right number of extra_id special tokens - extra_tokens = len(set(filter(lambda x: bool("extra_id" in str(x)), additional_special_tokens))) - if extra_tokens != extra_ids: - raise ValueError( - f"Both extra_ids ({extra_ids}) and additional_special_tokens ({additional_special_tokens}) are" - " provided to T5Tokenizer. In this case the additional_special_tokens must include the extra_ids" - " tokens" - ) - - self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs - super().__init__( - bos_token=bos_token, - eos_token=eos_token, - unk_token=unk_token, - pad_token=pad_token, - additional_special_tokens=additional_special_tokens, - extra_ids=extra_ids, - sp_model_kwargs=self.sp_model_kwargs, - **kwargs, - ) - - self.vocab_file = vocab_file - self.offset = 1 - self._extra_ids = extra_ids - - self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) - self.sp_model.Load(vocab_file) - - self.encoder: Dict[int, str] = { - 0: self.bos_token, - 1: self.pad_token, - 2: self.eos_token, - 3: self.unk_token, - } - - self.decoder: Dict[str, int] = {v: k for k, v in self.encoder.items()} - - @staticmethod - def _eventually_correct_t5_max_length(pretrained_model_name_or_path, max_model_length, init_max_model_length): - if pretrained_model_name_or_path in DeltalmTokenizer.max_model_input_sizes: - deprecated_max_model_length = DeltalmTokenizer.max_model_input_sizes[pretrained_model_name_or_path] - if init_max_model_length is not None and init_max_model_length != max_model_length: - return init_max_model_length - elif init_max_model_length is None: - warnings.warn( - "This tokenizer was incorrectly instantiated with a model max length of" - f" {deprecated_max_model_length} which will be corrected in Transformers v5.\nFor now, this" - " behavior is kept to avoid breaking backwards compatibility when padding/encoding with" - " `truncation is True`.\n- Be aware that you SHOULD NOT rely on" - f" {pretrained_model_name_or_path} automatically truncating your input to" - f" {deprecated_max_model_length} when padding/encoding.\n- If you want to encode/pad to sequences" - f" longer than {deprecated_max_model_length} you can either instantiate this tokenizer with" - " `model_max_length` or pass `max_length` when encoding/padding.\n- To avoid this warning, please" - " instantiate this tokenizer with `model_max_length` set to your preferred value.", - FutureWarning, - ) - - return max_model_length - - @property - def vocab_size(self): - return self.sp_model.get_piece_size() # + self._extra_ids - - def get_vocab(self): - vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)} - vocab.update(self.added_tokens_encoder) - return vocab - - def 
get_special_tokens_mask( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False - ) -> List[int]: - """ - Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding - special tokens using the tokenizer `prepare_for_model` method. - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - already_has_special_tokens (`bool`, *optional*, defaults to `False`): - Whether or not the token list is already formatted with special tokens for the model. - Returns: - `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. - """ - if already_has_special_tokens: - return super().get_special_tokens_mask( - token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True - ) - - # normal case: some special tokens - if token_ids_1 is None: - return ([0] * len(token_ids_0)) + [1] - return ([0] * len(token_ids_0)) + [1] + ([0] * len(token_ids_1)) + [1] - - def _add_eos_if_not_present(self, token_ids: List[int]) -> List[int]: - """Do not add eos again if user already added it.""" - if len(token_ids) > 0 and token_ids[-1] == self.eos_token_id: - warnings.warn( - f"This sequence already has {self.eos_token}. In future versions this behavior may lead to duplicated" - " eos tokens being added." - ) - return token_ids - else: - return token_ids + [self.eos_token_id] - - def create_token_type_ids_from_sequences( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make - use of token type ids, therefore a list of zeros is returned. - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - Returns: - `List[int]`: List of zeros. - """ - eos = [self.eos_token_id] - - if token_ids_1 is None: - return len(token_ids_0 + eos) * [0] - return len(token_ids_0 + eos + token_ids_1 + eos) * [0] - - def build_inputs_with_special_tokens( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and - adding special tokens. A sequence has the following format: - - single sequence: `X ` - - pair of sequences: `A B ` - Args: - token_ids_0 (`List[int]`): - List of IDs to which the special tokens will be added. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - Returns: - `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. 
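- 
-        Example (sketch): with `token_ids_1=None` the result is simply
-        `token_ids_0 + [self.eos_token_id]`, the eos id being appended
-        unless the sequence already ends with it.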
- """ - token_ids_0 = self._add_eos_if_not_present(token_ids_0) - if token_ids_1 is None: - return token_ids_0 - else: - token_ids_1 = self._add_eos_if_not_present(token_ids_1) - return token_ids_0 + token_ids_1 - - def __getstate__(self): - state = self.__dict__.copy() - state["sp_model"] = None - return state - - def __setstate__(self, d): - self.__dict__ = d - - # for backward compatibility - if not hasattr(self, "sp_model_kwargs"): - self.sp_model_kwargs = {} - - self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) - self.sp_model.Load(self.vocab_file) - - def _tokenize(self, text: str) -> List[str]: - """Take as input a string and return a list of strings (tokens) for words/sub-words""" - return self.sp_model.encode(text, out_type=str) - - def _convert_token_to_id(self, token): - """Converts a token (str) in an id using the vocab.""" - if token.startswith("", token) - num = int(match.group(1)) - return self.vocab_size - num - 1 - elif token in self.decoder: - return self.decoder[token] - - sp_id = self.sp_model.piece_to_id(token) - return sp_id + self.offset - - def _convert_id_to_token(self, index): - """Converts an index (integer) in a token (str) using the vocab.""" - # if index < self.sp_model.get_piece_size(): - # token = self.sp_model.IdToPiece(index) - # else: - # token = f"" - # return token - if index in self.encoder: - return self.encoder[index] - elif index in self.added_tokens_encoder: - return self.added_tokens_encoder[index] - elif index < self.sp_model.get_piece_size() + 4: - token = self.sp_model.IdToPiece(index-self.offset) - else: - token = f"" - return token - - def convert_tokens_to_string(self, tokens): - """Converts a sequence of tokens (string) in a single string.""" - current_sub_tokens = [] - out_string = "" - for token in tokens: - # make sure that special tokens are not decoded using sentencepiece model - if token in self.all_special_tokens: - out_string += self.sp_model.decode_pieces(current_sub_tokens) + token + " " - current_sub_tokens = [] - else: - current_sub_tokens.append(token) - out_string += self.sp_model.decode_pieces(current_sub_tokens) - return out_string.strip() - - def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: - if not os.path.isdir(save_directory): - logger.error(f"Vocabulary path ({save_directory}) should be a directory") - return - out_vocab_file = os.path.join( - save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] - ) - - if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file): - copyfile(self.vocab_file, out_vocab_file) - elif not os.path.isfile(self.vocab_file): - with open(out_vocab_file, "wb") as fi: - content_spiece_model = self.sp_model.serialized_model_proto() - fi.write(content_spiece_model) - - return (out_vocab_file,) diff --git a/spaces/skimai/DragGAN_Streamlit/stylegan2/torch_utils/__init__.py b/spaces/skimai/DragGAN_Streamlit/stylegan2/torch_utils/__init__.py deleted file mode 100644 index ece0ea08fe2e939cc260a1dafc0ab5b391b773d9..0000000000000000000000000000000000000000 --- a/spaces/skimai/DragGAN_Streamlit/stylegan2/torch_utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. 
Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/sparanoid/milky-green-sovits-4/vdecoder/hifigan/nvSTFT.py b/spaces/sparanoid/milky-green-sovits-4/vdecoder/hifigan/nvSTFT.py deleted file mode 100644 index 88597d62a505715091f9ba62d38bf0a85a31b95a..0000000000000000000000000000000000000000 --- a/spaces/sparanoid/milky-green-sovits-4/vdecoder/hifigan/nvSTFT.py +++ /dev/null @@ -1,111 +0,0 @@ -import math -import os -os.environ["LRU_CACHE_CAPACITY"] = "3" -import random -import torch -import torch.utils.data -import numpy as np -import librosa -from librosa.util import normalize -from librosa.filters import mel as librosa_mel_fn -from scipy.io.wavfile import read -import soundfile as sf - -def load_wav_to_torch(full_path, target_sr=None, return_empty_on_exception=False): - sampling_rate = None - try: - data, sampling_rate = sf.read(full_path, always_2d=True)# than soundfile. - except Exception as ex: - print(f"'{full_path}' failed to load.\nException:") - print(ex) - if return_empty_on_exception: - return [], sampling_rate or target_sr or 32000 - else: - raise Exception(ex) - - if len(data.shape) > 1: - data = data[:, 0] - assert len(data) > 2# check duration of audio file is > 2 samples (because otherwise the slice operation was on the wrong dimension) - - if np.issubdtype(data.dtype, np.integer): # if audio data is type int - max_mag = -np.iinfo(data.dtype).min # maximum magnitude = min possible value of intXX - else: # if audio data is type fp32 - max_mag = max(np.amax(data), -np.amin(data)) - max_mag = (2**31)+1 if max_mag > (2**15) else ((2**15)+1 if max_mag > 1.01 else 1.0) # data should be either 16-bit INT, 32-bit INT or [-1 to 1] float32 - - data = torch.FloatTensor(data.astype(np.float32))/max_mag - - if (torch.isinf(data) | torch.isnan(data)).any() and return_empty_on_exception:# resample will crash with inf/NaN inputs. 
return_empty_on_exception will return empty arr instead of except - return [], sampling_rate or target_sr or 32000 - if target_sr is not None and sampling_rate != target_sr: - data = torch.from_numpy(librosa.core.resample(data.numpy(), orig_sr=sampling_rate, target_sr=target_sr)) - sampling_rate = target_sr - - return data, sampling_rate - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - return np.log(np.clip(x, a_min=clip_val, a_max=None) * C) - -def dynamic_range_decompression(x, C=1): - return np.exp(x) / C - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - -def dynamic_range_decompression_torch(x, C=1): - return torch.exp(x) / C - -class STFT(): - def __init__(self, sr=22050, n_mels=80, n_fft=1024, win_size=1024, hop_length=256, fmin=20, fmax=11025, clip_val=1e-5): - self.target_sr = sr - - self.n_mels = n_mels - self.n_fft = n_fft - self.win_size = win_size - self.hop_length = hop_length - self.fmin = fmin - self.fmax = fmax - self.clip_val = clip_val - self.mel_basis = {} - self.hann_window = {} - - def get_mel(self, y, center=False): - sampling_rate = self.target_sr - n_mels = self.n_mels - n_fft = self.n_fft - win_size = self.win_size - hop_length = self.hop_length - fmin = self.fmin - fmax = self.fmax - clip_val = self.clip_val - - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - if fmax not in self.mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax) - self.mel_basis[str(fmax)+'_'+str(y.device)] = torch.from_numpy(mel).float().to(y.device) - self.hann_window[str(y.device)] = torch.hann_window(self.win_size).to(y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_length)/2), int((n_fft-hop_length)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_length, win_length=win_size, window=self.hann_window[str(y.device)], - center=center, pad_mode='reflect', normalized=False, onesided=True) - # print(111,spec) - spec = torch.sqrt(spec.pow(2).sum(-1)+(1e-9)) - # print(222,spec) - spec = torch.matmul(self.mel_basis[str(fmax)+'_'+str(y.device)], spec) - # print(333,spec) - spec = dynamic_range_compression_torch(spec, clip_val=clip_val) - # print(444,spec) - return spec - - def __call__(self, audiopath): - audio, sr = load_wav_to_torch(audiopath, target_sr=self.target_sr) - spect = self.get_mel(audio.unsqueeze(0)).squeeze(0) - return spect - -stft = STFT() diff --git a/spaces/srush/minichain/gatsby.py b/spaces/srush/minichain/gatsby.py deleted file mode 100644 index 517a48ce57f727db3250256cab04af9372196fd9..0000000000000000000000000000000000000000 --- a/spaces/srush/minichain/gatsby.py +++ /dev/null @@ -1,54 +0,0 @@ -# + tags=["hide_inp"] -desc = """ -### Book QA - -Chain that does question answering with Hugging Face embeddings. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/srush/MiniChain/blob/master/examples/gatsby.ipynb) - -(Adapted from the [LlamaIndex example](https://github.com/jerryjliu/gpt_index/blob/main/examples/gatsby/TestGatsby.ipynb).) 
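-
-The chain embeds the question with a sentence-transformers model, retrieves the nearest passage from a FAISS index, and asks OpenAI to answer using that passage.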
-""" -# - - -# $ - -import datasets -import numpy as np -from minichain import prompt, show, HuggingFaceEmbed, OpenAI, transform - -# Load data with embeddings (computed beforehand) - -gatsby = datasets.load_from_disk("gatsby") -gatsby.add_faiss_index("embeddings") - -# Fast KNN retrieval prompt - -@prompt(HuggingFaceEmbed("sentence-transformers/all-mpnet-base-v2")) -def embed(model, inp): - return model(inp) - -@transform() -def get_neighbors(embedding, k=1): - res = gatsby.get_nearest_examples("embeddings", np.array(embedding), k) - return res.examples["passages"] - -@prompt(OpenAI(), template_file="gatsby.pmpt.tpl") -def ask(model, query, neighbors): - return model(dict(question=query, docs=neighbors)) - -def gatsby_q(query): - n = get_neighbors(embed(query)) - return ask(query, n) - - -# $ - - -gradio = show(gatsby_q, - subprompts=[ask], - examples=["What did Gatsby do before he met Daisy?", - "What did the narrator do after getting back to Chicago?"], - keys={"HF_KEY"}, - description=desc, - code=open("gatsby.py", "r").read().split("$")[1].strip().strip("#").strip() - ) -if __name__ == "__main__": - gradio.queue().launch() diff --git a/spaces/starnek/mix-design-concrete/README.md b/spaces/starnek/mix-design-concrete/README.md deleted file mode 100644 index de16af1443a4647caa487d42ee64c719aa86043a..0000000000000000000000000000000000000000 --- a/spaces/starnek/mix-design-concrete/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mix Design Concrete -emoji: 🏆 -colorFrom: green -colorTo: red -sdk: streamlit -sdk_version: 1.25.0 -app_file: 1_MIX_DESIGN.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/stomexserde/gpt4-ui/Examples/Ek Thi Rani Aisi Bhi Man 3 Movie Free Download In Hindi Hd 720p.md b/spaces/stomexserde/gpt4-ui/Examples/Ek Thi Rani Aisi Bhi Man 3 Movie Free Download In Hindi Hd 720p.md deleted file mode 100644 index 3dc4ffcf2bc29fab86a5985bb28493b3a9a2dc00..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Ek Thi Rani Aisi Bhi Man 3 Movie Free Download In Hindi Hd 720p.md +++ /dev/null @@ -1,15 +0,0 @@ -
    -

    Ek Thi Rani Aisi Bhi: A Biopic of a Royal and Political Icon

    -

Ek Thi Rani Aisi Bhi (Once there was a queen) is a 2017 Hindi-language biopic directed by Gul Bahar Singh and starring Hema Malini, who portrays Vijaya Raje Scindia, the Rajmata of Gwalior and a prominent leader of the Jan Sangh and the BJP. The film is based on Mridula Sinha's biographical novel Rajpath se Lokpath par (From the royal path to the public path) and traces her journey from the palace to the public.

    -

The film depicts the life and struggles of Vijaya Raje Scindia, who was born into a royal family of Baroda and married Maharaja Jivaji Rao Scindia of Gwalior at the age of 16. She faced many challenges after the princely states were merged with the Indian Union following Independence, a change that left her husband disheartened and depressed. He died prematurely in 1961, leaving her alone to raise their five children.

    -

    Ek Thi Rani Aisi Bhi man 3 movie free download in hindi hd 720p


    Download > https://urlgoal.com/2uI8uG



    -

Vijaya Raje Scindia entered politics with the encouragement of Sardar Patel and became one of the founding members of the Jan Sangh, the predecessor of the BJP. She fought for the rights and welfare of her people and became a popular and respected figure in Indian politics. She also came into ideological conflict with her son Madhavrao Scindia, who joined the Congress and opposed her views. She never compromised on her faith and values and became an inspiration for millions.

    -

The film also stars Vinod Khanna as Maharaja Jivaji Rao Scindia, Sachin Khedekar as the ADC, Rajesh Shringarpore as Madhavrao Scindia, Anjan Srivastav as the jailor and Ram Gopal Bajaj as Sardar Patel. The film was produced by the Rajmata Vijaya Raje Scindia Smriti Nyas and released in theatres on 21 April 2017. It was declared tax-free in Uttar Pradesh and Madhya Pradesh before its release, premiered on television on Zee Classic on 6 May 2017, and was also screened at the 49th International Film Festival of India in Goa.

    -

    Ek Thi Rani Aisi Bhi is a tribute to a remarkable woman who rose from being a queen to a leader of the masses. It is a story of courage, conviction and compassion that will inspire generations to come.

    - -

    The film has received mixed reviews from critics and audiences. Some have praised the film for its portrayal of Vijaya Raje Scindia's life and achievements, while others have criticized it for being biased and dull. Bollywood Hungama gave the film 2 stars out of 5 and wrote, "Ek Thi Rani Aisi Bhi is a film that could have been a great tribute to a great personality but falls short due to its poor execution and lack of entertainment value." [^1^]

    -

    IMDb users have rated the film 7.2 out of 10 based on 10 reviews. Some have appreciated the film for its historical accuracy and Hema Malini's performance, while others have found it boring and slow-paced. One user commented, "The movie is very informative and gives a glimpse of the life of Rajmata Vijayaraje Scindia. Hema Malini has done a great job in portraying her character. The movie is not very entertaining but it is worth watching for those who are interested in history and politics." [^2^]

    -


The film has also faced controversies and legal issues. The Congress party in Madhya Pradesh objected to the film's release ahead of the 2018 state assembly elections, alleging that it was a propaganda tool for the BJP. The party also filed a petition in the High Court seeking a stay on the release, but the court dismissed it. The film's director, Gul Bahar Singh, accused the producer, the Rajmata Vijaya Raje Scindia Smriti Nyas, of not paying his dues and of interfering with his creative freedom. He claimed that he was unhappy with the final cut and that he was not invited to any of the promotional events. [^3^]

    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Haseena Maan Jaayegi 1 Movie Download !LINK!.md b/spaces/stomexserde/gpt4-ui/Examples/Haseena Maan Jaayegi 1 Movie Download !LINK!.md deleted file mode 100644 index 2c6636b60fa57e74daa4ce80392b66c943d3daa8..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Haseena Maan Jaayegi 1 Movie Download !LINK!.md +++ /dev/null @@ -1,25 +0,0 @@ -
    -

    Haseena Maan Jaayegi 1 Movie Download: A Classic Comedy of Two Brothers and Their Dream Girls

    - -

    If you are looking for a movie that will make you laugh out loud, then you should check out Haseena Maan Jaayegi 1, a 1999 Bollywood comedy film directed by David Dhawan and starring Govinda, Sanjay Dutt, Karisma Kapoor and Pooja Batra. The movie is about two brothers, Sonu and Monu, who are separated from their wealthy father and each other at a young age. They grow up to be con artists who try to win the hearts of their dream girls, Ritu and Pooja, who are the daughters of two rich businessmen. However, their plans are complicated by the presence of a gangster, a police commissioner, and their father's old friend.

    -

    Haseena Maan Jaayegi 1 Movie Download


    DOWNLOAD →→→ https://urlgoal.com/2uIbUF



    - -

    Haseena Maan Jaayegi 1 is a hilarious movie that features many funny scenes and dialogues, as well as some catchy songs. The movie showcases the comic timing and chemistry of Govinda and Sanjay Dutt, who play off each other very well. The movie also has some memorable supporting characters, such as Bhootnath (Paresh Rawal), Kunj Biharilal (Satish Kaushik), Amirchand (Kader Khan) and Gulzarilal (Anupam Kher). The movie is a remake of the 1966 Telugu film Preminchi Choodu.

    - -

    You can watch Haseena Maan Jaayegi 1 online or download it from various streaming services or websites. However, you should be careful about the quality and legality of the sources you use. Some of them may contain viruses or malware that can harm your device or data. You should also respect the copyrights of the makers and distributors of the movie and avoid piracy.

    - -

    Haseena Maan Jaayegi 1 is a movie that will entertain you with its humor and romance. It is a classic comedy that you can enjoy with your family and friends.

    - -

    If you are wondering what makes Haseena Maan Jaayegi 1 a movie worth watching, here are some of the reasons why you should give it a try:

    -

    - -
      -
• The movie is a comedy of errors that keeps you hooked with its twists and turns. Much of the humour is situational, arising from the misunderstandings and confusions of the characters, and the witty dialogues will make you laugh.
    • -
    • The movie is a showcase of the talent and versatility of Govinda and Sanjay Dutt, who play dual roles in the movie. Govinda plays Monu, a street-smart con artist, and Chacha Ji, a fake uncle who helps Sonu and Monu. Sanjay Dutt plays Sonu, a smooth-talker who falls in love with Ritu, and Bhai, a gangster who wants to marry Pooja. The actors do a great job of portraying their different characters and switching between them.
    • -
    • The movie has some romantic moments between the lead pairs, who have good chemistry on screen. Karisma Kapoor plays Ritu, a smart and independent girl who works as a journalist. Pooja Batra plays Pooja, a sweet and naive girl who is the daughter of a rich businessman. The movie shows how Sonu and Monu woo their respective love interests and overcome the obstacles in their way.
    • -
• The movie has some catchy songs that add to its fun and charm, composed by Anu Malik and sung by Sonu Nigam, Alka Yagnik, Kavita Krishnamurthy, Udit Narayan and others. Some of the popular songs are Haseena Maan Jaayegi, Chitti Pahad Chade, Dil Tera Deewana Hai Sanam and What Is Mobile Number.
    • -
    - -

    Haseena Maan Jaayegi 1 is a movie that will make you laugh and smile with its comedy and romance. It is a movie that you can watch anytime and enjoy with your loved ones.

    -
    -
    \ No newline at end of file diff --git a/spaces/stratussox/yolov5_inference/models/__init__.py b/spaces/stratussox/yolov5_inference/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/suancaixianyu/Real-CUGAN/README.md b/spaces/suancaixianyu/Real-CUGAN/README.md deleted file mode 100644 index d673114edadba73e80f33a3c71bc0dbee8758cc8..0000000000000000000000000000000000000000 --- a/spaces/suancaixianyu/Real-CUGAN/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Real CUGAN -emoji: 🐢 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: gpl-3.0 -duplicated_from: DianXian/Real-CUGAN ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sub314xxl/MetaGPT/metagpt/static/assets/home-a1d5c9a6.js b/spaces/sub314xxl/MetaGPT/metagpt/static/assets/home-a1d5c9a6.js deleted file mode 100644 index 850fa15fdf043ed58411c15e63fbeb6d7ca0895a..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/static/assets/home-a1d5c9a6.js +++ /dev/null @@ -1,335 +0,0 @@ -var Fw=Object.defineProperty;var Bw=(t,e,n)=>e in t?Fw(t,e,{enumerable:!0,configurable:!0,writable:!0,value:n}):t[e]=n;var Mi=(t,e,n)=>(Bw(t,typeof e!="symbol"?e+"":e,n),n);import{d as be,x as yt,c as le,f as V,h as ae,l as B,n as It,k as Bt,r as ee,o as kn,w as Zt,v as ue,$ as q,j as oi,p as ot,q as dt,a2 as wr,a3 as Zi,a4 as Ji,t as vt,F as st,U as Ws,R as $i,H as bm,G as Kn,i as Ft,B as ni,g as $m,K as Pn,a5 as j,J as Gw,A as Ks,a6 as Yw,I as OC,a7 as Hm,Q as si,T as Hi,z as zm,L as Qs,b as Ya,a8 as Vm,P as yn,s as Ge,u as Qe,M as ji,a9 as qw,y as AC,D as Ls,aa as $w,_ as Hw,a as zw,ab as Vw,N as Ww,ac as Kw}from"./vue-e0bc46a9.js";import{_ as Un,j as Fn,k as Bn,m as Qw,o as Ht,e as Rt,M as yC,n as IC,p as Ka,q as Xw,r as Zw,O as Zc,s as Jw,D as DC,u as jw,v as eM,A as tM,T as nM,B as hm,g as xC,w as rM,L as Gf,x as iM,y as aM,z as oM,E as sM,G as lM,S as cM,H as uM}from"./vendor-4cd7d240.js";import{C as Wm,U as dM,t as Km}from"./index-054e9309.js";import{c as La,a as _M,g as Qm}from"./__commonjsHelpers__-042e6b4d.js";const pM="/static/assets/bigTexCard-be1474a9.svg",mM="/static/assets/example-1902a4ef.png",gM="/static/assets/example-c1208c62.mp4",EM=["xlink:href"],ea=be({__name:"Icon",props:{iconId:{},fill:{},size:{},disabled:{type:Boolean}},setup(t){const e=t,n=yt(e,"iconId"),i=yt(e,"fill"),o=le(()=>n.value.split("-").filter(l=>l).map(l=>l[0].toUpperCase()+l.slice(1)).join("")),s=le(()=>`width: ${e.size}px;height:${e.size}px;fill:${i.value}`);return(l,c)=>(V(),ae("svg",{class:It([o.value,n.value,"IconCommon",l.disabled?"iconDisabled":""]),style:Bt(s.value),"aria-hidden":"true"},[B("use",{"xlink:href":`#${n.value}`},null,8,EM)],6))}});const fM="/static/assets/wechat-4dad209c.png",SM=be({name:"IconDownload",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:t=>["butt","round","square"].includes(t)},strokeLinejoin:{type:String,default:"miter",validator:t=>["arcs","bevel","miter","miter-clip","round"].includes(t)},rotate:Number,spin:Boolean},emits:{click:t=>!0},setup(t,{emit:e}){const n=Fn("icon"),i=le(()=>[n,`${n}-download`,{[`${n}-spin`]:t.spin}]),o=le(()=>{const l={};return 
t.size&&(l.fontSize=Bn(t.size)?`${t.size}px`:t.size),t.rotate&&(l.transform=`rotate(${t.rotate}deg)`),l});return{cls:i,innerStyle:o,onClick:l=>{e("click",l)}}}}),bM=["stroke-width","stroke-linecap","stroke-linejoin"],hM=B("path",{d:"m33.072 22.071-9.07 9.071-9.072-9.07M24 5v26m16 4v6H8v-6"},null,-1),TM=[hM];function vM(t,e,n,i,o,s){return V(),ae("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:It(t.cls),style:Bt(t.innerStyle),"stroke-width":t.strokeWidth,"stroke-linecap":t.strokeLinecap,"stroke-linejoin":t.strokeLinejoin,onClick:e[0]||(e[0]=(...l)=>t.onClick&&t.onClick(...l))},TM,14,bM)}var Jc=Un(SM,[["render",vM]]);const CM=Object.assign(Jc,{install:(t,e)=>{var n;const i=(n=e==null?void 0:e.iconPrefix)!=null?n:"";t.component(i+Jc.name,Jc)}}),RM=be({name:"IconScan",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:t=>["butt","round","square"].includes(t)},strokeLinejoin:{type:String,default:"miter",validator:t=>["arcs","bevel","miter","miter-clip","round"].includes(t)},rotate:Number,spin:Boolean},emits:{click:t=>!0},setup(t,{emit:e}){const n=Fn("icon"),i=le(()=>[n,`${n}-scan`,{[`${n}-spin`]:t.spin}]),o=le(()=>{const l={};return t.size&&(l.fontSize=Bn(t.size)?`${t.size}px`:t.size),t.rotate&&(l.transform=`rotate(${t.rotate}deg)`),l});return{cls:i,innerStyle:o,onClick:l=>{e("click",l)}}}}),NM=["stroke-width","stroke-linecap","stroke-linejoin"],OM=B("path",{d:"M7 17V7h10m24 10V7H31m10 24v10H31M7 31v10h10M5 24h38"},null,-1),AM=[OM];function yM(t,e,n,i,o,s){return V(),ae("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:It(t.cls),style:Bt(t.innerStyle),"stroke-width":t.strokeWidth,"stroke-linecap":t.strokeLinecap,"stroke-linejoin":t.strokeLinejoin,onClick:e[0]||(e[0]=(...l)=>t.onClick&&t.onClick(...l))},AM,14,NM)}var jc=Un(RM,[["render",yM]]);const IM=Object.assign(jc,{install:(t,e)=>{var n;const i=(n=e==null?void 0:e.iconPrefix)!=null?n:"";t.component(i+jc.name,jc)}}),DM=be({name:"IconSync",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:t=>["butt","round","square"].includes(t)},strokeLinejoin:{type:String,default:"miter",validator:t=>["arcs","bevel","miter","miter-clip","round"].includes(t)},rotate:Number,spin:Boolean},emits:{click:t=>!0},setup(t,{emit:e}){const n=Fn("icon"),i=le(()=>[n,`${n}-sync`,{[`${n}-spin`]:t.spin}]),o=le(()=>{const l={};return t.size&&(l.fontSize=Bn(t.size)?`${t.size}px`:t.size),t.rotate&&(l.transform=`rotate(${t.rotate}deg)`),l});return{cls:i,innerStyle:o,onClick:l=>{e("click",l)}}}}),xM=["stroke-width","stroke-linecap","stroke-linejoin"],wM=B("path",{d:"M11.98 11.703c-6.64 6.64-6.64 17.403 0 24.042a16.922 16.922 0 0 0 8.942 4.7M34.603 37.156l1.414-1.415c6.64-6.639 6.64-17.402 0-24.041A16.922 16.922 0 0 0 27.075 7M14.81 11.982l-1.414-1.414-1.414-1.414h2.829v2.828ZM33.192 36.02l1.414 1.414 1.414 1.415h-2.828V36.02Z"},null,-1),MM=[wM];function LM(t,e,n,i,o,s){return V(),ae("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:It(t.cls),style:Bt(t.innerStyle),"stroke-width":t.strokeWidth,"stroke-linecap":t.strokeLinecap,"stroke-linejoin":t.strokeLinejoin,onClick:e[0]||(e[0]=(...l)=>t.onClick&&t.onClick(...l))},MM,14,xM)}var eu=Un(DM,[["render",LM]]);const gs=Object.assign(eu,{install:(t,e)=>{var n;const i=(n=e==null?void 
0:e.iconPrefix)!=null?n:"";t.component(i+eu.name,eu)}}),PM=be({name:"IconVoice",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:t=>["butt","round","square"].includes(t)},strokeLinejoin:{type:String,default:"miter",validator:t=>["arcs","bevel","miter","miter-clip","round"].includes(t)},rotate:Number,spin:Boolean},emits:{click:t=>!0},setup(t,{emit:e}){const n=Fn("icon"),i=le(()=>[n,`${n}-voice`,{[`${n}-spin`]:t.spin}]),o=le(()=>{const l={};return t.size&&(l.fontSize=Bn(t.size)?`${t.size}px`:t.size),t.rotate&&(l.transform=`rotate(${t.rotate}deg)`),l});return{cls:i,innerStyle:o,onClick:l=>{e("click",l)}}}}),kM=["stroke-width","stroke-linecap","stroke-linejoin"],UM=B("path",{d:"M41 21v1c0 8.837-7.163 16-16 16h-2c-8.837 0-16-7.163-16-16v-1m17 17v6m0-14a9 9 0 0 1-9-9v-6a9 9 0 1 1 18 0v6a9 9 0 0 1-9 9Z"},null,-1),FM=[UM];function BM(t,e,n,i,o,s){return V(),ae("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:It(t.cls),style:Bt(t.innerStyle),"stroke-width":t.strokeWidth,"stroke-linecap":t.strokeLinecap,"stroke-linejoin":t.strokeLinejoin,onClick:e[0]||(e[0]=(...l)=>t.onClick&&t.onClick(...l))},FM,14,kM)}var tu=Un(PM,[["render",BM]]);const GM=Object.assign(tu,{install:(t,e)=>{var n;const i=(n=e==null?void 0:e.iconPrefix)!=null?n:"";t.component(i+tu.name,tu)}}),YM=be({name:"IconPauseCircleFill",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:t=>["butt","round","square"].includes(t)},strokeLinejoin:{type:String,default:"miter",validator:t=>["arcs","bevel","miter","miter-clip","round"].includes(t)},rotate:Number,spin:Boolean},emits:{click:t=>!0},setup(t,{emit:e}){const n=Fn("icon"),i=le(()=>[n,`${n}-pause-circle-fill`,{[`${n}-spin`]:t.spin}]),o=le(()=>{const l={};return t.size&&(l.fontSize=Bn(t.size)?`${t.size}px`:t.size),t.rotate&&(l.transform=`rotate(${t.rotate}deg)`),l});return{cls:i,innerStyle:o,onClick:l=>{e("click",l)}}}}),qM=["stroke-width","stroke-linecap","stroke-linejoin"],$M=B("path",{"fill-rule":"evenodd","clip-rule":"evenodd",d:"M24 44c11.046 0 20-8.954 20-20S35.046 4 24 4 4 12.954 4 24s8.954 20 20 20Zm-6-27a1 1 0 0 0-1 1v12a1 1 0 0 0 1 1h3a1 1 0 0 0 1-1V18a1 1 0 0 0-1-1h-3Zm9 0a1 1 0 0 0-1 1v12a1 1 0 0 0 1 1h3a1 1 0 0 0 1-1V18a1 1 0 0 0-1-1h-3Z",fill:"currentColor",stroke:"none"},null,-1),HM=[$M];function zM(t,e,n,i,o,s){return V(),ae("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:It(t.cls),style:Bt(t.innerStyle),"stroke-width":t.strokeWidth,"stroke-linecap":t.strokeLinecap,"stroke-linejoin":t.strokeLinejoin,onClick:e[0]||(e[0]=(...l)=>t.onClick&&t.onClick(...l))},HM,14,qM)}var nu=Un(YM,[["render",zM]]);const VM=Object.assign(nu,{install:(t,e)=>{var n;const i=(n=e==null?void 0:e.iconPrefix)!=null?n:"";t.component(i+nu.name,nu)}}),WM=be({name:"IconPlayCircleFill",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:t=>["butt","round","square"].includes(t)},strokeLinejoin:{type:String,default:"miter",validator:t=>["arcs","bevel","miter","miter-clip","round"].includes(t)},rotate:Number,spin:Boolean},emits:{click:t=>!0},setup(t,{emit:e}){const n=Fn("icon"),i=le(()=>[n,`${n}-play-circle-fill`,{[`${n}-spin`]:t.spin}]),o=le(()=>{const l={};return 
t.size&&(l.fontSize=Bn(t.size)?`${t.size}px`:t.size),t.rotate&&(l.transform=`rotate(${t.rotate}deg)`),l});return{cls:i,innerStyle:o,onClick:l=>{e("click",l)}}}}),KM=["stroke-width","stroke-linecap","stroke-linejoin"],QM=B("path",{"fill-rule":"evenodd","clip-rule":"evenodd",d:"M44 24c0 11.046-8.954 20-20 20S4 35.046 4 24 12.954 4 24 4s20 8.954 20 20Zm-23.662-7.783C19.302 15.605 18 16.36 18 17.575v12.85c0 1.214 1.302 1.97 2.338 1.358l10.89-6.425c1.03-.607 1.03-2.11 0-2.716l-10.89-6.425Z",fill:"currentColor",stroke:"none"},null,-1),XM=[QM];function ZM(t,e,n,i,o,s){return V(),ae("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:It(t.cls),style:Bt(t.innerStyle),"stroke-width":t.strokeWidth,"stroke-linecap":t.strokeLinecap,"stroke-linejoin":t.strokeLinejoin,onClick:e[0]||(e[0]=(...l)=>t.onClick&&t.onClick(...l))},XM,14,KM)}var ru=Un(WM,[["render",ZM]]);const JM=Object.assign(ru,{install:(t,e)=>{var n;const i=(n=e==null?void 0:e.iconPrefix)!=null?n:"";t.component(i+ru.name,ru)}}),jM=be({name:"IconRecordStop",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:t=>["butt","round","square"].includes(t)},strokeLinejoin:{type:String,default:"miter",validator:t=>["arcs","bevel","miter","miter-clip","round"].includes(t)},rotate:Number,spin:Boolean},emits:{click:t=>!0},setup(t,{emit:e}){const n=Fn("icon"),i=le(()=>[n,`${n}-record-stop`,{[`${n}-spin`]:t.spin}]),o=le(()=>{const l={};return t.size&&(l.fontSize=Bn(t.size)?`${t.size}px`:t.size),t.rotate&&(l.transform=`rotate(${t.rotate}deg)`),l});return{cls:i,innerStyle:o,onClick:l=>{e("click",l)}}}}),eL=["stroke-width","stroke-linecap","stroke-linejoin"],tL=B("path",{"clip-rule":"evenodd",d:"M24 6c9.941 0 18 8.059 18 18s-8.059 18-18 18S6 33.941 6 24 14.059 6 24 6Z"},null,-1),nL=B("path",{d:"M19 20a1 1 0 0 1 1-1h8a1 1 0 0 1 1 1v8a1 1 0 0 1-1 1h-8a1 1 0 0 1-1-1v-8Z",fill:"currentColor",stroke:"none"},null,-1),rL=B("path",{d:"M19 20a1 1 0 0 1 1-1h8a1 1 0 0 1 1 1v8a1 1 0 0 1-1 1h-8a1 1 0 0 1-1-1v-8Z"},null,-1),iL=[tL,nL,rL];function aL(t,e,n,i,o,s){return V(),ae("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:It(t.cls),style:Bt(t.innerStyle),"stroke-width":t.strokeWidth,"stroke-linecap":t.strokeLinecap,"stroke-linejoin":t.strokeLinejoin,onClick:e[0]||(e[0]=(...l)=>t.onClick&&t.onClick(...l))},iL,14,eL)}var iu=Un(jM,[["render",aL]]);const oL=Object.assign(iu,{install:(t,e)=>{var n;const i=(n=e==null?void 0:e.iconPrefix)!=null?n:"";t.component(i+iu.name,iu)}}),sL=be({name:"IconBook",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:t=>["butt","round","square"].includes(t)},strokeLinejoin:{type:String,default:"miter",validator:t=>["arcs","bevel","miter","miter-clip","round"].includes(t)},rotate:Number,spin:Boolean},emits:{click:t=>!0},setup(t,{emit:e}){const n=Fn("icon"),i=le(()=>[n,`${n}-book`,{[`${n}-spin`]:t.spin}]),o=le(()=>{const l={};return t.size&&(l.fontSize=Bn(t.size)?`${t.size}px`:t.size),t.rotate&&(l.transform=`rotate(${t.rotate}deg)`),l});return{cls:i,innerStyle:o,onClick:l=>{e("click",l)}}}}),lL=["stroke-width","stroke-linecap","stroke-linejoin"],cL=B("path",{d:"M24 13 7 7v28l17 6 17-6V7l-17 6Zm0 0v27.5M29 18l7-2.5M29 25l7-2.5M29 32l7-2.5M19 18l-7-2.5m7 9.5-7-2.5m7 9.5-7-2.5"},null,-1),uL=[cL];function dL(t,e,n,i,o,s){return V(),ae("svg",{viewBox:"0 0 48 
48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:It(t.cls),style:Bt(t.innerStyle),"stroke-width":t.strokeWidth,"stroke-linecap":t.strokeLinecap,"stroke-linejoin":t.strokeLinejoin,onClick:e[0]||(e[0]=(...l)=>t.onClick&&t.onClick(...l))},uL,14,lL)}var au=Un(sL,[["render",dL]]);const _L=Object.assign(au,{install:(t,e)=>{var n;const i=(n=e==null?void 0:e.iconPrefix)!=null?n:"";t.component(i+au.name,au)}}),pL=be({name:"IconImage",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:t=>["butt","round","square"].includes(t)},strokeLinejoin:{type:String,default:"miter",validator:t=>["arcs","bevel","miter","miter-clip","round"].includes(t)},rotate:Number,spin:Boolean},emits:{click:t=>!0},setup(t,{emit:e}){const n=Fn("icon"),i=le(()=>[n,`${n}-image`,{[`${n}-spin`]:t.spin}]),o=le(()=>{const l={};return t.size&&(l.fontSize=Bn(t.size)?`${t.size}px`:t.size),t.rotate&&(l.transform=`rotate(${t.rotate}deg)`),l});return{cls:i,innerStyle:o,onClick:l=>{e("click",l)}}}}),mL=["stroke-width","stroke-linecap","stroke-linejoin"],gL=B("path",{d:"m24 33 9-9v9h-9Zm0 0-3.5-4.5L17 33h7Zm15 8H9a2 2 0 0 1-2-2V9a2 2 0 0 1 2-2h30a2 2 0 0 1 2 2v30a2 2 0 0 1-2 2ZM15 15h2v2h-2v-2Z"},null,-1),EL=B("path",{d:"M33 33v-9l-9 9h9ZM23.5 33l-3-4-3 4h6ZM15 15h2v2h-2z",fill:"currentColor",stroke:"none"},null,-1),fL=[gL,EL];function SL(t,e,n,i,o,s){return V(),ae("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:It(t.cls),style:Bt(t.innerStyle),"stroke-width":t.strokeWidth,"stroke-linecap":t.strokeLinecap,"stroke-linejoin":t.strokeLinejoin,onClick:e[0]||(e[0]=(...l)=>t.onClick&&t.onClick(...l))},fL,14,mL)}var ou=Un(pL,[["render",SL]]);const bL=Object.assign(ou,{install:(t,e)=>{var n;const i=(n=e==null?void 0:e.iconPrefix)!=null?n:"";t.component(i+ou.name,ou)}}),hL=be({name:"IconNav",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:t=>["butt","round","square"].includes(t)},strokeLinejoin:{type:String,default:"miter",validator:t=>["arcs","bevel","miter","miter-clip","round"].includes(t)},rotate:Number,spin:Boolean},emits:{click:t=>!0},setup(t,{emit:e}){const n=Fn("icon"),i=le(()=>[n,`${n}-nav`,{[`${n}-spin`]:t.spin}]),o=le(()=>{const l={};return t.size&&(l.fontSize=Bn(t.size)?`${t.size}px`:t.size),t.rotate&&(l.transform=`rotate(${t.rotate}deg)`),l});return{cls:i,innerStyle:o,onClick:l=>{e("click",l)}}}}),TL=["stroke-width","stroke-linecap","stroke-linejoin"],vL=B("path",{d:"M6 19h10m0 0h26m-26 0V9m0 10v10m0 0v10m0-10H6m10 0h26M6 9h36v30H6V9Z"},null,-1),CL=[vL];function RL(t,e,n,i,o,s){return V(),ae("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:It(t.cls),style:Bt(t.innerStyle),"stroke-width":t.strokeWidth,"stroke-linecap":t.strokeLinecap,"stroke-linejoin":t.strokeLinejoin,onClick:e[0]||(e[0]=(...l)=>t.onClick&&t.onClick(...l))},CL,14,TL)}var su=Un(hL,[["render",RL]]);const NL=Object.assign(su,{install:(t,e)=>{var n;const i=(n=e==null?void 0:e.iconPrefix)!=null?n:"";t.component(i+su.name,su)}}),OL=be({name:"IconPublic",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:t=>["butt","round","square"].includes(t)},strokeLinejoin:{type:String,default:"miter",validator:t=>["arcs","bevel","miter","miter-clip","round"].includes(t)},rotate:Number,spin:Boolean},emits:{click:t=>!0},setup(t,{emit:e}){const 
n=Fn("icon"),i=le(()=>[n,`${n}-public`,{[`${n}-spin`]:t.spin}]),o=le(()=>{const l={};return t.size&&(l.fontSize=Bn(t.size)?`${t.size}px`:t.size),t.rotate&&(l.transform=`rotate(${t.rotate}deg)`),l});return{cls:i,innerStyle:o,onClick:l=>{e("click",l)}}}}),AL=["stroke-width","stroke-linecap","stroke-linejoin"],yL=B("path",{d:"M15 21.5 6.704 19M15 21.5l4.683 5.152a1 1 0 0 1 .25.814L18 40.976l10.918-16.117a1 1 0 0 0-.298-1.409L21.5 19 15 21.5Zm0 0 6.062-6.995a1 1 0 0 0 .138-1.103L18 7.024M42 24c0 9.941-8.059 18-18 18S6 33.941 6 24 14.059 6 24 6s18 8.059 18 18Z"},null,-1),IL=[yL];function DL(t,e,n,i,o,s){return V(),ae("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:It(t.cls),style:Bt(t.innerStyle),"stroke-width":t.strokeWidth,"stroke-linecap":t.strokeLinecap,"stroke-linejoin":t.strokeLinejoin,onClick:e[0]||(e[0]=(...l)=>t.onClick&&t.onClick(...l))},IL,14,AL)}var lu=Un(OL,[["render",DL]]);const xL=Object.assign(lu,{install:(t,e)=>{var n;const i=(n=e==null?void 0:e.iconPrefix)!=null?n:"";t.component(i+lu.name,lu)}}),wL={class:"header"},ML={class:"content"},LL=be({__name:"modal",props:{visible:{type:Boolean}},emits:["update:visible","close"],setup(t,{emit:e}){const n=t,i=ee(null),o=()=>{e("update:visible",!1),e("close")};return kn(()=>{Zt(()=>n.visible,s=>{var l,c;s?(l=i.value)==null||l.showModal():(c=i.value)==null||c.close()},{immediate:!0})}),(s,l)=>(V(),ae("dialog",{ref_key:"dialogRef",ref:i,class:"customDialog"},[B("div",wL,[ue(q(Qw),{style:{cursor:"pointer"},size:24,onClick:o})]),B("div",ML,[oi(s.$slots,"default",{},void 0,!0)])],512))}});const Dt=(t,e)=>{const n=t.__vccOpts||t;for(const[i,o]of e)n[i]=o;return n},wC=Dt(LL,[["__scopeId","data-v-6fddb6c7"]]),Xs=t=>(Zi("data-v-e442bd8c"),t=t(),Ji(),t),PL={class:"wechatModal"},kL={class:"title"},UL=Xs(()=>B("div",{class:"titleText"},"WeChat",-1)),FL=Xs(()=>B("div",{class:"desc"}," Add the MetaGPT WeChat assistant to get the latest MetaGPT updates. Join the MetaGPT community for more high-quality technical and product discussions. 
",-1)),BL={class:"qrCode"},GL=Xs(()=>B("img",{style:{width:"100%"},src:fM,alt:""},null,-1)),YL={class:"scanText"},qL=Xs(()=>B("span",null,"Scan on WeChat to add.",-1)),$L=be({__name:"wechatModal",props:{visible:{type:Boolean}},emits:["update:visible"],setup(t,{emit:e}){const i=yt(t,"visible"),o=()=>{e("update:visible",!1)};return(s,l)=>(V(),ot(wC,{visible:q(i),"onUpdate:visible":l[0]||(l[0]=c=>wr(i)?i.value=c:null),style:{width:"527px"},onClose:o},{default:dt(()=>[B("div",PL,[B("div",kL,[ue(ea,{size:28,fill:"#28C445","icon-id":"icon-wechat2"}),UL]),FL,B("div",BL,[GL,B("span",YL,[ue(q(IM),{size:16}),qL])])])]),_:1},8,["visible"]))}});const HL=Dt($L,[["__scopeId","data-v-e442bd8c"]]),zL="/static/assets/blacklogo-ead63efd.svg",VL={},WL={style:{width:"32px","vertical-align":"middle"},src:zL};function KL(t,e){return V(),ae("img",WL)}const QL=Dt(VL,[["render",KL]]);let Ps=[];const MC=new WeakMap;function XL(){Ps.forEach(t=>t(...MC.get(t))),Ps=[]}function LC(t,...e){MC.set(t,e),!Ps.includes(t)&&Ps.push(t)===1&&requestAnimationFrame(XL)}function ks(t){return t.composedPath()[0]||null}const Yf={black:"#000",silver:"#C0C0C0",gray:"#808080",white:"#FFF",maroon:"#800000",red:"#F00",purple:"#800080",fuchsia:"#F0F",green:"#008000",lime:"#0F0",olive:"#808000",yellow:"#FF0",navy:"#000080",blue:"#00F",teal:"#008080",aqua:"#0FF",transparent:"#0000"},ta="^\\s*",na="\\s*$",Xr="\\s*((\\.\\d+)|(\\d+(\\.\\d*)?))\\s*",Zr="([0-9A-Fa-f])",Jr="([0-9A-Fa-f]{2})",ZL=new RegExp(`${ta}rgb\\s*\\(${Xr},${Xr},${Xr}\\)${na}`),JL=new RegExp(`${ta}rgba\\s*\\(${Xr},${Xr},${Xr},${Xr}\\)${na}`),jL=new RegExp(`${ta}#${Zr}${Zr}${Zr}${na}`),eP=new RegExp(`${ta}#${Jr}${Jr}${Jr}${na}`),tP=new RegExp(`${ta}#${Zr}${Zr}${Zr}${Zr}${na}`),nP=new RegExp(`${ta}#${Jr}${Jr}${Jr}${Jr}${na}`);function sn(t){return parseInt(t,16)}function Ki(t){try{let e;if(e=eP.exec(t))return[sn(e[1]),sn(e[2]),sn(e[3]),1];if(e=ZL.exec(t))return[Qt(e[1]),Qt(e[5]),Qt(e[9]),1];if(e=JL.exec(t))return[Qt(e[1]),Qt(e[5]),Qt(e[9]),qa(e[13])];if(e=jL.exec(t))return[sn(e[1]+e[1]),sn(e[2]+e[2]),sn(e[3]+e[3]),1];if(e=nP.exec(t))return[sn(e[1]),sn(e[2]),sn(e[3]),qa(sn(e[4])/255)];if(e=tP.exec(t))return[sn(e[1]+e[1]),sn(e[2]+e[2]),sn(e[3]+e[3]),qa(sn(e[4]+e[4])/255)];if(t in Yf)return Ki(Yf[t]);throw new Error(`[seemly/rgba]: Invalid color value ${t}.`)}catch(e){throw e}}function rP(t){return t>1?1:t<0?0:t}function iP(t,e,n,i){return`rgba(${Qt(t)}, ${Qt(e)}, ${Qt(n)}, ${rP(i)})`}function cu(t,e,n,i,o){return Qt((t*e*(1-i)+n*i)/o)}function PC(t,e){Array.isArray(t)||(t=Ki(t)),Array.isArray(e)||(e=Ki(e));const n=t[3],i=e[3],o=qa(n+i-n*i);return iP(cu(t[0],n,e[0],i,o),cu(t[1],n,e[1],i,o),cu(t[2],n,e[2],i,o),o)}function Es(t,e){const[n,i,o,s=1]=Array.isArray(t)?t:Ki(t),{lightness:l=1,alpha:c=1}=e;return aP([n*l,i*l,o*l,s*c])}function qa(t){const e=Math.round(Number(t)*100)/100;return e>1?1:e<0?0:e}function Qt(t){const e=Math.round(Number(t));return e>255?255:e<0?0:e}function aP(t){const[e,n,i]=t;return 3 in t?`rgba(${Qt(e)}, ${Qt(n)}, ${Qt(i)}, ${qa(t[3])})`:`rgba(${Qt(e)}, ${Qt(n)}, ${Qt(i)}, 1)`}function kC(t=8){return Math.random().toString(16).slice(2,2+t)}function oP(t,e=[],n){const i={};return e.forEach(o=>{i[o]=t[o]}),Object.assign(i,n)}function Tm(t,e=!0,n=[]){return t.forEach(i=>{if(i!==null){if(typeof i!="object"){(typeof i=="string"||typeof i=="number")&&n.push(vt(String(i)));return}if(Array.isArray(i)){Tm(i,e,n);return}if(i.type===st){if(i.children===null)return;Array.isArray(i.children)&&Tm(i.children,e,n)}else i.type!==Ws&&n.push(i)}}),n}function 
Ga(t,...e){if(Array.isArray(t))t.forEach(n=>Ga(n,...e));else return t(...e)}function qf(t,e){console.error(`[naive/${t}]: ${e}`)}function sP(t,e){throw new Error(`[naive/${t}]: ${e}`)}function $f(t,e="default",n=void 0){const i=t[e];if(!i)return qf("getFirstSlotVNode",`slot[${e}] is empty`),null;const o=Tm(i(n));return o.length===1?o[0]:(qf("getFirstSlotVNode",`slot[${e}] should have exactly one child`),null)}function Xm(t){return t.some(e=>$i(e)?!(e.type===Ws||e.type===st&&!Xm(e.children)):!0)?t:null}function uu(t,e){const n=t&&Xm(t());return e(n||null)}function Hf(t){return!(t&&Xm(t()))}const zf=be({render(){var t,e;return(e=(t=this.$slots).default)===null||e===void 0?void 0:e.call(t)}}),lP=/^(\d|\.)+$/,Vf=/(\d|\.)+/;function du(t,{c:e=1,offset:n=0,attachPx:i=!0}={}){if(typeof t=="number"){const o=(t+n)*e;return o===0?"0":`${o}px`}else if(typeof t=="string")if(lP.test(t)){const o=(Number(t)+n)*e;return i?o===0?"0":`${o}px`:`${o}`}else{const o=Vf.exec(t);return o?t.replace(Vf,String((Number(o[0])+n)*e)):t}return t}function cP(t){let e=0;for(let n=0;n{let o=cP(i);if(o){if(o===1){t.forEach(l=>{n.push(i.replace("&",l))});return}}else{t.forEach(l=>{n.push((l&&l+" ")+i)});return}let s=[i];for(;o--;){const l=[];s.forEach(c=>{t.forEach(d=>{l.push(c.replace("&",d))})}),s=l}s.forEach(l=>n.push(l))}),n}function _P(t,e){const n=[];return e.split(UC).forEach(i=>{t.forEach(o=>{n.push((o&&o+" ")+i)})}),n}function pP(t){let e=[""];return t.forEach(n=>{n=n&&n.trim(),n&&(n.includes("&")?e=dP(e,n):e=_P(e,n))}),e.join(", ").replace(uP," ")}function Wf(t){if(!t)return;const e=t.parentElement;e&&e.removeChild(t)}function Zs(t){return document.querySelector(`style[cssr-id="${t}"]`)}function mP(t){const e=document.createElement("style");return e.setAttribute("cssr-id",t),e}function fs(t){return t?/^\s*@(s|m)/.test(t):!1}const gP=/[A-Z]/g;function FC(t){return t.replace(gP,e=>"-"+e.toLowerCase())}function EP(t,e=" "){return typeof t=="object"&&t!==null?` { -`+Object.entries(t).map(n=>e+` ${FC(n[0])}: ${n[1]};`).join(` -`)+` -`+e+"}":`: ${t};`}function fP(t,e,n){return typeof t=="function"?t({context:e.context,props:n}):t}function Kf(t,e,n,i){if(!e)return"";const o=fP(e,n,i);if(!o)return"";if(typeof o=="string")return`${t} { -${o} -}`;const s=Object.keys(o);if(s.length===0)return n.config.keepEmptyBlock?t+` { -}`:"";const l=t?[t+" {"]:[];return s.forEach(c=>{const d=o[c];if(c==="raw"){l.push(` -`+d+` -`);return}c=FC(c),d!=null&&l.push(` ${c}${EP(d)}`)}),t&&l.push("}"),l.join(` -`)}function vm(t,e,n){t&&t.forEach(i=>{if(Array.isArray(i))vm(i,e,n);else if(typeof i=="function"){const o=i(e);Array.isArray(o)?vm(o,e,n):o&&n(o)}else i&&n(i)})}function BC(t,e,n,i,o,s){const l=t.$;let c="";if(!l||typeof l=="string")fs(l)?c=l:e.push(l);else if(typeof l=="function"){const p=l({context:i.context,props:o});fs(p)?c=p:e.push(p)}else if(l.before&&l.before(i.context),!l.$||typeof l.$=="string")fs(l.$)?c=l.$:e.push(l.$);else if(l.$){const p=l.$({context:i.context,props:o});fs(p)?c=p:e.push(p)}const d=pP(e),_=Kf(d,t.props,i,o);c?(n.push(`${c} {`),s&&_&&s.insertRule(`${c} { -${_} -} -`)):(s&&_&&s.insertRule(_),!s&&_.length&&n.push(_)),t.children&&vm(t.children,{context:i.context,props:o},p=>{if(typeof p=="string"){const g=Kf(d,{raw:p},i,o);s?s.insertRule(g):n.push(g)}else BC(p,e,n,i,o,s)}),e.pop(),c&&n.push("}"),l&&l.after&&l.after(i.context)}function GC(t,e,n,i=!1){const o=[];return BC(t,[],o,e,n,i?t.instance.__styleSheet:void 0),i?"":o.join(` - -`)}function Cm(t){for(var 
e=0,n,i=0,o=t.length;o>=4;++i,o-=4)n=t.charCodeAt(i)&255|(t.charCodeAt(++i)&255)<<8|(t.charCodeAt(++i)&255)<<16|(t.charCodeAt(++i)&255)<<24,n=(n&65535)*1540483477+((n>>>16)*59797<<16),n^=n>>>24,e=(n&65535)*1540483477+((n>>>16)*59797<<16)^(e&65535)*1540483477+((e>>>16)*59797<<16);switch(o){case 3:e^=(t.charCodeAt(i+2)&255)<<16;case 2:e^=(t.charCodeAt(i+1)&255)<<8;case 1:e^=t.charCodeAt(i)&255,e=(e&65535)*1540483477+((e>>>16)*59797<<16)}return e^=e>>>13,e=(e&65535)*1540483477+((e>>>16)*59797<<16),((e^e>>>15)>>>0).toString(36)}typeof window<"u"&&(window.__cssrContext={});function SP(t,e,n){const{els:i}=e;if(n===void 0)i.forEach(Wf),e.els=[];else{const o=Zs(n);o&&i.includes(o)&&(Wf(o),e.els=i.filter(s=>s!==o))}}function Qf(t,e){t.push(e)}function bP(t,e,n,i,o,s,l,c,d){if(s&&!d){if(n===void 0){console.error("[css-render/mount]: `id` is required in `silent` mode.");return}const E=window.__cssrContext;E[n]||(E[n]=!0,GC(e,t,i,s));return}let _;if(n===void 0&&(_=e.render(i),n=Cm(_)),d){d.adapter(n,_??e.render(i));return}const p=Zs(n);if(p!==null&&!l)return p;const g=p??mP(n);if(_===void 0&&(_=e.render(i)),g.textContent=_,p!==null)return p;if(c){const E=document.head.querySelector(`meta[name="${c}"]`);if(E)return document.head.insertBefore(g,E),Qf(e.els,g),g}return o?document.head.insertBefore(g,document.head.querySelector("style, link")):document.head.appendChild(g),Qf(e.els,g),g}function hP(t){return GC(this,this.instance,t)}function TP(t={}){const{id:e,ssr:n,props:i,head:o=!1,silent:s=!1,force:l=!1,anchorMetaName:c}=t;return bP(this.instance,this,e,i,o,s,l,c,n)}function vP(t={}){const{id:e}=t;SP(this.instance,this,e)}const Ss=function(t,e,n,i){return{instance:t,$:e,props:n,children:i,els:[],render:hP,mount:TP,unmount:vP}},CP=function(t,e,n,i){return Array.isArray(e)?Ss(t,{$:null},null,e):Array.isArray(n)?Ss(t,e,null,n):Array.isArray(i)?Ss(t,e,n,i):Ss(t,e,n,null)};function YC(t={}){let e=null;const n={c:(...i)=>CP(n,...i),use:(i,...o)=>i.install(n,...o),find:Zs,context:{},config:t,get __styleSheet(){if(!e){const i=document.createElement("style");return document.head.appendChild(i),e=document.styleSheets[document.styleSheets.length-1],e}return e}};return n}function RP(t,e){if(t===void 0)return!1;if(e){const{context:{ids:n}}=e;return n.has(t)}return Zs(t)!==null}function NP(t){let e=".",n="__",i="--",o;if(t){let S=t.blockPrefix;S&&(e=S),S=t.elementPrefix,S&&(n=S),S=t.modifierPrefix,S&&(i=S)}const s={install(S){o=S.c;const C=S.context;C.bem={},C.bem.b=null,C.bem.els=null}};function l(S){let C,h;return{before(T){C=T.bem.b,h=T.bem.els,T.bem.els=null},after(T){T.bem.b=C,T.bem.els=h},$({context:T,props:N}){return S=typeof S=="string"?S:S({context:T,props:N}),T.bem.b=S,`${(N==null?void 0:N.bPrefix)||e}${T.bem.b}`}}}function c(S){let C;return{before(h){C=h.bem.els},after(h){h.bem.els=C},$({context:h,props:T}){return S=typeof S=="string"?S:S({context:h,props:T}),h.bem.els=S.split(",").map(N=>N.trim()),h.bem.els.map(N=>`${(T==null?void 0:T.bPrefix)||e}${h.bem.b}${n}${N}`).join(", ")}}}function d(S){return{$({context:C,props:h}){S=typeof S=="string"?S:S({context:C,props:h});const T=S.split(",").map(x=>x.trim());function N(x){return T.map(P=>`&${(h==null?void 0:h.bPrefix)||e}${C.bem.b}${x!==void 0?`${n}${x}`:""}${i}${P}`).join(", ")}const y=C.bem.els;return y!==null?N(y[0]):N()}}}function _(S){return{$({context:C,props:h}){S=typeof S=="string"?S:S({context:C,props:h});const T=C.bem.els;return`&:not(${(h==null?void 0:h.bPrefix)||e}${C.bem.b}${T!==null&&T.length>0?`${n}${T[0]}`:""}${i}${S})`}}}return 
Object.assign(s,{cB:(...S)=>o(l(S[0]),S[1],S[2]),cE:(...S)=>o(c(S[0]),S[1],S[2]),cM:(...S)=>o(d(S[0]),S[1],S[2]),cNotM:(...S)=>o(_(S[0]),S[1],S[2])}),s}const OP="n",AP=`.${OP}-`,yP="__",IP="--",qC=YC(),$C=NP({blockPrefix:AP,elementPrefix:yP,modifierPrefix:IP});qC.use($C);const{c:je,find:qDe}=qC,{cB:St,cE:jr,cM:_r,cNotM:$a}=$C,DP=(...t)=>je(">",[St(...t)]);let _u;function xP(){return _u===void 0&&(_u=navigator.userAgent.includes("Node.js")||navigator.userAgent.includes("jsdom")),_u}const wP=typeof document<"u"&&typeof window<"u";function MP(t){const e=ee(!!t.value);if(e.value)return bm(e);const n=Zt(t,i=>{i&&(e.value=!0,n())});return bm(e)}function Qa(t){const e=le(t),n=ee(e.value);return Zt(e,i=>{n.value=i}),typeof t=="function"?n:{__v_isRef:!0,get value(){return n.value},set value(i){t.set(i)}}}const LP=typeof window<"u";let zi,Ha;const PP=()=>{var t,e;zi=LP?(e=(t=document)===null||t===void 0?void 0:t.fonts)===null||e===void 0?void 0:e.ready:void 0,Ha=!1,zi!==void 0?zi.then(()=>{Ha=!0}):Ha=!0};PP();function kP(t){if(Ha)return;let e=!1;kn(()=>{Ha||zi==null||zi.then(()=>{e||t()})}),Kn(()=>{e=!0})}function UP(t,e){return Zt(t,n=>{n!==void 0&&(e.value=n)}),le(()=>t.value===void 0?e.value:t.value)}function Zm(){const t=ee(!1);return kn(()=>{t.value=!0}),bm(t)}function FP(t,e){return le(()=>{for(const n of e)if(t[n]!==void 0)return t[n];return t[e[e.length-1]]})}const BP=(typeof window>"u"?!1:/iPad|iPhone|iPod/.test(navigator.platform)||navigator.platform==="MacIntel"&&navigator.maxTouchPoints>1)&&!window.MSStream;function GP(){return BP}const YP="n-internal-select-menu-body",HC="n-modal-body",zC="n-drawer-body",VC="n-popover-body",WC="__disabled__";function Qi(t){const e=Ft(HC,null),n=Ft(zC,null),i=Ft(VC,null),o=Ft(YP,null),s=ee();if(typeof document<"u"){s.value=document.fullscreenElement;const l=()=>{s.value=document.fullscreenElement};kn(()=>{Ht("fullscreenchange",document,l)}),Kn(()=>{Rt("fullscreenchange",document,l)})}return Qa(()=>{var l;const{to:c}=t;return c!==void 0?c===!1?WC:c===!0?s.value||"body":c:e!=null&&e.value?(l=e.value.$el)!==null&&l!==void 0?l:e.value:n!=null&&n.value?n.value:i!=null&&i.value?i.value:o!=null&&o.value?o.value:c??(s.value||"body")})}Qi.tdkey=WC;Qi.propTo={type:[String,Object,Boolean],default:void 0};function Rm(t,e,n="default"){const i=e[n];if(i===void 0)throw new Error(`[vueuc/${t}]: slot[${n}] is empty.`);return i()}function Nm(t,e=!0,n=[]){return t.forEach(i=>{if(i!==null){if(typeof i!="object"){(typeof i=="string"||typeof i=="number")&&n.push(vt(String(i)));return}if(Array.isArray(i)){Nm(i,e,n);return}if(i.type===st){if(i.children===null)return;Array.isArray(i.children)&&Nm(i.children,e,n)}else i.type!==Ws&&n.push(i)}}),n}function Xf(t,e,n="default"){const i=e[n];if(i===void 0)throw new Error(`[vueuc/${t}]: slot[${n}] is empty.`);const o=Nm(i());if(o.length===1)return o[0];throw new Error(`[vueuc/${t}]: slot[${n}] should have exactly one child.`)}let Nr=null;function KC(){if(Nr===null&&(Nr=document.getElementById("v-binder-view-measurer"),Nr===null)){Nr=document.createElement("div"),Nr.id="v-binder-view-measurer";const{style:t}=Nr;t.position="fixed",t.left="0",t.right="0",t.top="0",t.bottom="0",t.pointerEvents="none",t.visibility="hidden",document.body.appendChild(Nr)}return Nr.getBoundingClientRect()}function qP(t,e){const n=KC();return{top:e,left:t,height:0,width:0,right:n.width-t,bottom:n.height-e}}function pu(t){const 
e=t.getBoundingClientRect(),n=KC();return{left:e.left-n.left,top:e.top-n.top,bottom:n.height+n.top-e.bottom,right:n.width+n.left-e.right,width:e.width,height:e.height}}function $P(t){return t.nodeType===9?null:t.parentNode}function QC(t){if(t===null)return null;const e=$P(t);if(e===null)return null;if(e.nodeType===9)return document;if(e.nodeType===1){const{overflow:n,overflowX:i,overflowY:o}=getComputedStyle(e);if(/(auto|scroll|overlay)/.test(n+o+i))return e}return QC(e)}const HP=be({name:"Binder",props:{syncTargetWithParent:Boolean,syncTarget:{type:Boolean,default:!0}},setup(t){var e;ni("VBinder",(e=$m())===null||e===void 0?void 0:e.proxy);const n=Ft("VBinder",null),i=ee(null),o=T=>{i.value=T,n&&t.syncTargetWithParent&&n.setTargetRef(T)};let s=[];const l=()=>{let T=i.value;for(;T=QC(T),T!==null;)s.push(T);for(const N of s)Ht("scroll",N,g,!0)},c=()=>{for(const T of s)Rt("scroll",T,g,!0);s=[]},d=new Set,_=T=>{d.size===0&&l(),d.has(T)||d.add(T)},p=T=>{d.has(T)&&d.delete(T),d.size===0&&c()},g=()=>{LC(E)},E=()=>{d.forEach(T=>T())},f=new Set,S=T=>{f.size===0&&Ht("resize",window,h),f.has(T)||f.add(T)},C=T=>{f.has(T)&&f.delete(T),f.size===0&&Rt("resize",window,h)},h=()=>{f.forEach(T=>T())};return Kn(()=>{Rt("resize",window,h),c()}),{targetRef:i,setTargetRef:o,addScrollListener:_,removeScrollListener:p,addResizeListener:S,removeResizeListener:C}},render(){return Rm("binder",this.$slots)}}),zP=HP,VP=be({name:"Target",setup(){const{setTargetRef:t,syncTarget:e}=Ft("VBinder");return{syncTarget:e,setTargetDirective:{mounted:t,updated:t}}},render(){const{syncTarget:t,setTargetDirective:e}=this;return t?Pn(Xf("follower",this.$slots),[[e]]):Xf("follower",this.$slots)}}),Li="@@mmoContext",WP={mounted(t,{value:e}){t[Li]={handler:void 0},typeof e=="function"&&(t[Li].handler=e,Ht("mousemoveoutside",t,e))},updated(t,{value:e}){const n=t[Li];typeof e=="function"?n.handler?n.handler!==e&&(Rt("mousemoveoutside",t,n.handler),n.handler=e,Ht("mousemoveoutside",t,e)):(t[Li].handler=e,Ht("mousemoveoutside",t,e)):n.handler&&(Rt("mousemoveoutside",t,n.handler),n.handler=void 0)},unmounted(t){const{handler:e}=t[Li];e&&Rt("mousemoveoutside",t,e),t[Li].handler=void 0}},KP=WP,Pi="@@coContext",QP={mounted(t,{value:e,modifiers:n}){t[Pi]={handler:void 0},typeof e=="function"&&(t[Pi].handler=e,Ht("clickoutside",t,e,{capture:n.capture}))},updated(t,{value:e,modifiers:n}){const i=t[Pi];typeof e=="function"?i.handler?i.handler!==e&&(Rt("clickoutside",t,i.handler,{capture:n.capture}),i.handler=e,Ht("clickoutside",t,e,{capture:n.capture})):(t[Pi].handler=e,Ht("clickoutside",t,e,{capture:n.capture})):i.handler&&(Rt("clickoutside",t,i.handler,{capture:n.capture}),i.handler=void 0)},unmounted(t,{modifiers:e}){const{handler:n}=t[Pi];n&&Rt("clickoutside",t,n,{capture:e.capture}),t[Pi].handler=void 0}},Zf=QP;function XP(t,e){console.error(`[vdirs/${t}]: ${e}`)}class ZP{constructor(){this.elementZIndex=new Map,this.nextZIndex=2e3}get elementCount(){return this.elementZIndex.size}ensureZIndex(e,n){const{elementZIndex:i}=this;if(n!==void 0){e.style.zIndex=`${n}`,i.delete(e);return}const{nextZIndex:o}=this;i.has(e)&&i.get(e)+1===this.nextZIndex||(e.style.zIndex=`${o}`,i.set(e,o),this.nextZIndex=o+1,this.squashState())}unregister(e,n){const{elementZIndex:i}=this;i.has(e)?i.delete(e):n===void 0&&XP("z-index-manager/unregister-element","Element not found when unregistering."),this.squashState()}squashState(){const{elementCount:e}=this;e||(this.nextZIndex=2e3),this.nextZIndex-e>2500&&this.rearrange()}rearrange(){const 
e=Array.from(this.elementZIndex.entries());e.sort((n,i)=>n[1]-i[1]),this.nextZIndex=2e3,e.forEach(n=>{const i=n[0],o=this.nextZIndex++;`${o}`!==i.style.zIndex&&(i.style.zIndex=`${o}`)})}}const mu=new ZP,ki="@@ziContext",JP={mounted(t,e){const{value:n={}}=e,{zIndex:i,enabled:o}=n;t[ki]={enabled:!!o,initialized:!1},o&&(mu.ensureZIndex(t,i),t[ki].initialized=!0)},updated(t,e){const{value:n={}}=e,{zIndex:i,enabled:o}=n,s=t[ki].enabled;o&&!s&&(mu.ensureZIndex(t,i),t[ki].initialized=!0),t[ki].enabled=!!o},unmounted(t,e){if(!t[ki].initialized)return;const{value:n={}}=e,{zIndex:i}=n;mu.unregister(t,i)}},Jm=JP,XC=Symbol("@css-render/vue3-ssr");function jP(t,e){return``}function e0(t,e){const n=Ft(XC,null);if(n===null){console.error("[css-render/vue3-ssr]: no ssr context found.");return}const{styles:i,ids:o}=n;o.has(t)||i!==null&&(o.add(t),i.push(jP(t,e)))}const t0=typeof document<"u";function ro(){if(t0)return;const t=Ft(XC,null);if(t!==null)return{adapter:e0,context:t}}function Jf(t,e){console.error(`[vueuc/${t}]: ${e}`)}const{c:bs}=YC(),n0="vueuc-style";function jf(t){return typeof t=="string"?document.querySelector(t):t()}const ZC=be({name:"LazyTeleport",props:{to:{type:[String,Object],default:void 0},disabled:Boolean,show:{type:Boolean,required:!0}},setup(t){return{showTeleport:MP(yt(t,"show")),mergedTo:le(()=>{const{to:e}=t;return e??"body"})}},render(){return this.showTeleport?this.disabled?Rm("lazy-teleport",this.$slots):j(Gw,{disabled:this.disabled,to:this.mergedTo},Rm("lazy-teleport",this.$slots)):null}}),hs={top:"bottom",bottom:"top",left:"right",right:"left"},eS={start:"end",center:"center",end:"start"},gu={top:"height",bottom:"height",left:"width",right:"width"},r0={"bottom-start":"top left",bottom:"top center","bottom-end":"top right","top-start":"bottom left",top:"bottom center","top-end":"bottom right","right-start":"top left",right:"center left","right-end":"bottom left","left-start":"top right",left:"center right","left-end":"bottom right"},i0={"bottom-start":"bottom left",bottom:"bottom center","bottom-end":"bottom right","top-start":"top left",top:"top center","top-end":"top right","right-start":"top right",right:"center right","right-end":"bottom right","left-start":"top left",left:"center left","left-end":"bottom left"},a0={"bottom-start":"right","bottom-end":"left","top-start":"right","top-end":"left","right-start":"bottom","right-end":"top","left-start":"bottom","left-end":"top"},tS={top:!0,bottom:!1,left:!0,right:!1},nS={top:"end",bottom:"start",left:"end",right:"start"};function o0(t,e,n,i,o,s){if(!o||s)return{placement:t,top:0,left:0};const[l,c]=t.split("-");let d=c??"center",_={top:0,left:0};const p=(f,S,C)=>{let h=0,T=0;const N=n[f]-e[S]-e[f];return N>0&&i&&(C?T=tS[S]?N:-N:h=tS[S]?N:-N),{left:h,top:T}},g=l==="left"||l==="right";if(d!=="center"){const f=a0[t],S=hs[f],C=gu[f];if(n[C]>e[C]){if(e[f]+e[C]e[S]&&(d=eS[c])}else{const f=l==="bottom"||l==="top"?"left":"top",S=hs[f],C=gu[f],h=(n[C]-e[C])/2;(e[f]e[S]?(d=nS[f],_=p(C,f,g)):(d=nS[S],_=p(C,S,g)))}let E=l;return e[l] *",{pointerEvents:"all"})])]),u0=be({name:"Follower",inheritAttrs:!1,props:{show:Boolean,enabled:{type:Boolean,default:void 0},placement:{type:String,default:"bottom"},syncTrigger:{type:Array,default:["resize","scroll"]},to:[String,Object],flip:{type:Boolean,default:!0},internalShift:Boolean,x:Number,y:Number,width:String,minWidth:String,containerClass:String,teleportDisabled:Boolean,zindexable:{type:Boolean,default:!0},zIndex:Number,overlap:Boolean},setup(t){const e=Ft("VBinder"),n=Qa(()=>t.enabled!==void 
0?t.enabled:t.show),i=ee(null),o=ee(null),s=()=>{const{syncTrigger:E}=t;E.includes("scroll")&&e.addScrollListener(d),E.includes("resize")&&e.addResizeListener(d)},l=()=>{e.removeScrollListener(d),e.removeResizeListener(d)};kn(()=>{n.value&&(d(),s())});const c=ro();c0.mount({id:"vueuc/binder",head:!0,anchorMetaName:n0,ssr:c}),Kn(()=>{l()}),kP(()=>{n.value&&d()});const d=()=>{if(!n.value)return;const E=i.value;if(E===null)return;const f=e.targetRef,{x:S,y:C,overlap:h}=t,T=S!==void 0&&C!==void 0?qP(S,C):pu(f);E.style.setProperty("--v-target-width",`${Math.round(T.width)}px`),E.style.setProperty("--v-target-height",`${Math.round(T.height)}px`);const{width:N,minWidth:y,placement:x,internalShift:P,flip:D}=t;E.setAttribute("v-placement",x),h?E.setAttribute("v-overlap",""):E.removeAttribute("v-overlap");const{style:k}=E;N==="target"?k.width=`${T.width}px`:N!==void 0?k.width=N:k.width="",y==="target"?k.minWidth=`${T.width}px`:y!==void 0?k.minWidth=y:k.minWidth="";const U=pu(E),W=pu(o.value),{left:z,top:K,placement:Ee}=o0(x,T,U,P,D,h),oe=s0(Ee,h),{left:L,top:J,transform:re}=l0(Ee,W,T,K,z,h);E.setAttribute("v-placement",Ee),E.style.setProperty("--v-offset-left",`${Math.round(z)}px`),E.style.setProperty("--v-offset-top",`${Math.round(K)}px`),E.style.transform=`translateX(${L}) translateY(${J}) ${re}`,E.style.setProperty("--v-transform-origin",oe),E.style.transformOrigin=oe};Zt(n,E=>{E?(s(),_()):l()});const _=()=>{Ks().then(d).catch(E=>console.error(E))};["placement","x","y","internalShift","flip","width","overlap","minWidth"].forEach(E=>{Zt(yt(t,E),d)}),["teleportDisabled"].forEach(E=>{Zt(yt(t,E),_)}),Zt(yt(t,"syncTrigger"),E=>{E.includes("resize")?e.addResizeListener(d):e.removeResizeListener(d),E.includes("scroll")?e.addScrollListener(d):e.removeScrollListener(d)});const p=Zm(),g=Qa(()=>{const{to:E}=t;if(E!==void 0)return E;p.value});return{VBinder:e,mergedEnabled:n,offsetContainerRef:o,followerRef:i,mergedTo:g,syncPosition:d}},render(){return j(ZC,{show:this.show,to:this.mergedTo,disabled:this.teleportDisabled},{default:()=>{var t,e;const n=j("div",{class:["v-binder-follower-container",this.containerClass],ref:"offsetContainerRef"},[j("div",{class:"v-binder-follower-content",ref:"followerRef"},(e=(t=this.$slots).default)===null||e===void 0?void 0:e.call(t))]);return this.zindexable?Pn(n,[[Jm,{enabled:this.mergedEnabled,zIndex:this.zIndex}]]):n}})}});var ri=[],d0=function(){return ri.some(function(t){return t.activeTargets.length>0})},_0=function(){return ri.some(function(t){return t.skippedTargets.length>0})},rS="ResizeObserver loop completed with undelivered notifications.",p0=function(){var t;typeof ErrorEvent=="function"?t=new ErrorEvent("error",{message:rS}):(t=document.createEvent("Event"),t.initEvent("error",!1,!1),t.message=rS),window.dispatchEvent(t)},Xa;(function(t){t.BORDER_BOX="border-box",t.CONTENT_BOX="content-box",t.DEVICE_PIXEL_CONTENT_BOX="device-pixel-content-box"})(Xa||(Xa={}));var ii=function(t){return Object.freeze(t)},m0=function(){function t(e,n){this.inlineSize=e,this.blockSize=n,ii(this)}return t}(),JC=function(){function t(e,n,i,o){return this.x=e,this.y=n,this.width=i,this.height=o,this.top=this.y,this.left=this.x,this.bottom=this.top+this.height,this.right=this.left+this.width,ii(this)}return t.prototype.toJSON=function(){var e=this,n=e.x,i=e.y,o=e.top,s=e.right,l=e.bottom,c=e.left,d=e.width,_=e.height;return{x:n,y:i,top:o,right:s,bottom:l,left:c,width:d,height:_}},t.fromRect=function(e){return new t(e.x,e.y,e.width,e.height)},t}(),jm=function(t){return t instanceof 
SVGElement&&"getBBox"in t},jC=function(t){if(jm(t)){var e=t.getBBox(),n=e.width,i=e.height;return!n&&!i}var o=t,s=o.offsetWidth,l=o.offsetHeight;return!(s||l||t.getClientRects().length)},iS=function(t){var e;if(t instanceof Element)return!0;var n=(e=t==null?void 0:t.ownerDocument)===null||e===void 0?void 0:e.defaultView;return!!(n&&t instanceof n.Element)},g0=function(t){switch(t.tagName){case"INPUT":if(t.type!=="image")break;case"VIDEO":case"AUDIO":case"EMBED":case"OBJECT":case"CANVAS":case"IFRAME":case"IMG":return!0}return!1},za=typeof window<"u"?window:{},Ts=new WeakMap,aS=/auto|scroll/,E0=/^tb|vertical/,f0=/msie|trident/i.test(za.navigator&&za.navigator.userAgent),Hn=function(t){return parseFloat(t||"0")},Vi=function(t,e,n){return t===void 0&&(t=0),e===void 0&&(e=0),n===void 0&&(n=!1),new m0((n?e:t)||0,(n?t:e)||0)},oS=ii({devicePixelContentBoxSize:Vi(),borderBoxSize:Vi(),contentBoxSize:Vi(),contentRect:new JC(0,0,0,0)}),eR=function(t,e){if(e===void 0&&(e=!1),Ts.has(t)&&!e)return Ts.get(t);if(jC(t))return Ts.set(t,oS),oS;var n=getComputedStyle(t),i=jm(t)&&t.ownerSVGElement&&t.getBBox(),o=!f0&&n.boxSizing==="border-box",s=E0.test(n.writingMode||""),l=!i&&aS.test(n.overflowY||""),c=!i&&aS.test(n.overflowX||""),d=i?0:Hn(n.paddingTop),_=i?0:Hn(n.paddingRight),p=i?0:Hn(n.paddingBottom),g=i?0:Hn(n.paddingLeft),E=i?0:Hn(n.borderTopWidth),f=i?0:Hn(n.borderRightWidth),S=i?0:Hn(n.borderBottomWidth),C=i?0:Hn(n.borderLeftWidth),h=g+_,T=d+p,N=C+f,y=E+S,x=c?t.offsetHeight-y-t.clientHeight:0,P=l?t.offsetWidth-N-t.clientWidth:0,D=o?h+N:0,k=o?T+y:0,U=i?i.width:Hn(n.width)-D-P,W=i?i.height:Hn(n.height)-k-x,z=U+h+P+N,K=W+T+x+y,Ee=ii({devicePixelContentBoxSize:Vi(Math.round(U*devicePixelRatio),Math.round(W*devicePixelRatio),s),borderBoxSize:Vi(z,K,s),contentBoxSize:Vi(U,W,s),contentRect:new JC(g,d,U,W)});return Ts.set(t,Ee),Ee},tR=function(t,e,n){var i=eR(t,n),o=i.borderBoxSize,s=i.contentBoxSize,l=i.devicePixelContentBoxSize;switch(e){case Xa.DEVICE_PIXEL_CONTENT_BOX:return l;case Xa.BORDER_BOX:return o;default:return s}},S0=function(){function t(e){var n=eR(e);this.target=e,this.contentRect=n.contentRect,this.borderBoxSize=ii([n.borderBoxSize]),this.contentBoxSize=ii([n.contentBoxSize]),this.devicePixelContentBoxSize=ii([n.devicePixelContentBoxSize])}return t}(),nR=function(t){if(jC(t))return 1/0;for(var e=0,n=t.parentNode;n;)e+=1,n=n.parentNode;return e},b0=function(){var t=1/0,e=[];ri.forEach(function(l){if(l.activeTargets.length!==0){var c=[];l.activeTargets.forEach(function(_){var p=new S0(_.target),g=nR(_.target);c.push(p),_.lastReportedSize=tR(_.target,_.observedBox),gt?n.activeTargets.push(o):n.skippedTargets.push(o))})})},h0=function(){var t=0;for(sS(t);d0();)t=b0(),sS(t);return _0()&&p0(),t>0},Eu,rR=[],T0=function(){return rR.splice(0).forEach(function(t){return t()})},v0=function(t){if(!Eu){var e=0,n=document.createTextNode(""),i={characterData:!0};new MutationObserver(function(){return T0()}).observe(n,i),Eu=function(){n.textContent="".concat(e?e--:e++)}}rR.push(t),Eu()},C0=function(t){v0(function(){requestAnimationFrame(t)})},Ms=0,R0=function(){return!!Ms},N0=250,O0={attributes:!0,characterData:!0,childList:!0,subtree:!0},lS=["resize","load","transitionend","animationend","animationstart","animationiteration","keyup","keydown","mouseup","mousedown","mouseover","mouseout","blur","focus"],cS=function(t){return t===void 0&&(t=0),Date.now()+t},fu=!1,A0=function(){function t(){var e=this;this.stopped=!0,this.listener=function(){return e.schedule()}}return t.prototype.run=function(e){var 
n=this;if(e===void 0&&(e=N0),!fu){fu=!0;var i=cS(e);C0(function(){var o=!1;try{o=h0()}finally{if(fu=!1,e=i-cS(),!R0())return;o?n.run(1e3):e>0?n.run(e):n.start()}})}},t.prototype.schedule=function(){this.stop(),this.run()},t.prototype.observe=function(){var e=this,n=function(){return e.observer&&e.observer.observe(document.body,O0)};document.body?n():za.addEventListener("DOMContentLoaded",n)},t.prototype.start=function(){var e=this;this.stopped&&(this.stopped=!1,this.observer=new MutationObserver(this.listener),this.observe(),lS.forEach(function(n){return za.addEventListener(n,e.listener,!0)}))},t.prototype.stop=function(){var e=this;this.stopped||(this.observer&&this.observer.disconnect(),lS.forEach(function(n){return za.removeEventListener(n,e.listener,!0)}),this.stopped=!0)},t}(),Om=new A0,uS=function(t){!Ms&&t>0&&Om.start(),Ms+=t,!Ms&&Om.stop()},y0=function(t){return!jm(t)&&!g0(t)&&getComputedStyle(t).display==="inline"},I0=function(){function t(e,n){this.target=e,this.observedBox=n||Xa.CONTENT_BOX,this.lastReportedSize={inlineSize:0,blockSize:0}}return t.prototype.isActive=function(){var e=tR(this.target,this.observedBox,!0);return y0(this.target)&&(this.lastReportedSize=e),this.lastReportedSize.inlineSize!==e.inlineSize||this.lastReportedSize.blockSize!==e.blockSize},t}(),D0=function(){function t(e,n){this.activeTargets=[],this.skippedTargets=[],this.observationTargets=[],this.observer=e,this.callback=n}return t}(),vs=new WeakMap,dS=function(t,e){for(var n=0;n=0&&(s&&ri.splice(ri.indexOf(i),1),i.observationTargets.splice(o,1),uS(-1))},t.disconnect=function(e){var n=this,i=vs.get(e);i.observationTargets.slice().forEach(function(o){return n.unobserve(e,o.target)}),i.activeTargets.splice(0,i.activeTargets.length)},t}(),x0=function(){function t(e){if(arguments.length===0)throw new TypeError("Failed to construct 'ResizeObserver': 1 argument required, but only 0 present.");if(typeof e!="function")throw new TypeError("Failed to construct 'ResizeObserver': The callback provided as parameter 1 is not a function.");Cs.connect(this,e)}return t.prototype.observe=function(e,n){if(arguments.length===0)throw new TypeError("Failed to execute 'observe' on 'ResizeObserver': 1 argument required, but only 0 present.");if(!iS(e))throw new TypeError("Failed to execute 'observe' on 'ResizeObserver': parameter 1 is not of type 'Element");Cs.observe(this,e,n)},t.prototype.unobserve=function(e){if(arguments.length===0)throw new TypeError("Failed to execute 'unobserve' on 'ResizeObserver': 1 argument required, but only 0 present.");if(!iS(e))throw new TypeError("Failed to execute 'unobserve' on 'ResizeObserver': parameter 1 is not of type 'Element");Cs.unobserve(this,e)},t.prototype.disconnect=function(){Cs.disconnect(this)},t.toString=function(){return"function ResizeObserver () { [polyfill code] }"},t}();class w0{constructor(){this.handleResize=this.handleResize.bind(this),this.observer=new(typeof window<"u"&&window.ResizeObserver||x0)(this.handleResize),this.elHandlersMap=new Map}handleResize(e){for(const n of e){const i=this.elHandlersMap.get(n.target);i!==void 0&&i(n)}}registerHandler(e,n){this.elHandlersMap.set(e,n),this.observer.observe(e)}unregisterHandler(e){this.elHandlersMap.has(e)&&(this.elHandlersMap.delete(e),this.observer.unobserve(e))}}const _S=new w0,pS=be({name:"ResizeObserver",props:{onResize:Function},setup(t){let e=!1;const n=$m().proxy;function i(o){const{onResize:s}=t;s!==void 0&&s(o)}kn(()=>{const o=n.$el;if(o===void 0){Jf("resize-observer","$el does not 
exist.");return}if(o.nextElementSibling!==o.nextSibling&&o.nodeType===3&&o.nodeValue!==""){Jf("resize-observer","$el can not be observed (it may be a text node).");return}o.nextElementSibling!==null&&(_S.registerHandler(o.nextElementSibling,i),e=!0)}),Kn(()=>{e&&_S.unregisterHandler(n.$el.nextElementSibling)})},render(){return oi(this.$slots,"default")}});function iR(t){return t instanceof HTMLElement}function aR(t){for(let e=0;e=0;e--){const n=t.childNodes[e];if(iR(n)&&(sR(n)||oR(n)))return!0}return!1}function sR(t){if(!M0(t))return!1;try{t.focus({preventScroll:!0})}catch{}return document.activeElement===t}function M0(t){if(t.tabIndex>0||t.tabIndex===0&&t.getAttribute("tabIndex")!==null)return!0;if(t.getAttribute("disabled"))return!1;switch(t.nodeName){case"A":return!!t.href&&t.rel!=="ignore";case"INPUT":return t.type!=="hidden"&&t.type!=="file";case"BUTTON":case"SELECT":case"TEXTAREA":return!0;default:return!1}}let Pa=[];const L0=be({name:"FocusTrap",props:{disabled:Boolean,active:Boolean,autoFocus:{type:Boolean,default:!0},onEsc:Function,initialFocusTo:String,finalFocusTo:String,returnFocusOnDeactivated:{type:Boolean,default:!0}},setup(t){const e=kC(),n=ee(null),i=ee(null);let o=!1,s=!1;const l=typeof document>"u"?null:document.activeElement;function c(){return Pa[Pa.length-1]===e}function d(h){var T;h.code==="Escape"&&c()&&((T=t.onEsc)===null||T===void 0||T.call(t,h))}kn(()=>{Zt(()=>t.active,h=>{h?(g(),Ht("keydown",document,d)):(Rt("keydown",document,d),o&&E())},{immediate:!0})}),Kn(()=>{Rt("keydown",document,d),o&&E()});function _(h){if(!s&&c()){const T=p();if(T===null||T.contains(ks(h)))return;f("first")}}function p(){const h=n.value;if(h===null)return null;let T=h;for(;T=T.nextSibling,!(T===null||T instanceof Element&&T.tagName==="DIV"););return T}function g(){var h;if(!t.disabled){if(Pa.push(e),t.autoFocus){const{initialFocusTo:T}=t;T===void 0?f("first"):(h=jf(T))===null||h===void 0||h.focus({preventScroll:!0})}o=!0,document.addEventListener("focus",_,!0)}}function E(){var h;if(t.disabled||(document.removeEventListener("focus",_,!0),Pa=Pa.filter(N=>N!==e),c()))return;const{finalFocusTo:T}=t;T!==void 0?(h=jf(T))===null||h===void 0||h.focus({preventScroll:!0}):t.returnFocusOnDeactivated&&l instanceof HTMLElement&&(s=!0,l.focus({preventScroll:!0}),s=!1)}function f(h){if(c()&&t.active){const T=n.value,N=i.value;if(T!==null&&N!==null){const y=p();if(y==null||y===N){s=!0,T.focus({preventScroll:!0}),s=!1;return}s=!0;const x=h==="first"?aR(y):oR(y);s=!1,x||(s=!0,T.focus({preventScroll:!0}),s=!1)}}}function S(h){if(s)return;const T=p();T!==null&&(h.relatedTarget!==null&&T.contains(h.relatedTarget)?f("last"):f("first"))}function C(h){s||(h.relatedTarget!==null&&h.relatedTarget===n.value?f("last"):f("first"))}return{focusableStartRef:n,focusableEndRef:i,focusableStyle:"position: absolute; height: 0; width: 0;",handleStartFocus:S,handleEndFocus:C}},render(){const{default:t}=this.$slots;if(t===void 0)return null;if(this.disabled)return t();const{active:e,focusableStyle:n}=this;return j(st,null,[j("div",{"aria-hidden":"true",tabindex:e?"0":"-1",ref:"focusableStartRef",style:n,onFocus:this.handleStartFocus}),t(),j("div",{"aria-hidden":"true",style:n,ref:"focusableEndRef",tabindex:e?"0":"-1",onFocus:this.handleEndFocus})])}});function P0(t){const e={isDeactivated:!1};let n=!1;return Yw(()=>{if(e.isDeactivated=!1,!n){n=!0;return}t()}),OC(()=>{e.isDeactivated=!0,n||(n=!0)}),e}var k0=typeof global=="object"&&global&&global.Object===Object&&global;const lR=k0;var U0=typeof 
self=="object"&&self&&self.Object===Object&&self,F0=lR||U0||Function("return this")();const Qn=F0;var B0=Qn.Symbol;const Mr=B0;var cR=Object.prototype,G0=cR.hasOwnProperty,Y0=cR.toString,ka=Mr?Mr.toStringTag:void 0;function q0(t){var e=G0.call(t,ka),n=t[ka];try{t[ka]=void 0;var i=!0}catch{}var o=Y0.call(t);return i&&(e?t[ka]=n:delete t[ka]),o}var $0=Object.prototype,H0=$0.toString;function z0(t){return H0.call(t)}var V0="[object Null]",W0="[object Undefined]",mS=Mr?Mr.toStringTag:void 0;function ui(t){return t==null?t===void 0?W0:V0:mS&&mS in Object(t)?q0(t):z0(t)}function Lr(t){return t!=null&&typeof t=="object"}var K0="[object Symbol]";function eg(t){return typeof t=="symbol"||Lr(t)&&ui(t)==K0}function uR(t,e){for(var n=-1,i=t==null?0:t.length,o=Array(i);++n0){if(++e>=bk)return arguments[0]}else e=0;return t.apply(void 0,arguments)}}function Ck(t){return function(){return t}}var Rk=function(){try{var t=_i(Object,"defineProperty");return t({},"",{}),t}catch{}}();const Us=Rk;var Nk=Us?function(t,e){return Us(t,"toString",{configurable:!0,enumerable:!1,value:Ck(e),writable:!0})}:tg;const Ok=Nk;var Ak=vk(Ok);const yk=Ak;var Ik=9007199254740991,Dk=/^(?:0|[1-9]\d*)$/;function rg(t,e){var n=typeof t;return e=e??Ik,!!e&&(n=="number"||n!="symbol"&&Dk.test(t))&&t>-1&&t%1==0&&t-1&&t%1==0&&t<=Uk}function ra(t){return t!=null&&ag(t.length)&&!ng(t)}function Fk(t,e,n){if(!Pr(n))return!1;var i=typeof e;return(i=="number"?ra(n)&&rg(e,n.length):i=="string"&&e in n)?io(n[e],t):!1}function Bk(t){return kk(function(e,n){var i=-1,o=n.length,s=o>1?n[o-1]:void 0,l=o>2?n[2]:void 0;for(s=t.length>3&&typeof s=="function"?(o--,s):void 0,l&&Fk(n[0],n[1],l)&&(s=o<3?void 0:s,o=1),e=Object(e);++i-1}function nU(t,e){var n=this.__data__,i=Js(n,t);return i<0?(++this.size,n.push([t,e])):n[i][1]=e,this}function pr(t){var e=-1,n=t==null?0:t.length;for(this.clear();++eo?0:o+e),n=n>o?o:n,n<0&&(n+=o),o=e>n?0:n-e>>>0,e>>>=0;for(var s=Array(o);++i=i?t:AU(t,e,n)}var IU="\\ud800-\\udfff",DU="\\u0300-\\u036f",xU="\\ufe20-\\ufe2f",wU="\\u20d0-\\u20ff",MU=DU+xU+wU,LU="\\ufe0e\\ufe0f",PU="\\u200d",kU=RegExp("["+PU+IU+MU+LU+"]");function vR(t){return kU.test(t)}function UU(t){return t.split("")}var CR="\\ud800-\\udfff",FU="\\u0300-\\u036f",BU="\\ufe20-\\ufe2f",GU="\\u20d0-\\u20ff",YU=FU+BU+GU,qU="\\ufe0e\\ufe0f",$U="["+CR+"]",ym="["+YU+"]",Im="\\ud83c[\\udffb-\\udfff]",HU="(?:"+ym+"|"+Im+")",RR="[^"+CR+"]",NR="(?:\\ud83c[\\udde6-\\uddff]){2}",OR="[\\ud800-\\udbff][\\udc00-\\udfff]",zU="\\u200d",AR=HU+"?",yR="["+qU+"]?",VU="(?:"+zU+"(?:"+[RR,NR,OR].join("|")+")"+yR+AR+")*",WU=yR+AR+VU,KU="(?:"+[RR+ym+"?",ym,NR,OR,$U].join("|")+")",QU=RegExp(Im+"(?="+Im+")|"+KU+WU,"g");function XU(t){return t.match(QU)||[]}function ZU(t){return vR(t)?XU(t):UU(t)}function JU(t){return function(e){e=el(e);var n=vR(e)?ZU(e):void 0,i=n?n[0]:e.charAt(0),o=n?yU(n,1).join(""):e.slice(1);return i[t]()+o}}var jU=JU("toUpperCase");const eF=jU;function tF(t,e,n,i){var o=-1,s=t==null?0:t.length;for(i&&s&&(n=t[++o]);++oc))return!1;var _=s.get(t),p=s.get(e);if(_&&p)return _==e&&p==t;var g=-1,E=!0,f=n&NB?new Ys:void 0;for(s.set(t,e),s.set(e,t);++g{const p=s==null?void 0:s.value;n.mount({id:p===void 0?e:p+e,head:!0,props:{bPrefix:p?`.${p}-`:void 0},anchorMetaName:ja,ssr:l}),c!=null&&c.preflightStyleDisabled||KR.mount({id:"n-global",head:!0,anchorMetaName:ja,ssr:l})};l?_():Hm(_)}return le(()=>{var _;const{theme:{common:p,self:g,peers:E={}}={},themeOverrides:f={},builtinThemeOverrides:S={}}=o,{common:C,peers:h}=f,{common:T=void 0,[t]:{common:N=void 0,self:y=void 
0,peers:x={}}={}}=(c==null?void 0:c.mergedThemeRef.value)||{},{common:P=void 0,[t]:D={}}=(c==null?void 0:c.mergedThemeOverridesRef.value)||{},{common:k,peers:U={}}=D,W=Ns({},p||N||T||i.common,P,k,C),z=Ns((_=g||y||i.self)===null||_===void 0?void 0:_(W),S,D,f);return{common:W,self:z,peers:Ns({},i.peers,x,E),peerOverrides:Ns({},S.peers,U,h)}})}fn.props={theme:Object,themeOverrides:Object,builtinThemeOverrides:Object};const AG="n";function pi(t={},e={defaultBordered:!0}){const n=Ft(ia,null);return{inlineThemeDisabled:n==null?void 0:n.inlineThemeDisabled,mergedRtlRef:n==null?void 0:n.mergedRtlRef,mergedComponentPropsRef:n==null?void 0:n.mergedComponentPropsRef,mergedBreakpointsRef:n==null?void 0:n.mergedBreakpointsRef,mergedBorderedRef:le(()=>{var i,o;const{bordered:s}=t;return s!==void 0?s:(o=(i=n==null?void 0:n.mergedBorderedRef.value)!==null&&i!==void 0?i:e.defaultBordered)!==null&&o!==void 0?o:!0}),mergedClsPrefixRef:le(()=>(n==null?void 0:n.mergedClsPrefixRef.value)||AG),namespaceRef:le(()=>n==null?void 0:n.mergedNamespaceRef.value)}}const yG={name:"en-US",global:{undo:"Undo",redo:"Redo",confirm:"Confirm",clear:"Clear"},Popconfirm:{positiveText:"Confirm",negativeText:"Cancel"},Cascader:{placeholder:"Please Select",loading:"Loading",loadingRequiredMessage:t=>`Please load all ${t}'s descendants before checking it.`},Time:{dateFormat:"yyyy-MM-dd",dateTimeFormat:"yyyy-MM-dd HH:mm:ss"},DatePicker:{yearFormat:"yyyy",monthFormat:"MMM",dayFormat:"eeeeee",yearTypeFormat:"yyyy",monthTypeFormat:"yyyy-MM",dateFormat:"yyyy-MM-dd",dateTimeFormat:"yyyy-MM-dd HH:mm:ss",quarterFormat:"yyyy-qqq",clear:"Clear",now:"Now",confirm:"Confirm",selectTime:"Select Time",selectDate:"Select Date",datePlaceholder:"Select Date",datetimePlaceholder:"Select Date and Time",monthPlaceholder:"Select Month",yearPlaceholder:"Select Year",quarterPlaceholder:"Select Quarter",startDatePlaceholder:"Start Date",endDatePlaceholder:"End Date",startDatetimePlaceholder:"Start Date and Time",endDatetimePlaceholder:"End Date and Time",startMonthPlaceholder:"Start Month",endMonthPlaceholder:"End Month",monthBeforeYear:!0,firstDayOfWeek:6,today:"Today"},DataTable:{checkTableAll:"Select all in the table",uncheckTableAll:"Unselect all in the table",confirm:"Confirm",clear:"Clear"},LegacyTransfer:{sourceTitle:"Source",targetTitle:"Target"},Transfer:{selectAll:"Select all",unselectAll:"Unselect all",clearAll:"Clear",total:t=>`Total ${t} items`,selected:t=>`${t} items selected`},Empty:{description:"No Data"},Select:{placeholder:"Please Select"},TimePicker:{placeholder:"Select Time",positiveText:"OK",negativeText:"Cancel",now:"Now"},Pagination:{goto:"Goto",selectionSuffix:"page"},DynamicTags:{add:"Add"},Log:{loading:"Loading"},Input:{placeholder:"Please Input"},InputNumber:{placeholder:"Please Input"},DynamicInput:{create:"Create"},ThemeEditor:{title:"Theme Editor",clearAllVars:"Clear All Variables",clearSearch:"Clear Search",filterCompName:"Filter Component Name",filterVarName:"Filter Variable Name",import:"Import",export:"Export",restore:"Reset to Default"},Image:{tipPrevious:"Previous picture (←)",tipNext:"Next picture (→)",tipCounterclockwise:"Counterclockwise",tipClockwise:"Clockwise",tipZoomOut:"Zoom out",tipZoomIn:"Zoom in",tipClose:"Close (Esc)",tipOriginalSize:"Zoom to original size"}},IG=yG;function Tu(t){return function(){var e=arguments.length>0&&arguments[0]!==void 0?arguments[0]:{},n=e.width?String(e.width):t.defaultWidth,i=t.formats[n]||t.formats[t.defaultWidth];return i}}function Ua(t){return function(e,n){var 
i=n!=null&&n.context?String(n.context):"standalone",o;if(i==="formatting"&&t.formattingValues){var s=t.defaultFormattingWidth||t.defaultWidth,l=n!=null&&n.width?String(n.width):s;o=t.formattingValues[l]||t.formattingValues[s]}else{var c=t.defaultWidth,d=n!=null&&n.width?String(n.width):t.defaultWidth;o=t.values[d]||t.values[c]}var _=t.argumentCallback?t.argumentCallback(e):e;return o[_]}}function Fa(t){return function(e){var n=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{},i=n.width,o=i&&t.matchPatterns[i]||t.matchPatterns[t.defaultMatchWidth],s=e.match(o);if(!s)return null;var l=s[0],c=i&&t.parsePatterns[i]||t.parsePatterns[t.defaultParseWidth],d=Array.isArray(c)?xG(c,function(g){return g.test(l)}):DG(c,function(g){return g.test(l)}),_;_=t.valueCallback?t.valueCallback(d):d,_=n.valueCallback?n.valueCallback(_):_;var p=e.slice(l.length);return{value:_,rest:p}}}function DG(t,e){for(var n in t)if(t.hasOwnProperty(n)&&e(t[n]))return n}function xG(t,e){for(var n=0;n1&&arguments[1]!==void 0?arguments[1]:{},i=e.match(t.matchPattern);if(!i)return null;var o=i[0],s=e.match(t.parsePattern);if(!s)return null;var l=t.valueCallback?t.valueCallback(s[0]):s[0];l=n.valueCallback?n.valueCallback(l):l;var c=e.slice(o.length);return{value:l,rest:c}}}var MG={lessThanXSeconds:{one:"less than a second",other:"less than {{count}} seconds"},xSeconds:{one:"1 second",other:"{{count}} seconds"},halfAMinute:"half a minute",lessThanXMinutes:{one:"less than a minute",other:"less than {{count}} minutes"},xMinutes:{one:"1 minute",other:"{{count}} minutes"},aboutXHours:{one:"about 1 hour",other:"about {{count}} hours"},xHours:{one:"1 hour",other:"{{count}} hours"},xDays:{one:"1 day",other:"{{count}} days"},aboutXWeeks:{one:"about 1 week",other:"about {{count}} weeks"},xWeeks:{one:"1 week",other:"{{count}} weeks"},aboutXMonths:{one:"about 1 month",other:"about {{count}} months"},xMonths:{one:"1 month",other:"{{count}} months"},aboutXYears:{one:"about 1 year",other:"about {{count}} years"},xYears:{one:"1 year",other:"{{count}} years"},overXYears:{one:"over 1 year",other:"over {{count}} years"},almostXYears:{one:"almost 1 year",other:"almost {{count}} years"}},LG=function(e,n,i){var o,s=MG[e];return typeof s=="string"?o=s:n===1?o=s.one:o=s.other.replace("{{count}}",n.toString()),i!=null&&i.addSuffix?i.comparison&&i.comparison>0?"in "+o:o+" ago":o};const PG=LG;var kG={full:"EEEE, MMMM do, y",long:"MMMM do, y",medium:"MMM d, y",short:"MM/dd/yyyy"},UG={full:"h:mm:ss a zzzz",long:"h:mm:ss a z",medium:"h:mm:ss a",short:"h:mm a"},FG={full:"{{date}} 'at' {{time}}",long:"{{date}} 'at' {{time}}",medium:"{{date}}, {{time}}",short:"{{date}}, {{time}}"},BG={date:Tu({formats:kG,defaultWidth:"full"}),time:Tu({formats:UG,defaultWidth:"full"}),dateTime:Tu({formats:FG,defaultWidth:"full"})};const GG=BG;var YG={lastWeek:"'last' eeee 'at' p",yesterday:"'yesterday at' p",today:"'today at' p",tomorrow:"'tomorrow at' p",nextWeek:"eeee 'at' p",other:"P"},qG=function(e,n,i,o){return YG[e]};const $G=qG;var HG={narrow:["B","A"],abbreviated:["BC","AD"],wide:["Before Christ","Anno Domini"]},zG={narrow:["1","2","3","4"],abbreviated:["Q1","Q2","Q3","Q4"],wide:["1st quarter","2nd quarter","3rd quarter","4th 
quarter"]},VG={narrow:["J","F","M","A","M","J","J","A","S","O","N","D"],abbreviated:["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"],wide:["January","February","March","April","May","June","July","August","September","October","November","December"]},WG={narrow:["S","M","T","W","T","F","S"],short:["Su","Mo","Tu","We","Th","Fr","Sa"],abbreviated:["Sun","Mon","Tue","Wed","Thu","Fri","Sat"],wide:["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"]},KG={narrow:{am:"a",pm:"p",midnight:"mi",noon:"n",morning:"morning",afternoon:"afternoon",evening:"evening",night:"night"},abbreviated:{am:"AM",pm:"PM",midnight:"midnight",noon:"noon",morning:"morning",afternoon:"afternoon",evening:"evening",night:"night"},wide:{am:"a.m.",pm:"p.m.",midnight:"midnight",noon:"noon",morning:"morning",afternoon:"afternoon",evening:"evening",night:"night"}},QG={narrow:{am:"a",pm:"p",midnight:"mi",noon:"n",morning:"in the morning",afternoon:"in the afternoon",evening:"in the evening",night:"at night"},abbreviated:{am:"AM",pm:"PM",midnight:"midnight",noon:"noon",morning:"in the morning",afternoon:"in the afternoon",evening:"in the evening",night:"at night"},wide:{am:"a.m.",pm:"p.m.",midnight:"midnight",noon:"noon",morning:"in the morning",afternoon:"in the afternoon",evening:"in the evening",night:"at night"}},XG=function(e,n){var i=Number(e),o=i%100;if(o>20||o<10)switch(o%10){case 1:return i+"st";case 2:return i+"nd";case 3:return i+"rd"}return i+"th"},ZG={ordinalNumber:XG,era:Ua({values:HG,defaultWidth:"wide"}),quarter:Ua({values:zG,defaultWidth:"wide",argumentCallback:function(e){return e-1}}),month:Ua({values:VG,defaultWidth:"wide"}),day:Ua({values:WG,defaultWidth:"wide"}),dayPeriod:Ua({values:KG,defaultWidth:"wide",formattingValues:QG,defaultFormattingWidth:"wide"})};const JG=ZG;var jG=/^(\d+)(th|st|nd|rd)?/i,eY=/\d+/i,tY={narrow:/^(b|a)/i,abbreviated:/^(b\.?\s?c\.?|b\.?\s?c\.?\s?e\.?|a\.?\s?d\.?|c\.?\s?e\.?)/i,wide:/^(before christ|before common era|anno domini|common era)/i},nY={any:[/^b/i,/^(a|c)/i]},rY={narrow:/^[1234]/i,abbreviated:/^q[1234]/i,wide:/^[1234](th|st|nd|rd)? 
quarter/i},iY={any:[/1/i,/2/i,/3/i,/4/i]},aY={narrow:/^[jfmasond]/i,abbreviated:/^(jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)/i,wide:/^(january|february|march|april|may|june|july|august|september|october|november|december)/i},oY={narrow:[/^j/i,/^f/i,/^m/i,/^a/i,/^m/i,/^j/i,/^j/i,/^a/i,/^s/i,/^o/i,/^n/i,/^d/i],any:[/^ja/i,/^f/i,/^mar/i,/^ap/i,/^may/i,/^jun/i,/^jul/i,/^au/i,/^s/i,/^o/i,/^n/i,/^d/i]},sY={narrow:/^[smtwf]/i,short:/^(su|mo|tu|we|th|fr|sa)/i,abbreviated:/^(sun|mon|tue|wed|thu|fri|sat)/i,wide:/^(sunday|monday|tuesday|wednesday|thursday|friday|saturday)/i},lY={narrow:[/^s/i,/^m/i,/^t/i,/^w/i,/^t/i,/^f/i,/^s/i],any:[/^su/i,/^m/i,/^tu/i,/^w/i,/^th/i,/^f/i,/^sa/i]},cY={narrow:/^(a|p|mi|n|(in the|at) (morning|afternoon|evening|night))/i,any:/^([ap]\.?\s?m\.?|midnight|noon|(in the|at) (morning|afternoon|evening|night))/i},uY={any:{am:/^a/i,pm:/^p/i,midnight:/^mi/i,noon:/^no/i,morning:/morning/i,afternoon:/afternoon/i,evening:/evening/i,night:/night/i}},dY={ordinalNumber:wG({matchPattern:jG,parsePattern:eY,valueCallback:function(e){return parseInt(e,10)}}),era:Fa({matchPatterns:tY,defaultMatchWidth:"wide",parsePatterns:nY,defaultParseWidth:"any"}),quarter:Fa({matchPatterns:rY,defaultMatchWidth:"wide",parsePatterns:iY,defaultParseWidth:"any",valueCallback:function(e){return e+1}}),month:Fa({matchPatterns:aY,defaultMatchWidth:"wide",parsePatterns:oY,defaultParseWidth:"any"}),day:Fa({matchPatterns:sY,defaultMatchWidth:"wide",parsePatterns:lY,defaultParseWidth:"any"}),dayPeriod:Fa({matchPatterns:cY,defaultMatchWidth:"any",parsePatterns:uY,defaultParseWidth:"any"})};const _Y=dY;var pY={code:"en-US",formatDistance:PG,formatLong:GG,formatRelative:$G,localize:JG,match:_Y,options:{weekStartsOn:0,firstWeekContainsDate:1}};const mY=pY,gY={name:"en-US",locale:mY},EY=gY;function fY(t){const{mergedLocaleRef:e,mergedDateLocaleRef:n}=Ft(ia,null)||{},i=le(()=>{var s,l;return(l=(s=e==null?void 0:e.value)===null||s===void 0?void 0:s[t])!==null&&l!==void 0?l:IG[t]});return{dateLocaleRef:le(()=>{var s;return(s=n==null?void 0:n.value)!==null&&s!==void 0?s:EY}),localeRef:i}}function SY(t,e,n){if(!e)return;const i=ro(),o=Ft(ia,null),s=()=>{const l=n==null?void 0:n.value;e.mount({id:l===void 0?t:l+t,head:!0,anchorMetaName:ja,props:{bPrefix:l?`.${l}-`:void 0},ssr:i}),o!=null&&o.preflightStyleDisabled||KR.mount({id:"n-global",head:!0,anchorMetaName:ja,ssr:i})};i?s():Hm(s)}function _g(t,e,n,i){var o;n||sP("useThemeClass","cssVarsRef is not passed");const s=(o=Ft(ia,null))===null||o===void 0?void 0:o.mergedThemeHashRef,l=ee(""),c=ro();let d;const _=`__${t}`,p=()=>{let g=_;const E=e?e.value:void 0,f=s==null?void 0:s.value;f&&(g+="-"+f),E&&(g+="-"+E);const{themeOverrides:S,builtinThemeOverrides:C}=i;S&&(g+="-"+Cm(JSON.stringify(S))),C&&(g+="-"+Cm(JSON.stringify(C))),l.value=g,d=()=>{const h=n.value;let T="";for(const N in h)T+=`${N}: ${h[N]};`;je(`.${g}`,T).mount({id:g,ssr:c}),d=void 0}};return si(()=>{p()}),{themeClass:l,onRender:()=>{d==null||d()}}}function bY(t,e,n){if(!e)return;const i=ro(),o=le(()=>{const{value:l}=e;if(!l)return;const c=l[t];if(c)return c}),s=()=>{si(()=>{const{value:l}=n,c=`${l}${t}Rtl`;if(RP(c,i))return;const{value:d}=o;d&&d.style.mount({id:c,head:!0,anchorMetaName:ja,props:{bPrefix:l?`.${l}-`:void 0},ssr:i})})};return i?s():Hm(s),o}function rl(t,e){return be({name:eF(t),setup(){var n;const i=(n=Ft(ia,null))===null||n===void 0?void 0:n.mergedIconsRef;return()=>{var o;const s=(o=i==null?void 0:i.value)===null||o===void 0?void 0:o[t];return s?s():e}}})}const 
hY=rl("rotateClockwise",j("svg",{viewBox:"0 0 20 20",fill:"none",xmlns:"http://www.w3.org/2000/svg"},j("path",{d:"M3 10C3 6.13401 6.13401 3 10 3C13.866 3 17 6.13401 17 10C17 12.7916 15.3658 15.2026 13 16.3265V14.5C13 14.2239 12.7761 14 12.5 14C12.2239 14 12 14.2239 12 14.5V17.5C12 17.7761 12.2239 18 12.5 18H15.5C15.7761 18 16 17.7761 16 17.5C16 17.2239 15.7761 17 15.5 17H13.8758C16.3346 15.6357 18 13.0128 18 10C18 5.58172 14.4183 2 10 2C5.58172 2 2 5.58172 2 10C2 10.2761 2.22386 10.5 2.5 10.5C2.77614 10.5 3 10.2761 3 10Z",fill:"currentColor"}),j("path",{d:"M10 12C11.1046 12 12 11.1046 12 10C12 8.89543 11.1046 8 10 8C8.89543 8 8 8.89543 8 10C8 11.1046 8.89543 12 10 12ZM10 11C9.44772 11 9 10.5523 9 10C9 9.44772 9.44772 9 10 9C10.5523 9 11 9.44772 11 10C11 10.5523 10.5523 11 10 11Z",fill:"currentColor"}))),TY=rl("rotateClockwise",j("svg",{viewBox:"0 0 20 20",fill:"none",xmlns:"http://www.w3.org/2000/svg"},j("path",{d:"M17 10C17 6.13401 13.866 3 10 3C6.13401 3 3 6.13401 3 10C3 12.7916 4.63419 15.2026 7 16.3265V14.5C7 14.2239 7.22386 14 7.5 14C7.77614 14 8 14.2239 8 14.5V17.5C8 17.7761 7.77614 18 7.5 18H4.5C4.22386 18 4 17.7761 4 17.5C4 17.2239 4.22386 17 4.5 17H6.12422C3.66539 15.6357 2 13.0128 2 10C2 5.58172 5.58172 2 10 2C14.4183 2 18 5.58172 18 10C18 10.2761 17.7761 10.5 17.5 10.5C17.2239 10.5 17 10.2761 17 10Z",fill:"currentColor"}),j("path",{d:"M10 12C8.89543 12 8 11.1046 8 10C8 8.89543 8.89543 8 10 8C11.1046 8 12 8.89543 12 10C12 11.1046 11.1046 12 10 12ZM10 11C10.5523 11 11 10.5523 11 10C11 9.44772 10.5523 9 10 9C9.44772 9 9 9.44772 9 10C9 10.5523 9.44772 11 10 11Z",fill:"currentColor"}))),vY=rl("zoomIn",j("svg",{viewBox:"0 0 20 20",fill:"none",xmlns:"http://www.w3.org/2000/svg"},j("path",{d:"M11.5 8.5C11.5 8.22386 11.2761 8 11 8H9V6C9 5.72386 8.77614 5.5 8.5 5.5C8.22386 5.5 8 5.72386 8 6V8H6C5.72386 8 5.5 8.22386 5.5 8.5C5.5 8.77614 5.72386 9 6 9H8V11C8 11.2761 8.22386 11.5 8.5 11.5C8.77614 11.5 9 11.2761 9 11V9H11C11.2761 9 11.5 8.77614 11.5 8.5Z",fill:"currentColor"}),j("path",{d:"M8.5 3C11.5376 3 14 5.46243 14 8.5C14 9.83879 13.5217 11.0659 12.7266 12.0196L16.8536 16.1464C17.0488 16.3417 17.0488 16.6583 16.8536 16.8536C16.68 17.0271 16.4106 17.0464 16.2157 16.9114L16.1464 16.8536L12.0196 12.7266C11.0659 13.5217 9.83879 14 8.5 14C5.46243 14 3 11.5376 3 8.5C3 5.46243 5.46243 3 8.5 3ZM8.5 4C6.01472 4 4 6.01472 4 8.5C4 10.9853 6.01472 13 8.5 13C10.9853 13 13 10.9853 13 8.5C13 6.01472 10.9853 4 8.5 4Z",fill:"currentColor"}))),CY=rl("zoomOut",j("svg",{viewBox:"0 0 20 20",fill:"none",xmlns:"http://www.w3.org/2000/svg"},j("path",{d:"M11 8C11.2761 8 11.5 8.22386 11.5 8.5C11.5 8.77614 11.2761 9 11 9H6C5.72386 9 5.5 8.77614 5.5 8.5C5.5 8.22386 5.72386 8 6 8H11Z",fill:"currentColor"}),j("path",{d:"M14 8.5C14 5.46243 11.5376 3 8.5 3C5.46243 3 3 5.46243 3 8.5C3 11.5376 5.46243 14 8.5 14C9.83879 14 11.0659 13.5217 12.0196 12.7266L16.1464 16.8536L16.2157 16.9114C16.4106 17.0464 16.68 17.0271 16.8536 16.8536C17.0488 16.6583 17.0488 16.3417 16.8536 16.1464L12.7266 12.0196C13.5217 11.0659 14 9.83879 14 8.5ZM4 8.5C4 6.01472 6.01472 4 8.5 4C10.9853 4 13 6.01472 13 8.5C13 10.9853 10.9853 13 8.5 13C6.01472 13 4 10.9853 4 8.5Z",fill:"currentColor"}))),RY=be({name:"ResizeSmall",render(){return j("svg",{xmlns:"http://www.w3.org/2000/svg",viewBox:"0 0 20 20"},j("g",{fill:"none"},j("path",{d:"M5.5 4A1.5 1.5 0 0 0 4 5.5v1a.5.5 0 0 1-1 0v-1A2.5 2.5 0 0 1 5.5 3h1a.5.5 0 0 1 0 1h-1zM16 5.5A1.5 1.5 0 0 0 14.5 4h-1a.5.5 0 0 1 0-1h1A2.5 2.5 0 0 1 17 5.5v1a.5.5 0 0 1-1 0v-1zm0 9a1.5 1.5 0 0 1-1.5 1.5h-1a.5.5 0 0 0 0 
1h1a2.5 2.5 0 0 0 2.5-2.5v-1a.5.5 0 0 0-1 0v1zm-12 0A1.5 1.5 0 0 0 5.5 16h1.25a.5.5 0 0 1 0 1H5.5A2.5 2.5 0 0 1 3 14.5v-1.25a.5.5 0 0 1 1 0v1.25zM8.5 7A1.5 1.5 0 0 0 7 8.5v3A1.5 1.5 0 0 0 8.5 13h3a1.5 1.5 0 0 0 1.5-1.5v-3A1.5 1.5 0 0 0 11.5 7h-3zM8 8.5a.5.5 0 0 1 .5-.5h3a.5.5 0 0 1 .5.5v3a.5.5 0 0 1-.5.5h-3a.5.5 0 0 1-.5-.5v-3z",fill:"currentColor"})))}}),NY=St("base-icon",` - height: 1em; - width: 1em; - line-height: 1em; - text-align: center; - display: inline-block; - position: relative; - fill: currentColor; - transform: translateZ(0); -`,[je("svg",` - height: 1em; - width: 1em; - `)]),Or=be({name:"BaseIcon",props:{role:String,ariaLabel:String,ariaDisabled:{type:Boolean,default:void 0},ariaHidden:{type:Boolean,default:void 0},clsPrefix:{type:String,required:!0},onClick:Function,onMousedown:Function,onMouseup:Function},setup(t){SY("-base-icon",NY,yt(t,"clsPrefix"))},render(){return j("i",{class:`${this.clsPrefix}-base-icon`,onClick:this.onClick,onMousedown:this.onMousedown,onMouseup:this.onMouseup,role:this.role,"aria-label":this.ariaLabel,"aria-hidden":this.ariaHidden,"aria-disabled":this.ariaDisabled},this.$slots)}}),Se={neutralBase:"#FFF",neutralInvertBase:"#000",neutralTextBase:"#000",neutralPopover:"#fff",neutralCard:"#fff",neutralModal:"#fff",neutralBody:"#fff",alpha1:"0.82",alpha2:"0.72",alpha3:"0.38",alpha4:"0.24",alpha5:"0.18",alphaClose:"0.6",alphaDisabled:"0.5",alphaDisabledInput:"0.02",alphaPending:"0.05",alphaTablePending:"0.02",alphaPressed:"0.07",alphaAvatar:"0.2",alphaRail:"0.14",alphaProgressRail:".08",alphaBorder:"0.12",alphaDivider:"0.06",alphaInput:"0",alphaAction:"0.02",alphaTab:"0.04",alphaScrollbar:"0.25",alphaScrollbarHover:"0.4",alphaCode:"0.05",alphaTag:"0.02",primaryHover:"#36ad6a",primaryDefault:"#18a058",primaryActive:"#0c7a43",primarySuppl:"#36ad6a",infoHover:"#4098fc",infoDefault:"#2080f0",infoActive:"#1060c9",infoSuppl:"#4098fc",errorHover:"#de576d",errorDefault:"#d03050",errorActive:"#ab1f3f",errorSuppl:"#de576d",warningHover:"#fcb040",warningDefault:"#f0a020",warningActive:"#c97c10",warningSuppl:"#fcb040",successHover:"#36ad6a",successDefault:"#18a058",successActive:"#0c7a43",successSuppl:"#36ad6a"},OY=Ki(Se.neutralBase),QR=Ki(Se.neutralInvertBase),AY="rgba("+QR.slice(0,3).join(", ")+", ";function HS(t){return AY+String(t)+")"}function Kt(t){const e=Array.from(QR);return e[3]=Number(t),PC(OY,e)}const yY=Object.assign(Object.assign({name:"common"},nl),{baseColor:Se.neutralBase,primaryColor:Se.primaryDefault,primaryColorHover:Se.primaryHover,primaryColorPressed:Se.primaryActive,primaryColorSuppl:Se.primarySuppl,infoColor:Se.infoDefault,infoColorHover:Se.infoHover,infoColorPressed:Se.infoActive,infoColorSuppl:Se.infoSuppl,successColor:Se.successDefault,successColorHover:Se.successHover,successColorPressed:Se.successActive,successColorSuppl:Se.successSuppl,warningColor:Se.warningDefault,warningColorHover:Se.warningHover,warningColorPressed:Se.warningActive,warningColorSuppl:Se.warningSuppl,errorColor:Se.errorDefault,errorColorHover:Se.errorHover,errorColorPressed:Se.errorActive,errorColorSuppl:Se.errorSuppl,textColorBase:Se.neutralTextBase,textColor1:"rgb(31, 34, 37)",textColor2:"rgb(51, 54, 57)",textColor3:"rgb(118, 124, 
130)",textColorDisabled:Kt(Se.alpha4),placeholderColor:Kt(Se.alpha4),placeholderColorDisabled:Kt(Se.alpha5),iconColor:Kt(Se.alpha4),iconColorHover:Es(Kt(Se.alpha4),{lightness:.75}),iconColorPressed:Es(Kt(Se.alpha4),{lightness:.9}),iconColorDisabled:Kt(Se.alpha5),opacity1:Se.alpha1,opacity2:Se.alpha2,opacity3:Se.alpha3,opacity4:Se.alpha4,opacity5:Se.alpha5,dividerColor:"rgb(239, 239, 245)",borderColor:"rgb(224, 224, 230)",closeIconColor:Kt(Number(Se.alphaClose)),closeIconColorHover:Kt(Number(Se.alphaClose)),closeIconColorPressed:Kt(Number(Se.alphaClose)),closeColorHover:"rgba(0, 0, 0, .09)",closeColorPressed:"rgba(0, 0, 0, .13)",clearColor:Kt(Se.alpha4),clearColorHover:Es(Kt(Se.alpha4),{lightness:.75}),clearColorPressed:Es(Kt(Se.alpha4),{lightness:.9}),scrollbarColor:HS(Se.alphaScrollbar),scrollbarColorHover:HS(Se.alphaScrollbarHover),scrollbarWidth:"5px",scrollbarHeight:"5px",scrollbarBorderRadius:"5px",progressRailColor:Kt(Se.alphaProgressRail),railColor:"rgb(219, 219, 223)",popoverColor:Se.neutralPopover,tableColor:Se.neutralCard,cardColor:Se.neutralCard,modalColor:Se.neutralModal,bodyColor:Se.neutralBody,tagColor:"#eee",avatarColor:Kt(Se.alphaAvatar),invertedColor:"rgb(0, 20, 40)",inputColor:Kt(Se.alphaInput),codeColor:"rgb(244, 244, 248)",tabColor:"rgb(247, 247, 250)",actionColor:"rgb(250, 250, 252)",tableHeaderColor:"rgb(250, 250, 252)",hoverColor:"rgb(243, 243, 245)",tableColorHover:"rgba(0, 0, 100, 0.03)",tableColorStriped:"rgba(0, 0, 100, 0.02)",pressedColor:"rgb(237, 237, 239)",opacityDisabled:Se.alphaDisabled,inputColorDisabled:"rgb(250, 250, 252)",buttonColor2:"rgba(46, 51, 56, .05)",buttonColor2Hover:"rgba(46, 51, 56, .09)",buttonColor2Pressed:"rgba(46, 51, 56, .13)",boxShadow1:"0 1px 2px -2px rgba(0, 0, 0, .08), 0 3px 6px 0 rgba(0, 0, 0, .06), 0 5px 12px 4px rgba(0, 0, 0, .04)",boxShadow2:"0 3px 6px -4px rgba(0, 0, 0, .12), 0 6px 16px 0 rgba(0, 0, 0, .08), 0 9px 28px 8px rgba(0, 0, 0, .05)",boxShadow3:"0 6px 16px -9px rgba(0, 0, 0, .08), 0 9px 28px 0 rgba(0, 0, 0, .05), 0 12px 48px 16px rgba(0, 0, 0, .03)"}),ao=yY,IY=t=>{const{scrollbarColor:e,scrollbarColorHover:n}=t;return{color:e,colorHover:n}},DY={name:"Scrollbar",common:ao,self:IY},xY=DY,{cubicBezierEaseInOut:zS}=nl;function Pm({name:t="fade-in",enterDuration:e="0.2s",leaveDuration:n="0.2s",enterCubicBezier:i=zS,leaveCubicBezier:o=zS}={}){return[je(`&.${t}-transition-enter-active`,{transition:`all ${e} ${i}!important`}),je(`&.${t}-transition-leave-active`,{transition:`all ${n} ${o}!important`}),je(`&.${t}-transition-enter-from, &.${t}-transition-leave-to`,{opacity:0}),je(`&.${t}-transition-leave-from, &.${t}-transition-enter-to`,{opacity:1})]}const wY=St("scrollbar",` - overflow: hidden; - position: relative; - z-index: auto; - height: 100%; - width: 100%; -`,[je(">",[St("scrollbar-container",` - width: 100%; - overflow: scroll; - height: 100%; - max-height: inherit; - scrollbar-width: none; - `,[je("&::-webkit-scrollbar, &::-webkit-scrollbar-track-piece, &::-webkit-scrollbar-thumb",` - width: 0; - height: 0; - display: none; - `),je(">",[St("scrollbar-content",` - box-sizing: border-box; - min-width: 100%; - `)])])]),je(">, +",[St("scrollbar-rail",` - position: absolute; - pointer-events: none; - user-select: none; - -webkit-user-select: none; - `,[_r("horizontal",` - left: 2px; - right: 2px; - bottom: 4px; - height: var(--n-scrollbar-height); - `,[je(">",[jr("scrollbar",` - height: var(--n-scrollbar-height); - border-radius: var(--n-scrollbar-border-radius); - right: 0; - `)])]),_r("vertical",` - right: 4px; - top: 2px; 
- bottom: 2px; - width: var(--n-scrollbar-width); - `,[je(">",[jr("scrollbar",` - width: var(--n-scrollbar-width); - border-radius: var(--n-scrollbar-border-radius); - bottom: 0; - `)])]),_r("disabled",[je(">",[jr("scrollbar",{pointerEvents:"none"})])]),je(">",[jr("scrollbar",` - position: absolute; - cursor: pointer; - pointer-events: all; - background-color: var(--n-scrollbar-color); - transition: background-color .2s var(--n-scrollbar-bezier); - `,[Pm(),je("&:hover",{backgroundColor:"var(--n-scrollbar-color-hover)"})])])])])]),MY=Object.assign(Object.assign({},fn.props),{size:{type:Number,default:5},duration:{type:Number,default:0},scrollable:{type:Boolean,default:!0},xScrollable:Boolean,trigger:{type:String,default:"hover"},useUnifiedContainer:Boolean,triggerDisplayManually:Boolean,container:Function,content:Function,containerClass:String,containerStyle:[String,Object],contentClass:String,contentStyle:[String,Object],horizontalRailStyle:[String,Object],verticalRailStyle:[String,Object],onScroll:Function,onWheel:Function,onResize:Function,internalOnUpdateScrollLeft:Function,internalHoistYRail:Boolean}),XR=be({name:"Scrollbar",props:MY,inheritAttrs:!1,setup(t){const{mergedClsPrefixRef:e,inlineThemeDisabled:n,mergedRtlRef:i}=pi(t),o=bY("Scrollbar",i,e),s=ee(null),l=ee(null),c=ee(null),d=ee(null),_=ee(null),p=ee(null),g=ee(null),E=ee(null),f=ee(null),S=ee(null),C=ee(null),h=ee(0),T=ee(0),N=ee(!1),y=ee(!1);let x=!1,P=!1,D,k,U=0,W=0,z=0,K=0;const Ee=GP(),oe=le(()=>{const{value:Z}=E,{value:ge}=p,{value:Ae}=S;return Z===null||ge===null||Ae===null?0:Math.min(Z,Ae*Z/ge+t.size*1.5)}),L=le(()=>`${oe.value}px`),J=le(()=>{const{value:Z}=f,{value:ge}=g,{value:Ae}=C;return Z===null||ge===null||Ae===null?0:Ae*Z/ge+t.size*1.5}),re=le(()=>`${J.value}px`),G=le(()=>{const{value:Z}=E,{value:ge}=h,{value:Ae}=p,{value:it}=S;if(Z===null||Ae===null||it===null)return 0;{const ht=Ae-Z;return ht?ge/ht*(it-oe.value):0}}),X=le(()=>`${G.value}px`),_e=le(()=>{const{value:Z}=f,{value:ge}=T,{value:Ae}=g,{value:it}=C;if(Z===null||Ae===null||it===null)return 0;{const ht=Ae-Z;return ht?ge/ht*(it-J.value):0}}),ve=le(()=>`${_e.value}px`),he=le(()=>{const{value:Z}=E,{value:ge}=p;return Z!==null&&ge!==null&&ge>Z}),tt=le(()=>{const{value:Z}=f,{value:ge}=g;return Z!==null&&ge!==null&&ge>Z}),lt=le(()=>{const{trigger:Z}=t;return Z==="none"||N.value}),$e=le(()=>{const{trigger:Z}=t;return Z==="none"||y.value}),Ce=le(()=>{const{container:Z}=t;return Z?Z():l.value}),Be=le(()=>{const{content:Z}=t;return Z?Z():c.value}),Ve=P0(()=>{t.container||rt({top:h.value,left:T.value})}),xe=()=>{Ve.isDeactivated||Nt()},He=Z=>{if(Ve.isDeactivated)return;const{onResize:ge}=t;ge&&ge(Z),Nt()},rt=(Z,ge)=>{if(!t.scrollable)return;if(typeof Z=="number"){te(ge??0,Z,0,!1,"auto");return}const{left:Ae,top:it,index:ht,elSize:wt,position:tn,behavior:mt,el:ln,debounce:tr=!0}=Z;(Ae!==void 0||it!==void 0)&&te(Ae??0,it??0,0,!1,mt),ln!==void 0?te(0,ln.offsetTop,ln.offsetHeight,tr,mt):ht!==void 0&&wt!==void 0?te(0,ht*wt,wt,tr,mt):tn==="bottom"?te(0,Number.MAX_SAFE_INTEGER,0,!1,mt):tn==="top"&&te(0,0,0,!1,mt)},We=(Z,ge)=>{if(!t.scrollable)return;const{value:Ae}=Ce;Ae&&(typeof Z=="object"?Ae.scrollBy(Z):Ae.scrollBy(Z,ge||0))};function te(Z,ge,Ae,it,ht){const{value:wt}=Ce;if(wt){if(it){const{scrollTop:tn,offsetHeight:mt}=wt;if(ge>tn){ge+Ae<=tn+mt||wt.scrollTo({left:Z,top:ge+Ae-mt,behavior:ht});return}}wt.scrollTo({left:Z,top:ge,behavior:ht})}}function pe(){pt(),me(),Nt()}function ie(){Pe()}function Pe(){we(),Xe()}function we(){k!==void 
0&&window.clearTimeout(k),k=window.setTimeout(()=>{y.value=!1},t.duration)}function Xe(){D!==void 0&&window.clearTimeout(D),D=window.setTimeout(()=>{N.value=!1},t.duration)}function pt(){D!==void 0&&window.clearTimeout(D),N.value=!0}function me(){k!==void 0&&window.clearTimeout(k),y.value=!0}function bt(Z){const{onScroll:ge}=t;ge&&ge(Z),Ue()}function Ue(){const{value:Z}=Ce;Z&&(h.value=Z.scrollTop,T.value=Z.scrollLeft*(o!=null&&o.value?-1:1))}function Ie(){const{value:Z}=Be;Z&&(p.value=Z.offsetHeight,g.value=Z.offsetWidth);const{value:ge}=Ce;ge&&(E.value=ge.offsetHeight,f.value=ge.offsetWidth);const{value:Ae}=_,{value:it}=d;Ae&&(C.value=Ae.offsetWidth),it&&(S.value=it.offsetHeight)}function zt(){const{value:Z}=Ce;Z&&(h.value=Z.scrollTop,T.value=Z.scrollLeft*(o!=null&&o.value?-1:1),E.value=Z.offsetHeight,f.value=Z.offsetWidth,p.value=Z.scrollHeight,g.value=Z.scrollWidth);const{value:ge}=_,{value:Ae}=d;ge&&(C.value=ge.offsetWidth),Ae&&(S.value=Ae.offsetHeight)}function Nt(){t.scrollable&&(t.useUnifiedContainer?zt():(Ie(),Ue()))}function Gt(Z){var ge;return!(!((ge=s.value)===null||ge===void 0)&&ge.contains(ks(Z)))}function Sn(Z){Z.preventDefault(),Z.stopPropagation(),P=!0,Ht("mousemove",window,ne,!0),Ht("mouseup",window,ce,!0),W=T.value,z=o!=null&&o.value?window.innerWidth-Z.clientX:Z.clientX}function ne(Z){if(!P)return;D!==void 0&&window.clearTimeout(D),k!==void 0&&window.clearTimeout(k);const{value:ge}=f,{value:Ae}=g,{value:it}=J;if(ge===null||Ae===null)return;const wt=(o!=null&&o.value?window.innerWidth-Z.clientX-z:Z.clientX-z)*(Ae-ge)/(ge-it),tn=Ae-ge;let mt=W+wt;mt=Math.min(tn,mt),mt=Math.max(mt,0);const{value:ln}=Ce;if(ln){ln.scrollLeft=mt*(o!=null&&o.value?-1:1);const{internalOnUpdateScrollLeft:tr}=t;tr&&tr(mt)}}function ce(Z){Z.preventDefault(),Z.stopPropagation(),Rt("mousemove",window,ne,!0),Rt("mouseup",window,ce,!0),P=!1,Nt(),Gt(Z)&&Pe()}function Oe(Z){Z.preventDefault(),Z.stopPropagation(),x=!0,Ht("mousemove",window,Me,!0),Ht("mouseup",window,ct,!0),U=h.value,K=Z.clientY}function Me(Z){if(!x)return;D!==void 0&&window.clearTimeout(D),k!==void 0&&window.clearTimeout(k);const{value:ge}=E,{value:Ae}=p,{value:it}=oe;if(ge===null||Ae===null)return;const wt=(Z.clientY-K)*(Ae-ge)/(ge-it),tn=Ae-ge;let mt=U+wt;mt=Math.min(tn,mt),mt=Math.max(mt,0);const{value:ln}=Ce;ln&&(ln.scrollTop=mt)}function ct(Z){Z.preventDefault(),Z.stopPropagation(),Rt("mousemove",window,Me,!0),Rt("mouseup",window,ct,!0),x=!1,Nt(),Gt(Z)&&Pe()}si(()=>{const{value:Z}=tt,{value:ge}=he,{value:Ae}=e,{value:it}=_,{value:ht}=d;it&&(Z?it.classList.remove(`${Ae}-scrollbar-rail--disabled`):it.classList.add(`${Ae}-scrollbar-rail--disabled`)),ht&&(ge?ht.classList.remove(`${Ae}-scrollbar-rail--disabled`):ht.classList.add(`${Ae}-scrollbar-rail--disabled`))}),kn(()=>{t.container||Nt()}),Kn(()=>{D!==void 0&&window.clearTimeout(D),k!==void 0&&window.clearTimeout(k),Rt("mousemove",window,Me,!0),Rt("mouseup",window,ct,!0)});const xt=fn("Scrollbar","-scrollbar",wY,xY,t,e),Ze=le(()=>{const{common:{cubicBezierEaseInOut:Z,scrollbarBorderRadius:ge,scrollbarHeight:Ae,scrollbarWidth:it},self:{color:ht,colorHover:wt}}=xt.value;return{"--n-scrollbar-bezier":Z,"--n-scrollbar-color":ht,"--n-scrollbar-color-hover":wt,"--n-scrollbar-border-radius":ge,"--n-scrollbar-width":it,"--n-scrollbar-height":Ae}}),Yt=n?_g("scrollbar",void 0,Ze,t):void 0;return 
Object.assign(Object.assign({},{scrollTo:rt,scrollBy:We,sync:Nt,syncUnifiedContainer:zt,handleMouseEnterWrapper:pe,handleMouseLeaveWrapper:ie}),{mergedClsPrefix:e,rtlEnabled:o,containerScrollTop:h,wrapperRef:s,containerRef:l,contentRef:c,yRailRef:d,xRailRef:_,needYBar:he,needXBar:tt,yBarSizePx:L,xBarSizePx:re,yBarTopPx:X,xBarLeftPx:ve,isShowXBar:lt,isShowYBar:$e,isIos:Ee,handleScroll:bt,handleContentResize:xe,handleContainerResize:He,handleYScrollMouseDown:Oe,handleXScrollMouseDown:Sn,cssVars:n?void 0:Ze,themeClass:Yt==null?void 0:Yt.themeClass,onRender:Yt==null?void 0:Yt.onRender})},render(){var t;const{$slots:e,mergedClsPrefix:n,triggerDisplayManually:i,rtlEnabled:o,internalHoistYRail:s}=this;if(!this.scrollable)return(t=e.default)===null||t===void 0?void 0:t.call(e);const l=this.trigger==="none",c=()=>j("div",{ref:"yRailRef",class:[`${n}-scrollbar-rail`,`${n}-scrollbar-rail--vertical`],"data-scrollbar-rail":!0,style:this.verticalRailStyle,"aria-hidden":!0},j(l?zf:Hi,l?null:{name:"fade-in-transition"},{default:()=>this.needYBar&&this.isShowYBar&&!this.isIos?j("div",{class:`${n}-scrollbar-rail__scrollbar`,style:{height:this.yBarSizePx,top:this.yBarTopPx},onMousedown:this.handleYScrollMouseDown}):null})),d=()=>{var p,g;return(p=this.onRender)===null||p===void 0||p.call(this),j("div",zm(this.$attrs,{role:"none",ref:"wrapperRef",class:[`${n}-scrollbar`,this.themeClass,o&&`${n}-scrollbar--rtl`],style:this.cssVars,onMouseenter:i?void 0:this.handleMouseEnterWrapper,onMouseleave:i?void 0:this.handleMouseLeaveWrapper}),[this.container?(g=e.default)===null||g===void 0?void 0:g.call(e):j("div",{role:"none",ref:"containerRef",class:[`${n}-scrollbar-container`,this.containerClass],style:this.containerStyle,onScroll:this.handleScroll,onWheel:this.onWheel},j(pS,{onResize:this.handleContentResize},{default:()=>j("div",{ref:"contentRef",role:"none",style:[{width:this.xScrollable?"fit-content":null},this.contentStyle],class:[`${n}-scrollbar-content`,this.contentClass]},e)})),s?null:c(),this.xScrollable&&j("div",{ref:"xRailRef",class:[`${n}-scrollbar-rail`,`${n}-scrollbar-rail--horizontal`],style:this.horizontalRailStyle,"data-scrollbar-rail":!0,"aria-hidden":!0},j(l?zf:Hi,l?null:{name:"fade-in-transition"},{default:()=>this.needXBar&&this.isShowXBar&&!this.isIos?j("div",{class:`${n}-scrollbar-rail__scrollbar`,style:{width:this.xBarSizePx,right:o?this.xBarLeftPx:void 0,left:o?void 0:this.xBarLeftPx},onMousedown:this.handleXScrollMouseDown}):null}))])},_=this.container?d():j(pS,{onResize:this.handleContainerResize},{default:d});return s?j(st,null,_,c()):_}}),LY=XR,PY=XR,{cubicBezierEaseIn:VS,cubicBezierEaseOut:WS}=nl;function kY({transformOrigin:t="inherit",duration:e=".2s",enterScale:n=".9",originalTransform:i="",originalTransition:o=""}={}){return[je("&.fade-in-scale-up-transition-leave-active",{transformOrigin:t,transition:`opacity ${e} ${VS}, transform ${e} ${VS} ${o&&","+o}`}),je("&.fade-in-scale-up-transition-enter-active",{transformOrigin:t,transition:`opacity ${e} ${WS}, transform ${e} ${WS} ${o&&","+o}`}),je("&.fade-in-scale-up-transition-enter-from, &.fade-in-scale-up-transition-leave-to",{opacity:0,transform:`${i} scale(${n})`}),je("&.fade-in-scale-up-transition-leave-from, &.fade-in-scale-up-transition-enter-to",{opacity:1,transform:`${i} scale(1)`})]}const UY={space:"6px",spaceArrow:"10px",arrowOffset:"10px",arrowOffsetVertical:"10px",arrowHeight:"6px",padding:"8px 14px"},FY=t=>{const{boxShadow2:e,popoverColor:n,textColor2:i,borderRadius:o,fontSize:s,dividerColor:l}=t;return 
Object.assign(Object.assign({},UY),{fontSize:s,borderRadius:o,color:n,dividerColor:l,textColor:i,boxShadow:e})},BY={name:"Popover",common:ao,self:FY},ZR=BY,vu={top:"bottom",bottom:"top",left:"right",right:"left"},Pt="var(--n-arrow-height) * 1.414",GY=je([St("popover",` - transition: - box-shadow .3s var(--n-bezier), - background-color .3s var(--n-bezier), - color .3s var(--n-bezier); - position: relative; - font-size: var(--n-font-size); - color: var(--n-text-color); - box-shadow: var(--n-box-shadow); - word-break: break-word; - `,[je(">",[St("scrollbar",` - height: inherit; - max-height: inherit; - `)]),$a("raw",` - background-color: var(--n-color); - border-radius: var(--n-border-radius); - `,[$a("scrollable",[$a("show-header-or-footer","padding: var(--n-padding);")])]),jr("header",` - padding: var(--n-padding); - border-bottom: 1px solid var(--n-divider-color); - transition: border-color .3s var(--n-bezier); - `),jr("footer",` - padding: var(--n-padding); - border-top: 1px solid var(--n-divider-color); - transition: border-color .3s var(--n-bezier); - `),_r("scrollable, show-header-or-footer",[jr("content",` - padding: var(--n-padding); - `)])]),St("popover-shared",` - transform-origin: inherit; - `,[St("popover-arrow-wrapper",` - position: absolute; - overflow: hidden; - pointer-events: none; - `,[St("popover-arrow",` - transition: background-color .3s var(--n-bezier); - position: absolute; - display: block; - width: calc(${Pt}); - height: calc(${Pt}); - box-shadow: 0 0 8px 0 rgba(0, 0, 0, .12); - transform: rotate(45deg); - background-color: var(--n-color); - pointer-events: all; - `)]),je("&.popover-transition-enter-from, &.popover-transition-leave-to",` - opacity: 0; - transform: scale(.85); - `),je("&.popover-transition-enter-to, &.popover-transition-leave-from",` - transform: scale(1); - opacity: 1; - `),je("&.popover-transition-enter-active",` - transition: - box-shadow .3s var(--n-bezier), - background-color .3s var(--n-bezier), - color .3s var(--n-bezier), - opacity .15s var(--n-bezier-ease-out), - transform .15s var(--n-bezier-ease-out); - `),je("&.popover-transition-leave-active",` - transition: - box-shadow .3s var(--n-bezier), - background-color .3s var(--n-bezier), - color .3s var(--n-bezier), - opacity .15s var(--n-bezier-ease-in), - transform .15s var(--n-bezier-ease-in); - `)]),An("top-start",` - top: calc(${Pt} / -2); - left: calc(${dr("top-start")} - var(--v-offset-left)); - `),An("top",` - top: calc(${Pt} / -2); - transform: translateX(calc(${Pt} / -2)) rotate(45deg); - left: 50%; - `),An("top-end",` - top: calc(${Pt} / -2); - right: calc(${dr("top-end")} + var(--v-offset-left)); - `),An("bottom-start",` - bottom: calc(${Pt} / -2); - left: calc(${dr("bottom-start")} - var(--v-offset-left)); - `),An("bottom",` - bottom: calc(${Pt} / -2); - transform: translateX(calc(${Pt} / -2)) rotate(45deg); - left: 50%; - `),An("bottom-end",` - bottom: calc(${Pt} / -2); - right: calc(${dr("bottom-end")} + var(--v-offset-left)); - `),An("left-start",` - left: calc(${Pt} / -2); - top: calc(${dr("left-start")} - var(--v-offset-top)); - `),An("left",` - left: calc(${Pt} / -2); - transform: translateY(calc(${Pt} / -2)) rotate(45deg); - top: 50%; - `),An("left-end",` - left: calc(${Pt} / -2); - bottom: calc(${dr("left-end")} + var(--v-offset-top)); - `),An("right-start",` - right: calc(${Pt} / -2); - top: calc(${dr("right-start")} - var(--v-offset-top)); - `),An("right",` - right: calc(${Pt} / -2); - transform: translateY(calc(${Pt} / -2)) rotate(45deg); - top: 50%; - `),An("right-end",` 
- right: calc(${Pt} / -2); - bottom: calc(${dr("right-end")} + var(--v-offset-top)); - `),...hG({top:["right-start","left-start"],right:["top-end","bottom-end"],bottom:["right-end","left-end"],left:["top-start","bottom-start"]},(t,e)=>{const n=["right","left"].includes(e),i=n?"width":"height";return t.map(o=>{const s=o.split("-")[1]==="end",c=`calc((${`var(--v-target-${i}, 0px)`} - ${Pt}) / 2)`,d=dr(o);return je(`[v-placement="${o}"] >`,[St("popover-shared",[_r("center-arrow",[St("popover-arrow",`${e}: calc(max(${c}, ${d}) ${s?"+":"-"} var(--v-offset-${n?"left":"top"}));`)])])])})})]);function dr(t){return["top","bottom"].includes(t.split("-")[0])?"var(--n-arrow-offset)":"var(--n-arrow-offset-vertical)"}function An(t,e){const n=t.split("-")[0],i=["top","bottom"].includes(n)?"height: var(--n-space-arrow);":"width: var(--n-space-arrow);";return je(`[v-placement="${t}"] >`,[St("popover-shared",` - margin-${vu[n]}: var(--n-space); - `,[_r("show-arrow",` - margin-${vu[n]}: var(--n-space-arrow); - `),_r("overlap",` - margin: 0; - `),DP("popover-arrow-wrapper",` - right: 0; - left: 0; - top: 0; - bottom: 0; - ${n}: 100%; - ${vu[n]}: auto; - ${i} - `,[St("popover-arrow",e)])])])}const JR=Object.assign(Object.assign({},fn.props),{to:Qi.propTo,show:Boolean,trigger:String,showArrow:Boolean,delay:Number,duration:Number,raw:Boolean,arrowPointToCenter:Boolean,arrowStyle:[String,Object],displayDirective:String,x:Number,y:Number,flip:Boolean,overlap:Boolean,placement:String,width:[Number,String],keepAliveOnHover:Boolean,scrollable:Boolean,contentStyle:[Object,String],headerStyle:[Object,String],footerStyle:[Object,String],internalDeactivateImmediately:Boolean,animated:Boolean,onClickoutside:Function,internalTrapFocus:Boolean,internalOnAfterLeave:Function,minWidth:Number,maxWidth:Number}),YY=({arrowStyle:t,clsPrefix:e})=>j("div",{key:"__popover-arrow__",class:`${e}-popover-arrow-wrapper`},j("div",{class:`${e}-popover-arrow`,style:t})),qY=be({name:"PopoverBody",inheritAttrs:!1,props:JR,setup(t,{slots:e,attrs:n}){const{namespaceRef:i,mergedClsPrefixRef:o,inlineThemeDisabled:s}=pi(t),l=fn("Popover","-popover",GY,ZR,t,o),c=ee(null),d=Ft("NPopover"),_=ee(null),p=ee(t.show),g=ee(!1);si(()=>{const{show:k}=t;k&&!xP()&&!t.internalDeactivateImmediately&&(g.value=!0)});const E=le(()=>{const{trigger:k,onClickoutside:U}=t,W=[],{positionManuallyRef:{value:z}}=d;return z||(k==="click"&&!U&&W.push([Zf,x,void 0,{capture:!0}]),k==="hover"&&W.push([KP,y])),U&&W.push([Zf,x,void 0,{capture:!0}]),(t.displayDirective==="show"||t.animated&&g.value)&&W.push([Qs,t.show]),W}),f=le(()=>{const k=t.width==="trigger"?void 0:du(t.width),U=[];k&&U.push({width:k});const{maxWidth:W,minWidth:z}=t;return W&&U.push({maxWidth:du(W)}),z&&U.push({maxWidth:du(z)}),s||U.push(S.value),U}),S=le(()=>{const{common:{cubicBezierEaseInOut:k,cubicBezierEaseIn:U,cubicBezierEaseOut:W},self:{space:z,spaceArrow:K,padding:Ee,fontSize:oe,textColor:L,dividerColor:J,color:re,boxShadow:G,borderRadius:X,arrowHeight:_e,arrowOffset:ve,arrowOffsetVertical:he}}=l.value;return{"--n-box-shadow":G,"--n-bezier":k,"--n-bezier-ease-in":U,"--n-bezier-ease-out":W,"--n-font-size":oe,"--n-text-color":L,"--n-color":re,"--n-divider-color":J,"--n-border-radius":X,"--n-arrow-height":_e,"--n-arrow-offset":ve,"--n-arrow-offset-vertical":he,"--n-padding":Ee,"--n-space":z,"--n-space-arrow":K}}),C=s?_g("popover",void 0,S,t):void 0;d.setBodyInstance({syncPosition:h}),Kn(()=>{d.setBodyInstance(null)}),Zt(yt(t,"show"),k=>{t.animated||(k?p.value=!0:p.value=!1)});function h(){var 
k;(k=c.value)===null||k===void 0||k.syncPosition()}function T(k){t.trigger==="hover"&&t.keepAliveOnHover&&t.show&&d.handleMouseEnter(k)}function N(k){t.trigger==="hover"&&t.keepAliveOnHover&&d.handleMouseLeave(k)}function y(k){t.trigger==="hover"&&!P().contains(ks(k))&&d.handleMouseMoveOutside(k)}function x(k){(t.trigger==="click"&&!P().contains(ks(k))||t.onClickoutside)&&d.handleClickOutside(k)}function P(){return d.getTriggerElement()}ni(VC,_),ni(zC,null),ni(HC,null);function D(){if(C==null||C.onRender(),!(t.displayDirective==="show"||t.show||t.animated&&g.value))return null;let U;const W=d.internalRenderBodyRef.value,{value:z}=o;if(W)U=W([`${z}-popover-shared`,C==null?void 0:C.themeClass.value,t.overlap&&`${z}-popover-shared--overlap`,t.showArrow&&`${z}-popover-shared--show-arrow`,t.arrowPointToCenter&&`${z}-popover-shared--center-arrow`],_,f.value,T,N);else{const{value:K}=d.extraClassRef,{internalTrapFocus:Ee}=t,oe=!Hf(e.header)||!Hf(e.footer),L=()=>{var J;const re=oe?j(st,null,uu(e.header,_e=>_e?j("div",{class:`${z}-popover__header`,style:t.headerStyle},_e):null),uu(e.default,_e=>_e?j("div",{class:`${z}-popover__content`,style:t.contentStyle},e):null),uu(e.footer,_e=>_e?j("div",{class:`${z}-popover__footer`,style:t.footerStyle},_e):null)):t.scrollable?(J=e.default)===null||J===void 0?void 0:J.call(e):j("div",{class:`${z}-popover__content`,style:t.contentStyle},e),G=t.scrollable?j(PY,{contentClass:oe?void 0:`${z}-popover__content`,contentStyle:oe?void 0:t.contentStyle},{default:()=>re}):re,X=t.showArrow?YY({arrowStyle:t.arrowStyle,clsPrefix:z}):null;return[G,X]};U=j("div",zm({class:[`${z}-popover`,`${z}-popover-shared`,C==null?void 0:C.themeClass.value,K.map(J=>`${z}-${J}`),{[`${z}-popover--scrollable`]:t.scrollable,[`${z}-popover--show-header-or-footer`]:oe,[`${z}-popover--raw`]:t.raw,[`${z}-popover-shared--overlap`]:t.overlap,[`${z}-popover-shared--show-arrow`]:t.showArrow,[`${z}-popover-shared--center-arrow`]:t.arrowPointToCenter}],ref:_,style:f.value,onKeydown:d.handleKeydown,onMouseenter:T,onMouseleave:N},n),Ee?j(L0,{active:t.show,autoFocus:!0},{default:L}):L())}return Pn(U,E.value)}return{displayed:g,namespace:i,isMounted:d.isMountedRef,zIndex:d.zIndexRef,followerRef:c,adjustedTo:Qi(t),followerEnabled:p,renderContentNode:D}},render(){return j(u0,{ref:"followerRef",zIndex:this.zIndex,show:this.show,enabled:this.followerEnabled,to:this.adjustedTo,x:this.x,y:this.y,flip:this.flip,placement:this.placement,containerClass:this.namespace,overlap:this.overlap,width:this.width==="trigger"?"target":void 0,teleportDisabled:this.adjustedTo===Qi.tdkey},{default:()=>this.animated?j(Hi,{name:"popover-transition",appear:this.isMounted,onEnter:()=>{this.followerEnabled=!0},onAfterLeave:()=>{var t;(t=this.internalOnAfterLeave)===null||t===void 0||t.call(this),this.followerEnabled=!1,this.displayed=!1}},{default:this.renderContentNode}):this.renderContentNode()})}}),$Y=Object.keys(JR),HY={focus:["onFocus","onBlur"],click:["onClick"],hover:["onMouseenter","onMouseleave"],manual:[],nested:["onFocus","onBlur","onMouseenter","onMouseleave","onClick"]};function zY(t,e,n){HY[e].forEach(i=>{t.props?t.props=Object.assign({},t.props):t.props={};const o=t.props[i],s=n[i];o?t.props[i]=(...l)=>{o(...l),s(...l)}:t.props[i]=s})}const jR={show:{type:Boolean,default:void 
0},defaultShow:Boolean,showArrow:{type:Boolean,default:!0},trigger:{type:String,default:"hover"},delay:{type:Number,default:100},duration:{type:Number,default:100},raw:Boolean,placement:{type:String,default:"top"},x:Number,y:Number,arrowPointToCenter:Boolean,disabled:Boolean,getDisabled:Function,displayDirective:{type:String,default:"if"},arrowStyle:[String,Object],flip:{type:Boolean,default:!0},animated:{type:Boolean,default:!0},width:{type:[Number,String],default:void 0},overlap:Boolean,keepAliveOnHover:{type:Boolean,default:!0},zIndex:Number,to:Qi.propTo,scrollable:Boolean,contentStyle:[Object,String],headerStyle:[Object,String],footerStyle:[Object,String],onClickoutside:Function,"onUpdate:show":[Function,Array],onUpdateShow:[Function,Array],internalDeactivateImmediately:Boolean,internalSyncTargetWithParent:Boolean,internalInheritedEventHandlers:{type:Array,default:()=>[]},internalTrapFocus:Boolean,internalExtraClass:{type:Array,default:()=>[]},onShow:[Function,Array],onHide:[Function,Array],arrow:{type:Boolean,default:void 0},minWidth:Number,maxWidth:Number},VY=Object.assign(Object.assign(Object.assign({},fn.props),jR),{internalOnAfterLeave:Function,internalRenderBody:Function}),WY=be({name:"Popover",inheritAttrs:!1,props:VY,__popover__:!0,setup(t){const e=Zm(),n=ee(null),i=le(()=>t.show),o=ee(t.defaultShow),s=UP(i,o),l=Qa(()=>t.disabled?!1:s.value),c=()=>{if(t.disabled)return!0;const{getDisabled:L}=t;return!!(L!=null&&L())},d=()=>c()?!1:s.value,_=FP(t,["arrow","showArrow"]),p=le(()=>t.overlap?!1:_.value);let g=null;const E=ee(null),f=ee(null),S=Qa(()=>t.x!==void 0&&t.y!==void 0);function C(L){const{"onUpdate:show":J,onUpdateShow:re,onShow:G,onHide:X}=t;o.value=L,J&&Ga(J,L),re&&Ga(re,L),L&&G&&Ga(G,!0),L&&X&&Ga(X,!1)}function h(){g&&g.syncPosition()}function T(){const{value:L}=E;L&&(window.clearTimeout(L),E.value=null)}function N(){const{value:L}=f;L&&(window.clearTimeout(L),f.value=null)}function y(){const L=c();if(t.trigger==="focus"&&!L){if(d())return;C(!0)}}function x(){const L=c();if(t.trigger==="focus"&&!L){if(!d())return;C(!1)}}function P(){const L=c();if(t.trigger==="hover"&&!L){if(N(),E.value!==null||d())return;const J=()=>{C(!0),E.value=null},{delay:re}=t;re===0?J():E.value=window.setTimeout(J,re)}}function D(){const L=c();if(t.trigger==="hover"&&!L){if(T(),f.value!==null||!d())return;const J=()=>{C(!1),f.value=null},{duration:re}=t;re===0?J():f.value=window.setTimeout(J,re)}}function k(){D()}function U(L){var J;d()&&(t.trigger==="click"&&(T(),N(),C(!1)),(J=t.onClickoutside)===null||J===void 0||J.call(t,L))}function W(){if(t.trigger==="click"&&!c()){T(),N();const L=!d();C(L)}}function z(L){t.internalTrapFocus&&L.key==="Escape"&&(T(),N(),C(!1))}function K(L){o.value=L}function Ee(){var L;return(L=n.value)===null||L===void 0?void 0:L.targetRef}function oe(L){g=L}return ni("NPopover",{getTriggerElement:Ee,handleKeydown:z,handleMouseEnter:P,handleMouseLeave:D,handleClickOutside:U,handleMouseMoveOutside:k,setBodyInstance:oe,positionManuallyRef:S,isMountedRef:e,zIndexRef:yt(t,"zIndex"),extraClassRef:yt(t,"internalExtraClass"),internalRenderBodyRef:yt(t,"internalRenderBody")}),si(()=>{s.value&&c()&&C(!1)}),{binderInstRef:n,positionManually:S,mergedShowConsideringDisabledProp:l,uncontrolledShow:o,mergedShowArrow:p,getMergedShow:d,setShow:K,handleClick:W,handleMouseEnter:P,handleMouseLeave:D,handleFocus:y,handleBlur:x,syncPosition:h}},render(){var t;const{positionManually:e,$slots:n}=this;let 
i,o=!1;if(!e&&(n.activator?i=$f(n,"activator"):i=$f(n,"trigger"),i)){i=Ya(i),i=i.type===Vm?j("span",[i]):i;const s={onClick:this.handleClick,onMouseenter:this.handleMouseEnter,onMouseleave:this.handleMouseLeave,onFocus:this.handleFocus,onBlur:this.handleBlur};if(!((t=i.type)===null||t===void 0)&&t.__popover__)o=!0,i.props||(i.props={internalSyncTargetWithParent:!0,internalInheritedEventHandlers:[]}),i.props.internalSyncTargetWithParent=!0,i.props.internalInheritedEventHandlers?i.props.internalInheritedEventHandlers=[s,...i.props.internalInheritedEventHandlers]:i.props.internalInheritedEventHandlers=[s];else{const{internalInheritedEventHandlers:l}=this,c=[s,...l],d={onBlur:_=>{c.forEach(p=>{p.onBlur(_)})},onFocus:_=>{c.forEach(p=>{p.onFocus(_)})},onClick:_=>{c.forEach(p=>{p.onClick(_)})},onMouseenter:_=>{c.forEach(p=>{p.onMouseenter(_)})},onMouseleave:_=>{c.forEach(p=>{p.onMouseleave(_)})}};zY(i,l?"nested":e?"manual":this.trigger,d)}}return j(zP,{ref:"binderInstRef",syncTarget:!o,syncTargetWithParent:this.internalSyncTargetWithParent},{default:()=>{this.mergedShowConsideringDisabledProp;const s=this.getMergedShow();return[this.internalTrapFocus&&s?Pn(j("div",{style:{position:"fixed",inset:0}}),[[Jm,{enabled:s,zIndex:this.zIndex}]]):null,e?null:j(VP,null,{default:()=>i}),j(qY,oP(this.$props,$Y,Object.assign(Object.assign({},this.$attrs),{showArrow:this.mergedShowArrow,show:s})),{default:()=>{var l,c;return(c=(l=this.$slots).default)===null||c===void 0?void 0:c.call(l)},header:()=>{var l,c;return(c=(l=this.$slots).header)===null||c===void 0?void 0:c.call(l)},footer:()=>{var l,c;return(c=(l=this.$slots).footer)===null||c===void 0?void 0:c.call(l)}})]}})}}),KY=wP&&"loading"in document.createElement("img"),QY=(t={})=>{var e;const{root:n=null}=t;return{hash:`${t.rootMargin||"0px 0px 0px 0px"}-${Array.isArray(t.threshold)?t.threshold.join(","):(e=t.threshold)!==null&&e!==void 0?e:"0"}`,options:Object.assign(Object.assign({},t),{root:(typeof n=="string"?document.querySelector(n):n)||document.documentElement})}},Cu=new WeakMap,Ru=new WeakMap,Nu=new WeakMap,XY=(t,e,n)=>{if(!t)return()=>{};const i=QY(e),{root:o}=i.options;let s;const l=Cu.get(o);l?s=l:(s=new Map,Cu.set(o,s));let c,d;s.has(i.hash)?(d=s.get(i.hash),d[1].has(t)||(c=d[0],d[1].add(t),c.observe(t))):(c=new IntersectionObserver(g=>{g.forEach(E=>{if(E.isIntersecting){const f=Ru.get(E.target),S=Nu.get(E.target);f&&f(),S&&(S.value=!0)}})},i.options),c.observe(t),d=[c,new Set([t])],s.set(i.hash,d));let _=!1;const p=()=>{_||(Ru.delete(t),Nu.delete(t),_=!0,d[1].has(t)&&(d[0].unobserve(t),d[1].delete(t)),d[1].size<=0&&s.delete(i.hash),s.size||Cu.delete(o))};return Ru.set(t,p),Nu.set(t,n),p},ZY={padding:"8px 14px"},JY=t=>{const{borderRadius:e,boxShadow2:n,baseColor:i}=t;return Object.assign(Object.assign({},ZY),{borderRadius:e,boxShadow:n,color:PC(i,"rgba(0, 0, 0, .85)"),textColor:i})},jY={name:"Tooltip",common:ao,peers:{Popover:ZR},self:JY},pg=jY,eq={name:"Ellipsis",common:ao,peers:{Tooltip:pg}},tq=eq,nq=Object.assign(Object.assign({},jR),fn.props),eN=be({name:"Tooltip",props:nq,__popover__:!0,setup(t){const{mergedClsPrefixRef:e}=pi(t),n=fn("Tooltip","-tooltip",void 0,pg,t,e),i=ee(null);return Object.assign(Object.assign({},{syncPosition(){i.value.syncPosition()},setShow(s){i.value.setShow(s)}}),{popoverRef:i,mergedTheme:n,popoverThemeOverrides:le(()=>n.value.self)})},render(){const{mergedTheme:t,internalExtraClass:e}=this;return 
j(WY,Object.assign(Object.assign({},this.$props),{theme:t.peers.Popover,themeOverrides:t.peerOverrides.Popover,builtinThemeOverrides:this.popoverThemeOverrides,internalExtraClass:e.concat("tooltip"),ref:"popoverRef"}),this.$slots)}}),rq=St("ellipsis",{overflow:"hidden"},[$a("line-clamp",` - white-space: nowrap; - display: inline-block; - vertical-align: bottom; - max-width: 100%; - `),_r("line-clamp",` - display: -webkit-inline-box; - -webkit-box-orient: vertical; - `),_r("cursor-pointer",` - cursor: pointer; - `)]);function KS(t){return`${t}-ellipsis--line-clamp`}function QS(t,e){return`${t}-ellipsis--cursor-${e}`}const iq=Object.assign(Object.assign({},fn.props),{expandTrigger:String,lineClamp:[Number,String],tooltip:{type:[Boolean,Object],default:!0}}),aq=be({name:"Ellipsis",inheritAttrs:!1,props:iq,setup(t,{slots:e,attrs:n}){const{mergedClsPrefixRef:i}=pi(t),o=fn("Ellipsis","-ellipsis",rq,tq,t,i),s=ee(null),l=ee(null),c=ee(null),d=ee(!1),_=le(()=>{const{lineClamp:h}=t,{value:T}=d;return h!==void 0?{textOverflow:"","-webkit-line-clamp":T?"":h}:{textOverflow:T?"":"ellipsis","-webkit-line-clamp":""}});function p(){let h=!1;const{value:T}=d;if(T)return!0;const{value:N}=s;if(N){const{lineClamp:y}=t;if(f(N),y!==void 0)h=N.scrollHeight<=N.offsetHeight;else{const{value:x}=l;x&&(h=x.getBoundingClientRect().width<=N.getBoundingClientRect().width)}S(N,h)}return h}const g=le(()=>t.expandTrigger==="click"?()=>{var h;const{value:T}=d;T&&((h=c.value)===null||h===void 0||h.setShow(!1)),d.value=!T}:void 0);OC(()=>{var h;t.tooltip&&((h=c.value)===null||h===void 0||h.setShow(!1))});const E=()=>j("span",Object.assign({},zm(n,{class:[`${i.value}-ellipsis`,t.lineClamp!==void 0?KS(i.value):void 0,t.expandTrigger==="click"?QS(i.value,"pointer"):void 0],style:_.value}),{ref:"triggerRef",onClick:g.value,onMouseenter:t.expandTrigger==="click"?p:void 0}),t.lineClamp?e:j("span",{ref:"triggerInnerRef"},e));function f(h){if(!h)return;const T=_.value,N=KS(i.value);t.lineClamp!==void 0?C(h,N,"add"):C(h,N,"remove");for(const y in T)h.style[y]!==T[y]&&(h.style[y]=T[y])}function S(h,T){const N=QS(i.value,"pointer");t.expandTrigger==="click"&&!T?C(h,N,"add"):C(h,N,"remove")}function C(h,T,N){N==="add"?h.classList.contains(T)||h.classList.add(T):h.classList.contains(T)&&h.classList.remove(T)}return{mergedTheme:o,triggerRef:s,triggerInnerRef:l,tooltipRef:c,handleClick:g,renderTrigger:E,getTooltipDisabled:p}},render(){var t;const{tooltip:e,renderTrigger:n,$slots:i}=this;if(e){const{mergedTheme:o}=this;return j(eN,Object.assign({ref:"tooltipRef",placement:"top"},e,{getDisabled:this.getTooltipDisabled,theme:o.peers.Tooltip,themeOverrides:o.peerOverrides.Tooltip}),{trigger:n,default:(t=i.tooltip)!==null&&t!==void 0?t:i.default})}else return n()}}),mg=Object.assign(Object.assign({},fn.props),{showToolbar:{type:Boolean,default:!0},showToolbarTooltip:Boolean}),tN="n-image";function oq(){return{toolbarIconColor:"rgba(255, 255, 255, .9)",toolbarColor:"rgba(0, 0, 0, .35)",toolbarBoxShadow:"none",toolbarBorderRadius:"24px"}}const sq={name:"Image",common:ao,peers:{Tooltip:pg},self:oq},lq=j("svg",{viewBox:"0 0 20 20",fill:"none",xmlns:"http://www.w3.org/2000/svg"},j("path",{d:"M6 5C5.75454 5 5.55039 5.17688 5.50806 5.41012L5.5 5.5V14.5C5.5 14.7761 5.72386 15 6 15C6.24546 15 6.44961 14.8231 6.49194 14.5899L6.5 14.5V5.5C6.5 5.22386 6.27614 5 6 5ZM13.8536 5.14645C13.68 4.97288 13.4106 4.9536 13.2157 5.08859L13.1464 5.14645L8.64645 9.64645C8.47288 9.82001 8.4536 10.0894 8.58859 10.2843L8.64645 10.3536L13.1464 14.8536C13.3417 15.0488 13.6583 
15.0488 13.8536 14.8536C14.0271 14.68 14.0464 14.4106 13.9114 14.2157L13.8536 14.1464L9.70711 10L13.8536 5.85355C14.0488 5.65829 14.0488 5.34171 13.8536 5.14645Z",fill:"currentColor"})),cq=j("svg",{viewBox:"0 0 20 20",fill:"none",xmlns:"http://www.w3.org/2000/svg"},j("path",{d:"M13.5 5C13.7455 5 13.9496 5.17688 13.9919 5.41012L14 5.5V14.5C14 14.7761 13.7761 15 13.5 15C13.2545 15 13.0504 14.8231 13.0081 14.5899L13 14.5V5.5C13 5.22386 13.2239 5 13.5 5ZM5.64645 5.14645C5.82001 4.97288 6.08944 4.9536 6.28431 5.08859L6.35355 5.14645L10.8536 9.64645C11.0271 9.82001 11.0464 10.0894 10.9114 10.2843L10.8536 10.3536L6.35355 14.8536C6.15829 15.0488 5.84171 15.0488 5.64645 14.8536C5.47288 14.68 5.4536 14.4106 5.58859 14.2157L5.64645 14.1464L9.79289 10L5.64645 5.85355C5.45118 5.65829 5.45118 5.34171 5.64645 5.14645Z",fill:"currentColor"})),uq=j("svg",{viewBox:"0 0 20 20",fill:"none",xmlns:"http://www.w3.org/2000/svg"},j("path",{d:"M4.089 4.216l.057-.07a.5.5 0 0 1 .638-.057l.07.057L10 9.293l5.146-5.147a.5.5 0 0 1 .638-.057l.07.057a.5.5 0 0 1 .057.638l-.057.07L10.707 10l5.147 5.146a.5.5 0 0 1 .057.638l-.057.07a.5.5 0 0 1-.638.057l-.07-.057L10 10.707l-5.146 5.147a.5.5 0 0 1-.638.057l-.07-.057a.5.5 0 0 1-.057-.638l.057-.07L9.293 10L4.146 4.854a.5.5 0 0 1-.057-.638l.057-.07l-.057.07z",fill:"currentColor"})),dq=je([je("body >",[St("image-container","position: fixed;")]),St("image-preview-container",` - position: fixed; - left: 0; - right: 0; - top: 0; - bottom: 0; - display: flex; - `),St("image-preview-overlay",` - z-index: -1; - position: absolute; - left: 0; - right: 0; - top: 0; - bottom: 0; - background: rgba(0, 0, 0, .3); - `,[Pm()]),St("image-preview-toolbar",` - z-index: 1; - position: absolute; - left: 50%; - transform: translateX(-50%); - border-radius: var(--n-toolbar-border-radius); - height: 48px; - bottom: 40px; - padding: 0 12px; - background: var(--n-toolbar-color); - box-shadow: var(--n-toolbar-box-shadow); - color: var(--n-toolbar-icon-color); - transition: color .3s var(--n-bezier); - display: flex; - align-items: center; - `,[St("base-icon",` - padding: 0 8px; - font-size: 28px; - cursor: pointer; - `),Pm()]),St("image-preview-wrapper",` - position: absolute; - left: 0; - right: 0; - top: 0; - bottom: 0; - display: flex; - pointer-events: none; - `,[kY()]),St("image-preview",` - user-select: none; - -webkit-user-select: none; - pointer-events: all; - margin: auto; - max-height: calc(100vh - 32px); - max-width: calc(100vw - 32px); - transition: transform .3s var(--n-bezier); - `),St("image",` - display: inline-flex; - max-height: 100%; - max-width: 100%; - `,[$a("preview-disabled",` - cursor: pointer; - `),je("img",` - border-radius: inherit; - `)])]),Os=32,nN=be({name:"ImagePreview",props:Object.assign(Object.assign({},mg),{onNext:Function,onPrev:Function,clsPrefix:{type:String,required:!0}}),setup(t){const e=fn("Image","-image",dq,sq,t,yt(t,"clsPrefix"));let n=null;const i=ee(null),o=ee(null),s=ee(void 0),l=ee(!1),c=ee(!1),{localeRef:d}=fY("Image");function _(){const{value:te}=o;if(!n||!te)return;const{style:pe}=te,ie=n.getBoundingClientRect(),Pe=ie.left+ie.width/2,we=ie.top+ie.height/2;pe.transformOrigin=`${Pe}px ${we}px`}function p(te){var pe,ie;switch(te.key){case" ":te.preventDefault();break;case"ArrowLeft":(pe=t.onPrev)===null||pe===void 0||pe.call(t);break;case"ArrowRight":(ie=t.onNext)===null||ie===void 0||ie.call(t);break;case"Escape":Ce();break}}Zt(l,te=>{te?Ht("keydown",document,p):Rt("keydown",document,p)}),Kn(()=>{Rt("keydown",document,p)});let 
g=0,E=0,f=0,S=0,C=0,h=0,T=0,N=0,y=!1;function x(te){const{clientX:pe,clientY:ie}=te;f=pe-g,S=ie-E,LC($e)}function P(te){const{mouseUpClientX:pe,mouseUpClientY:ie,mouseDownClientX:Pe,mouseDownClientY:we}=te,Xe=Pe-pe,pt=we-ie,me=`vertical${pt>0?"Top":"Bottom"}`,bt=`horizontal${Xe>0?"Left":"Right"}`;return{moveVerticalDirection:me,moveHorizontalDirection:bt,deltaHorizontal:Xe,deltaVertical:pt}}function D(te){const{value:pe}=i;if(!pe)return{offsetX:0,offsetY:0};const ie=pe.getBoundingClientRect(),{moveVerticalDirection:Pe,moveHorizontalDirection:we,deltaHorizontal:Xe,deltaVertical:pt}=te||{};let me=0,bt=0;return ie.width<=window.innerWidth?me=0:ie.left>0?me=(ie.width-window.innerWidth)/2:ie.right0?bt=(ie.height-window.innerHeight)/2:ie.bottom.5){const te=oe;Ee-=1,oe=Math.max(.5,Math.pow(K,Ee));const pe=te-oe;$e(!1);const ie=D();oe+=pe,$e(!1),oe-=pe,f=ie.offsetX,S=ie.offsetY,$e()}}function $e(te=!0){var pe;const{value:ie}=i;if(!ie)return;const{style:Pe}=ie,we=Bt((pe=U==null?void 0:U.previewedImgPropsRef.value)===null||pe===void 0?void 0:pe.style);let Xe="";if(typeof we=="string")Xe=we+";";else for(const me in we)Xe+=`${vG(me)}: ${we[me]};`;const pt=`transform-origin: center; transform: translateX(${f}px) translateY(${S}px) rotate(${L}deg) scale(${oe});`;y?Pe.cssText=Xe+"cursor: grabbing; transition: none;"+pt:Pe.cssText=Xe+"cursor: grab;"+pt+(te?"":"transition: none;"),te||ie.offsetHeight}function Ce(){l.value=!l.value,c.value=!0}function Be(){oe=he(),Ee=Math.ceil(Math.log(oe)/Math.log(K)),f=0,S=0,$e()}const Ve={setPreviewSrc:te=>{s.value=te},setThumbnailEl:te=>{n=te},toggleShow:Ce};function xe(te,pe){if(t.showToolbarTooltip){const{value:ie}=e;return j(eN,{to:!1,theme:ie.peers.Tooltip,themeOverrides:ie.peerOverrides.Tooltip,keepAliveOnHover:!1},{default:()=>d.value[pe],trigger:()=>te})}else return te}const He=le(()=>{const{common:{cubicBezierEaseInOut:te},self:{toolbarIconColor:pe,toolbarBorderRadius:ie,toolbarBoxShadow:Pe,toolbarColor:we}}=e.value;return{"--n-bezier":te,"--n-toolbar-icon-color":pe,"--n-toolbar-color":we,"--n-toolbar-border-radius":ie,"--n-toolbar-box-shadow":Pe}}),{inlineThemeDisabled:rt}=pi(),We=rt?_g("image-preview",void 0,He,t):void 0;return Object.assign({previewRef:i,previewWrapperRef:o,previewSrc:s,show:l,appear:Zm(),displayed:c,previewedImgProps:U==null?void 0:U.previewedImgPropsRef,handleWheel(te){te.preventDefault()},handlePreviewMousedown:W,handlePreviewDblclick:z,syncTransformOrigin:_,handleAfterLeave:()=>{J(),L=0,c.value=!1},handleDragStart:te=>{var pe,ie;(ie=(pe=U==null?void 0:U.previewedImgPropsRef.value)===null||pe===void 0?void 0:pe.onDragstart)===null||ie===void 0||ie.call(pe,te),te.preventDefault()},zoomIn:tt,zoomOut:lt,rotateCounterclockwise:X,rotateClockwise:_e,handleSwitchPrev:re,handleSwitchNext:G,withTooltip:xe,resizeToOrignalImageSize:Be,cssVars:rt?void 0:He,themeClass:We==null?void 0:We.themeClass,onRender:We==null?void 0:We.onRender},Ve)},render(){var t,e;const{clsPrefix:n}=this;return j(st,null,(e=(t=this.$slots).default)===null||e===void 0?void 0:e.call(t),j(ZC,{show:this.show},{default:()=>{var i;return this.show||this.displayed?((i=this.onRender)===null||i===void 0||i.call(this),Pn(j("div",{class:[`${n}-image-preview-container`,this.themeClass],style:this.cssVars,onWheel:this.handleWheel},j(Hi,{name:"fade-in-transition",appear:this.appear},{default:()=>this.show?j("div",{class:`${n}-image-preview-overlay`,onClick:this.toggleShow}):null}),this.showToolbar?j(Hi,{name:"fade-in-transition",appear:this.appear},{default:()=>{if(!this.show)return 
null;const{withTooltip:o}=this;return j("div",{class:`${n}-image-preview-toolbar`},this.onPrev?j(st,null,o(j(Or,{clsPrefix:n,onClick:this.handleSwitchPrev},{default:()=>lq}),"tipPrevious"),o(j(Or,{clsPrefix:n,onClick:this.handleSwitchNext},{default:()=>cq}),"tipNext")):null,o(j(Or,{clsPrefix:n,onClick:this.rotateCounterclockwise},{default:()=>j(TY,null)}),"tipCounterclockwise"),o(j(Or,{clsPrefix:n,onClick:this.rotateClockwise},{default:()=>j(hY,null)}),"tipClockwise"),o(j(Or,{clsPrefix:n,onClick:this.resizeToOrignalImageSize},{default:()=>j(RY,null)}),"tipOriginalSize"),o(j(Or,{clsPrefix:n,onClick:this.zoomOut},{default:()=>j(CY,null)}),"tipZoomOut"),o(j(Or,{clsPrefix:n,onClick:this.zoomIn},{default:()=>j(vY,null)}),"tipZoomIn"),o(j(Or,{clsPrefix:n,onClick:this.toggleShow},{default:()=>uq}),"tipClose"))}}):null,j(Hi,{name:"fade-in-scale-up-transition",onAfterLeave:this.handleAfterLeave,appear:this.appear,onEnter:this.syncTransformOrigin,onBeforeLeave:this.syncTransformOrigin},{default:()=>{const{previewedImgProps:o={}}=this;return Pn(j("div",{class:`${n}-image-preview-wrapper`,ref:"previewWrapperRef"},j("img",Object.assign({},o,{draggable:!1,onMousedown:this.handlePreviewMousedown,onDblclick:this.handlePreviewDblclick,class:[`${n}-image-preview`,o.class],key:this.previewSrc,src:this.previewSrc,ref:"previewRef",onDragstart:this.handleDragStart}))),[[Qs,this.show]])}})),[[Jm,{enabled:this.show}]])):null}}))}}),rN="n-image-group",_q=mg,pq=be({name:"ImageGroup",props:_q,setup(t){let e;const{mergedClsPrefixRef:n}=pi(t),i=`c${kC()}`,o=$m(),s=d=>{var _;e=d,(_=c.value)===null||_===void 0||_.setPreviewSrc(d)};function l(d){if(!(o!=null&&o.proxy))return;const p=o.proxy.$el.parentElement.querySelectorAll(`[data-group-id=${i}]:not([data-error=true])`);if(!p.length)return;const g=Array.from(p).findIndex(E=>E.dataset.previewSrc===e);~g?s(p[(g+d+p.length)%p.length].dataset.previewSrc):s(p[0].dataset.previewSrc)}ni(rN,{mergedClsPrefixRef:n,setPreviewSrc:s,setThumbnailEl:d=>{var _;(_=c.value)===null||_===void 0||_.setThumbnailEl(d)},toggleShow:()=>{var d;(d=c.value)===null||d===void 0||d.toggleShow()},groupId:i});const c=ee(null);return{mergedClsPrefix:n,previewInstRef:c,next:()=>{l(1)},prev:()=>{l(-1)}}},render(){return j(nN,{theme:this.theme,themeOverrides:this.themeOverrides,clsPrefix:this.mergedClsPrefix,ref:"previewInstRef",onPrev:this.prev,onNext:this.next,showToolbar:this.showToolbar,showToolbarTooltip:this.showToolbarTooltip},this.$slots)}}),mq=Object.assign({alt:String,height:[String,Number],imgProps:Object,previewedImgProps:Object,lazy:Boolean,intersectionObserverOptions:Object,objectFit:{type:String,default:"fill"},previewSrc:String,fallbackSrc:String,width:[String,Number],src:String,previewDisabled:Boolean,loadDescription:String,onError:Function,onLoad:Function},mg),gq=be({name:"Image",props:mq,inheritAttrs:!1,setup(t){const e=ee(null),n=ee(!1),i=ee(null),o=Ft(rN,null),{mergedClsPrefixRef:s}=o||pi(t),l={click:()=>{if(t.previewDisabled||n.value)return;const _=t.previewSrc||t.src;if(o){o.setPreviewSrc(_),o.setThumbnailEl(e.value),o.toggleShow();return}const{value:p}=i;p&&(p.setPreviewSrc(_),p.setThumbnailEl(e.value),p.toggleShow())}},c=ee(!t.lazy);kn(()=>{var _;(_=e.value)===null||_===void 0||_.setAttribute("data-group-id",(o==null?void 0:o.groupId)||"")}),kn(()=>{if(t.lazy&&t.intersectionObserverOptions){let _;const p=si(()=>{_==null||_(),_=void 0,_=XY(e.value,t.intersectionObserverOptions,c)});Kn(()=>{p(),_==null||_()})}}),si(()=>{var _;t.src,(_=t.imgProps)===null||_===void 
0||_.src,n.value=!1});const d=ee(!1);return ni(tN,{previewedImgPropsRef:yt(t,"previewedImgProps")}),Object.assign({mergedClsPrefix:s,groupId:o==null?void 0:o.groupId,previewInstRef:i,imageRef:e,showError:n,shouldStartLoading:c,loaded:d,mergedOnClick:_=>{var p,g;l.click(),(g=(p=t.imgProps)===null||p===void 0?void 0:p.onClick)===null||g===void 0||g.call(p,_)},mergedOnError:_=>{if(!c.value)return;n.value=!0;const{onError:p,imgProps:{onError:g}={}}=t;p==null||p(_),g==null||g(_)},mergedOnLoad:_=>{const{onLoad:p,imgProps:{onLoad:g}={}}=t;p==null||p(_),g==null||g(_),d.value=!0}},l)},render(){var t,e;const{mergedClsPrefix:n,imgProps:i={},loaded:o,$attrs:s,lazy:l}=this,c=(e=(t=this.$slots).placeholder)===null||e===void 0?void 0:e.call(t),d=this.src||i.src,_=j("img",Object.assign(Object.assign({},i),{ref:"imageRef",width:this.width||i.width,height:this.height||i.height,src:this.showError?this.fallbackSrc:l&&this.intersectionObserverOptions?this.shouldStartLoading?d:void 0:d,alt:this.alt||i.alt,"aria-label":this.alt||i.alt,onClick:this.mergedOnClick,onError:this.mergedOnError,onLoad:this.mergedOnLoad,loading:KY&&l&&!this.intersectionObserverOptions?"lazy":"eager",style:[i.style||"",c&&!o?{height:"0",width:"0",visibility:"hidden"}:"",{objectFit:this.objectFit}],"data-error":this.showError,"data-preview-src":this.previewSrc||this.src}));return j("div",Object.assign({},s,{role:"none",class:[s.class,`${n}-image`,(this.previewDisabled||this.showError)&&`${n}-image--preview-disabled`]}),this.groupId?_:j(nN,{theme:this.theme,themeOverrides:this.themeOverrides,clsPrefix:n,ref:"previewInstRef",showToolbar:this.showToolbar,showToolbarTooltip:this.showToolbarTooltip},{default:()=>_}),!o&&c)}}),Eq=Object.assign(Object.assign({},fn.props),{trigger:String,xScrollable:Boolean,onScroll:Function,size:Number}),fq=be({name:"Scrollbar",props:Eq,setup(){const t=ee(null);return Object.assign(Object.assign({},{scrollTo:(...n)=>{var i;(i=t.value)===null||i===void 0||i.scrollTo(n[0],n[1])},scrollBy:(...n)=>{var i;(i=t.value)===null||i===void 0||i.scrollBy(n[0],n[1])}}),{scrollbarInstRef:t})},render(){return j(LY,Object.assign({ref:"scrollbarInstRef"},this.$props),this.$slots)}}),iN=fq,Xn=t=>(Zi("data-v-01535cfb"),t=t(),Ji(),t),Sq={class:"heroWrapper"},bq=Xn(()=>B("h1",null,[vt("MetaGPT: Meta Programming for"),B("br"),vt("Multi-Agent Collaborative Framework.")],-1)),hq={class:"contributors"},Tq={class:"contributor"},vq={class:"affiliationIndex"},Cq={key:0},Rq={class:"affiliations"},Nq={class:"affiliationIndex"},Oq={key:0},Aq={class:"links"},yq=["onClick"],Iq={class:"linkIcon"},Dq={class:"linkName"},xq={class:"bigTex"},wq=Xn(()=>B("div",{class:"header"},"Citation",-1)),Mq={class:"copyBtn"},Lq={class:"bigTexContent"},Pq=Xn(()=>B("br",null,null,-1)),kq=Xn(()=>B("br",null,null,-1)),Uq=Xn(()=>B("br",null,null,-1)),Fq=Xn(()=>B("br",null,null,-1)),Bq=Xn(()=>B("br",null,null,-1)),Gq=Xn(()=>B("br",null,null,-1)),Yq=Xn(()=>B("br",null,null,-1)),qq=Xn(()=>B("img",{style:{"margin-left":"-1px",position:"absolute"},src:pM,alt:""},null,-1)),$q={class:"galance"},Hq={key:0,style:{width:"100%","border-radius":"20px"},src:mM,alt:""},Ui="      ",zq=be({__name:"heroPage",setup(t){const e=[{name:"Sirui Hong",affiliation:"1, "},{name:"Xiawu Zheng",affiliation:"2, "},{name:"Jonathan Chen",affiliation:"1, "},{name:"Yuheng Cheng",affiliation:"3, "},{name:"Jinlin Wang",affiliation:"1, "},{name:"Ceyao Zhang",affiliation:"3, "},{name:"Zili Wang",affiliation:"1, "},{name:"Steven Ka Shing Yau",affiliation:"4, "},{name:"Zijuan Lin",affiliation:"2, 
"},{name:"Liyang Zhou",affiliation:"5, "},{name:"Chenyu Ran",affiliation:"1, "},{name:"Lingfeng Xiao",affiliation:"1, "},{name:"Chenglin Wu",affiliation:"1*."}],n=["1DeepWisdom, ","2Xiamen University, ","3The Chinese University of Hong Kong,Shenzhen, ","4Nanjing University, ","5University of Pennsylvania, ","6University of California, Berkeley."],i=ee(!1),o=E=>{window.open(E,"_blank")},s=E=>ue(ea,{style:"vertical-align:middle",size:26,iconId:E},null),l=[{name:"Paper",icon:s("icon-paper1"),action:()=>{o("https://arxiv.org/abs/2308.00352")}},{name:"Code",icon:s("icon-github-fill"),action:()=>{o("https://github.com/geekan/MetaGPT")}},{name:"Discord",icon:s("icon-Discord"),action:()=>{o("https://discord.gg/ZRHeExS6xv")}},{name:"Twitter",icon:s("icon-twitter2"),action:()=>{o("https://twitter.com/DeepWisdom2019")}},{name:"AgentStore(Waitlist)",icon:ue(QL,null,null),action:()=>{o("https://airtable.com/appInfdG0eJ9J4NNL/shrEd9DrwVE3jX6oz")}}],c=ee(),d=()=>{var E;(E=c.value)==null||E.play()};kn(()=>{d()});const _=ee(!1),p=E=>{_.value=!0},g=()=>{yC.success("Copy Success"),Wm(`@misc{hong2023metagpt, - title={MetaGPT: Meta Programming for Multi-Agent Collaborative Framework}, - author={Sirui Hong and Xiawu Zheng and Jonathan Chen and Yuheng Cheng and Jinlin Wang and Ceyao Zhang and Zili Wang and Steven KaShing Yau and Zijuan Lin and Liyang Zhou and Chenyu Ran and Lingfeng Xiao and Chenglin Wu}, - year={2023}, - eprint={2308.00352}, - archivePrefix={arXiv}, - primaryClass={cs.AI} -}`)};return(E,f)=>(V(),ae(st,null,[B("div",Sq,[bq,B("span",hq,[(V(),ae(st,null,yn(e,(S,C)=>B("span",{key:C},[B("span",Tq,Qe(S.name),1),B("span",vq,Qe(S.affiliation),1),C===5?(V(),ae("br",Cq)):Ge("",!0)])),64))]),B("span",Rq,[(V(),ae(st,null,yn(n,(S,C)=>B("span",{key:C},[B("span",Nq,Qe(S),1),C===2?(V(),ae("br",Oq)):Ge("",!0)])),64))]),B("div",Aq,[(V(),ae(st,null,yn(l,(S,C)=>B("div",{key:C,class:It({link:!0,enabled:S.action}),onClick:S.action},[B("span",Iq,[(V(),ot(ji(S.icon)))]),B("span",Dq,Qe(S.name),1)],10,yq)),64))]),B("div",xq,[wq,B("div",Mq,[ue(q(IC),{onClick:g})]),B("div",Lq,[ue(q(iN),{style:{"max-height":"100%"}},{default:dt(()=>[vt(" @misc{hong2023metagpt,"),Pq,B("span",{innerHTML:Ui}),vt("title={MetaGPT: Meta Programming for Multi-Agent Collaborative Framework},"),kq,B("span",{innerHTML:Ui}),vt("author={Sirui Hong and Xiawu Zheng and Jonathan Chen and Yuheng Cheng and Jinlin Wang and Ceyao Zhang and Zili Wang and Steven KaShing Yau and Zijuan Lin and Liyang Zhou and Chenyu Ran and Lingfeng Xiao and Chenglin Wu}, "),Uq,B("span",{innerHTML:Ui}),vt("year={2023},"),Fq,B("span",{innerHTML:Ui}),vt("eprint={2308.00352},"),Bq,B("span",{innerHTML:Ui}),vt("archivePrefix={arXiv},"),Gq,B("span",{innerHTML:Ui}),vt("primaryClass={cs.AI}"),Yq,vt(" } ")]),_:1})]),qq]),B("div",$q,[q(_)?Ge("",!0):(V(),ae("img",Hq)),Pn(B("video",{ref_key:"videoRef",ref:c,muted:"",src:gM,style:{width:"100%","border-radius":"20px"},autoplay:!0,playsinline:!0,loop:!0,"x-webkit-airplay":"deny",onloadeddata:p},null,512),[[Qs,q(_)]])])]),ue(HL,{visible:q(i),"onUpdate:visible":f[0]||(f[0]=S=>wr(i)?i.value=S:null)},null,8,["visible"])],64))}});const 
Vq=Dt(zq,[["__scopeId","data-v-01535cfb"]]),Wq="/static/assets/sparkles-50d190d6.svg",aN="/static/assets/role0-73c01153.png",oN="/static/assets/role1-8fd25a72.png",sN="/static/assets/role2-d49c6a81.png",lN="/static/assets/role3-d2c54a2d.png",Kq="/static/assets/role4-a6ccbb5f.png",Qq="/static/assets/role5-16fddacc.png",Xq="/static/assets/role6-c9a82246.png",Zq="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADEAAAAxCAYAAABznEEcAAAACXBIWXMAABYlAAAWJQFJUiTwAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3bSURBVHgB7VkJdJXlmX6+f7/7zXKTsCdkgChCpcKgJgwgglNcwAVcqhaKA2gRONZljl3mOp4RcCnVEWdadXCsBTFQFbEMYtkElMqASFUEKiEhCQlZ7n7/+29f33uVNrUBEog9PT2+Of+5ud//v9/3P9/3vs+7XOBr+Vr+dqRm69Vl/MiMYHd0xi3fGxyz/rPh6CERcA6y+blLfxpA7DMuGweTNXNHdFWvFeJbcU/Jvqn70y+iB+SsQbwybZrIDPtuwVLADDWkCYlru6jKMo5vlGZqaDPVCvSAnDWI6dXVtpWQd7TV20h81ox0bfr3XdG7fMnGCocrgANkMmhED4iEc5BkpGJak5UeE0/4Twy/67+2dkUnJrgHMkEGoz/HdBrQA3JOIKY88lwTfazujk6Ga4OzIHKLZ4w96AE5axBHLgwHJUEd7w369uRtmne0q3oOwwRZVGDT/6qoHfzy/R3Df3B9hVJ0Q35e3ga28fYXujJnl0BMIyeuJh84+f2Nixb0b4/z/x/iKSh0Cf7axLTlo73VM4931BkXXq6l+lbcyCXPQJnxJsVJ/Tr067q6faJruEeRYdtOqm7Hi7s66rx6wX3lrpR7tUsMANx9U+rKXxx0v3nbzjO932lBbK5YWFrCCreX1hb2eXnilQ+IG2c8mh1X4solmukqZE6WmcT+clSdT8MP/hHAovWlLXmhTVwJlbnJdCzLQMYWdGNs/3Wc+fpJkkRj5paa8Ey943pBx+NTTBecNJlblNZRRT+6IKdlJwfa9zXT20c0VfCM8+8nx926+JbuGKmUroPrFmRbXMDHLf9jwGsT3P8NV1GZh3kRb7ORbjahNxtaU0q9QXYFYEkMqiNWf3m98R+HPzAhP94a12ua26PLpO23/B/OGYSjfUo0iGgkg7a2RO3J8TG1i9t1QX8ibiZgZgwwnbvTEpuZvTct/EqJJXmuKNTcaM+ka9xNzdex43VrYu0xcCUAl+oGd1CTV3NkZWdrjjh6930D6meVFR+eNQ9dlNOCmHgw/HRcYrcft/X7a2BO6HivVW77WdxJWLpBp2HY0DPmDdnxeEIocmwBKduET0zW7vmPi14V/OwOJeCD5smHI4pwpfXl6+cPyqCHhOEc5P3y+7f3E/pW5nkDiMjp9KGSaPGSkqB9tM8F9anikiBTOHgytRYGqxJdoXyfKwTmYijxxya8MTm4CT0k5xQnuGRsNy290ra9tBuCyxcXzn9j7Zxd/3TfhiWWFF9keHxgQuAaRXHDiwBMgyMQdKAzFHRl/o9GPVqiymahZoliO4sdH7brkSb0NAhDMA5Zjk6+Y0NkDHIGZTS8a9tjVyy+7N6tWiyJhYaSF1AkDp23RtIeFvSJRVlfi3c2316KPSqX5/gtbaIqKFWaoaiSJebmLiY9c/TzHG5pp6SJN7H1tx47qdet3GnPmu8NaFw/o/SdZ+7My35nolXHYSHLYxItJNl/osRNj48Nl0UO9tUSkYEFLbW9nejv16mSAIszeCy9peO8y0tnaB8MfeiBvIzvSC87tLhAKpzgF/JUhdNJWkQERLs8o1Hm6GaS4a2E7ZrVUb9LJ/Hu8qmlpX3cv/BpJ6pES8eQ3sBnL1//fupDo5GtdGgnOAQmQmGi01Gv+pnpCfrIXrj0nh2H00kH6TajZu3M4O6Tz2yuCJfmc2F1sV14kU/xQOASMR6gGw4yjgWLcYNxJoicSdmMy5u2uJlJ7O02CAo6zxb60lVW+zEKRGkYERuxCB/FWvrQCfSHIAh0Mbgcoe5Uc+z8SeVDVz5w4L1kRt31JwD/Whqy5G0lcqifS3aBCA3thhnJcP5ku2a8vW56485wOHxyY9juiuUXwEk2jtw9p6VbILIpB3fSVWTIsNNRcIvRYg5FXA61SYVCEVkiAGBkrrOOzG1fesuBvMoVneZSby6p2NDxOzOth/ws0E+RVdp52hjbeRI2C5fXzIzkHgj/mTofeWDm/s7mPaNP3DZ58KhEytJ47j1tWKaFjGERJgeuukK4KJnLnoSt6tCLPpqqWmxHfMftd2V1xy3dHJz63M5/PNXcnIm/PaEn0ZbO8Dam31F26LsLy04C6IacEYSeSBTFYgzJRAqC6qFCxoJBu6Y0BuCNhKAqKhg5tVF+LFshIB2N9KGHlm2pXvCrFlM+XuMt23XN+sjPOpt7/OElyyIe9Rsf85biQZ/MfR5nKWc0J0nkY0xTxLGGJIb091NkboduEtG/Xw6X5IMsyjmK1QfV5PJsCm9IReN47/DAa8FVBG0NSUudTVPN6TjvuMc3VKTdRf98P1PPJ1IY803l23TagouJRKiMpUWBNUvMOSgJ5vuUEm7ZcFPZvrMGoafNwSqxRGOTiL5FNuRAHrQdGgLN/eByuXKnkPA0wxrcQJWaSBDMXB5Q4TkK8WgrWuQgWFTIVX2XL34lEBN7zUup/hsbZO8wSG7aJI3IQaZYIJNZCGSynycR5M0DMgIfpQvOt5OihYt/FT2kcOOXwv7fLdkSHq93C0S9HvxGuWOSL0g4UpPGeYP90FrL4Vb8kIj0DNNAfOxeEE5ydgs5KqH3GOypxaL+T+NwqhCXehouWLh4/RPH5NAdlpbnVyQvfMwNmU5KyEgQbFKgaoV1IGgufnFROQ6FUn4Ng1x+HvZVjQ7RyLwug5g9e7b8Uap8wCDtvZy9xyIyjh9Loe/cZghPDYTd4iAy5BBw/gkyKaoRbGIwcnhOz+qmiQKhAU1MwLzafyk47ul7j0cLwCd4KChS4KJNETICRJ1DNNGgWhRPbOc4RRzuODzIRdbbElnIyfYUtGyNRDW9mT115S/6VacFYfafOOpou4pPWQVGaB/QTEBTswSP1or8uz9A7D8Hwpp4kIbJL3g25CFbscGiY7FNGxuiF2MVroXtLUaB6s/tPE9zSOkUtIywyZPCKreT2bDiid6dUvKtC/iQeNK+SGfCdEdjU9IJRxf99tPdApHRrfNM24V1sWtwnnIQmpQEt0U0HONQBjbDF87ApKaLYQgg3yZzoihrmEhRjbEqOgm/0cZD9BbCQ7bvECg73Yqx2JrRjb
4XPv3Q1QdwBnnpSfYpfWSvFbNn75YNrUB8YV6Z/uXnTkuxg131Iywyk5gdwEvtc7Jul7scW0JDjQC91UBebx/NQhRL5qPrGUTjJv63ZSI2K1VQPAGogggjE0dxZBdmxefjcusp9c5By2ahm/Lzn480Xwj/JYAzghiqfDTUzanoIQ/7JDMU66I3fgGEghtll01HODJNNooH5ENSyF7jSbwWGYnfaiOJeLyEje7rEVTFVuJOJ4xy11EoInlMOnJPw5vfHYkektOC4HZ68AT35mzhkKOKdbHrsDZ2c67xJdJ3bspoJSB6rY3SijKYfYZiuzyaCh+qLwQiWyOGq5Mv4lqpGh7yXJmmEWg8GktSQtf6ML5qEGuWPthLsDO9L3Ttxgj1wxwQ5oh4re1mrG2/FQL9n71AphWro1bm/gzGlPmxatLvUO6n2ttKYUp6FSbJGym34jl241RcO+T9BvmOns5ccfi1u/p1tvar4XGl+56t3JncOHlvbNt3KnG2ILZFB5TbFAOy4eum4C/RX8q2TQkIgVnTdhtWtnyPCn4tdyIi+Y3RwpHco6MPNQReHrkZTw36Da5ybckeZy7Psvnn9OvYdo7BTJOzkgC/rLO1g2JyUZFsXSKrgQs1t/wszhZEbSrvko+TFblawSsmsSD0GErE1hwIljWt+DV4sP6n1NkuISBSrg5w0uTsn9jI7DZQ6Y5i5LhJGDx8BDwB/xcvb9EJZZNHhmDQn00c8zpbO4/n1ypGb1jtRNctJ1I4g5ySYvWMNHCbMRbDtE/hEduQL0bwo6J/w0+aH0GdUZ5DX2f2xY/qH8Zt+WswwbuTzCsbdTns47T7J4gAQmnkD8hDaGQhTJsjHo0RHRsI+H3UuhGQTKW3d7a2UzPoSTNh5POk7RJL8n6IM8gpux2Xz1/7bmuq4OIBSgTfL15KkdaCQkHNtPNR3T4Lbycn5oBkL/IMVCp78Z3gCqqF6bTohXNVa7bxSRFbCFBULjEhFNJXF2mYUsRi7Aeeq55/Bj0g4qluDBs95dF2I+Rqt/ui3SzFKO1DqPS4m5x0tHsfgtBRYwyESVE4uxP1di+sT05Ci1GAXk4b/Ab1IdPZX1RkatvkgUUGAS3DtmYahz2SiH5zZv7Me87YY+2qdHoSK5cuLD3a6BxZ2ZKNSfQC3IPz1HrcF1qKkBQnG5RzftBqhvBK+3XYqlfmdiN7Ze2TupT4B6EeI6WD6KNZkSHu5Ao6geWhH07Zja9AOgexaMY4b/LI5g3RW7A9PiVbntMLaiiRWvFwaBF6K/SrG5kKWRiZDvlGugxvpL+F/XwUDYrwUT/YK/N33BpbXezGC/PDLIavUDoFUR2eOs5rtm4OSiJWNd+Ld1KTPt99ejr788gN7nWYrL2NAqeVEiwaMIixWACip3/kkDz62UZXv9cnPzxgB/5K0ik7UVYakchcFApkcwueQS9Rx+vx60E9sJwjv568CttTl+ESYTeqxE9Q5uXvuj2F/6MphatHhKu6XSOfq5ySnbbce/32AsmodFPFpdL+H01UYFn8LrQ5RaB2KqhVF/WIfFWRV3jxx4+xv9qudyanBPHO7bP7B13GW34pMUSlklHKUGQmTjogX7H1TfFbq3we9aUHHmVx/A3IabvinHN2bOaPb1ac5GDF3adNC/R7zbVkei2+lq/l71f+AAyU/NPXrcSqAAAAAElFTkSuQmCC",Jq="/static/assets/kanban-d6729062.png",jq="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACkAAAApCAYAAACoYAD2AAAACXBIWXMAABYlAAAWJQFJUiTwAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAesSURBVHgB7Zd7bF5lHce/z3Oec30vXbt1WdeFGNGk6jRKdGNRo5QlkhiniyFK0DFh6lCSCgvgYCGVBZjiNrYiI+yiBoMsAecfxgRmKIpT6gbxH0kFlewC7Nr37Xs594vf844lNKlr37qykOzbnPa9nD7nc77P73aAS3qP6MgtQwvxf0pgljS8evB9zpj35FzfWNrd3fWCNq94bXHbt09gBpoVyJH1989N/lUdcc5ElxeaEp3SQamnPBotcK4qPvL942hTErOhk421hhdfrqIMSFKETRfJkUpf4Wj1ucb3frIAbWpWIEWSLteyFCLL8tdI0wS+5yF5q/Yh54g33C7orECmcXhMyJS0Z48MCX9iBFEIUQ36CsfctkBnBTLM0kdjmSBThOORCh50Naa7cRozBLK+wskcdHBaoLMCeVhUu99snozDgFssQiQy9zGlm0CcED6lw6bRVxiXw9k0QC845BM3rLlOO1X/ZaNRV0crbyKsNBGlEaJ8wxmfSZwhbYaAG7ZA4U4NekFL0K+/ceMKOVbfp8Z8CT9GHMRwmsBlSSccx4ElLRR0E7ajw7AlZFkH5lq0KhltWmB5Gjw+q5BPXHfTV/Tx+tN6xZeal0KGGbIkQ16FLDfDosCBXSjBVg4c04BjCeg6s98RLdCsIEZdx5gUdNqQh76zqUO68XI9lVcapnI1U/zxA4/e/lz+3d58i0/XfmVUAqncFBp3spXc0Jg8GkJF1yoBFjUUHS3DNghqaLD0FLoWQRq8k04D2XxnVCya83lx/90n2oY8sPreT1nV6Ld2PVtoJxpMU0EWefGSOPhK9saO5qkTu8wc0MugWoCSeBJC6RB0LVUKPmtldryGXlfCtjtg6xYcOmmqiKAxNFYBkW//+7tewYc/vkTcvqp57vraVIAjawf79Ur4rFVLuiyXse4LWBHgMAGcMO0tjcdfboydEJqfQY/oHQEVl1XKgG5aULoOTWrQMomIjtZCD2YQ8Mp0ueUR4fPcz0tTzM8Drzs1vPGNf/vLgXMMaipIxtdu3YsdRQDJdUSU8G8GLZLQQ4GFmgXb68GrwRtIJc8RdJEOaqYJoRsQfJ+m3E7emJVI1AsWjqQNLIzHEckiIq5pEVLpMTsUYyRnrVQb72Q4L+Sf7/hxCW/VuvLWJmLBFscPU4EsX4zOSLoqVIa5RhmLGwn+rc4gsVQLULYA6VYOyCSSYQrFRDIzgfGCicNxiB4rYxFIYXE9I/fVVgjmd/7d+WDvnglGnQ9yz4E/hKs/9llHi9PPaT4vQjcU+TR2D00w/lhlhc5fOuNUKHQEEp7FzDYMSG6x4A2BcFnILY2TVg9P+X+hLlDrKVSyuR2DwjT7fF12eGUrC7tLO6qXOas+sWnTBCenlTgHv3bHZnMsus2uADbdM/OyognYTCCNNY9vWtMOxl1OPDUcK7B4Sx0pHUwIGBMwJmDAruPR+fF5RsXrLi3/5m9+9nK+/vBN9y2uJtGxlb8YrE52/WmXoJGV67YUKtGtvH8mjSQXSwghjZIBlHmwpCBkQJ2pI6pVcSwLESSCXSZrdZrwbcDmPKtSX1j+wtf3bjs43WtPmd3ntGv0r89866NXdog4XpbXQRYhZjCThFstGIcoELRAVwt0l6YWGz4aoc+hAq2+HSgBr9upxd321V99augltKFpQ
+ba+eoIQZeVEUbLNFYLXaoWqGRW53EJMz/UWVC+LLk+arEPj3GbA/o9pau+9NTQy2hTbUHmeuy1kWdXL17SgSBepgK2NmawRkiRTzZMitYMyXjN41Rjfy5pCU4XVaXR03H1F/c+1DZgrinr5GR68Yqu3/W+Vrv1I6M+JJ9htDzBmRiI8ipPQOPtlR2Tg04nrugyz4he5z+Yodp28qH161d4jWDvcU03IyHQUXHZChmfNFFLmDghAzbvKGGQj+jIO4rIki4oI/jRi396HjNQW/Pk0OBgvx+kj4ehKCfM8H/MKeP1+eYzZyIXjUaI2CVUPivW/FY5QoXlrtZsHZnnrcIMNW3IoY0b+4Mo28eeURbCZicxoBxr4JoXHr6m0omtp2MPNS9GFLBw56XI59Z7dLPp8Wgi9dxezFDTgtzxwAP97Nj7NWWXFedBgkJ3igP3/HzD9vz7pYd23lbpyracSjyMEzBg8U7yx4Q4b8QsQAyDKIqOYoaaEnLH1gf7U03br+tFqbQCL8iyY9kDGx67c/s7z1t6aPe6SqfccpJFvMou4xLOZ092mUdNzpR13fw9ZqjzQj6yefMKFpl9ul6SuuYg5uSTCXPNXdvWbZ/s/GUv7V43NsfacpwT7xi7TJ2fNQyFWrl8uFac8yBmqP/ZFncObe1Hpu2T0imLTMEdjzjupQM/2Pjd7VMtOvyZmzfZzeadBQ6yakFpTMwpfrLv8Z++jhlKTA441C+k3Kc0pyxZ9NwaAf104JZ7bpwS8JyeX3n3tSqLeg1Lf3rJk/fNOB4nhdyzY6g/y87GYA7o1xn0bjpw84ZV0wa80JoQk7uGtn6avWO/YZaZJDZCPrMkvriogLkmtEU+pTzMEiMVn4+DZoLQTdesvev63bjImgAZNbNqPqQaKuawKgbW/vDiA+aaAFkfw/VsEyssS/5z4N4bhnFJl/Tu679UWnoRYOoVMgAAAABJRU5ErkJggg==",e2="/static/assets/contributors-753a72cb.png",mi=t=>(Zi("data-v-d5d425dc"),t=t(),Ji(),t),t2={class:"wechatModal"},n2=mi(()=>B("div",{class:"title"},[B("img",{style:{width:"24px","margin-top":"-2px"},src:Zq,alt:""}),B("div",{class:"titleText"},"Welcome to be our contributor")],-1)),r2=mi(()=>B("div",{class:"desc"}," You can pick tasks from the roadmap. If you're concerned about development conflicts, you can create issues in advance to reserve tasks. You can also contribute in other roles within the MetaGPT software team. Come join us and build together! ",-1)),i2={class:"links"},a2={style:{width:"84px","text-align":"right"}},o2=["onClick"],s2=mi(()=>B("div",{class:"viwer"},[B("img",{style:{width:"100%"},src:Jq,alt:""})],-1)),l2={class:"button"},c2=mi(()=>B("img",{src:jq,style:{width:"20px"},alt:""},null,-1)),u2=mi(()=>B("div",{class:"welcomText"},"We are waiting for your join.",-1)),d2=mi(()=>B("div",{class:"contributor"},[B("span",null,"Contributors"),B("div",{class:"count"},"23")],-1)),_2=mi(()=>B("img",{style:{width:"244px"},src:e2,alt:""},null,-1)),p2=be({__name:"contributorModal",props:{visible:{type:Boolean}},emits:["update:visible"],setup(t,{emit:e}){const i=yt(t,"visible"),o=()=>{e("update:visible",!1)},s={ROADMAP:"https://github.com/geekan/MetaGPT/blob/main/docs/ROADMAP.md",TASKS:"https://github.com/users/geekan/projects/1/views/2"},l=c=>{window.open(c)};return(c,d)=>(V(),ot(wC,{visible:q(i),"onUpdate:visible":d[1]||(d[1]=_=>wr(i)?i.value=_:null),style:{width:"709px",height:"799px"},onClose:o},{default:dt(()=>[B("div",t2,[n2,r2,B("div",i2,[(V(),ae(st,null,yn(s,(_,p)=>B("span",{key:p,style:{display:"flex",gap:"20px"}},[B("div",a2,Qe(p)+":",1),B("div",{class:"link",onClick:g=>l(_)},Qe(_),9,o2)])),64))]),s2,B("div",l2,[c2,B("span",{onClick:d[0]||(d[0]=_=>l("https://github.com/users/geekan/projects/1/views/2"))},"Lock Task")]),u2,d2,_2])]),_:1},8,["visible"]))}});const m2=Dt(p2,[["__scopeId","data-v-d5d425dc"]]),km=t=>({...t,parent:null,children:[]}),XS=(t,e)=>{t.children.length||(t.children=[]),t.children.push(e),e.parent=t},g2=(t,e)=>{const n=new Map;n.set(e.id,e);const i=[];return e.children=i,t.forEach(o=>{n.set(o.id,o)}),t.forEach(o=>{const s=km(o);n.set(o.id,s)}),{messageMap:n,root:e}},cN=t=>{t.children.length&&(t.children=t.children.sort((e,n)=>Ka(e.created_at).isAfter(n.created_at)?1:-1),t.children.forEach(e=>cN(e)))},uN=t=>{const e=[];for(e.push(t);e.length;){const n=e.pop();if(!n.children.length)return n;e.push(...n.children)}return e[e.length-1]},E2=t=>{const 
e=(s,l)=>s.findIndex(c=>c.id===l.id)+1,n=[],i=uN(t);let o=i;for(;o!=null&&o.parent;)n.unshift({current:e(o.parent.children,o),is_user_message:o.is_user_message,activeNode:o,renderPath:o.parent.children}),o=o.parent;return{renderPath:n,lastLeaf:i}};var Xt=(t=>(t.RUNNING="running",t.FINISH="finish",t.FAILED="failed",t.TERMINATE="terminate",t))(Xt||{}),Dr=(t=>(t.TEXT="text",t.AUDIO="audio",t.IMAGE="image",t.FAILED="failed",t))(Dr||{}),Ut=(t=>(t.INIT="init",t.IDLE="idle",t.RUNNING="running",t.FINISH="finish",t.FAILED="failed",t.TERMINATE="terminate",t))(Ut||{}),qs={exports:{}};/** - * @license - * Lodash - * Copyright OpenJS Foundation and other contributors - * Released under MIT license - * Based on Underscore.js 1.8.3 - * Copyright Jeremy Ashkenas, DocumentCloud and Investigative Reporters & Editors - */qs.exports;(function(t,e){(function(){var n,i="4.17.21",o=200,s="Unsupported core-js use. Try https://npms.io/search?q=ponyfill.",l="Expected a function",c="Invalid `variable` option passed into `_.template`",d="__lodash_hash_undefined__",_=500,p="__lodash_placeholder__",g=1,E=2,f=4,S=1,C=2,h=1,T=2,N=4,y=8,x=16,P=32,D=64,k=128,U=256,W=512,z=30,K="...",Ee=800,oe=16,L=1,J=2,re=3,G=1/0,X=9007199254740991,_e=17976931348623157e292,ve=0/0,he=4294967295,tt=he-1,lt=he>>>1,$e=[["ary",k],["bind",h],["bindKey",T],["curry",y],["curryRight",x],["flip",W],["partial",P],["partialRight",D],["rearg",U]],Ce="[object Arguments]",Be="[object Array]",Ve="[object AsyncFunction]",xe="[object Boolean]",He="[object Date]",rt="[object DOMException]",We="[object Error]",te="[object Function]",pe="[object GeneratorFunction]",ie="[object Map]",Pe="[object Number]",we="[object Null]",Xe="[object Object]",pt="[object Promise]",me="[object Proxy]",bt="[object RegExp]",Ue="[object Set]",Ie="[object String]",zt="[object Symbol]",Nt="[object Undefined]",Gt="[object WeakMap]",Sn="[object WeakSet]",ne="[object ArrayBuffer]",ce="[object DataView]",Oe="[object Float32Array]",Me="[object Float64Array]",ct="[object Int8Array]",xt="[object Int16Array]",Ze="[object Int32Array]",Yt="[object Uint8Array]",er="[object Uint8ClampedArray]",Z="[object Uint16Array]",ge="[object Uint32Array]",Ae=/\b__p \+= '';/g,it=/\b(__p \+=) '' \+/g,ht=/(__e\(.*?\)|\b__t\)) \+\n'';/g,wt=/&(?:amp|lt|gt|quot|#39);/g,tn=/[&<>"']/g,mt=RegExp(wt.source),ln=RegExp(tn.source),tr=/<%-([\s\S]+?)%>/g,gl=/<%([\s\S]+?)%>/g,lo=/<%=([\s\S]+?)%>/g,El=/\.|\[(?:[^[\]]*|(["'])(?:(?!\1)[^\\]|\\.)*?\1)\]/,fl=/^\w*$/,Sl=/[^.[\]]+|\[(?:(-?\d+(?:\.\d+)?)|(["'])((?:(?!\2)[^\\]|\\.)*?)\2)\]|(?=(?:\.|\[\])(?:\.|\[\]|$))/g,ca=/[\\^$.*+?()[\]{}|]/g,bl=RegExp(ca.source),ua=/^\s+/,hl=/\s/,Tl=/\{(?:\n\/\* \[wrapped with .+\] \*\/)?\n?/,vl=/\{\n\/\* \[wrapped with (.+)\] \*/,Cl=/,? 
& /,Rl=/[^\x00-\x2f\x3a-\x40\x5b-\x60\x7b-\x7f]+/g,Nl=/[()=,{}\[\]\/\s]/,Ol=/\\(\\)?/g,Al=/\$\{([^\\}]*(?:\\.[^\\}]*)*)\}/g,co=/\w*$/,yl=/^[-+]0x[0-9a-f]+$/i,Il=/^0b[01]+$/i,Dl=/^\[object .+?Constructor\]$/,xl=/^0o[0-7]+$/i,wl=/^(?:0|[1-9]\d*)$/,Ml=/[\xc0-\xd6\xd8-\xf6\xf8-\xff\u0100-\u017f]/g,Ei=/($^)/,Ll=/['\n\r\u2028\u2029\\]/g,fi="\\ud800-\\udfff",Pl="\\u0300-\\u036f",kl="\\ufe20-\\ufe2f",Ul="\\u20d0-\\u20ff",uo=Pl+kl+Ul,_o="\\u2700-\\u27bf",po="a-z\\xdf-\\xf6\\xf8-\\xff",Fl="\\xac\\xb1\\xd7\\xf7",Bl="\\x00-\\x2f\\x3a-\\x40\\x5b-\\x60\\x7b-\\xbf",Gl="\\u2000-\\u206f",Yl=" \\t\\x0b\\f\\xa0\\ufeff\\n\\r\\u2028\\u2029\\u1680\\u180e\\u2000\\u2001\\u2002\\u2003\\u2004\\u2005\\u2006\\u2007\\u2008\\u2009\\u200a\\u202f\\u205f\\u3000",mo="A-Z\\xc0-\\xd6\\xd8-\\xde",go="\\ufe0e\\ufe0f",Eo=Fl+Bl+Gl+Yl,da="['’]",ql="["+fi+"]",fo="["+Eo+"]",Si="["+uo+"]",So="\\d+",$l="["+_o+"]",bo="["+po+"]",ho="[^"+fi+Eo+So+_o+po+mo+"]",_a="\\ud83c[\\udffb-\\udfff]",Hl="(?:"+Si+"|"+_a+")",To="[^"+fi+"]",pa="(?:\\ud83c[\\udde6-\\uddff]){2}",ma="[\\ud800-\\udbff][\\udc00-\\udfff]",gr="["+mo+"]",vo="\\u200d",Co="(?:"+bo+"|"+ho+")",Ro="(?:"+gr+"|"+ho+")",ga="(?:"+da+"(?:d|ll|m|re|s|t|ve))?",Ea="(?:"+da+"(?:D|LL|M|RE|S|T|VE))?",No=Hl+"?",Oo="["+go+"]?",Ao="(?:"+vo+"(?:"+[To,pa,ma].join("|")+")"+Oo+No+")*",bi="\\d*(?:1st|2nd|3rd|(?![123])\\dth)(?=\\b|[A-Z_])",fa="\\d*(?:1ST|2ND|3RD|(?![123])\\dTH)(?=\\b|[a-z_])",Sa=Oo+No+Ao,yo="(?:"+[$l,pa,ma].join("|")+")"+Sa,Io="(?:"+[To+Si+"?",Si,pa,ma,ql].join("|")+")",Dg=RegExp(da,"g"),xg=RegExp(Si,"g"),zl=RegExp(_a+"(?="+_a+")|"+Io+Sa,"g"),QN=RegExp([gr+"?"+bo+"+"+ga+"(?="+[fo,gr,"$"].join("|")+")",Ro+"+"+Ea+"(?="+[fo,gr+Co,"$"].join("|")+")",gr+"?"+Co+"+"+ga,gr+"+"+Ea,fa,bi,So,yo].join("|"),"g"),XN=RegExp("["+vo+fi+uo+go+"]"),ZN=/[a-z][A-Z]|[A-Z]{2}[a-z]|[0-9][a-zA-Z]|[a-zA-Z][0-9]|[^a-zA-Z0-9 ]/,JN=["Array","Buffer","DataView","Date","Error","Float32Array","Float64Array","Function","Int8Array","Int16Array","Int32Array","Map","Math","Object","Promise","RegExp","Set","String","Symbol","TypeError","Uint8Array","Uint8ClampedArray","Uint16Array","Uint32Array","WeakMap","_","clearTimeout","isFinite","parseInt","setTimeout"],jN=-1,gt={};gt[Oe]=gt[Me]=gt[ct]=gt[xt]=gt[Ze]=gt[Yt]=gt[er]=gt[Z]=gt[ge]=!0,gt[Ce]=gt[Be]=gt[ne]=gt[xe]=gt[ce]=gt[He]=gt[We]=gt[te]=gt[ie]=gt[Pe]=gt[Xe]=gt[bt]=gt[Ue]=gt[Ie]=gt[Gt]=!1;var _t={};_t[Ce]=_t[Be]=_t[ne]=_t[ce]=_t[xe]=_t[He]=_t[Oe]=_t[Me]=_t[ct]=_t[xt]=_t[Ze]=_t[ie]=_t[Pe]=_t[Xe]=_t[bt]=_t[Ue]=_t[Ie]=_t[zt]=_t[Yt]=_t[er]=_t[Z]=_t[ge]=!0,_t[We]=_t[te]=_t[Gt]=!1;var 
eO={À:"A",Á:"A",Â:"A",Ã:"A",Ä:"A",Å:"A",à:"a",á:"a",â:"a",ã:"a",ä:"a",å:"a",Ç:"C",ç:"c",Ð:"D",ð:"d",È:"E",É:"E",Ê:"E",Ë:"E",è:"e",é:"e",ê:"e",ë:"e",Ì:"I",Í:"I",Î:"I",Ï:"I",ì:"i",í:"i",î:"i",ï:"i",Ñ:"N",ñ:"n",Ò:"O",Ó:"O",Ô:"O",Õ:"O",Ö:"O",Ø:"O",ò:"o",ó:"o",ô:"o",õ:"o",ö:"o",ø:"o",Ù:"U",Ú:"U",Û:"U",Ü:"U",ù:"u",ú:"u",û:"u",ü:"u",Ý:"Y",ý:"y",ÿ:"y",Æ:"Ae",æ:"ae",Þ:"Th",þ:"th",ß:"ss",Ā:"A",Ă:"A",Ą:"A",ā:"a",ă:"a",ą:"a",Ć:"C",Ĉ:"C",Ċ:"C",Č:"C",ć:"c",ĉ:"c",ċ:"c",č:"c",Ď:"D",Đ:"D",ď:"d",đ:"d",Ē:"E",Ĕ:"E",Ė:"E",Ę:"E",Ě:"E",ē:"e",ĕ:"e",ė:"e",ę:"e",ě:"e",Ĝ:"G",Ğ:"G",Ġ:"G",Ģ:"G",ĝ:"g",ğ:"g",ġ:"g",ģ:"g",Ĥ:"H",Ħ:"H",ĥ:"h",ħ:"h",Ĩ:"I",Ī:"I",Ĭ:"I",Į:"I",İ:"I",ĩ:"i",ī:"i",ĭ:"i",į:"i",ı:"i",Ĵ:"J",ĵ:"j",Ķ:"K",ķ:"k",ĸ:"k",Ĺ:"L",Ļ:"L",Ľ:"L",Ŀ:"L",Ł:"L",ĺ:"l",ļ:"l",ľ:"l",ŀ:"l",ł:"l",Ń:"N",Ņ:"N",Ň:"N",Ŋ:"N",ń:"n",ņ:"n",ň:"n",ŋ:"n",Ō:"O",Ŏ:"O",Ő:"O",ō:"o",ŏ:"o",ő:"o",Ŕ:"R",Ŗ:"R",Ř:"R",ŕ:"r",ŗ:"r",ř:"r",Ś:"S",Ŝ:"S",Ş:"S",Š:"S",ś:"s",ŝ:"s",ş:"s",š:"s",Ţ:"T",Ť:"T",Ŧ:"T",ţ:"t",ť:"t",ŧ:"t",Ũ:"U",Ū:"U",Ŭ:"U",Ů:"U",Ű:"U",Ų:"U",ũ:"u",ū:"u",ŭ:"u",ů:"u",ű:"u",ų:"u",Ŵ:"W",ŵ:"w",Ŷ:"Y",ŷ:"y",Ÿ:"Y",Ź:"Z",Ż:"Z",Ž:"Z",ź:"z",ż:"z",ž:"z",IJ:"IJ",ij:"ij",Œ:"Oe",œ:"oe",ʼn:"'n",ſ:"s"},tO={"&":"&","<":"<",">":">",'"':""","'":"'"},nO={"&":"&","<":"<",">":">",""":'"',"'":"'"},rO={"\\":"\\","'":"'","\n":"n","\r":"r","\u2028":"u2028","\u2029":"u2029"},iO=parseFloat,aO=parseInt,wg=typeof La=="object"&&La&&La.Object===Object&&La,oO=typeof self=="object"&&self&&self.Object===Object&&self,qt=wg||oO||Function("return this")(),Vl=e&&!e.nodeType&&e,Ur=Vl&&!0&&t&&!t.nodeType&&t,Mg=Ur&&Ur.exports===Vl,Wl=Mg&&wg.process,bn=function(){try{var w=Ur&&Ur.require&&Ur.require("util").types;return w||Wl&&Wl.binding&&Wl.binding("util")}catch{}}(),Lg=bn&&bn.isArrayBuffer,Pg=bn&&bn.isDate,kg=bn&&bn.isMap,Ug=bn&&bn.isRegExp,Fg=bn&&bn.isSet,Bg=bn&&bn.isTypedArray;function cn(w,Y,F){switch(F.length){case 0:return w.call(Y);case 1:return w.call(Y,F[0]);case 2:return w.call(Y,F[0],F[1]);case 3:return w.call(Y,F[0],F[1],F[2])}return w.apply(Y,F)}function sO(w,Y,F,de){for(var ye=-1,et=w==null?0:w.length;++ye-1}function Kl(w,Y,F){for(var de=-1,ye=w==null?0:w.length;++de-1;);return F}function Wg(w,Y){for(var F=w.length;F--&&hi(Y,w[F],0)>-1;);return F}function EO(w,Y){for(var F=w.length,de=0;F--;)w[F]===Y&&++de;return de}var fO=Jl(eO),SO=Jl(tO);function bO(w){return"\\"+rO[w]}function hO(w,Y){return w==null?n:w[Y]}function Ti(w){return XN.test(w)}function TO(w){return ZN.test(w)}function vO(w){for(var Y,F=[];!(Y=w.next()).done;)F.push(Y.value);return F}function nc(w){var Y=-1,F=Array(w.size);return w.forEach(function(de,ye){F[++Y]=[ye,de]}),F}function Kg(w,Y){return function(F){return w(Y(F))}}function Sr(w,Y){for(var F=-1,de=w.length,ye=0,et=[];++F-1}function cA(r,a){var u=this.__data__,m=Wo(u,r);return m<0?(++this.size,u.push([r,a])):u[m][1]=a,this}nr.prototype.clear=aA,nr.prototype.delete=oA,nr.prototype.get=sA,nr.prototype.has=lA,nr.prototype.set=cA;function rr(r){var a=-1,u=r==null?0:r.length;for(this.clear();++a=a?r:a)),r}function Cn(r,a,u,m,b,R){var O,A=a&g,M=a&E,$=a&f;if(u&&(O=b?u(r,m,b,R):u(r)),O!==n)return O;if(!Tt(r))return r;var H=De(r);if(H){if(O=py(r),!A)return nn(r,O)}else{var Q=Wt(r),se=Q==te||Q==pe;if(Rr(r))return DE(r,A);if(Q==Xe||Q==Ce||se&&!b){if(O=M||se?{}:QE(r),!A)return M?ny(r,NA(O,r)):ty(r,oE(O,r))}else{if(!_t[Q])return b?r:{};O=my(r,Q,A)}}R||(R=new wn);var fe=R.get(r);if(fe)return fe;R.set(r,O),Nf(r)?r.forEach(function(Ne){O.add(Cn(Ne,a,u,Ne,r,R))}):Cf(r)&&r.forEach(function(Ne,Ye){O.set(Ye,Cn(Ne,a,u,Ye,r,R))});var 
Re=$?M?yc:Ac:M?an:kt,ke=H?n:Re(r);return hn(ke||r,function(Ne,Ye){ke&&(Ye=Ne,Ne=r[Ye]),Na(O,Ye,Cn(Ne,a,u,Ye,r,R))}),O}function OA(r){var a=kt(r);return function(u){return sE(u,r,a)}}function sE(r,a,u){var m=u.length;if(r==null)return!m;for(r=ut(r);m--;){var b=u[m],R=a[b],O=r[b];if(O===n&&!(b in r)||!R(O))return!1}return!0}function lE(r,a,u){if(typeof r!="function")throw new Tn(l);return wa(function(){r.apply(n,u)},a)}function Oa(r,a,u,m){var b=-1,R=Do,O=!0,A=r.length,M=[],$=a.length;if(!A)return M;u&&(a=ft(a,un(u))),m?(R=Kl,O=!1):a.length>=o&&(R=ba,O=!1,a=new Gr(a));e:for(;++bb?0:b+u),m=m===n||m>b?b:Le(m),m<0&&(m+=b),m=u>m?0:Af(m);u0&&u(A)?a>1?$t(A,a-1,u,m,b):fr(b,A):m||(b[b.length]=A)}return b}var cc=kE(),dE=kE(!0);function Yn(r,a){return r&&cc(r,a,kt)}function uc(r,a){return r&&dE(r,a,kt)}function Qo(r,a){return Er(a,function(u){return lr(r[u])})}function qr(r,a){a=vr(a,r);for(var u=0,m=a.length;r!=null&&ua}function IA(r,a){return r!=null&&at.call(r,a)}function DA(r,a){return r!=null&&a in ut(r)}function xA(r,a,u){return r>=Vt(a,u)&&r=120&&H.length>=120)?new Gr(O&&H):n}H=r[0];var Q=-1,se=A[0];e:for(;++Q-1;)A!==r&&Go.call(A,M,1),Go.call(r,M,1);return r}function vE(r,a){for(var u=r?a.length:0,m=u-1;u--;){var b=a[u];if(u==m||b!==R){var R=b;sr(b)?Go.call(r,b,1):hc(r,b)}}return r}function fc(r,a){return r+$o(nE()*(a-r+1))}function HA(r,a,u,m){for(var b=-1,R=Lt(qo((a-r)/(u||1)),0),O=F(R);R--;)O[m?R:++b]=r,r+=u;return O}function Sc(r,a){var u="";if(!r||a<1||a>X)return u;do a%2&&(u+=r),a=$o(a/2),a&&(r+=r);while(a);return u}function Fe(r,a){return Pc(JE(r,a,on),r+"")}function zA(r){return aE(wi(r))}function VA(r,a){var u=wi(r);return os(u,Yr(a,0,u.length))}function Ia(r,a,u,m){if(!Tt(r))return r;a=vr(a,r);for(var b=-1,R=a.length,O=R-1,A=r;A!=null&&++bb?0:b+a),u=u>b?b:u,u<0&&(u+=b),b=a>u?0:u-a>>>0,a>>>=0;for(var R=F(b);++m>>1,O=r[R];O!==null&&!_n(O)&&(u?O<=a:O=o){var $=a?null:oy(r);if($)return wo($);O=!1,b=ba,M=new Gr}else M=a?[]:A;e:for(;++m=m?r:Rn(r,a,u)}var IE=UO||function(r){return qt.clearTimeout(r)};function DE(r,a){if(a)return r.slice();var u=r.length,m=Zg?Zg(u):new r.constructor(u);return r.copy(m),m}function Rc(r){var a=new r.constructor(r.byteLength);return new Fo(a).set(new Fo(r)),a}function ZA(r,a){var u=a?Rc(r.buffer):r.buffer;return new r.constructor(u,r.byteOffset,r.byteLength)}function JA(r){var a=new r.constructor(r.source,co.exec(r));return a.lastIndex=r.lastIndex,a}function jA(r){return Ra?ut(Ra.call(r)):{}}function xE(r,a){var u=a?Rc(r.buffer):r.buffer;return new r.constructor(u,r.byteOffset,r.length)}function wE(r,a){if(r!==a){var u=r!==n,m=r===null,b=r===r,R=_n(r),O=a!==n,A=a===null,M=a===a,$=_n(a);if(!A&&!$&&!R&&r>a||R&&O&&M&&!A&&!$||m&&O&&M||!u&&M||!b)return 1;if(!m&&!R&&!$&&r=A)return M;var $=u[m];return M*($=="desc"?-1:1)}}return r.index-a.index}function ME(r,a,u,m){for(var b=-1,R=r.length,O=u.length,A=-1,M=a.length,$=Lt(R-O,0),H=F(M+$),Q=!m;++A1?u[b-1]:n,O=b>2?u[2]:n;for(R=r.length>3&&typeof R=="function"?(b--,R):n,O&&jt(u[0],u[1],O)&&(R=b<3?n:R,b=1),a=ut(a);++m-1?b[R?a[O]:O]:n}}function BE(r){return or(function(a){var u=a.length,m=u,b=vn.prototype.thru;for(r&&a.reverse();m--;){var R=a[m];if(typeof R!="function")throw new Tn(l);if(b&&!O&&is(R)=="wrapper")var O=new vn([],!0)}for(m=O?m:u;++m1&&Ke.reverse(),H&&MA))return!1;var $=R.get(r),H=R.get(a);if($&&H)return $==a&&H==r;var Q=-1,se=!0,fe=u&C?new Gr:n;for(R.set(r,a),R.set(a,r);++Q1?"& ":"")+a[m],a=a.join(u>2?", ":" "),r.replace(Tl,`{ -/* [wrapped with `+a+`] */ -`)}function Ey(r){return 
De(r)||zr(r)||!!(eE&&r&&r[eE])}function sr(r,a){var u=typeof r;return a=a??X,!!a&&(u=="number"||u!="symbol"&&wl.test(r))&&r>-1&&r%1==0&&r0){if(++a>=Ee)return arguments[0]}else a=0;return r.apply(n,arguments)}}function os(r,a){var u=-1,m=r.length,b=m-1;for(a=a===n?m:a;++u1?r[a-1]:n;return u=typeof u=="function"?(r.pop(),u):n,df(r,u)});function _f(r){var a=v(r);return a.__chain__=!0,a}function AI(r,a){return a(r),r}function ss(r,a){return a(r)}var yI=or(function(r){var a=r.length,u=a?r[0]:0,m=this.__wrapped__,b=function(R){return lc(R,r)};return a>1||this.__actions__.length||!(m instanceof ze)||!sr(u)?this.thru(b):(m=m.slice(u,+u+(a?1:0)),m.__actions__.push({func:ss,args:[b],thisArg:n}),new vn(m,this.__chain__).thru(function(R){return a&&!R.length&&R.push(n),R}))});function II(){return _f(this)}function DI(){return new vn(this.value(),this.__chain__)}function xI(){this.__values__===n&&(this.__values__=Of(this.value()));var r=this.__index__>=this.__values__.length,a=r?n:this.__values__[this.__index__++];return{done:r,value:a}}function wI(){return this}function MI(r){for(var a,u=this;u instanceof Vo;){var m=af(u);m.__index__=0,m.__values__=n,a?b.__wrapped__=m:a=m;var b=m;u=u.__wrapped__}return b.__wrapped__=r,a}function LI(){var r=this.__wrapped__;if(r instanceof ze){var a=r;return this.__actions__.length&&(a=new ze(this)),a=a.reverse(),a.__actions__.push({func:ss,args:[kc],thisArg:n}),new vn(a,this.__chain__)}return this.thru(kc)}function PI(){return AE(this.__wrapped__,this.__actions__)}var kI=jo(function(r,a,u){at.call(r,u)?++r[u]:ir(r,u,1)});function UI(r,a,u){var m=De(r)?Gg:AA;return u&&jt(r,a,u)&&(a=n),m(r,Te(a,3))}function FI(r,a){var u=De(r)?Er:uE;return u(r,Te(a,3))}var BI=FE(of),GI=FE(sf);function YI(r,a){return $t(ls(r,a),1)}function qI(r,a){return $t(ls(r,a),G)}function $I(r,a,u){return u=u===n?1:Le(u),$t(ls(r,a),u)}function pf(r,a){var u=De(r)?hn:hr;return u(r,Te(a,3))}function mf(r,a){var u=De(r)?lO:cE;return u(r,Te(a,3))}var HI=jo(function(r,a,u){at.call(r,u)?r[u].push(a):ir(r,u,[a])});function zI(r,a,u,m){r=rn(r)?r:wi(r),u=u&&!m?Le(u):0;var b=r.length;return u<0&&(u=Lt(b+u,0)),ps(r)?u<=b&&r.indexOf(a,u)>-1:!!b&&hi(r,a,u)>-1}var VI=Fe(function(r,a,u){var m=-1,b=typeof a=="function",R=rn(r)?F(r.length):[];return hr(r,function(O){R[++m]=b?cn(a,O,u):Aa(O,a,u)}),R}),WI=jo(function(r,a,u){ir(r,u,a)});function ls(r,a){var u=De(r)?ft:EE;return u(r,Te(a,3))}function KI(r,a,u,m){return r==null?[]:(De(a)||(a=a==null?[]:[a]),u=m?n:u,De(u)||(u=u==null?[]:[u]),hE(r,a,u))}var QI=jo(function(r,a,u){r[u?0:1].push(a)},function(){return[[],[]]});function XI(r,a,u){var m=De(r)?Ql:Hg,b=arguments.length<3;return m(r,Te(a,4),u,b,hr)}function ZI(r,a,u){var m=De(r)?cO:Hg,b=arguments.length<3;return m(r,Te(a,4),u,b,cE)}function JI(r,a){var u=De(r)?Er:uE;return u(r,ds(Te(a,3)))}function jI(r){var a=De(r)?aE:zA;return a(r)}function eD(r,a,u){(u?jt(r,a,u):a===n)?a=1:a=Le(a);var m=De(r)?vA:VA;return m(r,a)}function tD(r){var a=De(r)?CA:KA;return a(r)}function nD(r){if(r==null)return 0;if(rn(r))return ps(r)?vi(r):r.length;var a=Wt(r);return a==ie||a==Ue?r.size:mc(r).length}function rD(r,a,u){var m=De(r)?Xl:QA;return u&&jt(r,a,u)&&(a=n),m(r,Te(a,3))}var iD=Fe(function(r,a){if(r==null)return[];var u=a.length;return u>1&&jt(r,a[0],a[1])?a=[]:u>2&&jt(a[0],a[1],a[2])&&(a=[a[0]]),hE(r,$t(a,1),[])}),cs=FO||function(){return qt.Date.now()};function aD(r,a){if(typeof a!="function")throw new Tn(l);return r=Le(r),function(){if(--r<1)return a.apply(this,arguments)}}function gf(r,a,u){return 
a=u?n:a,a=r&&a==null?r.length:a,ar(r,k,n,n,n,n,a)}function Ef(r,a){var u;if(typeof a!="function")throw new Tn(l);return r=Le(r),function(){return--r>0&&(u=a.apply(this,arguments)),r<=1&&(a=n),u}}var Fc=Fe(function(r,a,u){var m=h;if(u.length){var b=Sr(u,Di(Fc));m|=P}return ar(r,m,a,u,b)}),ff=Fe(function(r,a,u){var m=h|T;if(u.length){var b=Sr(u,Di(ff));m|=P}return ar(a,m,r,u,b)});function Sf(r,a,u){a=u?n:a;var m=ar(r,y,n,n,n,n,n,a);return m.placeholder=Sf.placeholder,m}function bf(r,a,u){a=u?n:a;var m=ar(r,x,n,n,n,n,n,a);return m.placeholder=bf.placeholder,m}function hf(r,a,u){var m,b,R,O,A,M,$=0,H=!1,Q=!1,se=!0;if(typeof r!="function")throw new Tn(l);a=On(a)||0,Tt(u)&&(H=!!u.leading,Q="maxWait"in u,R=Q?Lt(On(u.maxWait)||0,a):R,se="trailing"in u?!!u.trailing:se);function fe(At){var Ln=m,ur=b;return m=b=n,$=At,O=r.apply(ur,Ln),O}function Re(At){return $=At,A=wa(Ye,a),H?fe(At):O}function ke(At){var Ln=At-M,ur=At-$,Bf=a-Ln;return Q?Vt(Bf,R-ur):Bf}function Ne(At){var Ln=At-M,ur=At-$;return M===n||Ln>=a||Ln<0||Q&&ur>=R}function Ye(){var At=cs();if(Ne(At))return Ke(At);A=wa(Ye,ke(At))}function Ke(At){return A=n,se&&m?fe(At):(m=b=n,O)}function pn(){A!==n&&IE(A),$=0,m=M=b=A=n}function en(){return A===n?O:Ke(cs())}function mn(){var At=cs(),Ln=Ne(At);if(m=arguments,b=this,M=At,Ln){if(A===n)return Re(M);if(Q)return IE(A),A=wa(Ye,a),fe(M)}return A===n&&(A=wa(Ye,a)),O}return mn.cancel=pn,mn.flush=en,mn}var oD=Fe(function(r,a){return lE(r,1,a)}),sD=Fe(function(r,a,u){return lE(r,On(a)||0,u)});function lD(r){return ar(r,W)}function us(r,a){if(typeof r!="function"||a!=null&&typeof a!="function")throw new Tn(l);var u=function(){var m=arguments,b=a?a.apply(this,m):m[0],R=u.cache;if(R.has(b))return R.get(b);var O=r.apply(this,m);return u.cache=R.set(b,O)||R,O};return u.cache=new(us.Cache||rr),u}us.Cache=rr;function ds(r){if(typeof r!="function")throw new Tn(l);return function(){var a=arguments;switch(a.length){case 0:return!r.call(this);case 1:return!r.call(this,a[0]);case 2:return!r.call(this,a[0],a[1]);case 3:return!r.call(this,a[0],a[1],a[2])}return!r.apply(this,a)}}function cD(r){return Ef(2,r)}var uD=XA(function(r,a){a=a.length==1&&De(a[0])?ft(a[0],un(Te())):ft($t(a,1),un(Te()));var u=a.length;return Fe(function(m){for(var b=-1,R=Vt(m.length,u);++b=a}),zr=pE(function(){return arguments}())?pE:function(r){return Ct(r)&&at.call(r,"callee")&&!jg.call(r,"callee")},De=F.isArray,ND=Lg?un(Lg):MA;function rn(r){return r!=null&&_s(r.length)&&!lr(r)}function Ot(r){return Ct(r)&&rn(r)}function OD(r){return r===!0||r===!1||Ct(r)&&Jt(r)==xe}var Rr=GO||Xc,AD=Pg?un(Pg):LA;function yD(r){return Ct(r)&&r.nodeType===1&&!Ma(r)}function ID(r){if(r==null)return!0;if(rn(r)&&(De(r)||typeof r=="string"||typeof r.splice=="function"||Rr(r)||xi(r)||zr(r)))return!r.length;var a=Wt(r);if(a==ie||a==Ue)return!r.size;if(xa(r))return!mc(r).length;for(var u in r)if(at.call(r,u))return!1;return!0}function DD(r,a){return ya(r,a)}function xD(r,a,u){u=typeof u=="function"?u:n;var m=u?u(r,a):n;return m===n?ya(r,a,n,u):!!m}function Gc(r){if(!Ct(r))return!1;var a=Jt(r);return a==We||a==rt||typeof r.message=="string"&&typeof r.name=="string"&&!Ma(r)}function wD(r){return typeof r=="number"&&tE(r)}function lr(r){if(!Tt(r))return!1;var a=Jt(r);return a==te||a==pe||a==Ve||a==me}function vf(r){return typeof r=="number"&&r==Le(r)}function _s(r){return typeof r=="number"&&r>-1&&r%1==0&&r<=X}function Tt(r){var a=typeof r;return r!=null&&(a=="object"||a=="function")}function Ct(r){return r!=null&&typeof r=="object"}var Cf=kg?un(kg):kA;function 
MD(r,a){return r===a||pc(r,a,Dc(a))}function LD(r,a,u){return u=typeof u=="function"?u:n,pc(r,a,Dc(a),u)}function PD(r){return Rf(r)&&r!=+r}function kD(r){if(by(r))throw new ye(s);return mE(r)}function UD(r){return r===null}function FD(r){return r==null}function Rf(r){return typeof r=="number"||Ct(r)&&Jt(r)==Pe}function Ma(r){if(!Ct(r)||Jt(r)!=Xe)return!1;var a=Bo(r);if(a===null)return!0;var u=at.call(a,"constructor")&&a.constructor;return typeof u=="function"&&u instanceof u&&Po.call(u)==LO}var Yc=Ug?un(Ug):UA;function BD(r){return vf(r)&&r>=-X&&r<=X}var Nf=Fg?un(Fg):FA;function ps(r){return typeof r=="string"||!De(r)&&Ct(r)&&Jt(r)==Ie}function _n(r){return typeof r=="symbol"||Ct(r)&&Jt(r)==zt}var xi=Bg?un(Bg):BA;function GD(r){return r===n}function YD(r){return Ct(r)&&Wt(r)==Gt}function qD(r){return Ct(r)&&Jt(r)==Sn}var $D=rs(gc),HD=rs(function(r,a){return r<=a});function Of(r){if(!r)return[];if(rn(r))return ps(r)?xn(r):nn(r);if(ha&&r[ha])return vO(r[ha]());var a=Wt(r),u=a==ie?nc:a==Ue?wo:wi;return u(r)}function cr(r){if(!r)return r===0?r:0;if(r=On(r),r===G||r===-G){var a=r<0?-1:1;return a*_e}return r===r?r:0}function Le(r){var a=cr(r),u=a%1;return a===a?u?a-u:a:0}function Af(r){return r?Yr(Le(r),0,he):0}function On(r){if(typeof r=="number")return r;if(_n(r))return ve;if(Tt(r)){var a=typeof r.valueOf=="function"?r.valueOf():r;r=Tt(a)?a+"":a}if(typeof r!="string")return r===0?r:+r;r=zg(r);var u=Il.test(r);return u||xl.test(r)?aO(r.slice(2),u?2:8):yl.test(r)?ve:+r}function yf(r){return qn(r,an(r))}function zD(r){return r?Yr(Le(r),-X,X):r===0?r:0}function nt(r){return r==null?"":dn(r)}var VD=yi(function(r,a){if(xa(a)||rn(a)){qn(a,kt(a),r);return}for(var u in a)at.call(a,u)&&Na(r,u,a[u])}),If=yi(function(r,a){qn(a,an(a),r)}),ms=yi(function(r,a,u,m){qn(a,an(a),r,m)}),WD=yi(function(r,a,u,m){qn(a,kt(a),r,m)}),KD=or(lc);function QD(r,a){var u=Ai(r);return a==null?u:oE(u,a)}var XD=Fe(function(r,a){r=ut(r);var u=-1,m=a.length,b=m>2?a[2]:n;for(b&&jt(a[0],a[1],b)&&(m=1);++u1),R}),qn(r,yc(r),u),m&&(u=Cn(u,g|E|f,sy));for(var b=a.length;b--;)hc(u,a[b]);return u});function mx(r,a){return xf(r,ds(Te(a)))}var gx=or(function(r,a){return r==null?{}:qA(r,a)});function xf(r,a){if(r==null)return{};var u=ft(yc(r),function(m){return[m]});return a=Te(a),TE(r,u,function(m,b){return a(m,b[0])})}function Ex(r,a,u){a=vr(a,r);var m=-1,b=a.length;for(b||(b=1,r=n);++ma){var m=r;r=a,a=m}if(u||r%1||a%1){var b=nE();return Vt(r+b*(a-r+iO("1e-"+((b+"").length-1))),a)}return fc(r,a)}var Ax=Ii(function(r,a,u){return a=a.toLowerCase(),r+(u?Lf(a):a)});function Lf(r){return Hc(nt(r).toLowerCase())}function Pf(r){return r=nt(r),r&&r.replace(Ml,fO).replace(xg,"")}function yx(r,a,u){r=nt(r),a=dn(a);var m=r.length;u=u===n?m:Yr(Le(u),0,m);var b=u;return u-=a.length,u>=0&&r.slice(u,b)==a}function Ix(r){return r=nt(r),r&&ln.test(r)?r.replace(tn,SO):r}function Dx(r){return r=nt(r),r&&bl.test(r)?r.replace(ca,"\\$&"):r}var xx=Ii(function(r,a,u){return r+(u?"-":"")+a.toLowerCase()}),wx=Ii(function(r,a,u){return r+(u?" ":"")+a.toLowerCase()}),Mx=UE("toLowerCase");function Lx(r,a,u){r=nt(r),a=Le(a);var m=a?vi(r):0;if(!a||m>=a)return r;var b=(a-m)/2;return ns($o(b),u)+r+ns(qo(b),u)}function Px(r,a,u){r=nt(r),a=Le(a);var m=a?vi(r):0;return a&&m>>0,u?(r=nt(r),r&&(typeof a=="string"||a!=null&&!Yc(a))&&(a=dn(a),!a&&Ti(r))?Cr(xn(r),0,u):r.split(a,u)):[]}var qx=Ii(function(r,a,u){return r+(u?" 
":"")+Hc(a)});function $x(r,a,u){return r=nt(r),u=u==null?0:Yr(Le(u),0,r.length),a=dn(a),r.slice(u,u+a.length)==a}function Hx(r,a,u){var m=v.templateSettings;u&&jt(r,a,u)&&(a=n),r=nt(r),a=ms({},a,m,HE);var b=ms({},a.imports,m.imports,HE),R=kt(b),O=tc(b,R),A,M,$=0,H=a.interpolate||Ei,Q="__p += '",se=rc((a.escape||Ei).source+"|"+H.source+"|"+(H===lo?Al:Ei).source+"|"+(a.evaluate||Ei).source+"|$","g"),fe="//# sourceURL="+(at.call(a,"sourceURL")?(a.sourceURL+"").replace(/\s/g," "):"lodash.templateSources["+ ++jN+"]")+` -`;r.replace(se,function(Ne,Ye,Ke,pn,en,mn){return Ke||(Ke=pn),Q+=r.slice($,mn).replace(Ll,bO),Ye&&(A=!0,Q+=`' + -__e(`+Ye+`) + -'`),en&&(M=!0,Q+=`'; -`+en+`; -__p += '`),Ke&&(Q+=`' + -((__t = (`+Ke+`)) == null ? '' : __t) + -'`),$=mn+Ne.length,Ne}),Q+=`'; -`;var Re=at.call(a,"variable")&&a.variable;if(!Re)Q=`with (obj) { -`+Q+` -} -`;else if(Nl.test(Re))throw new ye(c);Q=(M?Q.replace(Ae,""):Q).replace(it,"$1").replace(ht,"$1;"),Q="function("+(Re||"obj")+`) { -`+(Re?"":`obj || (obj = {}); -`)+"var __t, __p = ''"+(A?", __e = _.escape":"")+(M?`, __j = Array.prototype.join; -function print() { __p += __j.call(arguments, '') } -`:`; -`)+Q+`return __p -}`;var ke=Uf(function(){return et(R,fe+"return "+Q).apply(n,O)});if(ke.source=Q,Gc(ke))throw ke;return ke}function zx(r){return nt(r).toLowerCase()}function Vx(r){return nt(r).toUpperCase()}function Wx(r,a,u){if(r=nt(r),r&&(u||a===n))return zg(r);if(!r||!(a=dn(a)))return r;var m=xn(r),b=xn(a),R=Vg(m,b),O=Wg(m,b)+1;return Cr(m,R,O).join("")}function Kx(r,a,u){if(r=nt(r),r&&(u||a===n))return r.slice(0,Qg(r)+1);if(!r||!(a=dn(a)))return r;var m=xn(r),b=Wg(m,xn(a))+1;return Cr(m,0,b).join("")}function Qx(r,a,u){if(r=nt(r),r&&(u||a===n))return r.replace(ua,"");if(!r||!(a=dn(a)))return r;var m=xn(r),b=Vg(m,xn(a));return Cr(m,b).join("")}function Xx(r,a){var u=z,m=K;if(Tt(a)){var b="separator"in a?a.separator:b;u="length"in a?Le(a.length):u,m="omission"in a?dn(a.omission):m}r=nt(r);var R=r.length;if(Ti(r)){var O=xn(r);R=O.length}if(u>=R)return r;var A=u-vi(m);if(A<1)return m;var M=O?Cr(O,0,A).join(""):r.slice(0,A);if(b===n)return M+m;if(O&&(A+=M.length-A),Yc(b)){if(r.slice(A).search(b)){var $,H=M;for(b.global||(b=rc(b.source,nt(co.exec(b))+"g")),b.lastIndex=0;$=b.exec(H);)var Q=$.index;M=M.slice(0,Q===n?A:Q)}}else if(r.indexOf(dn(b),A)!=A){var se=M.lastIndexOf(b);se>-1&&(M=M.slice(0,se))}return M+m}function Zx(r){return r=nt(r),r&&mt.test(r)?r.replace(wt,OO):r}var Jx=Ii(function(r,a,u){return r+(u?" 
":"")+a.toUpperCase()}),Hc=UE("toUpperCase");function kf(r,a,u){return r=nt(r),a=u?n:a,a===n?TO(r)?IO(r):_O(r):r.match(a)||[]}var Uf=Fe(function(r,a){try{return cn(r,n,a)}catch(u){return Gc(u)?u:new ye(u)}}),jx=or(function(r,a){return hn(a,function(u){u=$n(u),ir(r,u,Fc(r[u],r))}),r});function ew(r){var a=r==null?0:r.length,u=Te();return r=a?ft(r,function(m){if(typeof m[1]!="function")throw new Tn(l);return[u(m[0]),m[1]]}):[],Fe(function(m){for(var b=-1;++bX)return[];var u=he,m=Vt(r,he);a=Te(a),r-=he;for(var b=ec(m,a);++u0||a<0)?new ze(u):(r<0?u=u.takeRight(-r):r&&(u=u.drop(r)),a!==n&&(a=Le(a),u=a<0?u.dropRight(-a):u.take(a-r)),u)},ze.prototype.takeRightWhile=function(r){return this.reverse().takeWhile(r).reverse()},ze.prototype.toArray=function(){return this.take(he)},Yn(ze.prototype,function(r,a){var u=/^(?:filter|find|map|reject)|While$/.test(a),m=/^(?:head|last)$/.test(a),b=v[m?"take"+(a=="last"?"Right":""):a],R=m||/^find/.test(a);b&&(v.prototype[a]=function(){var O=this.__wrapped__,A=m?[1]:arguments,M=O instanceof ze,$=A[0],H=M||De(O),Q=function(Ye){var Ke=b.apply(v,fr([Ye],A));return m&&se?Ke[0]:Ke};H&&u&&typeof $=="function"&&$.length!=1&&(M=H=!1);var se=this.__chain__,fe=!!this.__actions__.length,Re=R&&!se,ke=M&&!fe;if(!R&&H){O=ke?O:new ze(this);var Ne=r.apply(O,A);return Ne.__actions__.push({func:ss,args:[Q],thisArg:n}),new vn(Ne,se)}return Re&&ke?r.apply(this,A):(Ne=this.thru(Q),Re?m?Ne.value()[0]:Ne.value():Ne)})}),hn(["pop","push","shift","sort","splice","unshift"],function(r){var a=Mo[r],u=/^(?:push|sort|unshift)$/.test(r)?"tap":"thru",m=/^(?:pop|shift)$/.test(r);v.prototype[r]=function(){var b=arguments;if(m&&!this.__chain__){var R=this.value();return a.apply(De(R)?R:[],b)}return this[u](function(O){return a.apply(De(O)?O:[],b)})}}),Yn(ze.prototype,function(r,a){var u=v[a];if(u){var m=u.name+"";at.call(Oi,m)||(Oi[m]=[]),Oi[m].push({name:a,func:u})}}),Oi[es(n,T).name]=[{name:"wrapper",func:n}],ze.prototype.clone=ZO,ze.prototype.reverse=JO,ze.prototype.value=jO,v.prototype.at=yI,v.prototype.chain=II,v.prototype.commit=DI,v.prototype.next=xI,v.prototype.plant=MI,v.prototype.reverse=LI,v.prototype.toJSON=v.prototype.valueOf=v.prototype.value=PI,v.prototype.first=v.prototype.head,ha&&(v.prototype[ha]=wI),v},Ci=DO();Ur?((Ur.exports=Ci)._=Ci,Vl._=Ci):qt._=Ci}).call(La)})(qs,qs.exports);var Ir=qs.exports;class f2{constructor(e){Mi(this,"queue");Mi(this,"processing");Mi(this,"callback");Mi(this,"steps");Mi(this,"requestAnimationFrameId");this.queue=[],this.processing=!1,this.callback=e,this.steps=[],this.requestAnimationFrameId=null}registerCallback(e){this.callback=e}clearQueue(){this.queue=[],this.processing=!1,this.callback=void 0,this.steps=[],this.requestAnimationFrameId&&(cancelAnimationFrame(this.requestAnimationFrameId),this.requestAnimationFrameId=null)}pushToQueue(e){this.queue.push(e),this.requestAnimationFrameId||this.processQueueWithRAF()}processQueueWithRAF(){var e,n,i,o;if(console.log(this.queue),!this.processing&&this.queue.length>0){this.processing=!0;const s=(l,c)=>Ir.isUndefined(c)?l:c;for(const l of this.queue){const c=this.steps[this.steps.length-1];if(l.id!==(c==null?void 0:c.id)){l.contents=l.content?[l.content]:[],this.steps.push(l);continue}if(Ir.assignWith(c,Ir.omit(l,"content"),s),!l.content)continue;const d=c.contents[c.contents.length-1];if(((e=l.content)==null?void 0:e.id)!==(d==null?void 0:d.id)||l.content.type!==(d==null?void 0:d.type)){c.contents.push(l.content);continue}if(Ir.assignWith(d,Ir.omit(l.content,"value"),s),((n=l.content)==null?void 
0:n.type)===Dr.IMAGE){const _=d.value.answer.endsWith(",");d.value.answer+=`${_?"":","}${(i=l.content)==null?void 0:i.value.answer}`}else d.value.answer+=((o=l.content)==null?void 0:o.value.answer)||""}this.queue=[],this.callback&&this.callback(this.steps),this.processing=!1}this.queue.length>0?this.requestAnimationFrameId=requestAnimationFrame(()=>{this.processQueueWithRAF()}):this.requestAnimationFrameId=null}}const Vr=ee(),As=ee(!0),ZS=ee(0),dN=()=>{const t=async i=>{const{behavior:o="auto",isAsync:s=!0,force:l=!1}=i||{};if(!Vr.value||!l&&!As.value)return;s&&await Ks();const c=(i==null?void 0:i.top)||Vr.value.scrollHeight;if(o==="auto"){Vr.value.scrollTop=c;return}As.value=!0,Vr.value.scrollTo({top:c,behavior:o})},e=(i=10)=>{const{scrollTop:o,clientHeight:s,scrollHeight:l}=Vr.value,c=o+s;return c<=l+i&&c>=l-i},n=()=>{const{scrollTop:i}=Vr.value,o=e();if(i{if(!ei.value||(t==null?void 0:t.id)===(e==null?void 0:e.id))return;const{renderPath:n,lastLeaf:i}=E2(ei.value);Fm.value=n,Qr.value=i},{deep:!0,immediate:!0});const ys=new f2,Bi=ee(),$s=ee(null),tb=ee(!1),h2=t=>{$s.value=new AbortController;const{query:e}=t;return fetch("/api/messages",{signal:$s.value.signal,headers:{accept:"text/event-stream","content-type":"application/json"},body:JSON.stringify({query:e,config:{OPENAI_API_KEY:t.OPENAI_API_KEY,OPENAI_API_MODEL:t.OPENAI_API_MODEL}}),method:"POST",credentials:"include"})},aa=()=>{const{toBottom:t}=dN(),e=async g=>{Bi.value&&(Bi.value.steps=[...g]),t({behavior:"smooth"})},n=()=>{const g=Date.now()+Math.random()*1e3;return gn.value.has(g)?n():g},i=(g,E)=>{const f=[],S=n();return g||f.push({id:S,status:Xt.FINISH,contents:[{id:S,type:Dr.TEXT,value:{answer:E}}]}),km({user_id:0,role:"Product Manager",is_user_message:!g,chat_id:Um.value,steps:f,id:S,created_at:Ka().format("YYYY-MM-DD HH:mm:ss")})},o=(g,E)=>{if(!Ba.value.has(E))return;const f=gn.value.get(E);gn.value.delete(E),Ba.value.delete(E),gn.value.set(f.id,f)},s=(g,E)=>{var N;const f=gn.value.get(E);f.created_at=g.timestamp,((N=Qr.value)==null?void 0:N.id)===E&&(Qr.value.created_at=g.timestamp);const S=g;if(console.log(S),!(S!=null&&S.chat_id))return;const C=f.parent;gn.value.delete(E),Ba.value.delete(E);const h=C.children.findIndex(y=>y.id===E);C.children.splice(h,1);const T={...S,children:[],parent:C};C.children.push(T),gn.value.set(T.id,T),ei.value=T,Bi.value=void 0,ys.clearQueue()},l=(g,E)=>{let f=null;const S=i(!0);return Ou.value?f=gn.value.get(E):(f=i(!1,g),XS(gn.value.get(E),f),gn.value.set(f.id,f),Ba.value.add(f.id)),XS(f,S),ei.value=S,gn.value.set(S.id,S),Ba.value.add(S.id),{userMessage:f,agentMessage:S}},c=async(g,E,f=!1)=>{if(!g||!Um)return;Ou.value=f,ys.registerCallback(e);const S=E!==void 0?E:Qr.value.id,{userMessage:C,agentMessage:h}=l(g,S);Bi.value=h,Fi.value=Ut.RUNNING;const T=N=>{if(N.name==="AbortError"){Fi.value=Ut.TERMINATE;return}Fi.value=Ut.FAILED};h2({query:g,OPENAI_API_KEY:JS.value,OPENAI_API_MODEL:jS.value}).then(N=>{const y=N.body.getReader();let x=!0,P="";y.read().then(function D({done:k,value:U}){if(k){Fi.value=Ut.FINISH;return}y.read().then(D).catch(T);let W=dM(U);const z=W.endsWith(` - -`),K=W.lastIndexOf(` - -`);if(z)W=P+W,P="";else if(K===-1){P+=W;return}else{const oe=W;W=P+W.slice(0,K),P=oe.slice(K)}W.split(` - -`).filter(oe=>oe).map(oe=>{try{return console.info("转码后",decodeURIComponent(oe)),JSON.parse(decodeURIComponent(oe))}catch(L){return console.info("转码失败",L),console.info("输出:",oe),""}}).filter(oe=>oe).forEach(oe=>{var 
re,G;const{step:L,role:J}=oe;if(L.role=J,!(oe!=null&&oe.qa_type)&&L&&ys.pushToQueue(L),o(oe,C.id),s(oe,h.id),x){if(L.status===Xt.FAILED)throw new Error(((G=(re=L.content)==null?void 0:re.value)==null?void 0:G.answer)||Km("未知错误"));x=!1}})}).catch(T)})};return{isRegen:Ou,isPreview:_N,globalStatus:Fi,chatTree:eb,chatTreeMap:gn,chatRenderPathList:Fm,lastLeafNode:Qr,activeTreeNode:ei,activeAgentNode:Bi,sendMessage:c,stopMessage:()=>{var g,E,f;(g=$s.value)==null||g.abort(),$s.value=null,(f=(E=Bi.value)==null?void 0:E.steps)==null||f.forEach(S=>{S.status===Xt.RUNNING&&(S.timestamp="",S.status=Xt.FINISH)})},regenMessage:async()=>{var S;let g=Qr.value;((S=Qr.value)==null?void 0:S.is_user_message)||(g=g.parent);const{contents:f}=g.steps[0];ys.clearQueue(),c(f[0].value.answer,g.id,!0)},genRootNode:async g=>{tb.value=!0;const E=km({id:0,is_user_message:!1,created_at:"",user_id:0,chat_id:0,steps:[{id:0,status:Xt.FINISH,contents:[{id:0,type:Dr.TEXT,value:{answer:g??""}}]}]});Fi.value=Ut.INIT;const{messageMap:f,root:S}=g2([],E);cN(S);const C=uN(S);eb.value=S,gn.value=f,ei.value=C,Fm.value=[],t({force:!0}),await Ks(),tb.value=!1},apiKey:JS,model:jS,shakeApiKeyInput:b2}},T2=()=>(Um.value=1,S2.value=0,_N.value=!1,aa());var Ar=(t=>(t.textToSpeech="text_to_speech",t.textToImage="text_to_image",t.aiCall="ai_call",t.dataAnalysis="data_analysis",t.crawler="crawler",t.knowledge="knowledge",t))(Ar||{});const v2={text_to_speech:"语音生成",text_to_image:"文生图",ai_call:"AI调用",data_analysis:"数据分析",crawler:"搜索",knowledge:"知识库"},C2={[Ar.textToSpeech]:()=>ue(GM,null,null),[Ar.textToImage]:()=>ue(bL,null,null),[Ar.aiCall]:()=>ue(ea,{iconId:"icon-ri-mental-health-line"},null),[Ar.dataAnalysis]:()=>ue(NL,null,null),[Ar.crawler]:()=>ue(xL,null,null),[Ar.knowledge]:()=>ue(_L,null,null)},R2=[Ar.knowledge],pN=[{job:"Product Manager",avatar:"role0",status:"Hired",tags:["PM"],color:["#8855F0","#D9CFF9"]},{job:"Project Manager",avatar:"role1",status:"Hired",tags:["PM"],color:["#E79400","#FCBED1"]},{job:"Architect",avatar:"role2",status:"Hired",tags:["CTO"],color:["#2E85F5","#BEE5FE"]},{job:"Engineer",avatar:"role3",status:"Hired",tags:["RD"],color:["#33C7BE","#C9F6EF"]},{job:"QA Engineer",avatar:"role4",status:"In recruitment",tags:["QA"],color:["#C9CDD4","#E5E6EB"],action:"I can build+"},{job:"UI Designer",avatar:"role5",status:"In recruitment",tags:["UI"],color:["#C9CDD4","#E5E6EB"],action:"I can build+"},{job:"Saler",avatar:"role6",status:"In recruitment",tags:["Saler"],color:["#C9CDD4","#E5E6EB"],action:"I can build+"}],N2=t=>(Zi("data-v-d4dce6b2"),t=t(),Ji(),t),O2={class:"roleListWrapper"},A2={class:"title"},y2=N2(()=>B("img",{src:Wq,alt:""},null,-1)),I2=["type"],D2={style:{width:"100%",padding:"0px 32px","box-sizing":"border-box"}},x2={class:"roleList"},w2={key:0,src:aN,alt:""},M2={key:1,src:oN,alt:""},L2={key:2,src:sN,alt:""},P2={key:3,src:lN,alt:""},k2={key:4,src:Kq,alt:""},U2={key:5,src:Qq,alt:""},F2={key:6,src:Xq,alt:""},B2={class:"infomation"},G2={class:"job"},Y2={class:"jobName"},q2={class:"jobStatus"},$2={class:"tags"},H2=be({__name:"roleList",setup(t){const e=ee(!1),n=()=>{e.value=!0},{apiKey:i,shakeApiKeyInput:o,model:s}=aa(),l=ee(),c=ee(!1),d=()=>{c.value=!0,Ks(()=>{var S;(S=l.value)==null||S.focus()})},_=ee(!1),p=()=>{_.value=!_.value},g=()=>{_.value=!0},E=()=>{_.value=!1},f=()=>{c.value=!1,E()};return(S,C)=>(V(),ae(st,null,[B("div",O2,[B("div",A2,[y2,B("span",null,Qe(S.$t("My Software 
Team")),1)]),B("div",{class:It({keyFill:!0,keyFilled:q(i),shake:q(o)})},[!q(i)&&!q(c)?(V(),ae("div",{key:0,class:"placeholder",onClick:d},Qe(S.$t("Please fill in your OpenAI API key to activate the hired software team.")),1)):Pn((V(),ae("input",{key:1,ref_key:"apiKeyInputRef",ref:l,"onUpdate:modelValue":C[0]||(C[0]=h=>wr(i)?i.value=h:null),type:q(_)?"text":"password",onFocus:g,onBlur:f},null,40,I2)),[[qw,q(i)]]),B("span",{class:"showPassword",onClick:p},[q(_)?(V(),ot(q(Xw),{key:0})):(V(),ot(q(Zw),{key:1}))])],2),ue(q(Jw),{modelValue:q(s),"onUpdate:modelValue":C[1]||(C[1]=h=>wr(s)?s.value=h:null),placeholder:"Please select a model",size:"large"},{default:dt(()=>[ue(q(Zc),{label:"gpt-4",value:"gpt-4"}),ue(q(Zc),{label:"gpt-3.5-turbo",value:"gpt-3.5-turbo"}),ue(q(Zc),{label:"gpt-3.5-turbo-16k",value:"gpt-3.5-turbo-16k"})]),_:1},8,["modelValue"]),B("div",D2,[ue(q(DC))]),ue(q(iN),{style:{flex:"1"}},{default:dt(()=>[B("div",x2,[(V(!0),ae(st,null,yn(q(pN),(h,T)=>(V(),ae("div",{key:T,class:"role"},[B("div",{class:"avatar",style:Bt({borderColor:` ${h.color[0]}`})},[B("div",{class:"innerPie",style:Bt({background:` ${h.color[1]}`})},null,4),T===0?(V(),ae("img",w2)):Ge("",!0),T===1?(V(),ae("img",M2)):Ge("",!0),T===2?(V(),ae("img",L2)):Ge("",!0),T===3?(V(),ae("img",P2)):Ge("",!0),T===4?(V(),ae("img",k2)):Ge("",!0),T===5?(V(),ae("img",U2)):Ge("",!0),T===6?(V(),ae("img",F2)):Ge("",!0),B("div",{class:It({rightPoint:!0,pointActive:!h.action})},null,2)],4),B("div",B2,[B("div",G2,[B("div",Y2,Qe(h.job),1),B("div",q2,Qe(h.status),1)]),B("div",$2,[(V(!0),ae(st,null,yn(h.tags,(N,y)=>(V(),ae("div",{key:y,class:"tagItem"},Qe(N),1))),128)),h.action?(V(),ae("div",{key:0,class:"action",onClick:n},"I can build+")):Ge("",!0)])])]))),128))])]),_:1})]),ue(m2,{visible:q(e),"onUpdate:visible":C[2]||(C[2]=h=>wr(e)?e.value=h:null)},null,8,["visible"])],64))}});const z2=Dt(H2,[["__scopeId","data-v-d4dce6b2"]]),V2="/static/assets/btn0-e612db37.png",W2="/static/assets/btn1-25da2f4c.png",K2="/static/assets/btn2-d21834a1.png",Q2="/static/assets/btn3-cf765453.png",X2="/static/assets/roundLogo-9e67acc4.svg",Z2=t=>(Zi("data-v-491f84be"),t=t(),Ji(),t),J2=Z2(()=>B("span",{class:"loading"},[B("span"),B("span"),B("span")],-1)),j2=[J2],e$=be({__name:"index",props:{color:{}},setup(t){const e=t,n=le(()=>({color:e.color}));return(i,o)=>(V(),ae("span",{class:"loading_wrap",style:Bt(q(n))},j2,4))}});const t$=Dt(e$,[["__scopeId","data-v-491f84be"]]),n$={class:"message_info"},r$={key:0,class:"avatar",src:X2,alt:""},i$={class:"item_info"},a$={class:"name"},o$={key:0,class:"responseSwitcher",size:[4,0]},s$={class:"time"},l$={class:"message_wrap"},c$=be({__name:"index",props:{renderNode:{},isRootNode:{type:Boolean}},setup(t){const e=t,n={[Xt.FINISH]:"#23C343",[Xt.RUNNING]:"transparent",[Xt.FAILED]:"#F53F3F",[Xt.TERMINATE]:"#23C343"},i=yt(e,"isRootNode"),o=yt(e,"renderNode"),s=yt(o.value,"activeNode"),{is_user_message:l,steps:c}=AC(s.value),{activeTreeNode:d,globalStatus:_}=aa(),p=le(()=>c.value.length?c.value[c.value.length-1].status:Xt.RUNNING),g=le(()=>{const T=l.value?Km("Me"):"MetaGPT",N=l.value?"/src/assets/role/me.svg":"/src/assets/heroPage/roundLogo.svg";return{name:T,avatarUrl:N}}),E=le(()=>({color:l.value?"transparent":n[p.value],backgroundColor:g.value.avatarUrl?"transparent":"#3370ff"})),f=le(()=>l.value||i.value?!1:c.value.length===0&&p.value===Xt.RUNNING&&_.value===Ut.RUNNING),S=le(()=>o.value.current),C=le(()=>o.value.renderPath.length),h=T=>{const 
N=S.value+T;N<1||N>C.value||(d.value=o.value.renderPath[N-1])};return(T,N)=>(V(),ae("div",n$,[q(l)?Ge("",!0):(V(),ae("img",r$)),B("section",{class:It({info_box:!0,right_pos:q(l)})},[B("section",i$,[ue(q(aq),{style:{"max-width":"250px"}},{default:dt(()=>[B("span",a$,Qe(q(g).name),1)]),_:1}),q(C)>1?(V(),ae("div",o$,[ue(q(jw),{class:It({disabled:q(S)===1}),onClick:N[0]||(N[0]=y=>h(-1))},null,8,["class"]),B("span",null,Qe(q(S))+" / "+Qe(q(C)),1),ue(q(eM),{class:It({disabled:q(S)===q(C)}),onClick:N[1]||(N[1]=y=>h(1))},null,8,["class"])])):Ge("",!0),q(f)?(V(),ot(t$,{key:1,color:"#165dff"})):Ge("",!0),B("span",s$,Qe(q(Ka)(q(s).created_at).format("YYYY-MM-DD HH:mm:ss")),1)]),B("section",l$,[oi(T.$slots,"content",{},void 0,!0)])],2),q(l)?(V(),ot(q(tM),{key:1,class:"avatar",style:Bt(q(E)),"image-url":q(g).avatarUrl,size:40},{default:dt(()=>[vt(Qe(q(g).name),1)]),_:1},8,["style","image-url"])):Ge("",!0)]))}});const u$=Dt(c$,[["__scopeId","data-v-de77c762"]]),d$={class:"message_container"},_$={class:"user_message"},p$={class:"msg_wrap"},m$={key:1,class:"message"},g$={key:0,class:"btn_group"},E$=be({__name:"index",props:{activeNode:{}},setup(t){const n=yt(t,"activeNode"),{sendMessage:i,globalStatus:o}=aa(),s=le(()=>{var g;const p=n.value.steps[0];return(g=p==null?void 0:p.contents)==null?void 0:g[0]}),l=le(()=>{var p,g;return((g=(p=s.value)==null?void 0:p.value)==null?void 0:g.answer)||""}),c=ee(l.value),d=ee(!1),_=()=>{i(c.value),d.value=!1};return(p,g)=>(V(),ae("div",d$,[B("section",_$,[B("div",p$,[q(d)?(V(),ot(q(nM),{key:0,modelValue:q(c),"onUpdate:modelValue":g[0]||(g[0]=E=>wr(c)?c.value=E:null),"auto-size":""},null,8,["modelValue"])):(V(),ae("span",m$,Qe(q(l)),1))]),q(d)?(V(),ae("div",g$,[ue(q(hm),{type:"outline",onClick:g[1]||(g[1]=Ls(E=>d.value=!1,["stop"]))},{default:dt(()=>[vt(Qe(p.$t("取消")),1)]),_:1}),ue(q(hm),{disabled:q(o)===q(Ut).RUNNING,type:"primary",onClick:Ls(_,["stop"])},{default:dt(()=>[vt(Qe(p.$t("保存并提交")),1)]),_:1},8,["disabled","onClick"])])):Ge("",!0)])]))}});const f$=Dt(E$,[["__scopeId","data-v-6f899d6f"]]),S$={class:"step_skill"},b$={class:"trigger"},h$={class:"link_group"},T$=be({__name:"skill",props:{skill:{},knowledgeBase:{},knowledgeLink:{}},setup(t){const e=t,{skill:n,knowledgeBase:i,knowledgeLink:o}=AC(e),s=l=>{console.log(l)};return(l,c)=>(V(),ae("div",S$,[B("span",b$,[ue(ea,{"icon-id":"icon-ri-check-double-line"}),vt(" "+Qe(l.$t(q(R2).includes(q(n))?"高级技能":"触发技能")),1)]),ue(q(rM),null,{default:dt(()=>[ue(q(xC),{align:"center",size:4},{default:dt(()=>[(V(),ot(ji(q(C2)[q(n)]))),vt(" "+Qe(q(v2)[q(n)]),1)]),_:1})]),_:1}),B("div",h$,[q(i)?(V(),ot(q(Gf),{key:0,type:"text"},{default:dt(()=>[vt(Qe(l.$t("知识库"))+" ",1),B("span",{onClick:c[0]||(c[0]=Ls(d=>s("base"),["stop"]))},"("+Qe(q(i).length)+")",1)]),_:1})):Ge("",!0),q(o)?(V(),ot(q(Gf),{key:1,type:"text"},{default:dt(()=>[vt(Qe(l.$t("知识链接"))+" ",1),B("span",{onClick:c[1]||(c[1]=Ls(d=>s("link"),["stop"]))},"("+Qe(q(o).length)+")",1)]),_:1})):Ge("",!0)])]))}});const v$=Dt(T$,[["__scopeId","data-v-17bf8a16"]]),C$={class:"step_item"},R$={class:"step_title_wrap"},N$={class:"title"},O$={key:0,class:"icon_loading"},A$={class:"description"},y$={class:"step_info"},I$={class:"step_content_wrap"},D$={class:"step_content"},x$=be({__name:"step",props:{description:{},status:{},title:{},skill:{}},setup(t){const e=t,n=le(()=>{const{status:i}=e;return i===Xt.FAILED?"error":i===Xt.RUNNING?"process":"finish"});return(i,o)=>(V(),ae("div",C$,[ue(q(iM),{class:"step",status:q(n)},{icon:dt(()=>[oi(i.$slots,"icon",{},void 
0,!0)]),default:dt(()=>[B("div",R$,[B("span",N$,Qe(e.title),1),e.status===q(Xt).RUNNING?(V(),ae("span",O$,[ue(ea,{class:"rotate",style:{color:"#165dff"},"icon-id":"icon-ri-loader-2-fill"})])):Ge("",!0),B("div",A$,Qe(e.description),1)])]),_:3},8,["status"]),B("section",y$,[B("section",I$,[ue(q(DC),{direction:"vertical",class:It(["divider",{active:e.status===q(Xt).RUNNING}])},null,8,["class"]),B("div",D$,[oi(i.$slots,"default",{},void 0,!0)])]),e.skill?(V(),ot(v$,{key:0,skill:e.skill},null,8,["skill"])):Ge("",!0)])]))}});const w$=Dt(x$,[["__scopeId","data-v-690b1166"]]);var Je={};const M$="Á",L$="á",P$="Ă",k$="ă",U$="∾",F$="∿",B$="∾̳",G$="Â",Y$="â",q$="´",$$="А",H$="а",z$="Æ",V$="æ",W$="⁡",K$="𝔄",Q$="𝔞",X$="À",Z$="à",J$="ℵ",j$="ℵ",eH="Α",tH="α",nH="Ā",rH="ā",iH="⨿",aH="&",oH="&",sH="⩕",lH="⩓",cH="∧",uH="⩜",dH="⩘",_H="⩚",pH="∠",mH="⦤",gH="∠",EH="⦨",fH="⦩",SH="⦪",bH="⦫",hH="⦬",TH="⦭",vH="⦮",CH="⦯",RH="∡",NH="∟",OH="⊾",AH="⦝",yH="∢",IH="Å",DH="⍼",xH="Ą",wH="ą",MH="𝔸",LH="𝕒",PH="⩯",kH="≈",UH="⩰",FH="≊",BH="≋",GH="'",YH="⁡",qH="≈",$H="≊",HH="Å",zH="å",VH="𝒜",WH="𝒶",KH="≔",QH="*",XH="≈",ZH="≍",JH="Ã",jH="ã",ez="Ä",tz="ä",nz="∳",rz="⨑",iz="≌",az="϶",oz="‵",sz="∽",lz="⋍",cz="∖",uz="⫧",dz="⊽",_z="⌅",pz="⌆",mz="⌅",gz="⎵",Ez="⎶",fz="≌",Sz="Б",bz="б",hz="„",Tz="∵",vz="∵",Cz="∵",Rz="⦰",Nz="϶",Oz="ℬ",Az="ℬ",yz="Β",Iz="β",Dz="ℶ",xz="≬",wz="𝔅",Mz="𝔟",Lz="⋂",Pz="◯",kz="⋃",Uz="⨀",Fz="⨁",Bz="⨂",Gz="⨆",Yz="★",qz="▽",$z="△",Hz="⨄",zz="⋁",Vz="⋀",Wz="⤍",Kz="⧫",Qz="▪",Xz="▴",Zz="▾",Jz="◂",jz="▸",eV="␣",tV="▒",nV="░",rV="▓",iV="█",aV="=⃥",oV="≡⃥",sV="⫭",lV="⌐",cV="𝔹",uV="𝕓",dV="⊥",_V="⊥",pV="⋈",mV="⧉",gV="┐",EV="╕",fV="╖",SV="╗",bV="┌",hV="╒",TV="╓",vV="╔",CV="─",RV="═",NV="┬",OV="╤",AV="╥",yV="╦",IV="┴",DV="╧",xV="╨",wV="╩",MV="⊟",LV="⊞",PV="⊠",kV="┘",UV="╛",FV="╜",BV="╝",GV="└",YV="╘",qV="╙",$V="╚",HV="│",zV="║",VV="┼",WV="╪",KV="╫",QV="╬",XV="┤",ZV="╡",JV="╢",jV="╣",eW="├",tW="╞",nW="╟",rW="╠",iW="‵",aW="˘",oW="˘",sW="¦",lW="𝒷",cW="ℬ",uW="⁏",dW="∽",_W="⋍",pW="⧅",mW="\\",gW="⟈",EW="•",fW="•",SW="≎",bW="⪮",hW="≏",TW="≎",vW="≏",CW="Ć",RW="ć",NW="⩄",OW="⩉",AW="⩋",yW="∩",IW="⋒",DW="⩇",xW="⩀",wW="ⅅ",MW="∩︀",LW="⁁",PW="ˇ",kW="ℭ",UW="⩍",FW="Č",BW="č",GW="Ç",YW="ç",qW="Ĉ",$W="ĉ",HW="∰",zW="⩌",VW="⩐",WW="Ċ",KW="ċ",QW="¸",XW="¸",ZW="⦲",JW="¢",jW="·",e3="·",t3="𝔠",n3="ℭ",r3="Ч",i3="ч",a3="✓",o3="✓",s3="Χ",l3="χ",c3="ˆ",u3="≗",d3="↺",_3="↻",p3="⊛",m3="⊚",g3="⊝",E3="⊙",f3="®",S3="Ⓢ",b3="⊖",h3="⊕",T3="⊗",v3="○",C3="⧃",R3="≗",N3="⨐",O3="⫯",A3="⧂",y3="∲",I3="”",D3="’",x3="♣",w3="♣",M3=":",L3="∷",P3="⩴",k3="≔",U3="≔",F3=",",B3="@",G3="∁",Y3="∘",q3="∁",$3="ℂ",H3="≅",z3="⩭",V3="≡",W3="∮",K3="∯",Q3="∮",X3="𝕔",Z3="ℂ",J3="∐",j3="∐",eK="©",tK="©",nK="℗",rK="∳",iK="↵",aK="✗",oK="⨯",sK="𝒞",lK="𝒸",cK="⫏",uK="⫑",dK="⫐",_K="⫒",pK="⋯",mK="⤸",gK="⤵",EK="⋞",fK="⋟",SK="↶",bK="⤽",hK="⩈",TK="⩆",vK="≍",CK="∪",RK="⋓",NK="⩊",OK="⊍",AK="⩅",yK="∪︀",IK="↷",DK="⤼",xK="⋞",wK="⋟",MK="⋎",LK="⋏",PK="¤",kK="↶",UK="↷",FK="⋎",BK="⋏",GK="∲",YK="∱",qK="⌭",$K="†",HK="‡",zK="ℸ",VK="↓",WK="↡",KK="⇓",QK="‐",XK="⫤",ZK="⊣",JK="⤏",jK="˝",eQ="Ď",tQ="ď",nQ="Д",rQ="д",iQ="‡",aQ="⇊",oQ="ⅅ",sQ="ⅆ",lQ="⤑",cQ="⩷",uQ="°",dQ="∇",_Q="Δ",pQ="δ",mQ="⦱",gQ="⥿",EQ="𝔇",fQ="𝔡",SQ="⥥",bQ="⇃",hQ="⇂",TQ="´",vQ="˙",CQ="˝",RQ="`",NQ="˜",OQ="⋄",AQ="⋄",yQ="⋄",IQ="♦",DQ="♦",xQ="¨",wQ="ⅆ",MQ="ϝ",LQ="⋲",PQ="÷",kQ="÷",UQ="⋇",FQ="⋇",BQ="Ђ",GQ="ђ",YQ="⌞",qQ="⌍",$Q="$",HQ="𝔻",zQ="𝕕",VQ="¨",WQ="˙",KQ="⃜",QQ="≐",XQ="≑",ZQ="≐",JQ="∸",jQ="∔",e4="⊡",t4="⌆",n4="∯",r4="¨",i4="⇓",a4="⇐",o4="⇔",s4="⫤",l4="⟸",c4="⟺",u4="⟹",d4="⇒",_4="⊨",p4="⇑",m4="⇕",g4="∥",E4="⤓",f4="↓",S4="↓",b4="⇓",h4="⇵",T4="̑",v4="⇊",C4="
⇃",R4="⇂",N4="⥐",O4="⥞",A4="⥖",y4="↽",I4="⥟",D4="⥗",x4="⇁",w4="↧",M4="⊤",L4="⤐",P4="⌟",k4="⌌",U4="𝒟",F4="𝒹",B4="Ѕ",G4="ѕ",Y4="⧶",q4="Đ",$4="đ",H4="⋱",z4="▿",V4="▾",W4="⇵",K4="⥯",Q4="⦦",X4="Џ",Z4="џ",J4="⟿",j4="É",e5="é",t5="⩮",n5="Ě",r5="ě",i5="Ê",a5="ê",o5="≖",s5="≕",l5="Э",c5="э",u5="⩷",d5="Ė",_5="ė",p5="≑",m5="ⅇ",g5="≒",E5="𝔈",f5="𝔢",S5="⪚",b5="È",h5="è",T5="⪖",v5="⪘",C5="⪙",R5="∈",N5="⏧",O5="ℓ",A5="⪕",y5="⪗",I5="Ē",D5="ē",x5="∅",w5="∅",M5="◻",L5="∅",P5="▫",k5=" ",U5=" ",F5=" ",B5="Ŋ",G5="ŋ",Y5=" ",q5="Ę",$5="ę",H5="𝔼",z5="𝕖",V5="⋕",W5="⧣",K5="⩱",Q5="ε",X5="Ε",Z5="ε",J5="ϵ",j5="≖",e6="≕",t6="≂",n6="⪖",r6="⪕",i6="⩵",a6="=",o6="≂",s6="≟",l6="⇌",c6="≡",u6="⩸",d6="⧥",_6="⥱",p6="≓",m6="ℯ",g6="ℰ",E6="≐",f6="⩳",S6="≂",b6="Η",h6="η",T6="Ð",v6="ð",C6="Ë",R6="ë",N6="€",O6="!",A6="∃",y6="∃",I6="ℰ",D6="ⅇ",x6="ⅇ",w6="≒",M6="Ф",L6="ф",P6="♀",k6="ffi",U6="ff",F6="ffl",B6="𝔉",G6="𝔣",Y6="fi",q6="◼",$6="▪",H6="fj",z6="♭",V6="fl",W6="▱",K6="ƒ",Q6="𝔽",X6="𝕗",Z6="∀",J6="∀",j6="⋔",e9="⫙",t9="ℱ",n9="⨍",r9="½",i9="⅓",a9="¼",o9="⅕",s9="⅙",l9="⅛",c9="⅔",u9="⅖",d9="¾",_9="⅗",p9="⅜",m9="⅘",g9="⅚",E9="⅝",f9="⅞",S9="⁄",b9="⌢",h9="𝒻",T9="ℱ",v9="ǵ",C9="Γ",R9="γ",N9="Ϝ",O9="ϝ",A9="⪆",y9="Ğ",I9="ğ",D9="Ģ",x9="Ĝ",w9="ĝ",M9="Г",L9="г",P9="Ġ",k9="ġ",U9="≥",F9="≧",B9="⪌",G9="⋛",Y9="≥",q9="≧",$9="⩾",H9="⪩",z9="⩾",V9="⪀",W9="⪂",K9="⪄",Q9="⋛︀",X9="⪔",Z9="𝔊",J9="𝔤",j9="≫",e8="⋙",t8="⋙",n8="ℷ",r8="Ѓ",i8="ѓ",a8="⪥",o8="≷",s8="⪒",l8="⪤",c8="⪊",u8="⪊",d8="⪈",_8="≩",p8="⪈",m8="≩",g8="⋧",E8="𝔾",f8="𝕘",S8="`",b8="≥",h8="⋛",T8="≧",v8="⪢",C8="≷",R8="⩾",N8="≳",O8="𝒢",A8="ℊ",y8="≳",I8="⪎",D8="⪐",x8="⪧",w8="⩺",M8=">",L8=">",P8="≫",k8="⋗",U8="⦕",F8="⩼",B8="⪆",G8="⥸",Y8="⋗",q8="⋛",$8="⪌",H8="≷",z8="≳",V8="≩︀",W8="≩︀",K8="ˇ",Q8=" ",X8="½",Z8="ℋ",J8="Ъ",j8="ъ",e7="⥈",t7="↔",n7="⇔",r7="↭",i7="^",a7="ℏ",o7="Ĥ",s7="ĥ",l7="♥",c7="♥",u7="…",d7="⊹",_7="𝔥",p7="ℌ",m7="ℋ",g7="⤥",E7="⤦",f7="⇿",S7="∻",b7="↩",h7="↪",T7="𝕙",v7="ℍ",C7="―",R7="─",N7="𝒽",O7="ℋ",A7="ℏ",y7="Ħ",I7="ħ",D7="≎",x7="≏",w7="⁃",M7="‐",L7="Í",P7="í",k7="⁣",U7="Î",F7="î",B7="И",G7="и",Y7="İ",q7="Е",$7="е",H7="¡",z7="⇔",V7="𝔦",W7="ℑ",K7="Ì",Q7="ì",X7="ⅈ",Z7="⨌",J7="∭",j7="⧜",eX="℩",tX="IJ",nX="ij",rX="Ī",iX="ī",aX="ℑ",oX="ⅈ",sX="ℐ",lX="ℑ",cX="ı",uX="ℑ",dX="⊷",_X="Ƶ",pX="⇒",mX="℅",gX="∞",EX="⧝",fX="ı",SX="⊺",bX="∫",hX="∬",TX="ℤ",vX="∫",CX="⊺",RX="⋂",NX="⨗",OX="⨼",AX="⁣",yX="⁢",IX="Ё",DX="ё",xX="Į",wX="į",MX="𝕀",LX="𝕚",PX="Ι",kX="ι",UX="⨼",FX="¿",BX="𝒾",GX="ℐ",YX="∈",qX="⋵",$X="⋹",HX="⋴",zX="⋳",VX="∈",WX="⁢",KX="Ĩ",QX="ĩ",XX="І",ZX="і",JX="Ï",jX="ï",eZ="Ĵ",tZ="ĵ",nZ="Й",rZ="й",iZ="𝔍",aZ="𝔧",oZ="ȷ",sZ="𝕁",lZ="𝕛",cZ="𝒥",uZ="𝒿",dZ="Ј",_Z="ј",pZ="Є",mZ="є",gZ="Κ",EZ="κ",fZ="ϰ",SZ="Ķ",bZ="ķ",hZ="К",TZ="к",vZ="𝔎",CZ="𝔨",RZ="ĸ",NZ="Х",OZ="х",AZ="Ќ",yZ="ќ",IZ="𝕂",DZ="𝕜",xZ="𝒦",wZ="𝓀",MZ="⇚",LZ="Ĺ",PZ="ĺ",kZ="⦴",UZ="ℒ",FZ="Λ",BZ="λ",GZ="⟨",YZ="⟪",qZ="⦑",$Z="⟨",HZ="⪅",zZ="ℒ",VZ="«",WZ="⇤",KZ="⤟",QZ="←",XZ="↞",ZZ="⇐",JZ="⤝",jZ="↩",eJ="↫",tJ="⤹",nJ="⥳",rJ="↢",iJ="⤙",aJ="⤛",oJ="⪫",sJ="⪭",lJ="⪭︀",cJ="⤌",uJ="⤎",dJ="❲",_J="{",pJ="[",mJ="⦋",gJ="⦏",EJ="⦍",fJ="Ľ",SJ="ľ",bJ="Ļ",hJ="ļ",TJ="⌈",vJ="{",CJ="Л",RJ="л",NJ="⤶",OJ="“",AJ="„",yJ="⥧",IJ="⥋",DJ="↲",xJ="≤",wJ="≦",MJ="⟨",LJ="⇤",PJ="←",kJ="←",UJ="⇐",FJ="⇆",BJ="↢",GJ="⌈",YJ="⟦",qJ="⥡",$J="⥙",HJ="⇃",zJ="⌊",VJ="↽",WJ="↼",KJ="⇇",QJ="↔",XJ="↔",ZJ="⇔",JJ="⇆",jJ="⇋",ej="↭",tj="⥎",nj="↤",rj="⊣",ij="⥚",aj="⋋",oj="⧏",sj="⊲",lj="⊴",cj="⥑",uj="⥠",dj="⥘",_j="↿",pj="⥒",mj="↼",gj="⪋",Ej="⋚",fj="≤",Sj="≦",bj="⩽",hj="⪨",Tj="⩽",vj="⩿",Cj="⪁",Rj="⪃",Nj="⋚︀",Oj="⪓",Aj="⪅",yj="⋖",Ij="⋚",Dj="⪋",xj="⋚",wj="≦",Mj="≶",Lj="≶",Pj="⪡",kj="≲",Uj="⩽",Fj="≲",Bj="⥼",Gj="⌊",Yj="𝔏",qj="𝔩",$j
="≶",Hj="⪑",zj="⥢",Vj="↽",Wj="↼",Kj="⥪",Qj="▄",Xj="Љ",Zj="љ",Jj="⇇",jj="≪",eee="⋘",tee="⌞",nee="⇚",ree="⥫",iee="◺",aee="Ŀ",oee="ŀ",see="⎰",lee="⎰",cee="⪉",uee="⪉",dee="⪇",_ee="≨",pee="⪇",mee="≨",gee="⋦",Eee="⟬",fee="⇽",See="⟦",bee="⟵",hee="⟵",Tee="⟸",vee="⟷",Cee="⟷",Ree="⟺",Nee="⟼",Oee="⟶",Aee="⟶",yee="⟹",Iee="↫",Dee="↬",xee="⦅",wee="𝕃",Mee="𝕝",Lee="⨭",Pee="⨴",kee="∗",Uee="_",Fee="↙",Bee="↘",Gee="◊",Yee="◊",qee="⧫",$ee="(",Hee="⦓",zee="⇆",Vee="⌟",Wee="⇋",Kee="⥭",Qee="‎",Xee="⊿",Zee="‹",Jee="𝓁",jee="ℒ",ete="↰",tte="↰",nte="≲",rte="⪍",ite="⪏",ate="[",ote="‘",ste="‚",lte="Ł",cte="ł",ute="⪦",dte="⩹",_te="<",pte="<",mte="≪",gte="⋖",Ete="⋋",fte="⋉",Ste="⥶",bte="⩻",hte="◃",Tte="⊴",vte="◂",Cte="⦖",Rte="⥊",Nte="⥦",Ote="≨︀",Ate="≨︀",yte="¯",Ite="♂",Dte="✠",xte="✠",wte="↦",Mte="↦",Lte="↧",Pte="↤",kte="↥",Ute="▮",Fte="⨩",Bte="М",Gte="м",Yte="—",qte="∺",$te="∡",Hte=" ",zte="ℳ",Vte="𝔐",Wte="𝔪",Kte="℧",Qte="µ",Xte="*",Zte="⫰",Jte="∣",jte="·",ene="⊟",tne="−",nne="∸",rne="⨪",ine="∓",ane="⫛",one="…",sne="∓",lne="⊧",cne="𝕄",une="𝕞",dne="∓",_ne="𝓂",pne="ℳ",mne="∾",gne="Μ",Ene="μ",fne="⊸",Sne="⊸",bne="∇",hne="Ń",Tne="ń",vne="∠⃒",Cne="≉",Rne="⩰̸",Nne="≋̸",One="ʼn",Ane="≉",yne="♮",Ine="ℕ",Dne="♮",xne=" ",wne="≎̸",Mne="≏̸",Lne="⩃",Pne="Ň",kne="ň",Une="Ņ",Fne="ņ",Bne="≇",Gne="⩭̸",Yne="⩂",qne="Н",$ne="н",Hne="–",zne="⤤",Vne="↗",Wne="⇗",Kne="↗",Qne="≠",Xne="≐̸",Zne="​",Jne="​",jne="​",ere="​",tre="≢",nre="⤨",rre="≂̸",ire="≫",are="≪",ore=` -`,sre="∄",lre="∄",cre="𝔑",ure="𝔫",dre="≧̸",_re="≱",pre="≱",mre="≧̸",gre="⩾̸",Ere="⩾̸",fre="⋙̸",Sre="≵",bre="≫⃒",hre="≯",Tre="≯",vre="≫̸",Cre="↮",Rre="⇎",Nre="⫲",Ore="∋",Are="⋼",yre="⋺",Ire="∋",Dre="Њ",xre="њ",wre="↚",Mre="⇍",Lre="‥",Pre="≦̸",kre="≰",Ure="↚",Fre="⇍",Bre="↮",Gre="⇎",Yre="≰",qre="≦̸",$re="⩽̸",Hre="⩽̸",zre="≮",Vre="⋘̸",Wre="≴",Kre="≪⃒",Qre="≮",Xre="⋪",Zre="⋬",Jre="≪̸",jre="∤",eie="⁠",tie=" ",nie="𝕟",rie="ℕ",iie="⫬",aie="¬",oie="≢",sie="≭",lie="∦",cie="∉",uie="≠",die="≂̸",_ie="∄",pie="≯",mie="≱",gie="≧̸",Eie="≫̸",fie="≹",Sie="⩾̸",bie="≵",hie="≎̸",Tie="≏̸",vie="∉",Cie="⋵̸",Rie="⋹̸",Nie="∉",Oie="⋷",Aie="⋶",yie="⧏̸",Iie="⋪",Die="⋬",xie="≮",wie="≰",Mie="≸",Lie="≪̸",Pie="⩽̸",kie="≴",Uie="⪢̸",Fie="⪡̸",Bie="∌",Gie="∌",Yie="⋾",qie="⋽",$ie="⊀",Hie="⪯̸",zie="⋠",Vie="∌",Wie="⧐̸",Kie="⋫",Qie="⋭",Xie="⊏̸",Zie="⋢",Jie="⊐̸",jie="⋣",eae="⊂⃒",tae="⊈",nae="⊁",rae="⪰̸",iae="⋡",aae="≿̸",oae="⊃⃒",sae="⊉",lae="≁",cae="≄",uae="≇",dae="≉",_ae="∤",pae="∦",mae="∦",gae="⫽⃥",Eae="∂̸",fae="⨔",Sae="⊀",bae="⋠",hae="⊀",Tae="⪯̸",vae="⪯̸",Cae="⤳̸",Rae="↛",Nae="⇏",Oae="↝̸",Aae="↛",yae="⇏",Iae="⋫",Dae="⋭",xae="⊁",wae="⋡",Mae="⪰̸",Lae="𝒩",Pae="𝓃",kae="∤",Uae="∦",Fae="≁",Bae="≄",Gae="≄",Yae="∤",qae="∦",$ae="⋢",Hae="⋣",zae="⊄",Vae="⫅̸",Wae="⊈",Kae="⊂⃒",Qae="⊈",Xae="⫅̸",Zae="⊁",Jae="⪰̸",jae="⊅",eoe="⫆̸",toe="⊉",noe="⊃⃒",roe="⊉",ioe="⫆̸",aoe="≹",ooe="Ñ",soe="ñ",loe="≸",coe="⋪",uoe="⋬",doe="⋫",_oe="⋭",poe="Ν",moe="ν",goe="#",Eoe="№",foe=" 
",Soe="≍⃒",boe="⊬",hoe="⊭",Toe="⊮",voe="⊯",Coe="≥⃒",Roe=">⃒",Noe="⤄",Ooe="⧞",Aoe="⤂",yoe="≤⃒",Ioe="<⃒",Doe="⊴⃒",xoe="⤃",woe="⊵⃒",Moe="∼⃒",Loe="⤣",Poe="↖",koe="⇖",Uoe="↖",Foe="⤧",Boe="Ó",Goe="ó",Yoe="⊛",qoe="Ô",$oe="ô",Hoe="⊚",zoe="О",Voe="о",Woe="⊝",Koe="Ő",Qoe="ő",Xoe="⨸",Zoe="⊙",Joe="⦼",joe="Œ",ese="œ",tse="⦿",nse="𝔒",rse="𝔬",ise="˛",ase="Ò",ose="ò",sse="⧁",lse="⦵",cse="Ω",use="∮",dse="↺",_se="⦾",pse="⦻",mse="‾",gse="⧀",Ese="Ō",fse="ō",Sse="Ω",bse="ω",hse="Ο",Tse="ο",vse="⦶",Cse="⊖",Rse="𝕆",Nse="𝕠",Ose="⦷",Ase="“",yse="‘",Ise="⦹",Dse="⊕",xse="↻",wse="⩔",Mse="∨",Lse="⩝",Pse="ℴ",kse="ℴ",Use="ª",Fse="º",Bse="⊶",Gse="⩖",Yse="⩗",qse="⩛",$se="Ⓢ",Hse="𝒪",zse="ℴ",Vse="Ø",Wse="ø",Kse="⊘",Qse="Õ",Xse="õ",Zse="⨶",Jse="⨷",jse="⊗",ele="Ö",tle="ö",nle="⌽",rle="‾",ile="⏞",ale="⎴",ole="⏜",sle="¶",lle="∥",cle="∥",ule="⫳",dle="⫽",_le="∂",ple="∂",mle="П",gle="п",Ele="%",fle=".",Sle="‰",ble="⊥",hle="‱",Tle="𝔓",vle="𝔭",Cle="Φ",Rle="φ",Nle="ϕ",Ole="ℳ",Ale="☎",yle="Π",Ile="π",Dle="⋔",xle="ϖ",wle="ℏ",Mle="ℎ",Lle="ℏ",Ple="⨣",kle="⊞",Ule="⨢",Fle="+",Ble="∔",Gle="⨥",Yle="⩲",qle="±",$le="±",Hle="⨦",zle="⨧",Vle="±",Wle="ℌ",Kle="⨕",Qle="𝕡",Xle="ℙ",Zle="£",Jle="⪷",jle="⪻",ece="≺",tce="≼",nce="⪷",rce="≺",ice="≼",ace="≺",oce="⪯",sce="≼",lce="≾",cce="⪯",uce="⪹",dce="⪵",_ce="⋨",pce="⪯",mce="⪳",gce="≾",Ece="′",fce="″",Sce="ℙ",bce="⪹",hce="⪵",Tce="⋨",vce="∏",Cce="∏",Rce="⌮",Nce="⌒",Oce="⌓",Ace="∝",yce="∝",Ice="∷",Dce="∝",xce="≾",wce="⊰",Mce="𝒫",Lce="𝓅",Pce="Ψ",kce="ψ",Uce=" ",Fce="𝔔",Bce="𝔮",Gce="⨌",Yce="𝕢",qce="ℚ",$ce="⁗",Hce="𝒬",zce="𝓆",Vce="ℍ",Wce="⨖",Kce="?",Qce="≟",Xce='"',Zce='"',Jce="⇛",jce="∽̱",eue="Ŕ",tue="ŕ",nue="√",rue="⦳",iue="⟩",aue="⟫",oue="⦒",sue="⦥",lue="⟩",cue="»",uue="⥵",due="⇥",_ue="⤠",pue="⤳",mue="→",gue="↠",Eue="⇒",fue="⤞",Sue="↪",bue="↬",hue="⥅",Tue="⥴",vue="⤖",Cue="↣",Rue="↝",Nue="⤚",Oue="⤜",Aue="∶",yue="ℚ",Iue="⤍",Due="⤏",xue="⤐",wue="❳",Mue="}",Lue="]",Pue="⦌",kue="⦎",Uue="⦐",Fue="Ř",Bue="ř",Gue="Ŗ",Yue="ŗ",que="⌉",$ue="}",Hue="Р",zue="р",Vue="⤷",Wue="⥩",Kue="”",Que="”",Xue="↳",Zue="ℜ",Jue="ℛ",jue="ℜ",ede="ℝ",tde="ℜ",nde="▭",rde="®",ide="®",ade="∋",ode="⇋",sde="⥯",lde="⥽",cde="⌋",ude="𝔯",dde="ℜ",_de="⥤",pde="⇁",mde="⇀",gde="⥬",Ede="Ρ",fde="ρ",Sde="ϱ",bde="⟩",hde="⇥",Tde="→",vde="→",Cde="⇒",Rde="⇄",Nde="↣",Ode="⌉",Ade="⟧",yde="⥝",Ide="⥕",Dde="⇂",xde="⌋",wde="⇁",Mde="⇀",Lde="⇄",Pde="⇌",kde="⇉",Ude="↝",Fde="↦",Bde="⊢",Gde="⥛",Yde="⋌",qde="⧐",$de="⊳",Hde="⊵",zde="⥏",Vde="⥜",Wde="⥔",Kde="↾",Qde="⥓",Xde="⇀",Zde="˚",Jde="≓",jde="⇄",e_e="⇌",t_e="‏",n_e="⎱",r_e="⎱",i_e="⫮",a_e="⟭",o_e="⇾",s_e="⟧",l_e="⦆",c_e="𝕣",u_e="ℝ",d_e="⨮",__e="⨵",p_e="⥰",m_e=")",g_e="⦔",E_e="⨒",f_e="⇉",S_e="⇛",b_e="›",h_e="𝓇",T_e="ℛ",v_e="↱",C_e="↱",R_e="]",N_e="’",O_e="’",A_e="⋌",y_e="⋊",I_e="▹",D_e="⊵",x_e="▸",w_e="⧎",M_e="⧴",L_e="⥨",P_e="℞",k_e="Ś",U_e="ś",F_e="‚",B_e="⪸",G_e="Š",Y_e="š",q_e="⪼",$_e="≻",H_e="≽",z_e="⪰",V_e="⪴",W_e="Ş",K_e="ş",Q_e="Ŝ",X_e="ŝ",Z_e="⪺",J_e="⪶",j_e="⋩",epe="⨓",tpe="≿",npe="С",rpe="с",ipe="⊡",ape="⋅",ope="⩦",spe="⤥",lpe="↘",cpe="⇘",upe="↘",dpe="§",_pe=";",ppe="⤩",mpe="∖",gpe="∖",Epe="✶",fpe="𝔖",Spe="𝔰",bpe="⌢",hpe="♯",Tpe="Щ",vpe="щ",Cpe="Ш",Rpe="ш",Npe="↓",Ope="←",Ape="∣",ype="∥",Ipe="→",Dpe="↑",xpe="­",wpe="Σ",Mpe="σ",Lpe="ς",Ppe="ς",kpe="∼",Upe="⩪",Fpe="≃",Bpe="≃",Gpe="⪞",Ype="⪠",qpe="⪝",$pe="⪟",Hpe="≆",zpe="⨤",Vpe="⥲",Wpe="←",Kpe="∘",Qpe="∖",Xpe="⨳",Zpe="⧤",Jpe="∣",jpe="⌣",eme="⪪",tme="⪬",nme="⪬︀",rme="Ь",ime="ь",ame="⌿",ome="⧄",sme="/",lme="𝕊",cme="𝕤",ume="♠",dme="♠",_me="∥",pme="⊓",mme="⊓︀",gme="⊔",Eme="⊔︀",fme="√",Sme="⊏",bme="⊑",hme="⊏",Tme="⊑",vme="⊐",Cme="⊒",Rme="⊐",Nme="⊒",Ome="□",Ame="□",yme="
⊓",Ime="⊏",Dme="⊑",xme="⊐",wme="⊒",Mme="⊔",Lme="▪",Pme="□",kme="▪",Ume="→",Fme="𝒮",Bme="𝓈",Gme="∖",Yme="⌣",qme="⋆",$me="⋆",Hme="☆",zme="★",Vme="ϵ",Wme="ϕ",Kme="¯",Qme="⊂",Xme="⋐",Zme="⪽",Jme="⫅",jme="⊆",ege="⫃",tge="⫁",nge="⫋",rge="⊊",ige="⪿",age="⥹",oge="⊂",sge="⋐",lge="⊆",cge="⫅",uge="⊆",dge="⊊",_ge="⫋",pge="⫇",mge="⫕",gge="⫓",Ege="⪸",fge="≻",Sge="≽",bge="≻",hge="⪰",Tge="≽",vge="≿",Cge="⪰",Rge="⪺",Nge="⪶",Oge="⋩",Age="≿",yge="∋",Ige="∑",Dge="∑",xge="♪",wge="¹",Mge="²",Lge="³",Pge="⊃",kge="⋑",Uge="⪾",Fge="⫘",Bge="⫆",Gge="⊇",Yge="⫄",qge="⊃",$ge="⊇",Hge="⟉",zge="⫗",Vge="⥻",Wge="⫂",Kge="⫌",Qge="⊋",Xge="⫀",Zge="⊃",Jge="⋑",jge="⊇",eEe="⫆",tEe="⊋",nEe="⫌",rEe="⫈",iEe="⫔",aEe="⫖",oEe="⤦",sEe="↙",lEe="⇙",cEe="↙",uEe="⤪",dEe="ß",_Ee=" ",pEe="⌖",mEe="Τ",gEe="τ",EEe="⎴",fEe="Ť",SEe="ť",bEe="Ţ",hEe="ţ",TEe="Т",vEe="т",CEe="⃛",REe="⌕",NEe="𝔗",OEe="𝔱",AEe="∴",yEe="∴",IEe="∴",DEe="Θ",xEe="θ",wEe="ϑ",MEe="ϑ",LEe="≈",PEe="∼",kEe="  ",UEe=" ",FEe=" ",BEe="≈",GEe="∼",YEe="Þ",qEe="þ",$Ee="˜",HEe="∼",zEe="≃",VEe="≅",WEe="≈",KEe="⨱",QEe="⊠",XEe="×",ZEe="⨰",JEe="∭",jEe="⤨",efe="⌶",tfe="⫱",nfe="⊤",rfe="𝕋",ife="𝕥",afe="⫚",ofe="⤩",sfe="‴",lfe="™",cfe="™",ufe="▵",dfe="▿",_fe="◃",pfe="⊴",mfe="≜",gfe="▹",Efe="⊵",ffe="◬",Sfe="≜",bfe="⨺",hfe="⃛",Tfe="⨹",vfe="⧍",Cfe="⨻",Rfe="⏢",Nfe="𝒯",Ofe="𝓉",Afe="Ц",yfe="ц",Ife="Ћ",Dfe="ћ",xfe="Ŧ",wfe="ŧ",Mfe="≬",Lfe="↞",Pfe="↠",kfe="Ú",Ufe="ú",Ffe="↑",Bfe="↟",Gfe="⇑",Yfe="⥉",qfe="Ў",$fe="ў",Hfe="Ŭ",zfe="ŭ",Vfe="Û",Wfe="û",Kfe="У",Qfe="у",Xfe="⇅",Zfe="Ű",Jfe="ű",jfe="⥮",eSe="⥾",tSe="𝔘",nSe="𝔲",rSe="Ù",iSe="ù",aSe="⥣",oSe="↿",sSe="↾",lSe="▀",cSe="⌜",uSe="⌜",dSe="⌏",_Se="◸",pSe="Ū",mSe="ū",gSe="¨",ESe="_",fSe="⏟",SSe="⎵",bSe="⏝",hSe="⋃",TSe="⊎",vSe="Ų",CSe="ų",RSe="𝕌",NSe="𝕦",OSe="⤒",ASe="↑",ySe="↑",ISe="⇑",DSe="⇅",xSe="↕",wSe="↕",MSe="⇕",LSe="⥮",PSe="↿",kSe="↾",USe="⊎",FSe="↖",BSe="↗",GSe="υ",YSe="ϒ",qSe="ϒ",$Se="Υ",HSe="υ",zSe="↥",VSe="⊥",WSe="⇈",KSe="⌝",QSe="⌝",XSe="⌎",ZSe="Ů",JSe="ů",jSe="◹",ebe="𝒰",tbe="𝓊",nbe="⋰",rbe="Ũ",ibe="ũ",abe="▵",obe="▴",sbe="⇈",lbe="Ü",cbe="ü",ube="⦧",dbe="⦜",_be="ϵ",pbe="ϰ",mbe="∅",gbe="ϕ",Ebe="ϖ",fbe="∝",Sbe="↕",bbe="⇕",hbe="ϱ",Tbe="ς",vbe="⊊︀",Cbe="⫋︀",Rbe="⊋︀",Nbe="⫌︀",Obe="ϑ",Abe="⊲",ybe="⊳",Ibe="⫨",Dbe="⫫",xbe="⫩",wbe="В",Mbe="в",Lbe="⊢",Pbe="⊨",kbe="⊩",Ube="⊫",Fbe="⫦",Bbe="⊻",Gbe="∨",Ybe="⋁",qbe="≚",$be="⋮",Hbe="|",zbe="‖",Vbe="|",Wbe="‖",Kbe="∣",Qbe="|",Xbe="❘",Zbe="≀",Jbe=" 
",jbe="𝔙",ehe="𝔳",the="⊲",nhe="⊂⃒",rhe="⊃⃒",ihe="𝕍",ahe="𝕧",ohe="∝",she="⊳",lhe="𝒱",che="𝓋",uhe="⫋︀",dhe="⊊︀",_he="⫌︀",phe="⊋︀",mhe="⊪",ghe="⦚",Ehe="Ŵ",fhe="ŵ",She="⩟",bhe="∧",hhe="⋀",The="≙",vhe="℘",Che="𝔚",Rhe="𝔴",Nhe="𝕎",Ohe="𝕨",Ahe="℘",yhe="≀",Ihe="≀",Dhe="𝒲",xhe="𝓌",whe="⋂",Mhe="◯",Lhe="⋃",Phe="▽",khe="𝔛",Uhe="𝔵",Fhe="⟷",Bhe="⟺",Ghe="Ξ",Yhe="ξ",qhe="⟵",$he="⟸",Hhe="⟼",zhe="⋻",Vhe="⨀",Whe="𝕏",Khe="𝕩",Qhe="⨁",Xhe="⨂",Zhe="⟶",Jhe="⟹",jhe="𝒳",eTe="𝓍",tTe="⨆",nTe="⨄",rTe="△",iTe="⋁",aTe="⋀",oTe="Ý",sTe="ý",lTe="Я",cTe="я",uTe="Ŷ",dTe="ŷ",_Te="Ы",pTe="ы",mTe="¥",gTe="𝔜",ETe="𝔶",fTe="Ї",STe="ї",bTe="𝕐",hTe="𝕪",TTe="𝒴",vTe="𝓎",CTe="Ю",RTe="ю",NTe="ÿ",OTe="Ÿ",ATe="Ź",yTe="ź",ITe="Ž",DTe="ž",xTe="З",wTe="з",MTe="Ż",LTe="ż",PTe="ℨ",kTe="​",UTe="Ζ",FTe="ζ",BTe="𝔷",GTe="ℨ",YTe="Ж",qTe="ж",$Te="⇝",HTe="𝕫",zTe="ℤ",VTe="𝒵",WTe="𝓏",KTe="‍",QTe="‌",XTe={Aacute:M$,aacute:L$,Abreve:P$,abreve:k$,ac:U$,acd:F$,acE:B$,Acirc:G$,acirc:Y$,acute:q$,Acy:$$,acy:H$,AElig:z$,aelig:V$,af:W$,Afr:K$,afr:Q$,Agrave:X$,agrave:Z$,alefsym:J$,aleph:j$,Alpha:eH,alpha:tH,Amacr:nH,amacr:rH,amalg:iH,amp:aH,AMP:oH,andand:sH,And:lH,and:cH,andd:uH,andslope:dH,andv:_H,ang:pH,ange:mH,angle:gH,angmsdaa:EH,angmsdab:fH,angmsdac:SH,angmsdad:bH,angmsdae:hH,angmsdaf:TH,angmsdag:vH,angmsdah:CH,angmsd:RH,angrt:NH,angrtvb:OH,angrtvbd:AH,angsph:yH,angst:IH,angzarr:DH,Aogon:xH,aogon:wH,Aopf:MH,aopf:LH,apacir:PH,ap:kH,apE:UH,ape:FH,apid:BH,apos:GH,ApplyFunction:YH,approx:qH,approxeq:$H,Aring:HH,aring:zH,Ascr:VH,ascr:WH,Assign:KH,ast:QH,asymp:XH,asympeq:ZH,Atilde:JH,atilde:jH,Auml:ez,auml:tz,awconint:nz,awint:rz,backcong:iz,backepsilon:az,backprime:oz,backsim:sz,backsimeq:lz,Backslash:cz,Barv:uz,barvee:dz,barwed:_z,Barwed:pz,barwedge:mz,bbrk:gz,bbrktbrk:Ez,bcong:fz,Bcy:Sz,bcy:bz,bdquo:hz,becaus:Tz,because:vz,Because:Cz,bemptyv:Rz,bepsi:Nz,bernou:Oz,Bernoullis:Az,Beta:yz,beta:Iz,beth:Dz,between:xz,Bfr:wz,bfr:Mz,bigcap:Lz,bigcirc:Pz,bigcup:kz,bigodot:Uz,bigoplus:Fz,bigotimes:Bz,bigsqcup:Gz,bigstar:Yz,bigtriangledown:qz,bigtriangleup:$z,biguplus:Hz,bigvee:zz,bigwedge:Vz,bkarow:Wz,blacklozenge:Kz,blacksquare:Qz,blacktriangle:Xz,blacktriangledown:Zz,blacktriangleleft:Jz,blacktriangleright:jz,blank:eV,blk12:tV,blk14:nV,blk34:rV,block:iV,bne:aV,bnequiv:oV,bNot:sV,bnot:lV,Bopf:cV,bopf:uV,bot:dV,bottom:_V,bowtie:pV,boxbox:mV,boxdl:gV,boxdL:EV,boxDl:fV,boxDL:SV,boxdr:bV,boxdR:hV,boxDr:TV,boxDR:vV,boxh:CV,boxH:RV,boxhd:NV,boxHd:OV,boxhD:AV,boxHD:yV,boxhu:IV,boxHu:DV,boxhU:xV,boxHU:wV,boxminus:MV,boxplus:LV,boxtimes:PV,boxul:kV,boxuL:UV,boxUl:FV,boxUL:BV,boxur:GV,boxuR:YV,boxUr:qV,boxUR:$V,boxv:HV,boxV:zV,boxvh:VV,boxvH:WV,boxVh:KV,boxVH:QV,boxvl:XV,boxvL:ZV,boxVl:JV,boxVL:jV,boxvr:eW,boxvR:tW,boxVr:nW,boxVR:rW,bprime:iW,breve:aW,Breve:oW,brvbar:sW,bscr:lW,Bscr:cW,bsemi:uW,bsim:dW,bsime:_W,bsolb:pW,bsol:mW,bsolhsub:gW,bull:EW,bullet:fW,bump:SW,bumpE:bW,bumpe:hW,Bumpeq:TW,bumpeq:vW,Cacute:CW,cacute:RW,capand:NW,capbrcup:OW,capcap:AW,cap:yW,Cap:IW,capcup:DW,capdot:xW,CapitalDifferentialD:wW,caps:MW,caret:LW,caron:PW,Cayleys:kW,ccaps:UW,Ccaron:FW,ccaron:BW,Ccedil:GW,ccedil:YW,Ccirc:qW,ccirc:$W,Cconint:HW,ccups:zW,ccupssm:VW,Cdot:WW,cdot:KW,cedil:QW,Cedilla:XW,cemptyv:ZW,cent:JW,centerdot:jW,CenterDot:e3,cfr:t3,Cfr:n3,CHcy:r3,chcy:i3,check:a3,checkmark:o3,Chi:s3,chi:l3,circ:c3,circeq:u3,circlearrowleft:d3,circlearrowright:_3,circledast:p3,circledcirc:m3,circleddash:g3,CircleDot:E3,circledR:f3,circledS:S3,CircleMinus:b3,CirclePlus:h3,CircleTimes:T3,cir:v3,cirE:C3,cire:R3,cirfnint:N3,cirmid:O3,cirscir:A3,ClockwiseContourIntegral:y3,CloseCurlyDoubleQuote:I3,CloseCur
lyQuote:D3,clubs:x3,clubsuit:w3,colon:M3,Colon:L3,Colone:P3,colone:k3,coloneq:U3,comma:F3,commat:B3,comp:G3,compfn:Y3,complement:q3,complexes:$3,cong:H3,congdot:z3,Congruent:V3,conint:W3,Conint:K3,ContourIntegral:Q3,copf:X3,Copf:Z3,coprod:J3,Coproduct:j3,copy:eK,COPY:tK,copysr:nK,CounterClockwiseContourIntegral:rK,crarr:iK,cross:aK,Cross:oK,Cscr:sK,cscr:lK,csub:cK,csube:uK,csup:dK,csupe:_K,ctdot:pK,cudarrl:mK,cudarrr:gK,cuepr:EK,cuesc:fK,cularr:SK,cularrp:bK,cupbrcap:hK,cupcap:TK,CupCap:vK,cup:CK,Cup:RK,cupcup:NK,cupdot:OK,cupor:AK,cups:yK,curarr:IK,curarrm:DK,curlyeqprec:xK,curlyeqsucc:wK,curlyvee:MK,curlywedge:LK,curren:PK,curvearrowleft:kK,curvearrowright:UK,cuvee:FK,cuwed:BK,cwconint:GK,cwint:YK,cylcty:qK,dagger:$K,Dagger:HK,daleth:zK,darr:VK,Darr:WK,dArr:KK,dash:QK,Dashv:XK,dashv:ZK,dbkarow:JK,dblac:jK,Dcaron:eQ,dcaron:tQ,Dcy:nQ,dcy:rQ,ddagger:iQ,ddarr:aQ,DD:oQ,dd:sQ,DDotrahd:lQ,ddotseq:cQ,deg:uQ,Del:dQ,Delta:_Q,delta:pQ,demptyv:mQ,dfisht:gQ,Dfr:EQ,dfr:fQ,dHar:SQ,dharl:bQ,dharr:hQ,DiacriticalAcute:TQ,DiacriticalDot:vQ,DiacriticalDoubleAcute:CQ,DiacriticalGrave:RQ,DiacriticalTilde:NQ,diam:OQ,diamond:AQ,Diamond:yQ,diamondsuit:IQ,diams:DQ,die:xQ,DifferentialD:wQ,digamma:MQ,disin:LQ,div:PQ,divide:kQ,divideontimes:UQ,divonx:FQ,DJcy:BQ,djcy:GQ,dlcorn:YQ,dlcrop:qQ,dollar:$Q,Dopf:HQ,dopf:zQ,Dot:VQ,dot:WQ,DotDot:KQ,doteq:QQ,doteqdot:XQ,DotEqual:ZQ,dotminus:JQ,dotplus:jQ,dotsquare:e4,doublebarwedge:t4,DoubleContourIntegral:n4,DoubleDot:r4,DoubleDownArrow:i4,DoubleLeftArrow:a4,DoubleLeftRightArrow:o4,DoubleLeftTee:s4,DoubleLongLeftArrow:l4,DoubleLongLeftRightArrow:c4,DoubleLongRightArrow:u4,DoubleRightArrow:d4,DoubleRightTee:_4,DoubleUpArrow:p4,DoubleUpDownArrow:m4,DoubleVerticalBar:g4,DownArrowBar:E4,downarrow:f4,DownArrow:S4,Downarrow:b4,DownArrowUpArrow:h4,DownBreve:T4,downdownarrows:v4,downharpoonleft:C4,downharpoonright:R4,DownLeftRightVector:N4,DownLeftTeeVector:O4,DownLeftVectorBar:A4,DownLeftVector:y4,DownRightTeeVector:I4,DownRightVectorBar:D4,DownRightVector:x4,DownTeeArrow:w4,DownTee:M4,drbkarow:L4,drcorn:P4,drcrop:k4,Dscr:U4,dscr:F4,DScy:B4,dscy:G4,dsol:Y4,Dstrok:q4,dstrok:$4,dtdot:H4,dtri:z4,dtrif:V4,duarr:W4,duhar:K4,dwangle:Q4,DZcy:X4,dzcy:Z4,dzigrarr:J4,Eacute:j4,eacute:e5,easter:t5,Ecaron:n5,ecaron:r5,Ecirc:i5,ecirc:a5,ecir:o5,ecolon:s5,Ecy:l5,ecy:c5,eDDot:u5,Edot:d5,edot:_5,eDot:p5,ee:m5,efDot:g5,Efr:E5,efr:f5,eg:S5,Egrave:b5,egrave:h5,egs:T5,egsdot:v5,el:C5,Element:R5,elinters:N5,ell:O5,els:A5,elsdot:y5,Emacr:I5,emacr:D5,empty:x5,emptyset:w5,EmptySmallSquare:M5,emptyv:L5,EmptyVerySmallSquare:P5,emsp13:k5,emsp14:U5,emsp:F5,ENG:B5,eng:G5,ensp:Y5,Eogon:q5,eogon:$5,Eopf:H5,eopf:z5,epar:V5,eparsl:W5,eplus:K5,epsi:Q5,Epsilon:X5,epsilon:Z5,epsiv:J5,eqcirc:j5,eqcolon:e6,eqsim:t6,eqslantgtr:n6,eqslantless:r6,Equal:i6,equals:a6,EqualTilde:o6,equest:s6,Equilibrium:l6,equiv:c6,equivDD:u6,eqvparsl:d6,erarr:_6,erDot:p6,escr:m6,Escr:g6,esdot:E6,Esim:f6,esim:S6,Eta:b6,eta:h6,ETH:T6,eth:v6,Euml:C6,euml:R6,euro:N6,excl:O6,exist:A6,Exists:y6,expectation:I6,exponentiale:D6,ExponentialE:x6,fallingdotseq:w6,Fcy:M6,fcy:L6,female:P6,ffilig:k6,fflig:U6,ffllig:F6,Ffr:B6,ffr:G6,filig:Y6,FilledSmallSquare:q6,FilledVerySmallSquare:$6,fjlig:H6,flat:z6,fllig:V6,fltns:W6,fnof:K6,Fopf:Q6,fopf:X6,forall:Z6,ForAll:J6,fork:j6,forkv:e9,Fouriertrf:t9,fpartint:n9,frac12:r9,frac13:i9,frac14:a9,frac15:o9,frac16:s9,frac18:l9,frac23:c9,frac25:u9,frac34:d9,frac35:_9,frac38:p9,frac45:m9,frac56:g9,frac58:E9,frac78:f9,frasl:S9,frown:b9,fscr:h9,Fscr:T9,gacute:v9,Gamma:C9,gamma:R9,Gammad:N9,gammad:O9,gap:A9,Gbreve:y9,gbreve:I
9,Gcedil:D9,Gcirc:x9,gcirc:w9,Gcy:M9,gcy:L9,Gdot:P9,gdot:k9,ge:U9,gE:F9,gEl:B9,gel:G9,geq:Y9,geqq:q9,geqslant:$9,gescc:H9,ges:z9,gesdot:V9,gesdoto:W9,gesdotol:K9,gesl:Q9,gesles:X9,Gfr:Z9,gfr:J9,gg:j9,Gg:e8,ggg:t8,gimel:n8,GJcy:r8,gjcy:i8,gla:a8,gl:o8,glE:s8,glj:l8,gnap:c8,gnapprox:u8,gne:d8,gnE:_8,gneq:p8,gneqq:m8,gnsim:g8,Gopf:E8,gopf:f8,grave:S8,GreaterEqual:b8,GreaterEqualLess:h8,GreaterFullEqual:T8,GreaterGreater:v8,GreaterLess:C8,GreaterSlantEqual:R8,GreaterTilde:N8,Gscr:O8,gscr:A8,gsim:y8,gsime:I8,gsiml:D8,gtcc:x8,gtcir:w8,gt:M8,GT:L8,Gt:P8,gtdot:k8,gtlPar:U8,gtquest:F8,gtrapprox:B8,gtrarr:G8,gtrdot:Y8,gtreqless:q8,gtreqqless:$8,gtrless:H8,gtrsim:z8,gvertneqq:V8,gvnE:W8,Hacek:K8,hairsp:Q8,half:X8,hamilt:Z8,HARDcy:J8,hardcy:j8,harrcir:e7,harr:t7,hArr:n7,harrw:r7,Hat:i7,hbar:a7,Hcirc:o7,hcirc:s7,hearts:l7,heartsuit:c7,hellip:u7,hercon:d7,hfr:_7,Hfr:p7,HilbertSpace:m7,hksearow:g7,hkswarow:E7,hoarr:f7,homtht:S7,hookleftarrow:b7,hookrightarrow:h7,hopf:T7,Hopf:v7,horbar:C7,HorizontalLine:R7,hscr:N7,Hscr:O7,hslash:A7,Hstrok:y7,hstrok:I7,HumpDownHump:D7,HumpEqual:x7,hybull:w7,hyphen:M7,Iacute:L7,iacute:P7,ic:k7,Icirc:U7,icirc:F7,Icy:B7,icy:G7,Idot:Y7,IEcy:q7,iecy:$7,iexcl:H7,iff:z7,ifr:V7,Ifr:W7,Igrave:K7,igrave:Q7,ii:X7,iiiint:Z7,iiint:J7,iinfin:j7,iiota:eX,IJlig:tX,ijlig:nX,Imacr:rX,imacr:iX,image:aX,ImaginaryI:oX,imagline:sX,imagpart:lX,imath:cX,Im:uX,imof:dX,imped:_X,Implies:pX,incare:mX,in:"∈",infin:gX,infintie:EX,inodot:fX,intcal:SX,int:bX,Int:hX,integers:TX,Integral:vX,intercal:CX,Intersection:RX,intlarhk:NX,intprod:OX,InvisibleComma:AX,InvisibleTimes:yX,IOcy:IX,iocy:DX,Iogon:xX,iogon:wX,Iopf:MX,iopf:LX,Iota:PX,iota:kX,iprod:UX,iquest:FX,iscr:BX,Iscr:GX,isin:YX,isindot:qX,isinE:$X,isins:HX,isinsv:zX,isinv:VX,it:WX,Itilde:KX,itilde:QX,Iukcy:XX,iukcy:ZX,Iuml:JX,iuml:jX,Jcirc:eZ,jcirc:tZ,Jcy:nZ,jcy:rZ,Jfr:iZ,jfr:aZ,jmath:oZ,Jopf:sZ,jopf:lZ,Jscr:cZ,jscr:uZ,Jsercy:dZ,jsercy:_Z,Jukcy:pZ,jukcy:mZ,Kappa:gZ,kappa:EZ,kappav:fZ,Kcedil:SZ,kcedil:bZ,Kcy:hZ,kcy:TZ,Kfr:vZ,kfr:CZ,kgreen:RZ,KHcy:NZ,khcy:OZ,KJcy:AZ,kjcy:yZ,Kopf:IZ,kopf:DZ,Kscr:xZ,kscr:wZ,lAarr:MZ,Lacute:LZ,lacute:PZ,laemptyv:kZ,lagran:UZ,Lambda:FZ,lambda:BZ,lang:GZ,Lang:YZ,langd:qZ,langle:$Z,lap:HZ,Laplacetrf:zZ,laquo:VZ,larrb:WZ,larrbfs:KZ,larr:QZ,Larr:XZ,lArr:ZZ,larrfs:JZ,larrhk:jZ,larrlp:eJ,larrpl:tJ,larrsim:nJ,larrtl:rJ,latail:iJ,lAtail:aJ,lat:oJ,late:sJ,lates:lJ,lbarr:cJ,lBarr:uJ,lbbrk:dJ,lbrace:_J,lbrack:pJ,lbrke:mJ,lbrksld:gJ,lbrkslu:EJ,Lcaron:fJ,lcaron:SJ,Lcedil:bJ,lcedil:hJ,lceil:TJ,lcub:vJ,Lcy:CJ,lcy:RJ,ldca:NJ,ldquo:OJ,ldquor:AJ,ldrdhar:yJ,ldrushar:IJ,ldsh:DJ,le:xJ,lE:wJ,LeftAngleBracket:MJ,LeftArrowBar:LJ,leftarrow:PJ,LeftArrow:kJ,Leftarrow:UJ,LeftArrowRightArrow:FJ,leftarrowtail:BJ,LeftCeiling:GJ,LeftDoubleBracket:YJ,LeftDownTeeVector:qJ,LeftDownVectorBar:$J,LeftDownVector:HJ,LeftFloor:zJ,leftharpoondown:VJ,leftharpoonup:WJ,leftleftarrows:KJ,leftrightarrow:QJ,LeftRightArrow:XJ,Leftrightarrow:ZJ,leftrightarrows:JJ,leftrightharpoons:jJ,leftrightsquigarrow:ej,LeftRightVector:tj,LeftTeeArrow:nj,LeftTee:rj,LeftTeeVector:ij,leftthreetimes:aj,LeftTriangleBar:oj,LeftTriangle:sj,LeftTriangleEqual:lj,LeftUpDownVector:cj,LeftUpTeeVector:uj,LeftUpVectorBar:dj,LeftUpVector:_j,LeftVectorBar:pj,LeftVector:mj,lEg:gj,leg:Ej,leq:fj,leqq:Sj,leqslant:bj,lescc:hj,les:Tj,lesdot:vj,lesdoto:Cj,lesdotor:Rj,lesg:Nj,lesges:Oj,lessapprox:Aj,lessdot:yj,lesseqgtr:Ij,lesseqqgtr:Dj,LessEqualGreater:xj,LessFullEqual:wj,LessGreater:Mj,lessgtr:Lj,LessLess:Pj,lesssim:kj,LessSlantEqual:Uj,LessTilde:Fj,lfisht:Bj,lfloor:Gj,Lfr:Yj,lfr:qj,lg:$j,lgE:Hj,lHar:zj,lhar
d:Vj,lharu:Wj,lharul:Kj,lhblk:Qj,LJcy:Xj,ljcy:Zj,llarr:Jj,ll:jj,Ll:eee,llcorner:tee,Lleftarrow:nee,llhard:ree,lltri:iee,Lmidot:aee,lmidot:oee,lmoustache:see,lmoust:lee,lnap:cee,lnapprox:uee,lne:dee,lnE:_ee,lneq:pee,lneqq:mee,lnsim:gee,loang:Eee,loarr:fee,lobrk:See,longleftarrow:bee,LongLeftArrow:hee,Longleftarrow:Tee,longleftrightarrow:vee,LongLeftRightArrow:Cee,Longleftrightarrow:Ree,longmapsto:Nee,longrightarrow:Oee,LongRightArrow:Aee,Longrightarrow:yee,looparrowleft:Iee,looparrowright:Dee,lopar:xee,Lopf:wee,lopf:Mee,loplus:Lee,lotimes:Pee,lowast:kee,lowbar:Uee,LowerLeftArrow:Fee,LowerRightArrow:Bee,loz:Gee,lozenge:Yee,lozf:qee,lpar:$ee,lparlt:Hee,lrarr:zee,lrcorner:Vee,lrhar:Wee,lrhard:Kee,lrm:Qee,lrtri:Xee,lsaquo:Zee,lscr:Jee,Lscr:jee,lsh:ete,Lsh:tte,lsim:nte,lsime:rte,lsimg:ite,lsqb:ate,lsquo:ote,lsquor:ste,Lstrok:lte,lstrok:cte,ltcc:ute,ltcir:dte,lt:_te,LT:pte,Lt:mte,ltdot:gte,lthree:Ete,ltimes:fte,ltlarr:Ste,ltquest:bte,ltri:hte,ltrie:Tte,ltrif:vte,ltrPar:Cte,lurdshar:Rte,luruhar:Nte,lvertneqq:Ote,lvnE:Ate,macr:yte,male:Ite,malt:Dte,maltese:xte,Map:"⤅",map:wte,mapsto:Mte,mapstodown:Lte,mapstoleft:Pte,mapstoup:kte,marker:Ute,mcomma:Fte,Mcy:Bte,mcy:Gte,mdash:Yte,mDDot:qte,measuredangle:$te,MediumSpace:Hte,Mellintrf:zte,Mfr:Vte,mfr:Wte,mho:Kte,micro:Qte,midast:Xte,midcir:Zte,mid:Jte,middot:jte,minusb:ene,minus:tne,minusd:nne,minusdu:rne,MinusPlus:ine,mlcp:ane,mldr:one,mnplus:sne,models:lne,Mopf:cne,mopf:une,mp:dne,mscr:_ne,Mscr:pne,mstpos:mne,Mu:gne,mu:Ene,multimap:fne,mumap:Sne,nabla:bne,Nacute:hne,nacute:Tne,nang:vne,nap:Cne,napE:Rne,napid:Nne,napos:One,napprox:Ane,natural:yne,naturals:Ine,natur:Dne,nbsp:xne,nbump:wne,nbumpe:Mne,ncap:Lne,Ncaron:Pne,ncaron:kne,Ncedil:Une,ncedil:Fne,ncong:Bne,ncongdot:Gne,ncup:Yne,Ncy:qne,ncy:$ne,ndash:Hne,nearhk:zne,nearr:Vne,neArr:Wne,nearrow:Kne,ne:Qne,nedot:Xne,NegativeMediumSpace:Zne,NegativeThickSpace:Jne,NegativeThinSpace:jne,NegativeVeryThinSpace:ere,nequiv:tre,nesear:nre,nesim:rre,NestedGreaterGreater:ire,NestedLessLess:are,NewLine:ore,nexist:sre,nexists:lre,Nfr:cre,nfr:ure,ngE:dre,nge:_re,ngeq:pre,ngeqq:mre,ngeqslant:gre,nges:Ere,nGg:fre,ngsim:Sre,nGt:bre,ngt:hre,ngtr:Tre,nGtv:vre,nharr:Cre,nhArr:Rre,nhpar:Nre,ni:Ore,nis:Are,nisd:yre,niv:Ire,NJcy:Dre,njcy:xre,nlarr:wre,nlArr:Mre,nldr:Lre,nlE:Pre,nle:kre,nleftarrow:Ure,nLeftarrow:Fre,nleftrightarrow:Bre,nLeftrightarrow:Gre,nleq:Yre,nleqq:qre,nleqslant:$re,nles:Hre,nless:zre,nLl:Vre,nlsim:Wre,nLt:Kre,nlt:Qre,nltri:Xre,nltrie:Zre,nLtv:Jre,nmid:jre,NoBreak:eie,NonBreakingSpace:tie,nopf:nie,Nopf:rie,Not:iie,not:aie,NotCongruent:oie,NotCupCap:sie,NotDoubleVerticalBar:lie,NotElement:cie,NotEqual:uie,NotEqualTilde:die,NotExists:_ie,NotGreater:pie,NotGreaterEqual:mie,NotGreaterFullEqual:gie,NotGreaterGreater:Eie,NotGreaterLess:fie,NotGreaterSlantEqual:Sie,NotGreaterTilde:bie,NotHumpDownHump:hie,NotHumpEqual:Tie,notin:vie,notindot:Cie,notinE:Rie,notinva:Nie,notinvb:Oie,notinvc:Aie,NotLeftTriangleBar:yie,NotLeftTriangle:Iie,NotLeftTriangleEqual:Die,NotLess:xie,NotLessEqual:wie,NotLessGreater:Mie,NotLessLess:Lie,NotLessSlantEqual:Pie,NotLessTilde:kie,NotNestedGreaterGreater:Uie,NotNestedLessLess:Fie,notni:Bie,notniva:Gie,notnivb:Yie,notnivc:qie,NotPrecedes:$ie,NotPrecedesEqual:Hie,NotPrecedesSlantEqual:zie,NotReverseElement:Vie,NotRightTriangleBar:Wie,NotRightTriangle:Kie,NotRightTriangleEqual:Qie,NotSquareSubset:Xie,NotSquareSubsetEqual:Zie,NotSquareSuperset:Jie,NotSquareSupersetEqual:jie,NotSubset:eae,NotSubsetEqual:tae,NotSucceeds:nae,NotSucceedsEqual:rae,NotSucceedsSlantEqual:iae,NotSucceedsTilde:aae,
NotSuperset:oae,NotSupersetEqual:sae,NotTilde:lae,NotTildeEqual:cae,NotTildeFullEqual:uae,NotTildeTilde:dae,NotVerticalBar:_ae,nparallel:pae,npar:mae,nparsl:gae,npart:Eae,npolint:fae,npr:Sae,nprcue:bae,nprec:hae,npreceq:Tae,npre:vae,nrarrc:Cae,nrarr:Rae,nrArr:Nae,nrarrw:Oae,nrightarrow:Aae,nRightarrow:yae,nrtri:Iae,nrtrie:Dae,nsc:xae,nsccue:wae,nsce:Mae,Nscr:Lae,nscr:Pae,nshortmid:kae,nshortparallel:Uae,nsim:Fae,nsime:Bae,nsimeq:Gae,nsmid:Yae,nspar:qae,nsqsube:$ae,nsqsupe:Hae,nsub:zae,nsubE:Vae,nsube:Wae,nsubset:Kae,nsubseteq:Qae,nsubseteqq:Xae,nsucc:Zae,nsucceq:Jae,nsup:jae,nsupE:eoe,nsupe:toe,nsupset:noe,nsupseteq:roe,nsupseteqq:ioe,ntgl:aoe,Ntilde:ooe,ntilde:soe,ntlg:loe,ntriangleleft:coe,ntrianglelefteq:uoe,ntriangleright:doe,ntrianglerighteq:_oe,Nu:poe,nu:moe,num:goe,numero:Eoe,numsp:foe,nvap:Soe,nvdash:boe,nvDash:hoe,nVdash:Toe,nVDash:voe,nvge:Coe,nvgt:Roe,nvHarr:Noe,nvinfin:Ooe,nvlArr:Aoe,nvle:yoe,nvlt:Ioe,nvltrie:Doe,nvrArr:xoe,nvrtrie:woe,nvsim:Moe,nwarhk:Loe,nwarr:Poe,nwArr:koe,nwarrow:Uoe,nwnear:Foe,Oacute:Boe,oacute:Goe,oast:Yoe,Ocirc:qoe,ocirc:$oe,ocir:Hoe,Ocy:zoe,ocy:Voe,odash:Woe,Odblac:Koe,odblac:Qoe,odiv:Xoe,odot:Zoe,odsold:Joe,OElig:joe,oelig:ese,ofcir:tse,Ofr:nse,ofr:rse,ogon:ise,Ograve:ase,ograve:ose,ogt:sse,ohbar:lse,ohm:cse,oint:use,olarr:dse,olcir:_se,olcross:pse,oline:mse,olt:gse,Omacr:Ese,omacr:fse,Omega:Sse,omega:bse,Omicron:hse,omicron:Tse,omid:vse,ominus:Cse,Oopf:Rse,oopf:Nse,opar:Ose,OpenCurlyDoubleQuote:Ase,OpenCurlyQuote:yse,operp:Ise,oplus:Dse,orarr:xse,Or:wse,or:Mse,ord:Lse,order:Pse,orderof:kse,ordf:Use,ordm:Fse,origof:Bse,oror:Gse,orslope:Yse,orv:qse,oS:$se,Oscr:Hse,oscr:zse,Oslash:Vse,oslash:Wse,osol:Kse,Otilde:Qse,otilde:Xse,otimesas:Zse,Otimes:Jse,otimes:jse,Ouml:ele,ouml:tle,ovbar:nle,OverBar:rle,OverBrace:ile,OverBracket:ale,OverParenthesis:ole,para:sle,parallel:lle,par:cle,parsim:ule,parsl:dle,part:_le,PartialD:ple,Pcy:mle,pcy:gle,percnt:Ele,period:fle,permil:Sle,perp:ble,pertenk:hle,Pfr:Tle,pfr:vle,Phi:Cle,phi:Rle,phiv:Nle,phmmat:Ole,phone:Ale,Pi:yle,pi:Ile,pitchfork:Dle,piv:xle,planck:wle,planckh:Mle,plankv:Lle,plusacir:Ple,plusb:kle,pluscir:Ule,plus:Fle,plusdo:Ble,plusdu:Gle,pluse:Yle,PlusMinus:qle,plusmn:$le,plussim:Hle,plustwo:zle,pm:Vle,Poincareplane:Wle,pointint:Kle,popf:Qle,Popf:Xle,pound:Zle,prap:Jle,Pr:jle,pr:ece,prcue:tce,precapprox:nce,prec:rce,preccurlyeq:ice,Precedes:ace,PrecedesEqual:oce,PrecedesSlantEqual:sce,PrecedesTilde:lce,preceq:cce,precnapprox:uce,precneqq:dce,precnsim:_ce,pre:pce,prE:mce,precsim:gce,prime:Ece,Prime:fce,primes:Sce,prnap:bce,prnE:hce,prnsim:Tce,prod:vce,Product:Cce,profalar:Rce,profline:Nce,profsurf:Oce,prop:Ace,Proportional:yce,Proportion:Ice,propto:Dce,prsim:xce,prurel:wce,Pscr:Mce,pscr:Lce,Psi:Pce,psi:kce,puncsp:Uce,Qfr:Fce,qfr:Bce,qint:Gce,qopf:Yce,Qopf:qce,qprime:$ce,Qscr:Hce,qscr:zce,quaternions:Vce,quatint:Wce,quest:Kce,questeq:Qce,quot:Xce,QUOT:Zce,rAarr:Jce,race:jce,Racute:eue,racute:tue,radic:nue,raemptyv:rue,rang:iue,Rang:aue,rangd:oue,range:sue,rangle:lue,raquo:cue,rarrap:uue,rarrb:due,rarrbfs:_ue,rarrc:pue,rarr:mue,Rarr:gue,rArr:Eue,rarrfs:fue,rarrhk:Sue,rarrlp:bue,rarrpl:hue,rarrsim:Tue,Rarrtl:vue,rarrtl:Cue,rarrw:Rue,ratail:Nue,rAtail:Oue,ratio:Aue,rationals:yue,rbarr:Iue,rBarr:Due,RBarr:xue,rbbrk:wue,rbrace:Mue,rbrack:Lue,rbrke:Pue,rbrksld:kue,rbrkslu:Uue,Rcaron:Fue,rcaron:Bue,Rcedil:Gue,rcedil:Yue,rceil:que,rcub:$ue,Rcy:Hue,rcy:zue,rdca:Vue,rdldhar:Wue,rdquo:Kue,rdquor:Que,rdsh:Xue,real:Zue,realine:Jue,realpart:jue,reals:ede,Re:tde,rect:nde,reg:rde,REG:ide,ReverseElement:ade,ReverseEquilibrium:od
e,ReverseUpEquilibrium:sde,rfisht:lde,rfloor:cde,rfr:ude,Rfr:dde,rHar:_de,rhard:pde,rharu:mde,rharul:gde,Rho:Ede,rho:fde,rhov:Sde,RightAngleBracket:bde,RightArrowBar:hde,rightarrow:Tde,RightArrow:vde,Rightarrow:Cde,RightArrowLeftArrow:Rde,rightarrowtail:Nde,RightCeiling:Ode,RightDoubleBracket:Ade,RightDownTeeVector:yde,RightDownVectorBar:Ide,RightDownVector:Dde,RightFloor:xde,rightharpoondown:wde,rightharpoonup:Mde,rightleftarrows:Lde,rightleftharpoons:Pde,rightrightarrows:kde,rightsquigarrow:Ude,RightTeeArrow:Fde,RightTee:Bde,RightTeeVector:Gde,rightthreetimes:Yde,RightTriangleBar:qde,RightTriangle:$de,RightTriangleEqual:Hde,RightUpDownVector:zde,RightUpTeeVector:Vde,RightUpVectorBar:Wde,RightUpVector:Kde,RightVectorBar:Qde,RightVector:Xde,ring:Zde,risingdotseq:Jde,rlarr:jde,rlhar:e_e,rlm:t_e,rmoustache:n_e,rmoust:r_e,rnmid:i_e,roang:a_e,roarr:o_e,robrk:s_e,ropar:l_e,ropf:c_e,Ropf:u_e,roplus:d_e,rotimes:__e,RoundImplies:p_e,rpar:m_e,rpargt:g_e,rppolint:E_e,rrarr:f_e,Rrightarrow:S_e,rsaquo:b_e,rscr:h_e,Rscr:T_e,rsh:v_e,Rsh:C_e,rsqb:R_e,rsquo:N_e,rsquor:O_e,rthree:A_e,rtimes:y_e,rtri:I_e,rtrie:D_e,rtrif:x_e,rtriltri:w_e,RuleDelayed:M_e,ruluhar:L_e,rx:P_e,Sacute:k_e,sacute:U_e,sbquo:F_e,scap:B_e,Scaron:G_e,scaron:Y_e,Sc:q_e,sc:$_e,sccue:H_e,sce:z_e,scE:V_e,Scedil:W_e,scedil:K_e,Scirc:Q_e,scirc:X_e,scnap:Z_e,scnE:J_e,scnsim:j_e,scpolint:epe,scsim:tpe,Scy:npe,scy:rpe,sdotb:ipe,sdot:ape,sdote:ope,searhk:spe,searr:lpe,seArr:cpe,searrow:upe,sect:dpe,semi:_pe,seswar:ppe,setminus:mpe,setmn:gpe,sext:Epe,Sfr:fpe,sfr:Spe,sfrown:bpe,sharp:hpe,SHCHcy:Tpe,shchcy:vpe,SHcy:Cpe,shcy:Rpe,ShortDownArrow:Npe,ShortLeftArrow:Ope,shortmid:Ape,shortparallel:ype,ShortRightArrow:Ipe,ShortUpArrow:Dpe,shy:xpe,Sigma:wpe,sigma:Mpe,sigmaf:Lpe,sigmav:Ppe,sim:kpe,simdot:Upe,sime:Fpe,simeq:Bpe,simg:Gpe,simgE:Ype,siml:qpe,simlE:$pe,simne:Hpe,simplus:zpe,simrarr:Vpe,slarr:Wpe,SmallCircle:Kpe,smallsetminus:Qpe,smashp:Xpe,smeparsl:Zpe,smid:Jpe,smile:jpe,smt:eme,smte:tme,smtes:nme,SOFTcy:rme,softcy:ime,solbar:ame,solb:ome,sol:sme,Sopf:lme,sopf:cme,spades:ume,spadesuit:dme,spar:_me,sqcap:pme,sqcaps:mme,sqcup:gme,sqcups:Eme,Sqrt:fme,sqsub:Sme,sqsube:bme,sqsubset:hme,sqsubseteq:Tme,sqsup:vme,sqsupe:Cme,sqsupset:Rme,sqsupseteq:Nme,square:Ome,Square:Ame,SquareIntersection:yme,SquareSubset:Ime,SquareSubsetEqual:Dme,SquareSuperset:xme,SquareSupersetEqual:wme,SquareUnion:Mme,squarf:Lme,squ:Pme,squf:kme,srarr:Ume,Sscr:Fme,sscr:Bme,ssetmn:Gme,ssmile:Yme,sstarf:qme,Star:$me,star:Hme,starf:zme,straightepsilon:Vme,straightphi:Wme,strns:Kme,sub:Qme,Sub:Xme,subdot:Zme,subE:Jme,sube:jme,subedot:ege,submult:tge,subnE:nge,subne:rge,subplus:ige,subrarr:age,subset:oge,Subset:sge,subseteq:lge,subseteqq:cge,SubsetEqual:uge,subsetneq:dge,subsetneqq:_ge,subsim:pge,subsub:mge,subsup:gge,succapprox:Ege,succ:fge,succcurlyeq:Sge,Succeeds:bge,SucceedsEqual:hge,SucceedsSlantEqual:Tge,SucceedsTilde:vge,succeq:Cge,succnapprox:Rge,succneqq:Nge,succnsim:Oge,succsim:Age,SuchThat:yge,sum:Ige,Sum:Dge,sung:xge,sup1:wge,sup2:Mge,sup3:Lge,sup:Pge,Sup:kge,supdot:Uge,supdsub:Fge,supE:Bge,supe:Gge,supedot:Yge,Superset:qge,SupersetEqual:$ge,suphsol:Hge,suphsub:zge,suplarr:Vge,supmult:Wge,supnE:Kge,supne:Qge,supplus:Xge,supset:Zge,Supset:Jge,supseteq:jge,supseteqq:eEe,supsetneq:tEe,supsetneqq:nEe,supsim:rEe,supsub:iEe,supsup:aEe,swarhk:oEe,swarr:sEe,swArr:lEe,swarrow:cEe,swnwar:uEe,szlig:dEe,Tab:_Ee,target:pEe,Tau:mEe,tau:gEe,tbrk:EEe,Tcaron:fEe,tcaron:SEe,Tcedil:bEe,tcedil:hEe,Tcy:TEe,tcy:vEe,tdot:CEe,telrec:REe,Tfr:NEe,tfr:OEe,there4:AEe,therefore:yEe,Therefore:IEe,Theta:
DEe,theta:xEe,thetasym:wEe,thetav:MEe,thickapprox:LEe,thicksim:PEe,ThickSpace:kEe,ThinSpace:UEe,thinsp:FEe,thkap:BEe,thksim:GEe,THORN:YEe,thorn:qEe,tilde:$Ee,Tilde:HEe,TildeEqual:zEe,TildeFullEqual:VEe,TildeTilde:WEe,timesbar:KEe,timesb:QEe,times:XEe,timesd:ZEe,tint:JEe,toea:jEe,topbot:efe,topcir:tfe,top:nfe,Topf:rfe,topf:ife,topfork:afe,tosa:ofe,tprime:sfe,trade:lfe,TRADE:cfe,triangle:ufe,triangledown:dfe,triangleleft:_fe,trianglelefteq:pfe,triangleq:mfe,triangleright:gfe,trianglerighteq:Efe,tridot:ffe,trie:Sfe,triminus:bfe,TripleDot:hfe,triplus:Tfe,trisb:vfe,tritime:Cfe,trpezium:Rfe,Tscr:Nfe,tscr:Ofe,TScy:Afe,tscy:yfe,TSHcy:Ife,tshcy:Dfe,Tstrok:xfe,tstrok:wfe,twixt:Mfe,twoheadleftarrow:Lfe,twoheadrightarrow:Pfe,Uacute:kfe,uacute:Ufe,uarr:Ffe,Uarr:Bfe,uArr:Gfe,Uarrocir:Yfe,Ubrcy:qfe,ubrcy:$fe,Ubreve:Hfe,ubreve:zfe,Ucirc:Vfe,ucirc:Wfe,Ucy:Kfe,ucy:Qfe,udarr:Xfe,Udblac:Zfe,udblac:Jfe,udhar:jfe,ufisht:eSe,Ufr:tSe,ufr:nSe,Ugrave:rSe,ugrave:iSe,uHar:aSe,uharl:oSe,uharr:sSe,uhblk:lSe,ulcorn:cSe,ulcorner:uSe,ulcrop:dSe,ultri:_Se,Umacr:pSe,umacr:mSe,uml:gSe,UnderBar:ESe,UnderBrace:fSe,UnderBracket:SSe,UnderParenthesis:bSe,Union:hSe,UnionPlus:TSe,Uogon:vSe,uogon:CSe,Uopf:RSe,uopf:NSe,UpArrowBar:OSe,uparrow:ASe,UpArrow:ySe,Uparrow:ISe,UpArrowDownArrow:DSe,updownarrow:xSe,UpDownArrow:wSe,Updownarrow:MSe,UpEquilibrium:LSe,upharpoonleft:PSe,upharpoonright:kSe,uplus:USe,UpperLeftArrow:FSe,UpperRightArrow:BSe,upsi:GSe,Upsi:YSe,upsih:qSe,Upsilon:$Se,upsilon:HSe,UpTeeArrow:zSe,UpTee:VSe,upuparrows:WSe,urcorn:KSe,urcorner:QSe,urcrop:XSe,Uring:ZSe,uring:JSe,urtri:jSe,Uscr:ebe,uscr:tbe,utdot:nbe,Utilde:rbe,utilde:ibe,utri:abe,utrif:obe,uuarr:sbe,Uuml:lbe,uuml:cbe,uwangle:ube,vangrt:dbe,varepsilon:_be,varkappa:pbe,varnothing:mbe,varphi:gbe,varpi:Ebe,varpropto:fbe,varr:Sbe,vArr:bbe,varrho:hbe,varsigma:Tbe,varsubsetneq:vbe,varsubsetneqq:Cbe,varsupsetneq:Rbe,varsupsetneqq:Nbe,vartheta:Obe,vartriangleleft:Abe,vartriangleright:ybe,vBar:Ibe,Vbar:Dbe,vBarv:xbe,Vcy:wbe,vcy:Mbe,vdash:Lbe,vDash:Pbe,Vdash:kbe,VDash:Ube,Vdashl:Fbe,veebar:Bbe,vee:Gbe,Vee:Ybe,veeeq:qbe,vellip:$be,verbar:Hbe,Verbar:zbe,vert:Vbe,Vert:Wbe,VerticalBar:Kbe,VerticalLine:Qbe,VerticalSeparator:Xbe,VerticalTilde:Zbe,VeryThinSpace:Jbe,Vfr:jbe,vfr:ehe,vltri:the,vnsub:nhe,vnsup:rhe,Vopf:ihe,vopf:ahe,vprop:ohe,vrtri:she,Vscr:lhe,vscr:che,vsubnE:uhe,vsubne:dhe,vsupnE:_he,vsupne:phe,Vvdash:mhe,vzigzag:ghe,Wcirc:Ehe,wcirc:fhe,wedbar:She,wedge:bhe,Wedge:hhe,wedgeq:The,weierp:vhe,Wfr:Che,wfr:Rhe,Wopf:Nhe,wopf:Ohe,wp:Ahe,wr:yhe,wreath:Ihe,Wscr:Dhe,wscr:xhe,xcap:whe,xcirc:Mhe,xcup:Lhe,xdtri:Phe,Xfr:khe,xfr:Uhe,xharr:Fhe,xhArr:Bhe,Xi:Ghe,xi:Yhe,xlarr:qhe,xlArr:$he,xmap:Hhe,xnis:zhe,xodot:Vhe,Xopf:Whe,xopf:Khe,xoplus:Qhe,xotime:Xhe,xrarr:Zhe,xrArr:Jhe,Xscr:jhe,xscr:eTe,xsqcup:tTe,xuplus:nTe,xutri:rTe,xvee:iTe,xwedge:aTe,Yacute:oTe,yacute:sTe,YAcy:lTe,yacy:cTe,Ycirc:uTe,ycirc:dTe,Ycy:_Te,ycy:pTe,yen:mTe,Yfr:gTe,yfr:ETe,YIcy:fTe,yicy:STe,Yopf:bTe,yopf:hTe,Yscr:TTe,yscr:vTe,YUcy:CTe,yucy:RTe,yuml:NTe,Yuml:OTe,Zacute:ATe,zacute:yTe,Zcaron:ITe,zcaron:DTe,Zcy:xTe,zcy:wTe,Zdot:MTe,zdot:LTe,zeetrf:PTe,ZeroWidthSpace:kTe,Zeta:UTe,zeta:FTe,zfr:BTe,Zfr:GTe,ZHcy:YTe,zhcy:qTe,zigrarr:$Te,zopf:HTe,Zopf:zTe,Zscr:VTe,zscr:WTe,zwj:KTe,zwnj:QTe};var 
mN=XTe,gg=/[!-#%-\*,-\/:;\?@\[-\]_\{\}\xA1\xA7\xAB\xB6\xB7\xBB\xBF\u037E\u0387\u055A-\u055F\u0589\u058A\u05BE\u05C0\u05C3\u05C6\u05F3\u05F4\u0609\u060A\u060C\u060D\u061B\u061E\u061F\u066A-\u066D\u06D4\u0700-\u070D\u07F7-\u07F9\u0830-\u083E\u085E\u0964\u0965\u0970\u09FD\u0A76\u0AF0\u0C84\u0DF4\u0E4F\u0E5A\u0E5B\u0F04-\u0F12\u0F14\u0F3A-\u0F3D\u0F85\u0FD0-\u0FD4\u0FD9\u0FDA\u104A-\u104F\u10FB\u1360-\u1368\u1400\u166D\u166E\u169B\u169C\u16EB-\u16ED\u1735\u1736\u17D4-\u17D6\u17D8-\u17DA\u1800-\u180A\u1944\u1945\u1A1E\u1A1F\u1AA0-\u1AA6\u1AA8-\u1AAD\u1B5A-\u1B60\u1BFC-\u1BFF\u1C3B-\u1C3F\u1C7E\u1C7F\u1CC0-\u1CC7\u1CD3\u2010-\u2027\u2030-\u2043\u2045-\u2051\u2053-\u205E\u207D\u207E\u208D\u208E\u2308-\u230B\u2329\u232A\u2768-\u2775\u27C5\u27C6\u27E6-\u27EF\u2983-\u2998\u29D8-\u29DB\u29FC\u29FD\u2CF9-\u2CFC\u2CFE\u2CFF\u2D70\u2E00-\u2E2E\u2E30-\u2E4E\u3001-\u3003\u3008-\u3011\u3014-\u301F\u3030\u303D\u30A0\u30FB\uA4FE\uA4FF\uA60D-\uA60F\uA673\uA67E\uA6F2-\uA6F7\uA874-\uA877\uA8CE\uA8CF\uA8F8-\uA8FA\uA8FC\uA92E\uA92F\uA95F\uA9C1-\uA9CD\uA9DE\uA9DF\uAA5C-\uAA5F\uAADE\uAADF\uAAF0\uAAF1\uABEB\uFD3E\uFD3F\uFE10-\uFE19\uFE30-\uFE52\uFE54-\uFE61\uFE63\uFE68\uFE6A\uFE6B\uFF01-\uFF03\uFF05-\uFF0A\uFF0C-\uFF0F\uFF1A\uFF1B\uFF1F\uFF20\uFF3B-\uFF3D\uFF3F\uFF5B\uFF5D\uFF5F-\uFF65]|\uD800[\uDD00-\uDD02\uDF9F\uDFD0]|\uD801\uDD6F|\uD802[\uDC57\uDD1F\uDD3F\uDE50-\uDE58\uDE7F\uDEF0-\uDEF6\uDF39-\uDF3F\uDF99-\uDF9C]|\uD803[\uDF55-\uDF59]|\uD804[\uDC47-\uDC4D\uDCBB\uDCBC\uDCBE-\uDCC1\uDD40-\uDD43\uDD74\uDD75\uDDC5-\uDDC8\uDDCD\uDDDB\uDDDD-\uDDDF\uDE38-\uDE3D\uDEA9]|\uD805[\uDC4B-\uDC4F\uDC5B\uDC5D\uDCC6\uDDC1-\uDDD7\uDE41-\uDE43\uDE60-\uDE6C\uDF3C-\uDF3E]|\uD806[\uDC3B\uDE3F-\uDE46\uDE9A-\uDE9C\uDE9E-\uDEA2]|\uD807[\uDC41-\uDC45\uDC70\uDC71\uDEF7\uDEF8]|\uD809[\uDC70-\uDC74]|\uD81A[\uDE6E\uDE6F\uDEF5\uDF37-\uDF3B\uDF44]|\uD81B[\uDE97-\uDE9A]|\uD82F\uDC9F|\uD836[\uDE87-\uDE8B]|\uD83A[\uDD5E\uDD5F]/,oa={},nb={};function ZTe(t){var e,n,i=nb[t];if(i)return i;for(i=nb[t]=[],e=0;e<128;e++)n=String.fromCharCode(e),/^[0-9a-z]$/i.test(n)?i.push(n):i.push("%"+("0"+e.toString(16).toUpperCase()).slice(-2));for(e=0;e"u"&&(n=!0),c=ZTe(e),i=0,o=t.length;i=55296&&s<=57343){if(s>=55296&&s<=56319&&i+1=56320&&l<=57343)){d+=encodeURIComponent(t[i]+t[i+1]),i++;continue}d+="%EF%BF%BD";continue}d+=encodeURIComponent(t[i])}return d}il.defaultChars=";/?:@&=+$,-_.!~*'()#";il.componentChars="-_.!~*'()";var JTe=il,rb={};function jTe(t){var e,n,i=rb[t];if(i)return i;for(i=rb[t]=[],e=0;e<128;e++)n=String.fromCharCode(e),i.push(n);for(e=0;e=55296&&p<=57343?g+="���":g+=String.fromCharCode(p),o+=6;continue}if((l&248)===240&&o+91114111?g+="����":(p-=65536,g+=String.fromCharCode(55296+(p>>10),56320+(p&1023))),o+=9;continue}g+="�"}return g})}al.defaultChars=";/?:@&=+$,#";al.componentChars="";var eve=al,tve=function(e){var n="";return n+=e.protocol||"",n+=e.slashes?"//":"",n+=e.auth?e.auth+"@":"",e.hostname&&e.hostname.indexOf(":")!==-1?n+="["+e.hostname+"]":n+=e.hostname||"",n+=e.port?":"+e.port:"",n+=e.pathname||"",n+=e.search||"",n+=e.hash||"",n};function Hs(){this.protocol=null,this.slashes=null,this.auth=null,this.port=null,this.hostname=null,this.hash=null,this.search=null,this.pathname=null}var nve=/^([a-z0-9.+-]+:)/i,rve=/:[0-9]*$/,ive=/^(\/\/?(?!\/)[^\?\s]*)(\?[^\s]*)?$/,ave=["<",">",'"',"`"," ","\r",` -`," 
"],ove=["{","}","|","\\","^","`"].concat(ave),sve=["'"].concat(ove),ib=["%","/","?",";","#"].concat(sve),ab=["/","?","#"],lve=255,ob=/^[+a-z0-9A-Z_-]{0,63}$/,cve=/^([+a-z0-9A-Z_-]{0,63})(.*)$/,sb={javascript:!0,"javascript:":!0},lb={http:!0,https:!0,ftp:!0,gopher:!0,file:!0,"http:":!0,"https:":!0,"ftp:":!0,"gopher:":!0,"file:":!0};function uve(t,e){if(t&&t instanceof Hs)return t;var n=new Hs;return n.parse(t,e),n}Hs.prototype.parse=function(t,e){var n,i,o,s,l,c=t;if(c=c.trim(),!e&&t.split("#").length===1){var d=ive.exec(c);if(d)return this.pathname=d[1],d[2]&&(this.search=d[2]),this}var _=nve.exec(c);if(_&&(_=_[0],o=_.toLowerCase(),this.protocol=_,c=c.substr(_.length)),(e||_||c.match(/^\/\/[^@\/]+@[^@\/]+/))&&(l=c.substr(0,2)==="//",l&&!(_&&sb[_])&&(c=c.substr(2),this.slashes=!0)),!sb[_]&&(l||_&&!lb[_])){var p=-1;for(n=0;n127?T+="x":T+=h[N];if(!T.match(ob)){var x=C.slice(0,n),P=C.slice(n+1),D=h.match(cve);D&&(x.push(D[1]),P.unshift(D[2])),P.length&&(c=P.join(".")+c),this.hostname=x.join(".");break}}}}this.hostname.length>lve&&(this.hostname=""),S&&(this.hostname=this.hostname.substr(1,this.hostname.length-2))}var k=c.indexOf("#");k!==-1&&(this.hash=c.substr(k),c=c.slice(0,k));var U=c.indexOf("?");return U!==-1&&(this.search=c.substr(U),c=c.slice(0,U)),c&&(this.pathname=c),lb[o]&&this.hostname&&!this.pathname&&(this.pathname=""),this};Hs.prototype.parseHost=function(t){var e=rve.exec(t);e&&(e=e[0],e!==":"&&(this.port=e.substr(1)),t=t.substr(0,t.length-e.length)),t&&(this.hostname=t)};var dve=uve;oa.encode=JTe;oa.decode=eve;oa.format=tve;oa.parse=dve;var Wr={},Au,cb;function gN(){return cb||(cb=1,Au=/[\0-\uD7FF\uE000-\uFFFF]|[\uD800-\uDBFF][\uDC00-\uDFFF]|[\uD800-\uDBFF](?![\uDC00-\uDFFF])|(?:[^\uD800-\uDBFF]|^)[\uDC00-\uDFFF]/),Au}var yu,ub;function EN(){return ub||(ub=1,yu=/[\0-\x1F\x7F-\x9F]/),yu}var Iu,db;function _ve(){return db||(db=1,Iu=/[\xAD\u0600-\u0605\u061C\u06DD\u070F\u08E2\u180E\u200B-\u200F\u202A-\u202E\u2060-\u2064\u2066-\u206F\uFEFF\uFFF9-\uFFFB]|\uD804[\uDCBD\uDCCD]|\uD82F[\uDCA0-\uDCA3]|\uD834[\uDD73-\uDD7A]|\uDB40[\uDC01\uDC20-\uDC7F]/),Iu}var Du,_b;function fN(){return _b||(_b=1,Du=/[ \xA0\u1680\u2000-\u200A\u2028\u2029\u202F\u205F\u3000]/),Du}var pb;function pve(){return pb||(pb=1,Wr.Any=gN(),Wr.Cc=EN(),Wr.Cf=_ve(),Wr.P=gg,Wr.Z=fN()),Wr}(function(t){function e(L){return Object.prototype.toString.call(L)}function n(L){return e(L)==="[object String]"}var i=Object.prototype.hasOwnProperty;function o(L,J){return i.call(L,J)}function s(L){var J=Array.prototype.slice.call(arguments,1);return J.forEach(function(re){if(re){if(typeof re!="object")throw new TypeError(re+"must be object");Object.keys(re).forEach(function(G){L[G]=re[G]})}}),L}function l(L,J,re){return[].concat(L.slice(0,J),re,L.slice(J+1))}function c(L){return!(L>=55296&&L<=57343||L>=64976&&L<=65007||(L&65535)===65535||(L&65535)===65534||L>=0&&L<=8||L===11||L>=14&&L<=31||L>=127&&L<=159||L>1114111)}function d(L){if(L>65535){L-=65536;var J=55296+(L>>10),re=56320+(L&1023);return String.fromCharCode(J,re)}return String.fromCharCode(L)}var _=/\\([!"#$%&'()*+,\-.\/:;<=>?@[\\\]^_`{|}~])/g,p=/&([a-z#][a-z0-9]{1,31});/gi,g=new RegExp(_.source+"|"+p.source,"gi"),E=/^#((?:x[a-f0-9]{1,8}|[0-9]{1,8}))/i,f=mN;function S(L,J){var re=0;return o(f,J)?f[J]:J.charCodeAt(0)===35&&E.test(J)&&(re=J[1].toLowerCase()==="x"?parseInt(J.slice(2),16):parseInt(J.slice(1),10),c(re))?d(re):L}function C(L){return L.indexOf("\\")<0?L:L.replace(_,"$1")}function h(L){return L.indexOf("\\")<0&&L.indexOf("&")<0?L:L.replace(g,function(J,re,G){return 
re||S(J,G)})}var T=/[&<>"]/,N=/[&<>"]/g,y={"&":"&","<":"<",">":">",'"':"""};function x(L){return y[L]}function P(L){return T.test(L)?L.replace(N,x):L}var D=/[.?*+^$[\]\\(){}|-]/g;function k(L){return L.replace(D,"\\$&")}function U(L){switch(L){case 9:case 32:return!0}return!1}function W(L){if(L>=8192&&L<=8202)return!0;switch(L){case 9:case 10:case 11:case 12:case 13:case 32:case 160:case 5760:case 8239:case 8287:case 12288:return!0}return!1}var z=gg;function K(L){return z.test(L)}function Ee(L){switch(L){case 33:case 34:case 35:case 36:case 37:case 38:case 39:case 40:case 41:case 42:case 43:case 44:case 45:case 46:case 47:case 58:case 59:case 60:case 61:case 62:case 63:case 64:case 91:case 92:case 93:case 94:case 95:case 96:case 123:case 124:case 125:case 126:return!0;default:return!1}}function oe(L){return L=L.trim().replace(/\s+/g," "),"ẞ".toLowerCase()==="Ṿ"&&(L=L.replace(/ẞ/g,"ß")),L.toLowerCase().toUpperCase()}t.lib={},t.lib.mdurl=oa,t.lib.ucmicro=pve(),t.assign=s,t.isString=n,t.has=o,t.unescapeMd=C,t.unescapeAll=h,t.isValidEntityCode=c,t.fromCodePoint=d,t.escapeHtml=P,t.arrayReplaceAt=l,t.isSpace=U,t.isWhiteSpace=W,t.isMdAsciiPunct=Ee,t.isPunctChar=K,t.escapeRE=k,t.normalizeReference=oe})(Je);var ol={},mve=function(e,n,i){var o,s,l,c,d=-1,_=e.posMax,p=e.pos;for(e.pos=n+1,o=1;e.pos<_;){if(l=e.src.charCodeAt(e.pos),l===93&&(o--,o===0)){s=!0;break}if(c=e.pos,e.md.inline.skipToken(e),l===91){if(c===e.pos-1)o++;else if(i)return e.pos=p,-1}}return s&&(d=e.pos),e.pos=p,d},mb=Je.unescapeAll,gve=function(e,n,i){var o,s,l=0,c=n,d={ok:!1,pos:0,lines:0,str:""};if(e.charCodeAt(n)===60){for(n++;n32))return d;if(o===41){if(s===0)break;s--}n++}return c===n||s!==0||(d.str=mb(e.slice(c,n)),d.lines=l,d.pos=n,d.ok=!0),d},Eve=Je.unescapeAll,fve=function(e,n,i){var o,s,l=0,c=n,d={ok:!1,pos:0,lines:0,str:""};if(n>=i||(s=e.charCodeAt(n),s!==34&&s!==39&&s!==40))return d;for(n++,s===40&&(s=41);n"+ci(t[e].content)+""};Zn.code_block=function(t,e,n,i,o){var s=t[e];return""+ci(t[e].content)+` -`};Zn.fence=function(t,e,n,i,o){var s=t[e],l=s.info?bve(s.info).trim():"",c="",d="",_,p,g,E,f;return l&&(g=l.split(/(\s+)/g),c=g[0],d=g.slice(2).join("")),n.highlight?_=n.highlight(s.content,c,d)||ci(s.content):_=ci(s.content),_.indexOf(""+_+` -`):"
    "+_+`
    -`};Zn.image=function(t,e,n,i,o){var s=t[e];return s.attrs[s.attrIndex("alt")][1]=o.renderInlineAsText(s.children,n,i),o.renderToken(t,e,n)};Zn.hardbreak=function(t,e,n){return n.xhtmlOut?`<br />
    -`:`<br>
    -`};Zn.softbreak=function(t,e,n){return n.breaks?n.xhtmlOut?`<br />
    -`:`<br>
    -`:` -`};Zn.text=function(t,e){return ci(t[e].content)};Zn.html_block=function(t,e){return t[e].content};Zn.html_inline=function(t,e){return t[e].content};function sa(){this.rules=Sve({},Zn)}sa.prototype.renderAttrs=function(e){var n,i,o;if(!e.attrs)return"";for(o="",n=0,i=e.attrs.length;n -`:">",s)};sa.prototype.renderInline=function(t,e,n){for(var i,o="",s=this.rules,l=0,c=t.length;l\s]/i.test(t)}function yve(t){return/^<\/a\s*>/i.test(t)}var Ive=function(e){var n,i,o,s,l,c,d,_,p,g,E,f,S,C,h,T,N=e.tokens,y;if(e.md.options.linkify){for(i=0,o=N.length;i=0;n--){if(c=s[n],c.type==="link_close"){for(n--;s[n].level!==c.level&&s[n].type!=="link_open";)n--;continue}if(c.type==="html_inline"&&(Ave(c.content)&&S>0&&S--,yve(c.content)&&S++),!(S>0)&&c.type==="text"&&e.md.linkify.test(c.content)){for(p=c.content,y=e.md.linkify.match(p),d=[],f=c.level,E=0,y.length>0&&y[0].index===0&&n>0&&s[n-1].type==="text_special"&&(y=y.slice(1)),_=0;_E&&(l=new e.Token("text","",0),l.content=p.slice(E,g),l.level=f,d.push(l)),l=new e.Token("link_open","a",1),l.attrs=[["href",h]],l.level=f++,l.markup="linkify",l.info="auto",d.push(l),l=new e.Token("text","",0),l.content=T,l.level=f,d.push(l),l=new e.Token("link_close","a",-1),l.level=--f,l.markup="linkify",l.info="auto",d.push(l),E=y[_].lastIndex);E=0;e--)n=t[e],n.type==="text"&&!i&&(n.content=n.content.replace(xve,Mve)),n.type==="link_open"&&n.info==="auto"&&i--,n.type==="link_close"&&n.info==="auto"&&i++}function Pve(t){var e,n,i=0;for(e=t.length-1;e>=0;e--)n=t[e],n.type==="text"&&!i&&SN.test(n.content)&&(n.content=n.content.replace(/\+-/g,"±").replace(/\.{2,}/g,"…").replace(/([?!])…/g,"$1..").replace(/([?!]){4,}/g,"$1$1$1").replace(/,{2,}/g,",").replace(/(^|[^-])---(?=[^-]|$)/mg,"$1—").replace(/(^|\s)--(?=\s|$)/mg,"$1–").replace(/(^|[^-\s])--(?=[^-\s]|$)/mg,"$1–")),n.type==="link_open"&&n.info==="auto"&&i--,n.type==="link_close"&&n.info==="auto"&&i++}var kve=function(e){var n;if(e.md.options.typographer)for(n=e.tokens.length-1;n>=0;n--)e.tokens[n].type==="inline"&&(Dve.test(e.tokens[n].content)&&Lve(e.tokens[n].children),SN.test(e.tokens[n].content)&&Pve(e.tokens[n].children))},gb=Je.isWhiteSpace,Eb=Je.isPunctChar,fb=Je.isMdAsciiPunct,Uve=/['"]/,Sb=/['"]/g,bb="’";function Is(t,e,n){return t.slice(0,e)+n+t.slice(e+1)}function Fve(t,e){var n,i,o,s,l,c,d,_,p,g,E,f,S,C,h,T,N,y,x,P,D;for(x=[],n=0;n=0&&!(x[N].level<=d);N--);if(x.length=N+1,i.type==="text"){o=i.content,l=0,c=o.length;e:for(;l=0)p=o.charCodeAt(s.index-1);else for(N=n-1;N>=0&&!(t[N].type==="softbreak"||t[N].type==="hardbreak");N--)if(t[N].content){p=t[N].content.charCodeAt(t[N].content.length-1);break}if(g=32,l=48&&p<=57&&(T=h=!1),h&&T&&(h=E,T=f),!h&&!T){y&&(i.content=Is(i.content,s.index,bb));continue}if(T){for(N=x.length-1;N>=0&&(_=x[N],!(x[N].level=0;n--)e.tokens[n].type!=="inline"||!Uve.test(e.tokens[n].content)||Fve(e.tokens[n].children,e)},Gve=function(e){var n,i,o,s,l,c,d=e.tokens;for(n=0,i=d.length;n=0&&(i=this.attrs[n][1]),i};la.prototype.attrJoin=function(e,n){var i=this.attrIndex(e);i<0?this.attrPush([e,n]):this.attrs[i][1]=this.attrs[i][1]+" "+n};var fg=la,Yve=fg;function bN(t,e,n){this.src=t,this.env=n,this.tokens=[],this.inlineMode=!1,this.md=e}bN.prototype.Token=Yve;var qve=bN,$ve=Eg,xu=[["normalize",Cve],["block",Rve],["inline",Nve],["linkify",Ive],["replacements",kve],["smartquotes",Bve],["text_join",Gve]];function Sg(){this.ruler=new $ve;for(var 
t=0;ti||(p=n+1,e.sCount[p]=4||(c=e.bMarks[p]+e.tShift[p],c>=e.eMarks[p])||(P=e.src.charCodeAt(c++),P!==124&&P!==45&&P!==58)||c>=e.eMarks[p]||(D=e.src.charCodeAt(c++),D!==124&&D!==45&&D!==58&&!wu(D))||P===45&&wu(D))return!1;for(;c=4||(g=hb(l),g.length&&g[0]===""&&g.shift(),g.length&&g[g.length-1]===""&&g.pop(),E=g.length,E===0||E!==S.length))return!1;if(o)return!0;for(N=e.parentType,e.parentType="table",x=e.md.block.ruler.getRules("blockquote"),f=e.push("table_open","table",1),f.map=h=[n,0],f=e.push("thead_open","thead",1),f.map=[n,n+1],f=e.push("tr_open","tr",1),f.map=[n,n+1],d=0;d=4)break;for(g=hb(l),g.length&&g[0]===""&&g.shift(),g.length&&g[g.length-1]===""&&g.pop(),p===n+2&&(f=e.push("tbody_open","tbody",1),f.map=T=[n+2,0]),f=e.push("tr_open","tr",1),f.map=[p,p+1],d=0;d=4){o++,s=o;continue}break}return e.line=s,l=e.push("code_block","code",0),l.content=e.getLines(n,s,4+e.blkIndent,!1)+` -`,l.map=[n,e.line],!0},Wve=function(e,n,i,o){var s,l,c,d,_,p,g,E=!1,f=e.bMarks[n]+e.tShift[n],S=e.eMarks[n];if(e.sCount[n]-e.blkIndent>=4||f+3>S||(s=e.src.charCodeAt(f),s!==126&&s!==96)||(_=f,f=e.skipChars(f,s),l=f-_,l<3)||(g=e.src.slice(_,f),c=e.src.slice(f,S),s===96&&c.indexOf(String.fromCharCode(s))>=0))return!1;if(o)return!0;for(d=n;d++,!(d>=i||(f=_=e.bMarks[d]+e.tShift[d],S=e.eMarks[d],f=4)&&(f=e.skipChars(f,s),!(f-_=4||e.src.charCodeAt(z++)!==62)return!1;if(o)return!0;for(d=f=e.sCount[n]+1,e.src.charCodeAt(z)===32?(z++,d++,f++,s=!1,x=!0):e.src.charCodeAt(z)===9?(x=!0,(e.bsCount[n]+f)%4===3?(z++,d++,f++,s=!1):s=!0):x=!1,S=[e.bMarks[n]],e.bMarks[n]=z;z=K,N=[e.sCount[n]],e.sCount[n]=f-d,y=[e.tShift[n]],e.tShift[n]=z-e.bMarks[n],D=e.md.block.ruler.getRules("blockquote"),T=e.parentType,e.parentType="blockquote",E=n+1;E=K));E++){if(e.src.charCodeAt(z++)===62&&!U){for(d=f=e.sCount[E]+1,e.src.charCodeAt(z)===32?(z++,d++,f++,s=!1,x=!0):e.src.charCodeAt(z)===9?(x=!0,(e.bsCount[E]+f)%4===3?(z++,d++,f++,s=!1):s=!0):x=!1,S.push(e.bMarks[E]),e.bMarks[E]=z;z=K,C.push(e.bsCount[E]),e.bsCount[E]=e.sCount[E]+1+(x?1:0),N.push(e.sCount[E]),e.sCount[E]=f-d,y.push(e.tShift[E]),e.tShift[E]=z-e.bMarks[E];continue}if(p)break;for(P=!1,c=0,_=D.length;c<_;c++)if(D[c](e,E,i,!0)){P=!0;break}if(P){e.lineMax=E,e.blkIndent!==0&&(S.push(e.bMarks[E]),C.push(e.bsCount[E]),y.push(e.tShift[E]),N.push(e.sCount[E]),e.sCount[E]-=e.blkIndent);break}S.push(e.bMarks[E]),C.push(e.bsCount[E]),y.push(e.tShift[E]),N.push(e.sCount[E]),e.sCount[E]=-1}for(h=e.blkIndent,e.blkIndent=0,k=e.push("blockquote_open","blockquote",1),k.markup=">",k.map=g=[n,0],e.md.block.tokenize(e,n,E),k=e.push("blockquote_close","blockquote",-1),k.markup=">",e.lineMax=W,e.parentType=T,g[1]=e.line,c=0;c=4||(s=e.src.charCodeAt(_++),s!==42&&s!==45&&s!==95))return!1;for(l=1;_=s||(n=t.src.charCodeAt(o++),n<48||n>57))return-1;for(;;){if(o>=s)return-1;if(n=t.src.charCodeAt(o++),n>=48&&n<=57){if(o-i>=10)return-1;continue}if(n===41||n===46)break;return-1}return o=4||e.listIndent>=0&&e.sCount[n]-e.listIndent>=4&&e.sCount[n]=e.blkIndent&&(G=!0),(K=Cb(e,n))>=0){if(g=!0,oe=e.bMarks[n]+e.tShift[n],T=Number(e.src.slice(oe,K-1)),G&&T!==1)return!1}else if((K=vb(e,n))>=0)g=!1;else 
return!1;if(G&&e.skipSpaces(K)>=e.eMarks[n])return!1;if(h=e.src.charCodeAt(K-1),o)return!0;for(C=e.tokens.length,g?(re=e.push("ordered_list_open","ol",1),T!==1&&(re.attrs=[["start",T]])):re=e.push("bullet_list_open","ul",1),re.map=S=[n,0],re.markup=String.fromCharCode(h),y=n,Ee=!1,J=e.md.block.ruler.getRules("list"),D=e.parentType,e.parentType="list";y=N?_=1:_=x-p,_>4&&(_=1),d=p+_,re=e.push("list_item_open","li",1),re.markup=String.fromCharCode(h),re.map=E=[n,0],g&&(re.info=e.src.slice(oe,K-1)),W=e.tight,U=e.tShift[n],k=e.sCount[n],P=e.listIndent,e.listIndent=e.blkIndent,e.blkIndent=d,e.tight=!0,e.tShift[n]=l-e.bMarks[n],e.sCount[n]=x,l>=N&&e.isEmpty(n+1)?e.line=Math.min(e.line+2,i):e.md.block.tokenize(e,n,i,!0),(!e.tight||Ee)&&(X=!1),Ee=e.line-n>1&&e.isEmpty(e.line-1),e.blkIndent=e.listIndent,e.listIndent=P,e.tShift[n]=U,e.sCount[n]=k,e.tight=W,re=e.push("list_item_close","li",-1),re.markup=String.fromCharCode(h),y=n=e.line,E[1]=y,l=e.bMarks[n],y>=i||e.sCount[y]=4)break;for(L=!1,c=0,f=J.length;c=4||e.src.charCodeAt(D)!==91)return!1;for(;++D3)&&!(e.sCount[U]<0)){for(N=!1,p=0,g=y.length;p"u"&&(e.env.references={}),typeof e.env.references[E]>"u"&&(e.env.references[E]={title:x,href:_}),e.parentType=S,e.line=n+P+1),!0)},tCe=["address","article","aside","base","basefont","blockquote","body","caption","center","col","colgroup","dd","details","dialog","dir","div","dl","dt","fieldset","figcaption","figure","footer","form","frame","frameset","h1","h2","h3","h4","h5","h6","head","header","hr","html","iframe","legend","li","link","main","menu","menuitem","nav","noframes","ol","optgroup","option","p","param","section","source","summary","table","tbody","td","tfoot","th","thead","title","tr","track","ul"],sl={},nCe="[a-zA-Z_:][a-zA-Z0-9:._-]*",rCe="[^\"'=<>`\\x00-\\x20]+",iCe="'[^']*'",aCe='"[^"]*"',oCe="(?:"+rCe+"|"+iCe+"|"+aCe+")",sCe="(?:\\s+"+nCe+"(?:\\s*=\\s*"+oCe+")?)",TN="<[A-Za-z][A-Za-z0-9\\-]*"+sCe+"*\\s*\\/?>",vN="<\\/[A-Za-z][A-Za-z0-9\\-]*\\s*>",lCe="|",cCe="<[?][\\s\\S]*?[?]>",uCe="]*>",dCe="",_Ce=new RegExp("^(?:"+TN+"|"+vN+"|"+lCe+"|"+cCe+"|"+uCe+"|"+dCe+")"),pCe=new RegExp("^(?:"+TN+"|"+vN+")");sl.HTML_TAG_RE=_Ce;sl.HTML_OPEN_CLOSE_TAG_RE=pCe;var mCe=tCe,gCe=sl.HTML_OPEN_CLOSE_TAG_RE,Gi=[[/^<(script|pre|style|textarea)(?=(\s|>|$))/i,/<\/(script|pre|style|textarea)>/i,!0],[/^/,!0],[/^<\?/,/\?>/,!0],[/^/,!0],[/^/,!0],[new RegExp("^|$))","i"),/^$/,!0],[new RegExp(gCe.source+"\\s*$"),/^$/,!1]],ECe=function(e,n,i,o){var s,l,c,d,_=e.bMarks[n]+e.tShift[n],p=e.eMarks[n];if(e.sCount[n]-e.blkIndent>=4||!e.md.options.html||e.src.charCodeAt(_)!==60)return!1;for(d=e.src.slice(_,p),s=0;s=4||(s=e.src.charCodeAt(_),s!==35||_>=p))return!1;for(l=1,s=e.src.charCodeAt(++_);s===35&&_6||__&&Rb(e.src.charCodeAt(c-1))&&(p=c),e.line=n+1,d=e.push("heading_open","h"+String(l),1),d.markup="########".slice(0,l),d.map=[n,e.line],d=e.push("inline","",0),d.content=e.src.slice(_,p).trim(),d.map=[n,e.line],d.children=[],d=e.push("heading_close","h"+String(l),-1),d.markup="########".slice(0,l)),!0)},SCe=function(e,n,i){var o,s,l,c,d,_,p,g,E,f=n+1,S,C=e.md.block.ruler.getRules("paragraph");if(e.sCount[n]-e.blkIndent>=4)return!1;for(S=e.parentType,e.parentType="paragraph";f3)){if(e.sCount[f]>=e.blkIndent&&(_=e.bMarks[f]+e.tShift[f],p=e.eMarks[f],_=p)))){g=E===61?1:2;break}if(!(e.sCount[f]<0)){for(s=!1,l=0,c=C.length;l3)&&!(e.sCount[_]<0)){for(o=!1,s=0,l=p.length;s0&&this.level++,this.tokens.push(i),i};Jn.prototype.isEmpty=function(e){return 
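/*
  The chunk above appears to cover the remaining block rules (list,
  reference definitions, html_block, ATX and setext headings, paragraph)
  plus the shared block-parser state object. A small sketch of inspecting
  the token stream those rules produce, assuming the documented
  md.parse(src, env) signature:

    const tokens = md.parse("# Title\n\nSome *text*.", {});
    // Expected shape: heading_open(h1), inline("Title"), heading_close,
    // paragraph_open, inline("Some *text*."), paragraph_close.
    for (const t of tokens) {
      console.log(t.type, t.tag, JSON.stringify(t.content));
    }
*/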
this.bMarks[e]+this.tShift[e]>=this.eMarks[e]};Jn.prototype.skipEmptyLines=function(e){for(var n=this.lineMax;en;)if(!ll(this.src.charCodeAt(--e)))return e+1;return e};Jn.prototype.skipChars=function(e,n){for(var i=this.src.length;ei;)if(n!==this.src.charCodeAt(--e))return e+1;return e};Jn.prototype.getLines=function(e,n,i,o){var s,l,c,d,_,p,g,E=e;if(e>=n)return"";for(p=new Array(n-e),s=0;Ei?p[s]=new Array(l-i+1).join(" ")+this.src.slice(d,_):p[s]=this.src.slice(d,_)}return p.join("")};Jn.prototype.Token=CN;var hCe=Jn,TCe=Eg,xs=[["table",zve,["paragraph","reference"]],["code",Vve],["fence",Wve,["paragraph","reference","blockquote","list"]],["blockquote",Kve,["paragraph","reference","blockquote","list"]],["hr",Xve,["paragraph","reference","blockquote","list"]],["list",Jve,["paragraph","reference","blockquote"]],["reference",eCe],["html_block",ECe,["paragraph","reference","blockquote"]],["heading",fCe,["paragraph","reference","blockquote"]],["lheading",SCe],["paragraph",bCe]];function cl(){this.ruler=new TCe;for(var t=0;t=n||t.sCount[c]=_){t.line=n;break}for(o=0;o0||(i=e.pos,o=e.posMax,i+3>o)||e.src.charCodeAt(i)!==58||e.src.charCodeAt(i+1)!==47||e.src.charCodeAt(i+2)!==47||(s=e.pending.match(NCe),!s)||(l=s[1],c=e.md.linkify.matchAtStart(e.src.slice(i-l.length)),!c)||(d=c.url,d=d.replace(/\*+$/,""),_=e.md.normalizeLink(d),!e.md.validateLink(_))?!1:(n||(e.pending=e.pending.slice(0,-l.length),p=e.push("link_open","a",1),p.attrs=[["href",_]],p.markup="linkify",p.info="auto",p=e.push("text","",0),p.content=e.md.normalizeLinkText(d),p=e.push("link_close","a",-1),p.markup="linkify",p.info="auto"),e.pos+=d.length-l.length,!0)},ACe=Je.isSpace,yCe=function(e,n){var i,o,s,l=e.pos;if(e.src.charCodeAt(l)!==10)return!1;if(i=e.pending.length-1,o=e.posMax,!n)if(i>=0&&e.pending.charCodeAt(i)===32)if(i>=1&&e.pending.charCodeAt(i-1)===32){for(s=i-1;s>=1&&e.pending.charCodeAt(s-1)===32;)s--;e.pending=e.pending.slice(0,s),e.push("hardbreak","br",0)}else e.pending=e.pending.slice(0,-1),e.push("softbreak","br",0);else e.push("softbreak","br",0);for(l++;l?@[]^_`{|}~-".split("").forEach(function(t){bg[t.charCodeAt(0)]=1});var DCe=function(e,n){var i,o,s,l,c,d=e.pos,_=e.posMax;if(e.src.charCodeAt(d)!==92||(d++,d>=_))return!1;if(i=e.src.charCodeAt(d),i===10){for(n||e.push("hardbreak","br",0),d++;d<_&&(i=e.src.charCodeAt(d),!!ICe(i));)d++;return e.pos=d,!0}return l=e.src[d],i>=55296&&i<=56319&&d+1<_&&(o=e.src.charCodeAt(d+1),o>=56320&&o<=57343&&(l+=e.src[d+1],d++)),s="\\"+l,n||(c=e.push("text_special","",0),i<256&&bg[i]!==0?c.content=l:c.content=s,c.markup=s,c.info="escape"),e.pos=d+1,!0},xCe=function(e,n){var i,o,s,l,c,d,_,p,g=e.pos,E=e.src.charCodeAt(g);if(E!==96)return!1;for(i=g,g++,o=e.posMax;g=0;n--)i=e[n],!(i.marker!==95&&i.marker!==42)&&i.end!==-1&&(o=e[i.end],c=n>0&&e[n-1].end===i.end+1&&e[n-1].marker===i.marker&&e[n-1].token===i.token-1&&e[i.end+1].token===o.token+1,l=String.fromCharCode(i.marker),s=t.tokens[i.token],s.type=c?"strong_open":"em_open",s.tag=c?"strong":"em",s.nesting=1,s.markup=c?l+l:l,s.content="",s=t.tokens[o.token],s.type=c?"strong_close":"em_close",s.tag=c?"strong":"em",s.nesting=-1,s.markup=c?l+l:l,s.content="",c&&(t.tokens[e[n-1].token].content="",t.tokens[e[i.end+1].token].content="",n--))}dl.postProcess=function(e){var 
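/*
  Above, the block rules are registered in a table whose third column
  ("paragraph", "reference", "blockquote", "list") names the container rules
  each one may interrupt; after it, the inline rules (text, linkify,
  newline, escape, backticks, emphasis) begin. A sketch of toggling rules by
  name, assuming the documented enable/disable API on a MarkdownIt instance
  `md`:

    md.disable(["table"]);           // drop GFM tables from the block chain
    md.enable(["autolink"]);         // (re)enable an inline rule by name
    md.disable("smartquotes", true); // second arg ignores unknown rule names
*/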
n,i=e.tokens_meta,o=e.tokens_meta.length;for(Ab(e,e.delimiters),n=0;n=C)return!1;if(h=d,_=e.md.helpers.parseLinkDestination(e.src,d,e.posMax),_.ok){for(E=e.md.normalizeLink(_.str),e.md.validateLink(E)?d=_.pos:E="",h=d;d=C||e.src.charCodeAt(d)!==41)&&(T=!0),d++}if(T){if(typeof e.env.references>"u")return!1;if(d=0?s=e.src.slice(h,d++):d=l+1):d=l+1,s||(s=e.src.slice(c,l)),p=e.env.references[wCe(s)],!p)return e.pos=S,!1;E=p.href,f=p.title}return n||(e.pos=c,e.posMax=l,g=e.push("link_open","a",1),g.attrs=i=[["href",E]],f&&i.push(["title",f]),e.linkLevel++,e.md.inline.tokenize(e),e.linkLevel--,g=e.push("link_close","a",-1)),e.pos=d,e.posMax=C,!0},LCe=Je.normalizeReference,Pu=Je.isSpace,PCe=function(e,n){var i,o,s,l,c,d,_,p,g,E,f,S,C,h="",T=e.pos,N=e.posMax;if(e.src.charCodeAt(e.pos)!==33||e.src.charCodeAt(e.pos+1)!==91||(d=e.pos+2,c=e.md.helpers.parseLinkLabel(e,e.pos+1,!1),c<0))return!1;if(_=c+1,_=N)return!1;for(C=_,g=e.md.helpers.parseLinkDestination(e.src,_,e.posMax),g.ok&&(h=e.md.normalizeLink(g.str),e.md.validateLink(h)?_=g.pos:h=""),C=_;_=N||e.src.charCodeAt(_)!==41)return e.pos=T,!1;_++}else{if(typeof e.env.references>"u")return!1;if(_=0?l=e.src.slice(C,_++):_=c+1):_=c+1,l||(l=e.src.slice(d,c)),p=e.env.references[LCe(l)],!p)return e.pos=T,!1;h=p.href,E=p.title}return n||(s=e.src.slice(d,c),e.md.inline.parse(s,e.md,e.env,S=[]),f=e.push("image","img",0),f.attrs=i=[["src",h],["alt",""]],f.children=S,f.content=s,E&&i.push(["title",E])),e.pos=_,e.posMax=N,!0},kCe=/^([a-zA-Z0-9.!#$%&'*+\/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*)$/,UCe=/^([a-zA-Z][a-zA-Z0-9+.\-]{1,31}):([^<>\x00-\x20]*)$/,FCe=function(e,n){var i,o,s,l,c,d,_=e.pos;if(e.src.charCodeAt(_)!==60)return!1;for(c=e.pos,d=e.posMax;;){if(++_>=d||(l=e.src.charCodeAt(_),l===60))return!1;if(l===62)break}return i=e.src.slice(c+1,_),UCe.test(i)?(o=e.md.normalizeLink(i),e.md.validateLink(o)?(n||(s=e.push("link_open","a",1),s.attrs=[["href",o]],s.markup="autolink",s.info="auto",s=e.push("text","",0),s.content=e.md.normalizeLinkText(i),s=e.push("link_close","a",-1),s.markup="autolink",s.info="auto"),e.pos+=i.length+2,!0):!1):kCe.test(i)?(o=e.md.normalizeLink("mailto:"+i),e.md.validateLink(o)?(n||(s=e.push("link_open","a",1),s.attrs=[["href",o]],s.markup="autolink",s.info="auto",s=e.push("text","",0),s.content=e.md.normalizeLinkText(i),s=e.push("link_close","a",-1),s.markup="autolink",s.info="auto"),e.pos+=i.length+2,!0):!1):!1},BCe=sl.HTML_TAG_RE;function GCe(t){return/^\s]/i.test(t)}function YCe(t){return/^<\/a\s*>/i.test(t)}function qCe(t){var e=t|32;return e>=97&&e<=122}var $Ce=function(e,n){var i,o,s,l,c=e.pos;return!e.md.options.html||(s=e.posMax,e.src.charCodeAt(c)!==60||c+2>=s)||(i=e.src.charCodeAt(c+1),i!==33&&i!==63&&i!==47&&!qCe(i))||(o=e.src.slice(c).match(BCe),!o)?!1:(n||(l=e.push("html_inline","",0),l.content=e.src.slice(c,c+o[0].length),GCe(l.content)&&e.linkLevel++,YCe(l.content)&&e.linkLevel--),e.pos+=o[0].length,!0)},yb=mN,HCe=Je.has,zCe=Je.isValidEntityCode,Ib=Je.fromCodePoint,VCe=/^&#((?:x[a-f0-9]{1,6}|[0-9]{1,7}));/i,WCe=/^&([a-z][a-z0-9]{1,31});/i,KCe=function(e,n){var i,o,s,l,c=e.pos,d=e.posMax;if(e.src.charCodeAt(c)!==38||c+1>=d)return!1;if(i=e.src.charCodeAt(c+1),i===35){if(s=e.src.slice(c).match(VCe),s)return n||(o=s[1][0].toLowerCase()==="x"?parseInt(s[1].slice(1),16):parseInt(s[1],10),l=e.push("text_special","",0),l.content=zCe(o)?Ib(o):Ib(65533),l.markup=s[0],l.info="entity"),e.pos+=s[0].length,!0}else 
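/*
  This stretch appears to implement the inline link, image, autolink,
  html_inline, and entity rules; the link/image parsers resolve labels
  through e.env.references, which the block-level reference rule fills in.
  A sketch of that round trip, assuming the documented render(src, env)
  signature; example.com is a placeholder URL:

    const env = {};
    const html = md.render(
      "![logo](img.png \"title\")\n\nSee [docs][ref].\n\n[ref]: https://example.com\n",
      env
    );
    // env.references now maps the normalized label to { href, title },
    // the same lookup table consulted above.
    console.log(env.references, html);
*/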
if(s=e.src.slice(c).match(WCe),s&&HCe(yb,s[1]))return n||(l=e.push("text_special","",0),l.content=yb[s[1]],l.markup=s[0],l.info="entity"),e.pos+=s[0].length,!0;return!1};function Db(t,e){var n,i,o,s,l,c,d,_,p={},g=e.length;if(g){var E=0,f=-2,S=[];for(n=0;nl;i-=S[i]+1)if(s=e[i],s.marker===o.marker&&s.open&&s.end<0&&(d=!1,(s.close||o.open)&&(s.length+o.length)%3===0&&(s.length%3!==0||o.length%3!==0)&&(d=!0),!d)){_=i>0&&!e[i-1].open?S[i-1]+1:0,S[n]=n-i+_,S[i]=_,o.open=!1,s.end=n,s.close=!1,c=-1,f=-2;break}c!==-1&&(p[o.marker][(o.open?3:0)+(o.length||0)%3]=c)}}}var QCe=function(e){var n,i=e.tokens_meta,o=e.tokens_meta.length;for(Db(e,e.delimiters),n=0;n0&&o++,s[n].type==="text"&&n+10&&(this.level++,this._prev_delimiters.push(this.delimiters),this.delimiters=[],o={delimiters:this.delimiters}),this.pendingLevel=this.level,this.tokens.push(i),this.tokens_meta.push(o),i};oo.prototype.scanDelims=function(t,e){var n=t,i,o,s,l,c,d,_,p,g,E=!0,f=!0,S=this.posMax,C=this.src.charCodeAt(t);for(i=t>0?this.src.charCodeAt(t-1):32;n=s)break;continue}t.pending+=t.src[t.pos++]}t.pending&&t.pushPending()};so.prototype.parse=function(t,e,n,i){var o,s,l,c=new this.State(t,e,n,i);for(this.tokenize(c),s=this.ruler2.getRules(""),l=s.length,o=0;o|$))",e.tpl_email_fuzzy="(^|"+n+'|"|\\(|'+e.src_ZCc+")("+e.src_email_name+"@"+e.tpl_host_fuzzy_strict+")",e.tpl_link_fuzzy="(^|(?![.:/\\-_@])(?:[$+<=>^`||]|"+e.src_ZPCc+"))((?![$+<=>^`||])"+e.tpl_host_port_fuzzy_strict+e.src_path+")",e.tpl_link_no_ip_fuzzy="(^|(?![.:/\\-_@])(?:[$+<=>^`||]|"+e.src_ZPCc+"))((?![$+<=>^`||])"+e.tpl_host_port_no_ip_fuzzy_strict+e.src_path+")",e}),Fu}function Bm(t){var e=Array.prototype.slice.call(arguments,1);return e.forEach(function(n){n&&Object.keys(n).forEach(function(i){t[i]=n[i]})}),t}function _l(t){return Object.prototype.toString.call(t)}function eRe(t){return _l(t)==="[object String]"}function tRe(t){return _l(t)==="[object Object]"}function nRe(t){return _l(t)==="[object RegExp]"}function kb(t){return _l(t)==="[object Function]"}function rRe(t){return t.replace(/[.?*+^$[\]\\(){}|-]/g,"\\$&")}var RN={fuzzyLink:!0,fuzzyEmail:!0,fuzzyIP:!1};function iRe(t){return Object.keys(t||{}).reduce(function(e,n){return e||RN.hasOwnProperty(n)},!1)}var aRe={"http:":{validate:function(t,e,n){var i=t.slice(e);return n.re.http||(n.re.http=new RegExp("^\\/\\/"+n.re.src_auth+n.re.src_host_port_strict+n.re.src_path,"i")),n.re.http.test(i)?i.match(n.re.http)[0].length:0}},"https:":"http:","ftp:":"http:","//":{validate:function(t,e,n){var i=t.slice(e);return n.re.no_http||(n.re.no_http=new RegExp("^"+n.re.src_auth+"(?:localhost|(?:(?:"+n.re.src_domain+")\\.)+"+n.re.src_domain_root+")"+n.re.src_port+n.re.src_host_terminator+n.re.src_path,"i")),n.re.no_http.test(i)?e>=3&&t[e-3]===":"||e>=3&&t[e-3]==="/"?0:i.match(n.re.no_http)[0].length:0}},"mailto:":{validate:function(t,e,n){var i=t.slice(e);return n.re.mailto||(n.re.mailto=new RegExp("^"+n.re.src_email_name+"@"+n.re.src_host_strict,"i")),n.re.mailto.test(i)?i.match(n.re.mailto)[0].length:0}}},oRe="a[cdefgilmnoqrstuwxz]|b[abdefghijmnorstvwyz]|c[acdfghiklmnoruvwxyz]|d[ejkmoz]|e[cegrstu]|f[ijkmor]|g[abdefghilmnpqrstuwy]|h[kmnrtu]|i[delmnoqrst]|j[emop]|k[eghimnprwyz]|l[abcikrstuvy]|m[acdeghklmnopqrstuvwxyz]|n[acefgilopruz]|om|p[aefghklmnrstwy]|qa|r[eosuw]|s[abcdeghijklmnortuvxyz]|t[cdfghjklmnortvwz]|u[agksyz]|v[aceginu]|w[fs]|y[et]|z[amw]",sRe="biz|com|edu|gov|net|org|pro|web|xxx|aero|asia|coop|info|museum|name|shop|рф".split("|");function lRe(t){t.__index__=-1,t.__text_cache__=""}function cRe(t){return 
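/*
  Above: the emphasis balancing pass (pairing the *_ delimiter runs
  collected during tokenization), the inline parser state, and the start of
  the bundled linkify-it regex templates. A one-liner showing the two-pass
  emphasis resolution from the outside, assuming the documented renderInline
  API:

    md.renderInline("**bold** and _em_");
    // => "<strong>bold</strong> and <em>em</em>", with no surrounding <p>
*/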
function(e,n){var i=e.slice(n);return t.test(i)?i.match(t)[0].length:0}}function Ub(){return function(t,e){e.normalize(t)}}function zs(t){var e=t.re=jCe()(t.__opts__),n=t.__tlds__.slice();t.onCompile(),t.__tlds_replaced__||n.push(oRe),n.push(e.src_xn),e.src_tlds=n.join("|");function i(c){return c.replace("%TLDS%",e.src_tlds)}e.email_fuzzy=RegExp(i(e.tpl_email_fuzzy),"i"),e.link_fuzzy=RegExp(i(e.tpl_link_fuzzy),"i"),e.link_no_ip_fuzzy=RegExp(i(e.tpl_link_no_ip_fuzzy),"i"),e.host_fuzzy_test=RegExp(i(e.tpl_host_fuzzy_test),"i");var o=[];t.__compiled__={};function s(c,d){throw new Error('(LinkifyIt) Invalid schema "'+c+'": '+d)}Object.keys(t.__schemas__).forEach(function(c){var d=t.__schemas__[c];if(d!==null){var _={validate:null,link:null};if(t.__compiled__[c]=_,tRe(d)){nRe(d.validate)?_.validate=cRe(d.validate):kb(d.validate)?_.validate=d.validate:s(c,d),kb(d.normalize)?_.normalize=d.normalize:d.normalize?s(c,d):_.normalize=Ub();return}if(eRe(d)){o.push(c);return}s(c,d)}}),o.forEach(function(c){t.__compiled__[t.__schemas__[c]]&&(t.__compiled__[c].validate=t.__compiled__[t.__schemas__[c]].validate,t.__compiled__[c].normalize=t.__compiled__[t.__schemas__[c]].normalize)}),t.__compiled__[""]={validate:null,normalize:Ub()};var l=Object.keys(t.__compiled__).filter(function(c){return c.length>0&&t.__compiled__[c]}).map(rRe).join("|");t.re.schema_test=RegExp("(^|(?!_)(?:[><|]|"+e.src_ZPCc+"))("+l+")","i"),t.re.schema_search=RegExp("(^|(?!_)(?:[><|]|"+e.src_ZPCc+"))("+l+")","ig"),t.re.schema_at_start=RegExp("^"+t.re.schema_search.source,"i"),t.re.pretest=RegExp("("+t.re.schema_test.source+")|("+t.re.host_fuzzy_test.source+")|@","i"),lRe(t)}function uRe(t,e){var n=t.__index__,i=t.__last_index__,o=t.__text_cache__.slice(n,i);this.schema=t.__schema__.toLowerCase(),this.index=n+e,this.lastIndex=i+e,this.raw=o,this.text=o,this.url=o}function Gm(t,e){var n=new uRe(t,e);return t.__compiled__[n.schema].normalize(n,t),n}function En(t,e){if(!(this instanceof En))return new En(t,e);e||iRe(t)&&(e=t,t={}),this.__opts__=Bm({},RN,e),this.__index__=-1,this.__last_index__=-1,this.__schema__="",this.__text_cache__="",this.__schemas__=Bm({},aRe,t),this.__compiled__={},this.__tlds__=sRe,this.__tlds_replaced__=!1,this.re={},zs(this)}En.prototype.add=function(e,n){return this.__schemas__[e]=n,zs(this),this};En.prototype.set=function(e){return this.__opts__=Bm(this.__opts__,e),this};En.prototype.test=function(e){if(this.__text_cache__=e,this.__index__=-1,!e.length)return!1;var n,i,o,s,l,c,d,_,p;if(this.re.schema_test.test(e)){for(d=this.re.schema_search,d.lastIndex=0;(n=d.exec(e))!==null;)if(s=this.testSchemaAt(e,n[2],d.lastIndex),s){this.__schema__=n[2],this.__index__=n.index+n[1].length,this.__last_index__=n.index+n[0].length+s;break}}return this.__opts__.fuzzyLink&&this.__compiled__["http:"]&&(_=e.search(this.re.host_fuzzy_test),_>=0&&(this.__index__<0||_=0&&(o=e.match(this.re.email_fuzzy))!==null&&(l=o.index+o[1].length,c=o.index+o[0].length,(this.__index__<0||lthis.__last_index__)&&(this.__schema__="mailto:",this.__index__=l,this.__last_index__=c))),this.__index__>=0};En.prototype.pretest=function(e){return this.re.pretest.test(e)};En.prototype.testSchemaAt=function(e,n,i){return this.__compiled__[n.toLowerCase()]?this.__compiled__[n.toLowerCase()].validate(e,i,this):0};En.prototype.match=function(e){var n=0,i=[];this.__index__>=0&&this.__text_cache__===e&&(i.push(Gm(this,n)),n=this.__last_index__);for(var o=n?e.slice(n):e;this.test(o);)i.push(Gm(this,n)),o=o.slice(this.__last_index__),n+=this.__last_index__;return 
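/*
  This region appears to be linkify-it's matcher (schema compilation, test,
  match, fuzzy link and fuzzy e-mail detection), the engine behind
  markdown-it's `linkify: true` option. A standalone sketch, assuming the
  documented linkify-it package API:

    import LinkifyIt from "linkify-it";

    const linkify = new LinkifyIt();
    linkify.tlds(["dev"], true);          // extend the TLD list, keep defaults
    linkify.test("see example.com");      // => true (fuzzy link)
    const m = linkify.match("see example.com");
    console.log(m[0].url);                // => "http://example.com"
*/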
i.length?i:null};En.prototype.matchAtStart=function(e){if(this.__text_cache__=e,this.__index__=-1,!e.length)return null;var n=this.re.schema_at_start.exec(e);if(!n)return null;var i=this.testSchemaAt(e,n[2],n[0].length);return i?(this.__schema__=n[2],this.__index__=n.index+n[1].length,this.__last_index__=n.index+n[0].length+i,Gm(this,0)):null};En.prototype.tlds=function(e,n){return e=Array.isArray(e)?e:[e],n?(this.__tlds__=this.__tlds__.concat(e).sort().filter(function(i,o,s){return i!==s[o-1]}).reverse(),zs(this),this):(this.__tlds__=e.slice(),this.__tlds_replaced__=!0,zs(this),this)};En.prototype.normalize=function(e){e.schema||(e.url="http://"+e.url),e.schema==="mailto:"&&!/^mailto:/i.test(e.url)&&(e.url="mailto:"+e.url)};En.prototype.onCompile=function(){};var dRe=En;const Wi=2147483647,zn=36,Tg=1,eo=26,_Re=38,pRe=700,NN=72,ON=128,AN="-",mRe=/^xn--/,gRe=/[^\0-\x7F]/,ERe=/[\x2E\u3002\uFF0E\uFF61]/g,fRe={overflow:"Overflow: input needs wider integers to process","not-basic":"Illegal input >= 0x80 (not a basic code point)","invalid-input":"Invalid input"},Bu=zn-Tg,Vn=Math.floor,Gu=String.fromCharCode;function yr(t){throw new RangeError(fRe[t])}function SRe(t,e){const n=[];let i=t.length;for(;i--;)n[i]=e(t[i]);return n}function yN(t,e){const n=t.split("@");let i="";n.length>1&&(i=n[0]+"@",t=n[1]),t=t.replace(ERe,".");const o=t.split("."),s=SRe(o,e).join(".");return i+s}function vg(t){const e=[];let n=0;const i=t.length;for(;n=55296&&o<=56319&&nString.fromCodePoint(...t),bRe=function(t){return t>=48&&t<58?26+(t-48):t>=65&&t<91?t-65:t>=97&&t<123?t-97:zn},Fb=function(t,e){return t+22+75*(t<26)-((e!=0)<<5)},DN=function(t,e,n){let i=0;for(t=n?Vn(t/pRe):t>>1,t+=Vn(t/e);t>Bu*eo>>1;i+=zn)t=Vn(t/Bu);return Vn(i+(Bu+1)*t/(t+_Re))},Cg=function(t){const e=[],n=t.length;let i=0,o=ON,s=NN,l=t.lastIndexOf(AN);l<0&&(l=0);for(let c=0;c=128&&yr("not-basic"),e.push(t.charCodeAt(c));for(let c=l>0?l+1:0;c=n&&yr("invalid-input");const E=bRe(t.charCodeAt(c++));E>=zn&&yr("invalid-input"),E>Vn((Wi-i)/p)&&yr("overflow"),i+=E*p;const f=g<=s?Tg:g>=s+eo?eo:g-s;if(EVn(Wi/S)&&yr("overflow"),p*=S}const _=e.length+1;s=DN(i-d,_,d==0),Vn(i/_)>Wi-o&&yr("overflow"),o+=Vn(i/_),i%=_,e.splice(i++,0,o)}return String.fromCodePoint(...e)},Rg=function(t){const e=[];t=vg(t);const n=t.length;let i=ON,o=0,s=NN;for(const d of t)d<128&&e.push(Gu(d));const l=e.length;let c=l;for(l&&e.push(AN);c=i&&pVn((Wi-o)/_)&&yr("overflow"),o+=(d-i)*_,i=d;for(const p of t)if(pWi&&yr("overflow"),p===i){let g=o;for(let E=zn;;E+=zn){const f=E<=s?Tg:E>=s+eo?eo:E-s;if(g=0))try{e.hostname=MN.toASCII(e.hostname)}catch{}return ti.encode(ti.format(e))}function URe(t){var e=ti.parse(t,!0);if(e.hostname&&(!e.protocol||LN.indexOf(e.protocol)>=0))try{e.hostname=MN.toUnicode(e.hostname)}catch{}return ti.decode(ti.format(e),ti.decode.defaultChars+"%")}function Dn(t,e){if(!(this instanceof Dn))return new Dn(t,e);e||Wa.isString(t)||(e=t||{},t="default"),this.inline=new DRe,this.block=new IRe,this.core=new yRe,this.renderer=new ARe,this.linkify=new xRe,this.validateLink=PRe,this.normalizeLink=kRe,this.normalizeLinkText=URe,this.utils=Wa,this.helpers=Wa.assign({},ORe),this.options={},this.configure(t),e&&this.set(e)}Dn.prototype.set=function(t){return Wa.assign(this.options,t),this};Dn.prototype.configure=function(t){var e=this,n;if(Wa.isString(t)&&(n=t,t=wRe[n],!t))throw new Error('Wrong `markdown-it` preset "'+n+'", check name');if(!t)throw new Error("Wrong `markdown-it` preset, can't be empty");return 
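/*
  Above: a bundled punycode implementation (used to IDNA-encode hostnames in
  normalizeLink) and the MarkdownIt constructor with its preset handling
  ("default", "commonmark", "zero"). A construction sketch, with MarkdownIt
  imported as in the earlier sketch and assuming the documented options; the
  validateLink override is illustrative:

    const strict = new MarkdownIt("commonmark", {
      html: false,
      linkify: true,
      typographer: true,
    });
    // validateLink gates every generated href (the check compiled above);
    // it is a plain overridable method:
    strict.validateLink = (url) => !/^javascript:/i.test(url);
*/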
t.options&&e.set(t.options),t.components&&Object.keys(t.components).forEach(function(i){t.components[i].rules&&e[i].ruler.enableOnly(t.components[i].rules),t.components[i].rules2&&e[i].ruler2.enableOnly(t.components[i].rules2)}),this};Dn.prototype.enable=function(t,e){var n=[];Array.isArray(t)||(t=[t]),["core","block","inline"].forEach(function(o){n=n.concat(this[o].ruler.enable(t,!0))},this),n=n.concat(this.inline.ruler2.enable(t,!0));var i=t.filter(function(o){return n.indexOf(o)<0});if(i.length&&!e)throw new Error("MarkdownIt. Failed to enable unknown rule(s): "+i);return this};Dn.prototype.disable=function(t,e){var n=[];Array.isArray(t)||(t=[t]),["core","block","inline"].forEach(function(o){n=n.concat(this[o].ruler.disable(t,!0))},this),n=n.concat(this.inline.ruler2.disable(t,!0));var i=t.filter(function(o){return n.indexOf(o)<0});if(i.length&&!e)throw new Error("MarkdownIt. Failed to disable unknown rule(s): "+i);return this};Dn.prototype.use=function(t){var e=[this].concat(Array.prototype.slice.call(arguments,1));return t.apply(t,e),this};Dn.prototype.parse=function(t,e){if(typeof t!="string")throw new Error("Input data should be a String");var n=new this.core.State(t,this,e);return this.core.process(n),n.tokens};Dn.prototype.render=function(t,e){return e=e||{},this.renderer.render(this.parse(t,e),this.options,e)};Dn.prototype.parseInline=function(t,e){var n=new this.core.State(t,this,e);return n.inlineMode=!0,this.core.process(n),n.tokens};Dn.prototype.renderInline=function(t,e){return e=e||{},this.renderer.render(this.parseInline(t,e),this.options,e)};var FRe=Dn,BRe=FRe;const GRe=Qm(BRe);var kr={};kr.getAttrs=function(t,e,n){const i=/[^\t\n\f />"'=]/,o=" ",s="=",l=".",c="#",d=[];let _="",p="",g=!0,E=!1;for(let f=e+n.leftDelimiter.length;f=i+1:p.length>=i}let s,l,c,d;const _=i-e.rightDelimiter.length;switch(t){case"start":c=n.slice(0,e.leftDelimiter.length),s=c===e.leftDelimiter?0:-1,l=s===-1?-1:n.indexOf(e.rightDelimiter,_),d=n.charAt(l+e.rightDelimiter.length),d&&e.rightDelimiter.indexOf(d)!==-1&&(l=-1);break;case"end":s=n.lastIndexOf(e.leftDelimiter),l=s===-1?-1:n.indexOf(e.rightDelimiter,s+_),l=l===n.length-e.rightDelimiter.length?l:-1;break;case"only":c=n.slice(0,e.leftDelimiter.length),s=c===e.leftDelimiter?0:-1,c=n.slice(n.length-e.rightDelimiter.length),l=c===e.rightDelimiter?n.length-e.rightDelimiter.length:-1;break;default:throw new Error(`Unexpected case ${t}, expected 'start', 'end' or 'only'`)}return s!==-1&&l!==-1&&o(n.substring(s,l+e.rightDelimiter.length))}};kr.removeDelimiter=function(t,e){const n=Ym(e.leftDelimiter),i=Ym(e.rightDelimiter),o=new RegExp("[ \\n]?"+n+"[^"+n+i+"]+"+i+"$"),s=t.search(o);return s!==-1?t.slice(0,s):t};function Ym(t){return t.replace(/[-/\\^$*+?.()|[\]{}]/g,"\\$&")}kr.escapeRegExp=Ym;kr.getMatchingOpeningToken=function(t,e){if(t[e].type==="softbreak")return!1;if(t[e].nesting===0)return t[e];const n=t[e].level,i=t[e].type.replace("_close","_open");for(;e>=0;--e)if(t[e].type===i&&t[e].level===n)return t[e];return!1};const YRe=/[&<>"]/,qRe=/[&<>"]/g,$Re={"&":"&","<":"<",">":">",'"':"""};function HRe(t){return $Re[t]}kr.escapeHtml=function(t){return YRe.test(t)?t.replace(qRe,HRe):t};const qe=kr;var zRe=t=>{const e=new RegExp("^ {0,3}[-*_]{3,} ?"+qe.escapeRegExp(t.leftDelimiter)+"[^"+qe.escapeRegExp(t.rightDelimiter)+"]");return[{name:"fenced code blocks",tests:[{shift:0,block:!0,info:qe.hasDelimiters("end",t)}],transform:(n,i)=>{const 
o=n[i],s=o.info.lastIndexOf(t.leftDelimiter),l=qe.getAttrs(o.info,s,t);qe.addAttrs(l,o),o.info=qe.removeDelimiter(o.info,t)}},{name:"inline nesting 0",tests:[{shift:0,type:"inline",children:[{shift:-1,type:n=>n==="image"||n==="code_inline"},{shift:0,type:"text",content:qe.hasDelimiters("start",t)}]}],transform:(n,i,o)=>{const s=n[i].children[o],l=s.content.indexOf(t.rightDelimiter),c=n[i].children[o-1],d=qe.getAttrs(s.content,0,t);qe.addAttrs(d,c),s.content.length===l+t.rightDelimiter.length?n[i].children.splice(o,1):s.content=s.content.slice(l+t.rightDelimiter.length)}},{name:"tables",tests:[{shift:0,type:"table_close"},{shift:1,type:"paragraph_open"},{shift:2,type:"inline",content:qe.hasDelimiters("only",t)}],transform:(n,i)=>{const o=n[i+2],s=qe.getMatchingOpeningToken(n,i),l=qe.getAttrs(o.content,0,t);qe.addAttrs(l,s),n.splice(i+1,3)}},{name:"inline attributes",tests:[{shift:0,type:"inline",children:[{shift:-1,nesting:-1},{shift:0,type:"text",content:qe.hasDelimiters("start",t)}]}],transform:(n,i,o)=>{const s=n[i].children[o],l=s.content,c=qe.getAttrs(l,0,t),d=qe.getMatchingOpeningToken(n[i].children,o-1);qe.addAttrs(c,d),s.content=l.slice(l.indexOf(t.rightDelimiter)+t.rightDelimiter.length)}},{name:"list softbreak",tests:[{shift:-2,type:"list_item_open"},{shift:0,type:"inline",children:[{position:-2,type:"softbreak"},{position:-1,type:"text",content:qe.hasDelimiters("only",t)}]}],transform:(n,i,o)=>{const l=n[i].children[o].content,c=qe.getAttrs(l,0,t);let d=i-2;for(;n[d-1]&&n[d-1].type!=="ordered_list_open"&&n[d-1].type!=="bullet_list_open";)d--;qe.addAttrs(c,n[d-1]),n[i].children=n[i].children.slice(0,-2)}},{name:"list double softbreak",tests:[{shift:0,type:n=>n==="bullet_list_close"||n==="ordered_list_close"},{shift:1,type:"paragraph_open"},{shift:2,type:"inline",content:qe.hasDelimiters("only",t),children:n=>n.length===1},{shift:3,type:"paragraph_close"}],transform:(n,i)=>{const s=n[i+2].content,l=qe.getAttrs(s,0,t),c=qe.getMatchingOpeningToken(n,i);qe.addAttrs(l,c),n.splice(i+1,3)}},{name:"list item end",tests:[{shift:-2,type:"list_item_open"},{shift:0,type:"inline",children:[{position:-1,type:"text",content:qe.hasDelimiters("end",t)}]}],transform:(n,i,o)=>{const s=n[i].children[o],l=s.content,c=qe.getAttrs(l,l.lastIndexOf(t.leftDelimiter),t);qe.addAttrs(c,n[i-2]);const d=l.slice(0,l.lastIndexOf(t.leftDelimiter));s.content=Bb(d)!==" "?d:d.slice(0,-1)}},{name:` -{.a} softbreak then curly in start`,tests:[{shift:0,type:"inline",children:[{position:-2,type:"softbreak"},{position:-1,type:"text",content:qe.hasDelimiters("only",t)}]}],transform:(n,i,o)=>{const s=n[i].children[o],l=qe.getAttrs(s.content,0,t);let c=i+1;for(;n[c+1]&&n[c+1].nesting===-1;)c++;const d=qe.getMatchingOpeningToken(n,c);qe.addAttrs(l,d),n[i].children=n[i].children.slice(0,-2)}},{name:"horizontal rule",tests:[{shift:0,type:"paragraph_open"},{shift:1,type:"inline",children:n=>n.length===1,content:n=>n.match(e)!==null},{shift:2,type:"paragraph_close"}],transform:(n,i)=>{const o=n[i];o.type="hr",o.tag="hr",o.nesting=0;const s=n[i+1].content,l=s.lastIndexOf(t.leftDelimiter),c=qe.getAttrs(s,l,t);qe.addAttrs(c,o),o.markup=s,n.splice(i+1,2)}},{name:"end of block",tests:[{shift:0,type:"inline",children:[{position:-1,content:qe.hasDelimiters("end",t),type:n=>n!=="code_inline"&&n!=="math_inline"}]}],transform:(n,i,o)=>{const s=n[i].children[o],l=s.content,c=qe.getAttrs(l,l.lastIndexOf(t.leftDelimiter),t);let d=i+1;for(;n[d+1]&&n[d+1].nesting===-1;)d++;const _=qe.getMatchingOpeningToken(n,d);qe.addAttrs(c,_);const 
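/*
  The pattern table above (curly-brace attributes after fenced code, inline
  spans, tables, lists, headings, hr) belongs to markdown-it-attrs. A usage
  sketch, assuming the plugin's documented options; the allowedAttributes
  whitelist is illustrative:

    import markdownItAttrs from "markdown-it-attrs";

    md.use(markdownItAttrs, {
      leftDelimiter: "{",
      rightDelimiter: "}",
      allowedAttributes: ["id", "class"],
    });
    md.render("# Heading {.title #main}\n");
    // => <h1 id="main" class="title">Heading</h1>
*/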
p=l.slice(0,l.lastIndexOf(t.leftDelimiter));s.content=Bb(p)!==" "?p:p.slice(0,-1)}}]};function Bb(t){return t.slice(-1)[0]}const VRe=zRe,WRe={leftDelimiter:"{",rightDelimiter:"}",allowedAttributes:[]};var KRe=function(e,n){let i=Object.assign({},WRe);i=Object.assign(i,n);const o=VRe(i);function s(l){const c=l.tokens;for(let d=0;d{const S=qm(c,d,f);return S.j!==null&&(g=S.j),S.match})&&(p.transform(c,d,g),(p.name==="inline attributes"||p.name==="inline nesting 0")&&_--)}}e.core.ruler.before("linkify","curly_attributes",s)};function qm(t,e,n){const i={match:!1,j:null},o=n.shift!==void 0?e+n.shift:n.position;if(n.shift!==void 0&&o<0)return i;const s=ZRe(t,o);if(s===void 0)return i;for(const l of Object.keys(n))if(!(l==="shift"||l==="position")){if(s[l]===void 0)return i;if(l==="children"&&QRe(n.children)){if(s.children.length===0)return i;let c;const d=n.children,_=s.children;if(d.every(p=>p.position!==void 0)){if(c=d.every(p=>qm(_,p.position,p).match),c){const p=JRe(d).position;i.j=p>=0?p:_.length+p}}else for(let p=0;p<_.length;p++)if(c=d.every(g=>qm(_,p,g).match),c){i.j=p;break}if(c===!1)return i;continue}switch(typeof n[l]){case"boolean":case"number":case"string":if(s[l]!==n[l])return i;break;case"function":if(!n[l](s[l]))return i;break;case"object":if(XRe(n[l])){if(n[l].every(d=>d(s[l]))===!1)return i;break}default:throw new Error(`Unknown type of pattern test (key: ${l}). Test should be of type boolean, number, string, function or array of functions.`)}}return i.match=!0,i}function QRe(t){return Array.isArray(t)&&t.length&&t.every(e=>typeof e=="object")}function XRe(t){return Array.isArray(t)&&t.length&&t.every(e=>typeof e=="function")}function ZRe(t,e){return e>=0?t[e]:t[t.length+e]}function JRe(t){return t.slice(-1)[0]||{}}const jRe=Qm(KRe);function PN(t){return t instanceof Map?t.clear=t.delete=t.set=function(){throw new Error("map is read-only")}:t instanceof Set&&(t.add=t.clear=t.delete=function(){throw new Error("set is read-only")}),Object.freeze(t),Object.getOwnPropertyNames(t).forEach(e=>{const n=t[e],i=typeof n;(i==="object"||i==="function")&&!Object.isFrozen(n)&&PN(n)}),t}class Gb{constructor(e){e.data===void 0&&(e.data={}),this.data=e.data,this.isMatchIgnored=!1}ignoreMatch(){this.isMatchIgnored=!0}}function kN(t){return t.replace(/&/g,"&").replace(//g,">").replace(/"/g,""").replace(/'/g,"'")}function xr(t,...e){const n=Object.create(null);for(const i in t)n[i]=t[i];return e.forEach(function(i){for(const o in i)n[o]=i[o]}),n}const eNe="",Yb=t=>!!t.scope,tNe=(t,{prefix:e})=>{if(t.startsWith("language:"))return t.replace("language:","language-");if(t.includes(".")){const n=t.split(".");return[`${e}${n.shift()}`,...n.map((i,o)=>`${i}${"_".repeat(o+1)}`)].join(" ")}return`${e}${t}`};class nNe{constructor(e,n){this.buffer="",this.classPrefix=n.classPrefix,e.walk(this)}addText(e){this.buffer+=kN(e)}openNode(e){if(!Yb(e))return;const n=tNe(e.scope,{prefix:this.classPrefix});this.span(n)}closeNode(e){Yb(e)&&(this.buffer+=eNe)}value(){return this.buffer}span(e){this.buffer+=``}}const qb=(t={})=>{const e={children:[]};return Object.assign(e,t),e};class Ng{constructor(){this.rootNode=qb(),this.stack=[this.rootNode]}get top(){return this.stack[this.stack.length-1]}get root(){return this.rootNode}add(e){this.top.children.push(e)}openNode(e){const n=qb({scope:e});this.add(n),this.stack.push(n)}closeNode(){if(this.stack.length>1)return this.stack.pop()}closeAllNodes(){for(;this.closeNode(););}toJSON(){return JSON.stringify(this.rootNode,null,4)}walk(e){return 
this.constructor._walk(e,this.rootNode)}static _walk(e,n){return typeof n=="string"?e.addText(n):n.children&&(e.openNode(n),n.children.forEach(i=>this._walk(e,i)),e.closeNode(n)),e}static _collapse(e){typeof e!="string"&&e.children&&(e.children.every(n=>typeof n=="string")?e.children=[e.children.join("")]:e.children.forEach(n=>{Ng._collapse(n)}))}}class rNe extends Ng{constructor(e){super(),this.options=e}addText(e){e!==""&&this.add(e)}startScope(e){this.openNode(e)}endScope(){this.closeNode()}__addSublanguage(e,n){const i=e.root;n&&(i.scope=`language:${n}`),this.add(i)}toHTML(){return new nNe(this,this.options).value()}finalize(){return this.closeAllNodes(),!0}}function to(t){return t?typeof t=="string"?t:t.source:null}function UN(t){return gi("(?=",t,")")}function iNe(t){return gi("(?:",t,")*")}function aNe(t){return gi("(?:",t,")?")}function gi(...t){return t.map(n=>to(n)).join("")}function oNe(t){const e=t[t.length-1];return typeof e=="object"&&e.constructor===Object?(t.splice(t.length-1,1),e):{}}function Og(...t){return"("+(oNe(t).capture?"":"?:")+t.map(i=>to(i)).join("|")+")"}function FN(t){return new RegExp(t.toString()+"|").exec("").length-1}function sNe(t,e){const n=t&&t.exec(e);return n&&n.index===0}const lNe=/\[(?:[^\\\]]|\\.)*\]|\(\??|\\([1-9][0-9]*)|\\./;function Ag(t,{joinWith:e}){let n=0;return t.map(i=>{n+=1;const o=n;let s=to(i),l="";for(;s.length>0;){const c=lNe.exec(s);if(!c){l+=s;break}l+=s.substring(0,c.index),s=s.substring(c.index+c[0].length),c[0][0]==="\\"&&c[1]?l+="\\"+String(Number(c[1])+o):(l+=c[0],c[0]==="("&&n++)}return l}).map(i=>`(${i})`).join(e)}const cNe=/\b\B/,BN="[a-zA-Z]\\w*",yg="[a-zA-Z_]\\w*",GN="\\b\\d+(\\.\\d+)?",YN="(-?)(\\b0[xX][a-fA-F0-9]+|(\\b\\d+(\\.\\d*)?|\\.\\d+)([eE][-+]?\\d+)?)",qN="\\b(0b[01]+)",uNe="!|!=|!==|%|%=|&|&&|&=|\\*|\\*=|\\+|\\+=|,|-|-=|/=|/|:|;|<<|<<=|<=|<|===|==|=|>>>=|>>=|>=|>>>|>>|>|\\?|\\[|\\{|\\(|\\^|\\^=|\\||\\|=|\\|\\||~",dNe=(t={})=>{const e=/^#![ ]*\//;return t.binary&&(t.begin=gi(e,/.*\b/,t.binary,/\b.*/)),xr({scope:"meta",begin:e,end:/$/,relevance:0,"on:begin":(n,i)=>{n.index!==0&&i.ignoreMatch()}},t)},no={begin:"\\\\[\\s\\S]",relevance:0},_Ne={scope:"string",begin:"'",end:"'",illegal:"\\n",contains:[no]},pNe={scope:"string",begin:'"',end:'"',illegal:"\\n",contains:[no]},mNe={begin:/\b(a|an|the|are|I'm|isn't|don't|doesn't|won't|but|just|should|pretty|simply|enough|gonna|going|wtf|so|such|will|you|your|they|like|more)\b/},pl=function(t,e,n={}){const i=xr({scope:"comment",begin:t,end:e,contains:[]},n);i.contains.push({scope:"doctag",begin:"[ ]*(?=(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):)",end:/(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):/,excludeBegin:!0,relevance:0});const o=Og("I","a","is","so","us","to","at","if","in","it","on",/[A-Za-z]+['](d|ve|re|ll|t|s|n)/,/[A-Za-z]+[-][a-z]+/,/[A-Za-z][a-z]{2,}/);return i.contains.push({begin:gi(/[ ]+/,"(",o,/[.]?[:]?([.][ ]|[ ])/,"){3}")}),i},gNe=pl("//","$"),ENe=pl("/\\*","\\*/"),fNe=pl("#","$"),SNe={scope:"number",begin:GN,relevance:0},bNe={scope:"number",begin:YN,relevance:0},hNe={scope:"number",begin:qN,relevance:0},TNe={begin:/(?=\/[^/\n]*\/)/,contains:[{scope:"regexp",begin:/\//,end:/\/[gimuy]*/,illegal:/\n/,contains:[no,{begin:/\[/,end:/\]/,relevance:0,contains:[no]}]}]},vNe={scope:"title",begin:BN,relevance:0},CNe={scope:"title",begin:yg,relevance:0},RNe={begin:"\\.\\s*"+yg,relevance:0},NNe=function(t){return Object.assign(t,{"on:begin":(e,n)=>{n.data._beginMatch=e[1]},"on:end":(e,n)=>{n.data._beginMatch!==e[1]&&n.ignoreMatch()}})};var 
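/*
  From here the bundle appears to switch to highlight.js: an HTML emitter, a
  token tree, regex-composition helpers, and the shared modes (COMMENT,
  QUOTE_STRING_MODE, C_NUMBER_MODE, ...) that get frozen just after this
  point. A sketch of defining a grammar from those modes, assuming the
  documented registerLanguage API; "miniconf" is a hypothetical language:

    import hljs from "highlight.js/lib/core";

    hljs.registerLanguage("miniconf", (api) => ({
      name: "MiniConf",
      contains: [
        api.COMMENT("#", "$"),   // hash comments to end of line
        api.QUOTE_STRING_MODE,   // double-quoted strings
        api.C_NUMBER_MODE,       // ints / floats / hex
      ],
    }));
*/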
ws=Object.freeze({__proto__:null,MATCH_NOTHING_RE:cNe,IDENT_RE:BN,UNDERSCORE_IDENT_RE:yg,NUMBER_RE:GN,C_NUMBER_RE:YN,BINARY_NUMBER_RE:qN,RE_STARTERS_RE:uNe,SHEBANG:dNe,BACKSLASH_ESCAPE:no,APOS_STRING_MODE:_Ne,QUOTE_STRING_MODE:pNe,PHRASAL_WORDS_MODE:mNe,COMMENT:pl,C_LINE_COMMENT_MODE:gNe,C_BLOCK_COMMENT_MODE:ENe,HASH_COMMENT_MODE:fNe,NUMBER_MODE:SNe,C_NUMBER_MODE:bNe,BINARY_NUMBER_MODE:hNe,REGEXP_MODE:TNe,TITLE_MODE:vNe,UNDERSCORE_TITLE_MODE:CNe,METHOD_GUARD:RNe,END_SAME_AS_BEGIN:NNe});function ONe(t,e){t.input[t.index-1]==="."&&e.ignoreMatch()}function ANe(t,e){t.className!==void 0&&(t.scope=t.className,delete t.className)}function yNe(t,e){e&&t.beginKeywords&&(t.begin="\\b("+t.beginKeywords.split(" ").join("|")+")(?!\\.)(?=\\b|\\s)",t.__beforeBegin=ONe,t.keywords=t.keywords||t.beginKeywords,delete t.beginKeywords,t.relevance===void 0&&(t.relevance=0))}function INe(t,e){Array.isArray(t.illegal)&&(t.illegal=Og(...t.illegal))}function DNe(t,e){if(t.match){if(t.begin||t.end)throw new Error("begin & end are not supported with match");t.begin=t.match,delete t.match}}function xNe(t,e){t.relevance===void 0&&(t.relevance=1)}const wNe=(t,e)=>{if(!t.beforeMatch)return;if(t.starts)throw new Error("beforeMatch cannot be used with starts");const n=Object.assign({},t);Object.keys(t).forEach(i=>{delete t[i]}),t.keywords=n.keywords,t.begin=gi(n.beforeMatch,UN(n.begin)),t.starts={relevance:0,contains:[Object.assign(n,{endsParent:!0})]},t.relevance=0,delete n.beforeMatch},MNe=["of","and","for","in","not","or","if","then","parent","list","value"],LNe="keyword";function $N(t,e,n=LNe){const i=Object.create(null);return typeof t=="string"?o(n,t.split(" ")):Array.isArray(t)?o(n,t):Object.keys(t).forEach(function(s){Object.assign(i,$N(t[s],e,s))}),i;function o(s,l){e&&(l=l.map(c=>c.toLowerCase())),l.forEach(function(c){const d=c.split("|");i[d[0]]=[s,PNe(d[0],d[1])]})}}function PNe(t,e){return e?Number(e):kNe(t)?0:1}function kNe(t){return MNe.includes(t.toLowerCase())}const $b={},ai=t=>{console.error(t)},Hb=(t,...e)=>{console.log(`WARN: ${t}`,...e)},Yi=(t,e)=>{$b[`${t}/${e}`]||(console.log(`Deprecated as of ${t}. 
${e}`),$b[`${t}/${e}`]=!0)},Vs=new Error;function HN(t,e,{key:n}){let i=0;const o=t[n],s={},l={};for(let c=1;c<=e.length;c++)l[c+i]=o[c],s[c+i]=!0,i+=FN(e[c-1]);t[n]=l,t[n]._emit=s,t[n]._multi=!0}function UNe(t){if(Array.isArray(t.begin)){if(t.skip||t.excludeBegin||t.returnBegin)throw ai("skip, excludeBegin, returnBegin not compatible with beginScope: {}"),Vs;if(typeof t.beginScope!="object"||t.beginScope===null)throw ai("beginScope must be object"),Vs;HN(t,t.begin,{key:"beginScope"}),t.begin=Ag(t.begin,{joinWith:""})}}function FNe(t){if(Array.isArray(t.end)){if(t.skip||t.excludeEnd||t.returnEnd)throw ai("skip, excludeEnd, returnEnd not compatible with endScope: {}"),Vs;if(typeof t.endScope!="object"||t.endScope===null)throw ai("endScope must be object"),Vs;HN(t,t.end,{key:"endScope"}),t.end=Ag(t.end,{joinWith:""})}}function BNe(t){t.scope&&typeof t.scope=="object"&&t.scope!==null&&(t.beginScope=t.scope,delete t.scope)}function GNe(t){BNe(t),typeof t.beginScope=="string"&&(t.beginScope={_wrap:t.beginScope}),typeof t.endScope=="string"&&(t.endScope={_wrap:t.endScope}),UNe(t),FNe(t)}function YNe(t){function e(l,c){return new RegExp(to(l),"m"+(t.case_insensitive?"i":"")+(t.unicodeRegex?"u":"")+(c?"g":""))}class n{constructor(){this.matchIndexes={},this.regexes=[],this.matchAt=1,this.position=0}addRule(c,d){d.position=this.position++,this.matchIndexes[this.matchAt]=d,this.regexes.push([d,c]),this.matchAt+=FN(c)+1}compile(){this.regexes.length===0&&(this.exec=()=>null);const c=this.regexes.map(d=>d[1]);this.matcherRe=e(Ag(c,{joinWith:"|"}),!0),this.lastIndex=0}exec(c){this.matcherRe.lastIndex=this.lastIndex;const d=this.matcherRe.exec(c);if(!d)return null;const _=d.findIndex((g,E)=>E>0&&g!==void 0),p=this.matchIndexes[_];return d.splice(0,_),Object.assign(d,p)}}class i{constructor(){this.rules=[],this.multiRegexes=[],this.count=0,this.lastIndex=0,this.regexIndex=0}getMatcher(c){if(this.multiRegexes[c])return this.multiRegexes[c];const d=new n;return this.rules.slice(c).forEach(([_,p])=>d.addRule(_,p)),d.compile(),this.multiRegexes[c]=d,d}resumingScanAtSamePosition(){return this.regexIndex!==0}considerAll(){this.regexIndex=0}addRule(c,d){this.rules.push([c,d]),d.type==="begin"&&this.count++}exec(c){const d=this.getMatcher(this.regexIndex);d.lastIndex=this.lastIndex;let _=d.exec(c);if(this.resumingScanAtSamePosition()&&!(_&&_.index===this.lastIndex)){const p=this.getMatcher(0);p.lastIndex=this.lastIndex+1,_=p.exec(c)}return _&&(this.regexIndex+=_.position+1,this.regexIndex===this.count&&this.considerAll()),_}}function o(l){const c=new i;return l.contains.forEach(d=>c.addRule(d.begin,{rule:d,type:"begin"})),l.terminatorEnd&&c.addRule(l.terminatorEnd,{type:"end"}),l.illegal&&c.addRule(l.illegal,{type:"illegal"}),c}function s(l,c){const d=l;if(l.isCompiled)return d;[ANe,DNe,GNe,wNe].forEach(p=>p(l,c)),t.compilerExtensions.forEach(p=>p(l,c)),l.__beforeBegin=null,[yNe,INe,xNe].forEach(p=>p(l,c)),l.isCompiled=!0;let _=null;return typeof l.keywords=="object"&&l.keywords.$pattern&&(l.keywords=Object.assign({},l.keywords),_=l.keywords.$pattern,delete 
l.keywords.$pattern),_=_||/\w+/,l.keywords&&(l.keywords=$N(l.keywords,t.case_insensitive)),d.keywordPatternRe=e(_,!0),c&&(l.begin||(l.begin=/\B|\b/),d.beginRe=e(d.begin),!l.end&&!l.endsWithParent&&(l.end=/\B|\b/),l.end&&(d.endRe=e(d.end)),d.terminatorEnd=to(d.end)||"",l.endsWithParent&&c.terminatorEnd&&(d.terminatorEnd+=(l.end?"|":"")+c.terminatorEnd)),l.illegal&&(d.illegalRe=e(l.illegal)),l.contains||(l.contains=[]),l.contains=[].concat(...l.contains.map(function(p){return qNe(p==="self"?l:p)})),l.contains.forEach(function(p){s(p,d)}),l.starts&&s(l.starts,c),d.matcher=o(d),d}if(t.compilerExtensions||(t.compilerExtensions=[]),t.contains&&t.contains.includes("self"))throw new Error("ERR: contains `self` is not supported at the top-level of a language. See documentation.");return t.classNameAliases=xr(t.classNameAliases||{}),s(t)}function zN(t){return t?t.endsWithParent||zN(t.starts):!1}function qNe(t){return t.variants&&!t.cachedVariants&&(t.cachedVariants=t.variants.map(function(e){return xr(t,{variants:null},e)})),t.cachedVariants?t.cachedVariants:zN(t)?xr(t,{starts:t.starts?xr(t.starts):null}):Object.isFrozen(t)?xr(t):t}var $Ne="11.8.0";class HNe extends Error{constructor(e,n){super(e),this.name="HTMLInjectionError",this.html=n}}const Yu=kN,zb=xr,Vb=Symbol("nomatch"),zNe=7,VN=function(t){const e=Object.create(null),n=Object.create(null),i=[];let o=!0;const s="Could not find the language '{}', did you forget to load/include a language module?",l={disableAutodetect:!0,name:"Plain text",contains:[]};let c={ignoreUnescapedHTML:!1,throwUnescapedHTML:!1,noHighlightRe:/^(no-?highlight)$/i,languageDetectRe:/\blang(?:uage)?-([\w-]+)\b/i,classPrefix:"hljs-",cssSelector:"pre code",languages:null,__emitter:rNe};function d(G){return c.noHighlightRe.test(G)}function _(G){let X=G.className+" ";X+=G.parentNode?G.parentNode.className:"";const _e=c.languageDetectRe.exec(X);if(_e){const ve=W(_e[1]);return ve||(Hb(s.replace("{}",_e[1])),Hb("Falling back to no-highlight mode for this block.",G)),ve?_e[1]:"no-highlight"}return X.split(/\s+/).find(ve=>d(ve)||W(ve))}function p(G,X,_e){let ve="",he="";typeof X=="object"?(ve=G,_e=X.ignoreIllegals,he=X.language):(Yi("10.7.0","highlight(lang, code, ...args) has been deprecated."),Yi("10.7.0",`Please use highlight(code, options) instead. 
-https://github.com/highlightjs/highlight.js/issues/2277`),he=G,ve=X),_e===void 0&&(_e=!0);const tt={code:ve,language:he};J("before:highlight",tt);const lt=tt.result?tt.result:g(tt.language,tt.code,_e);return lt.code=tt.code,J("after:highlight",lt),lt}function g(G,X,_e,ve){const he=Object.create(null);function tt(ne,ce){return ne.keywords[ce]}function lt(){if(!me.keywords){Ue.addText(Ie);return}let ne=0;me.keywordPatternRe.lastIndex=0;let ce=me.keywordPatternRe.exec(Ie),Oe="";for(;ce;){Oe+=Ie.substring(ne,ce.index);const Me=we.case_insensitive?ce[0].toLowerCase():ce[0],ct=tt(me,Me);if(ct){const[xt,Ze]=ct;if(Ue.addText(Oe),Oe="",he[Me]=(he[Me]||0)+1,he[Me]<=zNe&&(zt+=Ze),xt.startsWith("_"))Oe+=ce[0];else{const Yt=we.classNameAliases[xt]||xt;Be(ce[0],Yt)}}else Oe+=ce[0];ne=me.keywordPatternRe.lastIndex,ce=me.keywordPatternRe.exec(Ie)}Oe+=Ie.substring(ne),Ue.addText(Oe)}function $e(){if(Ie==="")return;let ne=null;if(typeof me.subLanguage=="string"){if(!e[me.subLanguage]){Ue.addText(Ie);return}ne=g(me.subLanguage,Ie,!0,bt[me.subLanguage]),bt[me.subLanguage]=ne._top}else ne=f(Ie,me.subLanguage.length?me.subLanguage:null);me.relevance>0&&(zt+=ne.relevance),Ue.__addSublanguage(ne._emitter,ne.language)}function Ce(){me.subLanguage!=null?$e():lt(),Ie=""}function Be(ne,ce){ne!==""&&(Ue.startScope(ce),Ue.addText(ne),Ue.endScope())}function Ve(ne,ce){let Oe=1;const Me=ce.length-1;for(;Oe<=Me;){if(!ne._emit[Oe]){Oe++;continue}const ct=we.classNameAliases[ne[Oe]]||ne[Oe],xt=ce[Oe];ct?Be(xt,ct):(Ie=xt,lt(),Ie=""),Oe++}}function xe(ne,ce){return ne.scope&&typeof ne.scope=="string"&&Ue.openNode(we.classNameAliases[ne.scope]||ne.scope),ne.beginScope&&(ne.beginScope._wrap?(Be(Ie,we.classNameAliases[ne.beginScope._wrap]||ne.beginScope._wrap),Ie=""):ne.beginScope._multi&&(Ve(ne.beginScope,ce),Ie="")),me=Object.create(ne,{parent:{value:me}}),me}function He(ne,ce,Oe){let Me=sNe(ne.endRe,Oe);if(Me){if(ne["on:end"]){const ct=new Gb(ne);ne["on:end"](ce,ct),ct.isMatchIgnored&&(Me=!1)}if(Me){for(;ne.endsParent&&ne.parent;)ne=ne.parent;return ne}}if(ne.endsWithParent)return He(ne.parent,ce,Oe)}function rt(ne){return me.matcher.regexIndex===0?(Ie+=ne[0],1):(Sn=!0,0)}function We(ne){const ce=ne[0],Oe=ne.rule,Me=new Gb(Oe),ct=[Oe.__beforeBegin,Oe["on:begin"]];for(const xt of ct)if(xt&&(xt(ne,Me),Me.isMatchIgnored))return rt(ce);return Oe.skip?Ie+=ce:(Oe.excludeBegin&&(Ie+=ce),Ce(),!Oe.returnBegin&&!Oe.excludeBegin&&(Ie=ce)),xe(Oe,ne),Oe.returnBegin?0:ce.length}function te(ne){const ce=ne[0],Oe=X.substring(ne.index),Me=He(me,ne,Oe);if(!Me)return Vb;const ct=me;me.endScope&&me.endScope._wrap?(Ce(),Be(ce,me.endScope._wrap)):me.endScope&&me.endScope._multi?(Ce(),Ve(me.endScope,ne)):ct.skip?Ie+=ce:(ct.returnEnd||ct.excludeEnd||(Ie+=ce),Ce(),ct.excludeEnd&&(Ie=ce));do me.scope&&Ue.closeNode(),!me.skip&&!me.subLanguage&&(zt+=me.relevance),me=me.parent;while(me!==Me.parent);return Me.starts&&xe(Me.starts,ne),ct.returnEnd?0:ce.length}function pe(){const ne=[];for(let ce=me;ce!==we;ce=ce.parent)ce.scope&&ne.unshift(ce.scope);ne.forEach(ce=>Ue.openNode(ce))}let ie={};function Pe(ne,ce){const Oe=ce&&ce[0];if(Ie+=ne,Oe==null)return Ce(),0;if(ie.type==="begin"&&ce.type==="end"&&ie.index===ce.index&&Oe===""){if(Ie+=X.slice(ce.index,ce.index+1),!o){const Me=new Error(`0 width match regex (${G})`);throw Me.languageName=G,Me.badRule=ie.rule,Me}return 1}if(ie=ce,ce.type==="begin")return We(ce);if(ce.type==="illegal"&&!_e){const Me=new Error('Illegal lexeme "'+Oe+'" for mode "'+(me.scope||"")+'"');throw Me.mode=me,Me}else 
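/*
  The code above appears to be highlight.js's core scanning loop (keyword
  processing, sub-language delegation, begin/end match handling). Its public
  entry points, per the documented v11 API:

    const out = hljs.highlight("const x = 1;", {
      language: "javascript",
      ignoreIllegals: true,
    });
    console.log(out.value, out.relevance);

    // highlightAuto tries every registered grammar and keeps the two best
    // relevance scores (the sort visible in the auto-detect routine nearby).
    const guess = hljs.highlightAuto("SELECT 1;");
    console.log(guess.language, guess.secondBest && guess.secondBest.language);
*/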
if(ce.type==="end"){const Me=te(ce);if(Me!==Vb)return Me}if(ce.type==="illegal"&&Oe==="")return 1;if(Gt>1e5&&Gt>ce.index*3)throw new Error("potential infinite loop, way more iterations than matches");return Ie+=Oe,Oe.length}const we=W(G);if(!we)throw ai(s.replace("{}",G)),new Error('Unknown language: "'+G+'"');const Xe=YNe(we);let pt="",me=ve||Xe;const bt={},Ue=new c.__emitter(c);pe();let Ie="",zt=0,Nt=0,Gt=0,Sn=!1;try{if(we.__emitTokens)we.__emitTokens(X,Ue);else{for(me.matcher.considerAll();;){Gt++,Sn?Sn=!1:me.matcher.considerAll(),me.matcher.lastIndex=Nt;const ne=me.matcher.exec(X);if(!ne)break;const ce=X.substring(Nt,ne.index),Oe=Pe(ce,ne);Nt=ne.index+Oe}Pe(X.substring(Nt))}return Ue.finalize(),pt=Ue.toHTML(),{language:G,value:pt,relevance:zt,illegal:!1,_emitter:Ue,_top:me}}catch(ne){if(ne.message&&ne.message.includes("Illegal"))return{language:G,value:Yu(X),illegal:!0,relevance:0,_illegalBy:{message:ne.message,index:Nt,context:X.slice(Nt-100,Nt+100),mode:ne.mode,resultSoFar:pt},_emitter:Ue};if(o)return{language:G,value:Yu(X),illegal:!1,relevance:0,errorRaised:ne,_emitter:Ue,_top:me};throw ne}}function E(G){const X={value:Yu(G),illegal:!1,relevance:0,_top:l,_emitter:new c.__emitter(c)};return X._emitter.addText(G),X}function f(G,X){X=X||c.languages||Object.keys(e);const _e=E(G),ve=X.filter(W).filter(K).map(Ce=>g(Ce,G,!1));ve.unshift(_e);const he=ve.sort((Ce,Be)=>{if(Ce.relevance!==Be.relevance)return Be.relevance-Ce.relevance;if(Ce.language&&Be.language){if(W(Ce.language).supersetOf===Be.language)return 1;if(W(Be.language).supersetOf===Ce.language)return-1}return 0}),[tt,lt]=he,$e=tt;return $e.secondBest=lt,$e}function S(G,X,_e){const ve=X&&n[X]||_e;G.classList.add("hljs"),G.classList.add(`language-${ve}`)}function C(G){let X=null;const _e=_(G);if(d(_e))return;if(J("before:highlightElement",{el:G,language:_e}),G.children.length>0&&(c.ignoreUnescapedHTML||(console.warn("One of your code blocks includes unescaped HTML. This is a potentially serious security risk."),console.warn("https://github.com/highlightjs/highlight.js/wiki/security"),console.warn("The element with unescaped HTML:"),console.warn(G)),c.throwUnescapedHTML))throw new HNe("One of your code blocks includes unescaped HTML.",G.innerHTML);X=G;const ve=X.textContent,he=_e?p(ve,{language:_e,ignoreIllegals:!0}):f(ve);G.innerHTML=he.value,S(G,_e,he.language),G.result={language:he.language,re:he.relevance,relevance:he.relevance},he.secondBest&&(G.secondBest={language:he.secondBest.language,relevance:he.secondBest.relevance}),J("after:highlightElement",{el:G,result:he,text:ve})}function h(G){c=zb(c,G)}const T=()=>{x(),Yi("10.6.0","initHighlighting() deprecated. Use highlightAll() now.")};function N(){x(),Yi("10.6.0","initHighlightingOnLoad() deprecated. 
Use highlightAll() now.")}let y=!1;function x(){if(document.readyState==="loading"){y=!0;return}document.querySelectorAll(c.cssSelector).forEach(C)}function P(){y&&x()}typeof window<"u"&&window.addEventListener&&window.addEventListener("DOMContentLoaded",P,!1);function D(G,X){let _e=null;try{_e=X(t)}catch(ve){if(ai("Language definition for '{}' could not be registered.".replace("{}",G)),o)ai(ve);else throw ve;_e=l}_e.name||(_e.name=G),e[G]=_e,_e.rawDefinition=X.bind(null,t),_e.aliases&&z(_e.aliases,{languageName:G})}function k(G){delete e[G];for(const X of Object.keys(n))n[X]===G&&delete n[X]}function U(){return Object.keys(e)}function W(G){return G=(G||"").toLowerCase(),e[G]||e[n[G]]}function z(G,{languageName:X}){typeof G=="string"&&(G=[G]),G.forEach(_e=>{n[_e.toLowerCase()]=X})}function K(G){const X=W(G);return X&&!X.disableAutodetect}function Ee(G){G["before:highlightBlock"]&&!G["before:highlightElement"]&&(G["before:highlightElement"]=X=>{G["before:highlightBlock"](Object.assign({block:X.el},X))}),G["after:highlightBlock"]&&!G["after:highlightElement"]&&(G["after:highlightElement"]=X=>{G["after:highlightBlock"](Object.assign({block:X.el},X))})}function oe(G){Ee(G),i.push(G)}function L(G){const X=i.indexOf(G);X!==-1&&i.splice(X,1)}function J(G,X){const _e=G;i.forEach(function(ve){ve[_e]&&ve[_e](X)})}function re(G){return Yi("10.7.0","highlightBlock will be removed entirely in v12.0"),Yi("10.7.0","Please use highlightElement now."),C(G)}Object.assign(t,{highlight:p,highlightAuto:f,highlightAll:x,highlightElement:C,highlightBlock:re,configure:h,initHighlighting:T,initHighlightingOnLoad:N,registerLanguage:D,unregisterLanguage:k,listLanguages:U,getLanguage:W,registerAliases:z,autoDetection:K,inherit:zb,addPlugin:oe,removePlugin:L}),t.debugMode=function(){o=!1},t.safeMode=function(){o=!0},t.versionString=$Ne,t.regex={concat:gi,lookahead:UN,either:Og,optional:aNe,anyNumberOfTimes:iNe};for(const G in ws)typeof ws[G]=="object"&&PN(ws[G]);return Object.assign(t,ws),t},Xi=VN({});Xi.newInstance=()=>VN({});var VNe=Xi;Xi.HighlightJS=Xi;Xi.default=Xi;var qu,Wb;function WNe(){if(Wb)return qu;Wb=1;function t(e){const n="[A-Za-zА-Яа-яёЁ_][A-Za-zА-Яа-яёЁ_0-9]+",s="далее "+"возврат вызватьисключение выполнить для если и из или иначе иначеесли исключение каждого конецесли конецпопытки конеццикла не новый перейти перем по пока попытка прервать продолжить тогда цикл экспорт ",d="загрузитьизфайла "+"вебклиент вместо внешнеесоединение клиент конецобласти мобильноеприложениеклиент мобильноеприложениесервер наклиенте наклиентенасервере наклиентенасерверебезконтекста насервере насерверебезконтекста область перед после сервер толстыйклиентобычноеприложение толстыйклиентуправляемоеприложение тонкийклиент ",_="разделительстраниц разделительстрок символтабуляции ",p="ansitooem oemtoansi ввестивидсубконто ввестиперечисление ввестипериод ввестиплансчетов выбранныйплансчетов датагод датамесяц датачисло заголовоксистемы значениевстроку значениеизстроки каталогиб каталогпользователя кодсимв конгода конецпериодаби конецрассчитанногопериодаби конецстандартногоинтервала конквартала конмесяца коннедели лог лог10 максимальноеколичествосубконто названиеинтерфейса названиенабораправ назначитьвид назначитьсчет найтиссылки началопериодаби началостандартногоинтервала начгода начквартала начмесяца начнедели номерднягода номерднянедели номернеделигода обработкаожидания основнойжурналрасчетов основнойплансчетов основнойязык очиститьокносообщений периодстр получитьвремята получитьдатута получитьдокументта получитьзначенияотбора 
получитьпозициюта получитьпустоезначение получитьта префиксавтонумерации пропись пустоезначение разм разобратьпозициюдокумента рассчитатьрегистрына рассчитатьрегистрыпо симв создатьобъект статусвозврата стрколичествострок сформироватьпозициюдокумента счетпокоду текущеевремя типзначения типзначениястр установитьтана установитьтапо фиксшаблон шаблон ",g="acos asin atan base64значение base64строка cos exp log log10 pow sin sqrt tan xmlзначение xmlстрока xmlтип xmlтипзнч активноеокно безопасныйрежим безопасныйрежимразделенияданных булево ввестидату ввестизначение ввестистроку ввестичисло возможностьчтенияxml вопрос восстановитьзначение врег выгрузитьжурналрегистрации выполнитьобработкуоповещения выполнитьпроверкуправдоступа вычислить год данныеформывзначение дата день деньгода деньнедели добавитьмесяц заблокироватьданныедляредактирования заблокироватьработупользователя завершитьработусистемы загрузитьвнешнююкомпоненту закрытьсправку записатьjson записатьxml записатьдатуjson записьжурналарегистрации заполнитьзначениясвойств запроситьразрешениепользователя запуститьприложение запуститьсистему зафиксироватьтранзакцию значениевданныеформы значениевстрокувнутр значениевфайл значениезаполнено значениеизстрокивнутр значениеизфайла изxmlтипа импортмоделиxdto имякомпьютера имяпользователя инициализироватьпредопределенныеданные информацияобошибке каталогбиблиотекимобильногоустройства каталогвременныхфайлов каталогдокументов каталогпрограммы кодироватьстроку кодлокализацииинформационнойбазы кодсимвола командасистемы конецгода конецдня конецквартала конецмесяца конецминуты конецнедели конецчаса конфигурациябазыданныхизмененадинамически конфигурацияизменена копироватьданныеформы копироватьфайл краткоепредставлениеошибки лев макс местноевремя месяц мин минута монопольныйрежим найти найтинедопустимыесимволыxml найтиокнопонавигационнойссылке найтипомеченныенаудаление найтипоссылкам найтифайлы началогода началодня началоквартала началомесяца началоминуты началонедели началочаса начатьзапросразрешенияпользователя начатьзапускприложения начатькопированиефайла начатьперемещениефайла начатьподключениевнешнейкомпоненты начатьподключениерасширенияработыскриптографией начатьподключениерасширенияработысфайлами начатьпоискфайлов начатьполучениекаталогавременныхфайлов начатьполучениекаталогадокументов начатьполучениерабочегокаталогаданныхпользователя начатьполучениефайлов начатьпомещениефайла начатьпомещениефайлов начатьсозданиедвоичныхданныхизфайла начатьсозданиекаталога начатьтранзакцию начатьудалениефайлов начатьустановкувнешнейкомпоненты начатьустановкурасширенияработыскриптографией начатьустановкурасширенияработысфайлами неделягода необходимостьзавершениясоединения номерсеансаинформационнойбазы номерсоединенияинформационнойбазы нрег нстр обновитьинтерфейс обновитьнумерациюобъектов обновитьповторноиспользуемыезначения обработкапрерыванияпользователя объединитьфайлы окр описаниеошибки оповестить оповеститьобизменении отключитьобработчикзапросанастроекклиенталицензирования отключитьобработчикожидания отключитьобработчикоповещения открытьзначение открытьиндекссправки открытьсодержаниесправки открытьсправку открытьформу открытьформумодально отменитьтранзакцию очиститьжурналрегистрации очиститьнастройкипользователя очиститьсообщения параметрыдоступа перейтипонавигационнойссылке переместитьфайл подключитьвнешнююкомпоненту подключитьобработчикзапросанастроекклиенталицензирования подключитьобработчикожидания подключитьобработчикоповещения подключитьрасширениеработыскриптографией подключитьрасширениеработысфайлами 
подробноепредставлениеошибки показатьвводдаты показатьвводзначения показатьвводстроки показатьвводчисла показатьвопрос показатьзначение показатьинформациюобошибке показатьнакарте показатьоповещениепользователя показатьпредупреждение полноеимяпользователя получитьcomобъект получитьxmlтип получитьадреспоместоположению получитьблокировкусеансов получитьвремязавершенияспящегосеанса получитьвремязасыпанияпассивногосеанса получитьвремяожиданияблокировкиданных получитьданныевыбора получитьдополнительныйпараметрклиенталицензирования получитьдопустимыекодылокализации получитьдопустимыечасовыепояса получитьзаголовокклиентскогоприложения получитьзаголовоксистемы получитьзначенияотборажурналарегистрации получитьидентификаторконфигурации получитьизвременногохранилища получитьимявременногофайла получитьимяклиенталицензирования получитьинформациюэкрановклиента получитьиспользованиежурналарегистрации получитьиспользованиесобытияжурналарегистрации получитькраткийзаголовокприложения получитьмакетоформления получитьмаскувсефайлы получитьмаскувсефайлыклиента получитьмаскувсефайлысервера получитьместоположениепоадресу получитьминимальнуюдлинупаролейпользователей получитьнавигационнуюссылку получитьнавигационнуюссылкуинформационнойбазы получитьобновлениеконфигурациибазыданных получитьобновлениепредопределенныхданныхинформационнойбазы получитьобщиймакет получитьобщуюформу получитьокна получитьоперативнуюотметкувремени получитьотключениебезопасногорежима получитьпараметрыфункциональныхопцийинтерфейса получитьполноеимяпредопределенногозначения получитьпредставлениянавигационныхссылок получитьпроверкусложностипаролейпользователей получитьразделительпути получитьразделительпутиклиента получитьразделительпутисервера получитьсеансыинформационнойбазы получитьскоростьклиентскогосоединения получитьсоединенияинформационнойбазы получитьсообщенияпользователю получитьсоответствиеобъектаиформы получитьсоставстандартногоинтерфейсаodata получитьструктурухранениябазыданных получитьтекущийсеансинформационнойбазы получитьфайл получитьфайлы получитьформу получитьфункциональнуюопцию получитьфункциональнуюопциюинтерфейса получитьчасовойпоясинформационнойбазы пользователиос поместитьвовременноехранилище поместитьфайл поместитьфайлы прав праводоступа предопределенноезначение представлениекодалокализации представлениепериода представлениеправа представлениеприложения представлениесобытияжурналарегистрации представлениечасовогопояса предупреждение прекратитьработусистемы привилегированныйрежим продолжитьвызов прочитатьjson прочитатьxml прочитатьдатуjson пустаястрока рабочийкаталогданныхпользователя разблокироватьданныедляредактирования разделитьфайл разорватьсоединениесвнешнимисточникомданных раскодироватьстроку рольдоступна секунда сигнал символ скопироватьжурналрегистрации смещениелетнеговремени смещениестандартноговремени соединитьбуферыдвоичныхданных создатькаталог создатьфабрикуxdto сокрл сокрлп сокрп сообщить состояние сохранитьзначение сохранитьнастройкипользователя сред стрдлина стрзаканчиваетсяна стрзаменить стрнайти стрначинаетсяс строка строкасоединенияинформационнойбазы стрполучитьстроку стрразделить стрсоединить стрсравнить стрчисловхождений стрчислострок стршаблон текущаядата текущаядатасеанса текущаяуниверсальнаядата текущаяуниверсальнаядатавмиллисекундах текущийвариантинтерфейсаклиентскогоприложения текущийвариантосновногошрифтаклиентскогоприложения текущийкодлокализации текущийрежимзапуска текущийязык текущийязыксистемы тип типзнч транзакцияактивна трег удалитьданныеинформационнойбазы удалитьизвременногохранилища 
удалитьобъекты удалитьфайлы универсальноевремя установитьбезопасныйрежим установитьбезопасныйрежимразделенияданных установитьблокировкусеансов установитьвнешнююкомпоненту установитьвремязавершенияспящегосеанса установитьвремязасыпанияпассивногосеанса установитьвремяожиданияблокировкиданных установитьзаголовокклиентскогоприложения установитьзаголовоксистемы установитьиспользованиежурналарегистрации установитьиспользованиесобытияжурналарегистрации установитькраткийзаголовокприложения установитьминимальнуюдлинупаролейпользователей установитьмонопольныйрежим установитьнастройкиклиенталицензирования установитьобновлениепредопределенныхданныхинформационнойбазы установитьотключениебезопасногорежима установитьпараметрыфункциональныхопцийинтерфейса установитьпривилегированныйрежим установитьпроверкусложностипаролейпользователей установитьрасширениеработыскриптографией установитьрасширениеработысфайлами установитьсоединениесвнешнимисточникомданных установитьсоответствиеобъектаиформы установитьсоставстандартногоинтерфейсаodata установитьчасовойпоясинформационнойбазы установитьчасовойпояссеанса формат цел час часовойпояс часовойпояссеанса число числопрописью этоадресвременногохранилища ",E="wsссылки библиотекакартинок библиотекамакетовоформлениякомпоновкиданных библиотекастилей бизнеспроцессы внешниеисточникиданных внешниеобработки внешниеотчеты встроенныепокупки главныйинтерфейс главныйстиль документы доставляемыеуведомления журналыдокументов задачи информацияобинтернетсоединении использованиерабочейдаты историяработыпользователя константы критерииотбора метаданные обработки отображениерекламы отправкадоставляемыхуведомлений отчеты панельзадачос параметрзапуска параметрысеанса перечисления планывидоврасчета планывидовхарактеристик планыобмена планысчетов полнотекстовыйпоиск пользователиинформационнойбазы последовательности проверкавстроенныхпокупок рабочаядата расширенияконфигурации регистрыбухгалтерии регистрынакопления регистрырасчета регистрысведений регламентныезадания сериализаторxdto справочники средствагеопозиционирования средствакриптографии средствамультимедиа средстваотображениярекламы средствапочты средствателефонии фабрикаxdto файловыепотоки фоновыезадания хранилищанастроек хранилищевариантовотчетов хранилищенастроекданныхформ хранилищеобщихнастроек хранилищепользовательскихнастроекдинамическихсписков хранилищепользовательскихнастроекотчетов хранилищесистемныхнастроек ",f=_+p+g+E,S="webцвета windowsцвета windowsшрифты библиотекакартинок рамкистиля символы цветастиля шрифтыстиля ",C="автоматическоесохранениеданныхформывнастройках автонумерациявформе автораздвижениесерий анимациядиаграммы вариантвыравниванияэлементовизаголовков вариантуправлениявысотойтаблицы вертикальнаяпрокруткаформы вертикальноеположение вертикальноеположениеэлемента видгруппыформы виддекорацииформы виддополненияэлементаформы видизмененияданных видкнопкиформы видпереключателя видподписейкдиаграмме видполяформы видфлажка влияниеразмеранапузырекдиаграммы горизонтальноеположение горизонтальноеположениеэлемента группировкаколонок группировкаподчиненныхэлементовформы группыиэлементы действиеперетаскивания дополнительныйрежимотображения допустимыедействияперетаскивания интервалмеждуэлементамиформы использованиевывода использованиеполосыпрокрутки используемоезначениеточкибиржевойдиаграммы историявыборапривводе источникзначенийоситочекдиаграммы источникзначенияразмерапузырькадиаграммы категориягруппыкоманд максимумсерий начальноеотображениедерева начальноеотображениесписка обновлениетекстаредактирования ориентациядендрограммы 
ориентациядиаграммы ориентацияметокдиаграммы ориентацияметоксводнойдиаграммы ориентацияэлементаформы отображениевдиаграмме отображениевлегендедиаграммы отображениегруппыкнопок отображениезаголовкашкалыдиаграммы отображениезначенийсводнойдиаграммы отображениезначенияизмерительнойдиаграммы отображениеинтерваладиаграммыганта отображениекнопки отображениекнопкивыбора отображениеобсужденийформы отображениеобычнойгруппы отображениеотрицательныхзначенийпузырьковойдиаграммы отображениепанелипоиска отображениеподсказки отображениепредупрежденияприредактировании отображениеразметкиполосырегулирования отображениестраницформы отображениетаблицы отображениетекстазначениядиаграммыганта отображениеуправленияобычнойгруппы отображениефигурыкнопки палитрацветовдиаграммы поведениеобычнойгруппы поддержкамасштабадендрограммы поддержкамасштабадиаграммыганта поддержкамасштабасводнойдиаграммы поисквтаблицепривводе положениезаголовкаэлементаформы положениекартинкикнопкиформы положениекартинкиэлементаграфическойсхемы положениекоманднойпанелиформы положениекоманднойпанелиэлементаформы положениеопорнойточкиотрисовки положениеподписейкдиаграмме положениеподписейшкалызначенийизмерительнойдиаграммы положениесостоянияпросмотра положениестрокипоиска положениетекстасоединительнойлинии положениеуправленияпоиском положениешкалывремени порядокотображенияточекгоризонтальнойгистограммы порядоксерийвлегендедиаграммы размеркартинки расположениезаголовкашкалыдиаграммы растягиваниеповертикалидиаграммыганта режимавтоотображениясостояния режимвводастроктаблицы режимвыборанезаполненного режимвыделениядаты режимвыделениястрокитаблицы режимвыделениятаблицы режимизмененияразмера режимизменениясвязанногозначения режимиспользованиядиалогапечати режимиспользованияпараметракоманды режиммасштабированияпросмотра режимосновногоокнаклиентскогоприложения режимоткрытияокнаформы режимотображениявыделения режимотображениягеографическойсхемы режимотображениязначенийсерии режимотрисовкисеткиграфическойсхемы режимполупрозрачностидиаграммы режимпробеловдиаграммы режимразмещениянастранице режимредактированияколонки режимсглаживаниядиаграммы режимсглаживанияиндикатора режимсписказадач сквозноевыравнивание сохранениеданныхформывнастройках способзаполнениятекстазаголовкашкалыдиаграммы способопределенияограничивающегозначениядиаграммы стандартнаягруппакоманд стандартноеоформление статусоповещенияпользователя стильстрелки типаппроксимациилиниитрендадиаграммы типдиаграммы типединицышкалывремени типимпортасерийслоягеографическойсхемы типлиниигеографическойсхемы типлиниидиаграммы типмаркерагеографическойсхемы типмаркерадиаграммы типобластиоформления типорганизацииисточникаданныхгеографическойсхемы типотображениясериислоягеографическойсхемы типотображенияточечногообъектагеографическойсхемы типотображенияшкалыэлементалегендыгеографическойсхемы типпоискаобъектовгеографическойсхемы типпроекциигеографическойсхемы типразмещенияизмерений типразмещенияреквизитовизмерений типрамкиэлементауправления типсводнойдиаграммы типсвязидиаграммыганта типсоединениязначенийпосериямдиаграммы типсоединенияточекдиаграммы типсоединительнойлинии типстороныэлементаграфическойсхемы типформыотчета типшкалырадарнойдиаграммы факторлиниитрендадиаграммы фигуракнопки фигурыграфическойсхемы фиксациявтаблице форматдняшкалывремени форматкартинки ширинаподчиненныхэлементовформы ",h="виддвижениябухгалтерии виддвижениянакопления видпериодарегистрарасчета видсчета видточкимаршрутабизнеспроцесса использованиеагрегатарегистранакопления использованиегруппиэлементов использованиережимапроведения 
использованиесреза периодичностьагрегатарегистранакопления режимавтовремя режимзаписидокумента режимпроведениядокумента ",T="авторегистрацияизменений допустимыйномерсообщения отправкаэлементаданных получениеэлементаданных ",N="использованиерасшифровкитабличногодокумента ориентациястраницы положениеитоговколоноксводнойтаблицы положениеитоговстроксводнойтаблицы положениетекстаотносительнокартинки расположениезаголовкагруппировкитабличногодокумента способчтениязначенийтабличногодокумента типдвустороннейпечати типзаполненияобластитабличногодокумента типкурсоровтабличногодокумента типлиниирисункатабличногодокумента типлинииячейкитабличногодокумента типнаправленияпереходатабличногодокумента типотображениявыделениятабличногодокумента типотображениялинийсводнойтаблицы типразмещениятекстатабличногодокумента типрисункатабличногодокумента типсмещениятабличногодокумента типузоратабличногодокумента типфайлатабличногодокумента точностьпечати чередованиерасположениястраниц ",y="отображениевремениэлементовпланировщика ",x="типфайлаформатированногодокумента ",P="обходрезультатазапроса типзаписизапроса ",D="видзаполнениярасшифровкипостроителяотчета типдобавленияпредставлений типизмеренияпостроителяотчета типразмещенияитогов ",k="доступкфайлу режимдиалогавыборафайла режимоткрытияфайла ",U="типизмеренияпостроителязапроса ",W="видданныханализа методкластеризации типединицыинтервалавременианализаданных типзаполнениятаблицырезультатаанализаданных типиспользованиячисловыхзначенийанализаданных типисточникаданныхпоискаассоциаций типколонкианализаданныхдереворешений типколонкианализаданныхкластеризация типколонкианализаданныхобщаястатистика типколонкианализаданныхпоискассоциаций типколонкианализаданныхпоискпоследовательностей типколонкимоделипрогноза типмерырасстоянияанализаданных типотсеченияправилассоциации типполяанализаданных типстандартизациианализаданных типупорядочиванияправилассоциациианализаданных типупорядочиванияшаблоновпоследовательностейанализаданных типупрощениядереварешений ",z="wsнаправлениепараметра вариантxpathxs вариантзаписидатыjson вариантпростоготипаxs видгруппымоделиxs видфасетаxdto действиепостроителяdom завершенностьпростоготипаxs завершенностьсоставноготипаxs завершенностьсхемыxs запрещенныеподстановкиxs исключениягруппподстановкиxs категорияиспользованияатрибутаxs категорияограниченияидентичностиxs категорияограниченияпространствименxs методнаследованияxs модельсодержимогоxs назначениетипаxml недопустимыеподстановкиxs обработкапробельныхсимволовxs обработкасодержимогоxs ограничениезначенияxs параметрыотбораузловdom переносстрокjson позициявдокументеdom пробельныесимволыxml типатрибутаxml типзначенияjson типканоническогоxml типкомпонентыxs типпроверкиxml типрезультатаdomxpath типузлаdom типузлаxml формаxml формапредставленияxs форматдатыjson экранированиесимволовjson ",K="видсравнениякомпоновкиданных действиеобработкирасшифровкикомпоновкиданных направлениесортировкикомпоновкиданных расположениевложенныхэлементоврезультатакомпоновкиданных расположениеитоговкомпоновкиданных расположениегруппировкикомпоновкиданных расположениеполейгруппировкикомпоновкиданных расположениеполякомпоновкиданных расположениереквизитовкомпоновкиданных расположениересурсовкомпоновкиданных типбухгалтерскогоостаткакомпоновкиданных типвыводатекстакомпоновкиданных типгруппировкикомпоновкиданных типгруппыэлементовотборакомпоновкиданных типдополненияпериодакомпоновкиданных типзаголовкаполейкомпоновкиданных типмакетагруппировкикомпоновкиданных типмакетаобластикомпоновкиданных типостаткакомпоновкиданных 
типпериодакомпоновкиданных типразмещениятекстакомпоновкиданных типсвязинаборовданныхкомпоновкиданных типэлементарезультатакомпоновкиданных расположениелегендыдиаграммыкомпоновкиданных типпримененияотборакомпоновкиданных режимотображенияэлементанастройкикомпоновкиданных режимотображениянастроеккомпоновкиданных состояниеэлементанастройкикомпоновкиданных способвосстановлениянастроеккомпоновкиданных режимкомпоновкирезультата использованиепараметракомпоновкиданных автопозицияресурсовкомпоновкиданных вариантиспользованиягруппировкикомпоновкиданных расположениересурсоввдиаграммекомпоновкиданных фиксациякомпоновкиданных использованиеусловногооформлениякомпоновкиданных ",Ee="важностьинтернетпочтовогосообщения обработкатекстаинтернетпочтовогосообщения способкодированияинтернетпочтовоговложения способкодированиянеasciiсимволовинтернетпочтовогосообщения типтекстапочтовогосообщения протоколинтернетпочты статусразборапочтовогосообщения ",oe="режимтранзакциизаписижурналарегистрации статустранзакциизаписижурналарегистрации уровеньжурналарегистрации ",L="расположениехранилищасертификатовкриптографии режимвключениясертификатовкриптографии режимпроверкисертификатакриптографии типхранилищасертификатовкриптографии ",J="кодировкаименфайловвzipфайле методсжатияzip методшифрованияzip режимвосстановленияпутейфайловzip режимобработкиподкаталоговzip режимсохраненияпутейzip уровеньсжатияzip ",re="звуковоеоповещение направлениепереходакстроке позициявпотоке порядокбайтов режимблокировкиданных режимуправленияблокировкойданных сервисвстроенныхпокупок состояниефоновогозадания типподписчикадоставляемыхуведомлений уровеньиспользованиязащищенногосоединенияftp ",G="направлениепорядкасхемызапроса типдополненияпериодамисхемызапроса типконтрольнойточкисхемызапроса типобъединениясхемызапроса типпараметрадоступнойтаблицысхемызапроса типсоединениясхемызапроса ",X="httpметод автоиспользованиеобщегореквизита автопрефиксномеразадачи вариантвстроенногоязыка видиерархии видрегистранакопления видтаблицывнешнегоисточникаданных записьдвиженийприпроведении заполнениепоследовательностей индексирование использованиебазыпланавидоврасчета использованиебыстроговыбора использованиеобщегореквизита использованиеподчинения использованиеполнотекстовогопоиска использованиеразделяемыхданныхобщегореквизита использованиереквизита назначениеиспользованияприложения назначениерасширенияконфигурации направлениепередачи обновлениепредопределенныхданных оперативноепроведение основноепредставлениевидарасчета основноепредставлениевидахарактеристики основноепредставлениезадачи основноепредставлениепланаобмена основноепредставлениесправочника основноепредставлениесчета перемещениеграницыприпроведении периодичностьномерабизнеспроцесса периодичностьномерадокумента периодичностьрегистрарасчета периодичностьрегистрасведений повторноеиспользованиевозвращаемыхзначений полнотекстовыйпоискпривводепостроке принадлежностьобъекта проведение разделениеаутентификацииобщегореквизита разделениеданныхобщегореквизита разделениерасширенийконфигурацииобщегореквизита режимавтонумерацииобъектов режимзаписирегистра режимиспользованиямодальности режимиспользованиясинхронныхвызововрасширенийплатформыивнешнихкомпонент режимповторногоиспользованиясеансов режимполученияданныхвыборапривводепостроке режимсовместимости режимсовместимостиинтерфейса режимуправленияблокировкойданныхпоумолчанию сериикодовпланавидовхарактеристик сериикодовпланасчетов сериикодовсправочника созданиепривводе способвыбора способпоискастрокипривводепостроке способредактирования типданныхтаблицывнешнегоисточникаданных 
типкодапланавидоврасчета типкодасправочника типмакета типномерабизнеспроцесса типномерадокумента типномеразадачи типформы удалениедвижений ",_e="важностьпроблемыприменениярасширенияконфигурации вариантинтерфейсаклиентскогоприложения вариантмасштабаформклиентскогоприложения вариантосновногошрифтаклиентскогоприложения вариантстандартногопериода вариантстандартнойдатыначала видграницы видкартинки видотображенияполнотекстовогопоиска видрамки видсравнения видцвета видчисловогозначения видшрифта допустимаядлина допустимыйзнак использованиеbyteordermark использованиеметаданныхполнотекстовогопоиска источникрасширенийконфигурации клавиша кодвозвратадиалога кодировкаxbase кодировкатекста направлениепоиска направлениесортировки обновлениепредопределенныхданных обновлениеприизмененииданных отображениепанелиразделов проверказаполнения режимдиалогавопрос режимзапускаклиентскогоприложения режимокругления режимоткрытияформприложения режимполнотекстовогопоиска скоростьклиентскогосоединения состояниевнешнегоисточникаданных состояниеобновленияконфигурациибазыданных способвыборасертификатаwindows способкодированиястроки статуссообщения типвнешнейкомпоненты типплатформы типповеденияклавишиenter типэлементаинформацииовыполненииобновленияконфигурациибазыданных уровеньизоляциитранзакций хешфункция частидаты",ve=S+C+h+T+N+y+x+P+D+k+U+W+z+K+Ee+oe+L+J+re+G+X+_e,lt="comобъект ftpсоединение httpзапрос httpсервисответ httpсоединение wsопределения wsпрокси xbase анализданных аннотацияxs блокировкаданных буфердвоичныхданных включениеxs выражениекомпоновкиданных генераторслучайныхчисел географическаясхема географическиекоординаты графическаясхема группамоделиxs данныерасшифровкикомпоновкиданных двоичныеданные дендрограмма диаграмма диаграммаганта диалогвыборафайла диалогвыборацвета диалогвыборашрифта диалограсписаниярегламентногозадания диалогредактированиястандартногопериода диапазон документdom документhtml документацияxs доставляемоеуведомление записьdom записьfastinfoset записьhtml записьjson записьxml записьzipфайла записьданных записьтекста записьузловdom запрос защищенноесоединениеopenssl значенияполейрасшифровкикомпоновкиданных извлечениетекста импортxs интернетпочта интернетпочтовоесообщение интернетпочтовыйпрофиль интернетпрокси интернетсоединение информациядляприложенияxs использованиеатрибутаxs использованиесобытияжурналарегистрации источникдоступныхнастроеккомпоновкиданных итераторузловdom картинка квалификаторыдаты квалификаторыдвоичныхданных квалификаторыстроки квалификаторычисла компоновщикмакетакомпоновкиданных компоновщикнастроеккомпоновкиданных конструктормакетаоформлениякомпоновкиданных конструкторнастроеккомпоновкиданных конструкторформатнойстроки линия макеткомпоновкиданных макетобластикомпоновкиданных макетоформлениякомпоновкиданных маскаxs менеджеркриптографии наборсхемxml настройкикомпоновкиданных настройкисериализацииjson обработкакартинок обработкарасшифровкикомпоновкиданных обходдереваdom объявлениеатрибутаxs объявлениенотацииxs объявлениеэлементаxs описаниеиспользованиясобытиядоступжурналарегистрации описаниеиспользованиясобытияотказвдоступежурналарегистрации описаниеобработкирасшифровкикомпоновкиданных описаниепередаваемогофайла описаниетипов определениегруппыатрибутовxs определениегруппымоделиxs определениеограниченияидентичностиxs определениепростоготипаxs определениесоставноготипаxs определениетипадокументаdom определенияxpathxs отборкомпоновкиданных пакетотображаемыхдокументов параметрвыбора параметркомпоновкиданных параметрызаписиjson параметрызаписиxml параметрычтенияxml 
переопределениеxs планировщик полеанализаданных полекомпоновкиданных построительdom построительзапроса построительотчета построительотчетаанализаданных построительсхемxml поток потоквпамяти почта почтовоесообщение преобразованиеxsl преобразованиекканоническомуxml процессорвыводарезультатакомпоновкиданныхвколлекциюзначений процессорвыводарезультатакомпоновкиданныхвтабличныйдокумент процессоркомпоновкиданных разыменовательпространствименdom рамка расписаниерегламентногозадания расширенноеимяxml результатчтенияданных своднаядиаграмма связьпараметравыбора связьпотипу связьпотипукомпоновкиданных сериализаторxdto сертификатклиентаwindows сертификатклиентафайл сертификаткриптографии сертификатыудостоверяющихцентровwindows сертификатыудостоверяющихцентровфайл сжатиеданных системнаяинформация сообщениепользователю сочетаниеклавиш сравнениезначений стандартнаядатаначала стандартныйпериод схемаxml схемакомпоновкиданных табличныйдокумент текстовыйдокумент тестируемоеприложение типданныхxml уникальныйидентификатор фабрикаxdto файл файловыйпоток фасетдлиныxs фасетколичестваразрядовдробнойчастиxs фасетмаксимальноговключающегозначенияxs фасетмаксимальногоисключающегозначенияxs фасетмаксимальнойдлиныxs фасетминимальноговключающегозначенияxs фасетминимальногоисключающегозначенияxs фасетминимальнойдлиныxs фасетобразцаxs фасетобщегоколичестваразрядовxs фасетперечисленияxs фасетпробельныхсимволовxs фильтрузловdom форматированнаястрока форматированныйдокумент фрагментxs хешированиеданных хранилищезначения цвет чтениеfastinfoset чтениеhtml чтениеjson чтениеxml чтениеzipфайла чтениеданных чтениетекста чтениеузловdom шрифт элементрезультатакомпоновкиданных "+"comsafearray деревозначений массив соответствие списокзначений структура таблицазначений фиксированнаяструктура фиксированноесоответствие фиксированныймассив ",$e="null истина ложь неопределено",Ce=e.inherit(e.NUMBER_MODE),Be={className:"string",begin:'"|\\|',end:'"|$',contains:[{begin:'""'}]},Ve={begin:"'",end:"'",excludeBegin:!0,excludeEnd:!0,contains:[{className:"number",begin:"\\d{4}([\\.\\\\/:-]?\\d{2}){0,5}"}]},xe=e.inherit(e.C_LINE_COMMENT_MODE),He={className:"meta",begin:"#|&",end:"$",keywords:{$pattern:n,keyword:s+d},contains:[xe]},rt={className:"symbol",begin:"~",end:";|:",excludeEnd:!0},We={className:"function",variants:[{begin:"процедура|функция",end:"\\)",keywords:"процедура функция"},{begin:"конецпроцедуры|конецфункции",keywords:"конецпроцедуры конецфункции"}],contains:[{begin:"\\(",end:"\\)",endsParent:!0,contains:[{className:"params",begin:n,end:",",excludeEnd:!0,endsWithParent:!0,keywords:{$pattern:n,keyword:"знач",literal:$e},contains:[Ce,Be,Ve]},xe]},e.inherit(e.TITLE_MODE,{begin:n})]};return{name:"1C:Enterprise",case_insensitive:!0,keywords:{$pattern:n,keyword:s,built_in:f,class:ve,type:lt,literal:$e},contains:[He,We,xe,rt,Ce,Be,Ve]}}return qu=t,qu}var $u,Kb;function KNe(){if(Kb)return $u;Kb=1;function t(e){const n=e.regex,i=/^[a-zA-Z][a-zA-Z0-9-]*/,o=["ALPHA","BIT","CHAR","CR","CRLF","CTL","DIGIT","DQUOTE","HEXDIG","HTAB","LF","LWSP","OCTET","SP","VCHAR","WSP"],s=e.COMMENT(/;/,/$/),l={scope:"symbol",match:/%b[0-1]+(-[0-1]+|(\.[0-1]+)+)?/},c={scope:"symbol",match:/%d[0-9]+(-[0-9]+|(\.[0-9]+)+)?/},d={scope:"symbol",match:/%x[0-9A-F]+(-[0-9A-F]+|(\.[0-9A-F]+)+)?/},_={scope:"symbol",match:/%[si](?=".*")/},p={scope:"attribute",match:n.concat(i,/(?=\s*=)/)};return{name:"Augmented Backus-Naur Form",illegal:/[!@#$^&',?+~`|:]/,keywords:o,contains:[{scope:"operator",match:/=\/?/},p,s,l,c,d,_,e.QUOTE_STRING_MODE,e.NUMBER_MODE]}}return $u=t,$u}var 
Hu,Qb;function QNe(){if(Qb)return Hu;Qb=1;function t(e){const n=e.regex,i=["GET","POST","HEAD","PUT","DELETE","CONNECT","OPTIONS","PATCH","TRACE"];return{name:"Apache Access Log",contains:[{className:"number",begin:/^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}(:\d{1,5})?\b/,relevance:5},{className:"number",begin:/\b\d+\b/,relevance:0},{className:"string",begin:n.concat(/"/,n.either(...i)),end:/"/,keywords:i,illegal:/\n/,relevance:5,contains:[{begin:/HTTP\/[12]\.\d'/,relevance:5}]},{className:"string",begin:/\[\d[^\]\n]{8,}\]/,illegal:/\n/,relevance:1},{className:"string",begin:/\[/,end:/\]/,illegal:/\n/,relevance:0},{className:"string",begin:/"Mozilla\/\d\.\d \(/,end:/"/,illegal:/\n/,relevance:3},{className:"string",begin:/"/,end:/"/,illegal:/\n/,relevance:0}]}}return Hu=t,Hu}var zu,Xb;function XNe(){if(Xb)return zu;Xb=1;function t(e){const n=e.regex,i=/[a-zA-Z_$][a-zA-Z0-9_$]*/,o=n.concat(i,n.concat("(\\.",i,")*")),s=/([*]|[a-zA-Z_$][a-zA-Z0-9_$]*)/,l={className:"rest_arg",begin:/[.]{3}/,end:i,relevance:10};return{name:"ActionScript",aliases:["as"],keywords:{keyword:["as","break","case","catch","class","const","continue","default","delete","do","dynamic","each","else","extends","final","finally","for","function","get","if","implements","import","in","include","instanceof","interface","internal","is","namespace","native","new","override","package","private","protected","public","return","set","static","super","switch","this","throw","try","typeof","use","var","void","while","with"],literal:["true","false","null","undefined"]},contains:[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.C_NUMBER_MODE,{match:[/\bpackage/,/\s+/,o],className:{1:"keyword",3:"title.class"}},{match:[/\b(?:class|interface|extends|implements)/,/\s+/,i],className:{1:"keyword",3:"title.class"}},{className:"meta",beginKeywords:"import include",end:/;/,keywords:{keyword:"import include"}},{beginKeywords:"function",end:/[{;]/,excludeEnd:!0,illegal:/\S/,contains:[e.inherit(e.TITLE_MODE,{className:"title.function"}),{className:"params",begin:/\(/,end:/\)/,contains:[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,l]},{begin:n.concat(/:\s*/,s)}]},e.METHOD_GUARD],illegal:/#/}}return zu=t,zu}var Vu,Zb;function ZNe(){if(Zb)return Vu;Zb=1;function t(e){const n="\\d(_|\\d)*",i="[eE][-+]?"+n,o=n+"(\\."+n+")?("+i+")?",s="\\w+",c="\\b("+(n+"#"+s+"(\\."+s+")?#("+i+")?")+"|"+o+")",d="[A-Za-z](_?[A-Za-z0-9.])*",_=`[]\\{\\}%#'"`,p=e.COMMENT("--","$"),g={begin:"\\s+:\\s+",end:"\\s*(:=|;|\\)|=>|$)",illegal:_,contains:[{beginKeywords:"loop for declare others",endsParent:!0},{className:"keyword",beginKeywords:"not null constant access function procedure in out aliased 
exception"},{className:"type",begin:d,endsParent:!0,relevance:0}]};return{name:"Ada",case_insensitive:!0,keywords:{keyword:["abort","else","new","return","abs","elsif","not","reverse","abstract","end","accept","entry","select","access","exception","of","separate","aliased","exit","or","some","all","others","subtype","and","for","out","synchronized","array","function","overriding","at","tagged","generic","package","task","begin","goto","pragma","terminate","body","private","then","if","procedure","type","case","in","protected","constant","interface","is","raise","use","declare","range","delay","limited","record","when","delta","loop","rem","while","digits","renames","with","do","mod","requeue","xor"],literal:["True","False"]},contains:[p,{className:"string",begin:/"/,end:/"/,contains:[{begin:/""/,relevance:0}]},{className:"string",begin:/'.'/},{className:"number",begin:c,relevance:0},{className:"symbol",begin:"'"+d},{className:"title",begin:"(\\bwith\\s+)?(\\bprivate\\s+)?\\bpackage\\s+(\\bbody\\s+)?",end:"(is|$)",keywords:"package body",excludeBegin:!0,excludeEnd:!0,illegal:_},{begin:"(\\b(with|overriding)\\s+)?\\b(function|procedure)\\s+",end:"(\\bis|\\bwith|\\brenames|\\)\\s*;)",keywords:"overriding function procedure with is renames return",returnBegin:!0,contains:[p,{className:"title",begin:"(\\bwith\\s+)?\\b(function|procedure)\\s+",end:"(\\(|\\s+|$)",excludeBegin:!0,excludeEnd:!0,illegal:_},g,{className:"type",begin:"\\breturn\\s+",end:"(\\s+|;|$)",keywords:"return",excludeBegin:!0,excludeEnd:!0,endsParent:!0,illegal:_}]},{className:"type",begin:"\\b(sub)?type\\s+",end:"\\s+",keywords:"type",excludeBegin:!0,illegal:_},g]}}return Vu=t,Vu}var Wu,Jb;function JNe(){if(Jb)return Wu;Jb=1;function t(e){const n={className:"built_in",begin:"\\b(void|bool|int8|int16|int32|int64|int|uint8|uint16|uint32|uint64|uint|string|ref|array|double|float|auto|dictionary)"},i={className:"symbol",begin:"[a-zA-Z0-9_]+@"},o={className:"keyword",begin:"<",end:">",contains:[n,i]};return n.contains=[o],i.contains=[o],{name:"AngelScript",aliases:["asc"],keywords:["for","in|0","break","continue","while","do|0","return","if","else","case","switch","namespace","is","cast","or","and","xor","not","get|0","in","inout|10","out","override","set|0","private","public","const","default|0","final","shared","external","mixin|10","enum","typedef","funcdef","this","super","import","from","interface","abstract|0","try","catch","protected","explicit","property"],illegal:"(^using\\s+[A-Za-z0-9_\\.]+;$|\\bfunction\\s*[^\\(])",contains:[{className:"string",begin:"'",end:"'",illegal:"\\n",contains:[e.BACKSLASH_ESCAPE],relevance:0},{className:"string",begin:'"""',end:'"""'},{className:"string",begin:'"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE],relevance:0},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"string",begin:"^\\s*\\[",end:"\\]"},{beginKeywords:"interface namespace",end:/\{/,illegal:"[;.\\-]",contains:[{className:"symbol",begin:"[a-zA-Z0-9_]+"}]},{beginKeywords:"class",end:/\{/,illegal:"[;.\\-]",contains:[{className:"symbol",begin:"[a-zA-Z0-9_]+",contains:[{begin:"[:,]\\s*",contains:[{className:"symbol",begin:"[a-zA-Z0-9_]+"}]}]}]},n,i,{className:"literal",begin:"\\b(null|true|false)"},{className:"number",relevance:0,begin:"(-?)(\\b0[xXbBoOdD][a-fA-F0-9]+|(\\b\\d+(\\.\\d*)?f?|\\.\\d+f?)([eE][-+]?\\d+f?)?)"}]}}return Wu=t,Wu}var Ku,jb;function jNe(){if(jb)return Ku;jb=1;function t(e){const 
n={className:"number",begin:/[$%]\d+/},i={className:"number",begin:/\b\d+/},o={className:"number",begin:/\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}(:\d{1,5})?/},s={className:"number",begin:/:\d{1,5}/};return{name:"Apache config",aliases:["apacheconf"],case_insensitive:!0,contains:[e.HASH_COMMENT_MODE,{className:"section",begin:/<\/?/,end:/>/,contains:[o,s,e.inherit(e.QUOTE_STRING_MODE,{relevance:0})]},{className:"attribute",begin:/\w+/,relevance:0,keywords:{_:["order","deny","allow","setenv","rewriterule","rewriteengine","rewritecond","documentroot","sethandler","errordocument","loadmodule","options","header","listen","serverroot","servername"]},starts:{end:/$/,relevance:0,keywords:{literal:"on off all deny allow"},contains:[{className:"meta",begin:/\s\[/,end:/\]$/},{className:"variable",begin:/[\$%]\{/,end:/\}/,contains:["self",n]},o,i,e.QUOTE_STRING_MODE]}}],illegal:/\S/}}return Ku=t,Ku}var Qu,eh;function eOe(){if(eh)return Qu;eh=1;function t(e){const n=e.regex,i=e.inherit(e.QUOTE_STRING_MODE,{illegal:null}),o={className:"params",begin:/\(/,end:/\)/,contains:["self",e.C_NUMBER_MODE,i]},s=e.COMMENT(/--/,/$/),l=e.COMMENT(/\(\*/,/\*\)/,{contains:["self",s]}),c=[s,l,e.HASH_COMMENT_MODE],d=[/apart from/,/aside from/,/instead of/,/out of/,/greater than/,/isn't|(doesn't|does not) (equal|come before|come after|contain)/,/(greater|less) than( or equal)?/,/(starts?|ends|begins?) with/,/contained by/,/comes (before|after)/,/a (ref|reference)/,/POSIX (file|path)/,/(date|time) string/,/quoted form/],_=[/clipboard info/,/the clipboard/,/info for/,/list (disks|folder)/,/mount volume/,/path to/,/(close|open for) access/,/(get|set) eof/,/current date/,/do shell script/,/get volume settings/,/random number/,/set volume/,/system attribute/,/system info/,/time to GMT/,/(load|run|store) script/,/scripting components/,/ASCII (character|number)/,/localized string/,/choose (application|color|file|file name|folder|from list|remote application|URL)/,/display (alert|dialog)/];return{name:"AppleScript",aliases:["osascript"],keywords:{keyword:"about above after against and around as at back before beginning behind below beneath beside between but by considering contain contains continue copy div does eighth else end equal equals error every exit fifth first for fourth from front get given global if ignoring in into is it its last local me middle mod my ninth not of on onto or over prop property put ref reference repeat returning script second set seventh since sixth some tell tenth that the|0 then third through thru timeout times to transaction try until where while whose with without",literal:"AppleScript false linefeed return pi quote result space tab true",built_in:"alias application boolean class constant date file integer list number real record string text activate beep count delay launch log offset read round run say summarize write character characters contents day frontmost id item length month name|0 paragraph paragraphs rest reverse running time version weekday word words year"},contains:[i,e.C_NUMBER_MODE,{className:"built_in",begin:n.concat(/\b/,n.either(..._),/\b/)},{className:"built_in",begin:/^\s*return\b/},{className:"literal",begin:/\b(text item delimiters|current application|missing value)\b/},{className:"keyword",begin:n.concat(/\b/,n.either(...d),/\b/)},{beginKeywords:"on",illegal:/[${=;\n]/,contains:[e.UNDERSCORE_TITLE_MODE,o]},...c],illegal:/\/\/|->|=>|\[\[/}}return Qu=t,Qu}var Xu,th;function tOe(){if(th)return Xu;th=1;function t(e){const 
n="[A-Za-z_][0-9A-Za-z_]*",i={keyword:["if","for","while","var","new","function","do","return","void","else","break"],literal:["BackSlash","DoubleQuote","false","ForwardSlash","Infinity","NaN","NewLine","null","PI","SingleQuote","Tab","TextFormatting","true","undefined"],built_in:["Abs","Acos","All","Angle","Any","Area","AreaGeodetic","Array","Asin","Atan","Atan2","Attachments","Average","Back","Bearing","Boolean","Buffer","BufferGeodetic","Ceil","Centroid","Clip","Concatenate","Console","Constrain","Contains","ConvertDirection","Cos","Count","Crosses","Cut","Date","DateAdd","DateDiff","Day","Decode","DefaultValue","Densify","DensifyGeodetic","Dictionary","Difference","Disjoint","Distance","DistanceGeodetic","Distinct","Domain","DomainCode","DomainName","EnvelopeIntersects","Equals","Erase","Exp","Expects","Extent","Feature","FeatureSet","FeatureSetByAssociation","FeatureSetById","FeatureSetByName","FeatureSetByPortalItem","FeatureSetByRelationshipName","Filter","Find","First","Floor","FromCharCode","FromCodePoint","FromJSON","GdbVersion","Generalize","Geometry","GetFeatureSet","GetUser","GroupBy","Guid","Hash","HasKey","Hour","IIf","Includes","IndexOf","Insert","Intersection","Intersects","IsEmpty","IsNan","ISOMonth","ISOWeek","ISOWeekday","ISOYear","IsSelfIntersecting","IsSimple","Left|0","Length","Length3D","LengthGeodetic","Log","Lower","Map","Max","Mean","Mid","Millisecond","Min","Minute","Month","MultiPartToSinglePart","Multipoint","NextSequenceValue","None","Now","Number","Offset|0","OrderBy","Overlaps","Point","Polygon","Polyline","Pop","Portal","Pow","Proper","Push","Random","Reduce","Relate","Replace","Resize","Reverse","Right|0","RingIsClockwise","Rotate","Round","Schema","Second","SetGeometry","Simplify","Sin","Slice","Sort","Splice","Split","Sqrt","Stdev","SubtypeCode","SubtypeName","Subtypes","Sum","SymmetricDifference","Tan","Text","Timestamp","ToCharCode","ToCodePoint","Today","ToHex","ToLocal","Top|0","Touches","ToUTC","TrackAccelerationAt","TrackAccelerationWindow","TrackCurrentAcceleration","TrackCurrentDistance","TrackCurrentSpeed","TrackCurrentTime","TrackDistanceAt","TrackDistanceWindow","TrackDuration","TrackFieldWindow","TrackGeometryWindow","TrackIndex","TrackSpeedAt","TrackSpeedWindow","TrackStartTime","TrackWindow","Trim","TypeOf","Union","Upper","UrlEncode","Variance","Week","Weekday","When","Within","Year"]},o={className:"symbol",begin:"\\$[datastore|feature|layer|map|measure|sourcefeature|sourcelayer|targetfeature|targetlayer|value|view]+"},s={className:"number",variants:[{begin:"\\b(0[bB][01]+)"},{begin:"\\b(0[oO][0-7]+)"},{begin:e.C_NUMBER_RE}],relevance:0},l={className:"subst",begin:"\\$\\{",end:"\\}",keywords:i,contains:[]},c={className:"string",begin:"`",end:"`",contains:[e.BACKSLASH_ESCAPE,l]};l.contains=[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,c,s,e.REGEXP_MODE];const d=l.contains.concat([e.C_BLOCK_COMMENT_MODE,e.C_LINE_COMMENT_MODE]);return{name:"ArcGIS 
Arcade",case_insensitive:!0,keywords:i,contains:[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,c,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,o,s,{begin:/[{,]\s*/,relevance:0,contains:[{begin:n+"\\s*:",returnBegin:!0,relevance:0,contains:[{className:"attr",begin:n,relevance:0}]}]},{begin:"("+e.RE_STARTERS_RE+"|\\b(return)\\b)\\s*",keywords:"return",contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.REGEXP_MODE,{className:"function",begin:"(\\(.*?\\)|"+n+")\\s*=>",returnBegin:!0,end:"\\s*=>",contains:[{className:"params",variants:[{begin:n},{begin:/\(\s*\)/},{begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:i,contains:d}]}]}],relevance:0},{beginKeywords:"function",end:/\{/,excludeEnd:!0,contains:[e.inherit(e.TITLE_MODE,{className:"title.function",begin:n}),{className:"params",begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,contains:d}],illegal:/\[|%/},{begin:/\$[(.]/}],illegal:/#(?!!)/}}return Xu=t,Xu}var Zu,nh;function nOe(){if(nh)return Zu;nh=1;function t(n){const i=n.regex,o=n.COMMENT("//","$",{contains:[{begin:/\\\n/}]}),s="decltype\\(auto\\)",l="[a-zA-Z_]\\w*::",c="<[^<>]+>",d="(?!struct)("+s+"|"+i.optional(l)+"[a-zA-Z_]\\w*"+i.optional(c)+")",_={className:"type",begin:"\\b[a-z\\d_]*_t\\b"},p="\\\\(x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4,8}|[0-7]{3}|\\S)",g={className:"string",variants:[{begin:'(u8?|U|L)?"',end:'"',illegal:"\\n",contains:[n.BACKSLASH_ESCAPE]},{begin:"(u8?|U|L)?'("+p+"|.)",end:"'",illegal:"."},n.END_SAME_AS_BEGIN({begin:/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,end:/\)([^()\\ ]{0,16})"/})]},E={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)((ll|LL|l|L)(u|U)?|(u|U)(ll|LL|l|L)?|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"}],relevance:0},f={className:"meta",begin:/#\s*[a-z]+\b/,end:/$/,keywords:{keyword:"if else elif endif define undef warning error line pragma _Pragma ifdef ifndef 
include"},contains:[{begin:/\\\n/,relevance:0},n.inherit(g,{className:"string"}),{className:"string",begin:/<.*?>/},o,n.C_BLOCK_COMMENT_MODE]},S={className:"title",begin:i.optional(l)+n.IDENT_RE,relevance:0},C=i.optional(l)+n.IDENT_RE+"\\s*\\(",h=["alignas","alignof","and","and_eq","asm","atomic_cancel","atomic_commit","atomic_noexcept","auto","bitand","bitor","break","case","catch","class","co_await","co_return","co_yield","compl","concept","const_cast|10","consteval","constexpr","constinit","continue","decltype","default","delete","do","dynamic_cast|10","else","enum","explicit","export","extern","false","final","for","friend","goto","if","import","inline","module","mutable","namespace","new","noexcept","not","not_eq","nullptr","operator","or","or_eq","override","private","protected","public","reflexpr","register","reinterpret_cast|10","requires","return","sizeof","static_assert","static_cast|10","struct","switch","synchronized","template","this","thread_local","throw","transaction_safe","transaction_safe_dynamic","true","try","typedef","typeid","typename","union","using","virtual","volatile","while","xor","xor_eq"],T=["bool","char","char16_t","char32_t","char8_t","double","float","int","long","short","void","wchar_t","unsigned","signed","const","static"],N=["any","auto_ptr","barrier","binary_semaphore","bitset","complex","condition_variable","condition_variable_any","counting_semaphore","deque","false_type","future","imaginary","initializer_list","istringstream","jthread","latch","lock_guard","multimap","multiset","mutex","optional","ostringstream","packaged_task","pair","promise","priority_queue","queue","recursive_mutex","recursive_timed_mutex","scoped_lock","set","shared_future","shared_lock","shared_mutex","shared_timed_mutex","shared_ptr","stack","string_view","stringstream","timed_mutex","thread","true_type","tuple","unique_lock","unique_ptr","unordered_map","unordered_multimap","unordered_multiset","unordered_set","variant","vector","weak_ptr","wstring","wstring_view"],y=["abort","abs","acos","apply","as_const","asin","atan","atan2","calloc","ceil","cerr","cin","clog","cos","cosh","cout","declval","endl","exchange","exit","exp","fabs","floor","fmod","forward","fprintf","fputs","free","frexp","fscanf","future","invoke","isalnum","isalpha","iscntrl","isdigit","isgraph","islower","isprint","ispunct","isspace","isupper","isxdigit","labs","launder","ldexp","log","log10","make_pair","make_shared","make_shared_for_overwrite","make_tuple","make_unique","malloc","memchr","memcmp","memcpy","memset","modf","move","pow","printf","putchar","puts","realloc","scanf","sin","sinh","snprintf","sprintf","sqrt","sscanf","std","stderr","stdin","stdout","strcat","strchr","strcmp","strcpy","strcspn","strlen","strncat","strncmp","strncpy","strpbrk","strrchr","strspn","strstr","swap","tan","tanh","terminate","to_underlying","tolower","toupper","vfprintf","visit","vprintf","vsprintf"],D={type:T,keyword:h,literal:["NULL","false","nullopt","nullptr","true"],built_in:["_Pragma"],_type_hints:N},k={className:"function.dispatch",relevance:0,keywords:{_hint:y},begin:i.concat(/\b/,/(?!decltype)/,/(?!if)/,/(?!for)/,/(?!switch)/,/(?!while)/,n.IDENT_RE,i.lookahead(/(<[^<>]+>|)\s*\(/))},U=[k,f,_,o,n.C_BLOCK_COMMENT_MODE,E,g],W={variants:[{begin:/=/,end:/;/},{begin:/\(/,end:/\)/},{beginKeywords:"new throw return 
else",end:/;/}],keywords:D,contains:U.concat([{begin:/\(/,end:/\)/,keywords:D,contains:U.concat(["self"]),relevance:0}]),relevance:0},z={className:"function",begin:"("+d+"[\\*&\\s]+)+"+C,returnBegin:!0,end:/[{;=]/,excludeEnd:!0,keywords:D,illegal:/[^\w\s\*&:<>.]/,contains:[{begin:s,keywords:D,relevance:0},{begin:C,returnBegin:!0,contains:[S],relevance:0},{begin:/::/,relevance:0},{begin:/:/,endsWithParent:!0,contains:[g,E]},{relevance:0,match:/,/},{className:"params",begin:/\(/,end:/\)/,keywords:D,relevance:0,contains:[o,n.C_BLOCK_COMMENT_MODE,g,E,_,{begin:/\(/,end:/\)/,keywords:D,relevance:0,contains:["self",o,n.C_BLOCK_COMMENT_MODE,g,E,_]}]},_,o,n.C_BLOCK_COMMENT_MODE,f]};return{name:"C++",aliases:["cc","c++","h++","hpp","hh","hxx","cxx"],keywords:D,illegal:"",keywords:D,contains:["self",_]},{begin:n.IDENT_RE+"::",keywords:D},{match:[/\b(?:enum(?:\s+(?:class|struct))?|class|struct|union)/,/\s+/,/\w+/],className:{1:"keyword",3:"title.class"}}])}}function e(n){const i={type:["boolean","byte","word","String"],built_in:["KeyboardController","MouseController","SoftwareSerial","EthernetServer","EthernetClient","LiquidCrystal","RobotControl","GSMVoiceCall","EthernetUDP","EsploraTFT","HttpClient","RobotMotor","WiFiClient","GSMScanner","FileSystem","Scheduler","GSMServer","YunClient","YunServer","IPAddress","GSMClient","GSMModem","Keyboard","Ethernet","Console","GSMBand","Esplora","Stepper","Process","WiFiUDP","GSM_SMS","Mailbox","USBHost","Firmata","PImage","Client","Server","GSMPIN","FileIO","Bridge","Serial","EEPROM","Stream","Mouse","Audio","Servo","File","Task","GPRS","WiFi","Wire","TFT","GSM","SPI","SD"],_hints:["setup","loop","runShellCommandAsynchronously","analogWriteResolution","retrieveCallingNumber","printFirmwareVersion","analogReadResolution","sendDigitalPortPair","noListenOnLocalhost","readJoystickButton","setFirmwareVersion","readJoystickSwitch","scrollDisplayRight","getVoiceCallStatus","scrollDisplayLeft","writeMicroseconds","delayMicroseconds","beginTransmission","getSignalStrength","runAsynchronously","getAsynchronously","listenOnLocalhost","getCurrentCarrier","readAccelerometer","messageAvailable","sendDigitalPorts","lineFollowConfig","countryNameWrite","runShellCommand","readStringUntil","rewindDirectory","readTemperature","setClockDivider","readLightSensor","endTransmission","analogReference","detachInterrupt","countryNameRead","attachInterrupt","encryptionType","readBytesUntil","robotNameWrite","readMicrophone","robotNameRead","cityNameWrite","userNameWrite","readJoystickY","readJoystickX","mouseReleased","openNextFile","scanNetworks","noInterrupts","digitalWrite","beginSpeaker","mousePressed","isActionDone","mouseDragged","displayLogos","noAutoscroll","addParameter","remoteNumber","getModifiers","keyboardRead","userNameRead","waitContinue","processInput","parseCommand","printVersion","readNetworks","writeMessage","blinkVersion","cityNameRead","readMessage","setDataMode","parsePacket","isListening","setBitOrder","beginPacket","isDirectory","motorsWrite","drawCompass","digitalRead","clearScreen","serialEvent","rightToLeft","setTextSize","leftToRight","requestFrom","keyReleased","compassRead","analogWrite","interrupts","WiFiServer","disconnect","playMelody","parseFloat","autoscroll","getPINUsed","setPINUsed","setTimeout","sendAnalog","readSlider","analogRead","beginWrite","createChar","motorsStop","keyPressed","tempoWrite","readButton","subnetMask","debugPrint","macAddress","writeGreen","randomSeed","attachGPRS","readString","sendString","remotePort","releaseAll","mouseMoved","
background","getXChange","getYChange","answerCall","getResult","voiceCall","endPacket","constrain","getSocket","writeJSON","getButton","available","connected","findUntil","readBytes","exitValue","readGreen","writeBlue","startLoop","IPAddress","isPressed","sendSysex","pauseMode","gatewayIP","setCursor","getOemKey","tuneWrite","noDisplay","loadImage","switchPIN","onRequest","onReceive","changePIN","playFile","noBuffer","parseInt","overflow","checkPIN","knobRead","beginTFT","bitClear","updateIR","bitWrite","position","writeRGB","highByte","writeRed","setSpeed","readBlue","noStroke","remoteIP","transfer","shutdown","hangCall","beginSMS","endWrite","attached","maintain","noCursor","checkReg","checkPUK","shiftOut","isValid","shiftIn","pulseIn","connect","println","localIP","pinMode","getIMEI","display","noBlink","process","getBand","running","beginSD","drawBMP","lowByte","setBand","release","bitRead","prepare","pointTo","readRed","setMode","noFill","remove","listen","stroke","detach","attach","noTone","exists","buffer","height","bitSet","circle","config","cursor","random","IRread","setDNS","endSMS","getKey","micros","millis","begin","print","write","ready","flush","width","isPIN","blink","clear","press","mkdir","rmdir","close","point","yield","image","BSSID","click","delay","read","text","move","peek","beep","rect","line","open","seek","fill","size","turn","stop","home","find","step","tone","sqrt","RSSI","SSID","end","bit","tan","cos","sin","pow","map","abs","max","min","get","run","put"],literal:["DIGITAL_MESSAGE","FIRMATA_STRING","ANALOG_MESSAGE","REPORT_DIGITAL","REPORT_ANALOG","INPUT_PULLUP","SET_PIN_MODE","INTERNAL2V56","SYSTEM_RESET","LED_BUILTIN","INTERNAL1V1","SYSEX_START","INTERNAL","EXTERNAL","DEFAULT","OUTPUT","INPUT","HIGH","LOW"]},o=t(n),s=o.keywords;return s.type=[...s.type,...i.type],s.literal=[...s.literal,...i.literal],s.built_in=[...s.built_in,...i.built_in],s._hints=i._hints,o.name="Arduino",o.aliases=["ino"],o.supersetOf="cpp",o}return Zu=e,Zu}var Ju,rh;function rOe(){if(rh)return Ju;rh=1;function t(e){const n={variants:[e.COMMENT("^[ \\t]*(?=#)","$",{relevance:0,excludeBegin:!0}),e.COMMENT("[;@]","$",{relevance:0}),e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]};return{name:"ARM Assembly",case_insensitive:!0,aliases:["arm"],keywords:{$pattern:"\\.?"+e.IDENT_RE,meta:".2byte .4byte .align .ascii .asciz .balign .byte .code .data .else .end .endif .endm .endr .equ .err .exitm .extern .global .hword .if .ifdef .ifndef .include .irp .long .macro .rept .req .section .set .skip .space .text .word .arm .thumb .code16 .code32 .force_thumb .thumb_func .ltorg ALIAS ALIGN ARM AREA ASSERT ATTR CN CODE CODE16 CODE32 COMMON CP DATA DCB DCD DCDU DCDO DCFD DCFDU DCI DCQ DCQU DCW DCWU DN ELIF ELSE END ENDFUNC ENDIF ENDP ENTRY EQU EXPORT EXPORTAS EXTERN FIELD FILL FUNCTION GBLA GBLL GBLS GET GLOBAL IF IMPORT INCBIN INCLUDE INFO KEEP LCLA LCLL LCLS LTORG MACRO MAP MEND MEXIT NOFP OPT PRESERVE8 PROC QN READONLY RELOC REQUIRE REQUIRE8 RLIST FN ROUT SETA SETL SETS SN SPACE SUBT THUMB THUMBX TTL WHILE WEND ",built_in:"r0 r1 r2 r3 r4 r5 r6 r7 r8 r9 r10 r11 r12 r13 r14 r15 pc lr sp ip sl sb fp a1 a2 a3 a4 v1 v2 v3 v4 v5 v6 v7 v8 f0 f1 f2 f3 f4 f5 f6 f7 p0 p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12 p13 p14 p15 c0 c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 c12 c13 c14 c15 q0 q1 q2 q3 q4 q5 q6 q7 q8 q9 q10 q11 q12 q13 q14 q15 cpsr_c cpsr_x cpsr_s cpsr_f cpsr_cx cpsr_cxs cpsr_xs cpsr_xsf cpsr_sf cpsr_cxsf spsr_c spsr_x spsr_s spsr_f spsr_cx spsr_cxs spsr_xs spsr_xsf spsr_sf spsr_cxsf s0 s1 s2 s3 s4 s5 s6 s7 s8 s9 s10 
s11 s12 s13 s14 s15 s16 s17 s18 s19 s20 s21 s22 s23 s24 s25 s26 s27 s28 s29 s30 s31 d0 d1 d2 d3 d4 d5 d6 d7 d8 d9 d10 d11 d12 d13 d14 d15 d16 d17 d18 d19 d20 d21 d22 d23 d24 d25 d26 d27 d28 d29 d30 d31 {PC} {VAR} {TRUE} {FALSE} {OPT} {CONFIG} {ENDIAN} {CODESIZE} {CPU} {FPU} {ARCHITECTURE} {PCSTOREOFFSET} {ARMASM_VERSION} {INTER} {ROPI} {RWPI} {SWST} {NOSWST} . @"},contains:[{className:"keyword",begin:"\\b(adc|(qd?|sh?|u[qh]?)?add(8|16)?|usada?8|(q|sh?|u[qh]?)?(as|sa)x|and|adrl?|sbc|rs[bc]|asr|b[lx]?|blx|bxj|cbn?z|tb[bh]|bic|bfc|bfi|[su]bfx|bkpt|cdp2?|clz|clrex|cmp|cmn|cpsi[ed]|cps|setend|dbg|dmb|dsb|eor|isb|it[te]{0,3}|lsl|lsr|ror|rrx|ldm(([id][ab])|f[ds])?|ldr((s|ex)?[bhd])?|movt?|mvn|mra|mar|mul|[us]mull|smul[bwt][bt]|smu[as]d|smmul|smmla|mla|umlaal|smlal?([wbt][bt]|d)|mls|smlsl?[ds]|smc|svc|sev|mia([bt]{2}|ph)?|mrr?c2?|mcrr2?|mrs|msr|orr|orn|pkh(tb|bt)|rbit|rev(16|sh)?|sel|[su]sat(16)?|nop|pop|push|rfe([id][ab])?|stm([id][ab])?|str(ex)?[bhd]?|(qd?)?sub|(sh?|q|u[qh]?)?sub(8|16)|[su]xt(a?h|a?b(16)?)|srs([id][ab])?|swpb?|swi|smi|tst|teq|wfe|wfi|yield)(eq|ne|cs|cc|mi|pl|vs|vc|hi|ls|ge|lt|gt|le|al|hs|lo)?[sptrx]?(?=\\s)"},n,e.QUOTE_STRING_MODE,{className:"string",begin:"'",end:"[^\\\\]'",relevance:0},{className:"title",begin:"\\|",end:"\\|",illegal:"\\n",relevance:0},{className:"number",variants:[{begin:"[#$=]?0x[0-9a-f]+"},{begin:"[#$=]?0b[01]+"},{begin:"[#$=]\\d+"},{begin:"\\b\\d+"}],relevance:0},{className:"symbol",variants:[{begin:"^[ \\t]*[a-z_\\.\\$][a-z0-9_\\.\\$]+:"},{begin:"^[a-z_\\.\\$][a-z0-9_\\.\\$]+"},{begin:"[=#]\\w+"}],relevance:0}]}}return Ju=t,Ju}var ju,ih;function iOe(){if(ih)return ju;ih=1;function t(e){const n=e.regex,i=n.concat(/[\p{L}_]/u,n.optional(/[\p{L}0-9_.-]*:/u),/[\p{L}0-9_.-]*/u),o=/[\p{L}0-9._:-]+/u,s={className:"symbol",begin:/&[a-z]+;|&#[0-9]+;|&#x[a-f0-9]+;/},l={begin:/\s/,contains:[{className:"keyword",begin:/#?[a-z_][a-z1-9_-]+/,illegal:/\n/}]},c=e.inherit(l,{begin:/\(/,end:/\)/}),d=e.inherit(e.APOS_STRING_MODE,{className:"string"}),_=e.inherit(e.QUOTE_STRING_MODE,{className:"string"}),p={endsWithParent:!0,illegal:/</,relevance:0,contains:[{className:"attr",begin:o,relevance:0},{begin:/=\s*/,relevance:0,contains:[{className:"string",endsParent:!0,variants:[{begin:/"/,end:/"/,contains:[s]},{begin:/'/,end:/'/,contains:[s]},{begin:/[^\s"'=<>`]+/}]}]}]};return{name:"HTML, XML",aliases:["html","xhtml","rss","atom","xjb","xsd","xsl","plist","wsf","svg"],case_insensitive:!0,unicodeRegex:!0,contains:[{className:"meta",begin:/<![a-z]/,end:/>/,relevance:10,contains:[l,_,d,c,{begin:/\[/,end:/\]/,contains:[{className:"meta",begin:/<![a-z]/,end:/>/,contains:[l,c,_,d]}]}]},e.COMMENT(/<!--/,/-->/,{relevance:10}),{begin:/<!\[CDATA\[/,end:/\]\]>/,relevance:10},s,{className:"meta",end:/\?>/,variants:[{begin:/<\?xml/,relevance:10,contains:[_]},{begin:/<\?[a-z][a-z0-9]+/}]},{className:"tag",begin:/<style(?=\s|>)/,end:/>/,keywords:{name:"style"},contains:[p],starts:{end:/<\/style>/,returnEnd:!0,subLanguage:["css","xml"]}},{className:"tag",begin:/<script(?=\s|>)/,end:/>/,keywords:{name:"script"},contains:[p],starts:{end:/<\/script>/,returnEnd:!0,subLanguage:["javascript","handlebars","xml"]}},{className:"tag",begin:/<>|<\/>/},{className:"tag",begin:n.concat(/</,n.lookahead(n.concat(i,n.either(/\/>/,/>/,/\s/)))),end:/\/?>/,contains:[{className:"name",begin:i,relevance:0,starts:p}]},{className:"tag",begin:n.concat(/<\//,n.lookahead(n.concat(i,/>/))),contains:[{className:"name",begin:i,relevance:0},{begin:/>/,relevance:0,endsParent:!0}]}]}}return ju=t,ju}var ed,ah;function aOe(){if(ah)return ed;ah=1;function t(e){const n=e.regex,i={begin:"^'{3,}[ 
\\t]*$",relevance:10},o=[{begin:/\\[*_`]/},{begin:/\\\\\*{2}[^\n]*?\*{2}/},{begin:/\\\\_{2}[^\n]*_{2}/},{begin:/\\\\`{2}[^\n]*`{2}/},{begin:/[:;}][*_`](?![*_`])/}],s=[{className:"strong",begin:/\*{2}([^\n]+?)\*{2}/},{className:"strong",begin:n.concat(/\*\*/,/((\*(?!\*)|\\[^\n]|[^*\n\\])+\n)+/,/(\*(?!\*)|\\[^\n]|[^*\n\\])*/,/\*\*/),relevance:0},{className:"strong",begin:/\B\*(\S|\S[^\n]*?\S)\*(?!\w)/},{className:"strong",begin:/\*[^\s]([^\n]+\n)+([^\n]+)\*/}],l=[{className:"emphasis",begin:/_{2}([^\n]+?)_{2}/},{className:"emphasis",begin:n.concat(/__/,/((_(?!_)|\\[^\n]|[^_\n\\])+\n)+/,/(_(?!_)|\\[^\n]|[^_\n\\])*/,/__/),relevance:0},{className:"emphasis",begin:/\b_(\S|\S[^\n]*?\S)_(?!\w)/},{className:"emphasis",begin:/_[^\s]([^\n]+\n)+([^\n]+)_/},{className:"emphasis",begin:"\\B'(?!['\\s])",end:"(\\n{2}|')",contains:[{begin:"\\\\'\\w",relevance:0}],relevance:0}],c={className:"symbol",begin:"^(NOTE|TIP|IMPORTANT|WARNING|CAUTION):\\s+",relevance:10},d={className:"bullet",begin:"^(\\*+|-+|\\.+|[^\\n]+?::)\\s+"};return{name:"AsciiDoc",aliases:["adoc"],contains:[e.COMMENT("^/{4,}\\n","\\n/{4,}$",{relevance:10}),e.COMMENT("^//","$",{relevance:0}),{className:"title",begin:"^\\.\\w.*$"},{begin:"^[=\\*]{4,}\\n",end:"\\n^[=\\*]{4,}$",relevance:10},{className:"section",relevance:10,variants:[{begin:"^(={1,6})[ ].+?([ ]\\1)?$"},{begin:"^[^\\[\\]\\n]+?\\n[=\\-~\\^\\+]{2,}$"}]},{className:"meta",begin:"^:.+?:",end:"\\s",excludeEnd:!0,relevance:10},{className:"meta",begin:"^\\[.+?\\]$",relevance:0},{className:"quote",begin:"^_{4,}\\n",end:"\\n_{4,}$",relevance:10},{className:"code",begin:"^[\\-\\.]{4,}\\n",end:"\\n[\\-\\.]{4,}$",relevance:10},{begin:"^\\+{4,}\\n",end:"\\n\\+{4,}$",contains:[{begin:"<",end:">",subLanguage:"xml",relevance:0}],relevance:10},d,c,...o,...s,...l,{className:"string",variants:[{begin:"``.+?''"},{begin:"`.+?'"}]},{className:"code",begin:/`{2}/,end:/(\n{2}|`{2})/},{className:"code",begin:"(`.+?`|\\+.+?\\+)",relevance:0},{className:"code",begin:"^[ \\t]",end:"$",relevance:0},i,{begin:"(link:)?(http|https|ftp|file|irc|image:?):\\S+?\\[[^[]*?\\]",returnBegin:!0,contains:[{begin:"(link|image:?):",relevance:0},{className:"link",begin:"\\w",end:"[^\\[]+",relevance:0},{className:"string",begin:"\\[",end:"\\]",excludeBegin:!0,excludeEnd:!0,relevance:0}],relevance:10}]}}return ed=t,ed}var td,oh;function oOe(){if(oh)return td;oh=1;function t(e){const 
n=e.regex,i=["false","synchronized","int","abstract","float","private","char","boolean","static","null","if","const","for","true","while","long","throw","strictfp","finally","protected","import","native","final","return","void","enum","else","extends","implements","break","transient","new","catch","instanceof","byte","super","volatile","case","assert","short","package","default","double","public","try","this","switch","continue","throws","privileged","aspectOf","adviceexecution","proceed","cflowbelow","cflow","initialization","preinitialization","staticinitialization","withincode","target","within","execution","getWithinTypeName","handler","thisJoinPoint","thisJoinPointStaticPart","thisEnclosingJoinPointStaticPart","declare","parents","warning","error","soft","precedence","thisAspectInstance"],o=["get","set","args","call"];return{name:"AspectJ",keywords:i,illegal:/<\/|#/,contains:[e.COMMENT(/\/\*\*/,/\*\//,{relevance:0,contains:[{begin:/\w+@/,relevance:0},{className:"doctag",begin:/@[A-Za-z]+/}]}),e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{className:"class",beginKeywords:"aspect",end:/[{;=]/,excludeEnd:!0,illegal:/[:;"\[\]]/,contains:[{beginKeywords:"extends implements pertypewithin perthis pertarget percflowbelow percflow issingleton"},e.UNDERSCORE_TITLE_MODE,{begin:/\([^\)]*/,end:/[)]+/,keywords:i.concat(o),excludeEnd:!1}]},{className:"class",beginKeywords:"class interface",end:/[{;=]/,excludeEnd:!0,relevance:0,keywords:"class interface",illegal:/[:"\[\]]/,contains:[{beginKeywords:"extends implements"},e.UNDERSCORE_TITLE_MODE]},{beginKeywords:"pointcut after before around throwing returning",end:/[)]/,excludeEnd:!1,illegal:/["\[\]]/,contains:[{begin:n.concat(e.UNDERSCORE_IDENT_RE,/\s*\(/),returnBegin:!0,contains:[e.UNDERSCORE_TITLE_MODE]}]},{begin:/[:]/,returnBegin:!0,end:/[{;]/,relevance:0,excludeEnd:!1,keywords:i,illegal:/["\[\]]/,contains:[{begin:n.concat(e.UNDERSCORE_IDENT_RE,/\s*\(/),keywords:i.concat(o),relevance:0},e.QUOTE_STRING_MODE]},{beginKeywords:"new throw",relevance:0},{className:"function",begin:/\w+ +\w+(\.\w+)?\s*\([^\)]*\)\s*((throws)[\w\s,]+)?[\{;]/,returnBegin:!0,end:/[{;=]/,keywords:i,excludeEnd:!0,contains:[{begin:n.concat(e.UNDERSCORE_IDENT_RE,/\s*\(/),returnBegin:!0,relevance:0,contains:[e.UNDERSCORE_TITLE_MODE]},{className:"params",begin:/\(/,end:/\)/,relevance:0,keywords:i,contains:[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,e.C_NUMBER_MODE,e.C_BLOCK_COMMENT_MODE]},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},e.C_NUMBER_MODE,{className:"meta",begin:/@[A-Za-z]+/}]}}return td=t,td}var nd,sh;function sOe(){if(sh)return nd;sh=1;function t(e){const n={begin:"`[\\s\\S]"};return{name:"AutoHotkey",case_insensitive:!0,aliases:["ahk"],keywords:{keyword:"Break Continue Critical Exit ExitApp Gosub Goto New OnExit Pause return SetBatchLines SetTimer Suspend Thread Throw Until ahk_id ahk_class ahk_pid ahk_exe ahk_group",literal:"true false NOT AND OR",built_in:"ComSpec Clipboard ClipboardAll ErrorLevel"},contains:[n,e.inherit(e.QUOTE_STRING_MODE,{contains:[n]}),e.COMMENT(";","$",{relevance:0}),e.C_BLOCK_COMMENT_MODE,{className:"number",begin:e.NUMBER_RE,relevance:0},{className:"variable",begin:"%[a-zA-Z0-9#_$@]+%"},{className:"built_in",begin:"^\\s*\\w+\\s*(,|%)"},{className:"title",variants:[{begin:'^[^\\n";]+::(?!=)'},{begin:'^[^\\n";]+:(?!=)',relevance:0}]},{className:"meta",begin:"^\\s*#\\w+",end:"$",relevance:0},{className:"built_in",begin:"A_[a-zA-Z0-9]+"},{begin:",\\s*,"}]}}return nd=t,nd}var rd,lh;function lOe(){if(lh)return 
rd;lh=1;function t(e){const n="ByRef Case Const ContinueCase ContinueLoop Dim Do Else ElseIf EndFunc EndIf EndSelect EndSwitch EndWith Enum Exit ExitLoop For Func Global If In Local Next ReDim Return Select Static Step Switch Then To Until Volatile WEnd While With",i=["EndRegion","forcedef","forceref","ignorefunc","include","include-once","NoTrayIcon","OnAutoItStartRegister","pragma","Region","RequireAdmin","Tidy_Off","Tidy_On","Tidy_Parameters"],o="True False And Null Not Or Default",s="Abs ACos AdlibRegister AdlibUnRegister Asc AscW ASin Assign ATan AutoItSetOption AutoItWinGetTitle AutoItWinSetTitle Beep Binary BinaryLen BinaryMid BinaryToString BitAND BitNOT BitOR BitRotate BitShift BitXOR BlockInput Break Call CDTray Ceiling Chr ChrW ClipGet ClipPut ConsoleRead ConsoleWrite ConsoleWriteError ControlClick ControlCommand ControlDisable ControlEnable ControlFocus ControlGetFocus ControlGetHandle ControlGetPos ControlGetText ControlHide ControlListView ControlMove ControlSend ControlSetText ControlShow ControlTreeView Cos Dec DirCopy DirCreate DirGetSize DirMove DirRemove DllCall DllCallAddress DllCallbackFree DllCallbackGetPtr DllCallbackRegister DllClose DllOpen DllStructCreate DllStructGetData DllStructGetPtr DllStructGetSize DllStructSetData DriveGetDrive DriveGetFileSystem DriveGetLabel DriveGetSerial DriveGetType DriveMapAdd DriveMapDel DriveMapGet DriveSetLabel DriveSpaceFree DriveSpaceTotal DriveStatus EnvGet EnvSet EnvUpdate Eval Execute Exp FileChangeDir FileClose FileCopy FileCreateNTFSLink FileCreateShortcut FileDelete FileExists FileFindFirstFile FileFindNextFile FileFlush FileGetAttrib FileGetEncoding FileGetLongName FileGetPos FileGetShortcut FileGetShortName FileGetSize FileGetTime FileGetVersion FileInstall FileMove FileOpen FileOpenDialog FileRead FileReadLine FileReadToArray FileRecycle FileRecycleEmpty FileSaveDialog FileSelectFolder FileSetAttrib FileSetEnd FileSetPos FileSetTime FileWrite FileWriteLine Floor FtpSetProxy FuncName GUICreate GUICtrlCreateAvi GUICtrlCreateButton GUICtrlCreateCheckbox GUICtrlCreateCombo GUICtrlCreateContextMenu GUICtrlCreateDate GUICtrlCreateDummy GUICtrlCreateEdit GUICtrlCreateGraphic GUICtrlCreateGroup GUICtrlCreateIcon GUICtrlCreateInput GUICtrlCreateLabel GUICtrlCreateList GUICtrlCreateListView GUICtrlCreateListViewItem GUICtrlCreateMenu GUICtrlCreateMenuItem GUICtrlCreateMonthCal GUICtrlCreateObj GUICtrlCreatePic GUICtrlCreateProgress GUICtrlCreateRadio GUICtrlCreateSlider GUICtrlCreateTab GUICtrlCreateTabItem GUICtrlCreateTreeView GUICtrlCreateTreeViewItem GUICtrlCreateUpdown GUICtrlDelete GUICtrlGetHandle GUICtrlGetState GUICtrlRead GUICtrlRecvMsg GUICtrlRegisterListViewSort GUICtrlSendMsg GUICtrlSendToDummy GUICtrlSetBkColor GUICtrlSetColor GUICtrlSetCursor GUICtrlSetData GUICtrlSetDefBkColor GUICtrlSetDefColor GUICtrlSetFont GUICtrlSetGraphic GUICtrlSetImage GUICtrlSetLimit GUICtrlSetOnEvent GUICtrlSetPos GUICtrlSetResizing GUICtrlSetState GUICtrlSetStyle GUICtrlSetTip GUIDelete GUIGetCursorInfo GUIGetMsg GUIGetStyle GUIRegisterMsg GUISetAccelerators GUISetBkColor GUISetCoord GUISetCursor GUISetFont GUISetHelp GUISetIcon GUISetOnEvent GUISetState GUISetStyle GUIStartGroup GUISwitch Hex HotKeySet HttpSetProxy HttpSetUserAgent HWnd InetClose InetGet InetGetInfo InetGetSize InetRead IniDelete IniRead IniReadSection IniReadSectionNames IniRenameSection IniWrite IniWriteSection InputBox Int IsAdmin IsArray IsBinary IsBool IsDeclared IsDllStruct IsFloat IsFunc IsHWnd IsInt IsKeyword IsNumber IsObj IsPtr IsString Log MemGetStats Mod 
MouseClick MouseClickDrag MouseDown MouseGetCursor MouseGetPos MouseMove MouseUp MouseWheel MsgBox Number ObjCreate ObjCreateInterface ObjEvent ObjGet ObjName OnAutoItExitRegister OnAutoItExitUnRegister Ping PixelChecksum PixelGetColor PixelSearch ProcessClose ProcessExists ProcessGetStats ProcessList ProcessSetPriority ProcessWait ProcessWaitClose ProgressOff ProgressOn ProgressSet Ptr Random RegDelete RegEnumKey RegEnumVal RegRead RegWrite Round Run RunAs RunAsWait RunWait Send SendKeepActive SetError SetExtended ShellExecute ShellExecuteWait Shutdown Sin Sleep SoundPlay SoundSetWaveVolume SplashImageOn SplashOff SplashTextOn Sqrt SRandom StatusbarGetText StderrRead StdinWrite StdioClose StdoutRead String StringAddCR StringCompare StringFormat StringFromASCIIArray StringInStr StringIsAlNum StringIsAlpha StringIsASCII StringIsDigit StringIsFloat StringIsInt StringIsLower StringIsSpace StringIsUpper StringIsXDigit StringLeft StringLen StringLower StringMid StringRegExp StringRegExpReplace StringReplace StringReverse StringRight StringSplit StringStripCR StringStripWS StringToASCIIArray StringToBinary StringTrimLeft StringTrimRight StringUpper Tan TCPAccept TCPCloseSocket TCPConnect TCPListen TCPNameToIP TCPRecv TCPSend TCPShutdown, UDPShutdown TCPStartup, UDPStartup TimerDiff TimerInit ToolTip TrayCreateItem TrayCreateMenu TrayGetMsg TrayItemDelete TrayItemGetHandle TrayItemGetState TrayItemGetText TrayItemSetOnEvent TrayItemSetState TrayItemSetText TraySetClick TraySetIcon TraySetOnEvent TraySetPauseIcon TraySetState TraySetToolTip TrayTip UBound UDPBind UDPCloseSocket UDPOpen UDPRecv UDPSend VarGetType WinActivate WinActive WinClose WinExists WinFlash WinGetCaretPos WinGetClassList WinGetClientSize WinGetHandle WinGetPos WinGetProcess WinGetState WinGetText WinGetTitle WinKill WinList WinMenuSelectItem WinMinimizeAll WinMinimizeAllUndo WinMove WinSetOnTop WinSetState WinSetTitle WinSetTrans WinWait WinWaitActive WinWaitClose WinWaitNotActive",l={variants:[e.COMMENT(";","$",{relevance:0}),e.COMMENT("#cs","#ce"),e.COMMENT("#comments-start","#comments-end")]},c={begin:"\\$[A-z0-9_]+"},d={className:"string",variants:[{begin:/"/,end:/"/,contains:[{begin:/""/,relevance:0}]},{begin:/'/,end:/'/,contains:[{begin:/''/,relevance:0}]}]},_={variants:[e.BINARY_NUMBER_MODE,e.C_NUMBER_MODE]},p={className:"meta",begin:"#",end:"$",keywords:{keyword:i},contains:[{begin:/\\\n/,relevance:0},{beginKeywords:"include",keywords:{keyword:"include"},end:"$",contains:[d,{className:"string",variants:[{begin:"<",end:">"},{begin:/"/,end:/"/,contains:[{begin:/""/,relevance:0}]},{begin:/'/,end:/'/,contains:[{begin:/''/,relevance:0}]}]}]},d,l]},g={className:"symbol",begin:"@[A-z0-9_]+"},E={beginKeywords:"Func",end:"$",illegal:"\\$|\\[|%",contains:[e.inherit(e.UNDERSCORE_TITLE_MODE,{className:"title.function"}),{className:"params",begin:"\\(",end:"\\)",contains:[c,d,_]}]};return{name:"AutoIt",case_insensitive:!0,illegal:/\/\*/,keywords:{keyword:n,built_in:s,literal:o},contains:[l,c,d,_,p,g,E]}}return rd=t,rd}var id,ch;function cOe(){if(ch)return id;ch=1;function t(e){return{name:"AVR Assembly",case_insensitive:!0,keywords:{$pattern:"\\.?"+e.IDENT_RE,keyword:"adc add adiw and andi asr bclr bld brbc brbs brcc brcs break breq brge brhc brhs brid brie brlo brlt brmi brne brpl brsh brtc brts brvc brvs bset bst call cbi cbr clc clh cli cln clr cls clt clv clz com cp cpc cpi cpse dec eicall eijmp elpm eor fmul fmuls fmulsu icall ijmp in inc jmp ld ldd ldi lds lpm lsl lsr mov movw mul muls mulsu neg nop or ori out pop push rcall 
ret reti rjmp rol ror sbc sbr sbrc sbrs sec seh sbi sbci sbic sbis sbiw sei sen ser ses set sev sez sleep spm st std sts sub subi swap tst wdr",built_in:"r0 r1 r2 r3 r4 r5 r6 r7 r8 r9 r10 r11 r12 r13 r14 r15 r16 r17 r18 r19 r20 r21 r22 r23 r24 r25 r26 r27 r28 r29 r30 r31 x|0 xh xl y|0 yh yl z|0 zh zl ucsr1c udr1 ucsr1a ucsr1b ubrr1l ubrr1h ucsr0c ubrr0h tccr3c tccr3a tccr3b tcnt3h tcnt3l ocr3ah ocr3al ocr3bh ocr3bl ocr3ch ocr3cl icr3h icr3l etimsk etifr tccr1c ocr1ch ocr1cl twcr twdr twar twsr twbr osccal xmcra xmcrb eicra spmcsr spmcr portg ddrg ping portf ddrf sreg sph spl xdiv rampz eicrb eimsk gimsk gicr eifr gifr timsk tifr mcucr mcucsr tccr0 tcnt0 ocr0 assr tccr1a tccr1b tcnt1h tcnt1l ocr1ah ocr1al ocr1bh ocr1bl icr1h icr1l tccr2 tcnt2 ocr2 ocdr wdtcr sfior eearh eearl eedr eecr porta ddra pina portb ddrb pinb portc ddrc pinc portd ddrd pind spdr spsr spcr udr0 ucsr0a ucsr0b ubrr0l acsr admux adcsr adch adcl porte ddre pine pinf",meta:".byte .cseg .db .def .device .dseg .dw .endmacro .equ .eseg .exit .include .list .listmac .macro .nolist .org .set"},contains:[e.C_BLOCK_COMMENT_MODE,e.COMMENT(";","$",{relevance:0}),e.C_NUMBER_MODE,e.BINARY_NUMBER_MODE,{className:"number",begin:"\\b(\\$[a-zA-Z0-9]+|0o[0-7]+)"},e.QUOTE_STRING_MODE,{className:"string",begin:"'",end:"[^\\\\]'",illegal:"[^\\\\][^']"},{className:"symbol",begin:"^[A-Za-z0-9_.$]+:"},{className:"meta",begin:"#",end:"$"},{className:"subst",begin:"@[0-9]+"}]}}return id=t,id}var ad,uh;function uOe(){if(uh)return ad;uh=1;function t(e){const n={className:"variable",variants:[{begin:/\$[\w\d#@][\w\d_]*/},{begin:/\$\{(.*?)\}/}]},i="BEGIN END if else while do for in break continue delete next nextfile function func exit|10",o={className:"string",contains:[e.BACKSLASH_ESCAPE],variants:[{begin:/(u|b)?r?'''/,end:/'''/,relevance:10},{begin:/(u|b)?r?"""/,end:/"""/,relevance:10},{begin:/(u|r|ur)'/,end:/'/,relevance:10},{begin:/(u|r|ur)"/,end:/"/,relevance:10},{begin:/(b|br)'/,end:/'/},{begin:/(b|br)"/,end:/"/},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]};return{name:"Awk",keywords:{keyword:i},contains:[n,o,e.REGEXP_MODE,e.HASH_COMMENT_MODE,e.NUMBER_MODE]}}return ad=t,ad}var od,dh;function dOe(){if(dh)return od;dh=1;function t(e){const 
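// AVR Assembly (cOe) and Awk (uOe) are completed above; dOe below defines
// X++ (Dynamics AX). Note the v11-style multi-match modes: in
// scope:{2:"title.class",4:"title.class.inherited"} the numbers refer to the
// 1-based capture groups of the combined match array, so the class name and
// the inherited class receive different highlight scopes from one regex.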
n=e.UNDERSCORE_IDENT_RE,l={keyword:["abstract","as","asc","avg","break","breakpoint","by","byref","case","catch","changecompany","class","client","client","common","const","continue","count","crosscompany","delegate","delete_from","desc","display","div","do","edit","else","eventhandler","exists","extends","final","finally","firstfast","firstonly","firstonly1","firstonly10","firstonly100","firstonly1000","flush","for","forceliterals","forcenestedloop","forceplaceholders","forceselectorder","forupdate","from","generateonly","group","hint","if","implements","in","index","insert_recordset","interface","internal","is","join","like","maxof","minof","mod","namespace","new","next","nofetch","notexists","optimisticlock","order","outer","pessimisticlock","print","private","protected","public","readonly","repeatableread","retry","return","reverse","select","server","setting","static","sum","super","switch","this","throw","try","ttsabort","ttsbegin","ttscommit","unchecked","update_recordset","using","validtimestate","void","where","while"],built_in:["anytype","boolean","byte","char","container","date","double","enum","guid","int","int64","long","real","short","str","utcdatetime","var"],literal:["default","false","null","true"]},c={variants:[{match:[/(class|interface)\s+/,n,/\s+(extends|implements)\s+/,n]},{match:[/class\s+/,n]}],scope:{2:"title.class",4:"title.class.inherited"},keywords:l};return{name:"X++",aliases:["x++"],keywords:l,contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,e.C_NUMBER_MODE,{className:"meta",begin:"#",end:"$"},c]}}return od=t,od}var sd,_h;function _Oe(){if(_h)return sd;_h=1;function t(e){const n=e.regex,i={},o={begin:/\$\{/,end:/\}/,contains:["self",{begin:/:-/,contains:[i]}]};Object.assign(i,{className:"variable",variants:[{begin:n.concat(/\$[\w\d#@][\w\d_]*/,"(?![\\w\\d])(?![$])")},o]});const s={className:"subst",begin:/\$\(/,end:/\)/,contains:[e.BACKSLASH_ESCAPE]},l={begin:/<<-?\s*(?=\w+)/,starts:{contains:[e.END_SAME_AS_BEGIN({begin:/(\w+)/,end:/(\w+)/,className:"string"})]}},c={className:"string",begin:/"/,end:/"/,contains:[e.BACKSLASH_ESCAPE,i,s]};s.contains.push(c);const 
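// X++ (dOe) ends above; _Oe below is the Bash grammar. Two details worth
// flagging: the heredoc mode pairs e.END_SAME_AS_BEGIN over /(\w+)/ so that
// "<<EOF ... EOF" only closes on the same delimiter word that opened it, and
// ${VAR:-default} substitution works by letting the brace mode contain
// itself (i and o reference each other), so nested expansions highlight too.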
d={className:"",begin:/\\"/},_={className:"string",begin:/'/,end:/'/},p={begin:/\$?\(\(/,end:/\)\)/,contains:[{begin:/\d+#[0-9a-f]+/,className:"number"},e.NUMBER_MODE,i]},g=["fish","bash","zsh","sh","csh","ksh","tcsh","dash","scsh"],E=e.SHEBANG({binary:`(${g.join("|")})`,relevance:10}),f={className:"function",begin:/\w[\w\d_]*\s*\(\s*\)\s*\{/,returnBegin:!0,contains:[e.inherit(e.TITLE_MODE,{begin:/\w[\w\d_]*/})],relevance:0},S=["if","then","else","elif","fi","for","while","until","in","do","done","case","esac","function","select"],C=["true","false"],h={match:/(\/[a-z._-]+)+/},T=["break","cd","continue","eval","exec","exit","export","getopts","hash","pwd","readonly","return","shift","test","times","trap","umask","unset"],N=["alias","bind","builtin","caller","command","declare","echo","enable","help","let","local","logout","mapfile","printf","read","readarray","source","type","typeset","ulimit","unalias"],y=["autoload","bg","bindkey","bye","cap","chdir","clone","comparguments","compcall","compctl","compdescribe","compfiles","compgroups","compquote","comptags","comptry","compvalues","dirs","disable","disown","echotc","echoti","emulate","fc","fg","float","functions","getcap","getln","history","integer","jobs","kill","limit","log","noglob","popd","print","pushd","pushln","rehash","sched","setcap","setopt","stat","suspend","ttyctl","unfunction","unhash","unlimit","unsetopt","vared","wait","whence","where","which","zcompile","zformat","zftp","zle","zmodload","zparseopts","zprof","zpty","zregexparse","zsocket","zstyle","ztcp"],x=["chcon","chgrp","chown","chmod","cp","dd","df","dir","dircolors","ln","ls","mkdir","mkfifo","mknod","mktemp","mv","realpath","rm","rmdir","shred","sync","touch","truncate","vdir","b2sum","base32","base64","cat","cksum","comm","csplit","cut","expand","fmt","fold","head","join","md5sum","nl","numfmt","od","paste","ptx","pr","sha1sum","sha224sum","sha256sum","sha384sum","sha512sum","shuf","sort","split","sum","tac","tail","tr","tsort","unexpand","uniq","wc","arch","basename","chroot","date","dirname","du","echo","env","expr","factor","groups","hostid","id","link","logname","nice","nohup","nproc","pathchk","pinky","printenv","printf","pwd","readlink","runcon","seq","sleep","stat","stdbuf","stty","tee","test","timeout","tty","uname","unlink","uptime","users","who","whoami","yes"];return{name:"Bash",aliases:["sh"],keywords:{$pattern:/\b[a-z][a-z0-9._-]+\b/,keyword:S,literal:C,built_in:[...T,...N,"set","shopt",...y,...x]},contains:[E,e.SHEBANG(),f,p,e.HASH_COMMENT_MODE,l,h,c,d,_,i]}}return sd=t,sd}var ld,ph;function pOe(){if(ph)return ld;ph=1;function 
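// Bash (_Oe) completes above; pOe below is classic BASIC. Throughout these
// grammars, keywords carry an optional relevance weight after a pipe:
// "GOSUB|10" makes a hit count heavily toward language auto-detection, while
// "FOR|0" silences a too-common word. A hedged sketch of that convention
// (parseKeyword is illustrative, not the bundle's own parser):
//
//   // "NAME|N" -> name plus weight N; the weight defaults to 1 when omitted
//   const parseKeyword = (kw) => {
//     const [name, weight] = kw.split('|');
//     return { name, relevance: weight ? Number(weight) : 1 };
//   };
//   parseKeyword('GOSUB|10');  // { name: 'GOSUB', relevance: 10 }
//   parseKeyword('WEND');      // { name: 'WEND',  relevance: 1 }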
t(e){return{name:"BASIC",case_insensitive:!0,illegal:"^.",keywords:{$pattern:"[a-zA-Z][a-zA-Z0-9_$%!#]*",keyword:["ABS","ASC","AND","ATN","AUTO|0","BEEP","BLOAD|10","BSAVE|10","CALL","CALLS","CDBL","CHAIN","CHDIR","CHR$|10","CINT","CIRCLE","CLEAR","CLOSE","CLS","COLOR","COM","COMMON","CONT","COS","CSNG","CSRLIN","CVD","CVI","CVS","DATA","DATE$","DEFDBL","DEFINT","DEFSNG","DEFSTR","DEF|0","SEG","USR","DELETE","DIM","DRAW","EDIT","END","ENVIRON","ENVIRON$","EOF","EQV","ERASE","ERDEV","ERDEV$","ERL","ERR","ERROR","EXP","FIELD","FILES","FIX","FOR|0","FRE","GET","GOSUB|10","GOTO","HEX$","IF","THEN","ELSE|0","INKEY$","INP","INPUT","INPUT#","INPUT$","INSTR","IMP","INT","IOCTL","IOCTL$","KEY","ON","OFF","LIST","KILL","LEFT$","LEN","LET","LINE","LLIST","LOAD","LOC","LOCATE","LOF","LOG","LPRINT","USING","LSET","MERGE","MID$","MKDIR","MKD$","MKI$","MKS$","MOD","NAME","NEW","NEXT","NOISE","NOT","OCT$","ON","OR","PEN","PLAY","STRIG","OPEN","OPTION","BASE","OUT","PAINT","PALETTE","PCOPY","PEEK","PMAP","POINT","POKE","POS","PRINT","PRINT]","PSET","PRESET","PUT","RANDOMIZE","READ","REM","RENUM","RESET|0","RESTORE","RESUME","RETURN|0","RIGHT$","RMDIR","RND","RSET","RUN","SAVE","SCREEN","SGN","SHELL","SIN","SOUND","SPACE$","SPC","SQR","STEP","STICK","STOP","STR$","STRING$","SWAP","SYSTEM","TAB","TAN","TIME$","TIMER","TROFF","TRON","TO","USR","VAL","VARPTR","VARPTR$","VIEW","WAIT","WHILE","WEND","WIDTH","WINDOW","WRITE","XOR"]},contains:[e.QUOTE_STRING_MODE,e.COMMENT("REM","$",{relevance:10}),e.COMMENT("'","$",{relevance:0}),{className:"symbol",begin:"^[0-9]+ ",relevance:10},{className:"number",begin:"\\b\\d+(\\.\\d+)?([edED]\\d+)?[#!]?",relevance:0},{className:"number",begin:"(&[hH][0-9a-fA-F]{1,4})"},{className:"number",begin:"(&[oO][0-7]{1,6})"}]}}return ld=t,ld}var cd,mh;function mOe(){if(mh)return cd;mh=1;function t(e){return{name:"Backus–Naur Form",contains:[{className:"attribute",begin:/</,end:/>/},{begin:/::=/,end:/$/,contains:[{begin:/</,end:/>/},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]}]}}return cd=t,cd}var ud,gh;function gOe(){if(gh)return ud;gh=1;function t(e){const n={className:"literal",begin:/[+-]+/,relevance:0};return{name:"Brainfuck",aliases:["bf"],contains:[e.COMMENT(/[^\[\]\.,\+\-<> \r\n]/,/[\[\]\.,\+\-<> \r\n]/,{contains:[{match:/[ ]+[^\[\]\.,\+\-<> \r\n]/,relevance:0}],returnEnd:!0,relevance:0}),{className:"title",begin:"[\\[\\]]",relevance:0},{className:"string",begin:"[\\.,]",relevance:0},{begin:/(?=\+\+|--)/,contains:[n]},n]}}return ud=t,ud}var dd,Eh;function EOe(){if(Eh)return dd;Eh=1;function t(e){const n=e.regex,i=e.COMMENT("//","$",{contains:[{begin:/\\\n/}]}),o="decltype\\(auto\\)",s="[a-zA-Z_]\\w*::",l="<[^<>]+>",c="("+o+"|"+n.optional(s)+"[a-zA-Z_]\\w*"+n.optional(l)+")",d={className:"type",variants:[{begin:"\\b[a-z\\d_]*_t\\b"},{match:/\batomic_[a-z]{3,6}\b/}]},_="\\\\(x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4,8}|[0-7]{3}|\\S)",p={className:"string",variants:[{begin:'(u8?|U|L)?"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE]},{begin:"(u8?|U|L)?'("+_+"|.)",end:"'",illegal:"."},e.END_SAME_AS_BEGIN({begin:/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,end:/\)([^()\\ ]{0,16})"/})]},g={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)((ll|LL|l|L)(u|U)?|(u|U)(ll|LL|l|L)?|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"}],relevance:0},E={className:"meta",begin:/#\s*[a-z]+\b/,end:/$/,keywords:{keyword:"if else elif endif define undef warning error line pragma _Pragma 
ifdef ifndef include"},contains:[{begin:/\\\n/,relevance:0},e.inherit(p,{className:"string"}),{className:"string",begin:/<.*?>/},i,e.C_BLOCK_COMMENT_MODE]},f={className:"title",begin:n.optional(s)+e.IDENT_RE,relevance:0},S=n.optional(s)+e.IDENT_RE+"\\s*\\(",T={keyword:["asm","auto","break","case","continue","default","do","else","enum","extern","for","fortran","goto","if","inline","register","restrict","return","sizeof","struct","switch","typedef","union","volatile","while","_Alignas","_Alignof","_Atomic","_Generic","_Noreturn","_Static_assert","_Thread_local","alignas","alignof","noreturn","static_assert","thread_local","_Pragma"],type:["float","double","signed","unsigned","int","short","long","char","void","_Bool","_Complex","_Imaginary","_Decimal32","_Decimal64","_Decimal128","const","static","complex","bool","imaginary"],literal:"true false NULL",built_in:"std string wstring cin cout cerr clog stdin stdout stderr stringstream istringstream ostringstream auto_ptr deque list queue stack vector map set pair bitset multiset multimap unordered_set unordered_map unordered_multiset unordered_multimap priority_queue make_pair array shared_ptr abort terminate abs acos asin atan2 atan calloc ceil cosh cos exit exp fabs floor fmod fprintf fputs free frexp fscanf future isalnum isalpha iscntrl isdigit isgraph islower isprint ispunct isspace isupper isxdigit tolower toupper labs ldexp log10 log malloc realloc memchr memcmp memcpy memset modf pow printf putchar puts scanf sinh sin snprintf sprintf sqrt sscanf strcat strchr strcmp strcpy strcspn strlen strncat strncmp strncpy strpbrk strrchr strspn strstr tanh tan vfprintf vprintf vsprintf endl initializer_list unique_ptr"},N=[E,d,i,e.C_BLOCK_COMMENT_MODE,g,p],y={variants:[{begin:/=/,end:/;/},{begin:/\(/,end:/\)/},{beginKeywords:"new throw return else",end:/;/}],keywords:T,contains:N.concat([{begin:/\(/,end:/\)/,keywords:T,contains:N.concat(["self"]),relevance:0}]),relevance:0},x={begin:"("+c+"[\\*&\\s]+)+"+S,returnBegin:!0,end:/[{;=]/,excludeEnd:!0,keywords:T,illegal:/[^\w\s\*&:<>.]/,contains:[{begin:o,keywords:T,relevance:0},{begin:S,returnBegin:!0,contains:[e.inherit(f,{className:"title.function"})],relevance:0},{relevance:0,match:/,/},{className:"params",begin:/\(/,end:/\)/,keywords:T,relevance:0,contains:[i,e.C_BLOCK_COMMENT_MODE,p,g,d,{begin:/\(/,end:/\)/,keywords:T,relevance:0,contains:["self",i,e.C_BLOCK_COMMENT_MODE,p,g,d]}]},d,i,e.C_BLOCK_COMMENT_MODE,E]};return{name:"C",aliases:["h"],keywords:T,disableAutodetect:!0,illegal:"</",contains:[].concat(y,x,N,[E,{begin:e.IDENT_RE+"::",keywords:T},{className:"class",beginKeywords:"enum class struct union",end:/[{;:<>=]/,contains:[{beginKeywords:"final class struct"},e.TITLE_MODE]}]),exports:{preprocessor:E,strings:p,keywords:T}}}return dd=t,dd}var _d,fh;function fOe(){if(fh)return _d;fh=1;function t(e){const n=e.regex,i=["div","mod","in","and","or","not","xor","asserterror","begin","case","do","downto","else","end","exit","for","local","if","of","repeat","then","to","until","while","with","var"],o="false 
true",s=[e.C_LINE_COMMENT_MODE,e.COMMENT(/\{/,/\}/,{relevance:0}),e.COMMENT(/\(\*/,/\*\)/,{relevance:10})],l={className:"string",begin:/'/,end:/'/,contains:[{begin:/''/}]},c={className:"string",begin:/(#\d+)+/},d={className:"number",begin:"\\b\\d+(\\.\\d+)?(DT|D|T)",relevance:0},_={className:"string",begin:'"',end:'"'},p={match:[/procedure/,/\s+/,/[a-zA-Z_][\w@]*/,/\s*/],scope:{1:"keyword",3:"title.function"},contains:[{className:"params",begin:/\(/,end:/\)/,keywords:i,contains:[l,c,e.NUMBER_MODE]},...s]},g=["Table","Form","Report","Dataport","Codeunit","XMLport","MenuSuite","Page","Query"],E={match:[/OBJECT/,/\s+/,n.either(...g),/\s+/,/\d+/,/\s+(?=[^\s])/,/.*/,/$/],relevance:3,scope:{1:"keyword",3:"type",5:"number",7:"title"}};return{name:"C/AL",case_insensitive:!0,keywords:{keyword:i,literal:o},illegal:/\/\*/,contains:[{match:/[\w]+(?=\=)/,scope:"attribute",relevance:0},l,c,d,_,e.NUMBER_MODE,E,p]}}return _d=t,_d}var pd,Sh;function SOe(){if(Sh)return pd;Sh=1;function t(e){const n=["struct","enum","interface","union","group","import","using","const","annotation","extends","in","of","on","as","with","from","fixed"],i=["Void","Bool","Int8","Int16","Int32","Int64","UInt8","UInt16","UInt32","UInt64","Float32","Float64","Text","Data","AnyPointer","AnyStruct","Capability","List"],o=["true","false"],s={variants:[{match:[/(struct|enum|interface)/,/\s+/,e.IDENT_RE]},{match:[/extends/,/\s*\(/,e.IDENT_RE,/\s*\)/]}],scope:{1:"keyword",3:"title.class"}};return{name:"Cap’n Proto",aliases:["capnp"],keywords:{keyword:n,type:i,literal:o},contains:[e.QUOTE_STRING_MODE,e.NUMBER_MODE,e.HASH_COMMENT_MODE,{className:"meta",begin:/@0x[\w\d]{16};/,illegal:/\n/},{className:"symbol",begin:/@\d+\b/},s]}}return pd=t,pd}var md,bh;function bOe(){if(bh)return md;bh=1;function t(e){const n=["assembly","module","package","import","alias","class","interface","object","given","value","assign","void","function","new","of","extends","satisfies","abstracts","in","out","return","break","continue","throw","assert","dynamic","if","else","switch","case","for","while","try","catch","finally","then","let","this","outer","super","is","exists","nonempty"],i=["shared","abstract","formal","default","actual","variable","late","native","deprecated","final","sealed","annotation","suppressWarnings","small"],o=["doc","by","license","see","throws","tagged"],s={className:"subst",excludeBegin:!0,excludeEnd:!0,begin:/``/,end:/``/,keywords:n,relevance:10},l=[{className:"string",begin:'"""',end:'"""',relevance:10},{className:"string",begin:'"',end:'"',contains:[s]},{className:"string",begin:"'",end:"'"},{className:"number",begin:"#[0-9a-fA-F_]+|\\$[01_]+|[0-9_]+(?:\\.[0-9_](?:[eE][+-]?\\d+)?)?[kMGTPmunpf]?",relevance:0}];return s.contains=l,{name:"Ceylon",keywords:{keyword:n.concat(i),meta:o},illegal:"\\$[^01]|#[^0-9a-fA-F]",contains:[e.C_LINE_COMMENT_MODE,e.COMMENT("/\\*","\\*/",{contains:["self"]}),{className:"meta",begin:'@[a-z]\\w*(?::"[^"]*")?'}].concat(l)}}return md=t,md}var gd,hh;function hOe(){if(hh)return gd;hh=1;function t(e){return{name:"Clean",aliases:["icl","dcl"],keywords:{keyword:["if","let","in","with","where","case","of","class","instance","otherwise","implementation","definition","system","module","from","import","qualified","as","special","code","inline","foreign","export","ccall","stdcall","generic","derive","infix","infixl","infixr"],built_in:"Int Real Char Bool",literal:"True 
False"},contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,e.C_NUMBER_MODE,{begin:"->|<-[|:]?|#!?|>>=|\\{\\||\\|\\}|:==|=:|<>"}]}}return gd=t,gd}var Ed,Th;function TOe(){if(Th)return Ed;Th=1;function t(e){const n="a-zA-Z_\\-!.?+*=<>&'",i="[#]?["+n+"]["+n+"0-9/;:$#]*",o="def defonce defprotocol defstruct defmulti defmethod defn- defn defmacro deftype defrecord",s={$pattern:i,built_in:o+" cond apply if-not if-let if not not= =|0 <|0 >|0 <=|0 >=|0 ==|0 +|0 /|0 *|0 -|0 rem quot neg? pos? delay? symbol? keyword? true? false? integer? empty? coll? list? set? ifn? fn? associative? sequential? sorted? counted? reversible? number? decimal? class? distinct? isa? float? rational? reduced? ratio? odd? even? char? seq? vector? string? map? nil? contains? zero? instance? not-every? not-any? libspec? -> ->> .. . inc compare do dotimes mapcat take remove take-while drop letfn drop-last take-last drop-while while intern condp case reduced cycle split-at split-with repeat replicate iterate range merge zipmap declare line-seq sort comparator sort-by dorun doall nthnext nthrest partition eval doseq await await-for let agent atom send send-off release-pending-sends add-watch mapv filterv remove-watch agent-error restart-agent set-error-handler error-handler set-error-mode! error-mode shutdown-agents quote var fn loop recur throw try monitor-enter monitor-exit macroexpand macroexpand-1 for dosync and or when when-not when-let comp juxt partial sequence memoize constantly complement identity assert peek pop doto proxy first rest cons cast coll last butlast sigs reify second ffirst fnext nfirst nnext meta with-meta ns in-ns create-ns import refer keys select-keys vals key val rseq name namespace promise into transient persistent! conj! assoc! dissoc! pop! disj! use class type num float double short byte boolean bigint biginteger bigdec print-method print-dup throw-if printf format load compile get-in update-in pr pr-on newline flush read slurp read-line subvec with-open memfn time re-find re-groups rand-int rand mod locking assert-valid-fdecl alias resolve ref deref refset swap! reset! set-validator! compare-and-set! alter-meta! reset-meta! commute get-validator alter ref-set ref-history-count ref-min-history ref-max-history ensure sync io! new next conj set! to-array future future-call into-array aset gen-class reduce map filter find empty hash-map hash-set sorted-map sorted-map-by sorted-set sorted-set-by vec vector seq flatten reverse assoc dissoc list disj get union difference intersection extend extend-type extend-protocol int nth delay count concat chunk chunk-buffer chunk-append chunk-first chunk-rest max min dec unchecked-inc-int unchecked-inc unchecked-dec-inc unchecked-dec unchecked-negate unchecked-add-int unchecked-add unchecked-subtract-int unchecked-subtract chunk-next chunk-cons chunked-seq? 
prn vary-meta lazy-seq spread list* str find-keyword keyword symbol gensym force rationalize"},l={begin:i,relevance:0},c={scope:"number",relevance:0,variants:[{match:/[-+]?0[xX][0-9a-fA-F]+N?/},{match:/[-+]?0[0-7]+N?/},{match:/[-+]?[1-9][0-9]?[rR][0-9a-zA-Z]+N?/},{match:/[-+]?[0-9]+\/[0-9]+N?/},{match:/[-+]?[0-9]+((\.[0-9]*([eE][+-]?[0-9]+)?M?)|([eE][+-]?[0-9]+M?|M))/},{match:/[-+]?([1-9][0-9]*|0)N?/}]},d={scope:"character",variants:[{match:/\\o[0-3]?[0-7]{1,2}/},{match:/\\u[0-9a-fA-F]{4}/},{match:/\\(newline|space|tab|formfeed|backspace|return)/},{match:/\\\S/,relevance:0}]},_={scope:"regex",begin:/#"/,end:/"/,contains:[e.BACKSLASH_ESCAPE]},p=e.inherit(e.QUOTE_STRING_MODE,{illegal:null}),g={scope:"punctuation",match:/,/,relevance:0},E=e.COMMENT(";","$",{relevance:0}),f={className:"literal",begin:/\b(true|false|nil)\b/},S={begin:"\\[|(#::?"+i+")?\\{",end:"[\\]\\}]",relevance:0},C={className:"symbol",begin:"[:]{1,2}"+i},h={begin:"\\(",end:"\\)"},T={endsWithParent:!0,relevance:0},N={keywords:s,className:"name",begin:i,relevance:0,starts:T},y=[g,h,d,_,p,E,C,S,c,f,l],x={beginKeywords:o,keywords:{$pattern:i,keyword:o},end:'(\\[|#|\\d|"|:|\\{|\\)|\\(|$)',contains:[{className:"title",begin:i,relevance:0,excludeEnd:!0,endsParent:!0}].concat(y)};return h.contains=[x,N,T],T.contains=y,S.contains=y,{name:"Clojure",aliases:["clj","edn"],illegal:/\S/,contains:[g,h,d,_,p,E,C,S,c,f]}}return Ed=t,Ed}var fd,vh;function vOe(){if(vh)return fd;vh=1;function t(e){return{name:"Clojure REPL",contains:[{className:"meta.prompt",begin:/^([\w.-]+|\s*#_)?=>/,starts:{end:/$/,subLanguage:"clojure"}}]}}return fd=t,fd}var Sd,Ch;function COe(){if(Ch)return Sd;Ch=1;function t(e){return{name:"CMake",aliases:["cmake.in"],case_insensitive:!0,keywords:{keyword:"break cmake_host_system_information cmake_minimum_required cmake_parse_arguments cmake_policy configure_file continue elseif else endforeach endfunction endif endmacro endwhile execute_process file find_file find_library find_package find_path find_program foreach function get_cmake_property get_directory_property get_filename_component get_property if include include_guard list macro mark_as_advanced math message option return separate_arguments set_directory_properties set_property set site_name string unset variable_watch while add_compile_definitions add_compile_options add_custom_command add_custom_target add_definitions add_dependencies add_executable add_library add_link_options add_subdirectory add_test aux_source_directory build_command create_test_sourcelist define_property enable_language enable_testing export fltk_wrap_ui get_source_file_property get_target_property get_test_property include_directories include_external_msproject include_regular_expression install link_directories link_libraries load_cache project qt_wrap_cpp qt_wrap_ui remove_definitions set_source_files_properties set_target_properties set_tests_properties source_group target_compile_definitions target_compile_features target_compile_options target_include_directories target_link_directories target_link_libraries target_link_options target_sources try_compile try_run ctest_build ctest_configure ctest_coverage ctest_empty_binary_directory ctest_memcheck ctest_read_custom_files ctest_run_script ctest_sleep ctest_start ctest_submit ctest_test ctest_update ctest_upload build_name exec_program export_library_dependencies install_files install_programs install_targets load_command make_directory output_required_files remove subdir_depends subdirs use_mangled_mesa utility_source variable_requires 
write_file qt5_use_modules qt5_use_package qt5_wrap_cpp on off true false and or not command policy target test exists is_newer_than is_directory is_symlink is_absolute matches less greater equal less_equal greater_equal strless strgreater strequal strless_equal strgreater_equal version_less version_greater version_equal version_less_equal version_greater_equal in_list defined"},contains:[{className:"variable",begin:/\$\{/,end:/\}/},e.COMMENT(/#\[\[/,/]]/),e.HASH_COMMENT_MODE,e.QUOTE_STRING_MODE,e.NUMBER_MODE]}}return Sd=t,Sd}var bd,Rh;function ROe(){if(Rh)return bd;Rh=1;const t=["as","in","of","if","for","while","finally","var","new","function","do","return","void","else","break","catch","instanceof","with","throw","case","default","try","switch","continue","typeof","delete","let","yield","const","class","debugger","async","await","static","import","from","export","extends"],e=["true","false","null","undefined","NaN","Infinity"],n=["Object","Function","Boolean","Symbol","Math","Date","Number","BigInt","String","RegExp","Array","Float32Array","Float64Array","Int8Array","Uint8Array","Uint8ClampedArray","Int16Array","Int32Array","Uint16Array","Uint32Array","BigInt64Array","BigUint64Array","Set","Map","WeakSet","WeakMap","ArrayBuffer","SharedArrayBuffer","Atomics","DataView","JSON","Promise","Generator","GeneratorFunction","AsyncFunction","Reflect","Proxy","Intl","WebAssembly"],i=["Error","EvalError","InternalError","RangeError","ReferenceError","SyntaxError","TypeError","URIError"],o=["setInterval","setTimeout","clearInterval","clearTimeout","require","exports","eval","isFinite","isNaN","parseFloat","parseInt","decodeURI","decodeURIComponent","encodeURI","encodeURIComponent","escape","unescape"],s=[].concat(o,n,i);function l(c){const d=["npm","print"],_=["yes","no","on","off"],p=["then","unless","until","loop","by","when","and","or","is","isnt","not"],g=["var","const","let","function","static"],E=P=>D=>!P.includes(D),f={keyword:t.concat(p).filter(E(g)),literal:e.concat(_),built_in:s.concat(d)},S="[A-Za-z$_][0-9A-Za-z$_]*",C={className:"subst",begin:/#\{/,end:/\}/,keywords:f},h=[c.BINARY_NUMBER_MODE,c.inherit(c.C_NUMBER_MODE,{starts:{end:"(\\s*/)?",relevance:0}}),{className:"string",variants:[{begin:/'''/,end:/'''/,contains:[c.BACKSLASH_ESCAPE]},{begin:/'/,end:/'/,contains:[c.BACKSLASH_ESCAPE]},{begin:/"""/,end:/"""/,contains:[c.BACKSLASH_ESCAPE,C]},{begin:/"/,end:/"/,contains:[c.BACKSLASH_ESCAPE,C]}]},{className:"regexp",variants:[{begin:"///",end:"///",contains:[C,c.HASH_COMMENT_MODE]},{begin:"//[gim]{0,3}(?=\\W)",relevance:0},{begin:/\/(?![ *]).*?(?![\\]).\/[gim]{0,3}(?=\W)/}]},{begin:"@"+S},{subLanguage:"javascript",excludeBegin:!0,excludeEnd:!0,variants:[{begin:"```",end:"```"},{begin:"`",end:"`"}]}];C.contains=h;const T=c.inherit(c.TITLE_MODE,{begin:S}),N="(\\(.*\\)\\s*)?\\B[-=]>",y={className:"params",begin:"\\([^\\(]",returnBegin:!0,contains:[{begin:/\(/,end:/\)/,keywords:f,contains:["self"].concat(h)}]},x={variants:[{match:[/class\s+/,S,/\s+extends\s+/,S]},{match:[/class\s+/,S]}],scope:{2:"title.class",4:"title.class.inherited"},keywords:f};return{name:"CoffeeScript",aliases:["coffee","cson","iced"],keywords:f,illegal:/\/\*/,contains:[...h,c.COMMENT("###","###"),c.HASH_COMMENT_MODE,{className:"function",begin:"^\\s*"+S+"\\s*=\\s*"+N,end:"[-=]>",returnBegin:!0,contains:[T,y]},{begin:/[:\(,=]\s*/,relevance:0,contains:[{className:"function",begin:N,end:"[-=]>",returnBegin:!0,contains:[y]}]},x,{begin:S+":",end:":",returnBegin:!0,returnEnd:!0,relevance:0}]}}return bd=l,bd}var 
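// CoffeeScript (ROe) ends above. It reuses the shared ECMAScript keyword,
// literal, and built-in arrays (t, e, n, i, o, s) declared ahead of ROe,
// filtering out the handful CoffeeScript drops (var, function, ...) via the
// curried predicate E before merging its own additions. NOe below is Coq.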
hd,Nh;function NOe(){if(Nh)return hd;Nh=1;function t(e){return{name:"Coq",keywords:{keyword:["_|0","as","at","cofix","else","end","exists","exists2","fix","for","forall","fun","if","IF","in","let","match","mod","Prop","return","Set","then","Type","using","where","with","Abort","About","Add","Admit","Admitted","All","Arguments","Assumptions","Axiom","Back","BackTo","Backtrack","Bind","Blacklist","Canonical","Cd","Check","Class","Classes","Close","Coercion","Coercions","CoFixpoint","CoInductive","Collection","Combined","Compute","Conjecture","Conjectures","Constant","constr","Constraint","Constructors","Context","Corollary","CreateHintDb","Cut","Declare","Defined","Definition","Delimit","Dependencies","Dependent","Derive","Drop","eauto","End","Equality","Eval","Example","Existential","Existentials","Existing","Export","exporting","Extern","Extract","Extraction","Fact","Field","Fields","File","Fixpoint","Focus","for","From","Function","Functional","Generalizable","Global","Goal","Grab","Grammar","Graph","Guarded","Heap","Hint","HintDb","Hints","Hypotheses","Hypothesis","ident","Identity","If","Immediate","Implicit","Import","Include","Inductive","Infix","Info","Initial","Inline","Inspect","Instance","Instances","Intro","Intros","Inversion","Inversion_clear","Language","Left","Lemma","Let","Libraries","Library","Load","LoadPath","Local","Locate","Ltac","ML","Mode","Module","Modules","Monomorphic","Morphism","Next","NoInline","Notation","Obligation","Obligations","Opaque","Open","Optimize","Options","Parameter","Parameters","Parametric","Path","Paths","pattern","Polymorphic","Preterm","Print","Printing","Program","Projections","Proof","Proposition","Pwd","Qed","Quit","Rec","Record","Recursive","Redirect","Relation","Remark","Remove","Require","Reserved","Reset","Resolve","Restart","Rewrite","Right","Ring","Rings","Save","Scheme","Scope","Scopes","Script","Search","SearchAbout","SearchHead","SearchPattern","SearchRewrite","Section","Separate","Set","Setoid","Show","Solve","Sorted","Step","Strategies","Strategy","Structure","SubClass","Table","Tables","Tactic","Term","Test","Theorem","Time","Timeout","Transparent","Type","Typeclasses","Types","Undelimit","Undo","Unfocus","Unfocused","Unfold","Universe","Universes","Unset","Unshelve","using","Variable","Variables","Variant","Verbose","Visibility","where","with"],built_in:["abstract","absurd","admit","after","apply","as","assert","assumption","at","auto","autorewrite","autounfold","before","bottom","btauto","by","case","case_eq","cbn","cbv","change","classical_left","classical_right","clear","clearbody","cofix","compare","compute","congruence","constr_eq","constructor","contradict","contradiction","cut","cutrewrite","cycle","decide","decompose","dependent","destruct","destruction","dintuition","discriminate","discrR","do","double","dtauto","eapply","eassumption","eauto","ecase","econstructor","edestruct","ediscriminate","eelim","eexact","eexists","einduction","einjection","eleft","elim","elimtype","enough","equality","erewrite","eright","esimplify_eq","esplit","evar","exact","exactly_once","exfalso","exists","f_equal","fail","field","field_simplify","field_simplify_eq","first","firstorder","fix","fold","fourier","functional","generalize","generalizing","gfail","give_up","has_evar","hnf","idtac","in","induction","injection","instantiate","intro","intro_pattern","intros","intuition","inversion","inversion_clear","is_evar","is_var","lapply","lazy","left","lia","lra","move","native_compute","nia","nsatz","omega","once","pattern","pose","progress","proof
","psatz","quote","record","red","refine","reflexivity","remember","rename","repeat","replace","revert","revgoals","rewrite","rewrite_strat","right","ring","ring_simplify","rtauto","set","setoid_reflexivity","setoid_replace","setoid_rewrite","setoid_symmetry","setoid_transitivity","shelve","shelve_unifiable","simpl","simple","simplify_eq","solve","specialize","split","split_Rabs","split_Rmult","stepl","stepr","subst","sum","swap","symmetry","tactic","tauto","time","timeout","top","transitivity","trivial","try","tryif","unfold","unify","until","using","vm_compute","with"]},contains:[e.QUOTE_STRING_MODE,e.COMMENT("\\(\\*","\\*\\)"),e.C_NUMBER_MODE,{className:"type",excludeBegin:!0,begin:"\\|\\s*",end:"\\w+"},{begin:/[-=]>/}]}}return hd=t,hd}var Td,Oh;function OOe(){if(Oh)return Td;Oh=1;function t(e){return{name:"Caché Object Script",case_insensitive:!0,aliases:["cls"],keywords:"property parameter class classmethod clientmethod extends as break catch close continue do d|0 else elseif for goto halt hang h|0 if job j|0 kill k|0 lock l|0 merge new open quit q|0 read r|0 return set s|0 tcommit throw trollback try tstart use view while write w|0 xecute x|0 zkill znspace zn ztrap zwrite zw zzdump zzwrite print zbreak zinsert zload zprint zremove zsave zzprint mv mvcall mvcrt mvdim mvprint zquit zsync ascii",contains:[{className:"number",begin:"\\b(\\d+(\\.\\d*)?|\\.\\d+)",relevance:0},{className:"string",variants:[{begin:'"',end:'"',contains:[{begin:'""',relevance:0}]}]},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"comment",begin:/;/,end:"$",relevance:0},{className:"built_in",begin:/(?:\$\$?|\.\.)\^?[a-zA-Z]+/},{className:"built_in",begin:/\$\$\$[a-zA-Z]+/},{className:"built_in",begin:/%[a-z]+(?:\.[a-z]+)*/},{className:"symbol",begin:/\^%?[a-zA-Z][\w]*/},{className:"keyword",begin:/##class|##super|#define|#dim/},{begin:/&sql\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,subLanguage:"sql"},{begin:/&(js|jscript|javascript)/,excludeBegin:!0,excludeEnd:!0,subLanguage:"javascript"},{begin:/&html<\s*\s*>/,subLanguage:"xml"}]}}return Td=t,Td}var vd,Ah;function AOe(){if(Ah)return vd;Ah=1;function t(e){const n=e.regex,i=e.COMMENT("//","$",{contains:[{begin:/\\\n/}]}),o="decltype\\(auto\\)",s="[a-zA-Z_]\\w*::",l="<[^<>]+>",c="(?!struct)("+o+"|"+n.optional(s)+"[a-zA-Z_]\\w*"+n.optional(l)+")",d={className:"type",begin:"\\b[a-z\\d_]*_t\\b"},_="\\\\(x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4,8}|[0-7]{3}|\\S)",p={className:"string",variants:[{begin:'(u8?|U|L)?"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE]},{begin:"(u8?|U|L)?'("+_+"|.)",end:"'",illegal:"."},e.END_SAME_AS_BEGIN({begin:/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,end:/\)([^()\\ ]{0,16})"/})]},g={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)((ll|LL|l|L)(u|U)?|(u|U)(ll|LL|l|L)?|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"}],relevance:0},E={className:"meta",begin:/#\s*[a-z]+\b/,end:/$/,keywords:{keyword:"if else elif endif define undef warning error line pragma _Pragma ifdef ifndef 
include"},contains:[{begin:/\\\n/,relevance:0},e.inherit(p,{className:"string"}),{className:"string",begin:/<.*?>/},i,e.C_BLOCK_COMMENT_MODE]},f={className:"title",begin:n.optional(s)+e.IDENT_RE,relevance:0},S=n.optional(s)+e.IDENT_RE+"\\s*\\(",C=["alignas","alignof","and","and_eq","asm","atomic_cancel","atomic_commit","atomic_noexcept","auto","bitand","bitor","break","case","catch","class","co_await","co_return","co_yield","compl","concept","const_cast|10","consteval","constexpr","constinit","continue","decltype","default","delete","do","dynamic_cast|10","else","enum","explicit","export","extern","false","final","for","friend","goto","if","import","inline","module","mutable","namespace","new","noexcept","not","not_eq","nullptr","operator","or","or_eq","override","private","protected","public","reflexpr","register","reinterpret_cast|10","requires","return","sizeof","static_assert","static_cast|10","struct","switch","synchronized","template","this","thread_local","throw","transaction_safe","transaction_safe_dynamic","true","try","typedef","typeid","typename","union","using","virtual","volatile","while","xor","xor_eq"],h=["bool","char","char16_t","char32_t","char8_t","double","float","int","long","short","void","wchar_t","unsigned","signed","const","static"],T=["any","auto_ptr","barrier","binary_semaphore","bitset","complex","condition_variable","condition_variable_any","counting_semaphore","deque","false_type","future","imaginary","initializer_list","istringstream","jthread","latch","lock_guard","multimap","multiset","mutex","optional","ostringstream","packaged_task","pair","promise","priority_queue","queue","recursive_mutex","recursive_timed_mutex","scoped_lock","set","shared_future","shared_lock","shared_mutex","shared_timed_mutex","shared_ptr","stack","string_view","stringstream","timed_mutex","thread","true_type","tuple","unique_lock","unique_ptr","unordered_map","unordered_multimap","unordered_multiset","unordered_set","variant","vector","weak_ptr","wstring","wstring_view"],N=["abort","abs","acos","apply","as_const","asin","atan","atan2","calloc","ceil","cerr","cin","clog","cos","cosh","cout","declval","endl","exchange","exit","exp","fabs","floor","fmod","forward","fprintf","fputs","free","frexp","fscanf","future","invoke","isalnum","isalpha","iscntrl","isdigit","isgraph","islower","isprint","ispunct","isspace","isupper","isxdigit","labs","launder","ldexp","log","log10","make_pair","make_shared","make_shared_for_overwrite","make_tuple","make_unique","malloc","memchr","memcmp","memcpy","memset","modf","move","pow","printf","putchar","puts","realloc","scanf","sin","sinh","snprintf","sprintf","sqrt","sscanf","std","stderr","stdin","stdout","strcat","strchr","strcmp","strcpy","strcspn","strlen","strncat","strncmp","strncpy","strpbrk","strrchr","strspn","strstr","swap","tan","tanh","terminate","to_underlying","tolower","toupper","vfprintf","visit","vprintf","vsprintf"],P={type:h,keyword:C,literal:["NULL","false","nullopt","nullptr","true"],built_in:["_Pragma"],_type_hints:T},D={className:"function.dispatch",relevance:0,keywords:{_hint:N},begin:n.concat(/\b/,/(?!decltype)/,/(?!if)/,/(?!for)/,/(?!switch)/,/(?!while)/,e.IDENT_RE,n.lookahead(/(<[^<>]+>|)\s*\(/))},k=[D,E,d,i,e.C_BLOCK_COMMENT_MODE,g,p],U={variants:[{begin:/=/,end:/;/},{begin:/\(/,end:/\)/},{beginKeywords:"new throw return 
else",end:/;/}],keywords:P,contains:k.concat([{begin:/\(/,end:/\)/,keywords:P,contains:k.concat(["self"]),relevance:0}]),relevance:0},W={className:"function",begin:"("+c+"[\\*&\\s]+)+"+S,returnBegin:!0,end:/[{;=]/,excludeEnd:!0,keywords:P,illegal:/[^\w\s\*&:<>.]/,contains:[{begin:o,keywords:P,relevance:0},{begin:S,returnBegin:!0,contains:[f],relevance:0},{begin:/::/,relevance:0},{begin:/:/,endsWithParent:!0,contains:[p,g]},{relevance:0,match:/,/},{className:"params",begin:/\(/,end:/\)/,keywords:P,relevance:0,contains:[i,e.C_BLOCK_COMMENT_MODE,p,g,d,{begin:/\(/,end:/\)/,keywords:P,relevance:0,contains:["self",i,e.C_BLOCK_COMMENT_MODE,p,g,d]}]},d,i,e.C_BLOCK_COMMENT_MODE,E]};return{name:"C++",aliases:["cc","c++","h++","hpp","hh","hxx","cxx"],keywords:P,illegal:"",keywords:P,contains:["self",d]},{begin:e.IDENT_RE+"::",keywords:P},{match:[/\b(?:enum(?:\s+(?:class|struct))?|class|struct|union)/,/\s+/,/\w+/],className:{1:"keyword",3:"title.class"}}])}}return vd=t,vd}var Cd,yh;function yOe(){if(yh)return Cd;yh=1;function t(e){const n="primitive rsc_template",i="group clone ms master location colocation order fencing_topology rsc_ticket acl_target acl_group user role tag xml",o="property rsc_defaults op_defaults",s="params meta operations op rule attributes utilization",l="read write deny defined not_defined in_range date spec in ref reference attribute type xpath version and or lt gt tag lte gte eq ne \\",c="number string",d="Master Started Slave Stopped start promote demote stop monitor true false";return{name:"crmsh",aliases:["crm","pcmk"],case_insensitive:!0,keywords:{keyword:s+" "+l+" "+c,literal:d},contains:[e.HASH_COMMENT_MODE,{beginKeywords:"node",starts:{end:"\\s*([\\w_-]+:)?",starts:{className:"title",end:"\\s*[\\$\\w_][\\w_-]*"}}},{beginKeywords:n,starts:{className:"title",end:"\\s*[\\$\\w_][\\w_-]*",starts:{end:"\\s*@?[\\w_][\\w_\\.:-]*"}}},{begin:"\\b("+i.split(" ").join("|")+")\\s+",keywords:i,starts:{className:"title",end:"[\\$\\w_][\\w_-]*"}},{beginKeywords:o,starts:{className:"title",end:"\\s*([\\w_-]+:)?"}},e.QUOTE_STRING_MODE,{className:"meta",begin:"(ocf|systemd|service|lsb):[\\w_:-]+",relevance:0},{className:"number",begin:"\\b\\d+(\\.\\d+)?(ms|s|h|m)?",relevance:0},{className:"literal",begin:"[-]?(infinity|inf)",relevance:0},{className:"attr",begin:/([A-Za-z$_#][\w_-]+)=/,relevance:0},{className:"tag",begin:"",relevance:0}]}}return Cd=t,Cd}var Rd,Ih;function IOe(){if(Ih)return Rd;Ih=1;function t(e){const n="(_?[ui](8|16|32|64|128))?",i="(_?f(32|64))?",o="[a-zA-Z_]\\w*[!?=]?",s="[a-zA-Z_]\\w*[!?=]?|[-+~]@|<<|>>|[=!]~|===?|<=>|[<>]=?|\\*\\*|[-/+%^&*~|]|//|//=|&[-+*]=?|&\\*\\*|\\[\\][=?]?",l="[A-Za-z_]\\w*(::\\w+)*(\\?|!)?",c={$pattern:o,keyword:"abstract alias annotation as as? asm begin break case class def do else elsif end ensure enum extend for fun if include instance_sizeof is_a? lib macro module next nil? of out pointerof private protected rescue responds_to? 
return require select self sizeof struct super then type typeof union uninitialized unless until verbatim when while with yield __DIR__ __END_LINE__ __FILE__ __LINE__",literal:"false nil true"},d={className:"subst",begin:/#\{/,end:/\}/,keywords:c},_={className:"variable",begin:"(\\$\\W)|((\\$|@@?)(\\w+))(?=[^@$?])(?![A-Za-z])(?![@$?'])"},p={className:"template-variable",variants:[{begin:"\\{\\{",end:"\\}\\}"},{begin:"\\{%",end:"%\\}"}],keywords:c};function g(N,y){const x=[{begin:N,end:y}];return x[0].contains=x,x}const E={className:"string",contains:[e.BACKSLASH_ESCAPE,d],variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/},{begin:/`/,end:/`/},{begin:"%[Qwi]?\\(",end:"\\)",contains:g("\\(","\\)")},{begin:"%[Qwi]?\\[",end:"\\]",contains:g("\\[","\\]")},{begin:"%[Qwi]?\\{",end:/\}/,contains:g(/\{/,/\}/)},{begin:"%[Qwi]?<",end:">",contains:g("<",">")},{begin:"%[Qwi]?\\|",end:"\\|"},{begin:/<<-\w+$/,end:/^\s*\w+$/}],relevance:0},f={className:"string",variants:[{begin:"%q\\(",end:"\\)",contains:g("\\(","\\)")},{begin:"%q\\[",end:"\\]",contains:g("\\[","\\]")},{begin:"%q\\{",end:/\}/,contains:g(/\{/,/\}/)},{begin:"%q<",end:">",contains:g("<",">")},{begin:"%q\\|",end:"\\|"},{begin:/<<-'\w+'$/,end:/^\s*\w+$/}],relevance:0},S={begin:"(?!%\\})("+e.RE_STARTERS_RE+"|\\n|\\b(case|if|select|unless|until|when|while)\\b)\\s*",keywords:"case if select unless until when while",contains:[{className:"regexp",contains:[e.BACKSLASH_ESCAPE,d],variants:[{begin:"//[a-z]*",relevance:0},{begin:"/(?!\\/)",end:"/[a-z]*"}]}],relevance:0},C={className:"regexp",contains:[e.BACKSLASH_ESCAPE,d],variants:[{begin:"%r\\(",end:"\\)",contains:g("\\(","\\)")},{begin:"%r\\[",end:"\\]",contains:g("\\[","\\]")},{begin:"%r\\{",end:/\}/,contains:g(/\{/,/\}/)},{begin:"%r<",end:">",contains:g("<",">")},{begin:"%r\\|",end:"\\|"}],relevance:0},h={className:"meta",begin:"@\\[",end:"\\]",contains:[e.inherit(e.QUOTE_STRING_MODE,{className:"string"})]},T=[p,E,f,C,S,h,_,e.HASH_COMMENT_MODE,{className:"class",beginKeywords:"class module struct",end:"$|;",illegal:/=/,contains:[e.HASH_COMMENT_MODE,e.inherit(e.TITLE_MODE,{begin:l}),{begin:"<"}]},{className:"class",beginKeywords:"lib enum union",end:"$|;",illegal:/=/,contains:[e.HASH_COMMENT_MODE,e.inherit(e.TITLE_MODE,{begin:l})]},{beginKeywords:"annotation",end:"$|;",illegal:/=/,contains:[e.HASH_COMMENT_MODE,e.inherit(e.TITLE_MODE,{begin:l})],relevance:2},{className:"function",beginKeywords:"def",end:/\B\b/,contains:[e.inherit(e.TITLE_MODE,{begin:s,endsParent:!0})]},{className:"function",beginKeywords:"fun macro",end:/\B\b/,contains:[e.inherit(e.TITLE_MODE,{begin:s,endsParent:!0})],relevance:2},{className:"symbol",begin:e.UNDERSCORE_IDENT_RE+"(!|\\?)?:",relevance:0},{className:"symbol",begin:":",contains:[E,{begin:s}],relevance:0},{className:"number",variants:[{begin:"\\b0b([01_]+)"+n},{begin:"\\b0o([0-7_]+)"+n},{begin:"\\b0x([A-Fa-f0-9_]+)"+n},{begin:"\\b([1-9][0-9_]*[0-9]|[0-9])(\\.[0-9][0-9_]*)?([eE]_?[-+]?[0-9_]*)?"+i+"(?!_)"},{begin:"\\b([1-9][0-9_]*|0)"+n}],relevance:0}];return d.contains=T,p.contains=T.slice(1),{name:"Crystal",aliases:["cr"],keywords:c,contains:T}}return Rd=t,Rd}var Nd,Dh;function DOe(){if(Dh)return Nd;Dh=1;function t(e){const 
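// Crystal (IOe) ends above. Its helper g(N,y) builds a self-recursive
// delimiter mode so %-literals such as %q(...) may nest balanced pairs:
// the returned mode array contains itself (x[0].contains=x), which is how
// highlight.js expresses "match this bracket pair recursively". DOe below
// defines C#.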
n=["bool","byte","char","decimal","delegate","double","dynamic","enum","float","int","long","nint","nuint","object","sbyte","short","string","ulong","uint","ushort"],i=["public","private","protected","static","internal","protected","abstract","async","extern","override","unsafe","virtual","new","sealed","partial"],o=["default","false","null","true"],s=["abstract","as","base","break","case","catch","class","const","continue","do","else","event","explicit","extern","finally","fixed","for","foreach","goto","if","implicit","in","interface","internal","is","lock","namespace","new","operator","out","override","params","private","protected","public","readonly","record","ref","return","scoped","sealed","sizeof","stackalloc","static","struct","switch","this","throw","try","typeof","unchecked","unsafe","using","virtual","void","volatile","while"],l=["add","alias","and","ascending","async","await","by","descending","equals","from","get","global","group","init","into","join","let","nameof","not","notnull","on","or","orderby","partial","remove","select","set","unmanaged","value|0","var","when","where","with","yield"],c={keyword:s.concat(l),built_in:n,literal:o},d=e.inherit(e.TITLE_MODE,{begin:"[a-zA-Z](\\.?\\w)*"}),_={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)(u|U|l|L|ul|UL|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"}],relevance:0},p={className:"string",begin:'@"',end:'"',contains:[{begin:'""'}]},g=e.inherit(p,{illegal:/\n/}),E={className:"subst",begin:/\{/,end:/\}/,keywords:c},f=e.inherit(E,{illegal:/\n/}),S={className:"string",begin:/\$"/,end:'"',illegal:/\n/,contains:[{begin:/\{\{/},{begin:/\}\}/},e.BACKSLASH_ESCAPE,f]},C={className:"string",begin:/\$@"/,end:'"',contains:[{begin:/\{\{/},{begin:/\}\}/},{begin:'""'},E]},h=e.inherit(C,{illegal:/\n/,contains:[{begin:/\{\{/},{begin:/\}\}/},{begin:'""'},f]});E.contains=[C,S,p,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,_,e.C_BLOCK_COMMENT_MODE],f.contains=[h,S,g,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,_,e.inherit(e.C_BLOCK_COMMENT_MODE,{illegal:/\n/})];const T={variants:[C,S,p,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]},N={begin:"<",end:">",contains:[{beginKeywords:"in out"},d]},y=e.IDENT_RE+"(<"+e.IDENT_RE+"(\\s*,\\s*"+e.IDENT_RE+")*>)?(\\[\\])?",x={begin:"@"+e.IDENT_RE,relevance:0};return{name:"C#",aliases:["cs","c#"],keywords:c,illegal:/::/,contains:[e.COMMENT("///","$",{returnBegin:!0,contains:[{className:"doctag",variants:[{begin:"///",relevance:0},{begin:""},{begin:""}]}]}),e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"meta",begin:"#",end:"$",keywords:{keyword:"if else elif endif define undef warning error line region endregion pragma checksum"}},T,_,{beginKeywords:"class interface",relevance:0,end:/[{;=]/,illegal:/[^\s:,]/,contains:[{beginKeywords:"where class"},d,N,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{beginKeywords:"namespace",relevance:0,end:/[{;=]/,illegal:/[^\s:]/,contains:[d,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{beginKeywords:"record",relevance:0,end:/[{;=]/,illegal:/[^\s:]/,contains:[d,N,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{className:"meta",begin:"^\\s*\\[(?=[\\w])",excludeBegin:!0,end:"\\]",excludeEnd:!0,contains:[{className:"string",begin:/"/,end:/"/}]},{beginKeywords:"new return throw await else",relevance:0},{className:"function",begin:"("+y+"\\s+)+"+e.IDENT_RE+"\\s*(<[^=]+>\\s*)?\\(",returnBegin:!0,end:/\s*[{;=]/,excludeEnd:!0,keywords:c,contains:[{beginKeywords:i.join(" 
"),relevance:0},{begin:e.IDENT_RE+"\\s*(<[^=]+>\\s*)?\\(",returnBegin:!0,contains:[e.TITLE_MODE,N],relevance:0},{match:/\(\)/},{className:"params",begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:c,relevance:0,contains:[T,_,e.C_BLOCK_COMMENT_MODE]},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},x]}}return Nd=t,Nd}var Od,xh;function xOe(){if(xh)return Od;xh=1;function t(e){return{name:"CSP",case_insensitive:!1,keywords:{$pattern:"[a-zA-Z][a-zA-Z0-9_-]*",keyword:["base-uri","child-src","connect-src","default-src","font-src","form-action","frame-ancestors","frame-src","img-src","manifest-src","media-src","object-src","plugin-types","report-uri","sandbox","script-src","style-src","trusted-types","unsafe-hashes","worker-src"]},contains:[{className:"string",begin:"'",end:"'"},{className:"attribute",begin:"^Content",end:":",excludeEnd:!0}]}}return Od=t,Od}var Ad,wh;function wOe(){if(wh)return Ad;wh=1;const t=c=>({IMPORTANT:{scope:"meta",begin:"!important"},BLOCK_COMMENT:c.C_BLOCK_COMMENT_MODE,HEXCOLOR:{scope:"number",begin:/#(([0-9a-fA-F]{3,4})|(([0-9a-fA-F]{2}){3,4}))\b/},FUNCTION_DISPATCH:{className:"built_in",begin:/[\w-]+(?=\()/},ATTRIBUTE_SELECTOR_MODE:{scope:"selector-attr",begin:/\[/,end:/\]/,illegal:"$",contains:[c.APOS_STRING_MODE,c.QUOTE_STRING_MODE]},CSS_NUMBER_MODE:{scope:"number",begin:c.NUMBER_RE+"(%|em|ex|ch|rem|vw|vh|vmin|vmax|cm|mm|in|pt|pc|px|deg|grad|rad|turn|s|ms|Hz|kHz|dpi|dpcm|dppx)?",relevance:0},CSS_VARIABLE:{className:"attr",begin:/--[A-Za-z][A-Za-z0-9_-]*/}}),e=["a","abbr","address","article","aside","audio","b","blockquote","body","button","canvas","caption","cite","code","dd","del","details","dfn","div","dl","dt","em","fieldset","figcaption","figure","footer","form","h1","h2","h3","h4","h5","h6","header","hgroup","html","i","iframe","img","input","ins","kbd","label","legend","li","main","mark","menu","nav","object","ol","p","q","quote","samp","section","span","strong","summary","sup","table","tbody","td","textarea","tfoot","th","thead","time","tr","ul","var","video"],n=["any-hover","any-pointer","aspect-ratio","color","color-gamut","color-index","device-aspect-ratio","device-height","device-width","display-mode","forced-colors","grid","height","hover","inverted-colors","monochrome","orientation","overflow-block","overflow-inline","pointer","prefers-color-scheme","prefers-contrast","prefers-reduced-motion","prefers-reduced-transparency","resolution","scan","scripting","update","width","min-width","max-width","min-height","max-height"],i=["active","any-link","blank","checked","current","default","defined","dir","disabled","drop","empty","enabled","first","first-child","first-of-type","fullscreen","future","focus","focus-visible","focus-within","has","host","host-context","hover","indeterminate","in-range","invalid","is","lang","last-child","last-of-type","left","link","local-link","not","nth-child","nth-col","nth-last-child","nth-last-col","nth-last-of-type","nth-of-type","only-child","only-of-type","optional","out-of-range","past","placeholder-shown","read-only","read-write","required","right","root","scope","target","target-within","user-invalid","valid","visited","where"],o=["after","backdrop","before","cue","cue-region","first-letter","first-line","grammar-error","marker","part","placeholder","selection","slotted","spelling-error"],s=["align-content","align-items","align-self","all","animation","animation-delay","animation-direction","animation-duration","animation-fill-mode","animation-iteration-count","animation-name","animation-play-state","animation-timi
ng-function","backface-visibility","background","background-attachment","background-blend-mode","background-clip","background-color","background-image","background-origin","background-position","background-repeat","background-size","block-size","border","border-block","border-block-color","border-block-end","border-block-end-color","border-block-end-style","border-block-end-width","border-block-start","border-block-start-color","border-block-start-style","border-block-start-width","border-block-style","border-block-width","border-bottom","border-bottom-color","border-bottom-left-radius","border-bottom-right-radius","border-bottom-style","border-bottom-width","border-collapse","border-color","border-image","border-image-outset","border-image-repeat","border-image-slice","border-image-source","border-image-width","border-inline","border-inline-color","border-inline-end","border-inline-end-color","border-inline-end-style","border-inline-end-width","border-inline-start","border-inline-start-color","border-inline-start-style","border-inline-start-width","border-inline-style","border-inline-width","border-left","border-left-color","border-left-style","border-left-width","border-radius","border-right","border-right-color","border-right-style","border-right-width","border-spacing","border-style","border-top","border-top-color","border-top-left-radius","border-top-right-radius","border-top-style","border-top-width","border-width","bottom","box-decoration-break","box-shadow","box-sizing","break-after","break-before","break-inside","caption-side","caret-color","clear","clip","clip-path","clip-rule","color","column-count","column-fill","column-gap","column-rule","column-rule-color","column-rule-style","column-rule-width","column-span","column-width","columns","contain","content","content-visibility","counter-increment","counter-reset","cue","cue-after","cue-before","cursor","direction","display","empty-cells","filter","flex","flex-basis","flex-direction","flex-flow","flex-grow","flex-shrink","flex-wrap","float","flow","font","font-display","font-family","font-feature-settings","font-kerning","font-language-override","font-size","font-size-adjust","font-smoothing","font-stretch","font-style","font-synthesis","font-variant","font-variant-caps","font-variant-east-asian","font-variant-ligatures","font-variant-numeric","font-variant-position","font-variation-settings","font-weight","gap","glyph-orientation-vertical","grid","grid-area","grid-auto-columns","grid-auto-flow","grid-auto-rows","grid-column","grid-column-end","grid-column-start","grid-gap","grid-row","grid-row-end","grid-row-start","grid-template","grid-template-areas","grid-template-columns","grid-template-rows","hanging-punctuation","height","hyphens","icon","image-orientation","image-rendering","image-resolution","ime-mode","inline-size","isolation","justify-content","left","letter-spacing","line-break","line-height","list-style","list-style-image","list-style-position","list-style-type","margin","margin-block","margin-block-end","margin-block-start","margin-bottom","margin-inline","margin-inline-end","margin-inline-start","margin-left","margin-right","margin-top","marks","mask","mask-border","mask-border-mode","mask-border-outset","mask-border-repeat","mask-border-slice","mask-border-source","mask-border-width","mask-clip","mask-composite","mask-image","mask-mode","mask-origin","mask-position","mask-repeat","mask-size","mask-type","max-block-size","max-height","max-inline-size","max-width","min-block-size","min-height","min-inline-size","min-w
idth","mix-blend-mode","nav-down","nav-index","nav-left","nav-right","nav-up","none","normal","object-fit","object-position","opacity","order","orphans","outline","outline-color","outline-offset","outline-style","outline-width","overflow","overflow-wrap","overflow-x","overflow-y","padding","padding-block","padding-block-end","padding-block-start","padding-bottom","padding-inline","padding-inline-end","padding-inline-start","padding-left","padding-right","padding-top","page-break-after","page-break-before","page-break-inside","pause","pause-after","pause-before","perspective","perspective-origin","pointer-events","position","quotes","resize","rest","rest-after","rest-before","right","row-gap","scroll-margin","scroll-margin-block","scroll-margin-block-end","scroll-margin-block-start","scroll-margin-bottom","scroll-margin-inline","scroll-margin-inline-end","scroll-margin-inline-start","scroll-margin-left","scroll-margin-right","scroll-margin-top","scroll-padding","scroll-padding-block","scroll-padding-block-end","scroll-padding-block-start","scroll-padding-bottom","scroll-padding-inline","scroll-padding-inline-end","scroll-padding-inline-start","scroll-padding-left","scroll-padding-right","scroll-padding-top","scroll-snap-align","scroll-snap-stop","scroll-snap-type","scrollbar-color","scrollbar-gutter","scrollbar-width","shape-image-threshold","shape-margin","shape-outside","speak","speak-as","src","tab-size","table-layout","text-align","text-align-all","text-align-last","text-combine-upright","text-decoration","text-decoration-color","text-decoration-line","text-decoration-style","text-emphasis","text-emphasis-color","text-emphasis-position","text-emphasis-style","text-indent","text-justify","text-orientation","text-overflow","text-rendering","text-shadow","text-transform","text-underline-position","top","transform","transform-box","transform-origin","transform-style","transition","transition-delay","transition-duration","transition-property","transition-timing-function","unicode-bidi","vertical-align","visibility","voice-balance","voice-duration","voice-family","voice-pitch","voice-range","voice-rate","voice-stress","voice-volume","white-space","widows","width","will-change","word-break","word-spacing","word-wrap","writing-mode","z-index"].reverse();function l(c){const d=c.regex,_=t(c),p={begin:/-(webkit|moz|ms|o)-(?=[a-z])/},g="and or not only",E=/@-?\w[\w]*(-\w+)*/,f="[a-zA-Z-][a-zA-Z0-9_-]*",S=[c.APOS_STRING_MODE,c.QUOTE_STRING_MODE];return{name:"CSS",case_insensitive:!0,illegal:/[=|'\$]/,keywords:{keyframePosition:"from to"},classNameAliases:{keyframePosition:"selector-tag"},contains:[_.BLOCK_COMMENT,p,_.CSS_NUMBER_MODE,{className:"selector-id",begin:/#[A-Za-z0-9_-]+/,relevance:0},{className:"selector-class",begin:"\\."+f,relevance:0},_.ATTRIBUTE_SELECTOR_MODE,{className:"selector-pseudo",variants:[{begin:":("+i.join("|")+")"},{begin:":(:)?("+o.join("|")+")"}]},_.CSS_VARIABLE,{className:"attribute",begin:"\\b("+s.join("|")+")\\b"},{begin:/:/,end:/[;}{]/,contains:[_.BLOCK_COMMENT,_.HEXCOLOR,_.IMPORTANT,_.CSS_NUMBER_MODE,...S,{begin:/(url|data-uri)\(/,end:/\)/,relevance:0,keywords:{built_in:"url data-uri"},contains:[...S,{className:"string",begin:/[^)]/,endsWithParent:!0,excludeEnd:!0}]},_.FUNCTION_DISPATCH]},{begin:d.lookahead(/@/),end:"[{;]",relevance:0,illegal:/:/,contains:[{className:"keyword",begin:E},{begin:/\s/,endsWithParent:!0,excludeEnd:!0,relevance:0,keywords:{$pattern:/[a-z-]+/,keyword:g,attribute:n.join(" 
")},contains:[{begin:/[a-z-]+(?=:)/,className:"attribute"},...S,_.CSS_NUMBER_MODE]}]},{className:"selector-tag",begin:"\\b("+e.join("|")+")\\b"}]}}return Ad=l,Ad}var yd,Mh;function MOe(){if(Mh)return yd;Mh=1;function t(e){const n={$pattern:e.UNDERSCORE_IDENT_RE,keyword:"abstract alias align asm assert auto body break byte case cast catch class const continue debug default delete deprecated do else enum export extern final finally for foreach foreach_reverse|10 goto if immutable import in inout int interface invariant is lazy macro mixin module new nothrow out override package pragma private protected public pure ref return scope shared static struct super switch synchronized template this throw try typedef typeid typeof union unittest version void volatile while with __FILE__ __LINE__ __gshared|10 __thread __traits __DATE__ __EOF__ __TIME__ __TIMESTAMP__ __VENDOR__ __VERSION__",built_in:"bool cdouble cent cfloat char creal dchar delegate double dstring float function idouble ifloat ireal long real short string ubyte ucent uint ulong ushort wchar wstring",literal:"false null true"},i="(0|[1-9][\\d_]*)",o="(0|[1-9][\\d_]*|\\d[\\d_]*|[\\d_]+?\\d)",s="0[bB][01_]+",l="([\\da-fA-F][\\da-fA-F_]*|_[\\da-fA-F][\\da-fA-F_]*)",c="0[xX]"+l,d="([eE][+-]?"+o+")",_="("+o+"(\\.\\d*|"+d+")|\\d+\\."+o+"|\\."+i+d+"?)",p="(0[xX]("+l+"\\."+l+"|\\.?"+l+")[pP][+-]?"+o+")",g="("+i+"|"+s+"|"+c+")",E="("+p+"|"+_+")",f=`\\\\(['"\\?\\\\abfnrtv]|u[\\dA-Fa-f]{4}|[0-7]{1,3}|x[\\dA-Fa-f]{2}|U[\\dA-Fa-f]{8})|&[a-zA-Z\\d]{2,};`,S={className:"number",begin:"\\b"+g+"(L|u|U|Lu|LU|uL|UL)?",relevance:0},C={className:"number",begin:"\\b("+E+"([fF]|L|i|[fF]i|Li)?|"+g+"(i|[fF]i|Li))",relevance:0},h={className:"string",begin:"'("+f+"|.)",end:"'",illegal:"."},N={className:"string",begin:'"',contains:[{begin:f,relevance:0}],end:'"[cwd]?'},y={className:"string",begin:'[rq]"',end:'"[cwd]?',relevance:5},x={className:"string",begin:"`",end:"`[cwd]?"},P={className:"string",begin:'x"[\\da-fA-F\\s\\n\\r]*"[cwd]?',relevance:10},D={className:"string",begin:'q"\\{',end:'\\}"'},k={className:"meta",begin:"^#!",end:"$",relevance:5},U={className:"meta",begin:"#(line)",end:"$",relevance:5},W={className:"keyword",begin:"@[a-zA-Z_][a-zA-Z_\\d]*"},z=e.COMMENT("\\/\\+","\\+\\/",{contains:["self"],relevance:10});return{name:"D",keywords:n,contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,z,P,N,y,x,D,C,S,h,k,U,W]}}return yd=t,yd}var Id,Lh;function LOe(){if(Lh)return Id;Lh=1;function t(e){const n=e.regex,i={begin:/<\/?[A-Za-z_]/,end:">",subLanguage:"xml",relevance:0},o={begin:"^[-\\*]{3,}",end:"$"},s={className:"code",variants:[{begin:"(`{3,})[^`](.|\\n)*?\\1`*[ ]*"},{begin:"(~{3,})[^~](.|\\n)*?\\1~*[ ]*"},{begin:"```",end:"```+[ ]*$"},{begin:"~~~",end:"~~~+[ ]*$"},{begin:"`.+?`"},{begin:"(?=^( {4}|\\t))",contains:[{begin:"^( {4}|\\t)",end:"(\\n)$"}],relevance:0}]},l={className:"bullet",begin:"^[ 
]*([*+-]|(\\d+\\.))(?=\\s+)",end:"\\s+",excludeEnd:!0},c={begin:/^\[[^\n]+\]:/,returnBegin:!0,contains:[{className:"symbol",begin:/\[/,end:/\]/,excludeBegin:!0,excludeEnd:!0},{className:"link",begin:/:\s*/,end:/$/,excludeBegin:!0}]},d=/[A-Za-z][A-Za-z0-9+.-]*/,_={variants:[{begin:/\[.+?\]\[.*?\]/,relevance:0},{begin:/\[.+?\]\(((data|javascript|mailto):|(?:http|ftp)s?:\/\/).*?\)/,relevance:2},{begin:n.concat(/\[.+?\]\(/,d,/:\/\/.*?\)/),relevance:2},{begin:/\[.+?\]\([./?&#].*?\)/,relevance:1},{begin:/\[.*?\]\(.*?\)/,relevance:0}],returnBegin:!0,contains:[{match:/\[(?=\])/},{className:"string",relevance:0,begin:"\\[",end:"\\]",excludeBegin:!0,returnEnd:!0},{className:"link",relevance:0,begin:"\\]\\(",end:"\\)",excludeBegin:!0,excludeEnd:!0},{className:"symbol",relevance:0,begin:"\\]\\[",end:"\\]",excludeBegin:!0,excludeEnd:!0}]},p={className:"strong",contains:[],variants:[{begin:/_{2}(?!\s)/,end:/_{2}/},{begin:/\*{2}(?!\s)/,end:/\*{2}/}]},g={className:"emphasis",contains:[],variants:[{begin:/\*(?![*\s])/,end:/\*/},{begin:/_(?![_\s])/,end:/_/,relevance:0}]},E=e.inherit(p,{contains:[]}),f=e.inherit(g,{contains:[]});p.contains.push(f),g.contains.push(E);let S=[i,_];return[p,g,E,f].forEach(T=>{T.contains=T.contains.concat(S)}),S=S.concat(p,g),{name:"Markdown",aliases:["md","mkdown","mkd"],contains:[{className:"section",variants:[{begin:"^#{1,6}",end:"$",contains:S},{begin:"(?=^.+?\\n[=-]{2,}$)",contains:[{begin:"^[=-]*$"},{begin:"^",end:"\\n",contains:S}]}]},i,l,p,g,{className:"quote",begin:"^>\\s+",contains:S,end:"$"},s,o,_,c]}}return Id=t,Id}var Dd,Ph;function POe(){if(Ph)return Dd;Ph=1;function t(e){const n={className:"subst",variants:[{begin:"\\$[A-Za-z0-9_]+"}]},i={className:"subst",variants:[{begin:/\$\{/,end:/\}/}],keywords:"true false null this is new super"},o={className:"string",variants:[{begin:"r'''",end:"'''"},{begin:'r"""',end:'"""'},{begin:"r'",end:"'",illegal:"\\n"},{begin:'r"',end:'"',illegal:"\\n"},{begin:"'''",end:"'''",contains:[e.BACKSLASH_ESCAPE,n,i]},{begin:'"""',end:'"""',contains:[e.BACKSLASH_ESCAPE,n,i]},{begin:"'",end:"'",illegal:"\\n",contains:[e.BACKSLASH_ESCAPE,n,i]},{begin:'"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE,n,i]}]};i.contains=[e.C_NUMBER_MODE,o];const s=["Comparable","DateTime","Duration","Function","Iterable","Iterator","List","Map","Match","Object","Pattern","RegExp","Set","Stopwatch","String","StringBuffer","StringSink","Symbol","Type","Uri","bool","double","int","num","Element","ElementList"],l=s.map(_=>`${_}?`);return{name:"Dart",keywords:{keyword:["abstract","as","assert","async","await","base","break","case","catch","class","const","continue","covariant","default","deferred","do","dynamic","else","enum","export","extends","extension","external","factory","false","final","finally","for","Function","get","hide","if","implements","import","in","interface","is","late","library","mixin","new","null","on","operator","part","required","rethrow","return","sealed","set","show","static","super","switch","sync","this","throw","true","try","typedef","var","void","when","while","with","yield"],built_in:s.concat(l).concat(["Never","Null","dynamic","print","document","querySelector","querySelectorAll","window"]),$pattern:/[A-Za-z][A-Za-z0-9_]*\??/},contains:[o,e.COMMENT(/\/\*\*(?!\/)/,/\*\//,{subLanguage:"markdown",relevance:0}),e.COMMENT(/\/{3,} ?/,/$/,{contains:[{subLanguage:"markdown",begin:".",end:"$",relevance:0}]}),e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"class",beginKeywords:"class 
interface",end:/\{/,excludeEnd:!0,contains:[{beginKeywords:"extends implements"},e.UNDERSCORE_TITLE_MODE]},e.C_NUMBER_MODE,{className:"meta",begin:"@[A-Za-z]+"},{begin:"=>"}]}}return Dd=t,Dd}var xd,kh;function kOe(){if(kh)return xd;kh=1;function t(e){const n=["exports","register","file","shl","array","record","property","for","mod","while","set","ally","label","uses","raise","not","stored","class","safecall","var","interface","or","private","static","exit","index","inherited","to","else","stdcall","override","shr","asm","far","resourcestring","finalization","packed","virtual","out","and","protected","library","do","xorwrite","goto","near","function","end","div","overload","object","unit","begin","string","on","inline","repeat","until","destructor","write","message","program","with","read","initialization","except","default","nil","if","case","cdecl","in","downto","threadvar","of","try","pascal","const","external","constructor","type","public","then","implementation","finally","published","procedure","absolute","reintroduce","operator","as","is","abstract","alias","assembler","bitpacked","break","continue","cppdecl","cvar","enumerator","experimental","platform","deprecated","unimplemented","dynamic","export","far16","forward","generic","helper","implements","interrupt","iochecks","local","name","nodefault","noreturn","nostackframe","oldfpccall","otherwise","saveregisters","softfloat","specialize","strict","unaligned","varargs"],i=[e.C_LINE_COMMENT_MODE,e.COMMENT(/\{/,/\}/,{relevance:0}),e.COMMENT(/\(\*/,/\*\)/,{relevance:10})],o={className:"meta",variants:[{begin:/\{\$/,end:/\}/},{begin:/\(\*\$/,end:/\*\)/}]},s={className:"string",begin:/'/,end:/'/,contains:[{begin:/''/}]},l={className:"number",relevance:0,variants:[{begin:"\\$[0-9A-Fa-f]+"},{begin:"&[0-7]+"},{begin:"%[01]+"}]},c={className:"string",begin:/(#\d+)+/},d={begin:e.IDENT_RE+"\\s*=\\s*class\\s*\\(",returnBegin:!0,contains:[e.TITLE_MODE]},_={className:"function",beginKeywords:"function constructor destructor procedure",end:/[:;]/,keywords:"function constructor|10 destructor|10 procedure|10",contains:[e.TITLE_MODE,{className:"params",begin:/\(/,end:/\)/,keywords:n,contains:[s,c,o].concat(i)},o].concat(i)};return{name:"Delphi",aliases:["dpr","dfm","pas","pascal"],case_insensitive:!0,keywords:n,illegal:/"|\$[G-Zg-z]|\/\*|<\/|\|/,contains:[s,c,e.NUMBER_MODE,l,d,_,o].concat(i)}}return xd=t,xd}var wd,Uh;function UOe(){if(Uh)return wd;Uh=1;function t(e){const n=e.regex;return{name:"Diff",aliases:["patch"],contains:[{className:"meta",relevance:10,match:n.either(/^@@ +-\d+,\d+ +\+\d+,\d+ +@@/,/^\*\*\* +\d+,\d+ +\*\*\*\*$/,/^--- +\d+,\d+ +----$/)},{className:"comment",variants:[{begin:n.either(/Index: /,/^index/,/={3,}/,/^-{3}/,/^\*{3} /,/^\+{3}/,/^diff --git/),end:/$/},{match:/^\*{15}$/}]},{className:"addition",begin:/^\+/,end:/$/},{className:"deletion",begin:/^-/,end:/$/},{className:"addition",begin:/^!/,end:/$/}]}}return wd=t,wd}var Md,Fh;function FOe(){if(Fh)return Md;Fh=1;function t(e){const n={begin:/\|[A-Za-z]+:?/,keywords:{name:"truncatewords removetags linebreaksbr yesno get_digit timesince random striptags filesizeformat escape linebreaks length_is ljust rjust cut urlize fix_ampersands title floatformat capfirst pprint divisibleby add make_list unordered_list urlencode timeuntil urlizetrunc wordcount stringformat linenumbers slice date dictsort dictsortreversed default_if_none pluralize lower join center default truncatewords_html upper length phone2numeric wordwrap time addslashes slugify first escapejs force_escape iriencode 
last safe safeseq truncatechars localize unlocalize localtime utc timezone"},contains:[e.QUOTE_STRING_MODE,e.APOS_STRING_MODE]};return{name:"Django",aliases:["jinja"],case_insensitive:!0,subLanguage:"xml",contains:[e.COMMENT(/\{%\s*comment\s*%\}/,/\{%\s*endcomment\s*%\}/),e.COMMENT(/\{#/,/#\}/),{className:"template-tag",begin:/\{%/,end:/%\}/,contains:[{className:"name",begin:/\w+/,keywords:{name:"comment endcomment load templatetag ifchanged endifchanged if endif firstof for endfor ifnotequal endifnotequal widthratio extends include spaceless endspaceless regroup ifequal endifequal ssi now with cycle url filter endfilter debug block endblock else autoescape endautoescape csrf_token empty elif endwith static trans blocktrans endblocktrans get_static_prefix get_media_prefix plural get_current_language language get_available_languages get_current_language_bidi get_language_info get_language_info_list localize endlocalize localtime endlocaltime timezone endtimezone get_current_timezone verbatim"},starts:{endsWithParent:!0,keywords:"in by as",contains:[n],relevance:0}}]},{className:"template-variable",begin:/\{\{/,end:/\}\}/,contains:[n]}]}}return Md=t,Md}var Ld,Bh;function BOe(){if(Bh)return Ld;Bh=1;function t(e){return{name:"DNS Zone",aliases:["bind","zone"],keywords:["IN","A","AAAA","AFSDB","APL","CAA","CDNSKEY","CDS","CERT","CNAME","DHCID","DLV","DNAME","DNSKEY","DS","HIP","IPSECKEY","KEY","KX","LOC","MX","NAPTR","NS","NSEC","NSEC3","NSEC3PARAM","PTR","RRSIG","RP","SIG","SOA","SRV","SSHFP","TA","TKEY","TLSA","TSIG","TXT"],contains:[e.COMMENT(";","$",{relevance:0}),{className:"meta",begin:/^\$(TTL|GENERATE|INCLUDE|ORIGIN)\b/},{className:"number",begin:"((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:)))\\b"},{className:"number",begin:"((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]).){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\\b"},e.inherit(e.NUMBER_MODE,{begin:/\b\d+[dhwm]?/})]}}return Ld=t,Ld}var Pd,Gh;function GOe(){if(Gh)return Pd;Gh=1;function t(e){return{name:"Dockerfile",aliases:["docker"],case_insensitive:!0,keywords:["from","maintainer","expose","env","arg","user","onbuild","stopsignal"],contains:[e.HASH_COMMENT_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,e.NUMBER_MODE,{beginKeywords:"run cmd entrypoint volume add copy workdir label healthcheck 
shell",starts:{end:/[^\\]$/,subLanguage:"bash"}}],illegal:"",illegal:"\\n"}]},n,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},s={className:"variable",begin:/&[a-z\d_]*\b/},l={className:"keyword",begin:"/[a-z][a-z\\d-]*/"},c={className:"symbol",begin:"^\\s*[a-zA-Z_][a-zA-Z\\d_]*:"},d={className:"params",relevance:0,begin:"<",end:">",contains:[i,s]},_={className:"title.class",begin:/[a-zA-Z_][a-zA-Z\d_@-]*(?=\s\{)/,relevance:.2},p={className:"title.class",begin:/^\/(?=\s*\{)/,relevance:10},g={match:/[a-z][a-z-,]+(?=;)/,relevance:0,scope:"attr"},E={relevance:0,match:[/[a-z][a-z-,]+/,/\s*/,/=/],scope:{1:"attr",3:"operator"}},f={scope:"punctuation",relevance:0,match:/\};|[;{}]/};return{name:"Device Tree",contains:[p,s,l,c,_,E,g,d,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,i,n,o,f,{begin:e.IDENT_RE+"::",keywords:""}]}}return Fd=t,Fd}var Bd,Hh;function HOe(){if(Hh)return Bd;Hh=1;function t(e){const n="if eq ne lt lte gt gte select default math sep";return{name:"Dust",aliases:["dst"],case_insensitive:!0,subLanguage:"xml",contains:[{className:"template-tag",begin:/\{[#\/]/,end:/\}/,illegal:/;/,contains:[{className:"name",begin:/[a-zA-Z\.-]+/,starts:{endsWithParent:!0,relevance:0,contains:[e.QUOTE_STRING_MODE]}}]},{className:"template-variable",begin:/\{/,end:/\}/,illegal:/;/,keywords:n}]}}return Bd=t,Bd}var Gd,zh;function zOe(){if(zh)return Gd;zh=1;function t(e){const n=e.COMMENT(/\(\*/,/\*\)/),i={className:"attribute",begin:/^[ ]*[a-zA-Z]+([\s_-]+[a-zA-Z]+)*/},s={begin:/=/,end:/[.;]/,contains:[n,{className:"meta",begin:/\?.*\?/},{className:"string",variants:[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{begin:"`",end:"`"}]}]};return{name:"Extended Backus-Naur Form",illegal:/\S/,contains:[n,i,s]}}return Gd=t,Gd}var Yd,Vh;function VOe(){if(Vh)return Yd;Vh=1;function t(e){const n=e.regex,i="[a-zA-Z_][a-zA-Z0-9_.]*(!|\\?)?",o="[a-zA-Z_]\\w*[!?=]?|[-+~]@|<<|>>|=~|===?|<=>|[<>]=?|\\*\\*|[-/+%^&*~`|]|\\[\\]=?",c={$pattern:i,keyword:["after","alias","and","case","catch","cond","defstruct","defguard","do","else","end","fn","for","if","import","in","not","or","quote","raise","receive","require","reraise","rescue","try","unless","unquote","unquote_splicing","use","when","with|0"],literal:["false","nil","true"]},d={className:"subst",begin:/#\{/,end:/\}/,keywords:c},_={className:"number",begin:"(\\b0o[0-7_]+)|(\\b0b[01_]+)|(\\b0x[0-9a-fA-F_]+)|(-?\\b[0-9][0-9_]*(\\.[0-9_]+([eE][-+]?[0-9]+)?)?)",relevance:0},g={match:/\\[\s\S]/,scope:"char.escape",relevance:0},E=`[/|([{<"']`,f=[{begin:/"/,end:/"/},{begin:/'/,end:/'/},{begin:/\//,end:/\//},{begin:/\|/,end:/\|/},{begin:/\(/,end:/\)/},{begin:/\[/,end:/\]/},{begin:/\{/,end:/\}/},{begin://}],S=D=>({scope:"char.escape",begin:n.concat(/\\/,D),relevance:0}),C={className:"string",begin:"~[a-z](?="+E+")",contains:f.map(D=>e.inherit(D,{contains:[S(D.end),g,d]}))},h={className:"string",begin:"~[A-Z](?="+E+")",contains:f.map(D=>e.inherit(D,{contains:[S(D.end)]}))},T={className:"regex",variants:[{begin:"~r(?="+E+")",contains:f.map(D=>e.inherit(D,{end:n.concat(D.end,/[uismxfU]{0,7}/),contains:[S(D.end),g,d]}))},{begin:"~R(?="+E+")",contains:f.map(D=>e.inherit(D,{end:n.concat(D.end,/[uismxfU]{0,7}/),contains:[S(D.end)]}))}]},N={className:"string",contains:[e.BACKSLASH_ESCAPE,d],variants:[{begin:/"""/,end:/"""/},{begin:/'''/,end:/'''/},{begin:/~S"""/,end:/"""/,contains:[]},{begin:/~S"/,end:/"/,contains:[]},{begin:/~S'''/,end:/'''/,contains:[]},{begin:/~S'/,end:/'/,contains:[]},{begin:/'/,end:/'/},{begin:/"/,end:/"/}]},y={className:"function",beginKeywords:"def defp defmacro 
defmacrop",end:/\B\b/,contains:[e.inherit(e.TITLE_MODE,{begin:i,endsParent:!0})]},x=e.inherit(y,{className:"class",beginKeywords:"defimpl defmodule defprotocol defrecord",end:/\bdo\b|$|;/}),P=[N,T,h,C,e.HASH_COMMENT_MODE,x,y,{begin:"::"},{className:"symbol",begin:":(?![\\s:])",contains:[N,{begin:o}],relevance:0},{className:"symbol",begin:i+":(?!:)",relevance:0},{className:"title.class",begin:/(\b[A-Z][a-zA-Z0-9_]+)/,relevance:0},_,{className:"variable",begin:"(\\$\\W)|((\\$|@@?)(\\w+))"}];return d.contains=P,{name:"Elixir",aliases:["ex","exs"],keywords:c,contains:P}}return Yd=t,Yd}var qd,Wh;function WOe(){if(Wh)return qd;Wh=1;function t(e){const n={variants:[e.COMMENT("--","$"),e.COMMENT(/\{-/,/-\}/,{contains:["self"]})]},i={className:"type",begin:"\\b[A-Z][\\w']*",relevance:0},o={begin:"\\(",end:"\\)",illegal:'"',contains:[{className:"type",begin:"\\b[A-Z][\\w]*(\\((\\.\\.|,|\\w+)\\))?"},n]},s={begin:/\{/,end:/\}/,contains:o.contains},l={className:"string",begin:"'\\\\?.",end:"'",illegal:"."};return{name:"Elm",keywords:["let","in","if","then","else","case","of","where","module","import","exposing","type","alias","as","infix","infixl","infixr","port","effect","command","subscription"],contains:[{beginKeywords:"port effect module",end:"exposing",keywords:"port effect module where command subscription exposing",contains:[o,n],illegal:"\\W\\.|;"},{begin:"import",end:"$",keywords:"import as exposing",contains:[o,n],illegal:"\\W\\.|;"},{begin:"type",end:"$",keywords:"type alias",contains:[i,o,s,n]},{beginKeywords:"infix infixl infixr",end:"$",contains:[e.C_NUMBER_MODE,n]},{begin:"port",end:"$",keywords:"port",contains:[n]},l,e.QUOTE_STRING_MODE,e.C_NUMBER_MODE,i,e.inherit(e.TITLE_MODE,{begin:"^[_a-z][\\w']*"}),n,{begin:"->|<-"}],illegal:/;/}}return qd=t,qd}var $d,Kh;function KOe(){if(Kh)return $d;Kh=1;function t(e){const 
n=e.regex,i="([a-zA-Z_]\\w*[!?=]?|[-+~]@|<<|>>|=~|===?|<=>|[<>]=?|\\*\\*|[-/+%^&*~`|]|\\[\\]=?)",o=n.either(/\b([A-Z]+[a-z0-9]+)+/,/\b([A-Z]+[a-z0-9]+)+[A-Z]+/),s=n.concat(o,/(::\w+)*/),c={"variable.constant":["__FILE__","__LINE__","__ENCODING__"],"variable.language":["self","super"],keyword:["alias","and","begin","BEGIN","break","case","class","defined","do","else","elsif","end","END","ensure","for","if","in","module","next","not","or","redo","require","rescue","retry","return","then","undef","unless","until","when","while","yield",...["include","extend","prepend","public","private","protected","raise","throw"]],built_in:["proc","lambda","attr_accessor","attr_reader","attr_writer","define_method","private_constant","module_function"],literal:["true","false","nil"]},d={className:"doctag",begin:"@[A-Za-z]+"},_={begin:"#<",end:">"},p=[e.COMMENT("#","$",{contains:[d]}),e.COMMENT("^=begin","^=end",{contains:[d],relevance:10}),e.COMMENT("^__END__",e.MATCH_NOTHING_RE)],g={className:"subst",begin:/#\{/,end:/\}/,keywords:c},E={className:"string",contains:[e.BACKSLASH_ESCAPE,g],variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/},{begin:/`/,end:/`/},{begin:/%[qQwWx]?\(/,end:/\)/},{begin:/%[qQwWx]?\[/,end:/\]/},{begin:/%[qQwWx]?\{/,end:/\}/},{begin:/%[qQwWx]?/},{begin:/%[qQwWx]?\//,end:/\//},{begin:/%[qQwWx]?%/,end:/%/},{begin:/%[qQwWx]?-/,end:/-/},{begin:/%[qQwWx]?\|/,end:/\|/},{begin:/\B\?(\\\d{1,3})/},{begin:/\B\?(\\x[A-Fa-f0-9]{1,2})/},{begin:/\B\?(\\u\{?[A-Fa-f0-9]{1,6}\}?)/},{begin:/\B\?(\\M-\\C-|\\M-\\c|\\c\\M-|\\M-|\\C-\\M-)[\x20-\x7e]/},{begin:/\B\?\\(c|C-)[\x20-\x7e]/},{begin:/\B\?\\?\S/},{begin:n.concat(/<<[-~]?'?/,n.lookahead(/(\w+)(?=\W)[^\n]*\n(?:[^\n]*\n)*?\s*\1\b/)),contains:[e.END_SAME_AS_BEGIN({begin:/(\w+)/,end:/(\w+)/,contains:[e.BACKSLASH_ESCAPE,g]})]}]},f="[1-9](_?[0-9])*|0",S="[0-9](_?[0-9])*",C={className:"number",relevance:0,variants:[{begin:`\\b(${f})(\\.(${S}))?([eE][+-]?(${S})|r)?i?\\b`},{begin:"\\b0[dD][0-9](_?[0-9])*r?i?\\b"},{begin:"\\b0[bB][0-1](_?[0-1])*r?i?\\b"},{begin:"\\b0[oO][0-7](_?[0-7])*r?i?\\b"},{begin:"\\b0[xX][0-9a-fA-F](_?[0-9a-fA-F])*r?i?\\b"},{begin:"\\b0(_?[0-7])+r?i?\\b"}]},h={variants:[{match:/\(\)/},{className:"params",begin:/\(/,end:/(?=\))/,excludeBegin:!0,endsParent:!0,keywords:c}]},k=[E,{variants:[{match:[/class\s+/,s,/\s+<\s+/,s]},{match:[/\b(class|module)\s+/,s]}],scope:{2:"title.class",4:"title.class.inherited"},keywords:c},{match:[/(include|extend)\s+/,s],scope:{2:"title.class"},keywords:c},{relevance:0,match:[s,/\.new[. 
(]/],scope:{1:"title.class"}},{relevance:0,match:/\b[A-Z][A-Z_0-9]+\b/,className:"variable.constant"},{relevance:0,match:o,scope:"title.class"},{match:[/def/,/\s+/,i],scope:{1:"keyword",3:"title.function"},contains:[h]},{begin:e.IDENT_RE+"::"},{className:"symbol",begin:e.UNDERSCORE_IDENT_RE+"(!|\\?)?:",relevance:0},{className:"symbol",begin:":(?!\\s)",contains:[E,{begin:i}],relevance:0},C,{className:"variable",begin:"(\\$\\W)|((\\$|@@?)(\\w+))(?=[^@$?])(?![A-Za-z])(?![@$?'])"},{className:"params",begin:/\|/,end:/\|/,excludeBegin:!0,excludeEnd:!0,relevance:0,keywords:c},{begin:"("+e.RE_STARTERS_RE+"|unless)\\s*",keywords:"unless",contains:[{className:"regexp",contains:[e.BACKSLASH_ESCAPE,g],illegal:/\n/,variants:[{begin:"/",end:"/[a-z]*"},{begin:/%r\{/,end:/\}[a-z]*/},{begin:"%r\\(",end:"\\)[a-z]*"},{begin:"%r!",end:"![a-z]*"},{begin:"%r\\[",end:"\\][a-z]*"}]}].concat(_,p),relevance:0}].concat(_,p);g.contains=k,h.contains=k;const U="[>?]>",W="[\\w#]+\\(\\w+\\):\\d+:\\d+[>*]",z="(\\w+-)?\\d+\\.\\d+\\.\\d+(p\\d+)?[^\\d][^>]+>",K=[{begin:/^\s*=>/,starts:{end:"$",contains:k}},{className:"meta.prompt",begin:"^("+U+"|"+W+"|"+z+")(?=[ ])",starts:{end:"$",keywords:c,contains:k}}];return p.unshift(_),{name:"Ruby",aliases:["rb","gemspec","podspec","thor","irb"],keywords:c,illegal:/\/\*/,contains:[e.SHEBANG({binary:"ruby"})].concat(K).concat(p).concat(k)}}return $d=t,$d}var Hd,Qh;function QOe(){if(Qh)return Hd;Qh=1;function t(e){return{name:"ERB",subLanguage:"xml",contains:[e.COMMENT("<%#","%>"),{begin:"<%[%=-]?",end:"[%-]?%>",subLanguage:"ruby",excludeBegin:!0,excludeEnd:!0}]}}return Hd=t,Hd}var zd,Xh;function XOe(){if(Xh)return zd;Xh=1;function t(e){const n=e.regex;return{name:"Erlang REPL",keywords:{built_in:"spawn spawn_link self",keyword:"after and andalso|10 band begin bnot bor bsl bsr bxor case catch cond div end fun if let not of or orelse|10 query receive rem try when xor"},contains:[{className:"meta.prompt",begin:"^[0-9]+> ",relevance:10},e.COMMENT("%","$"),{className:"number",begin:"\\b(\\d+(_\\d+)*#[a-fA-F0-9]+(_[a-fA-F0-9]+)*|\\d+(_\\d+)*(\\.\\d+(_\\d+)*)?([eE][-+]?\\d+)?)",relevance:0},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{begin:n.concat(/\?(::)?/,/([A-Z]\w*)/,/((::)[A-Z]\w*)*/)},{begin:"->"},{begin:"ok"},{begin:"!"},{begin:"(\\b[a-z'][a-zA-Z0-9_']*:[a-z'][a-zA-Z0-9_']*)|(\\b[a-z'][a-zA-Z0-9_']*)",relevance:0},{begin:"[A-Z][a-zA-Z0-9_']*",relevance:0}]}}return zd=t,zd}var Vd,Zh;function ZOe(){if(Zh)return Vd;Zh=1;function t(e){const n="[a-z'][a-zA-Z0-9_']*",i="("+n+":"+n+"|"+n+")",o={keyword:"after and andalso|10 band begin bnot bor bsl bzr bxor case catch cond div end fun if let not of orelse|10 query receive rem try when xor",literal:"false true"},s=e.COMMENT("%","$"),l={className:"number",begin:"\\b(\\d+(_\\d+)*#[a-fA-F0-9]+(_[a-fA-F0-9]+)*|\\d+(_\\d+)*(\\.\\d+(_\\d+)*)?([eE][-+]?\\d+)?)",relevance:0},c={begin:"fun\\s+"+n+"/\\d+"},d={begin:i+"\\(",end:"\\)",returnBegin:!0,relevance:0,contains:[{begin:i,relevance:0},{begin:"\\(",end:"\\)",endsWithParent:!0,returnEnd:!0,relevance:0}]},_={begin:/\{/,end:/\}/,relevance:0},p={begin:"\\b_([A-Z][A-Za-z0-9_]*)?",relevance:0},g={begin:"[A-Z][a-zA-Z0-9_]*",relevance:0},E={begin:"#"+e.UNDERSCORE_IDENT_RE,relevance:0,returnBegin:!0,contains:[{begin:"#"+e.UNDERSCORE_IDENT_RE,relevance:0},{begin:/\{/,end:/\}/,relevance:0}]},f={beginKeywords:"fun receive if try case",end:"end",keywords:o};f.contains=[s,c,e.inherit(e.APOS_STRING_MODE,{className:""}),f,d,e.QUOTE_STRING_MODE,l,_,p,g,E];const 
S=[s,c,f,d,e.QUOTE_STRING_MODE,l,_,p,g,E];d.contains[1].contains=S,_.contains=S,E.contains[1].contains=S;const C=["-module","-record","-undef","-export","-ifdef","-ifndef","-author","-copyright","-doc","-vsn","-import","-include","-include_lib","-compile","-define","-else","-endif","-file","-behaviour","-behavior","-spec"],h={className:"params",begin:"\\(",end:"\\)",contains:S};return{name:"Erlang",aliases:["erl"],keywords:o,illegal:"(</|\\*=|\\+=|-=|/\\*|\\*/|\\(\\*|\\*\\))",contains:[{className:"function",begin:"^"+n+"\\s*\\(",end:"->",returnBegin:!0,illegal:"\\(|#|//|/\\*|\\\\|:|;",contains:[h,e.inherit(e.TITLE_MODE,{begin:n})],starts:{end:";|\\.",keywords:o,contains:S}},s,{begin:"^-",end:"\\.",relevance:0,excludeEnd:!0,returnBegin:!0,keywords:{$pattern:"-"+e.IDENT_RE,keyword:C.map(T=>`${T}|1.5`).join(" ")},contains:[h]},l,e.QUOTE_STRING_MODE,E,p,g,_,{begin:/\.$/}]}}return Vd=t,Vd}var Wd,Jh;function JOe(){if(Jh)return Wd;Jh=1;function t(e){return{name:"Excel formulae",aliases:["xlsx","xls"],case_insensitive:!0,keywords:{$pattern:/[a-zA-Z][\w\.]*/,built_in:["ABS","ACCRINT","ACCRINTM","ACOS","ACOSH","ACOT","ACOTH","AGGREGATE","ADDRESS","AMORDEGRC","AMORLINC","AND","ARABIC","AREAS","ASC","ASIN","ASINH","ATAN","ATAN2","ATANH","AVEDEV","AVERAGE","AVERAGEA","AVERAGEIF","AVERAGEIFS","BAHTTEXT","BASE","BESSELI","BESSELJ","BESSELK","BESSELY","BETADIST","BETA.DIST","BETAINV","BETA.INV","BIN2DEC","BIN2HEX","BIN2OCT","BINOMDIST","BINOM.DIST","BINOM.DIST.RANGE","BINOM.INV","BITAND","BITLSHIFT","BITOR","BITRSHIFT","BITXOR","CALL","CEILING","CEILING.MATH","CEILING.PRECISE","CELL","CHAR","CHIDIST","CHIINV","CHITEST","CHISQ.DIST","CHISQ.DIST.RT","CHISQ.INV","CHISQ.INV.RT","CHISQ.TEST","CHOOSE","CLEAN","CODE","COLUMN","COLUMNS","COMBIN","COMBINA","COMPLEX","CONCAT","CONCATENATE","CONFIDENCE","CONFIDENCE.NORM","CONFIDENCE.T","CONVERT","CORREL","COS","COSH","COT","COTH","COUNT","COUNTA","COUNTBLANK","COUNTIF","COUNTIFS","COUPDAYBS","COUPDAYS","COUPDAYSNC","COUPNCD","COUPNUM","COUPPCD","COVAR","COVARIANCE.P","COVARIANCE.S","CRITBINOM","CSC","CSCH","CUBEKPIMEMBER","CUBEMEMBER","CUBEMEMBERPROPERTY","CUBERANKEDMEMBER","CUBESET","CUBESETCOUNT","CUBEVALUE","CUMIPMT","CUMPRINC","DATE","DATEDIF","DATEVALUE","DAVERAGE","DAY","DAYS","DAYS360","DB","DBCS","DCOUNT","DCOUNTA","DDB","DEC2BIN","DEC2HEX","DEC2OCT","DECIMAL","DEGREES","DELTA","DEVSQ","DGET","DISC","DMAX","DMIN","DOLLAR","DOLLARDE","DOLLARFR","DPRODUCT","DSTDEV","DSTDEVP","DSUM","DURATION","DVAR","DVARP","EDATE","EFFECT","ENCODEURL","EOMONTH","ERF","ERF.PRECISE","ERFC","ERFC.PRECISE","ERROR.TYPE","EUROCONVERT","EVEN","EXACT","EXP","EXPON.DIST","EXPONDIST","FACT","FACTDOUBLE","FALSE|0","F.DIST","FDIST","F.DIST.RT","FILTERXML","FIND","FINDB","F.INV","F.INV.RT","FINV","FISHER","FISHERINV","FIXED","FLOOR","FLOOR.MATH","FLOOR.PRECISE","FORECAST","FORECAST.ETS","FORECAST.ETS.CONFINT","FORECAST.ETS.SEASONALITY","FORECAST.ETS.STAT","FORECAST.LINEAR","FORMULATEXT","FREQUENCY","F.TEST","FTEST","FV","FVSCHEDULE","GAMMA","GAMMA.DIST","GAMMADIST","GAMMA.INV","GAMMAINV","GAMMALN","GAMMALN.PRECISE","GAUSS","GCD","GEOMEAN","GESTEP","GETPIVOTDATA","GROWTH","HARMEAN","HEX2BIN","HEX2DEC","HEX2OCT","HLOOKUP","HOUR","HYPERLINK","HYPGEOM.DIST","HYPGEOMDIST","IF","IFERROR","IFNA","IFS","IMABS","IMAGINARY","IMARGUMENT","IMCONJUGATE","IMCOS","IMCOSH","IMCOT","IMCSC","IMCSCH","IMDIV","IMEXP","IMLN","IMLOG10","IMLOG2","IMPOWER","IMPRODUCT","IMREAL","IMSEC","IMSECH","IMSIN","IMSINH","IMSQRT","IMSUB","IMSUM","IMTAN","INDEX","INDIRECT","INFO","INT","INTERCEPT","INTRATE","IPMT","IRR","ISBLANK","ISERR","ISERROR","ISEVEN","ISFORMULA","ISLOGICAL","ISNA","ISNONTEXT","ISNUMBER","ISODD","ISREF",
"ISTEXT","ISO.CEILING","ISOWEEKNUM","ISPMT","JIS","KURT","LARGE","LCM","LEFT","LEFTB","LEN","LENB","LINEST","LN","LOG","LOG10","LOGEST","LOGINV","LOGNORM.DIST","LOGNORMDIST","LOGNORM.INV","LOOKUP","LOWER","MATCH","MAX","MAXA","MAXIFS","MDETERM","MDURATION","MEDIAN","MID","MIDBs","MIN","MINIFS","MINA","MINUTE","MINVERSE","MIRR","MMULT","MOD","MODE","MODE.MULT","MODE.SNGL","MONTH","MROUND","MULTINOMIAL","MUNIT","N","NA","NEGBINOM.DIST","NEGBINOMDIST","NETWORKDAYS","NETWORKDAYS.INTL","NOMINAL","NORM.DIST","NORMDIST","NORMINV","NORM.INV","NORM.S.DIST","NORMSDIST","NORM.S.INV","NORMSINV","NOT","NOW","NPER","NPV","NUMBERVALUE","OCT2BIN","OCT2DEC","OCT2HEX","ODD","ODDFPRICE","ODDFYIELD","ODDLPRICE","ODDLYIELD","OFFSET","OR","PDURATION","PEARSON","PERCENTILE.EXC","PERCENTILE.INC","PERCENTILE","PERCENTRANK.EXC","PERCENTRANK.INC","PERCENTRANK","PERMUT","PERMUTATIONA","PHI","PHONETIC","PI","PMT","POISSON.DIST","POISSON","POWER","PPMT","PRICE","PRICEDISC","PRICEMAT","PROB","PRODUCT","PROPER","PV","QUARTILE","QUARTILE.EXC","QUARTILE.INC","QUOTIENT","RADIANS","RAND","RANDBETWEEN","RANK.AVG","RANK.EQ","RANK","RATE","RECEIVED","REGISTER.ID","REPLACE","REPLACEB","REPT","RIGHT","RIGHTB","ROMAN","ROUND","ROUNDDOWN","ROUNDUP","ROW","ROWS","RRI","RSQ","RTD","SEARCH","SEARCHB","SEC","SECH","SECOND","SERIESSUM","SHEET","SHEETS","SIGN","SIN","SINH","SKEW","SKEW.P","SLN","SLOPE","SMALL","SQL.REQUEST","SQRT","SQRTPI","STANDARDIZE","STDEV","STDEV.P","STDEV.S","STDEVA","STDEVP","STDEVPA","STEYX","SUBSTITUTE","SUBTOTAL","SUM","SUMIF","SUMIFS","SUMPRODUCT","SUMSQ","SUMX2MY2","SUMX2PY2","SUMXMY2","SWITCH","SYD","T","TAN","TANH","TBILLEQ","TBILLPRICE","TBILLYIELD","T.DIST","T.DIST.2T","T.DIST.RT","TDIST","TEXT","TEXTJOIN","TIME","TIMEVALUE","T.INV","T.INV.2T","TINV","TODAY","TRANSPOSE","TREND","TRIM","TRIMMEAN","TRUE|0","TRUNC","T.TEST","TTEST","TYPE","UNICHAR","UNICODE","UPPER","VALUE","VAR","VAR.P","VAR.S","VARA","VARP","VARPA","VDB","VLOOKUP","WEBSERVICE","WEEKDAY","WEEKNUM","WEIBULL","WEIBULL.DIST","WORKDAY","WORKDAY.INTL","XIRR","XNPV","XOR","YEAR","YEARFRAC","YIELD","YIELDDISC","YIELDMAT","Z.TEST","ZTEST"]},contains:[{begin:/^=/,end:/[^=]/,returnEnd:!0,illegal:/=/,relevance:10},{className:"symbol",begin:/\b[A-Z]{1,2}\d+\b/,end:/[^\d]/,excludeEnd:!0,relevance:0},{className:"symbol",begin:/[A-Z]{0,2}\d*:[A-Z]{0,2}\d*/,relevance:0},e.BACKSLASH_ESCAPE,e.QUOTE_STRING_MODE,{className:"number",begin:e.NUMBER_RE+"(%)?",relevance:0},e.COMMENT(/\bN\(/,/\)/,{excludeBegin:!0,excludeEnd:!0,illegal:/\n/})]}}return Wd=t,Wd}var Kd,jh;function jOe(){if(jh)return Kd;jh=1;function t(e){return{name:"FIX",contains:[{begin:/[^\u2401\u0001]+/,end:/[\u2401\u0001]/,excludeEnd:!0,returnBegin:!0,returnEnd:!1,contains:[{begin:/([^\u2401\u0001=]+)/,end:/=([^\u2401\u0001=]+)/,returnEnd:!0,returnBegin:!1,className:"attr"},{begin:/=/,end:/([\u2401\u0001])/,excludeEnd:!0,excludeBegin:!0,className:"string"}]}],case_insensitive:!0}}return Kd=t,Kd}var Qd,eT;function eAe(){if(eT)return Qd;eT=1;function t(e){const n={className:"string",begin:/'(.|\\[xXuU][a-zA-Z0-9]+)'/},i={className:"string",variants:[{begin:'"',end:'"'}]},s={className:"function",beginKeywords:"def",end:/[:={\[(\n;]/,excludeEnd:!0,contains:[{className:"title",relevance:0,begin:/[^0-9\n\t "'(),.`{}\[\]:;][^\n\t "'(),.`{}\[\]:;]+|[^0-9\n\t 
"'(),.`{}\[\]:;=]/}]};return{name:"Flix",keywords:{keyword:["case","class","def","else","enum","if","impl","import","in","lat","rel","index","let","match","namespace","switch","type","yield","with"],literal:["true","false"]},contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,n,i,s,e.C_NUMBER_MODE]}}return Qd=t,Qd}var Xd,tT;function tAe(){if(tT)return Xd;tT=1;function t(e){const n=e.regex,i={className:"params",begin:"\\(",end:"\\)"},o={variants:[e.COMMENT("!","$",{relevance:0}),e.COMMENT("^C[ ]","$",{relevance:0}),e.COMMENT("^C$","$",{relevance:0})]},s=/(_[a-z_\d]+)?/,l=/([de][+-]?\d+)?/,c={className:"number",variants:[{begin:n.concat(/\b\d+/,/\.(\d*)/,l,s)},{begin:n.concat(/\b\d+/,l,s)},{begin:n.concat(/\.\d+/,l,s)}],relevance:0},d={className:"function",beginKeywords:"subroutine function program",illegal:"[${=\\n]",contains:[e.UNDERSCORE_TITLE_MODE,i]},_={className:"string",relevance:0,variants:[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]};return{name:"Fortran",case_insensitive:!0,aliases:["f90","f95"],keywords:{keyword:["kind","do","concurrent","local","shared","while","private","call","intrinsic","where","elsewhere","type","endtype","endmodule","endselect","endinterface","end","enddo","endif","if","forall","endforall","only","contains","default","return","stop","then","block","endblock","endassociate","public","subroutine|10","function","program",".and.",".or.",".not.",".le.",".eq.",".ge.",".gt.",".lt.","goto","save","else","use","module","select","case","access","blank","direct","exist","file","fmt","form","formatted","iostat","name","named","nextrec","number","opened","rec","recl","sequential","status","unformatted","unit","continue","format","pause","cycle","exit","c_null_char","c_alert","c_backspace","c_form_feed","flush","wait","decimal","round","iomsg","synchronous","nopass","non_overridable","pass","protected","volatile","abstract","extends","import","non_intrinsic","value","deferred","generic","final","enumerator","class","associate","bind","enum","c_int","c_short","c_long","c_long_long","c_signed_char","c_size_t","c_int8_t","c_int16_t","c_int32_t","c_int64_t","c_int_least8_t","c_int_least16_t","c_int_least32_t","c_int_least64_t","c_int_fast8_t","c_int_fast16_t","c_int_fast32_t","c_int_fast64_t","c_intmax_t","C_intptr_t","c_float","c_double","c_long_double","c_float_complex","c_double_complex","c_long_double_complex","c_bool","c_char","c_null_ptr","c_null_funptr","c_new_line","c_carriage_return","c_horizontal_tab","c_vertical_tab","iso_c_binding","c_loc","c_funloc","c_associated","c_f_pointer","c_ptr","c_funptr","iso_fortran_env","character_storage_size","error_unit","file_storage_size","input_unit","iostat_end","iostat_eor","numeric_storage_size","output_unit","c_f_procpointer","ieee_arithmetic","ieee_support_underflow_control","ieee_get_underflow_mode","ieee_set_underflow_mode","newunit","contiguous","recursive","pad","position","action","delim","readwrite","eor","advance","nml","interface","procedure","namelist","include","sequence","elemental","pure","impure","integer","real","character","complex","logical","codimension","dimension","allocatable|10","parameter","external","implicit|10","none","double","precision","assign","intent","optional","pointer","target","in","out","common","equivalence","data"],literal:[".False.",".True."],built_in:["alog","alog10","amax0","amax1","amin0","amin1","amod","cabs","ccos","cexp","clog","csin","csqrt","dabs","dacos","dasin","datan","datan2","dcos","dcosh","ddim","dexp","dint","dlog","dlog10","dmax1","dmin1","dmod","dnint","dsign","dsin","dsin
h","dsqrt","dtan","dtanh","float","iabs","idim","idint","idnint","ifix","isign","max0","max1","min0","min1","sngl","algama","cdabs","cdcos","cdexp","cdlog","cdsin","cdsqrt","cqabs","cqcos","cqexp","cqlog","cqsin","cqsqrt","dcmplx","dconjg","derf","derfc","dfloat","dgamma","dimag","dlgama","iqint","qabs","qacos","qasin","qatan","qatan2","qcmplx","qconjg","qcos","qcosh","qdim","qerf","qerfc","qexp","qgamma","qimag","qlgama","qlog","qlog10","qmax1","qmin1","qmod","qnint","qsign","qsin","qsinh","qsqrt","qtan","qtanh","abs","acos","aimag","aint","anint","asin","atan","atan2","char","cmplx","conjg","cos","cosh","exp","ichar","index","int","log","log10","max","min","nint","sign","sin","sinh","sqrt","tan","tanh","print","write","dim","lge","lgt","lle","llt","mod","nullify","allocate","deallocate","adjustl","adjustr","all","allocated","any","associated","bit_size","btest","ceiling","count","cshift","date_and_time","digits","dot_product","eoshift","epsilon","exponent","floor","fraction","huge","iand","ibclr","ibits","ibset","ieor","ior","ishft","ishftc","lbound","len_trim","matmul","maxexponent","maxloc","maxval","merge","minexponent","minloc","minval","modulo","mvbits","nearest","pack","present","product","radix","random_number","random_seed","range","repeat","reshape","rrspacing","scale","scan","selected_int_kind","selected_real_kind","set_exponent","shape","size","spacing","spread","sum","system_clock","tiny","transpose","trim","ubound","unpack","verify","achar","iachar","transfer","dble","entry","dprod","cpu_time","command_argument_count","get_command","get_command_argument","get_environment_variable","is_iostat_end","ieee_arithmetic","ieee_support_underflow_control","ieee_get_underflow_mode","ieee_set_underflow_mode","is_iostat_eor","move_alloc","new_line","selected_char_kind","same_type_as","extends_type_of","acosh","asinh","atanh","bessel_j0","bessel_j1","bessel_jn","bessel_y0","bessel_y1","bessel_yn","erf","erfc","erfc_scaled","gamma","log_gamma","hypot","norm2","atomic_define","atomic_ref","execute_command_line","leadz","trailz","storage_size","merge_bits","bge","bgt","ble","blt","dshiftl","dshiftr","findloc","iall","iany","iparity","image_index","lcobound","ucobound","maskl","maskr","num_images","parity","popcnt","poppar","shifta","shiftl","shiftr","this_image","sync","change","team","co_broadcast","co_max","co_min","co_sum","co_reduce"]},illegal:/\/\*/,contains:[_,d,{begin:/^C\s*=(?!=)/,relevance:0},o,c]}}return Xd=t,Xd}var Zd,nT;function nAe(){if(nT)return Zd;nT=1;function t(c){return new RegExp(c.replace(/[-/\\^$*+?.()|[\]{}]/g,"\\$&"),"m")}function e(c){return c?typeof c=="string"?c:c.source:null}function n(c){return i("(?=",c,")")}function i(...c){return c.map(_=>e(_)).join("")}function o(c){const d=c[c.length-1];return typeof d=="object"&&d.constructor===Object?(c.splice(c.length-1,1),d):{}}function s(...c){return"("+(o(c).capture?"":"?:")+c.map(p=>e(p)).join("|")+")"}function l(c){const 
d=["abstract","and","as","assert","base","begin","class","default","delegate","do","done","downcast","downto","elif","else","end","exception","extern","finally","fixed","for","fun","function","global","if","in","inherit","inline","interface","internal","lazy","let","match","member","module","mutable","namespace","new","of","open","or","override","private","public","rec","return","static","struct","then","to","try","type","upcast","use","val","void","when","while","with","yield"],_={scope:"keyword",match:/\b(yield|return|let|do|match|use)!/},p=["if","else","endif","line","nowarn","light","r","i","I","load","time","help","quit"],g=["true","false","null","Some","None","Ok","Error","infinity","infinityf","nan","nanf"],E=["__LINE__","__SOURCE_DIRECTORY__","__SOURCE_FILE__"],f=["bool","byte","sbyte","int8","int16","int32","uint8","uint16","uint32","int","uint","int64","uint64","nativeint","unativeint","decimal","float","double","float32","single","char","string","unit","bigint","option","voption","list","array","seq","byref","exn","inref","nativeptr","obj","outref","voidptr","Result"],C={keyword:d,literal:g,built_in:["not","ref","raise","reraise","dict","readOnlyDict","set","get","enum","sizeof","typeof","typedefof","nameof","nullArg","invalidArg","invalidOp","id","fst","snd","ignore","lock","using","box","unbox","tryUnbox","printf","printfn","sprintf","eprintf","eprintfn","fprintf","fprintfn","failwith","failwithf"],"variable.constant":E},T={variants:[c.COMMENT(/\(\*(?!\))/,/\*\)/,{contains:["self"]}),c.C_LINE_COMMENT_MODE]},N=/[a-zA-Z_](\w|')*/,y={scope:"variable",begin:/``/,end:/``/},x=/\B('|\^)/,P={scope:"symbol",variants:[{match:i(x,/``.*?``/)},{match:i(x,c.UNDERSCORE_IDENT_RE)}],relevance:0},D=function({includeEqual:Ce}){let Be;Ce?Be="!%&*+-/<=>@^|~?":Be="!%&*+-/<>@^|~?";const Ve=Array.from(Be),xe=i("[",...Ve.map(t),"]"),He=s(xe,/\./),rt=i(He,n(He)),We=s(i(rt,He,"*"),i(xe,"+"));return{scope:"operator",match:s(We,/:\?>/,/:\?/,/:>/,/:=/,/::?/,/\$/),relevance:0}},k=D({includeEqual:!0}),U=D({includeEqual:!1}),W=function(Ce,Be){return{begin:i(Ce,n(i(/\s*/,s(/\w/,/'/,/\^/,/#/,/``/,/\(/,/{\|/)))),beginScope:Be,end:n(s(/\n/,/=/)),relevance:0,keywords:c.inherit(C,{type:f}),contains:[T,P,c.inherit(y,{scope:null}),U]}},z=W(/:/,"operator"),K=W(/\bof\b/,"keyword"),Ee={begin:[/(^|\s+)/,/type/,/\s+/,N],beginScope:{2:"keyword",4:"title.class"},end:n(/\(|=|$/),keywords:C,contains:[T,c.inherit(y,{scope:null}),P,{scope:"operator",match:/<|>/},z]},oe={scope:"computation-expression",match:/\b[_a-z]\w*(?=\s*\{)/},L={begin:[/^\s*/,i(/#/,s(...p)),/\b/],beginScope:{2:"meta"},end:n(/\s|$/)},J={variants:[c.BINARY_NUMBER_MODE,c.C_NUMBER_MODE]},re={scope:"string",begin:/"/,end:/"/,contains:[c.BACKSLASH_ESCAPE]},G={scope:"string",begin:/@"/,end:/"/,contains:[{match:/""/},c.BACKSLASH_ESCAPE]},X={scope:"string",begin:/"""/,end:/"""/,relevance:2},_e={scope:"subst",begin:/\{/,end:/\}/,keywords:C},ve={scope:"string",begin:/\$"/,end:/"/,contains:[{match:/\{\{/},{match:/\}\}/},c.BACKSLASH_ESCAPE,_e]},he={scope:"string",begin:/(\$@|@\$)"/,end:/"/,contains:[{match:/\{\{/},{match:/\}\}/},{match:/""/},c.BACKSLASH_ESCAPE,_e]},tt={scope:"string",begin:/\$"""/,end:/"""/,contains:[{match:/\{\{/},{match:/\}\}/},_e],relevance:2},lt={scope:"string",match:i(/'/,s(/[^\\']/,/\\(?:.|\d{3}|x[a-fA-F\d]{2}|u[a-fA-F\d]{4}|U[a-fA-F\d]{8})/),/'/)};return 
_e.contains=[he,ve,G,re,lt,_,T,y,z,oe,L,J,P,k],{name:"F#",aliases:["fs","f#"],keywords:C,illegal:/\/\*/,classNameAliases:{"computation-expression":"keyword"},contains:[_,{variants:[tt,he,ve,X,G,re,lt]},T,y,Ee,{scope:"meta",begin:/\[\]/,relevance:2,contains:[y,X,G,re,lt,J]},K,z,oe,L,J,P,k]}}return Zd=l,Zd}var Jd,rT;function rAe(){if(rT)return Jd;rT=1;function t(e){const n=e.regex,i={keyword:"abort acronym acronyms alias all and assign binary card diag display else eq file files for free ge gt if integer le loop lt maximizing minimizing model models ne negative no not option options or ord positive prod put putpage puttl repeat sameas semicont semiint smax smin solve sos1 sos2 sum system table then until using while xor yes",literal:"eps inf na",built_in:"abs arccos arcsin arctan arctan2 Beta betaReg binomial ceil centropy cos cosh cvPower div div0 eDist entropy errorf execSeed exp fact floor frac gamma gammaReg log logBeta logGamma log10 log2 mapVal max min mod ncpCM ncpF ncpVUpow ncpVUsin normal pi poly power randBinomial randLinear randTriangle round rPower sigmoid sign signPower sin sinh slexp sllog10 slrec sqexp sqlog10 sqr sqrec sqrt tan tanh trunc uniform uniformInt vcPower bool_and bool_eqv bool_imp bool_not bool_or bool_xor ifThen rel_eq rel_ge rel_gt rel_le rel_lt rel_ne gday gdow ghour gleap gmillisec gminute gmonth gsecond gyear jdate jnow jstart jtime errorLevel execError gamsRelease gamsVersion handleCollect handleDelete handleStatus handleSubmit heapFree heapLimit heapSize jobHandle jobKill jobStatus jobTerminate licenseLevel licenseStatus maxExecError sleep timeClose timeComp timeElapsed timeExec timeStart"},o={className:"params",begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0},s={className:"symbol",variants:[{begin:/=[lgenxc]=/},{begin:/\$/}]},l={className:"comment",variants:[{begin:"'",end:"'"},{begin:'"',end:'"'}],illegal:"\\n",contains:[e.BACKSLASH_ESCAPE]},c={begin:"/",end:"/",keywords:i,contains:[l,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,e.C_NUMBER_MODE]},d=/[a-z0-9&#*=?@\\><:,()$[\]_.{}!+%^-]+/,_={begin:/[a-z][a-z0-9_]*(\([a-z0-9_, ]*\))?[ \t]+/,excludeBegin:!0,end:"$",endsWithParent:!0,contains:[l,c,{className:"comment",begin:n.concat(d,n.anyNumberOfTimes(n.concat(/[ ]+/,d))),relevance:0}]};return{name:"GAMS",aliases:["gms"],case_insensitive:!0,keywords:i,contains:[e.COMMENT(/^\$ontext/,/^\$offtext/),{className:"meta",begin:"^\\$[a-z0-9]+",end:"$",returnBegin:!0,contains:[{className:"keyword",begin:"^\\$[a-z0-9]+"}]},e.COMMENT("^\\*","$"),e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,{beginKeywords:"set sets parameter parameters variable variables scalar scalars equation equations",end:";",contains:[e.COMMENT("^\\*","$"),e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,c,_]},{beginKeywords:"table",end:";",returnBegin:!0,contains:[{beginKeywords:"table",end:"$",contains:[_]},e.COMMENT("^\\*","$"),e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,e.C_NUMBER_MODE]},{className:"function",begin:/^[a-z][a-z0-9_,\-+' ()$]+\.{2}/,returnBegin:!0,contains:[{className:"title",begin:/^[a-z0-9_]+/},o,s]},e.C_NUMBER_MODE,s]}}return Jd=t,Jd}var jd,iT;function iAe(){if(iT)return jd;iT=1;function t(e){const n={keyword:"bool break call callexe checkinterrupt clear clearg closeall cls comlog compile continue create debug declare delete disable dlibrary dllcall do dos ed edit else elseif enable end endfor endif endp endo errorlog 
errorlogat expr external fn for format goto gosub graph if keyword let lib library line load loadarray loadexe loadf loadk loadm loadp loads loadx local locate loopnextindex lprint lpwidth lshow matrix msym ndpclex new open output outwidth plot plotsym pop prcsn print printdos proc push retp return rndcon rndmod rndmult rndseed run save saveall screen scroll setarray show sparse stop string struct system trace trap threadfor threadendfor threadbegin threadjoin threadstat threadend until use while winprint ne ge le gt lt and xor or not eq eqv",built_in:"abs acf aconcat aeye amax amean AmericanBinomCall AmericanBinomCall_Greeks AmericanBinomCall_ImpVol AmericanBinomPut AmericanBinomPut_Greeks AmericanBinomPut_ImpVol AmericanBSCall AmericanBSCall_Greeks AmericanBSCall_ImpVol AmericanBSPut AmericanBSPut_Greeks AmericanBSPut_ImpVol amin amult annotationGetDefaults annotationSetBkd annotationSetFont annotationSetLineColor annotationSetLineStyle annotationSetLineThickness annualTradingDays arccos arcsin areshape arrayalloc arrayindex arrayinit arraytomat asciiload asclabel astd astds asum atan atan2 atranspose axmargin balance band bandchol bandcholsol bandltsol bandrv bandsolpd bar base10 begwind besselj bessely beta box boxcox cdfBeta cdfBetaInv cdfBinomial cdfBinomialInv cdfBvn cdfBvn2 cdfBvn2e cdfCauchy cdfCauchyInv cdfChic cdfChii cdfChinc cdfChincInv cdfExp cdfExpInv cdfFc cdfFnc cdfFncInv cdfGam cdfGenPareto cdfHyperGeo cdfLaplace cdfLaplaceInv cdfLogistic cdfLogisticInv cdfmControlCreate cdfMvn cdfMvn2e cdfMvnce cdfMvne cdfMvt2e cdfMvtce cdfMvte cdfN cdfN2 cdfNc cdfNegBinomial cdfNegBinomialInv cdfNi cdfPoisson cdfPoissonInv cdfRayleigh cdfRayleighInv cdfTc cdfTci cdfTnc cdfTvn cdfWeibull cdfWeibullInv cdir ceil ChangeDir chdir chiBarSquare chol choldn cholsol cholup chrs close code cols colsf combinate combinated complex con cond conj cons ConScore contour conv convertsatostr convertstrtosa corrm corrms corrvc corrx corrxs cos cosh counts countwts crossprd crout croutp csrcol csrlin csvReadM csvReadSA cumprodc cumsumc curve cvtos datacreate datacreatecomplex datalist dataload dataloop dataopen datasave date datestr datestring datestrymd dayinyr dayofweek dbAddDatabase dbClose dbCommit dbCreateQuery dbExecQuery dbGetConnectOptions dbGetDatabaseName dbGetDriverName dbGetDrivers dbGetHostName dbGetLastErrorNum dbGetLastErrorText dbGetNumericalPrecPolicy dbGetPassword dbGetPort dbGetTableHeaders dbGetTables dbGetUserName dbHasFeature dbIsDriverAvailable dbIsOpen dbIsOpenError dbOpen dbQueryBindValue dbQueryClear dbQueryCols dbQueryExecPrepared dbQueryFetchAllM dbQueryFetchAllSA dbQueryFetchOneM dbQueryFetchOneSA dbQueryFinish dbQueryGetBoundValue dbQueryGetBoundValues dbQueryGetField dbQueryGetLastErrorNum dbQueryGetLastErrorText dbQueryGetLastInsertID dbQueryGetLastQuery dbQueryGetPosition dbQueryIsActive dbQueryIsForwardOnly dbQueryIsNull dbQueryIsSelect dbQueryIsValid dbQueryPrepare dbQueryRows dbQuerySeek dbQuerySeekFirst dbQuerySeekLast dbQuerySeekNext dbQuerySeekPrevious dbQuerySetForwardOnly dbRemoveDatabase dbRollback dbSetConnectOptions dbSetDatabaseName dbSetHostName dbSetNumericalPrecPolicy dbSetPort dbSetUserName dbTransaction DeleteFile delif delrows denseToSp denseToSpRE denToZero design det detl dfft dffti diag diagrv digamma doswin DOSWinCloseall DOSWinOpen dotfeq dotfeqmt dotfge dotfgemt dotfgt dotfgtmt dotfle dotflemt dotflt dotfltmt dotfne dotfnemt draw drop dsCreate dstat dstatmt dstatmtControlCreate dtdate dtday dttime dttodtv dttostr dttoutc dtvnormal dtvtodt dtvtoutc 
dummy dummybr dummydn eig eigh eighv eigv elapsedTradingDays endwind envget eof eqSolve eqSolvemt eqSolvemtControlCreate eqSolvemtOutCreate eqSolveset erf erfc erfccplx erfcplx error etdays ethsec etstr EuropeanBinomCall EuropeanBinomCall_Greeks EuropeanBinomCall_ImpVol EuropeanBinomPut EuropeanBinomPut_Greeks EuropeanBinomPut_ImpVol EuropeanBSCall EuropeanBSCall_Greeks EuropeanBSCall_ImpVol EuropeanBSPut EuropeanBSPut_Greeks EuropeanBSPut_ImpVol exctsmpl exec execbg exp extern eye fcheckerr fclearerr feq feqmt fflush fft ffti fftm fftmi fftn fge fgemt fgets fgetsa fgetsat fgetst fgt fgtmt fileinfo filesa fle flemt floor flt fltmt fmod fne fnemt fonts fopen formatcv formatnv fputs fputst fseek fstrerror ftell ftocv ftos ftostrC gamma gammacplx gammaii gausset gdaAppend gdaCreate gdaDStat gdaDStatMat gdaGetIndex gdaGetName gdaGetNames gdaGetOrders gdaGetType gdaGetTypes gdaGetVarInfo gdaIsCplx gdaLoad gdaPack gdaRead gdaReadByIndex gdaReadSome gdaReadSparse gdaReadStruct gdaReportVarInfo gdaSave gdaUpdate gdaUpdateAndPack gdaVars gdaWrite gdaWrite32 gdaWriteSome getarray getdims getf getGAUSShome getmatrix getmatrix4D getname getnamef getNextTradingDay getNextWeekDay getnr getorders getpath getPreviousTradingDay getPreviousWeekDay getRow getscalar3D getscalar4D getTrRow getwind glm gradcplx gradMT gradMTm gradMTT gradMTTm gradp graphprt graphset hasimag header headermt hess hessMT hessMTg hessMTgw hessMTm hessMTmw hessMTT hessMTTg hessMTTgw hessMTTm hessMTw hessp hist histf histp hsec imag indcv indexcat indices indices2 indicesf indicesfn indnv indsav integrate1d integrateControlCreate intgrat2 intgrat3 inthp1 inthp2 inthp3 inthp4 inthpControlCreate intquad1 intquad2 intquad3 intrleav intrleavsa intrsect intsimp inv invpd invswp iscplx iscplxf isden isinfnanmiss ismiss key keyav keyw lag lag1 lagn lapEighb lapEighi lapEighvb lapEighvi lapgEig lapgEigh lapgEighv lapgEigv lapgSchur lapgSvdcst lapgSvds lapgSvdst lapSvdcusv lapSvds lapSvdusv ldlp ldlsol linSolve listwise ln lncdfbvn lncdfbvn2 lncdfmvn lncdfn lncdfn2 lncdfnc lnfact lngammacplx lnpdfmvn lnpdfmvt lnpdfn lnpdft loadd loadstruct loadwind loess loessmt loessmtControlCreate log loglog logx logy lower lowmat lowmat1 ltrisol lu lusol machEpsilon make makevars makewind margin matalloc matinit mattoarray maxbytes maxc maxindc maxv maxvec mbesselei mbesselei0 mbesselei1 mbesseli mbesseli0 mbesseli1 meanc median mergeby mergevar minc minindc minv miss missex missrv moment momentd movingave movingaveExpwgt movingaveWgt nextindex nextn nextnevn nextwind ntos null null1 numCombinations ols olsmt olsmtControlCreate olsqr olsqr2 olsqrmt ones optn optnevn orth outtyp pacf packedToSp packr parse pause pdfCauchy pdfChi pdfExp pdfGenPareto pdfHyperGeo pdfLaplace pdfLogistic pdfn pdfPoisson pdfRayleigh pdfWeibull pi pinv pinvmt plotAddArrow plotAddBar plotAddBox plotAddHist plotAddHistF plotAddHistP plotAddPolar plotAddScatter plotAddShape plotAddTextbox plotAddTS plotAddXY plotArea plotBar plotBox plotClearLayout plotContour plotCustomLayout plotGetDefaults plotHist plotHistF plotHistP plotLayout plotLogLog plotLogX plotLogY plotOpenWindow plotPolar plotSave plotScatter plotSetAxesPen plotSetBar plotSetBarFill plotSetBarStacked plotSetBkdColor plotSetFill plotSetGrid plotSetLegend plotSetLineColor plotSetLineStyle plotSetLineSymbol plotSetLineThickness plotSetNewWindow plotSetTitle plotSetWhichYAxis plotSetXAxisShow plotSetXLabel plotSetXRange plotSetXTicInterval plotSetXTicLabel plotSetYAxisShow plotSetYLabel plotSetYRange plotSetZAxisShow 
plotSetZLabel plotSurface plotTS plotXY polar polychar polyeval polygamma polyint polymake polymat polymroot polymult polyroot pqgwin previousindex princomp printfm printfmt prodc psi putarray putf putvals pvCreate pvGetIndex pvGetParNames pvGetParVector pvLength pvList pvPack pvPacki pvPackm pvPackmi pvPacks pvPacksi pvPacksm pvPacksmi pvPutParVector pvTest pvUnpack QNewton QNewtonmt QNewtonmtControlCreate QNewtonmtOutCreate QNewtonSet QProg QProgmt QProgmtInCreate qqr qqre qqrep qr qre qrep qrsol qrtsol qtyr qtyre qtyrep quantile quantiled qyr qyre qyrep qz rank rankindx readr real reclassify reclassifyCuts recode recserar recsercp recserrc rerun rescale reshape rets rev rfft rffti rfftip rfftn rfftnp rfftp rndBernoulli rndBeta rndBinomial rndCauchy rndChiSquare rndCon rndCreateState rndExp rndGamma rndGeo rndGumbel rndHyperGeo rndi rndKMbeta rndKMgam rndKMi rndKMn rndKMnb rndKMp rndKMu rndKMvm rndLaplace rndLCbeta rndLCgam rndLCi rndLCn rndLCnb rndLCp rndLCu rndLCvm rndLogNorm rndMTu rndMVn rndMVt rndn rndnb rndNegBinomial rndp rndPoisson rndRayleigh rndStateSkip rndu rndvm rndWeibull rndWishart rotater round rows rowsf rref sampleData satostrC saved saveStruct savewind scale scale3d scalerr scalinfnanmiss scalmiss schtoc schur searchsourcepath seekr select selif seqa seqm setdif setdifsa setvars setvwrmode setwind shell shiftr sin singleindex sinh sleep solpd sortc sortcc sortd sorthc sorthcc sortind sortindc sortmc sortr sortrc spBiconjGradSol spChol spConjGradSol spCreate spDenseSubmat spDiagRvMat spEigv spEye spLDL spline spLU spNumNZE spOnes spreadSheetReadM spreadSheetReadSA spreadSheetWrite spScale spSubmat spToDense spTrTDense spTScalar spZeros sqpSolve sqpSolveMT sqpSolveMTControlCreate sqpSolveMTlagrangeCreate sqpSolveMToutCreate sqpSolveSet sqrt statements stdc stdsc stocv stof strcombine strindx strlen strput strrindx strsect strsplit strsplitPad strtodt strtof strtofcplx strtriml strtrimr strtrunc strtruncl strtruncpad strtruncr submat subscat substute subvec sumc sumr surface svd svd1 svd2 svdcusv svds svdusv sysstate tab tan tanh tempname time timedt timestr timeutc title tkf2eps tkf2ps tocart todaydt toeplitz token topolar trapchk trigamma trimr trunc type typecv typef union unionsa uniqindx uniqindxsa unique uniquesa upmat upmat1 upper utctodt utctodtv utrisol vals varCovMS varCovXS varget vargetl varmall varmares varput varputl vartypef vcm vcms vcx vcxs vec vech vecr vector vget view viewxyz vlist vnamecv volume vput vread vtypecv wait waitc walkindex where window writer xlabel xlsGetSheetCount xlsGetSheetSize xlsGetSheetTypes xlsMakeRange xlsReadM xlsReadSA xlsWrite xlsWriteM xlsWriteSA xpnd xtics xy xyz ylabel ytics zeros zeta zlabel ztics cdfEmpirical dot h5create h5open h5read h5readAttribute h5write h5writeAttribute ldl plotAddErrorBar plotAddSurface plotCDFEmpirical plotSetColormap plotSetContourLabels plotSetLegendFont plotSetTextInterpreter plotSetXTicCount plotSetYTicCount plotSetZLevels powerm strjoin sylvester strtrim",literal:"DB_AFTER_LAST_ROW DB_ALL_TABLES DB_BATCH_OPERATIONS DB_BEFORE_FIRST_ROW DB_BLOB DB_EVENT_NOTIFICATIONS DB_FINISH_QUERY DB_HIGH_PRECISION DB_LAST_INSERT_ID DB_LOW_PRECISION_DOUBLE DB_LOW_PRECISION_INT32 DB_LOW_PRECISION_INT64 DB_LOW_PRECISION_NUMBERS DB_MULTIPLE_RESULT_SETS DB_NAMED_PLACEHOLDERS DB_POSITIONAL_PLACEHOLDERS DB_PREPARED_QUERIES DB_QUERY_SIZE DB_SIMPLE_LOCKING DB_SYSTEM_TABLES DB_TABLES DB_TRANSACTIONS DB_UNICODE DB_VIEWS __STDIN __STDOUT __STDERR 
__FILE_DIR"},i=e.COMMENT("@","@"),o={className:"meta",begin:"#",end:"$",keywords:{keyword:"define definecs|10 undef ifdef ifndef iflight ifdllcall ifmac ifos2win ifunix else endif lineson linesoff srcfile srcline"},contains:[{begin:/\\\n/,relevance:0},{beginKeywords:"include",end:"$",keywords:{keyword:"include"},contains:[{className:"string",begin:'"',end:'"',illegal:"\\n"}]},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,i]},s={begin:/\bstruct\s+/,end:/\s/,keywords:"struct",contains:[{className:"type",begin:e.UNDERSCORE_IDENT_RE,relevance:0}]},l=[{className:"params",begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,endsWithParent:!0,relevance:0,contains:[{className:"literal",begin:/\.\.\./},e.C_NUMBER_MODE,e.C_BLOCK_COMMENT_MODE,i,s]}],c={className:"title",begin:e.UNDERSCORE_IDENT_RE,relevance:0},d=function(f,S,C){const h=e.inherit({className:"function",beginKeywords:f,end:S,excludeEnd:!0,contains:[].concat(l)},C||{});return h.contains.push(c),h.contains.push(e.C_NUMBER_MODE),h.contains.push(e.C_BLOCK_COMMENT_MODE),h.contains.push(i),h},_={className:"built_in",begin:"\\b("+n.built_in.split(" ").join("|")+")\\b"},p={className:"string",begin:'"',end:'"',contains:[e.BACKSLASH_ESCAPE],relevance:0},g={begin:e.UNDERSCORE_IDENT_RE+"\\s*\\(",returnBegin:!0,keywords:n,relevance:0,contains:[{beginKeywords:n.keyword},_,{className:"built_in",begin:e.UNDERSCORE_IDENT_RE,relevance:0}]},E={begin:/\(/,end:/\)/,relevance:0,keywords:{built_in:n.built_in,literal:n.literal},contains:[e.C_NUMBER_MODE,e.C_BLOCK_COMMENT_MODE,i,_,g,p,"self"]};return g.contains.push(E),{name:"GAUSS",aliases:["gss"],case_insensitive:!0,keywords:n,illegal:/(\{[%#]|[%#]\}| <- )/,contains:[e.C_NUMBER_MODE,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,i,p,o,{className:"keyword",begin:/\bexternal (matrix|string|array|sparse matrix|struct|proc|keyword|fn)/},d("proc keyword",";"),d("fn","="),{beginKeywords:"for threadfor",end:/;/,relevance:0,contains:[e.C_BLOCK_COMMENT_MODE,i,E]},{variants:[{begin:e.UNDERSCORE_IDENT_RE+"\\."+e.UNDERSCORE_IDENT_RE},{begin:e.UNDERSCORE_IDENT_RE+"\\s*="}],relevance:0},g,s]}}return jd=t,jd}var e_,aT;function aAe(){if(aT)return e_;aT=1;function t(e){const n="[A-Z_][A-Z0-9_.]*",i="%",o={$pattern:n,keyword:"IF DO WHILE ENDWHILE CALL ENDIF SUB ENDSUB GOTO REPEAT ENDREPEAT EQ LT GT NE GE LE OR XOR"},s={className:"meta",begin:"([O])([0-9]+)"},l=e.inherit(e.C_NUMBER_MODE,{begin:"([-+]?((\\.\\d+)|(\\d+)(\\.\\d*)?))|"+e.C_NUMBER_RE}),c=[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.COMMENT(/\(/,/\)/),l,e.inherit(e.APOS_STRING_MODE,{illegal:null}),e.inherit(e.QUOTE_STRING_MODE,{illegal:null}),{className:"name",begin:"([G])([0-9]+\\.?[0-9]?)"},{className:"name",begin:"([M])([0-9]+\\.?[0-9]?)"},{className:"attr",begin:"(VC|VS|#)",end:"(\\d+)"},{className:"attr",begin:"(VZOFX|VZOFY|VZOFZ)"},{className:"built_in",begin:"(ATAN|ABS|ACOS|ASIN|SIN|COS|EXP|FIX|FUP|ROUND|LN|TAN)(\\[)",contains:[l],end:"\\]"},{className:"symbol",variants:[{begin:"N",end:"\\d+",illegal:"\\W"}]}];return{name:"G-code (ISO 6983)",aliases:["nc"],case_insensitive:!0,keywords:o,contains:[{className:"meta",begin:i},s].concat(c)}}return e_=t,e_}var t_,oT;function oAe(){if(oT)return t_;oT=1;function t(e){return{name:"Gherkin",aliases:["feature"],keywords:"Feature Background Ability Business Need Scenario Scenarios Scenario Outline Scenario Template Examples Given And Then But 
When",contains:[{className:"symbol",begin:"\\*",relevance:0},{className:"meta",begin:"@[^@\\s]+"},{begin:"\\|",end:"\\|\\w*$",contains:[{className:"string",begin:"[^|]+"}]},{className:"variable",begin:"<",end:">"},e.HASH_COMMENT_MODE,{className:"string",begin:'"""',end:'"""'},e.QUOTE_STRING_MODE]}}return t_=t,t_}var n_,sT;function sAe(){if(sT)return n_;sT=1;function t(e){return{name:"GLSL",keywords:{keyword:"break continue discard do else for if return while switch case default attribute binding buffer ccw centroid centroid varying coherent column_major const cw depth_any depth_greater depth_less depth_unchanged early_fragment_tests equal_spacing flat fractional_even_spacing fractional_odd_spacing highp in index inout invariant invocations isolines layout line_strip lines lines_adjacency local_size_x local_size_y local_size_z location lowp max_vertices mediump noperspective offset origin_upper_left out packed patch pixel_center_integer point_mode points precise precision quads r11f_g11f_b10f r16 r16_snorm r16f r16i r16ui r32f r32i r32ui r8 r8_snorm r8i r8ui readonly restrict rg16 rg16_snorm rg16f rg16i rg16ui rg32f rg32i rg32ui rg8 rg8_snorm rg8i rg8ui rgb10_a2 rgb10_a2ui rgba16 rgba16_snorm rgba16f rgba16i rgba16ui rgba32f rgba32i rgba32ui rgba8 rgba8_snorm rgba8i rgba8ui row_major sample shared smooth std140 std430 stream triangle_strip triangles triangles_adjacency uniform varying vertices volatile writeonly",type:"atomic_uint bool bvec2 bvec3 bvec4 dmat2 dmat2x2 dmat2x3 dmat2x4 dmat3 dmat3x2 dmat3x3 dmat3x4 dmat4 dmat4x2 dmat4x3 dmat4x4 double dvec2 dvec3 dvec4 float iimage1D iimage1DArray iimage2D iimage2DArray iimage2DMS iimage2DMSArray iimage2DRect iimage3D iimageBuffer iimageCube iimageCubeArray image1D image1DArray image2D image2DArray image2DMS image2DMSArray image2DRect image3D imageBuffer imageCube imageCubeArray int isampler1D isampler1DArray isampler2D isampler2DArray isampler2DMS isampler2DMSArray isampler2DRect isampler3D isamplerBuffer isamplerCube isamplerCubeArray ivec2 ivec3 ivec4 mat2 mat2x2 mat2x3 mat2x4 mat3 mat3x2 mat3x3 mat3x4 mat4 mat4x2 mat4x3 mat4x4 sampler1D sampler1DArray sampler1DArrayShadow sampler1DShadow sampler2D sampler2DArray sampler2DArrayShadow sampler2DMS sampler2DMSArray sampler2DRect sampler2DRectShadow sampler2DShadow sampler3D samplerBuffer samplerCube samplerCubeArray samplerCubeArrayShadow samplerCubeShadow image1D uimage1DArray uimage2D uimage2DArray uimage2DMS uimage2DMSArray uimage2DRect uimage3D uimageBuffer uimageCube uimageCubeArray uint usampler1D usampler1DArray usampler2D usampler2DArray usampler2DMS usampler2DMSArray usampler2DRect usampler3D samplerBuffer usamplerCube usamplerCubeArray uvec2 uvec3 uvec4 vec2 vec3 vec4 void",built_in:"gl_MaxAtomicCounterBindings gl_MaxAtomicCounterBufferSize gl_MaxClipDistances gl_MaxClipPlanes gl_MaxCombinedAtomicCounterBuffers gl_MaxCombinedAtomicCounters gl_MaxCombinedImageUniforms gl_MaxCombinedImageUnitsAndFragmentOutputs gl_MaxCombinedTextureImageUnits gl_MaxComputeAtomicCounterBuffers gl_MaxComputeAtomicCounters gl_MaxComputeImageUniforms gl_MaxComputeTextureImageUnits gl_MaxComputeUniformComponents gl_MaxComputeWorkGroupCount gl_MaxComputeWorkGroupSize gl_MaxDrawBuffers gl_MaxFragmentAtomicCounterBuffers gl_MaxFragmentAtomicCounters gl_MaxFragmentImageUniforms gl_MaxFragmentInputComponents gl_MaxFragmentInputVectors gl_MaxFragmentUniformComponents gl_MaxFragmentUniformVectors gl_MaxGeometryAtomicCounterBuffers gl_MaxGeometryAtomicCounters gl_MaxGeometryImageUniforms 
gl_MaxGeometryInputComponents gl_MaxGeometryOutputComponents gl_MaxGeometryOutputVertices gl_MaxGeometryTextureImageUnits gl_MaxGeometryTotalOutputComponents gl_MaxGeometryUniformComponents gl_MaxGeometryVaryingComponents gl_MaxImageSamples gl_MaxImageUnits gl_MaxLights gl_MaxPatchVertices gl_MaxProgramTexelOffset gl_MaxTessControlAtomicCounterBuffers gl_MaxTessControlAtomicCounters gl_MaxTessControlImageUniforms gl_MaxTessControlInputComponents gl_MaxTessControlOutputComponents gl_MaxTessControlTextureImageUnits gl_MaxTessControlTotalOutputComponents gl_MaxTessControlUniformComponents gl_MaxTessEvaluationAtomicCounterBuffers gl_MaxTessEvaluationAtomicCounters gl_MaxTessEvaluationImageUniforms gl_MaxTessEvaluationInputComponents gl_MaxTessEvaluationOutputComponents gl_MaxTessEvaluationTextureImageUnits gl_MaxTessEvaluationUniformComponents gl_MaxTessGenLevel gl_MaxTessPatchComponents gl_MaxTextureCoords gl_MaxTextureImageUnits gl_MaxTextureUnits gl_MaxVaryingComponents gl_MaxVaryingFloats gl_MaxVaryingVectors gl_MaxVertexAtomicCounterBuffers gl_MaxVertexAtomicCounters gl_MaxVertexAttribs gl_MaxVertexImageUniforms gl_MaxVertexOutputComponents gl_MaxVertexOutputVectors gl_MaxVertexTextureImageUnits gl_MaxVertexUniformComponents gl_MaxVertexUniformVectors gl_MaxViewports gl_MinProgramTexelOffset gl_BackColor gl_BackLightModelProduct gl_BackLightProduct gl_BackMaterial gl_BackSecondaryColor gl_ClipDistance gl_ClipPlane gl_ClipVertex gl_Color gl_DepthRange gl_EyePlaneQ gl_EyePlaneR gl_EyePlaneS gl_EyePlaneT gl_Fog gl_FogCoord gl_FogFragCoord gl_FragColor gl_FragCoord gl_FragData gl_FragDepth gl_FrontColor gl_FrontFacing gl_FrontLightModelProduct gl_FrontLightProduct gl_FrontMaterial gl_FrontSecondaryColor gl_GlobalInvocationID gl_InstanceID gl_InvocationID gl_Layer gl_LightModel gl_LightSource gl_LocalInvocationID gl_LocalInvocationIndex gl_ModelViewMatrix gl_ModelViewMatrixInverse gl_ModelViewMatrixInverseTranspose gl_ModelViewMatrixTranspose gl_ModelViewProjectionMatrix gl_ModelViewProjectionMatrixInverse gl_ModelViewProjectionMatrixInverseTranspose gl_ModelViewProjectionMatrixTranspose gl_MultiTexCoord0 gl_MultiTexCoord1 gl_MultiTexCoord2 gl_MultiTexCoord3 gl_MultiTexCoord4 gl_MultiTexCoord5 gl_MultiTexCoord6 gl_MultiTexCoord7 gl_Normal gl_NormalMatrix gl_NormalScale gl_NumSamples gl_NumWorkGroups gl_ObjectPlaneQ gl_ObjectPlaneR gl_ObjectPlaneS gl_ObjectPlaneT gl_PatchVerticesIn gl_Point gl_PointCoord gl_PointSize gl_Position gl_PrimitiveID gl_PrimitiveIDIn gl_ProjectionMatrix gl_ProjectionMatrixInverse gl_ProjectionMatrixInverseTranspose gl_ProjectionMatrixTranspose gl_SampleID gl_SampleMask gl_SampleMaskIn gl_SamplePosition gl_SecondaryColor gl_TessCoord gl_TessLevelInner gl_TessLevelOuter gl_TexCoord gl_TextureEnvColor gl_TextureMatrix gl_TextureMatrixInverse gl_TextureMatrixInverseTranspose gl_TextureMatrixTranspose gl_Vertex gl_VertexID gl_ViewportIndex gl_WorkGroupID gl_WorkGroupSize gl_in gl_out EmitStreamVertex EmitVertex EndPrimitive EndStreamPrimitive abs acos acosh all any asin asinh atan atanh atomicAdd atomicAnd atomicCompSwap atomicCounter atomicCounterDecrement atomicCounterIncrement atomicExchange atomicMax atomicMin atomicOr atomicXor barrier bitCount bitfieldExtract bitfieldInsert bitfieldReverse ceil clamp cos cosh cross dFdx dFdy degrees determinant distance dot equal exp exp2 faceforward findLSB findMSB floatBitsToInt floatBitsToUint floor fma fract frexp ftransform fwidth greaterThan greaterThanEqual groupMemoryBarrier imageAtomicAdd imageAtomicAnd imageAtomicCompSwap 
imageAtomicExchange imageAtomicMax imageAtomicMin imageAtomicOr imageAtomicXor imageLoad imageSize imageStore imulExtended intBitsToFloat interpolateAtCentroid interpolateAtOffset interpolateAtSample inverse inversesqrt isinf isnan ldexp length lessThan lessThanEqual log log2 matrixCompMult max memoryBarrier memoryBarrierAtomicCounter memoryBarrierBuffer memoryBarrierImage memoryBarrierShared min mix mod modf noise1 noise2 noise3 noise4 normalize not notEqual outerProduct packDouble2x32 packHalf2x16 packSnorm2x16 packSnorm4x8 packUnorm2x16 packUnorm4x8 pow radians reflect refract round roundEven shadow1D shadow1DLod shadow1DProj shadow1DProjLod shadow2D shadow2DLod shadow2DProj shadow2DProjLod sign sin sinh smoothstep sqrt step tan tanh texelFetch texelFetchOffset texture texture1D texture1DLod texture1DProj texture1DProjLod texture2D texture2DLod texture2DProj texture2DProjLod texture3D texture3DLod texture3DProj texture3DProjLod textureCube textureCubeLod textureGather textureGatherOffset textureGatherOffsets textureGrad textureGradOffset textureLod textureLodOffset textureOffset textureProj textureProjGrad textureProjGradOffset textureProjLod textureProjLodOffset textureProjOffset textureQueryLevels textureQueryLod textureSize transpose trunc uaddCarry uintBitsToFloat umulExtended unpackDouble2x32 unpackHalf2x16 unpackSnorm2x16 unpackSnorm4x8 unpackUnorm2x16 unpackUnorm4x8 usubBorrow",literal:"true false"},illegal:'"',contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.C_NUMBER_MODE,{className:"meta",begin:"#",end:"$"}]}}return n_=t,n_}var r_,lT;function lAe(){if(lT)return r_;lT=1;function t(e){return{name:"GML",case_insensitive:!1,keywords:{keyword:["#endregion","#macro","#region","and","begin","break","case","constructor","continue","default","delete","div","do","else","end","enum","exit","for","function","globalvar","if","mod","not","or","repeat","return","switch","then","until","var","while","with","xor"],built_in:["abs","achievement_available","achievement_event","achievement_get_challenges","achievement_get_info","achievement_get_pic","achievement_increment","achievement_load_friends","achievement_load_leaderboard","achievement_load_progress","achievement_login","achievement_login_status","achievement_logout","achievement_post","achievement_post_score","achievement_reset","achievement_send_challenge","achievement_show","achievement_show_achievements","achievement_show_challenge_notifications","achievement_show_leaderboards","action_inherited","action_kill_object","ads_disable","ads_enable","ads_engagement_active","ads_engagement_available","ads_engagement_launch","ads_event","ads_event_preload","ads_get_display_height","ads_get_display_width","ads_interstitial_available","ads_interstitial_display","ads_move","ads_set_reward_callback","ads_setup","alarm_get","alarm_set","analytics_event","analytics_event_ext","angle_difference","ansi_char","application_get_position","application_surface_draw_enable","application_surface_enable","application_surface_is_enabled","arccos","arcsin","arctan","arctan2","array_copy","array_create","array_delete","array_equals","array_height_2d","array_insert","array_length","array_length_1d","array_length_2d","array_pop","array_push","array_resize","array_sort","asset_get_index","asset_get_type","audio_channel_num","audio_create_buffer_sound","audio_create_play_queue","audio_create_stream","audio_create_sync_group","audio_debug","audio_destroy_stream","audio_destroy_sync_group","audio_emitter_create","audio_emitter_exists","audio_emitter_falloff","aud
io_emitter_free","audio_emitter_gain","audio_emitter_get_gain","audio_emitter_get_listener_mask","audio_emitter_get_pitch","audio_emitter_get_vx","audio_emitter_get_vy","audio_emitter_get_vz","audio_emitter_get_x","audio_emitter_get_y","audio_emitter_get_z","audio_emitter_pitch","audio_emitter_position","audio_emitter_set_listener_mask","audio_emitter_velocity","audio_exists","audio_falloff_set_model","audio_free_buffer_sound","audio_free_play_queue","audio_get_listener_count","audio_get_listener_info","audio_get_listener_mask","audio_get_master_gain","audio_get_name","audio_get_recorder_count","audio_get_recorder_info","audio_get_type","audio_group_is_loaded","audio_group_load","audio_group_load_progress","audio_group_name","audio_group_set_gain","audio_group_stop_all","audio_group_unload","audio_is_paused","audio_is_playing","audio_listener_get_data","audio_listener_orientation","audio_listener_position","audio_listener_set_orientation","audio_listener_set_position","audio_listener_set_velocity","audio_listener_velocity","audio_master_gain","audio_music_gain","audio_music_is_playing","audio_pause_all","audio_pause_music","audio_pause_sound","audio_pause_sync_group","audio_play_in_sync_group","audio_play_music","audio_play_sound","audio_play_sound_at","audio_play_sound_on","audio_queue_sound","audio_resume_all","audio_resume_music","audio_resume_sound","audio_resume_sync_group","audio_set_listener_mask","audio_set_master_gain","audio_sound_gain","audio_sound_get_gain","audio_sound_get_listener_mask","audio_sound_get_pitch","audio_sound_get_track_position","audio_sound_length","audio_sound_pitch","audio_sound_set_listener_mask","audio_sound_set_track_position","audio_start_recording","audio_start_sync_group","audio_stop_all","audio_stop_music","audio_stop_recording","audio_stop_sound","audio_stop_sync_group","audio_sync_group_debug","audio_sync_group_get_track_pos","audio_sync_group_is_playing","audio_system","background_get_height","background_get_width","base64_decode","base64_encode","browser_input_capture","buffer_async_group_begin","buffer_async_group_end","buffer_async_group_option","buffer_base64_decode","buffer_base64_decode_ext","buffer_base64_encode","buffer_copy","buffer_copy_from_vertex_buffer","buffer_create","buffer_create_from_vertex_buffer","buffer_create_from_vertex_buffer_ext","buffer_delete","buffer_exists","buffer_fill","buffer_get_address","buffer_get_alignment","buffer_get_size","buffer_get_surface","buffer_get_type","buffer_load","buffer_load_async","buffer_load_ext","buffer_load_partial","buffer_md5","buffer_peek","buffer_poke","buffer_read","buffer_resize","buffer_save","buffer_save_async","buffer_save_ext","buffer_seek","buffer_set_surface","buffer_sha1","buffer_sizeof","buffer_tell","buffer_write","camera_apply","camera_create","camera_create_view","camera_destroy","camera_get_active","camera_get_begin_script","camera_get_default","camera_get_end_script","camera_get_proj_mat","camera_get_update_script","camera_get_view_angle","camera_get_view_border_x","camera_get_view_border_y","camera_get_view_height","camera_get_view_mat","camera_get_view_speed_x","camera_get_view_speed_y","camera_get_view_target","camera_get_view_width","camera_get_view_x","camera_get_view_y","camera_set_begin_script","camera_set_default","camera_set_end_script","camera_set_proj_mat","camera_set_update_script","camera_set_view_angle","camera_set_view_border","camera_set_view_mat","camera_set_view_pos","camera_set_view_size","camera_set_view_speed","camera_set_view_target","ceil","choose","chr"
,"clamp","clickable_add","clickable_add_ext","clickable_change","clickable_change_ext","clickable_delete","clickable_exists","clickable_set_style","clipboard_get_text","clipboard_has_text","clipboard_set_text","cloud_file_save","cloud_string_save","cloud_synchronise","code_is_compiled","collision_circle","collision_circle_list","collision_ellipse","collision_ellipse_list","collision_line","collision_line_list","collision_point","collision_point_list","collision_rectangle","collision_rectangle_list","color_get_blue","color_get_green","color_get_hue","color_get_red","color_get_saturation","color_get_value","colour_get_blue","colour_get_green","colour_get_hue","colour_get_red","colour_get_saturation","colour_get_value","cos","darccos","darcsin","darctan","darctan2","date_compare_date","date_compare_datetime","date_compare_time","date_create_datetime","date_current_datetime","date_date_of","date_date_string","date_datetime_string","date_day_span","date_days_in_month","date_days_in_year","date_get_day","date_get_day_of_year","date_get_hour","date_get_hour_of_year","date_get_minute","date_get_minute_of_year","date_get_month","date_get_second","date_get_second_of_year","date_get_timezone","date_get_week","date_get_weekday","date_get_year","date_hour_span","date_inc_day","date_inc_hour","date_inc_minute","date_inc_month","date_inc_second","date_inc_week","date_inc_year","date_is_today","date_leap_year","date_minute_span","date_month_span","date_second_span","date_set_timezone","date_time_of","date_time_string","date_valid_datetime","date_week_span","date_year_span","dcos","debug_event","debug_get_callstack","degtorad","device_get_tilt_x","device_get_tilt_y","device_get_tilt_z","device_is_keypad_open","device_mouse_check_button","device_mouse_check_button_pressed","device_mouse_check_button_released","device_mouse_dbclick_enable","device_mouse_raw_x","device_mouse_raw_y","device_mouse_x","device_mouse_x_to_gui","device_mouse_y","device_mouse_y_to_gui","directory_create","directory_destroy","directory_exists","display_get_dpi_x","display_get_dpi_y","display_get_gui_height","display_get_gui_width","display_get_height","display_get_orientation","display_get_sleep_margin","display_get_timing_method","display_get_width","display_mouse_get_x","display_mouse_get_y","display_mouse_set","display_reset","display_set_gui_maximise","display_set_gui_maximize","display_set_gui_size","display_set_sleep_margin","display_set_timing_method","display_set_ui_visibility","distance_to_object","distance_to_point","dot_product","dot_product_3d","dot_product_3d_normalised","dot_product_3d_normalized","dot_product_normalised","dot_product_normalized","draw_arrow","draw_background","draw_background_ext","draw_background_part_ext","draw_background_tiled","draw_button","draw_circle","draw_circle_color","draw_circle_colour","draw_clear","draw_clear_alpha","draw_ellipse","draw_ellipse_color","draw_ellipse_colour","draw_enable_alphablend","draw_enable_drawevent","draw_enable_swf_aa","draw_flush","draw_get_alpha","draw_get_color","draw_get_colour","draw_get_lighting","draw_get_swf_aa_level","draw_getpixel","draw_getpixel_ext","draw_healthbar","draw_highscore","draw_light_define_ambient","draw_light_define_direction","draw_light_define_point","draw_light_enable","draw_light_get","draw_light_get_ambient","draw_line","draw_line_color","draw_line_colour","draw_line_width","draw_line_width_color","draw_line_width_colour","draw_path","draw_point","draw_point_color","draw_point_colour","draw_primitive_begin","draw_primitive_begin_texture"
,"draw_primitive_end","draw_rectangle","draw_rectangle_color","draw_rectangle_colour","draw_roundrect","draw_roundrect_color","draw_roundrect_color_ext","draw_roundrect_colour","draw_roundrect_colour_ext","draw_roundrect_ext","draw_self","draw_set_alpha","draw_set_alpha_test","draw_set_alpha_test_ref_value","draw_set_blend_mode","draw_set_blend_mode_ext","draw_set_circle_precision","draw_set_color","draw_set_color_write_enable","draw_set_colour","draw_set_font","draw_set_halign","draw_set_lighting","draw_set_swf_aa_level","draw_set_valign","draw_skeleton","draw_skeleton_collision","draw_skeleton_instance","draw_skeleton_time","draw_sprite","draw_sprite_ext","draw_sprite_general","draw_sprite_part","draw_sprite_part_ext","draw_sprite_pos","draw_sprite_stretched","draw_sprite_stretched_ext","draw_sprite_tiled","draw_sprite_tiled_ext","draw_surface","draw_surface_ext","draw_surface_general","draw_surface_part","draw_surface_part_ext","draw_surface_stretched","draw_surface_stretched_ext","draw_surface_tiled","draw_surface_tiled_ext","draw_text","draw_text_color","draw_text_colour","draw_text_ext","draw_text_ext_color","draw_text_ext_colour","draw_text_ext_transformed","draw_text_ext_transformed_color","draw_text_ext_transformed_colour","draw_text_transformed","draw_text_transformed_color","draw_text_transformed_colour","draw_texture_flush","draw_tile","draw_tilemap","draw_triangle","draw_triangle_color","draw_triangle_colour","draw_vertex","draw_vertex_color","draw_vertex_colour","draw_vertex_texture","draw_vertex_texture_color","draw_vertex_texture_colour","ds_exists","ds_grid_add","ds_grid_add_disk","ds_grid_add_grid_region","ds_grid_add_region","ds_grid_clear","ds_grid_copy","ds_grid_create","ds_grid_destroy","ds_grid_get","ds_grid_get_disk_max","ds_grid_get_disk_mean","ds_grid_get_disk_min","ds_grid_get_disk_sum","ds_grid_get_max","ds_grid_get_mean","ds_grid_get_min","ds_grid_get_sum","ds_grid_height","ds_grid_multiply","ds_grid_multiply_disk","ds_grid_multiply_grid_region","ds_grid_multiply_region","ds_grid_read","ds_grid_resize","ds_grid_set","ds_grid_set_disk","ds_grid_set_grid_region","ds_grid_set_region","ds_grid_shuffle","ds_grid_sort","ds_grid_value_disk_exists","ds_grid_value_disk_x","ds_grid_value_disk_y","ds_grid_value_exists","ds_grid_value_x","ds_grid_value_y","ds_grid_width","ds_grid_write","ds_list_add","ds_list_clear","ds_list_copy","ds_list_create","ds_list_delete","ds_list_destroy","ds_list_empty","ds_list_find_index","ds_list_find_value","ds_list_insert","ds_list_mark_as_list","ds_list_mark_as_map","ds_list_read","ds_list_replace","ds_list_set","ds_list_shuffle","ds_list_size","ds_list_sort","ds_list_write","ds_map_add","ds_map_add_list","ds_map_add_map","ds_map_clear","ds_map_copy","ds_map_create","ds_map_delete","ds_map_destroy","ds_map_empty","ds_map_exists","ds_map_find_first","ds_map_find_last","ds_map_find_next","ds_map_find_previous","ds_map_find_value","ds_map_read","ds_map_replace","ds_map_replace_list","ds_map_replace_map","ds_map_secure_load","ds_map_secure_load_buffer","ds_map_secure_save","ds_map_secure_save_buffer","ds_map_set","ds_map_size","ds_map_write","ds_priority_add","ds_priority_change_priority","ds_priority_clear","ds_priority_copy","ds_priority_create","ds_priority_delete_max","ds_priority_delete_min","ds_priority_delete_value","ds_priority_destroy","ds_priority_empty","ds_priority_find_max","ds_priority_find_min","ds_priority_find_priority","ds_priority_read","ds_priority_size","ds_priority_write","ds_queue_clear","ds_queue_copy","ds_queue_create",
"ds_queue_dequeue","ds_queue_destroy","ds_queue_empty","ds_queue_enqueue","ds_queue_head","ds_queue_read","ds_queue_size","ds_queue_tail","ds_queue_write","ds_set_precision","ds_stack_clear","ds_stack_copy","ds_stack_create","ds_stack_destroy","ds_stack_empty","ds_stack_pop","ds_stack_push","ds_stack_read","ds_stack_size","ds_stack_top","ds_stack_write","dsin","dtan","effect_clear","effect_create_above","effect_create_below","environment_get_variable","event_inherited","event_perform","event_perform_object","event_user","exp","external_call","external_define","external_free","facebook_accesstoken","facebook_check_permission","facebook_dialog","facebook_graph_request","facebook_init","facebook_launch_offerwall","facebook_login","facebook_logout","facebook_post_message","facebook_request_publish_permissions","facebook_request_read_permissions","facebook_send_invite","facebook_status","facebook_user_id","file_attributes","file_bin_close","file_bin_open","file_bin_position","file_bin_read_byte","file_bin_rewrite","file_bin_seek","file_bin_size","file_bin_write_byte","file_copy","file_delete","file_exists","file_find_close","file_find_first","file_find_next","file_rename","file_text_close","file_text_eof","file_text_eoln","file_text_open_append","file_text_open_from_string","file_text_open_read","file_text_open_write","file_text_read_real","file_text_read_string","file_text_readln","file_text_write_real","file_text_write_string","file_text_writeln","filename_change_ext","filename_dir","filename_drive","filename_ext","filename_name","filename_path","floor","font_add","font_add_enable_aa","font_add_get_enable_aa","font_add_sprite","font_add_sprite_ext","font_delete","font_exists","font_get_bold","font_get_first","font_get_fontname","font_get_italic","font_get_last","font_get_name","font_get_size","font_get_texture","font_get_uvs","font_replace","font_replace_sprite","font_replace_sprite_ext","font_set_cache_size","font_texture_page_size","frac","game_end","game_get_speed","game_load","game_load_buffer","game_restart","game_save","game_save_buffer","game_set_speed","gamepad_axis_count","gamepad_axis_value","gamepad_button_check","gamepad_button_check_pressed","gamepad_button_check_released","gamepad_button_count","gamepad_button_value","gamepad_get_axis_deadzone","gamepad_get_button_threshold","gamepad_get_description","gamepad_get_device_count","gamepad_is_connected","gamepad_is_supported","gamepad_set_axis_deadzone","gamepad_set_button_threshold","gamepad_set_color","gamepad_set_colour","gamepad_set_vibration","gesture_double_tap_distance","gesture_double_tap_time","gesture_drag_distance","gesture_drag_time","gesture_flick_speed","gesture_get_double_tap_distance","gesture_get_double_tap_time","gesture_get_drag_distance","gesture_get_drag_time","gesture_get_flick_speed","gesture_get_pinch_angle_away","gesture_get_pinch_angle_towards","gesture_get_pinch_distance","gesture_get_rotate_angle","gesture_get_rotate_time","gesture_get_tap_count","gesture_pinch_angle_away","gesture_pinch_angle_towards","gesture_pinch_distance","gesture_rotate_angle","gesture_rotate_time","gesture_tap_count","get_integer","get_integer_async","get_login_async","get_open_filename","get_open_filename_ext","get_save_filename","get_save_filename_ext","get_string","get_string_async","get_timer","gml_pragma","gml_release_mode","gpu_get_alphatestenable","gpu_get_alphatestfunc","gpu_get_alphatestref","gpu_get_blendenable","gpu_get_blendmode","gpu_get_blendmode_dest","gpu_get_blendmode_destalpha","gpu_get_blendmode_ext","gpu_get_blen
dmode_ext_sepalpha","gpu_get_blendmode_src","gpu_get_blendmode_srcalpha","gpu_get_colorwriteenable","gpu_get_colourwriteenable","gpu_get_cullmode","gpu_get_fog","gpu_get_lightingenable","gpu_get_state","gpu_get_tex_filter","gpu_get_tex_filter_ext","gpu_get_tex_max_aniso","gpu_get_tex_max_aniso_ext","gpu_get_tex_max_mip","gpu_get_tex_max_mip_ext","gpu_get_tex_min_mip","gpu_get_tex_min_mip_ext","gpu_get_tex_mip_bias","gpu_get_tex_mip_bias_ext","gpu_get_tex_mip_enable","gpu_get_tex_mip_enable_ext","gpu_get_tex_mip_filter","gpu_get_tex_mip_filter_ext","gpu_get_tex_repeat","gpu_get_tex_repeat_ext","gpu_get_texfilter","gpu_get_texfilter_ext","gpu_get_texrepeat","gpu_get_texrepeat_ext","gpu_get_zfunc","gpu_get_ztestenable","gpu_get_zwriteenable","gpu_pop_state","gpu_push_state","gpu_set_alphatestenable","gpu_set_alphatestfunc","gpu_set_alphatestref","gpu_set_blendenable","gpu_set_blendmode","gpu_set_blendmode_ext","gpu_set_blendmode_ext_sepalpha","gpu_set_colorwriteenable","gpu_set_colourwriteenable","gpu_set_cullmode","gpu_set_fog","gpu_set_lightingenable","gpu_set_state","gpu_set_tex_filter","gpu_set_tex_filter_ext","gpu_set_tex_max_aniso","gpu_set_tex_max_aniso_ext","gpu_set_tex_max_mip","gpu_set_tex_max_mip_ext","gpu_set_tex_min_mip","gpu_set_tex_min_mip_ext","gpu_set_tex_mip_bias","gpu_set_tex_mip_bias_ext","gpu_set_tex_mip_enable","gpu_set_tex_mip_enable_ext","gpu_set_tex_mip_filter","gpu_set_tex_mip_filter_ext","gpu_set_tex_repeat","gpu_set_tex_repeat_ext","gpu_set_texfilter","gpu_set_texfilter_ext","gpu_set_texrepeat","gpu_set_texrepeat_ext","gpu_set_zfunc","gpu_set_ztestenable","gpu_set_zwriteenable","highscore_add","highscore_clear","highscore_name","highscore_value","http_get","http_get_file","http_post_string","http_request","iap_acquire","iap_activate","iap_consume","iap_enumerate_products","iap_product_details","iap_purchase_details","iap_restore_all","iap_status","ini_close","ini_key_delete","ini_key_exists","ini_open","ini_open_from_string","ini_read_real","ini_read_string","ini_section_delete","ini_section_exists","ini_write_real","ini_write_string","instance_activate_all","instance_activate_layer","instance_activate_object","instance_activate_region","instance_change","instance_copy","instance_create","instance_create_depth","instance_create_layer","instance_deactivate_all","instance_deactivate_layer","instance_deactivate_object","instance_deactivate_region","instance_destroy","instance_exists","instance_find","instance_furthest","instance_id_get","instance_nearest","instance_number","instance_place","instance_place_list","instance_position","instance_position_list","int64","io_clear","irandom","irandom_range","is_array","is_bool","is_infinity","is_int32","is_int64","is_matrix","is_method","is_nan","is_numeric","is_ptr","is_real","is_string","is_struct","is_undefined","is_vec3","is_vec4","json_decode","json_encode","keyboard_check","keyboard_check_direct","keyboard_check_pressed","keyboard_check_released","keyboard_clear","keyboard_get_map","keyboard_get_numlock","keyboard_key_press","keyboard_key_release","keyboard_set_map","keyboard_set_numlock","keyboard_unset_map","keyboard_virtual_height","keyboard_virtual_hide","keyboard_virtual_show","keyboard_virtual_status","layer_add_instance","layer_background_alpha","layer_background_blend","layer_background_change","layer_background_create","layer_background_destroy","layer_background_exists","layer_background_get_alpha","layer_background_get_blend","layer_background_get_htiled","layer_background_get_id","layer_background_get_index",
"layer_background_get_speed","layer_background_get_sprite","layer_background_get_stretch","layer_background_get_visible","layer_background_get_vtiled","layer_background_get_xscale","layer_background_get_yscale","layer_background_htiled","layer_background_index","layer_background_speed","layer_background_sprite","layer_background_stretch","layer_background_visible","layer_background_vtiled","layer_background_xscale","layer_background_yscale","layer_create","layer_depth","layer_destroy","layer_destroy_instances","layer_element_move","layer_exists","layer_force_draw_depth","layer_get_all","layer_get_all_elements","layer_get_depth","layer_get_element_layer","layer_get_element_type","layer_get_forced_depth","layer_get_hspeed","layer_get_id","layer_get_id_at_depth","layer_get_name","layer_get_script_begin","layer_get_script_end","layer_get_shader","layer_get_target_room","layer_get_visible","layer_get_vspeed","layer_get_x","layer_get_y","layer_has_instance","layer_hspeed","layer_instance_get_instance","layer_is_draw_depth_forced","layer_reset_target_room","layer_script_begin","layer_script_end","layer_set_target_room","layer_set_visible","layer_shader","layer_sprite_alpha","layer_sprite_angle","layer_sprite_blend","layer_sprite_change","layer_sprite_create","layer_sprite_destroy","layer_sprite_exists","layer_sprite_get_alpha","layer_sprite_get_angle","layer_sprite_get_blend","layer_sprite_get_id","layer_sprite_get_index","layer_sprite_get_speed","layer_sprite_get_sprite","layer_sprite_get_x","layer_sprite_get_xscale","layer_sprite_get_y","layer_sprite_get_yscale","layer_sprite_index","layer_sprite_speed","layer_sprite_x","layer_sprite_xscale","layer_sprite_y","layer_sprite_yscale","layer_tile_alpha","layer_tile_blend","layer_tile_change","layer_tile_create","layer_tile_destroy","layer_tile_exists","layer_tile_get_alpha","layer_tile_get_blend","layer_tile_get_region","layer_tile_get_sprite","layer_tile_get_visible","layer_tile_get_x","layer_tile_get_xscale","layer_tile_get_y","layer_tile_get_yscale","layer_tile_region","layer_tile_visible","layer_tile_x","layer_tile_xscale","layer_tile_y","layer_tile_yscale","layer_tilemap_create","layer_tilemap_destroy","layer_tilemap_exists","layer_tilemap_get_id","layer_vspeed","layer_x","layer_y","lengthdir_x","lengthdir_y","lerp","ln","load_csv","log10","log2","logn","make_color_hsv","make_color_rgb","make_colour_hsv","make_colour_rgb","math_get_epsilon","math_set_epsilon","matrix_build","matrix_build_identity","matrix_build_lookat","matrix_build_projection_ortho","matrix_build_projection_perspective","matrix_build_projection_perspective_fov","matrix_get","matrix_multiply","matrix_set","matrix_stack_clear","matrix_stack_is_empty","matrix_stack_multiply","matrix_stack_pop","matrix_stack_push","matrix_stack_set","matrix_stack_top","matrix_transform_vertex","max","md5_file","md5_string_unicode","md5_string_utf8","mean","median","merge_color","merge_colour","min","motion_add","motion_set","mouse_check_button","mouse_check_button_pressed","mouse_check_button_released","mouse_clear","mouse_wheel_down","mouse_wheel_up","move_bounce_all","move_bounce_solid","move_contact_all","move_contact_solid","move_outside_all","move_outside_solid","move_random","move_snap","move_towards_point","move_wrap","mp_grid_add_cell","mp_grid_add_instances","mp_grid_add_rectangle","mp_grid_clear_all","mp_grid_clear_cell","mp_grid_clear_rectangle","mp_grid_create","mp_grid_destroy","mp_grid_draw","mp_grid_get_cell","mp_grid_path","mp_grid_to_ds_grid","mp_linear_path","mp_linear_path_object
","mp_linear_step","mp_linear_step_object","mp_potential_path","mp_potential_path_object","mp_potential_settings","mp_potential_step","mp_potential_step_object","network_connect","network_connect_raw","network_create_server","network_create_server_raw","network_create_socket","network_create_socket_ext","network_destroy","network_resolve","network_send_broadcast","network_send_packet","network_send_raw","network_send_udp","network_send_udp_raw","network_set_config","network_set_timeout","object_exists","object_get_depth","object_get_mask","object_get_name","object_get_parent","object_get_persistent","object_get_physics","object_get_solid","object_get_sprite","object_get_visible","object_is_ancestor","object_set_mask","object_set_persistent","object_set_solid","object_set_sprite","object_set_visible","ord","os_get_config","os_get_info","os_get_language","os_get_region","os_is_network_connected","os_is_paused","os_lock_orientation","os_powersave_enable","parameter_count","parameter_string","part_emitter_burst","part_emitter_clear","part_emitter_create","part_emitter_destroy","part_emitter_destroy_all","part_emitter_exists","part_emitter_region","part_emitter_stream","part_particles_clear","part_particles_count","part_particles_create","part_particles_create_color","part_particles_create_colour","part_system_automatic_draw","part_system_automatic_update","part_system_clear","part_system_create","part_system_create_layer","part_system_depth","part_system_destroy","part_system_draw_order","part_system_drawit","part_system_exists","part_system_get_layer","part_system_layer","part_system_position","part_system_update","part_type_alpha1","part_type_alpha2","part_type_alpha3","part_type_blend","part_type_clear","part_type_color1","part_type_color2","part_type_color3","part_type_color_hsv","part_type_color_mix","part_type_color_rgb","part_type_colour1","part_type_colour2","part_type_colour3","part_type_colour_hsv","part_type_colour_mix","part_type_colour_rgb","part_type_create","part_type_death","part_type_destroy","part_type_direction","part_type_exists","part_type_gravity","part_type_life","part_type_orientation","part_type_scale","part_type_shape","part_type_size","part_type_speed","part_type_sprite","part_type_step","path_add","path_add_point","path_append","path_assign","path_change_point","path_clear_points","path_delete","path_delete_point","path_duplicate","path_end","path_exists","path_flip","path_get_closed","path_get_kind","path_get_length","path_get_name","path_get_number","path_get_point_speed","path_get_point_x","path_get_point_y","path_get_precision","path_get_speed","path_get_time","path_get_x","path_get_y","path_insert_point","path_mirror","path_rescale","path_reverse","path_rotate","path_set_closed","path_set_kind","path_set_precision","path_shift","path_start","physics_apply_angular_impulse","physics_apply_force","physics_apply_impulse","physics_apply_local_force","physics_apply_local_impulse","physics_apply_torque","physics_draw_debug","physics_fixture_add_point","physics_fixture_bind","physics_fixture_bind_ext","physics_fixture_create","physics_fixture_delete","physics_fixture_set_angular_damping","physics_fixture_set_awake","physics_fixture_set_box_shape","physics_fixture_set_chain_shape","physics_fixture_set_circle_shape","physics_fixture_set_collision_group","physics_fixture_set_density","physics_fixture_set_edge_shape","physics_fixture_set_friction","physics_fixture_set_kinematic","physics_fixture_set_linear_damping","physics_fixture_set_polygon_shape","physics_fixture_set_re
stitution","physics_fixture_set_sensor","physics_get_density","physics_get_friction","physics_get_restitution","physics_joint_delete","physics_joint_distance_create","physics_joint_enable_motor","physics_joint_friction_create","physics_joint_gear_create","physics_joint_get_value","physics_joint_prismatic_create","physics_joint_pulley_create","physics_joint_revolute_create","physics_joint_rope_create","physics_joint_set_value","physics_joint_weld_create","physics_joint_wheel_create","physics_mass_properties","physics_particle_count","physics_particle_create","physics_particle_delete","physics_particle_delete_region_box","physics_particle_delete_region_circle","physics_particle_delete_region_poly","physics_particle_draw","physics_particle_draw_ext","physics_particle_get_damping","physics_particle_get_data","physics_particle_get_data_particle","physics_particle_get_density","physics_particle_get_gravity_scale","physics_particle_get_group_flags","physics_particle_get_max_count","physics_particle_get_radius","physics_particle_group_add_point","physics_particle_group_begin","physics_particle_group_box","physics_particle_group_circle","physics_particle_group_count","physics_particle_group_delete","physics_particle_group_end","physics_particle_group_get_ang_vel","physics_particle_group_get_angle","physics_particle_group_get_centre_x","physics_particle_group_get_centre_y","physics_particle_group_get_data","physics_particle_group_get_inertia","physics_particle_group_get_mass","physics_particle_group_get_vel_x","physics_particle_group_get_vel_y","physics_particle_group_get_x","physics_particle_group_get_y","physics_particle_group_join","physics_particle_group_polygon","physics_particle_set_category_flags","physics_particle_set_damping","physics_particle_set_density","physics_particle_set_flags","physics_particle_set_gravity_scale","physics_particle_set_group_flags","physics_particle_set_max_count","physics_particle_set_radius","physics_pause_enable","physics_remove_fixture","physics_set_density","physics_set_friction","physics_set_restitution","physics_test_overlap","physics_world_create","physics_world_draw_debug","physics_world_gravity","physics_world_update_iterations","physics_world_update_speed","place_empty","place_free","place_meeting","place_snapped","point_direction","point_distance","point_distance_3d","point_in_circle","point_in_rectangle","point_in_triangle","position_change","position_destroy","position_empty","position_meeting","power","ptr","push_cancel_local_notification","push_get_first_local_notification","push_get_next_local_notification","push_local_notification","radtodeg","random","random_get_seed","random_range","random_set_seed","randomise","randomize","real","rectangle_in_circle","rectangle_in_rectangle","rectangle_in_triangle","room_add","room_assign","room_duplicate","room_exists","room_get_camera","room_get_name","room_get_viewport","room_goto","room_goto_next","room_goto_previous","room_instance_add","room_instance_clear","room_next","room_previous","room_restart","room_set_background_color","room_set_background_colour","room_set_camera","room_set_height","room_set_persistent","room_set_view","room_set_view_enabled","room_set_viewport","room_set_width","round","screen_save","screen_save_part","script_execute","script_exists","script_get_name","sha1_file","sha1_string_unicode","sha1_string_utf8","shader_current","shader_enable_corner_id","shader_get_name","shader_get_sampler_index","shader_get_uniform","shader_is_compiled","shader_reset","shader_set","shader_set_uniform_f",
"shader_set_uniform_f_array","shader_set_uniform_i","shader_set_uniform_i_array","shader_set_uniform_matrix","shader_set_uniform_matrix_array","shaders_are_supported","shop_leave_rating","show_debug_message","show_debug_overlay","show_error","show_message","show_message_async","show_question","show_question_async","sign","sin","skeleton_animation_clear","skeleton_animation_get","skeleton_animation_get_duration","skeleton_animation_get_ext","skeleton_animation_get_frame","skeleton_animation_get_frames","skeleton_animation_list","skeleton_animation_mix","skeleton_animation_set","skeleton_animation_set_ext","skeleton_animation_set_frame","skeleton_attachment_create","skeleton_attachment_get","skeleton_attachment_set","skeleton_bone_data_get","skeleton_bone_data_set","skeleton_bone_state_get","skeleton_bone_state_set","skeleton_collision_draw_set","skeleton_get_bounds","skeleton_get_minmax","skeleton_get_num_bounds","skeleton_skin_get","skeleton_skin_list","skeleton_skin_set","skeleton_slot_data","sprite_add","sprite_add_from_surface","sprite_assign","sprite_collision_mask","sprite_create_from_surface","sprite_delete","sprite_duplicate","sprite_exists","sprite_flush","sprite_flush_multi","sprite_get_bbox_bottom","sprite_get_bbox_left","sprite_get_bbox_right","sprite_get_bbox_top","sprite_get_height","sprite_get_name","sprite_get_number","sprite_get_speed","sprite_get_speed_type","sprite_get_texture","sprite_get_tpe","sprite_get_uvs","sprite_get_width","sprite_get_xoffset","sprite_get_yoffset","sprite_merge","sprite_prefetch","sprite_prefetch_multi","sprite_replace","sprite_save","sprite_save_strip","sprite_set_alpha_from_sprite","sprite_set_cache_size","sprite_set_cache_size_ext","sprite_set_offset","sprite_set_speed","sqr","sqrt","steam_activate_overlay","steam_activate_overlay_browser","steam_activate_overlay_store","steam_activate_overlay_user","steam_available_languages","steam_clear_achievement","steam_create_leaderboard","steam_current_game_language","steam_download_friends_scores","steam_download_scores","steam_download_scores_around_user","steam_file_delete","steam_file_exists","steam_file_persisted","steam_file_read","steam_file_share","steam_file_size","steam_file_write","steam_file_write_file","steam_get_achievement","steam_get_app_id","steam_get_persona_name","steam_get_quota_free","steam_get_quota_total","steam_get_stat_avg_rate","steam_get_stat_float","steam_get_stat_int","steam_get_user_account_id","steam_get_user_persona_name","steam_get_user_steam_id","steam_initialised","steam_is_cloud_enabled_for_account","steam_is_cloud_enabled_for_app","steam_is_overlay_activated","steam_is_overlay_enabled","steam_is_screenshot_requested","steam_is_user_logged_on","steam_reset_all_stats","steam_reset_all_stats_achievements","steam_send_screenshot","steam_set_achievement","steam_set_stat_avg_rate","steam_set_stat_float","steam_set_stat_int","steam_stats_ready","steam_ugc_create_item","steam_ugc_create_query_all","steam_ugc_create_query_all_ex","steam_ugc_create_query_user","steam_ugc_create_query_user_ex","steam_ugc_download","steam_ugc_get_item_install_info","steam_ugc_get_item_update_info","steam_ugc_get_item_update_progress","steam_ugc_get_subscribed_items","steam_ugc_num_subscribed_items","steam_ugc_query_add_excluded_tag","steam_ugc_query_add_required_tag","steam_ugc_query_set_allow_cached_response","steam_ugc_query_set_cloud_filename_filter","steam_ugc_query_set_match_any_tag","steam_ugc_query_set_ranked_by_trend_days","steam_ugc_query_set_return_long_description","steam_ugc_query_set_
return_total_only","steam_ugc_query_set_search_text","steam_ugc_request_item_details","steam_ugc_send_query","steam_ugc_set_item_content","steam_ugc_set_item_description","steam_ugc_set_item_preview","steam_ugc_set_item_tags","steam_ugc_set_item_title","steam_ugc_set_item_visibility","steam_ugc_start_item_update","steam_ugc_submit_item_update","steam_ugc_subscribe_item","steam_ugc_unsubscribe_item","steam_upload_score","steam_upload_score_buffer","steam_upload_score_buffer_ext","steam_upload_score_ext","steam_user_installed_dlc","steam_user_owns_dlc","string","string_byte_at","string_byte_length","string_char_at","string_copy","string_count","string_delete","string_digits","string_format","string_hash_to_newline","string_height","string_height_ext","string_insert","string_length","string_letters","string_lettersdigits","string_lower","string_ord_at","string_pos","string_repeat","string_replace","string_replace_all","string_set_byte_at","string_upper","string_width","string_width_ext","surface_copy","surface_copy_part","surface_create","surface_create_ext","surface_depth_disable","surface_exists","surface_free","surface_get_depth_disable","surface_get_height","surface_get_texture","surface_get_width","surface_getpixel","surface_getpixel_ext","surface_reset_target","surface_resize","surface_save","surface_save_part","surface_set_target","surface_set_target_ext","tan","texture_get_height","texture_get_texel_height","texture_get_texel_width","texture_get_uvs","texture_get_width","texture_global_scale","texture_set_stage","tile_get_empty","tile_get_flip","tile_get_index","tile_get_mirror","tile_get_rotate","tile_set_empty","tile_set_flip","tile_set_index","tile_set_mirror","tile_set_rotate","tilemap_clear","tilemap_get","tilemap_get_at_pixel","tilemap_get_cell_x_at_pixel","tilemap_get_cell_y_at_pixel","tilemap_get_frame","tilemap_get_global_mask","tilemap_get_height","tilemap_get_mask","tilemap_get_tile_height","tilemap_get_tile_width","tilemap_get_tileset","tilemap_get_width","tilemap_get_x","tilemap_get_y","tilemap_set","tilemap_set_at_pixel","tilemap_set_global_mask","tilemap_set_mask","tilemap_tileset","tilemap_x","tilemap_y","timeline_add","timeline_clear","timeline_delete","timeline_exists","timeline_get_name","timeline_max_moment","timeline_moment_add_script","timeline_moment_clear","timeline_size","typeof","url_get_domain","url_open","url_open_ext","url_open_full","variable_global_exists","variable_global_get","variable_global_set","variable_instance_exists","variable_instance_get","variable_instance_get_names","variable_instance_set","variable_struct_exists","variable_struct_get","variable_struct_get_names","variable_struct_names_count","variable_struct_remove","variable_struct_set","vertex_argb","vertex_begin","vertex_color","vertex_colour","vertex_create_buffer","vertex_create_buffer_ext","vertex_create_buffer_from_buffer","vertex_create_buffer_from_buffer_ext","vertex_delete_buffer","vertex_end","vertex_float1","vertex_float2","vertex_float3","vertex_float4","vertex_format_add_color","vertex_format_add_colour","vertex_format_add_custom","vertex_format_add_normal","vertex_format_add_position","vertex_format_add_position_3d","vertex_format_add_texcoord","vertex_format_add_textcoord","vertex_format_begin","vertex_format_delete","vertex_format_end","vertex_freeze","vertex_get_buffer_size","vertex_get_number","vertex_normal","vertex_position","vertex_position_3d","vertex_submit","vertex_texcoord","vertex_ubyte4","view_get_camera","view_get_hport","view_get_surface_id","view_get_visible","
view_get_wport","view_get_xport","view_get_yport","view_set_camera","view_set_hport","view_set_surface_id","view_set_visible","view_set_wport","view_set_xport","view_set_yport","virtual_key_add","virtual_key_delete","virtual_key_hide","virtual_key_show","win8_appbar_add_element","win8_appbar_enable","win8_appbar_remove_element","win8_device_touchscreen_available","win8_license_initialize_sandbox","win8_license_trial_version","win8_livetile_badge_clear","win8_livetile_badge_notification","win8_livetile_notification_begin","win8_livetile_notification_end","win8_livetile_notification_expiry","win8_livetile_notification_image_add","win8_livetile_notification_secondary_begin","win8_livetile_notification_tag","win8_livetile_notification_text_add","win8_livetile_queue_enable","win8_livetile_tile_clear","win8_livetile_tile_notification","win8_search_add_suggestions","win8_search_disable","win8_search_enable","win8_secondarytile_badge_notification","win8_secondarytile_delete","win8_secondarytile_pin","win8_settingscharm_add_entry","win8_settingscharm_add_html_entry","win8_settingscharm_add_xaml_entry","win8_settingscharm_get_xaml_property","win8_settingscharm_remove_entry","win8_settingscharm_set_xaml_property","win8_share_file","win8_share_image","win8_share_screenshot","win8_share_text","win8_share_url","window_center","window_device","window_get_caption","window_get_color","window_get_colour","window_get_cursor","window_get_fullscreen","window_get_height","window_get_visible_rects","window_get_width","window_get_x","window_get_y","window_handle","window_has_focus","window_mouse_get_x","window_mouse_get_y","window_mouse_set","window_set_caption","window_set_color","window_set_colour","window_set_cursor","window_set_fullscreen","window_set_max_height","window_set_max_width","window_set_min_height","window_set_min_width","window_set_position","window_set_rectangle","window_set_size","window_view_mouse_get_x","window_view_mouse_get_y","window_views_mouse_get_x","window_views_mouse_get_y","winphone_license_trial_version","winphone_tile_back_content","winphone_tile_back_content_wide","winphone_tile_back_image","winphone_tile_back_image_wide","winphone_tile_back_title","winphone_tile_background_color","winphone_tile_background_colour","winphone_tile_count","winphone_tile_cycle_images","winphone_tile_front_image","winphone_tile_front_image_small","winphone_tile_front_image_wide","winphone_tile_icon_image","winphone_tile_small_background_image","winphone_tile_small_icon_image","winphone_tile_title","winphone_tile_wide_content","zip_unzip"],literal:["all","false","noone","pointer_invalid","pointer_null","true","undefined"],symbol:["ANSI_CHARSET","ARABIC_CHARSET","BALTIC_CHARSET","CHINESEBIG5_CHARSET","DEFAULT_CHARSET","EASTEUROPE_CHARSET","GB2312_CHARSET","GM_build_date","GM_runtime_version","GM_version","GREEK_CHARSET","HANGEUL_CHARSET","HEBREW_CHARSET","JOHAB_CHARSET","MAC_CHARSET","OEM_CHARSET","RUSSIAN_CHARSET","SHIFTJIS_CHARSET","SYMBOL_CHARSET","THAI_CHARSET","TURKISH_CHARSET","VIETNAMESE_CHARSET","achievement_achievement_info","achievement_filter_all_players","achievement_filter_favorites_only","achievement_filter_friends_only","achievement_friends_info","achievement_leaderboard_info","achievement_our_info","achievement_pic_loaded","achievement_show_achievement","achievement_show_bank","achievement_show_friend_picker","achievement_show_leaderboard","achievement_show_profile","achievement_show_purchase_prompt","achievement_show_ui","achievement_type_achievement_challenge","achievement_type_score_chal
lenge","asset_font","asset_object","asset_path","asset_room","asset_script","asset_shader","asset_sound","asset_sprite","asset_tiles","asset_timeline","asset_unknown","audio_3d","audio_falloff_exponent_distance","audio_falloff_exponent_distance_clamped","audio_falloff_inverse_distance","audio_falloff_inverse_distance_clamped","audio_falloff_linear_distance","audio_falloff_linear_distance_clamped","audio_falloff_none","audio_mono","audio_new_system","audio_old_system","audio_stereo","bm_add","bm_complex","bm_dest_alpha","bm_dest_color","bm_dest_colour","bm_inv_dest_alpha","bm_inv_dest_color","bm_inv_dest_colour","bm_inv_src_alpha","bm_inv_src_color","bm_inv_src_colour","bm_max","bm_normal","bm_one","bm_src_alpha","bm_src_alpha_sat","bm_src_color","bm_src_colour","bm_subtract","bm_zero","browser_chrome","browser_edge","browser_firefox","browser_ie","browser_ie_mobile","browser_not_a_browser","browser_opera","browser_safari","browser_safari_mobile","browser_tizen","browser_unknown","browser_windows_store","buffer_bool","buffer_f16","buffer_f32","buffer_f64","buffer_fast","buffer_fixed","buffer_generalerror","buffer_grow","buffer_invalidtype","buffer_network","buffer_outofbounds","buffer_outofspace","buffer_s16","buffer_s32","buffer_s8","buffer_seek_end","buffer_seek_relative","buffer_seek_start","buffer_string","buffer_surface_copy","buffer_text","buffer_u16","buffer_u32","buffer_u64","buffer_u8","buffer_vbuffer","buffer_wrap","button_type","c_aqua","c_black","c_blue","c_dkgray","c_fuchsia","c_gray","c_green","c_lime","c_ltgray","c_maroon","c_navy","c_olive","c_orange","c_purple","c_red","c_silver","c_teal","c_white","c_yellow","cmpfunc_always","cmpfunc_equal","cmpfunc_greater","cmpfunc_greaterequal","cmpfunc_less","cmpfunc_lessequal","cmpfunc_never","cmpfunc_notequal","cr_appstart","cr_arrow","cr_beam","cr_cross","cr_default","cr_drag","cr_handpoint","cr_hourglass","cr_none","cr_size_all","cr_size_nesw","cr_size_ns","cr_size_nwse","cr_size_we","cr_uparrow","cull_clockwise","cull_counterclockwise","cull_noculling","device_emulator","device_ios_ipad","device_ios_ipad_retina","device_ios_iphone","device_ios_iphone5","device_ios_iphone6","device_ios_iphone6plus","device_ios_iphone_retina","device_ios_unknown","device_tablet","display_landscape","display_landscape_flipped","display_portrait","display_portrait_flipped","dll_cdecl","dll_stdcall","ds_type_grid","ds_type_list","ds_type_map","ds_type_priority","ds_type_queue","ds_type_stack","ef_cloud","ef_ellipse","ef_explosion","ef_firework","ef_flare","ef_rain","ef_ring","ef_smoke","ef_smokeup","ef_snow","ef_spark","ef_star","ev_alarm","ev_animation_end","ev_boundary","ev_cleanup","ev_close_button","ev_collision","ev_create","ev_destroy","ev_draw","ev_draw_begin","ev_draw_end","ev_draw_post","ev_draw_pre","ev_end_of_path","ev_game_end","ev_game_start","ev_gesture","ev_gesture_double_tap","ev_gesture_drag_end","ev_gesture_drag_start","ev_gesture_dragging","ev_gesture_flick","ev_gesture_pinch_end","ev_gesture_pinch_in","ev_gesture_pinch_out","ev_gesture_pinch_start","ev_gesture_rotate_end","ev_gesture_rotate_start","ev_gesture_rotating","ev_gesture_tap","ev_global_gesture_double_tap","ev_global_gesture_drag_end","ev_global_gesture_drag_start","ev_global_gesture_dragging","ev_global_gesture_flick","ev_global_gesture_pinch_end","ev_global_gesture_pinch_in","ev_global_gesture_pinch_out","ev_global_gesture_pinch_start","ev_global_gesture_rotate_end","ev_global_gesture_rotate_start","ev_global_gesture_rotating","ev_global_gesture_tap","ev_global_left_butto
n","ev_global_left_press","ev_global_left_release","ev_global_middle_button","ev_global_middle_press","ev_global_middle_release","ev_global_right_button","ev_global_right_press","ev_global_right_release","ev_gui","ev_gui_begin","ev_gui_end","ev_joystick1_button1","ev_joystick1_button2","ev_joystick1_button3","ev_joystick1_button4","ev_joystick1_button5","ev_joystick1_button6","ev_joystick1_button7","ev_joystick1_button8","ev_joystick1_down","ev_joystick1_left","ev_joystick1_right","ev_joystick1_up","ev_joystick2_button1","ev_joystick2_button2","ev_joystick2_button3","ev_joystick2_button4","ev_joystick2_button5","ev_joystick2_button6","ev_joystick2_button7","ev_joystick2_button8","ev_joystick2_down","ev_joystick2_left","ev_joystick2_right","ev_joystick2_up","ev_keyboard","ev_keypress","ev_keyrelease","ev_left_button","ev_left_press","ev_left_release","ev_middle_button","ev_middle_press","ev_middle_release","ev_mouse","ev_mouse_enter","ev_mouse_leave","ev_mouse_wheel_down","ev_mouse_wheel_up","ev_no_button","ev_no_more_health","ev_no_more_lives","ev_other","ev_outside","ev_right_button","ev_right_press","ev_right_release","ev_room_end","ev_room_start","ev_step","ev_step_begin","ev_step_end","ev_step_normal","ev_trigger","ev_user0","ev_user1","ev_user2","ev_user3","ev_user4","ev_user5","ev_user6","ev_user7","ev_user8","ev_user9","ev_user10","ev_user11","ev_user12","ev_user13","ev_user14","ev_user15","fa_archive","fa_bottom","fa_center","fa_directory","fa_hidden","fa_left","fa_middle","fa_readonly","fa_right","fa_sysfile","fa_top","fa_volumeid","fb_login_default","fb_login_fallback_to_webview","fb_login_forcing_safari","fb_login_forcing_webview","fb_login_no_fallback_to_webview","fb_login_use_system_account","gamespeed_fps","gamespeed_microseconds","ge_lose","global","gp_axislh","gp_axislv","gp_axisrh","gp_axisrv","gp_face1","gp_face2","gp_face3","gp_face4","gp_padd","gp_padl","gp_padr","gp_padu","gp_select","gp_shoulderl","gp_shoulderlb","gp_shoulderr","gp_shoulderrb","gp_start","gp_stickl","gp_stickr","iap_available","iap_canceled","iap_ev_consume","iap_ev_product","iap_ev_purchase","iap_ev_restore","iap_ev_storeload","iap_failed","iap_purchased","iap_refunded","iap_status_available","iap_status_loading","iap_status_processing","iap_status_restoring","iap_status_unavailable","iap_status_uninitialised","iap_storeload_failed","iap_storeload_ok","iap_unavailable","input_type","kbv_autocapitalize_characters","kbv_autocapitalize_none","kbv_autocapitalize_sentences","kbv_autocapitalize_words","kbv_returnkey_continue","kbv_returnkey_default","kbv_returnkey_done","kbv_returnkey_emergency","kbv_returnkey_go","kbv_returnkey_google","kbv_returnkey_join","kbv_returnkey_next","kbv_returnkey_route","kbv_returnkey_search","kbv_returnkey_send","kbv_returnkey_yahoo","kbv_type_ascii","kbv_type_default","kbv_type_email","kbv_type_numbers","kbv_type_phone","kbv_type_phone_name","kbv_type_url","layerelementtype_background","layerelementtype_instance","layerelementtype_oldtilemap","layerelementtype_particlesystem","layerelementtype_sprite","layerelementtype_tile","layerelementtype_tilemap","layerelementtype_undefined","lb_disp_none","lb_disp_numeric","lb_disp_time_ms","lb_disp_time_sec","lb_sort_ascending","lb_sort_descending","lb_sort_none","leaderboard_type_number","leaderboard_type_time_mins_secs","lighttype_dir","lighttype_point","local","matrix_projection","matrix_view","matrix_world","mb_any","mb_left","mb_middle","mb_none","mb_right","mip_markedonly","mip_off","mip_on","network_config_connect_timeout","netw
ork_config_disable_reliable_udp","network_config_enable_reliable_udp","network_config_use_non_blocking_socket","network_socket_bluetooth","network_socket_tcp","network_socket_udp","network_type_connect","network_type_data","network_type_disconnect","network_type_non_blocking_connect","of_challen","of_challenge_tie","of_challenge_win","os_3ds","os_android","os_bb10","os_ios","os_linux","os_macosx","os_ps3","os_ps4","os_psvita","os_switch","os_symbian","os_tizen","os_tvos","os_unknown","os_uwp","os_wiiu","os_win32","os_win8native","os_windows","os_winphone","os_xbox360","os_xboxone","other","ov_achievements","ov_community","ov_friends","ov_gamegroup","ov_players","ov_settings","path_action_continue","path_action_restart","path_action_reverse","path_action_stop","phy_debug_render_aabb","phy_debug_render_collision_pairs","phy_debug_render_coms","phy_debug_render_core_shapes","phy_debug_render_joints","phy_debug_render_obb","phy_debug_render_shapes","phy_joint_anchor_1_x","phy_joint_anchor_1_y","phy_joint_anchor_2_x","phy_joint_anchor_2_y","phy_joint_angle","phy_joint_angle_limits","phy_joint_damping_ratio","phy_joint_frequency","phy_joint_length_1","phy_joint_length_2","phy_joint_lower_angle_limit","phy_joint_max_force","phy_joint_max_length","phy_joint_max_motor_force","phy_joint_max_motor_torque","phy_joint_max_torque","phy_joint_motor_force","phy_joint_motor_speed","phy_joint_motor_torque","phy_joint_reaction_force_x","phy_joint_reaction_force_y","phy_joint_reaction_torque","phy_joint_speed","phy_joint_translation","phy_joint_upper_angle_limit","phy_particle_data_flag_category","phy_particle_data_flag_color","phy_particle_data_flag_colour","phy_particle_data_flag_position","phy_particle_data_flag_typeflags","phy_particle_data_flag_velocity","phy_particle_flag_colormixing","phy_particle_flag_colourmixing","phy_particle_flag_elastic","phy_particle_flag_powder","phy_particle_flag_spring","phy_particle_flag_tensile","phy_particle_flag_viscous","phy_particle_flag_wall","phy_particle_flag_water","phy_particle_flag_zombie","phy_particle_group_flag_rigid","phy_particle_group_flag_solid","pi","pr_linelist","pr_linestrip","pr_pointlist","pr_trianglefan","pr_trianglelist","pr_trianglestrip","ps_distr_gaussian","ps_distr_invgaussian","ps_distr_linear","ps_shape_diamond","ps_shape_ellipse","ps_shape_line","ps_shape_rectangle","pt_shape_circle","pt_shape_cloud","pt_shape_disk","pt_shape_explosion","pt_shape_flare","pt_shape_line","pt_shape_pixel","pt_shape_ring","pt_shape_smoke","pt_shape_snow","pt_shape_spark","pt_shape_sphere","pt_shape_square","pt_shape_star","spritespeed_framespergameframe","spritespeed_framespersecond","text_type","tf_anisotropic","tf_linear","tf_point","tile_flip","tile_index_mask","tile_mirror","tile_rotate","timezone_local","timezone_utc","tm_countvsyncs","tm_sleep","ty_real","ty_string","ugc_filetype_community","ugc_filetype_microtrans","ugc_list_Favorited","ugc_list_Followed","ugc_list_Published","ugc_list_Subscribed","ugc_list_UsedOrPlayed","ugc_list_VotedDown","ugc_list_VotedOn","ugc_list_VotedUp","ugc_list_WillVoteLater","ugc_match_AllGuides","ugc_match_Artwork","ugc_match_Collections","ugc_match_ControllerBindings","ugc_match_IntegratedGuides","ugc_match_Items","ugc_match_Items_Mtx","ugc_match_Items_ReadyToUse","ugc_match_Screenshots","ugc_match_UsableInGame","ugc_match_Videos","ugc_match_WebGuides","ugc_query_AcceptedForGameRankedByAcceptanceDate","ugc_query_CreatedByFollowedUsersRankedByPublicationDate","ugc_query_CreatedByFriendsRankedByPublicationDate","ugc_query_Favorit
edByFriendsRankedByPublicationDate","ugc_query_NotYetRated","ugc_query_RankedByNumTimesReported","ugc_query_RankedByPublicationDate","ugc_query_RankedByTextSearch","ugc_query_RankedByTotalVotesAsc","ugc_query_RankedByTrend","ugc_query_RankedByVote","ugc_query_RankedByVotesUp","ugc_result_success","ugc_sortorder_CreationOrderAsc","ugc_sortorder_CreationOrderDesc","ugc_sortorder_ForModeration","ugc_sortorder_LastUpdatedDesc","ugc_sortorder_SubscriptionDateDesc","ugc_sortorder_TitleAsc","ugc_sortorder_VoteScoreDesc","ugc_visibility_friends_only","ugc_visibility_private","ugc_visibility_public","vertex_type_color","vertex_type_colour","vertex_type_float1","vertex_type_float2","vertex_type_float3","vertex_type_float4","vertex_type_ubyte4","vertex_usage_binormal","vertex_usage_blendindices","vertex_usage_blendweight","vertex_usage_color","vertex_usage_colour","vertex_usage_depth","vertex_usage_fog","vertex_usage_normal","vertex_usage_position","vertex_usage_psize","vertex_usage_sample","vertex_usage_tangent","vertex_usage_texcoord","vertex_usage_textcoord","vk_add","vk_alt","vk_anykey","vk_backspace","vk_control","vk_decimal","vk_delete","vk_divide","vk_down","vk_end","vk_enter","vk_escape","vk_f1","vk_f2","vk_f3","vk_f4","vk_f5","vk_f6","vk_f7","vk_f8","vk_f9","vk_f10","vk_f11","vk_f12","vk_home","vk_insert","vk_lalt","vk_lcontrol","vk_left","vk_lshift","vk_multiply","vk_nokey","vk_numpad0","vk_numpad1","vk_numpad2","vk_numpad3","vk_numpad4","vk_numpad5","vk_numpad6","vk_numpad7","vk_numpad8","vk_numpad9","vk_pagedown","vk_pageup","vk_pause","vk_printscreen","vk_ralt","vk_rcontrol","vk_return","vk_right","vk_rshift","vk_shift","vk_space","vk_subtract","vk_tab","vk_up"],"variable.language":["alarm","application_surface","argument","argument0","argument1","argument2","argument3","argument4","argument5","argument6","argument7","argument8","argument9","argument10","argument11","argument12","argument13","argument14","argument15","argument_count","argument_relative","async_load","background_color","background_colour","background_showcolor","background_showcolour","bbox_bottom","bbox_left","bbox_right","bbox_top","browser_height","browser_width","caption_health","caption_lives","caption_score","current_day","current_hour","current_minute","current_month","current_second","current_time","current_weekday","current_year","cursor_sprite","debug_mode","delta_time","depth","direction","display_aa","error_last","error_occurred","event_action","event_data","event_number","event_object","event_type","fps","fps_real","friction","game_display_name","game_id","game_project_name","game_save_id","gamemaker_pro","gamemaker_registered","gamemaker_version","gravity","gravity_direction","health","hspeed","iap_data","id|0","image_alpha","image_angle","image_blend","image_index","image_number","image_speed","image_xscale","image_yscale","instance_count","instance_id","keyboard_key","keyboard_lastchar","keyboard_lastkey","keyboard_string","layer","lives","mask_index","mouse_button","mouse_lastbutton","mouse_x","mouse_y","object_index","os_browser","os_device","os_type","os_version","path_endaction","path_index","path_orientation","path_position","path_positionprevious","path_scale","path_speed","persistent","phy_active","phy_angular_damping","phy_angular_velocity","phy_bullet","phy_col_normal_x","phy_col_normal_y","phy_collision_points","phy_collision_x","phy_collision_y","phy_com_x","phy_com_y","phy_dynamic","phy_fixed_rotation","phy_inertia","phy_kinematic","phy_linear_damping","phy_linear_velocity_x","phy_linear_velocit
y_y","phy_mass","phy_position_x","phy_position_xprevious","phy_position_y","phy_position_yprevious","phy_rotation","phy_sleeping","phy_speed","phy_speed_x","phy_speed_y","program_directory","room","room_caption","room_first","room_height","room_last","room_persistent","room_speed","room_width","score","self","show_health","show_lives","show_score","solid","speed","sprite_height","sprite_index","sprite_width","sprite_xoffset","sprite_yoffset","temp_directory","timeline_index","timeline_loop","timeline_position","timeline_running","timeline_speed","view_angle","view_camera","view_current","view_enabled","view_hborder","view_hport","view_hspeed","view_hview","view_object","view_surface_id","view_vborder","view_visible","view_vspeed","view_wport","view_wview","view_xport","view_xview","view_yport","view_yview","visible","vspeed","webgl_enabled","working_directory","xprevious","xstart","x|0","yprevious","ystart","y|0"]},contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,e.C_NUMBER_MODE]}}return r_=t,r_}var i_,cT;function cAe(){if(cT)return i_;cT=1;function t(e){const l={keyword:["break","case","chan","const","continue","default","defer","else","fallthrough","for","func","go","goto","if","import","interface","map","package","range","return","select","struct","switch","type","var"],type:["bool","byte","complex64","complex128","error","float32","float64","int8","int16","int32","int64","string","uint8","uint16","uint32","uint64","int","uint","uintptr","rune"],literal:["true","false","iota","nil"],built_in:["append","cap","close","complex","copy","imag","len","make","new","panic","print","println","real","recover","delete"]};return{name:"Go",aliases:["golang"],keywords:l,illegal:"",end:",\\s+",returnBegin:!0,endsWithParent:!0,contains:[{className:"attr",begin:":\\w+"},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{begin:"\\w+",relevance:0}]}]},{begin:"\\(\\s*",end:"\\s*\\)",excludeEnd:!0,contains:[{begin:"\\w+\\s*=",end:"\\s+",returnBegin:!0,endsWithParent:!0,contains:[{className:"attr",begin:"\\w+",relevance:0},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{begin:"\\w+",relevance:0}]}]}]},{begin:"^\\s*[=~]\\s*"},{begin:/#\{/,end:/\}/,subLanguage:"ruby",excludeBegin:!0,excludeEnd:!0}]}}return c_=t,c_}var u_,gT;function gAe(){if(gT)return u_;gT=1;function t(e){const n=e.regex,i={$pattern:/[\w.\/]+/,built_in:["action","bindattr","collection","component","concat","debugger","each","each-in","get","hash","if","in","input","link-to","loc","log","lookup","mut","outlet","partial","query-params","render","template","textarea","unbound","unless","view","with","yield"]},o={$pattern:/[\w.\/]+/,literal:["true","false","undefined","null"]},s=/""|"[^"]+"/,l=/''|'[^']+'/,c=/\[\]|\[[^\]]+\]/,d=/[^\s!"#%&'()*+,.\/;<=>@\[\\\]^`{|}~]+/,_=/(\.|\/)/,p=n.either(s,l,c,d),g=n.concat(n.optional(/\.|\.\/|\//),p,n.anyNumberOfTimes(n.concat(_,p))),E=n.concat("(",c,"|",d,")(?==)"),f={begin:g},S=e.inherit(f,{keywords:o}),C={begin:/\(/,end:/\)/},h={className:"attr",begin:E,relevance:0,starts:{begin:/=/,end:/=/,starts:{contains:[e.NUMBER_MODE,e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,S,C]}}},T={begin:/as\s+\|/,keywords:{keyword:"as"},end:/\|/,contains:[{begin:/\w+/}]},N={contains:[e.NUMBER_MODE,e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,T,h,S,C],returnEnd:!0},y=e.inherit(f,{className:"name",keywords:i,starts:e.inherit(N,{end:/\)/})});C.contains=[y];const 
x=e.inherit(f,{keywords:i,className:"name",starts:e.inherit(N,{end:/\}\}/})}),P=e.inherit(f,{keywords:i,className:"name"}),D=e.inherit(f,{className:"name",keywords:i,starts:e.inherit(N,{end:/\}\}/})});return{name:"Handlebars",aliases:["hbs","html.hbs","html.handlebars","htmlbars"],case_insensitive:!0,subLanguage:"xml",contains:[{begin:/\\\{\{/,skip:!0},{begin:/\\\\(?=\{\{)/,skip:!0},e.COMMENT(/\{\{!--/,/--\}\}/),e.COMMENT(/\{\{!/,/\}\}/),{className:"template-tag",begin:/\{\{\{\{(?!\/)/,end:/\}\}\}\}/,contains:[x],starts:{end:/\{\{\{\{\//,returnEnd:!0,subLanguage:"xml"}},{className:"template-tag",begin:/\{\{\{\{\//,end:/\}\}\}\}/,contains:[P]},{className:"template-tag",begin:/\{\{#/,end:/\}\}/,contains:[x]},{className:"template-tag",begin:/\{\{(?=else\}\})/,end:/\}\}/,keywords:"else"},{className:"template-tag",begin:/\{\{(?=else if)/,end:/\}\}/,keywords:"else if"},{className:"template-tag",begin:/\{\{\//,end:/\}\}/,contains:[P]},{className:"template-variable",begin:/\{\{\{/,end:/\}\}\}/,contains:[D]},{className:"template-variable",begin:/\{\{/,end:/\}\}/,contains:[D]}]}}return u_=t,u_}var d_,ET;function EAe(){if(ET)return d_;ET=1;function t(e){const n={variants:[e.COMMENT("--","$"),e.COMMENT(/\{-/,/-\}/,{contains:["self"]})]},i={className:"meta",begin:/\{-#/,end:/#-\}/},o={className:"meta",begin:"^#",end:"$"},s={className:"type",begin:"\\b[A-Z][\\w']*",relevance:0},l={begin:"\\(",end:"\\)",illegal:'"',contains:[i,o,{className:"type",begin:"\\b[A-Z][\\w]*(\\((\\.\\.|,|\\w+)\\))?"},e.inherit(e.TITLE_MODE,{begin:"[_a-z][\\w']*"}),n]},c={begin:/\{/,end:/\}/,contains:l.contains},d="([0-9]_*)+",_="([0-9a-fA-F]_*)+",p="([01]_*)+",g="([0-7]_*)+",E={className:"number",relevance:0,variants:[{match:`\\b(${d})(\\.(${d}))?([eE][+-]?(${d}))?\\b`},{match:`\\b0[xX]_*(${_})(\\.(${_}))?([pP][+-]?(${d}))?\\b`},{match:`\\b0[oO](${g})\\b`},{match:`\\b0[bB](${p})\\b`}]};return{name:"Haskell",aliases:["hs"],keywords:"let in if then else case of where do module import hiding qualified type data newtype deriving class instance as default infix infixl infixr foreign export ccall stdcall cplusplus jvm dotnet safe unsafe family forall mdo proc rec",contains:[{beginKeywords:"module",end:"where",keywords:"module where",contains:[l,n],illegal:"\\W\\.|;"},{begin:"\\bimport\\b",end:"$",keywords:"import qualified as hiding",contains:[l,n],illegal:"\\W\\.|;"},{className:"class",begin:"^(\\s*)?(class|instance)\\b",end:"where",keywords:"class family instance where",contains:[s,l,n]},{className:"class",begin:"\\b(data|(new)?type)\\b",end:"$",keywords:"data family type newtype deriving",contains:[i,s,l,c,n]},{beginKeywords:"default",end:"$",contains:[s,l,n]},{beginKeywords:"infix infixl infixr",end:"$",contains:[e.C_NUMBER_MODE,n]},{begin:"\\bforeign\\b",end:"$",keywords:"foreign import export ccall stdcall cplusplus jvm dotnet safe unsafe",contains:[s,e.QUOTE_STRING_MODE,n]},{className:"meta",begin:"#!\\/usr\\/bin\\/env runhaskell",end:"$"},i,o,{scope:"string",begin:/'(?=\\?.')/,end:/'/,contains:[{scope:"char.escape",match:/\\./}]},e.QUOTE_STRING_MODE,E,s,e.inherit(e.TITLE_MODE,{begin:"^[_a-z][\\w']*"}),n,{begin:"->|<-"}]}}return d_=t,d_}var __,fT;function fAe(){if(fT)return __;fT=1;function t(e){return{name:"Haxe",aliases:["hx"],keywords:{keyword:"break case cast catch continue default do dynamic else enum extern for function here if import in inline never new override package private get set public return static super switch this throw trace try typedef untyped using var while "+"Int Float String Bool Dynamic Void Array 
",built_in:"trace this",literal:"true false null _"},contains:[{className:"string",begin:"'",end:"'",contains:[e.BACKSLASH_ESCAPE,{className:"subst",begin:"\\$\\{",end:"\\}"},{className:"subst",begin:"\\$",end:/\W\}/}]},e.QUOTE_STRING_MODE,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.C_NUMBER_MODE,{className:"meta",begin:"@:",end:"$"},{className:"meta",begin:"#",end:"$",keywords:{keyword:"if else elseif end error"}},{className:"type",begin:":[ ]*",end:"[^A-Za-z0-9_ \\->]",excludeBegin:!0,excludeEnd:!0,relevance:0},{className:"type",begin:":[ ]*",end:"\\W",excludeBegin:!0,excludeEnd:!0},{className:"type",begin:"new *",end:"\\W",excludeBegin:!0,excludeEnd:!0},{className:"class",beginKeywords:"enum",end:"\\{",contains:[e.TITLE_MODE]},{className:"class",beginKeywords:"abstract",end:"[\\{$]",contains:[{className:"type",begin:"\\(",end:"\\)",excludeBegin:!0,excludeEnd:!0},{className:"type",begin:"from +",end:"\\W",excludeBegin:!0,excludeEnd:!0},{className:"type",begin:"to +",end:"\\W",excludeBegin:!0,excludeEnd:!0},e.TITLE_MODE],keywords:{keyword:"abstract from to"}},{className:"class",begin:"\\b(class|interface) +",end:"[\\{$]",excludeEnd:!0,keywords:"class interface",contains:[{className:"keyword",begin:"\\b(extends|implements) +",keywords:"extends implements",contains:[{className:"type",begin:e.IDENT_RE,relevance:0}]},e.TITLE_MODE]},{className:"function",beginKeywords:"function",end:"\\(",excludeEnd:!0,illegal:"\\S",contains:[e.TITLE_MODE]}],illegal:/<\//}}return __=t,__}var p_,ST;function SAe(){if(ST)return p_;ST=1;function t(e){return{name:"HSP",case_insensitive:!0,keywords:{$pattern:/[\w._]+/,keyword:"goto gosub return break repeat loop continue wait await dim sdim foreach dimtype dup dupptr end stop newmod delmod mref run exgoto on mcall assert logmes newlab resume yield onexit onerror onkey onclick oncmd exist delete mkdir chdir dirlist bload bsave bcopy memfile if else poke wpoke lpoke getstr chdpm memexpand memcpy memset notesel noteadd notedel noteload notesave randomize noteunsel noteget split strrep setease button chgdisp exec dialog mmload mmplay mmstop mci pset pget syscolor mes print title pos circle cls font sysfont objsize picload color palcolor palette redraw width gsel gcopy gzoom gmode bmpsave hsvcolor getkey listbox chkbox combox input mesbox buffer screen bgscr mouse objsel groll line clrobj boxf objprm objmode stick grect grotate gsquare gradf objimage objskip objenable celload celdiv celput newcom querycom delcom cnvstow comres axobj winobj sendmsg comevent comevarg sarrayconv callfunc cnvwtos comevdisp libptr system hspstat hspver stat cnt err strsize looplev sublev iparam wparam lparam refstr refdval int rnd strlen length length2 length3 length4 vartype gettime peek wpeek lpeek varptr varuse noteinfo instr abs limit getease str strmid strf getpath strtrim sin cos tan atan sqrt double absf expf logf limitf powf geteasef mousex mousey mousew hwnd hinstance hdc ginfo objinfo dirinfo sysinfo thismod __hspver__ __hsp30__ __date__ __time__ __line__ __file__ _debug __hspdef__ and or xor not screen_normal screen_palette screen_hide screen_fixedsize screen_tool screen_frame gmode_gdi gmode_mem gmode_rgb0 gmode_alpha gmode_rgb0alpha gmode_add gmode_sub gmode_pixela ginfo_mx ginfo_my ginfo_act ginfo_sel ginfo_wx1 ginfo_wy1 ginfo_wx2 ginfo_wy2 ginfo_vx ginfo_vy ginfo_sizex ginfo_sizey ginfo_winx ginfo_winy ginfo_mesx ginfo_mesy ginfo_r ginfo_g ginfo_b ginfo_paluse ginfo_dispx ginfo_dispy ginfo_cx ginfo_cy ginfo_intid ginfo_newid ginfo_sx ginfo_sy objinfo_mode objinfo_bmscr 
objinfo_hwnd notemax notesize dir_cur dir_exe dir_win dir_sys dir_cmdline dir_desktop dir_mydoc dir_tv font_normal font_bold font_italic font_underline font_strikeout font_antialias objmode_normal objmode_guifont objmode_usefont gsquare_grad msgothic msmincho do until while wend for next _break _continue switch case default swbreak swend ddim ldim alloc m_pi rad2deg deg2rad ease_linear ease_quad_in ease_quad_out ease_quad_inout ease_cubic_in ease_cubic_out ease_cubic_inout ease_quartic_in ease_quartic_out ease_quartic_inout ease_bounce_in ease_bounce_out ease_bounce_inout ease_shake_in ease_shake_out ease_shake_inout ease_loop"},contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,{className:"string",begin:/\{"/,end:/"\}/,contains:[e.BACKSLASH_ESCAPE]},e.COMMENT(";","$",{relevance:0}),{className:"meta",begin:"#",end:"$",keywords:{keyword:"addion cfunc cmd cmpopt comfunc const defcfunc deffunc define else endif enum epack func global if ifdef ifndef include modcfunc modfunc modinit modterm module pack packopt regcmd runtime undef usecom uselib"},contains:[e.inherit(e.QUOTE_STRING_MODE,{className:"string"}),e.NUMBER_MODE,e.C_NUMBER_MODE,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{className:"symbol",begin:"^\\*(\\w+|@)"},e.NUMBER_MODE,e.C_NUMBER_MODE]}}return p_=t,p_}var m_,bT;function bAe(){if(bT)return m_;bT=1;function t(e){const n=e.regex,i="HTTP/([32]|1\\.[01])",o=/[A-Za-z][A-Za-z0-9-]*/,s={className:"attribute",begin:n.concat("^",o,"(?=\\:\\s)"),starts:{contains:[{className:"punctuation",begin:/: /,relevance:0,starts:{end:"$",relevance:0}}]}},l=[s,{begin:"\\n\\n",starts:{subLanguage:[],endsWithParent:!0}}];return{name:"HTTP",aliases:["https"],illegal:/\S/,contains:[{begin:"^(?="+i+" \\d{3})",end:/$/,contains:[{className:"meta",begin:i},{className:"number",begin:"\\b\\d{3}\\b"}],starts:{end:/\b\B/,illegal:/\S/,contains:l}},{begin:"(?=^[A-Z]+ (.*?) "+i+"$)",end:/$/,contains:[{className:"string",begin:" ",end:" ",excludeBegin:!0,excludeEnd:!0},{className:"meta",begin:i},{className:"keyword",begin:"[A-Z]+"}],starts:{end:/\b\B/,illegal:/\S/,contains:l}},e.inherit(s,{relevance:0})]}}return m_=t,m_}var g_,hT;function hAe(){if(hT)return g_;hT=1;function t(e){const n="a-zA-Z_\\-!.?+*=<>&#'",i="["+n+"]["+n+"0-9/;:]*",o={$pattern:i,built_in:"!= % %= & &= * ** **= *= *map + += , --build-class-- --import-- -= . / // //= /= < << <<= <= = > >= >> >>= @ @= ^ ^= abs accumulate all and any ap-compose ap-dotimes ap-each ap-each-while ap-filter ap-first ap-if ap-last ap-map ap-map-when ap-pipe ap-reduce ap-reject apply as-> ascii assert assoc bin break butlast callable calling-module-name car case cdr chain chr coll? combinations compile compress cond cons cons? continue count curry cut cycle dec def default-method defclass defmacro defmacro-alias defmacro/g! defmain defmethod defmulti defn defn-alias defnc defnr defreader defseq del delattr delete-route dict-comp dir disassemble dispatch-reader-macro distinct divmod do doto drop drop-last drop-while empty? end-sequence eval eval-and-compile eval-when-compile even? every? except exec filter first flatten float? fn fnc fnr for for* format fraction genexpr gensym get getattr global globals group-by hasattr hash hex id identity if if* if-not if-python2 import in inc input instance? integer integer-char? integer? 
interleave interpose is is-coll is-cons is-empty is-even is-every is-float is-instance is-integer is-integer-char is-iterable is-iterator is-keyword is-neg is-none is-not is-numeric is-odd is-pos is-string is-symbol is-zero isinstance islice issubclass iter iterable? iterate iterator? keyword keyword? lambda last len let lif lif-not list* list-comp locals loop macro-error macroexpand macroexpand-1 macroexpand-all map max merge-with method-decorator min multi-decorator multicombinations name neg? next none? nonlocal not not-in not? nth numeric? oct odd? open or ord partition permutations pos? post-route postwalk pow prewalk print product profile/calls profile/cpu put-route quasiquote quote raise range read read-str recursive-replace reduce remove repeat repeatedly repr require rest round route route-with-methods rwm second seq set-comp setattr setv some sorted string string? sum switch symbol? take take-nth take-while tee try unless unquote unquote-splicing vars walk when while with with* with-decorator with-gensyms xi xor yield yield-from zero? zip zip-longest | |= ~"},s="[-+]?\\d+(\\.\\d+)?",l={begin:i,relevance:0},c={className:"number",begin:s,relevance:0},d=e.inherit(e.QUOTE_STRING_MODE,{illegal:null}),_=e.COMMENT(";","$",{relevance:0}),p={className:"literal",begin:/\b([Tt]rue|[Ff]alse|nil|None)\b/},g={begin:"[\\[\\{]",end:"[\\]\\}]",relevance:0},E={className:"comment",begin:"\\^"+i},f=e.COMMENT("\\^\\{","\\}"),S={className:"symbol",begin:"[:]{1,2}"+i},C={begin:"\\(",end:"\\)"},h={endsWithParent:!0,relevance:0},T={className:"name",relevance:0,keywords:o,begin:i,starts:h},N=[C,d,E,f,_,S,g,c,p,l];return C.contains=[e.COMMENT("comment",""),T,h],h.contains=N,g.contains=N,{name:"Hy",aliases:["hylang"],illegal:/\S/,contains:[e.SHEBANG(),C,d,E,f,_,S,g,c,p]}}return g_=t,g_}var E_,TT;function TAe(){if(TT)return E_;TT=1;function t(e){const n="\\[",i="\\]";return{name:"Inform 7",aliases:["i7"],case_insensitive:!0,keywords:{keyword:"thing room person man woman animal container supporter backdrop door scenery open closed locked inside gender is are say understand kind of rule"},contains:[{className:"string",begin:'"',end:'"',relevance:0,contains:[{className:"subst",begin:n,end:i}]},{className:"section",begin:/^(Volume|Book|Part|Chapter|Section|Table)\b/,end:"$"},{begin:/^(Check|Carry out|Report|Instead of|To|Rule|When|Before|After)\b/,end:":",contains:[{begin:"\\(This",end:"\\)"}]},{className:"comment",begin:n,end:i,contains:["self"]}]}}return E_=t,E_}var f_,vT;function vAe(){if(vT)return f_;vT=1;function t(e){const n=e.regex,i={className:"number",relevance:0,variants:[{begin:/([+-]+)?[\d]+_[\d_]+/},{begin:e.NUMBER_RE}]},o=e.COMMENT();o.variants=[{begin:/;/,end:/$/},{begin:/#/,end:/$/}];const s={className:"variable",variants:[{begin:/\$[\w\d"][\w\d_]*/},{begin:/\$\{(.*?)\}/}]},l={className:"literal",begin:/\bon|off|true|false|yes|no\b/},c={className:"string",contains:[e.BACKSLASH_ESCAPE],variants:[{begin:"'''",end:"'''",relevance:10},{begin:'"""',end:'"""',relevance:10},{begin:'"',end:'"'},{begin:"'",end:"'"}]},d={begin:/\[/,end:/\]/,contains:[o,l,s,c,i,"self"],relevance:0},_=/[A-Za-z0-9_-]+/,p=/"(\\"|[^"])*"/,g=/'[^']*'/,E=n.either(_,p,g),f=n.concat(E,"(\\s*\\.\\s*",E,")*",n.lookahead(/\s*=\s*[^#\s]/));return{name:"TOML, also INI",aliases:["toml"],case_insensitive:!0,illegal:/\S/,contains:[o,{className:"section",begin:/\[+/,end:/\]+/},{begin:f,className:"attr",starts:{end:/$/,contains:[o,d,l,s,c,i]}}]}}return f_=t,f_}var S_,CT;function CAe(){if(CT)return S_;CT=1;function t(e){const 
n=e.regex,i={className:"params",begin:"\\(",end:"\\)"},o=/(_[a-z_\d]+)?/,s=/([de][+-]?\d+)?/,l={className:"number",variants:[{begin:n.concat(/\b\d+/,/\.(\d*)/,s,o)},{begin:n.concat(/\b\d+/,s,o)},{begin:n.concat(/\.\d+/,s,o)}],relevance:0};return{name:"IRPF90",case_insensitive:!0,keywords:{literal:".False. .True.",keyword:"kind do while private call intrinsic where elsewhere type endtype endmodule endselect endinterface end enddo endif if forall endforall only contains default return stop then public subroutine|10 function program .and. .or. .not. .le. .eq. .ge. .gt. .lt. goto save else use module select case access blank direct exist file fmt form formatted iostat name named nextrec number opened rec recl sequential status unformatted unit continue format pause cycle exit c_null_char c_alert c_backspace c_form_feed flush wait decimal round iomsg synchronous nopass non_overridable pass protected volatile abstract extends import non_intrinsic value deferred generic final enumerator class associate bind enum c_int c_short c_long c_long_long c_signed_char c_size_t c_int8_t c_int16_t c_int32_t c_int64_t c_int_least8_t c_int_least16_t c_int_least32_t c_int_least64_t c_int_fast8_t c_int_fast16_t c_int_fast32_t c_int_fast64_t c_intmax_t C_intptr_t c_float c_double c_long_double c_float_complex c_double_complex c_long_double_complex c_bool c_char c_null_ptr c_null_funptr c_new_line c_carriage_return c_horizontal_tab c_vertical_tab iso_c_binding c_loc c_funloc c_associated c_f_pointer c_ptr c_funptr iso_fortran_env character_storage_size error_unit file_storage_size input_unit iostat_end iostat_eor numeric_storage_size output_unit c_f_procpointer ieee_arithmetic ieee_support_underflow_control ieee_get_underflow_mode ieee_set_underflow_mode newunit contiguous recursive pad position action delim readwrite eor advance nml interface procedure namelist include sequence elemental pure integer real character complex logical dimension allocatable|10 parameter external implicit|10 none double precision assign intent optional pointer target in out common equivalence data begin_provider &begin_provider end_provider begin_shell end_shell begin_template end_template subst assert touch soft_touch provide no_dep free irp_if irp_else irp_endif irp_write irp_read",built_in:"alog alog10 amax0 amax1 amin0 amin1 amod cabs ccos cexp clog csin csqrt dabs dacos dasin datan datan2 dcos dcosh ddim dexp dint dlog dlog10 dmax1 dmin1 dmod dnint dsign dsin dsinh dsqrt dtan dtanh float iabs idim idint idnint ifix isign max0 max1 min0 min1 sngl algama cdabs cdcos cdexp cdlog cdsin cdsqrt cqabs cqcos cqexp cqlog cqsin cqsqrt dcmplx dconjg derf derfc dfloat dgamma dimag dlgama iqint qabs qacos qasin qatan qatan2 qcmplx qconjg qcos qcosh qdim qerf qerfc qexp qgamma qimag qlgama qlog qlog10 qmax1 qmin1 qmod qnint qsign qsin qsinh qsqrt qtan qtanh abs acos aimag aint anint asin atan atan2 char cmplx conjg cos cosh exp ichar index int log log10 max min nint sign sin sinh sqrt tan tanh print write dim lge lgt lle llt mod nullify allocate deallocate adjustl adjustr all allocated any associated bit_size btest ceiling count cshift date_and_time digits dot_product eoshift epsilon exponent floor fraction huge iand ibclr ibits ibset ieor ior ishft ishftc lbound len_trim matmul maxexponent maxloc maxval merge minexponent minloc minval modulo mvbits nearest pack present product radix random_number random_seed range repeat reshape rrspacing scale scan selected_int_kind selected_real_kind set_exponent shape size spacing spread sum system_clock 
tiny transpose trim ubound unpack verify achar iachar transfer dble entry dprod cpu_time command_argument_count get_command get_command_argument get_environment_variable is_iostat_end ieee_arithmetic ieee_support_underflow_control ieee_get_underflow_mode ieee_set_underflow_mode is_iostat_eor move_alloc new_line selected_char_kind same_type_as extends_type_of acosh asinh atanh bessel_j0 bessel_j1 bessel_jn bessel_y0 bessel_y1 bessel_yn erf erfc erfc_scaled gamma log_gamma hypot norm2 atomic_define atomic_ref execute_command_line leadz trailz storage_size merge_bits bge bgt ble blt dshiftl dshiftr findloc iall iany iparity image_index lcobound ucobound maskl maskr num_images parity popcnt poppar shifta shiftl shiftr this_image IRP_ALIGN irp_here"},illegal:/\/\*/,contains:[e.inherit(e.APOS_STRING_MODE,{className:"string",relevance:0}),e.inherit(e.QUOTE_STRING_MODE,{className:"string",relevance:0}),{className:"function",beginKeywords:"subroutine function program",illegal:"[${=\\n]",contains:[e.UNDERSCORE_TITLE_MODE,i]},e.COMMENT("!","$",{relevance:0}),e.COMMENT("begin_doc","end_doc",{relevance:10}),l]}}return S_=t,S_}var b_,RT;function RAe(){if(RT)return b_;RT=1;function t(e){const n="[A-Za-zА-Яа-яёЁ_!][A-Za-zА-Яа-яёЁ_0-9]*",i="[A-Za-zА-Яа-яёЁ_][A-Za-zА-Яа-яёЁ_0-9]*",o="and и else иначе endexcept endfinally endforeach конецвсе endif конецесли endwhile конецпока except exitfor finally foreach все if если in в not не or или try while пока ",s="SYSRES_CONST_ACCES_RIGHT_TYPE_EDIT SYSRES_CONST_ACCES_RIGHT_TYPE_FULL SYSRES_CONST_ACCES_RIGHT_TYPE_VIEW SYSRES_CONST_ACCESS_MODE_REQUISITE_CODE SYSRES_CONST_ACCESS_NO_ACCESS_VIEW SYSRES_CONST_ACCESS_NO_ACCESS_VIEW_CODE SYSRES_CONST_ACCESS_RIGHTS_ADD_REQUISITE_CODE SYSRES_CONST_ACCESS_RIGHTS_ADD_REQUISITE_YES_CODE SYSRES_CONST_ACCESS_RIGHTS_CHANGE_REQUISITE_CODE SYSRES_CONST_ACCESS_RIGHTS_CHANGE_REQUISITE_YES_CODE SYSRES_CONST_ACCESS_RIGHTS_DELETE_REQUISITE_CODE SYSRES_CONST_ACCESS_RIGHTS_DELETE_REQUISITE_YES_CODE SYSRES_CONST_ACCESS_RIGHTS_EXECUTE_REQUISITE_CODE SYSRES_CONST_ACCESS_RIGHTS_EXECUTE_REQUISITE_YES_CODE SYSRES_CONST_ACCESS_RIGHTS_NO_ACCESS_REQUISITE_CODE SYSRES_CONST_ACCESS_RIGHTS_NO_ACCESS_REQUISITE_YES_CODE SYSRES_CONST_ACCESS_RIGHTS_RATIFY_REQUISITE_CODE SYSRES_CONST_ACCESS_RIGHTS_RATIFY_REQUISITE_YES_CODE SYSRES_CONST_ACCESS_RIGHTS_REQUISITE_CODE SYSRES_CONST_ACCESS_RIGHTS_VIEW SYSRES_CONST_ACCESS_RIGHTS_VIEW_CODE SYSRES_CONST_ACCESS_RIGHTS_VIEW_REQUISITE_CODE SYSRES_CONST_ACCESS_RIGHTS_VIEW_REQUISITE_YES_CODE SYSRES_CONST_ACCESS_TYPE_CHANGE SYSRES_CONST_ACCESS_TYPE_CHANGE_CODE SYSRES_CONST_ACCESS_TYPE_EXISTS SYSRES_CONST_ACCESS_TYPE_EXISTS_CODE SYSRES_CONST_ACCESS_TYPE_FULL SYSRES_CONST_ACCESS_TYPE_FULL_CODE SYSRES_CONST_ACCESS_TYPE_VIEW SYSRES_CONST_ACCESS_TYPE_VIEW_CODE SYSRES_CONST_ACTION_TYPE_ABORT SYSRES_CONST_ACTION_TYPE_ACCEPT SYSRES_CONST_ACTION_TYPE_ACCESS_RIGHTS SYSRES_CONST_ACTION_TYPE_ADD_ATTACHMENT SYSRES_CONST_ACTION_TYPE_CHANGE_CARD SYSRES_CONST_ACTION_TYPE_CHANGE_KIND SYSRES_CONST_ACTION_TYPE_CHANGE_STORAGE SYSRES_CONST_ACTION_TYPE_CONTINUE SYSRES_CONST_ACTION_TYPE_COPY SYSRES_CONST_ACTION_TYPE_CREATE SYSRES_CONST_ACTION_TYPE_CREATE_VERSION SYSRES_CONST_ACTION_TYPE_DELETE SYSRES_CONST_ACTION_TYPE_DELETE_ATTACHMENT SYSRES_CONST_ACTION_TYPE_DELETE_VERSION SYSRES_CONST_ACTION_TYPE_DISABLE_DELEGATE_ACCESS_RIGHTS SYSRES_CONST_ACTION_TYPE_ENABLE_DELEGATE_ACCESS_RIGHTS SYSRES_CONST_ACTION_TYPE_ENCRYPTION_BY_CERTIFICATE SYSRES_CONST_ACTION_TYPE_ENCRYPTION_BY_CERTIFICATE_AND_PASSWORD SYSRES_CONST_ACTION_TYPE_ENCRYPTION_BY_PASSWORD 
SYSRES_CONST_ACTION_TYPE_EXPORT_WITH_LOCK SYSRES_CONST_ACTION_TYPE_EXPORT_WITHOUT_LOCK SYSRES_CONST_ACTION_TYPE_IMPORT_WITH_UNLOCK SYSRES_CONST_ACTION_TYPE_IMPORT_WITHOUT_UNLOCK SYSRES_CONST_ACTION_TYPE_LIFE_CYCLE_STAGE SYSRES_CONST_ACTION_TYPE_LOCK SYSRES_CONST_ACTION_TYPE_LOCK_FOR_SERVER SYSRES_CONST_ACTION_TYPE_LOCK_MODIFY SYSRES_CONST_ACTION_TYPE_MARK_AS_READED SYSRES_CONST_ACTION_TYPE_MARK_AS_UNREADED SYSRES_CONST_ACTION_TYPE_MODIFY SYSRES_CONST_ACTION_TYPE_MODIFY_CARD SYSRES_CONST_ACTION_TYPE_MOVE_TO_ARCHIVE SYSRES_CONST_ACTION_TYPE_OFF_ENCRYPTION SYSRES_CONST_ACTION_TYPE_PASSWORD_CHANGE SYSRES_CONST_ACTION_TYPE_PERFORM SYSRES_CONST_ACTION_TYPE_RECOVER_FROM_LOCAL_COPY SYSRES_CONST_ACTION_TYPE_RESTART SYSRES_CONST_ACTION_TYPE_RESTORE_FROM_ARCHIVE SYSRES_CONST_ACTION_TYPE_REVISION SYSRES_CONST_ACTION_TYPE_SEND_BY_MAIL SYSRES_CONST_ACTION_TYPE_SIGN SYSRES_CONST_ACTION_TYPE_START SYSRES_CONST_ACTION_TYPE_UNLOCK SYSRES_CONST_ACTION_TYPE_UNLOCK_FROM_SERVER SYSRES_CONST_ACTION_TYPE_VERSION_STATE SYSRES_CONST_ACTION_TYPE_VERSION_VISIBILITY SYSRES_CONST_ACTION_TYPE_VIEW SYSRES_CONST_ACTION_TYPE_VIEW_SHADOW_COPY SYSRES_CONST_ACTION_TYPE_WORKFLOW_DESCRIPTION_MODIFY SYSRES_CONST_ACTION_TYPE_WRITE_HISTORY SYSRES_CONST_ACTIVE_VERSION_STATE_PICK_VALUE SYSRES_CONST_ADD_REFERENCE_MODE_NAME SYSRES_CONST_ADDITION_REQUISITE_CODE SYSRES_CONST_ADDITIONAL_PARAMS_REQUISITE_CODE SYSRES_CONST_ADITIONAL_JOB_END_DATE_REQUISITE_NAME SYSRES_CONST_ADITIONAL_JOB_READ_REQUISITE_NAME SYSRES_CONST_ADITIONAL_JOB_START_DATE_REQUISITE_NAME SYSRES_CONST_ADITIONAL_JOB_STATE_REQUISITE_NAME SYSRES_CONST_ADMINISTRATION_HISTORY_ADDING_USER_TO_GROUP_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_ADDING_USER_TO_GROUP_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_CREATION_COMP_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_CREATION_COMP_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_CREATION_GROUP_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_CREATION_GROUP_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_CREATION_USER_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_CREATION_USER_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_DATABASE_USER_CREATION SYSRES_CONST_ADMINISTRATION_HISTORY_DATABASE_USER_CREATION_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_DATABASE_USER_DELETION SYSRES_CONST_ADMINISTRATION_HISTORY_DATABASE_USER_DELETION_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_DELETION_COMP_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_DELETION_COMP_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_DELETION_GROUP_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_DELETION_GROUP_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_DELETION_USER_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_DELETION_USER_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_DELETION_USER_FROM_GROUP_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_DELETION_USER_FROM_GROUP_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_GRANTING_FILTERER_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_GRANTING_FILTERER_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_GRANTING_FILTERER_RESTRICTION_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_GRANTING_FILTERER_RESTRICTION_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_GRANTING_PRIVILEGE_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_GRANTING_PRIVILEGE_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_GRANTING_RIGHTS_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_GRANTING_RIGHTS_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_IS_MAIN_SERVER_CHANGED_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_IS_MAIN_SERVER_CHANGED_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_IS_PUBLIC_CHANGED_ACTION 
SYSRES_CONST_ADMINISTRATION_HISTORY_IS_PUBLIC_CHANGED_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_REMOVING_FILTERER_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_REMOVING_FILTERER_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_REMOVING_FILTERER_RESTRICTION_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_REMOVING_FILTERER_RESTRICTION_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_REMOVING_PRIVILEGE_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_REMOVING_PRIVILEGE_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_REMOVING_RIGHTS_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_REMOVING_RIGHTS_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_SERVER_LOGIN_CREATION SYSRES_CONST_ADMINISTRATION_HISTORY_SERVER_LOGIN_CREATION_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_SERVER_LOGIN_DELETION SYSRES_CONST_ADMINISTRATION_HISTORY_SERVER_LOGIN_DELETION_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_CATEGORY_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_CATEGORY_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_COMP_TITLE_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_COMP_TITLE_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_FULL_NAME_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_FULL_NAME_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_GROUP_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_GROUP_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_PARENT_GROUP_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_PARENT_GROUP_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_USER_AUTH_TYPE_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_USER_AUTH_TYPE_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_USER_LOGIN_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_USER_LOGIN_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_USER_STATUS_ACTION SYSRES_CONST_ADMINISTRATION_HISTORY_UPDATING_USER_STATUS_ACTION_CODE SYSRES_CONST_ADMINISTRATION_HISTORY_USER_PASSWORD_CHANGE SYSRES_CONST_ADMINISTRATION_HISTORY_USER_PASSWORD_CHANGE_ACTION SYSRES_CONST_ALL_ACCEPT_CONDITION_RUS SYSRES_CONST_ALL_USERS_GROUP SYSRES_CONST_ALL_USERS_GROUP_NAME SYSRES_CONST_ALL_USERS_SERVER_GROUP_NAME SYSRES_CONST_ALLOWED_ACCESS_TYPE_CODE SYSRES_CONST_ALLOWED_ACCESS_TYPE_NAME SYSRES_CONST_APP_VIEWER_TYPE_REQUISITE_CODE SYSRES_CONST_APPROVING_SIGNATURE_NAME SYSRES_CONST_APPROVING_SIGNATURE_REQUISITE_CODE SYSRES_CONST_ASSISTANT_SUBSTITUE_TYPE SYSRES_CONST_ASSISTANT_SUBSTITUE_TYPE_CODE SYSRES_CONST_ATTACH_TYPE_COMPONENT_TOKEN SYSRES_CONST_ATTACH_TYPE_DOC SYSRES_CONST_ATTACH_TYPE_EDOC SYSRES_CONST_ATTACH_TYPE_FOLDER SYSRES_CONST_ATTACH_TYPE_JOB SYSRES_CONST_ATTACH_TYPE_REFERENCE SYSRES_CONST_ATTACH_TYPE_TASK SYSRES_CONST_AUTH_ENCODED_PASSWORD SYSRES_CONST_AUTH_ENCODED_PASSWORD_CODE SYSRES_CONST_AUTH_NOVELL SYSRES_CONST_AUTH_PASSWORD SYSRES_CONST_AUTH_PASSWORD_CODE SYSRES_CONST_AUTH_WINDOWS SYSRES_CONST_AUTHENTICATING_SIGNATURE_NAME SYSRES_CONST_AUTHENTICATING_SIGNATURE_REQUISITE_CODE SYSRES_CONST_AUTO_ENUM_METHOD_FLAG SYSRES_CONST_AUTO_NUMERATION_CODE SYSRES_CONST_AUTO_STRONG_ENUM_METHOD_FLAG SYSRES_CONST_AUTOTEXT_NAME_REQUISITE_CODE SYSRES_CONST_AUTOTEXT_TEXT_REQUISITE_CODE SYSRES_CONST_AUTOTEXT_USAGE_ALL SYSRES_CONST_AUTOTEXT_USAGE_ALL_CODE SYSRES_CONST_AUTOTEXT_USAGE_SIGN SYSRES_CONST_AUTOTEXT_USAGE_SIGN_CODE SYSRES_CONST_AUTOTEXT_USAGE_WORK SYSRES_CONST_AUTOTEXT_USAGE_WORK_CODE SYSRES_CONST_AUTOTEXT_USE_ANYWHERE_CODE SYSRES_CONST_AUTOTEXT_USE_ON_SIGNING_CODE SYSRES_CONST_AUTOTEXT_USE_ON_WORK_CODE SYSRES_CONST_BEGIN_DATE_REQUISITE_CODE 
SYSRES_CONST_BLACK_LIFE_CYCLE_STAGE_FONT_COLOR SYSRES_CONST_BLUE_LIFE_CYCLE_STAGE_FONT_COLOR SYSRES_CONST_BTN_PART SYSRES_CONST_CALCULATED_ROLE_TYPE_CODE SYSRES_CONST_CALL_TYPE_VARIABLE_BUTTON_VALUE SYSRES_CONST_CALL_TYPE_VARIABLE_PROGRAM_VALUE SYSRES_CONST_CANCEL_MESSAGE_FUNCTION_RESULT SYSRES_CONST_CARD_PART SYSRES_CONST_CARD_REFERENCE_MODE_NAME SYSRES_CONST_CERTIFICATE_TYPE_REQUISITE_ENCRYPT_VALUE SYSRES_CONST_CERTIFICATE_TYPE_REQUISITE_SIGN_AND_ENCRYPT_VALUE SYSRES_CONST_CERTIFICATE_TYPE_REQUISITE_SIGN_VALUE SYSRES_CONST_CHECK_PARAM_VALUE_DATE_PARAM_TYPE SYSRES_CONST_CHECK_PARAM_VALUE_FLOAT_PARAM_TYPE SYSRES_CONST_CHECK_PARAM_VALUE_INTEGER_PARAM_TYPE SYSRES_CONST_CHECK_PARAM_VALUE_PICK_PARAM_TYPE SYSRES_CONST_CHECK_PARAM_VALUE_REEFRENCE_PARAM_TYPE SYSRES_CONST_CLOSED_RECORD_FLAG_VALUE_FEMININE SYSRES_CONST_CLOSED_RECORD_FLAG_VALUE_MASCULINE SYSRES_CONST_CODE_COMPONENT_TYPE_ADMIN SYSRES_CONST_CODE_COMPONENT_TYPE_DEVELOPER SYSRES_CONST_CODE_COMPONENT_TYPE_DOCS SYSRES_CONST_CODE_COMPONENT_TYPE_EDOC_CARDS SYSRES_CONST_CODE_COMPONENT_TYPE_EXTERNAL_EXECUTABLE SYSRES_CONST_CODE_COMPONENT_TYPE_OTHER SYSRES_CONST_CODE_COMPONENT_TYPE_REFERENCE SYSRES_CONST_CODE_COMPONENT_TYPE_REPORT SYSRES_CONST_CODE_COMPONENT_TYPE_SCRIPT SYSRES_CONST_CODE_COMPONENT_TYPE_URL SYSRES_CONST_CODE_REQUISITE_ACCESS SYSRES_CONST_CODE_REQUISITE_CODE SYSRES_CONST_CODE_REQUISITE_COMPONENT SYSRES_CONST_CODE_REQUISITE_DESCRIPTION SYSRES_CONST_CODE_REQUISITE_EXCLUDE_COMPONENT SYSRES_CONST_CODE_REQUISITE_RECORD SYSRES_CONST_COMMENT_REQ_CODE SYSRES_CONST_COMMON_SETTINGS_REQUISITE_CODE SYSRES_CONST_COMP_CODE_GRD SYSRES_CONST_COMPONENT_GROUP_TYPE_REQUISITE_CODE SYSRES_CONST_COMPONENT_TYPE_ADMIN_COMPONENTS SYSRES_CONST_COMPONENT_TYPE_DEVELOPER_COMPONENTS SYSRES_CONST_COMPONENT_TYPE_DOCS SYSRES_CONST_COMPONENT_TYPE_EDOC_CARDS SYSRES_CONST_COMPONENT_TYPE_EDOCS SYSRES_CONST_COMPONENT_TYPE_EXTERNAL_EXECUTABLE SYSRES_CONST_COMPONENT_TYPE_OTHER SYSRES_CONST_COMPONENT_TYPE_REFERENCE_TYPES SYSRES_CONST_COMPONENT_TYPE_REFERENCES SYSRES_CONST_COMPONENT_TYPE_REPORTS SYSRES_CONST_COMPONENT_TYPE_SCRIPTS SYSRES_CONST_COMPONENT_TYPE_URL SYSRES_CONST_COMPONENTS_REMOTE_SERVERS_VIEW_CODE SYSRES_CONST_CONDITION_BLOCK_DESCRIPTION SYSRES_CONST_CONST_FIRM_STATUS_COMMON SYSRES_CONST_CONST_FIRM_STATUS_INDIVIDUAL SYSRES_CONST_CONST_NEGATIVE_VALUE SYSRES_CONST_CONST_POSITIVE_VALUE SYSRES_CONST_CONST_SERVER_STATUS_DONT_REPLICATE SYSRES_CONST_CONST_SERVER_STATUS_REPLICATE SYSRES_CONST_CONTENTS_REQUISITE_CODE SYSRES_CONST_DATA_TYPE_BOOLEAN SYSRES_CONST_DATA_TYPE_DATE SYSRES_CONST_DATA_TYPE_FLOAT SYSRES_CONST_DATA_TYPE_INTEGER SYSRES_CONST_DATA_TYPE_PICK SYSRES_CONST_DATA_TYPE_REFERENCE SYSRES_CONST_DATA_TYPE_STRING SYSRES_CONST_DATA_TYPE_TEXT SYSRES_CONST_DATA_TYPE_VARIANT SYSRES_CONST_DATE_CLOSE_REQ_CODE SYSRES_CONST_DATE_FORMAT_DATE_ONLY_CHAR SYSRES_CONST_DATE_OPEN_REQ_CODE SYSRES_CONST_DATE_REQUISITE SYSRES_CONST_DATE_REQUISITE_CODE SYSRES_CONST_DATE_REQUISITE_NAME SYSRES_CONST_DATE_REQUISITE_TYPE SYSRES_CONST_DATE_TYPE_CHAR SYSRES_CONST_DATETIME_FORMAT_VALUE SYSRES_CONST_DEA_ACCESS_RIGHTS_ACTION_CODE SYSRES_CONST_DESCRIPTION_LOCALIZE_ID_REQUISITE_CODE SYSRES_CONST_DESCRIPTION_REQUISITE_CODE SYSRES_CONST_DET1_PART SYSRES_CONST_DET2_PART SYSRES_CONST_DET3_PART SYSRES_CONST_DET4_PART SYSRES_CONST_DET5_PART SYSRES_CONST_DET6_PART SYSRES_CONST_DETAIL_DATASET_KEY_REQUISITE_CODE SYSRES_CONST_DETAIL_PICK_REQUISITE_CODE SYSRES_CONST_DETAIL_REQ_CODE SYSRES_CONST_DO_NOT_USE_ACCESS_TYPE_CODE SYSRES_CONST_DO_NOT_USE_ACCESS_TYPE_NAME 
SYSRES_CONST_DO_NOT_USE_ON_VIEW_ACCESS_TYPE_CODE SYSRES_CONST_DO_NOT_USE_ON_VIEW_ACCESS_TYPE_NAME SYSRES_CONST_DOCUMENT_STORAGES_CODE SYSRES_CONST_DOCUMENT_TEMPLATES_TYPE_NAME SYSRES_CONST_DOUBLE_REQUISITE_CODE SYSRES_CONST_EDITOR_CLOSE_FILE_OBSERV_TYPE_CODE SYSRES_CONST_EDITOR_CLOSE_PROCESS_OBSERV_TYPE_CODE SYSRES_CONST_EDITOR_TYPE_REQUISITE_CODE SYSRES_CONST_EDITORS_APPLICATION_NAME_REQUISITE_CODE SYSRES_CONST_EDITORS_CREATE_SEVERAL_PROCESSES_REQUISITE_CODE SYSRES_CONST_EDITORS_EXTENSION_REQUISITE_CODE SYSRES_CONST_EDITORS_OBSERVER_BY_PROCESS_TYPE SYSRES_CONST_EDITORS_REFERENCE_CODE SYSRES_CONST_EDITORS_REPLACE_SPEC_CHARS_REQUISITE_CODE SYSRES_CONST_EDITORS_USE_PLUGINS_REQUISITE_CODE SYSRES_CONST_EDITORS_VIEW_DOCUMENT_OPENED_TO_EDIT_CODE SYSRES_CONST_EDOC_CARD_TYPE_REQUISITE_CODE SYSRES_CONST_EDOC_CARD_TYPES_LINK_REQUISITE_CODE SYSRES_CONST_EDOC_CERTIFICATE_AND_PASSWORD_ENCODE_CODE SYSRES_CONST_EDOC_CERTIFICATE_ENCODE_CODE SYSRES_CONST_EDOC_DATE_REQUISITE_CODE SYSRES_CONST_EDOC_KIND_REFERENCE_CODE SYSRES_CONST_EDOC_KINDS_BY_TEMPLATE_ACTION_CODE SYSRES_CONST_EDOC_MANAGE_ACCESS_CODE SYSRES_CONST_EDOC_NONE_ENCODE_CODE SYSRES_CONST_EDOC_NUMBER_REQUISITE_CODE SYSRES_CONST_EDOC_PASSWORD_ENCODE_CODE SYSRES_CONST_EDOC_READONLY_ACCESS_CODE SYSRES_CONST_EDOC_SHELL_LIFE_TYPE_VIEW_VALUE SYSRES_CONST_EDOC_SIZE_RESTRICTION_PRIORITY_REQUISITE_CODE SYSRES_CONST_EDOC_STORAGE_CHECK_ACCESS_RIGHTS_REQUISITE_CODE SYSRES_CONST_EDOC_STORAGE_COMPUTER_NAME_REQUISITE_CODE SYSRES_CONST_EDOC_STORAGE_DATABASE_NAME_REQUISITE_CODE SYSRES_CONST_EDOC_STORAGE_EDIT_IN_STORAGE_REQUISITE_CODE SYSRES_CONST_EDOC_STORAGE_LOCAL_PATH_REQUISITE_CODE SYSRES_CONST_EDOC_STORAGE_SHARED_SOURCE_NAME_REQUISITE_CODE SYSRES_CONST_EDOC_TEMPLATE_REQUISITE_CODE SYSRES_CONST_EDOC_TYPES_REFERENCE_CODE SYSRES_CONST_EDOC_VERSION_ACTIVE_STAGE_CODE SYSRES_CONST_EDOC_VERSION_DESIGN_STAGE_CODE SYSRES_CONST_EDOC_VERSION_OBSOLETE_STAGE_CODE SYSRES_CONST_EDOC_WRITE_ACCES_CODE SYSRES_CONST_EDOCUMENT_CARD_REQUISITES_REFERENCE_CODE_SELECTED_REQUISITE SYSRES_CONST_ENCODE_CERTIFICATE_TYPE_CODE SYSRES_CONST_END_DATE_REQUISITE_CODE SYSRES_CONST_ENUMERATION_TYPE_REQUISITE_CODE SYSRES_CONST_EXECUTE_ACCESS_RIGHTS_TYPE_CODE SYSRES_CONST_EXECUTIVE_FILE_STORAGE_TYPE SYSRES_CONST_EXIST_CONST SYSRES_CONST_EXIST_VALUE SYSRES_CONST_EXPORT_LOCK_TYPE_ASK SYSRES_CONST_EXPORT_LOCK_TYPE_WITH_LOCK SYSRES_CONST_EXPORT_LOCK_TYPE_WITHOUT_LOCK SYSRES_CONST_EXPORT_VERSION_TYPE_ASK SYSRES_CONST_EXPORT_VERSION_TYPE_LAST SYSRES_CONST_EXPORT_VERSION_TYPE_LAST_ACTIVE SYSRES_CONST_EXTENSION_REQUISITE_CODE SYSRES_CONST_FILTER_NAME_REQUISITE_CODE SYSRES_CONST_FILTER_REQUISITE_CODE SYSRES_CONST_FILTER_TYPE_COMMON_CODE SYSRES_CONST_FILTER_TYPE_COMMON_NAME SYSRES_CONST_FILTER_TYPE_USER_CODE SYSRES_CONST_FILTER_TYPE_USER_NAME SYSRES_CONST_FILTER_VALUE_REQUISITE_NAME SYSRES_CONST_FLOAT_NUMBER_FORMAT_CHAR SYSRES_CONST_FLOAT_REQUISITE_TYPE SYSRES_CONST_FOLDER_AUTHOR_VALUE SYSRES_CONST_FOLDER_KIND_ANY_OBJECTS SYSRES_CONST_FOLDER_KIND_COMPONENTS SYSRES_CONST_FOLDER_KIND_EDOCS SYSRES_CONST_FOLDER_KIND_JOBS SYSRES_CONST_FOLDER_KIND_TASKS SYSRES_CONST_FOLDER_TYPE_COMMON SYSRES_CONST_FOLDER_TYPE_COMPONENT SYSRES_CONST_FOLDER_TYPE_FAVORITES SYSRES_CONST_FOLDER_TYPE_INBOX SYSRES_CONST_FOLDER_TYPE_OUTBOX SYSRES_CONST_FOLDER_TYPE_QUICK_LAUNCH SYSRES_CONST_FOLDER_TYPE_SEARCH SYSRES_CONST_FOLDER_TYPE_SHORTCUTS SYSRES_CONST_FOLDER_TYPE_USER SYSRES_CONST_FROM_DICTIONARY_ENUM_METHOD_FLAG SYSRES_CONST_FULL_SUBSTITUTE_TYPE SYSRES_CONST_FULL_SUBSTITUTE_TYPE_CODE SYSRES_CONST_FUNCTION_CANCEL_RESULT 
SYSRES_CONST_FUNCTION_CATEGORY_SYSTEM SYSRES_CONST_FUNCTION_CATEGORY_USER SYSRES_CONST_FUNCTION_FAILURE_RESULT SYSRES_CONST_FUNCTION_SAVE_RESULT SYSRES_CONST_GENERATED_REQUISITE SYSRES_CONST_GREEN_LIFE_CYCLE_STAGE_FONT_COLOR SYSRES_CONST_GROUP_ACCOUNT_TYPE_VALUE_CODE SYSRES_CONST_GROUP_CATEGORY_NORMAL_CODE SYSRES_CONST_GROUP_CATEGORY_NORMAL_NAME SYSRES_CONST_GROUP_CATEGORY_SERVICE_CODE SYSRES_CONST_GROUP_CATEGORY_SERVICE_NAME SYSRES_CONST_GROUP_COMMON_CATEGORY_FIELD_VALUE SYSRES_CONST_GROUP_FULL_NAME_REQUISITE_CODE SYSRES_CONST_GROUP_NAME_REQUISITE_CODE SYSRES_CONST_GROUP_RIGHTS_T_REQUISITE_CODE SYSRES_CONST_GROUP_SERVER_CODES_REQUISITE_CODE SYSRES_CONST_GROUP_SERVER_NAME_REQUISITE_CODE SYSRES_CONST_GROUP_SERVICE_CATEGORY_FIELD_VALUE SYSRES_CONST_GROUP_USER_REQUISITE_CODE SYSRES_CONST_GROUPS_REFERENCE_CODE SYSRES_CONST_GROUPS_REQUISITE_CODE SYSRES_CONST_HIDDEN_MODE_NAME SYSRES_CONST_HIGH_LVL_REQUISITE_CODE SYSRES_CONST_HISTORY_ACTION_CREATE_CODE SYSRES_CONST_HISTORY_ACTION_DELETE_CODE SYSRES_CONST_HISTORY_ACTION_EDIT_CODE SYSRES_CONST_HOUR_CHAR SYSRES_CONST_ID_REQUISITE_CODE SYSRES_CONST_IDSPS_REQUISITE_CODE SYSRES_CONST_IMAGE_MODE_COLOR SYSRES_CONST_IMAGE_MODE_GREYSCALE SYSRES_CONST_IMAGE_MODE_MONOCHROME SYSRES_CONST_IMPORTANCE_HIGH SYSRES_CONST_IMPORTANCE_LOW SYSRES_CONST_IMPORTANCE_NORMAL SYSRES_CONST_IN_DESIGN_VERSION_STATE_PICK_VALUE SYSRES_CONST_INCOMING_WORK_RULE_TYPE_CODE SYSRES_CONST_INT_REQUISITE SYSRES_CONST_INT_REQUISITE_TYPE SYSRES_CONST_INTEGER_NUMBER_FORMAT_CHAR SYSRES_CONST_INTEGER_TYPE_CHAR SYSRES_CONST_IS_GENERATED_REQUISITE_NEGATIVE_VALUE SYSRES_CONST_IS_PUBLIC_ROLE_REQUISITE_CODE SYSRES_CONST_IS_REMOTE_USER_NEGATIVE_VALUE SYSRES_CONST_IS_REMOTE_USER_POSITIVE_VALUE SYSRES_CONST_IS_STORED_REQUISITE_NEGATIVE_VALUE SYSRES_CONST_IS_STORED_REQUISITE_STORED_VALUE SYSRES_CONST_ITALIC_LIFE_CYCLE_STAGE_DRAW_STYLE SYSRES_CONST_JOB_BLOCK_DESCRIPTION SYSRES_CONST_JOB_KIND_CONTROL_JOB SYSRES_CONST_JOB_KIND_JOB SYSRES_CONST_JOB_KIND_NOTICE SYSRES_CONST_JOB_STATE_ABORTED SYSRES_CONST_JOB_STATE_COMPLETE SYSRES_CONST_JOB_STATE_WORKING SYSRES_CONST_KIND_REQUISITE_CODE SYSRES_CONST_KIND_REQUISITE_NAME SYSRES_CONST_KINDS_CREATE_SHADOW_COPIES_REQUISITE_CODE SYSRES_CONST_KINDS_DEFAULT_EDOC_LIFE_STAGE_REQUISITE_CODE SYSRES_CONST_KINDS_EDOC_ALL_TEPLATES_ALLOWED_REQUISITE_CODE SYSRES_CONST_KINDS_EDOC_ALLOW_LIFE_CYCLE_STAGE_CHANGING_REQUISITE_CODE SYSRES_CONST_KINDS_EDOC_ALLOW_MULTIPLE_ACTIVE_VERSIONS_REQUISITE_CODE SYSRES_CONST_KINDS_EDOC_SHARE_ACCES_RIGHTS_BY_DEFAULT_CODE SYSRES_CONST_KINDS_EDOC_TEMPLATE_REQUISITE_CODE SYSRES_CONST_KINDS_EDOC_TYPE_REQUISITE_CODE SYSRES_CONST_KINDS_SIGNERS_REQUISITES_CODE SYSRES_CONST_KOD_INPUT_TYPE SYSRES_CONST_LAST_UPDATE_DATE_REQUISITE_CODE SYSRES_CONST_LIFE_CYCLE_START_STAGE_REQUISITE_CODE SYSRES_CONST_LILAC_LIFE_CYCLE_STAGE_FONT_COLOR SYSRES_CONST_LINK_OBJECT_KIND_COMPONENT SYSRES_CONST_LINK_OBJECT_KIND_DOCUMENT SYSRES_CONST_LINK_OBJECT_KIND_EDOC SYSRES_CONST_LINK_OBJECT_KIND_FOLDER SYSRES_CONST_LINK_OBJECT_KIND_JOB SYSRES_CONST_LINK_OBJECT_KIND_REFERENCE SYSRES_CONST_LINK_OBJECT_KIND_TASK SYSRES_CONST_LINK_REF_TYPE_REQUISITE_CODE SYSRES_CONST_LIST_REFERENCE_MODE_NAME SYSRES_CONST_LOCALIZATION_DICTIONARY_MAIN_VIEW_CODE SYSRES_CONST_MAIN_VIEW_CODE SYSRES_CONST_MANUAL_ENUM_METHOD_FLAG SYSRES_CONST_MASTER_COMP_TYPE_REQUISITE_CODE SYSRES_CONST_MASTER_TABLE_REC_ID_REQUISITE_CODE SYSRES_CONST_MAXIMIZED_MODE_NAME SYSRES_CONST_ME_VALUE SYSRES_CONST_MESSAGE_ATTENTION_CAPTION SYSRES_CONST_MESSAGE_CONFIRMATION_CAPTION SYSRES_CONST_MESSAGE_ERROR_CAPTION 
SYSRES_CONST_MESSAGE_INFORMATION_CAPTION SYSRES_CONST_MINIMIZED_MODE_NAME SYSRES_CONST_MINUTE_CHAR SYSRES_CONST_MODULE_REQUISITE_CODE SYSRES_CONST_MONITORING_BLOCK_DESCRIPTION SYSRES_CONST_MONTH_FORMAT_VALUE SYSRES_CONST_NAME_LOCALIZE_ID_REQUISITE_CODE SYSRES_CONST_NAME_REQUISITE_CODE SYSRES_CONST_NAME_SINGULAR_REQUISITE_CODE SYSRES_CONST_NAMEAN_INPUT_TYPE SYSRES_CONST_NEGATIVE_PICK_VALUE SYSRES_CONST_NEGATIVE_VALUE SYSRES_CONST_NO SYSRES_CONST_NO_PICK_VALUE SYSRES_CONST_NO_SIGNATURE_REQUISITE_CODE SYSRES_CONST_NO_VALUE SYSRES_CONST_NONE_ACCESS_RIGHTS_TYPE_CODE SYSRES_CONST_NONOPERATING_RECORD_FLAG_VALUE SYSRES_CONST_NONOPERATING_RECORD_FLAG_VALUE_MASCULINE SYSRES_CONST_NORMAL_ACCESS_RIGHTS_TYPE_CODE SYSRES_CONST_NORMAL_LIFE_CYCLE_STAGE_DRAW_STYLE SYSRES_CONST_NORMAL_MODE_NAME SYSRES_CONST_NOT_ALLOWED_ACCESS_TYPE_CODE SYSRES_CONST_NOT_ALLOWED_ACCESS_TYPE_NAME SYSRES_CONST_NOTE_REQUISITE_CODE SYSRES_CONST_NOTICE_BLOCK_DESCRIPTION SYSRES_CONST_NUM_REQUISITE SYSRES_CONST_NUM_STR_REQUISITE_CODE SYSRES_CONST_NUMERATION_AUTO_NOT_STRONG SYSRES_CONST_NUMERATION_AUTO_STRONG SYSRES_CONST_NUMERATION_FROM_DICTONARY SYSRES_CONST_NUMERATION_MANUAL SYSRES_CONST_NUMERIC_TYPE_CHAR SYSRES_CONST_NUMREQ_REQUISITE_CODE SYSRES_CONST_OBSOLETE_VERSION_STATE_PICK_VALUE SYSRES_CONST_OPERATING_RECORD_FLAG_VALUE SYSRES_CONST_OPERATING_RECORD_FLAG_VALUE_CODE SYSRES_CONST_OPERATING_RECORD_FLAG_VALUE_FEMININE SYSRES_CONST_OPERATING_RECORD_FLAG_VALUE_MASCULINE SYSRES_CONST_OPTIONAL_FORM_COMP_REQCODE_PREFIX SYSRES_CONST_ORANGE_LIFE_CYCLE_STAGE_FONT_COLOR SYSRES_CONST_ORIGINALREF_REQUISITE_CODE SYSRES_CONST_OURFIRM_REF_CODE SYSRES_CONST_OURFIRM_REQUISITE_CODE SYSRES_CONST_OURFIRM_VAR SYSRES_CONST_OUTGOING_WORK_RULE_TYPE_CODE SYSRES_CONST_PICK_NEGATIVE_RESULT SYSRES_CONST_PICK_POSITIVE_RESULT SYSRES_CONST_PICK_REQUISITE SYSRES_CONST_PICK_REQUISITE_TYPE SYSRES_CONST_PICK_TYPE_CHAR SYSRES_CONST_PLAN_STATUS_REQUISITE_CODE SYSRES_CONST_PLATFORM_VERSION_COMMENT SYSRES_CONST_PLUGINS_SETTINGS_DESCRIPTION_REQUISITE_CODE SYSRES_CONST_POSITIVE_PICK_VALUE SYSRES_CONST_POWER_TO_CREATE_ACTION_CODE SYSRES_CONST_POWER_TO_SIGN_ACTION_CODE SYSRES_CONST_PRIORITY_REQUISITE_CODE SYSRES_CONST_QUALIFIED_TASK_TYPE SYSRES_CONST_QUALIFIED_TASK_TYPE_CODE SYSRES_CONST_RECSTAT_REQUISITE_CODE SYSRES_CONST_RED_LIFE_CYCLE_STAGE_FONT_COLOR SYSRES_CONST_REF_ID_T_REF_TYPE_REQUISITE_CODE SYSRES_CONST_REF_REQUISITE SYSRES_CONST_REF_REQUISITE_TYPE SYSRES_CONST_REF_REQUISITES_REFERENCE_CODE_SELECTED_REQUISITE SYSRES_CONST_REFERENCE_RECORD_HISTORY_CREATE_ACTION_CODE SYSRES_CONST_REFERENCE_RECORD_HISTORY_DELETE_ACTION_CODE SYSRES_CONST_REFERENCE_RECORD_HISTORY_MODIFY_ACTION_CODE SYSRES_CONST_REFERENCE_TYPE_CHAR SYSRES_CONST_REFERENCE_TYPE_REQUISITE_NAME SYSRES_CONST_REFERENCES_ADD_PARAMS_REQUISITE_CODE SYSRES_CONST_REFERENCES_DISPLAY_REQUISITE_REQUISITE_CODE SYSRES_CONST_REMOTE_SERVER_STATUS_WORKING SYSRES_CONST_REMOTE_SERVER_TYPE_MAIN SYSRES_CONST_REMOTE_SERVER_TYPE_SECONDARY SYSRES_CONST_REMOTE_USER_FLAG_VALUE_CODE SYSRES_CONST_REPORT_APP_EDITOR_INTERNAL SYSRES_CONST_REPORT_BASE_REPORT_ID_REQUISITE_CODE SYSRES_CONST_REPORT_BASE_REPORT_REQUISITE_CODE SYSRES_CONST_REPORT_SCRIPT_REQUISITE_CODE SYSRES_CONST_REPORT_TEMPLATE_REQUISITE_CODE SYSRES_CONST_REPORT_VIEWER_CODE_REQUISITE_CODE SYSRES_CONST_REQ_ALLOW_COMPONENT_DEFAULT_VALUE SYSRES_CONST_REQ_ALLOW_RECORD_DEFAULT_VALUE SYSRES_CONST_REQ_ALLOW_SERVER_COMPONENT_DEFAULT_VALUE SYSRES_CONST_REQ_MODE_AVAILABLE_CODE SYSRES_CONST_REQ_MODE_EDIT_CODE SYSRES_CONST_REQ_MODE_HIDDEN_CODE 
SYSRES_CONST_REQ_MODE_NOT_AVAILABLE_CODE SYSRES_CONST_REQ_MODE_VIEW_CODE SYSRES_CONST_REQ_NUMBER_REQUISITE_CODE SYSRES_CONST_REQ_SECTION_VALUE SYSRES_CONST_REQ_TYPE_VALUE SYSRES_CONST_REQUISITE_FORMAT_BY_UNIT SYSRES_CONST_REQUISITE_FORMAT_DATE_FULL SYSRES_CONST_REQUISITE_FORMAT_DATE_TIME SYSRES_CONST_REQUISITE_FORMAT_LEFT SYSRES_CONST_REQUISITE_FORMAT_RIGHT SYSRES_CONST_REQUISITE_FORMAT_WITHOUT_UNIT SYSRES_CONST_REQUISITE_NUMBER_REQUISITE_CODE SYSRES_CONST_REQUISITE_SECTION_ACTIONS SYSRES_CONST_REQUISITE_SECTION_BUTTON SYSRES_CONST_REQUISITE_SECTION_BUTTONS SYSRES_CONST_REQUISITE_SECTION_CARD SYSRES_CONST_REQUISITE_SECTION_TABLE SYSRES_CONST_REQUISITE_SECTION_TABLE10 SYSRES_CONST_REQUISITE_SECTION_TABLE11 SYSRES_CONST_REQUISITE_SECTION_TABLE12 SYSRES_CONST_REQUISITE_SECTION_TABLE13 SYSRES_CONST_REQUISITE_SECTION_TABLE14 SYSRES_CONST_REQUISITE_SECTION_TABLE15 SYSRES_CONST_REQUISITE_SECTION_TABLE16 SYSRES_CONST_REQUISITE_SECTION_TABLE17 SYSRES_CONST_REQUISITE_SECTION_TABLE18 SYSRES_CONST_REQUISITE_SECTION_TABLE19 SYSRES_CONST_REQUISITE_SECTION_TABLE2 SYSRES_CONST_REQUISITE_SECTION_TABLE20 SYSRES_CONST_REQUISITE_SECTION_TABLE21 SYSRES_CONST_REQUISITE_SECTION_TABLE22 SYSRES_CONST_REQUISITE_SECTION_TABLE23 SYSRES_CONST_REQUISITE_SECTION_TABLE24 SYSRES_CONST_REQUISITE_SECTION_TABLE3 SYSRES_CONST_REQUISITE_SECTION_TABLE4 SYSRES_CONST_REQUISITE_SECTION_TABLE5 SYSRES_CONST_REQUISITE_SECTION_TABLE6 SYSRES_CONST_REQUISITE_SECTION_TABLE7 SYSRES_CONST_REQUISITE_SECTION_TABLE8 SYSRES_CONST_REQUISITE_SECTION_TABLE9 SYSRES_CONST_REQUISITES_PSEUDOREFERENCE_REQUISITE_NUMBER_REQUISITE_CODE SYSRES_CONST_RIGHT_ALIGNMENT_CODE SYSRES_CONST_ROLES_REFERENCE_CODE SYSRES_CONST_ROUTE_STEP_AFTER_RUS SYSRES_CONST_ROUTE_STEP_AND_CONDITION_RUS SYSRES_CONST_ROUTE_STEP_OR_CONDITION_RUS SYSRES_CONST_ROUTE_TYPE_COMPLEX SYSRES_CONST_ROUTE_TYPE_PARALLEL SYSRES_CONST_ROUTE_TYPE_SERIAL SYSRES_CONST_SBDATASETDESC_NEGATIVE_VALUE SYSRES_CONST_SBDATASETDESC_POSITIVE_VALUE SYSRES_CONST_SBVIEWSDESC_POSITIVE_VALUE SYSRES_CONST_SCRIPT_BLOCK_DESCRIPTION SYSRES_CONST_SEARCH_BY_TEXT_REQUISITE_CODE SYSRES_CONST_SEARCHES_COMPONENT_CONTENT SYSRES_CONST_SEARCHES_CRITERIA_ACTION_NAME SYSRES_CONST_SEARCHES_EDOC_CONTENT SYSRES_CONST_SEARCHES_FOLDER_CONTENT SYSRES_CONST_SEARCHES_JOB_CONTENT SYSRES_CONST_SEARCHES_REFERENCE_CODE SYSRES_CONST_SEARCHES_TASK_CONTENT SYSRES_CONST_SECOND_CHAR SYSRES_CONST_SECTION_REQUISITE_ACTIONS_VALUE SYSRES_CONST_SECTION_REQUISITE_CARD_VALUE SYSRES_CONST_SECTION_REQUISITE_CODE SYSRES_CONST_SECTION_REQUISITE_DETAIL_1_VALUE SYSRES_CONST_SECTION_REQUISITE_DETAIL_2_VALUE SYSRES_CONST_SECTION_REQUISITE_DETAIL_3_VALUE SYSRES_CONST_SECTION_REQUISITE_DETAIL_4_VALUE SYSRES_CONST_SECTION_REQUISITE_DETAIL_5_VALUE SYSRES_CONST_SECTION_REQUISITE_DETAIL_6_VALUE SYSRES_CONST_SELECT_REFERENCE_MODE_NAME SYSRES_CONST_SELECT_TYPE_SELECTABLE SYSRES_CONST_SELECT_TYPE_SELECTABLE_ONLY_CHILD SYSRES_CONST_SELECT_TYPE_SELECTABLE_WITH_CHILD SYSRES_CONST_SELECT_TYPE_UNSLECTABLE SYSRES_CONST_SERVER_TYPE_MAIN SYSRES_CONST_SERVICE_USER_CATEGORY_FIELD_VALUE SYSRES_CONST_SETTINGS_USER_REQUISITE_CODE SYSRES_CONST_SIGNATURE_AND_ENCODE_CERTIFICATE_TYPE_CODE SYSRES_CONST_SIGNATURE_CERTIFICATE_TYPE_CODE SYSRES_CONST_SINGULAR_TITLE_REQUISITE_CODE SYSRES_CONST_SQL_SERVER_AUTHENTIFICATION_FLAG_VALUE_CODE SYSRES_CONST_SQL_SERVER_ENCODE_AUTHENTIFICATION_FLAG_VALUE_CODE SYSRES_CONST_STANDART_ROUTE_REFERENCE_CODE SYSRES_CONST_STANDART_ROUTE_REFERENCE_COMMENT_REQUISITE_CODE SYSRES_CONST_STANDART_ROUTES_GROUPS_REFERENCE_CODE SYSRES_CONST_STATE_REQ_NAME 
SYSRES_CONST_STATE_REQUISITE_ACTIVE_VALUE SYSRES_CONST_STATE_REQUISITE_CLOSED_VALUE SYSRES_CONST_STATE_REQUISITE_CODE SYSRES_CONST_STATIC_ROLE_TYPE_CODE SYSRES_CONST_STATUS_PLAN_DEFAULT_VALUE SYSRES_CONST_STATUS_VALUE_AUTOCLEANING SYSRES_CONST_STATUS_VALUE_BLUE_SQUARE SYSRES_CONST_STATUS_VALUE_COMPLETE SYSRES_CONST_STATUS_VALUE_GREEN_SQUARE SYSRES_CONST_STATUS_VALUE_ORANGE_SQUARE SYSRES_CONST_STATUS_VALUE_PURPLE_SQUARE SYSRES_CONST_STATUS_VALUE_RED_SQUARE SYSRES_CONST_STATUS_VALUE_SUSPEND SYSRES_CONST_STATUS_VALUE_YELLOW_SQUARE SYSRES_CONST_STDROUTE_SHOW_TO_USERS_REQUISITE_CODE SYSRES_CONST_STORAGE_TYPE_FILE SYSRES_CONST_STORAGE_TYPE_SQL_SERVER SYSRES_CONST_STR_REQUISITE SYSRES_CONST_STRIKEOUT_LIFE_CYCLE_STAGE_DRAW_STYLE SYSRES_CONST_STRING_FORMAT_LEFT_ALIGN_CHAR SYSRES_CONST_STRING_FORMAT_RIGHT_ALIGN_CHAR SYSRES_CONST_STRING_REQUISITE_CODE SYSRES_CONST_STRING_REQUISITE_TYPE SYSRES_CONST_STRING_TYPE_CHAR SYSRES_CONST_SUBSTITUTES_PSEUDOREFERENCE_CODE SYSRES_CONST_SUBTASK_BLOCK_DESCRIPTION SYSRES_CONST_SYSTEM_SETTING_CURRENT_USER_PARAM_VALUE SYSRES_CONST_SYSTEM_SETTING_EMPTY_VALUE_PARAM_VALUE SYSRES_CONST_SYSTEM_VERSION_COMMENT SYSRES_CONST_TASK_ACCESS_TYPE_ALL SYSRES_CONST_TASK_ACCESS_TYPE_ALL_MEMBERS SYSRES_CONST_TASK_ACCESS_TYPE_MANUAL SYSRES_CONST_TASK_ENCODE_TYPE_CERTIFICATION SYSRES_CONST_TASK_ENCODE_TYPE_CERTIFICATION_AND_PASSWORD SYSRES_CONST_TASK_ENCODE_TYPE_NONE SYSRES_CONST_TASK_ENCODE_TYPE_PASSWORD SYSRES_CONST_TASK_ROUTE_ALL_CONDITION SYSRES_CONST_TASK_ROUTE_AND_CONDITION SYSRES_CONST_TASK_ROUTE_OR_CONDITION SYSRES_CONST_TASK_STATE_ABORTED SYSRES_CONST_TASK_STATE_COMPLETE SYSRES_CONST_TASK_STATE_CONTINUED SYSRES_CONST_TASK_STATE_CONTROL SYSRES_CONST_TASK_STATE_INIT SYSRES_CONST_TASK_STATE_WORKING SYSRES_CONST_TASK_TITLE SYSRES_CONST_TASK_TYPES_GROUPS_REFERENCE_CODE SYSRES_CONST_TASK_TYPES_REFERENCE_CODE SYSRES_CONST_TEMPLATES_REFERENCE_CODE SYSRES_CONST_TEST_DATE_REQUISITE_NAME SYSRES_CONST_TEST_DEV_DATABASE_NAME SYSRES_CONST_TEST_DEV_SYSTEM_CODE SYSRES_CONST_TEST_EDMS_DATABASE_NAME SYSRES_CONST_TEST_EDMS_MAIN_CODE SYSRES_CONST_TEST_EDMS_MAIN_DB_NAME SYSRES_CONST_TEST_EDMS_SECOND_CODE SYSRES_CONST_TEST_EDMS_SECOND_DB_NAME SYSRES_CONST_TEST_EDMS_SYSTEM_CODE SYSRES_CONST_TEST_NUMERIC_REQUISITE_NAME SYSRES_CONST_TEXT_REQUISITE SYSRES_CONST_TEXT_REQUISITE_CODE SYSRES_CONST_TEXT_REQUISITE_TYPE SYSRES_CONST_TEXT_TYPE_CHAR SYSRES_CONST_TYPE_CODE_REQUISITE_CODE SYSRES_CONST_TYPE_REQUISITE_CODE SYSRES_CONST_UNDEFINED_LIFE_CYCLE_STAGE_FONT_COLOR SYSRES_CONST_UNITS_SECTION_ID_REQUISITE_CODE SYSRES_CONST_UNITS_SECTION_REQUISITE_CODE SYSRES_CONST_UNOPERATING_RECORD_FLAG_VALUE_CODE SYSRES_CONST_UNSTORED_DATA_REQUISITE_CODE SYSRES_CONST_UNSTORED_DATA_REQUISITE_NAME SYSRES_CONST_USE_ACCESS_TYPE_CODE SYSRES_CONST_USE_ACCESS_TYPE_NAME SYSRES_CONST_USER_ACCOUNT_TYPE_VALUE_CODE SYSRES_CONST_USER_ADDITIONAL_INFORMATION_REQUISITE_CODE SYSRES_CONST_USER_AND_GROUP_ID_FROM_PSEUDOREFERENCE_REQUISITE_CODE SYSRES_CONST_USER_CATEGORY_NORMAL SYSRES_CONST_USER_CERTIFICATE_REQUISITE_CODE SYSRES_CONST_USER_CERTIFICATE_STATE_REQUISITE_CODE SYSRES_CONST_USER_CERTIFICATE_SUBJECT_NAME_REQUISITE_CODE SYSRES_CONST_USER_CERTIFICATE_THUMBPRINT_REQUISITE_CODE SYSRES_CONST_USER_COMMON_CATEGORY SYSRES_CONST_USER_COMMON_CATEGORY_CODE SYSRES_CONST_USER_FULL_NAME_REQUISITE_CODE SYSRES_CONST_USER_GROUP_TYPE_REQUISITE_CODE SYSRES_CONST_USER_LOGIN_REQUISITE_CODE SYSRES_CONST_USER_REMOTE_CONTROLLER_REQUISITE_CODE SYSRES_CONST_USER_REMOTE_SYSTEM_REQUISITE_CODE SYSRES_CONST_USER_RIGHTS_T_REQUISITE_CODE 
SYSRES_CONST_USER_SERVER_NAME_REQUISITE_CODE SYSRES_CONST_USER_SERVICE_CATEGORY SYSRES_CONST_USER_SERVICE_CATEGORY_CODE SYSRES_CONST_USER_STATUS_ADMINISTRATOR_CODE SYSRES_CONST_USER_STATUS_ADMINISTRATOR_NAME SYSRES_CONST_USER_STATUS_DEVELOPER_CODE SYSRES_CONST_USER_STATUS_DEVELOPER_NAME SYSRES_CONST_USER_STATUS_DISABLED_CODE SYSRES_CONST_USER_STATUS_DISABLED_NAME SYSRES_CONST_USER_STATUS_SYSTEM_DEVELOPER_CODE SYSRES_CONST_USER_STATUS_USER_CODE SYSRES_CONST_USER_STATUS_USER_NAME SYSRES_CONST_USER_STATUS_USER_NAME_DEPRECATED SYSRES_CONST_USER_TYPE_FIELD_VALUE_USER SYSRES_CONST_USER_TYPE_REQUISITE_CODE SYSRES_CONST_USERS_CONTROLLER_REQUISITE_CODE SYSRES_CONST_USERS_IS_MAIN_SERVER_REQUISITE_CODE SYSRES_CONST_USERS_REFERENCE_CODE SYSRES_CONST_USERS_REGISTRATION_CERTIFICATES_ACTION_NAME SYSRES_CONST_USERS_REQUISITE_CODE SYSRES_CONST_USERS_SYSTEM_REQUISITE_CODE SYSRES_CONST_USERS_USER_ACCESS_RIGHTS_TYPR_REQUISITE_CODE SYSRES_CONST_USERS_USER_AUTHENTICATION_REQUISITE_CODE SYSRES_CONST_USERS_USER_COMPONENT_REQUISITE_CODE SYSRES_CONST_USERS_USER_GROUP_REQUISITE_CODE SYSRES_CONST_USERS_VIEW_CERTIFICATES_ACTION_NAME SYSRES_CONST_VIEW_DEFAULT_CODE SYSRES_CONST_VIEW_DEFAULT_NAME SYSRES_CONST_VIEWER_REQUISITE_CODE SYSRES_CONST_WAITING_BLOCK_DESCRIPTION SYSRES_CONST_WIZARD_FORM_LABEL_TEST_STRING SYSRES_CONST_WIZARD_QUERY_PARAM_HEIGHT_ETALON_STRING SYSRES_CONST_WIZARD_REFERENCE_COMMENT_REQUISITE_CODE SYSRES_CONST_WORK_RULES_DESCRIPTION_REQUISITE_CODE SYSRES_CONST_WORK_TIME_CALENDAR_REFERENCE_CODE SYSRES_CONST_WORK_WORKFLOW_HARD_ROUTE_TYPE_VALUE SYSRES_CONST_WORK_WORKFLOW_HARD_ROUTE_TYPE_VALUE_CODE SYSRES_CONST_WORK_WORKFLOW_HARD_ROUTE_TYPE_VALUE_CODE_RUS SYSRES_CONST_WORK_WORKFLOW_SOFT_ROUTE_TYPE_VALUE_CODE_RUS SYSRES_CONST_WORKFLOW_ROUTE_TYPR_HARD SYSRES_CONST_WORKFLOW_ROUTE_TYPR_SOFT SYSRES_CONST_XML_ENCODING SYSRES_CONST_XREC_STAT_REQUISITE_CODE SYSRES_CONST_XRECID_FIELD_NAME SYSRES_CONST_YES SYSRES_CONST_YES_NO_2_REQUISITE_CODE SYSRES_CONST_YES_NO_REQUISITE_CODE SYSRES_CONST_YES_NO_T_REF_TYPE_REQUISITE_CODE SYSRES_CONST_YES_PICK_VALUE SYSRES_CONST_YES_VALUE ",l="CR FALSE nil NO_VALUE NULL TAB TRUE YES_VALUE ",c="ADMINISTRATORS_GROUP_NAME CUSTOMIZERS_GROUP_NAME DEVELOPERS_GROUP_NAME SERVICE_USERS_GROUP_NAME ",d="DECISION_BLOCK_FIRST_OPERAND_PROPERTY DECISION_BLOCK_NAME_PROPERTY DECISION_BLOCK_OPERATION_PROPERTY DECISION_BLOCK_RESULT_TYPE_PROPERTY DECISION_BLOCK_SECOND_OPERAND_PROPERTY ",_="ANY_FILE_EXTENTION COMPRESSED_DOCUMENT_EXTENSION EXTENDED_DOCUMENT_EXTENSION SHORT_COMPRESSED_DOCUMENT_EXTENSION SHORT_EXTENDED_DOCUMENT_EXTENSION ",p="JOB_BLOCK_ABORT_DEADLINE_PROPERTY JOB_BLOCK_AFTER_FINISH_EVENT JOB_BLOCK_AFTER_QUERY_PARAMETERS_EVENT JOB_BLOCK_ATTACHMENT_PROPERTY JOB_BLOCK_ATTACHMENTS_RIGHTS_GROUP_PROPERTY JOB_BLOCK_ATTACHMENTS_RIGHTS_TYPE_PROPERTY JOB_BLOCK_BEFORE_QUERY_PARAMETERS_EVENT JOB_BLOCK_BEFORE_START_EVENT JOB_BLOCK_CREATED_JOBS_PROPERTY JOB_BLOCK_DEADLINE_PROPERTY JOB_BLOCK_EXECUTION_RESULTS_PROPERTY JOB_BLOCK_IS_PARALLEL_PROPERTY JOB_BLOCK_IS_RELATIVE_ABORT_DEADLINE_PROPERTY JOB_BLOCK_IS_RELATIVE_DEADLINE_PROPERTY JOB_BLOCK_JOB_TEXT_PROPERTY JOB_BLOCK_NAME_PROPERTY JOB_BLOCK_NEED_SIGN_ON_PERFORM_PROPERTY JOB_BLOCK_PERFORMER_PROPERTY JOB_BLOCK_RELATIVE_ABORT_DEADLINE_TYPE_PROPERTY JOB_BLOCK_RELATIVE_DEADLINE_TYPE_PROPERTY JOB_BLOCK_SUBJECT_PROPERTY ",g="ENGLISH_LANGUAGE_CODE RUSSIAN_LANGUAGE_CODE ",E="smHidden smMaximized smMinimized smNormal wmNo wmYes ",f="COMPONENT_TOKEN_LINK_KIND DOCUMENT_LINK_KIND EDOCUMENT_LINK_KIND FOLDER_LINK_KIND JOB_LINK_KIND REFERENCE_LINK_KIND TASK_LINK_KIND 
",S="COMPONENT_TOKEN_LOCK_TYPE EDOCUMENT_VERSION_LOCK_TYPE ",C="MONITOR_BLOCK_AFTER_FINISH_EVENT MONITOR_BLOCK_BEFORE_START_EVENT MONITOR_BLOCK_DEADLINE_PROPERTY MONITOR_BLOCK_INTERVAL_PROPERTY MONITOR_BLOCK_INTERVAL_TYPE_PROPERTY MONITOR_BLOCK_IS_RELATIVE_DEADLINE_PROPERTY MONITOR_BLOCK_NAME_PROPERTY MONITOR_BLOCK_RELATIVE_DEADLINE_TYPE_PROPERTY MONITOR_BLOCK_SEARCH_SCRIPT_PROPERTY ",h="NOTICE_BLOCK_AFTER_FINISH_EVENT NOTICE_BLOCK_ATTACHMENT_PROPERTY NOTICE_BLOCK_ATTACHMENTS_RIGHTS_GROUP_PROPERTY NOTICE_BLOCK_ATTACHMENTS_RIGHTS_TYPE_PROPERTY NOTICE_BLOCK_BEFORE_START_EVENT NOTICE_BLOCK_CREATED_NOTICES_PROPERTY NOTICE_BLOCK_DEADLINE_PROPERTY NOTICE_BLOCK_IS_RELATIVE_DEADLINE_PROPERTY NOTICE_BLOCK_NAME_PROPERTY NOTICE_BLOCK_NOTICE_TEXT_PROPERTY NOTICE_BLOCK_PERFORMER_PROPERTY NOTICE_BLOCK_RELATIVE_DEADLINE_TYPE_PROPERTY NOTICE_BLOCK_SUBJECT_PROPERTY ",T="dseAfterCancel dseAfterClose dseAfterDelete dseAfterDeleteOutOfTransaction dseAfterInsert dseAfterOpen dseAfterScroll dseAfterUpdate dseAfterUpdateOutOfTransaction dseBeforeCancel dseBeforeClose dseBeforeDelete dseBeforeDetailUpdate dseBeforeInsert dseBeforeOpen dseBeforeUpdate dseOnAnyRequisiteChange dseOnCloseRecord dseOnDeleteError dseOnOpenRecord dseOnPrepareUpdate dseOnUpdateError dseOnUpdateRatifiedRecord dseOnValidDelete dseOnValidUpdate reOnChange reOnChangeValues SELECTION_BEGIN_ROUTE_EVENT SELECTION_END_ROUTE_EVENT ",N="CURRENT_PERIOD_IS_REQUIRED PREVIOUS_CARD_TYPE_NAME SHOW_RECORD_PROPERTIES_FORM ",y="ACCESS_RIGHTS_SETTING_DIALOG_CODE ADMINISTRATOR_USER_CODE ANALYTIC_REPORT_TYPE asrtHideLocal asrtHideRemote CALCULATED_ROLE_TYPE_CODE COMPONENTS_REFERENCE_DEVELOPER_VIEW_CODE DCTS_TEST_PROTOCOLS_FOLDER_PATH E_EDOC_VERSION_ALREADY_APPROVINGLY_SIGNED E_EDOC_VERSION_ALREADY_APPROVINGLY_SIGNED_BY_USER E_EDOC_VERSION_ALREDY_SIGNED E_EDOC_VERSION_ALREDY_SIGNED_BY_USER EDOC_TYPES_CODE_REQUISITE_FIELD_NAME EDOCUMENTS_ALIAS_NAME FILES_FOLDER_PATH FILTER_OPERANDS_DELIMITER FILTER_OPERATIONS_DELIMITER FORMCARD_NAME FORMLIST_NAME GET_EXTENDED_DOCUMENT_EXTENSION_CREATION_MODE GET_EXTENDED_DOCUMENT_EXTENSION_IMPORT_MODE INTEGRATED_REPORT_TYPE IS_BUILDER_APPLICATION_ROLE IS_BUILDER_APPLICATION_ROLE2 IS_BUILDER_USERS ISBSYSDEV LOG_FOLDER_PATH mbCancel mbNo mbNoToAll mbOK mbYes mbYesToAll MEMORY_DATASET_DESRIPTIONS_FILENAME mrNo mrNoToAll mrYes mrYesToAll MULTIPLE_SELECT_DIALOG_CODE NONOPERATING_RECORD_FLAG_FEMININE NONOPERATING_RECORD_FLAG_MASCULINE OPERATING_RECORD_FLAG_FEMININE OPERATING_RECORD_FLAG_MASCULINE PROFILING_SETTINGS_COMMON_SETTINGS_CODE_VALUE PROGRAM_INITIATED_LOOKUP_ACTION ratDelete ratEdit ratInsert REPORT_TYPE REQUIRED_PICK_VALUES_VARIABLE rmCard rmList SBRTE_PROGID_DEV SBRTE_PROGID_RELEASE STATIC_ROLE_TYPE_CODE SUPPRESS_EMPTY_TEMPLATE_CREATION SYSTEM_USER_CODE UPDATE_DIALOG_DATASET USED_IN_OBJECT_HINT_PARAM USER_INITIATED_LOOKUP_ACTION USER_NAME_FORMAT USER_SELECTION_RESTRICTIONS WORKFLOW_TEST_PROTOCOLS_FOLDER_PATH ELS_SUBTYPE_CONTROL_NAME ELS_FOLDER_KIND_CONTROL_NAME REPEAT_PROCESS_CURRENT_OBJECT_EXCEPTION_NAME ",x="PRIVILEGE_COMPONENT_FULL_ACCESS PRIVILEGE_DEVELOPMENT_EXPORT PRIVILEGE_DEVELOPMENT_IMPORT PRIVILEGE_DOCUMENT_DELETE PRIVILEGE_ESD PRIVILEGE_FOLDER_DELETE PRIVILEGE_MANAGE_ACCESS_RIGHTS PRIVILEGE_MANAGE_REPLICATION PRIVILEGE_MANAGE_SESSION_SERVER PRIVILEGE_OBJECT_FULL_ACCESS PRIVILEGE_OBJECT_VIEW PRIVILEGE_RESERVE_LICENSE PRIVILEGE_SYSTEM_CUSTOMIZE PRIVILEGE_SYSTEM_DEVELOP PRIVILEGE_SYSTEM_INSTALL PRIVILEGE_TASK_DELETE PRIVILEGE_USER_PLUGIN_SETTINGS_CUSTOMIZE PRIVILEGES_PSEUDOREFERENCE_CODE 
",P="ACCESS_TYPES_PSEUDOREFERENCE_CODE ALL_AVAILABLE_COMPONENTS_PSEUDOREFERENCE_CODE ALL_AVAILABLE_PRIVILEGES_PSEUDOREFERENCE_CODE ALL_REPLICATE_COMPONENTS_PSEUDOREFERENCE_CODE AVAILABLE_DEVELOPERS_COMPONENTS_PSEUDOREFERENCE_CODE COMPONENTS_PSEUDOREFERENCE_CODE FILTRATER_SETTINGS_CONFLICTS_PSEUDOREFERENCE_CODE GROUPS_PSEUDOREFERENCE_CODE RECEIVE_PROTOCOL_PSEUDOREFERENCE_CODE REFERENCE_REQUISITE_PSEUDOREFERENCE_CODE REFERENCE_REQUISITES_PSEUDOREFERENCE_CODE REFTYPES_PSEUDOREFERENCE_CODE REPLICATION_SEANCES_DIARY_PSEUDOREFERENCE_CODE SEND_PROTOCOL_PSEUDOREFERENCE_CODE SUBSTITUTES_PSEUDOREFERENCE_CODE SYSTEM_SETTINGS_PSEUDOREFERENCE_CODE UNITS_PSEUDOREFERENCE_CODE USERS_PSEUDOREFERENCE_CODE VIEWERS_PSEUDOREFERENCE_CODE ",D="CERTIFICATE_TYPE_ENCRYPT CERTIFICATE_TYPE_SIGN CERTIFICATE_TYPE_SIGN_AND_ENCRYPT ",k="STORAGE_TYPE_FILE STORAGE_TYPE_NAS_CIFS STORAGE_TYPE_SAPERION STORAGE_TYPE_SQL_SERVER ",U="COMPTYPE2_REQUISITE_DOCUMENTS_VALUE COMPTYPE2_REQUISITE_TASKS_VALUE COMPTYPE2_REQUISITE_FOLDERS_VALUE COMPTYPE2_REQUISITE_REFERENCES_VALUE ",W="SYSREQ_CODE SYSREQ_COMPTYPE2 SYSREQ_CONST_AVAILABLE_FOR_WEB SYSREQ_CONST_COMMON_CODE SYSREQ_CONST_COMMON_VALUE SYSREQ_CONST_FIRM_CODE SYSREQ_CONST_FIRM_STATUS SYSREQ_CONST_FIRM_VALUE SYSREQ_CONST_SERVER_STATUS SYSREQ_CONTENTS SYSREQ_DATE_OPEN SYSREQ_DATE_CLOSE SYSREQ_DESCRIPTION SYSREQ_DESCRIPTION_LOCALIZE_ID SYSREQ_DOUBLE SYSREQ_EDOC_ACCESS_TYPE SYSREQ_EDOC_AUTHOR SYSREQ_EDOC_CREATED SYSREQ_EDOC_DELEGATE_RIGHTS_REQUISITE_CODE SYSREQ_EDOC_EDITOR SYSREQ_EDOC_ENCODE_TYPE SYSREQ_EDOC_ENCRYPTION_PLUGIN_NAME SYSREQ_EDOC_ENCRYPTION_PLUGIN_VERSION SYSREQ_EDOC_EXPORT_DATE SYSREQ_EDOC_EXPORTER SYSREQ_EDOC_KIND SYSREQ_EDOC_LIFE_STAGE_NAME SYSREQ_EDOC_LOCKED_FOR_SERVER_CODE SYSREQ_EDOC_MODIFIED SYSREQ_EDOC_NAME SYSREQ_EDOC_NOTE SYSREQ_EDOC_QUALIFIED_ID SYSREQ_EDOC_SESSION_KEY SYSREQ_EDOC_SESSION_KEY_ENCRYPTION_PLUGIN_NAME SYSREQ_EDOC_SESSION_KEY_ENCRYPTION_PLUGIN_VERSION SYSREQ_EDOC_SIGNATURE_TYPE SYSREQ_EDOC_SIGNED SYSREQ_EDOC_STORAGE SYSREQ_EDOC_STORAGES_ARCHIVE_STORAGE SYSREQ_EDOC_STORAGES_CHECK_RIGHTS SYSREQ_EDOC_STORAGES_COMPUTER_NAME SYSREQ_EDOC_STORAGES_EDIT_IN_STORAGE SYSREQ_EDOC_STORAGES_EXECUTIVE_STORAGE SYSREQ_EDOC_STORAGES_FUNCTION SYSREQ_EDOC_STORAGES_INITIALIZED SYSREQ_EDOC_STORAGES_LOCAL_PATH SYSREQ_EDOC_STORAGES_SAPERION_DATABASE_NAME SYSREQ_EDOC_STORAGES_SEARCH_BY_TEXT SYSREQ_EDOC_STORAGES_SERVER_NAME SYSREQ_EDOC_STORAGES_SHARED_SOURCE_NAME SYSREQ_EDOC_STORAGES_TYPE SYSREQ_EDOC_TEXT_MODIFIED SYSREQ_EDOC_TYPE_ACT_CODE SYSREQ_EDOC_TYPE_ACT_DESCRIPTION SYSREQ_EDOC_TYPE_ACT_DESCRIPTION_LOCALIZE_ID SYSREQ_EDOC_TYPE_ACT_ON_EXECUTE SYSREQ_EDOC_TYPE_ACT_ON_EXECUTE_EXISTS SYSREQ_EDOC_TYPE_ACT_SECTION SYSREQ_EDOC_TYPE_ADD_PARAMS SYSREQ_EDOC_TYPE_COMMENT SYSREQ_EDOC_TYPE_EVENT_TEXT SYSREQ_EDOC_TYPE_NAME_IN_SINGULAR SYSREQ_EDOC_TYPE_NAME_IN_SINGULAR_LOCALIZE_ID SYSREQ_EDOC_TYPE_NAME_LOCALIZE_ID SYSREQ_EDOC_TYPE_NUMERATION_METHOD SYSREQ_EDOC_TYPE_PSEUDO_REQUISITE_CODE SYSREQ_EDOC_TYPE_REQ_CODE SYSREQ_EDOC_TYPE_REQ_DESCRIPTION SYSREQ_EDOC_TYPE_REQ_DESCRIPTION_LOCALIZE_ID SYSREQ_EDOC_TYPE_REQ_IS_LEADING SYSREQ_EDOC_TYPE_REQ_IS_REQUIRED SYSREQ_EDOC_TYPE_REQ_NUMBER SYSREQ_EDOC_TYPE_REQ_ON_CHANGE SYSREQ_EDOC_TYPE_REQ_ON_CHANGE_EXISTS SYSREQ_EDOC_TYPE_REQ_ON_SELECT SYSREQ_EDOC_TYPE_REQ_ON_SELECT_KIND SYSREQ_EDOC_TYPE_REQ_SECTION SYSREQ_EDOC_TYPE_VIEW_CARD SYSREQ_EDOC_TYPE_VIEW_CODE SYSREQ_EDOC_TYPE_VIEW_COMMENT SYSREQ_EDOC_TYPE_VIEW_IS_MAIN SYSREQ_EDOC_TYPE_VIEW_NAME SYSREQ_EDOC_TYPE_VIEW_NAME_LOCALIZE_ID SYSREQ_EDOC_VERSION_AUTHOR SYSREQ_EDOC_VERSION_CRC 
SYSREQ_EDOC_VERSION_DATA SYSREQ_EDOC_VERSION_EDITOR SYSREQ_EDOC_VERSION_EXPORT_DATE SYSREQ_EDOC_VERSION_EXPORTER SYSREQ_EDOC_VERSION_HIDDEN SYSREQ_EDOC_VERSION_LIFE_STAGE SYSREQ_EDOC_VERSION_MODIFIED SYSREQ_EDOC_VERSION_NOTE SYSREQ_EDOC_VERSION_SIGNATURE_TYPE SYSREQ_EDOC_VERSION_SIGNED SYSREQ_EDOC_VERSION_SIZE SYSREQ_EDOC_VERSION_SOURCE SYSREQ_EDOC_VERSION_TEXT_MODIFIED SYSREQ_EDOCKIND_DEFAULT_VERSION_STATE_CODE SYSREQ_FOLDER_KIND SYSREQ_FUNC_CATEGORY SYSREQ_FUNC_COMMENT SYSREQ_FUNC_GROUP SYSREQ_FUNC_GROUP_COMMENT SYSREQ_FUNC_GROUP_NUMBER SYSREQ_FUNC_HELP SYSREQ_FUNC_PARAM_DEF_VALUE SYSREQ_FUNC_PARAM_IDENT SYSREQ_FUNC_PARAM_NUMBER SYSREQ_FUNC_PARAM_TYPE SYSREQ_FUNC_TEXT SYSREQ_GROUP_CATEGORY SYSREQ_ID SYSREQ_LAST_UPDATE SYSREQ_LEADER_REFERENCE SYSREQ_LINE_NUMBER SYSREQ_MAIN_RECORD_ID SYSREQ_NAME SYSREQ_NAME_LOCALIZE_ID SYSREQ_NOTE SYSREQ_ORIGINAL_RECORD SYSREQ_OUR_FIRM SYSREQ_PROFILING_SETTINGS_BATCH_LOGING SYSREQ_PROFILING_SETTINGS_BATCH_SIZE SYSREQ_PROFILING_SETTINGS_PROFILING_ENABLED SYSREQ_PROFILING_SETTINGS_SQL_PROFILING_ENABLED SYSREQ_PROFILING_SETTINGS_START_LOGGED SYSREQ_RECORD_STATUS SYSREQ_REF_REQ_FIELD_NAME SYSREQ_REF_REQ_FORMAT SYSREQ_REF_REQ_GENERATED SYSREQ_REF_REQ_LENGTH SYSREQ_REF_REQ_PRECISION SYSREQ_REF_REQ_REFERENCE SYSREQ_REF_REQ_SECTION SYSREQ_REF_REQ_STORED SYSREQ_REF_REQ_TOKENS SYSREQ_REF_REQ_TYPE SYSREQ_REF_REQ_VIEW SYSREQ_REF_TYPE_ACT_CODE SYSREQ_REF_TYPE_ACT_DESCRIPTION SYSREQ_REF_TYPE_ACT_DESCRIPTION_LOCALIZE_ID SYSREQ_REF_TYPE_ACT_ON_EXECUTE SYSREQ_REF_TYPE_ACT_ON_EXECUTE_EXISTS SYSREQ_REF_TYPE_ACT_SECTION SYSREQ_REF_TYPE_ADD_PARAMS SYSREQ_REF_TYPE_COMMENT SYSREQ_REF_TYPE_COMMON_SETTINGS SYSREQ_REF_TYPE_DISPLAY_REQUISITE_NAME SYSREQ_REF_TYPE_EVENT_TEXT SYSREQ_REF_TYPE_MAIN_LEADING_REF SYSREQ_REF_TYPE_NAME_IN_SINGULAR SYSREQ_REF_TYPE_NAME_IN_SINGULAR_LOCALIZE_ID SYSREQ_REF_TYPE_NAME_LOCALIZE_ID SYSREQ_REF_TYPE_NUMERATION_METHOD SYSREQ_REF_TYPE_REQ_CODE SYSREQ_REF_TYPE_REQ_DESCRIPTION SYSREQ_REF_TYPE_REQ_DESCRIPTION_LOCALIZE_ID SYSREQ_REF_TYPE_REQ_IS_CONTROL SYSREQ_REF_TYPE_REQ_IS_FILTER SYSREQ_REF_TYPE_REQ_IS_LEADING SYSREQ_REF_TYPE_REQ_IS_REQUIRED SYSREQ_REF_TYPE_REQ_NUMBER SYSREQ_REF_TYPE_REQ_ON_CHANGE SYSREQ_REF_TYPE_REQ_ON_CHANGE_EXISTS SYSREQ_REF_TYPE_REQ_ON_SELECT SYSREQ_REF_TYPE_REQ_ON_SELECT_KIND SYSREQ_REF_TYPE_REQ_SECTION SYSREQ_REF_TYPE_VIEW_CARD SYSREQ_REF_TYPE_VIEW_CODE SYSREQ_REF_TYPE_VIEW_COMMENT SYSREQ_REF_TYPE_VIEW_IS_MAIN SYSREQ_REF_TYPE_VIEW_NAME SYSREQ_REF_TYPE_VIEW_NAME_LOCALIZE_ID SYSREQ_REFERENCE_TYPE_ID SYSREQ_STATE SYSREQ_STATЕ SYSREQ_SYSTEM_SETTINGS_VALUE SYSREQ_TYPE SYSREQ_UNIT SYSREQ_UNIT_ID SYSREQ_USER_GROUPS_GROUP_FULL_NAME SYSREQ_USER_GROUPS_GROUP_NAME SYSREQ_USER_GROUPS_GROUP_SERVER_NAME SYSREQ_USERS_ACCESS_RIGHTS SYSREQ_USERS_AUTHENTICATION SYSREQ_USERS_CATEGORY SYSREQ_USERS_COMPONENT SYSREQ_USERS_COMPONENT_USER_IS_PUBLIC SYSREQ_USERS_DOMAIN SYSREQ_USERS_FULL_USER_NAME SYSREQ_USERS_GROUP SYSREQ_USERS_IS_MAIN_SERVER SYSREQ_USERS_LOGIN SYSREQ_USERS_REFERENCE_USER_IS_PUBLIC SYSREQ_USERS_STATUS SYSREQ_USERS_USER_CERTIFICATE SYSREQ_USERS_USER_CERTIFICATE_INFO SYSREQ_USERS_USER_CERTIFICATE_PLUGIN_NAME SYSREQ_USERS_USER_CERTIFICATE_PLUGIN_VERSION SYSREQ_USERS_USER_CERTIFICATE_STATE SYSREQ_USERS_USER_CERTIFICATE_SUBJECT_NAME SYSREQ_USERS_USER_CERTIFICATE_THUMBPRINT SYSREQ_USERS_USER_DEFAULT_CERTIFICATE SYSREQ_USERS_USER_DESCRIPTION SYSREQ_USERS_USER_GLOBAL_NAME SYSREQ_USERS_USER_LOGIN SYSREQ_USERS_USER_MAIN_SERVER SYSREQ_USERS_USER_TYPE SYSREQ_WORK_RULES_FOLDER_ID ",z="RESULT_VAR_NAME RESULT_VAR_NAME_ENG ",K="AUTO_NUMERATION_RULE_ID 
CANT_CHANGE_ID_REQUISITE_RULE_ID CANT_CHANGE_OURFIRM_REQUISITE_RULE_ID CHECK_CHANGING_REFERENCE_RECORD_USE_RULE_ID CHECK_CODE_REQUISITE_RULE_ID CHECK_DELETING_REFERENCE_RECORD_USE_RULE_ID CHECK_FILTRATER_CHANGES_RULE_ID CHECK_RECORD_INTERVAL_RULE_ID CHECK_REFERENCE_INTERVAL_RULE_ID CHECK_REQUIRED_DATA_FULLNESS_RULE_ID CHECK_REQUIRED_REQUISITES_FULLNESS_RULE_ID MAKE_RECORD_UNRATIFIED_RULE_ID RESTORE_AUTO_NUMERATION_RULE_ID SET_FIRM_CONTEXT_FROM_RECORD_RULE_ID SET_FIRST_RECORD_IN_LIST_FORM_RULE_ID SET_IDSPS_VALUE_RULE_ID SET_NEXT_CODE_VALUE_RULE_ID SET_OURFIRM_BOUNDS_RULE_ID SET_OURFIRM_REQUISITE_RULE_ID ",Ee="SCRIPT_BLOCK_AFTER_FINISH_EVENT SCRIPT_BLOCK_BEFORE_START_EVENT SCRIPT_BLOCK_EXECUTION_RESULTS_PROPERTY SCRIPT_BLOCK_NAME_PROPERTY SCRIPT_BLOCK_SCRIPT_PROPERTY ",oe="SUBTASK_BLOCK_ABORT_DEADLINE_PROPERTY SUBTASK_BLOCK_AFTER_FINISH_EVENT SUBTASK_BLOCK_ASSIGN_PARAMS_EVENT SUBTASK_BLOCK_ATTACHMENTS_PROPERTY SUBTASK_BLOCK_ATTACHMENTS_RIGHTS_GROUP_PROPERTY SUBTASK_BLOCK_ATTACHMENTS_RIGHTS_TYPE_PROPERTY SUBTASK_BLOCK_BEFORE_START_EVENT SUBTASK_BLOCK_CREATED_TASK_PROPERTY SUBTASK_BLOCK_CREATION_EVENT SUBTASK_BLOCK_DEADLINE_PROPERTY SUBTASK_BLOCK_IMPORTANCE_PROPERTY SUBTASK_BLOCK_INITIATOR_PROPERTY SUBTASK_BLOCK_IS_RELATIVE_ABORT_DEADLINE_PROPERTY SUBTASK_BLOCK_IS_RELATIVE_DEADLINE_PROPERTY SUBTASK_BLOCK_JOBS_TYPE_PROPERTY SUBTASK_BLOCK_NAME_PROPERTY SUBTASK_BLOCK_PARALLEL_ROUTE_PROPERTY SUBTASK_BLOCK_PERFORMERS_PROPERTY SUBTASK_BLOCK_RELATIVE_ABORT_DEADLINE_TYPE_PROPERTY SUBTASK_BLOCK_RELATIVE_DEADLINE_TYPE_PROPERTY SUBTASK_BLOCK_REQUIRE_SIGN_PROPERTY SUBTASK_BLOCK_STANDARD_ROUTE_PROPERTY SUBTASK_BLOCK_START_EVENT SUBTASK_BLOCK_STEP_CONTROL_PROPERTY SUBTASK_BLOCK_SUBJECT_PROPERTY SUBTASK_BLOCK_TASK_CONTROL_PROPERTY SUBTASK_BLOCK_TEXT_PROPERTY SUBTASK_BLOCK_UNLOCK_ATTACHMENTS_ON_STOP_PROPERTY SUBTASK_BLOCK_USE_STANDARD_ROUTE_PROPERTY SUBTASK_BLOCK_WAIT_FOR_TASK_COMPLETE_PROPERTY ",L="SYSCOMP_CONTROL_JOBS SYSCOMP_FOLDERS SYSCOMP_JOBS SYSCOMP_NOTICES SYSCOMP_TASKS ",J="SYSDLG_CREATE_EDOCUMENT SYSDLG_CREATE_EDOCUMENT_VERSION SYSDLG_CURRENT_PERIOD SYSDLG_EDIT_FUNCTION_HELP SYSDLG_EDOCUMENT_KINDS_FOR_TEMPLATE SYSDLG_EXPORT_MULTIPLE_EDOCUMENTS SYSDLG_EXPORT_SINGLE_EDOCUMENT SYSDLG_IMPORT_EDOCUMENT SYSDLG_MULTIPLE_SELECT SYSDLG_SETUP_ACCESS_RIGHTS SYSDLG_SETUP_DEFAULT_RIGHTS SYSDLG_SETUP_FILTER_CONDITION SYSDLG_SETUP_SIGN_RIGHTS SYSDLG_SETUP_TASK_OBSERVERS SYSDLG_SETUP_TASK_ROUTE SYSDLG_SETUP_USERS_LIST SYSDLG_SIGN_EDOCUMENT SYSDLG_SIGN_MULTIPLE_EDOCUMENTS ",re="SYSREF_ACCESS_RIGHTS_TYPES SYSREF_ADMINISTRATION_HISTORY SYSREF_ALL_AVAILABLE_COMPONENTS SYSREF_ALL_AVAILABLE_PRIVILEGES SYSREF_ALL_REPLICATING_COMPONENTS SYSREF_AVAILABLE_DEVELOPERS_COMPONENTS SYSREF_CALENDAR_EVENTS SYSREF_COMPONENT_TOKEN_HISTORY SYSREF_COMPONENT_TOKENS SYSREF_COMPONENTS SYSREF_CONSTANTS SYSREF_DATA_RECEIVE_PROTOCOL SYSREF_DATA_SEND_PROTOCOL SYSREF_DIALOGS SYSREF_DIALOGS_REQUISITES SYSREF_EDITORS SYSREF_EDOC_CARDS SYSREF_EDOC_TYPES SYSREF_EDOCUMENT_CARD_REQUISITES SYSREF_EDOCUMENT_CARD_TYPES SYSREF_EDOCUMENT_CARD_TYPES_REFERENCE SYSREF_EDOCUMENT_CARDS SYSREF_EDOCUMENT_HISTORY SYSREF_EDOCUMENT_KINDS SYSREF_EDOCUMENT_REQUISITES SYSREF_EDOCUMENT_SIGNATURES SYSREF_EDOCUMENT_TEMPLATES SYSREF_EDOCUMENT_TEXT_STORAGES SYSREF_EDOCUMENT_VIEWS SYSREF_FILTERER_SETUP_CONFLICTS SYSREF_FILTRATER_SETTING_CONFLICTS SYSREF_FOLDER_HISTORY SYSREF_FOLDERS SYSREF_FUNCTION_GROUPS SYSREF_FUNCTION_PARAMS SYSREF_FUNCTIONS SYSREF_JOB_HISTORY SYSREF_LINKS SYSREF_LOCALIZATION_DICTIONARY SYSREF_LOCALIZATION_LANGUAGES SYSREF_MODULES SYSREF_PRIVILEGES 
SYSREF_RECORD_HISTORY SYSREF_REFERENCE_REQUISITES SYSREF_REFERENCE_TYPE_VIEWS SYSREF_REFERENCE_TYPES SYSREF_REFERENCES SYSREF_REFERENCES_REQUISITES SYSREF_REMOTE_SERVERS SYSREF_REPLICATION_SESSIONS_LOG SYSREF_REPLICATION_SESSIONS_PROTOCOL SYSREF_REPORTS SYSREF_ROLES SYSREF_ROUTE_BLOCK_GROUPS SYSREF_ROUTE_BLOCKS SYSREF_SCRIPTS SYSREF_SEARCHES SYSREF_SERVER_EVENTS SYSREF_SERVER_EVENTS_HISTORY SYSREF_STANDARD_ROUTE_GROUPS SYSREF_STANDARD_ROUTES SYSREF_STATUSES SYSREF_SYSTEM_SETTINGS SYSREF_TASK_HISTORY SYSREF_TASK_KIND_GROUPS SYSREF_TASK_KINDS SYSREF_TASK_RIGHTS SYSREF_TASK_SIGNATURES SYSREF_TASKS SYSREF_UNITS SYSREF_USER_GROUPS SYSREF_USER_GROUPS_REFERENCE SYSREF_USER_SUBSTITUTION SYSREF_USERS SYSREF_USERS_REFERENCE SYSREF_VIEWERS SYSREF_WORKING_TIME_CALENDARS ",G="ACCESS_RIGHTS_TABLE_NAME EDMS_ACCESS_TABLE_NAME EDOC_TYPES_TABLE_NAME ",X="TEST_DEV_DB_NAME TEST_DEV_SYSTEM_CODE TEST_EDMS_DB_NAME TEST_EDMS_MAIN_CODE TEST_EDMS_MAIN_DB_NAME TEST_EDMS_SECOND_CODE TEST_EDMS_SECOND_DB_NAME TEST_EDMS_SYSTEM_CODE TEST_ISB5_MAIN_CODE TEST_ISB5_SECOND_CODE TEST_SQL_SERVER_2005_NAME TEST_SQL_SERVER_NAME ",_e="ATTENTION_CAPTION cbsCommandLinks cbsDefault CONFIRMATION_CAPTION ERROR_CAPTION INFORMATION_CAPTION mrCancel mrOk ",ve="EDOC_VERSION_ACTIVE_STAGE_CODE EDOC_VERSION_DESIGN_STAGE_CODE EDOC_VERSION_OBSOLETE_STAGE_CODE ",he="cpDataEnciphermentEnabled cpDigitalSignatureEnabled cpID cpIssuer cpPluginVersion cpSerial cpSubjectName cpSubjSimpleName cpValidFromDate cpValidToDate ",tt="ISBL_SYNTAX NO_SYNTAX XML_SYNTAX ",lt="WAIT_BLOCK_AFTER_FINISH_EVENT WAIT_BLOCK_BEFORE_START_EVENT WAIT_BLOCK_DEADLINE_PROPERTY WAIT_BLOCK_IS_RELATIVE_DEADLINE_PROPERTY WAIT_BLOCK_NAME_PROPERTY WAIT_BLOCK_RELATIVE_DEADLINE_TYPE_PROPERTY ",$e="SYSRES_COMMON SYSRES_CONST SYSRES_MBFUNC SYSRES_SBDATA SYSRES_SBGUI SYSRES_SBINTF SYSRES_SBREFDSC SYSRES_SQLERRORS SYSRES_SYSCOMP ",Ce=s+l+c+d+_+p+g+E+f+S+C+h+T+N+y+x+P+D+k+U+W+z+K+Ee+oe+L+J+re+G+X+_e+ve+he+tt+lt+$e,Be="atUser atGroup atRole ",Ve="aemEnabledAlways aemDisabledAlways aemEnabledOnBrowse aemEnabledOnEdit aemDisabledOnBrowseEmpty ",xe="apBegin apEnd ",He="alLeft alRight ",rt="asmNever asmNoButCustomize asmAsLastTime asmYesButCustomize asmAlways ",We="cirCommon cirRevoked ",te="ctSignature ctEncode ctSignatureEncode ",pe="clbUnchecked clbChecked clbGrayed ",ie="ceISB ceAlways ceNever ",Pe="ctDocument ctReference ctScript ctUnknown ctReport ctDialog ctFunction ctFolder ctEDocument ctTask ctJob ctNotice ctControlJob ",we="cfInternal cfDisplay ",Xe="ciUnspecified ciWrite ciRead ",pt="ckFolder ckEDocument ckTask ckJob ckComponentToken ckAny ckReference ckScript ckReport ckDialog ",me="ctISBLEditor ctBevel ctButton ctCheckListBox ctComboBox ctComboEdit ctGrid ctDBCheckBox ctDBComboBox ctDBEdit ctDBEllipsis ctDBMemo ctDBNavigator ctDBRadioGroup ctDBStatusLabel ctEdit ctGroupBox ctInplaceHint ctMemo ctPanel ctListBox ctRadioButton ctRichEdit ctTabSheet ctWebBrowser ctImage ctHyperLink ctLabel ctDBMultiEllipsis ctRibbon ctRichView ctInnerPanel ctPanelGroup ctBitButton ",bt="cctDate cctInteger cctNumeric cctPick cctReference cctString cctText ",Ue="cltInternal cltPrimary cltGUI ",Ie="dseBeforeOpen dseAfterOpen dseBeforeClose dseAfterClose dseOnValidDelete dseBeforeDelete dseAfterDelete dseAfterDeleteOutOfTransaction dseOnDeleteError dseBeforeInsert dseAfterInsert dseOnValidUpdate dseBeforeUpdate dseOnUpdateRatifiedRecord dseAfterUpdate dseAfterUpdateOutOfTransaction dseOnUpdateError dseAfterScroll dseOnOpenRecord dseOnCloseRecord dseBeforeCancel dseAfterCancel dseOnUpdateDeadlockError 
dseBeforeDetailUpdate dseOnPrepareUpdate dseOnAnyRequisiteChange ",zt="dssEdit dssInsert dssBrowse dssInActive ",Nt="dftDate dftShortDate dftDateTime dftTimeStamp ",Gt="dotDays dotHours dotMinutes dotSeconds ",Sn="dtkndLocal dtkndUTC ",ne="arNone arView arEdit arFull ",ce="ddaView ddaEdit ",Oe="emLock emEdit emSign emExportWithLock emImportWithUnlock emChangeVersionNote emOpenForModify emChangeLifeStage emDelete emCreateVersion emImport emUnlockExportedWithLock emStart emAbort emReInit emMarkAsReaded emMarkAsUnreaded emPerform emAccept emResume emChangeRights emEditRoute emEditObserver emRecoveryFromLocalCopy emChangeWorkAccessType emChangeEncodeTypeToCertificate emChangeEncodeTypeToPassword emChangeEncodeTypeToNone emChangeEncodeTypeToCertificatePassword emChangeStandardRoute emGetText emOpenForView emMoveToStorage emCreateObject emChangeVersionHidden emDeleteVersion emChangeLifeCycleStage emApprovingSign emExport emContinue emLockFromEdit emUnLockForEdit emLockForServer emUnlockFromServer emDelegateAccessRights emReEncode ",Me="ecotFile ecotProcess ",ct="eaGet eaCopy eaCreate eaCreateStandardRoute ",xt="edltAll edltNothing edltQuery ",Ze="essmText essmCard ",Yt="esvtLast esvtLastActive esvtSpecified ",er="edsfExecutive edsfArchive ",Z="edstSQLServer edstFile ",ge="edvstNone edvstEDocumentVersionCopy edvstFile edvstTemplate edvstScannedFile ",Ae="vsDefault vsDesign vsActive vsObsolete ",it="etNone etCertificate etPassword etCertificatePassword ",ht="ecException ecWarning ecInformation ",wt="estAll estApprovingOnly ",tn="evtLast evtLastActive evtQuery ",mt="fdtString fdtNumeric fdtInteger fdtDate fdtText fdtUnknown fdtWideString fdtLargeInteger ",ln="ftInbox ftOutbox ftFavorites ftCommonFolder ftUserFolder ftComponents ftQuickLaunch ftShortcuts ftSearch ",tr="grhAuto grhX1 grhX2 grhX3 ",gl="hltText hltRTF hltHTML ",lo="iffBMP iffJPEG iffMultiPageTIFF iffSinglePageTIFF iffTIFF iffPNG ",El="im8bGrayscale im24bRGB im1bMonochrome ",fl="itBMP itJPEG itWMF itPNG ",Sl="ikhInformation ikhWarning ikhError ikhNoIcon ",ca="icUnknown icScript icFunction icIntegratedReport icAnalyticReport icDataSetEventHandler icActionHandler icFormEventHandler icLookUpEventHandler icRequisiteChangeEventHandler icBeforeSearchEventHandler icRoleCalculation icSelectRouteEventHandler icBlockPropertyCalculation icBlockQueryParamsEventHandler icChangeSearchResultEventHandler icBlockEventHandler icSubTaskInitEventHandler icEDocDataSetEventHandler icEDocLookUpEventHandler icEDocActionHandler icEDocFormEventHandler icEDocRequisiteChangeEventHandler icStructuredConversionRule icStructuredConversionEventBefore icStructuredConversionEventAfter icWizardEventHandler icWizardFinishEventHandler icWizardStepEventHandler icWizardStepFinishEventHandler icWizardActionEnableEventHandler icWizardActionExecuteEventHandler icCreateJobsHandler icCreateNoticesHandler icBeforeLookUpEventHandler icAfterLookUpEventHandler icTaskAbortEventHandler icWorkflowBlockActionHandler icDialogDataSetEventHandler icDialogActionHandler icDialogLookUpEventHandler icDialogRequisiteChangeEventHandler icDialogFormEventHandler icDialogValidCloseEventHandler icBlockFormEventHandler icTaskFormEventHandler icReferenceMethod icEDocMethod icDialogMethod icProcessMessageHandler ",bl="isShow isHide isByUserSettings ",ua="jkJob jkNotice jkControlJob ",hl="jtInner jtLeft jtRight jtFull jtCross ",Tl="lbpAbove lbpBelow lbpLeft lbpRight ",vl="eltPerConnection eltPerUser ",Cl="sfcUndefined sfcBlack sfcGreen sfcRed sfcBlue sfcOrange sfcLilac ",Rl="sfsItalic sfsStrikeout 
sfsNormal ",Nl="ldctStandardRoute ldctWizard ldctScript ldctFunction ldctRouteBlock ldctIntegratedReport ldctAnalyticReport ldctReferenceType ldctEDocumentType ldctDialog ldctServerEvents ",Ol="mrcrtNone mrcrtUser mrcrtMaximal mrcrtCustom ",Al="vtEqual vtGreaterOrEqual vtLessOrEqual vtRange ",co="rdYesterday rdToday rdTomorrow rdThisWeek rdThisMonth rdThisYear rdNextMonth rdNextWeek rdLastWeek rdLastMonth ",yl="rdWindow rdFile rdPrinter ",Il="rdtString rdtNumeric rdtInteger rdtDate rdtReference rdtAccount rdtText rdtPick rdtUnknown rdtLargeInteger rdtDocument ",Dl="reOnChange reOnChangeValues ",xl="ttGlobal ttLocal ttUser ttSystem ",wl="ssmBrowse ssmSelect ssmMultiSelect ssmBrowseModal ",Ml="smSelect smLike smCard ",Ei="stNone stAuthenticating stApproving ",Ll="sctString sctStream ",fi="sstAnsiSort sstNaturalSort ",Pl="svtEqual svtContain ",kl="soatString soatNumeric soatInteger soatDatetime soatReferenceRecord soatText soatPick soatBoolean soatEDocument soatAccount soatIntegerCollection soatNumericCollection soatStringCollection soatPickCollection soatDatetimeCollection soatBooleanCollection soatReferenceRecordCollection soatEDocumentCollection soatAccountCollection soatContents soatUnknown ",Ul="tarAbortByUser tarAbortByWorkflowException ",uo="tvtAllWords tvtExactPhrase tvtAnyWord ",_o="usNone usCompleted usRedSquare usBlueSquare usYellowSquare usGreenSquare usOrangeSquare usPurpleSquare usFollowUp ",po="utUnknown utUser utDeveloper utAdministrator utSystemDeveloper utDisconnected ",Fl="btAnd btDetailAnd btOr btNotOr btOnly ",Bl="vmView vmSelect vmNavigation ",Gl="vsmSingle vsmMultiple vsmMultipleCheck vsmNoSelection ",Yl="wfatPrevious wfatNext wfatCancel wfatFinish ",mo="wfepUndefined wfepText3 wfepText6 wfepText9 wfepSpinEdit wfepDropDown wfepRadioGroup wfepFlag wfepText12 wfepText15 wfepText18 wfepText21 wfepText24 wfepText27 wfepText30 wfepRadioGroupColumn1 wfepRadioGroupColumn2 wfepRadioGroupColumn3 ",go="wfetQueryParameter wfetText wfetDelimiter wfetLabel ",Eo="wptString wptInteger wptNumeric wptBoolean wptDateTime wptPick wptText wptUser wptUserList wptEDocumentInfo wptEDocumentInfoList wptReferenceRecordInfo wptReferenceRecordInfoList wptFolderInfo wptTaskInfo wptContents wptFileName wptDate ",da="wsrComplete wsrGoNext wsrGoPrevious wsrCustom wsrCancel wsrGoFinal ",ql="wstForm wstEDocument wstTaskCard wstReferenceRecordCard wstFinal ",fo="waAll waPerformers waManual ",Si="wsbStart wsbFinish wsbNotice wsbStep wsbDecision wsbWait wsbMonitor wsbScript wsbConnector wsbSubTask wsbLifeCycleStage wsbPause ",So="wdtInteger wdtFloat wdtString wdtPick wdtDateTime wdtBoolean wdtTask wdtJob wdtFolder wdtEDocument wdtReferenceRecord wdtUser wdtGroup wdtRole wdtIntegerCollection wdtFloatCollection wdtStringCollection wdtPickCollection wdtDateTimeCollection wdtBooleanCollection wdtTaskCollection wdtJobCollection wdtFolderCollection wdtEDocumentCollection wdtReferenceRecordCollection wdtUserCollection wdtGroupCollection wdtRoleCollection wdtContents wdtUserList wdtSearchDescription wdtDeadLine wdtPickSet wdtAccountCollection ",$l="wiLow wiNormal wiHigh ",bo="wrtSoft wrtHard ",ho="wsInit wsRunning wsDone wsControlled wsAborted wsContinued ",_a="wtmFull wtmFromCurrent wtmOnlyCurrent ",Hl=Be+Ve+xe+He+rt+We+te+pe+ie+Pe+we+Xe+pt+me+bt+Ue+Ie+zt+Nt+Gt+Sn+ne+ce+Oe+Me+ct+xt+Ze+Yt+er+Z+ge+Ae+it+ht+wt+tn+mt+ln+tr+gl+lo+El+fl+Sl+ca+bl+ua+hl+Tl+vl+Cl+Rl+Nl+Ol+Al+co+yl+Il+Dl+xl+wl+Ml+Ei+Ll+fi+Pl+kl+Ul+uo+_o+po+Fl+Bl+Gl+Yl+mo+go+Eo+da+ql+fo+Si+So+$l+bo+ho+_a,To="AddSubString AdjustLineBreaks AmountInWords 
Analysis ArrayDimCount ArrayHighBound ArrayLowBound ArrayOf ArrayReDim Assert Assigned BeginOfMonth BeginOfPeriod BuildProfilingOperationAnalysis CallProcedure CanReadFile CArrayElement CDataSetRequisite ChangeDate ChangeReferenceDataset Char CharPos CheckParam CheckParamValue CompareStrings ConstantExists ControlState ConvertDateStr Copy CopyFile CreateArray CreateCachedReference CreateConnection CreateDialog CreateDualListDialog CreateEditor CreateException CreateFile CreateFolderDialog CreateInputDialog CreateLinkFile CreateList CreateLock CreateMemoryDataSet CreateObject CreateOpenDialog CreateProgress CreateQuery CreateReference CreateReport CreateSaveDialog CreateScript CreateSQLPivotFunction CreateStringList CreateTreeListSelectDialog CSelectSQL CSQL CSubString CurrentUserID CurrentUserName CurrentVersion DataSetLocateEx DateDiff DateTimeDiff DateToStr DayOfWeek DeleteFile DirectoryExists DisableCheckAccessRights DisableCheckFullShowingRestriction DisableMassTaskSendingRestrictions DropTable DupeString EditText EnableCheckAccessRights EnableCheckFullShowingRestriction EnableMassTaskSendingRestrictions EndOfMonth EndOfPeriod ExceptionExists ExceptionsOff ExceptionsOn Execute ExecuteProcess Exit ExpandEnvironmentVariables ExtractFileDrive ExtractFileExt ExtractFileName ExtractFilePath ExtractParams FileExists FileSize FindFile FindSubString FirmContext ForceDirectories Format FormatDate FormatNumeric FormatSQLDate FormatString FreeException GetComponent GetComponentLaunchParam GetConstant GetLastException GetReferenceRecord GetRefTypeByRefID GetTableID GetTempFolder IfThen In IndexOf InputDialog InputDialogEx InteractiveMode IsFileLocked IsGraphicFile IsNumeric Length LoadString LoadStringFmt LocalTimeToUTC LowerCase Max MessageBox MessageBoxEx MimeDecodeBinary MimeDecodeString MimeEncodeBinary MimeEncodeString Min MoneyInWords MoveFile NewID Now OpenFile Ord Precision Raise ReadCertificateFromFile ReadFile ReferenceCodeByID ReferenceNumber ReferenceRequisiteMode ReferenceRequisiteValue RegionDateSettings RegionNumberSettings RegionTimeSettings RegRead RegWrite RenameFile Replace Round SelectServerCode SelectSQL ServerDateTime SetConstant SetManagedFolderFieldsState ShowConstantsInputDialog ShowMessage Sleep Split SQL SQL2XLSTAB SQLProfilingSendReport StrToDate SubString SubStringCount SystemSetting Time TimeDiff Today Transliterate Trim UpperCase UserStatus UTCToLocalTime ValidateXML VarIsClear VarIsEmpty VarIsNull WorkTimeDiff WriteFile WriteFileEx WriteObjectHistory Анализ БазаДанных БлокЕсть БлокЕстьРасш БлокИнфо БлокСнять БлокСнятьРасш БлокУстановить Ввод ВводМеню ВедС ВедСпр ВерхняяГраницаМассива ВнешПрогр Восст ВременнаяПапка Время ВыборSQL ВыбратьЗапись ВыделитьСтр Вызвать Выполнить ВыпПрогр ГрафическийФайл ГруппаДополнительно ДатаВремяСерв ДеньНедели ДиалогДаНет ДлинаСтр ДобПодстр ЕПусто ЕслиТо ЕЧисло ЗамПодстр ЗаписьСправочника ЗначПоляСпр ИДТипСпр ИзвлечьДиск ИзвлечьИмяФайла ИзвлечьПуть ИзвлечьРасширение ИзмДат ИзменитьРазмерМассива ИзмеренийМассива ИмяОрг ИмяПоляСпр Индекс ИндикаторЗакрыть ИндикаторОткрыть ИндикаторШаг ИнтерактивныйРежим ИтогТблСпр КодВидВедСпр КодВидСпрПоИД КодПоAnalit КодСимвола КодСпр КолПодстр КолПроп КонМес Конст КонстЕсть КонстЗнач КонТран КопироватьФайл КопияСтр КПериод КСтрТблСпр Макс МаксСтрТблСпр Массив Меню МенюРасш Мин НаборДанныхНайтиРасш НаимВидСпр НаимПоAnalit НаимСпр НастроитьПереводыСтрок НачМес НачТран НижняяГраницаМассива НомерСпр НПериод Окно Окр Окружение ОтлИнфДобавить ОтлИнфУдалить Отчет ОтчетАнал ОтчетИнт ПапкаСуществует Пауза 
ПВыборSQL ПереименоватьФайл Переменные ПереместитьФайл Подстр ПоискПодстр ПоискСтр ПолучитьИДТаблицы ПользовательДополнительно ПользовательИД ПользовательИмя ПользовательСтатус Прервать ПроверитьПараметр ПроверитьПараметрЗнач ПроверитьУсловие РазбСтр РазнВремя РазнДат РазнДатаВремя РазнРабВремя РегУстВрем РегУстДат РегУстЧсл РедТекст РеестрЗапись РеестрСписокИменПарам РеестрЧтение РеквСпр РеквСпрПр Сегодня Сейчас Сервер СерверПроцессИД СертификатФайлСчитать СжПроб Символ СистемаДиректумКод СистемаИнформация СистемаКод Содержит СоединениеЗакрыть СоединениеОткрыть СоздатьДиалог СоздатьДиалогВыбораИзДвухСписков СоздатьДиалогВыбораПапки СоздатьДиалогОткрытияФайла СоздатьДиалогСохраненияФайла СоздатьЗапрос СоздатьИндикатор СоздатьИсключение СоздатьКэшированныйСправочник СоздатьМассив СоздатьНаборДанных СоздатьОбъект СоздатьОтчет СоздатьПапку СоздатьРедактор СоздатьСоединение СоздатьСписок СоздатьСписокСтрок СоздатьСправочник СоздатьСценарий СоздСпр СостСпр Сохр СохрСпр СписокСистем Спр Справочник СпрБлокЕсть СпрБлокСнять СпрБлокСнятьРасш СпрБлокУстановить СпрИзмНабДан СпрКод СпрНомер СпрОбновить СпрОткрыть СпрОтменить СпрПарам СпрПолеЗнач СпрПолеИмя СпрРекв СпрРеквВведЗн СпрРеквНовые СпрРеквПр СпрРеквПредЗн СпрРеквРежим СпрРеквТипТекст СпрСоздать СпрСост СпрСохранить СпрТблИтог СпрТблСтр СпрТблСтрКол СпрТблСтрМакс СпрТблСтрМин СпрТблСтрПред СпрТблСтрСлед СпрТблСтрСозд СпрТблСтрУд СпрТекПредст СпрУдалить СравнитьСтр СтрВерхРегистр СтрНижнРегистр СтрТблСпр СумПроп Сценарий СценарийПарам ТекВерсия ТекОрг Точн Тран Транслитерация УдалитьТаблицу УдалитьФайл УдСпр УдСтрТблСпр Уст УстановкиКонстант ФайлАтрибутСчитать ФайлАтрибутУстановить ФайлВремя ФайлВремяУстановить ФайлВыбрать ФайлЗанят ФайлЗаписать ФайлИскать ФайлКопировать ФайлМожноЧитать ФайлОткрыть ФайлПереименовать ФайлПерекодировать ФайлПереместить ФайлПросмотреть ФайлРазмер ФайлСоздать ФайлСсылкаСоздать ФайлСуществует ФайлСчитать ФайлУдалить ФмтSQLДат ФмтДат ФмтСтр ФмтЧсл Формат ЦМассивЭлемент ЦНаборДанныхРеквизит ЦПодстр ",pa="AltState Application CallType ComponentTokens CreatedJobs CreatedNotices ControlState DialogResult Dialogs EDocuments EDocumentVersionSource Folders GlobalIDs Job Jobs InputValue LookUpReference LookUpRequisiteNames LookUpSearch Object ParentComponent Processes References Requisite ReportName Reports Result Scripts Searches SelectedAttachments SelectedItems SelectMode Sender ServerEvents ServiceFactory ShiftState SubTask SystemDialogs Tasks Wizard Wizards Work ВызовСпособ ИмяОтчета РеквЗнач ",ma="IApplication IAccessRights IAccountRepository IAccountSelectionRestrictions IAction IActionList IAdministrationHistoryDescription IAnchors IApplication IArchiveInfo IAttachment IAttachmentList ICheckListBox ICheckPointedList IColumn IComponent IComponentDescription IComponentToken IComponentTokenFactory IComponentTokenInfo ICompRecordInfo IConnection IContents IControl IControlJob IControlJobInfo IControlList ICrypto ICrypto2 ICustomJob ICustomJobInfo ICustomListBox ICustomObjectWizardStep ICustomWork ICustomWorkInfo IDataSet IDataSetAccessInfo IDataSigner IDateCriterion IDateRequisite IDateRequisiteDescription IDateValue IDeaAccessRights IDeaObjectInfo IDevelopmentComponentLock IDialog IDialogFactory IDialogPickRequisiteItems IDialogsFactory IDICSFactory IDocRequisite IDocumentInfo IDualListDialog IECertificate IECertificateInfo IECertificates IEditControl IEditorForm IEdmsExplorer IEdmsObject IEdmsObjectDescription IEdmsObjectFactory IEdmsObjectInfo IEDocument IEDocumentAccessRights IEDocumentDescription IEDocumentEditor 
IEDocumentFactory IEDocumentInfo IEDocumentStorage IEDocumentVersion IEDocumentVersionListDialog IEDocumentVersionSource IEDocumentWizardStep IEDocVerSignature IEDocVersionState IEnabledMode IEncodeProvider IEncrypter IEvent IEventList IException IExternalEvents IExternalHandler IFactory IField IFileDialog IFolder IFolderDescription IFolderDialog IFolderFactory IFolderInfo IForEach IForm IFormTitle IFormWizardStep IGlobalIDFactory IGlobalIDInfo IGrid IHasher IHistoryDescription IHyperLinkControl IImageButton IImageControl IInnerPanel IInplaceHint IIntegerCriterion IIntegerList IIntegerRequisite IIntegerValue IISBLEditorForm IJob IJobDescription IJobFactory IJobForm IJobInfo ILabelControl ILargeIntegerCriterion ILargeIntegerRequisite ILargeIntegerValue ILicenseInfo ILifeCycleStage IList IListBox ILocalIDInfo ILocalization ILock IMemoryDataSet IMessagingFactory IMetadataRepository INotice INoticeInfo INumericCriterion INumericRequisite INumericValue IObject IObjectDescription IObjectImporter IObjectInfo IObserver IPanelGroup IPickCriterion IPickProperty IPickRequisite IPickRequisiteDescription IPickRequisiteItem IPickRequisiteItems IPickValue IPrivilege IPrivilegeList IProcess IProcessFactory IProcessMessage IProgress IProperty IPropertyChangeEvent IQuery IReference IReferenceCriterion IReferenceEnabledMode IReferenceFactory IReferenceHistoryDescription IReferenceInfo IReferenceRecordCardWizardStep IReferenceRequisiteDescription IReferencesFactory IReferenceValue IRefRequisite IReport IReportFactory IRequisite IRequisiteDescription IRequisiteDescriptionList IRequisiteFactory IRichEdit IRouteStep IRule IRuleList ISchemeBlock IScript IScriptFactory ISearchCriteria ISearchCriterion ISearchDescription ISearchFactory ISearchFolderInfo ISearchForObjectDescription ISearchResultRestrictions ISecuredContext ISelectDialog IServerEvent IServerEventFactory IServiceDialog IServiceFactory ISignature ISignProvider ISignProvider2 ISignProvider3 ISimpleCriterion IStringCriterion IStringList IStringRequisite IStringRequisiteDescription IStringValue ISystemDialogsFactory ISystemInfo ITabSheet ITask ITaskAbortReasonInfo ITaskCardWizardStep ITaskDescription ITaskFactory ITaskInfo ITaskRoute ITextCriterion ITextRequisite ITextValue ITreeListSelectDialog IUser IUserList IValue IView IWebBrowserControl IWizard IWizardAction IWizardFactory IWizardFormElement IWizardParam IWizardPickParam IWizardReferenceParam IWizardStep IWorkAccessRights IWorkDescription IWorkflowAskableParam IWorkflowAskableParams IWorkflowBlock IWorkflowBlockResult IWorkflowEnabledMode IWorkflowParam IWorkflowPickParam IWorkflowReferenceParam IWorkState IWorkTreeCustomNode IWorkTreeJobNode IWorkTreeTaskNode IXMLEditorForm SBCrypto ",gr=Ce+Hl,vo=pa,Co="null true false nil ",Ro={className:"number",begin:e.NUMBER_RE,relevance:0},ga={className:"string",variants:[{begin:'"',end:'"'},{begin:"'",end:"'"}]},Ea={className:"doctag",begin:"\\b(?:TODO|DONE|BEGIN|END|STUB|CHG|FIXME|NOTE|BUG|XXX)\\b",relevance:0},No={className:"comment",begin:"//",end:"$",relevance:0,contains:[e.PHRASAL_WORDS_MODE,Ea]},Oo={className:"comment",begin:"/\\*",end:"\\*/",relevance:0,contains:[e.PHRASAL_WORDS_MODE,Ea]},Ao={variants:[No,Oo]},bi={$pattern:n,keyword:o,built_in:gr,class:vo,literal:Co},fa={begin:"\\.\\s*"+e.UNDERSCORE_IDENT_RE,keywords:bi,relevance:0},Sa={className:"type",begin:":[ \\t]*("+ma.trim().replace(/\s/g,"|")+")",end:"[ 
\\t]*=",excludeEnd:!0},yo={className:"variable",keywords:bi,begin:n,relevance:0,contains:[Sa,fa]},Io=i+"\\(";return{name:"ISBL",case_insensitive:!0,keywords:bi,illegal:"\\$|\\?|%|,|;$|~|#|@|o(l,c,d-1))}function s(l){const c=l.regex,d="[À-ʸa-zA-Z_$][À-ʸa-zA-Z_$0-9]*",_=d+o("(?:<"+d+"~~~(?:\\s*,\\s*"+d+"~~~)*>)?",/~~~/g,2),S={keyword:["synchronized","abstract","private","var","static","if","const ","for","while","strictfp","finally","protected","import","native","final","void","enum","else","break","transient","catch","instanceof","volatile","case","assert","package","default","public","try","switch","continue","throws","protected","public","private","module","requires","exports","do","sealed","yield","permits"],literal:["false","true","null"],type:["char","boolean","long","float","int","byte","short","double"],built_in:["super","this"]},C={className:"meta",begin:"@"+d,contains:[{begin:/\(/,end:/\)/,contains:["self"]}]},h={className:"params",begin:/\(/,end:/\)/,keywords:S,relevance:0,contains:[l.C_BLOCK_COMMENT_MODE],endsParent:!0};return{name:"Java",aliases:["jsp"],keywords:S,illegal:/<\/|#/,contains:[l.COMMENT("/\\*\\*","\\*/",{relevance:0,contains:[{begin:/\w+@/,relevance:0},{className:"doctag",begin:"@[A-Za-z]+"}]}),{begin:/import java\.[a-z]+\./,keywords:"import",relevance:2},l.C_LINE_COMMENT_MODE,l.C_BLOCK_COMMENT_MODE,{begin:/"""/,end:/"""/,className:"string",contains:[l.BACKSLASH_ESCAPE]},l.APOS_STRING_MODE,l.QUOTE_STRING_MODE,{match:[/\b(?:class|interface|enum|extends|implements|new)/,/\s+/,d],className:{1:"keyword",3:"title.class"}},{match:/non-sealed/,scope:"keyword"},{begin:[c.concat(/(?!else)/,d),/\s+/,d,/\s+/,/=(?!=)/],className:{1:"type",3:"variable",5:"operator"}},{begin:[/record/,/\s+/,d],className:{1:"keyword",3:"title.class"},contains:[h,l.C_LINE_COMMENT_MODE,l.C_BLOCK_COMMENT_MODE]},{beginKeywords:"new throw return else",relevance:0},{begin:["(?:"+_+"\\s+)",l.UNDERSCORE_IDENT_RE,/\s*(?=\()/],className:{2:"title.function"},keywords:S,contains:[{className:"params",begin:/\(/,end:/\)/,keywords:S,relevance:0,contains:[C,l.APOS_STRING_MODE,l.QUOTE_STRING_MODE,i,l.C_BLOCK_COMMENT_MODE]},l.C_LINE_COMMENT_MODE,l.C_BLOCK_COMMENT_MODE]},i,C]}}return h_=s,h_}var T_,OT;function OAe(){if(OT)return T_;OT=1;const 
t="[A-Za-z$_][0-9A-Za-z$_]*",e=["as","in","of","if","for","while","finally","var","new","function","do","return","void","else","break","catch","instanceof","with","throw","case","default","try","switch","continue","typeof","delete","let","yield","const","class","debugger","async","await","static","import","from","export","extends"],n=["true","false","null","undefined","NaN","Infinity"],i=["Object","Function","Boolean","Symbol","Math","Date","Number","BigInt","String","RegExp","Array","Float32Array","Float64Array","Int8Array","Uint8Array","Uint8ClampedArray","Int16Array","Int32Array","Uint16Array","Uint32Array","BigInt64Array","BigUint64Array","Set","Map","WeakSet","WeakMap","ArrayBuffer","SharedArrayBuffer","Atomics","DataView","JSON","Promise","Generator","GeneratorFunction","AsyncFunction","Reflect","Proxy","Intl","WebAssembly"],o=["Error","EvalError","InternalError","RangeError","ReferenceError","SyntaxError","TypeError","URIError"],s=["setInterval","setTimeout","clearInterval","clearTimeout","require","exports","eval","isFinite","isNaN","parseFloat","parseInt","decodeURI","decodeURIComponent","encodeURI","encodeURIComponent","escape","unescape"],l=["arguments","this","super","console","window","document","localStorage","sessionStorage","module","global"],c=[].concat(s,i,o);function d(_){const p=_.regex,g=(Ve,{after:xe})=>{const He="",end:""},S=/<[A-Za-z0-9\\._:-]+\s*\/>/,C={begin:/<[A-Za-z0-9\\._:-]+/,end:/\/[A-Za-z0-9\\._:-]+>|\/>/,isTrulyOpeningTag:(Ve,xe)=>{const He=Ve[0].length+Ve.index,rt=Ve.input[He];if(rt==="<"||rt===","){xe.ignoreMatch();return}rt===">"&&(g(Ve,{after:He})||xe.ignoreMatch());let We;const te=Ve.input.substring(He);if(We=te.match(/^\s*=/)){xe.ignoreMatch();return}if((We=te.match(/^\s+extends\s+/))&&We.index===0){xe.ignoreMatch();return}}},h={$pattern:t,keyword:e,literal:n,built_in:c,"variable.language":l},T="[0-9](_?[0-9])*",N=`\\.(${T})`,y="0|[1-9](_?[0-9])*|0[0-7]*[89][0-9]*",x={className:"number",variants:[{begin:`(\\b(${y})((${N})|\\.)?|(${N}))[eE][+-]?(${T})\\b`},{begin:`\\b(${y})\\b((${N})\\b|\\.)?|(${N})\\b`},{begin:"\\b(0|[1-9](_?[0-9])*)n\\b"},{begin:"\\b0[xX][0-9a-fA-F](_?[0-9a-fA-F])*n?\\b"},{begin:"\\b0[bB][0-1](_?[0-1])*n?\\b"},{begin:"\\b0[oO][0-7](_?[0-7])*n?\\b"},{begin:"\\b0[0-7]+n?\\b"}],relevance:0},P={className:"subst",begin:"\\$\\{",end:"\\}",keywords:h,contains:[]},D={begin:"html`",end:"",starts:{end:"`",returnEnd:!1,contains:[_.BACKSLASH_ESCAPE,P],subLanguage:"xml"}},k={begin:"css`",end:"",starts:{end:"`",returnEnd:!1,contains:[_.BACKSLASH_ESCAPE,P],subLanguage:"css"}},U={begin:"gql`",end:"",starts:{end:"`",returnEnd:!1,contains:[_.BACKSLASH_ESCAPE,P],subLanguage:"graphql"}},W={className:"string",begin:"`",end:"`",contains:[_.BACKSLASH_ESCAPE,P]},K={className:"comment",variants:[_.COMMENT(/\/\*\*(?!\/)/,"\\*/",{relevance:0,contains:[{begin:"(?=@[A-Za-z]+)",relevance:0,contains:[{className:"doctag",begin:"@[A-Za-z]+"},{className:"type",begin:"\\{",end:"\\}",excludeEnd:!0,excludeBegin:!0,relevance:0},{className:"variable",begin:E+"(?=\\s*(-)|$)",endsParent:!0,relevance:0},{begin:/(?=[^\n])\s/,relevance:0}]}]}),_.C_BLOCK_COMMENT_MODE,_.C_LINE_COMMENT_MODE]},Ee=[_.APOS_STRING_MODE,_.QUOTE_STRING_MODE,D,k,U,W,{match:/\$\d+/},x];P.contains=Ee.concat({begin:/\{/,end:/\}/,keywords:h,contains:["self"].concat(Ee)});const 
oe=[].concat(K,P.contains),L=oe.concat([{begin:/\(/,end:/\)/,keywords:h,contains:["self"].concat(oe)}]),J={className:"params",begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:h,contains:L},re={variants:[{match:[/class/,/\s+/,E,/\s+/,/extends/,/\s+/,p.concat(E,"(",p.concat(/\./,E),")*")],scope:{1:"keyword",3:"title.class",5:"keyword",7:"title.class.inherited"}},{match:[/class/,/\s+/,E],scope:{1:"keyword",3:"title.class"}}]},G={relevance:0,match:p.either(/\bJSON/,/\b[A-Z][a-z]+([A-Z][a-z]*|\d)*/,/\b[A-Z]{2,}([A-Z][a-z]+|\d)+([A-Z][a-z]*)*/,/\b[A-Z]{2,}[a-z]+([A-Z][a-z]+|\d)*([A-Z][a-z]*)*/),className:"title.class",keywords:{_:[...i,...o]}},X={label:"use_strict",className:"meta",relevance:10,begin:/^\s*['"]use (strict|asm)['"]/},_e={variants:[{match:[/function/,/\s+/,E,/(?=\s*\()/]},{match:[/function/,/\s*(?=\()/]}],className:{1:"keyword",3:"title.function"},label:"func.def",contains:[J],illegal:/%/},ve={relevance:0,match:/\b[A-Z][A-Z_0-9]+\b/,className:"variable.constant"};function he(Ve){return p.concat("(?!",Ve.join("|"),")")}const tt={match:p.concat(/\b/,he([...s,"super","import"]),E,p.lookahead(/\(/)),className:"title.function",relevance:0},lt={begin:p.concat(/\./,p.lookahead(p.concat(E,/(?![0-9A-Za-z$_(])/))),end:E,excludeBegin:!0,keywords:"prototype",className:"property",relevance:0},$e={match:[/get|set/,/\s+/,E,/(?=\()/],className:{1:"keyword",3:"title.function"},contains:[{begin:/\(\)/},J]},Ce="(\\([^()]*(\\([^()]*(\\([^()]*\\)[^()]*)*\\)[^()]*)*\\)|"+_.UNDERSCORE_IDENT_RE+")\\s*=>",Be={match:[/const|var|let/,/\s+/,E,/\s*/,/=\s*/,/(async\s*)?/,p.lookahead(Ce)],keywords:"async",className:{1:"keyword",3:"title.function"},contains:[J]};return{name:"JavaScript",aliases:["js","jsx","mjs","cjs"],keywords:h,exports:{PARAMS_CONTAINS:L,CLASS_REFERENCE:G},illegal:/#(?![$_A-z])/,contains:[_.SHEBANG({label:"shebang",binary:"node",relevance:5}),X,_.APOS_STRING_MODE,_.QUOTE_STRING_MODE,D,k,U,W,K,{match:/\$\d+/},x,G,{className:"attr",begin:E+p.lookahead(":"),relevance:0},Be,{begin:"("+_.RE_STARTERS_RE+"|\\b(case|return|throw)\\b)\\s*",keywords:"return throw case",relevance:0,contains:[K,_.REGEXP_MODE,{className:"function",begin:Ce,returnBegin:!0,end:"\\s*=>",contains:[{className:"params",variants:[{begin:_.UNDERSCORE_IDENT_RE,relevance:0},{className:null,begin:/\(\s*\)/,skip:!0},{begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:h,contains:L}]}]},{begin:/,/,relevance:0},{match:/\s+/,relevance:0},{variants:[{begin:f.begin,end:f.end},{match:S},{begin:C.begin,"on:begin":C.isTrulyOpeningTag,end:C.end}],subLanguage:"xml",contains:[{begin:C.begin,end:C.end,skip:!0,contains:["self"]}]}]},_e,{beginKeywords:"while if switch catch for"},{begin:"\\b(?!function)"+_.UNDERSCORE_IDENT_RE+"\\([^()]*(\\([^()]*(\\([^()]*\\)[^()]*)*\\)[^()]*)*\\)\\s*\\{",returnBegin:!0,label:"func.def",contains:[J,_.inherit(_.TITLE_MODE,{begin:E,className:"title.function"})]},{match:/\.\.\./,relevance:0},lt,{match:"\\$"+E,relevance:0},{match:[/\bconstructor(?=\s*\()/],className:{1:"title.function"},contains:[J]},tt,ve,re,$e,{match:/\$[(.]/}]}}return T_=d,T_}var v_,AT;function AAe(){if(AT)return v_;AT=1;function t(e){const i={className:"params",begin:/\(/,end:/\)/,contains:[{begin:/[\w-]+ *=/,returnBegin:!0,relevance:0,contains:[{className:"attr",begin:/[\w-]+/}]}],relevance:0},o={className:"function",begin:/:[\w\-.]+/,relevance:0},s={className:"string",begin:/\B([\/.])[\w\-.\/=]+/},l={className:"params",begin:/--[\w\-=\/]+/};return{name:"JBoss CLI",aliases:["wildfly-cli"],keywords:{$pattern:"[a-z-]+",keyword:"alias 
batch cd clear command connect connection-factory connection-info data-source deploy deployment-info deployment-overlay echo echo-dmr help history if jdbc-driver-info jms-queue|20 jms-topic|20 ls patch pwd quit read-attribute read-operation reload rollout-plan run-batch set shutdown try unalias undeploy unset version xa-data-source",literal:"true false"},contains:[e.HASH_COMMENT_MODE,e.QUOTE_STRING_MODE,l,o,s,i]}}return v_=t,v_}var C_,yT;function yAe(){if(yT)return C_;yT=1;function t(e){const n={className:"attr",begin:/"(\\.|[^\\"\r\n])*"(?=\s*:)/,relevance:1.01},i={match:/[{}[\],:]/,className:"punctuation",relevance:0},o=["true","false","null"],s={scope:"literal",beginKeywords:o.join(" ")};return{name:"JSON",keywords:{literal:o},contains:[n,i,e.QUOTE_STRING_MODE,s,e.C_NUMBER_MODE,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE],illegal:"\\S"}}return C_=t,C_}var R_,IT;function IAe(){if(IT)return R_;IT=1;function t(e){const n="[A-Za-z_\\u00A1-\\uFFFF][A-Za-z_0-9\\u00A1-\\uFFFF]*",l={$pattern:n,keyword:["baremodule","begin","break","catch","ccall","const","continue","do","else","elseif","end","export","false","finally","for","function","global","if","import","in","isa","let","local","macro","module","quote","return","true","try","using","where","while"],literal:["ARGS","C_NULL","DEPOT_PATH","ENDIAN_BOM","ENV","Inf","Inf16","Inf32","Inf64","InsertionSort","LOAD_PATH","MergeSort","NaN","NaN16","NaN32","NaN64","PROGRAM_FILE","QuickSort","RoundDown","RoundFromZero","RoundNearest","RoundNearestTiesAway","RoundNearestTiesUp","RoundToZero","RoundUp","VERSION|0","devnull","false","im","missing","nothing","pi","stderr","stdin","stdout","true","undef","π","ℯ"],built_in:["AbstractArray","AbstractChannel","AbstractChar","AbstractDict","AbstractDisplay","AbstractFloat","AbstractIrrational","AbstractMatrix","AbstractRange","AbstractSet","AbstractString","AbstractUnitRange","AbstractVecOrMat","AbstractVector","Any","ArgumentError","Array","AssertionError","BigFloat","BigInt","BitArray","BitMatrix","BitSet","BitVector","Bool","BoundsError","CapturedException","CartesianIndex","CartesianIndices","Cchar","Cdouble","Cfloat","Channel","Char","Cint","Cintmax_t","Clong","Clonglong","Cmd","Colon","Complex","ComplexF16","ComplexF32","ComplexF64","CompositeException","Condition","Cptrdiff_t","Cshort","Csize_t","Cssize_t","Cstring","Cuchar","Cuint","Cuintmax_t","Culong","Culonglong","Cushort","Cvoid","Cwchar_t","Cwstring","DataType","DenseArray","DenseMatrix","DenseVecOrMat","DenseVector","Dict","DimensionMismatch","Dims","DivideError","DomainError","EOFError","Enum","ErrorException","Exception","ExponentialBackOff","Expr","Float16","Float32","Float64","Function","GlobalRef","HTML","IO","IOBuffer","IOContext","IOStream","IdDict","IndexCartesian","IndexLinear","IndexStyle","InexactError","InitError","Int","Int128","Int16","Int32","Int64","Int8","Integer","InterruptException","InvalidStateException","Irrational","KeyError","LinRange","LineNumberNode","LinearIndices","LoadError","MIME","Matrix","Method","MethodError","Missing","MissingException","Module","NTuple","NamedTuple","Nothing","Number","OrdinalRange","OutOfMemoryError","OverflowError","Pair","PartialQuickSort","PermutedDimsArray","Pipe","ProcessFailedException","Ptr","QuoteNode","Rational","RawFD","ReadOnlyMemoryError","Real","ReentrantLock","Ref","Regex","RegexMatch","RoundingMode","SegmentationFault","Set","Signed","Some","StackOverflowError","StepRange","StepRangeLen","StridedArray","StridedMatrix","StridedVecOrMat","StridedVector","String","StringIndexError","S
ubArray","SubString","SubstitutionString","Symbol","SystemError","Task","TaskFailedException","Text","TextDisplay","Timer","Tuple","Type","TypeError","TypeVar","UInt","UInt128","UInt16","UInt32","UInt64","UInt8","UndefInitializer","UndefKeywordError","UndefRefError","UndefVarError","Union","UnionAll","UnitRange","Unsigned","Val","Vararg","VecElement","VecOrMat","Vector","VersionNumber","WeakKeyDict","WeakRef"]},c={keywords:l,illegal:/<\//},d={className:"number",begin:/(\b0x[\d_]*(\.[\d_]*)?|0x\.\d[\d_]*)p[-+]?\d+|\b0[box][a-fA-F0-9][a-fA-F0-9_]*|(\b\d[\d_]*(\.[\d_]*)?|\.\d[\d_]*)([eEfF][-+]?\d+)?/,relevance:0},_={className:"string",begin:/'(.|\\[xXuU][a-zA-Z0-9]+)'/},p={className:"subst",begin:/\$\(/,end:/\)/,keywords:l},g={className:"variable",begin:"\\$"+n},E={className:"string",contains:[e.BACKSLASH_ESCAPE,p,g],variants:[{begin:/\w*"""/,end:/"""\w*/,relevance:10},{begin:/\w*"/,end:/"\w*/}]},f={className:"string",contains:[e.BACKSLASH_ESCAPE,p,g],begin:"`",end:"`"},S={className:"meta",begin:"@"+n},C={className:"comment",variants:[{begin:"#=",end:"=#",relevance:10},{begin:"#",end:"$"}]};return c.name="Julia",c.contains=[d,_,E,f,S,C,e.HASH_COMMENT_MODE,{className:"keyword",begin:"\\b(((abstract|primitive)\\s+)type|(mutable\\s+)?struct)\\b"},{begin:/<:/}],p.contains=c.contains,c}return R_=t,R_}var N_,DT;function DAe(){if(DT)return N_;DT=1;function t(e){return{name:"Julia REPL",contains:[{className:"meta.prompt",begin:/^julia>/,relevance:10,starts:{end:/^(?![ ]{6})/,subLanguage:"julia"}}],aliases:["jldoctest"]}}return N_=t,N_}var O_,xT;function xAe(){if(xT)return O_;xT=1;var t="[0-9](_*[0-9])*",e=`\\.(${t})`,n="[0-9a-fA-F](_*[0-9a-fA-F])*",i={className:"number",variants:[{begin:`(\\b(${t})((${e})|\\.)?|(${e}))[eE][+-]?(${t})[fFdD]?\\b`},{begin:`\\b(${t})((${e})[fFdD]?\\b|\\.([fFdD]\\b)?)`},{begin:`(${e})[fFdD]?\\b`},{begin:`\\b(${t})[fFdD]\\b`},{begin:`\\b0[xX]((${n})\\.?|(${n})?\\.(${n}))[pP][+-]?(${t})[fFdD]?\\b`},{begin:"\\b(0|[1-9](_*[0-9])*)[lL]?\\b"},{begin:`\\b0[xX](${n})[lL]?\\b`},{begin:"\\b0(_*[0-7])*[lL]?\\b"},{begin:"\\b0[bB][01](_*[01])*[lL]?\\b"}],relevance:0};function o(s){const l={keyword:"abstract as val var vararg get set class object open private protected public noinline crossinline dynamic final enum if else do while for when throw try catch finally import package is in fun override companion reified inline lateinit init interface annotation data sealed internal infix operator out by constructor super tailrec where const inner suspend typealias external expect actual",built_in:"Byte Short Char Int Long Boolean Float Double Void Unit Nothing",literal:"true false null"},c={className:"keyword",begin:/\b(break|continue|return|this)\b/,starts:{contains:[{className:"symbol",begin:/@\w+/}]}},d={className:"symbol",begin:s.UNDERSCORE_IDENT_RE+"@"},_={className:"subst",begin:/\$\{/,end:/\}/,contains:[s.C_NUMBER_MODE]},p={className:"variable",begin:"\\$"+s.UNDERSCORE_IDENT_RE},g={className:"string",variants:[{begin:'"""',end:'"""(?=[^"])',contains:[p,_]},{begin:"'",end:"'",illegal:/\n/,contains:[s.BACKSLASH_ESCAPE]},{begin:'"',end:'"',illegal:/\n/,contains:[s.BACKSLASH_ESCAPE,p,_]}]};_.contains.push(g);const 
E={className:"meta",begin:"@(?:file|property|field|get|set|receiver|param|setparam|delegate)\\s*:(?:\\s*"+s.UNDERSCORE_IDENT_RE+")?"},f={className:"meta",begin:"@"+s.UNDERSCORE_IDENT_RE,contains:[{begin:/\(/,end:/\)/,contains:[s.inherit(g,{className:"string"}),"self"]}]},S=i,C=s.COMMENT("/\\*","\\*/",{contains:[s.C_BLOCK_COMMENT_MODE]}),h={variants:[{className:"type",begin:s.UNDERSCORE_IDENT_RE},{begin:/\(/,end:/\)/,contains:[]}]},T=h;return T.variants[1].contains=[h],h.variants[1].contains=[T],{name:"Kotlin",aliases:["kt","kts"],keywords:l,contains:[s.COMMENT("/\\*\\*","\\*/",{relevance:0,contains:[{className:"doctag",begin:"@[A-Za-z]+"}]}),s.C_LINE_COMMENT_MODE,C,c,d,E,f,{className:"function",beginKeywords:"fun",end:"[(]|$",returnBegin:!0,excludeEnd:!0,keywords:l,relevance:5,contains:[{begin:s.UNDERSCORE_IDENT_RE+"\\s*\\(",returnBegin:!0,relevance:0,contains:[s.UNDERSCORE_TITLE_MODE]},{className:"type",begin://,keywords:"reified",relevance:0},{className:"params",begin:/\(/,end:/\)/,endsParent:!0,keywords:l,relevance:0,contains:[{begin:/:/,end:/[=,\/]/,endsWithParent:!0,contains:[h,s.C_LINE_COMMENT_MODE,C],relevance:0},s.C_LINE_COMMENT_MODE,C,E,f,g,s.C_NUMBER_MODE]},C]},{begin:[/class|interface|trait/,/\s+/,s.UNDERSCORE_IDENT_RE],beginScope:{3:"title.class"},keywords:"class interface trait",end:/[:\{(]|$/,excludeEnd:!0,illegal:"extends implements",contains:[{beginKeywords:"public protected internal private constructor"},s.UNDERSCORE_TITLE_MODE,{className:"type",begin://,excludeBegin:!0,excludeEnd:!0,relevance:0},{className:"type",begin:/[,:]\s*/,end:/[<\(,){\s]|$/,excludeBegin:!0,returnEnd:!0},E,f]},g,{className:"meta",begin:"^#!/usr/bin/env",end:"$",illegal:` -`},S]}}return O_=o,O_}var A_,wT;function wAe(){if(wT)return A_;wT=1;function t(e){const n="[a-zA-Z_][\\w.]*",i="<\\?(lasso(script)?|=)",o="\\]|\\?>",s={$pattern:n+"|&[lg]t;",literal:"true false none minimal full all void and or not bw nbw ew new cn ncn lt lte gt gte eq neq rx nrx ft",built_in:"array date decimal duration integer map pair string tag xml null boolean bytes keyword list locale queue set stack staticarray local var variable global data self inherited currentcapture givenblock",keyword:"cache database_names database_schemanames database_tablenames define_tag define_type email_batch encode_set html_comment handle handle_error header if inline iterate ljax_target link link_currentaction link_currentgroup link_currentrecord link_detail link_firstgroup link_firstrecord link_lastgroup link_lastrecord link_nextgroup link_nextrecord link_prevgroup link_prevrecord log loop namespace_using output_none portal private protect records referer referrer repeating resultset rows search_args search_arguments select sort_args sort_arguments thread_atomic value_list while abort case else fail_if fail_ifnot fail if_empty if_false if_null if_true loop_abort loop_continue loop_count params params_up return return_value run_children soap_definetag soap_lastrequest soap_lastresponse tag_name ascending average by define descending do equals frozen group handle_failure import in into join let match max min on order parent protected provide public require returnhome skip split_thread sum take thread to trait type where with yield 
yieldhome"},l=e.COMMENT("",{relevance:0}),c={className:"meta",begin:"\\[noprocess\\]",starts:{end:"\\[/noprocess\\]",returnEnd:!0,contains:[l]}},d={className:"meta",begin:"\\[/noprocess|"+i},_={className:"symbol",begin:"'"+n+"'"},p=[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.inherit(e.C_NUMBER_MODE,{begin:e.C_NUMBER_RE+"|(-?infinity|NaN)\\b"}),e.inherit(e.APOS_STRING_MODE,{illegal:null}),e.inherit(e.QUOTE_STRING_MODE,{illegal:null}),{className:"string",begin:"`",end:"`"},{variants:[{begin:"[#$]"+n},{begin:"#",end:"\\d+",illegal:"\\W"}]},{className:"type",begin:"::\\s*",end:n,illegal:"\\W"},{className:"params",variants:[{begin:"-(?!infinity)"+n,relevance:0},{begin:"(\\.\\.\\.)"}]},{begin:/(->|\.)\s*/,relevance:0,contains:[_]},{className:"class",beginKeywords:"define",returnEnd:!0,end:"\\(|=>",contains:[e.inherit(e.TITLE_MODE,{begin:n+"(=(?!>))?|[-+*/%](?!>)"})]}];return{name:"Lasso",aliases:["ls","lassoscript"],case_insensitive:!0,keywords:s,contains:[{className:"meta",begin:o,relevance:0,starts:{end:"\\[|"+i,returnEnd:!0,relevance:0,contains:[l]}},c,d,{className:"meta",begin:"\\[no_square_brackets",starts:{end:"\\[/no_square_brackets\\]",keywords:s,contains:[{className:"meta",begin:o,relevance:0,starts:{end:"\\[noprocess\\]|"+i,returnEnd:!0,contains:[l]}},c,d].concat(p)}},{className:"meta",begin:"\\[",relevance:0},{className:"meta",begin:"^#!",end:"lasso9$",relevance:10}].concat(p)}}return A_=t,A_}var y_,MT;function MAe(){if(MT)return y_;MT=1;function t(e){const i=e.regex.either(...["(?:NeedsTeXFormat|RequirePackage|GetIdInfo)","Provides(?:Expl)?(?:Package|Class|File)","(?:DeclareOption|ProcessOptions)","(?:documentclass|usepackage|input|include)","makeat(?:letter|other)","ExplSyntax(?:On|Off)","(?:new|renew|provide)?command","(?:re)newenvironment","(?:New|Renew|Provide|Declare)(?:Expandable)?DocumentCommand","(?:New|Renew|Provide|Declare)DocumentEnvironment","(?:(?:e|g|x)?def|let)","(?:begin|end)","(?:part|chapter|(?:sub){0,2}section|(?:sub)?paragraph)","caption","(?:label|(?:eq|page|name)?ref|(?:paren|foot|super)?cite)","(?:alpha|beta|[Gg]amma|[Dd]elta|(?:var)?epsilon|zeta|eta|[Tt]heta|vartheta)","(?:iota|(?:var)?kappa|[Ll]ambda|mu|nu|[Xx]i|[Pp]i|varpi|(?:var)rho)","(?:[Ss]igma|varsigma|tau|[Uu]psilon|[Pp]hi|varphi|chi|[Pp]si|[Oo]mega)","(?:frac|sum|prod|lim|infty|times|sqrt|leq|geq|left|right|middle|[bB]igg?)","(?:[lr]angle|q?quad|[lcvdi]?dots|d?dot|hat|tilde|bar)"].map(K=>K+"(?![a-zA-Z@:_])")),o=new RegExp(["(?:__)?[a-zA-Z]{2,}_[a-zA-Z](?:_?[a-zA-Z])+:[a-zA-Z]*","[lgc]__?[a-zA-Z](?:_?[a-zA-Z])*_[a-zA-Z]{2,}","[qs]__?[a-zA-Z](?:_?[a-zA-Z])+","use(?:_i)?:[a-zA-Z]*","(?:else|fi|or):","(?:if|cs|exp):w","(?:hbox|vbox):n","::[a-zA-Z]_unbraced","::[a-zA-Z:]"].map(K=>K+"(?![a-zA-Z:_])").join("|")),s=[{begin:/[a-zA-Z@]+/},{begin:/[^a-zA-Z@]?/}],l=[{begin:/\^{6}[0-9a-f]{6}/},{begin:/\^{5}[0-9a-f]{5}/},{begin:/\^{4}[0-9a-f]{4}/},{begin:/\^{3}[0-9a-f]{3}/},{begin:/\^{2}[0-9a-f]{2}/},{begin:/\^{2}[\u0000-\u007f]/}],c={className:"keyword",begin:/\\/,relevance:0,contains:[{endsParent:!0,begin:i},{endsParent:!0,begin:o},{endsParent:!0,variants:l},{endsParent:!0,relevance:0,variants:s}]},d={className:"params",relevance:0,begin:/#+\d?/},_={variants:l},p={className:"built_in",relevance:0,begin:/[$&^_]/},g={className:"meta",begin:/% 
?!(T[eE]X|tex|BIB|bib)/,end:"$",relevance:10},E=e.COMMENT("%","$",{relevance:0}),f=[c,d,_,p,g,E],S={begin:/\{/,end:/\}/,relevance:0,contains:["self",...f]},C=e.inherit(S,{relevance:0,endsParent:!0,contains:[S,...f]}),h={begin:/\[/,end:/\]/,endsParent:!0,relevance:0,contains:[S,...f]},T={begin:/\s+/,relevance:0},N=[C],y=[h],x=function(K,Ee){return{contains:[T],starts:{relevance:0,contains:K,starts:Ee}}},P=function(K,Ee){return{begin:"\\\\"+K+"(?![a-zA-Z@:_])",keywords:{$pattern:/\\[a-zA-Z]+/,keyword:"\\"+K},relevance:0,contains:[T],starts:Ee}},D=function(K,Ee){return e.inherit({begin:"\\\\begin(?=[ ]*(\\r?\\n[ ]*)?\\{"+K+"\\})",keywords:{$pattern:/\\[a-zA-Z]+/,keyword:"\\begin"},relevance:0},x(N,Ee))},k=(K="string")=>e.END_SAME_AS_BEGIN({className:K,begin:/(.|\r?\n)/,end:/(.|\r?\n)/,excludeBegin:!0,excludeEnd:!0,endsParent:!0}),U=function(K){return{className:"string",end:"(?=\\\\end\\{"+K+"\\})"}},W=(K="string")=>({relevance:0,begin:/\{/,starts:{endsParent:!0,contains:[{className:K,end:/(?=\})/,endsParent:!0,contains:[{begin:/\{/,end:/\}/,relevance:0,contains:["self"]}]}]}}),z=[...["verb","lstinline"].map(K=>P(K,{contains:[k()]})),P("mint",x(N,{contains:[k()]})),P("mintinline",x(N,{contains:[W(),k()]})),P("url",{contains:[W("link"),W("link")]}),P("hyperref",{contains:[W("link")]}),P("href",x(y,{contains:[W("link")]})),...[].concat(...["","\\*"].map(K=>[D("verbatim"+K,U("verbatim"+K)),D("filecontents"+K,x(N,U("filecontents"+K))),...["","B","L"].map(Ee=>D(Ee+"Verbatim"+K,x(y,U(Ee+"Verbatim"+K))))])),D("minted",x(y,x(N,U("minted"))))];return{name:"LaTeX",aliases:["tex"],contains:[...z,...f]}}return y_=t,y_}var I_,LT;function LAe(){if(LT)return I_;LT=1;function t(e){return{name:"LDIF",contains:[{className:"attribute",match:"^dn(?=:)",relevance:10},{className:"attribute",match:"^\\w+(?=:)"},{className:"literal",match:"^-"},e.HASH_COMMENT_MODE]}}return I_=t,I_}var D_,PT;function PAe(){if(PT)return D_;PT=1;function t(e){return{name:"Leaf",contains:[{className:"function",begin:"#+[A-Za-z_0-9]*\\(",end:/ \{/,returnBegin:!0,excludeEnd:!0,contains:[{className:"keyword",begin:"#+"},{className:"title",begin:"[A-Za-z_][A-Za-z_0-9]*"},{className:"params",begin:"\\(",end:"\\)",endsParent:!0,contains:[{className:"string",begin:'"',end:'"'},{className:"variable",begin:"[A-Za-z_][A-Za-z_0-9]*"}]}]}]}}return D_=t,D_}var x_,kT;function kAe(){if(kT)return x_;kT=1;const 
t=d=>({IMPORTANT:{scope:"meta",begin:"!important"},BLOCK_COMMENT:d.C_BLOCK_COMMENT_MODE,HEXCOLOR:{scope:"number",begin:/#(([0-9a-fA-F]{3,4})|(([0-9a-fA-F]{2}){3,4}))\b/},FUNCTION_DISPATCH:{className:"built_in",begin:/[\w-]+(?=\()/},ATTRIBUTE_SELECTOR_MODE:{scope:"selector-attr",begin:/\[/,end:/\]/,illegal:"$",contains:[d.APOS_STRING_MODE,d.QUOTE_STRING_MODE]},CSS_NUMBER_MODE:{scope:"number",begin:d.NUMBER_RE+"(%|em|ex|ch|rem|vw|vh|vmin|vmax|cm|mm|in|pt|pc|px|deg|grad|rad|turn|s|ms|Hz|kHz|dpi|dpcm|dppx)?",relevance:0},CSS_VARIABLE:{className:"attr",begin:/--[A-Za-z][A-Za-z0-9_-]*/}}),e=["a","abbr","address","article","aside","audio","b","blockquote","body","button","canvas","caption","cite","code","dd","del","details","dfn","div","dl","dt","em","fieldset","figcaption","figure","footer","form","h1","h2","h3","h4","h5","h6","header","hgroup","html","i","iframe","img","input","ins","kbd","label","legend","li","main","mark","menu","nav","object","ol","p","q","quote","samp","section","span","strong","summary","sup","table","tbody","td","textarea","tfoot","th","thead","time","tr","ul","var","video"],n=["any-hover","any-pointer","aspect-ratio","color","color-gamut","color-index","device-aspect-ratio","device-height","device-width","display-mode","forced-colors","grid","height","hover","inverted-colors","monochrome","orientation","overflow-block","overflow-inline","pointer","prefers-color-scheme","prefers-contrast","prefers-reduced-motion","prefers-reduced-transparency","resolution","scan","scripting","update","width","min-width","max-width","min-height","max-height"],i=["active","any-link","blank","checked","current","default","defined","dir","disabled","drop","empty","enabled","first","first-child","first-of-type","fullscreen","future","focus","focus-visible","focus-within","has","host","host-context","hover","indeterminate","in-range","invalid","is","lang","last-child","last-of-type","left","link","local-link","not","nth-child","nth-col","nth-last-child","nth-last-col","nth-last-of-type","nth-of-type","only-child","only-of-type","optional","out-of-range","past","placeholder-shown","read-only","read-write","required","right","root","scope","target","target-within","user-invalid","valid","visited","where"],o=["after","backdrop","before","cue","cue-region","first-letter","first-line","grammar-error","marker","part","placeholder","selection","slotted","spelling-error"],s=["align-content","align-items","align-self","all","animation","animation-delay","animation-direction","animation-duration","animation-fill-mode","animation-iteration-count","animation-name","animation-play-state","animation-timing-function","backface-visibility","background","background-attachment","background-blend-mode","background-clip","background-color","background-image","background-origin","background-position","background-repeat","background-size","block-size","border","border-block","border-block-color","border-block-end","border-block-end-color","border-block-end-style","border-block-end-width","border-block-start","border-block-start-color","border-block-start-style","border-block-start-width","border-block-style","border-block-width","border-bottom","border-bottom-color","border-bottom-left-radius","border-bottom-right-radius","border-bottom-style","border-bottom-width","border-collapse","border-color","border-image","border-image-outset","border-image-repeat","border-image-slice","border-image-source","border-image-width","border-inline","border-inline-color","border-inline-end","border-inline-end-color","border-inline-e
nd-style","border-inline-end-width","border-inline-start","border-inline-start-color","border-inline-start-style","border-inline-start-width","border-inline-style","border-inline-width","border-left","border-left-color","border-left-style","border-left-width","border-radius","border-right","border-right-color","border-right-style","border-right-width","border-spacing","border-style","border-top","border-top-color","border-top-left-radius","border-top-right-radius","border-top-style","border-top-width","border-width","bottom","box-decoration-break","box-shadow","box-sizing","break-after","break-before","break-inside","caption-side","caret-color","clear","clip","clip-path","clip-rule","color","column-count","column-fill","column-gap","column-rule","column-rule-color","column-rule-style","column-rule-width","column-span","column-width","columns","contain","content","content-visibility","counter-increment","counter-reset","cue","cue-after","cue-before","cursor","direction","display","empty-cells","filter","flex","flex-basis","flex-direction","flex-flow","flex-grow","flex-shrink","flex-wrap","float","flow","font","font-display","font-family","font-feature-settings","font-kerning","font-language-override","font-size","font-size-adjust","font-smoothing","font-stretch","font-style","font-synthesis","font-variant","font-variant-caps","font-variant-east-asian","font-variant-ligatures","font-variant-numeric","font-variant-position","font-variation-settings","font-weight","gap","glyph-orientation-vertical","grid","grid-area","grid-auto-columns","grid-auto-flow","grid-auto-rows","grid-column","grid-column-end","grid-column-start","grid-gap","grid-row","grid-row-end","grid-row-start","grid-template","grid-template-areas","grid-template-columns","grid-template-rows","hanging-punctuation","height","hyphens","icon","image-orientation","image-rendering","image-resolution","ime-mode","inline-size","isolation","justify-content","left","letter-spacing","line-break","line-height","list-style","list-style-image","list-style-position","list-style-type","margin","margin-block","margin-block-end","margin-block-start","margin-bottom","margin-inline","margin-inline-end","margin-inline-start","margin-left","margin-right","margin-top","marks","mask","mask-border","mask-border-mode","mask-border-outset","mask-border-repeat","mask-border-slice","mask-border-source","mask-border-width","mask-clip","mask-composite","mask-image","mask-mode","mask-origin","mask-position","mask-repeat","mask-size","mask-type","max-block-size","max-height","max-inline-size","max-width","min-block-size","min-height","min-inline-size","min-width","mix-blend-mode","nav-down","nav-index","nav-left","nav-right","nav-up","none","normal","object-fit","object-position","opacity","order","orphans","outline","outline-color","outline-offset","outline-style","outline-width","overflow","overflow-wrap","overflow-x","overflow-y","padding","padding-block","padding-block-end","padding-block-start","padding-bottom","padding-inline","padding-inline-end","padding-inline-start","padding-left","padding-right","padding-top","page-break-after","page-break-before","page-break-inside","pause","pause-after","pause-before","perspective","perspective-origin","pointer-events","position","quotes","resize","rest","rest-after","rest-before","right","row-gap","scroll-margin","scroll-margin-block","scroll-margin-block-end","scroll-margin-block-start","scroll-margin-bottom","scroll-margin-inline","scroll-margin-inline-end","scroll-margin-inline-start","scroll-margin-left","scroll
-margin-right","scroll-margin-top","scroll-padding","scroll-padding-block","scroll-padding-block-end","scroll-padding-block-start","scroll-padding-bottom","scroll-padding-inline","scroll-padding-inline-end","scroll-padding-inline-start","scroll-padding-left","scroll-padding-right","scroll-padding-top","scroll-snap-align","scroll-snap-stop","scroll-snap-type","scrollbar-color","scrollbar-gutter","scrollbar-width","shape-image-threshold","shape-margin","shape-outside","speak","speak-as","src","tab-size","table-layout","text-align","text-align-all","text-align-last","text-combine-upright","text-decoration","text-decoration-color","text-decoration-line","text-decoration-style","text-emphasis","text-emphasis-color","text-emphasis-position","text-emphasis-style","text-indent","text-justify","text-orientation","text-overflow","text-rendering","text-shadow","text-transform","text-underline-position","top","transform","transform-box","transform-origin","transform-style","transition","transition-delay","transition-duration","transition-property","transition-timing-function","unicode-bidi","vertical-align","visibility","voice-balance","voice-duration","voice-family","voice-pitch","voice-range","voice-rate","voice-stress","voice-volume","white-space","widows","width","will-change","word-break","word-spacing","word-wrap","writing-mode","z-index"].reverse(),l=i.concat(o);function c(d){const _=t(d),p=l,g="and or not only",E="[\\w-]+",f="("+E+"|@\\{"+E+"\\})",S=[],C=[],h=function(K){return{className:"string",begin:"~?"+K+".*?"+K}},T=function(K,Ee,oe){return{className:K,begin:Ee,relevance:oe}},N={$pattern:/[a-z-]+/,keyword:g,attribute:n.join(" ")},y={begin:"\\(",end:"\\)",contains:C,keywords:N,relevance:0};C.push(d.C_LINE_COMMENT_MODE,d.C_BLOCK_COMMENT_MODE,h("'"),h('"'),_.CSS_NUMBER_MODE,{begin:"(url|data-uri)\\(",starts:{className:"string",end:"[\\)\\n]",excludeEnd:!0}},_.HEXCOLOR,y,T("variable","@@?"+E,10),T("variable","@\\{"+E+"\\}"),T("built_in","~?`[^`]*?`"),{className:"attribute",begin:E+"\\s*:",end:":",returnBegin:!0,excludeEnd:!0},_.IMPORTANT,{beginKeywords:"and not"},_.FUNCTION_DISPATCH);const x=C.concat({begin:/\{/,end:/\}/,contains:S}),P={beginKeywords:"when",endsWithParent:!0,contains:[{beginKeywords:"and not"}].concat(C)},D={begin:f+"\\s*:",returnBegin:!0,end:/[;}]/,relevance:0,contains:[{begin:/-(webkit|moz|ms|o)-/},_.CSS_VARIABLE,{className:"attribute",begin:"\\b("+s.join("|")+")\\b",end:/(?=:)/,starts:{endsWithParent:!0,illegal:"[<=$]",relevance:0,contains:C}}]},k={className:"keyword",begin:"@(import|media|charset|font-face|(-[a-z]+-)?keyframes|supports|document|namespace|page|viewport|host)\\b",starts:{end:"[;{}]",keywords:N,returnEnd:!0,contains:C,relevance:0}},U={className:"variable",variants:[{begin:"@"+E+"\\s*:",relevance:15},{begin:"@"+E}],starts:{end:"[;}]",returnEnd:!0,contains:x}},W={variants:[{begin:"[\\.#:&\\[>]",end:"[;{}]"},{begin:f,end:/\{/}],returnBegin:!0,returnEnd:!0,illegal:`[<='$"]`,relevance:0,contains:[d.C_LINE_COMMENT_MODE,d.C_BLOCK_COMMENT_MODE,P,T("keyword","all\\b"),T("variable","@\\{"+E+"\\}"),{begin:"\\b("+e.join("|")+")\\b",className:"selector-tag"},_.CSS_NUMBER_MODE,T("selector-tag",f,0),T("selector-id","#"+f),T("selector-class","\\."+f,0),T("selector-tag","&",0),_.ATTRIBUTE_SELECTOR_MODE,{className:"selector-pseudo",begin:":("+i.join("|")+")"},{className:"selector-pseudo",begin:":(:)?("+o.join("|")+")"},{begin:/\(/,end:/\)/,relevance:0,contains:x},{begin:"!important"},_.FUNCTION_DISPATCH]},z={begin:E+`:(:)?(${p.join("|")})`,returnBegin:!0,contains:[W]};return 
S.push(d.C_LINE_COMMENT_MODE,d.C_BLOCK_COMMENT_MODE,k,U,z,D,W,P,_.FUNCTION_DISPATCH),{name:"Less",case_insensitive:!0,illegal:`[=>'/<($"]`,contains:S}}return x_=c,x_}var w_,UT;function UAe(){if(UT)return w_;UT=1;function t(e){const n="[a-zA-Z_\\-+\\*\\/<=>&#][a-zA-Z0-9_\\-+*\\/<=>&#!]*",i="\\|[^]*?\\|",o="(-|\\+)?\\d+(\\.\\d+|\\/\\d+)?((d|e|f|l|s|D|E|F|L|S)(\\+|-)?\\d+)?",s={className:"literal",begin:"\\b(t{1}|nil)\\b"},l={className:"number",variants:[{begin:o,relevance:0},{begin:"#(b|B)[0-1]+(/[0-1]+)?"},{begin:"#(o|O)[0-7]+(/[0-7]+)?"},{begin:"#(x|X)[0-9a-fA-F]+(/[0-9a-fA-F]+)?"},{begin:"#(c|C)\\("+o+" +"+o,end:"\\)"}]},c=e.inherit(e.QUOTE_STRING_MODE,{illegal:null}),d=e.COMMENT(";","$",{relevance:0}),_={begin:"\\*",end:"\\*"},p={className:"symbol",begin:"[:&]"+n},g={begin:n,relevance:0},E={begin:i},S={contains:[l,c,_,p,{begin:"\\(",end:"\\)",contains:["self",s,c,l,g]},g],variants:[{begin:"['`]\\(",end:"\\)"},{begin:"\\(quote ",end:"\\)",keywords:{name:"quote"}},{begin:"'"+i}]},C={variants:[{begin:"'"+n},{begin:"#'"+n+"(::"+n+")*"}]},h={begin:"\\(\\s*",end:"\\)"},T={endsWithParent:!0,relevance:0};return h.contains=[{className:"name",variants:[{begin:n,relevance:0},{begin:i}]},T],T.contains=[S,C,h,s,l,c,d,_,p,E,g],{name:"Lisp",illegal:/\S/,contains:[l,e.SHEBANG(),s,c,d,S,C,h,g]}}return w_=t,w_}var M_,FT;function FAe(){if(FT)return M_;FT=1;function t(e){const n={className:"variable",variants:[{begin:"\\b([gtps][A-Z]{1}[a-zA-Z0-9]*)(\\[.+\\])?(?:\\s*?)"},{begin:"\\$_[A-Z]+"}],relevance:0},i=[e.C_BLOCK_COMMENT_MODE,e.HASH_COMMENT_MODE,e.COMMENT("--","$"),e.COMMENT("[^:]//","$")],o=e.inherit(e.TITLE_MODE,{variants:[{begin:"\\b_*rig[A-Z][A-Za-z0-9_\\-]*"},{begin:"\\b_[a-z0-9\\-]+"}]}),s=e.inherit(e.TITLE_MODE,{begin:"\\b([A-Za-z0-9_\\-]+)\\b"});return{name:"LiveCode",case_insensitive:!1,keywords:{keyword:"$_COOKIE $_FILES $_GET $_GET_BINARY $_GET_RAW $_POST $_POST_BINARY $_POST_RAW $_SESSION $_SERVER codepoint codepoints segment segments codeunit codeunits sentence sentences trueWord trueWords paragraph after byte bytes english the until http forever descending using line real8 with seventh for stdout finally element word words fourth before black ninth sixth characters chars stderr uInt1 uInt1s uInt2 uInt2s stdin string lines relative rel any fifth items from middle mid at else of catch then third it file milliseconds seconds second secs sec int1 int1s int4 int4s internet int2 int2s normal text item last long detailed effective uInt4 uInt4s repeat end repeat URL in try into switch to words https token binfile each tenth as ticks tick system real4 by dateItems without char character ascending eighth whole dateTime numeric short first ftp integer abbreviated abbr abbrev private case while if div mod wrap and or bitAnd bitNot bitOr bitXor among not in a an within contains ends with begins the keys of keys",literal:"SIX TEN FORMFEED NINE ZERO NONE SPACE FOUR FALSE COLON CRLF PI COMMA ENDOFFILE EOF EIGHT FIVE QUOTE EMPTY ONE TRUE RETURN CR LINEFEED RIGHT BACKSLASH NULL SEVEN TAB THREE TWO six ten formfeed nine zero none space four false colon crlf pi comma endoffile eof eight five quote empty one true return cr linefeed right backslash null seven tab three two RIVERSION RISTATE FILE_READ_MODE FILE_WRITE_MODE FILE_WRITE_MODE DIR_WRITE_MODE FILE_READ_UMASK FILE_WRITE_UMASK DIR_READ_UMASK DIR_WRITE_UMASK",built_in:"put abs acos aliasReference annuity arrayDecode arrayEncode asin atan atan2 average avg avgDev base64Decode base64Encode baseConvert binaryDecode binaryEncode byteOffset byteToNum 
cachedURL cachedURLs charToNum cipherNames codepointOffset codepointProperty codepointToNum codeunitOffset commandNames compound compress constantNames cos date dateFormat decompress difference directories diskSpace DNSServers exp exp1 exp2 exp10 extents files flushEvents folders format functionNames geometricMean global globals hasMemory harmonicMean hostAddress hostAddressToName hostName hostNameToAddress isNumber ISOToMac itemOffset keys len length libURLErrorData libUrlFormData libURLftpCommand libURLLastHTTPHeaders libURLLastRHHeaders libUrlMultipartFormAddPart libUrlMultipartFormData libURLVersion lineOffset ln ln1 localNames log log2 log10 longFilePath lower macToISO matchChunk matchText matrixMultiply max md5Digest median merge messageAuthenticationCode messageDigest millisec millisecs millisecond milliseconds min monthNames nativeCharToNum normalizeText num number numToByte numToChar numToCodepoint numToNativeChar offset open openfiles openProcesses openProcessIDs openSockets paragraphOffset paramCount param params peerAddress pendingMessages platform popStdDev populationStandardDeviation populationVariance popVariance processID random randomBytes replaceText result revCreateXMLTree revCreateXMLTreeFromFile revCurrentRecord revCurrentRecordIsFirst revCurrentRecordIsLast revDatabaseColumnCount revDatabaseColumnIsNull revDatabaseColumnLengths revDatabaseColumnNames revDatabaseColumnNamed revDatabaseColumnNumbered revDatabaseColumnTypes revDatabaseConnectResult revDatabaseCursors revDatabaseID revDatabaseTableNames revDatabaseType revDataFromQuery revdb_closeCursor revdb_columnbynumber revdb_columncount revdb_columnisnull revdb_columnlengths revdb_columnnames revdb_columntypes revdb_commit revdb_connect revdb_connections revdb_connectionerr revdb_currentrecord revdb_cursorconnection revdb_cursorerr revdb_cursors revdb_dbtype revdb_disconnect revdb_execute revdb_iseof revdb_isbof revdb_movefirst revdb_movelast revdb_movenext revdb_moveprev revdb_query revdb_querylist revdb_recordcount revdb_rollback revdb_tablenames revGetDatabaseDriverPath revNumberOfRecords revOpenDatabase revOpenDatabases revQueryDatabase revQueryDatabaseBlob revQueryResult revQueryIsAtStart revQueryIsAtEnd revUnixFromMacPath revXMLAttribute revXMLAttributes revXMLAttributeValues revXMLChildContents revXMLChildNames revXMLCreateTreeFromFileWithNamespaces revXMLCreateTreeWithNamespaces revXMLDataFromXPathQuery revXMLEvaluateXPath revXMLFirstChild revXMLMatchingNode revXMLNextSibling revXMLNodeContents revXMLNumberOfChildren revXMLParent revXMLPreviousSibling revXMLRootNode revXMLRPC_CreateRequest revXMLRPC_Documents revXMLRPC_Error revXMLRPC_GetHost revXMLRPC_GetMethod revXMLRPC_GetParam revXMLText revXMLRPC_Execute revXMLRPC_GetParamCount revXMLRPC_GetParamNode revXMLRPC_GetParamType revXMLRPC_GetPath revXMLRPC_GetPort revXMLRPC_GetProtocol revXMLRPC_GetRequest revXMLRPC_GetResponse revXMLRPC_GetSocket revXMLTree revXMLTrees revXMLValidateDTD revZipDescribeItem revZipEnumerateItems revZipOpenArchives round sampVariance sec secs seconds sentenceOffset sha1Digest shell shortFilePath sin specialFolderPath sqrt standardDeviation statRound stdDev sum sysError systemVersion tan tempName textDecode textEncode tick ticks time to tokenOffset toLower toUpper transpose truewordOffset trunc uniDecode uniEncode upper URLDecode URLEncode URLStatus uuid value variableNames variance version waitDepth weekdayNames wordOffset xsltApplyStylesheet xsltApplyStylesheetFromFile xsltLoadStylesheet xsltLoadStylesheetFromFile add breakpoint 
cancel clear local variable file word line folder directory URL close socket process combine constant convert create new alias folder directory decrypt delete variable word line folder directory URL dispatch divide do encrypt filter get include intersect kill libURLDownloadToFile libURLFollowHttpRedirects libURLftpUpload libURLftpUploadFile libURLresetAll libUrlSetAuthCallback libURLSetDriver libURLSetCustomHTTPHeaders libUrlSetExpect100 libURLSetFTPListCommand libURLSetFTPMode libURLSetFTPStopTime libURLSetStatusCallback load extension loadedExtensions multiply socket prepare process post seek rel relative read from process rename replace require resetAll resolve revAddXMLNode revAppendXML revCloseCursor revCloseDatabase revCommitDatabase revCopyFile revCopyFolder revCopyXMLNode revDeleteFolder revDeleteXMLNode revDeleteAllXMLTrees revDeleteXMLTree revExecuteSQL revGoURL revInsertXMLNode revMoveFolder revMoveToFirstRecord revMoveToLastRecord revMoveToNextRecord revMoveToPreviousRecord revMoveToRecord revMoveXMLNode revPutIntoXMLNode revRollBackDatabase revSetDatabaseDriverPath revSetXMLAttribute revXMLRPC_AddParam revXMLRPC_DeleteAllDocuments revXMLAddDTD revXMLRPC_Free revXMLRPC_FreeAll revXMLRPC_DeleteDocument revXMLRPC_DeleteParam revXMLRPC_SetHost revXMLRPC_SetMethod revXMLRPC_SetPort revXMLRPC_SetProtocol revXMLRPC_SetSocket revZipAddItemWithData revZipAddItemWithFile revZipAddUncompressedItemWithData revZipAddUncompressedItemWithFile revZipCancel revZipCloseArchive revZipDeleteItem revZipExtractItemToFile revZipExtractItemToVariable revZipSetProgressCallback revZipRenameItem revZipReplaceItemWithData revZipReplaceItemWithFile revZipOpenArchive send set sort split start stop subtract symmetric union unload vectorDotProduct wait write"},contains:[n,{className:"keyword",begin:"\\bend\\sif\\b"},{className:"function",beginKeywords:"function",end:"$",contains:[n,s,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,e.BINARY_NUMBER_MODE,e.C_NUMBER_MODE,o]},{className:"function",begin:"\\bend\\s+",end:"$",keywords:"end",contains:[s,o],relevance:0},{beginKeywords:"command on",end:"$",contains:[n,s,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,e.BINARY_NUMBER_MODE,e.C_NUMBER_MODE,o]},{className:"meta",variants:[{begin:"<\\?(rev|lc|livecode)",relevance:10},{begin:"<\\?"},{begin:"\\?>"}]},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,e.BINARY_NUMBER_MODE,e.C_NUMBER_MODE,o].concat(i),illegal:";$|^\\[|^=|&|\\{"}}return M_=t,M_}var L_,BT;function BAe(){if(BT)return L_;BT=1;const 
t=["as","in","of","if","for","while","finally","var","new","function","do","return","void","else","break","catch","instanceof","with","throw","case","default","try","switch","continue","typeof","delete","let","yield","const","class","debugger","async","await","static","import","from","export","extends"],e=["true","false","null","undefined","NaN","Infinity"],n=["Object","Function","Boolean","Symbol","Math","Date","Number","BigInt","String","RegExp","Array","Float32Array","Float64Array","Int8Array","Uint8Array","Uint8ClampedArray","Int16Array","Int32Array","Uint16Array","Uint32Array","BigInt64Array","BigUint64Array","Set","Map","WeakSet","WeakMap","ArrayBuffer","SharedArrayBuffer","Atomics","DataView","JSON","Promise","Generator","GeneratorFunction","AsyncFunction","Reflect","Proxy","Intl","WebAssembly"],i=["Error","EvalError","InternalError","RangeError","ReferenceError","SyntaxError","TypeError","URIError"],o=["setInterval","setTimeout","clearInterval","clearTimeout","require","exports","eval","isFinite","isNaN","parseFloat","parseInt","decodeURI","decodeURIComponent","encodeURI","encodeURIComponent","escape","unescape"],s=[].concat(o,n,i);function l(c){const d=["npm","print"],_=["yes","no","on","off","it","that","void"],p=["then","unless","until","loop","of","by","when","and","or","is","isnt","not","it","that","otherwise","from","to","til","fallthrough","case","enum","native","list","map","__hasProp","__extends","__slice","__bind","__indexOf"],g={keyword:t.concat(p),literal:e.concat(_),built_in:s.concat(d)},E="[A-Za-z$_](?:-[0-9A-Za-z$_]|[0-9A-Za-z$_])*",f=c.inherit(c.TITLE_MODE,{begin:E}),S={className:"subst",begin:/#\{/,end:/\}/,keywords:g},C={className:"subst",begin:/#[A-Za-z$_]/,end:/(?:-[0-9A-Za-z$_]|[0-9A-Za-z$_])*/,keywords:g},h=[c.BINARY_NUMBER_MODE,{className:"number",begin:"(\\b0[xX][a-fA-F0-9_]+)|(\\b\\d(\\d|_\\d)*(\\.(\\d(\\d|_\\d)*)?)?(_*[eE]([-+]\\d(_\\d|\\d)*)?)?[_a-z]*)",relevance:0,starts:{end:"(\\s*/)?",relevance:0}},{className:"string",variants:[{begin:/'''/,end:/'''/,contains:[c.BACKSLASH_ESCAPE]},{begin:/'/,end:/'/,contains:[c.BACKSLASH_ESCAPE]},{begin:/"""/,end:/"""/,contains:[c.BACKSLASH_ESCAPE,S,C]},{begin:/"/,end:/"/,contains:[c.BACKSLASH_ESCAPE,S,C]},{begin:/\\/,end:/(\s|$)/,excludeEnd:!0}]},{className:"regexp",variants:[{begin:"//",end:"//[gim]*",contains:[S,c.HASH_COMMENT_MODE]},{begin:/\/(?![ *])(\\.|[^\\\n])*?\/[gim]*(?=\W)/}]},{begin:"@"+E},{begin:"``",end:"``",excludeBegin:!0,excludeEnd:!0,subLanguage:"javascript"}];S.contains=h;const T={className:"params",begin:"\\(",returnBegin:!0,contains:[{begin:/\(/,end:/\)/,keywords:g,contains:["self"].concat(h)}]},N={begin:"(#=>|=>|\\|>>|-?->|!->)"},y={variants:[{match:[/class\s+/,E,/\s+extends\s+/,E]},{match:[/class\s+/,E]}],scope:{2:"title.class",4:"title.class.inherited"},keywords:g};return{name:"LiveScript",aliases:["ls"],keywords:g,illegal:/\/\*/,contains:h.concat([c.COMMENT("\\/\\*","\\*\\/"),c.HASH_COMMENT_MODE,N,{className:"function",contains:[f,T],returnBegin:!0,variants:[{begin:"("+E+"\\s*(?:=|:=)\\s*)?(\\(.*\\)\\s*)?\\B->\\*?",end:"->\\*?"},{begin:"("+E+"\\s*(?:=|:=)\\s*)?!?(\\(.*\\)\\s*)?\\B[-~]{1,2}>\\*?",end:"[-~]{1,2}>\\*?"},{begin:"("+E+"\\s*(?:=|:=)\\s*)?(\\(.*\\)\\s*)?\\B!?[-~]{1,2}>\\*?",end:"!?[-~]{1,2}>\\*?"}]},y,{begin:E+":",end:":",returnBegin:!0,returnEnd:!0,relevance:0}])}}return L_=l,L_}var P_,GT;function GAe(){if(GT)return P_;GT=1;function t(e){const 
n=e.regex,i=/([-a-zA-Z$._][\w$.-]*)/,o={className:"type",begin:/\bi\d+(?=\s|\b)/},s={className:"operator",relevance:0,begin:/=/},l={className:"punctuation",relevance:0,begin:/,/},c={className:"number",variants:[{begin:/[su]?0[xX][KMLHR]?[a-fA-F0-9]+/},{begin:/[-+]?\d+(?:[.]\d+)?(?:[eE][-+]?\d+(?:[.]\d+)?)?/}],relevance:0},d={className:"symbol",variants:[{begin:/^\s*[a-z]+:/}],relevance:0},_={className:"variable",variants:[{begin:n.concat(/%/,i)},{begin:/%\d+/},{begin:/#\d+/}]},p={className:"title",variants:[{begin:n.concat(/@/,i)},{begin:/@\d+/},{begin:n.concat(/!/,i)},{begin:n.concat(/!\d+/,i)},{begin:/!\d+/}]};return{name:"LLVM IR",keywords:"begin end true false declare define global constant private linker_private internal available_externally linkonce linkonce_odr weak weak_odr appending dllimport dllexport common default hidden protected extern_weak external thread_local zeroinitializer undef null to tail target triple datalayout volatile nuw nsw nnan ninf nsz arcp fast exact inbounds align addrspace section alias module asm sideeffect gc dbg linker_private_weak attributes blockaddress initialexec localdynamic localexec prefix unnamed_addr ccc fastcc coldcc x86_stdcallcc x86_fastcallcc arm_apcscc arm_aapcscc arm_aapcs_vfpcc ptx_device ptx_kernel intel_ocl_bicc msp430_intrcc spir_func spir_kernel x86_64_sysvcc x86_64_win64cc x86_thiscallcc cc c signext zeroext inreg sret nounwind noreturn noalias nocapture byval nest readnone readonly inlinehint noinline alwaysinline optsize ssp sspreq noredzone noimplicitfloat naked builtin cold nobuiltin noduplicate nonlazybind optnone returns_twice sanitize_address sanitize_memory sanitize_thread sspstrong uwtable returned type opaque eq ne slt sgt sle sge ult ugt ule uge oeq one olt ogt ole oge ord uno ueq une x acq_rel acquire alignstack atomic catch cleanup filter inteldialect max min monotonic nand personality release seq_cst singlethread umax umin unordered xchg add fadd sub fsub mul fmul udiv sdiv fdiv urem srem frem shl lshr ashr and or xor icmp fcmp phi call trunc zext sext fptrunc fpext uitofp sitofp fptoui fptosi inttoptr ptrtoint bitcast addrspacecast select va_arg ret br switch invoke unwind unreachable indirectbr landingpad resume malloc alloca free load store getelementptr extractelement insertelement shufflevector getresult extractvalue insertvalue atomicrmw cmpxchg fence argmemonly double",contains:[o,e.COMMENT(/;\s*$/,null,{relevance:0}),e.COMMENT(/;/,/$/),{className:"string",begin:/"/,end:/"/,contains:[{className:"char.escape",match:/\\\d\d/}]},p,l,s,_,d,c]}}return P_=t,P_}var k_,YT;function YAe(){if(YT)return k_;YT=1;function t(e){const 
i={className:"string",begin:'"',end:'"',contains:[{className:"subst",begin:/\\[tn"\\]/}]},o={className:"number",relevance:0,begin:e.C_NUMBER_RE},s={className:"literal",variants:[{begin:"\\b(PI|TWO_PI|PI_BY_TWO|DEG_TO_RAD|RAD_TO_DEG|SQRT2)\\b"},{begin:"\\b(XP_ERROR_(EXPERIENCES_DISABLED|EXPERIENCE_(DISABLED|SUSPENDED)|INVALID_(EXPERIENCE|PARAMETERS)|KEY_NOT_FOUND|MATURITY_EXCEEDED|NONE|NOT_(FOUND|PERMITTED(_LAND)?)|NO_EXPERIENCE|QUOTA_EXCEEDED|RETRY_UPDATE|STORAGE_EXCEPTION|STORE_DISABLED|THROTTLED|UNKNOWN_ERROR)|JSON_APPEND|STATUS_(PHYSICS|ROTATE_[XYZ]|PHANTOM|SANDBOX|BLOCK_GRAB(_OBJECT)?|(DIE|RETURN)_AT_EDGE|CAST_SHADOWS|OK|MALFORMED_PARAMS|TYPE_MISMATCH|BOUNDS_ERROR|NOT_(FOUND|SUPPORTED)|INTERNAL_ERROR|WHITELIST_FAILED)|AGENT(_(BY_(LEGACY_|USER)NAME|FLYING|ATTACHMENTS|SCRIPTED|MOUSELOOK|SITTING|ON_OBJECT|AWAY|WALKING|IN_AIR|TYPING|CROUCHING|BUSY|ALWAYS_RUN|AUTOPILOT|LIST_(PARCEL(_OWNER)?|REGION)))?|CAMERA_(PITCH|DISTANCE|BEHINDNESS_(ANGLE|LAG)|(FOCUS|POSITION)(_(THRESHOLD|LOCKED|LAG))?|FOCUS_OFFSET|ACTIVE)|ANIM_ON|LOOP|REVERSE|PING_PONG|SMOOTH|ROTATE|SCALE|ALL_SIDES|LINK_(ROOT|SET|ALL_(OTHERS|CHILDREN)|THIS)|ACTIVE|PASS(IVE|_(ALWAYS|IF_NOT_HANDLED|NEVER))|SCRIPTED|CONTROL_(FWD|BACK|(ROT_)?(LEFT|RIGHT)|UP|DOWN|(ML_)?LBUTTON)|PERMISSION_(RETURN_OBJECTS|DEBIT|OVERRIDE_ANIMATIONS|SILENT_ESTATE_MANAGEMENT|TAKE_CONTROLS|TRIGGER_ANIMATION|ATTACH|CHANGE_LINKS|(CONTROL|TRACK)_CAMERA|TELEPORT)|INVENTORY_(TEXTURE|SOUND|OBJECT|SCRIPT|LANDMARK|CLOTHING|NOTECARD|BODYPART|ANIMATION|GESTURE|ALL|NONE)|CHANGED_(INVENTORY|COLOR|SHAPE|SCALE|TEXTURE|LINK|ALLOWED_DROP|OWNER|REGION(_START)?|TELEPORT|MEDIA)|OBJECT_(CLICK_ACTION|HOVER_HEIGHT|LAST_OWNER_ID|(PHYSICS|SERVER|STREAMING)_COST|UNKNOWN_DETAIL|CHARACTER_TIME|PHANTOM|PHYSICS|TEMP_(ATTACHED|ON_REZ)|NAME|DESC|POS|PRIM_(COUNT|EQUIVALENCE)|RETURN_(PARCEL(_OWNER)?|REGION)|REZZER_KEY|ROO?T|VELOCITY|OMEGA|OWNER|GROUP(_TAG)?|CREATOR|ATTACHED_(POINT|SLOTS_AVAILABLE)|RENDER_WEIGHT|(BODY_SHAPE|PATHFINDING)_TYPE|(RUNNING|TOTAL)_SCRIPT_COUNT|TOTAL_INVENTORY_COUNT|SCRIPT_(MEMORY|TIME))|TYPE_(INTEGER|FLOAT|STRING|KEY|VECTOR|ROTATION|INVALID)|(DEBUG|PUBLIC)_CHANNEL|ATTACH_(AVATAR_CENTER|CHEST|HEAD|BACK|PELVIS|MOUTH|CHIN|NECK|NOSE|BELLY|[LR](SHOULDER|HAND|FOOT|EAR|EYE|[UL](ARM|LEG)|HIP)|(LEFT|RIGHT)_PEC|HUD_(CENTER_[12]|TOP_(RIGHT|CENTER|LEFT)|BOTTOM(_(RIGHT|LEFT))?)|[LR]HAND_RING1|TAIL_(BASE|TIP)|[LR]WING|FACE_(JAW|[LR]EAR|[LR]EYE|TOUNGE)|GROIN|HIND_[LR]FOOT)|LAND_(LEVEL|RAISE|LOWER|SMOOTH|NOISE|REVERT)|DATA_(ONLINE|NAME|BORN|SIM_(POS|STATUS|RATING)|PAYINFO)|PAYMENT_INFO_(ON_FILE|USED)|REMOTE_DATA_(CHANNEL|REQUEST|REPLY)|PSYS_(PART_(BF_(ZERO|ONE(_MINUS_(DEST_COLOR|SOURCE_(ALPHA|COLOR)))?|DEST_COLOR|SOURCE_(ALPHA|COLOR))|BLEND_FUNC_(DEST|SOURCE)|FLAGS|(START|END)_(COLOR|ALPHA|SCALE|GLOW)|MAX_AGE|(RIBBON|WIND|INTERP_(COLOR|SCALE)|BOUNCE|FOLLOW_(SRC|VELOCITY)|TARGET_(POS|LINEAR)|EMISSIVE)_MASK)|SRC_(MAX_AGE|PATTERN|ANGLE_(BEGIN|END)|BURST_(RATE|PART_COUNT|RADIUS|SPEED_(MIN|MAX))|ACCEL|TEXTURE|TARGET_KEY|OMEGA|PATTERN_(DROP|EXPLODE|ANGLE(_CONE(_EMPTY)?)?)))|VEHICLE_(REFERENCE_FRAME|TYPE_(NONE|SLED|CAR|BOAT|AIRPLANE|BALLOON)|(LINEAR|ANGULAR)_(FRICTION_TIMESCALE|MOTOR_DIRECTION)|LINEAR_MOTOR_OFFSET|HOVER_(HEIGHT|EFFICIENCY|TIMESCALE)|BUOYANCY|(LINEAR|ANGULAR)_(DEFLECTION_(EFFICIENCY|TIMESCALE)|MOTOR_(DECAY_)?TIMESCALE)|VERTICAL_ATTRACTION_(EFFICIENCY|TIMESCALE)|BANKING_(EFFICIENCY|MIX|TIMESCALE)|FLAG_(NO_DEFLECTION_UP|LIMIT_(ROLL_ONLY|MOTOR_UP)|HOVER_((WATER|TERRAIN|UP)_ONLY|GLOBAL_HEIGHT)|MOUSELOOK_(STEER|BANK)|CAMERA_DECOUPLED))|PRIM_(ALLOW_UNSIT|ALPHA_MODE(_(BLEND|EMISSIVE|M
ASK|NONE))?|NORMAL|SPECULAR|TYPE(_(BOX|CYLINDER|PRISM|SPHERE|TORUS|TUBE|RING|SCULPT))?|HOLE_(DEFAULT|CIRCLE|SQUARE|TRIANGLE)|MATERIAL(_(STONE|METAL|GLASS|WOOD|FLESH|PLASTIC|RUBBER))?|SHINY_(NONE|LOW|MEDIUM|HIGH)|BUMP_(NONE|BRIGHT|DARK|WOOD|BARK|BRICKS|CHECKER|CONCRETE|TILE|STONE|DISKS|GRAVEL|BLOBS|SIDING|LARGETILE|STUCCO|SUCTION|WEAVE)|TEXGEN_(DEFAULT|PLANAR)|SCRIPTED_SIT_ONLY|SCULPT_(TYPE_(SPHERE|TORUS|PLANE|CYLINDER|MASK)|FLAG_(MIRROR|INVERT))|PHYSICS(_(SHAPE_(CONVEX|NONE|PRIM|TYPE)))?|(POS|ROT)_LOCAL|SLICE|TEXT|FLEXIBLE|POINT_LIGHT|TEMP_ON_REZ|PHANTOM|POSITION|SIT_TARGET|SIZE|ROTATION|TEXTURE|NAME|OMEGA|DESC|LINK_TARGET|COLOR|BUMP_SHINY|FULLBRIGHT|TEXGEN|GLOW|MEDIA_(ALT_IMAGE_ENABLE|CONTROLS|(CURRENT|HOME)_URL|AUTO_(LOOP|PLAY|SCALE|ZOOM)|FIRST_CLICK_INTERACT|(WIDTH|HEIGHT)_PIXELS|WHITELIST(_ENABLE)?|PERMS_(INTERACT|CONTROL)|PARAM_MAX|CONTROLS_(STANDARD|MINI)|PERM_(NONE|OWNER|GROUP|ANYONE)|MAX_(URL_LENGTH|WHITELIST_(SIZE|COUNT)|(WIDTH|HEIGHT)_PIXELS)))|MASK_(BASE|OWNER|GROUP|EVERYONE|NEXT)|PERM_(TRANSFER|MODIFY|COPY|MOVE|ALL)|PARCEL_(MEDIA_COMMAND_(STOP|PAUSE|PLAY|LOOP|TEXTURE|URL|TIME|AGENT|UNLOAD|AUTO_ALIGN|TYPE|SIZE|DESC|LOOP_SET)|FLAG_(ALLOW_(FLY|(GROUP_)?SCRIPTS|LANDMARK|TERRAFORM|DAMAGE|CREATE_(GROUP_)?OBJECTS)|USE_(ACCESS_(GROUP|LIST)|BAN_LIST|LAND_PASS_LIST)|LOCAL_SOUND_ONLY|RESTRICT_PUSHOBJECT|ALLOW_(GROUP|ALL)_OBJECT_ENTRY)|COUNT_(TOTAL|OWNER|GROUP|OTHER|SELECTED|TEMP)|DETAILS_(NAME|DESC|OWNER|GROUP|AREA|ID|SEE_AVATARS))|LIST_STAT_(MAX|MIN|MEAN|MEDIAN|STD_DEV|SUM(_SQUARES)?|NUM_COUNT|GEOMETRIC_MEAN|RANGE)|PAY_(HIDE|DEFAULT)|REGION_FLAG_(ALLOW_DAMAGE|FIXED_SUN|BLOCK_TERRAFORM|SANDBOX|DISABLE_(COLLISIONS|PHYSICS)|BLOCK_FLY|ALLOW_DIRECT_TELEPORT|RESTRICT_PUSHOBJECT)|HTTP_(METHOD|MIMETYPE|BODY_(MAXLENGTH|TRUNCATED)|CUSTOM_HEADER|PRAGMA_NO_CACHE|VERBOSE_THROTTLE|VERIFY_CERT)|SIT_(INVALID_(AGENT|LINK_OBJECT)|NO(T_EXPERIENCE|_(ACCESS|EXPERIENCE_PERMISSION|SIT_TARGET)))|STRING_(TRIM(_(HEAD|TAIL))?)|CLICK_ACTION_(NONE|TOUCH|SIT|BUY|PAY|OPEN(_MEDIA)?|PLAY|ZOOM)|TOUCH_INVALID_FACE|PROFILE_(NONE|SCRIPT_MEMORY)|RC_(DATA_FLAGS|DETECT_PHANTOM|GET_(LINK_NUM|NORMAL|ROOT_KEY)|MAX_HITS|REJECT_(TYPES|AGENTS|(NON)?PHYSICAL|LAND))|RCERR_(CAST_TIME_EXCEEDED|SIM_PERF_LOW|UNKNOWN)|ESTATE_ACCESS_(ALLOWED_(AGENT|GROUP)_(ADD|REMOVE)|BANNED_AGENT_(ADD|REMOVE))|DENSITY|FRICTION|RESTITUTION|GRAVITY_MULTIPLIER|KFM_(COMMAND|CMD_(PLAY|STOP|PAUSE)|MODE|FORWARD|LOOP|PING_PONG|REVERSE|DATA|ROTATION|TRANSLATION)|ERR_(GENERIC|PARCEL_PERMISSIONS|MALFORMED_PARAMS|RUNTIME_PERMISSIONS|THROTTLED)|CHARACTER_(CMD_((SMOOTH_)?STOP|JUMP)|DESIRED_(TURN_)?SPEED|RADIUS|STAY_WITHIN_PARCEL|LENGTH|ORIENTATION|ACCOUNT_FOR_SKIPPED_FRAMES|AVOIDANCE_MODE|TYPE(_([ABCD]|NONE))?|MAX_(DECEL|TURN_RADIUS|(ACCEL|SPEED)))|PURSUIT_(OFFSET|FUZZ_FACTOR|GOAL_TOLERANCE|INTERCEPT)|REQUIRE_LINE_OF_SIGHT|FORCE_DIRECT_PATH|VERTICAL|HORIZONTAL|AVOID_(CHARACTERS|DYNAMIC_OBSTACLES|NONE)|PU_(EVADE_(HIDDEN|SPOTTED)|FAILURE_(DYNAMIC_PATHFINDING_DISABLED|INVALID_(GOAL|START)|NO_(NAVMESH|VALID_DESTINATION)|OTHER|TARGET_GONE|(PARCEL_)?UNREACHABLE)|(GOAL|SLOWDOWN_DISTANCE)_REACHED)|TRAVERSAL_TYPE(_(FAST|NONE|SLOW))?|CONTENT_TYPE_(ATOM|FORM|HTML|JSON|LLSD|RSS|TEXT|XHTML|XML)|GCNP_(RADIUS|STATIC)|(PATROL|WANDER)_PAUSE_AT_WAYPOINTS|OPT_(AVATAR|CHARACTER|EXCLUSION_VOLUME|LEGACY_LINKSET|MATERIAL_VOLUME|OTHER|STATIC_OBSTACLE|WALKABLE)|SIM_STAT_PCT_CHARS_STEPPED)\\b"},{begin:"\\b(FALSE|TRUE)\\b"},{begin:"\\b(ZERO_ROTATION)\\b"},{begin:"\\b(EOF|JSON_(ARRAY|DELETE|FALSE|INVALID|NULL|NUMBER|OBJECT|STRING|TRUE)|NULL_KEY|TEXTURE_(BLANK|DEFAULT|MEDIA|PLYWOOD|TRANSPARENT)|URL_
REQUEST_(GRANTED|DENIED))\\b"},{begin:"\\b(ZERO_VECTOR|TOUCH_INVALID_(TEXCOORD|VECTOR))\\b"}]},l={className:"built_in",begin:"\\b(ll(AgentInExperience|(Create|DataSize|Delete|KeyCount|Keys|Read|Update)KeyValue|GetExperience(Details|ErrorMessage)|ReturnObjectsBy(ID|Owner)|Json(2List|[GS]etValue|ValueType)|Sin|Cos|Tan|Atan2|Sqrt|Pow|Abs|Fabs|Frand|Floor|Ceil|Round|Vec(Mag|Norm|Dist)|Rot(Between|2(Euler|Fwd|Left|Up))|(Euler|Axes)2Rot|Whisper|(Region|Owner)?Say|Shout|Listen(Control|Remove)?|Sensor(Repeat|Remove)?|Detected(Name|Key|Owner|Type|Pos|Vel|Grab|Rot|Group|LinkNumber)|Die|Ground|Wind|([GS]et)(AnimationOverride|MemoryLimit|PrimMediaParams|ParcelMusicURL|Object(Desc|Name)|PhysicsMaterial|Status|Scale|Color|Alpha|Texture|Pos|Rot|Force|Torque)|ResetAnimationOverride|(Scale|Offset|Rotate)Texture|(Rot)?Target(Remove)?|(Stop)?MoveToTarget|Apply(Rotational)?Impulse|Set(KeyframedMotion|ContentType|RegionPos|(Angular)?Velocity|Buoyancy|HoverHeight|ForceAndTorque|TimerEvent|ScriptState|Damage|TextureAnim|Sound(Queueing|Radius)|Vehicle(Type|(Float|Vector|Rotation)Param)|(Touch|Sit)?Text|Camera(Eye|At)Offset|PrimitiveParams|ClickAction|Link(Alpha|Color|PrimitiveParams(Fast)?|Texture(Anim)?|Camera|Media)|RemoteScriptAccessPin|PayPrice|LocalRot)|ScaleByFactor|Get((Max|Min)ScaleFactor|ClosestNavPoint|StaticPath|SimStats|Env|PrimitiveParams|Link(PrimitiveParams|Number(OfSides)?|Key|Name|Media)|HTTPHeader|FreeURLs|Object(Details|PermMask|PrimCount)|Parcel(MaxPrims|Details|Prim(Count|Owners))|Attached(List)?|(SPMax|Free|Used)Memory|Region(Name|TimeDilation|FPS|Corner|AgentCount)|Root(Position|Rotation)|UnixTime|(Parcel|Region)Flags|(Wall|GMT)clock|SimulatorHostname|BoundingBox|GeometricCenter|Creator|NumberOf(Prims|NotecardLines|Sides)|Animation(List)?|(Camera|Local)(Pos|Rot)|Vel|Accel|Omega|Time(stamp|OfDay)|(Object|CenterOf)?Mass|MassMKS|Energy|Owner|(Owner)?Key|SunDirection|Texture(Offset|Scale|Rot)|Inventory(Number|Name|Key|Type|Creator|PermMask)|Permissions(Key)?|StartParameter|List(Length|EntryType)|Date|Agent(Size|Info|Language|List)|LandOwnerAt|NotecardLine|Script(Name|State))|(Get|Reset|GetAndReset)Time|PlaySound(Slave)?|LoopSound(Master|Slave)?|(Trigger|Stop|Preload)Sound|((Get|Delete)Sub|Insert)String|To(Upper|Lower)|Give(InventoryList|Money)|RezObject|(Stop)?LookAt|Sleep|CollisionFilter|(Take|Release)Controls|DetachFromAvatar|AttachToAvatar(Temp)?|InstantMessage|(GetNext)?Email|StopHover|MinEventDelay|RotLookAt|String(Length|Trim)|(Start|Stop)Animation|TargetOmega|Request(Experience)?Permissions|(Create|Break)Link|BreakAllLinks|(Give|Remove)Inventory|Water|PassTouches|Request(Agent|Inventory)Data|TeleportAgent(Home|GlobalCoords)?|ModifyLand|CollisionSound|ResetScript|MessageLinked|PushObject|PassCollisions|AxisAngle2Rot|Rot2(Axis|Angle)|A(cos|sin)|AngleBetween|AllowInventoryDrop|SubStringIndex|List2(CSV|Integer|Json|Float|String|Key|Vector|Rot|List(Strided)?)|DeleteSubList|List(Statistics|Sort|Randomize|(Insert|Find|Replace)List)|EdgeOfWorld|AdjustSoundVolume|Key2Name|TriggerSoundLimited|EjectFromLand|(CSV|ParseString)2List|OverMyLand|SameGroup|UnSit|Ground(Slope|Normal|Contour)|GroundRepel|(Set|Remove)VehicleFlags|SitOnLink|(AvatarOn)?(Link)?SitTarget|Script(Danger|Profiler)|Dialog|VolumeDetect|ResetOtherScript|RemoteLoadScriptPin|(Open|Close)RemoteDataChannel|SendRemoteData|RemoteDataReply|(Integer|String)ToBase64|XorBase64|Log(10)?|Base64To(String|Integer)|ParseStringKeepNulls|RezAtRoot|RequestSimulatorData|ForceMouselook|(Load|Release|(E|Une)scape)URL|ParcelMedia(CommandList|Query)|ModPow|
MapDestination|(RemoveFrom|AddTo|Reset)Land(Pass|Ban)List|(Set|Clear)CameraParams|HTTP(Request|Response)|TextBox|DetectedTouch(UV|Face|Pos|(N|Bin)ormal|ST)|(MD5|SHA1|DumpList2)String|Request(Secure)?URL|Clear(Prim|Link)Media|(Link)?ParticleSystem|(Get|Request)(Username|DisplayName)|RegionSayTo|CastRay|GenerateKey|TransferLindenDollars|ManageEstateAccess|(Create|Delete)Character|ExecCharacterCmd|Evade|FleeFrom|NavigateTo|PatrolPoints|Pursue|UpdateCharacter|WanderWithin))\\b"};return{name:"LSL (Linden Scripting Language)",illegal:":",contains:[i,{className:"comment",variants:[e.COMMENT("//","$"),e.COMMENT("/\\*","\\*/")],relevance:0},o,{className:"section",variants:[{begin:"\\b(state|default)\\b"},{begin:"\\b(state_(entry|exit)|touch(_(start|end))?|(land_)?collision(_(start|end))?|timer|listen|(no_)?sensor|control|(not_)?at_(rot_)?target|money|email|experience_permissions(_denied)?|run_time_permissions|changed|attach|dataserver|moving_(start|end)|link_message|(on|object)_rez|remote_data|http_re(sponse|quest)|path_update|transaction_result)\\b"}]},l,s,{className:"type",begin:"\\b(integer|float|string|key|vector|quaternion|rotation|list)\\b"}]}}return k_=t,k_}var U_,qT;function qAe(){if(qT)return U_;qT=1;function t(e){const n="\\[=*\\[",i="\\]=*\\]",o={begin:n,end:i,contains:["self"]},s=[e.COMMENT("--(?!"+n+")","$"),e.COMMENT("--"+n,i,{contains:[o],relevance:10})];return{name:"Lua",keywords:{$pattern:e.UNDERSCORE_IDENT_RE,literal:"true false nil",keyword:"and break do else elseif end for goto if in local not or repeat return then until while",built_in:"_G _ENV _VERSION __index __newindex __mode __call __metatable __tostring __len __gc __add __sub __mul __div __mod __pow __concat __unm __eq __lt __le assert collectgarbage dofile error getfenv getmetatable ipairs load loadfile loadstring module next pairs pcall print rawequal rawget rawset require select setfenv setmetatable tonumber tostring type unpack xpcall arg self coroutine resume yield status wrap create running debug getupvalue debug sethook getmetatable gethook setmetatable setlocal traceback setfenv getinfo setupvalue getlocal getregistry getfenv io lines write close flush open output type read stderr stdin input stdout popen tmpfile math log max acos huge ldexp pi cos tanh pow deg tan cosh sinh random randomseed frexp ceil floor rad abs sqrt modf asin min mod fmod log10 atan2 exp sin atan os exit setlocale date getenv difftime remove time clock tmpname rename execute package preload loadlib loaded loaders cpath config path seeall string sub upper len gfind rep find match char dump gmatch reverse byte format gsub lower table setn insert getn foreachi maxn foreach concat sort remove"},contains:s.concat([{className:"function",beginKeywords:"function",end:"\\)",contains:[e.inherit(e.TITLE_MODE,{begin:"([_a-zA-Z]\\w*\\.)*([_a-zA-Z]\\w*:)?[_a-zA-Z]\\w*"}),{className:"params",begin:"\\(",endsWithParent:!0,contains:s}].concat(s)},e.C_NUMBER_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{className:"string",begin:n,end:i,contains:[o],relevance:5}])}}return U_=t,U_}var F_,$T;function $Ae(){if($T)return F_;$T=1;function t(e){const 
n={className:"variable",variants:[{begin:"\\$\\("+e.UNDERSCORE_IDENT_RE+"\\)",contains:[e.BACKSLASH_ESCAPE]},{begin:/\$[@%{C.has(k[0])||U.ignoreMatch()}},{className:"symbol",relevance:0,begin:S}]},T={className:"named-character",begin:/\\\[[$a-zA-Z][$a-zA-Z0-9]+\]/},N={className:"operator",relevance:0,begin:/[+\-*/,;.:@~=><&|_`'^?!%]+/},y={className:"pattern",relevance:0,begin:/([a-zA-Z$][a-zA-Z0-9$]*)?_+([a-zA-Z$][a-zA-Z0-9$]*)?/},x={className:"slot",relevance:0,begin:/#[a-zA-Z$][a-zA-Z0-9$]*|#+[0-9]?/},P={className:"brace",relevance:0,begin:/[[\](){}]/},D={className:"message-name",relevance:0,begin:i.concat("::",S)};return{name:"Mathematica",aliases:["mma","wl"],classNameAliases:{brace:"punctuation",pattern:"type",slot:"type",symbol:"variable","named-character":"variable","builtin-symbol":"built_in","message-name":"string"},contains:[n.COMMENT(/\(\*/,/\*\)/,{contains:["self"]}),y,x,D,h,T,n.QUOTE_STRING_MODE,f,N,P]}}return B_=e,B_}var G_,zT;function zAe(){if(zT)return G_;zT=1;function t(e){const n="('|\\.')+",i={relevance:0,contains:[{begin:n}]};return{name:"Matlab",keywords:{keyword:"arguments break case catch classdef continue else elseif end enumeration events for function global if methods otherwise parfor persistent properties return spmd switch try while",built_in:"sin sind sinh asin asind asinh cos cosd cosh acos acosd acosh tan tand tanh atan atand atan2 atanh sec secd sech asec asecd asech csc cscd csch acsc acscd acsch cot cotd coth acot acotd acoth hypot exp expm1 log log1p log10 log2 pow2 realpow reallog realsqrt sqrt nthroot nextpow2 abs angle complex conj imag real unwrap isreal cplxpair fix floor ceil round mod rem sign airy besselj bessely besselh besseli besselk beta betainc betaln ellipj ellipke erf erfc erfcx erfinv expint gamma gammainc gammaln psi legendre cross dot factor isprime primes gcd lcm rat rats perms nchoosek factorial cart2sph cart2pol pol2cart sph2cart hsv2rgb rgb2hsv zeros ones eye repmat rand randn linspace logspace freqspace meshgrid accumarray size length ndims numel disp isempty isequal isequalwithequalnans cat reshape diag blkdiag tril triu fliplr flipud flipdim rot90 find sub2ind ind2sub bsxfun ndgrid permute ipermute shiftdim circshift squeeze isscalar isvector ans eps realmax realmin pi i|0 inf nan isnan isinf isfinite j|0 why compan gallery hadamard hankel hilb invhilb magic pascal rosser toeplitz vander wilkinson max min nanmax nanmin mean nanmean type table readtable writetable sortrows sort figure plot plot3 scatter scatter3 cellfun legend intersect ismember procrustes hold num2cell "},illegal:'(//|"|#|/\\*|\\s+/\\w+)',contains:[{className:"function",beginKeywords:"function",end:"$",contains:[e.UNDERSCORE_TITLE_MODE,{className:"params",variants:[{begin:"\\(",end:"\\)"},{begin:"\\[",end:"\\]"}]}]},{className:"built_in",begin:/true|false/,relevance:0,starts:i},{begin:"[a-zA-Z][a-zA-Z_0-9]*"+n,relevance:0},{className:"number",begin:e.C_NUMBER_RE,relevance:0,starts:i},{className:"string",begin:"'",end:"'",contains:[{begin:"''"}]},{begin:/\]|\}|\)/,relevance:0,starts:i},{className:"string",begin:'"',end:'"',contains:[{begin:'""'}],starts:i},e.COMMENT("^\\s*%\\{\\s*$","^\\s*%\\}\\s*$"),e.COMMENT("%","$")]}}return G_=t,G_}var Y_,VT;function VAe(){if(VT)return Y_;VT=1;function t(e){return{name:"Maxima",keywords:{$pattern:"[A-Za-z_%][0-9A-Za-z_%]*",keyword:"if then else elseif for thru do while unless step in and or not",literal:"true false unknown inf minf ind und %e %i %pi %phi %gamma",built_in:" abasep abs absint absolute_real_time acos acosh acot 
acoth acsc acsch activate addcol add_edge add_edges addmatrices addrow add_vertex add_vertices adjacency_matrix adjoin adjoint af agd airy airy_ai airy_bi airy_dai airy_dbi algsys alg_type alias allroots alphacharp alphanumericp amortization %and annuity_fv annuity_pv antid antidiff AntiDifference append appendfile apply apply1 apply2 applyb1 apropos args arit_amortization arithmetic arithsum array arrayapply arrayinfo arraymake arraysetapply ascii asec asech asin asinh askinteger asksign assoc assoc_legendre_p assoc_legendre_q assume assume_external_byte_order asympa at atan atan2 atanh atensimp atom atvalue augcoefmatrix augmented_lagrangian_method av average_degree backtrace bars barsplot barsplot_description base64 base64_decode bashindices batch batchload bc2 bdvac belln benefit_cost bern bernpoly bernstein_approx bernstein_expand bernstein_poly bessel bessel_i bessel_j bessel_k bessel_simplify bessel_y beta beta_incomplete beta_incomplete_generalized beta_incomplete_regularized bezout bfallroots bffac bf_find_root bf_fmin_cobyla bfhzeta bfloat bfloatp bfpsi bfpsi0 bfzeta biconnected_components bimetric binomial bipartition block blockmatrixp bode_gain bode_phase bothcoef box boxplot boxplot_description break bug_report build_info|10 buildq build_sample burn cabs canform canten cardinality carg cartan cartesian_product catch cauchy_matrix cbffac cdf_bernoulli cdf_beta cdf_binomial cdf_cauchy cdf_chi2 cdf_continuous_uniform cdf_discrete_uniform cdf_exp cdf_f cdf_gamma cdf_general_finite_discrete cdf_geometric cdf_gumbel cdf_hypergeometric cdf_laplace cdf_logistic cdf_lognormal cdf_negative_binomial cdf_noncentral_chi2 cdf_noncentral_student_t cdf_normal cdf_pareto cdf_poisson cdf_rank_sum cdf_rayleigh cdf_signed_rank cdf_student_t cdf_weibull cdisplay ceiling central_moment cequal cequalignore cf cfdisrep cfexpand cgeodesic cgreaterp cgreaterpignore changename changevar chaosgame charat charfun charfun2 charlist charp charpoly chdir chebyshev_t chebyshev_u checkdiv check_overlaps chinese cholesky christof chromatic_index chromatic_number cint circulant_graph clear_edge_weight clear_rules clear_vertex_label clebsch_gordan clebsch_graph clessp clesspignore close closefile cmetric coeff coefmatrix cograd col collapse collectterms columnop columnspace columnswap columnvector combination combine comp2pui compare compfile compile compile_file complement_graph complete_bipartite_graph complete_graph complex_number_p components compose_functions concan concat conjugate conmetderiv connected_components connect_vertices cons constant constantp constituent constvalue cont2part content continuous_freq contortion contour_plot contract contract_edge contragrad contrib_ode convert coord copy copy_file copy_graph copylist copymatrix cor cos cosh cot coth cov cov1 covdiff covect covers crc24sum create_graph create_list csc csch csetup cspline ctaylor ct_coordsys ctransform ctranspose cube_graph cuboctahedron_graph cunlisp cv cycle_digraph cycle_graph cylindrical days360 dblint deactivate declare declare_constvalue declare_dimensions declare_fundamental_dimensions declare_fundamental_units declare_qty declare_translated declare_unit_conversion declare_units declare_weights decsym defcon define define_alt_display define_variable defint defmatch defrule defstruct deftaylor degree_sequence del delete deleten delta demo demoivre denom depends derivdegree derivlist describe desolve determinant dfloat dgauss_a dgauss_b dgeev dgemm dgeqrf dgesv dgesvd diag diagmatrix diag_matrix diagmatrixp diameter diff 
digitcharp dimacs_export dimacs_import dimension dimensionless dimensions dimensions_as_list direct directory discrete_freq disjoin disjointp disolate disp dispcon dispform dispfun dispJordan display disprule dispterms distrib divide divisors divsum dkummer_m dkummer_u dlange dodecahedron_graph dotproduct dotsimp dpart draw draw2d draw3d drawdf draw_file draw_graph dscalar echelon edge_coloring edge_connectivity edges eigens_by_jacobi eigenvalues eigenvectors eighth einstein eivals eivects elapsed_real_time elapsed_run_time ele2comp ele2polynome ele2pui elem elementp elevation_grid elim elim_allbut eliminate eliminate_using ellipse elliptic_e elliptic_ec elliptic_eu elliptic_f elliptic_kc elliptic_pi ematrix empty_graph emptyp endcons entermatrix entertensor entier equal equalp equiv_classes erf erfc erf_generalized erfi errcatch error errormsg errors euler ev eval_string evenp every evolution evolution2d evundiff example exp expand expandwrt expandwrt_factored expint expintegral_chi expintegral_ci expintegral_e expintegral_e1 expintegral_ei expintegral_e_simplify expintegral_li expintegral_shi expintegral_si explicit explose exponentialize express expt exsec extdiff extract_linear_equations extremal_subset ezgcd %f f90 facsum factcomb factor factorfacsum factorial factorout factorsum facts fast_central_elements fast_linsolve fasttimes featurep fernfale fft fib fibtophi fifth filename_merge file_search file_type fillarray findde find_root find_root_abs find_root_error find_root_rel first fix flatten flength float floatnump floor flower_snark flush flush1deriv flushd flushnd flush_output fmin_cobyla forget fortran fourcos fourexpand fourier fourier_elim fourint fourintcos fourintsin foursimp foursin fourth fposition frame_bracket freeof freshline fresnel_c fresnel_s from_adjacency_matrix frucht_graph full_listify fullmap fullmapl fullratsimp fullratsubst fullsetify funcsolve fundamental_dimensions fundamental_units fundef funmake funp fv g0 g1 gamma gamma_greek gamma_incomplete gamma_incomplete_generalized gamma_incomplete_regularized gauss gauss_a gauss_b gaussprob gcd gcdex gcdivide gcfac gcfactor gd generalized_lambert_w genfact gen_laguerre genmatrix gensym geo_amortization geo_annuity_fv geo_annuity_pv geomap geometric geometric_mean geosum get getcurrentdirectory get_edge_weight getenv get_lu_factors get_output_stream_string get_pixel get_plot_option get_tex_environment get_tex_environment_default get_vertex_label gfactor gfactorsum ggf girth global_variances gn gnuplot_close gnuplot_replot gnuplot_reset gnuplot_restart gnuplot_start go Gosper GosperSum gr2d gr3d gradef gramschmidt graph6_decode graph6_encode graph6_export graph6_import graph_center graph_charpoly graph_eigenvalues graph_flow graph_order graph_periphery graph_product graph_size graph_union great_rhombicosidodecahedron_graph great_rhombicuboctahedron_graph grid_graph grind grobner_basis grotzch_graph hamilton_cycle hamilton_path hankel hankel_1 hankel_2 harmonic harmonic_mean hav heawood_graph hermite hessian hgfred hilbertmap hilbert_matrix hipow histogram histogram_description hodge horner hypergeometric i0 i1 %ibes ic1 ic2 ic_convert ichr1 ichr2 icosahedron_graph icosidodecahedron_graph icurvature ident identfor identity idiff idim idummy ieqn %if ifactors iframes ifs igcdex igeodesic_coords ilt image imagpart imetric implicit implicit_derivative implicit_plot indexed_tensor indices induced_subgraph inferencep inference_result infix info_display init_atensor init_ctensor in_neighbors innerproduct inpart inprod inrt 
integerp integer_partitions integrate intersect intersection intervalp intopois intosum invariant1 invariant2 inverse_fft inverse_jacobi_cd inverse_jacobi_cn inverse_jacobi_cs inverse_jacobi_dc inverse_jacobi_dn inverse_jacobi_ds inverse_jacobi_nc inverse_jacobi_nd inverse_jacobi_ns inverse_jacobi_sc inverse_jacobi_sd inverse_jacobi_sn invert invert_by_adjoint invert_by_lu inv_mod irr is is_biconnected is_bipartite is_connected is_digraph is_edge_in_graph is_graph is_graph_or_digraph ishow is_isomorphic isolate isomorphism is_planar isqrt isreal_p is_sconnected is_tree is_vertex_in_graph items_inference %j j0 j1 jacobi jacobian jacobi_cd jacobi_cn jacobi_cs jacobi_dc jacobi_dn jacobi_ds jacobi_nc jacobi_nd jacobi_ns jacobi_p jacobi_sc jacobi_sd jacobi_sn JF jn join jordan julia julia_set julia_sin %k kdels kdelta kill killcontext kostka kron_delta kronecker_product kummer_m kummer_u kurtosis kurtosis_bernoulli kurtosis_beta kurtosis_binomial kurtosis_chi2 kurtosis_continuous_uniform kurtosis_discrete_uniform kurtosis_exp kurtosis_f kurtosis_gamma kurtosis_general_finite_discrete kurtosis_geometric kurtosis_gumbel kurtosis_hypergeometric kurtosis_laplace kurtosis_logistic kurtosis_lognormal kurtosis_negative_binomial kurtosis_noncentral_chi2 kurtosis_noncentral_student_t kurtosis_normal kurtosis_pareto kurtosis_poisson kurtosis_rayleigh kurtosis_student_t kurtosis_weibull label labels lagrange laguerre lambda lambert_w laplace laplacian_matrix last lbfgs lc2kdt lcharp lc_l lcm lc_u ldefint ldisp ldisplay legendre_p legendre_q leinstein length let letrules letsimp levi_civita lfreeof lgtreillis lhs li liediff limit Lindstedt linear linearinterpol linear_program linear_regression line_graph linsolve listarray list_correlations listify list_matrix_entries list_nc_monomials listoftens listofvars listp lmax lmin load loadfile local locate_matrix_entry log logcontract log_gamma lopow lorentz_gauge lowercasep lpart lratsubst lreduce lriemann lsquares_estimates lsquares_estimates_approximate lsquares_estimates_exact lsquares_mse lsquares_residual_mse lsquares_residuals lsum ltreillis lu_backsub lucas lu_factor %m macroexpand macroexpand1 make_array makebox makefact makegamma make_graph make_level_picture makelist makeOrders make_poly_continent make_poly_country make_polygon make_random_state make_rgb_picture makeset make_string_input_stream make_string_output_stream make_transform mandelbrot mandelbrot_set map mapatom maplist matchdeclare matchfix mat_cond mat_fullunblocker mat_function mathml_display mat_norm matrix matrixmap matrixp matrix_size mattrace mat_trace mat_unblocker max max_clique max_degree max_flow maximize_lp max_independent_set max_matching maybe md5sum mean mean_bernoulli mean_beta mean_binomial mean_chi2 mean_continuous_uniform mean_deviation mean_discrete_uniform mean_exp mean_f mean_gamma mean_general_finite_discrete mean_geometric mean_gumbel mean_hypergeometric mean_laplace mean_logistic mean_lognormal mean_negative_binomial mean_noncentral_chi2 mean_noncentral_student_t mean_normal mean_pareto mean_poisson mean_rayleigh mean_student_t mean_weibull median median_deviation member mesh metricexpandall mgf1_sha1 min min_degree min_edge_cut minfactorial minimalPoly minimize_lp minimum_spanning_tree minor minpack_lsquares minpack_solve min_vertex_cover min_vertex_cut mkdir mnewton mod mode_declare mode_identity ModeMatrix moebius mon2schur mono monomial_dimensions multibernstein_poly multi_display_for_texinfo multi_elem multinomial multinomial_coeff multi_orbit multiplot_mode 
multi_pui multsym multthru mycielski_graph nary natural_unit nc_degree ncexpt ncharpoly negative_picture neighbors new newcontext newdet new_graph newline newton new_variable next_prime nicedummies niceindices ninth nofix nonarray noncentral_moment nonmetricity nonnegintegerp nonscalarp nonzeroandfreeof notequal nounify nptetrad npv nroots nterms ntermst nthroot nullity nullspace num numbered_boundaries numberp number_to_octets num_distinct_partitions numerval numfactor num_partitions nusum nzeta nzetai nzetar octets_to_number octets_to_oid odd_girth oddp ode2 ode_check odelin oid_to_octets op opena opena_binary openr openr_binary openw openw_binary operatorp opsubst optimize %or orbit orbits ordergreat ordergreatp orderless orderlessp orthogonal_complement orthopoly_recur orthopoly_weight outermap out_neighbors outofpois pade parabolic_cylinder_d parametric parametric_surface parg parGosper parse_string parse_timedate part part2cont partfrac partition partition_set partpol path_digraph path_graph pathname_directory pathname_name pathname_type pdf_bernoulli pdf_beta pdf_binomial pdf_cauchy pdf_chi2 pdf_continuous_uniform pdf_discrete_uniform pdf_exp pdf_f pdf_gamma pdf_general_finite_discrete pdf_geometric pdf_gumbel pdf_hypergeometric pdf_laplace pdf_logistic pdf_lognormal pdf_negative_binomial pdf_noncentral_chi2 pdf_noncentral_student_t pdf_normal pdf_pareto pdf_poisson pdf_rank_sum pdf_rayleigh pdf_signed_rank pdf_student_t pdf_weibull pearson_skewness permanent permut permutation permutations petersen_graph petrov pickapart picture_equalp picturep piechart piechart_description planar_embedding playback plog plot2d plot3d plotdf ploteq plsquares pochhammer points poisdiff poisexpt poisint poismap poisplus poissimp poissubst poistimes poistrim polar polarform polartorect polar_to_xy poly_add poly_buchberger poly_buchberger_criterion poly_colon_ideal poly_content polydecomp poly_depends_p poly_elimination_ideal poly_exact_divide poly_expand poly_expt poly_gcd polygon poly_grobner poly_grobner_equal poly_grobner_member poly_grobner_subsetp poly_ideal_intersection poly_ideal_polysaturation poly_ideal_polysaturation1 poly_ideal_saturation poly_ideal_saturation1 poly_lcm poly_minimization polymod poly_multiply polynome2ele polynomialp poly_normal_form poly_normalize poly_normalize_list poly_polysaturation_extension poly_primitive_part poly_pseudo_divide poly_reduced_grobner poly_reduction poly_saturation_extension poly_s_polynomial poly_subtract polytocompanion pop postfix potential power_mod powerseries powerset prefix prev_prime primep primes principal_components print printf printfile print_graph printpois printprops prodrac product properties propvars psi psubst ptriangularize pui pui2comp pui2ele pui2polynome pui_direct puireduc push put pv qput qrange qty quad_control quad_qag quad_qagi quad_qagp quad_qags quad_qawc quad_qawf quad_qawo quad_qaws quadrilateral quantile quantile_bernoulli quantile_beta quantile_binomial quantile_cauchy quantile_chi2 quantile_continuous_uniform quantile_discrete_uniform quantile_exp quantile_f quantile_gamma quantile_general_finite_discrete quantile_geometric quantile_gumbel quantile_hypergeometric quantile_laplace quantile_logistic quantile_lognormal quantile_negative_binomial quantile_noncentral_chi2 quantile_noncentral_student_t quantile_normal quantile_pareto quantile_poisson quantile_rayleigh quantile_student_t quantile_weibull quartile_skewness quit qunit quotient racah_v racah_w radcan radius random random_bernoulli random_beta random_binomial 
random_bipartite_graph random_cauchy random_chi2 random_continuous_uniform random_digraph random_discrete_uniform random_exp random_f random_gamma random_general_finite_discrete random_geometric random_graph random_graph1 random_gumbel random_hypergeometric random_laplace random_logistic random_lognormal random_negative_binomial random_network random_noncentral_chi2 random_noncentral_student_t random_normal random_pareto random_permutation random_poisson random_rayleigh random_regular_graph random_student_t random_tournament random_tree random_weibull range rank rat ratcoef ratdenom ratdiff ratdisrep ratexpand ratinterpol rational rationalize ratnumer ratnump ratp ratsimp ratsubst ratvars ratweight read read_array read_binary_array read_binary_list read_binary_matrix readbyte readchar read_hashed_array readline read_list read_matrix read_nested_list readonly read_xpm real_imagpart_to_conjugate realpart realroots rearray rectangle rectform rectform_log_if_constant recttopolar rediff reduce_consts reduce_order region region_boundaries region_boundaries_plus rem remainder remarray rembox remcomps remcon remcoord remfun remfunction remlet remove remove_constvalue remove_dimensions remove_edge remove_fundamental_dimensions remove_fundamental_units remove_plot_option remove_vertex rempart remrule remsym remvalue rename rename_file reset reset_displays residue resolvante resolvante_alternee1 resolvante_bipartite resolvante_diedrale resolvante_klein resolvante_klein3 resolvante_produit_sym resolvante_unitaire resolvante_vierer rest resultant return reveal reverse revert revert2 rgb2level rhs ricci riemann rinvariant risch rk rmdir rncombine romberg room rootscontract round row rowop rowswap rreduce run_testsuite %s save saving scalarp scaled_bessel_i scaled_bessel_i0 scaled_bessel_i1 scalefactors scanmap scatterplot scatterplot_description scene schur2comp sconcat scopy scsimp scurvature sdowncase sec sech second sequal sequalignore set_alt_display setdifference set_draw_defaults set_edge_weight setelmx setequalp setify setp set_partitions set_plot_option set_prompt set_random_state set_tex_environment set_tex_environment_default setunits setup_autoload set_up_dot_simplifications set_vertex_label seventh sexplode sf sha1sum sha256sum shortest_path shortest_weighted_path show showcomps showratvars sierpinskiale sierpinskimap sign signum similaritytransform simp_inequality simplify_sum simplode simpmetderiv simtran sin sinh sinsert sinvertcase sixth skewness skewness_bernoulli skewness_beta skewness_binomial skewness_chi2 skewness_continuous_uniform skewness_discrete_uniform skewness_exp skewness_f skewness_gamma skewness_general_finite_discrete skewness_geometric skewness_gumbel skewness_hypergeometric skewness_laplace skewness_logistic skewness_lognormal skewness_negative_binomial skewness_noncentral_chi2 skewness_noncentral_student_t skewness_normal skewness_pareto skewness_poisson skewness_rayleigh skewness_student_t skewness_weibull slength smake small_rhombicosidodecahedron_graph small_rhombicuboctahedron_graph smax smin smismatch snowmap snub_cube_graph snub_dodecahedron_graph solve solve_rec solve_rec_rat some somrac sort sparse6_decode sparse6_encode sparse6_export sparse6_import specint spherical spherical_bessel_j spherical_bessel_y spherical_hankel1 spherical_hankel2 spherical_harmonic spherical_to_xyz splice split sposition sprint sqfr sqrt sqrtdenest sremove sremovefirst sreverse ssearch ssort sstatus ssubst ssubstfirst staircase standardize standardize_inverse_trig starplot 
starplot_description status std std1 std_bernoulli std_beta std_binomial std_chi2 std_continuous_uniform std_discrete_uniform std_exp std_f std_gamma std_general_finite_discrete std_geometric std_gumbel std_hypergeometric std_laplace std_logistic std_lognormal std_negative_binomial std_noncentral_chi2 std_noncentral_student_t std_normal std_pareto std_poisson std_rayleigh std_student_t std_weibull stemplot stirling stirling1 stirling2 strim striml strimr string stringout stringp strong_components struve_h struve_l sublis sublist sublist_indices submatrix subsample subset subsetp subst substinpart subst_parallel substpart substring subvar subvarp sum sumcontract summand_to_rec supcase supcontext symbolp symmdifference symmetricp system take_channel take_inference tan tanh taylor taylorinfo taylorp taylor_simplifier taytorat tcl_output tcontract tellrat tellsimp tellsimpafter tentex tenth test_mean test_means_difference test_normality test_proportion test_proportions_difference test_rank_sum test_sign test_signed_rank test_variance test_variance_ratio tex tex1 tex_display texput %th third throw time timedate timer timer_info tldefint tlimit todd_coxeter toeplitz tokens to_lisp topological_sort to_poly to_poly_solve totaldisrep totalfourier totient tpartpol trace tracematrix trace_options transform_sample translate translate_file transpose treefale tree_reduce treillis treinat triangle triangularize trigexpand trigrat trigreduce trigsimp trunc truncate truncated_cube_graph truncated_dodecahedron_graph truncated_icosahedron_graph truncated_tetrahedron_graph tr_warnings_get tube tutte_graph ueivects uforget ultraspherical underlying_graph undiff union unique uniteigenvectors unitp units unit_step unitvector unorder unsum untellrat untimer untrace uppercasep uricci uriemann uvect vandermonde_matrix var var1 var_bernoulli var_beta var_binomial var_chi2 var_continuous_uniform var_discrete_uniform var_exp var_f var_gamma var_general_finite_discrete var_geometric var_gumbel var_hypergeometric var_laplace var_logistic var_lognormal var_negative_binomial var_noncentral_chi2 var_noncentral_student_t var_normal var_pareto var_poisson var_rayleigh var_student_t var_weibull vector vectorpotential vectorsimp verbify vers vertex_coloring vertex_connectivity vertex_degree vertex_distance vertex_eccentricity vertex_in_degree vertex_out_degree vertices vertices_to_cycle vertices_to_path %w weyl wheel_graph wiener_index wigner_3j wigner_6j wigner_9j with_stdout write_binary_data writebyte write_data writefile wronskian xreduce xthru %y Zeilberger zeroequiv zerofor zeromatrix zeromatrixp zeta zgeev zheev zlange zn_add_table zn_carmichael_lambda zn_characteristic_factors zn_determinant zn_factor_generators zn_invert_by_lu zn_log zn_mult_table absboxchar activecontexts adapt_depth additive adim aform algebraic algepsilon algexact aliases allbut all_dotsimp_denoms allocation allsym alphabetic animation antisymmetric arrays askexp assume_pos assume_pos_pred assumescalar asymbol atomgrad atrig1 axes axis_3d axis_bottom axis_left axis_right axis_top azimuth background background_color backsubst berlefact bernstein_explicit besselexpand beta_args_sum_to_integer beta_expand bftorat bftrunc bindtest border boundaries_array box boxchar breakup %c capping cauchysum cbrange cbtics center cflength cframe_flag cnonmet_flag color color_bar color_bar_tics colorbox columns commutative complex cone context contexts contour contour_levels cosnpiflag ctaypov ctaypt ctayswitch ctayvar ct_coords ctorsion_flag ctrgsimp cube 
current_let_rule_package cylinder data_file_name debugmode decreasing default_let_rule_package delay dependencies derivabbrev derivsubst detout diagmetric diff dim dimensions dispflag display2d|10 display_format_internal distribute_over doallmxops domain domxexpt domxmxops domxnctimes dontfactor doscmxops doscmxplus dot0nscsimp dot0simp dot1simp dotassoc dotconstrules dotdistrib dotexptsimp dotident dotscrules draw_graph_program draw_realpart edge_color edge_coloring edge_partition edge_type edge_width %edispflag elevation %emode endphi endtheta engineering_format_floats enhanced3d %enumer epsilon_lp erfflag erf_representation errormsg error_size error_syms error_type %e_to_numlog eval even evenfun evflag evfun ev_point expandwrt_denom expintexpand expintrep expon expop exptdispflag exptisolate exptsubst facexpand facsum_combine factlim factorflag factorial_expand factors_only fb feature features file_name file_output_append file_search_demo file_search_lisp file_search_maxima|10 file_search_tests file_search_usage file_type_lisp file_type_maxima|10 fill_color fill_density filled_func fixed_vertices flipflag float2bf font font_size fortindent fortspaces fpprec fpprintprec functions gamma_expand gammalim gdet genindex gensumnum GGFCFMAX GGFINFINITY globalsolve gnuplot_command gnuplot_curve_styles gnuplot_curve_titles gnuplot_default_term_command gnuplot_dumb_term_command gnuplot_file_args gnuplot_file_name gnuplot_out_file gnuplot_pdf_term_command gnuplot_pm3d gnuplot_png_term_command gnuplot_postamble gnuplot_preamble gnuplot_ps_term_command gnuplot_svg_term_command gnuplot_term gnuplot_view_args Gosper_in_Zeilberger gradefs grid grid2d grind halfangles head_angle head_both head_length head_type height hypergeometric_representation %iargs ibase icc1 icc2 icounter idummyx ieqnprint ifb ifc1 ifc2 ifg ifgi ifr iframe_bracket_form ifri igeowedge_flag ikt1 ikt2 imaginary inchar increasing infeval infinity inflag infolists inm inmc1 inmc2 intanalysis integer integervalued integrate_use_rootsof integration_constant integration_constant_counter interpolate_color intfaclim ip_grid ip_grid_in irrational isolate_wrt_times iterations itr julia_parameter %k1 %k2 keepfloat key key_pos kinvariant kt label label_alignment label_orientation labels lassociative lbfgs_ncorrections lbfgs_nfeval_max leftjust legend letrat let_rule_packages lfg lg lhospitallim limsubst linear linear_solver linechar linel|10 linenum line_type linewidth line_width linsolve_params linsolvewarn lispdisp listarith listconstvars listdummyvars lmxchar load_pathname loadprint logabs logarc logcb logconcoeffp logexpand lognegint logsimp logx logx_secondary logy logy_secondary logz lriem m1pbranch macroexpansion macros mainvar manual_demo maperror mapprint matrix_element_add matrix_element_mult matrix_element_transpose maxapplydepth maxapplyheight maxima_tempdir|10 maxima_userdir|10 maxnegex MAX_ORD maxposex maxpsifracdenom maxpsifracnum maxpsinegint maxpsiposint maxtayorder mesh_lines_color method mod_big_prime mode_check_errorp mode_checkp mode_check_warnp mod_test mod_threshold modular_linear_solver modulus multiplicative multiplicities myoptions nary negdistrib negsumdispflag newline newtonepsilon newtonmaxiter nextlayerfactor niceindicespref nm nmc noeval nolabels nonegative_lp noninteger nonscalar noun noundisp nouns np npi nticks ntrig numer numer_pbranch obase odd oddfun opacity opproperties opsubst optimprefix optionset orientation origin orthopoly_returns_intervals outative outchar packagefile palette partswitch pdf_file 
pfeformat phiresolution %piargs piece pivot_count_sx pivot_max_sx plot_format plot_options plot_realpart png_file pochhammer_max_index points pointsize point_size points_joined point_type poislim poisson poly_coefficient_ring poly_elimination_order polyfactor poly_grobner_algorithm poly_grobner_debug poly_monomial_order poly_primary_elimination_order poly_return_term_list poly_secondary_elimination_order poly_top_reduction_only posfun position powerdisp pred prederror primep_number_of_tests product_use_gamma program programmode promote_float_to_bigfloat prompt proportional_axes props psexpand ps_file radexpand radius radsubstflag rassociative ratalgdenom ratchristof ratdenomdivide rateinstein ratepsilon ratfac rational ratmx ratprint ratriemann ratsimpexpons ratvarswitch ratweights ratweyl ratwtlvl real realonly redraw refcheck resolution restart resultant ric riem rmxchar %rnum_list rombergabs rombergit rombergmin rombergtol rootsconmode rootsepsilon run_viewer same_xy same_xyz savedef savefactors scalar scalarmatrixp scale scale_lp setcheck setcheckbreak setval show_edge_color show_edges show_edge_type show_edge_width show_id show_label showtime show_vertex_color show_vertex_size show_vertex_type show_vertices show_weight simp simplified_output simplify_products simpproduct simpsum sinnpiflag solvedecomposes solveexplicit solvefactors solvenullwarn solveradcan solvetrigwarn space sparse sphere spring_embedding_depth sqrtdispflag stardisp startphi starttheta stats_numer stringdisp structures style sublis_apply_lambda subnumsimp sumexpand sumsplitfact surface surface_hide svg_file symmetric tab taylordepth taylor_logexpand taylor_order_coefficients taylor_truncate_polynomials tensorkill terminal testsuite_files thetaresolution timer_devalue title tlimswitch tr track transcompile transform transform_xy translate_fast_arrays transparent transrun tr_array_as_ref tr_bound_function_applyp tr_file_tty_messagesp tr_float_can_branch_complex tr_function_call_default trigexpandplus trigexpandtimes triginverses trigsign trivial_solutions tr_numer tr_optimize_max_loop tr_semicompile tr_state_vars tr_warn_bad_function_calls tr_warn_fexpr tr_warn_meval tr_warn_mode tr_warn_undeclared tr_warn_undefined_variable tstep ttyoff tube_extremes ufg ug %unitexpand unit_vectors uric uriem use_fast_arrays user_preamble usersetunits values vect_cross verbose vertex_color vertex_coloring vertex_partition vertex_size vertex_type view warnings weyl width windowname windowtitle wired_surface wireframe xaxis xaxis_color xaxis_secondary xaxis_type xaxis_width xlabel xlabel_secondary xlength xrange xrange_secondary xtics xtics_axis xtics_rotate xtics_rotate_secondary xtics_secondary xtics_secondary_axis xu_grid x_voxel xy_file xyplane xy_scale yaxis yaxis_color yaxis_secondary yaxis_type yaxis_width ylabel ylabel_secondary ylength yrange yrange_secondary ytics ytics_axis ytics_rotate ytics_rotate_secondary ytics_secondary ytics_secondary_axis yv_grid y_voxel yx_ratio zaxis zaxis_color zaxis_type zaxis_width zeroa zerob zerobern zeta%pi zlabel zlabel_rotate zlength zmin zn_primroot_limit zn_primroot_pretest",symbol:"_ __ %|0 %%|0"},contains:[{className:"comment",begin:"/\\*",end:"\\*/",contains:["self"]},e.QUOTE_STRING_MODE,{className:"number",relevance:0,variants:[{begin:"\\b(\\d+|\\d+\\.|\\.\\d+|\\d+\\.\\d+)[Ee][-+]?\\d+\\b"},{begin:"\\b(\\d+|\\d+\\.|\\.\\d+|\\d+\\.\\d+)[Bb][-+]?\\d+\\b",relevance:10},{begin:"\\b(\\.\\d+|\\d+\\.\\d+)\\b"},{begin:"\\b(\\d+|0[0-9A-Za-z]+)\\.?\\b"}]}],illegal:/@/}}return Y_=t,Y_}var 
q_,WT;function WAe(){if(WT)return q_;WT=1;function t(e){return{name:"MEL",keywords:"int float string vector matrix if else switch case default while do for in break continue global proc return about abs addAttr addAttributeEditorNodeHelp addDynamic addNewShelfTab addPP addPanelCategory addPrefixToName advanceToNextDrivenKey affectedNet affects aimConstraint air alias aliasAttr align alignCtx alignCurve alignSurface allViewFit ambientLight angle angleBetween animCone animCurveEditor animDisplay animView annotate appendStringArray applicationName applyAttrPreset applyTake arcLenDimContext arcLengthDimension arclen arrayMapper art3dPaintCtx artAttrCtx artAttrPaintVertexCtx artAttrSkinPaintCtx artAttrTool artBuildPaintMenu artFluidAttrCtx artPuttyCtx artSelectCtx artSetPaintCtx artUserPaintCtx assignCommand assignInputDevice assignViewportFactories attachCurve attachDeviceAttr attachSurface attrColorSliderGrp attrCompatibility attrControlGrp attrEnumOptionMenu attrEnumOptionMenuGrp attrFieldGrp attrFieldSliderGrp attrNavigationControlGrp attrPresetEditWin attributeExists attributeInfo attributeMenu attributeQuery autoKeyframe autoPlace bakeClip bakeFluidShading bakePartialHistory bakeResults bakeSimulation basename basenameEx batchRender bessel bevel bevelPlus binMembership bindSkin blend2 blendShape blendShapeEditor blendShapePanel blendTwoAttr blindDataType boneLattice boundary boxDollyCtx boxZoomCtx bufferCurve buildBookmarkMenu buildKeyframeMenu button buttonManip CBG cacheFile cacheFileCombine cacheFileMerge cacheFileTrack camera cameraView canCreateManip canvas capitalizeString catch catchQuiet ceil changeSubdivComponentDisplayLevel changeSubdivRegion channelBox character characterMap characterOutlineEditor characterize chdir checkBox checkBoxGrp checkDefaultRenderGlobals choice circle circularFillet clamp clear clearCache clip clipEditor clipEditorCurrentTimeCtx clipSchedule clipSchedulerOutliner clipTrimBefore closeCurve closeSurface cluster cmdFileOutput cmdScrollFieldExecuter cmdScrollFieldReporter cmdShell coarsenSubdivSelectionList collision color colorAtPoint colorEditor colorIndex colorIndexSliderGrp colorSliderButtonGrp colorSliderGrp columnLayout commandEcho commandLine commandPort compactHairSystem componentEditor compositingInterop computePolysetVolume condition cone confirmDialog connectAttr connectControl connectDynamic connectJoint connectionInfo constrain constrainValue constructionHistory container containsMultibyte contextInfo control convertFromOldLayers convertIffToPsd convertLightmap convertSolidTx convertTessellation convertUnit copyArray copyFlexor copyKey copySkinWeights cos cpButton cpCache cpClothSet cpCollision cpConstraint cpConvClothToMesh cpForces cpGetSolverAttr cpPanel cpProperty cpRigidCollisionFilter cpSeam cpSetEdit cpSetSolverAttr cpSolver cpSolverTypes cpTool cpUpdateClothUVs createDisplayLayer createDrawCtx createEditor createLayeredPsdFile createMotionField createNewShelf createNode createRenderLayer createSubdivRegion cross crossProduct ctxAbort ctxCompletion ctxEditMode ctxTraverse currentCtx currentTime currentTimeCtx currentUnit curve curveAddPtCtx curveCVCtx curveEPCtx curveEditorCtx curveIntersect curveMoveEPCtx curveOnSurface curveSketchCtx cutKey cycleCheck cylinder dagPose date defaultLightListCheckBox defaultNavigation defineDataServer defineVirtualDevice deformer deg_to_rad delete deleteAttr deleteShadingGroupsAndMaterials deleteShelfTab deleteUI deleteUnusedBrushes delrandstr detachCurve detachDeviceAttr detachSurface deviceEditor 
devicePanel dgInfo dgdirty dgeval dgtimer dimWhen directKeyCtx directionalLight dirmap dirname disable disconnectAttr disconnectJoint diskCache displacementToPoly displayAffected displayColor displayCull displayLevelOfDetail displayPref displayRGBColor displaySmoothness displayStats displayString displaySurface distanceDimContext distanceDimension doBlur dolly dollyCtx dopeSheetEditor dot dotProduct doubleProfileBirailSurface drag dragAttrContext draggerContext dropoffLocator duplicate duplicateCurve duplicateSurface dynCache dynControl dynExport dynExpression dynGlobals dynPaintEditor dynParticleCtx dynPref dynRelEdPanel dynRelEditor dynamicLoad editAttrLimits editDisplayLayerGlobals editDisplayLayerMembers editRenderLayerAdjustment editRenderLayerGlobals editRenderLayerMembers editor editorTemplate effector emit emitter enableDevice encodeString endString endsWith env equivalent equivalentTol erf error eval evalDeferred evalEcho event exactWorldBoundingBox exclusiveLightCheckBox exec executeForEachObject exists exp expression expressionEditorListen extendCurve extendSurface extrude fcheck fclose feof fflush fgetline fgetword file fileBrowserDialog fileDialog fileExtension fileInfo filetest filletCurve filter filterCurve filterExpand filterStudioImport findAllIntersections findAnimCurves findKeyframe findMenuItem findRelatedSkinCluster finder firstParentOf fitBspline flexor floatEq floatField floatFieldGrp floatScrollBar floatSlider floatSlider2 floatSliderButtonGrp floatSliderGrp floor flow fluidCacheInfo fluidEmitter fluidVoxelInfo flushUndo fmod fontDialog fopen formLayout format fprint frameLayout fread freeFormFillet frewind fromNativePath fwrite gamma gauss geometryConstraint getApplicationVersionAsFloat getAttr getClassification getDefaultBrush getFileList getFluidAttr getInputDeviceRange getMayaPanelTypes getModifiers getPanel getParticleAttr getPluginResource getenv getpid glRender glRenderEditor globalStitch gmatch goal gotoBindPose grabColor gradientControl gradientControlNoAttr graphDollyCtx graphSelectContext graphTrackCtx gravity grid gridLayout group groupObjectsByName HfAddAttractorToAS HfAssignAS HfBuildEqualMap HfBuildFurFiles HfBuildFurImages HfCancelAFR HfConnectASToHF HfCreateAttractor HfDeleteAS HfEditAS HfPerformCreateAS HfRemoveAttractorFromAS HfSelectAttached HfSelectAttractors HfUnAssignAS hardenPointCurve hardware hardwareRenderPanel headsUpDisplay headsUpMessage help helpLine hermite hide hilite hitTest hotBox hotkey hotkeyCheck hsv_to_rgb hudButton hudSlider hudSliderButton hwReflectionMap hwRender hwRenderLoad hyperGraph hyperPanel hyperShade hypot iconTextButton iconTextCheckBox iconTextRadioButton iconTextRadioCollection iconTextScrollList iconTextStaticLabel ikHandle ikHandleCtx ikHandleDisplayScale ikSolver ikSplineHandleCtx ikSystem ikSystemInfo ikfkDisplayMethod illustratorCurves image imfPlugins inheritTransform insertJoint insertJointCtx insertKeyCtx insertKnotCurve insertKnotSurface instance instanceable instancer intField intFieldGrp intScrollBar intSlider intSliderGrp interToUI internalVar intersect iprEngine isAnimCurve isConnected isDirty isParentOf isSameObject isTrue isValidObjectName isValidString isValidUiName isolateSelect itemFilter itemFilterAttr itemFilterRender itemFilterType joint jointCluster jointCtx jointDisplayScale jointLattice keyTangent keyframe keyframeOutliner keyframeRegionCurrentTimeCtx keyframeRegionDirectKeyCtx keyframeRegionDollyCtx keyframeRegionInsertKeyCtx keyframeRegionMoveKeyCtx keyframeRegionScaleKeyCtx 
keyframeRegionSelectKeyCtx keyframeRegionSetKeyCtx keyframeRegionTrackCtx keyframeStats lassoContext lattice latticeDeformKeyCtx launch launchImageEditor layerButton layeredShaderPort layeredTexturePort layout layoutDialog lightList lightListEditor lightListPanel lightlink lineIntersection linearPrecision linstep listAnimatable listAttr listCameras listConnections listDeviceAttachments listHistory listInputDeviceAxes listInputDeviceButtons listInputDevices listMenuAnnotation listNodeTypes listPanelCategories listRelatives listSets listTransforms listUnselected listerEditor loadFluid loadNewShelf loadPlugin loadPluginLanguageResources loadPrefObjects localizedPanelLabel lockNode loft log longNameOf lookThru ls lsThroughFilter lsType lsUI Mayatomr mag makeIdentity makeLive makePaintable makeRoll makeSingleSurface makeTubeOn makebot manipMoveContext manipMoveLimitsCtx manipOptions manipRotateContext manipRotateLimitsCtx manipScaleContext manipScaleLimitsCtx marker match max memory menu menuBarLayout menuEditor menuItem menuItemToShelf menuSet menuSetPref messageLine min minimizeApp mirrorJoint modelCurrentTimeCtx modelEditor modelPanel mouse movIn movOut move moveIKtoFK moveKeyCtx moveVertexAlongDirection multiProfileBirailSurface mute nParticle nameCommand nameField namespace namespaceInfo newPanelItems newton nodeCast nodeIconButton nodeOutliner nodePreset nodeType noise nonLinear normalConstraint normalize nurbsBoolean nurbsCopyUVSet nurbsCube nurbsEditUV nurbsPlane nurbsSelect nurbsSquare nurbsToPoly nurbsToPolygonsPref nurbsToSubdiv nurbsToSubdivPref nurbsUVSet nurbsViewDirectionVector objExists objectCenter objectLayer objectType objectTypeUI obsoleteProc oceanNurbsPreviewPlane offsetCurve offsetCurveOnSurface offsetSurface openGLExtension openMayaPref optionMenu optionMenuGrp optionVar orbit orbitCtx orientConstraint outlinerEditor outlinerPanel overrideModifier paintEffectsDisplay pairBlend palettePort paneLayout panel panelConfiguration panelHistory paramDimContext paramDimension paramLocator parent parentConstraint particle particleExists particleInstancer particleRenderInfo partition pasteKey pathAnimation pause pclose percent performanceOptions pfxstrokes pickWalk picture pixelMove planarSrf plane play playbackOptions playblast plugAttr plugNode pluginInfo pluginResourceUtil pointConstraint pointCurveConstraint pointLight pointMatrixMult pointOnCurve pointOnSurface pointPosition poleVectorConstraint polyAppend polyAppendFacetCtx polyAppendVertex polyAutoProjection polyAverageNormal polyAverageVertex polyBevel polyBlendColor polyBlindData polyBoolOp polyBridgeEdge polyCacheMonitor polyCheck polyChipOff polyClipboard polyCloseBorder polyCollapseEdge polyCollapseFacet polyColorBlindData polyColorDel polyColorPerVertex polyColorSet polyCompare polyCone polyCopyUV polyCrease polyCreaseCtx polyCreateFacet polyCreateFacetCtx polyCube polyCut polyCutCtx polyCylinder polyCylindricalProjection polyDelEdge polyDelFacet polyDelVertex polyDuplicateAndConnect polyDuplicateEdge polyEditUV polyEditUVShell polyEvaluate polyExtrudeEdge polyExtrudeFacet polyExtrudeVertex polyFlipEdge polyFlipUV polyForceUV polyGeoSampler polyHelix polyInfo polyInstallAction polyLayoutUV polyListComponentConversion polyMapCut polyMapDel polyMapSew polyMapSewMove polyMergeEdge polyMergeEdgeCtx polyMergeFacet polyMergeFacetCtx polyMergeUV polyMergeVertex polyMirrorFace polyMoveEdge polyMoveFacet polyMoveFacetUV polyMoveUV polyMoveVertex polyNormal polyNormalPerVertex polyNormalizeUV polyOptUvs polyOptions polyOutput 
polyPipe polyPlanarProjection polyPlane polyPlatonicSolid polyPoke polyPrimitive polyPrism polyProjection polyPyramid polyQuad polyQueryBlindData polyReduce polySelect polySelectConstraint polySelectConstraintMonitor polySelectCtx polySelectEditCtx polySeparate polySetToFaceNormal polySewEdge polyShortestPathCtx polySmooth polySoftEdge polySphere polySphericalProjection polySplit polySplitCtx polySplitEdge polySplitRing polySplitVertex polyStraightenUVBorder polySubdivideEdge polySubdivideFacet polyToSubdiv polyTorus polyTransfer polyTriangulate polyUVSet polyUnite polyWedgeFace popen popupMenu pose pow preloadRefEd print progressBar progressWindow projFileViewer projectCurve projectTangent projectionContext projectionManip promptDialog propModCtx propMove psdChannelOutliner psdEditTextureFile psdExport psdTextureFile putenv pwd python querySubdiv quit rad_to_deg radial radioButton radioButtonGrp radioCollection radioMenuItemCollection rampColorPort rand randomizeFollicles randstate rangeControl readTake rebuildCurve rebuildSurface recordAttr recordDevice redo reference referenceEdit referenceQuery refineSubdivSelectionList refresh refreshAE registerPluginResource rehash reloadImage removeJoint removeMultiInstance removePanelCategory rename renameAttr renameSelectionList renameUI render renderGlobalsNode renderInfo renderLayerButton renderLayerParent renderLayerPostProcess renderLayerUnparent renderManip renderPartition renderQualityNode renderSettings renderThumbnailUpdate renderWindowEditor renderWindowSelectContext renderer reorder reorderDeformers requires reroot resampleFluid resetAE resetPfxToPolyCamera resetTool resolutionNode retarget reverseCurve reverseSurface revolve rgb_to_hsv rigidBody rigidSolver roll rollCtx rootOf rot rotate rotationInterpolation roundConstantRadius rowColumnLayout rowLayout runTimeCommand runup sampleImage saveAllShelves saveAttrPreset saveFluid saveImage saveInitialState saveMenu savePrefObjects savePrefs saveShelf saveToolSettings scale scaleBrushBrightness scaleComponents scaleConstraint scaleKey scaleKeyCtx sceneEditor sceneUIReplacement scmh scriptCtx scriptEditorInfo scriptJob scriptNode scriptTable scriptToShelf scriptedPanel scriptedPanelType scrollField scrollLayout sculpt searchPathArray seed selLoadSettings select selectContext selectCurveCV selectKey selectKeyCtx selectKeyframeRegionCtx selectMode selectPref selectPriority selectType selectedNodes selectionConnection separator setAttr setAttrEnumResource setAttrMapping setAttrNiceNameResource setConstraintRestPosition setDefaultShadingGroup setDrivenKeyframe setDynamic setEditCtx setEditor setFluidAttr setFocus setInfinity setInputDeviceMapping setKeyCtx setKeyPath setKeyframe setKeyframeBlendshapeTargetWts setMenuMode setNodeNiceNameResource setNodeTypeFlag setParent setParticleAttr setPfxToPolyCamera setPluginResource setProject setStampDensity setStartupMessage setState setToolTo setUITemplate setXformManip sets shadingConnection shadingGeometryRelCtx shadingLightRelCtx shadingNetworkCompare shadingNode shapeCompare shelfButton shelfLayout shelfTabLayout shellField shortNameOf showHelp showHidden showManipCtx showSelectionInTitle showShadingGroupAttrEditor showWindow sign simplify sin singleProfileBirailSurface size sizeBytes skinCluster skinPercent smoothCurve smoothTangentSurface smoothstep snap2to2 snapKey snapMode snapTogetherCtx snapshot soft softMod softModCtx sort sound soundControl source spaceLocator sphere sphrand spotLight spotLightPreviewPort spreadSheetEditor spring sqrt 
squareSurface srtContext stackTrace startString startsWith stitchAndExplodeShell stitchSurface stitchSurfacePoints strcmp stringArrayCatenate stringArrayContains stringArrayCount stringArrayInsertAtIndex stringArrayIntersector stringArrayRemove stringArrayRemoveAtIndex stringArrayRemoveDuplicates stringArrayRemoveExact stringArrayToString stringToStringArray strip stripPrefixFromName stroke subdAutoProjection subdCleanTopology subdCollapse subdDuplicateAndConnect subdEditUV subdListComponentConversion subdMapCut subdMapSewMove subdMatchTopology subdMirror subdToBlind subdToPoly subdTransferUVsToCache subdiv subdivCrease subdivDisplaySmoothness substitute substituteAllString substituteGeometry substring surface surfaceSampler surfaceShaderList swatchDisplayPort switchTable symbolButton symbolCheckBox sysFile system tabLayout tan tangentConstraint texLatticeDeformContext texManipContext texMoveContext texMoveUVShellContext texRotateContext texScaleContext texSelectContext texSelectShortestPathCtx texSmudgeUVContext texWinToolCtx text textCurves textField textFieldButtonGrp textFieldGrp textManip textScrollList textToShelf textureDisplacePlane textureHairColor texturePlacementContext textureWindow threadCount threePointArcCtx timeControl timePort timerX toNativePath toggle toggleAxis toggleWindowVisibility tokenize tokenizeList tolerance tolower toolButton toolCollection toolDropped toolHasOptions toolPropertyWindow torus toupper trace track trackCtx transferAttributes transformCompare transformLimits translator trim trunc truncateFluidCache truncateHairCache tumble tumbleCtx turbulence twoPointArcCtx uiRes uiTemplate unassignInputDevice undo undoInfo ungroup uniform unit unloadPlugin untangleUV untitledFileName untrim upAxis updateAE userCtx uvLink uvSnapshot validateShelfName vectorize view2dToolCtx viewCamera viewClipPlane viewFit viewHeadOn viewLookAt viewManip viewPlace viewSet visor volumeAxis vortex waitCursor warning webBrowser webBrowserPrefs whatIs window windowPref wire wireContext workspace wrinkle wrinkleContext writeTake xbmLangPathList xform",illegal:"</",contains:[e.C_NUMBER_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{className:"string",begin:"`",end:"`",contains:[e.BACKSLASH_ESCAPE]},{begin:/[$%@](\^\w\b|#\w+|[^\s\w{]|\{\w+\}|\w+)/},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]}}return q_=t,q_}var $_,KT;function KAe(){if(KT)return $_;KT=1;function t(e){const n={keyword:"module use_module import_module include_module end_module initialise mutable initialize finalize finalise interface implementation pred mode func type inst solver any_pred any_func is semidet det nondet multi erroneous failure cc_nondet cc_multi typeclass instance where pragma promise external trace atomic or_else require_complete_switch require_det require_semidet require_multi require_nondet require_cc_multi require_cc_nondet require_erroneous require_failure",meta:"inline no_inline type_spec source_file fact_table obsolete memo loop_check minimal_model terminates does_not_terminate check_termination promise_equivalent_clauses foreign_proc foreign_decl foreign_code foreign_type foreign_import_module foreign_export_enum foreign_export foreign_enum may_call_mercury will_not_call_mercury thread_safe not_thread_safe maybe_thread_safe promise_pure promise_semipure tabled_for_io local untrailed trailed attach_to_io_state can_pass_as_mercury_type stable will_not_throw_exception may_modify_trail will_not_modify_trail may_duplicate may_not_duplicate affects_liveness does_not_affect_liveness doesnt_affect_liveness no_sharing unknown_sharing sharing",built_in:"some all not if then else true fail false try catch catch_any semidet_true semidet_false semidet_fail impure_true impure semipure"},i=e.COMMENT("%","$"),o={className:"number",begin:"0'.\\|0[box][0-9a-fA-F]*"},s=e.inherit(e.APOS_STRING_MODE,{relevance:0}),l=e.inherit(e.QUOTE_STRING_MODE,{relevance:0}),c={className:"subst",begin:"\\\\[abfnrtv]\\|\\\\x[0-9a-fA-F]*\\\\\\|%[-+# *.0-9]*[dioxXucsfeEgGp]",relevance:0};return l.contains=l.contains.slice(),l.contains.push(c),{name:"Mercury",aliases:["m","moo"],keywords:n,contains:[{className:"built_in",variants:[{begin:"<=>"},{begin:"<=",relevance:0},{begin:"=>",relevance:0},{begin:"/\\\\"},{begin:"\\\\/"}]},{className:"built_in",variants:[{begin:":-\\|-->"},{begin:"=",relevance:0}]},i,e.C_BLOCK_COMMENT_MODE,o,e.NUMBER_MODE,s,l,{begin:/:-/},{begin:/\.$/}]}}return $_=t,$_}var H_,QT;function QAe(){if(QT)return H_;QT=1;function t(e){return{name:"MIPS Assembly",case_insensitive:!0,aliases:["mips"],keywords:{$pattern:"\\.?"+e.IDENT_RE,meta:".2byte .4byte .align .ascii .asciz .balign .byte .code .data .else .end .endif .endm .endr .equ .err .exitm .extern .global .hword .if .ifdef .ifndef .include .irp .long .macro .rept .req .section .set .skip .space .text .word .ltorg ",built_in:"$0 $1 $2 $3 $4 $5 $6 $7 $8 $9 $10 $11 $12 $13 $14 $15 $16 $17 $18 $19 $20 $21 $22 $23 $24 $25 $26 $27 $28 $29 $30 $31 zero at v0 v1 a0 a1 a2 a3 a4 a5 a6 a7 t0 t1 t2 t3 t4 t5 t6 t7 t8 t9 s0 s1 s2 s3 s4 s5 s6 s7 s8 k0 k1 gp sp fp ra $f0 $f1 $f2 $f3 $f4 $f5 $f6 $f7 $f8 $f9 $f10 $f11 $f12 $f13 $f14 $f15 $f16 $f17 $f18 $f19 $f20 $f21 $f22 $f23 $f24 $f25 $f26 $f27 $f28 $f29 $f30 $f31 Index Random EntryLo0 EntryLo1 Context PageMask Wired EntryHi HWREna BadVAddr Count Compare SR IntCtl SRSCtl SRSMap Cause EPC PRId EBase Config Config1 Config2 Config3 LLAddr Debug DEPC DESAVE CacheErr ECC ErrorEPC TagLo DataLo TagHi DataHi WatchLo WatchHi PerfCtl PerfCnt 
"},contains:[{className:"keyword",begin:"\\b(addi?u?|andi?|b(al)?|beql?|bgez(al)?l?|bgtzl?|blezl?|bltz(al)?l?|bnel?|cl[oz]|divu?|ext|ins|j(al)?|jalr(\\.hb)?|jr(\\.hb)?|lbu?|lhu?|ll|lui|lw[lr]?|maddu?|mfhi|mflo|movn|movz|move|msubu?|mthi|mtlo|mul|multu?|nop|nor|ori?|rotrv?|sb|sc|se[bh]|sh|sllv?|slti?u?|srav?|srlv?|subu?|sw[lr]?|xori?|wsbh|abs\\.[sd]|add\\.[sd]|alnv.ps|bc1[ft]l?|c\\.(s?f|un|u?eq|[ou]lt|[ou]le|ngle?|seq|l[et]|ng[et])\\.[sd]|(ceil|floor|round|trunc)\\.[lw]\\.[sd]|cfc1|cvt\\.d\\.[lsw]|cvt\\.l\\.[dsw]|cvt\\.ps\\.s|cvt\\.s\\.[dlw]|cvt\\.s\\.p[lu]|cvt\\.w\\.[dls]|div\\.[ds]|ldx?c1|luxc1|lwx?c1|madd\\.[sd]|mfc1|mov[fntz]?\\.[ds]|msub\\.[sd]|mth?c1|mul\\.[ds]|neg\\.[ds]|nmadd\\.[ds]|nmsub\\.[ds]|p[lu][lu]\\.ps|recip\\.fmt|r?sqrt\\.[ds]|sdx?c1|sub\\.[ds]|suxc1|swx?c1|break|cache|d?eret|[de]i|ehb|mfc0|mtc0|pause|prefx?|rdhwr|rdpgpr|sdbbp|ssnop|synci?|syscall|teqi?|tgei?u?|tlb(p|r|w[ir])|tlti?u?|tnei?|wait|wrpgpr)",end:"\\s"},e.COMMENT("[;#](?!\\s*$)","$"),e.C_BLOCK_COMMENT_MODE,e.QUOTE_STRING_MODE,{className:"string",begin:"'",end:"[^\\\\]'",relevance:0},{className:"title",begin:"\\|",end:"\\|",illegal:"\\n",relevance:0},{className:"number",variants:[{begin:"0x[0-9a-f]+"},{begin:"\\b-?\\d+"}],relevance:0},{className:"symbol",variants:[{begin:"^\\s*[a-z_\\.\\$][a-z0-9_\\.\\$]+:"},{begin:"^\\s*[0-9]+:"},{begin:"[0-9]+[bf]"}],relevance:0}],illegal:/\//}}return H_=t,H_}var z_,XT;function XAe(){if(XT)return z_;XT=1;function t(e){return{name:"Mizar",keywords:"environ vocabularies notations constructors definitions registrations theorems schemes requirements begin end definition registration cluster existence pred func defpred deffunc theorem proof let take assume then thus hence ex for st holds consider reconsider such that and in provided of as from be being by means equals implies iff redefine define now not or attr is mode suppose per cases set thesis contradiction scheme reserve struct correctness compatibility coherence symmetry assymetry reflexivity irreflexivity connectedness uniqueness commutativity idempotence involutiveness projectivity",contains:[e.COMMENT("::","$")]}}return z_=t,z_}var V_,ZT;function ZAe(){if(ZT)return V_;ZT=1;function t(e){const 
n=e.regex,i=["abs","accept","alarm","and","atan2","bind","binmode","bless","break","caller","chdir","chmod","chomp","chop","chown","chr","chroot","close","closedir","connect","continue","cos","crypt","dbmclose","dbmopen","defined","delete","die","do","dump","each","else","elsif","endgrent","endhostent","endnetent","endprotoent","endpwent","endservent","eof","eval","exec","exists","exit","exp","fcntl","fileno","flock","for","foreach","fork","format","formline","getc","getgrent","getgrgid","getgrnam","gethostbyaddr","gethostbyname","gethostent","getlogin","getnetbyaddr","getnetbyname","getnetent","getpeername","getpgrp","getpriority","getprotobyname","getprotobynumber","getprotoent","getpwent","getpwnam","getpwuid","getservbyname","getservbyport","getservent","getsockname","getsockopt","given","glob","gmtime","goto","grep","gt","hex","if","index","int","ioctl","join","keys","kill","last","lc","lcfirst","length","link","listen","local","localtime","log","lstat","lt","ma","map","mkdir","msgctl","msgget","msgrcv","msgsnd","my","ne","next","no","not","oct","open","opendir","or","ord","our","pack","package","pipe","pop","pos","print","printf","prototype","push","q|0","qq","quotemeta","qw","qx","rand","read","readdir","readline","readlink","readpipe","recv","redo","ref","rename","require","reset","return","reverse","rewinddir","rindex","rmdir","say","scalar","seek","seekdir","select","semctl","semget","semop","send","setgrent","sethostent","setnetent","setpgrp","setpriority","setprotoent","setpwent","setservent","setsockopt","shift","shmctl","shmget","shmread","shmwrite","shutdown","sin","sleep","socket","socketpair","sort","splice","split","sprintf","sqrt","srand","stat","state","study","sub","substr","symlink","syscall","sysopen","sysread","sysseek","system","syswrite","tell","telldir","tie","tied","time","times","tr","truncate","uc","ucfirst","umask","undef","unless","unlink","unpack","unshift","untie","until","use","utime","values","vec","wait","waitpid","wantarray","warn","when","while","write","x|0","xor","y|0"],o=/[dualxmsipngr]{0,12}/,s={$pattern:/[\w.]+/,keyword:i.join(" ")},l={className:"subst",begin:"[$@]\\{",end:"\\}",keywords:s},c={begin:/->\{/,end:/\}/},d={variants:[{begin:/\$\d/},{begin:n.concat(/[$%@](\^\w\b|#\w+(::\w+)*|\{\w+\}|\w+(::\w*)*)/,"(?![A-Za-z])(?![@$%])")},{begin:/[$%@][^\s\w{]/,relevance:0}]},_=[e.BACKSLASH_ESCAPE,l,d],p=[/!/,/\//,/\|/,/\?/,/'/,/"/,/#/],g=(S,C,h="\\1")=>{const T=h==="\\1"?h:n.concat(h,C);return n.concat(n.concat("(?:",S,")"),C,/(?:\\.|[^\\\/])*?/,T,/(?:\\.|[^\\\/])*?/,h,o)},E=(S,C,h)=>n.concat(n.concat("(?:",S,")"),C,/(?:\\.|[^\\\/])*?/,h,o),f=[d,e.HASH_COMMENT_MODE,e.COMMENT(/^=\w/,/=cut/,{endsWithParent:!0}),c,{className:"string",contains:_,variants:[{begin:"q[qwxr]?\\s*\\(",end:"\\)",relevance:5},{begin:"q[qwxr]?\\s*\\[",end:"\\]",relevance:5},{begin:"q[qwxr]?\\s*\\{",end:"\\}",relevance:5},{begin:"q[qwxr]?\\s*\\|",end:"\\|",relevance:5},{begin:"q[qwxr]?\\s*<",end:">",relevance:5},{begin:"qw\\s+q",end:"q",relevance:5},{begin:"'",end:"'",contains:[e.BACKSLASH_ESCAPE]},{begin:'"',end:'"'},{begin:"`",end:"`",contains:[e.BACKSLASH_ESCAPE]},{begin:/\{\w+\}/,relevance:0},{begin:"-?\\w+\\s*=>",relevance:0}]},{className:"number",begin:"(\\b0[0-7_]+)|(\\b0x[0-9a-fA-F_]+)|(\\b[1-9][0-9_]*(\\.[0-9_]+)?)|[0_]\\b",relevance:0},{begin:"(\\/\\/|"+e.RE_STARTERS_RE+"|\\b(split|return|print|reverse|grep)\\b)\\s*",keywords:"split return print reverse 
grep",relevance:0,contains:[e.HASH_COMMENT_MODE,{className:"regexp",variants:[{begin:g("s|tr|y",n.either(...p,{capture:!0}))},{begin:g("s|tr|y","\\(","\\)")},{begin:g("s|tr|y","\\[","\\]")},{begin:g("s|tr|y","\\{","\\}")}],relevance:2},{className:"regexp",variants:[{begin:/(m|qr)\/\//,relevance:0},{begin:E("(?:m|qr)?",/\//,/\//)},{begin:E("m|qr",n.either(...p,{capture:!0}),/\1/)},{begin:E("m|qr",/\(/,/\)/)},{begin:E("m|qr",/\[/,/\]/)},{begin:E("m|qr",/\{/,/\}/)}]}]},{className:"function",beginKeywords:"sub",end:"(\\s*\\(.*?\\))?[;{]",excludeEnd:!0,relevance:5,contains:[e.TITLE_MODE]},{begin:"-\\w\\b",relevance:0},{begin:"^__DATA__$",end:"^__END__$",subLanguage:"mojolicious",contains:[{begin:"^@@.*",end:"$",className:"comment"}]}];return l.contains=f,c.contains=f,{name:"Perl",aliases:["pl","pm"],keywords:s,contains:f}}return V_=t,V_}var W_,JT;function JAe(){if(JT)return W_;JT=1;function t(e){return{name:"Mojolicious",subLanguage:"xml",contains:[{className:"meta",begin:"^__(END|DATA)__$"},{begin:"^\\s*%{1,2}={0,2}",end:"$",subLanguage:"perl"},{begin:"<%{1,2}={0,2}",end:"={0,1}%>",subLanguage:"perl",excludeBegin:!0,excludeEnd:!0}]}}return W_=t,W_}var K_,jT;function jAe(){if(jT)return K_;jT=1;function t(e){const n={className:"number",relevance:0,variants:[{begin:"[$][a-fA-F0-9]+"},e.NUMBER_MODE]},i={variants:[{match:[/(function|method)/,/\s+/,e.UNDERSCORE_IDENT_RE]}],scope:{1:"keyword",3:"title.function"}},o={variants:[{match:[/(class|interface|extends|implements)/,/\s+/,e.UNDERSCORE_IDENT_RE]}],scope:{1:"keyword",3:"title.class"}};return{name:"Monkey",case_insensitive:!0,keywords:{keyword:["public","private","property","continue","exit","extern","new","try","catch","eachin","not","abstract","final","select","case","default","const","local","global","field","end","if","then","else","elseif","endif","while","wend","repeat","until","forever","for","to","step","next","return","module","inline","throw","import","and","or","shl","shr","mod"],built_in:["DebugLog","DebugStop","Error","Print","ACos","ACosr","ASin","ASinr","ATan","ATan2","ATan2r","ATanr","Abs","Abs","Ceil","Clamp","Clamp","Cos","Cosr","Exp","Floor","Log","Max","Max","Min","Min","Pow","Sgn","Sgn","Sin","Sinr","Sqrt","Tan","Tanr","Seed","PI","HALFPI","TWOPI"],literal:["true","false","null"]},illegal:/\/\*/,contains:[e.COMMENT("#rem","#end"),e.COMMENT("'","$",{relevance:0}),i,o,{className:"variable.language",begin:/\b(self|super)\b/},{className:"meta",begin:/\s*#/,end:"$",keywords:{keyword:"if else elseif endif end then"}},{match:[/^\s*/,/strict\b/],scope:{2:"meta"}},{beginKeywords:"alias",end:"=",contains:[e.UNDERSCORE_TITLE_MODE]},e.QUOTE_STRING_MODE,n]}}return K_=t,K_}var Q_,ev;function eye(){if(ev)return Q_;ev=1;function t(e){const n={keyword:"if then not for in while do return else elseif break continue switch and or unless when class extends super local import export from using",literal:"true false nil",built_in:"_G _VERSION assert collectgarbage dofile error getfenv getmetatable ipairs load loadfile loadstring module next pairs pcall print rawequal rawget rawset require select setfenv setmetatable tonumber tostring type unpack xpcall coroutine debug io math os package string 
table"},i="[A-Za-z$_][0-9A-Za-z$_]*",o={className:"subst",begin:/#\{/,end:/\}/,keywords:n},s=[e.inherit(e.C_NUMBER_MODE,{starts:{end:"(\\s*/)?",relevance:0}}),{className:"string",variants:[{begin:/'/,end:/'/,contains:[e.BACKSLASH_ESCAPE]},{begin:/"/,end:/"/,contains:[e.BACKSLASH_ESCAPE,o]}]},{className:"built_in",begin:"@__"+e.IDENT_RE},{begin:"@"+e.IDENT_RE},{begin:e.IDENT_RE+"\\\\"+e.IDENT_RE}];o.contains=s;const l=e.inherit(e.TITLE_MODE,{begin:i}),c="(\\(.*\\)\\s*)?\\B[-=]>",d={className:"params",begin:"\\([^\\(]",returnBegin:!0,contains:[{begin:/\(/,end:/\)/,keywords:n,contains:["self"].concat(s)}]};return{name:"MoonScript",aliases:["moon"],keywords:n,illegal:/\/\*/,contains:s.concat([e.COMMENT("--","$"),{className:"function",begin:"^\\s*"+i+"\\s*=\\s*"+c,end:"[-=]>",returnBegin:!0,contains:[l,d]},{begin:/[\(,:=]\s*/,relevance:0,contains:[{className:"function",begin:c,end:"[-=]>",returnBegin:!0,contains:[d]}]},{className:"class",beginKeywords:"class",end:"$",illegal:/[:="\[\]]/,contains:[{beginKeywords:"extends",endsWithParent:!0,illegal:/[:="\[\]]/,contains:[l]},l]},{className:"name",begin:i+":",end:":",returnBegin:!0,returnEnd:!0,relevance:0}])}}return Q_=t,Q_}var X_,tv;function tye(){if(tv)return X_;tv=1;function t(e){return{name:"N1QL",case_insensitive:!0,contains:[{beginKeywords:"build create index delete drop explain infer|10 insert merge prepare select update upsert|10",end:/;/,keywords:{keyword:["all","alter","analyze","and","any","array","as","asc","begin","between","binary","boolean","break","bucket","build","by","call","case","cast","cluster","collate","collection","commit","connect","continue","correlate","cover","create","database","dataset","datastore","declare","decrement","delete","derived","desc","describe","distinct","do","drop","each","element","else","end","every","except","exclude","execute","exists","explain","fetch","first","flatten","for","force","from","function","grant","group","gsi","having","if","ignore","ilike","in","include","increment","index","infer","inline","inner","insert","intersect","into","is","join","key","keys","keyspace","known","last","left","let","letting","like","limit","lsm","map","mapping","matched","materialized","merge","minus","namespace","nest","not","number","object","offset","on","option","or","order","outer","over","parse","partition","password","path","pool","prepare","primary","private","privilege","procedure","public","raw","realm","reduce","rename","return","returning","revoke","right","role","rollback","satisfies","schema","select","self","semi","set","show","some","start","statistics","string","system","then","to","transaction","trigger","truncate","under","union","unique","unknown","unnest","unset","update","upsert","use","user","using","validate","value","valued","values","via","view","when","where","while","with","within","work","xor"],literal:["true","false","null","missing|5"],built_in:["array_agg","array_append","array_concat","array_contains","array_count","array_distinct","array_ifnull","array_length","array_max","array_min","array_position","array_prepend","array_put","array_range","array_remove","array_repeat","array_replace","array_reverse","array_sort","array_sum","avg","count","max","min","sum","greatest","least","ifmissing","ifmissingornull","ifnull","missingif","nullif","ifinf","ifnan","ifnanorinf","naninf","neginfif","posinfif","clock_millis","clock_str","date_add_millis","date_add_str","date_diff_millis","date_diff_str","date_part_millis","date_part_str","date_trunc_millis","date_trunc_str","duration_to_str","mi
llis","str_to_millis","millis_to_str","millis_to_utc","millis_to_zone_name","now_millis","now_str","str_to_duration","str_to_utc","str_to_zone_name","decode_json","encode_json","encoded_size","poly_length","base64","base64_encode","base64_decode","meta","uuid","abs","acos","asin","atan","atan2","ceil","cos","degrees","e","exp","ln","log","floor","pi","power","radians","random","round","sign","sin","sqrt","tan","trunc","object_length","object_names","object_pairs","object_inner_pairs","object_values","object_inner_values","object_add","object_put","object_remove","object_unwrap","regexp_contains","regexp_like","regexp_position","regexp_replace","contains","initcap","length","lower","ltrim","position","repeat","replace","rtrim","split","substr","title","trim","upper","isarray","isatom","isboolean","isnumber","isobject","isstring","type","toarray","toatom","toboolean","tonumber","toobject","tostring"]},contains:[{className:"string",begin:"'",end:"'",contains:[e.BACKSLASH_ESCAPE]},{className:"string",begin:'"',end:'"',contains:[e.BACKSLASH_ESCAPE]},{className:"symbol",begin:"`",end:"`",contains:[e.BACKSLASH_ESCAPE]},e.C_NUMBER_MODE,e.C_BLOCK_COMMENT_MODE]},e.C_BLOCK_COMMENT_MODE]}}return X_=t,X_}var Z_,nv;function nye(){if(nv)return Z_;nv=1;function t(e){const n={match:[/^\s*(?=\S)/,/[^:]+/,/:\s*/,/$/],className:{2:"attribute",3:"punctuation"}},i={match:[/^\s*(?=\S)/,/[^:]*[^: ]/,/[ ]*:/,/[ ]/,/.*$/],className:{2:"attribute",3:"punctuation",5:"string"}},o={match:[/^\s*/,/>/,/[ ]/,/.*$/],className:{2:"punctuation",4:"string"}},s={variants:[{match:[/^\s*/,/-/,/[ ]/,/.*$/]},{match:[/^\s*/,/-$/]}],className:{2:"bullet",4:"string"}};return{name:"Nested Text",aliases:["nt"],contains:[e.inherit(e.HASH_COMMENT_MODE,{begin:/^\s*(?=#)/,excludeBegin:!0}),s,o,n,i]}}return Z_=t,Z_}var J_,rv;function rye(){if(rv)return J_;rv=1;function t(e){const n=e.regex,i={className:"variable",variants:[{begin:/\$\d+/},{begin:/\$\{\w+\}/},{begin:n.concat(/[$@]/,e.UNDERSCORE_IDENT_RE)}]},s={endsWithParent:!0,keywords:{$pattern:/[a-z_]{2,}|\/dev\/poll/,literal:["on","off","yes","no","true","false","none","blocked","debug","info","notice","warn","error","crit","select","break","last","permanent","redirect","kqueue","rtsig","epoll","poll","/dev/poll"]},relevance:0,illegal:"=>",contains:[e.HASH_COMMENT_MODE,{className:"string",contains:[e.BACKSLASH_ESCAPE,i],variants:[{begin:/"/,end:/"/},{begin:/'/,end:/'/}]},{begin:"([a-z]+):/",end:"\\s",endsWithParent:!0,excludeEnd:!0,contains:[i]},{className:"regexp",contains:[e.BACKSLASH_ESCAPE,i],variants:[{begin:"\\s\\^",end:"\\s|\\{|;",returnEnd:!0},{begin:"~\\*?\\s+",end:"\\s|\\{|;",returnEnd:!0},{begin:"\\*(\\.[a-z\\-]+)+"},{begin:"([a-z\\-]+\\.)+\\*"}]},{className:"number",begin:"\\b\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}(:\\d{1,5})?\\b"},{className:"number",begin:"\\b\\d+[kKmMgGdshdwy]?\\b",relevance:0},i]};return{name:"Nginx config",aliases:["nginxconf"],contains:[e.HASH_COMMENT_MODE,{beginKeywords:"upstream location",end:/;|\{/,contains:s.contains,keywords:{section:"upstream location"}},{className:"section",begin:n.concat(e.UNDERSCORE_IDENT_RE+n.lookahead(/\s+\{/)),relevance:0},{begin:n.lookahead(e.UNDERSCORE_IDENT_RE+"\\s"),end:";|\\{",contains:[{className:"attribute",begin:e.UNDERSCORE_IDENT_RE,starts:s}],relevance:0}],illegal:"[^\\s\\}\\{]"}}return J_=t,J_}var j_,iv;function iye(){if(iv)return j_;iv=1;function 
t(e){return{name:"Nim",keywords:{keyword:["addr","and","as","asm","bind","block","break","case","cast","const","continue","converter","discard","distinct","div","do","elif","else","end","enum","except","export","finally","for","from","func","generic","guarded","if","import","in","include","interface","is","isnot","iterator","let","macro","method","mixin","mod","nil","not","notin","object","of","or","out","proc","ptr","raise","ref","return","shared","shl","shr","static","template","try","tuple","type","using","var","when","while","with","without","xor","yield"],literal:["true","false"],type:["int","int8","int16","int32","int64","uint","uint8","uint16","uint32","uint64","float","float32","float64","bool","char","string","cstring","pointer","expr","stmt","void","auto","any","range","array","openarray","varargs","seq","set","clong","culong","cchar","cschar","cshort","cint","csize","clonglong","cfloat","cdouble","clongdouble","cuchar","cushort","cuint","culonglong","cstringarray","semistatic"],built_in:["stdin","stdout","stderr","result"]},contains:[{className:"meta",begin:/\{\./,end:/\.\}/,relevance:10},{className:"string",begin:/[a-zA-Z]\w*"/,end:/"/,contains:[{begin:/""/}]},{className:"string",begin:/([a-zA-Z]\w*)?"""/,end:/"""/},e.QUOTE_STRING_MODE,{className:"type",begin:/\b[A-Z]\w+\b/,relevance:0},{className:"number",relevance:0,variants:[{begin:/\b(0[xX][0-9a-fA-F][_0-9a-fA-F]*)('?[iIuU](8|16|32|64))?/},{begin:/\b(0o[0-7][_0-7]*)('?[iIuUfF](8|16|32|64))?/},{begin:/\b(0(b|B)[01][_01]*)('?[iIuUfF](8|16|32|64))?/},{begin:/\b(\d[_\d]*)('?[iIuUfF](8|16|32|64))?/}]},e.HASH_COMMENT_MODE]}}return j_=t,j_}var ep,av;function aye(){if(av)return ep;av=1;function t(e){const n={keyword:["rec","with","let","in","inherit","assert","if","else","then"],literal:["true","false","or","and","null"],built_in:["import","abort","baseNameOf","dirOf","isNull","builtins","map","removeAttrs","throw","toString","derivation"]},i={className:"subst",begin:/\$\{/,end:/\}/,keywords:n},o={className:"char.escape",begin:/''\$/},s={begin:/[a-zA-Z0-9-_]+(\s*=)/,returnBegin:!0,relevance:0,contains:[{className:"attr",begin:/\S+/,relevance:.2}]},l={className:"string",contains:[o,i],variants:[{begin:"''",end:"''"},{begin:'"',end:'"'}]},c=[e.NUMBER_MODE,e.HASH_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,l,s];return i.contains=c,{name:"Nix",aliases:["nixos"],keywords:n,contains:c}}return ep=t,ep}var tp,ov;function oye(){if(ov)return tp;ov=1;function t(e){return{name:"Node REPL",contains:[{className:"meta.prompt",starts:{end:/ |$/,starts:{end:"$",subLanguage:"javascript"}},variants:[{begin:/^>(?=[ ]|$)/},{begin:/^\.\.\.(?=[ ]|$)/}]}]}}return tp=t,tp}var np,sv;function sye(){if(sv)return np;sv=1;function t(e){const 
n=e.regex,i=["ADMINTOOLS","APPDATA","CDBURN_AREA","CMDLINE","COMMONFILES32","COMMONFILES64","COMMONFILES","COOKIES","DESKTOP","DOCUMENTS","EXEDIR","EXEFILE","EXEPATH","FAVORITES","FONTS","HISTORY","HWNDPARENT","INSTDIR","INTERNET_CACHE","LANGUAGE","LOCALAPPDATA","MUSIC","NETHOOD","OUTDIR","PICTURES","PLUGINSDIR","PRINTHOOD","PROFILE","PROGRAMFILES32","PROGRAMFILES64","PROGRAMFILES","QUICKLAUNCH","RECENT","RESOURCES_LOCALIZED","RESOURCES","SENDTO","SMPROGRAMS","SMSTARTUP","STARTMENU","SYSDIR","TEMP","TEMPLATES","VIDEOS","WINDIR"],o=["ARCHIVE","FILE_ATTRIBUTE_ARCHIVE","FILE_ATTRIBUTE_NORMAL","FILE_ATTRIBUTE_OFFLINE","FILE_ATTRIBUTE_READONLY","FILE_ATTRIBUTE_SYSTEM","FILE_ATTRIBUTE_TEMPORARY","HKCR","HKCU","HKDD","HKEY_CLASSES_ROOT","HKEY_CURRENT_CONFIG","HKEY_CURRENT_USER","HKEY_DYN_DATA","HKEY_LOCAL_MACHINE","HKEY_PERFORMANCE_DATA","HKEY_USERS","HKLM","HKPD","HKU","IDABORT","IDCANCEL","IDIGNORE","IDNO","IDOK","IDRETRY","IDYES","MB_ABORTRETRYIGNORE","MB_DEFBUTTON1","MB_DEFBUTTON2","MB_DEFBUTTON3","MB_DEFBUTTON4","MB_ICONEXCLAMATION","MB_ICONINFORMATION","MB_ICONQUESTION","MB_ICONSTOP","MB_OK","MB_OKCANCEL","MB_RETRYCANCEL","MB_RIGHT","MB_RTLREADING","MB_SETFOREGROUND","MB_TOPMOST","MB_USERICON","MB_YESNO","NORMAL","OFFLINE","READONLY","SHCTX","SHELL_CONTEXT","SYSTEM|TEMPORARY"],s=["addincludedir","addplugindir","appendfile","cd","define","delfile","echo","else","endif","error","execute","finalize","getdllversion","gettlbversion","if","ifdef","ifmacrodef","ifmacrondef","ifndef","include","insertmacro","macro","macroend","makensis","packhdr","searchparse","searchreplace","system","tempfile","undef","uninstfinalize","verbose","warning"],l={className:"variable.constant",begin:n.concat(/\$/,n.either(...i))},c={className:"variable",begin:/\$+\{[\!\w.:-]+\}/},d={className:"variable",begin:/\$+\w[\w\.]*/,illegal:/\(\)\{\}/},_={className:"variable",begin:/\$+\([\w^.:!-]+\)/},p={className:"params",begin:n.either(...o)},g={className:"keyword",begin:n.concat(/!/,n.either(...s))},E={className:"char.escape",begin:/\$(\\[nrt]|\$)/},f={className:"title.function",begin:/\w+::\w+/},S={className:"string",variants:[{begin:'"',end:'"'},{begin:"'",end:"'"},{begin:"`",end:"`"}],illegal:/\n/,contains:[E,l,c,d,_]},C=["Abort","AddBrandingImage","AddSize","AllowRootDirInstall","AllowSkipFiles","AutoCloseWindow","BGFont","BGGradient","BrandingText","BringToFront","Call","CallInstDLL","Caption","ChangeUI","CheckBitmap","ClearErrors","CompletedText","ComponentText","CopyFiles","CRCCheck","CreateDirectory","CreateFont","CreateShortCut","Delete","DeleteINISec","DeleteINIStr","DeleteRegKey","DeleteRegValue","DetailPrint","DetailsButtonText","DirText","DirVar","DirVerify","EnableWindow","EnumRegKey","EnumRegValue","Exch","Exec","ExecShell","ExecShellWait","ExecWait","ExpandEnvStrings","File","FileBufSize","FileClose","FileErrorText","FileOpen","FileRead","FileReadByte","FileReadUTF16LE","FileReadWord","FileWriteUTF16LE","FileSeek","FileWrite","FileWriteByte","FileWriteWord","FindClose","FindFirst","FindNext","FindWindow","FlushINI","GetCurInstType","GetCurrentAddress","GetDlgItem","GetDLLVersion","GetDLLVersionLocal","GetErrorLevel","GetFileTime","GetFileTimeLocal","GetFullPathName","GetFunctionAddress","GetInstDirError","GetKnownFolderPath","GetLabelAddress","GetTempFileName","GetWinVer","Goto","HideWindow","Icon","IfAbort","IfErrors","IfFileExists","IfRebootFlag","IfRtlLanguage","IfShellVarContextAll","IfSilent","InitPluginsDir","InstallButtonText","InstallColors","InstallDir","InstallDirRegKey","InstProgressFlags","InstTyp
e","InstTypeGetText","InstTypeSetText","Int64Cmp","Int64CmpU","Int64Fmt","IntCmp","IntCmpU","IntFmt","IntOp","IntPtrCmp","IntPtrCmpU","IntPtrOp","IsWindow","LangString","LicenseBkColor","LicenseData","LicenseForceSelection","LicenseLangString","LicenseText","LoadAndSetImage","LoadLanguageFile","LockWindow","LogSet","LogText","ManifestDPIAware","ManifestLongPathAware","ManifestMaxVersionTested","ManifestSupportedOS","MessageBox","MiscButtonText","Name|0","Nop","OutFile","Page","PageCallbacks","PEAddResource","PEDllCharacteristics","PERemoveResource","PESubsysVer","Pop","Push","Quit","ReadEnvStr","ReadINIStr","ReadRegDWORD","ReadRegStr","Reboot","RegDLL","Rename","RequestExecutionLevel","ReserveFile","Return","RMDir","SearchPath","SectionGetFlags","SectionGetInstTypes","SectionGetSize","SectionGetText","SectionIn","SectionSetFlags","SectionSetInstTypes","SectionSetSize","SectionSetText","SendMessage","SetAutoClose","SetBrandingImage","SetCompress","SetCompressor","SetCompressorDictSize","SetCtlColors","SetCurInstType","SetDatablockOptimize","SetDateSave","SetDetailsPrint","SetDetailsView","SetErrorLevel","SetErrors","SetFileAttributes","SetFont","SetOutPath","SetOverwrite","SetRebootFlag","SetRegView","SetShellVarContext","SetSilent","ShowInstDetails","ShowUninstDetails","ShowWindow","SilentInstall","SilentUnInstall","Sleep","SpaceTexts","StrCmp","StrCmpS","StrCpy","StrLen","SubCaption","Unicode","UninstallButtonText","UninstallCaption","UninstallIcon","UninstallSubCaption","UninstallText","UninstPage","UnRegDLL","Var","VIAddVersionKey","VIFileVersion","VIProductVersion","WindowIcon","WriteINIStr","WriteRegBin","WriteRegDWORD","WriteRegExpandStr","WriteRegMultiStr","WriteRegNone","WriteRegStr","WriteUninstaller","XPStyle"],h=["admin","all","auto","both","bottom","bzip2","colored","components","current","custom","directory","false","force","hide","highest","ifdiff","ifnewer","instfiles","lastused","leave","left","license","listonly","lzma","nevershow","none","normal","notset","off","on","open","print","right","show","silent","silentlog","smooth","textonly","top","true","try","un.components","un.custom","un.directory","un.instfiles","un.license","uninstConfirm","user","Win10","Win7","Win8","WinVista","zlib"],T={match:[/Function/,/\s+/,n.concat(/(\.)?/,e.IDENT_RE)],scope:{1:"keyword",3:"title.function"}},y={match:[/Var/,/\s+/,/(?:\/GLOBAL\s+)?/,/[A-Za-z][\w.]*/],scope:{1:"keyword",3:"params",4:"variable"}};return{name:"NSIS",case_insensitive:!0,keywords:{keyword:C,literal:h},contains:[e.HASH_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.COMMENT(";","$",{relevance:0}),y,T,{beginKeywords:"Function PageEx Section SectionGroup FunctionEnd SectionEnd"},S,g,c,d,_,p,f,e.NUMBER_MODE]}}return np=t,np}var rp,lv;function lye(){if(lv)return rp;lv=1;function t(e){const 
n={className:"built_in",begin:"\\b(AV|CA|CF|CG|CI|CL|CM|CN|CT|MK|MP|MTK|MTL|NS|SCN|SK|UI|WK|XC)\\w+"},i=/[a-zA-Z@][a-zA-Z0-9_]*/,d={"variable.language":["this","super"],$pattern:i,keyword:["while","export","sizeof","typedef","const","struct","for","union","volatile","static","mutable","if","do","return","goto","enum","else","break","extern","asm","case","default","register","explicit","typename","switch","continue","inline","readonly","assign","readwrite","self","@synchronized","id","typeof","nonatomic","IBOutlet","IBAction","strong","weak","copy","in","out","inout","bycopy","byref","oneway","__strong","__weak","__block","__autoreleasing","@private","@protected","@public","@try","@property","@end","@throw","@catch","@finally","@autoreleasepool","@synthesize","@dynamic","@selector","@optional","@required","@encode","@package","@import","@defs","@compatibility_alias","__bridge","__bridge_transfer","__bridge_retained","__bridge_retain","__covariant","__contravariant","__kindof","_Nonnull","_Nullable","_Null_unspecified","__FUNCTION__","__PRETTY_FUNCTION__","__attribute__","getter","setter","retain","unsafe_unretained","nonnull","nullable","null_unspecified","null_resettable","class","instancetype","NS_DESIGNATED_INITIALIZER","NS_UNAVAILABLE","NS_REQUIRES_SUPER","NS_RETURNS_INNER_POINTER","NS_INLINE","NS_AVAILABLE","NS_DEPRECATED","NS_ENUM","NS_OPTIONS","NS_SWIFT_UNAVAILABLE","NS_ASSUME_NONNULL_BEGIN","NS_ASSUME_NONNULL_END","NS_REFINED_FOR_SWIFT","NS_SWIFT_NAME","NS_SWIFT_NOTHROW","NS_DURING","NS_HANDLER","NS_ENDHANDLER","NS_VALUERETURN","NS_VOIDRETURN"],literal:["false","true","FALSE","TRUE","nil","YES","NO","NULL"],built_in:["dispatch_once_t","dispatch_queue_t","dispatch_sync","dispatch_async","dispatch_once"],type:["int","float","char","unsigned","signed","short","long","double","wchar_t","unichar","void","bool","BOOL","id|0","_Bool"]},_={$pattern:i,keyword:["@interface","@class","@protocol","@implementation"]};return{name:"Objective-C",aliases:["mm","objc","obj-c","obj-c++","objective-c++"],keywords:d,illegal:"/,end:/$/,illegal:"\\n"},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{className:"class",begin:"("+_.keyword.join("|")+")\\b",end:/(\{|$)/,excludeEnd:!0,keywords:_,contains:[e.UNDERSCORE_TITLE_MODE]},{begin:"\\."+e.UNDERSCORE_IDENT_RE,relevance:0}]}}return rp=t,rp}var ip,cv;function cye(){if(cv)return ip;cv=1;function t(e){return{name:"OCaml",aliases:["ml"],keywords:{$pattern:"[a-z_]\\w*!?",keyword:"and as assert asr begin class constraint do done downto else end exception external for fun function functor if in include inherit! inherit initializer land lazy let lor lsl lsr lxor match method!|10 method mod module mutable new object of open! open or private rec sig struct then to try type val! 
val virtual when while with parser value",built_in:"array bool bytes char exn|5 float int int32 int64 list lazy_t|5 nativeint|5 string unit in_channel out_channel ref",literal:"true false"},illegal:/\/\/|>>/,contains:[{className:"literal",begin:"\\[(\\|\\|)?\\]|\\(\\)",relevance:0},e.COMMENT("\\(\\*","\\*\\)",{contains:["self"]}),{className:"symbol",begin:"'[A-Za-z_](?!')[\\w']*"},{className:"type",begin:"`[A-Z][\\w']*"},{className:"type",begin:"\\b[A-Z][\\w']*",relevance:0},{begin:"[a-z_]\\w*'[\\w']*",relevance:0},e.inherit(e.APOS_STRING_MODE,{className:"string",relevance:0}),e.inherit(e.QUOTE_STRING_MODE,{illegal:null}),{className:"number",begin:"\\b(0[xX][a-fA-F0-9_]+[Lln]?|0[oO][0-7_]+[Lln]?|0[bB][01_]+[Lln]?|[0-9][0-9_]*([Lln]|(\\.[0-9_]*)?([eE][-+]?[0-9_]+)?)?)",relevance:0},{begin:/->/}]}}return ip=t,ip}var ap,uv;function uye(){if(uv)return ap;uv=1;function t(e){const n={className:"keyword",begin:"\\$(f[asn]|t|vp[rtd]|children)"},i={className:"literal",begin:"false|true|PI|undef"},o={className:"number",begin:"\\b\\d+(\\.\\d+)?(e-?\\d+)?",relevance:0},s=e.inherit(e.QUOTE_STRING_MODE,{illegal:null}),l={className:"meta",keywords:{keyword:"include use"},begin:"include|use <",end:">"},c={className:"params",begin:"\\(",end:"\\)",contains:["self",o,s,n,i]},d={begin:"[*!#%]",relevance:0},_={className:"function",beginKeywords:"module function",end:/=|\{/,contains:[c,e.UNDERSCORE_TITLE_MODE]};return{name:"OpenSCAD",aliases:["scad"],keywords:{keyword:"function module include use for intersection_for if else \\%",literal:"false true PI undef",built_in:"circle square polygon text sphere cube cylinder polyhedron translate rotate scale resize mirror multmatrix color offset hull minkowski union difference intersection abs sign sin cos tan acos asin atan atan2 floor round ceil ln log pow sqrt exp rands min max concat lookup str chr search version version_num norm cross parent_module echo import import_dxf dxf_linear_extrude linear_extrude rotate_extrude surface projection render children dxf_cross dxf_dim let assign"},contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,o,l,s,n,d,_]}}return ap=t,ap}var op,dv;function dye(){if(dv)return op;dv=1;function t(e){const n={$pattern:/\.?\w+/,keyword:"abstract add and array as asc aspect assembly async begin break block by case class concat const copy constructor continue create default delegate desc distinct div do downto dynamic each else empty end ensure enum equals event except exit extension external false final finalize finalizer finally flags for forward from function future global group has if implementation implements implies in index inherited inline interface into invariants is iterator join locked locking loop matching method mod module namespace nested new nil not notify nullable of old on operator or order out override parallel params partial pinned private procedure property protected public queryable raise read readonly record reintroduce remove repeat require result reverse sealed select self sequence set shl shr skip static step soft take then to true try tuple type union unit unsafe until uses using var virtual raises volatile where while with write xor yield await mapped deprecated stdcall cdecl pascal register safecall overload library platform reference packed strict published autoreleasepool selector strong weak 
unretained"},i=e.COMMENT(/\{/,/\}/,{relevance:0}),o=e.COMMENT("\\(\\*","\\*\\)",{relevance:10}),s={className:"string",begin:"'",end:"'",contains:[{begin:"''"}]},l={className:"string",begin:"(#\\d+)+"},c={beginKeywords:"function constructor destructor procedure method",end:"[:;]",keywords:"function constructor|10 destructor|10 procedure|10 method|10",contains:[e.inherit(e.TITLE_MODE,{scope:"title.function"}),{className:"params",begin:"\\(",end:"\\)",keywords:n,contains:[s,l]},i,o]},d={scope:"punctuation",match:/;/,relevance:0};return{name:"Oxygene",case_insensitive:!0,keywords:n,illegal:'("|\\$[G-Zg-z]|\\/\\*||->)',contains:[i,o,e.C_LINE_COMMENT_MODE,s,l,e.NUMBER_MODE,c,d]}}return op=t,op}var sp,_v;function _ye(){if(_v)return sp;_v=1;function t(e){const n=e.COMMENT(/\{/,/\}/,{contains:["self"]});return{name:"Parser3",subLanguage:"xml",relevance:0,contains:[e.COMMENT("^#","$"),e.COMMENT(/\^rem\{/,/\}/,{relevance:10,contains:[n]}),{className:"meta",begin:"^@(?:BASE|USE|CLASS|OPTIONS)$",relevance:10},{className:"title",begin:"@[\\w\\-]+\\[[\\w^;\\-]*\\](?:\\[[\\w^;\\-]*\\])?(?:.*)$"},{className:"variable",begin:/\$\{?[\w\-.:]+\}?/},{className:"keyword",begin:/\^[\w\-.:]+/},{className:"number",begin:"\\^#[0-9a-fA-F]+"},e.C_NUMBER_MODE]}}return sp=t,sp}var lp,pv;function pye(){if(pv)return lp;pv=1;function t(e){const n={className:"variable",begin:/\$[\w\d#@][\w\d_]*/,relevance:0},i={className:"variable",begin:/<(?!\/)/,end:/>/};return{name:"Packet Filter config",aliases:["pf.conf"],keywords:{$pattern:/[a-z0-9_<>-]+/,built_in:"block match pass load anchor|5 antispoof|10 set table",keyword:"in out log quick on rdomain inet inet6 proto from port os to route allow-opts divert-packet divert-reply divert-to flags group icmp-type icmp6-type label once probability recieved-on rtable prio queue tos tag tagged user keep fragment for os drop af-to|10 binat-to|10 nat-to|10 rdr-to|10 bitmask least-stats random round-robin source-hash static-port dup-to reply-to route-to parent bandwidth default min max qlimit block-policy debug fingerprints hostid limit loginterface optimization reassemble ruleset-optimization basic none profile skip state-defaults state-policy timeout const counters persist no modulate synproxy state|5 floating if-bound no-sync pflow|10 sloppy source-track global rule max-src-nodes max-src-states max-src-conn max-src-conn-rate overload flush scrub|5 max-mss min-ttl no-df|10 random-id",literal:"all any no-route self urpf-failed egress|5 unknown"},contains:[e.HASH_COMMENT_MODE,e.NUMBER_MODE,e.QUOTE_STRING_MODE,n,i]}}return lp=t,lp}var cp,mv;function mye(){if(mv)return cp;mv=1;function t(e){const n=e.COMMENT("--","$"),i="[a-zA-Z_][a-zA-Z_0-9$]*",o="\\$([a-zA-Z_]?|[a-zA-Z_][a-zA-Z_0-9]*)\\$",s="<<\\s*"+i+"\\s*>>",l="ABORT ALTER ANALYZE BEGIN CALL CHECKPOINT|10 CLOSE CLUSTER COMMENT COMMIT COPY CREATE DEALLOCATE DECLARE DELETE DISCARD DO DROP END EXECUTE EXPLAIN FETCH GRANT IMPORT INSERT LISTEN LOAD LOCK MOVE NOTIFY PREPARE REASSIGN|10 REFRESH REINDEX RELEASE RESET REVOKE ROLLBACK SAVEPOINT SECURITY SELECT SET SHOW START TRUNCATE UNLISTEN|10 UPDATE VACUUM|10 VALUES AGGREGATE COLLATION CONVERSION|10 DATABASE DEFAULT PRIVILEGES DOMAIN TRIGGER EXTENSION FOREIGN WRAPPER|10 TABLE FUNCTION GROUP LANGUAGE LARGE OBJECT MATERIALIZED VIEW OPERATOR CLASS FAMILY POLICY PUBLICATION|10 ROLE RULE SCHEMA SEQUENCE SERVER STATISTICS SUBSCRIPTION SYSTEM TABLESPACE CONFIGURATION DICTIONARY PARSER TEMPLATE TYPE USER MAPPING PREPARED ACCESS METHOD CAST AS TRANSFORM TRANSACTION OWNED TO INTO SESSION AUTHORIZATION 
INDEX PROCEDURE ASSERTION ALL ANALYSE AND ANY ARRAY ASC ASYMMETRIC|10 BOTH CASE CHECK COLLATE COLUMN CONCURRENTLY|10 CONSTRAINT CROSS DEFERRABLE RANGE DESC DISTINCT ELSE EXCEPT FOR FREEZE|10 FROM FULL HAVING ILIKE IN INITIALLY INNER INTERSECT IS ISNULL JOIN LATERAL LEADING LIKE LIMIT NATURAL NOT NOTNULL NULL OFFSET ON ONLY OR ORDER OUTER OVERLAPS PLACING PRIMARY REFERENCES RETURNING SIMILAR SOME SYMMETRIC TABLESAMPLE THEN TRAILING UNION UNIQUE USING VARIADIC|10 VERBOSE WHEN WHERE WINDOW WITH BY RETURNS INOUT OUT SETOF|10 IF STRICT CURRENT CONTINUE OWNER LOCATION OVER PARTITION WITHIN BETWEEN ESCAPE EXTERNAL INVOKER DEFINER WORK RENAME VERSION CONNECTION CONNECT TABLES TEMP TEMPORARY FUNCTIONS SEQUENCES TYPES SCHEMAS OPTION CASCADE RESTRICT ADD ADMIN EXISTS VALID VALIDATE ENABLE DISABLE REPLICA|10 ALWAYS PASSING COLUMNS PATH REF VALUE OVERRIDING IMMUTABLE STABLE VOLATILE BEFORE AFTER EACH ROW PROCEDURAL ROUTINE NO HANDLER VALIDATOR OPTIONS STORAGE OIDS|10 WITHOUT INHERIT DEPENDS CALLED INPUT LEAKPROOF|10 COST ROWS NOWAIT SEARCH UNTIL ENCRYPTED|10 PASSWORD CONFLICT|10 INSTEAD INHERITS CHARACTERISTICS WRITE CURSOR ALSO STATEMENT SHARE EXCLUSIVE INLINE ISOLATION REPEATABLE READ COMMITTED SERIALIZABLE UNCOMMITTED LOCAL GLOBAL SQL PROCEDURES RECURSIVE SNAPSHOT ROLLUP CUBE TRUSTED|10 INCLUDE FOLLOWING PRECEDING UNBOUNDED RANGE GROUPS UNENCRYPTED|10 SYSID FORMAT DELIMITER HEADER QUOTE ENCODING FILTER OFF FORCE_QUOTE FORCE_NOT_NULL FORCE_NULL COSTS BUFFERS TIMING SUMMARY DISABLE_PAGE_SKIPPING RESTART CYCLE GENERATED IDENTITY DEFERRED IMMEDIATE LEVEL LOGGED UNLOGGED OF NOTHING NONE EXCLUDE ATTRIBUTE USAGE ROUTINES TRUE FALSE NAN INFINITY ",c="SUPERUSER NOSUPERUSER CREATEDB NOCREATEDB CREATEROLE NOCREATEROLE INHERIT NOINHERIT LOGIN NOLOGIN REPLICATION NOREPLICATION BYPASSRLS NOBYPASSRLS ",d="ALIAS BEGIN CONSTANT DECLARE END EXCEPTION RETURN PERFORM|10 RAISE GET DIAGNOSTICS STACKED|10 FOREACH LOOP ELSIF EXIT WHILE REVERSE SLICE DEBUG LOG INFO NOTICE WARNING ASSERT OPEN ",_="BIGINT INT8 BIGSERIAL SERIAL8 BIT VARYING VARBIT BOOLEAN BOOL BOX BYTEA CHARACTER CHAR VARCHAR CIDR CIRCLE DATE DOUBLE PRECISION FLOAT8 FLOAT INET INTEGER INT INT4 INTERVAL JSON JSONB LINE LSEG|10 MACADDR MACADDR8 MONEY NUMERIC DEC DECIMAL PATH POINT POLYGON REAL FLOAT4 SMALLINT INT2 SMALLSERIAL|10 SERIAL2|10 SERIAL|10 SERIAL4|10 TEXT TIME ZONE TIMETZ|10 TIMESTAMP TIMESTAMPTZ|10 TSQUERY|10 TSVECTOR|10 TXID_SNAPSHOT|10 UUID XML NATIONAL NCHAR INT4RANGE|10 INT8RANGE|10 NUMRANGE|10 TSRANGE|10 TSTZRANGE|10 DATERANGE|10 ANYELEMENT ANYARRAY ANYNONARRAY ANYENUM ANYRANGE CSTRING INTERNAL RECORD PG_DDL_COMMAND VOID UNKNOWN OPAQUE REFCURSOR NAME OID REGPROC|10 REGPROCEDURE|10 REGOPER|10 REGOPERATOR|10 REGCLASS|10 REGTYPE|10 REGROLE|10 REGNAMESPACE|10 REGCONFIG|10 REGDICTIONARY|10 ",p=_.trim().split(" ").map(function(h){return h.split("|")[0]}).join("|"),g="CURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURRENT_CATALOG|10 CURRENT_DATE LOCALTIME LOCALTIMESTAMP CURRENT_ROLE|10 CURRENT_SCHEMA|10 SESSION_USER PUBLIC ",E="FOUND NEW OLD TG_NAME|10 TG_WHEN|10 TG_LEVEL|10 TG_OP|10 TG_RELID|10 TG_RELNAME|10 TG_TABLE_NAME|10 TG_TABLE_SCHEMA|10 TG_NARGS|10 TG_ARGV|10 TG_EVENT|10 TG_TAG|10 ROW_COUNT RESULT_OID|10 PG_CONTEXT|10 RETURNED_SQLSTATE COLUMN_NAME CONSTRAINT_NAME PG_DATATYPE_NAME|10 MESSAGE_TEXT TABLE_NAME SCHEMA_NAME PG_EXCEPTION_DETAIL|10 PG_EXCEPTION_HINT|10 PG_EXCEPTION_CONTEXT|10 ",f="SQLSTATE SQLERRM|10 SUCCESSFUL_COMPLETION WARNING DYNAMIC_RESULT_SETS_RETURNED IMPLICIT_ZERO_BIT_PADDING NULL_VALUE_ELIMINATED_IN_SET_FUNCTION 
PRIVILEGE_NOT_GRANTED PRIVILEGE_NOT_REVOKED STRING_DATA_RIGHT_TRUNCATION DEPRECATED_FEATURE NO_DATA NO_ADDITIONAL_DYNAMIC_RESULT_SETS_RETURNED SQL_STATEMENT_NOT_YET_COMPLETE CONNECTION_EXCEPTION CONNECTION_DOES_NOT_EXIST CONNECTION_FAILURE SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION SQLSERVER_REJECTED_ESTABLISHMENT_OF_SQLCONNECTION TRANSACTION_RESOLUTION_UNKNOWN PROTOCOL_VIOLATION TRIGGERED_ACTION_EXCEPTION FEATURE_NOT_SUPPORTED INVALID_TRANSACTION_INITIATION LOCATOR_EXCEPTION INVALID_LOCATOR_SPECIFICATION INVALID_GRANTOR INVALID_GRANT_OPERATION INVALID_ROLE_SPECIFICATION DIAGNOSTICS_EXCEPTION STACKED_DIAGNOSTICS_ACCESSED_WITHOUT_ACTIVE_HANDLER CASE_NOT_FOUND CARDINALITY_VIOLATION DATA_EXCEPTION ARRAY_SUBSCRIPT_ERROR CHARACTER_NOT_IN_REPERTOIRE DATETIME_FIELD_OVERFLOW DIVISION_BY_ZERO ERROR_IN_ASSIGNMENT ESCAPE_CHARACTER_CONFLICT INDICATOR_OVERFLOW INTERVAL_FIELD_OVERFLOW INVALID_ARGUMENT_FOR_LOGARITHM INVALID_ARGUMENT_FOR_NTILE_FUNCTION INVALID_ARGUMENT_FOR_NTH_VALUE_FUNCTION INVALID_ARGUMENT_FOR_POWER_FUNCTION INVALID_ARGUMENT_FOR_WIDTH_BUCKET_FUNCTION INVALID_CHARACTER_VALUE_FOR_CAST INVALID_DATETIME_FORMAT INVALID_ESCAPE_CHARACTER INVALID_ESCAPE_OCTET INVALID_ESCAPE_SEQUENCE NONSTANDARD_USE_OF_ESCAPE_CHARACTER INVALID_INDICATOR_PARAMETER_VALUE INVALID_PARAMETER_VALUE INVALID_REGULAR_EXPRESSION INVALID_ROW_COUNT_IN_LIMIT_CLAUSE INVALID_ROW_COUNT_IN_RESULT_OFFSET_CLAUSE INVALID_TABLESAMPLE_ARGUMENT INVALID_TABLESAMPLE_REPEAT INVALID_TIME_ZONE_DISPLACEMENT_VALUE INVALID_USE_OF_ESCAPE_CHARACTER MOST_SPECIFIC_TYPE_MISMATCH NULL_VALUE_NOT_ALLOWED NULL_VALUE_NO_INDICATOR_PARAMETER NUMERIC_VALUE_OUT_OF_RANGE SEQUENCE_GENERATOR_LIMIT_EXCEEDED STRING_DATA_LENGTH_MISMATCH STRING_DATA_RIGHT_TRUNCATION SUBSTRING_ERROR TRIM_ERROR UNTERMINATED_C_STRING ZERO_LENGTH_CHARACTER_STRING FLOATING_POINT_EXCEPTION INVALID_TEXT_REPRESENTATION INVALID_BINARY_REPRESENTATION BAD_COPY_FILE_FORMAT UNTRANSLATABLE_CHARACTER NOT_AN_XML_DOCUMENT INVALID_XML_DOCUMENT INVALID_XML_CONTENT INVALID_XML_COMMENT INVALID_XML_PROCESSING_INSTRUCTION INTEGRITY_CONSTRAINT_VIOLATION RESTRICT_VIOLATION NOT_NULL_VIOLATION FOREIGN_KEY_VIOLATION UNIQUE_VIOLATION CHECK_VIOLATION EXCLUSION_VIOLATION INVALID_CURSOR_STATE INVALID_TRANSACTION_STATE ACTIVE_SQL_TRANSACTION BRANCH_TRANSACTION_ALREADY_ACTIVE HELD_CURSOR_REQUIRES_SAME_ISOLATION_LEVEL INAPPROPRIATE_ACCESS_MODE_FOR_BRANCH_TRANSACTION INAPPROPRIATE_ISOLATION_LEVEL_FOR_BRANCH_TRANSACTION NO_ACTIVE_SQL_TRANSACTION_FOR_BRANCH_TRANSACTION READ_ONLY_SQL_TRANSACTION SCHEMA_AND_DATA_STATEMENT_MIXING_NOT_SUPPORTED NO_ACTIVE_SQL_TRANSACTION IN_FAILED_SQL_TRANSACTION IDLE_IN_TRANSACTION_SESSION_TIMEOUT INVALID_SQL_STATEMENT_NAME TRIGGERED_DATA_CHANGE_VIOLATION INVALID_AUTHORIZATION_SPECIFICATION INVALID_PASSWORD DEPENDENT_PRIVILEGE_DESCRIPTORS_STILL_EXIST DEPENDENT_OBJECTS_STILL_EXIST INVALID_TRANSACTION_TERMINATION SQL_ROUTINE_EXCEPTION FUNCTION_EXECUTED_NO_RETURN_STATEMENT MODIFYING_SQL_DATA_NOT_PERMITTED PROHIBITED_SQL_STATEMENT_ATTEMPTED READING_SQL_DATA_NOT_PERMITTED INVALID_CURSOR_NAME EXTERNAL_ROUTINE_EXCEPTION CONTAINING_SQL_NOT_PERMITTED MODIFYING_SQL_DATA_NOT_PERMITTED PROHIBITED_SQL_STATEMENT_ATTEMPTED READING_SQL_DATA_NOT_PERMITTED EXTERNAL_ROUTINE_INVOCATION_EXCEPTION INVALID_SQLSTATE_RETURNED NULL_VALUE_NOT_ALLOWED TRIGGER_PROTOCOL_VIOLATED SRF_PROTOCOL_VIOLATED EVENT_TRIGGER_PROTOCOL_VIOLATED SAVEPOINT_EXCEPTION INVALID_SAVEPOINT_SPECIFICATION INVALID_CATALOG_NAME INVALID_SCHEMA_NAME TRANSACTION_ROLLBACK TRANSACTION_INTEGRITY_CONSTRAINT_VIOLATION SERIALIZATION_FAILURE 
STATEMENT_COMPLETION_UNKNOWN DEADLOCK_DETECTED SYNTAX_ERROR_OR_ACCESS_RULE_VIOLATION SYNTAX_ERROR INSUFFICIENT_PRIVILEGE CANNOT_COERCE GROUPING_ERROR WINDOWING_ERROR INVALID_RECURSION INVALID_FOREIGN_KEY INVALID_NAME NAME_TOO_LONG RESERVED_NAME DATATYPE_MISMATCH INDETERMINATE_DATATYPE COLLATION_MISMATCH INDETERMINATE_COLLATION WRONG_OBJECT_TYPE GENERATED_ALWAYS UNDEFINED_COLUMN UNDEFINED_FUNCTION UNDEFINED_TABLE UNDEFINED_PARAMETER UNDEFINED_OBJECT DUPLICATE_COLUMN DUPLICATE_CURSOR DUPLICATE_DATABASE DUPLICATE_FUNCTION DUPLICATE_PREPARED_STATEMENT DUPLICATE_SCHEMA DUPLICATE_TABLE DUPLICATE_ALIAS DUPLICATE_OBJECT AMBIGUOUS_COLUMN AMBIGUOUS_FUNCTION AMBIGUOUS_PARAMETER AMBIGUOUS_ALIAS INVALID_COLUMN_REFERENCE INVALID_COLUMN_DEFINITION INVALID_CURSOR_DEFINITION INVALID_DATABASE_DEFINITION INVALID_FUNCTION_DEFINITION INVALID_PREPARED_STATEMENT_DEFINITION INVALID_SCHEMA_DEFINITION INVALID_TABLE_DEFINITION INVALID_OBJECT_DEFINITION WITH_CHECK_OPTION_VIOLATION INSUFFICIENT_RESOURCES DISK_FULL OUT_OF_MEMORY TOO_MANY_CONNECTIONS CONFIGURATION_LIMIT_EXCEEDED PROGRAM_LIMIT_EXCEEDED STATEMENT_TOO_COMPLEX TOO_MANY_COLUMNS TOO_MANY_ARGUMENTS OBJECT_NOT_IN_PREREQUISITE_STATE OBJECT_IN_USE CANT_CHANGE_RUNTIME_PARAM LOCK_NOT_AVAILABLE OPERATOR_INTERVENTION QUERY_CANCELED ADMIN_SHUTDOWN CRASH_SHUTDOWN CANNOT_CONNECT_NOW DATABASE_DROPPED SYSTEM_ERROR IO_ERROR UNDEFINED_FILE DUPLICATE_FILE SNAPSHOT_TOO_OLD CONFIG_FILE_ERROR LOCK_FILE_EXISTS FDW_ERROR FDW_COLUMN_NAME_NOT_FOUND FDW_DYNAMIC_PARAMETER_VALUE_NEEDED FDW_FUNCTION_SEQUENCE_ERROR FDW_INCONSISTENT_DESCRIPTOR_INFORMATION FDW_INVALID_ATTRIBUTE_VALUE FDW_INVALID_COLUMN_NAME FDW_INVALID_COLUMN_NUMBER FDW_INVALID_DATA_TYPE FDW_INVALID_DATA_TYPE_DESCRIPTORS FDW_INVALID_DESCRIPTOR_FIELD_IDENTIFIER FDW_INVALID_HANDLE FDW_INVALID_OPTION_INDEX FDW_INVALID_OPTION_NAME FDW_INVALID_STRING_LENGTH_OR_BUFFER_LENGTH FDW_INVALID_STRING_FORMAT FDW_INVALID_USE_OF_NULL_POINTER FDW_TOO_MANY_HANDLES FDW_OUT_OF_MEMORY FDW_NO_SCHEMAS FDW_OPTION_NAME_NOT_FOUND FDW_REPLY_HANDLE FDW_SCHEMA_NOT_FOUND FDW_TABLE_NOT_FOUND FDW_UNABLE_TO_CREATE_EXECUTION FDW_UNABLE_TO_CREATE_REPLY FDW_UNABLE_TO_ESTABLISH_CONNECTION PLPGSQL_ERROR RAISE_EXCEPTION NO_DATA_FOUND TOO_MANY_ROWS ASSERT_FAILURE INTERNAL_ERROR DATA_CORRUPTED INDEX_CORRUPTED ",C="ARRAY_AGG AVG BIT_AND BIT_OR BOOL_AND BOOL_OR COUNT EVERY JSON_AGG JSONB_AGG JSON_OBJECT_AGG JSONB_OBJECT_AGG MAX MIN MODE STRING_AGG SUM XMLAGG CORR COVAR_POP COVAR_SAMP REGR_AVGX REGR_AVGY REGR_COUNT REGR_INTERCEPT REGR_R2 REGR_SLOPE REGR_SXX REGR_SXY REGR_SYY STDDEV STDDEV_POP STDDEV_SAMP VARIANCE VAR_POP VAR_SAMP PERCENTILE_CONT PERCENTILE_DISC ROW_NUMBER RANK DENSE_RANK PERCENT_RANK CUME_DIST NTILE LAG LEAD FIRST_VALUE LAST_VALUE NTH_VALUE NUM_NONNULLS NUM_NULLS ABS CBRT CEIL CEILING DEGREES DIV EXP FLOOR LN LOG MOD PI POWER RADIANS ROUND SCALE SIGN SQRT TRUNC WIDTH_BUCKET RANDOM SETSEED ACOS ACOSD ASIN ASIND ATAN ATAND ATAN2 ATAN2D COS COSD COT COTD SIN SIND TAN TAND BIT_LENGTH CHAR_LENGTH CHARACTER_LENGTH LOWER OCTET_LENGTH OVERLAY POSITION SUBSTRING TREAT TRIM UPPER ASCII BTRIM CHR CONCAT CONCAT_WS CONVERT CONVERT_FROM CONVERT_TO DECODE ENCODE INITCAP LEFT LENGTH LPAD LTRIM MD5 PARSE_IDENT PG_CLIENT_ENCODING QUOTE_IDENT|10 QUOTE_LITERAL|10 QUOTE_NULLABLE|10 REGEXP_MATCH REGEXP_MATCHES REGEXP_REPLACE REGEXP_SPLIT_TO_ARRAY REGEXP_SPLIT_TO_TABLE REPEAT REPLACE REVERSE RIGHT RPAD RTRIM SPLIT_PART STRPOS SUBSTR TO_ASCII TO_HEX TRANSLATE OCTET_LENGTH GET_BIT GET_BYTE SET_BIT SET_BYTE TO_CHAR TO_DATE TO_NUMBER TO_TIMESTAMP AGE CLOCK_TIMESTAMP|10 
DATE_PART DATE_TRUNC ISFINITE JUSTIFY_DAYS JUSTIFY_HOURS JUSTIFY_INTERVAL MAKE_DATE MAKE_INTERVAL|10 MAKE_TIME MAKE_TIMESTAMP|10 MAKE_TIMESTAMPTZ|10 NOW STATEMENT_TIMESTAMP|10 TIMEOFDAY TRANSACTION_TIMESTAMP|10 ENUM_FIRST ENUM_LAST ENUM_RANGE AREA CENTER DIAMETER HEIGHT ISCLOSED ISOPEN NPOINTS PCLOSE POPEN RADIUS WIDTH BOX BOUND_BOX CIRCLE LINE LSEG PATH POLYGON ABBREV BROADCAST HOST HOSTMASK MASKLEN NETMASK NETWORK SET_MASKLEN TEXT INET_SAME_FAMILY INET_MERGE MACADDR8_SET7BIT ARRAY_TO_TSVECTOR GET_CURRENT_TS_CONFIG NUMNODE PLAINTO_TSQUERY PHRASETO_TSQUERY WEBSEARCH_TO_TSQUERY QUERYTREE SETWEIGHT STRIP TO_TSQUERY TO_TSVECTOR JSON_TO_TSVECTOR JSONB_TO_TSVECTOR TS_DELETE TS_FILTER TS_HEADLINE TS_RANK TS_RANK_CD TS_REWRITE TSQUERY_PHRASE TSVECTOR_TO_ARRAY TSVECTOR_UPDATE_TRIGGER TSVECTOR_UPDATE_TRIGGER_COLUMN XMLCOMMENT XMLCONCAT XMLELEMENT XMLFOREST XMLPI XMLROOT XMLEXISTS XML_IS_WELL_FORMED XML_IS_WELL_FORMED_DOCUMENT XML_IS_WELL_FORMED_CONTENT XPATH XPATH_EXISTS XMLTABLE XMLNAMESPACES TABLE_TO_XML TABLE_TO_XMLSCHEMA TABLE_TO_XML_AND_XMLSCHEMA QUERY_TO_XML QUERY_TO_XMLSCHEMA QUERY_TO_XML_AND_XMLSCHEMA CURSOR_TO_XML CURSOR_TO_XMLSCHEMA SCHEMA_TO_XML SCHEMA_TO_XMLSCHEMA SCHEMA_TO_XML_AND_XMLSCHEMA DATABASE_TO_XML DATABASE_TO_XMLSCHEMA DATABASE_TO_XML_AND_XMLSCHEMA XMLATTRIBUTES TO_JSON TO_JSONB ARRAY_TO_JSON ROW_TO_JSON JSON_BUILD_ARRAY JSONB_BUILD_ARRAY JSON_BUILD_OBJECT JSONB_BUILD_OBJECT JSON_OBJECT JSONB_OBJECT JSON_ARRAY_LENGTH JSONB_ARRAY_LENGTH JSON_EACH JSONB_EACH JSON_EACH_TEXT JSONB_EACH_TEXT JSON_EXTRACT_PATH JSONB_EXTRACT_PATH JSON_OBJECT_KEYS JSONB_OBJECT_KEYS JSON_POPULATE_RECORD JSONB_POPULATE_RECORD JSON_POPULATE_RECORDSET JSONB_POPULATE_RECORDSET JSON_ARRAY_ELEMENTS JSONB_ARRAY_ELEMENTS JSON_ARRAY_ELEMENTS_TEXT JSONB_ARRAY_ELEMENTS_TEXT JSON_TYPEOF JSONB_TYPEOF JSON_TO_RECORD JSONB_TO_RECORD JSON_TO_RECORDSET JSONB_TO_RECORDSET JSON_STRIP_NULLS JSONB_STRIP_NULLS JSONB_SET JSONB_INSERT JSONB_PRETTY CURRVAL LASTVAL NEXTVAL SETVAL COALESCE NULLIF GREATEST LEAST ARRAY_APPEND ARRAY_CAT ARRAY_NDIMS ARRAY_DIMS ARRAY_FILL ARRAY_LENGTH ARRAY_LOWER ARRAY_POSITION ARRAY_POSITIONS ARRAY_PREPEND ARRAY_REMOVE ARRAY_REPLACE ARRAY_TO_STRING ARRAY_UPPER CARDINALITY STRING_TO_ARRAY UNNEST ISEMPTY LOWER_INC UPPER_INC LOWER_INF UPPER_INF RANGE_MERGE GENERATE_SERIES GENERATE_SUBSCRIPTS CURRENT_DATABASE CURRENT_QUERY CURRENT_SCHEMA|10 CURRENT_SCHEMAS|10 INET_CLIENT_ADDR INET_CLIENT_PORT INET_SERVER_ADDR INET_SERVER_PORT ROW_SECURITY_ACTIVE FORMAT_TYPE TO_REGCLASS TO_REGPROC TO_REGPROCEDURE TO_REGOPER TO_REGOPERATOR TO_REGTYPE TO_REGNAMESPACE TO_REGROLE COL_DESCRIPTION OBJ_DESCRIPTION SHOBJ_DESCRIPTION TXID_CURRENT TXID_CURRENT_IF_ASSIGNED TXID_CURRENT_SNAPSHOT TXID_SNAPSHOT_XIP TXID_SNAPSHOT_XMAX TXID_SNAPSHOT_XMIN TXID_VISIBLE_IN_SNAPSHOT TXID_STATUS CURRENT_SETTING SET_CONFIG BRIN_SUMMARIZE_NEW_VALUES BRIN_SUMMARIZE_RANGE BRIN_DESUMMARIZE_RANGE GIN_CLEAN_PENDING_LIST SUPPRESS_REDUNDANT_UPDATES_TRIGGER LO_FROM_BYTEA LO_PUT LO_GET LO_CREAT LO_CREATE LO_UNLINK LO_IMPORT LO_EXPORT LOREAD LOWRITE GROUPING CAST ".trim().split(" ").map(function(h){return 
h.split("|")[0]}).join("|");return{name:"PostgreSQL",aliases:["postgres","postgresql"],supersetOf:"sql",case_insensitive:!0,keywords:{keyword:l+d+c,built_in:g+E+f},illegal:/:==|\W\s*\(\*|(^|\s)\$[a-z]|\{\{|[a-z]:\s*$|\.\.\.|TO:|DO:/,contains:[{className:"keyword",variants:[{begin:/\bTEXT\s*SEARCH\b/},{begin:/\b(PRIMARY|FOREIGN|FOR(\s+NO)?)\s+KEY\b/},{begin:/\bPARALLEL\s+(UNSAFE|RESTRICTED|SAFE)\b/},{begin:/\bSTORAGE\s+(PLAIN|EXTERNAL|EXTENDED|MAIN)\b/},{begin:/\bMATCH\s+(FULL|PARTIAL|SIMPLE)\b/},{begin:/\bNULLS\s+(FIRST|LAST)\b/},{begin:/\bEVENT\s+TRIGGER\b/},{begin:/\b(MAPPING|OR)\s+REPLACE\b/},{begin:/\b(FROM|TO)\s+(PROGRAM|STDIN|STDOUT)\b/},{begin:/\b(SHARE|EXCLUSIVE)\s+MODE\b/},{begin:/\b(LEFT|RIGHT)\s+(OUTER\s+)?JOIN\b/},{begin:/\b(FETCH|MOVE)\s+(NEXT|PRIOR|FIRST|LAST|ABSOLUTE|RELATIVE|FORWARD|BACKWARD)\b/},{begin:/\bPRESERVE\s+ROWS\b/},{begin:/\bDISCARD\s+PLANS\b/},{begin:/\bREFERENCING\s+(OLD|NEW)\b/},{begin:/\bSKIP\s+LOCKED\b/},{begin:/\bGROUPING\s+SETS\b/},{begin:/\b(BINARY|INSENSITIVE|SCROLL|NO\s+SCROLL)\s+(CURSOR|FOR)\b/},{begin:/\b(WITH|WITHOUT)\s+HOLD\b/},{begin:/\bWITH\s+(CASCADED|LOCAL)\s+CHECK\s+OPTION\b/},{begin:/\bEXCLUDE\s+(TIES|NO\s+OTHERS)\b/},{begin:/\bFORMAT\s+(TEXT|XML|JSON|YAML)\b/},{begin:/\bSET\s+((SESSION|LOCAL)\s+)?NAMES\b/},{begin:/\bIS\s+(NOT\s+)?UNKNOWN\b/},{begin:/\bSECURITY\s+LABEL\b/},{begin:/\bSTANDALONE\s+(YES|NO|NO\s+VALUE)\b/},{begin:/\bWITH\s+(NO\s+)?DATA\b/},{begin:/\b(FOREIGN|SET)\s+DATA\b/},{begin:/\bSET\s+(CATALOG|CONSTRAINTS)\b/},{begin:/\b(WITH|FOR)\s+ORDINALITY\b/},{begin:/\bIS\s+(NOT\s+)?DOCUMENT\b/},{begin:/\bXML\s+OPTION\s+(DOCUMENT|CONTENT)\b/},{begin:/\b(STRIP|PRESERVE)\s+WHITESPACE\b/},{begin:/\bNO\s+(ACTION|MAXVALUE|MINVALUE)\b/},{begin:/\bPARTITION\s+BY\s+(RANGE|LIST|HASH)\b/},{begin:/\bAT\s+TIME\s+ZONE\b/},{begin:/\bGRANTED\s+BY\b/},{begin:/\bRETURN\s+(QUERY|NEXT)\b/},{begin:/\b(ATTACH|DETACH)\s+PARTITION\b/},{begin:/\bFORCE\s+ROW\s+LEVEL\s+SECURITY\b/},{begin:/\b(INCLUDING|EXCLUDING)\s+(COMMENTS|CONSTRAINTS|DEFAULTS|IDENTITY|INDEXES|STATISTICS|STORAGE|ALL)\b/},{begin:/\bAS\s+(ASSIGNMENT|IMPLICIT|PERMISSIVE|RESTRICTIVE|ENUM|RANGE)\b/}]},{begin:/\b(FORMAT|FAMILY|VERSION)\s*\(/},{begin:/\bINCLUDE\s*\(/,keywords:"INCLUDE"},{begin:/\bRANGE(?!\s*(BETWEEN|UNBOUNDED|CURRENT|[-0-9]+))/},{begin:/\b(VERSION|OWNER|TEMPLATE|TABLESPACE|CONNECTION\s+LIMIT|PROCEDURE|RESTRICT|JOIN|PARSER|COPY|START|END|COLLATION|INPUT|ANALYZE|STORAGE|LIKE|DEFAULT|DELIMITER|ENCODING|COLUMN|CONSTRAINT|TABLE|SCHEMA)\s*=/},{begin:/\b(PG_\w+?|HAS_[A-Z_]+_PRIVILEGE)\b/,relevance:10},{begin:/\bEXTRACT\s*\(/,end:/\bFROM\b/,returnEnd:!0,keywords:{type:"CENTURY DAY DECADE DOW DOY EPOCH HOUR ISODOW ISOYEAR MICROSECONDS MILLENNIUM MILLISECONDS MINUTE MONTH QUARTER SECOND TIMEZONE TIMEZONE_HOUR TIMEZONE_MINUTE WEEK YEAR"}},{begin:/\b(XMLELEMENT|XMLPI)\s*\(\s*NAME/,keywords:{keyword:"NAME"}},{begin:/\b(XMLPARSE|XMLSERIALIZE)\s*\(\s*(DOCUMENT|CONTENT)/,keywords:{keyword:"DOCUMENT CONTENT"}},{beginKeywords:"CACHE INCREMENT MAXVALUE MINVALUE",end:e.C_NUMBER_RE,returnEnd:!0,keywords:"BY CACHE INCREMENT MAXVALUE MINVALUE"},{className:"type",begin:/\b(WITH|WITHOUT)\s+TIME\s+ZONE\b/},{className:"type",begin:/\bINTERVAL\s+(YEAR|MONTH|DAY|HOUR|MINUTE|SECOND)(\s+TO\s+(MONTH|HOUR|MINUTE|SECOND))?\b/},{begin:/\bRETURNS\s+(LANGUAGE_HANDLER|TRIGGER|EVENT_TRIGGER|FDW_HANDLER|INDEX_AM_HANDLER|TSM_HANDLER)\b/,keywords:{keyword:"RETURNS",type:"LANGUAGE_HANDLER TRIGGER EVENT_TRIGGER FDW_HANDLER INDEX_AM_HANDLER 
TSM_HANDLER"}},{begin:"\\b("+C+")\\s*\\("},{begin:"\\.("+p+")\\b"},{begin:"\\b("+p+")\\s+PATH\\b",keywords:{keyword:"PATH",type:_.replace("PATH ","")}},{className:"type",begin:"\\b("+p+")\\b"},{className:"string",begin:"'",end:"'",contains:[{begin:"''"}]},{className:"string",begin:"(e|E|u&|U&)'",end:"'",contains:[{begin:"\\\\."}],relevance:10},e.END_SAME_AS_BEGIN({begin:o,end:o,contains:[{subLanguage:["pgsql","perl","python","tcl","r","lua","java","php","ruby","bash","scheme","xml","json"],endsWithParent:!0}]}),{begin:'"',end:'"',contains:[{begin:'""'}]},e.C_NUMBER_MODE,e.C_BLOCK_COMMENT_MODE,n,{className:"meta",variants:[{begin:"%(ROW)?TYPE",relevance:10},{begin:"\\$\\d+"},{begin:"^#\\w",end:"$"}]},{className:"symbol",begin:s,relevance:10}]}}return cp=t,cp}var up,gv;function gye(){if(gv)return up;gv=1;function t(e){const n=e.regex,i=/(?![A-Za-z0-9])(?![$])/,o=n.concat(/[a-zA-Z_\x7f-\xff][a-zA-Z0-9_\x7f-\xff]*/,i),s=n.concat(/(\\?[A-Z][a-z0-9_\x7f-\xff]+|\\?[A-Z]+(?=[A-Z][a-z0-9_\x7f-\xff])){1,}/,i),l={scope:"variable",match:"\\$+"+o},c={scope:"meta",variants:[{begin:/<\?php/,relevance:10},{begin:/<\?=/},{begin:/<\?/,relevance:.1},{begin:/\?>/}]},d={scope:"subst",variants:[{begin:/\$\w+/},{begin:/\{\$/,end:/\}/}]},_=e.inherit(e.APOS_STRING_MODE,{illegal:null}),p=e.inherit(e.QUOTE_STRING_MODE,{illegal:null,contains:e.QUOTE_STRING_MODE.contains.concat(d)}),g={begin:/<<<[ \t]*(?:(\w+)|"(\w+)")\n/,end:/[ \t]*(\w+)\b/,contains:e.QUOTE_STRING_MODE.contains.concat(d),"on:begin":(L,J)=>{J.data._beginMatch=L[1]||L[2]},"on:end":(L,J)=>{J.data._beginMatch!==L[1]&&J.ignoreMatch()}},E=e.END_SAME_AS_BEGIN({begin:/<<<[ \t]*'(\w+)'\n/,end:/[ \t]*(\w+)\b/}),f=`[ -]`,S={scope:"string",variants:[p,_,g,E]},C={scope:"number",variants:[{begin:"\\b0[bB][01]+(?:_[01]+)*\\b"},{begin:"\\b0[oO][0-7]+(?:_[0-7]+)*\\b"},{begin:"\\b0[xX][\\da-fA-F]+(?:_[\\da-fA-F]+)*\\b"},{begin:"(?:\\b\\d+(?:_\\d+)*(\\.(?:\\d+(?:_\\d+)*))?|\\B\\.\\d+)(?:[eE][+-]?\\d+)?"}],relevance:0},h=["false","null","true"],T=["__CLASS__","__DIR__","__FILE__","__FUNCTION__","__COMPILER_HALT_OFFSET__","__LINE__","__METHOD__","__NAMESPACE__","__TRAIT__","die","echo","exit","include","include_once","print","require","require_once","array","abstract","and","as","binary","bool","boolean","break","callable","case","catch","class","clone","const","continue","declare","default","do","double","else","elseif","empty","enddeclare","endfor","endforeach","endif","endswitch","endwhile","enum","eval","extends","final","finally","float","for","foreach","from","global","goto","if","implements","instanceof","insteadof","int","integer","interface","isset","iterable","list","match|0","mixed","new","never","object","or","private","protected","public","readonly","real","return","string","switch","throw","trait","try","unset","use","var","void","while","xor","yield"],N=["Error|0","AppendIterator","ArgumentCountError","ArithmeticError","ArrayIterator","ArrayObject","AssertionError","BadFunctionCallException","BadMethodCallException","CachingIterator","CallbackFilterIterator","CompileError","Countable","DirectoryIterator","DivisionByZeroError","DomainException","EmptyIterator","ErrorException","Exception","FilesystemIterator","FilterIterator","GlobIterator","InfiniteIterator","InvalidArgumentException","IteratorIterator","LengthException","LimitIterator","LogicException","MultipleIterator","NoRewindIterator","OutOfBoundsException","OutOfRangeException","OuterIterator","OverflowException","ParentIterator","ParseError","RangeException","RecursiveArrayIterator","RecursiveCachin
gIterator","RecursiveCallbackFilterIterator","RecursiveDirectoryIterator","RecursiveFilterIterator","RecursiveIterator","RecursiveIteratorIterator","RecursiveRegexIterator","RecursiveTreeIterator","RegexIterator","RuntimeException","SeekableIterator","SplDoublyLinkedList","SplFileInfo","SplFileObject","SplFixedArray","SplHeap","SplMaxHeap","SplMinHeap","SplObjectStorage","SplObserver","SplPriorityQueue","SplQueue","SplStack","SplSubject","SplTempFileObject","TypeError","UnderflowException","UnexpectedValueException","UnhandledMatchError","ArrayAccess","BackedEnum","Closure","Fiber","Generator","Iterator","IteratorAggregate","Serializable","Stringable","Throwable","Traversable","UnitEnum","WeakReference","WeakMap","Directory","__PHP_Incomplete_Class","parent","php_user_filter","self","static","stdClass"],x={keyword:T,literal:(L=>{const J=[];return L.forEach(re=>{J.push(re),re.toLowerCase()===re?J.push(re.toUpperCase()):J.push(re.toLowerCase())}),J})(h),built_in:N},P=L=>L.map(J=>J.replace(/\|\d+$/,"")),D={variants:[{match:[/new/,n.concat(f,"+"),n.concat("(?!",P(N).join("\\b|"),"\\b)"),s],scope:{1:"keyword",4:"title.class"}}]},k=n.concat(o,"\\b(?!\\()"),U={variants:[{match:[n.concat(/::/,n.lookahead(/(?!class\b)/)),k],scope:{2:"variable.constant"}},{match:[/::/,/class/],scope:{2:"variable.language"}},{match:[s,n.concat(/::/,n.lookahead(/(?!class\b)/)),k],scope:{1:"title.class",3:"variable.constant"}},{match:[s,n.concat("::",n.lookahead(/(?!class\b)/))],scope:{1:"title.class"}},{match:[s,/::/,/class/],scope:{1:"title.class",3:"variable.language"}}]},W={scope:"attr",match:n.concat(o,n.lookahead(":"),n.lookahead(/(?!::)/))},z={relevance:0,begin:/\(/,end:/\)/,keywords:x,contains:[W,l,U,e.C_BLOCK_COMMENT_MODE,S,C,D]},K={relevance:0,match:[/\b/,n.concat("(?!fn\\b|function\\b|",P(T).join("\\b|"),"|",P(N).join("\\b|"),"\\b)"),o,n.concat(f,"*"),n.lookahead(/(?=\()/)],scope:{3:"title.function.invoke"},contains:[z]};z.contains.push(K);const Ee=[W,U,e.C_BLOCK_COMMENT_MODE,S,C,D],oe={begin:n.concat(/#\[\s*/,s),beginScope:"meta",end:/]/,endScope:"meta",keywords:{literal:h,keyword:["new","array"]},contains:[{begin:/\[/,end:/]/,keywords:{literal:h,keyword:["new","array"]},contains:["self",...Ee]},...Ee,{scope:"meta",match:s}]};return{case_insensitive:!1,keywords:x,contains:[oe,e.HASH_COMMENT_MODE,e.COMMENT("//","$"),e.COMMENT("/\\*","\\*/",{contains:[{scope:"doctag",match:"@[A-Za-z]+"}]}),{match:/__halt_compiler\(\);/,keywords:"__halt_compiler",starts:{scope:"comment",end:e.MATCH_NOTHING_RE,contains:[{match:/\?>/,scope:"meta",endsParent:!0}]}},c,{scope:"variable.language",match:/\$this\b/},l,K,U,{match:[/const/,/\s/,o],scope:{1:"keyword",3:"variable.constant"}},D,{scope:"function",relevance:0,beginKeywords:"fn function",end:/[;{]/,excludeEnd:!0,illegal:"[$%\\[]",contains:[{beginKeywords:"use"},e.UNDERSCORE_TITLE_MODE,{begin:"=>",endsParent:!0},{scope:"params",begin:"\\(",end:"\\)",excludeBegin:!0,excludeEnd:!0,keywords:x,contains:["self",l,U,e.C_BLOCK_COMMENT_MODE,S,C]}]},{scope:"class",variants:[{beginKeywords:"enum",illegal:/[($"]/},{beginKeywords:"class interface trait",illegal:/[:($"]/}],relevance:0,end:/\{/,excludeEnd:!0,contains:[{beginKeywords:"extends implements"},e.UNDERSCORE_TITLE_MODE]},{beginKeywords:"namespace",relevance:0,end:";",illegal:/[.']/,contains:[e.inherit(e.UNDERSCORE_TITLE_MODE,{scope:"title.class"})]},{beginKeywords:"use",relevance:0,end:";",contains:[{match:/\b(as|const|function)\b/,scope:"keyword"},e.UNDERSCORE_TITLE_MODE]},S,C]}}return up=t,up}var dp,Ev;function Eye(){if(Ev)return 
dp;Ev=1;function t(e){return{name:"PHP template",subLanguage:"xml",contains:[{begin:/<\?(php|=)?/,end:/\?>/,subLanguage:"php",contains:[{begin:"/\\*",end:"\\*/",skip:!0},{begin:'b"',end:'"',skip:!0},{begin:"b'",end:"'",skip:!0},e.inherit(e.APOS_STRING_MODE,{illegal:null,className:null,contains:null,skip:!0}),e.inherit(e.QUOTE_STRING_MODE,{illegal:null,className:null,contains:null,skip:!0})]}]}}return dp=t,dp}var _p,fv;function fye(){if(fv)return _p;fv=1;function t(e){return{name:"Plain text",aliases:["text","txt"],disableAutodetect:!0}}return _p=t,_p}var pp,Sv;function Sye(){if(Sv)return pp;Sv=1;function t(e){const n={keyword:"actor addressof and as be break class compile_error compile_intrinsic consume continue delegate digestof do else elseif embed end error for fun if ifdef in interface is isnt lambda let match new not object or primitive recover repeat return struct then trait try type until use var where while with xor",meta:"iso val tag trn box ref",literal:"this false true"},i={className:"string",begin:'"""',end:'"""',relevance:10},o={className:"string",begin:'"',end:'"',contains:[e.BACKSLASH_ESCAPE]},s={className:"string",begin:"'",end:"'",contains:[e.BACKSLASH_ESCAPE],relevance:0},l={className:"type",begin:"\\b_?[A-Z][\\w]*",relevance:0},c={begin:e.IDENT_RE+"'",relevance:0};return{name:"Pony",keywords:n,contains:[l,i,o,s,c,{className:"number",begin:"(-?)(\\b0[xX][a-fA-F0-9]+|\\b0[bB][01]+|(\\b\\d+(_\\d+)?(\\.\\d*)?|\\.\\d+)([eE][-+]?\\d+)?)",relevance:0},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]}}return pp=t,pp}var mp,bv;function bye(){if(bv)return mp;bv=1;function t(e){const n=["string","char","byte","int","long","bool","decimal","single","double","DateTime","xml","array","hashtable","void"],i="Add|Clear|Close|Copy|Enter|Exit|Find|Format|Get|Hide|Join|Lock|Move|New|Open|Optimize|Pop|Push|Redo|Remove|Rename|Reset|Resize|Search|Select|Set|Show|Skip|Split|Step|Switch|Undo|Unlock|Watch|Backup|Checkpoint|Compare|Compress|Convert|ConvertFrom|ConvertTo|Dismount|Edit|Expand|Export|Group|Import|Initialize|Limit|Merge|Mount|Out|Publish|Restore|Save|Sync|Unpublish|Update|Approve|Assert|Build|Complete|Confirm|Deny|Deploy|Disable|Enable|Install|Invoke|Register|Request|Restart|Resume|Start|Stop|Submit|Suspend|Uninstall|Unregister|Wait|Debug|Measure|Ping|Repair|Resolve|Test|Trace|Connect|Disconnect|Read|Receive|Send|Write|Block|Grant|Protect|Revoke|Unblock|Unprotect|Use|ForEach|Sort|Tee|Where",o="-and|-as|-band|-bnot|-bor|-bxor|-casesensitive|-ccontains|-ceq|-cge|-cgt|-cle|-clike|-clt|-cmatch|-cne|-cnotcontains|-cnotlike|-cnotmatch|-contains|-creplace|-csplit|-eq|-exact|-f|-file|-ge|-gt|-icontains|-ieq|-ige|-igt|-ile|-ilike|-ilt|-imatch|-in|-ine|-inotcontains|-inotlike|-inotmatch|-ireplace|-is|-isnot|-isplit|-join|-le|-like|-lt|-match|-ne|-not|-notcontains|-notin|-notlike|-notmatch|-or|-regex|-replace|-shl|-shr|-split|-wildcard|-xor",s={$pattern:/-?[A-z\.\-]+\b/,keyword:"if else foreach return do while until elseif begin for trap data dynamicparam end break throw param continue finally in switch exit filter try process catch hidden static parameter",built_in:"ac asnp cat cd CFS chdir clc clear clhy cli clp cls clv cnsn compare copy cp cpi cpp curl cvpa dbp del diff dir dnsn ebp echo|0 epal epcsv epsn erase etsn exsn fc fhx fl ft fw gal gbp gc gcb gci gcm gcs gdr gerr ghy gi gin gjb gl gm gmo gp gps gpv group gsn gsnp gsv gtz gu gv gwmi h history icm iex ihy ii ipal ipcsv ipmo ipsn irm ise iwmi iwr kill lp ls man md measure mi mount move mp mv nal ndr ni nmo npssc nsn nv ogv oh popd ps pushd 
pwd r rbp rcjb rcsn rd rdr ren ri rjb rm rmdir rmo rni rnp rp rsn rsnp rujb rv rvpa rwmi sajb sal saps sasv sbp sc scb select set shcm si sl sleep sls sort sp spjb spps spsv start stz sujb sv swmi tee trcm type wget where wjb write"},l=/\w[\w\d]*((-)[\w\d]+)*/,c={begin:"`[\\s\\S]",relevance:0},d={className:"variable",variants:[{begin:/\$\B/},{className:"keyword",begin:/\$this/},{begin:/\$[\w\d][\w\d_:]*/}]},_={className:"literal",begin:/\$(null|true|false)\b/},p={className:"string",variants:[{begin:/"/,end:/"/},{begin:/@"/,end:/^"@/}],contains:[c,d,{className:"variable",begin:/\$[A-z]/,end:/[^A-z]/}]},g={className:"string",variants:[{begin:/'/,end:/'/},{begin:/@'/,end:/^'@/}]},E={className:"doctag",variants:[{begin:/\.(synopsis|description|example|inputs|outputs|notes|link|component|role|functionality)/},{begin:/\.(parameter|forwardhelptargetname|forwardhelpcategory|remotehelprunspace|externalhelp)\s+\S+/}]},f=e.inherit(e.COMMENT(null,null),{variants:[{begin:/#/,end:/$/},{begin:/<#/,end:/#>/}],contains:[E]}),S={className:"built_in",variants:[{begin:"(".concat(i,")+(-)[\\w\\d]+")}]},C={className:"class",beginKeywords:"class enum",end:/\s*[{]/,excludeEnd:!0,relevance:0,contains:[e.TITLE_MODE]},h={className:"function",begin:/function\s+/,end:/\s*\{|$/,excludeEnd:!0,returnBegin:!0,relevance:0,contains:[{begin:"function",relevance:0,className:"keyword"},{className:"title",begin:l,relevance:0},{begin:/\(/,end:/\)/,className:"params",relevance:0,contains:[d]}]},T={begin:/using\s/,end:/$/,returnBegin:!0,contains:[p,g,{className:"keyword",begin:/(using|assembly|command|module|namespace|type)/}]},N={variants:[{className:"operator",begin:"(".concat(o,")\\b")},{className:"literal",begin:/(-){1,2}[\w\d-]+/,relevance:0}]},y={className:"selector-tag",begin:/@\B/,relevance:0},x={className:"function",begin:/\[.*\]\s*[\w]+[ ]??\(/,end:/$/,returnBegin:!0,relevance:0,contains:[{className:"keyword",begin:"(".concat(s.keyword.toString().replace(/\s/g,"|"),")\\b"),endsParent:!0,relevance:0},e.inherit(e.TITLE_MODE,{endsParent:!0})]},P=[x,f,c,e.NUMBER_MODE,p,g,S,d,_,y],D={begin:/\[/,end:/\]/,excludeBegin:!0,excludeEnd:!0,relevance:0,contains:[].concat("self",P,{begin:"("+n.join("|")+")",className:"built_in",relevance:0},{className:"type",begin:/[\.\w\d]+/,relevance:0})};return x.contains.unshift(D),{name:"PowerShell",aliases:["pwsh","ps","ps1"],case_insensitive:!0,keywords:s,contains:P.concat(C,h,T,N,D)}}return mp=t,mp}var gp,hv;function hye(){if(hv)return gp;hv=1;function t(e){const 
n=e.regex,i=["displayHeight","displayWidth","mouseY","mouseX","mousePressed","pmouseX","pmouseY","key","keyCode","pixels","focused","frameCount","frameRate","height","width","size","createGraphics","beginDraw","createShape","loadShape","PShape","arc","ellipse","line","point","quad","rect","triangle","bezier","bezierDetail","bezierPoint","bezierTangent","curve","curveDetail","curvePoint","curveTangent","curveTightness","shape","shapeMode","beginContour","beginShape","bezierVertex","curveVertex","endContour","endShape","quadraticVertex","vertex","ellipseMode","noSmooth","rectMode","smooth","strokeCap","strokeJoin","strokeWeight","mouseClicked","mouseDragged","mouseMoved","mousePressed","mouseReleased","mouseWheel","keyPressed","keyPressedkeyReleased","keyTyped","print","println","save","saveFrame","day","hour","millis","minute","month","second","year","background","clear","colorMode","fill","noFill","noStroke","stroke","alpha","blue","brightness","color","green","hue","lerpColor","red","saturation","modelX","modelY","modelZ","screenX","screenY","screenZ","ambient","emissive","shininess","specular","add","createImage","beginCamera","camera","endCamera","frustum","ortho","perspective","printCamera","printProjection","cursor","frameRate","noCursor","exit","loop","noLoop","popStyle","pushStyle","redraw","binary","boolean","byte","char","float","hex","int","str","unbinary","unhex","join","match","matchAll","nf","nfc","nfp","nfs","split","splitTokens","trim","append","arrayCopy","concat","expand","reverse","shorten","sort","splice","subset","box","sphere","sphereDetail","createInput","createReader","loadBytes","loadJSONArray","loadJSONObject","loadStrings","loadTable","loadXML","open","parseXML","saveTable","selectFolder","selectInput","beginRaw","beginRecord","createOutput","createWriter","endRaw","endRecord","PrintWritersaveBytes","saveJSONArray","saveJSONObject","saveStream","saveStrings","saveXML","selectOutput","popMatrix","printMatrix","pushMatrix","resetMatrix","rotate","rotateX","rotateY","rotateZ","scale","shearX","shearY","translate","ambientLight","directionalLight","lightFalloff","lights","lightSpecular","noLights","normal","pointLight","spotLight","image","imageMode","loadImage","noTint","requestImage","tint","texture","textureMode","textureWrap","blend","copy","filter","get","loadPixels","set","updatePixels","blendMode","loadShader","PShaderresetShader","shader","createFont","loadFont","text","textFont","textAlign","textLeading","textMode","textSize","textWidth","textAscent","textDescent","abs","ceil","constrain","dist","exp","floor","lerp","log","mag","map","max","min","norm","pow","round","sq","sqrt","acos","asin","atan","atan2","cos","degrees","radians","sin","tan","noise","noiseDetail","noiseSeed","random","randomGaussian","randomSeed"],o=e.IDENT_RE,s={variants:[{match:n.concat(n.either(...i),n.lookahead(/\s*\(/)),className:"built_in"},{relevance:0,match:n.concat(/\b(?!for|if|while)/,o,n.lookahead(/\s*\(/)),className:"title.function"}]},l={match:[/new\s+/,o],className:{1:"keyword",2:"class.title"}},c={relevance:0,match:[/\./,o],className:{2:"property"}},d={variants:[{match:[/class/,/\s+/,o,/\s+/,/extends/,/\s+/,o]},{match:[/class/,/\s+/,o]}],className:{1:"keyword",3:"title.class",5:"keyword",7:"title.class.inherited"}},_=["boolean","byte","char","color","double","float","int","long","short"],p=["BufferedReader","PVector","PFont","PImage","PGraphics","HashMap","String","Array","FloatDict","ArrayList","FloatList","IntDict","IntList","JSONArray","JSONObject","Object","StringDict","St
ringList","Table","TableRow","XML"];return{name:"Processing",aliases:["pde"],keywords:{keyword:[...["abstract","assert","break","case","catch","const","continue","default","else","enum","final","finally","for","if","import","instanceof","long","native","new","package","private","private","protected","protected","public","public","return","static","strictfp","switch","synchronized","throw","throws","transient","try","void","volatile","while"]],literal:"P2D P3D HALF_PI PI QUARTER_PI TAU TWO_PI null true false",title:"setup draw",variable:"super this",built_in:[...i,...p],type:_},contains:[d,l,s,c,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,e.C_NUMBER_MODE]}}return gp=t,gp}var Ep,Tv;function Tye(){if(Tv)return Ep;Tv=1;function t(e){return{name:"Python profiler",contains:[e.C_NUMBER_MODE,{begin:"[a-zA-Z_][\\da-zA-Z_]+\\.[\\da-zA-Z_]{1,3}",end:":",excludeEnd:!0},{begin:"(ncalls|tottime|cumtime)",end:"$",keywords:"ncalls tottime|10 cumtime|10 filename",relevance:10},{begin:"function calls",end:"$",contains:[e.C_NUMBER_MODE],relevance:10},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{className:"string",begin:"\\(",end:"\\)$",excludeBegin:!0,excludeEnd:!0,relevance:0}]}}return Ep=t,Ep}var fp,vv;function vye(){if(vv)return fp;vv=1;function t(e){const n={begin:/[a-z][A-Za-z0-9_]*/,relevance:0},i={className:"symbol",variants:[{begin:/[A-Z][a-zA-Z0-9_]*/},{begin:/_[A-Za-z0-9_]*/}],relevance:0},o={begin:/\(/,end:/\)/,relevance:0},s={begin:/\[/,end:/\]/},l={className:"comment",begin:/%/,end:/$/,contains:[e.PHRASAL_WORDS_MODE]},c={className:"string",begin:/`/,end:/`/,contains:[e.BACKSLASH_ESCAPE]},d={className:"string",begin:/0'(\\'|.)/},_={className:"string",begin:/0'\\s/},g=[n,i,o,{begin:/:-/},s,l,e.C_BLOCK_COMMENT_MODE,e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,c,d,_,e.C_NUMBER_MODE];return o.contains=g,s.contains=g,{name:"Prolog",contains:g.concat([{begin:/\.$/}])}}return fp=t,fp}var Sp,Cv;function Cye(){if(Cv)return Sp;Cv=1;function t(e){const n="[ \\t\\f]*",i="[ \\t\\f]+",o=n+"[:=]"+n,s=i,l="("+o+"|"+s+")",c="([^\\\\:= \\t\\f\\n]|\\\\.)+",d={end:l,relevance:0,starts:{className:"string",end:/$/,relevance:0,contains:[{begin:"\\\\\\\\"},{begin:"\\\\\\n"}]}};return{name:".properties",disableAutodetect:!0,case_insensitive:!0,illegal:/\S/,contains:[e.COMMENT("^\\s*[!#]","$"),{returnBegin:!0,variants:[{begin:c+o},{begin:c+s}],contains:[{className:"attr",begin:c,endsParent:!0}],starts:d},{className:"attr",begin:c+n+"$"}]}}return Sp=t,Sp}var bp,Rv;function Rye(){if(Rv)return bp;Rv=1;function t(e){const n=["package","import","option","optional","required","repeated","group","oneof"],i=["double","float","int32","int64","uint32","uint64","sint32","sint64","fixed32","fixed64","sfixed32","sfixed64","bool","string","bytes"],o={match:[/(message|enum|service)\s+/,e.IDENT_RE],scope:{1:"keyword",2:"title.class"}};return{name:"Protocol Buffers",aliases:["proto"],keywords:{keyword:n,type:i,literal:["true","false"]},contains:[e.QUOTE_STRING_MODE,e.NUMBER_MODE,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,o,{className:"function",beginKeywords:"rpc",end:/[{;]/,excludeEnd:!0,keywords:"rpc returns"},{begin:/^\s*[A-Z_]+(?=\s*=[^\n]+;$)/}]}}return bp=t,bp}var hp,Nv;function Nye(){if(Nv)return hp;Nv=1;function t(e){const n={keyword:"and case default else elsif false if in import enherits node or true undef unless main settings $string ",literal:"alias audit before loglevel noop require subscribe tag owner ensure group mode name|0 changes context force incl lens load_path onlyif provider returns 
root show_diff type_check en_address ip_address realname command environment hour monute month monthday special target weekday creates cwd ogoutput refresh refreshonly tries try_sleep umask backup checksum content ctime force ignore links mtime purge recurse recurselimit replace selinux_ignore_defaults selrange selrole seltype seluser source souirce_permissions sourceselect validate_cmd validate_replacement allowdupe attribute_membership auth_membership forcelocal gid ia_load_module members system host_aliases ip allowed_trunk_vlans description device_url duplex encapsulation etherchannel native_vlan speed principals allow_root auth_class auth_type authenticate_user k_of_n mechanisms rule session_owner shared options device fstype enable hasrestart directory present absent link atboot blockdevice device dump pass remounts poller_tag use message withpath adminfile allow_virtual allowcdrom category configfiles flavor install_options instance package_settings platform responsefile status uninstall_options vendor unless_system_user unless_uid binary control flags hasstatus manifest pattern restart running start stop allowdupe auths expiry gid groups home iterations key_membership keys managehome membership password password_max_age password_min_age profile_membership profiles project purge_ssh_keys role_membership roles salt shell uid baseurl cost descr enabled enablegroups exclude failovermethod gpgcheck gpgkey http_caching include includepkgs keepalive metadata_expire metalink mirrorlist priority protect proxy proxy_password proxy_username repo_gpgcheck s3_enabled skip_if_unavailable sslcacert sslclientcert sslclientkey sslverify mounted",built_in:"architecture augeasversion blockdevices boardmanufacturer boardproductname boardserialnumber cfkey dhcp_servers domain ec2_ ec2_userdata facterversion filesystems ldom fqdn gid hardwareisa hardwaremodel hostname id|0 interfaces ipaddress ipaddress_ ipaddress6 ipaddress6_ iphostnumber is_virtual kernel kernelmajversion kernelrelease kernelversion kernelrelease kernelversion lsbdistcodename lsbdistdescription lsbdistid lsbdistrelease lsbmajdistrelease lsbminordistrelease lsbrelease macaddress macaddress_ macosx_buildversion macosx_productname macosx_productversion macosx_productverson_major macosx_productversion_minor manufacturer memoryfree memorysize netmask metmask_ network_ operatingsystem operatingsystemmajrelease operatingsystemrelease osfamily partitions path physicalprocessorcount processor processorcount productname ps puppetversion rubysitedir rubyversion selinux selinux_config_mode selinux_config_policy selinux_current_mode selinux_current_mode selinux_enforced selinux_policyversion serialnumber sp_ sshdsakey sshecdsakey sshrsakey swapencrypted swapfree swapsize timezone type uniqueid uptime uptime_days uptime_hours uptime_seconds uuid virtual vlans xendomains zfs_version zonenae zones 
zpool_version"},i=e.COMMENT("#","$"),o="([A-Za-z_]|::)(\\w|::)*",s=e.inherit(e.TITLE_MODE,{begin:o}),l={className:"variable",begin:"\\$"+o},c={className:"string",contains:[e.BACKSLASH_ESCAPE,l],variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/}]};return{name:"Puppet",aliases:["pp"],contains:[i,l,c,{beginKeywords:"class",end:"\\{|;",illegal:/=/,contains:[s,i]},{beginKeywords:"define",end:/\{/,contains:[{className:"section",begin:e.IDENT_RE,endsParent:!0}]},{begin:e.IDENT_RE+"\\s+\\{",returnBegin:!0,end:/\S/,contains:[{className:"keyword",begin:e.IDENT_RE,relevance:.2},{begin:/\{/,end:/\}/,keywords:n,relevance:0,contains:[c,i,{begin:"[a-zA-Z_]+\\s*=>",returnBegin:!0,end:"=>",contains:[{className:"attr",begin:e.IDENT_RE}]},{className:"number",begin:"(\\b0[0-7_]+)|(\\b0x[0-9a-fA-F_]+)|(\\b[1-9][0-9_]*(\\.[0-9_]+)?)|[0_]\\b",relevance:0},l]}],relevance:0}]}}return hp=t,hp}var Tp,Ov;function Oye(){if(Ov)return Tp;Ov=1;function t(e){const n={className:"string",begin:'(~)?"',end:'"',illegal:"\\n"},i={className:"symbol",begin:"#[a-zA-Z_]\\w*\\$?"};return{name:"PureBASIC",aliases:["pb","pbi"],keywords:"Align And Array As Break CallDebugger Case CompilerCase CompilerDefault CompilerElse CompilerElseIf CompilerEndIf CompilerEndSelect CompilerError CompilerIf CompilerSelect CompilerWarning Continue Data DataSection Debug DebugLevel Declare DeclareC DeclareCDLL DeclareDLL DeclareModule Default Define Dim DisableASM DisableDebugger DisableExplicit Else ElseIf EnableASM EnableDebugger EnableExplicit End EndDataSection EndDeclareModule EndEnumeration EndIf EndImport EndInterface EndMacro EndModule EndProcedure EndSelect EndStructure EndStructureUnion EndWith Enumeration EnumerationBinary Extends FakeReturn For ForEach ForEver Global Gosub Goto If Import ImportC IncludeBinary IncludeFile IncludePath Interface List Macro MacroExpandedCount Map Module NewList NewMap Next Not Or Procedure ProcedureC ProcedureCDLL ProcedureDLL ProcedureReturn Protected Prototype PrototypeC ReDim Read Repeat Restore Return Runtime Select Shared Static Step Structure StructureUnion Swap Threaded To UndefineMacro Until Until UnuseModule UseModule Wend While With XIncludeFile XOr",contains:[e.COMMENT(";","$",{relevance:0}),{className:"function",begin:"\\b(Procedure|Declare)(C|CDLL|DLL)?\\b",end:"\\(",excludeEnd:!0,returnBegin:!0,contains:[{className:"keyword",begin:"(Procedure|Declare)(C|CDLL|DLL)?",excludeEnd:!0},{className:"type",begin:"\\.\\w*"},e.UNDERSCORE_TITLE_MODE]},n,i]}}return Tp=t,Tp}var vp,Av;function Aye(){if(Av)return vp;Av=1;function t(e){const 
n=e.regex,i=/[\p{XID_Start}_]\p{XID_Continue}*/u,o=["and","as","assert","async","await","break","case","class","continue","def","del","elif","else","except","finally","for","from","global","if","import","in","is","lambda","match","nonlocal|10","not","or","pass","raise","return","try","while","with","yield"],d={$pattern:/[A-Za-z]\w+|__\w+__/,keyword:o,built_in:["__import__","abs","all","any","ascii","bin","bool","breakpoint","bytearray","bytes","callable","chr","classmethod","compile","complex","delattr","dict","dir","divmod","enumerate","eval","exec","filter","float","format","frozenset","getattr","globals","hasattr","hash","help","hex","id","input","int","isinstance","issubclass","iter","len","list","locals","map","max","memoryview","min","next","object","oct","open","ord","pow","print","property","range","repr","reversed","round","set","setattr","slice","sorted","staticmethod","str","sum","super","tuple","type","vars","zip"],literal:["__debug__","Ellipsis","False","None","NotImplemented","True"],type:["Any","Callable","Coroutine","Dict","List","Literal","Generic","Optional","Sequence","Set","Tuple","Type","Union"]},_={className:"meta",begin:/^(>>>|\.\.\.) /},p={className:"subst",begin:/\{/,end:/\}/,keywords:d,illegal:/#/},g={begin:/\{\{/,relevance:0},E={className:"string",contains:[e.BACKSLASH_ESCAPE],variants:[{begin:/([uU]|[bB]|[rR]|[bB][rR]|[rR][bB])?'''/,end:/'''/,contains:[e.BACKSLASH_ESCAPE,_],relevance:10},{begin:/([uU]|[bB]|[rR]|[bB][rR]|[rR][bB])?"""/,end:/"""/,contains:[e.BACKSLASH_ESCAPE,_],relevance:10},{begin:/([fF][rR]|[rR][fF]|[fF])'''/,end:/'''/,contains:[e.BACKSLASH_ESCAPE,_,g,p]},{begin:/([fF][rR]|[rR][fF]|[fF])"""/,end:/"""/,contains:[e.BACKSLASH_ESCAPE,_,g,p]},{begin:/([uU]|[rR])'/,end:/'/,relevance:10},{begin:/([uU]|[rR])"/,end:/"/,relevance:10},{begin:/([bB]|[bB][rR]|[rR][bB])'/,end:/'/},{begin:/([bB]|[bB][rR]|[rR][bB])"/,end:/"/},{begin:/([fF][rR]|[rR][fF]|[fF])'/,end:/'/,contains:[e.BACKSLASH_ESCAPE,g,p]},{begin:/([fF][rR]|[rR][fF]|[fF])"/,end:/"/,contains:[e.BACKSLASH_ESCAPE,g,p]},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]},f="[0-9](_?[0-9])*",S=`(\\b(${f}))?\\.(${f})|\\b(${f})\\.`,C=`\\b|${o.join("|")}`,h={className:"number",relevance:0,variants:[{begin:`(\\b(${f})|(${S}))[eE][+-]?(${f})[jJ]?(?=${C})`},{begin:`(${S})[jJ]?`},{begin:`\\b([1-9](_?[0-9])*|0+(_?0)*)[lLjJ]?(?=${C})`},{begin:`\\b0[bB](_?[01])+[lL]?(?=${C})`},{begin:`\\b0[oO](_?[0-7])+[lL]?(?=${C})`},{begin:`\\b0[xX](_?[0-9a-fA-F])+[lL]?(?=${C})`},{begin:`\\b(${f})[jJ](?=${C})`}]},T={className:"comment",begin:n.lookahead(/# type:/),end:/$/,keywords:d,contains:[{begin:/# type:/},{begin:/#/,end:/\b\B/,endsWithParent:!0}]},N={className:"params",variants:[{className:"",begin:/\(\s*\)/,skip:!0},{begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:d,contains:["self",_,h,E,e.HASH_COMMENT_MODE]}]};return p.contains=[E,h,_],{name:"Python",aliases:["py","gyp","ipython"],unicodeRegex:!0,keywords:d,illegal:/(<\/|\?)|=>/,contains:[_,h,{begin:/\bself\b/},{beginKeywords:"if",relevance:0},E,T,e.HASH_COMMENT_MODE,{match:[/\bdef/,/\s+/,i],scope:{1:"keyword",3:"title.function"},contains:[N]},{variants:[{match:[/\bclass/,/\s+/,i,/\s*/,/\(\s*/,i,/\s*\)/]},{match:[/\bclass/,/\s+/,i]}],scope:{1:"keyword",3:"title.class",6:"title.class.inherited"}},{className:"meta",begin:/^[\t ]*@/,end:/(?=#)|$/,contains:[h,N,E]}]}}return vp=t,vp}var Cp,yv;function yye(){if(yv)return Cp;yv=1;function t(e){return{aliases:["pycon"],contains:[{className:"meta.prompt",starts:{end:/ 
|$/,starts:{end:"$",subLanguage:"python"}},variants:[{begin:/^>>>(?=[ ]|$)/},{begin:/^\.\.\.(?=[ ]|$)/}]}]}}return Cp=t,Cp}var Rp,Iv;function Iye(){if(Iv)return Rp;Iv=1;function t(e){return{name:"Q",aliases:["k","kdb"],keywords:{$pattern:/(`?)[A-Za-z0-9_]+\b/,keyword:"do while select delete by update from",literal:"0b 1b",built_in:"neg not null string reciprocal floor ceiling signum mod xbar xlog and or each scan over prior mmu lsq inv md5 ltime gtime count first var dev med cov cor all any rand sums prds mins maxs fills deltas ratios avgs differ prev next rank reverse iasc idesc asc desc msum mcount mavg mdev xrank mmin mmax xprev rotate distinct group where flip type key til get value attr cut set upsert raze union inter except cross sv vs sublist enlist read0 read1 hopen hclose hdel hsym hcount peach system ltrim rtrim trim lower upper ssr view tables views cols xcols keys xkey xcol xasc xdesc fkeys meta lj aj aj0 ij pj asof uj ww wj wj1 fby xgroup ungroup ej save load rsave rload show csv parse eval min max avg wavg wsum sin cos tan sum",type:"`float `double int `timestamp `timespan `datetime `time `boolean `symbol `char `byte `short `long `real `month `date `minute `second `guid"},contains:[e.C_LINE_COMMENT_MODE,e.QUOTE_STRING_MODE,e.C_NUMBER_MODE]}}return Rp=t,Rp}var Np,Dv;function Dye(){if(Dv)return Np;Dv=1;function t(e){const n=e.regex,i={keyword:"in of on if for while finally var new function do return void else break catch instanceof with throw case default try this switch continue typeof delete let yield const export super debugger as async await import",literal:"true false null undefined NaN Infinity",built_in:"eval isFinite isNaN parseFloat parseInt decodeURI decodeURIComponent encodeURI encodeURIComponent escape unescape Object Function Boolean Error EvalError InternalError RangeError ReferenceError StopIteration SyntaxError TypeError URIError Number Math Date String RegExp Array Float32Array Float64Array Int16Array Int32Array Int8Array Uint16Array Uint32Array Uint8Array Uint8ClampedArray ArrayBuffer DataView JSON Intl arguments require module console window document Symbol Set Map WeakSet WeakMap Proxy Reflect Behavior bool color coordinate date double enumeration font geocircle georectangle geoshape int list matrix4x4 parent point quaternion real rect size string url variant vector2d vector3d vector4d Promise"},o="[a-zA-Z_][a-zA-Z0-9\\._]*",s={className:"keyword",begin:"\\bproperty\\b",starts:{className:"string",end:"(:|=|;|,|//|/\\*|$)",returnEnd:!0}},l={className:"keyword",begin:"\\bsignal\\b",starts:{className:"string",end:"(\\(|:|=|;|,|//|/\\*|$)",returnEnd:!0}},c={className:"attribute",begin:"\\bid\\s*:",starts:{className:"string",end:o,returnEnd:!1}},d={begin:o+"\\s*:",returnBegin:!0,contains:[{className:"attribute",begin:o,end:"\\s*:",excludeEnd:!0,relevance:0}],relevance:0},_={begin:n.concat(o,/\s*\{/),end:/\{/,returnBegin:!0,relevance:0,contains:[e.inherit(e.TITLE_MODE,{begin:o})]};return{name:"QML",aliases:["qt"],case_insensitive:!1,keywords:i,contains:[{className:"meta",begin:/^\s*['"]use (strict|asm)['"]/},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{className:"string",begin:"`",end:"`",contains:[e.BACKSLASH_ESCAPE,{className:"subst",begin:"\\$\\{",end:"\\}"}]},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"number",variants:[{begin:"\\b(0[bB][01]+)"},{begin:"\\b(0[oO][0-7]+)"},{begin:e.C_NUMBER_RE}],relevance:0},{begin:"("+e.RE_STARTERS_RE+"|\\b(case|return|throw)\\b)\\s*",keywords:"return throw 
case",contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.REGEXP_MODE,{begin:/\s*[);\]]/,relevance:0,subLanguage:"xml"}],relevance:0},l,s,{className:"function",beginKeywords:"function",end:/\{/,excludeEnd:!0,contains:[e.inherit(e.TITLE_MODE,{begin:/[A-Za-z$_][0-9A-Za-z$_]*/}),{className:"params",begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]}],illegal:/\[|%/},{begin:"\\."+e.IDENT_RE,relevance:0},c,d,_],illegal:/#/}}return Np=t,Np}var Op,xv;function xye(){if(xv)return Op;xv=1;function t(e){const n=e.regex,i=/(?:(?:[a-zA-Z]|\.[._a-zA-Z])[._a-zA-Z0-9]*)|\.(?!\d)/,o=n.either(/0[xX][0-9a-fA-F]+\.[0-9a-fA-F]*[pP][+-]?\d+i?/,/0[xX][0-9a-fA-F]+(?:[pP][+-]?\d+)?[Li]?/,/(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?[Li]?/),s=/[=!<>:]=|\|\||&&|:::?|<-|<<-|->>|->|\|>|[-+*\/?!$&|:<=>@^~]|\*\*/,l=n.either(/[()]/,/[{}]/,/\[\[/,/[[\]]/,/\\/,/,/);return{name:"R",keywords:{$pattern:i,keyword:"function if in break next repeat else for while",literal:"NULL NA TRUE FALSE Inf NaN NA_integer_|10 NA_real_|10 NA_character_|10 NA_complex_|10",built_in:"LETTERS letters month.abb month.name pi T F abs acos acosh all any anyNA Arg as.call as.character as.complex as.double as.environment as.integer as.logical as.null.default as.numeric as.raw asin asinh atan atanh attr attributes baseenv browser c call ceiling class Conj cos cosh cospi cummax cummin cumprod cumsum digamma dim dimnames emptyenv exp expression floor forceAndCall gamma gc.time globalenv Im interactive invisible is.array is.atomic is.call is.character is.complex is.double is.environment is.expression is.finite is.function is.infinite is.integer is.language is.list is.logical is.matrix is.na is.name is.nan is.null is.numeric is.object is.pairlist is.raw is.recursive is.single is.symbol lazyLoadDBfetch length lgamma list log max min missing Mod names nargs nzchar oldClass on.exit pos.to.env proc.time prod quote range Re rep retracemem return round seq_along seq_len seq.int sign signif sin sinh sinpi sqrt standardGeneric substitute sum switch tan tanh tanpi tracemem trigamma trunc unclass untracemem UseMethod xtfrm"},contains:[e.COMMENT(/#'/,/$/,{contains:[{scope:"doctag",match:/@examples/,starts:{end:n.lookahead(n.either(/\n^#'\s*(?=@[a-zA-Z]+)/,/\n^(?!#')/)),endsParent:!0}},{scope:"doctag",begin:"@param",end:/$/,contains:[{scope:"variable",variants:[{match:i},{match:/`(?:\\.|[^`\\])+`/}],endsParent:!0}]},{scope:"doctag",match:/@[a-zA-Z]+/},{scope:"keyword",match:/\\[a-zA-Z]+/}]}),e.HASH_COMMENT_MODE,{scope:"string",contains:[e.BACKSLASH_ESCAPE],variants:[e.END_SAME_AS_BEGIN({begin:/[rR]"(-*)\(/,end:/\)(-*)"/}),e.END_SAME_AS_BEGIN({begin:/[rR]"(-*)\{/,end:/\}(-*)"/}),e.END_SAME_AS_BEGIN({begin:/[rR]"(-*)\[/,end:/\](-*)"/}),e.END_SAME_AS_BEGIN({begin:/[rR]'(-*)\(/,end:/\)(-*)'/}),e.END_SAME_AS_BEGIN({begin:/[rR]'(-*)\{/,end:/\}(-*)'/}),e.END_SAME_AS_BEGIN({begin:/[rR]'(-*)\[/,end:/\](-*)'/}),{begin:'"',end:'"',relevance:0},{begin:"'",end:"'",relevance:0}]},{relevance:0,variants:[{scope:{1:"operator",2:"number"},match:[s,o]},{scope:{1:"operator",2:"number"},match:[/%[^%]*%/,o]},{scope:{1:"punctuation",2:"number"},match:[l,o]},{scope:{2:"number"},match:[/[^a-zA-Z0-9._]|^/,o]}]},{scope:{3:"operator"},match:[i,/\s+/,/<-/,/\s+/]},{scope:"operator",relevance:0,variants:[{match:s},{match:/%[^%]*%/}]},{scope:"punctuation",relevance:0,match:l},{begin:"`",end:"`",contains:[{begin:/\\./}]}]}}return Op=t,Op}var Ap,wv;function wye(){if(wv)return Ap;wv=1;function t(e){function n(D){return D.map(function(k){return 
k.split("").map(function(U){return"\\"+U}).join("")}).join("|")}const i="~?[a-z$_][0-9a-zA-Z$_]*",o="`?[A-Z$_][0-9a-zA-Z$_]*",s="'?[a-z$_][0-9a-z$_]*",l="\\s*:\\s*[a-z$_][0-9a-z$_]*(\\(\\s*("+s+"\\s*(,"+s+"\\s*)*)?\\))?",c=i+"("+l+"){0,2}",d="("+n(["||","++","**","+.","*","/","*.","/.","..."])+"|\\|>|&&|==|===)",_="\\s+"+d+"\\s+",p={keyword:"and as asr assert begin class constraint do done downto else end exception external for fun function functor if in include inherit initializer land lazy let lor lsl lsr lxor match method mod module mutable new nonrec object of open or private rec sig struct then to try type val virtual when while with",built_in:"array bool bytes char exn|5 float int int32 int64 list lazy_t|5 nativeint|5 ref string unit ",literal:"true false"},g="\\b(0[xX][a-fA-F0-9_]+[Lln]?|0[oO][0-7_]+[Lln]?|0[bB][01_]+[Lln]?|[0-9][0-9_]*([Lln]|(\\.[0-9_]*)?([eE][-+]?[0-9_]+)?)?)",E={className:"number",relevance:0,variants:[{begin:g},{begin:"\\(-"+g+"\\)"}]},f={className:"operator",relevance:0,begin:d},S=[{className:"identifier",relevance:0,begin:i},f,E],C=[e.QUOTE_STRING_MODE,f,{className:"module",begin:"\\b"+o,returnBegin:!0,relevance:0,end:".",contains:[{className:"identifier",begin:o,relevance:0}]}],h=[{className:"module",begin:"\\b"+o,returnBegin:!0,end:".",relevance:0,contains:[{className:"identifier",begin:o,relevance:0}]}],T={begin:i,end:"(,|\\n|\\))",relevance:0,contains:[f,{className:"typing",begin:":",end:"(,|\\n)",returnBegin:!0,relevance:0,contains:h}]},N={className:"function",relevance:0,keywords:p,variants:[{begin:"\\s(\\(\\.?.*?\\)|"+i+")\\s*=>",end:"\\s*=>",returnBegin:!0,relevance:0,contains:[{className:"params",variants:[{begin:i},{begin:c},{begin:/\(\s*\)/}]}]},{begin:"\\s\\(\\.?[^;\\|]*\\)\\s*=>",end:"\\s=>",returnBegin:!0,relevance:0,contains:[{className:"params",relevance:0,variants:[T]}]},{begin:"\\(\\.\\s"+i+"\\)\\s*=>"}]};C.push(N);const y={className:"constructor",begin:o+"\\(",end:"\\)",illegal:"\\n",keywords:p,contains:[e.QUOTE_STRING_MODE,f,{className:"params",begin:"\\b"+i}]},x={className:"pattern-match",begin:"\\|",returnBegin:!0,keywords:p,end:"=>",relevance:0,contains:[y,f,{relevance:0,className:"constructor",begin:o}]},P={className:"module-access",keywords:p,returnBegin:!0,variants:[{begin:"\\b("+o+"\\.)+"+i},{begin:"\\b("+o+"\\.)+\\(",end:"\\)",returnBegin:!0,contains:[N,{begin:"\\(",end:"\\)",relevance:0,skip:!0}].concat(C)},{begin:"\\b("+o+"\\.)+\\{",end:/\}/}],contains:C};return h.push(P),{name:"ReasonML",aliases:["re"],keywords:p,illegal:"(:-|:=|\\$\\{|\\+=)",contains:[e.COMMENT("/\\*","\\*/",{illegal:"^(#,\\/\\/)"}),{className:"character",begin:"'(\\\\[^']+|[^'])'",illegal:"\\n",relevance:0},e.QUOTE_STRING_MODE,{className:"literal",begin:"\\(\\)",relevance:0},{className:"literal",begin:"\\[\\|",end:"\\|\\]",relevance:0,contains:S},{className:"literal",begin:"\\[",end:"\\]",relevance:0,contains:S},y,{className:"operator",begin:_,illegal:"-->",relevance:0},E,e.C_LINE_COMMENT_MODE,x,N,{className:"module-def",begin:"\\bmodule\\s+"+i+"\\s+"+o+"\\s+=\\s+\\{",end:/\}/,returnBegin:!0,keywords:p,relevance:0,contains:[{className:"module",relevance:0,begin:o},{begin:/\{/,end:/\}/,relevance:0,skip:!0}].concat(C)},P]}}return Ap=t,Ap}var yp,Mv;function Mye(){if(Mv)return yp;Mv=1;function t(e){return{name:"RenderMan RIB",keywords:"ArchiveRecord AreaLightSource Atmosphere Attribute AttributeBegin AttributeEnd Basis Begin Blobby Bound Clipping ClippingPlane Color ColorSamples ConcatTransform Cone CoordinateSystem CoordSysTransform CropWindow Curves Cylinder 
DepthOfField Detail DetailRange Disk Displacement Display End ErrorHandler Exposure Exterior Format FrameAspectRatio FrameBegin FrameEnd GeneralPolygon GeometricApproximation Geometry Hider Hyperboloid Identity Illuminate Imager Interior LightSource MakeCubeFaceEnvironment MakeLatLongEnvironment MakeShadow MakeTexture Matte MotionBegin MotionEnd NuPatch ObjectBegin ObjectEnd ObjectInstance Opacity Option Orientation Paraboloid Patch PatchMesh Perspective PixelFilter PixelSamples PixelVariance Points PointsGeneralPolygons PointsPolygons Polygon Procedural Projection Quantize ReadArchive RelativeDetail ReverseOrientation Rotate Scale ScreenWindow ShadingInterpolation ShadingRate Shutter Sides Skew SolidBegin SolidEnd Sphere SubdivisionMesh Surface TextureCoordinates Torus Transform TransformBegin TransformEnd TransformPoints Translate TrimCurve WorldBegin WorldEnd",illegal:"/}],illegal:/./},e.COMMENT("^#","$"),d,_,c,{begin:/[\w-]+=([^\s{}[\]()>]+)/,relevance:0,returnBegin:!0,contains:[{className:"attribute",begin:/[^=]+/},{begin:/=/,endsWithParent:!0,relevance:0,contains:[d,_,c,{className:"literal",begin:"\\b("+s.split(" ").join("|")+")\\b"},{begin:/("[^"]*"|[^\s{}[\]]+)/}]}]},{className:"number",begin:/\*[0-9a-fA-F]+/},{begin:"\\b("+o.split(" ").join("|")+")([\\s[(\\]|])",returnBegin:!0,contains:[{className:"built_in",begin:/\w+/}]},{className:"built_in",variants:[{begin:"(\\.\\./|/|\\s)(("+l.split(" ").join("|")+");?\\s)+"},{begin:/\.\./,relevance:0}]}]}}return Dp=t,Dp}var xp,kv;function kye(){if(kv)return xp;kv=1;function t(e){const n=["abs","acos","ambient","area","asin","atan","atmosphere","attribute","calculatenormal","ceil","cellnoise","clamp","comp","concat","cos","degrees","depth","Deriv","diffuse","distance","Du","Dv","environment","exp","faceforward","filterstep","floor","format","fresnel","incident","length","lightsource","log","match","max","min","mod","noise","normalize","ntransform","opposite","option","phong","pnoise","pow","printf","ptlined","radians","random","reflect","refract","renderinfo","round","setcomp","setxcomp","setycomp","setzcomp","shadow","sign","sin","smoothstep","specular","specularbrdf","spline","sqrt","step","tan","texture","textureinfo","trace","transform","vtransform","xcomp","ycomp","zcomp"],i=["matrix","float","color","point","normal","vector"],o=["while","for","if","do","return","else","break","extern","continue"],s={match:[/(surface|displacement|light|volume|imager)/,/\s+/,e.IDENT_RE],scope:{1:"keyword",3:"title.class"}};return{name:"RenderMan RSL",keywords:{keyword:o,built_in:n,type:i},illegal:""},i]}}return Mp=t,Mp}var Lp,Bv;function Bye(){if(Bv)return Lp;Bv=1;function t(e){const 
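// --- highlight.js language grammar: SAS ---
// `i` collects the DATA-step and PROC SQL keywords, `o` the built-in function
// names (highlighted as "meta" when followed by "("), and `s` the macro
// language names (highlighted as "built_in" when prefixed with "%"); the
// contains list also matches &macro variables, datalines/cards blocks, and
// %macro/%mend definitions.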
n=e.regex,i=["do","if","then","else","end","until","while","abort","array","attrib","by","call","cards","cards4","catname","continue","datalines","datalines4","delete","delim","delimiter","display","dm","drop","endsas","error","file","filename","footnote","format","goto","in","infile","informat","input","keep","label","leave","length","libname","link","list","lostcard","merge","missing","modify","options","output","out","page","put","redirect","remove","rename","replace","retain","return","select","set","skip","startsas","stop","title","update","waitsas","where","window","x|0","systask","add","and","alter","as","cascade","check","create","delete","describe","distinct","drop","foreign","from","group","having","index","insert","into","in","key","like","message","modify","msgtype","not","null","on","or","order","primary","references","reset","restrict","select","set","table","unique","update","validate","view","where"],o=["abs","addr","airy","arcos","arsin","atan","attrc","attrn","band","betainv","blshift","bnot","bor","brshift","bxor","byte","cdf","ceil","cexist","cinv","close","cnonct","collate","compbl","compound","compress","cos","cosh","css","curobs","cv","daccdb","daccdbsl","daccsl","daccsyd","dacctab","dairy","date","datejul","datepart","datetime","day","dclose","depdb","depdbsl","depdbsl","depsl","depsl","depsyd","depsyd","deptab","deptab","dequote","dhms","dif","digamma","dim","dinfo","dnum","dopen","doptname","doptnum","dread","dropnote","dsname","erf","erfc","exist","exp","fappend","fclose","fcol","fdelete","fetch","fetchobs","fexist","fget","fileexist","filename","fileref","finfo","finv","fipname","fipnamel","fipstate","floor","fnonct","fnote","fopen","foptname","foptnum","fpoint","fpos","fput","fread","frewind","frlen","fsep","fuzz","fwrite","gaminv","gamma","getoption","getvarc","getvarn","hbound","hms","hosthelp","hour","ibessel","index","indexc","indexw","input","inputc","inputn","int","intck","intnx","intrr","irr","jbessel","juldate","kurtosis","lag","lbound","left","length","lgamma","libname","libref","log","log10","log2","logpdf","logpmf","logsdf","lowcase","max","mdy","mean","min","minute","mod","month","mopen","mort","n","netpv","nmiss","normal","note","npv","open","ordinal","pathname","pdf","peek","peekc","pmf","point","poisson","poke","probbeta","probbnml","probchi","probf","probgam","probhypr","probit","probnegb","probnorm","probt","put","putc","putn","qtr","quote","ranbin","rancau","ranexp","rangam","range","rank","rannor","ranpoi","rantbl","rantri","ranuni","repeat","resolve","reverse","rewind","right","round","saving","scan","sdf","second","sign","sin","sinh","skewness","soundex","spedis","sqrt","std","stderr","stfips","stname","stnamel","substr","sum","symget","sysget","sysmsg","sysprod","sysrc","system","tan","tanh","time","timepart","tinv","tnonct","today","translate","tranwrd","trigamma","trim","trimn","trunc","uniform","upcase","uss","var","varfmt","varinfmt","varlabel","varlen","varname","varnum","varray","varrayx","vartype","verify","vformat","vformatd","vformatdx","vformatn","vformatnx","vformatw","vformatwx","vformatx","vinarray","vinarrayx","vinformat","vinformatd","vinformatdx","vinformatn","vinformatnx","vinformatw","vinformatwx","vinformatx","vlabel","vlabelx","vlength","vlengthx","vname","vnamex","vtype","vtypex","weekday","year","yyq","zipfips","zipname","zipnamel","zipstate"],s=["bquote","nrbquote","cmpres","qcmpres","compstor","datatyp","display","do","else","end","eval","global","goto","if","index","input","keydef","label","left","length","let","loc
al","lowcase","macro","mend","nrbquote","nrquote","nrstr","put","qcmpres","qleft","qlowcase","qscan","qsubstr","qsysfunc","qtrim","quote","qupcase","scan","str","substr","superq","syscall","sysevalf","sysexec","sysfunc","sysget","syslput","sysprod","sysrc","sysrput","then","to","trim","unquote","until","upcase","verify","while","window"];return{name:"SAS",case_insensitive:!0,keywords:{literal:["null","missing","_all_","_automatic_","_character_","_infile_","_n_","_name_","_null_","_numeric_","_user_","_webout_"],keyword:i},contains:[{className:"keyword",begin:/^\s*(proc [\w\d_]+|data|run|quit)[\s;]/},{className:"variable",begin:/&[a-zA-Z_&][a-zA-Z0-9_]*\.?/},{begin:[/^\s*/,/datalines;|cards;/,/(?:.*\n)+/,/^\s*;\s*$/],className:{2:"keyword",3:"string"}},{begin:[/%mend|%macro/,/\s+/,/[a-zA-Z_&][a-zA-Z0-9_]*/],className:{1:"built_in",3:"title.function"}},{className:"built_in",begin:"%"+n.either(...s)},{className:"title.function",begin:/%[a-zA-Z_][a-zA-Z_0-9]*/},{className:"meta",begin:n.either(...o)+"(?=\\()"},{className:"string",variants:[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]},e.COMMENT("\\*",";"),e.C_BLOCK_COMMENT_MODE]}}return Lp=t,Lp}var Pp,Gv;function Gye(){if(Gv)return Pp;Gv=1;function t(e){const n=e.regex,i={className:"meta",begin:"@[A-Za-z]+"},o={className:"subst",variants:[{begin:"\\$[A-Za-z0-9_]+"},{begin:/\$\{/,end:/\}/}]},s={className:"string",variants:[{begin:'"""',end:'"""'},{begin:'"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE]},{begin:'[a-z]+"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE,o]},{className:"string",begin:'[a-z]+"""',end:'"""',contains:[o],relevance:10}]},l={className:"type",begin:"\\b[A-Z][A-Za-z0-9_]*",relevance:0},c={className:"title",begin:/[^0-9\n\t "'(),.`{}\[\]:;][^\n\t "'(),.`{}\[\]:;]+|[^0-9\n\t "'(),.`{}\[\]:;=]/,relevance:0},d={className:"class",beginKeywords:"class object trait type",end:/[:={\[\n;]/,excludeEnd:!0,contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{beginKeywords:"extends with",relevance:10},{begin:/\[/,end:/\]/,excludeBegin:!0,excludeEnd:!0,relevance:0,contains:[l]},{className:"params",begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,relevance:0,contains:[l]},c]},_={className:"function",beginKeywords:"def",end:n.lookahead(/[:={\[(\n;]/),contains:[c]},p={begin:[/^\s*/,"extension",/\s+(?=[[(])/],beginScope:{2:"keyword"}},g={begin:[/^\s*/,/end/,/\s+/,/(extension\b)?/],beginScope:{2:"keyword",4:"keyword"}},E=[{match:/\.inline\b/},{begin:/\binline(?=\s)/,keywords:"inline"}],f={begin:[/\(\s*/,/using/,/\s+(?!\))/],beginScope:{2:"keyword"}};return{name:"Scala",keywords:{literal:"true false null",keyword:"type yield lazy override def with val var sealed abstract private trait object if then forSome for while do throw finally protected extends import final return else break new catch super class case package default try this match continue throws implicit export enum given transparent"},contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,s,l,_,d,e.C_NUMBER_MODE,p,g,...E,f,i]}}return Pp=t,Pp}var kp,Yv;function Yye(){if(Yv)return kp;Yv=1;function t(e){const n="[^\\(\\)\\[\\]\\{\\}\",'`;#|\\\\\\s]+",i="(-|\\+)?\\d+([./]\\d+)?",o=i+"[+\\-]"+i+"i",s={$pattern:n,built_in:"case-lambda call/cc class define-class exit-handler field import inherit init-field interface let*-values let-values let/ec mixin opt-lambda override protect provide public rename require require-for-syntax syntax syntax-case syntax-error unit/sig unless when with-syntax and begin call-with-current-continuation call-with-input-file call-with-output-file case 
cond define define-syntax delay do dynamic-wind else for-each if lambda let let* let-syntax letrec letrec-syntax map or syntax-rules ' * + , ,@ - ... / ; < <= = => > >= ` abs acos angle append apply asin assoc assq assv atan boolean? caar cadr call-with-input-file call-with-output-file call-with-values car cdddar cddddr cdr ceiling char->integer char-alphabetic? char-ci<=? char-ci=? char-ci>? char-downcase char-lower-case? char-numeric? char-ready? char-upcase char-upper-case? char-whitespace? char<=? char=? char>? char? close-input-port close-output-port complex? cons cos current-input-port current-output-port denominator display eof-object? eq? equal? eqv? eval even? exact->inexact exact? exp expt floor force gcd imag-part inexact->exact inexact? input-port? integer->char integer? interaction-environment lcm length list list->string list->vector list-ref list-tail list? load log magnitude make-polar make-rectangular make-string make-vector max member memq memv min modulo negative? newline not null-environment null? number->string number? numerator odd? open-input-file open-output-file output-port? pair? peek-char port? positive? procedure? quasiquote quote quotient rational? rationalize read read-char real-part real? remainder reverse round scheme-report-environment set! set-car! set-cdr! sin sqrt string string->list string->number string->symbol string-append string-ci<=? string-ci=? string-ci>? string-copy string-fill! string-length string-ref string-set! string<=? string=? string>? string? substring symbol->string symbol? tan transcript-off transcript-on truncate values vector vector->list vector-fill! vector-length vector-ref vector-set! with-input-from-file with-output-to-file write write-char zero?"},l={className:"literal",begin:"(#t|#f|#\\\\"+n+"|#\\\\.)"},c={className:"number",variants:[{begin:i,relevance:0},{begin:o,relevance:0},{begin:"#b[0-1]+(/[0-1]+)?"},{begin:"#o[0-7]+(/[0-7]+)?"},{begin:"#x[0-9a-f]+(/[0-9a-f]+)?"}]},d=e.QUOTE_STRING_MODE,_=[e.COMMENT(";","$",{relevance:0}),e.COMMENT("#\\|","\\|#")],p={begin:n,relevance:0},g={className:"symbol",begin:"'"+n},E={endsWithParent:!0,relevance:0},f={variants:[{begin:/'/},{begin:"`"}],contains:[{begin:"\\(",end:"\\)",contains:["self",l,d,c,p,g]}]},S={className:"name",relevance:0,begin:n,keywords:s},h={variants:[{begin:"\\(",end:"\\)"},{begin:"\\[",end:"\\]"}],contains:[{begin:/lambda/,endsWithParent:!0,returnBegin:!0,contains:[S,{endsParent:!0,variants:[{begin:/\(/,end:/\)/},{begin:/\[/,end:/\]/}],contains:[p]}]},S,E]};return E.contains=[l,c,d,p,g,f,h].concat(_),{name:"Scheme",aliases:["scm"],illegal:/\S/,contains:[e.SHEBANG(),c,d,g,f,h].concat(_)}}return kp=t,kp}var Up,qv;function qye(){if(qv)return Up;qv=1;function t(e){const n=[e.C_NUMBER_MODE,{className:"string",begin:`'|"`,end:`'|"`,contains:[e.BACKSLASH_ESCAPE,{begin:"''"}]}];return{name:"Scilab",aliases:["sci"],keywords:{$pattern:/%?\w+/,keyword:"abort break case clear catch continue do elseif else endfunction end for function global if pause return resume select try then while",literal:"%f %F %t %T %pi %eps %inf %nan %e %i %z %s",built_in:"abs and acos asin atan ceil cd chdir clearglobal cosh cos cumprod deff disp error exec execstr exists exp eye gettext floor fprintf fread fsolve imag isdef isempty isinfisnan isvector lasterror length load linspace list listfiles log10 log2 log max min msprintf mclose mopen ones or pathconvert poly printf prod pwd rand real round sinh sin size gsort sprintf sqrt strcat strcmps tring sum system tanh tan type typename warning zeros 
matrix"},illegal:'("|#|/\\*|\\s+/\\w+)',contains:[{className:"function",beginKeywords:"function",end:"$",contains:[e.UNDERSCORE_TITLE_MODE,{className:"params",begin:"\\(",end:"\\)"}]},{begin:"[a-zA-Z_][a-zA-Z_0-9]*[\\.']+",relevance:0},{begin:"\\[",end:"\\][\\.']*",relevance:0,contains:n},e.COMMENT("//","$")].concat(n)}}return Up=t,Up}var Fp,$v;function $ye(){if($v)return Fp;$v=1;const t=c=>({IMPORTANT:{scope:"meta",begin:"!important"},BLOCK_COMMENT:c.C_BLOCK_COMMENT_MODE,HEXCOLOR:{scope:"number",begin:/#(([0-9a-fA-F]{3,4})|(([0-9a-fA-F]{2}){3,4}))\b/},FUNCTION_DISPATCH:{className:"built_in",begin:/[\w-]+(?=\()/},ATTRIBUTE_SELECTOR_MODE:{scope:"selector-attr",begin:/\[/,end:/\]/,illegal:"$",contains:[c.APOS_STRING_MODE,c.QUOTE_STRING_MODE]},CSS_NUMBER_MODE:{scope:"number",begin:c.NUMBER_RE+"(%|em|ex|ch|rem|vw|vh|vmin|vmax|cm|mm|in|pt|pc|px|deg|grad|rad|turn|s|ms|Hz|kHz|dpi|dpcm|dppx)?",relevance:0},CSS_VARIABLE:{className:"attr",begin:/--[A-Za-z][A-Za-z0-9_-]*/}}),e=["a","abbr","address","article","aside","audio","b","blockquote","body","button","canvas","caption","cite","code","dd","del","details","dfn","div","dl","dt","em","fieldset","figcaption","figure","footer","form","h1","h2","h3","h4","h5","h6","header","hgroup","html","i","iframe","img","input","ins","kbd","label","legend","li","main","mark","menu","nav","object","ol","p","q","quote","samp","section","span","strong","summary","sup","table","tbody","td","textarea","tfoot","th","thead","time","tr","ul","var","video"],n=["any-hover","any-pointer","aspect-ratio","color","color-gamut","color-index","device-aspect-ratio","device-height","device-width","display-mode","forced-colors","grid","height","hover","inverted-colors","monochrome","orientation","overflow-block","overflow-inline","pointer","prefers-color-scheme","prefers-contrast","prefers-reduced-motion","prefers-reduced-transparency","resolution","scan","scripting","update","width","min-width","max-width","min-height","max-height"],i=["active","any-link","blank","checked","current","default","defined","dir","disabled","drop","empty","enabled","first","first-child","first-of-type","fullscreen","future","focus","focus-visible","focus-within","has","host","host-context","hover","indeterminate","in-range","invalid","is","lang","last-child","last-of-type","left","link","local-link","not","nth-child","nth-col","nth-last-child","nth-last-col","nth-last-of-type","nth-of-type","only-child","only-of-type","optional","out-of-range","past","placeholder-shown","read-only","read-write","required","right","root","scope","target","target-within","user-invalid","valid","visited","where"],o=["after","backdrop","before","cue","cue-region","first-letter","first-line","grammar-error","marker","part","placeholder","selection","slotted","spelling-error"],s=["align-content","align-items","align-self","all","animation","animation-delay","animation-direction","animation-duration","animation-fill-mode","animation-iteration-count","animation-name","animation-play-state","animation-timing-function","backface-visibility","background","background-attachment","background-blend-mode","background-clip","background-color","background-image","background-origin","background-position","background-repeat","background-size","block-size","border","border-block","border-block-color","border-block-end","border-block-end-color","border-block-end-style","border-block-end-width","border-block-start","border-block-start-color","border-block-start-style","border-block-start-width","border-block-style","border-block-width","borde
r-bottom","border-bottom-color","border-bottom-left-radius","border-bottom-right-radius","border-bottom-style","border-bottom-width","border-collapse","border-color","border-image","border-image-outset","border-image-repeat","border-image-slice","border-image-source","border-image-width","border-inline","border-inline-color","border-inline-end","border-inline-end-color","border-inline-end-style","border-inline-end-width","border-inline-start","border-inline-start-color","border-inline-start-style","border-inline-start-width","border-inline-style","border-inline-width","border-left","border-left-color","border-left-style","border-left-width","border-radius","border-right","border-right-color","border-right-style","border-right-width","border-spacing","border-style","border-top","border-top-color","border-top-left-radius","border-top-right-radius","border-top-style","border-top-width","border-width","bottom","box-decoration-break","box-shadow","box-sizing","break-after","break-before","break-inside","caption-side","caret-color","clear","clip","clip-path","clip-rule","color","column-count","column-fill","column-gap","column-rule","column-rule-color","column-rule-style","column-rule-width","column-span","column-width","columns","contain","content","content-visibility","counter-increment","counter-reset","cue","cue-after","cue-before","cursor","direction","display","empty-cells","filter","flex","flex-basis","flex-direction","flex-flow","flex-grow","flex-shrink","flex-wrap","float","flow","font","font-display","font-family","font-feature-settings","font-kerning","font-language-override","font-size","font-size-adjust","font-smoothing","font-stretch","font-style","font-synthesis","font-variant","font-variant-caps","font-variant-east-asian","font-variant-ligatures","font-variant-numeric","font-variant-position","font-variation-settings","font-weight","gap","glyph-orientation-vertical","grid","grid-area","grid-auto-columns","grid-auto-flow","grid-auto-rows","grid-column","grid-column-end","grid-column-start","grid-gap","grid-row","grid-row-end","grid-row-start","grid-template","grid-template-areas","grid-template-columns","grid-template-rows","hanging-punctuation","height","hyphens","icon","image-orientation","image-rendering","image-resolution","ime-mode","inline-size","isolation","justify-content","left","letter-spacing","line-break","line-height","list-style","list-style-image","list-style-position","list-style-type","margin","margin-block","margin-block-end","margin-block-start","margin-bottom","margin-inline","margin-inline-end","margin-inline-start","margin-left","margin-right","margin-top","marks","mask","mask-border","mask-border-mode","mask-border-outset","mask-border-repeat","mask-border-slice","mask-border-source","mask-border-width","mask-clip","mask-composite","mask-image","mask-mode","mask-origin","mask-position","mask-repeat","mask-size","mask-type","max-block-size","max-height","max-inline-size","max-width","min-block-size","min-height","min-inline-size","min-width","mix-blend-mode","nav-down","nav-index","nav-left","nav-right","nav-up","none","normal","object-fit","object-position","opacity","order","orphans","outline","outline-color","outline-offset","outline-style","outline-width","overflow","overflow-wrap","overflow-x","overflow-y","padding","padding-block","padding-block-end","padding-block-start","padding-bottom","padding-inline","padding-inline-end","padding-inline-start","padding-left","padding-right","padding-top","page-break-after","page-break-before","page-break-inside","pa
use","pause-after","pause-before","perspective","perspective-origin","pointer-events","position","quotes","resize","rest","rest-after","rest-before","right","row-gap","scroll-margin","scroll-margin-block","scroll-margin-block-end","scroll-margin-block-start","scroll-margin-bottom","scroll-margin-inline","scroll-margin-inline-end","scroll-margin-inline-start","scroll-margin-left","scroll-margin-right","scroll-margin-top","scroll-padding","scroll-padding-block","scroll-padding-block-end","scroll-padding-block-start","scroll-padding-bottom","scroll-padding-inline","scroll-padding-inline-end","scroll-padding-inline-start","scroll-padding-left","scroll-padding-right","scroll-padding-top","scroll-snap-align","scroll-snap-stop","scroll-snap-type","scrollbar-color","scrollbar-gutter","scrollbar-width","shape-image-threshold","shape-margin","shape-outside","speak","speak-as","src","tab-size","table-layout","text-align","text-align-all","text-align-last","text-combine-upright","text-decoration","text-decoration-color","text-decoration-line","text-decoration-style","text-emphasis","text-emphasis-color","text-emphasis-position","text-emphasis-style","text-indent","text-justify","text-orientation","text-overflow","text-rendering","text-shadow","text-transform","text-underline-position","top","transform","transform-box","transform-origin","transform-style","transition","transition-delay","transition-duration","transition-property","transition-timing-function","unicode-bidi","vertical-align","visibility","voice-balance","voice-duration","voice-family","voice-pitch","voice-range","voice-rate","voice-stress","voice-volume","white-space","widows","width","will-change","word-break","word-spacing","word-wrap","writing-mode","z-index"].reverse();function l(c){const d=t(c),_=o,p=i,g="@[a-z-]+",E="and or not 
only",S={className:"variable",begin:"(\\$"+"[a-zA-Z-][a-zA-Z0-9_-]*"+")\\b",relevance:0};return{name:"SCSS",case_insensitive:!0,illegal:"[=/|']",contains:[c.C_LINE_COMMENT_MODE,c.C_BLOCK_COMMENT_MODE,d.CSS_NUMBER_MODE,{className:"selector-id",begin:"#[A-Za-z0-9_-]+",relevance:0},{className:"selector-class",begin:"\\.[A-Za-z0-9_-]+",relevance:0},d.ATTRIBUTE_SELECTOR_MODE,{className:"selector-tag",begin:"\\b("+e.join("|")+")\\b",relevance:0},{className:"selector-pseudo",begin:":("+p.join("|")+")"},{className:"selector-pseudo",begin:":(:)?("+_.join("|")+")"},S,{begin:/\(/,end:/\)/,contains:[d.CSS_NUMBER_MODE]},d.CSS_VARIABLE,{className:"attribute",begin:"\\b("+s.join("|")+")\\b"},{begin:"\\b(whitespace|wait|w-resize|visible|vertical-text|vertical-ideographic|uppercase|upper-roman|upper-alpha|underline|transparent|top|thin|thick|text|text-top|text-bottom|tb-rl|table-header-group|table-footer-group|sw-resize|super|strict|static|square|solid|small-caps|separate|se-resize|scroll|s-resize|rtl|row-resize|ridge|right|repeat|repeat-y|repeat-x|relative|progress|pointer|overline|outside|outset|oblique|nowrap|not-allowed|normal|none|nw-resize|no-repeat|no-drop|newspaper|ne-resize|n-resize|move|middle|medium|ltr|lr-tb|lowercase|lower-roman|lower-alpha|loose|list-item|line|line-through|line-edge|lighter|left|keep-all|justify|italic|inter-word|inter-ideograph|inside|inset|inline|inline-block|inherit|inactive|ideograph-space|ideograph-parenthesis|ideograph-numeric|ideograph-alpha|horizontal|hidden|help|hand|groove|fixed|ellipsis|e-resize|double|dotted|distribute|distribute-space|distribute-letter|distribute-all-lines|disc|disabled|default|decimal|dashed|crosshair|collapse|col-resize|circle|char|center|capitalize|break-word|break-all|bottom|both|bolder|bold|block|bidi-override|below|baseline|auto|always|all-scroll|absolute|table|table-cell)\\b"},{begin:/:/,end:/[;}{]/,relevance:0,contains:[d.BLOCK_COMMENT,S,d.HEXCOLOR,d.CSS_NUMBER_MODE,c.QUOTE_STRING_MODE,c.APOS_STRING_MODE,d.IMPORTANT,d.FUNCTION_DISPATCH]},{begin:"@(page|font-face)",keywords:{$pattern:g,keyword:"@page @font-face"}},{begin:"@",end:"[{;]",returnBegin:!0,keywords:{$pattern:/[a-z-]+/,keyword:E,attribute:n.join(" ")},contains:[{begin:g,className:"keyword"},{begin:/[a-z-]+(?=:)/,className:"attribute"},S,c.QUOTE_STRING_MODE,c.APOS_STRING_MODE,d.HEXCOLOR,d.CSS_NUMBER_MODE]},d.FUNCTION_DISPATCH]}}return Fp=l,Fp}var Bp,Hv;function Hye(){if(Hv)return Bp;Hv=1;function t(e){return{name:"Shell Session",aliases:["console","shellsession"],contains:[{className:"meta.prompt",begin:/^\s{0,3}[/~\w\d[\]()@-]*[>%$#][ ]?/,starts:{end:/[^\\](?=\s*$)/,subLanguage:"bash"}}]}}return Bp=t,Bp}var Gp,zv;function zye(){if(zv)return Gp;zv=1;function t(e){const n=["add","and","cmp","cmpg","cmpl","const","div","double","float","goto","if","int","long","move","mul","neg","new","nop","not","or","rem","return","shl","shr","sput","sub","throw","ushr","xor"],i=["aget","aput","array","check","execute","fill","filled","goto/16","goto/32","iget","instance","invoke","iput","monitor","packed","sget","sparse"],o=["transient","constructor","abstract","final","synthetic","public","private","protected","static","bridge","system"];return{name:"Smali",contains:[{className:"string",begin:'"',end:'"',relevance:0},e.COMMENT("#","$",{relevance:0}),{className:"keyword",variants:[{begin:"\\s*\\.end\\s[a-zA-Z0-9]*"},{begin:"^[ 
]*\\.[a-zA-Z]*",relevance:0},{begin:"\\s:[a-zA-Z_0-9]*",relevance:0},{begin:"\\s("+o.join("|")+")"}]},{className:"built_in",variants:[{begin:"\\s("+n.join("|")+")\\s"},{begin:"\\s("+n.join("|")+")((-|/)[a-zA-Z0-9]+)+\\s",relevance:10},{begin:"\\s("+i.join("|")+")((-|/)[a-zA-Z0-9]+)*\\s",relevance:10}]},{className:"class",begin:`L[^(;: -]*;`,relevance:0},{begin:"[vp][0-9]+"}]}}return Gp=t,Gp}var Yp,Vv;function Vye(){if(Vv)return Yp;Vv=1;function t(e){const n="[a-z][a-zA-Z0-9_]*",i={className:"string",begin:"\\$.{1}"},o={className:"symbol",begin:"#"+e.UNDERSCORE_IDENT_RE};return{name:"Smalltalk",aliases:["st"],keywords:["self","super","nil","true","false","thisContext"],contains:[e.COMMENT('"','"'),e.APOS_STRING_MODE,{className:"type",begin:"\\b[A-Z][A-Za-z0-9_]*",relevance:0},{begin:n+":",relevance:0},e.C_NUMBER_MODE,o,i,{begin:"\\|[ ]*"+n+"([ ]+"+n+")*[ ]*\\|",returnBegin:!0,end:/\|/,illegal:/\S/,contains:[{begin:"(\\|[ ]*)?"+n}]},{begin:"#\\(",end:"\\)",contains:[e.APOS_STRING_MODE,i,e.C_NUMBER_MODE,o]}]}}return Yp=t,Yp}var qp,Wv;function Wye(){if(Wv)return qp;Wv=1;function t(e){return{name:"SML (Standard ML)",aliases:["ml"],keywords:{$pattern:"[a-z_]\\w*!?",keyword:"abstype and andalso as case datatype do else end eqtype exception fn fun functor handle if in include infix infixr let local nonfix of op open orelse raise rec sharing sig signature struct structure then type val with withtype where while",built_in:"array bool char exn int list option order real ref string substring vector unit word",literal:"true false NONE SOME LESS EQUAL GREATER nil"},illegal:/\/\/|>>/,contains:[{className:"literal",begin:/\[(\|\|)?\]|\(\)/,relevance:0},e.COMMENT("\\(\\*","\\*\\)",{contains:["self"]}),{className:"symbol",begin:"'[A-Za-z_](?!')[\\w']*"},{className:"type",begin:"`[A-Z][\\w']*"},{className:"type",begin:"\\b[A-Z][\\w']*",relevance:0},{begin:"[a-z_]\\w*'[\\w']*"},e.inherit(e.APOS_STRING_MODE,{className:"string",relevance:0}),e.inherit(e.QUOTE_STRING_MODE,{illegal:null}),{className:"number",begin:"\\b(0[xX][a-fA-F0-9_]+[Lln]?|0[oO][0-7_]+[Lln]?|0[bB][01_]+[Lln]?|[0-9][0-9_]*([Lln]|(\\.[0-9_]*)?([eE][-+]?[0-9_]+)?)?)",relevance:0},{begin:/[-=]>/}]}}return qp=t,qp}var $p,Kv;function Kye(){if(Kv)return $p;Kv=1;function t(e){const 
n={className:"variable",begin:/\b_+[a-zA-Z]\w*/},i={className:"title",begin:/[a-zA-Z][a-zA-Z_0-9]*_fnc_[a-zA-Z_0-9]+/},o={className:"string",variants:[{begin:'"',end:'"',contains:[{begin:'""',relevance:0}]},{begin:"'",end:"'",contains:[{begin:"''",relevance:0}]}]},s=["break","breakWith","breakOut","breakTo","case","catch","continue","continueWith","default","do","else","exit","exitWith","for","forEach","from","if","local","private","switch","step","then","throw","to","try","waitUntil","while","with"],l=["blufor","civilian","configNull","controlNull","displayNull","diaryRecordNull","east","endl","false","grpNull","independent","lineBreak","locationNull","nil","objNull","opfor","pi","resistance","scriptNull","sideAmbientLife","sideEmpty","sideEnemy","sideFriendly","sideLogic","sideUnknown","taskNull","teamMemberNull","true","west"],c=["abs","accTime","acos","action","actionIDs","actionKeys","actionKeysEx","actionKeysImages","actionKeysNames","actionKeysNamesArray","actionName","actionParams","activateAddons","activatedAddons","activateKey","activeTitleEffectParams","add3DENConnection","add3DENEventHandler","add3DENLayer","addAction","addBackpack","addBackpackCargo","addBackpackCargoGlobal","addBackpackGlobal","addBinocularItem","addCamShake","addCuratorAddons","addCuratorCameraArea","addCuratorEditableObjects","addCuratorEditingArea","addCuratorPoints","addEditorObject","addEventHandler","addForce","addForceGeneratorRTD","addGoggles","addGroupIcon","addHandgunItem","addHeadgear","addItem","addItemCargo","addItemCargoGlobal","addItemPool","addItemToBackpack","addItemToUniform","addItemToVest","addLiveStats","addMagazine","addMagazineAmmoCargo","addMagazineCargo","addMagazineCargoGlobal","addMagazineGlobal","addMagazinePool","addMagazines","addMagazineTurret","addMenu","addMenuItem","addMissionEventHandler","addMPEventHandler","addMusicEventHandler","addonFiles","addOwnedMine","addPlayerScores","addPrimaryWeaponItem","addPublicVariableEventHandler","addRating","addResources","addScore","addScoreSide","addSecondaryWeaponItem","addSwitchableUnit","addTeamMember","addToRemainsCollector","addTorque","addUniform","addUserActionEventHandler","addVehicle","addVest","addWaypoint","addWeapon","addWeaponCargo","addWeaponCargoGlobal","addWeaponGlobal","addWeaponItem","addWeaponPool","addWeaponTurret","addWeaponWithAttachmentsCargo","addWeaponWithAttachmentsCargoGlobal","admin","agent","agents","AGLToASL","aimedAtTarget","aimPos","airDensityCurveRTD","airDensityRTD","airplaneThrottle","airportSide","AISFinishHeal","alive","all3DENEntities","allActiveTitleEffects","allAddonsInfo","allAirports","allControls","allCurators","allCutLayers","allDead","allDeadMen","allDiaryRecords","allDiarySubjects","allDisplays","allEnv3DSoundSources","allGroups","allLODs","allMapMarkers","allMines","allMissionObjects","allObjects","allow3DMode","allowCrewInImmobile","allowCuratorLogicIgnoreAreas","allowDamage","allowDammage","allowedService","allowFileOperations","allowFleeing","allowGetIn","allowService","allowSprint","allPlayers","allSimpleObjects","allSites","allTurrets","allUnits","allUnitsUAV","allUsers","allVariables","ambientTemperature","ammo","ammoOnPylon","and","animate","animateBay","animateDoor","animatePylon","animateSource","animationNames","animationPhase","animationSourcePhase","animationState","apertureParams","append","apply","armoryPoints","arrayIntersect","asin","ASLToAGL","ASLToATL","assert","assignAsCargo","assignAsCargoIndex","assignAsCommander","assignAsDriver","assignAsGunner","assignAsTurret","assignC
urator","assignedCargo","assignedCommander","assignedDriver","assignedGroup","assignedGunner","assignedItems","assignedTarget","assignedTeam","assignedVehicle","assignedVehicleRole","assignedVehicles","assignItem","assignTeam","assignToAirport","atan","atan2","atg","ATLToASL","attachedObject","attachedObjects","attachedTo","attachObject","attachTo","attackEnabled","awake","backpack","backpackCargo","backpackContainer","backpackItems","backpackMagazines","backpackSpaceFor","behaviour","benchmark","bezierInterpolation","binocular","binocularItems","binocularMagazine","boundingBox","boundingBoxReal","boundingCenter","brakesDisabled","briefingName","buildingExit","buildingPos","buldozer_EnableRoadDiag","buldozer_IsEnabledRoadDiag","buldozer_LoadNewRoads","buldozer_reloadOperMap","buttonAction","buttonSetAction","cadetMode","calculatePath","calculatePlayerVisibilityByFriendly","call","callExtension","camCommand","camCommit","camCommitPrepared","camCommitted","camConstuctionSetParams","camCreate","camDestroy","cameraEffect","cameraEffectEnableHUD","cameraInterest","cameraOn","cameraView","campaignConfigFile","camPreload","camPreloaded","camPrepareBank","camPrepareDir","camPrepareDive","camPrepareFocus","camPrepareFov","camPrepareFovRange","camPreparePos","camPrepareRelPos","camPrepareTarget","camSetBank","camSetDir","camSetDive","camSetFocus","camSetFov","camSetFovRange","camSetPos","camSetRelPos","camSetTarget","camTarget","camUseNVG","canAdd","canAddItemToBackpack","canAddItemToUniform","canAddItemToVest","cancelSimpleTaskDestination","canDeployWeapon","canFire","canMove","canSlingLoad","canStand","canSuspend","canTriggerDynamicSimulation","canUnloadInCombat","canVehicleCargo","captive","captiveNum","cbChecked","cbSetChecked","ceil","channelEnabled","cheatsEnabled","checkAIFeature","checkVisibility","className","clear3DENAttribute","clear3DENInventory","clearAllItemsFromBackpack","clearBackpackCargo","clearBackpackCargoGlobal","clearForcesRTD","clearGroupIcons","clearItemCargo","clearItemCargoGlobal","clearItemPool","clearMagazineCargo","clearMagazineCargoGlobal","clearMagazinePool","clearOverlay","clearRadio","clearWeaponCargo","clearWeaponCargoGlobal","clearWeaponPool","clientOwner","closeDialog","closeDisplay","closeOverlay","collapseObjectTree","collect3DENHistory","collectiveRTD","collisionDisabledWith","combatBehaviour","combatMode","commandArtilleryFire","commandChat","commander","commandFire","commandFollow","commandFSM","commandGetOut","commandingMenu","commandMove","commandRadio","commandStop","commandSuppressiveFire","commandTarget","commandWatch","comment","commitOverlay","compatibleItems","compatibleMagazines","compile","compileFinal","compileScript","completedFSM","composeText","configClasses","configFile","configHierarchy","configName","configOf","configProperties","configSourceAddonList","configSourceMod","configSourceModList","confirmSensorTarget","connectTerminalToUAV","connectToServer","controlsGroupCtrl","conversationDisabled","copyFromClipboard","copyToClipboard","copyWaypoints","cos","count","countEnemy","countFriendly","countSide","countType","countUnknown","create3DENComposition","create3DENEntity","createAgent","createCenter","createDialog","createDiaryLink","createDiaryRecord","createDiarySubject","createDisplay","createGearDialog","createGroup","createGuardedPoint","createHashMap","createHashMapFromArray","createLocation","createMarker","createMarkerLocal","createMenu","createMine","createMissionDisplay","createMPCampaignDisplay","createSimpleObject","createSimpleTask
","createSite","createSoundSource","createTask","createTeam","createTrigger","createUnit","createVehicle","createVehicleCrew","createVehicleLocal","crew","ctAddHeader","ctAddRow","ctClear","ctCurSel","ctData","ctFindHeaderRows","ctFindRowHeader","ctHeaderControls","ctHeaderCount","ctRemoveHeaders","ctRemoveRows","ctrlActivate","ctrlAddEventHandler","ctrlAngle","ctrlAnimateModel","ctrlAnimationPhaseModel","ctrlAt","ctrlAutoScrollDelay","ctrlAutoScrollRewind","ctrlAutoScrollSpeed","ctrlBackgroundColor","ctrlChecked","ctrlClassName","ctrlCommit","ctrlCommitted","ctrlCreate","ctrlDelete","ctrlEnable","ctrlEnabled","ctrlFade","ctrlFontHeight","ctrlForegroundColor","ctrlHTMLLoaded","ctrlIDC","ctrlIDD","ctrlMapAnimAdd","ctrlMapAnimClear","ctrlMapAnimCommit","ctrlMapAnimDone","ctrlMapCursor","ctrlMapMouseOver","ctrlMapPosition","ctrlMapScale","ctrlMapScreenToWorld","ctrlMapSetPosition","ctrlMapWorldToScreen","ctrlModel","ctrlModelDirAndUp","ctrlModelScale","ctrlMousePosition","ctrlParent","ctrlParentControlsGroup","ctrlPosition","ctrlRemoveAllEventHandlers","ctrlRemoveEventHandler","ctrlScale","ctrlScrollValues","ctrlSetActiveColor","ctrlSetAngle","ctrlSetAutoScrollDelay","ctrlSetAutoScrollRewind","ctrlSetAutoScrollSpeed","ctrlSetBackgroundColor","ctrlSetChecked","ctrlSetDisabledColor","ctrlSetEventHandler","ctrlSetFade","ctrlSetFocus","ctrlSetFont","ctrlSetFontH1","ctrlSetFontH1B","ctrlSetFontH2","ctrlSetFontH2B","ctrlSetFontH3","ctrlSetFontH3B","ctrlSetFontH4","ctrlSetFontH4B","ctrlSetFontH5","ctrlSetFontH5B","ctrlSetFontH6","ctrlSetFontH6B","ctrlSetFontHeight","ctrlSetFontHeightH1","ctrlSetFontHeightH2","ctrlSetFontHeightH3","ctrlSetFontHeightH4","ctrlSetFontHeightH5","ctrlSetFontHeightH6","ctrlSetFontHeightSecondary","ctrlSetFontP","ctrlSetFontPB","ctrlSetFontSecondary","ctrlSetForegroundColor","ctrlSetModel","ctrlSetModelDirAndUp","ctrlSetModelScale","ctrlSetMousePosition","ctrlSetPixelPrecision","ctrlSetPosition","ctrlSetPositionH","ctrlSetPositionW","ctrlSetPositionX","ctrlSetPositionY","ctrlSetScale","ctrlSetScrollValues","ctrlSetShadow","ctrlSetStructuredText","ctrlSetText","ctrlSetTextColor","ctrlSetTextColorSecondary","ctrlSetTextSecondary","ctrlSetTextSelection","ctrlSetTooltip","ctrlSetTooltipColorBox","ctrlSetTooltipColorShade","ctrlSetTooltipColorText","ctrlSetTooltipMaxWidth","ctrlSetURL","ctrlSetURLOverlayMode","ctrlShadow","ctrlShow","ctrlShown","ctrlStyle","ctrlText","ctrlTextColor","ctrlTextHeight","ctrlTextSecondary","ctrlTextSelection","ctrlTextWidth","ctrlTooltip","ctrlType","ctrlURL","ctrlURLOverlayMode","ctrlVisible","ctRowControls","ctRowCount","ctSetCurSel","ctSetData","ctSetHeaderTemplate","ctSetRowTemplate","ctSetValue","ctValue","curatorAddons","curatorCamera","curatorCameraArea","curatorCameraAreaCeiling","curatorCoef","curatorEditableObjects","curatorEditingArea","curatorEditingAreaType","curatorMouseOver","curatorPoints","curatorRegisteredObjects","curatorSelected","curatorWaypointCost","current3DENOperation","currentChannel","currentCommand","currentMagazine","currentMagazineDetail","currentMagazineDetailTurret","currentMagazineTurret","currentMuzzle","currentNamespace","currentPilot","currentTask","currentTasks","currentThrowable","currentVisionMode","currentWaypoint","currentWeapon","currentWeaponMode","currentWeaponTurret","currentZeroing","cursorObject","cursorTarget","customChat","customRadio","customWaypointPosition","cutFadeOut","cutObj","cutRsc","cutText","damage","date","dateToNumber","dayTime","deActivateKey","debriefingText","debugFSM","debugLog","decayG
raphValues","deg","delete3DENEntities","deleteAt","deleteCenter","deleteCollection","deleteEditorObject","deleteGroup","deleteGroupWhenEmpty","deleteIdentity","deleteLocation","deleteMarker","deleteMarkerLocal","deleteRange","deleteResources","deleteSite","deleteStatus","deleteTeam","deleteVehicle","deleteVehicleCrew","deleteWaypoint","detach","detectedMines","diag_activeMissionFSMs","diag_activeScripts","diag_activeSQFScripts","diag_activeSQSScripts","diag_allMissionEventHandlers","diag_captureFrame","diag_captureFrameToFile","diag_captureSlowFrame","diag_codePerformance","diag_deltaTime","diag_drawmode","diag_dumpCalltraceToLog","diag_dumpScriptAssembly","diag_dumpTerrainSynth","diag_dynamicSimulationEnd","diag_enable","diag_enabled","diag_exportConfig","diag_exportTerrainSVG","diag_fps","diag_fpsmin","diag_frameno","diag_getTerrainSegmentOffset","diag_lightNewLoad","diag_list","diag_localized","diag_log","diag_logSlowFrame","diag_mergeConfigFile","diag_recordTurretLimits","diag_resetFSM","diag_resetshapes","diag_scope","diag_setLightNew","diag_stacktrace","diag_tickTime","diag_toggle","dialog","diarySubjectExists","didJIP","didJIPOwner","difficulty","difficultyEnabled","difficultyEnabledRTD","difficultyOption","direction","directionStabilizationEnabled","directSay","disableAI","disableBrakes","disableCollisionWith","disableConversation","disableDebriefingStats","disableMapIndicators","disableNVGEquipment","disableRemoteSensors","disableSerialization","disableTIEquipment","disableUAVConnectability","disableUserInput","displayAddEventHandler","displayChild","displayCtrl","displayParent","displayRemoveAllEventHandlers","displayRemoveEventHandler","displaySetEventHandler","displayUniqueName","displayUpdate","dissolveTeam","distance","distance2D","distanceSqr","distributionRegion","do3DENAction","doArtilleryFire","doFire","doFollow","doFSM","doGetOut","doMove","doorPhase","doStop","doSuppressiveFire","doTarget","doWatch","drawArrow","drawEllipse","drawIcon","drawIcon3D","drawLaser","drawLine","drawLine3D","drawLink","drawLocation","drawPolygon","drawRectangle","drawTriangle","driver","drop","dynamicSimulationDistance","dynamicSimulationDistanceCoef","dynamicSimulationEnabled","dynamicSimulationSystemEnabled","echo","edit3DENMissionAttributes","editObject","editorSetEventHandler","effectiveCommander","elevatePeriscope","emptyPositions","enableAI","enableAIFeature","enableAimPrecision","enableAttack","enableAudioFeature","enableAutoStartUpRTD","enableAutoTrimRTD","enableCamShake","enableCaustics","enableChannel","enableCollisionWith","enableCopilot","enableDebriefingStats","enableDiagLegend","enableDirectionStabilization","enableDynamicSimulation","enableDynamicSimulationSystem","enableEndDialog","enableEngineArtillery","enableEnvironment","enableFatigue","enableGunLights","enableInfoPanelComponent","enableIRLasers","enableMimics","enablePersonTurret","enableRadio","enableReload","enableRopeAttach","enableSatNormalOnDetail","enableSaving","enableSentences","enableSimulation","enableSimulationGlobal","enableStamina","enableStressDamage","enableTeamSwitch","enableTraffic","enableUAVConnectability","enableUAVWaypoints","enableVehicleCargo","enableVehicleSensor","enableWeaponDisassembly","endLoadingScreen","endMission","engineOn","enginesIsOnRTD","enginesPowerRTD","enginesRpmRTD","enginesTorqueRTD","entities","environmentEnabled","environmentVolume","equipmentDisabled","estimatedEndServerTime","estimatedTimeLeft","evalObjectArgument","everyBackpack","everyContainer","exec","execEditorScript","execF
SM","execVM","exp","expectedDestination","exportJIPMessages","eyeDirection","eyePos","face","faction","fadeEnvironment","fadeMusic","fadeRadio","fadeSound","fadeSpeech","failMission","fileExists","fillWeaponsFromPool","find","findAny","findCover","findDisplay","findEditorObject","findEmptyPosition","findEmptyPositionReady","findIf","findNearestEnemy","finishMissionInit","finite","fire","fireAtTarget","firstBackpack","flag","flagAnimationPhase","flagOwner","flagSide","flagTexture","flatten","fleeing","floor","flyInHeight","flyInHeightASL","focusedCtrl","fog","fogForecast","fogParams","forceAddUniform","forceAtPositionRTD","forceCadetDifficulty","forcedMap","forceEnd","forceFlagTexture","forceFollowRoad","forceGeneratorRTD","forceMap","forceRespawn","forceSpeed","forceUnicode","forceWalk","forceWeaponFire","forceWeatherChange","forEachMember","forEachMemberAgent","forEachMemberTeam","forgetTarget","format","formation","formationDirection","formationLeader","formationMembers","formationPosition","formationTask","formatText","formLeader","freeExtension","freeLook","fromEditor","fuel","fullCrew","gearIDCAmmoCount","gearSlotAmmoCount","gearSlotData","gestureState","get","get3DENActionState","get3DENAttribute","get3DENCamera","get3DENConnections","get3DENEntity","get3DENEntityID","get3DENGrid","get3DENIconsVisible","get3DENLayerEntities","get3DENLinesVisible","get3DENMissionAttribute","get3DENMouseOver","get3DENSelected","getAimingCoef","getAllEnv3DSoundControllers","getAllEnvSoundControllers","getAllHitPointsDamage","getAllOwnedMines","getAllPylonsInfo","getAllSoundControllers","getAllUnitTraits","getAmmoCargo","getAnimAimPrecision","getAnimSpeedCoef","getArray","getArtilleryAmmo","getArtilleryComputerSettings","getArtilleryETA","getAssetDLCInfo","getAssignedCuratorLogic","getAssignedCuratorUnit","getAttackTarget","getAudioOptionVolumes","getBackpackCargo","getBleedingRemaining","getBurningValue","getCalculatePlayerVisibilityByFriendly","getCameraViewDirection","getCargoIndex","getCenterOfMass","getClientState","getClientStateNumber","getCompatiblePylonMagazines","getConnectedUAV","getConnectedUAVUnit","getContainerMaxLoad","getCorpse","getCruiseControl","getCursorObjectParams","getCustomAimCoef","getCustomSoundController","getCustomSoundControllerCount","getDammage","getDebriefingText","getDescription","getDir","getDirVisual","getDiverState","getDLCAssetsUsage","getDLCAssetsUsageByName","getDLCs","getDLCUsageTime","getEditorCamera","getEditorMode","getEditorObjectScope","getElevationOffset","getEngineTargetRPMRTD","getEnv3DSoundController","getEnvSoundController","getEventHandlerInfo","getFatigue","getFieldManualStartPage","getForcedFlagTexture","getForcedSpeed","getFriend","getFSMVariable","getFuelCargo","getGraphValues","getGroupIcon","getGroupIconParams","getGroupIcons","getHideFrom","getHit","getHitIndex","getHitPointDamage","getItemCargo","getLighting","getLightingAt","getLoadedModsInfo","getMagazineCargo","getMarkerColor","getMarkerPos","getMarkerSize","getMarkerType","getMass","getMissionConfig","getMissionConfigValue","getMissionDLCs","getMissionLayerEntities","getMissionLayers","getMissionPath","getModelInfo","getMousePosition","getMusicPlayedTime","getNumber","getObjectArgument","getObjectChildren","getObjectDLC","getObjectFOV","getObjectID","getObjectMaterials","getObjectProxy","getObjectScale","getObjectTextures","getObjectType","getObjectViewDistance","getOpticsMode","getOrDefault","getOrDefaultCall","getOxygenRemaining","getPersonUsedDLCs","getPilotCameraDirection","getPilotCameraP
osition","getPilotCameraRotation","getPilotCameraTarget","getPiPViewDistance","getPlateNumber","getPlayerChannel","getPlayerID","getPlayerScores","getPlayerUID","getPlayerVoNVolume","getPos","getPosASL","getPosASLVisual","getPosASLW","getPosATL","getPosATLVisual","getPosVisual","getPosWorld","getPosWorldVisual","getPylonMagazines","getRelDir","getRelPos","getRemoteSensorsDisabled","getRepairCargo","getResolution","getRoadInfo","getRotorBrakeRTD","getSensorTargets","getSensorThreats","getShadowDistance","getShotParents","getSlingLoad","getSoundController","getSoundControllerResult","getSpeed","getStamina","getStatValue","getSteamFriendsServers","getSubtitleOptions","getSuppression","getTerrainGrid","getTerrainHeight","getTerrainHeightASL","getTerrainInfo","getText","getTextRaw","getTextureInfo","getTextWidth","getTiParameters","getTotalDLCUsageTime","getTrimOffsetRTD","getTurretLimits","getTurretOpticsMode","getUnitFreefallInfo","getUnitLoadout","getUnitTrait","getUnloadInCombat","getUserInfo","getUserMFDText","getUserMFDValue","getVariable","getVehicleCargo","getVehicleTiPars","getWeaponCargo","getWeaponSway","getWingsOrientationRTD","getWingsPositionRTD","getWPPos","glanceAt","globalChat","globalRadio","goggles","goto","group","groupChat","groupFromNetId","groupIconSelectable","groupIconsVisible","groupID","groupOwner","groupRadio","groups","groupSelectedUnits","groupSelectUnit","gunner","gusts","halt","handgunItems","handgunMagazine","handgunWeapon","handsHit","hashValue","hasInterface","hasPilotCamera","hasWeapon","hcAllGroups","hcGroupParams","hcLeader","hcRemoveAllGroups","hcRemoveGroup","hcSelected","hcSelectGroup","hcSetGroup","hcShowBar","hcShownBar","headgear","hideBody","hideObject","hideObjectGlobal","hideSelection","hint","hintC","hintCadet","hintSilent","hmd","hostMission","htmlLoad","HUDMovementLevels","humidity","image","importAllGroups","importance","in","inArea","inAreaArray","incapacitatedState","inflame","inflamed","infoPanel","infoPanelComponentEnabled","infoPanelComponents","infoPanels","inGameUISetEventHandler","inheritsFrom","initAmbientLife","inPolygon","inputAction","inputController","inputMouse","inRangeOfArtillery","insert","insertEditorObject","intersect","is3DEN","is3DENMultiplayer","is3DENPreview","isAbleToBreathe","isActionMenuVisible","isAgent","isAimPrecisionEnabled","isAllowedCrewInImmobile","isArray","isAutoHoverOn","isAutonomous","isAutoStartUpEnabledRTD","isAutotest","isAutoTrimOnRTD","isAwake","isBleeding","isBurning","isClass","isCollisionLightOn","isCopilotEnabled","isDamageAllowed","isDedicated","isDLCAvailable","isEngineOn","isEqualRef","isEqualTo","isEqualType","isEqualTypeAll","isEqualTypeAny","isEqualTypeArray","isEqualTypeParams","isFilePatchingEnabled","isFinal","isFlashlightOn","isFlatEmpty","isForcedWalk","isFormationLeader","isGameFocused","isGamePaused","isGroupDeletedWhenEmpty","isHidden","isInRemainsCollector","isInstructorFigureEnabled","isIRLaserOn","isKeyActive","isKindOf","isLaserOn","isLightOn","isLocalized","isManualFire","isMarkedForCollection","isMissionProfileNamespaceLoaded","isMultiplayer","isMultiplayerSolo","isNil","isNotEqualRef","isNotEqualTo","isNull","isNumber","isObjectHidden","isObjectRTD","isOnRoad","isPiPEnabled","isPlayer","isRealTime","isRemoteExecuted","isRemoteExecutedJIP","isSaving","isSensorTargetConfirmed","isServer","isShowing3DIcons","isSimpleObject","isSprintAllowed","isStaminaEnabled","isSteamMission","isSteamOverlayEnabled","isStreamFriendlyUIEnabled","isStressDamageEnabled","isText","isTouchingGround","is
TurnedOut","isTutHintsEnabled","isUAVConnectable","isUAVConnected","isUIContext","isUniformAllowed","isVehicleCargo","isVehicleRadarOn","isVehicleSensorEnabled","isWalking","isWeaponDeployed","isWeaponRested","itemCargo","items","itemsWithMagazines","join","joinAs","joinAsSilent","joinSilent","joinString","kbAddDatabase","kbAddDatabaseTargets","kbAddTopic","kbHasTopic","kbReact","kbRemoveTopic","kbTell","kbWasSaid","keyImage","keyName","keys","knowsAbout","land","landAt","landResult","language","laserTarget","lbAdd","lbClear","lbColor","lbColorRight","lbCurSel","lbData","lbDelete","lbIsSelected","lbPicture","lbPictureRight","lbSelection","lbSetColor","lbSetColorRight","lbSetCurSel","lbSetData","lbSetPicture","lbSetPictureColor","lbSetPictureColorDisabled","lbSetPictureColorSelected","lbSetPictureRight","lbSetPictureRightColor","lbSetPictureRightColorDisabled","lbSetPictureRightColorSelected","lbSetSelectColor","lbSetSelectColorRight","lbSetSelected","lbSetText","lbSetTextRight","lbSetTooltip","lbSetValue","lbSize","lbSort","lbSortBy","lbSortByValue","lbText","lbTextRight","lbTooltip","lbValue","leader","leaderboardDeInit","leaderboardGetRows","leaderboardInit","leaderboardRequestRowsFriends","leaderboardRequestRowsGlobal","leaderboardRequestRowsGlobalAroundUser","leaderboardsRequestUploadScore","leaderboardsRequestUploadScoreKeepBest","leaderboardState","leaveVehicle","libraryCredits","libraryDisclaimers","lifeState","lightAttachObject","lightDetachObject","lightIsOn","lightnings","limitSpeed","linearConversion","lineIntersects","lineIntersectsObjs","lineIntersectsSurfaces","lineIntersectsWith","linkItem","list","listObjects","listRemoteTargets","listVehicleSensors","ln","lnbAddArray","lnbAddColumn","lnbAddRow","lnbClear","lnbColor","lnbColorRight","lnbCurSelRow","lnbData","lnbDeleteColumn","lnbDeleteRow","lnbGetColumnsPosition","lnbPicture","lnbPictureRight","lnbSetColor","lnbSetColorRight","lnbSetColumnsPos","lnbSetCurSelRow","lnbSetData","lnbSetPicture","lnbSetPictureColor","lnbSetPictureColorRight","lnbSetPictureColorSelected","lnbSetPictureColorSelectedRight","lnbSetPictureRight","lnbSetText","lnbSetTextRight","lnbSetTooltip","lnbSetValue","lnbSize","lnbSort","lnbSortBy","lnbSortByValue","lnbText","lnbTextRight","lnbValue","load","loadAbs","loadBackpack","loadConfig","loadFile","loadGame","loadIdentity","loadMagazine","loadOverlay","loadStatus","loadUniform","loadVest","localize","localNamespace","locationPosition","lock","lockCameraTo","lockCargo","lockDriver","locked","lockedCameraTo","lockedCargo","lockedDriver","lockedInventory","lockedTurret","lockIdentity","lockInventory","lockTurret","lockWp","log","logEntities","logNetwork","logNetworkTerminate","lookAt","lookAtPos","magazineCargo","magazines","magazinesAllTurrets","magazinesAmmo","magazinesAmmoCargo","magazinesAmmoFull","magazinesDetail","magazinesDetailBackpack","magazinesDetailUniform","magazinesDetailVest","magazinesTurret","magazineTurretAmmo","mapAnimAdd","mapAnimClear","mapAnimCommit","mapAnimDone","mapCenterOnCamera","mapGridPosition","markAsFinishedOnSteam","markerAlpha","markerBrush","markerChannel","markerColor","markerDir","markerPolyline","markerPos","markerShadow","markerShape","markerSize","markerText","markerType","matrixMultiply","matrixTranspose","max","maxLoad","members","menuAction","menuAdd","menuChecked","menuClear","menuCollapse","menuData","menuDelete","menuEnable","menuEnabled","menuExpand","menuHover","menuPicture","menuSetAction","menuSetCheck","menuSetData","menuSetPicture","menuSetShortcut","menuSet
Text","menuSetURL","menuSetValue","menuShortcut","menuShortcutText","menuSize","menuSort","menuText","menuURL","menuValue","merge","min","mineActive","mineDetectedBy","missileTarget","missileTargetPos","missionConfigFile","missionDifficulty","missionEnd","missionName","missionNameSource","missionNamespace","missionProfileNamespace","missionStart","missionVersion","mod","modelToWorld","modelToWorldVisual","modelToWorldVisualWorld","modelToWorldWorld","modParams","moonIntensity","moonPhase","morale","move","move3DENCamera","moveInAny","moveInCargo","moveInCommander","moveInDriver","moveInGunner","moveInTurret","moveObjectToEnd","moveOut","moveTime","moveTo","moveToCompleted","moveToFailed","musicVolume","name","namedProperties","nameSound","nearEntities","nearestBuilding","nearestLocation","nearestLocations","nearestLocationWithDubbing","nearestMines","nearestObject","nearestObjects","nearestTerrainObjects","nearObjects","nearObjectsReady","nearRoads","nearSupplies","nearTargets","needReload","needService","netId","netObjNull","newOverlay","nextMenuItemIndex","nextWeatherChange","nMenuItems","not","numberOfEnginesRTD","numberToDate","objectCurators","objectFromNetId","objectParent","objStatus","onBriefingGroup","onBriefingNotes","onBriefingPlan","onBriefingTeamSwitch","onCommandModeChanged","onDoubleClick","onEachFrame","onGroupIconClick","onGroupIconOverEnter","onGroupIconOverLeave","onHCGroupSelectionChanged","onMapSingleClick","onPlayerConnected","onPlayerDisconnected","onPreloadFinished","onPreloadStarted","onShowNewObject","onTeamSwitch","openCuratorInterface","openDLCPage","openGPS","openMap","openSteamApp","openYoutubeVideo","or","orderGetIn","overcast","overcastForecast","owner","param","params","parseNumber","parseSimpleArray","parseText","parsingNamespace","particlesQuality","periscopeElevation","pickWeaponPool","pitch","pixelGrid","pixelGridBase","pixelGridNoUIScale","pixelH","pixelW","playableSlotsNumber","playableUnits","playAction","playActionNow","player","playerRespawnTime","playerSide","playersNumber","playGesture","playMission","playMove","playMoveNow","playMusic","playScriptedMission","playSound","playSound3D","playSoundUI","pose","position","positionCameraToWorld","posScreenToWorld","posWorldToScreen","ppEffectAdjust","ppEffectCommit","ppEffectCommitted","ppEffectCreate","ppEffectDestroy","ppEffectEnable","ppEffectEnabled","ppEffectForceInNVG","precision","preloadCamera","preloadObject","preloadSound","preloadTitleObj","preloadTitleRsc","preprocessFile","preprocessFileLineNumbers","primaryWeapon","primaryWeaponItems","primaryWeaponMagazine","priority","processDiaryLink","productVersion","profileName","profileNamespace","profileNameSteam","progressLoadingScreen","progressPosition","progressSetPosition","publicVariable","publicVariableClient","publicVariableServer","pushBack","pushBackUnique","putWeaponPool","queryItemsPool","queryMagazinePool","queryWeaponPool","rad","radioChannelAdd","radioChannelCreate","radioChannelInfo","radioChannelRemove","radioChannelSetCallSign","radioChannelSetLabel","radioEnabled","radioVolume","rain","rainbow","rainParams","random","rank","rankId","rating","rectangular","regexFind","regexMatch","regexReplace","registeredTasks","registerTask","reload","reloadEnabled","remoteControl","remoteExec","remoteExecCall","remoteExecutedOwner","remove3DENConnection","remove3DENEventHandler","remove3DENLayer","removeAction","removeAll3DENEventHandlers","removeAllActions","removeAllAssignedItems","removeAllBinocularItems","removeAllContainers","removeAllCurat
orAddons","removeAllCuratorCameraAreas","removeAllCuratorEditingAreas","removeAllEventHandlers","removeAllHandgunItems","removeAllItems","removeAllItemsWithMagazines","removeAllMissionEventHandlers","removeAllMPEventHandlers","removeAllMusicEventHandlers","removeAllOwnedMines","removeAllPrimaryWeaponItems","removeAllSecondaryWeaponItems","removeAllUserActionEventHandlers","removeAllWeapons","removeBackpack","removeBackpackGlobal","removeBinocularItem","removeCuratorAddons","removeCuratorCameraArea","removeCuratorEditableObjects","removeCuratorEditingArea","removeDiaryRecord","removeDiarySubject","removeDrawIcon","removeDrawLinks","removeEventHandler","removeFromRemainsCollector","removeGoggles","removeGroupIcon","removeHandgunItem","removeHeadgear","removeItem","removeItemFromBackpack","removeItemFromUniform","removeItemFromVest","removeItems","removeMagazine","removeMagazineGlobal","removeMagazines","removeMagazinesTurret","removeMagazineTurret","removeMenuItem","removeMissionEventHandler","removeMPEventHandler","removeMusicEventHandler","removeOwnedMine","removePrimaryWeaponItem","removeSecondaryWeaponItem","removeSimpleTask","removeSwitchableUnit","removeTeamMember","removeUniform","removeUserActionEventHandler","removeVest","removeWeapon","removeWeaponAttachmentCargo","removeWeaponCargo","removeWeaponGlobal","removeWeaponTurret","reportRemoteTarget","requiredVersion","resetCamShake","resetSubgroupDirection","resize","resources","respawnVehicle","restartEditorCamera","reveal","revealMine","reverse","reversedMouseY","roadAt","roadsConnectedTo","roleDescription","ropeAttachedObjects","ropeAttachedTo","ropeAttachEnabled","ropeAttachTo","ropeCreate","ropeCut","ropeDestroy","ropeDetach","ropeEndPosition","ropeLength","ropes","ropesAttachedTo","ropeSegments","ropeUnwind","ropeUnwound","rotorsForcesRTD","rotorsRpmRTD","round","runInitScript","safeZoneH","safeZoneW","safeZoneWAbs","safeZoneX","safeZoneXAbs","safeZoneY","save3DENInventory","saveGame","saveIdentity","saveJoysticks","saveMissionProfileNamespace","saveOverlay","saveProfileNamespace","saveStatus","saveVar","savingEnabled","say","say2D","say3D","scopeName","score","scoreSide","screenshot","screenToWorld","scriptDone","scriptName","scudState","secondaryWeapon","secondaryWeaponItems","secondaryWeaponMagazine","select","selectBestPlaces","selectDiarySubject","selectedEditorObjects","selectEditorObject","selectionNames","selectionPosition","selectionVectorDirAndUp","selectLeader","selectMax","selectMin","selectNoPlayer","selectPlayer","selectRandom","selectRandomWeighted","selectWeapon","selectWeaponTurret","sendAUMessage","sendSimpleCommand","sendTask","sendTaskResult","sendUDPMessage","sentencesEnabled","serverCommand","serverCommandAvailable","serverCommandExecutable","serverName","serverNamespace","serverTime","set","set3DENAttribute","set3DENAttributes","set3DENGrid","set3DENIconsVisible","set3DENLayer","set3DENLinesVisible","set3DENLogicType","set3DENMissionAttribute","set3DENMissionAttributes","set3DENModelsVisible","set3DENObjectType","set3DENSelected","setAccTime","setActualCollectiveRTD","setAirplaneThrottle","setAirportSide","setAmmo","setAmmoCargo","setAmmoOnPylon","setAnimSpeedCoef","setAperture","setApertureNew","setArmoryPoints","setAttributes","setAutonomous","setBehaviour","setBehaviourStrong","setBleedingRemaining","setBrakesRTD","setCameraInterest","setCamShakeDefParams","setCamShakeParams","setCamUseTi","setCaptive","setCenterOfMass","setCollisionLight","setCombatBehaviour","setCombatMode","setCompassOscillation","setC
onvoySeparation","setCruiseControl","setCuratorCameraAreaCeiling","setCuratorCoef","setCuratorEditingAreaType","setCuratorWaypointCost","setCurrentChannel","setCurrentTask","setCurrentWaypoint","setCustomAimCoef","SetCustomMissionData","setCustomSoundController","setCustomWeightRTD","setDamage","setDammage","setDate","setDebriefingText","setDefaultCamera","setDestination","setDetailMapBlendPars","setDiaryRecordText","setDiarySubjectPicture","setDir","setDirection","setDrawIcon","setDriveOnPath","setDropInterval","setDynamicSimulationDistance","setDynamicSimulationDistanceCoef","setEditorMode","setEditorObjectScope","setEffectCondition","setEffectiveCommander","setEngineRpmRTD","setFace","setFaceanimation","setFatigue","setFeatureType","setFlagAnimationPhase","setFlagOwner","setFlagSide","setFlagTexture","setFog","setForceGeneratorRTD","setFormation","setFormationTask","setFormDir","setFriend","setFromEditor","setFSMVariable","setFuel","setFuelCargo","setGroupIcon","setGroupIconParams","setGroupIconsSelectable","setGroupIconsVisible","setGroupid","setGroupIdGlobal","setGroupOwner","setGusts","setHideBehind","setHit","setHitIndex","setHitPointDamage","setHorizonParallaxCoef","setHUDMovementLevels","setHumidity","setIdentity","setImportance","setInfoPanel","setLeader","setLightAmbient","setLightAttenuation","setLightBrightness","setLightColor","setLightConePars","setLightDayLight","setLightFlareMaxDistance","setLightFlareSize","setLightIntensity","setLightIR","setLightnings","setLightUseFlare","setLightVolumeShape","setLocalWindParams","setMagazineTurretAmmo","setMarkerAlpha","setMarkerAlphaLocal","setMarkerBrush","setMarkerBrushLocal","setMarkerColor","setMarkerColorLocal","setMarkerDir","setMarkerDirLocal","setMarkerPolyline","setMarkerPolylineLocal","setMarkerPos","setMarkerPosLocal","setMarkerShadow","setMarkerShadowLocal","setMarkerShape","setMarkerShapeLocal","setMarkerSize","setMarkerSizeLocal","setMarkerText","setMarkerTextLocal","setMarkerType","setMarkerTypeLocal","setMass","setMaxLoad","setMimic","setMissileTarget","setMissileTargetPos","setMousePosition","setMusicEffect","setMusicEventHandler","setName","setNameSound","setObjectArguments","setObjectMaterial","setObjectMaterialGlobal","setObjectProxy","setObjectScale","setObjectTexture","setObjectTextureGlobal","setObjectViewDistance","setOpticsMode","setOvercast","setOwner","setOxygenRemaining","setParticleCircle","setParticleClass","setParticleFire","setParticleParams","setParticleRandom","setPilotCameraDirection","setPilotCameraRotation","setPilotCameraTarget","setPilotLight","setPiPEffect","setPiPViewDistance","setPitch","setPlateNumber","setPlayable","setPlayerRespawnTime","setPlayerVoNVolume","setPos","setPosASL","setPosASL2","setPosASLW","setPosATL","setPosition","setPosWorld","setPylonLoadout","setPylonsPriority","setRadioMsg","setRain","setRainbow","setRandomLip","setRank","setRectangular","setRepairCargo","setRotorBrakeRTD","setShadowDistance","setShotParents","setSide","setSimpleTaskAlwaysVisible","setSimpleTaskCustomData","setSimpleTaskDescription","setSimpleTaskDestination","setSimpleTaskTarget","setSimpleTaskType","setSimulWeatherLayers","setSize","setSkill","setSlingLoad","setSoundEffect","setSpeaker","setSpeech","setSpeedMode","setStamina","setStaminaScheme","setStatValue","setSuppression","setSystemOfUnits","setTargetAge","setTaskMarkerOffset","setTaskResult","setTaskState","setTerrainGrid","setTerrainHeight","setText","setTimeMultiplier","setTiParameter","setTitleEffect","setTowParent","setTrafficDensity","setTraff
icDistance","setTrafficGap","setTrafficSpeed","setTriggerActivation","setTriggerArea","setTriggerInterval","setTriggerStatements","setTriggerText","setTriggerTimeout","setTriggerType","setTurretLimits","setTurretOpticsMode","setType","setUnconscious","setUnitAbility","setUnitCombatMode","setUnitFreefallHeight","setUnitLoadout","setUnitPos","setUnitPosWeak","setUnitRank","setUnitRecoilCoefficient","setUnitTrait","setUnloadInCombat","setUserActionText","setUserMFDText","setUserMFDValue","setVariable","setVectorDir","setVectorDirAndUp","setVectorUp","setVehicleAmmo","setVehicleAmmoDef","setVehicleArmor","setVehicleCargo","setVehicleId","setVehicleLock","setVehiclePosition","setVehicleRadar","setVehicleReceiveRemoteTargets","setVehicleReportOwnPosition","setVehicleReportRemoteTargets","setVehicleTiPars","setVehicleVarName","setVelocity","setVelocityModelSpace","setVelocityTransformation","setViewDistance","setVisibleIfTreeCollapsed","setWantedRPMRTD","setWaves","setWaypointBehaviour","setWaypointCombatMode","setWaypointCompletionRadius","setWaypointDescription","setWaypointForceBehaviour","setWaypointFormation","setWaypointHousePosition","setWaypointLoiterAltitude","setWaypointLoiterRadius","setWaypointLoiterType","setWaypointName","setWaypointPosition","setWaypointScript","setWaypointSpeed","setWaypointStatements","setWaypointTimeout","setWaypointType","setWaypointVisible","setWeaponReloadingTime","setWeaponZeroing","setWind","setWindDir","setWindForce","setWindStr","setWingForceScaleRTD","setWPPos","show3DIcons","showChat","showCinemaBorder","showCommandingMenu","showCompass","showCuratorCompass","showGps","showHUD","showLegend","showMap","shownArtilleryComputer","shownChat","shownCompass","shownCuratorCompass","showNewEditorObject","shownGps","shownHUD","shownMap","shownPad","shownRadio","shownScoretable","shownSubtitles","shownUAVFeed","shownWarrant","shownWatch","showPad","showRadio","showScoretable","showSubtitles","showUAVFeed","showWarrant","showWatch","showWaypoint","showWaypoints","side","sideChat","sideRadio","simpleTasks","simulationEnabled","simulCloudDensity","simulCloudOcclusion","simulInClouds","simulWeatherSync","sin","size","sizeOf","skill","skillFinal","skipTime","sleep","sliderPosition","sliderRange","sliderSetPosition","sliderSetRange","sliderSetSpeed","sliderSpeed","slingLoadAssistantShown","soldierMagazines","someAmmo","sort","soundVolume","spawn","speaker","speechVolume","speed","speedMode","splitString","sqrt","squadParams","stance","startLoadingScreen","stop","stopEngineRTD","stopped","str","sunOrMoon","supportInfo","suppressFor","surfaceIsWater","surfaceNormal","surfaceTexture","surfaceType","swimInDepth","switchableUnits","switchAction","switchCamera","switchGesture","switchLight","switchMove","synchronizedObjects","synchronizedTriggers","synchronizedWaypoints","synchronizeObjectsAdd","synchronizeObjectsRemove","synchronizeTrigger","synchronizeWaypoint","systemChat","systemOfUnits","systemTime","systemTimeUTC","tan","targetKnowledge","targets","targetsAggregate","targetsQuery","taskAlwaysVisible","taskChildren","taskCompleted","taskCustomData","taskDescription","taskDestination","taskHint","taskMarkerOffset","taskName","taskParent","taskResult","taskState","taskType","teamMember","teamName","teams","teamSwitch","teamSwitchEnabled","teamType","terminate","terrainIntersect","terrainIntersectASL","terrainIntersectAtASL","text","textLog","textLogFormat","tg","time","timeMultiplier","titleCut","titleFadeOut","titleObj","titleRsc","titleText","toArray","toFixed","toLower",
"toLowerANSI","toString","toUpper","toUpperANSI","triggerActivated","triggerActivation","triggerAmmo","triggerArea","triggerAttachedVehicle","triggerAttachObject","triggerAttachVehicle","triggerDynamicSimulation","triggerInterval","triggerStatements","triggerText","triggerTimeout","triggerTimeoutCurrent","triggerType","trim","turretLocal","turretOwner","turretUnit","tvAdd","tvClear","tvCollapse","tvCollapseAll","tvCount","tvCurSel","tvData","tvDelete","tvExpand","tvExpandAll","tvIsSelected","tvPicture","tvPictureRight","tvSelection","tvSetColor","tvSetCurSel","tvSetData","tvSetPicture","tvSetPictureColor","tvSetPictureColorDisabled","tvSetPictureColorSelected","tvSetPictureRight","tvSetPictureRightColor","tvSetPictureRightColorDisabled","tvSetPictureRightColorSelected","tvSetSelectColor","tvSetSelected","tvSetText","tvSetTooltip","tvSetValue","tvSort","tvSortAll","tvSortByValue","tvSortByValueAll","tvText","tvTooltip","tvValue","type","typeName","typeOf","UAVControl","uiNamespace","uiSleep","unassignCurator","unassignItem","unassignTeam","unassignVehicle","underwater","uniform","uniformContainer","uniformItems","uniformMagazines","uniqueUnitItems","unitAddons","unitAimPosition","unitAimPositionVisual","unitBackpack","unitCombatMode","unitIsUAV","unitPos","unitReady","unitRecoilCoefficient","units","unitsBelowHeight","unitTurret","unlinkItem","unlockAchievement","unregisterTask","updateDrawIcon","updateMenuItem","updateObjectTree","useAIOperMapObstructionTest","useAISteeringComponent","useAudioTimeForMoves","userInputDisabled","values","vectorAdd","vectorCos","vectorCrossProduct","vectorDiff","vectorDir","vectorDirVisual","vectorDistance","vectorDistanceSqr","vectorDotProduct","vectorFromTo","vectorLinearConversion","vectorMagnitude","vectorMagnitudeSqr","vectorModelToWorld","vectorModelToWorldVisual","vectorMultiply","vectorNormalized","vectorUp","vectorUpVisual","vectorWorldToModel","vectorWorldToModelVisual","vehicle","vehicleCargoEnabled","vehicleChat","vehicleMoveInfo","vehicleRadio","vehicleReceiveRemoteTargets","vehicleReportOwnPosition","vehicleReportRemoteTargets","vehicles","vehicleVarName","velocity","velocityModelSpace","verifySignature","vest","vestContainer","vestItems","vestMagazines","viewDistance","visibleCompass","visibleGps","visibleMap","visiblePosition","visiblePositionASL","visibleScoretable","visibleWatch","waves","waypointAttachedObject","waypointAttachedVehicle","waypointAttachObject","waypointAttachVehicle","waypointBehaviour","waypointCombatMode","waypointCompletionRadius","waypointDescription","waypointForceBehaviour","waypointFormation","waypointHousePosition","waypointLoiterAltitude","waypointLoiterRadius","waypointLoiterType","waypointName","waypointPosition","waypoints","waypointScript","waypointsEnabledUAV","waypointShow","waypointSpeed","waypointStatements","waypointTimeout","waypointTimeoutCurrent","waypointType","waypointVisible","weaponAccessories","weaponAccessoriesCargo","weaponCargo","weaponDirection","weaponInertia","weaponLowered","weaponReloadingTime","weapons","weaponsInfo","weaponsItems","weaponsItemsCargo","weaponState","weaponsTurret","weightRTD","WFSideText","wind","windDir","windRTD","windStr","wingsForcesRTD","worldName","worldSize","worldToModel","worldToModelVisual","worldToScreen"],d={className:"meta",begin:/#\s*[a-z]+\b/,end:/$/,keywords:"define undef ifdef ifndef else endif include 
if",contains:[{begin:/\\\n/,relevance:0},e.inherit(o,{className:"string"}),{begin:/<[^\n>]*>/,end:/$/,illegal:"\\n"},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]};return{name:"SQF",case_insensitive:!0,keywords:{keyword:s,built_in:c,literal:l},contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.NUMBER_MODE,n,i,o,d],illegal:[/\$[^a-fA-F0-9]/,/\w\$/,/\?/,/@/,/ \| /,/[a-zA-Z_]\./,/\:\=/,/\[\:/]}}return $p=t,$p}var Hp,Qv;function Qye(){if(Qv)return Hp;Qv=1;function t(e){const n=e.regex,i=e.COMMENT("--","$"),o={className:"string",variants:[{begin:/'/,end:/'/,contains:[{begin:/''/}]}]},s={begin:/"/,end:/"/,contains:[{begin:/""/}]},l=["true","false","unknown"],c=["double precision","large object","with timezone","without timezone"],d=["bigint","binary","blob","boolean","char","character","clob","date","dec","decfloat","decimal","float","int","integer","interval","nchar","nclob","national","numeric","real","row","smallint","time","timestamp","varchar","varying","varbinary"],_=["add","asc","collation","desc","final","first","last","view"],p=["abs","acos","all","allocate","alter","and","any","are","array","array_agg","array_max_cardinality","as","asensitive","asin","asymmetric","at","atan","atomic","authorization","avg","begin","begin_frame","begin_partition","between","bigint","binary","blob","boolean","both","by","call","called","cardinality","cascaded","case","cast","ceil","ceiling","char","char_length","character","character_length","check","classifier","clob","close","coalesce","collate","collect","column","commit","condition","connect","constraint","contains","convert","copy","corr","corresponding","cos","cosh","count","covar_pop","covar_samp","create","cross","cube","cume_dist","current","current_catalog","current_date","current_default_transform_group","current_path","current_role","current_row","current_schema","current_time","current_timestamp","current_path","current_role","current_transform_group_for_type","current_user","cursor","cycle","date","day","deallocate","dec","decimal","decfloat","declare","default","define","delete","dense_rank","deref","describe","deterministic","disconnect","distinct","double","drop","dynamic","each","element","else","empty","end","end_frame","end_partition","end-exec","equals","escape","every","except","exec","execute","exists","exp","external","extract","false","fetch","filter","first_value","float","floor","for","foreign","frame_row","free","from","full","function","fusion","get","global","grant","group","grouping","groups","having","hold","hour","identity","in","indicator","initial","inner","inout","insensitive","insert","int","integer","intersect","intersection","interval","into","is","join","json_array","json_arrayagg","json_exists","json_object","json_objectagg","json_query","json_table","json_table_primitive","json_value","lag","language","large","last_value","lateral","lead","leading","left","like","like_regex","listagg","ln","local","localtime","localtimestamp","log","log10","lower","match","match_number","match_recognize","matches","max","member","merge","method","min","minute","mod","modifies","module","month","multiset","national","natural","nchar","nclob","new","no","none","normalize","not","nth_value","ntile","null","nullif","numeric","octet_length","occurrences_regex","of","offset","old","omit","on","one","only","open","or","order","out","outer","over","overlaps","overlay","parameter","partition","pattern","per","percent","percent_rank","percentile_cont","percentile_disc","period","portion","position","position_regex","power","precedes","
precision","prepare","primary","procedure","ptf","range","rank","reads","real","recursive","ref","references","referencing","regr_avgx","regr_avgy","regr_count","regr_intercept","regr_r2","regr_slope","regr_sxx","regr_sxy","regr_syy","release","result","return","returns","revoke","right","rollback","rollup","row","row_number","rows","running","savepoint","scope","scroll","search","second","seek","select","sensitive","session_user","set","show","similar","sin","sinh","skip","smallint","some","specific","specifictype","sql","sqlexception","sqlstate","sqlwarning","sqrt","start","static","stddev_pop","stddev_samp","submultiset","subset","substring","substring_regex","succeeds","sum","symmetric","system","system_time","system_user","table","tablesample","tan","tanh","then","time","timestamp","timezone_hour","timezone_minute","to","trailing","translate","translate_regex","translation","treat","trigger","trim","trim_array","true","truncate","uescape","union","unique","unknown","unnest","update","upper","user","using","value","values","value_of","var_pop","var_samp","varbinary","varchar","varying","versioning","when","whenever","where","width_bucket","window","with","within","without","year"],g=["abs","acos","array_agg","asin","atan","avg","cast","ceil","ceiling","coalesce","corr","cos","cosh","count","covar_pop","covar_samp","cume_dist","dense_rank","deref","element","exp","extract","first_value","floor","json_array","json_arrayagg","json_exists","json_object","json_objectagg","json_query","json_table","json_table_primitive","json_value","lag","last_value","lead","listagg","ln","log","log10","lower","max","min","mod","nth_value","ntile","nullif","percent_rank","percentile_cont","percentile_disc","position","position_regex","power","rank","regr_avgx","regr_avgy","regr_count","regr_intercept","regr_r2","regr_slope","regr_sxx","regr_sxy","regr_syy","row_number","sin","sinh","sqrt","stddev_pop","stddev_samp","substring","substring_regex","sum","tan","tanh","translate","translate_regex","treat","trim","trim_array","unnest","upper","value_of","var_pop","var_samp","width_bucket"],E=["current_catalog","current_date","current_default_transform_group","current_path","current_role","current_schema","current_transform_group_for_type","current_user","session_user","system_time","system_user","current_time","localtime","current_timestamp","localtimestamp"],f=["create table","insert into","primary key","foreign key","not null","alter table","add constraint","grouping sets","on overflow","character set","respect nulls","ignore nulls","nulls first","nulls last","depth first","breadth first"],S=g,C=[...p,..._].filter(x=>!g.includes(x)),h={className:"variable",begin:/@[a-z0-9][a-z0-9_]*/},T={className:"operator",begin:/[-+*/=%^~]|&&?|\|\|?|!=?|<(?:=>?|<|>)?|>[>=]?/,relevance:0},N={begin:n.concat(/\b/,n.either(...S),/\s*\(/),relevance:0,keywords:{built_in:S}};function y(x,{exceptions:P,when:D}={}){const k=D;return P=P||[],x.map(U=>U.match(/\|\d+$/)||P.includes(U)?U:k(U)?`${U}|0`:U)}return{name:"SQL",case_insensitive:!0,illegal:/[{}]|<\//,keywords:{$pattern:/\b[\w\.]+/,keyword:y(C,{when:x=>x.length<3}),literal:l,type:d,built_in:E},contains:[{begin:n.either(...f),relevance:0,keywords:{$pattern:/[\w\.]+/,keyword:C.concat(f),literal:l,type:d}},{className:"type",begin:n.either(...c)},N,h,o,s,e.C_NUMBER_MODE,e.C_BLOCK_COMMENT_MODE,i,T]}}return Hp=t,Hp}var zp,Xv;function Xye(){if(Xv)return zp;Xv=1;function t(e){const 
n=e.regex,i=["functions","model","data","parameters","quantities","transformed","generated"],o=["for","in","if","else","while","break","continue","return"],s=["array","complex","int","real","vector","ordered","positive_ordered","simplex","unit_vector","row_vector","matrix","cholesky_factor_corr|10","cholesky_factor_cov|10","corr_matrix|10","cov_matrix|10","void"],l=["Phi","Phi_approx","abs","acos","acosh","add_diag","algebra_solver","algebra_solver_newton","append_array","append_col","append_row","asin","asinh","atan","atan2","atanh","bessel_first_kind","bessel_second_kind","binary_log_loss","binomial_coefficient_log","block","cbrt","ceil","chol2inv","cholesky_decompose","choose","col","cols","columns_dot_product","columns_dot_self","conj","cos","cosh","cov_exp_quad","crossprod","csr_extract_u","csr_extract_v","csr_extract_w","csr_matrix_times_vector","csr_to_dense_matrix","cumulative_sum","determinant","diag_matrix","diag_post_multiply","diag_pre_multiply","diagonal","digamma","dims","distance","dot_product","dot_self","eigenvalues_sym","eigenvectors_sym","erf","erfc","exp","exp2","expm1","fabs","falling_factorial","fdim","floor","fma","fmax","fmin","fmod","gamma_p","gamma_q","generalized_inverse","get_imag","get_lp","get_real","head","hmm_hidden_state_prob","hmm_marginal","hypot","identity_matrix","inc_beta","int_step","integrate_1d","integrate_ode","integrate_ode_adams","integrate_ode_bdf","integrate_ode_rk45","inv","inv_Phi","inv_cloglog","inv_logit","inv_sqrt","inv_square","inverse","inverse_spd","is_inf","is_nan","lambert_w0","lambert_wm1","lbeta","lchoose","ldexp","lgamma","linspaced_array","linspaced_int_array","linspaced_row_vector","linspaced_vector","lmgamma","lmultiply","log","log1m","log1m_exp","log1m_inv_logit","log1p","log1p_exp","log_determinant","log_diff_exp","log_falling_factorial","log_inv_logit","log_inv_logit_diff","log_mix","log_modified_bessel_first_kind","log_rising_factorial","log_softmax","log_sum_exp","logit","machine_precision","map_rect","matrix_exp","matrix_exp_multiply","matrix_power","max","mdivide_left_spd","mdivide_left_tri_low","mdivide_right_spd","mdivide_right_tri_low","mean","min","modified_bessel_first_kind","modified_bessel_second_kind","multiply_log","multiply_lower_tri_self_transpose","negative_infinity","norm","not_a_number","num_elements","ode_adams","ode_adams_tol","ode_adjoint_tol_ctl","ode_bdf","ode_bdf_tol","ode_ckrk","ode_ckrk_tol","ode_rk45","ode_rk45_tol","one_hot_array","one_hot_int_array","one_hot_row_vector","one_hot_vector","ones_array","ones_int_array","ones_row_vector","ones_vector","owens_t","polar","positive_infinity","pow","print","prod","proj","qr_Q","qr_R","qr_thin_Q","qr_thin_R","quad_form","quad_form_diag","quad_form_sym","quantile","rank","reduce_sum","reject","rep_array","rep_matrix","rep_row_vector","rep_vector","reverse","rising_factorial","round","row","rows","rows_dot_product","rows_dot_self","scale_matrix_exp_multiply","sd","segment","sin","singular_values","sinh","size","softmax","sort_asc","sort_desc","sort_indices_asc","sort_indices_desc","sqrt","square","squared_distance","step","sub_col","sub_row","sum","svd_U","svd_V","symmetrize_from_lower_tri","tail","tan","tanh","target","tcrossprod","tgamma","to_array_1d","to_array_2d","to_complex","to_matrix","to_row_vector","to_vector","trace","trace_gen_quad_form","trace_quad_form","trigamma","trunc","uniform_simplex","variance","zeros_array","zeros_int_array","zeros_row_vector"],c=["bernoulli","bernoulli_logit","bernoulli_logit_glm","beta","beta_binomial","beta_proportion"
,"binomial","binomial_logit","categorical","categorical_logit","categorical_logit_glm","cauchy","chi_square","dirichlet","discrete_range","double_exponential","exp_mod_normal","exponential","frechet","gamma","gaussian_dlm_obs","gumbel","hmm_latent","hypergeometric","inv_chi_square","inv_gamma","inv_wishart","lkj_corr","lkj_corr_cholesky","logistic","lognormal","multi_gp","multi_gp_cholesky","multi_normal","multi_normal_cholesky","multi_normal_prec","multi_student_t","multinomial","multinomial_logit","neg_binomial","neg_binomial_2","neg_binomial_2_log","neg_binomial_2_log_glm","normal","normal_id_glm","ordered_logistic","ordered_logistic_glm","ordered_probit","pareto","pareto_type_2","poisson","poisson_log","poisson_log_glm","rayleigh","scaled_inv_chi_square","skew_double_exponential","skew_normal","std_normal","student_t","uniform","von_mises","weibull","wiener","wishart"],d=e.COMMENT(/\/\*/,/\*\//,{relevance:0,contains:[{scope:"doctag",match:/@(return|param)/}]}),_={scope:"meta",begin:/#include\b/,end:/$/,contains:[{match:/[a-z][a-z-._]+/,scope:"string"},e.C_LINE_COMMENT_MODE]},p=["lower","upper","offset","multiplier"];return{name:"Stan",aliases:["stanfuncs"],keywords:{$pattern:e.IDENT_RE,title:i,type:s,keyword:o,built_in:l},contains:[e.C_LINE_COMMENT_MODE,_,e.HASH_COMMENT_MODE,d,{scope:"built_in",match:/\s(pi|e|sqrt2|log2|log10)(?=\()/,relevance:0},{match:n.concat(/[<,]\s*/,n.either(...p),/\s*=/),keywords:p},{scope:"keyword",match:/\btarget(?=\s*\+=)/},{match:[/~\s*/,n.either(...c),/(?:\(\))/,/\s*T(?=\s*\[)/],scope:{2:"built_in",4:"keyword"}},{scope:"built_in",keywords:c,begin:n.concat(/\w*/,n.either(...c),/(_lpdf|_lupdf|_lpmf|_cdf|_lcdf|_lccdf|_qf)(?=\s*[\(.*\)])/)},{begin:[/~/,/\s*/,n.concat(n.either(...c),/(?=\s*[\(.*\)])/)],scope:{3:"built_in"}},{begin:[/~/,/\s*\w+(?=\s*[\(.*\)])/,"(?!.*/\b("+n.either(...c)+")\b)"],scope:{2:"title.function"}},{scope:"title.function",begin:/\w*(_lpdf|_lupdf|_lpmf|_cdf|_lcdf|_lccdf|_qf)(?=\s*[\(.*\)])/},{scope:"number",match:n.concat(/(?:\b\d+(?:_\d+)*(?:\.(?:\d+(?:_\d+)*)?)?|\B\.\d+(?:_\d+)*)/,/(?:[eE][+-]?\d+(?:_\d+)*)?i?(?!\w)/),relevance:0},{scope:"string",begin:/"/,end:/"/}]}}return zp=t,zp}var Vp,Zv;function Zye(){if(Zv)return Vp;Zv=1;function t(e){return{name:"Stata",aliases:["do","ado"],case_insensitive:!0,keywords:"if else in foreach for forv forva forval forvalu forvalue forvalues by bys bysort xi quietly qui capture about ac ac_7 acprplot acprplot_7 adjust ado adopath adoupdate alpha ameans an ano anov anova anova_estat anova_terms anovadef aorder ap app appe appen append arch arch_dr arch_estat arch_p archlm areg areg_p args arima arima_dr arima_estat arima_p as asmprobit asmprobit_estat asmprobit_lf asmprobit_mfx__dlg asmprobit_p ass asse asser assert avplot avplot_7 avplots avplots_7 bcskew0 bgodfrey bias binreg bip0_lf biplot bipp_lf bipr_lf bipr_p biprobit bitest bitesti bitowt blogit bmemsize boot bootsamp bootstrap bootstrap_8 boxco_l boxco_p boxcox boxcox_6 boxcox_p bprobit br break brier bro brow brows browse brr brrstat bs bs_7 bsampl_w bsample bsample_7 bsqreg bstat bstat_7 bstat_8 bstrap bstrap_7 bubble bubbleplot ca ca_estat ca_p cabiplot camat canon canon_8 canon_8_p canon_estat canon_p cap caprojection capt captu captur capture cat cc cchart cchart_7 cci cd censobs_table centile cf char chdir checkdlgfiles checkestimationsample checkhlpfiles checksum chelp ci cii cl class classutil clear cli clis clist clo clog clog_lf clog_p clogi clogi_sw clogit clogit_lf clogit_p clogitp clogl_sw cloglog clonevar clslistarray cluster 
cluster_measures cluster_stop cluster_tree cluster_tree_8 clustermat cmdlog cnr cnre cnreg cnreg_p cnreg_sw cnsreg codebook collaps4 collapse colormult_nb colormult_nw compare compress conf confi confir confirm conren cons const constr constra constrai constrain constraint continue contract copy copyright copysource cor corc corr corr2data corr_anti corr_kmo corr_smc corre correl correla correlat correlate corrgram cou coun count cox cox_p cox_sw coxbase coxhaz coxvar cprplot cprplot_7 crc cret cretu cretur creturn cross cs cscript cscript_log csi ct ct_is ctset ctst_5 ctst_st cttost cumsp cumsp_7 cumul cusum cusum_7 cutil d|0 datasig datasign datasigna datasignat datasignatu datasignatur datasignature datetof db dbeta de dec deco decod decode deff des desc descr descri describ describe destring dfbeta dfgls dfuller di di_g dir dirstats dis discard disp disp_res disp_s displ displa display distinct do doe doed doedi doedit dotplot dotplot_7 dprobit drawnorm drop ds ds_util dstdize duplicates durbina dwstat dydx e|0 ed edi edit egen eivreg emdef en enc enco encod encode eq erase ereg ereg_lf ereg_p ereg_sw ereghet ereghet_glf ereghet_glf_sh ereghet_gp ereghet_ilf ereghet_ilf_sh ereghet_ip eret eretu eretur ereturn err erro error esize est est_cfexist est_cfname est_clickable est_expand est_hold est_table est_unhold est_unholdok estat estat_default estat_summ estat_vce_only esti estimates etodow etof etomdy ex exi exit expand expandcl fac fact facto factor factor_estat factor_p factor_pca_rotated factor_rotate factormat fcast fcast_compute fcast_graph fdades fdadesc fdadescr fdadescri fdadescrib fdadescribe fdasav fdasave fdause fh_st file open file read file close file filefilter fillin find_hlp_file findfile findit findit_7 fit fl fli flis flist for5_0 forest forestplot form forma format fpredict frac_154 frac_adj frac_chk frac_cox frac_ddp frac_dis frac_dv frac_in frac_mun frac_pp frac_pq frac_pv frac_wgt frac_xo fracgen fracplot fracplot_7 fracpoly fracpred fron_ex fron_hn fron_p fron_tn fron_tn2 frontier ftodate ftoe ftomdy ftowdate funnel funnelplot g|0 gamhet_glf gamhet_gp gamhet_ilf gamhet_ip gamma gamma_d2 gamma_p gamma_sw gammahet gdi_hexagon gdi_spokes ge gen gene gener genera generat generate genrank genstd genvmean gettoken gl gladder gladder_7 glim_l01 glim_l02 glim_l03 glim_l04 glim_l05 glim_l06 glim_l07 glim_l08 glim_l09 glim_l10 glim_l11 glim_l12 glim_lf glim_mu glim_nw1 glim_nw2 glim_nw3 glim_p glim_v1 glim_v2 glim_v3 glim_v4 glim_v5 glim_v6 glim_v7 glm glm_6 glm_p glm_sw glmpred glo glob globa global glogit glogit_8 glogit_p gmeans gnbre_lf gnbreg gnbreg_5 gnbreg_p gomp_lf gompe_sw gomper_p gompertz gompertzhet gomphet_glf gomphet_glf_sh gomphet_gp gomphet_ilf gomphet_ilf_sh gomphet_ip gphdot gphpen gphprint gprefs gprobi_p gprobit gprobit_8 gr gr7 gr_copy gr_current gr_db gr_describe gr_dir gr_draw gr_draw_replay gr_drop gr_edit gr_editviewopts gr_example gr_example2 gr_export gr_print gr_qscheme gr_query gr_read gr_rename gr_replay gr_save gr_set gr_setscheme gr_table gr_undo gr_use graph graph7 grebar greigen greigen_7 greigen_8 grmeanby grmeanby_7 gs_fileinfo gs_filetype gs_graphinfo gs_stat gsort gwood h|0 hadimvo hareg hausman haver he heck_d2 heckma_p heckman heckp_lf heckpr_p heckprob hel help hereg hetpr_lf hetpr_p hetprob hettest hexdump hilite hist hist_7 histogram hlogit hlu hmeans hotel hotelling hprobit hreg hsearch icd9 icd9_ff icd9p iis impute imtest inbase include inf infi infil infile infix inp inpu input ins insheet insp inspe inspec inspect integ inten 
intreg intreg_7 intreg_p intrg2_ll intrg_ll intrg_ll2 ipolate iqreg ir irf irf_create irfm iri is_svy is_svysum isid istdize ivprob_1_lf ivprob_lf ivprobit ivprobit_p ivreg ivreg_footnote ivtob_1_lf ivtob_lf ivtobit ivtobit_p jackknife jacknife jknife jknife_6 jknife_8 jkstat joinby kalarma1 kap kap_3 kapmeier kappa kapwgt kdensity kdensity_7 keep ksm ksmirnov ktau kwallis l|0 la lab labbe labbeplot labe label labelbook ladder levels levelsof leverage lfit lfit_p li lincom line linktest lis list lloghet_glf lloghet_glf_sh lloghet_gp lloghet_ilf lloghet_ilf_sh lloghet_ip llogi_sw llogis_p llogist llogistic llogistichet lnorm_lf lnorm_sw lnorma_p lnormal lnormalhet lnormhet_glf lnormhet_glf_sh lnormhet_gp lnormhet_ilf lnormhet_ilf_sh lnormhet_ip lnskew0 loadingplot loc loca local log logi logis_lf logistic logistic_p logit logit_estat logit_p loglogs logrank loneway lookfor lookup lowess lowess_7 lpredict lrecomp lroc lroc_7 lrtest ls lsens lsens_7 lsens_x lstat ltable ltable_7 ltriang lv lvr2plot lvr2plot_7 m|0 ma mac macr macro makecns man manova manova_estat manova_p manovatest mantel mark markin markout marksample mat mat_capp mat_order mat_put_rr mat_rapp mata mata_clear mata_describe mata_drop mata_matdescribe mata_matsave mata_matuse mata_memory mata_mlib mata_mosave mata_rename mata_which matalabel matcproc matlist matname matr matri matrix matrix_input__dlg matstrik mcc mcci md0_ md1_ md1debug_ md2_ md2debug_ mds mds_estat mds_p mdsconfig mdslong mdsmat mdsshepard mdytoe mdytof me_derd mean means median memory memsize menl meqparse mer merg merge meta mfp mfx mhelp mhodds minbound mixed_ll mixed_ll_reparm mkassert mkdir mkmat mkspline ml ml_5 ml_adjs ml_bhhhs ml_c_d ml_check ml_clear ml_cnt ml_debug ml_defd ml_e0 ml_e0_bfgs ml_e0_cycle ml_e0_dfp ml_e0i ml_e1 ml_e1_bfgs ml_e1_bhhh ml_e1_cycle ml_e1_dfp ml_e2 ml_e2_cycle ml_ebfg0 ml_ebfr0 ml_ebfr1 ml_ebh0q ml_ebhh0 ml_ebhr0 ml_ebr0i ml_ecr0i ml_edfp0 ml_edfr0 ml_edfr1 ml_edr0i ml_eds ml_eer0i ml_egr0i ml_elf ml_elf_bfgs ml_elf_bhhh ml_elf_cycle ml_elf_dfp ml_elfi ml_elfs ml_enr0i ml_enrr0 ml_erdu0 ml_erdu0_bfgs ml_erdu0_bhhh ml_erdu0_bhhhq ml_erdu0_cycle ml_erdu0_dfp ml_erdu0_nrbfgs ml_exde ml_footnote ml_geqnr ml_grad0 ml_graph ml_hbhhh ml_hd0 ml_hold ml_init ml_inv ml_log ml_max ml_mlout ml_mlout_8 ml_model ml_nb0 ml_opt ml_p ml_plot ml_query ml_rdgrd ml_repor ml_s_e ml_score ml_searc ml_technique ml_unhold mleval mlf_ mlmatbysum mlmatsum mlog mlogi mlogit mlogit_footnote mlogit_p mlopts mlsum mlvecsum mnl0_ mor more mov move mprobit mprobit_lf mprobit_p mrdu0_ mrdu1_ mvdecode mvencode mvreg mvreg_estat n|0 nbreg nbreg_al nbreg_lf nbreg_p nbreg_sw nestreg net newey newey_7 newey_p news nl nl_7 nl_9 nl_9_p nl_p nl_p_7 nlcom nlcom_p nlexp2 nlexp2_7 nlexp2a nlexp2a_7 nlexp3 nlexp3_7 nlgom3 nlgom3_7 nlgom4 nlgom4_7 nlinit nllog3 nllog3_7 nllog4 nllog4_7 nlog_rd nlogit nlogit_p nlogitgen nlogittree nlpred no nobreak noi nois noisi noisil noisily note notes notes_dlg nptrend numlabel numlist odbc old_ver olo olog ologi ologi_sw ologit ologit_p ologitp on one onew onewa oneway op_colnm op_comp op_diff op_inv op_str opr opro oprob oprob_sw oprobi oprobi_p oprobit oprobitp opts_exclusive order orthog orthpoly ou out outf outfi outfil outfile outs outsh outshe outshee outsheet ovtest pac pac_7 palette parse parse_dissim pause pca pca_8 pca_display pca_estat pca_p pca_rotate pcamat pchart pchart_7 pchi pchi_7 pcorr pctile pentium pergram pergram_7 permute permute_8 personal peto_st pkcollapse pkcross pkequiv pkexamine pkexamine_7 pkshape 
pksumm pksumm_7 pl plo plot plugin pnorm pnorm_7 poisgof poiss_lf poiss_sw poisso_p poisson poisson_estat post postclose postfile postutil pperron pr prais prais_e prais_e2 prais_p predict predictnl preserve print pro prob probi probit probit_estat probit_p proc_time procoverlay procrustes procrustes_estat procrustes_p profiler prog progr progra program prop proportion prtest prtesti pwcorr pwd q\\s qby qbys qchi qchi_7 qladder qladder_7 qnorm qnorm_7 qqplot qqplot_7 qreg qreg_c qreg_p qreg_sw qu quadchk quantile quantile_7 que quer query range ranksum ratio rchart rchart_7 rcof recast reclink recode reg reg3 reg3_p regdw regr regre regre_p2 regres regres_p regress regress_estat regriv_p remap ren rena renam rename renpfix repeat replace report reshape restore ret retu retur return rm rmdir robvar roccomp roccomp_7 roccomp_8 rocf_lf rocfit rocfit_8 rocgold rocplot rocplot_7 roctab roctab_7 rolling rologit rologit_p rot rota rotat rotate rotatemat rreg rreg_p ru run runtest rvfplot rvfplot_7 rvpplot rvpplot_7 sa safesum sample sampsi sav save savedresults saveold sc sca scal scala scalar scatter scm_mine sco scob_lf scob_p scobi_sw scobit scor score scoreplot scoreplot_help scree screeplot screeplot_help sdtest sdtesti se search separate seperate serrbar serrbar_7 serset set set_defaults sfrancia sh she shel shell shewhart shewhart_7 signestimationsample signrank signtest simul simul_7 simulate simulate_8 sktest sleep slogit slogit_d2 slogit_p smooth snapspan so sor sort spearman spikeplot spikeplot_7 spikeplt spline_x split sqreg sqreg_p sret sretu sretur sreturn ssc st st_ct st_hc st_hcd st_hcd_sh st_is st_issys st_note st_promo st_set st_show st_smpl st_subid stack statsby statsby_8 stbase stci stci_7 stcox stcox_estat stcox_fr stcox_fr_ll stcox_p stcox_sw stcoxkm stcoxkm_7 stcstat stcurv stcurve stcurve_7 stdes stem stepwise stereg stfill stgen stir stjoin stmc stmh stphplot stphplot_7 stphtest stphtest_7 stptime strate strate_7 streg streg_sw streset sts sts_7 stset stsplit stsum sttocc sttoct stvary stweib su suest suest_8 sum summ summa summar summari summariz summarize sunflower sureg survcurv survsum svar svar_p svmat svy svy_disp svy_dreg svy_est svy_est_7 svy_estat svy_get svy_gnbreg_p svy_head svy_header svy_heckman_p svy_heckprob_p svy_intreg_p svy_ivreg_p svy_logistic_p svy_logit_p svy_mlogit_p svy_nbreg_p svy_ologit_p svy_oprobit_p svy_poisson_p svy_probit_p svy_regress_p svy_sub svy_sub_7 svy_x svy_x_7 svy_x_p svydes svydes_8 svygen svygnbreg svyheckman svyheckprob svyintreg svyintreg_7 svyintrg svyivreg svylc svylog_p svylogit svymarkout svymarkout_8 svymean svymlog svymlogit svynbreg svyolog svyologit svyoprob svyoprobit svyopts svypois svypois_7 svypoisson svyprobit svyprobt svyprop svyprop_7 svyratio svyreg svyreg_p svyregress svyset svyset_7 svyset_8 svytab svytab_7 svytest svytotal sw sw_8 swcnreg swcox swereg swilk swlogis swlogit swologit swoprbt swpois swprobit swqreg swtobit swweib symmetry symmi symplot symplot_7 syntax sysdescribe sysdir sysuse szroeter ta tab tab1 tab2 tab_or tabd tabdi tabdis tabdisp tabi table tabodds tabodds_7 tabstat tabu tabul tabula tabulat tabulate te tempfile tempname tempvar tes test testnl testparm teststd tetrachoric time_it timer tis tob tobi tobit tobit_p tobit_sw token tokeni tokeniz tokenize tostring total translate translator transmap treat_ll treatr_p treatreg trim trimfill trnb_cons trnb_mean trpoiss_d2 trunc_ll truncr_p truncreg tsappend tset tsfill tsline tsline_ex tsreport tsrevar tsrline tsset tssmooth tsunab ttest ttesti 
tut_chk tut_wait tutorial tw tware_st two twoway twoway__fpfit_serset twoway__function_gen twoway__histogram_gen twoway__ipoint_serset twoway__ipoints_serset twoway__kdensity_gen twoway__lfit_serset twoway__normgen_gen twoway__pci_serset twoway__qfit_serset twoway__scatteri_serset twoway__sunflower_gen twoway_ksm_serset ty typ type typeof u|0 unab unabbrev unabcmd update us use uselabel var var_mkcompanion var_p varbasic varfcast vargranger varirf varirf_add varirf_cgraph varirf_create varirf_ctable varirf_describe varirf_dir varirf_drop varirf_erase varirf_graph varirf_ograph varirf_rename varirf_set varirf_table varlist varlmar varnorm varsoc varstable varstable_w varstable_w2 varwle vce vec vec_fevd vec_mkphi vec_p vec_p_w vecirf_create veclmar veclmar_w vecnorm vecnorm_w vecrank vecstable verinst vers versi versio version view viewsource vif vwls wdatetof webdescribe webseek webuse weib1_lf weib2_lf weib_lf weib_lf0 weibhet_glf weibhet_glf_sh weibhet_glfa weibhet_glfa_sh weibhet_gp weibhet_ilf weibhet_ilf_sh weibhet_ilfa weibhet_ilfa_sh weibhet_ip weibu_sw weibul_p weibull weibull_c weibull_s weibullhet wh whelp whi which whil while wilc_st wilcoxon win wind windo window winexec wntestb wntestb_7 wntestq xchart xchart_7 xcorr xcorr_7 xi xi_6 xmlsav xmlsave xmluse xpose xsh xshe xshel xshell xt_iis xt_tis xtab_p xtabond xtbin_p xtclog xtcloglog xtcloglog_8 xtcloglog_d2 xtcloglog_pa_p xtcloglog_re_p xtcnt_p xtcorr xtdata xtdes xtfront_p xtfrontier xtgee xtgee_elink xtgee_estat xtgee_makeivar xtgee_p xtgee_plink xtgls xtgls_p xthaus xthausman xtht_p xthtaylor xtile xtint_p xtintreg xtintreg_8 xtintreg_d2 xtintreg_p xtivp_1 xtivp_2 xtivreg xtline xtline_ex xtlogit xtlogit_8 xtlogit_d2 xtlogit_fe_p xtlogit_pa_p xtlogit_re_p xtmixed xtmixed_estat xtmixed_p xtnb_fe xtnb_lf xtnbreg xtnbreg_pa_p xtnbreg_refe_p xtpcse xtpcse_p xtpois xtpoisson xtpoisson_d2 xtpoisson_pa_p xtpoisson_refe_p xtpred xtprobit xtprobit_8 xtprobit_d2 xtprobit_re_p xtps_fe xtps_lf xtps_ren xtps_ren_8 xtrar_p xtrc xtrc_p xtrchh xtrefe_p xtreg xtreg_be xtreg_fe xtreg_ml xtreg_pa_p xtreg_re xtregar xtrere_p xtset xtsf_ll xtsf_llti xtsum xttab xttest0 xttobit xttobit_8 xttobit_p xttrans yx yxview__barlike_draw yxview_area_draw yxview_bar_draw yxview_dot_draw yxview_dropline_draw yxview_function_draw yxview_iarrow_draw yxview_ilabels_draw yxview_normal_draw yxview_pcarrow_draw yxview_pcbarrow_draw yxview_pccapsym_draw yxview_pcscatter_draw yxview_pcspike_draw yxview_rarea_draw yxview_rbar_draw yxview_rbarm_draw yxview_rcap_draw yxview_rcapsym_draw yxview_rconnected_draw yxview_rline_draw yxview_rscatter_draw yxview_rspike_draw yxview_spike_draw yxview_sunflower_draw zap_s zinb zinb_llf zinb_plf zip zip_llf zip_p zip_plf zt_ct_5 zt_hc_5 zt_hcd_5 zt_is_5 zt_iss_5 zt_sho_5 zt_smp_5 ztbase_5 ztcox_5 ztdes_5 ztereg_5 ztfill_5 ztgen_5 ztir_5 ztjoin_5 ztnb ztnb_p ztp ztp_p zts_5 ztset_5 ztspli_5 ztsum_5 zttoct_5 ztvary_5 ztweib_5",contains:[{className:"symbol",begin:/`[a-zA-Z0-9_]+'/},{className:"variable",begin:/\$\{?[a-zA-Z0-9_]+\}?/,relevance:0},{className:"string",variants:[{begin:`\`"[^\r -]*?"'`},{begin:`"[^\r 
-"]*"`}]},{className:"built_in",variants:[{begin:"\\b(abs|acos|asin|atan|atan2|atanh|ceil|cloglog|comb|cos|digamma|exp|floor|invcloglog|invlogit|ln|lnfact|lnfactorial|lngamma|log|log10|max|min|mod|reldif|round|sign|sin|sqrt|sum|tan|tanh|trigamma|trunc|betaden|Binomial|binorm|binormal|chi2|chi2tail|dgammapda|dgammapdada|dgammapdadx|dgammapdx|dgammapdxdx|F|Fden|Ftail|gammaden|gammap|ibeta|invbinomial|invchi2|invchi2tail|invF|invFtail|invgammap|invibeta|invnchi2|invnFtail|invnibeta|invnorm|invnormal|invttail|nbetaden|nchi2|nFden|nFtail|nibeta|norm|normal|normalden|normd|npnchi2|tden|ttail|uniform|abbrev|char|index|indexnot|length|lower|ltrim|match|plural|proper|real|regexm|regexr|regexs|reverse|rtrim|string|strlen|strlower|strltrim|strmatch|strofreal|strpos|strproper|strreverse|strrtrim|strtrim|strupper|subinstr|subinword|substr|trim|upper|word|wordcount|_caller|autocode|byteorder|chop|clip|cond|e|epsdouble|epsfloat|group|inlist|inrange|irecode|matrix|maxbyte|maxdouble|maxfloat|maxint|maxlong|mi|minbyte|mindouble|minfloat|minint|minlong|missing|r|recode|replay|return|s|scalar|d|date|day|dow|doy|halfyear|mdy|month|quarter|week|year|d|daily|dofd|dofh|dofm|dofq|dofw|dofy|h|halfyearly|hofd|m|mofd|monthly|q|qofd|quarterly|tin|twithin|w|weekly|wofd|y|yearly|yh|ym|yofd|yq|yw|cholesky|colnumb|colsof|corr|det|diag|diag0cnt|el|get|hadamard|I|inv|invsym|issym|issymmetric|J|matmissing|matuniform|mreldif|nullmat|rownumb|rowsof|sweep|syminv|trace|vec|vecdiag)(?=\\()"}]},e.COMMENT("^[ ]*\\*.*$",!1),e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]}}return Vp=t,Vp}var Wp,Jv;function Jye(){if(Jv)return Wp;Jv=1;function t(e){return{name:"STEP Part 21",aliases:["p21","step","stp"],case_insensitive:!0,keywords:{$pattern:"[A-Z_][A-Z0-9_.]*",keyword:["HEADER","ENDSEC","DATA"]},contains:[{className:"meta",begin:"ISO-10303-21;",relevance:10},{className:"meta",begin:"END-ISO-10303-21;",relevance:10},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.COMMENT("/\\*\\*!","\\*/"),e.C_NUMBER_MODE,e.inherit(e.APOS_STRING_MODE,{illegal:null}),e.inherit(e.QUOTE_STRING_MODE,{illegal:null}),{className:"string",begin:"'",end:"'"},{className:"symbol",variants:[{begin:"#",end:"\\d+",illegal:"\\W"}]}]}}return Wp=t,Wp}var Kp,jv;function jye(){if(jv)return Kp;jv=1;const 
t=c=>({IMPORTANT:{scope:"meta",begin:"!important"},BLOCK_COMMENT:c.C_BLOCK_COMMENT_MODE,HEXCOLOR:{scope:"number",begin:/#(([0-9a-fA-F]{3,4})|(([0-9a-fA-F]{2}){3,4}))\b/},FUNCTION_DISPATCH:{className:"built_in",begin:/[\w-]+(?=\()/},ATTRIBUTE_SELECTOR_MODE:{scope:"selector-attr",begin:/\[/,end:/\]/,illegal:"$",contains:[c.APOS_STRING_MODE,c.QUOTE_STRING_MODE]},CSS_NUMBER_MODE:{scope:"number",begin:c.NUMBER_RE+"(%|em|ex|ch|rem|vw|vh|vmin|vmax|cm|mm|in|pt|pc|px|deg|grad|rad|turn|s|ms|Hz|kHz|dpi|dpcm|dppx)?",relevance:0},CSS_VARIABLE:{className:"attr",begin:/--[A-Za-z][A-Za-z0-9_-]*/}}),e=["a","abbr","address","article","aside","audio","b","blockquote","body","button","canvas","caption","cite","code","dd","del","details","dfn","div","dl","dt","em","fieldset","figcaption","figure","footer","form","h1","h2","h3","h4","h5","h6","header","hgroup","html","i","iframe","img","input","ins","kbd","label","legend","li","main","mark","menu","nav","object","ol","p","q","quote","samp","section","span","strong","summary","sup","table","tbody","td","textarea","tfoot","th","thead","time","tr","ul","var","video"],n=["any-hover","any-pointer","aspect-ratio","color","color-gamut","color-index","device-aspect-ratio","device-height","device-width","display-mode","forced-colors","grid","height","hover","inverted-colors","monochrome","orientation","overflow-block","overflow-inline","pointer","prefers-color-scheme","prefers-contrast","prefers-reduced-motion","prefers-reduced-transparency","resolution","scan","scripting","update","width","min-width","max-width","min-height","max-height"],i=["active","any-link","blank","checked","current","default","defined","dir","disabled","drop","empty","enabled","first","first-child","first-of-type","fullscreen","future","focus","focus-visible","focus-within","has","host","host-context","hover","indeterminate","in-range","invalid","is","lang","last-child","last-of-type","left","link","local-link","not","nth-child","nth-col","nth-last-child","nth-last-col","nth-last-of-type","nth-of-type","only-child","only-of-type","optional","out-of-range","past","placeholder-shown","read-only","read-write","required","right","root","scope","target","target-within","user-invalid","valid","visited","where"],o=["after","backdrop","before","cue","cue-region","first-letter","first-line","grammar-error","marker","part","placeholder","selection","slotted","spelling-error"],s=["align-content","align-items","align-self","all","animation","animation-delay","animation-direction","animation-duration","animation-fill-mode","animation-iteration-count","animation-name","animation-play-state","animation-timing-function","backface-visibility","background","background-attachment","background-blend-mode","background-clip","background-color","background-image","background-origin","background-position","background-repeat","background-size","block-size","border","border-block","border-block-color","border-block-end","border-block-end-color","border-block-end-style","border-block-end-width","border-block-start","border-block-start-color","border-block-start-style","border-block-start-width","border-block-style","border-block-width","border-bottom","border-bottom-color","border-bottom-left-radius","border-bottom-right-radius","border-bottom-style","border-bottom-width","border-collapse","border-color","border-image","border-image-outset","border-image-repeat","border-image-slice","border-image-source","border-image-width","border-inline","border-inline-color","border-inline-end","border-inline-end-color","border-inline-e
nd-style","border-inline-end-width","border-inline-start","border-inline-start-color","border-inline-start-style","border-inline-start-width","border-inline-style","border-inline-width","border-left","border-left-color","border-left-style","border-left-width","border-radius","border-right","border-right-color","border-right-style","border-right-width","border-spacing","border-style","border-top","border-top-color","border-top-left-radius","border-top-right-radius","border-top-style","border-top-width","border-width","bottom","box-decoration-break","box-shadow","box-sizing","break-after","break-before","break-inside","caption-side","caret-color","clear","clip","clip-path","clip-rule","color","column-count","column-fill","column-gap","column-rule","column-rule-color","column-rule-style","column-rule-width","column-span","column-width","columns","contain","content","content-visibility","counter-increment","counter-reset","cue","cue-after","cue-before","cursor","direction","display","empty-cells","filter","flex","flex-basis","flex-direction","flex-flow","flex-grow","flex-shrink","flex-wrap","float","flow","font","font-display","font-family","font-feature-settings","font-kerning","font-language-override","font-size","font-size-adjust","font-smoothing","font-stretch","font-style","font-synthesis","font-variant","font-variant-caps","font-variant-east-asian","font-variant-ligatures","font-variant-numeric","font-variant-position","font-variation-settings","font-weight","gap","glyph-orientation-vertical","grid","grid-area","grid-auto-columns","grid-auto-flow","grid-auto-rows","grid-column","grid-column-end","grid-column-start","grid-gap","grid-row","grid-row-end","grid-row-start","grid-template","grid-template-areas","grid-template-columns","grid-template-rows","hanging-punctuation","height","hyphens","icon","image-orientation","image-rendering","image-resolution","ime-mode","inline-size","isolation","justify-content","left","letter-spacing","line-break","line-height","list-style","list-style-image","list-style-position","list-style-type","margin","margin-block","margin-block-end","margin-block-start","margin-bottom","margin-inline","margin-inline-end","margin-inline-start","margin-left","margin-right","margin-top","marks","mask","mask-border","mask-border-mode","mask-border-outset","mask-border-repeat","mask-border-slice","mask-border-source","mask-border-width","mask-clip","mask-composite","mask-image","mask-mode","mask-origin","mask-position","mask-repeat","mask-size","mask-type","max-block-size","max-height","max-inline-size","max-width","min-block-size","min-height","min-inline-size","min-width","mix-blend-mode","nav-down","nav-index","nav-left","nav-right","nav-up","none","normal","object-fit","object-position","opacity","order","orphans","outline","outline-color","outline-offset","outline-style","outline-width","overflow","overflow-wrap","overflow-x","overflow-y","padding","padding-block","padding-block-end","padding-block-start","padding-bottom","padding-inline","padding-inline-end","padding-inline-start","padding-left","padding-right","padding-top","page-break-after","page-break-before","page-break-inside","pause","pause-after","pause-before","perspective","perspective-origin","pointer-events","position","quotes","resize","rest","rest-after","rest-before","right","row-gap","scroll-margin","scroll-margin-block","scroll-margin-block-end","scroll-margin-block-start","scroll-margin-bottom","scroll-margin-inline","scroll-margin-inline-end","scroll-margin-inline-start","scroll-margin-left","scroll
-margin-right","scroll-margin-top","scroll-padding","scroll-padding-block","scroll-padding-block-end","scroll-padding-block-start","scroll-padding-bottom","scroll-padding-inline","scroll-padding-inline-end","scroll-padding-inline-start","scroll-padding-left","scroll-padding-right","scroll-padding-top","scroll-snap-align","scroll-snap-stop","scroll-snap-type","scrollbar-color","scrollbar-gutter","scrollbar-width","shape-image-threshold","shape-margin","shape-outside","speak","speak-as","src","tab-size","table-layout","text-align","text-align-all","text-align-last","text-combine-upright","text-decoration","text-decoration-color","text-decoration-line","text-decoration-style","text-emphasis","text-emphasis-color","text-emphasis-position","text-emphasis-style","text-indent","text-justify","text-orientation","text-overflow","text-rendering","text-shadow","text-transform","text-underline-position","top","transform","transform-box","transform-origin","transform-style","transition","transition-delay","transition-duration","transition-property","transition-timing-function","unicode-bidi","vertical-align","visibility","voice-balance","voice-duration","voice-family","voice-pitch","voice-range","voice-rate","voice-stress","voice-volume","white-space","widows","width","will-change","word-break","word-spacing","word-wrap","writing-mode","z-index"].reverse();function l(c){const d=t(c),_="and or not only",p={className:"variable",begin:"\\$"+c.IDENT_RE},g=["charset","css","debug","extend","font-face","for","import","include","keyframes","media","mixin","page","warn","while"],E="(?=[.\\s\\n[:,(])";return{name:"Stylus",aliases:["styl"],case_insensitive:!1,keywords:"if else for in",illegal:"("+["\\?","(\\bReturn\\b)","(\\bEnd\\b)","(\\bend\\b)","(\\bdef\\b)",";","#\\s","\\*\\s","===\\s","\\|","%"].join("|")+")",contains:[c.QUOTE_STRING_MODE,c.APOS_STRING_MODE,c.C_LINE_COMMENT_MODE,c.C_BLOCK_COMMENT_MODE,d.HEXCOLOR,{begin:"\\.[a-zA-Z][a-zA-Z0-9_-]*"+E,className:"selector-class"},{begin:"#[a-zA-Z][a-zA-Z0-9_-]*"+E,className:"selector-id"},{begin:"\\b("+e.join("|")+")"+E,className:"selector-tag"},{className:"selector-pseudo",begin:"&?:("+i.join("|")+")"+E},{className:"selector-pseudo",begin:"&?:(:)?("+o.join("|")+")"+E},d.ATTRIBUTE_SELECTOR_MODE,{className:"keyword",begin:/@media/,starts:{end:/[{;}]/,keywords:{$pattern:/[a-z-]+/,keyword:_,attribute:n.join(" ")},contains:[d.CSS_NUMBER_MODE]}},{className:"keyword",begin:"@((-(o|moz|ms|webkit)-)?("+g.join("|")+"))\\b"},p,d.CSS_NUMBER_MODE,{className:"function",begin:"^[a-zA-Z][a-zA-Z0-9_-]*\\(.*\\)",illegal:"[\\n]",returnBegin:!0,contains:[{className:"title",begin:"\\b[a-zA-Z][a-zA-Z0-9_-]*"},{className:"params",begin:/\(/,end:/\)/,contains:[d.HEXCOLOR,p,c.APOS_STRING_MODE,d.CSS_NUMBER_MODE,c.QUOTE_STRING_MODE]}]},d.CSS_VARIABLE,{className:"attribute",begin:"\\b("+s.join("|")+")\\b",starts:{end:/;|$/,contains:[d.HEXCOLOR,p,c.APOS_STRING_MODE,c.QUOTE_STRING_MODE,d.CSS_NUMBER_MODE,c.C_BLOCK_COMMENT_MODE,d.IMPORTANT,d.FUNCTION_DISPATCH],illegal:/\./,relevance:0}},d.FUNCTION_DISPATCH]}}return Kp=l,Kp}var Qp,eC;function eIe(){if(eC)return Qp;eC=1;function t(e){return{name:"SubUnit",case_insensitive:!0,contains:[{className:"string",begin:`\\[ -(multipart)?`,end:`\\] 
-`},{className:"string",begin:"\\d{4}-\\d{2}-\\d{2}(\\s+)\\d{2}:\\d{2}:\\d{2}.\\d+Z"},{className:"string",begin:"(\\+|-)\\d+"},{className:"keyword",relevance:10,variants:[{begin:"^(test|testing|success|successful|failure|error|skip|xfail|uxsuccess)(:?)\\s+(test)?"},{begin:"^progress(:?)(\\s+)?(pop|push)?"},{begin:"^tags:"},{begin:"^time:"}]}]}}return Qp=t,Qp}var Xp,tC;function tIe(){if(tC)return Xp;tC=1;function t(U){return U?typeof U=="string"?U:U.source:null}function e(U){return n("(?=",U,")")}function n(...U){return U.map(z=>t(z)).join("")}function i(U){const W=U[U.length-1];return typeof W=="object"&&W.constructor===Object?(U.splice(U.length-1,1),W):{}}function o(...U){return"("+(i(U).capture?"":"?:")+U.map(K=>t(K)).join("|")+")"}const s=U=>n(/\b/,U,/\w$/.test(U)?/\b/:/\B/),l=["Protocol","Type"].map(s),c=["init","self"].map(s),d=["Any","Self"],_=["actor","any","associatedtype","async","await",/as\?/,/as!/,"as","break","case","catch","class","continue","convenience","default","defer","deinit","didSet","distributed","do","dynamic","else","enum","extension","fallthrough",/fileprivate\(set\)/,"fileprivate","final","for","func","get","guard","if","import","indirect","infix",/init\?/,/init!/,"inout",/internal\(set\)/,"internal","in","is","isolated","nonisolated","lazy","let","mutating","nonmutating",/open\(set\)/,"open","operator","optional","override","postfix","precedencegroup","prefix",/private\(set\)/,"private","protocol",/public\(set\)/,"public","repeat","required","rethrows","return","set","some","static","struct","subscript","super","switch","throws","throw",/try\?/,/try!/,"try","typealias",/unowned\(safe\)/,/unowned\(unsafe\)/,"unowned","var","weak","where","while","willSet"],p=["false","nil","true"],g=["assignment","associativity","higherThan","left","lowerThan","none","right"],E=["#colorLiteral","#column","#dsohandle","#else","#elseif","#endif","#error","#file","#fileID","#fileLiteral","#filePath","#function","#if","#imageLiteral","#keyPath","#line","#selector","#sourceLocation","#warn_unqualified_access","#warning"],f=["abs","all","any","assert","assertionFailure","debugPrint","dump","fatalError","getVaList","isKnownUniquelyReferenced","max","min","numericCast","pointwiseMax","pointwiseMin","precondition","preconditionFailure","print","readLine","repeatElement","sequence","stride","swap","swift_unboxFromSwiftValueWithType","transcode","type","unsafeBitCast","unsafeDowncast","withExtendedLifetime","withUnsafeMutablePointer","withUnsafePointer","withVaList","withoutActuallyEscaping","zip"],S=o(/[/=\-+!*%<>&|^~?]/,/[\u00A1-\u00A7]/,/[\u00A9\u00AB]/,/[\u00AC\u00AE]/,/[\u00B0\u00B1]/,/[\u00B6\u00BB\u00BF\u00D7\u00F7]/,/[\u2016-\u2017]/,/[\u2020-\u2027]/,/[\u2030-\u203E]/,/[\u2041-\u2053]/,/[\u2055-\u205E]/,/[\u2190-\u23FF]/,/[\u2500-\u2775]/,/[\u2794-\u2BFF]/,/[\u2E00-\u2E7F]/,/[\u3001-\u3003]/,/[\u3008-\u3020]/,/[\u3030]/),C=o(S,/[\u0300-\u036F]/,/[\u1DC0-\u1DFF]/,/[\u20D0-\u20FF]/,/[\uFE00-\uFE0F]/,/[\uFE20-\uFE2F]/),h=n(S,C,"*"),T=o(/[a-zA-Z_]/,/[\u00A8\u00AA\u00AD\u00AF\u00B2-\u00B5\u00B7-\u00BA]/,/[\u00BC-\u00BE\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u00FF]/,/[\u0100-\u02FF\u0370-\u167F\u1681-\u180D\u180F-\u1DBF]/,/[\u1E00-\u1FFF]/,/[\u200B-\u200D\u202A-\u202E\u203F-\u2040\u2054\u2060-\u206F]/,/[\u2070-\u20CF\u2100-\u218F\u2460-\u24FF\u2776-\u2793]/,/[\u2C00-\u2DFF\u2E80-\u2FFF]/,/[\u3004-\u3007\u3021-\u302F\u3031-\u303F\u3040-\uD7FF]/,/[\uF900-\uFD3D\uFD40-\uFDCF\uFDF0-\uFE1F\uFE30-\uFE44]/,/[\uFE47-\uFEFE\uFF00-\uFFFD]/),N=o(T,/\d/,/[\u0300-\u036F\u1DC0-\u1DFF\u20D0-\u20FF\uFE20-\uFE2F
]/),y=n(T,N,"*"),x=n(/[A-Z]/,N,"*"),P=["autoclosure",n(/convention\(/,o("swift","block","c"),/\)/),"discardableResult","dynamicCallable","dynamicMemberLookup","escaping","frozen","GKInspectable","IBAction","IBDesignable","IBInspectable","IBOutlet","IBSegueAction","inlinable","main","nonobjc","NSApplicationMain","NSCopying","NSManaged",n(/objc\(/,y,/\)/),"objc","objcMembers","propertyWrapper","requires_stored_property_inits","resultBuilder","testable","UIApplicationMain","unknown","usableFromInline"],D=["iOS","iOSApplicationExtension","macOS","macOSApplicationExtension","macCatalyst","macCatalystApplicationExtension","watchOS","watchOSApplicationExtension","tvOS","tvOSApplicationExtension","swift"];function k(U){const W={match:/\s+/,relevance:0},z=U.COMMENT("/\\*","\\*/",{contains:["self"]}),K=[U.C_LINE_COMMENT_MODE,z],Ee={match:[/\./,o(...l,...c)],className:{2:"keyword"}},oe={match:n(/\./,o(..._)),relevance:0},L=_.filter(Ze=>typeof Ze=="string").concat(["_|0"]),J=_.filter(Ze=>typeof Ze!="string").concat(d).map(s),re={variants:[{className:"keyword",match:o(...J,...c)}]},G={$pattern:o(/\b\w+/,/#\w+/),keyword:L.concat(E),literal:p},X=[Ee,oe,re],_e={match:n(/\./,o(...f)),relevance:0},ve={className:"built_in",match:n(/\b/,o(...f),/(?=\()/)},he=[_e,ve],tt={match:/->/,relevance:0},lt={className:"operator",relevance:0,variants:[{match:h},{match:`\\.(\\.|${C})+`}]},$e=[tt,lt],Ce="([0-9]_*)+",Be="([0-9a-fA-F]_*)+",Ve={className:"number",relevance:0,variants:[{match:`\\b(${Ce})(\\.(${Ce}))?([eE][+-]?(${Ce}))?\\b`},{match:`\\b0x(${Be})(\\.(${Be}))?([pP][+-]?(${Ce}))?\\b`},{match:/\b0o([0-7]_*)+\b/},{match:/\b0b([01]_*)+\b/}]},xe=(Ze="")=>({className:"subst",variants:[{match:n(/\\/,Ze,/[0\\tnr"']/)},{match:n(/\\/,Ze,/u\{[0-9a-fA-F]{1,8}\}/)}]}),He=(Ze="")=>({className:"subst",match:n(/\\/,Ze,/[\t ]*(?:[\r\n]|\r\n)/)}),rt=(Ze="")=>({className:"subst",label:"interpol",begin:n(/\\/,Ze,/\(/),end:/\)/}),We=(Ze="")=>({begin:n(Ze,/"""/),end:n(/"""/,Ze),contains:[xe(Ze),He(Ze),rt(Ze)]}),te=(Ze="")=>({begin:n(Ze,/"/),end:n(/"/,Ze),contains:[xe(Ze),rt(Ze)]}),pe={className:"string",variants:[We(),We("#"),We("##"),We("###"),te(),te("#"),te("##"),te("###")]},ie={match:n(/`/,y,/`/)},Pe={className:"variable",match:/\$\d+/},we={className:"variable",match:`\\$${N}+`},Xe=[ie,Pe,we],pt={match:/(@|#(un)?)available/,className:"keyword",starts:{contains:[{begin:/\(/,end:/\)/,keywords:D,contains:[...$e,Ve,pe]}]}},me={className:"keyword",match:n(/@/,o(...P))},bt={className:"meta",match:n(/@/,y)},Ue=[pt,me,bt],Ie={match:e(/\b[A-Z]/),relevance:0,contains:[{className:"type",match:n(/(AV|CA|CF|CG|CI|CL|CM|CN|CT|MK|MP|MTK|MTL|NS|SCN|SK|UI|WK|XC)/,N,"+")},{className:"type",match:x,relevance:0},{match:/[?!]+/,relevance:0},{match:/\.\.\./,relevance:0},{match:n(/\s+&\s+/,e(x)),relevance:0}]},zt={begin:/</,end:/>/,keywords:G,contains:[...K,...X,...Ue,tt,Ie]};Ie.contains.push(zt);const 
Nt={match:n(y,/\s*:/),keywords:"_|0",relevance:0},Gt={begin:/\(/,end:/\)/,relevance:0,keywords:G,contains:["self",Nt,...K,...X,...he,...$e,Ve,pe,...Xe,...Ue,Ie]},Sn={begin:/</,end:/>/,contains:[...K,Ie]},ne={begin:o(e(n(y,/\s*:/)),e(n(y,/\s+/,y,/\s*:/))),end:/:/,relevance:0,contains:[{className:"keyword",match:/\b_\b/},{className:"params",match:y}]},ce={begin:/\(/,end:/\)/,keywords:G,contains:[ne,...K,...X,...$e,Ve,pe,...Ue,Ie,Gt],endsParent:!0,illegal:/["']/},Oe={match:[/func/,/\s+/,o(ie.match,y,h)],className:{1:"keyword",3:"title.function"},contains:[Sn,ce,W],illegal:[/\[/,/%/]},Me={match:[/\b(?:subscript|init[?!]?)/,/\s*(?=[<(])/],className:{1:"keyword"},contains:[Sn,ce,W],illegal:/\[|%/},ct={match:[/operator/,/\s+/,h],className:{1:"keyword",3:"title"}},xt={begin:[/precedencegroup/,/\s+/,x],className:{1:"keyword",3:"title"},contains:[Ie],keywords:[...g,...p],end:/}/};for(const Ze of pe.variants){const Yt=Ze.contains.find(Z=>Z.label==="interpol");Yt.keywords=G;const er=[...X,...he,...$e,Ve,pe,...Xe];Yt.contains=[...er,{begin:/\(/,end:/\)/,contains:["self",...er]}]}return{name:"Swift",keywords:G,contains:[...K,Oe,Me,{beginKeywords:"struct protocol class extension enum actor",end:"\\{",excludeEnd:!0,keywords:G,contains:[U.inherit(U.TITLE_MODE,{className:"title.class",begin:/[A-Za-z$_][\u00C0-\u02B80-9A-Za-z$_]*/}),...X]},ct,xt,{beginKeywords:"import",end:/$/,contains:[...K],relevance:0},...X,...he,...$e,Ve,pe,...Xe,...Ue,Ie,Gt]}}return Xp=k,Xp}var Zp,nC;function nIe(){if(nC)return Zp;nC=1;function t(e){return{name:"Tagger Script",contains:[{className:"comment",begin:/\$noop\(/,end:/\)/,contains:[{begin:/\\[()]/},{begin:/\(/,end:/\)/,contains:[{begin:/\\[()]/},"self"]}],relevance:10},{className:"keyword",begin:/\$[_a-zA-Z0-9]+(?=\()/},{className:"variable",begin:/%[_a-zA-Z0-9:]+%/},{className:"symbol",begin:/\\[\\nt$%,()]/},{className:"symbol",begin:/\\u[a-fA-F0-9]{4}/}]}}return Zp=t,Zp}var Jp,rC;function rIe(){if(rC)return Jp;rC=1;function t(e){const n="true false yes no null",i="[\\w#;/?:@&=+$,.~*'()[\\]]+",o={className:"attr",variants:[{begin:"\\w[\\w :\\/.-]*:(?=[ ]|$)"},{begin:'"\\w[\\w :\\/.-]*":(?=[ ]|$)'},{begin:"'\\w[\\w :\\/.-]*':(?=[ ]|$)"}]},s={className:"template-variable",variants:[{begin:/\{\{/,end:/\}\}/},{begin:/%\{/,end:/\}/}]},l={className:"string",relevance:0,variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/},{begin:/\S+/}],contains:[e.BACKSLASH_ESCAPE,s]},c=e.inherit(l,{variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/},{begin:/[^\s,{}[\]]+/}]}),d="[0-9]{4}(-[0-9][0-9]){0,2}",_="([Tt \\t][0-9][0-9]?(:[0-9][0-9]){2})?",p="(\\.[0-9]*)?",g="([ \\t])*(Z|[-+][0-9][0-9]?(:[0-9][0-9])?)?",E={className:"number",begin:"\\b"+d+_+p+g+"\\b"},f={end:",",endsWithParent:!0,excludeEnd:!0,keywords:n,relevance:0},S={begin:/\{/,end:/\}/,contains:[f],illegal:"\\n",relevance:0},C={begin:"\\[",end:"\\]",contains:[f],illegal:"\\n",relevance:0},h=[o,{className:"meta",begin:"^---\\s*$",relevance:10},{className:"string",begin:"[\\|>]([1-9]?[+-])?[ ]*\\n( +)[^ ][^\\n]*\\n(\\2[^\\n]+\\n?)*"},{begin:"<%[%=-]?",end:"[%-]?%>",subLanguage:"ruby",excludeBegin:!0,excludeEnd:!0,relevance:0},{className:"type",begin:"!\\w+!"+i},{className:"type",begin:"!<"+i+">"},{className:"type",begin:"!"+i},{className:"type",begin:"!!"+i},{className:"meta",begin:"&"+e.UNDERSCORE_IDENT_RE+"$"},{className:"meta",begin:"\\*"+e.UNDERSCORE_IDENT_RE+"$"},{className:"bullet",begin:"-(?=[ 
]|$)",relevance:0},e.HASH_COMMENT_MODE,{beginKeywords:n,keywords:{literal:n}},E,{className:"number",begin:e.C_NUMBER_RE+"\\b",relevance:0},S,C,l],T=[...h];return T.pop(),T.push(c),f.contains=T,{name:"YAML",case_insensitive:!0,aliases:["yml"],contains:h}}return Jp=t,Jp}var jp,iC;function iIe(){if(iC)return jp;iC=1;function t(e){return{name:"Test Anything Protocol",case_insensitive:!0,contains:[e.HASH_COMMENT_MODE,{className:"meta",variants:[{begin:"^TAP version (\\d+)$"},{begin:"^1\\.\\.(\\d+)$"}]},{begin:/---$/,end:"\\.\\.\\.$",subLanguage:"yaml",relevance:0},{className:"number",begin:" (\\d+) "},{className:"symbol",variants:[{begin:"^ok"},{begin:"^not ok"}]}]}}return jp=t,jp}var em,aC;function aIe(){if(aC)return em;aC=1;function t(e){const n=e.regex,i=/[a-zA-Z_][a-zA-Z0-9_]*/,o={className:"number",variants:[e.BINARY_NUMBER_MODE,e.C_NUMBER_MODE]};return{name:"Tcl",aliases:["tk"],keywords:["after","append","apply","array","auto_execok","auto_import","auto_load","auto_mkindex","auto_mkindex_old","auto_qualify","auto_reset","bgerror","binary","break","catch","cd","chan","clock","close","concat","continue","dde","dict","encoding","eof","error","eval","exec","exit","expr","fblocked","fconfigure","fcopy","file","fileevent","filename","flush","for","foreach","format","gets","glob","global","history","http","if","incr","info","interp","join","lappend|10","lassign|10","lindex|10","linsert|10","list","llength|10","load","lrange|10","lrepeat|10","lreplace|10","lreverse|10","lsearch|10","lset|10","lsort|10","mathfunc","mathop","memory","msgcat","namespace","open","package","parray","pid","pkg::create","pkg_mkIndex","platform","platform::shell","proc","puts","pwd","read","refchan","regexp","registry","regsub|10","rename","return","safe","scan","seek","set","socket","source","split","string","subst","switch","tcl_endOfWord","tcl_findLibrary","tcl_startOfNextWord","tcl_startOfPreviousWord","tcl_wordBreakAfter","tcl_wordBreakBefore","tcltest","tclvars","tell","time","tm","trace","unknown","unload","unset","update","uplevel","upvar","variable","vwait","while"],contains:[e.COMMENT(";[ \\t]*#","$"),e.COMMENT("^[ \\t]*#","$"),{beginKeywords:"proc",end:"[\\{]",excludeEnd:!0,contains:[{className:"title",begin:"[ \\t\\n\\r]+(::)?[a-zA-Z_]((::)?[a-zA-Z0-9_])*",end:"[ \\t\\n\\r]",endsWithParent:!0,excludeEnd:!0}]},{className:"variable",variants:[{begin:n.concat(/\$/,n.optional(/::/),i,"(::",i,")*")},{begin:"\\$\\{(::)?[a-zA-Z_]((::)?[a-zA-Z0-9_])*",end:"\\}",contains:[o]}]},{className:"string",contains:[e.BACKSLASH_ESCAPE],variants:[e.inherit(e.QUOTE_STRING_MODE,{illegal:null})]},o]}}return em=t,em}var tm,oC;function oIe(){if(oC)return tm;oC=1;function t(e){const n=["bool","byte","i16","i32","i64","double","string","binary"];return{name:"Thrift",keywords:{keyword:["namespace","const","typedef","struct","enum","service","exception","void","oneway","set","list","map","required","optional"],type:n,literal:"true false"},contains:[e.QUOTE_STRING_MODE,e.NUMBER_MODE,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"class",beginKeywords:"struct enum service exception",end:/\{/,illegal:/\n/,contains:[e.inherit(e.TITLE_MODE,{starts:{endsWithParent:!0,excludeEnd:!0}})]},{begin:"\\b(set|list|map)\\s*<",keywords:{type:[...n,"set","list","map"]},end:">",contains:["self"]}]}}return tm=t,tm}var nm,sC;function sIe(){if(sC)return nm;sC=1;function t(e){const 
n={className:"number",begin:"[1-9][0-9]*",relevance:0},i={className:"symbol",begin:":[^\\]]+"},o={className:"built_in",begin:"(AR|P|PAYLOAD|PR|R|SR|RSR|LBL|VR|UALM|MESSAGE|UTOOL|UFRAME|TIMER|TIMER_OVERFLOW|JOINT_MAX_SPEED|RESUME_PROG|DIAG_REC)\\[",end:"\\]",contains:["self",n,i]},s={className:"built_in",begin:"(AI|AO|DI|DO|F|RI|RO|UI|UO|GI|GO|SI|SO)\\[",end:"\\]",contains:["self",n,e.QUOTE_STRING_MODE,i]};return{name:"TP",keywords:{keyword:["ABORT","ACC","ADJUST","AND","AP_LD","BREAK","CALL","CNT","COL","CONDITION","CONFIG","DA","DB","DIV","DETECT","ELSE","END","ENDFOR","ERR_NUM","ERROR_PROG","FINE","FOR","GP","GUARD","INC","IF","JMP","LINEAR_MAX_SPEED","LOCK","MOD","MONITOR","OFFSET","Offset","OR","OVERRIDE","PAUSE","PREG","PTH","RT_LD","RUN","SELECT","SKIP","Skip","TA","TB","TO","TOOL_OFFSET","Tool_Offset","UF","UT","UFRAME_NUM","UTOOL_NUM","UNLOCK","WAIT","X","Y","Z","W","P","R","STRLEN","SUBSTR","FINDSTR","VOFFSET","PROG","ATTR","MN","POS"],literal:["ON","OFF","max_speed","LPOS","JPOS","ENABLE","DISABLE","START","STOP","RESET"]},contains:[o,s,{className:"keyword",begin:"/(PROG|ATTR|MN|POS|END)\\b"},{className:"keyword",begin:"(CALL|RUN|POINT_LOGIC|LBL)\\b"},{className:"keyword",begin:"\\b(ACC|CNT|Skip|Offset|PSPD|RT_LD|AP_LD|Tool_Offset)"},{className:"number",begin:"\\d+(sec|msec|mm/sec|cm/min|inch/min|deg/sec|mm|in|cm)?\\b",relevance:0},e.COMMENT("//","[;$]"),e.COMMENT("!","[;$]"),e.COMMENT("--eg:","$"),e.QUOTE_STRING_MODE,{className:"string",begin:"'",end:"'"},e.C_NUMBER_MODE,{className:"variable",begin:"\\$[A-Za-z0-9_]+"}]}}return nm=t,nm}var rm,lC;function lIe(){if(lC)return rm;lC=1;function t(e){const n=e.regex,i=["absolute_url","asset|0","asset_version","attribute","block","constant","controller|0","country_timezones","csrf_token","cycle","date","dump","expression","form|0","form_end","form_errors","form_help","form_label","form_rest","form_row","form_start","form_widget","html_classes","include","is_granted","logout_path","logout_url","max","min","parent","path|0","random","range","relative_path","render","render_esi","source","template_from_string","url|0"],o=["abs","abbr_class","abbr_method","batch","capitalize","column","convert_encoding","country_name","currency_name","currency_symbol","data_uri","date","date_modify","default","escape","file_excerpt","file_link","file_relative","filter","first","format","format_args","format_args_as_text","format_currency","format_date","format_datetime","format_file","format_file_from_text","format_number","format_time","html_to_markdown","humanize","inky_to_html","inline_css","join","json_encode","keys","language_name","last","length","locale_name","lower","map","markdown","markdown_to_html","merge","nl2br","number_format","raw","reduce","replace","reverse","round","slice","slug","sort","spaceless","split","striptags","timezone_name","title","trans","transchoice","trim","u|0","upper","url_encode","yaml_dump","yaml_encode"];let s=["apply","autoescape","block","cache","deprecated","do","embed","extends","filter","flush","for","form_theme","from","if","import","include","macro","sandbox","set","stopwatch","trans","trans_default_domain","transchoice","use","verbatim","with"];s=s.concat(s.map(C=>`end${C}`));const l={scope:"string",variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/}]},c={scope:"number",match:/\d+/},d={begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,contains:[l,c]},_={beginKeywords:i.join(" 
"),keywords:{name:i},relevance:0,contains:[d]},p={match:/\|(?=[A-Za-z_]+:?)/,beginScope:"punctuation",relevance:0,contains:[{match:/[A-Za-z_]+:?/,keywords:o}]},g=(C,{relevance:h})=>({beginScope:{1:"template-tag",3:"name"},relevance:h||2,endScope:"template-tag",begin:[/\{%/,/\s*/,n.either(...C)],end:/%\}/,keywords:"in",contains:[p,_,l,c]}),E=/[a-z_]+/,f=g(s,{relevance:2}),S=g([E],{relevance:1});return{name:"Twig",aliases:["craftcms"],case_insensitive:!0,subLanguage:"xml",contains:[e.COMMENT(/\{#/,/#\}/),f,S,{className:"template-variable",begin:/\{\{/,end:/\}\}/,contains:["self",p,_,l,c]}]}}return rm=t,rm}var im,cC;function cIe(){if(cC)return im;cC=1;const t="[A-Za-z$_][0-9A-Za-z$_]*",e=["as","in","of","if","for","while","finally","var","new","function","do","return","void","else","break","catch","instanceof","with","throw","case","default","try","switch","continue","typeof","delete","let","yield","const","class","debugger","async","await","static","import","from","export","extends"],n=["true","false","null","undefined","NaN","Infinity"],i=["Object","Function","Boolean","Symbol","Math","Date","Number","BigInt","String","RegExp","Array","Float32Array","Float64Array","Int8Array","Uint8Array","Uint8ClampedArray","Int16Array","Int32Array","Uint16Array","Uint32Array","BigInt64Array","BigUint64Array","Set","Map","WeakSet","WeakMap","ArrayBuffer","SharedArrayBuffer","Atomics","DataView","JSON","Promise","Generator","GeneratorFunction","AsyncFunction","Reflect","Proxy","Intl","WebAssembly"],o=["Error","EvalError","InternalError","RangeError","ReferenceError","SyntaxError","TypeError","URIError"],s=["setInterval","setTimeout","clearInterval","clearTimeout","require","exports","eval","isFinite","isNaN","parseFloat","parseInt","decodeURI","decodeURIComponent","encodeURI","encodeURIComponent","escape","unescape"],l=["arguments","this","super","console","window","document","localStorage","sessionStorage","module","global"],c=[].concat(s,i,o);function d(p){const g=p.regex,E=(xe,{after:He})=>{const rt="",end:""},C=/<[A-Za-z0-9\\._:-]+\s*\/>/,h={begin:/<[A-Za-z0-9\\._:-]+/,end:/\/[A-Za-z0-9\\._:-]+>|\/>/,isTrulyOpeningTag:(xe,He)=>{const rt=xe[0].length+xe.index,We=xe.input[rt];if(We==="<"||We===","){He.ignoreMatch();return}We===">"&&(E(xe,{after:rt})||He.ignoreMatch());let te;const 
pe=xe.input.substring(rt);if(te=pe.match(/^\s*=/)){He.ignoreMatch();return}if((te=pe.match(/^\s+extends\s+/))&&te.index===0){He.ignoreMatch();return}}},T={$pattern:t,keyword:e,literal:n,built_in:c,"variable.language":l},N="[0-9](_?[0-9])*",y=`\\.(${N})`,x="0|[1-9](_?[0-9])*|0[0-7]*[89][0-9]*",P={className:"number",variants:[{begin:`(\\b(${x})((${y})|\\.)?|(${y}))[eE][+-]?(${N})\\b`},{begin:`\\b(${x})\\b((${y})\\b|\\.)?|(${y})\\b`},{begin:"\\b(0|[1-9](_?[0-9])*)n\\b"},{begin:"\\b0[xX][0-9a-fA-F](_?[0-9a-fA-F])*n?\\b"},{begin:"\\b0[bB][0-1](_?[0-1])*n?\\b"},{begin:"\\b0[oO][0-7](_?[0-7])*n?\\b"},{begin:"\\b0[0-7]+n?\\b"}],relevance:0},D={className:"subst",begin:"\\$\\{",end:"\\}",keywords:T,contains:[]},k={begin:"html`",end:"",starts:{end:"`",returnEnd:!1,contains:[p.BACKSLASH_ESCAPE,D],subLanguage:"xml"}},U={begin:"css`",end:"",starts:{end:"`",returnEnd:!1,contains:[p.BACKSLASH_ESCAPE,D],subLanguage:"css"}},W={begin:"gql`",end:"",starts:{end:"`",returnEnd:!1,contains:[p.BACKSLASH_ESCAPE,D],subLanguage:"graphql"}},z={className:"string",begin:"`",end:"`",contains:[p.BACKSLASH_ESCAPE,D]},Ee={className:"comment",variants:[p.COMMENT(/\/\*\*(?!\/)/,"\\*/",{relevance:0,contains:[{begin:"(?=@[A-Za-z]+)",relevance:0,contains:[{className:"doctag",begin:"@[A-Za-z]+"},{className:"type",begin:"\\{",end:"\\}",excludeEnd:!0,excludeBegin:!0,relevance:0},{className:"variable",begin:f+"(?=\\s*(-)|$)",endsParent:!0,relevance:0},{begin:/(?=[^\n])\s/,relevance:0}]}]}),p.C_BLOCK_COMMENT_MODE,p.C_LINE_COMMENT_MODE]},oe=[p.APOS_STRING_MODE,p.QUOTE_STRING_MODE,k,U,W,z,{match:/\$\d+/},P];D.contains=oe.concat({begin:/\{/,end:/\}/,keywords:T,contains:["self"].concat(oe)});const L=[].concat(Ee,D.contains),J=L.concat([{begin:/\(/,end:/\)/,keywords:T,contains:["self"].concat(L)}]),re={className:"params",begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:T,contains:J},G={variants:[{match:[/class/,/\s+/,f,/\s+/,/extends/,/\s+/,g.concat(f,"(",g.concat(/\./,f),")*")],scope:{1:"keyword",3:"title.class",5:"keyword",7:"title.class.inherited"}},{match:[/class/,/\s+/,f],scope:{1:"keyword",3:"title.class"}}]},X={relevance:0,match:g.either(/\bJSON/,/\b[A-Z][a-z]+([A-Z][a-z]*|\d)*/,/\b[A-Z]{2,}([A-Z][a-z]+|\d)+([A-Z][a-z]*)*/,/\b[A-Z]{2,}[a-z]+([A-Z][a-z]+|\d)*([A-Z][a-z]*)*/),className:"title.class",keywords:{_:[...i,...o]}},_e={label:"use_strict",className:"meta",relevance:10,begin:/^\s*['"]use (strict|asm)['"]/},ve={variants:[{match:[/function/,/\s+/,f,/(?=\s*\()/]},{match:[/function/,/\s*(?=\()/]}],className:{1:"keyword",3:"title.function"},label:"func.def",contains:[re],illegal:/%/},he={relevance:0,match:/\b[A-Z][A-Z_0-9]+\b/,className:"variable.constant"};function tt(xe){return g.concat("(?!",xe.join("|"),")")}const 
lt={match:g.concat(/\b/,tt([...s,"super","import"]),f,g.lookahead(/\(/)),className:"title.function",relevance:0},$e={begin:g.concat(/\./,g.lookahead(g.concat(f,/(?![0-9A-Za-z$_(])/))),end:f,excludeBegin:!0,keywords:"prototype",className:"property",relevance:0},Ce={match:[/get|set/,/\s+/,f,/(?=\()/],className:{1:"keyword",3:"title.function"},contains:[{begin:/\(\)/},re]},Be="(\\([^()]*(\\([^()]*(\\([^()]*\\)[^()]*)*\\)[^()]*)*\\)|"+p.UNDERSCORE_IDENT_RE+")\\s*=>",Ve={match:[/const|var|let/,/\s+/,f,/\s*/,/=\s*/,/(async\s*)?/,g.lookahead(Be)],keywords:"async",className:{1:"keyword",3:"title.function"},contains:[re]};return{name:"JavaScript",aliases:["js","jsx","mjs","cjs"],keywords:T,exports:{PARAMS_CONTAINS:J,CLASS_REFERENCE:X},illegal:/#(?![$_A-z])/,contains:[p.SHEBANG({label:"shebang",binary:"node",relevance:5}),_e,p.APOS_STRING_MODE,p.QUOTE_STRING_MODE,k,U,W,z,Ee,{match:/\$\d+/},P,X,{className:"attr",begin:f+g.lookahead(":"),relevance:0},Ve,{begin:"("+p.RE_STARTERS_RE+"|\\b(case|return|throw)\\b)\\s*",keywords:"return throw case",relevance:0,contains:[Ee,p.REGEXP_MODE,{className:"function",begin:Be,returnBegin:!0,end:"\\s*=>",contains:[{className:"params",variants:[{begin:p.UNDERSCORE_IDENT_RE,relevance:0},{className:null,begin:/\(\s*\)/,skip:!0},{begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:T,contains:J}]}]},{begin:/,/,relevance:0},{match:/\s+/,relevance:0},{variants:[{begin:S.begin,end:S.end},{match:C},{begin:h.begin,"on:begin":h.isTrulyOpeningTag,end:h.end}],subLanguage:"xml",contains:[{begin:h.begin,end:h.end,skip:!0,contains:["self"]}]}]},ve,{beginKeywords:"while if switch catch for"},{begin:"\\b(?!function)"+p.UNDERSCORE_IDENT_RE+"\\([^()]*(\\([^()]*(\\([^()]*\\)[^()]*)*\\)[^()]*)*\\)\\s*\\{",returnBegin:!0,label:"func.def",contains:[re,p.inherit(p.TITLE_MODE,{begin:f,className:"title.function"})]},{match:/\.\.\./,relevance:0},$e,{match:"\\$"+f,relevance:0},{match:[/\bconstructor(?=\s*\()/],className:{1:"title.function"},contains:[re]},lt,he,G,Ce,{match:/\$[(.]/}]}}function _(p){const g=d(p),E=t,f=["any","void","number","boolean","string","object","never","symbol","bigint","unknown"],S={beginKeywords:"namespace",end:/\{/,excludeEnd:!0,contains:[g.exports.CLASS_REFERENCE]},C={beginKeywords:"interface",end:/\{/,excludeEnd:!0,keywords:{keyword:"interface extends",built_in:f},contains:[g.exports.CLASS_REFERENCE]},h={className:"meta",relevance:10,begin:/^\s*['"]use strict['"]/},T=["type","namespace","interface","public","private","protected","implements","declare","abstract","readonly","enum","override"],N={$pattern:t,keyword:e.concat(T),literal:n,built_in:c.concat(f),"variable.language":l},y={className:"meta",begin:"@"+E},x=(D,k,U)=>{const W=D.contains.findIndex(z=>z.label===k);if(W===-1)throw new Error("can not find mode to replace");D.contains.splice(W,1,U)};Object.assign(g.keywords,N),g.exports.PARAMS_CONTAINS.push(y),g.contains=g.contains.concat([y,S,C]),x(g,"shebang",p.SHEBANG()),x(g,"use_strict",h);const P=g.contains.find(D=>D.label==="func.def");return P.relevance=0,Object.assign(g,{name:"TypeScript",aliases:["ts","tsx","mts","cts"]}),g}return im=_,im}var am,uC;function uIe(){if(uC)return am;uC=1;function t(e){return{name:"Vala",keywords:{keyword:"char uchar unichar int uint long ulong short ushort int8 int16 int32 int64 uint8 uint16 uint32 uint64 float double bool struct enum string void weak unowned owned async signal static abstract interface override virtual delegate if while do for foreach else switch case break default return try catch public private protected 
internal using new this get set const stdout stdin stderr var",built_in:"DBus GLib CCode Gee Object Gtk Posix",literal:"false true null"},contains:[{className:"class",beginKeywords:"class interface namespace",end:/\{/,excludeEnd:!0,illegal:"[^,:\\n\\s\\.]",contains:[e.UNDERSCORE_TITLE_MODE]},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"string",begin:'"""',end:'"""',relevance:5},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,e.C_NUMBER_MODE,{className:"meta",begin:"^#",end:"$"}]}}return am=t,am}var om,dC;function dIe(){if(dC)return om;dC=1;function t(e){const n=e.regex,i={className:"string",begin:/"(""|[^/n])"C\b/},o={className:"string",begin:/"/,end:/"/,illegal:/\n/,contains:[{begin:/""/}]},s=/\d{1,2}\/\d{1,2}\/\d{4}/,l=/\d{4}-\d{1,2}-\d{1,2}/,c=/(\d|1[012])(:\d+){0,2} *(AM|PM)/,d=/\d{1,2}(:\d{1,2}){1,2}/,_={className:"literal",variants:[{begin:n.concat(/# */,n.either(l,s),/ *#/)},{begin:n.concat(/# */,d,/ *#/)},{begin:n.concat(/# */,c,/ *#/)},{begin:n.concat(/# */,n.either(l,s),/ +/,n.either(c,d),/ *#/)}]},p={className:"number",relevance:0,variants:[{begin:/\b\d[\d_]*((\.[\d_]+(E[+-]?[\d_]+)?)|(E[+-]?[\d_]+))[RFD@!#]?/},{begin:/\b\d[\d_]*((U?[SIL])|[%&])?/},{begin:/&H[\dA-F_]+((U?[SIL])|[%&])?/},{begin:/&O[0-7_]+((U?[SIL])|[%&])?/},{begin:/&B[01_]+((U?[SIL])|[%&])?/}]},g={className:"label",begin:/^\w+:/},E=e.COMMENT(/'''/,/$/,{contains:[{className:"doctag",begin:/<\/?/,end:/>/}]}),f=e.COMMENT(null,/$/,{variants:[{begin:/'/},{begin:/([\t ]|^)REM(?=\s)/}]});return{name:"Visual Basic .NET",aliases:["vb"],case_insensitive:!0,classNameAliases:{label:"symbol"},keywords:{keyword:"addhandler alias aggregate ansi as async assembly auto binary by byref byval call case catch class compare const continue custom declare default delegate dim distinct do each equals else elseif end enum erase error event exit explicit finally for friend from function get global goto group handles if implements imports in inherits interface into iterator join key let lib loop me mid module mustinherit mustoverride mybase myclass namespace narrowing new next notinheritable notoverridable of off on operator option optional order overloads overridable overrides paramarray partial preserve private property protected public raiseevent readonly redim removehandler resume return select set shadows shared skip static step stop structure strict sub synclock take text then throw to try unicode until using when where while widening with withevents writeonly yield",built_in:"addressof and andalso await directcast gettype getxmlnamespace is isfalse isnot istrue like mod nameof new not or orelse trycast typeof xor cbool cbyte cchar cdate cdbl cdec cint clng cobj csbyte cshort csng cstr cuint culng cushort",type:"boolean byte char date decimal double integer long object sbyte short single string uinteger ulong ushort",literal:"true false nothing"},illegal:"//|\\{|\\}|endif|gosub|variant|wend|^\\$ ",contains:[i,o,_,p,g,E,f,{className:"meta",begin:/[\t ]*#(const|disable|else|elseif|enable|end|externalsource|if|region)\b/,end:/$/,keywords:{keyword:"const disable else elseif enable end externalsource if region then"},contains:[f]}]}}return om=t,om}var sm,_C;function _Ie(){if(_C)return sm;_C=1;function t(e){const 
n=e.regex,i=["lcase","month","vartype","instrrev","ubound","setlocale","getobject","rgb","getref","string","weekdayname","rnd","dateadd","monthname","now","day","minute","isarray","cbool","round","formatcurrency","conversions","csng","timevalue","second","year","space","abs","clng","timeserial","fixs","len","asc","isempty","maths","dateserial","atn","timer","isobject","filter","weekday","datevalue","ccur","isdate","instr","datediff","formatdatetime","replace","isnull","right","sgn","array","snumeric","log","cdbl","hex","chr","lbound","msgbox","ucase","getlocale","cos","cdate","cbyte","rtrim","join","hour","oct","typename","trim","strcomp","int","createobject","loadpicture","tan","formatnumber","mid","split","cint","sin","datepart","ltrim","sqr","time","derived","eval","date","formatpercent","exp","inputbox","left","ascw","chrw","regexp","cstr","err"],o=["server","response","request","scriptengine","scriptenginebuildversion","scriptengineminorversion","scriptenginemajorversion"],s={begin:n.concat(n.either(...i),"\\s*\\("),relevance:0,keywords:{built_in:i}};return{name:"VBScript",aliases:["vbs"],case_insensitive:!0,keywords:{keyword:["call","class","const","dim","do","loop","erase","execute","executeglobal","exit","for","each","next","function","if","then","else","on","error","option","explicit","new","private","property","let","get","public","randomize","redim","rem","select","case","set","stop","sub","while","wend","with","end","to","elseif","is","or","xor","and","not","class_initialize","class_terminate","default","preserve","in","me","byval","byref","step","resume","goto"],built_in:o,literal:["true","false","null","nothing","empty"]},illegal:"//",contains:[s,e.inherit(e.QUOTE_STRING_MODE,{contains:[{begin:'""'}]}),e.COMMENT(/'/,/$/,{relevance:0}),e.C_NUMBER_MODE]}}return sm=t,sm}var lm,pC;function pIe(){if(pC)return lm;pC=1;function t(e){return{name:"VBScript in HTML",subLanguage:"xml",contains:[{begin:"<%",end:"%>",subLanguage:"vbscript"}]}}return lm=t,lm}var cm,mC;function mIe(){if(mC)return cm;mC=1;function t(e){const 
n=e.regex,i={$pattern:/\$?[\w]+(\$[\w]+)*/,keyword:["accept_on","alias","always","always_comb","always_ff","always_latch","and","assert","assign","assume","automatic","before","begin","bind","bins","binsof","bit","break","buf|0","bufif0","bufif1","byte","case","casex","casez","cell","chandle","checker","class","clocking","cmos","config","const","constraint","context","continue","cover","covergroup","coverpoint","cross","deassign","default","defparam","design","disable","dist","do","edge","else","end","endcase","endchecker","endclass","endclocking","endconfig","endfunction","endgenerate","endgroup","endinterface","endmodule","endpackage","endprimitive","endprogram","endproperty","endspecify","endsequence","endtable","endtask","enum","event","eventually","expect","export","extends","extern","final","first_match","for","force","foreach","forever","fork","forkjoin","function","generate|5","genvar","global","highz0","highz1","if","iff","ifnone","ignore_bins","illegal_bins","implements","implies","import","incdir","include","initial","inout","input","inside","instance","int","integer","interconnect","interface","intersect","join","join_any","join_none","large","let","liblist","library","local","localparam","logic","longint","macromodule","matches","medium","modport","module","nand","negedge","nettype","new","nexttime","nmos","nor","noshowcancelled","not","notif0","notif1","or","output","package","packed","parameter","pmos","posedge","primitive","priority","program","property","protected","pull0","pull1","pulldown","pullup","pulsestyle_ondetect","pulsestyle_onevent","pure","rand","randc","randcase","randsequence","rcmos","real","realtime","ref","reg","reject_on","release","repeat","restrict","return","rnmos","rpmos","rtran","rtranif0","rtranif1","s_always","s_eventually","s_nexttime","s_until","s_until_with","scalared","sequence","shortint","shortreal","showcancelled","signed","small","soft","solve","specify","specparam","static","string","strong","strong0","strong1","struct","super","supply0","supply1","sync_accept_on","sync_reject_on","table","tagged","task","this","throughout","time","timeprecision","timeunit","tran","tranif0","tranif1","tri","tri0","tri1","triand","trior","trireg","type","typedef","union","unique","unique0","unsigned","until","until_with","untyped","use","uwire","var","vectored","virtual","void","wait","wait_order","wand","weak","weak0","weak1","while","wildcard","wire","with","within","wor","xnor","xor"],literal:["null"],built_in:["$finish","$stop","$exit","$fatal","$error","$warning","$info","$realtime","$time","$printtimescale","$bitstoreal","$bitstoshortreal","$itor","$signed","$cast","$bits","$stime","$timeformat","$realtobits","$shortrealtobits","$rtoi","$unsigned","$asserton","$assertkill","$assertpasson","$assertfailon","$assertnonvacuouson","$assertoff","$assertcontrol","$assertpassoff","$assertfailoff","$assertvacuousoff","$isunbounded","$sampled","$fell","$changed","$past_gclk","$fell_gclk","$changed_gclk","$rising_gclk","$steady_gclk","$coverage_control","$coverage_get","$coverage_save","$set_coverage_db_name","$rose","$stable","$past","$rose_gclk","$stable_gclk","$future_gclk","$falling_gclk","$changing_gclk","$display","$coverage_get_max","$coverage_merge","$get_coverage","$load_coverage_db","$typename","$unpacked_dimensions","$left","$low","$increment","$clog2","$ln","$log10","$exp","$sqrt","$pow","$floor","$ceil","$sin","$cos","$tan","$countbits","$onehot","$isunknown","$fatal","$warning","$dimensions","$right","$high","$size","$asin","$acos","$atan","$atan2","
$hypot","$sinh","$cosh","$tanh","$asinh","$acosh","$atanh","$countones","$onehot0","$error","$info","$random","$dist_chi_square","$dist_erlang","$dist_exponential","$dist_normal","$dist_poisson","$dist_t","$dist_uniform","$q_initialize","$q_remove","$q_exam","$async$and$array","$async$nand$array","$async$or$array","$async$nor$array","$sync$and$array","$sync$nand$array","$sync$or$array","$sync$nor$array","$q_add","$q_full","$psprintf","$async$and$plane","$async$nand$plane","$async$or$plane","$async$nor$plane","$sync$and$plane","$sync$nand$plane","$sync$or$plane","$sync$nor$plane","$system","$display","$displayb","$displayh","$displayo","$strobe","$strobeb","$strobeh","$strobeo","$write","$readmemb","$readmemh","$writememh","$value$plusargs","$dumpvars","$dumpon","$dumplimit","$dumpports","$dumpportson","$dumpportslimit","$writeb","$writeh","$writeo","$monitor","$monitorb","$monitorh","$monitoro","$writememb","$dumpfile","$dumpoff","$dumpall","$dumpflush","$dumpportsoff","$dumpportsall","$dumpportsflush","$fclose","$fdisplay","$fdisplayb","$fdisplayh","$fdisplayo","$fstrobe","$fstrobeb","$fstrobeh","$fstrobeo","$swrite","$swriteb","$swriteh","$swriteo","$fscanf","$fread","$fseek","$fflush","$feof","$fopen","$fwrite","$fwriteb","$fwriteh","$fwriteo","$fmonitor","$fmonitorb","$fmonitorh","$fmonitoro","$sformat","$sformatf","$fgetc","$ungetc","$fgets","$sscanf","$rewind","$ftell","$ferror"]},o=["__FILE__","__LINE__"],s=["begin_keywords","celldefine","default_nettype","default_decay_time","default_trireg_strength","define","delay_mode_distributed","delay_mode_path","delay_mode_unit","delay_mode_zero","else","elsif","end_keywords","endcelldefine","endif","ifdef","ifndef","include","line","nounconnected_drive","pragma","resetall","timescale","unconnected_drive","undef","undefineall"];return{name:"Verilog",aliases:["v","sv","svh"],case_insensitive:!1,keywords:i,contains:[e.C_BLOCK_COMMENT_MODE,e.C_LINE_COMMENT_MODE,e.QUOTE_STRING_MODE,{scope:"number",contains:[e.BACKSLASH_ESCAPE],variants:[{begin:/\b((\d+'([bhodBHOD]))[0-9xzXZa-fA-F_]+)/},{begin:/\B(('([bhodBHOD]))[0-9xzXZa-fA-F_]+)/},{begin:/\b[0-9][0-9_]*/,relevance:0}]},{scope:"variable",variants:[{begin:"#\\((?!parameter).+\\)"},{begin:"\\.\\w+",relevance:0}]},{scope:"variable.constant",match:n.concat(/`/,n.either(...o))},{scope:"meta",begin:n.concat(/`/,n.either(...s)),end:/$|\/\/|\/\*/,returnEnd:!0,keywords:s}]}}return cm=t,cm}var um,gC;function gIe(){if(gC)return um;gC=1;function t(e){const 
n="\\d(_|\\d)*",i="[eE][-+]?"+n,o=n+"(\\."+n+")?("+i+")?",s="\\w+",c="\\b("+(n+"#"+s+"(\\."+s+")?#("+i+")?")+"|"+o+")";return{name:"VHDL",case_insensitive:!0,keywords:{keyword:["abs","access","after","alias","all","and","architecture","array","assert","assume","assume_guarantee","attribute","begin","block","body","buffer","bus","case","component","configuration","constant","context","cover","disconnect","downto","default","else","elsif","end","entity","exit","fairness","file","for","force","function","generate","generic","group","guarded","if","impure","in","inertial","inout","is","label","library","linkage","literal","loop","map","mod","nand","new","next","nor","not","null","of","on","open","or","others","out","package","parameter","port","postponed","procedure","process","property","protected","pure","range","record","register","reject","release","rem","report","restrict","restrict_guarantee","return","rol","ror","select","sequence","severity","shared","signal","sla","sll","sra","srl","strong","subtype","then","to","transport","type","unaffected","units","until","use","variable","view","vmode","vprop","vunit","wait","when","while","with","xnor","xor"],built_in:["boolean","bit","character","integer","time","delay_length","natural","positive","string","bit_vector","file_open_kind","file_open_status","std_logic","std_logic_vector","unsigned","signed","boolean_vector","integer_vector","std_ulogic","std_ulogic_vector","unresolved_unsigned","u_unsigned","unresolved_signed","u_signed","real_vector","time_vector"],literal:["false","true","note","warning","error","failure","line","text","side","width"]},illegal:/\{/,contains:[e.C_BLOCK_COMMENT_MODE,e.COMMENT("--","$"),e.QUOTE_STRING_MODE,{className:"number",begin:c,relevance:0},{className:"string",begin:"'(U|X|0|1|Z|W|L|H|-)'",contains:[e.BACKSLASH_ESCAPE]},{className:"symbol",begin:"'[A-Za-z](_?[A-Za-z0-9])*",contains:[e.BACKSLASH_ESCAPE]}]}}return um=t,um}var dm,EC;function EIe(){if(EC)return dm;EC=1;function t(e){return{name:"Vim Script",keywords:{$pattern:/[!#@\w]+/,keyword:"N|0 P|0 X|0 a|0 ab abc abo al am an|0 ar arga argd arge argdo argg argl argu as au aug aun b|0 bN ba bad bd be bel bf bl bm bn bo bp br brea breaka breakd breakl bro bufdo buffers bun bw c|0 cN cNf ca cabc caddb cad caddf cal cat cb cc ccl cd ce cex cf cfir cgetb cgete cg changes chd che checkt cl cla clo cm cmapc cme cn cnew cnf cno cnorea cnoreme co col colo com comc comp con conf cope cp cpf cq cr cs cst cu cuna cunme cw delm deb debugg delc delf dif diffg diffo diffp diffpu diffs diffthis dig di dl dell dj dli do doautoa dp dr ds dsp e|0 ea ec echoe echoh echom echon el elsei em en endfo endf endt endw ene ex exe exi exu f|0 files filet fin fina fini fir fix fo foldc foldd folddoc foldo for fu go gr grepa gu gv ha helpf helpg helpt hi hid his ia iabc if ij il im imapc ime ino inorea inoreme int is isp iu iuna iunme j|0 ju k|0 keepa kee keepj lN lNf l|0 lad laddb laddf la lan lat lb lc lch lcl lcs le lefta let lex lf lfir lgetb lgete lg lgr lgrepa lh ll lla lli lmak lm lmapc lne lnew lnf ln loadk lo loc lockv lol lope lp lpf lr ls lt lu lua luad luaf lv lvimgrepa lw m|0 ma mak map mapc marks mat me menut mes mk mks mksp mkv mkvie mod mz mzf nbc nb nbs new nm nmapc nme nn nnoreme noa no noh norea noreme norm nu nun nunme ol o|0 om omapc ome on ono onoreme opt ou ounme ow p|0 profd prof pro promptr pc ped pe perld po popu pp pre prev ps pt ptN ptf ptj ptl ptn ptp ptr pts pu pw py3 python3 py3d py3f py pyd pyf quita qa rec red redi redr redraws reg res ret retu rew ri 
rightb rub rubyd rubyf rund ru rv sN san sa sal sav sb sbN sba sbf sbl sbm sbn sbp sbr scrip scripte scs se setf setg setl sf sfir sh sim sig sil sl sla sm smap smapc sme sn sni sno snor snoreme sor so spelld spe spelli spellr spellu spellw sp spr sre st sta startg startr star stopi stj sts sun sunm sunme sus sv sw sy synti sync tN tabN tabc tabdo tabe tabf tabfir tabl tabm tabnew tabn tabo tabp tabr tabs tab ta tags tc tcld tclf te tf th tj tl tm tn to tp tr try ts tu u|0 undoj undol una unh unl unlo unm unme uns up ve verb vert vim vimgrepa vi viu vie vm vmapc vme vne vn vnoreme vs vu vunme windo w|0 wN wa wh wi winc winp wn wp wq wqa ws wu wv x|0 xa xmapc xm xme xn xnoreme xu xunme y|0 z|0 ~ Next Print append abbreviate abclear aboveleft all amenu anoremenu args argadd argdelete argedit argglobal arglocal argument ascii autocmd augroup aunmenu buffer bNext ball badd bdelete behave belowright bfirst blast bmodified bnext botright bprevious brewind break breakadd breakdel breaklist browse bunload bwipeout change cNext cNfile cabbrev cabclear caddbuffer caddexpr caddfile call catch cbuffer cclose center cexpr cfile cfirst cgetbuffer cgetexpr cgetfile chdir checkpath checktime clist clast close cmap cmapclear cmenu cnext cnewer cnfile cnoremap cnoreabbrev cnoremenu copy colder colorscheme command comclear compiler continue confirm copen cprevious cpfile cquit crewind cscope cstag cunmap cunabbrev cunmenu cwindow delete delmarks debug debuggreedy delcommand delfunction diffupdate diffget diffoff diffpatch diffput diffsplit digraphs display deletel djump dlist doautocmd doautoall deletep drop dsearch dsplit edit earlier echo echoerr echohl echomsg else elseif emenu endif endfor endfunction endtry endwhile enew execute exit exusage file filetype find finally finish first fixdel fold foldclose folddoopen folddoclosed foldopen function global goto grep grepadd gui gvim hardcopy help helpfind helpgrep helptags highlight hide history insert iabbrev iabclear ijump ilist imap imapclear imenu inoremap inoreabbrev inoremenu intro isearch isplit iunmap iunabbrev iunmenu join jumps keepalt keepmarks keepjumps lNext lNfile list laddexpr laddbuffer laddfile last language later lbuffer lcd lchdir lclose lcscope left leftabove lexpr lfile lfirst lgetbuffer lgetexpr lgetfile lgrep lgrepadd lhelpgrep llast llist lmake lmap lmapclear lnext lnewer lnfile lnoremap loadkeymap loadview lockmarks lockvar lolder lopen lprevious lpfile lrewind ltag lunmap luado luafile lvimgrep lvimgrepadd lwindow move mark make mapclear match menu menutranslate messages mkexrc mksession mkspell mkvimrc mkview mode mzscheme mzfile nbclose nbkey nbsart next nmap nmapclear nmenu nnoremap nnoremenu noautocmd noremap nohlsearch noreabbrev noremenu normal number nunmap nunmenu oldfiles open omap omapclear omenu only onoremap onoremenu options ounmap ounmenu ownsyntax print profdel profile promptfind promptrepl pclose pedit perl perldo pop popup ppop preserve previous psearch ptag ptNext ptfirst ptjump ptlast ptnext ptprevious ptrewind ptselect put pwd py3do py3file python pydo pyfile quit quitall qall read recover redo redir redraw redrawstatus registers resize retab return rewind right rightbelow ruby rubydo rubyfile rundo runtime rviminfo substitute sNext sandbox sargument sall saveas sbuffer sbNext sball sbfirst sblast sbmodified sbnext sbprevious sbrewind scriptnames scriptencoding scscope set setfiletype setglobal setlocal sfind sfirst shell simalt sign silent sleep slast smagic smapclear smenu snext sniff snomagic snoremap snoremenu 
sort source spelldump spellgood spellinfo spellrepall spellundo spellwrong split sprevious srewind stop stag startgreplace startreplace startinsert stopinsert stjump stselect sunhide sunmap sunmenu suspend sview swapname syntax syntime syncbind tNext tabNext tabclose tabedit tabfind tabfirst tablast tabmove tabnext tabonly tabprevious tabrewind tag tcl tcldo tclfile tearoff tfirst throw tjump tlast tmenu tnext topleft tprevious trewind tselect tunmenu undo undojoin undolist unabbreviate unhide unlet unlockvar unmap unmenu unsilent update vglobal version verbose vertical vimgrep vimgrepadd visual viusage view vmap vmapclear vmenu vnew vnoremap vnoremenu vsplit vunmap vunmenu write wNext wall while winsize wincmd winpos wnext wprevious wqall wsverb wundo wviminfo xit xall xmapclear xmap xmenu xnoremap xnoremenu xunmap xunmenu yank",built_in:"synIDtrans atan2 range matcharg did_filetype asin feedkeys xor argv complete_check add getwinposx getqflist getwinposy screencol clearmatches empty extend getcmdpos mzeval garbagecollect setreg ceil sqrt diff_hlID inputsecret get getfperm getpid filewritable shiftwidth max sinh isdirectory synID system inputrestore winline atan visualmode inputlist tabpagewinnr round getregtype mapcheck hasmapto histdel argidx findfile sha256 exists toupper getcmdline taglist string getmatches bufnr strftime winwidth bufexists strtrans tabpagebuflist setcmdpos remote_read printf setloclist getpos getline bufwinnr float2nr len getcmdtype diff_filler luaeval resolve libcallnr foldclosedend reverse filter has_key bufname str2float strlen setline getcharmod setbufvar index searchpos shellescape undofile foldclosed setqflist buflisted strchars str2nr virtcol floor remove undotree remote_expr winheight gettabwinvar reltime cursor tabpagenr finddir localtime acos getloclist search tanh matchend rename gettabvar strdisplaywidth type abs py3eval setwinvar tolower wildmenumode log10 spellsuggest bufloaded synconcealed nextnonblank server2client complete settabwinvar executable input wincol setmatches getftype hlID inputsave searchpair or screenrow line settabvar histadd deepcopy strpart remote_peek and eval getftime submatch screenchar winsaveview matchadd mkdir screenattr getfontname libcall reltimestr getfsize winnr invert pow getbufline byte2line soundfold repeat fnameescape tagfiles sin strwidth spellbadword trunc maparg log lispindent hostname setpos globpath remote_foreground getchar synIDattr fnamemodify cscope_connection stridx winbufnr indent min complete_add nr2char searchpairpos inputdialog values matchlist items hlexists strridx browsedir expand fmod pathshorten line2byte argc count getwinvar glob foldtextresult getreg foreground cosh matchdelete has char2nr simplify histget searchdecl iconv winrestcmd pumvisible writefile foldlevel haslocaldir keys cos matchstr foldtext histnr tan tempname getcwd byteidx getbufvar islocked escape eventhandler remote_send serverlist winrestview synstack pyeval prevnonblank readfile cindent filereadable changenr exp"},illegal:/;/,contains:[e.NUMBER_MODE,{className:"string",begin:"'",end:"'",illegal:"\\n"},{className:"string",begin:/"(\\"|\n\\|[^"\n])*"/},e.COMMENT('"',"$"),{className:"variable",begin:/[bwtglsav]:[\w\d_]+/},{begin:[/\b(?:function|function!)/,/\s+/,e.IDENT_RE],className:{1:"keyword",3:"title"},end:"$",relevance:0,contains:[{className:"params",begin:"\\(",end:"\\)"}]},{className:"symbol",begin:/<[\w-]+>/}]}}return dm=t,dm}var _m,fC;function fIe(){if(fC)return _m;fC=1;function t(e){e.regex;const 
n=e.COMMENT(/\(;/,/;\)/);n.contains.push("self");const i=e.COMMENT(/;;/,/$/),o=["anyfunc","block","br","br_if","br_table","call","call_indirect","data","drop","elem","else","end","export","func","global.get","global.set","local.get","local.set","local.tee","get_global","get_local","global","if","import","local","loop","memory","memory.grow","memory.size","module","mut","nop","offset","param","result","return","select","set_global","set_local","start","table","tee_local","then","type","unreachable"],s={begin:[/(?:func|call|call_indirect)/,/\s+/,/\$[^\s)]+/],className:{1:"keyword",3:"title.function"}},l={className:"variable",begin:/\$[\w_]+/},c={match:/(\((?!;)|\))+/,className:"punctuation",relevance:0},d={className:"number",relevance:0,match:/[+-]?\b(?:\d(?:_?\d)*(?:\.\d(?:_?\d)*)?(?:[eE][+-]?\d(?:_?\d)*)?|0x[\da-fA-F](?:_?[\da-fA-F])*(?:\.[\da-fA-F](?:_?[\da-fA-D])*)?(?:[pP][+-]?\d(?:_?\d)*)?)\b|\binf\b|\bnan(?::0x[\da-fA-F](?:_?[\da-fA-D])*)?\b/},_={match:/(i32|i64|f32|f64)(?!\.)/,className:"type"},p={className:"keyword",match:/\b(f32|f64|i32|i64)(?:\.(?:abs|add|and|ceil|clz|const|convert_[su]\/i(?:32|64)|copysign|ctz|demote\/f64|div(?:_[su])?|eqz?|extend_[su]\/i32|floor|ge(?:_[su])?|gt(?:_[su])?|le(?:_[su])?|load(?:(?:8|16|32)_[su])?|lt(?:_[su])?|max|min|mul|nearest|neg?|or|popcnt|promote\/f32|reinterpret\/[fi](?:32|64)|rem_[su]|rot[lr]|shl|shr_[su]|store(?:8|16|32)?|sqrt|sub|trunc(?:_[su]\/f(?:32|64))?|wrap\/i64|xor))\b/};return{name:"WebAssembly",keywords:{$pattern:/[\w.]+/,keyword:o},contains:[i,n,{match:[/(?:offset|align)/,/\s*/,/=/],className:{1:"keyword",3:"operator"}},l,c,s,e.QUOTE_STRING_MODE,_,p,d]}}return _m=t,_m}var pm,SC;function SIe(){if(SC)return pm;SC=1;function t(e){const n=e.regex,i=/[a-zA-Z]\w*/,o=["as","break","class","construct","continue","else","for","foreign","if","import","in","is","return","static","var","while"],s=["true","false","null"],l=["this","super"],c=["Bool","Class","Fiber","Fn","List","Map","Null","Num","Object","Range","Sequence","String","System"],d=["-","~",/\*/,"%",/\.\.\./,/\.\./,/\+/,"<<",">>",">=","<=","<",">",/\^/,/!=/,/!/,/\bis\b/,"==","&&","&",/\|\|/,/\|/,/\?:/,"="],_={relevance:0,match:n.concat(/\b(?!(if|while|for|else|super)\b)/,i,/(?=\s*[({])/),className:"title.function"},p={match:n.concat(n.either(n.concat(/\b(?!(if|while|for|else|super)\b)/,i),n.either(...d)),/(?=\s*\([^)]+\)\s*\{)/),className:"title.function",starts:{contains:[{begin:/\(/,end:/\)/,contains:[{relevance:0,scope:"params",match:i}]}]}},g={variants:[{match:[/class\s+/,i,/\s+is\s+/,i]},{match:[/class\s+/,i]}],scope:{2:"title.class",4:"title.class.inherited"},keywords:o},E={relevance:0,match:n.either(...d),className:"operator"},f={className:"string",begin:/"""/,end:/"""/},S={className:"property",begin:n.concat(/\./,n.lookahead(i)),end:i,excludeBegin:!0,relevance:0},C={relevance:0,match:n.concat(/\b_/,i),scope:"variable"},h={relevance:0,match:/\b[A-Z]+[a-z]+([A-Z]+[a-z]+)*/,scope:"title.class",keywords:{_:c}},T=e.C_NUMBER_MODE,N={match:[i,/\s*/,/=/,/\s*/,/\(/,i,/\)\s*\{/],scope:{1:"title.function",3:"operator",6:"params"}},y=e.COMMENT(/\/\*\*/,/\*\//,{contains:[{match:/@[a-z]+/,scope:"doctag"},"self"]}),x={scope:"subst",begin:/%\(/,end:/\)/,contains:[T,h,_,C,E]},P={scope:"string",begin:/"/,end:/"/,contains:[x,{scope:"char.escape",variants:[{match:/\\\\|\\["0%abefnrtv]/},{match:/\\x[0-9A-F]{2}/},{match:/\\u[0-9A-F]{4}/},{match:/\\U[0-9A-F]{8}/}]}]};x.contains.push(P);const 
D=[...o,...l,...s],k={relevance:0,match:n.concat("\\b(?!",D.join("|"),"\\b)",/[a-zA-Z_]\w*(?:[?!]|\b)/),className:"variable"};return{name:"Wren",keywords:{keyword:o,"variable.language":l,literal:s},contains:[{scope:"comment",variants:[{begin:[/#!?/,/[A-Za-z_]+(?=\()/],beginScope:{},keywords:{literal:s},contains:[],end:/\)/},{begin:[/#!?/,/[A-Za-z_]+/],beginScope:{},end:/$/}]},T,P,f,y,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,h,g,N,p,_,E,C,S,k]}}return pm=t,pm}var mm,bC;function bIe(){if(bC)return mm;bC=1;function t(e){return{name:"Intel x86 Assembly",case_insensitive:!0,keywords:{$pattern:"[.%]?"+e.IDENT_RE,keyword:"lock rep repe repz repne repnz xaquire xrelease bnd nobnd aaa aad aam aas adc add and arpl bb0_reset bb1_reset bound bsf bsr bswap bt btc btr bts call cbw cdq cdqe clc cld cli clts cmc cmp cmpsb cmpsd cmpsq cmpsw cmpxchg cmpxchg486 cmpxchg8b cmpxchg16b cpuid cpu_read cpu_write cqo cwd cwde daa das dec div dmint emms enter equ f2xm1 fabs fadd faddp fbld fbstp fchs fclex fcmovb fcmovbe fcmove fcmovnb fcmovnbe fcmovne fcmovnu fcmovu fcom fcomi fcomip fcomp fcompp fcos fdecstp fdisi fdiv fdivp fdivr fdivrp femms feni ffree ffreep fiadd ficom ficomp fidiv fidivr fild fimul fincstp finit fist fistp fisttp fisub fisubr fld fld1 fldcw fldenv fldl2e fldl2t fldlg2 fldln2 fldpi fldz fmul fmulp fnclex fndisi fneni fninit fnop fnsave fnstcw fnstenv fnstsw fpatan fprem fprem1 fptan frndint frstor fsave fscale fsetpm fsin fsincos fsqrt fst fstcw fstenv fstp fstsw fsub fsubp fsubr fsubrp ftst fucom fucomi fucomip fucomp fucompp fxam fxch fxtract fyl2x fyl2xp1 hlt ibts icebp idiv imul in inc incbin insb insd insw int int01 int1 int03 int3 into invd invpcid invlpg invlpga iret iretd iretq iretw jcxz jecxz jrcxz jmp jmpe lahf lar lds lea leave les lfence lfs lgdt lgs lidt lldt lmsw loadall loadall286 lodsb lodsd lodsq lodsw loop loope loopne loopnz loopz lsl lss ltr mfence monitor mov movd movq movsb movsd movsq movsw movsx movsxd movzx mul mwait neg nop not or out outsb outsd outsw packssdw packsswb packuswb paddb paddd paddsb paddsiw paddsw paddusb paddusw paddw pand pandn pause paveb pavgusb pcmpeqb pcmpeqd pcmpeqw pcmpgtb pcmpgtd pcmpgtw pdistib pf2id pfacc pfadd pfcmpeq pfcmpge pfcmpgt pfmax pfmin pfmul pfrcp pfrcpit1 pfrcpit2 pfrsqit1 pfrsqrt pfsub pfsubr pi2fd pmachriw pmaddwd pmagw pmulhriw pmulhrwa pmulhrwc pmulhw pmullw pmvgezb pmvlzb pmvnzb pmvzb pop popa popad popaw popf popfd popfq popfw por prefetch prefetchw pslld psllq psllw psrad psraw psrld psrlq psrlw psubb psubd psubsb psubsiw psubsw psubusb psubusw psubw punpckhbw punpckhdq punpckhwd punpcklbw punpckldq punpcklwd push pusha pushad pushaw pushf pushfd pushfq pushfw pxor rcl rcr rdshr rdmsr rdpmc rdtsc rdtscp ret retf retn rol ror rdm rsdc rsldt rsm rsts sahf sal salc sar sbb scasb scasd scasq scasw sfence sgdt shl shld shr shrd sidt sldt skinit smi smint smintold smsw stc std sti stosb stosd stosq stosw str sub svdc svldt svts swapgs syscall sysenter sysexit sysret test ud0 ud1 ud2b ud2 ud2a umov verr verw fwait wbinvd wrshr wrmsr xadd xbts xchg xlatb xlat xor cmove cmovz cmovne cmovnz cmova cmovnbe cmovae cmovnb cmovb cmovnae cmovbe cmovna cmovg cmovnle cmovge cmovnl cmovl cmovnge cmovle cmovng cmovc cmovnc cmovo cmovno cmovs cmovns cmovp cmovpe cmovnp cmovpo je jz jne jnz ja jnbe jae jnb jb jnae jbe jna jg jnle jge jnl jl jnge jle jng jc jnc jo jno js jns jpo jnp jpe jp sete setz setne setnz seta setnbe setae setnb setnc setb setnae setcset setbe setna setg setnle setge setnl setl setnge setle setng sets setns seto setno 
setpe setp setpo setnp addps addss andnps andps cmpeqps cmpeqss cmpleps cmpless cmpltps cmpltss cmpneqps cmpneqss cmpnleps cmpnless cmpnltps cmpnltss cmpordps cmpordss cmpunordps cmpunordss cmpps cmpss comiss cvtpi2ps cvtps2pi cvtsi2ss cvtss2si cvttps2pi cvttss2si divps divss ldmxcsr maxps maxss minps minss movaps movhps movlhps movlps movhlps movmskps movntps movss movups mulps mulss orps rcpps rcpss rsqrtps rsqrtss shufps sqrtps sqrtss stmxcsr subps subss ucomiss unpckhps unpcklps xorps fxrstor fxrstor64 fxsave fxsave64 xgetbv xsetbv xsave xsave64 xsaveopt xsaveopt64 xrstor xrstor64 prefetchnta prefetcht0 prefetcht1 prefetcht2 maskmovq movntq pavgb pavgw pextrw pinsrw pmaxsw pmaxub pminsw pminub pmovmskb pmulhuw psadbw pshufw pf2iw pfnacc pfpnacc pi2fw pswapd maskmovdqu clflush movntdq movnti movntpd movdqa movdqu movdq2q movq2dq paddq pmuludq pshufd pshufhw pshuflw pslldq psrldq psubq punpckhqdq punpcklqdq addpd addsd andnpd andpd cmpeqpd cmpeqsd cmplepd cmplesd cmpltpd cmpltsd cmpneqpd cmpneqsd cmpnlepd cmpnlesd cmpnltpd cmpnltsd cmpordpd cmpordsd cmpunordpd cmpunordsd cmppd comisd cvtdq2pd cvtdq2ps cvtpd2dq cvtpd2pi cvtpd2ps cvtpi2pd cvtps2dq cvtps2pd cvtsd2si cvtsd2ss cvtsi2sd cvtss2sd cvttpd2pi cvttpd2dq cvttps2dq cvttsd2si divpd divsd maxpd maxsd minpd minsd movapd movhpd movlpd movmskpd movupd mulpd mulsd orpd shufpd sqrtpd sqrtsd subpd subsd ucomisd unpckhpd unpcklpd xorpd addsubpd addsubps haddpd haddps hsubpd hsubps lddqu movddup movshdup movsldup clgi stgi vmcall vmclear vmfunc vmlaunch vmload vmmcall vmptrld vmptrst vmread vmresume vmrun vmsave vmwrite vmxoff vmxon invept invvpid pabsb pabsw pabsd palignr phaddw phaddd phaddsw phsubw phsubd phsubsw pmaddubsw pmulhrsw pshufb psignb psignw psignd extrq insertq movntsd movntss lzcnt blendpd blendps blendvpd blendvps dppd dpps extractps insertps movntdqa mpsadbw packusdw pblendvb pblendw pcmpeqq pextrb pextrd pextrq phminposuw pinsrb pinsrd pinsrq pmaxsb pmaxsd pmaxud pmaxuw pminsb pminsd pminud pminuw pmovsxbw pmovsxbd pmovsxbq pmovsxwd pmovsxwq pmovsxdq pmovzxbw pmovzxbd pmovzxbq pmovzxwd pmovzxwq pmovzxdq pmuldq pmulld ptest roundpd roundps roundsd roundss crc32 pcmpestri pcmpestrm pcmpistri pcmpistrm pcmpgtq popcnt getsec pfrcpv pfrsqrtv movbe aesenc aesenclast aesdec aesdeclast aesimc aeskeygenassist vaesenc vaesenclast vaesdec vaesdeclast vaesimc vaeskeygenassist vaddpd vaddps vaddsd vaddss vaddsubpd vaddsubps vandpd vandps vandnpd vandnps vblendpd vblendps vblendvpd vblendvps vbroadcastss vbroadcastsd vbroadcastf128 vcmpeq_ospd vcmpeqpd vcmplt_ospd vcmpltpd vcmple_ospd vcmplepd vcmpunord_qpd vcmpunordpd vcmpneq_uqpd vcmpneqpd vcmpnlt_uspd vcmpnltpd vcmpnle_uspd vcmpnlepd vcmpord_qpd vcmpordpd vcmpeq_uqpd vcmpnge_uspd vcmpngepd vcmpngt_uspd vcmpngtpd vcmpfalse_oqpd vcmpfalsepd vcmpneq_oqpd vcmpge_ospd vcmpgepd vcmpgt_ospd vcmpgtpd vcmptrue_uqpd vcmptruepd vcmplt_oqpd vcmple_oqpd vcmpunord_spd vcmpneq_uspd vcmpnlt_uqpd vcmpnle_uqpd vcmpord_spd vcmpeq_uspd vcmpnge_uqpd vcmpngt_uqpd vcmpfalse_ospd vcmpneq_ospd vcmpge_oqpd vcmpgt_oqpd vcmptrue_uspd vcmppd vcmpeq_osps vcmpeqps vcmplt_osps vcmpltps vcmple_osps vcmpleps vcmpunord_qps vcmpunordps vcmpneq_uqps vcmpneqps vcmpnlt_usps vcmpnltps vcmpnle_usps vcmpnleps vcmpord_qps vcmpordps vcmpeq_uqps vcmpnge_usps vcmpngeps vcmpngt_usps vcmpngtps vcmpfalse_oqps vcmpfalseps vcmpneq_oqps vcmpge_osps vcmpgeps vcmpgt_osps vcmpgtps vcmptrue_uqps vcmptrueps vcmplt_oqps vcmple_oqps vcmpunord_sps vcmpneq_usps vcmpnlt_uqps vcmpnle_uqps vcmpord_sps vcmpeq_usps vcmpnge_uqps vcmpngt_uqps 
vcmpfalse_osps vcmpneq_osps vcmpge_oqps vcmpgt_oqps vcmptrue_usps vcmpps vcmpeq_ossd vcmpeqsd vcmplt_ossd vcmpltsd vcmple_ossd vcmplesd vcmpunord_qsd vcmpunordsd vcmpneq_uqsd vcmpneqsd vcmpnlt_ussd vcmpnltsd vcmpnle_ussd vcmpnlesd vcmpord_qsd vcmpordsd vcmpeq_uqsd vcmpnge_ussd vcmpngesd vcmpngt_ussd vcmpngtsd vcmpfalse_oqsd vcmpfalsesd vcmpneq_oqsd vcmpge_ossd vcmpgesd vcmpgt_ossd vcmpgtsd vcmptrue_uqsd vcmptruesd vcmplt_oqsd vcmple_oqsd vcmpunord_ssd vcmpneq_ussd vcmpnlt_uqsd vcmpnle_uqsd vcmpord_ssd vcmpeq_ussd vcmpnge_uqsd vcmpngt_uqsd vcmpfalse_ossd vcmpneq_ossd vcmpge_oqsd vcmpgt_oqsd vcmptrue_ussd vcmpsd vcmpeq_osss vcmpeqss vcmplt_osss vcmpltss vcmple_osss vcmpless vcmpunord_qss vcmpunordss vcmpneq_uqss vcmpneqss vcmpnlt_usss vcmpnltss vcmpnle_usss vcmpnless vcmpord_qss vcmpordss vcmpeq_uqss vcmpnge_usss vcmpngess vcmpngt_usss vcmpngtss vcmpfalse_oqss vcmpfalsess vcmpneq_oqss vcmpge_osss vcmpgess vcmpgt_osss vcmpgtss vcmptrue_uqss vcmptruess vcmplt_oqss vcmple_oqss vcmpunord_sss vcmpneq_usss vcmpnlt_uqss vcmpnle_uqss vcmpord_sss vcmpeq_usss vcmpnge_uqss vcmpngt_uqss vcmpfalse_osss vcmpneq_osss vcmpge_oqss vcmpgt_oqss vcmptrue_usss vcmpss vcomisd vcomiss vcvtdq2pd vcvtdq2ps vcvtpd2dq vcvtpd2ps vcvtps2dq vcvtps2pd vcvtsd2si vcvtsd2ss vcvtsi2sd vcvtsi2ss vcvtss2sd vcvtss2si vcvttpd2dq vcvttps2dq vcvttsd2si vcvttss2si vdivpd vdivps vdivsd vdivss vdppd vdpps vextractf128 vextractps vhaddpd vhaddps vhsubpd vhsubps vinsertf128 vinsertps vlddqu vldqqu vldmxcsr vmaskmovdqu vmaskmovps vmaskmovpd vmaxpd vmaxps vmaxsd vmaxss vminpd vminps vminsd vminss vmovapd vmovaps vmovd vmovq vmovddup vmovdqa vmovqqa vmovdqu vmovqqu vmovhlps vmovhpd vmovhps vmovlhps vmovlpd vmovlps vmovmskpd vmovmskps vmovntdq vmovntqq vmovntdqa vmovntpd vmovntps vmovsd vmovshdup vmovsldup vmovss vmovupd vmovups vmpsadbw vmulpd vmulps vmulsd vmulss vorpd vorps vpabsb vpabsw vpabsd vpacksswb vpackssdw vpackuswb vpackusdw vpaddb vpaddw vpaddd vpaddq vpaddsb vpaddsw vpaddusb vpaddusw vpalignr vpand vpandn vpavgb vpavgw vpblendvb vpblendw vpcmpestri vpcmpestrm vpcmpistri vpcmpistrm vpcmpeqb vpcmpeqw vpcmpeqd vpcmpeqq vpcmpgtb vpcmpgtw vpcmpgtd vpcmpgtq vpermilpd vpermilps vperm2f128 vpextrb vpextrw vpextrd vpextrq vphaddw vphaddd vphaddsw vphminposuw vphsubw vphsubd vphsubsw vpinsrb vpinsrw vpinsrd vpinsrq vpmaddwd vpmaddubsw vpmaxsb vpmaxsw vpmaxsd vpmaxub vpmaxuw vpmaxud vpminsb vpminsw vpminsd vpminub vpminuw vpminud vpmovmskb vpmovsxbw vpmovsxbd vpmovsxbq vpmovsxwd vpmovsxwq vpmovsxdq vpmovzxbw vpmovzxbd vpmovzxbq vpmovzxwd vpmovzxwq vpmovzxdq vpmulhuw vpmulhrsw vpmulhw vpmullw vpmulld vpmuludq vpmuldq vpor vpsadbw vpshufb vpshufd vpshufhw vpshuflw vpsignb vpsignw vpsignd vpslldq vpsrldq vpsllw vpslld vpsllq vpsraw vpsrad vpsrlw vpsrld vpsrlq vptest vpsubb vpsubw vpsubd vpsubq vpsubsb vpsubsw vpsubusb vpsubusw vpunpckhbw vpunpckhwd vpunpckhdq vpunpckhqdq vpunpcklbw vpunpcklwd vpunpckldq vpunpcklqdq vpxor vrcpps vrcpss vrsqrtps vrsqrtss vroundpd vroundps vroundsd vroundss vshufpd vshufps vsqrtpd vsqrtps vsqrtsd vsqrtss vstmxcsr vsubpd vsubps vsubsd vsubss vtestps vtestpd vucomisd vucomiss vunpckhpd vunpckhps vunpcklpd vunpcklps vxorpd vxorps vzeroall vzeroupper pclmullqlqdq pclmulhqlqdq pclmullqhqdq pclmulhqhqdq pclmulqdq vpclmullqlqdq vpclmulhqlqdq vpclmullqhqdq vpclmulhqhqdq vpclmulqdq vfmadd132ps vfmadd132pd vfmadd312ps vfmadd312pd vfmadd213ps vfmadd213pd vfmadd123ps vfmadd123pd vfmadd231ps vfmadd231pd vfmadd321ps vfmadd321pd vfmaddsub132ps vfmaddsub132pd vfmaddsub312ps vfmaddsub312pd vfmaddsub213ps vfmaddsub213pd 
vfmaddsub123ps vfmaddsub123pd vfmaddsub231ps vfmaddsub231pd vfmaddsub321ps vfmaddsub321pd vfmsub132ps vfmsub132pd vfmsub312ps vfmsub312pd vfmsub213ps vfmsub213pd vfmsub123ps vfmsub123pd vfmsub231ps vfmsub231pd vfmsub321ps vfmsub321pd vfmsubadd132ps vfmsubadd132pd vfmsubadd312ps vfmsubadd312pd vfmsubadd213ps vfmsubadd213pd vfmsubadd123ps vfmsubadd123pd vfmsubadd231ps vfmsubadd231pd vfmsubadd321ps vfmsubadd321pd vfnmadd132ps vfnmadd132pd vfnmadd312ps vfnmadd312pd vfnmadd213ps vfnmadd213pd vfnmadd123ps vfnmadd123pd vfnmadd231ps vfnmadd231pd vfnmadd321ps vfnmadd321pd vfnmsub132ps vfnmsub132pd vfnmsub312ps vfnmsub312pd vfnmsub213ps vfnmsub213pd vfnmsub123ps vfnmsub123pd vfnmsub231ps vfnmsub231pd vfnmsub321ps vfnmsub321pd vfmadd132ss vfmadd132sd vfmadd312ss vfmadd312sd vfmadd213ss vfmadd213sd vfmadd123ss vfmadd123sd vfmadd231ss vfmadd231sd vfmadd321ss vfmadd321sd vfmsub132ss vfmsub132sd vfmsub312ss vfmsub312sd vfmsub213ss vfmsub213sd vfmsub123ss vfmsub123sd vfmsub231ss vfmsub231sd vfmsub321ss vfmsub321sd vfnmadd132ss vfnmadd132sd vfnmadd312ss vfnmadd312sd vfnmadd213ss vfnmadd213sd vfnmadd123ss vfnmadd123sd vfnmadd231ss vfnmadd231sd vfnmadd321ss vfnmadd321sd vfnmsub132ss vfnmsub132sd vfnmsub312ss vfnmsub312sd vfnmsub213ss vfnmsub213sd vfnmsub123ss vfnmsub123sd vfnmsub231ss vfnmsub231sd vfnmsub321ss vfnmsub321sd rdfsbase rdgsbase rdrand wrfsbase wrgsbase vcvtph2ps vcvtps2ph adcx adox rdseed clac stac xstore xcryptecb xcryptcbc xcryptctr xcryptcfb xcryptofb montmul xsha1 xsha256 llwpcb slwpcb lwpval lwpins vfmaddpd vfmaddps vfmaddsd vfmaddss vfmaddsubpd vfmaddsubps vfmsubaddpd vfmsubaddps vfmsubpd vfmsubps vfmsubsd vfmsubss vfnmaddpd vfnmaddps vfnmaddsd vfnmaddss vfnmsubpd vfnmsubps vfnmsubsd vfnmsubss vfrczpd vfrczps vfrczsd vfrczss vpcmov vpcomb vpcomd vpcomq vpcomub vpcomud vpcomuq vpcomuw vpcomw vphaddbd vphaddbq vphaddbw vphadddq vphaddubd vphaddubq vphaddubw vphaddudq vphadduwd vphadduwq vphaddwd vphaddwq vphsubbw vphsubdq vphsubwd vpmacsdd vpmacsdqh vpmacsdql vpmacssdd vpmacssdqh vpmacssdql vpmacsswd vpmacssww vpmacswd vpmacsww vpmadcsswd vpmadcswd vpperm vprotb vprotd vprotq vprotw vpshab vpshad vpshaq vpshaw vpshlb vpshld vpshlq vpshlw vbroadcasti128 vpblendd vpbroadcastb vpbroadcastw vpbroadcastd vpbroadcastq vpermd vpermpd vpermps vpermq vperm2i128 vextracti128 vinserti128 vpmaskmovd vpmaskmovq vpsllvd vpsllvq vpsravd vpsrlvd vpsrlvq vgatherdpd vgatherqpd vgatherdps vgatherqps vpgatherdd vpgatherqd vpgatherdq vpgatherqq xabort xbegin xend xtest andn bextr blci blcic blsi blsic blcfill blsfill blcmsk blsmsk blsr blcs bzhi mulx pdep pext rorx sarx shlx shrx tzcnt tzmsk t1mskc valignd valignq vblendmpd vblendmps vbroadcastf32x4 vbroadcastf64x4 vbroadcasti32x4 vbroadcasti64x4 vcompresspd vcompressps vcvtpd2udq vcvtps2udq vcvtsd2usi vcvtss2usi vcvttpd2udq vcvttps2udq vcvttsd2usi vcvttss2usi vcvtudq2pd vcvtudq2ps vcvtusi2sd vcvtusi2ss vexpandpd vexpandps vextractf32x4 vextractf64x4 vextracti32x4 vextracti64x4 vfixupimmpd vfixupimmps vfixupimmsd vfixupimmss vgetexppd vgetexpps vgetexpsd vgetexpss vgetmantpd vgetmantps vgetmantsd vgetmantss vinsertf32x4 vinsertf64x4 vinserti32x4 vinserti64x4 vmovdqa32 vmovdqa64 vmovdqu32 vmovdqu64 vpabsq vpandd vpandnd vpandnq vpandq vpblendmd vpblendmq vpcmpltd vpcmpled vpcmpneqd vpcmpnltd vpcmpnled vpcmpd vpcmpltq vpcmpleq vpcmpneqq vpcmpnltq vpcmpnleq vpcmpq vpcmpequd vpcmpltud vpcmpleud vpcmpnequd vpcmpnltud vpcmpnleud vpcmpud vpcmpequq vpcmpltuq vpcmpleuq vpcmpnequq vpcmpnltuq vpcmpnleuq vpcmpuq vpcompressd vpcompressq vpermi2d vpermi2pd vpermi2ps vpermi2q 
vpermt2d vpermt2pd vpermt2ps vpermt2q vpexpandd vpexpandq vpmaxsq vpmaxuq vpminsq vpminuq vpmovdb vpmovdw vpmovqb vpmovqd vpmovqw vpmovsdb vpmovsdw vpmovsqb vpmovsqd vpmovsqw vpmovusdb vpmovusdw vpmovusqb vpmovusqd vpmovusqw vpord vporq vprold vprolq vprolvd vprolvq vprord vprorq vprorvd vprorvq vpscatterdd vpscatterdq vpscatterqd vpscatterqq vpsraq vpsravq vpternlogd vpternlogq vptestmd vptestmq vptestnmd vptestnmq vpxord vpxorq vrcp14pd vrcp14ps vrcp14sd vrcp14ss vrndscalepd vrndscaleps vrndscalesd vrndscaless vrsqrt14pd vrsqrt14ps vrsqrt14sd vrsqrt14ss vscalefpd vscalefps vscalefsd vscalefss vscatterdpd vscatterdps vscatterqpd vscatterqps vshuff32x4 vshuff64x2 vshufi32x4 vshufi64x2 kandnw kandw kmovw knotw kortestw korw kshiftlw kshiftrw kunpckbw kxnorw kxorw vpbroadcastmb2q vpbroadcastmw2d vpconflictd vpconflictq vplzcntd vplzcntq vexp2pd vexp2ps vrcp28pd vrcp28ps vrcp28sd vrcp28ss vrsqrt28pd vrsqrt28ps vrsqrt28sd vrsqrt28ss vgatherpf0dpd vgatherpf0dps vgatherpf0qpd vgatherpf0qps vgatherpf1dpd vgatherpf1dps vgatherpf1qpd vgatherpf1qps vscatterpf0dpd vscatterpf0dps vscatterpf0qpd vscatterpf0qps vscatterpf1dpd vscatterpf1dps vscatterpf1qpd vscatterpf1qps prefetchwt1 bndmk bndcl bndcu bndcn bndmov bndldx bndstx sha1rnds4 sha1nexte sha1msg1 sha1msg2 sha256rnds2 sha256msg1 sha256msg2 hint_nop0 hint_nop1 hint_nop2 hint_nop3 hint_nop4 hint_nop5 hint_nop6 hint_nop7 hint_nop8 hint_nop9 hint_nop10 hint_nop11 hint_nop12 hint_nop13 hint_nop14 hint_nop15 hint_nop16 hint_nop17 hint_nop18 hint_nop19 hint_nop20 hint_nop21 hint_nop22 hint_nop23 hint_nop24 hint_nop25 hint_nop26 hint_nop27 hint_nop28 hint_nop29 hint_nop30 hint_nop31 hint_nop32 hint_nop33 hint_nop34 hint_nop35 hint_nop36 hint_nop37 hint_nop38 hint_nop39 hint_nop40 hint_nop41 hint_nop42 hint_nop43 hint_nop44 hint_nop45 hint_nop46 hint_nop47 hint_nop48 hint_nop49 hint_nop50 hint_nop51 hint_nop52 hint_nop53 hint_nop54 hint_nop55 hint_nop56 hint_nop57 hint_nop58 hint_nop59 hint_nop60 hint_nop61 hint_nop62 hint_nop63",built_in:"ip eip rip al ah bl bh cl ch dl dh sil dil bpl spl r8b r9b r10b r11b r12b r13b r14b r15b ax bx cx dx si di bp sp r8w r9w r10w r11w r12w r13w r14w r15w eax ebx ecx edx esi edi ebp esp eip r8d r9d r10d r11d r12d r13d r14d r15d rax rbx rcx rdx rsi rdi rbp rsp r8 r9 r10 r11 r12 r13 r14 r15 cs ds es fs gs ss st st0 st1 st2 st3 st4 st5 st6 st7 mm0 mm1 mm2 mm3 mm4 mm5 mm6 mm7 xmm0 xmm1 xmm2 xmm3 xmm4 xmm5 xmm6 xmm7 xmm8 xmm9 xmm10 xmm11 xmm12 xmm13 xmm14 xmm15 xmm16 xmm17 xmm18 xmm19 xmm20 xmm21 xmm22 xmm23 xmm24 xmm25 xmm26 xmm27 xmm28 xmm29 xmm30 xmm31 ymm0 ymm1 ymm2 ymm3 ymm4 ymm5 ymm6 ymm7 ymm8 ymm9 ymm10 ymm11 ymm12 ymm13 ymm14 ymm15 ymm16 ymm17 ymm18 ymm19 ymm20 ymm21 ymm22 ymm23 ymm24 ymm25 ymm26 ymm27 ymm28 ymm29 ymm30 ymm31 zmm0 zmm1 zmm2 zmm3 zmm4 zmm5 zmm6 zmm7 zmm8 zmm9 zmm10 zmm11 zmm12 zmm13 zmm14 zmm15 zmm16 zmm17 zmm18 zmm19 zmm20 zmm21 zmm22 zmm23 zmm24 zmm25 zmm26 zmm27 zmm28 zmm29 zmm30 zmm31 k0 k1 k2 k3 k4 k5 k6 k7 bnd0 bnd1 bnd2 bnd3 cr0 cr1 cr2 cr3 cr4 cr8 dr0 dr1 dr2 dr3 dr8 tr3 tr4 tr5 tr6 tr7 r0 r1 r2 r3 r4 r5 r6 r7 r0b r1b r2b r3b r4b r5b r6b r7b r0w r1w r2w r3w r4w r5w r6w r7w r0d r1d r2d r3d r4d r5d r6d r7d r0h r1h r2h r3h r0l r1l r2l r3l r4l r5l r6l r7l r8l r9l r10l r11l r12l r13l r14l r15l db dw dd dq dt ddq do dy dz resb resw resd resq rest resdq reso resy resz incbin equ times byte word dword qword nosplit rel abs seg wrt strict near far a32 ptr",meta:"%define %xdefine %+ %undef %defstr %deftok %assign %strcat %strlen %substr %rotate %elif %else %endif %if %ifmacro %ifctx %ifidn %ifidni %ifid 
%ifnum %ifstr %iftoken %ifempty %ifenv %error %warning %fatal %rep %endrep %include %push %pop %repl %pathsearch %depend %use %arg %stacksize %local %line %comment %endcomment .nolist __FILE__ __LINE__ __SECT__ __BITS__ __OUTPUT_FORMAT__ __DATE__ __TIME__ __DATE_NUM__ __TIME_NUM__ __UTC_DATE__ __UTC_TIME__ __UTC_DATE_NUM__ __UTC_TIME_NUM__ __PASS__ struc endstruc istruc at iend align alignb sectalign daz nodaz up down zero default option assume public bits use16 use32 use64 default section segment absolute extern global common cpu float __utf16__ __utf16le__ __utf16be__ __utf32__ __utf32le__ __utf32be__ __float8__ __float16__ __float32__ __float64__ __float80m__ __float80e__ __float128l__ __float128h__ __Infinity__ __QNaN__ __SNaN__ Inf NaN QNaN SNaN float8 float16 float32 float64 float80m float80e float128l float128h __FLOAT_DAZ__ __FLOAT_ROUND__ __FLOAT__"},contains:[e.COMMENT(";","$",{relevance:0}),{className:"number",variants:[{begin:"\\b(?:([0-9][0-9_]*)?\\.[0-9_]*(?:[eE][+-]?[0-9_]+)?|(0[Xx])?[0-9][0-9_]*(\\.[0-9_]*)?(?:[pP](?:[+-]?[0-9_]+)?)?)\\b",relevance:0},{begin:"\\$[0-9][0-9A-Fa-f]*",relevance:0},{begin:"\\b(?:[0-9A-Fa-f][0-9A-Fa-f_]*[Hh]|[0-9][0-9_]*[DdTt]?|[0-7][0-7_]*[QqOo]|[0-1][0-1_]*[BbYy])\\b"},{begin:"\\b(?:0[Xx][0-9A-Fa-f_]+|0[DdTt][0-9_]+|0[QqOo][0-7_]+|0[BbYy][0-1_]+)\\b"}]},e.QUOTE_STRING_MODE,{className:"string",variants:[{begin:"'",end:"[^\\\\]'"},{begin:"`",end:"[^\\\\]`"}],relevance:0},{className:"symbol",variants:[{begin:"^\\s*[A-Za-z._?][A-Za-z0-9_$#@~.?]*(:|\\s+label)"},{begin:"^\\s*%%[A-Za-z0-9_$#@~.?]*:"}],relevance:0},{className:"subst",begin:"%[0-9]+",relevance:0},{className:"subst",begin:"%!S+",relevance:0},{className:"meta",begin:/^\s*\.[\w_-]+/}]}}return mm=t,mm}var gm,hC;function hIe(){if(hC)return gm;hC=1;function t(e){const 
n=["if","then","else","do","while","until","for","loop","import","with","is","as","where","when","by","data","constant","integer","real","text","name","boolean","symbol","infix","prefix","postfix","block","tree"],i=["in","mod","rem","and","or","xor","not","abs","sign","floor","ceil","sqrt","sin","cos","tan","asin","acos","atan","exp","expm1","log","log2","log10","log1p","pi","at","text_length","text_range","text_find","text_replace","contains","page","slide","basic_slide","title_slide","title","subtitle","fade_in","fade_out","fade_at","clear_color","color","line_color","line_width","texture_wrap","texture_transform","texture","scale_?x","scale_?y","scale_?z?","translate_?x","translate_?y","translate_?z?","rotate_?x","rotate_?y","rotate_?z?","rectangle","circle","ellipse","sphere","path","line_to","move_to","quad_to","curve_to","theme","background","contents","locally","time","mouse_?x","mouse_?y","mouse_buttons"],o=["ObjectLoader","Animate","MovieCredits","Slides","Filters","Shading","Materials","LensFlare","Mapping","VLCAudioVideo","StereoDecoder","PointCloud","NetworkAccess","RemoteControl","RegExp","ChromaKey","Snowfall","NodeJS","Speech","Charts"],l={$pattern:/[a-zA-Z][a-zA-Z0-9_?]*/,keyword:n,literal:["true","false","nil"],built_in:i.concat(o)},c={className:"string",begin:'"',end:'"',illegal:"\\n"},d={className:"string",begin:"'",end:"'",illegal:"\\n"},_={className:"string",begin:"<<",end:">>"},p={className:"number",begin:"[0-9]+#[0-9A-Z_]+(\\.[0-9-A-Z_]+)?#?([Ee][+-]?[0-9]+)?"},g={beginKeywords:"import",end:"$",keywords:l,contains:[c]},E={className:"function",begin:/[a-z][^\n]*->/,returnBegin:!0,end:/->/,contains:[e.inherit(e.TITLE_MODE,{starts:{endsWithParent:!0,keywords:l}})]};return{name:"XL",aliases:["tao"],keywords:l,contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,c,d,_,E,g,p,e.NUMBER_MODE]}}return gm=t,gm}var Em,TC;function TIe(){if(TC)return Em;TC=1;function 
t(e){return{name:"XQuery",aliases:["xpath","xq"],case_insensitive:!1,illegal:/(proc)|(abstract)|(extends)|(until)|(#)/,keywords:{$pattern:/[a-zA-Z$][a-zA-Z0-9_:-]*/,keyword:["module","schema","namespace","boundary-space","preserve","no-preserve","strip","default","collation","base-uri","ordering","context","decimal-format","decimal-separator","copy-namespaces","empty-sequence","except","exponent-separator","external","grouping-separator","inherit","no-inherit","lax","minus-sign","per-mille","percent","schema-attribute","schema-element","strict","unordered","zero-digit","declare","import","option","function","validate","variable","for","at","in","let","where","order","group","by","return","if","then","else","tumbling","sliding","window","start","when","only","end","previous","next","stable","ascending","descending","allowing","empty","greatest","least","some","every","satisfies","switch","case","typeswitch","try","catch","and","or","to","union","intersect","instance","of","treat","as","castable","cast","map","array","delete","insert","into","replace","value","rename","copy","modify","update"],type:["item","document-node","node","attribute","document","element","comment","namespace","namespace-node","processing-instruction","text","construction","xs:anyAtomicType","xs:untypedAtomic","xs:duration","xs:time","xs:decimal","xs:float","xs:double","xs:gYearMonth","xs:gYear","xs:gMonthDay","xs:gMonth","xs:gDay","xs:boolean","xs:base64Binary","xs:hexBinary","xs:anyURI","xs:QName","xs:NOTATION","xs:dateTime","xs:dateTimeStamp","xs:date","xs:string","xs:normalizedString","xs:token","xs:language","xs:NMTOKEN","xs:Name","xs:NCName","xs:ID","xs:IDREF","xs:ENTITY","xs:integer","xs:nonPositiveInteger","xs:negativeInteger","xs:long","xs:int","xs:short","xs:byte","xs:nonNegativeInteger","xs:unisignedLong","xs:unsignedInt","xs:unsignedShort","xs:unsignedByte","xs:positiveInteger","xs:yearMonthDuration","xs:dayTimeDuration"],literal:["eq","ne","lt","le","gt","ge","is","self::","child::","descendant::","descendant-or-self::","attribute::","following::","following-sibling::","parent::","ancestor::","ancestor-or-self::","preceding::","preceding-sibling::","NaN"]},contains:[{className:"variable",begin:/[$][\w\-:]+/},{className:"built_in",variants:[{begin:/\barray:/,end:/(?:append|filter|flatten|fold-(?:left|right)|for-each(?:-pair)?|get|head|insert-before|join|put|remove|reverse|size|sort|subarray|tail)\b/},{begin:/\bmap:/,end:/(?:contains|entry|find|for-each|get|keys|merge|put|remove|size)\b/},{begin:/\bmath:/,end:/(?:a(?:cos|sin|tan[2]?)|cos|exp(?:10)?|log(?:10)?|pi|pow|sin|sqrt|tan)\b/},{begin:/\bop:/,end:/\(/,excludeEnd:!0},{begin:/\bfn:/,end:/\(/,excludeEnd:!0},{begin:/[^/,end:/(\/[\w._:-]+>)/,subLanguage:"xml",contains:[{begin:/\{/,end:/\}/,subLanguage:"xquery"},"self"]}]}}return Em=t,Em}var fm,vC;function vIe(){if(vC)return fm;vC=1;function t(e){const n={className:"string",contains:[e.BACKSLASH_ESCAPE],variants:[e.inherit(e.APOS_STRING_MODE,{illegal:null}),e.inherit(e.QUOTE_STRING_MODE,{illegal:null})]},i=e.UNDERSCORE_TITLE_MODE,o={variants:[e.BINARY_NUMBER_MODE,e.C_NUMBER_MODE]},s="namespace class interface use extends function return abstract final public protected private static deprecated throw try catch Exception echo empty isset instanceof unset let var new const self require if else elseif switch case default do while loop for continue break likely unlikely __LINE__ __FILE__ __DIR__ __FUNCTION__ __CLASS__ __TRAIT__ __METHOD__ __NAMESPACE__ array boolean float double integer object resource string 
char long unsigned bool int uint ulong uchar true false null undefined";return{name:"Zephir",aliases:["zep"],keywords:s,contains:[e.C_LINE_COMMENT_MODE,e.COMMENT(/\/\*/,/\*\//,{contains:[{className:"doctag",begin:/@[A-Za-z]+/}]}),{className:"string",begin:/<<<['"]?\w+['"]?$/,end:/^\w+;/,contains:[e.BACKSLASH_ESCAPE]},{begin:/(::|->)+[a-zA-Z_\x7f-\xff][a-zA-Z0-9_\x7f-\xff]*/},{className:"function",beginKeywords:"function fn",end:/[;{]/,excludeEnd:!0,illegal:/\$|\[|%/,contains:[i,{className:"params",begin:/\(/,end:/\)/,keywords:s,contains:["self",e.C_BLOCK_COMMENT_MODE,n,o]}]},{className:"class",beginKeywords:"class interface",end:/\{/,excludeEnd:!0,illegal:/[:($"]/,contains:[{beginKeywords:"extends implements"},i]},{beginKeywords:"namespace",end:/;/,illegal:/[.']/,contains:[i]},{beginKeywords:"use",end:/;/,contains:[i]},{begin:/=>/},n,o]}}return fm=t,fm}var I=VNe;I.registerLanguage("1c",WNe());I.registerLanguage("abnf",KNe());I.registerLanguage("accesslog",QNe());I.registerLanguage("actionscript",XNe());I.registerLanguage("ada",ZNe());I.registerLanguage("angelscript",JNe());I.registerLanguage("apache",jNe());I.registerLanguage("applescript",eOe());I.registerLanguage("arcade",tOe());I.registerLanguage("arduino",nOe());I.registerLanguage("armasm",rOe());I.registerLanguage("xml",iOe());I.registerLanguage("asciidoc",aOe());I.registerLanguage("aspectj",oOe());I.registerLanguage("autohotkey",sOe());I.registerLanguage("autoit",lOe());I.registerLanguage("avrasm",cOe());I.registerLanguage("awk",uOe());I.registerLanguage("axapta",dOe());I.registerLanguage("bash",_Oe());I.registerLanguage("basic",pOe());I.registerLanguage("bnf",mOe());I.registerLanguage("brainfuck",gOe());I.registerLanguage("c",EOe());I.registerLanguage("cal",fOe());I.registerLanguage("capnproto",SOe());I.registerLanguage("ceylon",bOe());I.registerLanguage("clean",hOe());I.registerLanguage("clojure",TOe());I.registerLanguage("clojure-repl",vOe());I.registerLanguage("cmake",COe());I.registerLanguage("coffeescript",ROe());I.registerLanguage("coq",NOe());I.registerLanguage("cos",OOe());I.registerLanguage("cpp",AOe());I.registerLanguage("crmsh",yOe());I.registerLanguage("crystal",IOe());I.registerLanguage("csharp",DOe());I.registerLanguage("csp",xOe());I.registerLanguage("css",wOe());I.registerLanguage("d",MOe());I.registerLanguage("markdown",LOe());I.registerLanguage("dart",POe());I.registerLanguage("delphi",kOe());I.registerLanguage("diff",UOe());I.registerLanguage("django",FOe());I.registerLanguage("dns",BOe());I.registerLanguage("dockerfile",GOe());I.registerLanguage("dos",YOe());I.registerLanguage("dsconfig",qOe());I.registerLanguage("dts",$Oe());I.registerLanguage("dust",HOe());I.registerLanguage("ebnf",zOe());I.registerLanguage("elixir",VOe());I.registerLanguage("elm",WOe());I.registerLanguage("ruby",KOe());I.registerLanguage("erb",QOe());I.registerLanguage("erlang-repl",XOe());I.registerLanguage("erlang",ZOe());I.registerLanguage("excel",JOe());I.registerLanguage("fix",jOe());I.registerLanguage("flix",eAe());I.registerLanguage("fortran",tAe());I.registerLanguage("fsharp",nAe());I.registerLanguage("gams",rAe());I.registerLanguage("gauss",iAe());I.registerLanguage("gcode",aAe());I.registerLanguage("gherkin",oAe());I.registerLanguage("glsl",sAe());I.registerLanguage("gml",lAe());I.registerLanguage("go",cAe());I.registerLanguage("golo",uAe());I.registerLanguage("gradle",dAe());I.registerLanguage("graphql",_Ae());I.registerLanguage("groovy",pAe());I.registerLanguage("haml",mAe());I.registerLanguage("handlebars",gAe());I.registerLanguage
("haskell",EAe());I.registerLanguage("haxe",fAe());I.registerLanguage("hsp",SAe());I.registerLanguage("http",bAe());I.registerLanguage("hy",hAe());I.registerLanguage("inform7",TAe());I.registerLanguage("ini",vAe());I.registerLanguage("irpf90",CAe());I.registerLanguage("isbl",RAe());I.registerLanguage("java",NAe());I.registerLanguage("javascript",OAe());I.registerLanguage("jboss-cli",AAe());I.registerLanguage("json",yAe());I.registerLanguage("julia",IAe());I.registerLanguage("julia-repl",DAe());I.registerLanguage("kotlin",xAe());I.registerLanguage("lasso",wAe());I.registerLanguage("latex",MAe());I.registerLanguage("ldif",LAe());I.registerLanguage("leaf",PAe());I.registerLanguage("less",kAe());I.registerLanguage("lisp",UAe());I.registerLanguage("livecodeserver",FAe());I.registerLanguage("livescript",BAe());I.registerLanguage("llvm",GAe());I.registerLanguage("lsl",YAe());I.registerLanguage("lua",qAe());I.registerLanguage("makefile",$Ae());I.registerLanguage("mathematica",HAe());I.registerLanguage("matlab",zAe());I.registerLanguage("maxima",VAe());I.registerLanguage("mel",WAe());I.registerLanguage("mercury",KAe());I.registerLanguage("mipsasm",QAe());I.registerLanguage("mizar",XAe());I.registerLanguage("perl",ZAe());I.registerLanguage("mojolicious",JAe());I.registerLanguage("monkey",jAe());I.registerLanguage("moonscript",eye());I.registerLanguage("n1ql",tye());I.registerLanguage("nestedtext",nye());I.registerLanguage("nginx",rye());I.registerLanguage("nim",iye());I.registerLanguage("nix",aye());I.registerLanguage("node-repl",oye());I.registerLanguage("nsis",sye());I.registerLanguage("objectivec",lye());I.registerLanguage("ocaml",cye());I.registerLanguage("openscad",uye());I.registerLanguage("oxygene",dye());I.registerLanguage("parser3",_ye());I.registerLanguage("pf",pye());I.registerLanguage("pgsql",mye());I.registerLanguage("php",gye());I.registerLanguage("php-template",Eye());I.registerLanguage("plaintext",fye());I.registerLanguage("pony",Sye());I.registerLanguage("powershell",bye());I.registerLanguage("processing",hye());I.registerLanguage("profile",Tye());I.registerLanguage("prolog",vye());I.registerLanguage("properties",Cye());I.registerLanguage("protobuf",Rye());I.registerLanguage("puppet",Nye());I.registerLanguage("purebasic",Oye());I.registerLanguage("python",Aye());I.registerLanguage("python-repl",yye());I.registerLanguage("q",Iye());I.registerLanguage("qml",Dye());I.registerLanguage("r",xye());I.registerLanguage("reasonml",wye());I.registerLanguage("rib",Mye());I.registerLanguage("roboconf",Lye());I.registerLanguage("routeros",Pye());I.registerLanguage("rsl",kye());I.registerLanguage("ruleslanguage",Uye());I.registerLanguage("rust",Fye());I.registerLanguage("sas",Bye());I.registerLanguage("scala",Gye());I.registerLanguage("scheme",Yye());I.registerLanguage("scilab",qye());I.registerLanguage("scss",$ye());I.registerLanguage("shell",Hye());I.registerLanguage("smali",zye());I.registerLanguage("smalltalk",Vye());I.registerLanguage("sml",Wye());I.registerLanguage("sqf",Kye());I.registerLanguage("sql",Qye());I.registerLanguage("stan",Xye());I.registerLanguage("stata",Zye());I.registerLanguage("step21",Jye());I.registerLanguage("stylus",jye());I.registerLanguage("subunit",eIe());I.registerLanguage("swift",tIe());I.registerLanguage("taggerscript",nIe());I.registerLanguage("yaml",rIe());I.registerLanguage("tap",iIe());I.registerLanguage("tcl",aIe());I.registerLanguage("thrift",oIe());I.registerLanguage("tp",sIe());I.registerLanguage("twig",lIe());I.registerLanguage("typescript",cIe());I.registe
rLanguage("vala",uIe());I.registerLanguage("vbnet",dIe());I.registerLanguage("vbscript",_Ie());I.registerLanguage("vbscript-html",pIe());I.registerLanguage("verilog",mIe());I.registerLanguage("vhdl",gIe());I.registerLanguage("vim",EIe());I.registerLanguage("wasm",fIe());I.registerLanguage("wren",SIe());I.registerLanguage("x86asm",bIe());I.registerLanguage("xl",hIe());I.registerLanguage("xquery",TIe());I.registerLanguage("zephir",vIe());I.HighlightJS=I;I.default=I;var CIe=I;const CC=Qm(CIe);const RIe={class:"operation_wrap"},NIe=be({__name:"operateWrap",props:{operate:{},content:{}},setup(t){const e=t,n={copy:IC,download:CM},i=ee(!1),o=()=>{i.value=!0,setTimeout(()=>{i.value=!1},1e3)},s=le(()=>{const{operate:_}=e;return _?n[_]:""}),l=()=>{const{content:_}=e;_!==void 0&&(Wm(_),o())},c=()=>{const{content:_}=e;if(!_)return;const p=document.createElement("a");p.href=_,p.style.display="none",p.download="",document.body.appendChild(p),p.click(),p.addEventListener("load",()=>{o(),document.body.removeChild(p)})},d=()=>{if(i.value)return;const _={copy:l,download:c},{operate:p}=e;p&&_[p]()};return(_,p)=>(V(),ae("div",RIe,[oi(_.$slots,"default",{},void 0,!0),q(s)?(V(),ae("span",{key:0,class:"operate_icon",onClick:d},[q(i)?(V(),ot(q(aM),{key:0,size:"16"})):(V(),ot(ji(q(s)),{key:1,size:"16"}))])):Ge("",!0)]))}});const WN=Dt(NIe,[["__scopeId","data-v-ea9cc5f9"]]),Ig=t=>(Zi("data-v-4f00a864"),t=t(),Ji(),t),OIe={class:"code_container"},AIe={class:"tool_wrap"},yIe={key:0,class:"copy_icon",stroke:"currentColor",fill:"none","stroke-width":"2",viewBox:"0 0 24 24","stroke-linecap":"round","stroke-linejoin":"round",xmlns:"http://www.w3.org/2000/svg"},IIe=Ig(()=>B("path",{d:"M16 4h2a2 2 0 0 1 2 2v14a2 2 0 0 1-2 2H6a2 2 0 0 1-2-2V6a2 2 0 0 1 2-2h2"},null,-1)),DIe=Ig(()=>B("rect",{x:"8",y:"2",width:"8",height:"4",rx:"1",ry:"1"},null,-1)),xIe=[IIe,DIe],wIe={key:1,class:"copy_icon",stroke:"currentColor",fill:"none","stroke-width":"2",viewBox:"0 0 24 24","stroke-linecap":"round","stroke-linejoin":"round",xmlns:"http://www.w3.org/2000/svg"},MIe=Ig(()=>B("polyline",{points:"20 6 9 17 4 12"},null,-1)),LIe=[MIe],PIe=be({name:"Copy",__name:"copy",props:{lang:{},content:{}},setup(t){const e=t,n=ee(!1),i=()=>{if(!n.value){if(!e.content){yC.warning(Km("复制失败"));return}Wm(e.content),n.value=!0,setTimeout(()=>{n.value=!1},1e3)}};return(o,s)=>(V(),ae("div",OIe,[B("div",AIe,[B("span",null,Qe(e.lang),1),B("button",{class:"copy_btn",onClick:i},[q(n)?(V(),ae("svg",wIe,LIe)):(V(),ae("svg",yIe,xIe)),vt(" "+Qe(q(n)?o.$t("复制代码成功"):o.$t("复制代码")),1)])]),oi(o.$slots,"default",{},void 0,!0)]))}});const RC=Dt(PIe,[["__scopeId","data-v-4f00a864"]]),kIe=/^[a-zA-Z_:][a-zA-Z0-9:._-]*$/,jn={};function UIe(t){return kIe.test(t)}function ml(t){const e=document.createElement("template");e.innerHTML=t;const n=e.content.children,i=[];for(let o=0;oNumber(h)),-1);if(o!==-1&&eYa(h)),n[e].preVNode=Ya(C),ue(st,{},[C])}jn.code_inline=function(e,n,i,o,s){const l=e[n];return ue("code",s.renderAttrs(l),l.content)};jn.code_block=function(e,n,i,o,s){const l=e[n],c=s.renderAttrs(l);return ue("pre",void 0,[ue("code",c,[ue(Vm,{},l.content)])])};jn.fence=function(e,n,i,o){const s=e[n],l=s.info?Je.unescapeAll(s.info).trim():"";let c="",d="",_,p;return 
l&&(p=l.split(/(\s+)/g),c=p[0],d=p.slice(2).join("")),i.highlight?_=i.highlight(s.content,c,d)||Je.escapeHtml(s.content):_=Je.escapeHtml(s.content),o.renderVnode&&(_=FIe(_,n,o)),$i(_)?ue(RC,{lang:l,content:s.content},{default:()=>_}):ue(RC,{lang:l,content:s.content},{default:()=>ml(_)})};jn.image=function(e,n,i,o,s){const l=e[n];return ue("img",{...s.renderAttrs(l),alt:s.renderInlineAsText(l.children||[],i,o)},[])};jn.hardbreak=function(){return ue("br")};jn.softbreak=function(e,n,i){return i.breaks?ue("br"):null};jn.text=function(e,n){return ue(Vm,{},e[n].content)};jn.html_block=function(e,n){const i=e[n];return i.contentVNode?i.contentVNode:ml(i.content)};jn.html_inline=function(e,n){const i=e[n];return i.contentVNode?i.contentVNode:ml(i.content)};function BIe(t,e){const n=t[e];return n.nesting===-1?null:n.hidden?ue(st,{},[]):n.tag==="--"?ue(Ws):ue(n.tag,this.renderAttrs(n),[])}function GIe(t){if(!t.attrs)return{};const e={};return t.attrs.forEach(([n,i])=>{UIe(n)&&(e[n]=i)}),e}function NC(t,e,n){const{rules:i}=this,o=[];return t.map((s,l)=>{var E;const{type:c}=s;let d=null,_=null;if(c==="inline")d=ue(st,{},this.render(s.children||[],e,n));else if(i[c]){const f=(E=i[c])==null?void 0:E.call(i,t,l,e,n,this);typeof f=="string"?d=ml(f):f&&f.node&&f.parent?(_=f.parent,d=f.node):d=f}else d=this.renderToken(t,l,e);let p=!1;const g=o.length>0?o[o.length-1]:null;if(d&&g){if(typeof g.type=="string"||g.type===st){const f=Array.isArray(g.children)?g.children:[];g.children=f.concat([d])}p=!0}return s.nesting===1&&(_?o.push(_):d&&o.push(d)),s.nesting===-1&&o.pop(),p?null:d}).filter(s=>!!s)}const YIe=t=>{t.renderer.rules={...t.renderer.rules,...jn},t.renderer.render=NC,t.renderer.renderInline=NC,t.renderer.renderAttrs=GIe,t.renderer.renderToken=BIe},qIe=be({__name:"textItem",props:{isTyping:{type:Boolean},content:{}},setup(t){const e=t,n=new GRe({html:!0,linkify:!1,typographer:!0,highlight(p,g){if(g&&CC.getLanguage(g))try{return`
    ${CC.highlight(p,{language:g,ignoreIllegals:!0}).value}
    `}catch(E){console.warn(E)}return`
    ${n.utils.escapeHtml(p)}
    `}});n.use(jRe),n.use(YIe);const i=$w({renderVnode:!1}),o=Hw([]),s=p=>p.shapeFlag&8,l=p=>p.shapeFlag&6,c=p=>{if(!p)return!1;const g=p.children;return l(p)?!1:s(p)?!0:g&&g.length?c(g[g.length]):!0},d=p=>{var f;let g=p[p.length-1];if(!g||!c(g))return p;const E=((f=g.props)==null?void 0:f.class)||[];return E.includes("typing-text")||(g=p.pop(),g=Ya(g,{class:[...E,"typing-text"]}),p.push(g)),p},_=Ir.throttle(()=>{const{content:p}=e;if(!p)return;const g=n.render(e.content,i);if(!i.renderVnode){o.value=g;return}o.value=d(g)},100);return Zt(()=>e.content,_,{immediate:!0}),Zt(()=>e.isTyping,()=>{i.renderVnode=e.isTyping},{immediate:!0}),(p,g)=>(V(),ot(WN,{operate:"copy",content:e.content},{default:dt(()=>[B("div",{ref:"markdownWrapRef",class:It(["markdown_wrap",{"typing-pre":e.isTyping}])},[(V(!0),ae(st,null,yn(q(o),(E,f)=>(V(),ot(ji(E),{key:f}))),128))],2)]),_:1},8,["content"]))}});const $Ie={class:"chatHistoryImageItem"},HIe={key:0,class:"maxCover"},zIe=be({__name:"imageItem",props:{images:{}},setup(t){const e=t;return(n,i)=>(V(),ae("div",$Ie,[ue(q(pq),null,{default:dt(()=>[(V(!0),ae(st,null,yn(e.images.slice(0,4),(o,s)=>(V(),ae("div",{key:o,class:"imageItem"},[ue(q(gq),{src:o,"object-fit":"cover",lazy:""},null,8,["src"]),e.images.length>4&&s===3?(V(),ae("div",HIe,"+"+Qe(e.images.length-4),1)):Ge("",!0)]))),128))]),_:1})]))}});const VIe={class:"chatHistoryAudioItem"},WIe={class:"audio"},KIe={class:"control"},QIe=["src"],XIe=be({__name:"audioItem",props:{audioUrl:{}},setup(t){const e=t,n=ee(),i=ee("pause"),o=zw({duration:0,currentTime:0}),s=g=>Ka(new Date(o.duration*g/100*1e3)).format("mm:ss"),l=le(()=>Ka(new Date((o.duration-o.currentTime)*1e3)).format("mm:ss")),c=()=>{var g;(g=n.value)==null||g.play(),i.value="play"},d=()=>{var g;(g=n.value)==null||g.pause(),i.value="pause"},_=()=>{const g=n.value;o.currentTime=g.currentTime},p=()=>{const g=n.value;o.currentTime=g.currentTime,o.duration=g.duration};return(g,E)=>(V(),ae("div",VIe,[B("div",WIe,[B("div",KIe,[q(i)==="pause"?(V(),ot(q(JM),{key:0,onClick:c})):Ge("",!0),q(i)==="play"?(V(),ot(q(VM),{key:1,onClick:d})):Ge("",!0)]),ue(q(oM),{"format-tooltip":s}),B("div",null,Qe(q(l)),1)]),B("audio",{ref_key:"audioRef",ref:n,src:e.audioUrl,autoplay:!1,onLoad:p,onTimeupdate:_},null,40,QIe)]))}});const ZIe={class:"error_msg"},JIe=be({__name:"errorItem",props:{content:{}},setup(t){const n=yt(t,"content");return(i,o)=>(V(),ot(WN,{operate:"copy",content:q(n)},{default:dt(()=>[B("div",ZIe,Qe(q(n)),1)]),_:1},8,["content"]))}});const jIe=Dt(JIe,[["__scopeId","data-v-84a7773a"]]),eDe={class:"agent_message_wrap"},tDe=be({__name:"index",props:{id:{},lastContet:{type:Boolean},contents:{}},setup(t){const e=t,{activeAgentNode:n}=aa(),i=ee(e.contents),o=g=>ue(qIe,{content:g},null),s=g=>ue(zIe,{images:g.split(",")},null),l=g=>ue(XIe,{audioUrl:g},null),c=g=>ue(jIe,{content:g},null),d={[Dr.TEXT]:o,[Dr.IMAGE]:s,[Dr.AUDIO]:l,[Dr.FAILED]:c},_=le(()=>{var g;return e.id===((g=n.value)==null?void 0:g.id)&&e.lastContet}),p=()=>{const g=Zt(n,(E,f)=>{var C;if(!n.value)return;if(f!=null&&E==null||!_.value){g();return}const S=n.value.steps[n.value.steps.length-1];(C=S==null?void 0:S.contents)!=null&&C.length&&(i.value=[...S.contents])},{deep:!0})};return Zt(_,()=>{_.value?p():i.value=e.contents},{immediate:!0}),(g,E)=>(V(),ae("div",eDe,[(V(!0),ae(st,null,yn(q(i),f=>(V(),ae(st,{key:f.id},[f?(V(),ot(ji(d[f.type](f.value.answer)),{key:0,"is-typing":q(_)},null,8,["is-typing"])):Ge("",!0)],64))),128))]))}});const 
nDe=Dt(tDe,[["__scopeId","data-v-898355de"]]),rDe={class:"steps_container"},iDe={class:"steps_wrap"},aDe={key:0,style:{width:"80%","vertical-align":"middle"},src:aN,alt:""},oDe={key:1,style:{width:"80%","vertical-align":"middle"},src:oN,alt:""},sDe={key:2,style:{width:"80%","vertical-align":"middle"},src:sN,alt:""},lDe={key:3,style:{width:"80%","vertical-align":"middle"},src:lN,alt:""},cDe=be({__name:"index",props:{activeNode:{}},setup(t){const n=yt(t.activeNode,"steps"),i=o=>{var l;if(!o)return"";const s=pN.find(c=>c.job===o);return{width:"100%",height:"100%",background:(l=s==null?void 0:s.color)==null?void 0:l[1],display:"flex","align-items":"center","justify-content":"center","border-radius":"50%"}};return(o,s)=>(V(),ae("div",rDe,[B("section",iDe,[ue(q(sM),{direction:"vertical","line-less":!0},{default:dt(()=>[(V(!0),ae(st,null,yn(q(n),(l,c)=>(V(),ot(w$,{key:l.id,status:l.status,skill:l.skill,title:l.role,description:l.description},{icon:dt(()=>[B("div",{style:Bt(i(l.role))},[l.role==="Product Manager"?(V(),ae("img",aDe)):Ge("",!0),l.role==="Project Manager"?(V(),ae("img",oDe)):Ge("",!0),l.role==="Architect"?(V(),ae("img",sDe)):Ge("",!0),l.role==="Engineer"?(V(),ae("img",lDe)):Ge("",!0)],4)]),default:dt(()=>[ue(nDe,{id:o.activeNode.id,"last-contet":c===q(n).length-1,contents:l.contents},null,8,["id","last-contet","contents"])]),_:2},1032,["status","skill","title","description"]))),128))]),_:1})])]))}});const uDe=Dt(cDe,[["__scopeId","data-v-77479ba1"]]),dDe=be({__name:"statusButton",setup(t){const e={[Ut.INIT]:{text:"",visibel:!1,icon:""},[Ut.IDLE]:{text:"Regenerate",visibel:!0,icon:gs},[Ut.RUNNING]:{text:"Stop Generation",visibel:!0,icon:oL},[Ut.FINISH]:{text:"Regenerate",visibel:!0,icon:gs},[Ut.FAILED]:{text:"Regenerate",visibel:!0,icon:gs},[Ut.TERMINATE]:{text:"Regenerate",visibel:!0,icon:gs}},{globalStatus:n,stopMessage:i,regenMessage:o,activeAgentNode:s}=aa(),l=le(()=>e[n.value]),c=le(()=>{var _;return n.value===Ut.RUNNING&&!((_=s.value)!=null&&_.steps.length)}),d=()=>{if(n.value===Ut.RUNNING){i();return}o()};return(_,p)=>q(l).visibel?(V(),ot(q(xC),{key:0,class:"status_btn",align:"center",direction:"vertical",size:8},{default:dt(()=>[q(c)?Ge("",!0):(V(),ot(q(hm),{key:0,style:{"background-color":"#fff","border-radius":"999px"},type:"outline",onClick:d},{icon:dt(()=>[(V(),ot(ji(q(l).icon)))]),default:dt(()=>[vt(" "+Qe(_.$t(q(l).text)),1)]),_:1}))]),_:1})):Ge("",!0)}});const _De=Dt(dDe,[["__scopeId","data-v-240aae5d"]]),pDe={class:"chatRoomWrapper"},mDe={class:"visionWrapper"},gDe={key:0,class:"emptyWrapper"},EDe=Ww('
    Chat with MetaGPT
takes a one-line requirement as input and outputs user stories / competitive analysis /
    requirements / data structures / APIs / documents,
    etc.
    ',1),fDe={class:"actionWrapper"},SDe=["onClick"],bDe={key:0,style:{width:"22px"},src:V2,alt:""},hDe={key:1,style:{width:"22px"},src:W2,alt:""},TDe={key:2,style:{width:"22px"},src:K2,alt:""},vDe={key:3,style:{width:"22px"},src:Q2,alt:""},CDe={class:"chatWrapper"},RDe={class:"msg_history_area"},NDe={key:0,class:"msg_text"},ODe={class:"bottom_trigger"},ADe={class:"inputWrapper"},yDe={class:"inputInner"},IDe=["disabled"],DDe=be({__name:"chatRoom",setup(t){const e=["Design a RecSys like Toutiao","Write a cli snake game based on pygame","Design a content-based recommendation system","Design a search algorithm framework"],n=ee(!1),i=ee(""),{chatRenderPathList:o,genRootNode:s,sendMessage:l,apiKey:c,shakeApiKeyInput:d,globalStatus:_}=T2(),{scroller:p,onScroll:g}=dN(),E=ee(!1),f=()=>c.value?!0:(d.value=!0,E.value===!0||(E.value=!0,setTimeout(()=>{E.value=!1,d.value=!1},2e3)),!1),S=async T=>{f()&&(await s(T),l(T))},C=async()=>{f()&&(o.value.length||await s(i.value),l(i.value),i.value="")},h=T=>{T.key==="Enter"&&C()};return(T,N)=>{const y=Vw("loading");return V(),ae("div",pDe,[q(E)?(V(),ot(q(lM),{key:0,type:"warning",style:{width:"500px",position:"absolute","z-index":"999"}},{default:dt(()=>[vt("Please Enter Your OpenAI Key to Activate Your Team First.")]),_:1})):Ge("",!0),B("div",mDe,[q(o).length?Ge("",!0):(V(),ae("div",gDe,[EDe,B("div",fDe,[(V(),ae(st,null,yn(e,(x,P)=>B("div",{key:P,class:"button",onClick:D=>S(x)},[P===0?(V(),ae("img",bDe)):Ge("",!0),P===1?(V(),ae("img",hDe)):Ge("",!0),P===2?(V(),ae("img",TDe)):Ge("",!0),P===3?(V(),ae("img",vDe)):Ge("",!0),B("span",null,Qe(x),1)],8,SDe)),64))])])),Pn(B("div",CDe,[Pn((V(),ae("section",RDe,[B("div",{ref_key:"scroller",ref:p,class:"scroll_wrap",onScroll:N[0]||(N[0]=(...x)=>q(g)&&q(g)(...x))},[q(o).length?(V(!0),ae(st,{key:0},yn(q(o),x=>(V(),ot(u$,{key:x.activeNode.id,"render-node":x,"is-root-node":x.activeNode.id===0},{content:dt(()=>[x.activeNode.id===0?(V(),ae("div",NDe,Qe(x.activeNode.steps[0].contents[0].value.answer),1)):x.is_user_message?(V(),ot(f$,{key:1,"active-node":x.activeNode},null,8,["active-node"])):(V(),ot(uDe,{key:2,"active-node":x.activeNode},null,8,["active-node"]))]),_:2},1032,["render-node","is-root-node"]))),128)):Ge("",!0),B("div",ODe,[ue(_De)])],544)])),[[y,q(n)]])],512),[[Qs,q(o).length]])]),ue(q(uM),{disabled:q(c).length,content:"Please fill in your OpenAI API key to activate the hired software team."},{default:dt(()=>[B("div",ADe,[B("div",yDe,[Pn(B("input",{"onUpdate:modelValue":N[1]||(N[1]=x=>wr(i)?i.value=x:null),disabled:!q(c),placeholder:"Please enter a one-sentence requirement.",type:"text",onKeydown:h},null,40,IDe),[[Kw,q(i)]]),q(_)===q(Ut).RUNNING?(V(),ot(q(cM),{key:0,style:{"margin-right":"12px","margin-top":"4px"},size:6,dot:""})):(V(),ae("button",{key:1,class:"sendBtn",onClick:C},[ue(ea,{size:20,fill:"#ffffff","icon-id":"icon-fasong"})]))])])]),_:1},8,["disabled"])])}}});const xDe=Dt(DDe,[["__scopeId","data-v-ae197aef"]]),wDe={class:"chatWrapper"},MDe=be({__name:"chatPage",setup(t){return(e,n)=>(V(),ae("div",wDe,[ue(z2),ue(xDe)]))}});const LDe=Dt(MDe,[["__scopeId","data-v-7d0d8d24"]]),PDe={class:"hfHomeWrapper"},kDe=be({__name:"home",setup(t){return(e,n)=>(V(),ae("div",PDe,[ue(Vq),ue(LDe)]))}});const $De=Dt(kDe,[["__scopeId","data-v-7444cb54"]]);export{$De as default}; diff --git a/spaces/sub314xxl/MusicGen-Continuation/app_batched.py b/spaces/sub314xxl/MusicGen-Continuation/app_batched.py deleted file mode 100644 index 945da7da0abf07f6be156c9c31d4af26db2168cb..0000000000000000000000000000000000000000 --- 
a/spaces/sub314xxl/MusicGen-Continuation/app_batched.py +++ /dev/null @@ -1,195 +0,0 @@ -""" -Copyright (c) Meta Platforms, Inc. and affiliates. -All rights reserved. - -This source code is licensed under the license found in the -LICENSE file in the root directory of this source tree. -""" - -from tempfile import NamedTemporaryFile -import torch -import gradio as gr -from share_btn import community_icon_html, loading_icon_html, share_js, css - -from audiocraft.data.audio_utils import convert_audio -from audiocraft.data.audio import audio_write -from audiocraft.models import MusicGen - - -MODEL = None - - -def load_model(): - print("Loading model") - return MusicGen.get_pretrained("melody") - - -def predict(texts, melodies): - global MODEL - if MODEL is None: - MODEL = load_model() - - duration = 12 - MODEL.set_generation_params(duration=duration) - - print(texts, melodies) - processed_melodies = [] - - target_sr = 32000 - target_ac = 1 - for melody in melodies: - if melody is None: - processed_melodies.append(None) - else: - sr, melody = ( - melody[0], - torch.from_numpy(melody[1]).to(MODEL.device).float().t(), - ) - if melody.dim() == 1: - melody = melody[None] - melody = melody[..., : int(sr * duration)] - melody = convert_audio(melody, sr, target_sr, target_ac) - processed_melodies.append(melody) - - outputs = MODEL.generate_with_chroma( - descriptions=texts, - melody_wavs=processed_melodies, - melody_sample_rate=target_sr, - progress=False, - ) - - outputs = outputs.detach().cpu().float() - out_files = [] - for output in outputs: - with NamedTemporaryFile("wb", suffix=".wav", delete=False) as file: - audio_write( - file.name, - output, - MODEL.sample_rate, - strategy="loudness", - loudness_headroom_db=16, - loudness_compressor=True, - add_suffix=False, - ) - waveform_video = gr.make_waveform(file.name) - out_files.append(waveform_video) - - return [out_files, melodies] - - -def toggle(choice): - if choice == "mic": - return gr.update(source="microphone", value=None, label="Microphone") - else: - return gr.update(source="upload", value=None, label="File") - - -with gr.Blocks(css=css) as demo: - gr.Markdown( - """ - # MusicGen - - This is the demo for [MusicGen](https://github.com/facebookresearch/audiocraft), a simple and controllable model for music generation - presented at: ["Simple and Controllable Music Generation"](https://huggingface.co/papers/2306.05284). -
    - - Duplicate Space - for longer sequences, more control and no queue.

    - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Row(): - text = gr.Text( - label="Describe your music", - lines=2, - interactive=True, - elem_id="text-input", - ) - with gr.Column(): - radio = gr.Radio( - ["file", "mic"], - value="file", - label="Melody Condition (optional) File or Mic", - ) - melody = gr.Audio( - source="upload", - type="numpy", - label="File", - interactive=True, - elem_id="melody-input", - ) - with gr.Row(): - submit = gr.Button("Generate") - with gr.Column(): - output = gr.Video(label="Generated Music", elem_id="generated-video") - output_melody = gr.Audio(label="Melody ", elem_id="melody-output") - with gr.Row(visible=False) as share_row: - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html) - loading_icon = gr.HTML(loading_icon_html) - share_button = gr.Button("Share to community", elem_id="share-btn") - share_button.click(None, [], [], _js=share_js) - submit.click( - lambda x: gr.update(visible=False), - None, - [share_row], - queue=False, - show_progress=False, - ).then( - predict, - inputs=[text, melody], - outputs=[output, output_melody], - batch=True, - max_batch_size=12, - ).then( - lambda x: gr.update(visible=True), - None, - [share_row], - queue=False, - show_progress=False, - ) - radio.change(toggle, radio, [melody], queue=False, show_progress=False) - gr.Examples( - fn=predict, - examples=[ - [ - "An 80s driving pop song with heavy drums and synth pads in the background", - "./assets/bach.mp3", - ], - [ - "A cheerful country song with acoustic guitars", - "./assets/bolero_ravel.mp3", - ], - [ - "90s rock song with electric guitar and heavy drums", - None, - ], - [ - "a light and cheerly EDM track, with syncopated drums, aery pads, and strong emotions bpm: 130", - "./assets/bach.mp3", - ], - [ - "lofi slow bpm electro chill with organic samples", - None, - ], - ], - inputs=[text, melody], - outputs=[output], - ) - gr.Markdown( - """ - ### More details - - The model will generate 12 seconds of audio based on the description you provided. - You can optionaly provide a reference audio from which a broad melody will be extracted. - The model will then try to follow both the description and melody provided. - All samples are generated with the `melody` model. - - You can also use your own GPU or a Google Colab by following the instructions on our repo. - - See [github.com/facebookresearch/audiocraft](https://github.com/facebookresearch/audiocraft) - for more details. - """ - ) -demo.queue(max_size=60).launch() diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Acrobat Pro DC 2018.012.20042 [VERIFIED] Crack Utorrent.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Acrobat Pro DC 2018.012.20042 [VERIFIED] Crack Utorrent.md deleted file mode 100644 index 7109b5cdc5013b188555e644d9dbe6ec2778681b..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Acrobat Pro DC 2018.012.20042 [VERIFIED] Crack Utorrent.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

    The Turing Test - Update 2 (CODEX) Patch DnnApex>>https 6.0.9 crack hanya dapat download full the> secret to teen power pdf pb card> mod by grime download free download cumparacion de nucc>trello.com> ; Secure password manager google chrome rar winrar crack ; The Intern (English) hindi dubbed movie download
    Biopharmaceutics and pharmacokinetics by venkateswarlu pdf
    Final Fantasy 7 Advent Children Torrent credito tablaturas eiffel mescla
    IZotope Ozone 5 Advanced VST VST3 RTAS V500 X86 X64 ASSiGNrar
    ion agarbiceanu file din cartea naturii download
    zimmer nexgen sizing chart
    Airbox Playout Software Crack 19
    Passlist Txt Hydra
    native instruments battery 4.v4.0.1 cracked-union trello.com>trello ;

    -

    Accuradius Doxothymus 5.1.8082 Crack notecode ncurses core module Free Download Antetos Crack by notecode have a nice day Download ; 6.4.23031 Crack & Keys Desktop All In One AntiVirus ; Junebug::DJ2BASS 6.0.2.7 Crack Full Registration code hdownload> downloader pc [d5] midiropr p4 daca o sa inteleg cat ma primeasc ; The Intern (English) hindi dubbed movie download
    native instruments battery 4.v4.0.1 cracked-union trello.com ; trello.com ; 6.4.23031 Crack full registrationcode hdownload> downloader pc [d5] midiropr p4 daca o sa inteleg cat ma primeasc ;

    -

    Adobe Acrobat Pro DC 2018.012.20042 Crack utorrent


    Download 🗸 https://cinurl.com/2uEYdx



    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Premiere Pro Cs4 Cs6 Portable X86 X64 Torrentrar ((LINK)).md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Premiere Pro Cs4 Cs6 Portable X86 X64 Torrentrar ((LINK)).md deleted file mode 100644 index cacd701a3268c374880d0b0ea6f3c7a2e6669397..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Premiere Pro Cs4 Cs6 Portable X86 X64 Torrentrar ((LINK)).md +++ /dev/null @@ -1,6 +0,0 @@ -

    Adobe Premiere Pro Cs4 Cs6 Portable X86 X64 Torrentrar


    Download Filehttps://cinurl.com/2uEXp9



    -
    -Adobe Premiere Pro Cs4 Cs6 Portable X86 X64 Torrentrar · Rapture Rejects [full Version]l · Mgsoft Mib Browser Professional Edition Crack 23 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Video Comparer 1 06 Keygen Extra Quality 26.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Video Comparer 1 06 Keygen Extra Quality 26.md deleted file mode 100644 index b9b1e98ec4582f7ffbbbbf36e7cdc8d1b7709724..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Video Comparer 1 06 Keygen Extra Quality 26.md +++ /dev/null @@ -1,26 +0,0 @@ - -

    How to Use Video Comparer 1.06 to Find and Delete Duplicate Videos

    -

Video Comparer is a Windows utility that quickly detects duplicate video files on your computer and can easily delete them. It handles most codecs and finds copies that have been cropped, rotated, or noised, as well as videos split across multiple CDs[^4^] [^7^]. Video Comparer provides a report of scanned files and synchronized thumbnails of the duplicate videos[^4^] [^7^].

    -

    video comparer 1 06 keygen 26


    Download Filehttps://cinurl.com/2uEYSF



    -

    In this article, we will show you how to use Video Comparer 1.06 to find and delete duplicate videos on your hard disk. You will need a valid keygen to activate the software, which you can download from various sources on the internet[^1^]. However, we do not recommend or endorse any of these sources, as they may contain malware or viruses. Use them at your own risk.

    -

    Step 1: Install and Activate Video Comparer 1.06

    -

First, you need to download and install Video Comparer 1.06 from the official website: www.video-comparer.com. You can choose the Basic, Standard, or Pro version, depending on your needs and budget. The Basic version is free but has some limitations: it only scans up to 500 videos and does not detect image transformations or time shifts[^8^]. The Standard version costs $20, scans up to 5000 videos, and detects image transformations[^8^]. The Pro version costs $100, scans unlimited videos, and detects both image transformations and time shifts[^8^].

    -

    After installing the software, you need to activate it with a valid keygen. You can download a keygen for Video Comparer 1.06 from various sources on the internet, such as Peatix[^1^] or SoundCloud[^2^] [^3^]. However, as we mentioned before, we do not recommend or endorse any of these sources, as they may contain malware or viruses. Use them at your own risk.

    -

    -

    To activate the software with a keygen, follow these steps:

    -
1. Run the keygen and generate a serial number for Video Comparer 1.06.
2. Run Video Comparer 1.06 and click on the "Help" menu.
3. Select "Enter registration key" and enter the serial number generated by the keygen.
4. Click "OK" and restart the software.

    You should now have a fully activated version of Video Comparer 1.06.

    -

    Step 2: Select Folders to Scan

    -

    Next, you need to select the folders that contain the videos you want to scan for duplicates. You can add as many folders as you want by clicking on the "Add folder" button at the top left corner of the main window. You can also remove any folder by selecting it and clicking on the "Remove folder" button.

    -

After adding all the folders you want to scan, you can adjust some settings by clicking on the "Options" button at the top right corner of the main window. Here you can choose the level of comparison (low, medium, or high), the minimum duration of videos to scan (from 5 seconds to 60 minutes), and the maximum size of videos to scan (from 10 MB to unlimited). You can also enable or disable features such as detecting image transformations (cropped, rotated, or noised copies), detecting time shifts (videos split across multiple CDs), deleting empty folders after scanning, or moving deleted files to the Recycle Bin.
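Video Comparer exposes these options only through its GUI, but to make the scan parameters concrete, here is a small hypothetical sketch of the same settings expressed in code. The names, types, and defaults below are illustrative assumptions, not part of Video Comparer's actual interface.

```python
# Hypothetical representation of the scan settings described above
# (Video Comparer itself is a GUI tool; these names are illustrative only).
from dataclasses import dataclass


@dataclass
class ScanOptions:
    comparison_level: str = "medium"      # "low", "medium", or "high"
    min_duration_seconds: int = 5         # from 5 seconds up to 60 minutes
    max_size_mb: float = float("inf")     # from 10 MB up to unlimited
    detect_transformations: bool = True   # cropped / rotated / noised copies
    detect_time_shifts: bool = True       # videos split across multiple CDs
    delete_empty_folders: bool = False
    use_recycle_bin: bool = True


def should_scan(duration_seconds: float, size_mb: float, opts: ScanOptions) -> bool:
    """Apply the duration and size filters before a file is fingerprinted."""
    return duration_seconds >= opts.min_duration_seconds and size_mb <= opts.max_size_mb
```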

    -

    When you are done with the settings, click "OK" and then click on the "Start" button at the bottom right corner of the main window to begin scanning.

    -

    Step 3: Review and Delete Duplicate Videos

    -

    Video Comparer will scan all the videos in the selected folders and compare them by their content using a robust "video fingerprint" technology[^5^]. It will then display a report of scanned files, showing how many videos were scanned, how many duplicates were found, how much disk space was saved by deleting duplicates, and how long it took to scan.
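Video Comparer's exact fingerprinting algorithm is proprietary and not documented here, but the general idea of content-based duplicate detection can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration using OpenCV (`cv2`, assumed installed via `pip install opencv-python`): it samples evenly spaced frames, computes a 64-bit average hash per frame, and scores two videos by how many hash bits agree. The sampling rate, hash size, and suggested threshold are assumptions for illustration, not Video Comparer's implementation.

```python
# Minimal sketch of content-based video fingerprinting (not Video Comparer's code).
import cv2


def average_hash(frame, size=8):
    """64-bit perceptual hash: downscale, grayscale, threshold at the mean."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size, size), interpolation=cv2.INTER_AREA)
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)


def video_fingerprint(path, frames_per_video=16):
    """Sample evenly spaced frames from the video and hash each one."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) or 1
    hashes = []
    for i in range(frames_per_video):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // frames_per_video)
        ok, frame = cap.read()
        if ok:
            hashes.append(average_hash(frame))
    cap.release()
    return hashes


def similarity(fp_a, fp_b):
    """Fraction of matching hash bits across aligned frames (0.0 to 1.0)."""
    pairs = list(zip(fp_a, fp_b))
    if not pairs:
        return 0.0
    matching_bits = sum(64 - bin(a ^ b).count("1") for a, b in pairs)
    return matching_bits / (64 * len(pairs))

# Two files are likely duplicates when similarity(...) exceeds a chosen
# threshold (e.g. 0.9); lower thresholds tolerate noise, crops, or re-encodes.
```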

    - d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Winstep Nexus 19.2 Crack UPD With Keygen Free Download 2020.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Winstep Nexus 19.2 Crack UPD With Keygen Free Download 2020.md deleted file mode 100644 index e645c324e0a9b7b224b989ccbf82a0a5365f54ab..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Winstep Nexus 19.2 Crack UPD With Keygen Free Download 2020.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Winstep Nexus 19.2 Crack With Keygen Free Download 2020


    Download Zip: https://cinurl.com/2uEYP8



    -
    -
    -

diff --git a/spaces/taesiri/DeticChatGPT/detic/modeling/utils.py b/spaces/taesiri/DeticChatGPT/detic/modeling/utils.py
deleted file mode 100644
index 297fb469a049d3df2a4aa730e09c9919b4c4ca3c..0000000000000000000000000000000000000000
--- a/spaces/taesiri/DeticChatGPT/detic/modeling/utils.py
+++ /dev/null
@@ -1,49 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import torch
-import json
-import numpy as np
-from torch.nn import functional as F
-
-def load_class_freq(
-    path='datasets/metadata/lvis_v1_train_cat_info.json', freq_weight=1.0):
-    cat_info = json.load(open(path, 'r'))
-    cat_info = torch.tensor(
-        [c['image_count'] for c in sorted(cat_info, key=lambda x: x['id'])])
-    freq_weight = cat_info.float() ** freq_weight
-    return freq_weight
-
-
-def get_fed_loss_inds(gt_classes, num_sample_cats, C, weight=None):
-    appeared = torch.unique(gt_classes) # C'
-    prob = appeared.new_ones(C + 1).float()
-    prob[-1] = 0
-    if len(appeared) < num_sample_cats:
-        if weight is not None:
-            prob[:C] = weight.float().clone()
-        prob[appeared] = 0
-        more_appeared = torch.multinomial(
-            prob, num_sample_cats - len(appeared),
-            replacement=False)
-        appeared = torch.cat([appeared, more_appeared])
-    return appeared
-
-
-
-def reset_cls_test(model, cls_path, num_classes):
-    model.roi_heads.num_classes = num_classes
-    if type(cls_path) == str:
-        print('Resetting zs_weight', cls_path)
-        zs_weight = torch.tensor(
-            np.load(cls_path),
-            dtype=torch.float32).permute(1, 0).contiguous() # D x C
-    else:
-        zs_weight = cls_path
-    zs_weight = torch.cat(
-        [zs_weight, zs_weight.new_zeros((zs_weight.shape[0], 1))],
-        dim=1) # D x (C + 1)
-    if model.roi_heads.box_predictor[0].cls_score.norm_weight:
-        zs_weight = F.normalize(zs_weight, p=2, dim=0)
-    zs_weight = zs_weight.to(model.device)
-    for k in range(len(model.roi_heads.box_predictor)):
-        del model.roi_heads.box_predictor[k].cls_score.zs_weight
-        model.roi_heads.box_predictor[k].cls_score.zs_weight = zs_weight
\ No newline at end of file
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Callanetics Turkce Dublaj Torrent Indir.md b/spaces/terfces0erbo/CollegeProjectV2/Callanetics Turkce Dublaj Torrent Indir.md
deleted file mode 100644
index e288392016bd5a88ce3003207c3ea2ed9438517e..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Callanetics Turkce Dublaj Torrent Indir.md
+++ /dev/null
@@ -1,78 +0,0 @@
-

    Callanetics Turkish Dubbed Torrent Download: The Most Effective Way to Shape Your Body

    -

    Callanetics is a form of exercise that works specific muscle groups with small, precise movements. These exercises make your body firmer, more flexible, and healthier. Because Callanetics is a low-impact exercise program, it is gentle on your joints and can be done by people of all ages.

    -

    callanetics turkce dublaj torrent indir


    Download Zip > https://bytlly.com/2uGiG2



    -

    So how can you learn the Callanetics exercises? By downloading a Turkish-dubbed Callanetics torrent from the internet, you can do these exercises at home. Read on for the Turkish-dubbed torrent download links and an exercise guide.

    -

    Callanetics Turkish Dubbed Torrent Download Links

    -

    If you want to watch the Callanetics exercises with Turkish dubbing, you can download a torrent using one of the links below. The videos you download from these links contain the original exercise programs presented by Callan Pinckney, the founder of Callanetics.

    - -

    When you click one of these links, a torrent file will appear. To download this file to your computer, you will need a torrent client. If you do not have one, you can download a free and reliable client such as uTorrent.

    -

    -

    After downloading the torrent file to your computer, open your torrent client and add the file to it. The client will start downloading the video for you. The download time may vary depending on your internet speed and the size of the video. Once the video has downloaded, you can watch it whenever you like.
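
    For readers comfortable with a little scripting, the add-and-wait workflow just described is exactly what any torrent client performs internally. The following is a minimal sketch using the Python bindings of the open-source libtorrent library (our assumption for illustration; the article itself only requires a GUI client like uTorrent), with a placeholder file name:

    # Minimal sketch of the torrent workflow described above, assuming
    # 'pip install libtorrent' and a placeholder file named 'example.torrent'.
    import time
    import libtorrent as lt

    ses = lt.session({'listen_interfaces': '0.0.0.0:6881'})
    info = lt.torrent_info('example.torrent')            # parse the .torrent metadata
    handle = ses.add_torrent({'ti': info, 'save_path': './downloads'})

    while not handle.status().is_seeding:                # seeding means fully downloaded
        status = handle.status()
        print(f'{status.progress * 100:5.1f}%  {status.download_rate / 1000:.0f} kB/s')
        time.sleep(2)
    print('Download complete:', info.name())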

    -

    Callanetics Exercise Guide

    -

    Now that you have downloaded the Turkish-dubbed Callanetics torrent, are you ready to exercise? Callanetics exercises consist of various movements that work different parts of your body. Follow the tips below to perform these movements correctly:

    -
    • Warm up before starting the workout. This prepares your muscles and reduces the risk of injury.
    • Do not hold your breath while exercising. Try to synchronize your breathing with your movements.
    • Perform the movements slowly and with control. Rather than rushing, squeeze your muscles to get more benefit from each movement.
    • Keep your body upright while performing the movements. Do not hunch your back, and relax your shoulders.
    • Watch yourself in a mirror while exercising. This lets you check your posture and correct your mistakes.
    • Cool down after finishing the workout. This relaxes your muscles and reduces fatigue.
    -

    Callanetics exercises should be done at least 3 times a week. How long each session lasts is up to you; usually between 30 minutes and 1 hour is enough. When you exercise regularly, you will notice visible changes in your body.

    -

    Conclusion

    -

    By downloading a Turkish-dubbed Callanetics torrent, you can easily do this workout at home. Callanetics is one of the most effective ways to firm up your body and stay healthy. Follow the tips in this article to perform the Callanetics exercises correctly and see the results.

    -

    What Are the Benefits of Callanetics?

    -

    Callanetics is an exercise program that offers many benefits for your body. Some of the benefits of Callanetics are:

    -
    • Improves muscle tone and body shape. Callanetics works the deep muscles, firming and slimming your body. With Callanetics you can drop a dress size in a month.
    • Increases flexibility and functionality. Callanetics lengthens your muscles and improves your flexibility. This increases your range of motion and makes everyday activities easier.
    • Helps you lose weight. Although Callanetics was not designed for weight loss, it can help you lose weight by speeding up your metabolism and increasing your muscle mass.
    • Improves posture, balance, and coordination. Callanetics helps you hold your body upright and supports your spine, which corrects your posture and develops your balance and coordination.
    • Prevents or reduces pain and injury. Because Callanetics is a low-impact exercise, it does not strain your joints. It also benefits people with back, neck, knee, and shoulder pain.
    • Improves mood, energy, and motivation. Callanetics reduces stress and releases endorphins, lifting your mood. It also raises your energy level and strengthens your motivation to keep exercising.
    -

    To see the benefits of Callanetics, all you need to do is download the Turkish-dubbed torrent and start exercising. You will see that it is possible to change your body with Callanetics.

    -
    What Are the Callanetics Exercises?
    -

    Callanetics exercises consist of various movements that work different parts of your body. These movements are usually performed using a barre (ballet bar), a chair, or a mat. Some examples of Callanetics exercises are:

    -
    • Barre leg lifts: Holding onto the barre, raise your legs to the side and behind you to work your hip and leg muscles.
    • Chair dips: Stand with your back to a chair and place your hands on its edge. Extend your legs, then lower and raise your body to work your triceps.
    • Abdominal curls: Lie on your back on the mat. Bend your knees and lift your legs into the air. Place your hands behind your head. Using your abdominal muscles, curl your shoulders up and then lower them.
    • Plank: Lie face down on the mat. Place your hands under your shoulders. Extend your legs and lift your body so that only your hands and toes touch the mat. Keep your body in a straight line and tighten your abs.
    • Lunge: Stand on the mat with your hands on your hips. Take a big step forward with your right leg. Bend your left leg until your knee almost touches the floor. Push back up onto your right leg, then repeat the movement with your left leg.
    -

    Callanetics exercises are performed by repeating each movement 10 to 100 times. This works and tightens the muscles deeply. While doing the exercises, take care to move slowly and with control, to breathe steadily, and to keep your body upright.

    -
    Reviews from People Who Use Callanetics
    -

    Hearing about the benefits of Callanetics may have gotten you excited. But what do people who actually use Callanetics say? What have those who downloaded the Turkish-dubbed torrent and did the exercises experienced? Here are some real user comments:

    -
    -

    "Callanetics'i 30 yıldır yapıyorum ve vücudumun şekli hiç değişmedi. 50 yaşındayım ama 30 yaşında gibi görünüyorum. Callanetics benim için bir yaşam tarzı oldu. Callanetics Türkçe dublaj torrent indirerek evde rahatça yapabiliyorum. Kesinlikle tavsiye ederim."

    -- Ayşe -
    -
    -

    "Callanetics'i bir arkadaşımın tavsiyesi üzerine denemeye karar verdim. İlk başta çok zorlandım ama sonuçları görmeye başlayınca devam ettim. Bir ayda belimde ve kalçamda 5 cm inceldim. Ayrıca sırt ağrılarım da geçti. Callanetics Türkçe dublaj torrent indirmek çok kolaydı ve videoları izleyerek hareketleri öğrendim."

    -- Emre -
    -
    -

    "Callanetics'i spor salonunda bir eğitmenle yapıyordum ama pandemi nedeniyle evde yapmaya başladım. Callanetics Türkçe dublaj torrent indirerek videoları izliyorum ve çok memnunum. Eğitmenin sesi çok net ve anlaşılır. Hareketleri yaparken çok terlemiyorum ama kaslarımı hissediyorum. Vücudum daha sıkı ve esnek oldu."

    -- Zeynep -
    -

    These user comments show just how effective this exercise program is. You too can download the Turkish-dubbed Callanetics torrent, start exercising, and see the difference.

    -

    Callanetics Turkish Dubbed Torrent Download: Final Word

    -


    -

    The links below again point to the Turkish-dubbed Callanetics torrents, which contain the original exercise programs presented by Callan Pinckney, the founder of Callanetics.

    - -

    To download the torrent you will need a torrent client. If you do not have one, you can download a free and reliable client such as uTorrent.

    -

    What are you still waiting for? Download the Turkish-dubbed Callanetics torrent, start exercising, and feel the difference!

    -

    Callanetics Turkish Dubbed Torrent Download: Conclusion

    -

    By downloading the Turkish-dubbed Callanetics torrent, you have seen that it is possible to change your body. Callanetics works the deep muscles to firm your body, increases your flexibility, and reduces your aches and pains. It also improves your mood, energy, and motivation. Callanetics exercises can be done by people of all ages and all fitness levels. You can use the links in this article to download the torrent. The most effective way to shape your body with Callanetics is in your hands.

    -
    -
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Adobe Font Folio 11.1.rar 116.md b/spaces/tialenAdioni/chat-gpt-api/logs/Adobe Font Folio 11.1.rar 116.md
deleted file mode 100644
index 3080132041fbda63e3cc58272a2a30daebb6696d..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Adobe Font Folio 11.1.rar 116.md
+++ /dev/null
@@ -1,29 +0,0 @@
-

    Adobe Font Folio 11.1: A Discontinued Collection of Fonts

    -

    Adobe Font Folio 11.1 was a collection of fonts that Adobe sold until June 1, 2022[^2^]. It contained over 2,400 fonts from the Adobe Type Library, including Adobe Originals, Adobe Systems standards, and fonts from other foundries[^2^]. Users could download the fonts once they purchased the product and use them for print, web, and video projects[^2^].

    -

    Adobe Font Folio 11.1.rar 116


    Download: https://urlcod.com/2uKasv



    -

    However, Adobe discontinued sales of Font Folio 11.1 and Font Folio Education Essentials on June 1, 2022[^2^]. The reason for this decision was not explicitly stated by Adobe, but some users speculated that it was because Adobe wanted to promote its subscription-based service, Adobe Fonts[^1^]. Adobe Fonts is part of Creative Cloud and offers unlimited access to thousands of fonts from various type foundries[^3^]. Users can sync fonts to their devices or use them on the web without worrying about licensing issues[^3^].

    -

    Some users who purchased Font Folio 11.1 before the end-of-sale date expressed frustration and confusion about how to download and install the fonts they bought[^1^]. Adobe advised them to contact customer support for assistance[^1^]. However, some users reported that customer support was not helpful or responsive[^1^]. It is unclear whether Adobe will continue to provide support and updates for Font Folio 11.1 in the future.

    While some users may miss Font Folio 11.1, Adobe Fonts offers many benefits that may outweigh the disadvantages of a subscription-based model. According to Adobe's website[^1^], some of the benefits of Adobe Fonts are:

    -
      -
    • The full Adobe Fonts library can be used for both personal and commercial projects, such as design, web, video, and broadcast[^1^].
    • Users can create images or vector artwork, including logos, with fonts from Adobe Fonts[^1^].
    • Users can create a web project to add any font from the service to their website, without any pageview limits or hosting fees[^1^].
    • Users can embed fonts in PDFs for viewing and printing[^1^].
    • Users can sync fonts to their devices and use them in any desktop application that supports fonts, such as Photoshop, Illustrator, InDesign, and Microsoft Office[^1^].
    • Users can access new fonts and updates as soon as they are released by the type foundries[^1^].
    • Users can browse and filter fonts by various criteria, such as classification, properties, languages, and tags[^1^].
    • Users can preview fonts with different text and settings before using them[^1^].
    • Users can discover and learn more about fonts and their designers through articles, videos, and podcasts on Adobe Fonts[^1^].
    -

    Therefore, Adobe Fonts may be a better option for users who want to have more flexibility, variety, and convenience when using fonts for their creative projects.

    Of course, Adobe Fonts is not the only option for users who need fonts for their projects. There are many alternatives to Adobe Fonts that offer different features, prices, and collections of fonts. Some of the most popular alternatives to Adobe Fonts are:

    -

    -
      -
    1. Google Fonts: A free service that provides access to thousands of open source fonts that can be used on the web or downloaded for offline use[^1^]. Google Fonts has a simple and intuitive interface that allows users to browse, filter, and preview fonts easily. Google Fonts also supports many languages and scripts[^1^].
    2. Font Squirrel: A free service that provides a curated collection of high-quality fonts that are licensed for commercial use[^2^]. Font Squirrel also offers a font generator tool that can convert any font into a web font format[^2^]. Font Squirrel has a variety of categories and tags to help users find the right font for their needs[^2^].
    3. dafont.com: A free service that provides a large archive of downloadable fonts that are submitted by font designers and enthusiasts[^3^]. dafont.com has a wide range of styles and themes, from handwriting to horror, from retro to futuristic[^3^]. Users can also request or create custom fonts on dafont.com[^3^].
    4. Font Library: A free service that provides a collection of fonts that are licensed under open source or public domain licenses. Font Library aims to promote the freedom and diversity of typography by allowing users to use, study, share, and remix fonts. Font Library also has a community of font lovers who contribute and review fonts.
    5. Pixelfy: A free service that provides a platform for individual designers to share and download high-quality digital assets, including fonts. Pixelfy has a variety of fonts from different genres and styles, such as vintage, modern, script, and display. Users can also create their own fonts using Pixelfy's vector graphic app.
    -

    These are just some of the alternatives to Adobe Fonts that users can explore and compare. Depending on their preferences, budget, and project requirements, users may find one or more of these services suitable for their font needs.
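
    To make the web-font workflow concrete, here is the standard Google Fonts embed pattern. The Roboto family is only an example choice; any family from the catalog works the same way:

    <link rel="preconnect" href="https://fonts.googleapis.com">
    <link href="https://fonts.googleapis.com/css2?family=Roboto:wght@400;700&display=swap" rel="stylesheet">
    <style>
      body { font-family: 'Roboto', sans-serif; }
    </style>

    The first two lines load the font stylesheet from Google's servers, and the style rule then applies the family to the page, with sans-serif as a fallback while the font loads.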

    -
    -
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/HP DMI Tool 4.0 Free Download Compatible with Various HP Models such as Pavilion EliteBook ProBook and More.md b/spaces/tialenAdioni/chat-gpt-api/logs/HP DMI Tool 4.0 Free Download Compatible with Various HP Models such as Pavilion EliteBook ProBook and More.md
deleted file mode 100644
index 6ff920b2186685ca9245204833655d4d1feb095b..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/HP DMI Tool 4.0 Free Download Compatible with Various HP Models such as Pavilion EliteBook ProBook and More.md
+++ /dev/null
@@ -1,58 +0,0 @@
-

    How to Download and Use HP DMI Tool 4.0 for Free

    -

    If you have an HP laptop or desktop computer and you need to modify some information on the system board, such as the product name, serial number, SKU, family, or CT number, you may need to use a special utility called HP DMI Tool. This tool allows you to write or rewrite the DMI (Desktop Management Interface) data on your HP device.

    -

    However, finding and downloading HP DMI Tool 4.0 for free can be tricky, as HP has decided not to make this tool available anymore. Changes that need this tool now have to be done by HP service centers[^2^]. But don't worry: in this article, we will show you how to download and use HP DMI Tool 4.0 for free without any hassle.

    -

    hp dmi tool 4.0 free download


    Download File ✒ ✒ ✒ https://urlcod.com/2uK9lt



    -

    What is HP DMI Tool 4.0?

    -

    HP DMI Tool 4.0 is a utility that can write or rewrite the DMI data on HP laptops or desktops. DMI data is a set of information that identifies your device and its components, such as the product name, serial number, SKU, family, or CT number. These data are stored on a chip on the system board and are used by the BIOS and other software to identify your device.
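
    Before rewriting anything, it is a good idea to check what your current DMI strings actually are. The commands below are standard Windows tools, not part of HP DMI Tool, and they only read the data; they cannot change it. From a Command Prompt:

    wmic csproduct get vendor,name,identifyingnumber,version

    Or, equivalently, from PowerShell:

    Get-CimInstance Win32_ComputerSystemProduct | Format-List *

    The IdentifyingNumber field is the serial number, and Name is the product name; these are the same fields the tool rewrites.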

    -

    Sometimes, you may need to change the DMI data on your device, for example, if you replace the system board or upgrade some components. If you don't update the DMI data, you may encounter errors or issues with your device, such as invalid product information, warranty problems, activation failures, or BIOS updates failures.

    -

    HP DMI Tool 4.0 can help you update the DMI data on your device easily and quickly. You just need to run the tool from a DOS bootable disk and follow the instructions on the screen. The tool will scan your device and display the current DMI data. You can then modify the data as needed and save the changes.

    -

    How to Download HP DMI Tool 4.0 for Free?

    -

    As mentioned earlier, HP has decided not to make HP DMI Tool 4.0 available anymore. Changes that need this tool now have to be done by HP service centers[^2^]. However, there are still some ways to download HP DMI Tool 4.0 for free from other sources.

    -

    One way is to search for HP DMI Tool 4.0 on Google or other search engines and look for reliable websites that offer this tool for download. You may find some links to Google Drive folders or YouTube videos that contain the tool[^3^]. However, be careful when downloading files from unknown sources, as they may contain viruses or malware that can harm your device.

    -


    -

    Another way is to ask for help from other HP users who have access to HP DMI Tool 4.0. You can join the HP Support Community[^1^] and post a question asking for the tool. You may find some users who are willing to share the tool with you via email or other methods. However, be respectful and polite when asking for help, as not everyone may be comfortable sharing this tool with strangers.

    -

    How to Use HP DMI Tool 4.0?

    -

    Once you have downloaded HP DMI Tool 4.0 for free, you need to create a DOS bootable disk that contains the tool. You can use a USB flash drive or a CD/DVD for this purpose. You can use any software that can create a DOS bootable disk, such as Rufus or WinToBootic.

    -

    After creating the DOS bootable disk, you need to insert it into your HP device and restart it. You may need to change the boot order in the BIOS settings to boot from the disk first. Once you boot into DOS mode, you will see a command prompt window.

    -

    Type "dir" and press Enter to see the list of files on the disk. You should see a file named "HPBQ193.exe" or something similar. This is the HP DMI Tool 4.0 file that you need to run.

    -

    Type "HPBQ193.exe" and press Enter to launch the

    -
    -
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Final Quest by Rick Joyner A Vision of the Last Battle.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Final Quest by Rick Joyner A Vision of the Last Battle.md
deleted file mode 100644
index 2056a9fb7a05c74f08bbb7b6cd955fbf63f69b7d..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Final Quest by Rick Joyner A Vision of the Last Battle.md
+++ /dev/null
@@ -1,93 +0,0 @@
-

    Download Final Quest by Rick Joyner: A Review

    -

    If you are looking for a book that will challenge your faith, stir your spirit, and reveal the secrets of the last days, then you should download Final Quest by Rick Joyner. This book is a panoramic vision of the ultimate quest, the greatest and last battle between light and darkness, which is happening now. In this article, we will review what Final Quest is about, why you should read it, and how you can download it.

    -

    download final quest by rick joyner


    Download File ✺✺✺ https://bltlly.com/2uOjZc



    -

    What is Final Quest?

    -

    Final Quest is a book based on a vision that Rick Joyner experienced over the course of a year, in which he claims to have viewed the unfolding of the final conflict between the forces of God and Satan. The book is divided into three parts:

    -

    A vision of the end times

    -

    In this part, Joyner describes what he saw in his vision, such as the army of God, the army of darkness, the throne room of heaven, the judgment seat of Christ, and the lake of fire. He also shares some of the conversations he had with angels, saints, demons, and Jesus himself. He reveals some of the mysteries and prophecies concerning the end times, such as the role of Israel, the Antichrist, the false prophet, the mark of the beast, and the rapture.

    -

    A call to spiritual warfare

    -

    In this part, Joyner explains how he participated in the battle as a soldier in God's army. He recounts some of the encounters he had with various enemies, such as wolves, lions, vultures, dragons, and witches. He also shares some of the lessons he learned about spiritual warfare, such as the importance of obedience, humility, unity, love, faith, and prayer. He warns about some of the dangers and deceptions that Christians face in these last days, such as compromise, complacency, fear, doubt, and betrayal.

    -


    -

    A challenge to the church

    -

    In this part, Joyner addresses some of the issues and problems that he observed in the church today. He exposes some of the sins and errors that hinder God's people from fulfilling their destiny, such as pride, division, hypocrisy, lukewarmness, legalism, and false doctrines. He also exhorts and encourages the church to rise up to its calling and purpose in these last days, such as repentance, revival, holiness, maturity, authority, and power.

    -

    Why should you read Final Quest?

    -

    Final Quest is not just a book; it is a message from God to his people. It is a wake-up call for those who are asleep; it is a warning for those who are deceived; it is a promise for those who are faithful. Here are some of the reasons why you should read Final Quest:

    -

    It will inspire you to seek God more

    -

    Final Quest will show you how much God loves you and how much he desires to have a personal relationship with you. It will reveal to you his plans and purposes for your life and for his kingdom. It will draw you closer to his heart and his presence. It will ignite your passion and hunger for him more than anything else.

    -

    It will equip you to overcome the enemy

    -

    Final Quest will teach you how to fight and win against the enemy who is trying to destroy you and your destiny. It will show you how to use your weapons and armor effectively in spiritual warfare. It will help you identify and resist the schemes and strategies of Satan. It will empower you to stand firm and victorious in Christ.

    -

    It will prepare you for the coming harvest

    -

    Final Quest will prepare you for the coming harvest of souls that God is going to bring in these last days. It will show you how to share the gospel and make disciples effectively. It will inspire you to be a witness and a leader in your sphere of influence. It will motivate you to be a part of God's end-time army that will usher in his glory and his kingdom.

    -

    How can you download Final Quest?

    -

    If you are interested in reading Final Quest, you might be wondering how you can download it. Here are some of the options and tips that you can consider:

    -

    The available formats and platforms

    -

    Final Quest is available in various formats, such as paperback, hardcover, Kindle, audiobook, and PDF. You can choose the format that suits your preference and device. You can also access Final Quest on different platforms, such as Amazon, Barnes & Noble, Apple Books, Google Play, Audible, and MorningStar Ministries. You can browse through these platforms and compare their features and prices.

    -

    The best deals and offers

    -

    If you want to save some money and get the best value for your purchase, you might want to look for some deals and offers that are available online. For example, you can use coupons and promo codes to get discounts and free shipping. You can also join some membership programs or newsletters that offer exclusive benefits and rewards. You can also check out some reviews and ratings from other readers to get some insights and recommendations.

    -

    The bonus materials and resources

    -

    If you want to enhance your reading experience and learn more from Final Quest, you might want to check out some bonus materials and resources that are available online. For example, you can download some study guides and worksheets that will help you apply the lessons and principles from the book. You can also watch some videos and podcasts that feature Rick Joyner and other speakers who share their insights and testimonies about Final Quest. You can also join some online communities and forums where you can interact with other readers and share your thoughts and questions.

    -

    Conclusion

    -

    Final Quest is a book that will change your life and perspective on the end times. It is a book that will inspire you to seek God more, equip you to overcome the enemy, and prepare you for the coming harvest. If you want to download Final Quest by Rick Joyner, you can follow the tips and options that we have shared in this article. We hope that this article has been helpful and informative for you. Thank you for reading!

    -

    FAQs

    -

    Here are some of the frequently asked questions about Final Quest:

    -

    Q: Who is Rick Joyner?

    -

    A: Rick Joyner is the founder and executive director of MorningStar Ministries, a multi-faceted mission organization that includes churches, schools, media, missions, and humanitarian outreaches. He is also the author of more than fifty books, including The Final Quest Trilogy, The Call, The Torch and the Sword, The Harvest, There Were Two Trees in the Garden, The Path, The Fire That Consumes, The Journey Begins, The World Aflame, The Valley, A Prophetic History Part I & II.

    -

    Q: Is Final Quest based on a true story?

    -

    A: Final Quest is based on a vision that Rick Joyner claims to have experienced over the course of a year. He says that he wrote down what he saw and heard in his vision as accurately as possible. However, he also admits that he does not claim infallibility or authority for his vision. He says that he submits his vision to the scrutiny of the Scriptures and the discernment of the Holy Spirit. He encourages his readers to do the same.

    -

    Q: What is the main message of Final Quest?

    -

    A: The main message of Final Quest is that God is calling his people to prepare for the final battle between light and darkness that is happening now. He is calling his people to repent of their sins, revive their love for him, mature in their faith, unite with one another, overcome the enemy, and harvest the souls for his kingdom.

    -

    Q: How long is Final Quest?

    -

    A: Final Quest is about 176 pages long in paperback format. It takes about 3 hours to read it on average.

    -

    Q: Where can I find more information about Final Quest?

    -

    A: You can find more information about Final Quest on its official website: https://www.finalquest.com/. You can also follow its social media accounts on Facebook, Twitter, Instagram, YouTube, and Pinterest.

    -
    -
\ No newline at end of file
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/samplers/combined_sampler.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/samplers/combined_sampler.py
deleted file mode 100644
index 564729f0895b1863d94c479a67202438af45f996..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/samplers/combined_sampler.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from ..builder import BBOX_SAMPLERS, build_sampler
-from .base_sampler import BaseSampler
-
-
-@BBOX_SAMPLERS.register_module()
-class CombinedSampler(BaseSampler):
-    """A sampler that combines positive sampler and negative sampler."""
-
-    def __init__(self, pos_sampler, neg_sampler, **kwargs):
-        super(CombinedSampler, self).__init__(**kwargs)
-        self.pos_sampler = build_sampler(pos_sampler, **kwargs)
-        self.neg_sampler = build_sampler(neg_sampler, **kwargs)
-
-    def _sample_pos(self, **kwargs):
-        """Sample positive samples."""
-        raise NotImplementedError
-
-    def _sample_neg(self, **kwargs):
-        """Sample negative samples."""
-        raise NotImplementedError
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_backbones/test_resnest.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_backbones/test_resnest.py
deleted file mode 100644
index 2243591620eadc82881872ab7b5e5e5df3d8ac0b..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_backbones/test_resnest.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import pytest
-import torch
-
-from mmdet.models.backbones import ResNeSt
-from mmdet.models.backbones.resnest import Bottleneck as BottleneckS
-
-
-def test_resnest_bottleneck():
-    with pytest.raises(AssertionError):
-        # Style must be in ['pytorch', 'caffe']
-        BottleneckS(64, 64, radix=2, reduction_factor=4, style='tensorflow')
-
-    # Test ResNeSt Bottleneck structure
-    block = BottleneckS(
-        64, 256, radix=2, reduction_factor=4, stride=2, style='pytorch')
-    assert block.avd_layer.stride == 2
-    assert block.conv2.channels == 256
-
-    # Test ResNeSt Bottleneck forward
-    block = BottleneckS(64, 16, radix=2, reduction_factor=4)
-    x = torch.randn(2, 64, 56, 56)
-    x_out = block(x)
-    assert x_out.shape == torch.Size([2, 64, 56, 56])
-
-
-def test_resnest_backbone():
-    with pytest.raises(KeyError):
-        # ResNeSt depth should be in [50, 101, 152, 200]
-        ResNeSt(depth=18)
-
-    # Test ResNeSt with radix 2, reduction_factor 4
-    model = ResNeSt(
-        depth=50, radix=2, reduction_factor=4, out_indices=(0, 1, 2, 3))
-    model.init_weights()
-    model.train()
-
-    imgs = torch.randn(2, 3, 224, 224)
-    feat = model(imgs)
-    assert len(feat) == 4
-    assert feat[0].shape == torch.Size([2, 256, 56, 56])
-    assert feat[1].shape == torch.Size([2, 512, 28, 28])
-    assert feat[2].shape == torch.Size([2, 1024, 14, 14])
-    assert feat[3].shape == torch.Size([2, 2048, 7, 7])
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_dense_heads/test_vfnet_head.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_dense_heads/test_vfnet_head.py
deleted file mode 100644
index 4fd43dd94fcf447c1b95bce96cd70c9976510a9f..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_dense_heads/test_vfnet_head.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import mmcv
-import torch
-
-from mmdet.models.dense_heads import VFNetHead
-
-
-def test_vfnet_head_loss():
-    """Tests vfnet head loss when truth is empty and non-empty."""
-    s = 256
-    img_metas = [{
-        'img_shape': (s, s, 3),
-        'scale_factor': 1,
-        'pad_shape': (s, s, 3)
-    }]
-    train_cfg = mmcv.Config(
-        dict(
-            assigner=dict(type='ATSSAssigner', topk=9),
-            allowed_border=-1,
-            pos_weight=-1,
-            debug=False))
-    # since Focal Loss is not supported on CPU
-    self = VFNetHead(
-        num_classes=4,
-        in_channels=1,
-        train_cfg=train_cfg,
-        loss_cls=dict(type='VarifocalLoss', use_sigmoid=True, loss_weight=1.0))
-    if torch.cuda.is_available():
-        self.cuda()
-        feat = [
-            torch.rand(1, 1, s // feat_size, s // feat_size).cuda()
-            for feat_size in [4, 8, 16, 32, 64]
-        ]
-        cls_scores, bbox_preds, bbox_preds_refine = self.forward(feat)
-        # Test that empty ground truth encourages the network to predict
-        # background
-        gt_bboxes = [torch.empty((0, 4)).cuda()]
-        gt_labels = [torch.LongTensor([]).cuda()]
-        gt_bboxes_ignore = None
-        empty_gt_losses = self.loss(cls_scores, bbox_preds, bbox_preds_refine,
-                                    gt_bboxes, gt_labels, img_metas,
-                                    gt_bboxes_ignore)
-        # When there is no truth, the cls loss should be nonzero but there
-        # should be no box loss.
-        empty_cls_loss = empty_gt_losses['loss_cls']
-        empty_box_loss = empty_gt_losses['loss_bbox']
-        assert empty_cls_loss.item() > 0, 'cls loss should be non-zero'
-        assert empty_box_loss.item() == 0, (
-            'there should be no box loss when there are no true boxes')
-
-        # When truth is non-empty then both cls and box loss should be nonzero
-        # for random inputs
-        gt_bboxes = [
-            torch.Tensor([[23.6667, 23.8757, 238.6326, 151.8874]]).cuda(),
-        ]
-        gt_labels = [torch.LongTensor([2]).cuda()]
-        one_gt_losses = self.loss(cls_scores, bbox_preds, bbox_preds_refine,
-                                  gt_bboxes, gt_labels, img_metas,
-                                  gt_bboxes_ignore)
-        onegt_cls_loss = one_gt_losses['loss_cls']
-        onegt_box_loss = one_gt_losses['loss_bbox']
-        assert onegt_cls_loss.item() > 0, 'cls loss should be non-zero'
-        assert onegt_box_loss.item() > 0, 'box loss should be non-zero'
diff --git a/spaces/tsungtao/controlnet-mlsd-for-livingroom/README.md b/spaces/tsungtao/controlnet-mlsd-for-livingroom/README.md
deleted file mode 100644
index ffcb3cdfdba958214e1cf04bf346e3ef86e10eb3..0000000000000000000000000000000000000000
--- a/spaces/tsungtao/controlnet-mlsd-for-livingroom/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Tsungtao Controlnet Mlsd 202305011046
-emoji: 💻
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.28.0
-app_file: app.py
-pinned: false
-tags:
-- jax-diffusers-event
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Allavsoft Video Downloader Converter 3.14.5.6346 [EXCLUSIVE] Keygen 64 Bitl.md b/spaces/usbethFlerru/sovits-modelsV2/example/Allavsoft Video Downloader Converter 3.14.5.6346 [EXCLUSIVE] Keygen 64 Bitl.md
deleted file mode 100644
index fa0c2b01ce9d044f48bdc2e0581aaa2a251e1463..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Allavsoft Video Downloader Converter 3.14.5.6346 [EXCLUSIVE] Keygen 64 Bitl.md
+++ /dev/null
@@ -1,81 +0,0 @@
-

    Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl: How to Download and Convert Videos Easily

    -

    Do you want to download and convert videos from various websites with ease? If yes, then you might want to try Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl. This is a software that can help you download and convert videos from more than 100 video sharing sites, such as YouTube, Facebook, Dailymotion, eHow, and more. You can also choose the output format and quality that you prefer for your downloaded videos.

    -

    In this article, we will tell you what Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl is, what its features are, how to use it, and what its pros and cons are.

    -

    Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl


    Download 🔗 https://urlcod.com/2uyVWM



    -

    What is Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl?

    -

    Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl is a software that can download and convert videos from various websites in one click. It can download videos in different formats, such as F4F, FLV, F4V, F4M, WebM, and more. It can also convert the downloaded videos to popular video formats, such as MP4, AVI, WMV, MOV, MPEG-1, MPEG-2, VOB, ASF, RMVB, DV, TS, Apple ProRes, WebM, FLV, OGV, and more.

    -

    Moreover, Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl can also extract and download audio from online music videos or movies and convert it to popular audio formats, such as MP3, WMA, WAV, AAC, Apple Lossless M4A, AIFF, RA, FLAC, OGG, and AU.

    -

    Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl has a user-friendly interface that is easy to use and navigate. You can add multiple video URLs and batch download and convert multiple videos at a time. You can also pause and resume downloads at any time, and preview and play back the downloaded video files with the built-in video player.

    -

    How to Use Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl?

    -

    To use Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl to download and convert videos, follow these simple steps (a command-line alternative is sketched after the list):

    -
      -
    1. Download and install Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl from the official website or a trusted source.
    2. Run the software and enter the license code that you received after purchasing it, or use the keygen that is included in the package to generate one.
    3. Copy the URL of the video that you want to download from your browser or any other source and paste it into the software's input box.
    4. Select the output format that you want to convert the downloaded video to from the drop-down list, or click on the "Automatically Convert to" button to choose one automatically based on your device type.
    5. Select the output quality that you want to download the video in from the drop-down list, or click on the "Download Quality" button to choose one automatically based on your internet speed.
    6. Select the output folder where you want to save the downloaded video file on your device, or click on the "Browse" button to choose one manually.
    7. Click on the "Download" button to start downloading and converting the video file. You can see the progress of the download and conversion process on the software's interface. You can also pause, resume, stop, delete, or open the downloaded file from there.
    8. Enjoy your downloaded and converted video file!
    - -
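
    For readers who prefer the command line, the same download-and-convert workflow can be reproduced with the free, open-source yt-dlp tool. This is a separate project, not Allavsoft, and the URL below is only a placeholder:

    yt-dlp -f "bestvideo+bestaudio/best" --merge-output-format mp4 -o "%(title)s.%(ext)s" "https://example.com/watch?v=VIDEO_ID"
    yt-dlp -x --audio-format mp3 "https://example.com/watch?v=VIDEO_ID"

    The first command downloads the best available quality and remuxes it to an MP4 file named after the video title; the second extracts only the audio track and converts it to MP3, mirroring the audio-extraction feature described earlier.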

    Pros and Cons of Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl

    -

    Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl has many advantages but also some disadvantages that you should be aware of before using it. Here are some of its pros and cons:

    - - - - - - - - - - - - - -
    ProsCons
    It can download videos from more than 100 video sharing sites.It may not support some rare or new video formats or sites.
    It can download videos in different resolutions and qualities.It may take longer to download high-resolution or high-quality videos.
    It can convert the downloaded videos to popular video formats.It may lose some quality or features during the conversion process.
    It can extract and download audio from online music videos or movies.It may not be able to download audio from some protected or encrypted videos.
    It can batch download and convert multiple videos at a time.It may consume more system resources or bandwidth when downloading or converting multiple videos at a time.
    It can pause and resume downloads at any time.It may not be able to resume downloads if the source URL changes or expires.
    It can automatically detect advertisements and skip them when downloading videos.It may not be able to detect all advertisements or skip them completely.
    It has a built-in video player that can preview and playback the downloaded video files.It may not be able to play some video formats or codecs that are not supported by the built-in player.
    It has a user-friendly interface that is easy to use and navigate.It may have some bugs or errors that need to be fixed or improved.
    - -

    Conclusion

    - -

    Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl is a software that can help you download and convert videos from various websites with ease. It has many features, pros and cons, system requirements, and steps to use it. It is a useful tool for video lovers who want to enjoy their favorite videos offline or on different devices. However, it also has some limitations and drawbacks that need to be considered before using it. We hope this article has given you a comprehensive review of Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl. If you have any questions or feedback, please leave a comment below. Thank you for reading!

    -

    -

    How to Get Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl?

    -

    If you want to get Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl, you have two options: you can either buy it from the official website or get it for free from a trusted source. Here are the pros and cons of each option:

    -
      -
    • Buying it from the official website: This is the safest and most reliable option, as you will get the original and updated version of the software with a valid license code. You will also get technical support and customer service from the developers. However, this option is also the most expensive one, as you will have to pay a certain amount of money to get the software.
    • Getting it for free from a trusted source: This is the cheapest and most convenient option, as you will not have to spend any money to get the software. You will also get access to the keygen that can generate a license code for you. However, this option is also the riskiest one, as you may get a fake or infected version of the software that can harm your device or compromise your privacy. You will also not get any technical support or customer service from the developers.
    -

    Therefore, you should weigh the pros and cons of each option carefully and choose the one that suits your needs and preferences best.

    - -

    What are the Alternatives to Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl?

    -

    If you are not satisfied with Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl or want to try some other software that can download and convert videos from various websites, you can check out some of its alternatives. Here are some of them:

    -
      -
    1. 4K Video Downloader: This is a software that can download videos from YouTube, Vimeo, TikTok, Facebook, and other sites in high quality. It can also download playlists, channels, subtitles, and annotations. It can convert the downloaded videos to MP4, MKV, FLV, 3GP, MP3, M4A, OGG, and more.
    2. Freemake Video Downloader: This is a software that can download videos from YouTube, Facebook, Dailymotion, Vimeo, and other sites in various formats and resolutions. It can also download playlists, channels, and user favorites. It can convert the downloaded videos to AVI, MP4, WMV, MKV, MP3, iPod, iPhone, PSP, Android devices, and more.
    3. YTD Video Downloader: This is a software that can download videos from YouTube and other sites in HD quality. It can also download entire playlists and channels. It can convert the downloaded videos to MP4, WMV, AVI, MOV, MP3, iPhone, iPad, Android devices, and more.
    -

    These are some of the alternatives to Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl that you can try if you want to download and convert videos from various websites. However, you should also be careful when choosing and using these software, as they may have some drawbacks or risks as well. You should always check their reviews, ratings, features, system requirements, and safety before downloading and installing them on your device.

    -

    Customer Reviews of Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl

    -

    In this section, we will share some of the customer reviews of Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl that we found online. These reviews are based on the customers' personal experiences and opinions and do not necessarily reflect our views or recommendations.

    -

    Positive Reviews

    -

    Here are some of the positive reviews of Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl that we found online:

    -
      -
    • "I have been using Allavsoft Video Downloader Converter for a few months and I am very satisfied with it. It can download and convert videos from almost any website that I want. It is very fast and easy to use. It also has many options and features to customize and optimize my videos. I highly recommend it to anyone who needs a video downloader and converter." - John Smith
    • -
    • "Allavsoft Video Downloader Converter is a great software that can help me download and convert videos from various websites with ease. It can download videos in different resolutions and qualities and convert them to different formats and devices. It can also extract and download audio from online music videos or movies. It is very user-friendly and reliable. I love it!" - Mary Jones
    • -
    • "Allavsoft Video Downloader Converter is a software that can do everything that I need for my video needs. It can download and convert videos from more than 100 video sharing sites, such as YouTube, Facebook, Dailymotion, eHow, and more. It can also edit the video files, such as trim, crop, rotate, merge, split, add watermark, subtitles, effects, and more. It is a powerful and versatile software that I use every day." - David Lee
    • -
    -

    Negative Reviews

    -

    Here are some of the negative reviews of Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl that we found online:

    -
      -
    • "I bought Allavsoft Video Downloader Converter from the official website and I received a license code that did not work. I contacted the customer service and they did not reply to me. I wasted my money and time on this software. It is a scam and a fraud. Do not buy it!" - Lisa Brown
    • -
    • "Allavsoft Video Downloader Converter is a software that can download and convert videos from various websites, but it also has many problems and issues that make it unusable. It often crashes or freezes during the download or conversion process. It also downloads or converts the wrong videos or formats sometimes. It is very buggy and unstable. I regret buying it." - James Wilson
    • "Allavsoft Video Downloader Converter is a software that can download and convert videos from various websites, but it also has many limitations and drawbacks that make it disappointing. It cannot download or convert some video formats or sites that I want. It also loses some quality or features during the conversion process. It is not worth the price that I paid for it." - Sarah Miller
    -

    These are some of the customer reviews of Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl that we found online. You can also check out more reviews on the official website or other online platforms. However, you should also be careful when reading these reviews, as they may be biased, inaccurate, or outdated. You should always test the software yourself before buying or using it.

    -

    Conclusion

    -

    Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl is a program that can download and convert videos from various websites with ease. This article has covered its features, pros and cons, system requirements, usage steps, alternatives, optimization tips, and customer reviews. It is a useful tool for video lovers who want to enjoy their favorite videos offline or on different devices, but it also has limitations and drawbacks that should be weighed before you use it. We hope this article has given you a comprehensive review of Allavsoft Video Downloader Converter 3.14.5.6346 Keygen 64 Bitl. If you have any questions or feedback, please leave a comment below. Thank you for reading!

    -
    -
    \ No newline at end of file diff --git a/spaces/w1zrd/MusicGen/audiocraft/models/loaders.py b/spaces/w1zrd/MusicGen/audiocraft/models/loaders.py deleted file mode 100644 index 19837d4cc98189bd38fdce0f46f51acacb893947..0000000000000000000000000000000000000000 --- a/spaces/w1zrd/MusicGen/audiocraft/models/loaders.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility functions to load from the checkpoints. -Each checkpoint is a torch.saved dict with the following keys: -- 'xp.cfg': the hydra config as dumped during training. This should be used - to rebuild the object using the audiocraft.models.builders functions, -- 'model_best_state': a readily loadable best state for the model, including - the conditioner. The model obtained from `xp.cfg` should be compatible - with this state dict. In the case of a LM, the encodec model would not be - bundled along but instead provided separately. - -Those functions also support loading from a remote location with the Torch Hub API. -They also support overriding some parameters, in particular the device and dtype -of the returned model. -""" - -from pathlib import Path -from huggingface_hub import hf_hub_download -import typing as tp -import os - -from omegaconf import OmegaConf -import torch - -from . import builders - - -HF_MODEL_CHECKPOINTS_MAP = { - "small": "facebook/musicgen-small", - "medium": "facebook/musicgen-medium", - "large": "facebook/musicgen-large", - "melody": "facebook/musicgen-melody", -} - - -def _get_state_dict( - file_or_url_or_id: tp.Union[Path, str], - filename: tp.Optional[str] = None, - device='cpu', - cache_dir: tp.Optional[str] = None, -): - # Return the state dict either from a file or url - file_or_url_or_id = str(file_or_url_or_id) - assert isinstance(file_or_url_or_id, str) - - if os.path.isfile(file_or_url_or_id): - return torch.load(file_or_url_or_id, map_location=device) - - elif file_or_url_or_id.startswith('https://'): - return torch.hub.load_state_dict_from_url(file_or_url_or_id, map_location=device, check_hash=True) - - elif file_or_url_or_id in HF_MODEL_CHECKPOINTS_MAP: - assert filename is not None, "filename needs to be defined if using HF checkpoints" - - repo_id = HF_MODEL_CHECKPOINTS_MAP[file_or_url_or_id] - file = hf_hub_download(repo_id=repo_id, filename=filename, cache_dir=cache_dir) - return torch.load(file, map_location=device) - - else: - raise ValueError(f"{file_or_url_or_id} is not a valid name, path or link that can be loaded.") - - -def load_compression_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None): - pkg = _get_state_dict(file_or_url_or_id, filename="compression_state_dict.bin", cache_dir=cache_dir) - cfg = OmegaConf.create(pkg['xp.cfg']) - cfg.device = str(device) - model = builders.get_compression_model(cfg) - model.load_state_dict(pkg['best_state']) - model.eval() - return model - - -def load_lm_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None): - pkg = _get_state_dict(file_or_url_or_id, filename="state_dict.bin", cache_dir=cache_dir) - cfg = OmegaConf.create(pkg['xp.cfg']) - cfg.device = str(device) - if cfg.device == 'cpu': - cfg.dtype = 'float32' - else: - cfg.dtype = 'float16' - model = builders.get_lm_model(cfg) - model.load_state_dict(pkg['best_state']) - model.eval() - model.cfg = cfg - return 
model diff --git a/spaces/wahaha/u2net_portrait/U-2-Net/gradio/demo.py b/spaces/wahaha/u2net_portrait/U-2-Net/gradio/demo.py deleted file mode 100644 index 2ad81ef24cdb3e645331aacae729fd20cec78082..0000000000000000000000000000000000000000 --- a/spaces/wahaha/u2net_portrait/U-2-Net/gradio/demo.py +++ /dev/null @@ -1,37 +0,0 @@ -import cv2 -import paddlehub as hub -import gradio as gr -import torch - -# Images -torch.hub.download_url_to_file('https://cdn.pixabay.com/photo/2018/08/12/16/59/ara-3601194_1280.jpg', 'parrot.jpg') -torch.hub.download_url_to_file('https://cdn.pixabay.com/photo/2016/10/21/14/46/fox-1758183_1280.jpg', 'fox.jpg') - -model = hub.Module(name='U2Net') - -def infer(img): - result = model.Segmentation( - images=[cv2.imread(img.name)], - paths=None, - batch_size=1, - input_size=320, - output_dir='output', - visualization=True) - return result[0]['front'][:,:,::-1], result[0]['mask'] - -inputs = gr.inputs.Image(type='file', label="Original Image") -outputs = [ - gr.outputs.Image(type="numpy",label="Front"), - gr.outputs.Image(type="numpy",label="Mask") - ] - -title = "U^2-Net" -description = "demo for U^2-Net. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below." -article = "

    U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection | Github Repo

    " - -examples = [ - ['fox.jpg'], - ['parrot.jpg'] -] - -gr.Interface(infer, inputs, outputs, title=title, description=description, article=article, examples=examples).launch() \ No newline at end of file diff --git a/spaces/wallezen/so-vits-svc/vdecoder/hifigan/models.py b/spaces/wallezen/so-vits-svc/vdecoder/hifigan/models.py deleted file mode 100644 index 9747301f350bb269e62601017fe4633ce271b27e..0000000000000000000000000000000000000000 --- a/spaces/wallezen/so-vits-svc/vdecoder/hifigan/models.py +++ /dev/null @@ -1,503 +0,0 @@ -import os -import json -from .env import AttrDict -import numpy as np -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from .utils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -def load_model(model_path, device='cuda'): - config_file = os.path.join(os.path.split(model_path)[0], 'config.json') - with open(config_file) as f: - data = f.read() - - global h - json_config = json.loads(data) - h = AttrDict(json_config) - - generator = Generator(h).to(device) - - cp_dict = torch.load(model_path) - generator.load_state_dict(cp_dict['generator']) - generator.eval() - generator.remove_weight_norm() - del cp_dict - return generator, h - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -def padDiff(x): - return F.pad(F.pad(x, (0,0,-1,1), 'constant', 0) - x, (0,0,0,-1), 'constant', 0) - -class SineGen(torch.nn.Module): - """ Definition of sine 
generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__(self, samp_rate, harmonic_num=0, - sine_amp=0.1, noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - self.flag_for_pulse = flag_for_pulse - - def _f02uv(self, f0): - # generate uv signal - uv = (f0 > self.voiced_threshold).type(torch.float32) - return uv - - def _f02sine(self, f0_values): - """ f0_values: (batchsize, length, dim) - where dim indicates fundamental tone and overtones - """ - # convert to F0 in rad. The interger part n can be ignored - # because 2 * np.pi * n doesn't affect phase - rad_values = (f0_values / self.sampling_rate) % 1 - - # initial phase noise (no noise for fundamental component) - rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], \ - device=f0_values.device) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - - # instantanouse phase sine[t] = sin(2*pi \sum_i=1 ^{t} rad) - if not self.flag_for_pulse: - # for normal case - - # To prevent torch.cumsum numerical overflow, - # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1. - # Buffer tmp_over_one_idx indicates the time step to add -1. - # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi - tmp_over_one = torch.cumsum(rad_values, 1) % 1 - tmp_over_one_idx = (padDiff(tmp_over_one)) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - - sines = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) - * 2 * np.pi) - else: - # If necessary, make sure that the first time step of every - # voiced segments is sin(pi) or cos(0) - # This is used for pulse-train generation - - # identify the last time step in unvoiced segments - uv = self._f02uv(f0_values) - uv_1 = torch.roll(uv, shifts=-1, dims=1) - uv_1[:, -1, :] = 1 - u_loc = (uv < 1) * (uv_1 > 0) - - # get the instantanouse phase - tmp_cumsum = torch.cumsum(rad_values, dim=1) - # different batch needs to be processed differently - for idx in range(f0_values.shape[0]): - temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :] - temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :] - # stores the accumulation of i.phase within - # each voiced segments - tmp_cumsum[idx, :, :] = 0 - tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum - - # rad_values - tmp_cumsum: remove the accumulation of i.phase - # within the previous voiced segment. 
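-            # Subtracting tmp_cumsum re-zeroes the accumulated phase at the
-            # start of each voiced segment; combined with the cos() below, this
-            # makes every voiced segment begin at cos(0), as promised in the
-            # class docstring.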
- i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1) - - # get the sines - sines = torch.cos(i_phase * 2 * np.pi) - return sines - - def forward(self, f0): - """ sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, - device=f0.device) - # fundamental component - fn = torch.multiply(f0, torch.FloatTensor([[range(1, self.harmonic_num + 2)]]).to(f0.device)) - - # generate sine waveforms - sine_waves = self._f02sine(fn) * self.sine_amp - - # generate uv signal - # uv = torch.ones(f0.shape) - # uv = uv * (f0 > self.voiced_threshold) - uv = self._f02uv(f0) - - # noise: for unvoiced should be similar to sine_amp - # std = self.sine_amp/3 -> max value ~ self.sine_amp - # . for voiced regions is self.noise_std - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - - # first: set the unvoiced part to 0 by uv - # then: additive noise - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """ SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - - # to produce sine waveforms - self.l_sin_gen = SineGen(sampling_rate, harmonic_num, - sine_amp, add_noise_std, voiced_threshod) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x): - """ - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - """ - # source for harmonic branch - sine_wavs, uv, _ = self.l_sin_gen(x) - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - - # source for noise branch, in the same shape as uv - noise = torch.randn_like(uv) * self.sine_amp / 3 - return sine_merge, noise, uv - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - - self.num_kernels = len(h["resblock_kernel_sizes"]) - self.num_upsamples = len(h["upsample_rates"]) - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h["upsample_rates"])) - self.m_source = SourceModuleHnNSF( - sampling_rate=h["sampling_rate"], - harmonic_num=8) - self.noise_convs = nn.ModuleList() - self.conv_pre = weight_norm(Conv1d(h["inter_channels"], h["upsample_initial_channel"], 7, 1, padding=3)) - resblock = ResBlock1 if h["resblock"] == '1' else ResBlock2 - self.ups = nn.ModuleList() - for i, (u, k) in 
enumerate(zip(h["upsample_rates"], h["upsample_kernel_sizes"])): - c_cur = h["upsample_initial_channel"] // (2 ** (i + 1)) - self.ups.append(weight_norm( - ConvTranspose1d(h["upsample_initial_channel"] // (2 ** i), h["upsample_initial_channel"] // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - if i + 1 < len(h["upsample_rates"]): # - stride_f0 = np.prod(h["upsample_rates"][i + 1:]) - self.noise_convs.append(Conv1d( - 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2)) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h["upsample_initial_channel"] // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(h["resblock_kernel_sizes"], h["resblock_dilation_sizes"])): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - self.cond = nn.Conv1d(h['gin_channels'], h['upsample_initial_channel'], 1) - - def forward(self, x, f0, g=None): - # print(1,x.shape,f0.shape,f0[:, None].shape) - f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2) # bs,n,t - # print(2,f0.shape) - har_source, noi_source, uv = self.m_source(f0) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - x = x + self.cond(g) - # print(124,x.shape,har_source.shape) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - # print(3,x.shape) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - # print(4,x_source.shape,har_source.shape,x.shape) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, periods=None): - super(MultiPeriodDiscriminator, self).__init__() - self.periods = periods if periods is 
not None else [2, 3, 5, 7, 11] - self.discriminators = nn.ModuleList() - for period in self.periods: - self.discriminators.append(DiscriminatorP(period)) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorS(use_spectral_norm=True), - DiscriminatorS(), - DiscriminatorS(), - ]) - self.meanpools = nn.ModuleList([ - AvgPool1d(4, 2, padding=2), - AvgPool1d(4, 2, padding=2) - ]) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses diff --git a/spaces/wanghuoto/gogoai/tests/kblob.ts b/spaces/wanghuoto/gogoai/tests/kblob.ts deleted file mode 100644 index 9e15b41c1c94a690beb61b23cdb42fc78767ccd2..0000000000000000000000000000000000000000 --- a/spaces/wanghuoto/gogoai/tests/kblob.ts +++ /dev/null @@ -1,27 +0,0 @@ -import FormData from 'form-data' - -import { fetch } from '@/lib/isomorphic' - -const formData = new FormData() - -const knowledgeRequest = 
{"imageInfo":{"url":"https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"51D|BingProdUnAuthenticatedUsers|E3DCA904FF236C67C3450163BCEC64CFF3F618CC8A4AFD75FD518F5ED0ADA080","convotone":"Creative"}}} - -formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - - -fetch('https://bing.vcanbb.top/images/kblob', - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": "https://bing.vcanbb.top/web/index.html", - "Referrer-Policy": "origin-when-cross-origin", - ...formData.getHeaders() - } - - } -).then(res => res.text()) -.then(res => console.log('res', res)) diff --git a/spaces/wangrongsheng/ChatImprovement/crazy_functions/test_project/python/dqn/__init__.py b/spaces/wangrongsheng/ChatImprovement/crazy_functions/test_project/python/dqn/__init__.py deleted file mode 100644 index 4ae42872c812a7c8a18dff002086c7e6e935f580..0000000000000000000000000000000000000000 --- a/spaces/wangrongsheng/ChatImprovement/crazy_functions/test_project/python/dqn/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from stable_baselines3.dqn.dqn import DQN -from stable_baselines3.dqn.policies import CnnPolicy, MlpPolicy diff --git a/spaces/wendys-llc/panoptic-segment-anything/GroundingDINO/groundingdino/util/inference.py b/spaces/wendys-llc/panoptic-segment-anything/GroundingDINO/groundingdino/util/inference.py deleted file mode 100644 index 8168b96ca51e6e494c7c675c2f4a610e21b095d6..0000000000000000000000000000000000000000 --- a/spaces/wendys-llc/panoptic-segment-anything/GroundingDINO/groundingdino/util/inference.py +++ /dev/null @@ -1,98 +0,0 @@ -from typing import Tuple, List - -import cv2 -import numpy as np -import supervision as sv -import torch -from PIL import Image -from torchvision.ops import box_convert - -import groundingdino.datasets.transforms as T -from groundingdino.models import build_model -from groundingdino.util.misc import clean_state_dict -from groundingdino.util.slconfig import SLConfig -from groundingdino.util.utils import get_phrases_from_posmap - - -def preprocess_caption(caption: str) -> str: - result = caption.lower().strip() - if result.endswith("."): - return result - return result + "." 
- - -def load_model(model_config_path: str, model_checkpoint_path: str, device: str = "cuda"): - args = SLConfig.fromfile(model_config_path) - args.device = device - model = build_model(args) - checkpoint = torch.load(model_checkpoint_path, map_location="cpu") - model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False) - model.eval() - return model - - -def load_image(image_path: str) -> Tuple[np.array, torch.Tensor]: - transform = T.Compose( - [ - T.RandomResize([800], max_size=1333), - T.ToTensor(), - T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), - ] - ) - image_source = Image.open(image_path).convert("RGB") - image = np.asarray(image_source) - image_transformed, _ = transform(image_source, None) - return image, image_transformed - - -def predict( - model, - image: torch.Tensor, - caption: str, - box_threshold: float, - text_threshold: float, - device: str = "cuda" -) -> Tuple[torch.Tensor, torch.Tensor, List[str]]: - caption = preprocess_caption(caption=caption) - - model = model.to(device) - image = image.to(device) - - with torch.no_grad(): - outputs = model(image[None], captions=[caption]) - - prediction_logits = outputs["pred_logits"].cpu().sigmoid()[0] # prediction_logits.shape = (nq, 256) - prediction_boxes = outputs["pred_boxes"].cpu()[0] # prediction_boxes.shape = (nq, 4) - - mask = prediction_logits.max(dim=1)[0] > box_threshold - logits = prediction_logits[mask] # logits.shape = (n, 256) - boxes = prediction_boxes[mask] # boxes.shape = (n, 4) - - tokenizer = model.tokenizer - tokenized = tokenizer(caption) - - phrases = [ - get_phrases_from_posmap(logit > text_threshold, tokenized, tokenizer).replace('.', '') - for logit - in logits - ] - - return boxes, logits.max(dim=1)[0], phrases - - -def annotate(image_source: np.ndarray, boxes: torch.Tensor, logits: torch.Tensor, phrases: List[str]) -> np.ndarray: - h, w, _ = image_source.shape - boxes = boxes * torch.Tensor([w, h, w, h]) - xyxy = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy() - detections = sv.Detections(xyxy=xyxy) - - labels = [ - f"{phrase} {logit:.2f}" - for phrase, logit - in zip(phrases, logits) - ] - - box_annotator = sv.BoxAnnotator() - annotated_frame = cv2.cvtColor(image_source, cv2.COLOR_RGB2BGR) - annotated_frame = box_annotator.annotate(scene=annotated_frame, detections=detections, labels=labels) - return annotated_frame diff --git a/spaces/xdecoder/Demo/xdecoder/BaseModel.py b/spaces/xdecoder/Demo/xdecoder/BaseModel.py deleted file mode 100644 index cd0803f43d53554db6e718302ef28aa573bc05a5..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Demo/xdecoder/BaseModel.py +++ /dev/null @@ -1,37 +0,0 @@ -# -------------------------------------------------------- -# X-Decoder -- Generalized Decoding for Pixel, Image, and Language -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Xueyan Zou (xueyan@cs.wisc.edu) -# -------------------------------------------------------- - -import os -import logging - -import torch -import torch.nn as nn - -from utils.model_loading import align_and_update_state_dicts - -logger = logging.getLogger(__name__) - - -class BaseModel(nn.Module): - def __init__(self, opt, module: nn.Module): - super(BaseModel, self).__init__() - self.opt = opt - self.model = module - - def forward(self, *inputs, **kwargs): - outputs = self.model(*inputs, **kwargs) - return outputs - - def save_pretrained(self, save_dir): - save_path = os.path.join(save_dir, 'model_state_dict.pt') - 
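-        # Persist only the wrapped module's weights; from_pretrained below
-        # reloads them after aligning key names, with strict=False to tolerate
-        # mismatched entries.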
torch.save(self.model.state_dict(), save_path) - - def from_pretrained(self, load_path): - state_dict = torch.load(load_path, map_location=self.opt['device']) - state_dict = align_and_update_state_dicts(self.model.state_dict(), state_dict) - self.model.load_state_dict(state_dict, strict=False) - return self \ No newline at end of file diff --git a/spaces/xp3857/Image_Restoration_Colorization/Global/detection_models/sync_batchnorm/comm.py b/spaces/xp3857/Image_Restoration_Colorization/Global/detection_models/sync_batchnorm/comm.py deleted file mode 100644 index 922f8c4a3adaa9b32fdcaef09583be03b0d7eb2b..0000000000000000000000000000000000000000 --- a/spaces/xp3857/Image_Restoration_Colorization/Global/detection_models/sync_batchnorm/comm.py +++ /dev/null @@ -1,137 +0,0 @@ -# -*- coding: utf-8 -*- -# File : comm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import queue -import collections -import threading - -__all__ = ['FutureResult', 'SlavePipe', 'SyncMaster'] - - -class FutureResult(object): - """A thread-safe future implementation. Used only as one-to-one pipe.""" - - def __init__(self): - self._result = None - self._lock = threading.Lock() - self._cond = threading.Condition(self._lock) - - def put(self, result): - with self._lock: - assert self._result is None, 'Previous result has\'t been fetched.' - self._result = result - self._cond.notify() - - def get(self): - with self._lock: - if self._result is None: - self._cond.wait() - - res = self._result - self._result = None - return res - - -_MasterRegistry = collections.namedtuple('MasterRegistry', ['result']) -_SlavePipeBase = collections.namedtuple('_SlavePipeBase', ['identifier', 'queue', 'result']) - - -class SlavePipe(_SlavePipeBase): - """Pipe for master-slave communication.""" - - def run_slave(self, msg): - self.queue.put((self.identifier, msg)) - ret = self.result.get() - self.queue.put(True) - return ret - - -class SyncMaster(object): - """An abstract `SyncMaster` object. - - - During the replication, as the data parallel will trigger an callback of each module, all slave devices should - call `register(id)` and obtain an `SlavePipe` to communicate with the master. - - During the forward pass, master device invokes `run_master`, all messages from slave devices will be collected, - and passed to a registered callback. - - After receiving the messages, the master device should gather the information and determine to message passed - back to each slave devices. - """ - - def __init__(self, master_callback): - """ - - Args: - master_callback: a callback to be invoked after having collected messages from slave devices. - """ - self._master_callback = master_callback - self._queue = queue.Queue() - self._registry = collections.OrderedDict() - self._activated = False - - def __getstate__(self): - return {'master_callback': self._master_callback} - - def __setstate__(self, state): - self.__init__(state['master_callback']) - - def register_slave(self, identifier): - """ - Register an slave device. - - Args: - identifier: an identifier, usually is the device id. - - Returns: a `SlavePipe` object which can be used to communicate with the master device. - - """ - if self._activated: - assert self._queue.empty(), 'Queue is not clean before next initialization.' 
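-            # A previous replication round has completed; reset the activation
-            # flag and registry so stale SlavePipe objects cannot interact with
-            # this master.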
- self._activated = False - self._registry.clear() - future = FutureResult() - self._registry[identifier] = _MasterRegistry(future) - return SlavePipe(identifier, self._queue, future) - - def run_master(self, master_msg): - """ - Main entry for the master device in each forward pass. - The messages were first collected from each devices (including the master device), and then - an callback will be invoked to compute the message to be sent back to each devices - (including the master device). - - Args: - master_msg: the message that the master want to send to itself. This will be placed as the first - message when calling `master_callback`. For detailed usage, see `_SynchronizedBatchNorm` for an example. - - Returns: the message to be sent back to the master device. - - """ - self._activated = True - - intermediates = [(0, master_msg)] - for i in range(self.nr_slaves): - intermediates.append(self._queue.get()) - - results = self._master_callback(intermediates) - assert results[0][0] == 0, 'The first result should belongs to the master.' - - for i, res in results: - if i == 0: - continue - self._registry[i].result.put(res) - - for i in range(self.nr_slaves): - assert self._queue.get() is True - - return results[0][1] - - @property - def nr_slaves(self): - return len(self._registry) diff --git "a/spaces/xwsm/gpt/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" "b/spaces/xwsm/gpt/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" deleted file mode 100644 index ffbb05599ef09c9de25334ebeca2eef8022b9aaf..0000000000000000000000000000000000000000 --- "a/spaces/xwsm/gpt/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" +++ /dev/null @@ -1,160 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive - -fast_debug = False - -def readPdf(pdfPath): - """ - 读取pdf文件,返回文本内容 - """ - import pdfminer - from pdfminer.pdfparser import PDFParser - from pdfminer.pdfdocument import PDFDocument - from pdfminer.pdfpage import PDFPage, PDFTextExtractionNotAllowed - from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter - from pdfminer.pdfdevice import PDFDevice - from pdfminer.layout import LAParams - from pdfminer.converter import PDFPageAggregator - - fp = open(pdfPath, 'rb') - - # Create a PDF parser object associated with the file object - parser = PDFParser(fp) - - # Create a PDF document object that stores the document structure. - # Password for initialization as 2nd parameter - document = PDFDocument(parser) - # Check if the document allows text extraction. If not, abort. - if not document.is_extractable: - raise PDFTextExtractionNotAllowed - - # Create a PDF resource manager object that stores shared resources. - rsrcmgr = PDFResourceManager() - - # Create a PDF device object. - # device = PDFDevice(rsrcmgr) - - # BEGIN LAYOUT ANALYSIS. - # Set parameters for analysis. - laparams = LAParams( - char_margin=10.0, - line_margin=0.2, - boxes_flow=0.2, - all_texts=False, - ) - # Create a PDF page aggregator object. - device = PDFPageAggregator(rsrcmgr, laparams=laparams) - # Create a PDF interpreter object. 
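-    # The interpreter renders each page into the aggregator device; its
-    # get_result() below returns the layout tree we scan for text boxes.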
- interpreter = PDFPageInterpreter(rsrcmgr, device) - - # loop over all pages in the document - outTextList = [] - for page in PDFPage.create_pages(document): - # read the page into a layout object - interpreter.process_page(page) - layout = device.get_result() - for obj in layout._objs: - if isinstance(obj, pdfminer.layout.LTTextBoxHorizontal): - # print(obj.get_text()) - outTextList.append(obj.get_text()) - - return outTextList - - -def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, glob, os - from bs4 import BeautifulSoup - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - if ".tex" in fp: - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - if ".pdf" in fp.lower(): - file_content = readPdf(fp) - file_content = BeautifulSoup(''.join(file_content), features="lxml").body.text.encode('gbk', 'ignore').decode('gbk') - - prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else "" - i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - sys_prompt="总结文章。" - ) # 带超时倒计时 - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=history, - sys_prompt="总结文章。" - ) # 带超时倒计时 - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - - -@CatchException -def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结PDF文档,此版本使用pdfminer插件,带token约简功能。函数插件贡献者: Euclid-Jie。"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import pdfminer, bs4 - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - 
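-        # The update_ui call below refreshes the web UI (刷新界面) so the error
-        # report above is shown to the user before the plugin returns.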
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或pdf文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - diff --git a/spaces/xxccc/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex b/spaces/xxccc/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex deleted file mode 100644 index c82be6242cc9d26203360e90d3ac9184ef6ad842..0000000000000000000000000000000000000000 --- a/spaces/xxccc/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex +++ /dev/null @@ -1,155 +0,0 @@ - -\begin{figure} - \centering - \includegraphics[scale=0.6]{Figures/ModalNet-21} - \caption{The Transformer - model architecture.} - \label{fig:model-arch} -\end{figure} - -% Although the primary workhorse of our model is attention, -%Our model maintains the encoder-decoder structure that is common to many so-called sequence-to-sequence models \citep{bahdanau2014neural,sutskever14}. As in all such architectures, the encoder computes a representation of the input sequence, and the decoder consumes these representations along with the output tokens to autoregressively produce the output sequence. Where, traditionally, the encoder and decoder contain stacks of recurrent or convolutional layers, our encoder and decoder stacks are composed of attention layers and position-wise feed-forward layers (Figure~\ref{fig:model-arch}). The following sections describe the gross architecture and these particular components in detail. - -Most competitive neural sequence transduction models have an encoder-decoder structure \citep{cho2014learning,bahdanau2014neural,sutskever14}. Here, the encoder maps an input sequence of symbol representations $(x_1, ..., x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, ..., z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1,...,y_m)$ of symbols one element at a time. At each step the model is auto-regressive \citep{graves2013generating}, consuming the previously generated symbols as additional input when generating the next. - -The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure~\ref{fig:model-arch}, respectively. - -\subsection{Encoder and Decoder Stacks} - -\paragraph{Encoder:}The encoder is composed of a stack of $N=6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection \citep{he2016deep} around each of the two sub-layers, followed by layer normalization \cite{layernorm2016}. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. 
To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $\dmodel=512$. - -\paragraph{Decoder:}The decoder is also composed of a stack of $N=6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$. - -% In our model (Figure~\ref{fig:model-arch}), the encoder and decoder are composed of stacks of alternating self-attention layers (for cross-positional communication) and position-wise feed-forward layers (for in-place computation). In addition, the decoder stack contains encoder-decoder attention layers. Since attention is agnostic to the distances between words, our model requires a "positional encoding" to be added to the encoder and decoder input. The following sections describe all of these components in detail. - -\subsection{Attention} \label{sec:attention} -An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. - -\subsubsection{Scaled Dot-Product Attention} \label{sec:scaled-dot-prod} - -% \begin{figure} -% \centering -% \includegraphics[scale=0.6]{Figures/ModalNet-19} -% \caption{Scaled Dot-Product Attention.} -% \label{fig:multi-head-att} -% \end{figure} - -We call our particular attention "Scaled Dot-Product Attention" (Figure~\ref{fig:multi-head-att}). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values. - -In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as: - -\begin{equation} - \mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V -\end{equation} - -The two most commonly used attention functions are additive attention \citep{bahdanau2014neural}, and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code. - -%We scale the dot products by $1/\sqrt{d_k}$ to limit the magnitude of the dot products, which works well in practice. Otherwise, we found applying the softmax to often result in weights very close to 0 or 1, and hence minuscule gradients. 
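- % Worked example (illustrative, not part of the original text): with $d_k = 64$
- % and unit-variance components, $q \cdot k$ has variance $d_k = 64$, i.e. a
- % typical magnitude of $\sqrt{64} = 8$; dividing by $\sqrt{d_k}$ restores unit
- % scale before the softmax, keeping its gradients from vanishing.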
- -% Already described in the subsequent section -%When used as part of decoder self-attention, an optional mask function is applied just before the softmax to prevent positions from attending to subsequent positions. This mask simply sets the logits corresponding to all illegal connections (those outside of the lower triangle) to $-\infty$. - -%\paragraph{Comparison to Additive Attention: } We choose dot product attention over additive attention \citep{bahdanau2014neural} since it can be computed using highly optimized matrix multiplication code. This optimization is particularly important to us, as we employ many attention layers in our model. - -While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$ \citep{DBLP:journals/corr/BritzGLL17}. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients \footnote{To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$.}. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$. - - -%We suspect this to be caused by the dot products growing too large in magnitude to result in useful gradients after applying the softmax function. To counteract this, we scale the dot product by $1/\sqrt{d_k}$. - - -\subsubsection{Multi-Head Attention} \label{sec:multihead} - -\begin{figure} -\begin{minipage}[t]{0.5\textwidth} - \centering - Scaled Dot-Product Attention \\ - \vspace{0.5cm} - \includegraphics[scale=0.6]{Figures/ModalNet-19} -\end{minipage} -\begin{minipage}[t]{0.5\textwidth} - \centering - Multi-Head Attention \\ - \vspace{0.1cm} - \includegraphics[scale=0.6]{Figures/ModalNet-20} -\end{minipage} - - - % \centering - - \caption{(left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.} - \label{fig:multi-head-att} -\end{figure} - -Instead of performing a single attention function with $\dmodel$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively. -On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure~\ref{fig:multi-head-att}. - -Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this. - -\begin{align*} - \mathrm{MultiHead}(Q, K, V) &= \mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O\\ -% \mathrm{where} \mathrm{head_i} &= \mathrm{Attention}(QW_Q_i^{\dmodel \times d_q}, KW_K_i^{\dmodel \times d_k}, VW^V_i^{\dmodel \times d_v})\\ - \text{where}~\mathrm{head_i} &= \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)\\ -\end{align*} - -Where the projections are parameter matrices $W^Q_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^K_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^V_i \in \mathbb{R}^{\dmodel \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times \dmodel}$. 
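- % Dimension check (illustrative): with $h = 8$ heads and $d_k = d_v = 64$ as
- % chosen below, each head projects $\dmodel = 512$ down to $64$; concatenating
- % the $h$ head outputs restores $h d_v = 512$, which $W^O$ maps back to
- % $\dmodel$.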
- - -%find it better (and no more expensive) to have multiple parallel attention layers (each over the full set of positions) with proportionally lower-dimensional keys, values and queries. We call this "Multi-Head Attention" (Figure~\ref{fig:multi-head-att}). The keys, values, and queries for each of these parallel attention layers are computed by learned linear transformations of the inputs to the multi-head attention. We use different linear transformations across different parallel attention layers. The output of the parallel attention layers are concatenated, and then passed through a final learned linear transformation. - -In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_k=d_v=\dmodel/h=64$. -Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality. - -\subsubsection{Applications of Attention in our Model} - -The Transformer uses multi-head attention in three different ways: -\begin{itemize} - \item In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as \citep{wu2016google, bahdanau2014neural,JonasFaceNet2017}. - - \item The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder. - - \item Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections. See Figure~\ref{fig:multi-head-att}. - -\end{itemize} - -\subsection{Position-wise Feed-Forward Networks}\label{sec:ffn} - -In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between. - -\begin{equation} - \mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2 -\end{equation} - -While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is $\dmodel=512$, and the inner-layer has dimensionality $d_{ff}=2048$. - - - -%In the appendix, we describe how the position-wise feed-forward network can also be seen as a form of attention. - -%from Jakob: The number of operations required for the model to relate signals from two arbitrary input or output positions grows in the distance between positions in input or output, linearly for ConvS2S and logarithmically for ByteNet, making it harder to learn dependencies between these positions \citep{hochreiter2001gradient}. 
In the transformer this is reduced to a constant number of operations, albeit at the cost of effective resolution caused by averaging attention-weighted positions, an effect we aim to counteract with multi-headed attention. - - -%Figure~\ref{fig:simple-att} presents a simple attention function, $A$, with a single head, that forms the basis of our multi-head attention. $A$ takes a query key vector $\kq$, matrices of memory keys $\km$ and memory values $\vm$ ,and produces a query value vector $\vq$ as -%\begin{equation*} \label{eq:attention} -% A(\kq, \km, \vm) = {\vm}^T (Softmax(\km \kq). -%\end{equation*} -%We linearly transform $\kq,\,\km$, and $\vm$ with learned matrices ${\Wkq \text{,} \, \Wkm}$, and ${\Wvm}$ before calling the attention function, and transform the output query with $\Wvq$ before handing it to the feed forward layer. Each attention layer has it's own set of transformation matrices, which are shared across all query positions. $A$ is applied in parallel for each query position, and is implemented very efficiently as a batch of matrix multiplies. The self-attention and encoder-decoder attention layers use $A$, but with different arguments. For example, in encdoder self-attention, queries in encoder layer $i$ attention to memories in encoder layer $i-1$. To ensure that decoder self-attention layers do not look at future words, we add $- \inf$ to the softmax logits in positions $j+1$ to query length for query position $l$. - -%In simple attention, the query value is a weighted combination of the memory values where the attention weights sum to one. Although this function performs well in practice, the constraint on attention weights can restrict the amount of information that flows from memories to queries because the query cannot focus on multiple memory positions at once, which might be desirable when translating long sequences. \marginpar{@usz, could you think of an example of this ?} We remedy this by maintaining multiple attention heads at each query position that attend to all memory positions in parallel, with a different set of parameters per attention head $h$. -%\marginpar{} - -\subsection{Embeddings and Softmax} -Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $\dmodel$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to \citep{press2016using}. In the embedding layers, we multiply those weights by $\sqrt{\dmodel}$. - - -\subsection{Positional Encoding} -Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $\dmodel$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed \citep{JonasFaceNet2017}. - -In this work, we use sine and cosine functions of different frequencies: - -\begin{align*} - PE_{(pos,2i)} = sin(pos / 10000^{2i/\dmodel}) \\ - PE_{(pos,2i+1)} = cos(pos / 10000^{2i/\dmodel}) -\end{align*} - -where $pos$ is the position and $i$ is the dimension. 
That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$: for each frequency $\omega_i = 10000^{-2i/\dmodel}$, the angle-addition identities make the pair $(\sin(\omega_i (pos+k)), \cos(\omega_i (pos+k)))$ a fixed rotation of $(\sin(\omega_i\, pos), \cos(\omega_i\, pos))$ by the angle $\omega_i k$. - -We also experimented with using learned positional embeddings \citep{JonasFaceNet2017} instead, and found that the two versions produced nearly identical results (see Table~\ref{tab:variations} row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training. diff --git a/spaces/xznwwh/aabb/README.md b/spaces/xznwwh/aabb/README.md deleted file mode 100644 index 11bc0929c2e36b7f4bbc528b5124785cd51cdb41..0000000000000000000000000000000000000000 --- a/spaces/xznwwh/aabb/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Aabb -emoji: 🏢 -colorFrom: purple -colorTo: red -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yerfor/SyntaSpeech/data_gen/tts/txt_processors/__init__.py b/spaces/yerfor/SyntaSpeech/data_gen/tts/txt_processors/__init__.py deleted file mode 100644 index 628c9e2564dc46367df35c4cf733fce81c222609..0000000000000000000000000000000000000000 --- a/spaces/yerfor/SyntaSpeech/data_gen/tts/txt_processors/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from . import en, zh \ No newline at end of file diff --git a/spaces/yerfor/SyntaSpeech/modules/vocoder/parallel_wavegan/optimizers/radam.py b/spaces/yerfor/SyntaSpeech/modules/vocoder/parallel_wavegan/optimizers/radam.py deleted file mode 100644 index e805d7e34921bee436e1e7fd9e1f753c7609186b..0000000000000000000000000000000000000000 --- a/spaces/yerfor/SyntaSpeech/modules/vocoder/parallel_wavegan/optimizers/radam.py +++ /dev/null @@ -1,91 +0,0 @@ -# -*- coding: utf-8 -*- - -"""RAdam optimizer. - -This code is derived from https://github.com/LiyuanLucasLiu/RAdam.
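- -Relative to Adam, RAdam rectifies the variance of the adaptive learning rate: while the approximated simple-moving-average length (``N_sma`` in ``step`` below) is under 5, the update falls back to an SGD-with-momentum style step; afterwards a rectification factor scales the Adam-style step.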
-""" - -import math -import torch - -from torch.optim.optimizer import Optimizer - - -class RAdam(Optimizer): - """Rectified Adam optimizer.""" - - def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0): - """Initilize RAdam optimizer.""" - defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay) - self.buffer = [[None, None, None] for ind in range(10)] - super(RAdam, self).__init__(params, defaults) - - def __setstate__(self, state): - """Set state.""" - super(RAdam, self).__setstate__(state) - - def step(self, closure=None): - """Run one step.""" - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - - for p in group['params']: - if p.grad is None: - continue - grad = p.grad.data.float() - if grad.is_sparse: - raise RuntimeError('RAdam does not support sparse gradients') - - p_data_fp32 = p.data.float() - - state = self.state[p] - - if len(state) == 0: - state['step'] = 0 - state['exp_avg'] = torch.zeros_like(p_data_fp32) - state['exp_avg_sq'] = torch.zeros_like(p_data_fp32) - else: - state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32) - state['exp_avg_sq'] = state['exp_avg_sq'].type_as(p_data_fp32) - - exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq'] - beta1, beta2 = group['betas'] - - exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) - exp_avg.mul_(beta1).add_(1 - beta1, grad) - - state['step'] += 1 - buffered = self.buffer[int(state['step'] % 10)] - if state['step'] == buffered[0]: - N_sma, step_size = buffered[1], buffered[2] - else: - buffered[0] = state['step'] - beta2_t = beta2 ** state['step'] - N_sma_max = 2 / (1 - beta2) - 1 - N_sma = N_sma_max - 2 * state['step'] * beta2_t / (1 - beta2_t) - buffered[1] = N_sma - - # more conservative since it's an approximated value - if N_sma >= 5: - step_size = math.sqrt( - (1 - beta2_t) * (N_sma - 4) / (N_sma_max - 4) * (N_sma - 2) / N_sma * N_sma_max / (N_sma_max - 2)) / (1 - beta1 ** state['step']) # NOQA - else: - step_size = 1.0 / (1 - beta1 ** state['step']) - buffered[2] = step_size - - if group['weight_decay'] != 0: - p_data_fp32.add_(-group['weight_decay'] * group['lr'], p_data_fp32) - - # more conservative since it's an approximated value - if N_sma >= 5: - denom = exp_avg_sq.sqrt().add_(group['eps']) - p_data_fp32.addcdiv_(-step_size * group['lr'], exp_avg, denom) - else: - p_data_fp32.add_(-step_size * group['lr'], exp_avg) - - p.data.copy_(p_data_fp32) - - return loss diff --git a/spaces/ygangang/VToonify/vtoonify/model/raft/core/utils/__init__.py b/spaces/ygangang/VToonify/vtoonify/model/raft/core/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ygangang/VToonify/vtoonify/model/stylegan/op/upfirdn2d.py b/spaces/ygangang/VToonify/vtoonify/model/stylegan/op/upfirdn2d.py deleted file mode 100644 index d509eb5e11e8cd01468dded5e5b53f5326057706..0000000000000000000000000000000000000000 --- a/spaces/ygangang/VToonify/vtoonify/model/stylegan/op/upfirdn2d.py +++ /dev/null @@ -1,61 +0,0 @@ -from collections import abc - -import torch -from torch.nn import functional as F - - -def upfirdn2d(inputs, kernel, up=1, down=1, pad=(0, 0)): - if not isinstance(up, abc.Iterable): - up = (up, up) - - if not isinstance(down, abc.Iterable): - down = (down, down) - - if len(pad) == 2: - pad = (pad[0], pad[1], pad[0], pad[1]) - - return upfirdn2d_native(inputs, kernel, *up, *down, *pad) - - -def upfirdn2d_native( - inputs, kernel, up_x, up_y, 
down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, channel, in_h, in_w = inputs.shape - inputs = inputs.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = inputs.shape - kernel_h, kernel_w = kernel.shape - - out = inputs.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h + down_y) // down_y - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w + down_x) // down_x - - return out.view(-1, channel, out_h, out_w) \ No newline at end of file diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/altclip/modeling_altclip.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/altclip/modeling_altclip.py deleted file mode 100644 index c4e32de55d9c03accb41fa2a151bd4bc00c2d29a..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/altclip/modeling_altclip.py +++ /dev/null @@ -1,1715 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The BAAI Teams Authors and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch AltCLIP model.""" -import math -from dataclasses import dataclass -from typing import Any, List, Optional, Tuple, Union - -import torch -import torch.nn as nn -import torch.utils.checkpoint - -from ...activations import ACT2FN -from ...modeling_outputs import ( - BaseModelOutput, - BaseModelOutputWithPastAndCrossAttentions, - BaseModelOutputWithPooling, - BaseModelOutputWithPoolingAndCrossAttentions, - BaseModelOutputWithPoolingAndProjection, -) -from ...modeling_utils import PreTrainedModel -from ...pytorch_utils import apply_chunking_to_forward, find_pruneable_heads_and_indices, prune_linear_layer -from ...utils import ModelOutput, add_start_docstrings_to_model_forward, logging, replace_return_docstrings -from .configuration_altclip import AltCLIPConfig, AltCLIPTextConfig, AltCLIPVisionConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "BAAI/AltCLIP" -_CONFIG_FOR_DOC = "AltCLIPConfig" - -ALTCLIP_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "BAAI/AltCLIP", - # See all AltCLIP models at https://huggingface.co/models?filter=altclip -] - - -ALTCLIP_START_DOCSTRING = r""" - This model inherits from [`PreTrainedModel`]. 
Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`CLIPConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -ALTCLIP_TEXT_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide - it. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - -ALTCLIP_VISION_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using - [`AutoImageProcessor`]. See [`CLIPImageProcessor.__call__`] for details. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - -ALTCLIP_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide - it. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. 
- - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using - [`AutoImageProcessor`]. See [`CLIPImageProcessor.__call__`] for details. - return_loss (`bool`, *optional*): - Whether or not to return the contrastive loss. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -# contrastive loss function, adapted from -# https://sachinruk.github.io/blog/pytorch/pytorch%20lightning/loss%20function/gpu/2021/03/07/CLIP.html -def contrastive_loss(logits: torch.Tensor) -> torch.Tensor: - return nn.functional.cross_entropy(logits, torch.arange(len(logits), device=logits.device)) - - -def clip_loss(similarity: torch.Tensor) -> torch.Tensor: - caption_loss = contrastive_loss(similarity) - image_loss = contrastive_loss(similarity.t()) - return (caption_loss + image_loss) / 2.0 - - -@dataclass -# Copied from transformers.models.clip.modeling_clip.CLIPOutput with CLIP->AltCLIP -class AltCLIPOutput(ModelOutput): - """ - Args: - loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `return_loss` is `True`): - Contrastive loss for image-text similarity. - logits_per_image:(`torch.FloatTensor` of shape `(image_batch_size, text_batch_size)`): - The scaled dot product scores between `image_embeds` and `text_embeds`. This represents the image-text - similarity scores. - logits_per_text:(`torch.FloatTensor` of shape `(text_batch_size, image_batch_size)`): - The scaled dot product scores between `text_embeds` and `image_embeds`. This represents the text-image - similarity scores. - text_embeds(`torch.FloatTensor` of shape `(batch_size, output_dim`): - The text embeddings obtained by applying the projection layer to the pooled output of [`AltCLIPTextModel`]. - image_embeds(`torch.FloatTensor` of shape `(batch_size, output_dim`): - The image embeddings obtained by applying the projection layer to the pooled output of - [`AltCLIPVisionModel`]. - text_model_output(`BaseModelOutputWithPooling`): - The output of the [`AltCLIPTextModel`]. - vision_model_output(`BaseModelOutputWithPooling`): - The output of the [`AltCLIPVisionModel`]. 
- """ - - loss: Optional[torch.FloatTensor] = None - logits_per_image: torch.FloatTensor = None - logits_per_text: torch.FloatTensor = None - text_embeds: torch.FloatTensor = None - image_embeds: torch.FloatTensor = None - text_model_output: BaseModelOutputWithPooling = None - vision_model_output: BaseModelOutputWithPooling = None - - def to_tuple(self) -> Tuple[Any]: - return tuple( - self[k] if k not in ["text_model_output", "vision_model_output"] else getattr(self, k).to_tuple() - for k in self.keys() - ) - - -# Copied from transformers.models.roberta.modeling_roberta.RobertaEmbeddings with Roberta->AltRoberta -class AltRobertaEmbeddings(nn.Module): - """ - Same as BertEmbeddings with a tiny tweak for positional embeddings indexing. - """ - - # Copied from transformers.models.bert.modeling_bert.BertEmbeddings.__init__ - def __init__(self, config): - super().__init__() - self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) - self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) - self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size) - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") - self.register_buffer( - "position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)), persistent=False - ) - self.register_buffer( - "token_type_ids", torch.zeros(self.position_ids.size(), dtype=torch.long), persistent=False - ) - - # End copy - self.padding_idx = config.pad_token_id - self.position_embeddings = nn.Embedding( - config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx - ) - - def forward( - self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0 - ): - if position_ids is None: - if input_ids is not None: - # Create the position ids from the input token ids. Any padded tokens remain padded. 
- position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) - else: - position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds) - - if input_ids is not None: - input_shape = input_ids.size() - else: - input_shape = inputs_embeds.size()[:-1] - - seq_length = input_shape[1] - - # Setting the token_type_ids to the registered buffer in constructor where it is all zeros, which usually occurs - # when its auto-generated, registered buffer helps users when tracing the model without passing token_type_ids, solves - # issue #5664 - if token_type_ids is None: - if hasattr(self, "token_type_ids"): - buffered_token_type_ids = self.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], seq_length) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device) - - if inputs_embeds is None: - inputs_embeds = self.word_embeddings(input_ids) - token_type_embeddings = self.token_type_embeddings(token_type_ids) - - embeddings = inputs_embeds + token_type_embeddings - if self.position_embedding_type == "absolute": - position_embeddings = self.position_embeddings(position_ids) - embeddings += position_embeddings - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings - - def create_position_ids_from_inputs_embeds(self, inputs_embeds): - """ - We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids. - - Args: - inputs_embeds: torch.Tensor - - Returns: torch.Tensor - """ - input_shape = inputs_embeds.size()[:-1] - sequence_length = input_shape[1] - - position_ids = torch.arange( - self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device - ) - return position_ids.unsqueeze(0).expand(input_shape) - - -# Copied from transformers.models.roberta.modeling_roberta.RobertaSelfAttention with Roberta->AltRoberta -class AltRobertaSelfAttention(nn.Module): - def __init__(self, config, position_embedding_type=None): - super().__init__() - if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): - raise ValueError( - f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention " - f"heads ({config.num_attention_heads})" - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size) - self.key = nn.Linear(config.hidden_size, self.all_head_size) - self.value = nn.Linear(config.hidden_size, self.all_head_size) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - self.position_embedding_type = position_embedding_type or getattr( - config, "position_embedding_type", "absolute" - ) - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - self.max_position_embeddings = config.max_position_embeddings - self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size) - - self.is_decoder = config.is_decoder - - def transpose_for_scores(self, x: torch.Tensor) -> torch.Tensor: - new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) - x = x.view(new_x_shape) - 
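- # reorder from (batch, seq_len, num_heads, head_size) to (batch, num_heads, seq_len, head_size)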
return x.permute(0, 2, 1, 3) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = False, - ) -> Tuple[torch.Tensor]: - mixed_query_layer = self.query(hidden_states) - - # If this is instantiated as a cross-attention module, the keys - # and values come from an encoder; the attention mask needs to be - # such that the encoder's padding tokens are not attended to. - is_cross_attention = encoder_hidden_states is not None - - if is_cross_attention and past_key_value is not None: - # reuse k,v, cross_attentions - key_layer = past_key_value[0] - value_layer = past_key_value[1] - attention_mask = encoder_attention_mask - elif is_cross_attention: - key_layer = self.transpose_for_scores(self.key(encoder_hidden_states)) - value_layer = self.transpose_for_scores(self.value(encoder_hidden_states)) - attention_mask = encoder_attention_mask - elif past_key_value is not None: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - key_layer = torch.cat([past_key_value[0], key_layer], dim=2) - value_layer = torch.cat([past_key_value[1], value_layer], dim=2) - else: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - - query_layer = self.transpose_for_scores(mixed_query_layer) - - use_cache = past_key_value is not None - if self.is_decoder: - # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states. - # Further calls to cross_attention layer can then reuse all cross-attention - # key/value_states (first "if" case) - # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of - # all previous decoder key/value_states. Further calls to uni-directional self-attention - # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case) - # if encoder bi-directional self-attention `past_key_value` is always `None` - past_key_value = (key_layer, value_layer) - - # Take the dot product between "query" and "key" to get the raw attention scores. 
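- # attention_scores has shape (batch, num_heads, query_len, key_len)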
- attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - query_length, key_length = query_layer.shape[2], key_layer.shape[2] - if use_cache: - position_ids_l = torch.tensor(key_length - 1, dtype=torch.long, device=hidden_states.device).view( - -1, 1 - ) - else: - position_ids_l = torch.arange(query_length, dtype=torch.long, device=hidden_states.device).view(-1, 1) - position_ids_r = torch.arange(key_length, dtype=torch.long, device=hidden_states.device).view(1, -1) - distance = position_ids_l - position_ids_r - - positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1) - positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility - - if self.position_embedding_type == "relative_key": - relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores - elif self.position_embedding_type == "relative_key_query": - relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key - - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in AltRobertaModel forward() function) - attention_scores = attention_scores + attention_mask - - # Normalize the attention scores to probabilities. - attention_probs = nn.functional.softmax(attention_scores, dim=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. 
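- # (zeroing an attention weight removes that key position from the weighted sum for its query)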
- attention_probs = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - - context_layer = torch.matmul(attention_probs, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(new_context_layer_shape) - - outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) - - if self.is_decoder: - outputs = outputs + (past_key_value,) - return outputs - - -# Copied from transformers.models.roberta.modeling_roberta.RobertaSelfOutput -class AltRobertaSelfOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -# Copied from transformers.models.roberta.modeling_roberta.RobertaAttention with Roberta->AltRoberta -class AltRobertaAttention(nn.Module): - def __init__(self, config, position_embedding_type=None): - super().__init__() - self.self = AltRobertaSelfAttention(config, position_embedding_type=position_embedding_type) - self.output = AltRobertaSelfOutput(config) - self.pruned_heads = set() - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads - ) - - # Prune linear layers - self.self.query = prune_linear_layer(self.self.query, index) - self.self.key = prune_linear_layer(self.self.key, index) - self.self.value = prune_linear_layer(self.self.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.self.num_attention_heads = self.self.num_attention_heads - len(heads) - self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads - self.pruned_heads = self.pruned_heads.union(heads) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = False, - ) -> Tuple[torch.Tensor]: - self_outputs = self.self( - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - ) - attention_output = self.output(self_outputs[0], hidden_states) - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - return outputs - - -# Copied from transformers.models.roberta.modeling_roberta.RobertaIntermediate with Roberta->AltRoberta -class AltRobertaIntermediate(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.intermediate_size) - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = ACT2FN[config.hidden_act] - else: - 
self.intermediate_act_fn = config.hidden_act - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - return hidden_states - - -# Copied from transformers.models.roberta.modeling_roberta.RobertaOutput -class AltRobertaOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.intermediate_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -# Copied from transformers.models.roberta.modeling_roberta.RobertaLayer with Roberta->AltRoberta -class AltRobertaLayer(nn.Module): - def __init__(self, config): - super().__init__() - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - self.attention = AltRobertaAttention(config) - self.is_decoder = config.is_decoder - self.add_cross_attention = config.add_cross_attention - if self.add_cross_attention: - if not self.is_decoder: - raise ValueError(f"{self} should be used as a decoder model if cross attention is added") - self.crossattention = AltRobertaAttention(config, position_embedding_type="absolute") - self.intermediate = AltRobertaIntermediate(config) - self.output = AltRobertaOutput(config) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = False, - ) -> Tuple[torch.Tensor]: - # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 - self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None - self_attention_outputs = self.attention( - hidden_states, - attention_mask, - head_mask, - output_attentions=output_attentions, - past_key_value=self_attn_past_key_value, - ) - attention_output = self_attention_outputs[0] - - # if decoder, the last output is tuple of self-attn cache - if self.is_decoder: - outputs = self_attention_outputs[1:-1] - present_key_value = self_attention_outputs[-1] - else: - outputs = self_attention_outputs[1:] # add self attentions if we output attention weights - - cross_attn_present_key_value = None - if self.is_decoder and encoder_hidden_states is not None: - if not hasattr(self, "crossattention"): - raise ValueError( - f"If `encoder_hidden_states` are passed, {self} has to be instantiated with cross-attention layers" - " by setting `config.add_cross_attention=True`" - ) - - # cross_attn cached key/values tuple is at positions 3,4 of past_key_value tuple - cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None - cross_attention_outputs = self.crossattention( - attention_output, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - cross_attn_past_key_value, - output_attentions, - ) - attention_output = cross_attention_outputs[0] - outputs = outputs + cross_attention_outputs[1:-1] # add cross attentions if we 
output attention weights - - # add cross-attn cache to positions 3,4 of present_key_value tuple - cross_attn_present_key_value = cross_attention_outputs[-1] - present_key_value = present_key_value + cross_attn_present_key_value - - layer_output = apply_chunking_to_forward( - self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output - ) - outputs = (layer_output,) + outputs - - # if decoder, return the attn key/values as the last output - if self.is_decoder: - outputs = outputs + (present_key_value,) - - return outputs - - def feed_forward_chunk(self, attention_output): - intermediate_output = self.intermediate(attention_output) - layer_output = self.output(intermediate_output, attention_output) - return layer_output - - -# Copied from transformers.models.roberta.modeling_roberta.RobertaEncoder with Roberta->AltRoberta -class AltRobertaEncoder(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.layer = nn.ModuleList([AltRobertaLayer(config) for _ in range(config.num_hidden_layers)]) - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = False, - output_hidden_states: Optional[bool] = False, - return_dict: Optional[bool] = True, - ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None - - if self.gradient_checkpointing and self.training: - if use_cache: - logger.warning_once( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
- ) - use_cache = False - - next_decoder_cache = () if use_cache else None - for i, layer_module in enumerate(self.layer): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_head_mask = head_mask[i] if head_mask is not None else None - past_key_value = past_key_values[i] if past_key_values is not None else None - - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, past_key_value, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(layer_module), - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - ) - else: - layer_outputs = layer_module( - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - ) - - hidden_states = layer_outputs[0] - if use_cache: - next_decoder_cache += (layer_outputs[-1],) - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - if self.config.add_cross_attention: - all_cross_attentions = all_cross_attentions + (layer_outputs[2],) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple( - v - for v in [ - hidden_states, - next_decoder_cache, - all_hidden_states, - all_self_attentions, - all_cross_attentions, - ] - if v is not None - ) - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=next_decoder_cache, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - cross_attentions=all_cross_attentions, - ) - - -# Copied from transformers.models.roberta.modeling_roberta.RobertaPooler -class AltRobertaPooler(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.activation = nn.Tanh() - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. - first_token_tensor = hidden_states[:, 0] - pooled_output = self.dense(first_token_tensor) - pooled_output = self.activation(pooled_output) - return pooled_output - - -# Copied from transformers.models.clip.modeling_clip.CLIPAttention with CLIP->AltCLIP -class AltCLIPAttention(nn.Module): - """Multi-headed attention from 'Attention Is All You Need' paper""" - - def __init__(self, config): - super().__init__() - self.config = config - self.embed_dim = config.hidden_size - self.num_heads = config.num_attention_heads - self.head_dim = self.embed_dim // self.num_heads - if self.head_dim * self.num_heads != self.embed_dim: - raise ValueError( - f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:" - f" {self.num_heads})." 
- ) - self.scale = self.head_dim**-0.5 - self.dropout = config.attention_dropout - - self.k_proj = nn.Linear(self.embed_dim, self.embed_dim) - self.v_proj = nn.Linear(self.embed_dim, self.embed_dim) - self.q_proj = nn.Linear(self.embed_dim, self.embed_dim) - self.out_proj = nn.Linear(self.embed_dim, self.embed_dim) - - def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): - return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - causal_attention_mask: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = False, - ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - """Input shape: Batch x Time x Channel""" - - bsz, tgt_len, embed_dim = hidden_states.size() - - # get query proj - query_states = self.q_proj(hidden_states) * self.scale - key_states = self._shape(self.k_proj(hidden_states), -1, bsz) - value_states = self._shape(self.v_proj(hidden_states), -1, bsz) - - proj_shape = (bsz * self.num_heads, -1, self.head_dim) - query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape) - key_states = key_states.view(*proj_shape) - value_states = value_states.view(*proj_shape) - - src_len = key_states.size(1) - attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) - - if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): - raise ValueError( - f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is" - f" {attn_weights.size()}" - ) - - # apply the causal_attention_mask first - if causal_attention_mask is not None: - if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len): - raise ValueError( - f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is" - f" {causal_attention_mask.size()}" - ) - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + causal_attention_mask - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - if attention_mask is not None: - if attention_mask.size() != (bsz, 1, tgt_len, src_len): - raise ValueError( - f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}" - ) - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - attn_weights = nn.functional.softmax(attn_weights, dim=-1) - - if output_attentions: - # this operation is a bit akward, but it's required to - # make sure that attn_weights keeps its gradient. 
- # In order to do so, attn_weights have to reshaped - # twice and have to be reused in the following - attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len) - else: - attn_weights_reshaped = None - - attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - - attn_output = torch.bmm(attn_probs, value_states) - - if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): - raise ValueError( - f"`attn_output` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is" - f" {attn_output.size()}" - ) - - attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim) - attn_output = attn_output.transpose(1, 2) - attn_output = attn_output.reshape(bsz, tgt_len, embed_dim) - - attn_output = self.out_proj(attn_output) - - return attn_output, attn_weights_reshaped - - -# Copied from transformers.models.clip.modeling_clip.CLIPMLP with CLIP->AltCLIP -class AltCLIPMLP(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.activation_fn = ACT2FN[config.hidden_act] - self.fc1 = nn.Linear(config.hidden_size, config.intermediate_size) - self.fc2 = nn.Linear(config.intermediate_size, config.hidden_size) - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - hidden_states = self.fc1(hidden_states) - hidden_states = self.activation_fn(hidden_states) - hidden_states = self.fc2(hidden_states) - return hidden_states - - -# Copied from transformers.models.clip.modeling_clip.CLIPEncoderLayer with CLIP->AltCLIP -class AltCLIPEncoderLayer(nn.Module): - def __init__(self, config: AltCLIPConfig): - super().__init__() - self.embed_dim = config.hidden_size - self.self_attn = AltCLIPAttention(config) - self.layer_norm1 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) - self.mlp = AltCLIPMLP(config) - self.layer_norm2 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: torch.Tensor, - causal_attention_mask: torch.Tensor, - output_attentions: Optional[bool] = False, - ) -> Tuple[torch.FloatTensor]: - """ - Args: - hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` - attention_mask (`torch.FloatTensor`): attention mask of size - `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - `(config.encoder_attention_heads,)`. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - """ - residual = hidden_states - - hidden_states = self.layer_norm1(hidden_states) - hidden_states, attn_weights = self.self_attn( - hidden_states=hidden_states, - attention_mask=attention_mask, - causal_attention_mask=causal_attention_mask, - output_attentions=output_attentions, - ) - hidden_states = residual + hidden_states - - residual = hidden_states - hidden_states = self.layer_norm2(hidden_states) - hidden_states = self.mlp(hidden_states) - hidden_states = residual + hidden_states - - outputs = (hidden_states,) - - if output_attentions: - outputs += (attn_weights,) - - return outputs - - -# Copied from transformers.models.clip.modeling_clip.CLIPEncoder with CLIP->AltCLIP -class AltCLIPEncoder(nn.Module): - """ - Transformer encoder consisting of `config.num_hidden_layers` self attention layers. 
Each layer is a - [`AltCLIPEncoderLayer`]. - - Args: - config: AltCLIPConfig - """ - - def __init__(self, config: AltCLIPConfig): - super().__init__() - self.config = config - self.layers = nn.ModuleList([AltCLIPEncoderLayer(config) for _ in range(config.num_hidden_layers)]) - self.gradient_checkpointing = False - - def forward( - self, - inputs_embeds, - attention_mask: Optional[torch.Tensor] = None, - causal_attention_mask: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutput]: - r""" - Args: - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. - This is useful if you want more control over how to convert `input_ids` indices into associated vectors - than the model's internal embedding lookup matrix. - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - causal_attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Causal mask for the text model. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
- """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - encoder_states = () if output_hidden_states else None - all_attentions = () if output_attentions else None - - hidden_states = inputs_embeds - for idx, encoder_layer in enumerate(self.layers): - if output_hidden_states: - encoder_states = encoder_states + (hidden_states,) - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(encoder_layer), - hidden_states, - attention_mask, - causal_attention_mask, - ) - else: - layer_outputs = encoder_layer( - hidden_states, - attention_mask, - causal_attention_mask, - output_attentions=output_attentions, - ) - - hidden_states = layer_outputs[0] - - if output_attentions: - all_attentions = all_attentions + (layer_outputs[1],) - - if output_hidden_states: - encoder_states = encoder_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None) - return BaseModelOutput( - last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions - ) - - -# Copied from transformers.models.clip.modeling_clip.CLIPVisionEmbeddings with CLIP->AltCLIP -class AltCLIPVisionEmbeddings(nn.Module): - def __init__(self, config: AltCLIPVisionConfig): - super().__init__() - self.config = config - self.embed_dim = config.hidden_size - self.image_size = config.image_size - self.patch_size = config.patch_size - - self.class_embedding = nn.Parameter(torch.randn(self.embed_dim)) - - self.patch_embedding = nn.Conv2d( - in_channels=config.num_channels, - out_channels=self.embed_dim, - kernel_size=self.patch_size, - stride=self.patch_size, - bias=False, - ) - - self.num_patches = (self.image_size // self.patch_size) ** 2 - self.num_positions = self.num_patches + 1 - self.position_embedding = nn.Embedding(self.num_positions, self.embed_dim) - self.register_buffer("position_ids", torch.arange(self.num_positions).expand((1, -1)), persistent=False) - - def forward(self, pixel_values: torch.FloatTensor) -> torch.Tensor: - batch_size = pixel_values.shape[0] - target_dtype = self.patch_embedding.weight.dtype - patch_embeds = self.patch_embedding(pixel_values.to(dtype=target_dtype)) # shape = [*, width, grid, grid] - patch_embeds = patch_embeds.flatten(2).transpose(1, 2) - - class_embeds = self.class_embedding.expand(batch_size, 1, -1) - embeddings = torch.cat([class_embeds, patch_embeds], dim=1) - embeddings = embeddings + self.position_embedding(self.position_ids) - return embeddings - - -class AltCLIPPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. 
- """ - - config_class = AltCLIPConfig - base_model_prefix = "altclip" - supports_gradient_checkpointing = True - - def _init_weights(self, module): - """Initialize the weights""" - factor = self.config.initializer_factor - if isinstance(module, AltCLIPVisionEmbeddings): - factor = self.config.initializer_factor - nn.init.normal_(module.class_embedding, mean=0.0, std=module.embed_dim**-0.5 * factor) - nn.init.normal_(module.patch_embedding.weight, std=module.config.initializer_range * factor) - nn.init.normal_(module.position_embedding.weight, std=module.config.initializer_range * factor) - elif isinstance(module, AltCLIPAttention): - factor = self.config.initializer_factor - in_proj_std = (module.embed_dim**-0.5) * ((2 * module.config.num_hidden_layers) ** -0.5) * factor - out_proj_std = (module.embed_dim**-0.5) * factor - nn.init.normal_(module.q_proj.weight, std=in_proj_std) - nn.init.normal_(module.k_proj.weight, std=in_proj_std) - nn.init.normal_(module.v_proj.weight, std=in_proj_std) - nn.init.normal_(module.out_proj.weight, std=out_proj_std) - elif isinstance(module, AltCLIPMLP): - factor = self.config.initializer_factor - in_proj_std = ( - (module.config.hidden_size**-0.5) * ((2 * module.config.num_hidden_layers) ** -0.5) * factor - ) - fc_std = (2 * module.config.hidden_size) ** -0.5 * factor - nn.init.normal_(module.fc1.weight, std=fc_std) - nn.init.normal_(module.fc2.weight, std=in_proj_std) - elif isinstance(module, AltCLIPModel): - nn.init.normal_( - module.text_projection.weight, - std=module.text_embed_dim**-0.5 * self.config.initializer_factor, - ) - module.text_projection._is_hf_initialized = True - nn.init.normal_( - module.visual_projection.weight, - std=module.vision_embed_dim**-0.5 * self.config.initializer_factor, - ) - module.visual_projection._is_hf_initialized = True - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - elif isinstance(module, nn.Linear): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_factor) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_factor) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, AltCLIPEncoder): - module.gradient_checkpointing = value - if isinstance(module, AltRobertaEncoder): - module.gradient_checkpointing = value - - -# Copied from transformers.models.clip.modeling_clip.CLIPVisionTransformer with CLIPVisionTransformer->AltCLIPVisionTransformer,CLIPVisionConfig->AltCLIPVisionConfig,CLIPVisionEmbeddings->AltCLIPVisionEmbeddings,CLIPEncoder->AltCLIPEncoder,CLIP_VISION_INPUTS_DOCSTRING->ALTCLIP_VISION_INPUTS_DOCSTRING -class AltCLIPVisionTransformer(nn.Module): - def __init__(self, config: AltCLIPVisionConfig): - super().__init__() - self.config = config - embed_dim = config.hidden_size - - self.embeddings = AltCLIPVisionEmbeddings(config) - self.pre_layrnorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps) - self.encoder = AltCLIPEncoder(config) - self.post_layernorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps) - - @add_start_docstrings_to_model_forward(ALTCLIP_VISION_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=AltCLIPVisionConfig) - def forward( - self, - pixel_values: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - 
output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPooling]: - r""" - Returns: - - """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - hidden_states = self.embeddings(pixel_values) - hidden_states = self.pre_layrnorm(hidden_states) - - encoder_outputs = self.encoder( - inputs_embeds=hidden_states, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - last_hidden_state = encoder_outputs[0] - pooled_output = last_hidden_state[:, 0, :] - pooled_output = self.post_layernorm(pooled_output) - - if not return_dict: - return (last_hidden_state, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPooling( - last_hidden_state=last_hidden_state, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - -class AltCLIPVisionModel(AltCLIPPreTrainedModel): - config_class = AltCLIPVisionConfig - main_input_name = "pixel_values" - - def __init__(self, config: AltCLIPVisionConfig): - super().__init__(config) - self.vision_model = AltCLIPVisionTransformer(config) - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self) -> nn.Module: - return self.vision_model.embeddings.patch_embedding - - @add_start_docstrings_to_model_forward(ALTCLIP_VISION_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=AltCLIPVisionConfig) - def forward( - self, - pixel_values: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPooling]: - r""" - Returns: - - Examples: - - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, AltCLIPVisionModel - - >>> model = AltCLIPVisionModel.from_pretrained("BAAI/AltCLIP") - >>> processor = AutoProcessor.from_pretrained("BAAI/AltCLIP") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> inputs = processor(images=image, return_tensors="pt") - - >>> outputs = model(**inputs) - >>> last_hidden_state = outputs.last_hidden_state - >>> pooled_output = outputs.pooler_output # pooled CLS states - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - return self.vision_model( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - -class AltRobertaModel(AltCLIPPreTrainedModel): - """ - - The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of - cross-attention is added between the self-attention layers, following the architecture described in *Attention is - all you need*_ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz - Kaiser and Illia Polosukhin. 
- - To behave as an decoder the model needs to be initialized with the `is_decoder` argument of the configuration set - to `True`. To be used in a Seq2Seq model, the model needs to initialized with both `is_decoder` argument and - `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass. - - .. _*Attention is all you need*: https://arxiv.org/abs/1706.03762 - - """ - - config_class = AltCLIPTextConfig - - # Copied from transformers.models.bert.modeling_bert.BertModel.__init__ with Bert->AltRoberta - def __init__(self, config, add_pooling_layer=True): - super().__init__(config) - self.config = config - - self.embeddings = AltRobertaEmbeddings(config) - self.encoder = AltRobertaEncoder(config) - - self.pooler = AltRobertaPooler(config) if add_pooling_layer else None - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.embeddings.word_embeddings - - def set_input_embeddings(self, value): - self.embeddings.word_embeddings = value - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - # Copied from transformers.models.bert.modeling_bert.BertModel.forward - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPoolingAndCrossAttentions]: - r""" - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that - don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. 
- use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). - """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if self.config.is_decoder: - use_cache = use_cache if use_cache is not None else self.config.use_cache - else: - use_cache = False - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask) - input_shape = input_ids.size() - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - batch_size, seq_length = input_shape - device = input_ids.device if input_ids is not None else inputs_embeds.device - - # past_key_values_length - past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0 - - if attention_mask is None: - attention_mask = torch.ones(((batch_size, seq_length + past_key_values_length)), device=device) - - if token_type_ids is None: - if hasattr(self.embeddings, "token_type_ids"): - buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. 
- extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if self.config.is_decoder and encoder_hidden_states is not None: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - if encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_extended_attention_mask = None - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - embedding_output = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - past_key_values_length=past_key_values_length, - ) - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = encoder_outputs[0] - pooled_output = self.pooler(sequence_output) if self.pooler is not None else None - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - past_key_values=encoder_outputs.past_key_values, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - cross_attentions=encoder_outputs.cross_attentions, - ) - - -class AltCLIPTextModel(AltCLIPPreTrainedModel): - config_class = AltCLIPTextConfig - - def __init__(self, config): - super().__init__(config) - self.roberta = AltRobertaModel(config, add_pooling_layer=False) - self.transformation = nn.Linear(config.hidden_size, config.project_dim) - self.pre_LN = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.post_init() - - def get_input_embeddings(self) -> nn.Module: - return self.roberta.embeddings.word_embeddings - - def set_input_embeddings(self, value: nn.Embedding) -> None: - self.roberta.embeddings.word_embeddings = value - - def resize_token_embeddings(self, new_num_tokens: Optional[int] = None) -> nn.Embedding: - return super().resize_token_embeddings(new_num_tokens) - - @add_start_docstrings_to_model_forward(ALTCLIP_TEXT_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=BaseModelOutputWithPoolingAndProjection, config_class=AltCLIPTextConfig) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - 
encoder_attention_mask: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - return_dict: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPoolingAndProjection]: - r""" - Returns: - - Examples: - - ```python - >>> from transformers import AutoProcessor, AltCLIPTextModel - - >>> model = AltCLIPTextModel.from_pretrained("BAAI/AltCLIP") - >>> processor = AutoProcessor.from_pretrained("BAAI/AltCLIP") - - >>> texts = ["it's a cat", "it's a dog"] - - >>> inputs = processor(text=texts, padding=True, return_tensors="pt") - - >>> outputs = model(**inputs) - >>> last_hidden_state = outputs.last_hidden_state - >>> pooled_output = outputs.pooler_output # pooled CLS states - ```""" - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.roberta( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - # last module outputs - sequence_output = outputs[0] - - # project every module - sequence_output = self.pre_LN(sequence_output) - - # pooler - projection_state = self.transformation(sequence_output) - pooler_output = projection_state[:, 0] - - if not return_dict: - return (projection_state, pooler_output) + outputs[2:4] - - return BaseModelOutputWithPoolingAndProjection( - last_hidden_state=projection_state, - pooler_output=pooler_output, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -class AltCLIPModel(AltCLIPPreTrainedModel): - config_class = AltCLIPConfig - - def __init__(self, config: AltCLIPConfig): - super().__init__(config) - - if not isinstance(config.vision_config, AltCLIPVisionConfig): - raise ValueError( - "config.vision_config is expected to be of type AltCLIPVisionConfig but is of type" - f" {type(config.vision_config)}." - ) - if not isinstance(config.text_config, AltCLIPTextConfig): - raise ValueError( - "config.text_config is expected to be of type AltCLIPTextConfig but is of type" - f" {type(config.text_config)}." 
-            )
-
-        text_config = config.text_config
-        vision_config = config.vision_config
-
-        self.projection_dim = config.projection_dim
-        self.text_embed_dim = text_config.project_dim
-        self.vision_embed_dim = vision_config.hidden_size
-
-        self.text_model = AltCLIPTextModel(text_config)
-        self.vision_model = AltCLIPVisionTransformer(vision_config)
-
-        self.visual_projection = nn.Linear(self.vision_embed_dim, self.projection_dim, bias=False)
-        self.text_projection = nn.Linear(self.text_embed_dim, self.projection_dim, bias=False)
-        self.logit_scale = nn.Parameter(torch.tensor(self.config.logit_scale_init_value))
-
-        # Initialize weights and apply final processing
-        self.post_init()
-
-    @add_start_docstrings_to_model_forward(ALTCLIP_TEXT_INPUTS_DOCSTRING)
-    def get_text_features(
-        self,
-        input_ids: Optional[torch.Tensor] = None,
-        attention_mask: Optional[torch.Tensor] = None,
-        position_ids: Optional[torch.Tensor] = None,
-        token_type_ids=None,
-        output_attentions: Optional[bool] = None,
-        output_hidden_states: Optional[bool] = None,
-        return_dict: Optional[bool] = None,
-    ) -> torch.FloatTensor:
-        r"""
-        Returns:
-            text_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`): The text embeddings obtained by
-            applying the projection layer to the pooled output of [`AltCLIPTextModel`].
-
-        Examples:
-
-        ```python
-        >>> from transformers import AutoProcessor, AltCLIPModel
-
-        >>> model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
-        >>> processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
-        >>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
-        >>> text_features = model.get_text_features(**inputs)
-        ```"""
-        # Use AltCLIP model's config for some fields (if specified) instead of those of vision & text components.
-        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
-        output_hidden_states = (
-            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
-        )
-        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
-        text_outputs = self.text_model(
-            input_ids=input_ids,
-            attention_mask=attention_mask,
-            position_ids=position_ids,
-            token_type_ids=token_type_ids,
-            output_attentions=output_attentions,
-            output_hidden_states=output_hidden_states,
-            return_dict=return_dict,
-        )
-        pooled_output = text_outputs[1]
-        text_features = self.text_projection(pooled_output)
-
-        return text_features
-
-    @add_start_docstrings_to_model_forward(ALTCLIP_VISION_INPUTS_DOCSTRING)
-    def get_image_features(
-        self,
-        pixel_values: Optional[torch.FloatTensor] = None,
-        output_attentions: Optional[bool] = None,
-        output_hidden_states: Optional[bool] = None,
-        return_dict: Optional[bool] = None,
-    ) -> torch.FloatTensor:
-        r"""
-        Returns:
-            image_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`): The image embeddings obtained by
-            applying the projection layer to the pooled output of [`AltCLIPVisionModel`].
- - Examples: - - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, AltCLIPModel - - >>> model = AltCLIPModel.from_pretrained("BAAI/AltCLIP") - >>> processor = AutoProcessor.from_pretrained("BAAI/AltCLIP") - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - >>> inputs = processor(images=image, return_tensors="pt") - >>> image_features = model.get_image_features(**inputs) - ```""" - # Use AltCLIP model's config for some fields (if specified) instead of those of vision & text components. - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooled_output = vision_outputs[1] # pooled_output - image_features = self.visual_projection(pooled_output) - - return image_features - - @add_start_docstrings_to_model_forward(ALTCLIP_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=AltCLIPOutput, config_class=AltCLIPConfig) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - pixel_values: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - return_loss: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, AltCLIPOutput]: - r""" - Returns: - - Examples: - - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, AltCLIPModel - - >>> model = AltCLIPModel.from_pretrained("BAAI/AltCLIP") - >>> processor = AutoProcessor.from_pretrained("BAAI/AltCLIP") - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - >>> inputs = processor( - ... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True - ... ) - >>> outputs = model(**inputs) - >>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score - >>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities - ```""" - # Use AltCLIP model's config for some fields (if specified) instead of those of vision & text components. 
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - text_outputs = self.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - image_embeds = vision_outputs[1] - image_embeds = self.visual_projection(image_embeds) - - text_embeds = text_outputs[1] - text_embeds = self.text_projection(text_embeds) - - # normalized features - image_embeds = image_embeds / image_embeds.norm(p=2, dim=-1, keepdim=True) - text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True) - - # cosine similarity as logits - logit_scale = self.logit_scale.exp() - logits_per_text = torch.matmul(text_embeds, image_embeds.t()) * logit_scale - logits_per_image = logits_per_text.T - - loss = None - if return_loss: - loss = clip_loss(logits_per_text) - - if not return_dict: - output = (logits_per_image, logits_per_text, text_embeds, image_embeds, text_outputs, vision_outputs) - return ((loss,) + output) if loss is not None else output - - return AltCLIPOutput( - loss=loss, - logits_per_image=logits_per_image, - logits_per_text=logits_per_text, - text_embeds=text_embeds, - image_embeds=image_embeds, - text_model_output=text_outputs, - vision_model_output=vision_outputs, - ) - - -# Copied from transformers.models.roberta.modeling_roberta.create_position_ids_from_input_ids -def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0): - """ - Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols - are ignored. This is modified from fairseq's `utils.make_positions`. - - Args: - x: torch.Tensor x: - - Returns: torch.Tensor - """ - # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA. - mask = input_ids.ne(padding_idx).int() - incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask - return incremental_indices.long() + padding_idx diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/code_llama/tokenization_code_llama_fast.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/code_llama/tokenization_code_llama_fast.py deleted file mode 100644 index 5e8a7945dc1eaca477c3860ebc720c27912f261d..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/code_llama/tokenization_code_llama_fast.py +++ /dev/null @@ -1,426 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import os
-from shutil import copyfile
-from typing import List, Optional, Tuple
-
-from tokenizers import normalizers, processors
-
-from ...tokenization_utils_fast import PreTrainedTokenizerFast
-from ...utils import is_sentencepiece_available, logging
-from ...utils.versions import require_version
-
-
-require_version("tokenizers>=0.13.3")
-
-if is_sentencepiece_available():
-    from .tokenization_code_llama import CodeLlamaTokenizer
-else:
-    CodeLlamaTokenizer = None
-
-logger = logging.get_logger(__name__)
-VOCAB_FILES_NAMES = {"vocab_file": "tokenizer.model", "tokenizer_file": "tokenizer.json"}
-
-SPIECE_UNDERLINE = "▁"
-
-
-B_INST, E_INST = "[INST]", "[/INST]"
-B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
-
-# fmt: off
-DEFAULT_SYSTEM_PROMPT = """You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your \
-answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure\
- that your responses are socially unbiased and positive in nature.
-
-If a question does not make any sense, or is not factually coherent, explain why instead of answering something not \
-correct. If you don't know the answer to a question, please don't share false information."""
-# fmt: on
-
-
-class CodeLlamaTokenizerFast(PreTrainedTokenizerFast):
-    """
-    Construct a Llama tokenizer. Based on byte-level Byte-Pair-Encoding.
-
-    This uses notably ByteFallback and no normalization.
-
-    ```python
-    >>> from transformers import CodeLlamaTokenizerFast
-
-    >>> tokenizer = CodeLlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")
-    >>> tokenizer.encode("Hello this is a test")
-    [1, 15043, 445, 338, 263, 1243]
-    ```
-
-    If you want to change the `bos_token` or the `eos_token`, make sure to specify them when initializing the model, or
-    call `tokenizer.update_post_processor()` to make sure that the post-processing is correctly done (otherwise the
-    values of the first token and final token of an encoded sequence will not be correct). For more details, check out
-    the [post-processors](https://huggingface.co/docs/tokenizers/api/post-processors) documentation.
-
-
-    This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
-    refer to this superclass for more information regarding those methods. The default configuration matches that of
-    [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf/blob/main/tokenizer_config.json),
-    which supports prompt infilling.
-
-    Args:
-        vocab_file (`str`):
-            [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .model extension) that
-            contains the vocabulary necessary to instantiate a tokenizer.
-        tokenizer_file (`str`):
-            [tokenizers](https://github.com/huggingface/tokenizers) file (generally has a .json extension) that
-            contains everything needed to load the tokenizer.
-        clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
-            Whether to clean up spaces after decoding; cleanup consists of removing potential artifacts like extra
-            spaces.
-        bos_token (`str`, *optional*, defaults to `"<s>"`):
-            The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
-        eos_token (`str`, *optional*, defaults to `"</s>"`):
-            The end of sequence token.
-        unk_token (`str`, *optional*, defaults to `"<unk>"`):
-            The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
-            token instead.
-        prefix_token (`str`, *optional*, defaults to `"▁<PRE>"`):
    -            Prefix token used for infilling.
-        suffix_token (`str`, *optional*, defaults to `"▁<SUF>"`):
-            Suffix token used for infilling.
-        middle_token (`str`, *optional*, defaults to `"▁<MID>"`):
-            Middle token used for infilling.
-        eot_token (`str`, *optional*, defaults to `"▁<EOT>"`):
-            End of text token used for infilling.
-        fill_token (`str`, *optional*, defaults to `"<FILL_ME>"`):
-            The token used to split the input between the prefix and suffix.
-        suffix_first (`bool`, *optional*, defaults to `False`):
    -            Whether the input prompt and suffix should be formatted with the suffix first.
    -        additional_special_tokens (`List[str]`, *optional*):
    -            Additional special tokens used by the tokenizer.
-        use_default_system_prompt (`bool`, *optional*, defaults to `False`):
    -            Whether or not the default system prompt for Llama should be used.
    -    """
    -
    -    vocab_files_names = VOCAB_FILES_NAMES
    -    slow_tokenizer_class = CodeLlamaTokenizer
    -    padding_side = "left"
    -    model_input_names = ["input_ids", "attention_mask"]
    -
    -    def __init__(
    -        self,
    -        vocab_file=None,
    -        tokenizer_file=None,
    -        clean_up_tokenization_spaces=False,
-        unk_token="<unk>",
-        bos_token="<s>",
-        eos_token="</s>",
-        prefix_token="▁<PRE>",
-        middle_token="▁<MID>",
-        suffix_token="▁<SUF>",
-        eot_token="▁<EOT>",
-        fill_token="<FILL_ME>",
    -        additional_special_tokens=None,
    -        add_bos_token=True,
    -        add_eos_token=False,
    -        use_default_system_prompt=False,
    -        **kwargs,
    -    ):
    -        # mark tokens special to skip them
    -        additional_special_tokens = additional_special_tokens or []
    -        for token in [prefix_token, middle_token, suffix_token, eot_token]:
    -            additional_special_tokens += [token] if token is not None else []
    -        self.use_default_system_prompt = use_default_system_prompt
    -
    -        super().__init__(
    -            vocab_file=vocab_file,
    -            tokenizer_file=tokenizer_file,
    -            clean_up_tokenization_spaces=clean_up_tokenization_spaces,
    -            additional_special_tokens=additional_special_tokens,
    -            unk_token=unk_token,
    -            bos_token=bos_token,
    -            eos_token=eos_token,
    -            prefix_token=prefix_token,
    -            middle_token=middle_token,
    -            suffix_token=suffix_token,
    -            eot_token=eot_token,
    -            fill_token=fill_token,
    -            use_default_system_prompt=use_default_system_prompt,
    -            **kwargs,
    -        )
    -        self._add_bos_token = add_bos_token
    -        self._add_eos_token = add_eos_token
    -        self.update_post_processor()
    -
    -        self.vocab_file = vocab_file
    -
    -        self._prefix_token = prefix_token
    -        self._middle_token = middle_token
    -        self._suffix_token = suffix_token
    -        self._eot_token = eot_token
    -        self.fill_token = fill_token
    -
    -    @property
    -    def can_save_slow_tokenizer(self) -> bool:
    -        return os.path.isfile(self.vocab_file) if self.vocab_file else False
    -
    -    # Copied from transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast.update_post_processor
    -    def update_post_processor(self):
    -        """
    -        Updates the underlying post processor with the current `bos_token` and `eos_token`.
    -        """
    -        bos = self.bos_token
    -        bos_token_id = self.bos_token_id
    -        if bos is None and self.add_bos_token:
    -            raise ValueError("add_bos_token = True but bos_token = None")
    -
    -        eos = self.eos_token
    -        eos_token_id = self.eos_token_id
    -        if eos is None and self.add_eos_token:
    -            raise ValueError("add_eos_token = True but eos_token = None")
    -
    -        single = f"{(bos+':0 ') if self.add_bos_token else ''}$A:0{(' '+eos+':0') if self.add_eos_token else ''}"
    -        pair = f"{single}{(' '+bos+':1') if self.add_bos_token else ''} $B:1{(' '+eos+':1') if self.add_eos_token else ''}"
    -
    -        special_tokens = []
    -        if self.add_bos_token:
    -            special_tokens.append((bos, bos_token_id))
    -        if self.add_eos_token:
    -            special_tokens.append((eos, eos_token_id))
    -        self._tokenizer.post_processor = processors.TemplateProcessing(
    -            single=single, pair=pair, special_tokens=special_tokens
    -        )
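-        # A sketch of what the templates above evaluate to with the default
-        # "<s>"/"</s>" tokens, add_bos_token=True and add_eos_token=False
-        # (illustrative only):
-        #   single == "<s>:0 $A:0"
-        #   pair   == "<s>:0 $A:0 <s>:1 $B:1"
-        # i.e. BOS is prepended to each sequence and no EOS is appended.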
    -
    -    @property
    -    def prefix_token(self):
    -        return self._prefix_token
    -
    -    @property
    -    def prefix_id(self):
    -        if self._prefix_token is None:
    -            return None
    -        return self.convert_tokens_to_ids(self.prefix_token)
    -
    -    @property
    -    def middle_token(self):
    -        return self._middle_token
    -
    -    @property
    -    def middle_id(self):
    -        if self._middle_token is None:
    -            return None
    -        return self.convert_tokens_to_ids(self.middle_token)
    -
    -    @property
    -    def suffix_token(self):
    -        return self._suffix_token
    -
    -    @property
    -    def suffix_id(self):
    -        if self._suffix_token is None:
    -            return None
    -        return self.convert_tokens_to_ids(self.suffix_token)
    -
    -    @property
    -    def eot_id(self):
    -        if self._eot_token is None:
    -            return None
    -        return self.convert_tokens_to_ids(self.eot_token)
    -
    -    @property
    -    def eot_token(self):
    -        return self._eot_token
    -
    -    @property
    -    def add_eos_token(self):
    -        return self._add_eos_token
    -
    -    @property
    -    def add_bos_token(self):
    -        return self._add_bos_token
    -
    -    @add_eos_token.setter
    -    def add_eos_token(self, value):
    -        self._add_eos_token = value
    -        self.update_post_processor()
    -
    -    @add_bos_token.setter
    -    def add_bos_token(self, value):
    -        self._add_bos_token = value
    -        self.update_post_processor()
    -
    -    def set_infilling_processor(self, reset, suffix_first=False, add_special_tokens=True):
    -        """
    -        Updates the normalizer to make sure the prompt format for `infilling` is respected. The infilling format is the
    -        following: if suffix_first
    -            " 
     {suf}  {pre}"
    -        else:
    -            " 
     {pre} {suf} "
    -
    -        If `reset` is set to `True`, the `normalizer` and `post_processor` are reset to their "normal" behaviour, which
    -        is to add a prefix space for the normalizer, and add a `bos_token` to the input text for the `post_processor`.
    -        """
    -        if reset:
    -            self._tokenizer.normalizer = normalizers.Sequence(
    -                [
    -                    normalizers.Prepend(prepend="▁"),
    -                    normalizers.Replace(pattern=" ", content="▁"),
    -                ]
    -            )
    -            self.update_post_processor()
    -            return
    -
    -        self._tokenizer.normalizer = normalizers.Replace(pattern=" ", content="▁")
    -        pair = [self.bos_token] if self.add_bos_token and add_special_tokens else []
    -        special_tokens = [(self.bos_token, self.bos_token_id)] if self.add_bos_token and add_special_tokens else []
    -        if suffix_first:
-            # format as " <PRE> <SUF>{suf} <MID> {pre}"
    -            pair += [self.prefix_token, self.suffix_token, "$B", self.middle_token, "$A"]
    -            special_tokens += [
    -                (self.prefix_token, self.prefix_id),
    -                (self.suffix_token, self.suffix_id),
    -                (self.middle_token, self.middle_id),
    -            ]
    -        else:
-            # format as " <PRE> {pre} <SUF>{suf} <MID>"
    -            pair += [self.prefix_token, "$A", self.suffix_token, "$B", self.middle_token]
    -            special_tokens += [
    -                (self.prefix_token, self.prefix_id),
    -                (self.suffix_token, self.suffix_id),
    -                (self.middle_token, self.middle_id),
    -            ]
    -
    -        if self.add_eos_token and add_special_tokens:
    -            pair += [self.eos_token]
    -            special_tokens += [(self.eos_token, self.eos_token_id)]
    -        self._tokenizer.post_processor = processors.TemplateProcessing(
    -            single="$A", pair=pair, special_tokens=special_tokens
    -        )
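-        # With suffix_first=False and add_bos_token=True, the pair template above
-        # becomes ["<s>", "▁<PRE>", "$A", "▁<SUF>", "$B", "▁<MID>"], so the encoded
-        # pair reads "<s> ▁<PRE> {pre} ▁<SUF> {suf} ▁<MID>", matching the infilling
-        # layout documented in the docstring (illustrative sketch only).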
    -
    -    def encode_plus(self, text, text_pair=None, suffix_first=False, add_special_tokens=True, **kwargs):
    -        # hack to make sure the input is pre-process but outside rust
    -        text_pair = kwargs.pop("suffix", text_pair)
    -        if self.fill_token is not None and self.fill_token in text and text_pair is None:
    -            text, text_pair = text.split(self.fill_token)
    -
    -        if text_pair is None or len(text_pair) < 1:
    -            return super().encode_plus(text, text_pair, add_special_tokens=add_special_tokens, **kwargs)
    -
    -        if None in (self.prefix_id, self.middle_id, self.suffix_id):
    -            raise ValueError(
    -                "Then input includes a `prefix` and a `suffix` used for the infilling task,"
    -                " the `prefix_id, middle_id, suffix_id` must all be initialized. Current"
    -                f" values : {self.prefix_id, self.middle_id, self.suffix_id}"
    -            )
    -
    -        self.set_infilling_processor(False, suffix_first=suffix_first, add_special_tokens=add_special_tokens)
    -        tokens = super().encode_plus(" " + text, text_pair=text_pair, add_special_tokens=True, **kwargs)
    -        self.set_infilling_processor(True)
    -        return tokens
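-    # Usage sketch (hypothetical prompt; assumes a checkpoint whose vocabulary
-    # defines the infill tokens, e.g. codellama/CodeLlama-7b-hf):
-    #   tokenizer = CodeLlamaTokenizerFast.from_pretrained("codellama/CodeLlama-7b-hf")
-    #   enc = tokenizer.encode_plus("def add(a, b):<FILL_ME>    return c")
-    # The text is split on fill_token into (prefix, suffix) and re-encoded as
-    # "<PRE> prefix <SUF> suffix <MID>", so the model generates the missing middle.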
    -
    -    # Copied from transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast.save_vocabulary
    -    def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
    -        if not self.can_save_slow_tokenizer:
    -            raise ValueError(
    -                "Your fast tokenizer does not have the necessary information to save the vocabulary for a slow "
    -                "tokenizer."
    -            )
    -
    -        if not os.path.isdir(save_directory):
    -            logger.error(f"Vocabulary path ({save_directory}) should be a directory")
    -            return
    -        out_vocab_file = os.path.join(
    -            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
    -        )
    -
    -        if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):
    -            copyfile(self.vocab_file, out_vocab_file)
    -
    -        return (out_vocab_file,)
    -
    -    @property
    -    # Copied from transformers.models.llama.tokenization_llama.LlamaTokenizer.default_chat_template
    -    def default_chat_template(self):
    -        """
-        LLaMA uses [INST] and [/INST] to indicate user messages, and <<SYS>> and <</SYS>> to indicate system messages.
    -        Assistant messages do not have special tokens, because LLaMA chat models are generally trained with strict
    -        user/assistant/user/assistant message ordering, and so assistant messages can be identified from the ordering
    -        rather than needing special tokens. The system message is partly 'embedded' in the first user message, which
    -        results in an unusual token ordering when it is present. This template should definitely be changed if you wish
    -        to fine-tune a model with more flexible role ordering!
    -
    -        The output should look something like:
    -
-        <bos>[INST] B_SYS SystemPrompt E_SYS Prompt [/INST] Answer <eos><bos>[INST] Prompt [/INST] Answer <eos>
-        <bos>[INST] Prompt [/INST]
    -        """
    -
    -        template = (
    -            "{% if messages[0]['role'] == 'system' %}"
    -            "{% set loop_messages = messages[1:] %}"  # Extract system message if it's present
    -            "{% set system_message = messages[0]['content'] %}"
    -            "{% elif USE_DEFAULT_PROMPT == true and not '<>' in messages[0]['content'] %}"
    -            "{% set loop_messages = messages %}"  # Or use the default system message if the flag is set
    -            "{% set system_message = 'DEFAULT_SYSTEM_MESSAGE' %}"
    -            "{% else %}"
    -            "{% set loop_messages = messages %}"
    -            "{% set system_message = false %}"
    -            "{% endif %}"
    -            "{% for message in loop_messages %}"  # Loop over all non-system messages
    -            "{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}"
    -            "{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}"
    -            "{% endif %}"
    -            "{% if loop.index0 == 0 and system_message != false %}"  # Embed system message in first message
    -            "{% set content = '<>\\n' + system_message + '\\n<>\\n\\n' + message['content'] %}"
    -            "{% else %}"
    -            "{% set content = message['content'] %}"
    -            "{% endif %}"
    -            "{% if message['role'] == 'user' %}"  # After all of that, handle messages/roles in a fairly normal way
    -            "{{ bos_token + '[INST] ' + content.strip() + ' [/INST]' }}"
    -            "{% elif message['role'] == 'system' %}"
    -            "{{ '<>\\n' + content.strip() + '\\n<>\\n\\n' }}"
    -            "{% elif message['role'] == 'assistant' %}"
    -            "{{ ' '  + content.strip() + ' ' + eos_token }}"
    -            "{% endif %}"
    -            "{% endfor %}"
    -        )
    -        template = template.replace("USE_DEFAULT_PROMPT", "true" if self.use_default_system_prompt else "false")
    -        default_message = DEFAULT_SYSTEM_PROMPT.replace("\n", "\\n").replace("'", "\\'")
    -        template = template.replace("DEFAULT_SYSTEM_MESSAGE", default_message)
    -
    -        return template
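-        # Rendered sketch for a [system, user] conversation (values illustrative):
-        #   <s>[INST] <<SYS>>\nBe concise.\n<</SYS>>\n\nHi there [/INST]
-        # i.e. the system prompt is folded into the first user turn, as described
-        # in the docstring above.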
    -
    -    def build_inputs_with_special_tokens(
    -        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    -    ) -> List[int]:
    -        """
-        Build model inputs from a sequence or a pair of sequences by concatenating and adding special tokens.
-
-        A sequence has the following format, where `X` represents the sequence:
-
-        - single sequence: `[bos] X [eos]`
-        - pair of sequences: `[bos] A B [eos]`
-
-        Pairs of sequences are not the expected use case, but they will be handled without a separator.
    -
    -        Args:
    -            token_ids_0 (`List[int]`):
    -                List of IDs to which the special tokens will be added.
    -            token_ids_1 (`List[int]`, *optional*):
    -                Optional second list of IDs for sequence pairs.
    -
    -        Returns:
    -            `List[int]`: list of [input IDs](../glossary#input-ids) with the appropriate special tokens.
    -        """
-        if token_ids_1 is None:
-            return [self.bos_token_id] + token_ids_0 + [self.eos_token_id]
-        return [self.bos_token_id] + token_ids_0 + token_ids_1 + [self.eos_token_id]
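-    # Sketch of the single-sequence case above:
-    #   build_inputs_with_special_tokens([5, 6]) -> [bos_token_id, 5, 6, eos_token_id]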
    diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/app.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/app.py
    deleted file mode 100644
    index a1a14cb04ba538f09e50dfed0757ac0bd7aac3dd..0000000000000000000000000000000000000000
    --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/app.py
    +++ /dev/null
    @@ -1,1066 +0,0 @@
    -import multiprocessing
    -import os
    -import re
    -import torch
    -import glob
    -import gradio as gr
    -import librosa
    -import numpy as np
    -import soundfile as sf
    -from inference.infer_tool import Svc
    -import logging
    -import json
    -import yaml
    -import time
    -import subprocess
    -import shutil
    -import utils
    -import datetime
    -import traceback
    -from utils import mix_model
    -from onnxexport.model_onnx import SynthesizerTrn
    -from itertools import chain
    -from compress_model import removeOptimizer
    -from auto_slicer import AutoSlicer
    -
    -logging.getLogger('numba').setLevel(logging.WARNING)
    -logging.getLogger('markdown_it').setLevel(logging.WARNING)
    -logging.getLogger('urllib3').setLevel(logging.WARNING)
    -logging.getLogger('matplotlib').setLevel(logging.WARNING)
    -
    -workdir = "logs/44k"
    -diff_workdir = "logs/44k/diffusion"
    -config_dir = "configs/"
    -raw_path = "dataset_raw"
    -raw_wavs_path = "raw"
    -models_backup_path = 'models_backup'
    -root_dir = "checkpoints"
    -debug = False
    -sovits_params = {}
    -diff_params = {}
    -
    -loaded = None
    -
    -def debug_change():
    -    global debug
    -    debug = debug_button.value
    -
    -def get_default_settings():
    -    global sovits_params, diff_params
    -    yaml_path = "settings.yaml"
    -    with open(yaml_path, 'r') as f:
    -        default_settings = yaml.safe_load(f)
    -    sovits_params = default_settings['sovits_params']
    -    diff_params = default_settings['diff_params']
    -    return sovits_params, diff_params
    -
    -def save_default_settings(log_interval,eval_interval,keep_ckpts,batch_size,learning_rate,fp16_run,all_in_mem,num_workers,cache_all_data,cache_device,amp_dtype,diff_batch_size,diff_lr,diff_interval_log,diff_interval_val,diff_force_save):
    -    yaml_path = "settings.yaml"
    -    with open(yaml_path, 'r') as f:
    -        default_settings = yaml.safe_load(f)
    -    default_settings['sovits_params']['log_interval'] = int(log_interval)
    -    default_settings['sovits_params']['eval_interval'] = int(eval_interval)
    -    default_settings['sovits_params']['keep_ckpts'] = int(keep_ckpts)
    -    default_settings['sovits_params']['batch_size'] = int(batch_size)
    -    default_settings['sovits_params']['learning_rate'] = float(learning_rate)
    -    default_settings['sovits_params']['fp16_run'] = fp16_run
    -    default_settings['sovits_params']['all_in_mem'] = all_in_mem
    -    default_settings['diff_params']['num_workers'] = int(num_workers)
    -    default_settings['diff_params']['cache_all_data'] = cache_all_data
    -    default_settings['diff_params']['cache_device'] = str(cache_device)
    -    default_settings['diff_params']['amp_dtype'] = str(amp_dtype)
    -    default_settings['diff_params']['diff_batch_size'] = int(diff_batch_size)
    -    default_settings['diff_params']['diff_lr'] = float(diff_lr)
    -    default_settings['diff_params']['diff_interval_log'] = int(diff_interval_log)
    -    default_settings['diff_params']['diff_interval_val'] = int(diff_interval_val)
    -    default_settings['diff_params']['diff_force_save'] = int(diff_force_save)
    -    with open(yaml_path, 'w') as y:
    -        yaml.safe_dump(default_settings, y, default_flow_style=False, sort_keys=False)
    -        return "成功保存默认配置"
    -
    -def get_model_info(choice_ckpt):
    -    pthfile = os.path.join(workdir, choice_ckpt)
    -    net = torch.load(pthfile, map_location=torch.device('cpu')) #cpu load
    -    spk_emb = net["model"].get("emb_g.weight")
    -    if spk_emb is None:
    -        return "所选模型缺少emb_g.weight,你可能选择了一个底模"
    -    _dim, _layer = spk_emb.size()
    -    model_type = {
    -        768: "Vec768-Layer12",
    -        256: "Vec256-Layer9 / HubertSoft",
    -        1024: "Whisper-PPG"
    -    }
-    return model_type.get(_layer, "Unsupported model")
    -    
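-# Hypothetical usage: get_model_info("G_10000.pth") reads emb_g.weight from the
-# checkpoint and maps its second dimension (768/256/1024) to the speech encoder
-# the model was trained with, e.g. 768 -> "Vec768-Layer12".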
    -def load_json_encoder(config_choice):
    -    config_file = os.path.join(config_dir + config_choice)
    -    with open(config_file, 'r') as f:
    -        config = json.load(f)
    -    try:
    -        config_encoder = str(config["model"]["speech_encoder"])
    -        return config_encoder
    -    except Exception as e:
    -        if "speech_encoder" in str(e):
    -            return "你的配置文件似乎是未作兼容的旧版,请根据文档指示对你的配置文件进行修改"
    -        else:
    -            return f"出错了: {e}"
    -        
    -def load_model_func(ckpt_name,cluster_name,config_name,enhance,diff_model_name,diff_config_name,only_diffusion,encoder,using_device):
    -    global model
    -    config_path = os.path.join(config_dir, config_name)
    -    diff_config_path = os.path.join(config_dir, diff_config_name) if diff_config_name != "no_diff_config" else "configs/diffusion.yaml"
    -    with open(config_path, 'r') as f:
    -        config = json.load(f)
    -    spk_dict = config["spk"]
    -    spk_name = config.get('spk', None)
-    spk_choice = next(iter(spk_name)) if spk_name else "No speaker timbre detected"
    -    ckpt_path = os.path.join(workdir, ckpt_name)
    -    _, _suffix = os.path.splitext(cluster_name)
-    fr = _suffix == ".pkl"  # a .pkl extension enables feature retrieval
-    cluster_path = os.path.join(workdir, cluster_name)
-    diff_model_path = os.path.join(diff_workdir, diff_model_name)
-    shallow_diffusion = diff_model_name != "no_diff"
    -    use_spk_mix = False
    -    device = None if using_device == "Auto" else using_device
    -    model = Svc(ckpt_path,
    -                    config_path,
    -                    device,
    -                    cluster_path,
    -                    enhance,
    -                    diff_model_path,
    -                    diff_config_path,
    -                    shallow_diffusion,
    -                    only_diffusion,
    -                    use_spk_mix,
    -                    fr)
    -    spk_list = list(spk_dict.keys())
-    clip = 25 if encoder == "Whisper-PPG" else 0  # Whisper requires forced 25-second slicing
    -    device_name = torch.cuda.get_device_properties(model.dev).name if "cuda" in str(model.dev) else str(model.dev)
-    index_or_kmeans = "feature index" if fr else "cluster model"
-    clu_load = "not loaded" if cluster_name == "no_clu" else cluster_name
-    diff_load = "not loaded" if diff_model_name == "no_diff" else diff_model_name
-    output_msg = f"Model loaded successfully on {device_name}\n{index_or_kmeans}: {clu_load}\nDiffusion model: {diff_load}"
    -    return output_msg, gr.Dropdown.update(choices=spk_list, value=spk_choice), clip
    -
    -def Newload_model_func(ckpt_name,cluster_name,config_name2,enhance2,diff_model_name2,diff_config_name2,only_diffusion2,encoder2,using_device2):
    -    global model, loaded
    -    config_name = config_name2.value
    -    enhance = enhance2.value
    -    diff_model_name = diff_model_name2.value
    -    diff_config_name = (diff_config_name2).value
    -    only_diffusion = (only_diffusion2).value
    -    encoder = (encoder2).value
    -    using_device = (using_device2).value
    -    config_path = os.path.join(config_dir, config_name)
    -    diff_config_path = os.path.join(config_dir, diff_config_name) if diff_config_name != "no_diff_config" else "configs/diffusion.yaml"
    -    with open(config_path, 'r') as f:
    -        config = json.load(f)
    -    spk_dict = config["spk"]
    -    spk_name = config.get('spk', None)
    -    spk_choice = next(iter(spk_name)) if spk_name else "未检测到音色"
    -    ckpt_path = os.path.join(workdir, ckpt_name)
    -    _, _suffix = os.path.splitext(cluster_name)
-    fr = _suffix == ".pkl"  # a .pkl extension enables feature retrieval
-    cluster_path = os.path.join(workdir, cluster_name)
-    diff_model_path = os.path.join(diff_workdir, diff_model_name)
-    shallow_diffusion = diff_model_name != "no_diff"
    -    use_spk_mix = False
    -    device = None if using_device == "Auto" else using_device
    -    model = Svc(ckpt_path,
    -                    config_path,
    -                    device,
    -                    cluster_path,
    -                    enhance,
    -                    diff_model_path,
    -                    diff_config_path,
    -                    shallow_diffusion,
    -                    only_diffusion,
    -                    use_spk_mix,
    -                    fr)
    -    spk_list = list(spk_dict.keys())
-    clip = 25 if encoder == "Whisper-PPG" else 0  # Whisper requires forced 25-second slicing
    -    device_name = torch.cuda.get_device_properties(model.dev).name if "cuda" in str(model.dev) else str(model.dev)
-    index_or_kmeans = "feature index" if fr else "cluster model"
-    clu_load = "not loaded" if cluster_name == "no_clu" else cluster_name
-    diff_load = "not loaded" if diff_model_name == "no_diff" else diff_model_name
-    loaded = cluster_name
-    #output_msg = f"Model loaded successfully on {device_name}\n{index_or_kmeans}: {clu_load}\nDiffusion model: {diff_load}"
    -    #return output_msg, gr.Dropdown.update(choices=spk_list, value=spk_choice), clip
    -
    -def get_file_options(directory, extension):
    -    return [file for file in os.listdir(directory) if file.endswith(extension)]
    -
    -def load_options():
    -    ckpt_list = [file for file in get_file_options(workdir, ".pth") if not file.startswith("D_")]
    -    config_list = get_file_options(config_dir, ".json")
    -    cluster_list = ["no_clu"] + get_file_options(workdir, ".pt") + get_file_options(workdir, ".pkl") # 聚类和特征检索模型
    -    diff_list = ["no_diff"] + get_file_options(diff_workdir, ".pt")
    -    diff_config_list = get_file_options(config_dir, ".yaml")
    -    return ckpt_list, config_list, cluster_list, diff_list, diff_config_list
    -
    -def refresh_options():
    -    ckpt_list, config_list, cluster_list, diff_list, diff_config_list = load_options()
    -    return (
    -        choice_ckpt.update(choices=ckpt_list),
    -        config_choice.update(choices=config_list),
    -        cluster_choice.update(choices=cluster_list),
    -        diff_choice.update(choices=diff_list),
    -        diff_config_choice.update(choices=diff_config_list)
    -    )
    -
    -def vc_infer(sid, input_audio, input_audio_path, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale, pad_seconds, cl_num, lg_num, lgr_num, f0_predictor, enhancer_adaptive_key, cr_threshold, k_step, use_spk_mix, second_encoding, loudness_envelope_adjustment):
    -    if np.issubdtype(input_audio.dtype, np.integer):
    -        input_audio = (input_audio / np.iinfo(input_audio.dtype).max).astype(np.float32)
    -    if len(input_audio.shape) > 1:
    -        input_audio = librosa.to_mono(input_audio.transpose(1, 0))
    -    _audio = model.slice_inference(
    -        input_audio_path,
    -        sid,
    -        vc_transform,
    -        slice_db,
    -        cluster_ratio,
    -        auto_f0,
    -        noise_scale,
    -        pad_seconds,
    -        cl_num,
    -        lg_num,
    -        lgr_num,
    -        f0_predictor,
    -        enhancer_adaptive_key,
    -        cr_threshold,
    -        k_step,
    -        use_spk_mix,
    -        second_encoding,
    -        loudness_envelope_adjustment
    -    )  
    -    model.clear_empty()
    -    timestamp = str(int(time.time()))
    -    if not os.path.exists("results"):
    -        os.makedirs("results")
    -    output_file_name = os.path.splitext(os.path.basename(input_audio_path))[0] + "_" + sid + "_" + timestamp + ".wav"
    -    output_file_path = os.path.join("results", output_file_name)
    -    sf.write(output_file_path, _audio, model.target_sample, format="wav")
    -    return output_file_path
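-# Converted audio is written to results/ as "<input stem>_<speaker>_<unix time>.wav",
-# e.g. "song_kitasan_1700000000.wav" (file name shown for illustration only).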
    -
    -def vc_fn(sid, input_audio, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale, pad_seconds, cl_num, lg_num, lgr_num, f0_predictor, enhancer_adaptive_key, cr_threshold, k_step, use_spk_mix, second_encoding, loudness_envelope_adjustment):
    -    global model
    -    try:
    -        if input_audio is None:
    -            return "You need to upload an audio", None
    -        if model is None:
    -            return "You need to upload an model", None
    -        sampling_rate, audio = input_audio
    -        temp_path = "temp.wav"
    -        sf.write(temp_path, audio, sampling_rate, format="wav")
    -        output_file_path = vc_infer(sid, audio, temp_path, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale, pad_seconds, cl_num, lg_num, lgr_num, f0_predictor, enhancer_adaptive_key, cr_threshold, k_step, use_spk_mix, second_encoding, loudness_envelope_adjustment)
    -        os.remove(temp_path)
    -        return "Success", output_file_path
    -    except Exception as e:
    -        if debug: traceback.print_exc()
    -        raise gr.Error(e)
    -
    -def vc_batch_fn(sid, input_audio_files, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale, pad_seconds, cl_num, lg_num, lgr_num, f0_predictor, enhancer_adaptive_key, cr_threshold, k_step, use_spk_mix, second_encoding, loudness_envelope_adjustment):
    -    global model
    -    try:
    -        if input_audio_files is None or len(input_audio_files) == 0:
    -            return "You need to upload at least one audio file"
    -        if model is None:
    -            return "You need to upload a model"
    -        for file_obj in input_audio_files:
    -            input_audio_path = file_obj.name
    -            audio, sampling_rate = sf.read(input_audio_path)
    -            vc_infer(sid, audio, input_audio_path, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale, pad_seconds, cl_num, lg_num, lgr_num, f0_predictor, enhancer_adaptive_key, cr_threshold, k_step, use_spk_mix, second_encoding, loudness_envelope_adjustment)
    -        return "批量推理完成,音频已经被保存到results文件夹"
    -    except Exception as e:
    -        if debug: traceback.print_exc()
    -        raise gr.Error(e)
    -    
    -def tts_fn(_text, _speaker, sid, vc_transform, auto_f0,cluster_ratio, slice_db, noise_scale,pad_seconds,cl_num,lg_num,lgr_num,f0_predictor,enhancer_adaptive_key,cr_threshold, k_step,use_spk_mix,second_encoding,loudness_envelope_adjustment):
    -    global model
    -    try:
    -        subprocess.run([r"python", "tts.py", _text, _speaker])
    -        sr = 44100
    -        y, sr = librosa.load("tts.wav")
    -        resampled_y = librosa.resample(y, orig_sr=sr, target_sr=sr)
    -        sf.write("tts.wav", resampled_y, sr, subtype = "PCM_16")
    -        input_audio = "tts.wav"
    -        audio, sampling_rate = sf.read(input_audio)
    -        if model is None:
    -            return "You need to upload a model", None
    -        output_file_path = vc_infer(sid, audio, input_audio, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale, pad_seconds, cl_num, lg_num, lgr_num, f0_predictor, enhancer_adaptive_key, cr_threshold, k_step, use_spk_mix, second_encoding, loudness_envelope_adjustment)
    -        return "Success", output_file_path
    -    except Exception as e:
    -        if debug: traceback.print_exc()
    -        raise gr.Error(e)
    -
    -def load_raw_dirs():
    -    illegal_files = []
-    # validate file names
-    allowed_pattern = re.compile(r'^[a-zA-Z0-9_@#$%^&()_+\-=\s\.]*$')
-    for root, dirs, files in os.walk(raw_path):
-        if root != raw_path:  # only process files inside the speaker subfolders
    -            for file in files:
    -                file_name, _ = os.path.splitext(file)
    -                if not allowed_pattern.match(file_name):
    -                    illegal_files.append(file)
    -    if len(illegal_files)!=0:
    -        return f"数据集文件名只能包含数字、字母、下划线,以下文件不符合要求,请改名后再试:{illegal_files}"
    -    #检查有没有小可爱不用wav文件当数据集
    -    for root, dirs, files in os.walk(raw_path):
    -        if root != raw_path:  # 只处理子文件夹内的文件
    -            for file in files:
    -                if not file.lower().endswith('.wav'):
    -                    illegal_files.append(file)
    -    if len(illegal_files)!=0:
    -        return f"以下文件为非wav格式文件,请删除后再试:{illegal_files}"
    -    spk_dirs = []
    -    with os.scandir(raw_path) as entries:
    -        for entry in entries:
    -            if entry.is_dir():
    -                spk_dirs.append(entry.name)
    -    if len(spk_dirs) != 0:
    -        return raw_dirs_list.update(value=spk_dirs)
    -    else:
    -        return raw_dirs_list.update(value="未找到数据集,请检查dataset_raw文件夹")
    -
    -def dataset_preprocess(encoder, f0_predictor, use_diff, vol_aug, skip_loudnorm, num_processes):
    -    diff_arg = "--use_diff" if use_diff else ""
    -    vol_aug_arg = "--vol_aug" if vol_aug else ""
    -    skip_loudnorm_arg = "--skip_loudnorm" if skip_loudnorm else ""
    -    preprocess_commands = [
    -        r"python resample.py %s" % (skip_loudnorm_arg),
    -        r"python preprocess_flist_config.py --speech_encoder %s %s" % (encoder, vol_aug_arg),
    -        r"python preprocess_hubert_f0.py --num_processes %s --f0_predictor %s %s" % (num_processes ,f0_predictor, diff_arg)
    -        ]
    -    accumulated_output = ""
-    # clear any previously preprocessed dataset
    -    dataset = os.listdir("dataset/44k")
    -    if len(dataset) != 0:
    -        for dir in dataset:
    -            dataset_dir = "dataset/44k/" + str(dir)
    -            if os.path.isdir(dataset_dir):
    -                shutil.rmtree(dataset_dir)
    -                accumulated_output += f"Deleting previous dataset: {dir}\n"
    -    for command in preprocess_commands:
    -        try:
    -            result = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True, text=True)
    -            accumulated_output += f"Command: {command}, Using Encoder: {encoder}, Using f0 Predictor: {f0_predictor}\n"
    -            yield accumulated_output, None
    -            progress_line = None
    -            for line in result.stdout:
    -                if r"it/s" in line or r"s/it" in line: #防止进度条刷屏
    -                    progress_line = line
    -                else:
    -                    accumulated_output += line
    -                if progress_line is None:
    -                    yield accumulated_output, None
    -                else:
    -                    yield accumulated_output + progress_line, None
    -            result.communicate()
    -        except subprocess.CalledProcessError as e:
    -            result = e.output
    -            accumulated_output += f"Error: {result}\n"
    -            yield accumulated_output, None
    -        if progress_line is not None:
    -            accumulated_output += progress_line
    -        accumulated_output += '-' * 50 + '\n'
    -        yield accumulated_output, None
    -        config_path = "configs/config.json"
    -    with open(config_path, 'r') as f:
    -        config = json.load(f)
    -    spk_name = config.get('spk', None)
    -    yield accumulated_output, gr.Textbox.update(value=spk_name)
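-# A summary of the pipeline above (inferred from the script names and flags used
-# here): resample.py converts the raw audio to the 44.1 kHz training format,
-# preprocess_flist_config.py builds the train/val file lists and config.json for
-# the chosen speech encoder, and preprocess_hubert_f0.py extracts the speech-encoder
-# and f0 features that check_dataset() later looks for.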
    -
    -def regenerate_config(encoder, vol_aug):
    -    vol_aug_arg = "--vol_aug" if vol_aug else ""
    -    cmd = r"python preprocess_flist_config.py --speech_encoder %s %s" % (encoder, vol_aug_arg)
    -    output = ""
    -    try:
    -        result = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True, text=True)
    -        for line in result.stdout:
    -            output += line
    -        output += "Regenerate config file successfully."
    -    except subprocess.CalledProcessError as e:
    -        result = e.output
    -        output += f"Error: {result}\n"
    -    return output
    -
    -def clear_output():
    -    return gr.Textbox.update(value="Cleared!>_<")
    -
    -def read_config(config_path):
    -    with open(config_path, 'r') as config_file:
    -        config_data = json.load(config_file)
    -    return config_data
    -
    -def config_fn(log_interval, eval_interval, keep_ckpts, batch_size, lr, fp16_run, all_in_mem, diff_num_workers, diff_cache_all_data, diff_batch_size, diff_lr, diff_interval_log, diff_interval_val, diff_cache_device, diff_amp_dtype, diff_force_save):
    -    config_origin = "configs/config.json"
    -    diff_config = "configs/diffusion.yaml"
    -    config_data = read_config(config_origin)
    -    config_data['train']['log_interval'] = int(log_interval)
    -    config_data['train']['eval_interval'] = int(eval_interval)
    -    config_data['train']['keep_ckpts'] = int(keep_ckpts)
    -    config_data['train']['batch_size'] = int(batch_size)
    -    config_data['train']['learning_rate'] = float(lr)
    -    config_data['train']['fp16_run'] = fp16_run
    -    config_data['train']['all_in_mem'] = all_in_mem
    -    with open(config_origin, 'w') as config_file:
    -        json.dump(config_data, config_file, indent=4)
    -    with open(diff_config, 'r') as diff_yaml:
    -        diff_config_data = yaml.safe_load(diff_yaml)
    -    diff_config_data['train']['num_workers'] = int(diff_num_workers)
    -    diff_config_data['train']['cache_all_data'] = diff_cache_all_data
    -    diff_config_data['train']['batch_size'] = int(diff_batch_size)
    -    diff_config_data['train']['lr'] = float(diff_lr)
    -    diff_config_data['train']['interval_log'] = int(diff_interval_log)
    -    diff_config_data['train']['interval_val'] = int(diff_interval_val)
    -    diff_config_data['train']['cache_device'] = str(diff_cache_device)
    -    diff_config_data['train']['amp_dtype'] = str(diff_amp_dtype)
    -    diff_config_data['train']['interval_force_save'] = int(diff_force_save)
    -    with open(diff_config, 'w') as diff_yaml:
    -        yaml.safe_dump(diff_config_data, diff_yaml, default_flow_style=False, sort_keys=False)
    -    return "配置文件写入完成"
    -
    -def check_dataset(dataset_path):
    -    if not os.listdir(dataset_path):
    -        return "数据集不存在,请检查dataset文件夹"
    -    no_npy_pt_files = True
    -    for root, dirs, files in os.walk(dataset_path):
    -        for file in files:
    -            if file.endswith('.npy') or file.endswith('.pt'):
    -                no_npy_pt_files = False
    -                break
    -    if no_npy_pt_files:
    -        return "数据集中未检测到f0和hubert文件,可能是预处理未完成"
    -    return None
    -
    -def training(gpu_selection, encoder):
    -    config_data = read_config("configs/config.json")
    -    vol_emb = config_data["model"]["vol_embedding"]
    -    dataset_warn = check_dataset("dataset/44k")
    -    if dataset_warn is not None:
    -        return dataset_warn
-    encoder_models = { # map each encoder to its pretrained D/G checkpoints
    -        "vec256l9": ("D_0.pth", "G_0.pth", "pre_trained_model"),
    -        "vec768l12": ("D_0.pth", "G_0.pth", "pre_trained_model/768l12/vol_emb" if vol_emb else "pre_trained_model/768l12"),
    -        "hubertsoft": ("D_0.pth", "G_0.pth", "pre_trained_model/hubertsoft"),
    -        "whisper-ppg": ("D_0.pth", "G_0.pth", "pre_trained_model/whisper-ppg"),
    -        "cnhubertlarge": ("D_0.pth", "G_0.pth", "pre_trained_model/cnhubertlarge"),
    -        "dphubert": ("D_0.pth", "G_0.pth", "pre_trained_model/dphubert"),
    -        "whisper-ppg-large": ("D_0.pth", "G_0.pth", "pre_trained_model/whisper-ppg-large")
    -    }
    -    if encoder not in encoder_models:
    -        return "未知编码器"
    -    d_0_file, g_0_file, encoder_model_path = encoder_models[encoder]
    -    d_0_path = os.path.join(encoder_model_path, d_0_file)
    -    g_0_path = os.path.join(encoder_model_path, g_0_file)
    -    timestamp = datetime.datetime.now().strftime('%Y_%m_%d_%H_%M')
    -    new_backup_folder = os.path.join(models_backup_path, str(timestamp))
    -    if os.listdir(workdir) != ['diffusion']:
    -        os.makedirs(new_backup_folder, exist_ok=True)
    -        for file in os.listdir(workdir):
    -            if file != "diffusion":
    -                shutil.move(os.path.join(workdir, file), os.path.join(new_backup_folder, file))
    -    shutil.copy(d_0_path, os.path.join(workdir, "D_0.pth"))
    -    shutil.copy(g_0_path, os.path.join(workdir, "G_0.pth"))
    -    cmd = r"set CUDA_VISIBLE_DEVICES=%s && python train.py -c configs/config.json -m 44k" % (gpu_selection)
    -    subprocess.Popen(["cmd", "/c", "start", "cmd", "/k", cmd])
    -    return "已经在新的终端窗口开始训练,请监看终端窗口的训练日志。在终端中按Ctrl+C可暂停训练。"
    -
    -def continue_training(gpu_selection, encoder):
    -    dataset_warn = check_dataset("dataset/44k")
    -    if dataset_warn is not None:
    -        return dataset_warn
    -    if encoder == "":
    -        return "请先选择预处理对应的编码器"
    -    all_files = os.listdir(workdir)
    -    model_files = [f for f in all_files if f.startswith('G_') and f.endswith('.pth')]
    -    if len(model_files) == 0:
    -        return "你还没有已开始的训练"
    -    cmd = r"set CUDA_VISIBLE_DEVICES=%s && python train.py -c configs/config.json -m 44k" % (gpu_selection)
    -    subprocess.Popen(["cmd", "/c", "start", "cmd", "/k", cmd])
    -    return "已经在新的终端窗口开始训练,请监看终端窗口的训练日志。在终端中按Ctrl+C可暂停训练。"
    -
    -def kmeans_training(kmeans_gpu):
    -    if not os.listdir(r"dataset/44k"):
    -        return "数据集不存在,请检查dataset文件夹"
    -    cmd = r"python cluster/train_cluster.py --gpu" if kmeans_gpu else r"python cluster/train_cluster.py"
    -    subprocess.Popen(["cmd", "/c", "start", "cmd", "/k", cmd])
    -    return "已经在新的终端窗口开始训练,训练聚类模型不会输出日志,CPU训练一般需要5-10分钟左右"
    -
    -def index_training():
    -    if not os.listdir(r"dataset/44k"):
    -        return "数据集不存在,请检查dataset文件夹"
    -    cmd = r"python train_index.py -c configs/config.json"
    -    subprocess.Popen(["cmd", "/c", "start", "cmd", "/k", cmd])
    -    return "已经在新的终端窗口开始训练"
    -
    -def diff_training(encoder):
    -    if not os.listdir(r"dataset/44k"):
    -        return "数据集不存在,请检查dataset文件夹"
    -    pre_trained_model_768l12 = "pre_trained_model/diffusion/768l12/model_0.pt"
    -    pre_trained_model_hubertsoft = "pre_trained_model/diffusion/hubertsoft/model_0.pt"
    -    timestamp = datetime.datetime.now().strftime('%Y_%m_%d_%H_%M')
    -    new_backup_folder = os.path.join(models_backup_path, "diffusion", str(timestamp))
    -    if len(os.listdir(diff_workdir)) != 0:
    -        os.makedirs(new_backup_folder, exist_ok=True)
    -        for file in os.listdir(diff_workdir):
    -            shutil.move(os.path.join(diff_workdir, file), os.path.join(new_backup_folder, file))
    -    if encoder == "vec256l9" or encoder == "whisper-ppg":
    -        return "你所选的编码器暂时不支持训练扩散模型"
    -    elif encoder == "vec768l12":
    -        shutil.copy(pre_trained_model_768l12, os.path.join(diff_workdir, "model_0.pt"))
    -    elif encoder == "hubertsoft":
    -        shutil.copy(pre_trained_model_hubertsoft, os.path.join(diff_workdir, "model_0.pt"))
    -    else: 
    -        return "请先选择编码器"
    -    subprocess.Popen(["cmd", "/c", "start", "cmd", "/k", r"python train_diff.py -c configs/diffusion.yaml"])
    -    return "已经在新的终端窗口开始训练,请监看终端窗口的训练日志。在终端中按Ctrl+C可暂停训练。"
    -
    -def diff_continue_training(encoder):
    -    if not os.listdir(r"dataset/44k"):
    -        return "数据集不存在,请检查dataset文件夹"
    -    if encoder == "":
    -        return "请先选择预处理对应的编码器"
    -    all_files = os.listdir(diff_workdir)
    -    model_files = [f for f in all_files if f.endswith('.pt')]
    -    if len(model_files) == 0:
    -        return "你还没有已开始的训练"
    -    subprocess.Popen(["cmd", "/c", "start", "cmd", "/k", r"python train_diff.py -c configs/diffusion.yaml"])
    -    return "已经在新的终端窗口开始训练,请监看终端窗口的训练日志。在终端中按Ctrl+C可暂停训练。"
    -
-def upload_mix_append_file(files, sfiles):
-    try:
-        if sfiles is None:
-            file_paths = [file.name for file in files]
-        else:
-            file_paths = [file.name for file in chain(files, sfiles)]
-        p = {file: 100 for file in file_paths}
-        return file_paths, mix_model_output1.update(value=json.dumps(p, indent=2))
    -    except Exception as e:
    -        if debug: traceback.print_exc()
    -        raise gr.Error(e)
    -
    -def mix_submit_click(js,mode):
    -    try:
    -        assert js.lstrip()!=""
    -        modes = {"凸组合":0, "线性组合":1}
    -        mode = modes[mode]
    -        data = json.loads(js)
    -        data = list(data.items())
    -        model_path,mix_rate = zip(*data)
    -        path = mix_model(model_path,mix_rate,mode)
    -        return f"成功,文件被保存在了{path}"
    -    except Exception as e:
    -        if debug: traceback.print_exc()
    -        raise gr.Error(e)
    -
-def update_mix_info(files):
-    try:
-        if files is None:
-            return mix_model_output1.update(value="")
-        p = {file.name: 100 for file in files}
-        return mix_model_output1.update(value=json.dumps(p, indent=2))
    -    except Exception as e:
    -        if debug: traceback.print_exc()
    -        raise gr.Error(e)
    -
    -def pth_identify():
    -    if not os.path.exists(root_dir):
    -        return f"未找到{root_dir}文件夹,请先创建一个{root_dir}文件夹并按第一步流程操作"
    -    model_dirs = [d for d in os.listdir(root_dir) if os.path.isdir(os.path.join(root_dir, d))]
    -    if not model_dirs:
    -        return f"未在{root_dir}文件夹中找到模型文件夹,请确保每个模型和配置文件都被放置在单独的文件夹中"
    -    valid_model_dirs = []
    -    for path in model_dirs:
    -        pth_files = glob.glob(f"{root_dir}/{path}/*.pth")
    -        json_files = glob.glob(f"{root_dir}/{path}/*.json")
    -        if len(pth_files) != 1 or len(json_files) != 1:
    -            return f"错误: 在{root_dir}/{path}中找到了{len(pth_files)}个.pth文件和{len(json_files)}个.json文件。应当确保每个文件夹内有且只有一个.pth文件和.json文件"
    -        valid_model_dirs.append(path)
    -        
    -    return f"成功识别了{len(valid_model_dirs)}个模型:{valid_model_dirs}"
    -
    -def onnx_export():
    -    model_dirs = [d for d in os.listdir(root_dir) if os.path.isdir(os.path.join(root_dir, d))]
    -    try:
    -        for path in model_dirs:
    -            pth_files = glob.glob(f"{root_dir}/{path}/*.pth")
    -            json_files = glob.glob(f"{root_dir}/{path}/*.json")
    -            model_file = pth_files[0]
    -            json_file = json_files[0]
    -            with open(json_file, 'r') as config_file:
    -                config_data = json.load(config_file)
    -            channels = config_data["model"]["gin_channels"]
    -            if str(channels) == "256":
    -                para1 = 1
    -            if str(channels) == "768":
    -                para1 = 192
    -            device = torch.device("cpu")
    -            hps = utils.get_hparams_from_file(json_file)
    -            SVCVITS = SynthesizerTrn(
    -                hps.data.filter_length // 2 + 1,
    -                hps.train.segment_size // hps.data.hop_length,
    -                **hps.model)
    -            _ = utils.load_checkpoint(model_file, SVCVITS, None)
    -            _ = SVCVITS.eval().to(device)
    -            for i in SVCVITS.parameters():
    -                i.requires_grad = False       
    -            n_frame = 10
    -            test_hidden_unit = torch.rand(para1, n_frame, channels)
    -            test_pitch = torch.rand(1, n_frame)
    -            test_mel2ph = torch.arange(0, n_frame, dtype=torch.int64)[None] # torch.LongTensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]).unsqueeze(0)
    -            test_uv = torch.ones(1, n_frame, dtype=torch.float32)
    -            test_noise = torch.randn(1, 192, n_frame)
    -            test_sid = torch.LongTensor([0])
    -            input_names = ["c", "f0", "mel2ph", "uv", "noise", "sid"]
    -            output_names = ["audio", ]
    -            onnx_file = os.path.splitext(model_file)[0] + ".onnx"
    -            torch.onnx.export(SVCVITS,
    -                              (
    -                                  test_hidden_unit.to(device),
    -                                  test_pitch.to(device),
    -                                  test_mel2ph.to(device),
    -                                  test_uv.to(device),
    -                                  test_noise.to(device),
    -                                  test_sid.to(device)
    -                              ),
    -                              onnx_file,
    -                              dynamic_axes={
    -                                  "c": [0, 1],
    -                                  "f0": [1],
    -                                  "mel2ph": [1],
    -                                  "uv": [1],
    -                                  "noise": [2],
    -                              },
    -                              do_constant_folding=False,
    -                              opset_version=16,
    -                              verbose=False,
    -                              input_names=input_names,
    -                              output_names=output_names)
    -        return "转换成功,模型被保存在了checkpoints下的对应目录"
    -    except Exception as e:
    -        if debug: traceback.print_exc()
    -        return "转换错误:"+str(e)
    -
    -def load_raw_audio(audio_path):
    -    if not os.path.isdir(audio_path):
    -        return "请输入正确的目录", None
    -    files = os.listdir(audio_path)
    -    wav_files = [file for file in files if file.lower().endswith('.wav')]
    -    if not wav_files:
    -        return "未在目录中找到.wav音频文件", None
    -    return "成功加载", wav_files
    -
    -def slicer_fn(input_dir, output_dir, process_method, max_sec, min_sec):
    -    if output_dir == "":
    -        return "请先选择输出的文件夹"
    -    slicer = AutoSlicer()
    -    if not os.path.exists(output_dir):
    -        os.makedirs(output_dir)
    -    for filename in os.listdir(input_dir):
    -        if filename.lower().endswith(".wav"):
    -            slicer.auto_slice(filename, input_dir, output_dir, max_sec)
    -    if process_method == "丢弃":
    -        for filename in os.listdir(output_dir):
    -            if filename.endswith(".wav"):
    -                filepath = os.path.join(output_dir, filename)
    -                audio, sr = librosa.load(filepath, sr=None, mono=False)
    -                if librosa.get_duration(y=audio, sr=sr) < min_sec:
    -                    os.remove(filepath)
    -    elif process_method == "将过短音频整合为长音频":
    -        slicer.merge_short(output_dir, max_sec, min_sec)
    -    file_count, max_duration, min_duration, orig_duration, final_duration = slicer.slice_count(input_dir, output_dir)
    -    hrs = int(final_duration / 3600)
    -    mins = int((final_duration % 3600) / 60)
    -    sec = format(float(final_duration % 60), '.2f')
    -    rate = format(100 * (final_duration / orig_duration), '.2f')
    -    return f"成功将音频切分为{file_count}条片段,其中最长{max_duration}秒,最短{min_duration}秒,切片后的音频总时长{hrs:02d}小时{mins:02d}分{sec}秒,为原始音频时长的{rate}%"
    -
    -def model_compression(_model):
    -    if _model == "":
    -        return "请先选择要压缩的模型"
    -    else:
    -        model_path = os.path.join(workdir, _model)
    -        filename, extension = os.path.splitext(_model)
    -        output_model_name = f"{filename}_compressed{extension}"
    -        output_path = os.path.join(workdir, output_model_name)
    -        removeOptimizer(model_path, output_path)
    -        return f"模型已成功被保存在了{output_path}"
    -
    -# read ckpt list
    -ckpt_list, config_list, cluster_list, diff_list, diff_config_list = load_options()
    -
-# read GPU info
-ngpu = torch.cuda.device_count()
-gpu_infos = []
-if_gpu_ok = False
-if torch.cuda.is_available() and ngpu > 0:
-    for i in range(ngpu):
-        gpu_name = torch.cuda.get_device_name(i)
-        if "MX" in gpu_name:
-            continue
-        # substring checks cover GTX 10/16, RTX 20/30/40, A10/A100/V100/A40/P40/M40/K80, T4, TITAN, etc.
-        if ("10" in gpu_name or "16" in gpu_name or "20" in gpu_name or "30" in gpu_name
-                or "40" in gpu_name or "A50" in gpu_name.upper() or "70" in gpu_name
-                or "80" in gpu_name or "90" in gpu_name or "M4" in gpu_name
-                or "P4" in gpu_name or "T4" in gpu_name or "TITAN" in gpu_name.upper()):
-            if_gpu_ok = True  # at least one usable NVIDIA GPU
-            gpu_infos.append("%s\t%s" % (i, gpu_name))
-gpu_info = "\n".join(gpu_infos) if if_gpu_ok and len(gpu_infos) > 0 else "很遗憾您这没有能用的显卡来支持您训练"
-gpus = "-".join([i[0] for i in gpu_infos])  # leading index digit of each "idx\tname" entry
    -
    -#read default params
    -sovits_params, diff_params = get_default_settings()
    -
    -app = gr.Blocks()
    -
    -def Newget_model_info(choice_ckpt2):
    -    choice_ckpt = str(choice_ckpt2)
    -    pthfile = os.path.join(workdir, choice_ckpt)
-    net = torch.load(pthfile, map_location=torch.device('cpu'))  # load on CPU to avoid touching GPU memory
-    spk_emb = net["model"].get("emb_g.weight")
-    if spk_emb is None:
-        return "所选模型缺少emb_g.weight,你可能选择了一个底模"
-    _num_speakers, _dim = spk_emb.size()  # emb_g.weight is (num_speakers, gin_channels)
-    model_type = {
-        768: "Vec768-Layer12",
-        256: "Vec256-Layer9 / HubertSoft",
-        1024: "Whisper-PPG"
-    }
-    return gr.Textbox(visible=False, value=model_type.get(_dim, "不受支持的模型"))
    -
    -with app:
    -    gr.Markdown(value="""
    -        ### So-VITS-SVC 4.1-Stable
    -                
    -        修改自原项目及bilibili@麦哲云
    -
    -        仅供个人娱乐和非商业用途,禁止用于血腥、暴力、性相关、政治相关内容
    -
-        webui来自:bilibili@羽毛布団,交流③群:416656175
    -        
    -        镜像作者:bilibili@kiss丿冷鸟鸟,交流群:829974025
    -
    -        """)
    -    with gr.Tabs():
    -        with gr.TabItem("北部玄驹 (Kitasan Black)"):
    -            #with gr.Row():
    -            #    choice_ckpt = gr.Dropdown(label="模型选择", choices=ckpt_list, value="no_model")
    -            #    model_branch = gr.Textbox(label="模型编码器", placeholder="请先选择模型", interactive=False)
    -            #choice_ckpt = gr.Dropdown(value="G_171200.pth", visible=False)
    -            #with gr.Row():
    -            #    config_choice = gr.Dropdown(label="配置文件", choices=config_list, value="no_config")
    -            #    config_info = gr.Textbox(label="配置文件编码器", placeholder="请选择配置文件")
    -            config_choice = gr.Dropdown(value="config.json", visible=False)
    -            #gr.Markdown(value="""**请检查模型和配置文件的编码器是否匹配**""")
    -            #with gr.Row():
    -            #    diff_choice = gr.Dropdown(label="(可选)选择扩散模型", choices=diff_list, value="no_diff", interactive=True)
    -            #    diff_config_choice = gr.Dropdown(label="扩散模型配置文件", choices=diff_config_list, value="no_diff_config", interactive=True)
    -            diff_choice = gr.Dropdown(value="no_diff", visible=False)
    -            diff_config_choice = gr.Dropdown(value="no_diff_config", visible=False)
    -            with gr.Row():
    -                cluster_choice = gr.Dropdown(label="(可选)选择聚类模型/特征检索模型", choices=cluster_list, value="no_clu")
    -            with gr.Row():
    -                enhance = gr.Checkbox(label="是否使用NSF_HIFIGAN增强,该选项对部分训练集少的模型有一定的音质增强效果,但是对训练好的模型有反面效果,默认关闭", value=False)
    -                #only_diffusion = gr.Checkbox(label="是否使用全扩散推理,开启后将不使用So-VITS模型,仅使用扩散模型进行完整扩散推理,默认关闭", value=False)
    -                only_diffusion = gr.Checkbox(value=False, visible=False)
    -            #using_device = gr.Dropdown(label="推理设备,默认为自动选择", choices=["Auto","cuda","cpu"], value="Auto")
    -            using_device = gr.Dropdown(value='Auto', visible=False)
    -            #refresh = gr.Button("刷新选项")
    -            #loadckpt = gr.Button("加载模型", variant="primary")
    -            #with gr.Row():
    -            #    model_message = gr.Textbox(label="Output Message")
    -            #    sid = gr.Dropdown(label="So-VITS说话人", value="speaker0")
    -            sid = gr.Dropdown(value="1068", visible=False)
    -            
    -            #choice_ckpt.change(get_model_info, [choice_ckpt], [model_branch])
    -            model_branch = Newget_model_info("G_171200.pth")
    -            #config_choice.change(load_json_encoder, [config_choice], [config_info])
    -            #refresh.click(refresh_options,[],[choice_ckpt,config_choice,cluster_choice,diff_choice,diff_config_choice])
    -
    -            gr.Markdown(value="""
    -                请稍等片刻,模型加载大约需要10秒。后续操作不需要重新加载模型
    -                """)
    -            with gr.Tabs():
    -                with gr.TabItem("单个音频上传"):
    -                    vc_input3 = gr.Audio(label="单个音频上传")
    -                with gr.TabItem("批量音频上传"):
    -                    vc_batch_files = gr.Files(label="批量音频上传", file_types=["audio"], file_count="multiple")
    -                with gr.TabItem("文字转语音(实验性)"):
    -                    gr.Markdown("""
    -                        文字转语音(TTS)说明:使用edge_tts服务生成音频,并转换为So-VITS模型音色。可以在输入文字中使用标点符号简单控制情绪
    -                        zh-CN-XiaoyiNeural:中文女声
    -                        zh-CN-YunxiNeural: 中文男声
    -                        ja-JP-NanamiNeural:日文女声
    -                        ja-JP-KeitaNeural:日文男声
    -                        zh-CN-liaoning-XiaobeiNeural:东北话女声
    -                        zh-CN-shaanxi-XiaoniNeural: 陕西话女声
    -                        zh-HK-HiuMaanNeural: 粤语女声
    -                        zh-HK-WanLungNeural: 粤语男声
    -                    """)
    -                    with gr.Row():
    -                        text_input = gr.Textbox(label = "在此输入需要转译的文字(建议打开自动f0预测)",)
    -                        tts_spk = gr.Dropdown(label = "选择原始音频音色(来自微软TTS)", choices=["zh-CN-XiaoyiNeural", "zh-CN-YunxiNeural", "zh-CN-liaoning-XiaobeiNeural", "zh-CN-shaanxi-XiaoniNeural", "zh-HK-HiuMaanNeural", "zh-HK-WanLungNeural", "ja-JP-NanamiNeural", "ja-JP-KeitaNeural"], value = "zh-CN-XiaoyiNeural")
    -                    #with gr.Row():
    -                    #    tts_rate = gr.Slider(label = "TTS语音变速(倍速)", minimum = 0, maximum = 3, value = 1)
    -                    #    tts_volume = gr.Slider(label = "TTS语音音量(相对值)", minimum = 0, maximum = 1.5, value = 1)
    -
    -            with gr.Row():
    -                auto_f0 = gr.Checkbox(label="自动f0预测,配合聚类模型f0预测效果更好,会导致变调功能失效(仅限转换语音,歌声不要勾选此项会跑调)", value=False)
    -                f0_predictor = gr.Radio(label="f0预测器选择(如遇哑音可以更换f0预测器解决,crepe为原F0使用均值滤波器)", choices=["pm","crepe","harvest","dio"], value="pm")
    -                cr_threshold = gr.Number(label="F0过滤阈值,只有使用crepe时有效. 数值范围从0-1. 降低该值可减少跑调概率,但会增加哑音", value=0.05)
    -            with gr.Row():
    -                vc_transform = gr.Number(label="变调(整数,可以正负,半音数量,升高八度就是12)", value=0)
    -                cluster_ratio = gr.Number(label="聚类模型/特征检索混合比例,0-1之间,默认为0不启用聚类或特征检索,能提升音色相似度,但会导致咬字下降", value=0)
    -                k_step = gr.Slider(label="浅扩散步数,只有使用了扩散模型才有效,步数越大越接近扩散模型的结果", value=100, minimum = 1, maximum = 1000)
    -            with gr.Row():
    -                enhancer_adaptive_key = gr.Number(label="使NSF-HIFIGAN增强器适应更高的音域(单位为半音数)|默认为0", value=0,interactive=True)
    -                slice_db = gr.Number(label="切片阈值", value=-50)
    -                cl_num = gr.Number(label="音频自动切片,0为按默认方式切片,单位为秒/s,爆显存可以设置此处强制切片", value=0)
    -            with gr.Accordion("高级设置(一般不需要动)", open=False):
    -                noise_scale = gr.Number(label="noise_scale 建议不要动,会影响音质,玄学参数", value=0.4)
    -                pad_seconds = gr.Number(label="推理音频pad秒数,由于未知原因开头结尾会有异响,pad一小段静音段后就不会出现", value=0.5)
    -                lg_num = gr.Number(label="两端音频切片的交叉淡入长度,如果自动切片后出现人声不连贯可调整该数值,如果连贯建议采用默认值0,注意,该设置会影响推理速度,单位为秒/s", value=1)
    -                lgr_num = gr.Number(label="自动音频切片后,需要舍弃每段切片的头尾。该参数设置交叉长度保留的比例,范围0-1,左开右闭", value=0.75,interactive=True)
    -                second_encoding = gr.Checkbox(label = "二次编码,浅扩散前会对原始音频进行二次编码,玄学选项,效果时好时差,默认关闭", value=False)
    -                loudness_envelope_adjustment = gr.Number(label="输入源响度包络替换输出响度包络融合比例,越靠近1越使用输出响度包络", value = 0)
    -                use_spk_mix = gr.Checkbox(label="动态声线融合,暂时没做完", value=False, interactive=False)
    -            with gr.Row():
    -                vc_submit = gr.Button("音频转换", variant="primary")
    -                vc_batch_submit = gr.Button("批量转换", variant="primary")
    -                vc_tts_submit = gr.Button("文本转语音", variant="primary")
    -            vc_output1 = gr.Textbox(label="Output Message")
    -            vc_output2 = gr.Audio(label="Output Audio")
    -
    -        def Newvc_fn(sid, input_audio, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale, pad_seconds, cl_num, lg_num, lgr_num, f0_predictor, enhancer_adaptive_key, cr_threshold, k_step, use_spk_mix, second_encoding, loudness_envelope_adjustment, clus2):
    -            global model, loaded
    -            if loaded != clus2:
    -                Newload_model_func("G_171200.pth",clus2,config_choice,enhance,diff_choice,diff_config_choice,only_diffusion,model_branch,using_device)
    -                loaded = clus2
    -            try:
    -                if input_audio is None:
    -                    return "You need to upload an audio", None
    -                if model is None:
    -                    return "You need to upload an model", None
    -                sampling_rate, audio = input_audio
    -                temp_path = "temp.wav"
    -                sf.write(temp_path, audio, sampling_rate, format="wav")
    -                output_file_path = vc_infer(sid, audio, temp_path, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale, pad_seconds, cl_num, lg_num, lgr_num, f0_predictor, enhancer_adaptive_key, cr_threshold, k_step, use_spk_mix, second_encoding, loudness_envelope_adjustment)
    -                os.remove(temp_path)
    -                return "Success", output_file_path
    -            except Exception as e:
    -                if debug: traceback.print_exc()
    -                raise gr.Error(e)
    -        
    -        #loadckpt.click(load_model_func,[choice_ckpt,cluster_choice,config_choice,enhance,diff_choice,diff_config_choice,only_diffusion,model_branch,using_device],[model_message, sid, cl_num])
    -        vc_submit.click(Newvc_fn, [sid, vc_input3, vc_transform,auto_f0,cluster_ratio, slice_db, noise_scale,pad_seconds,cl_num,lg_num,lgr_num,f0_predictor,enhancer_adaptive_key,cr_threshold,k_step,use_spk_mix,second_encoding,loudness_envelope_adjustment,cluster_choice], [vc_output1, vc_output2])
    -        vc_batch_submit.click(vc_batch_fn, [sid, vc_batch_files, vc_transform,auto_f0,cluster_ratio, slice_db, noise_scale,pad_seconds,cl_num,lg_num,lgr_num,f0_predictor,enhancer_adaptive_key,cr_threshold,k_step,use_spk_mix,second_encoding,loudness_envelope_adjustment], [vc_output1])
    -        vc_tts_submit.click(tts_fn, [text_input, tts_spk, sid, vc_transform,auto_f0,cluster_ratio, slice_db, noise_scale,pad_seconds,cl_num,lg_num,lgr_num,f0_predictor,enhancer_adaptive_key,cr_threshold,k_step,use_spk_mix,second_encoding,loudness_envelope_adjustment], [vc_output1, vc_output2])
    -        '''
    -        with gr.TabItem("训练"):
    -            gr.Markdown(value="""请将数据集文件夹放置在dataset_raw文件夹下,确认放置正确后点击下方获取数据集名称""")
    -            raw_dirs_list=gr.Textbox(label="Raw dataset directory(s):")
    -            get_raw_dirs=gr.Button("识别数据集", variant="primary")
    -            gr.Markdown(value="""确认数据集正确识别后请选择训练使用的特征编码器和f0预测器,**如果要训练扩散模型,请选择Vec768l12或hubertsoft,并确保So-VITS和扩散模型使用同一个编码器**""")
    -            with gr.Row():
    -                gr.Markdown(value="""**vec256l9**: ContentVec(256Layer9),旧版本叫v1,So-VITS-SVC 4.0的基础版本,**暂不支持扩散模型**
    -                                **vec768l12**: 特征输入更换为ContentVec的第12层Transformer输出,模型理论上会更加还原训练集音色
    -                                **hubertsoft**: So-VITS-SVC 3.0使用的编码器,咬字更为准确,但可能存在多说话人音色泄露问题
    -                                **whisper-ppg**: 来自OpenAI,咬字最为准确,但和Hubertsoft一样存在多说话人音色泄露,且显存占用和训练时间有明显增加。**暂不支持扩散模型**
    -                """)
    -                gr.Markdown(value="""**crepe**: 抗噪能力最强,但预处理速度慢(不过如果你的显卡很强的话速度会很快)
    -                                **pm**: 预处理速度快,但抗噪能力较弱
    -                                **dio**: 先前版本预处理默认使用的f0预测器
    -                                **harvest**: 有一定抗噪能力,预处理显存占用友好,速度比较慢
    -                """)
    -            with gr.Row():
    -                branch_selection = gr.Radio(label="选择训练使用的编码器", choices=["vec256l9","vec768l12","hubertsoft","whisper-ppg"], value="vec768l12", interactive=True)
    -                f0_predictor_selection = gr.Radio(label="选择训练使用的f0预测器", choices=["crepe","pm","dio","harvest"], value="crepe", interactive=True)
    -                use_diff = gr.Checkbox(label="是否使用浅扩散模型,如要训练浅扩散模型请勾选此项", value=True)
    -                vol_aug=gr.Checkbox(label="是否启用响度嵌入和音量增强,启用后可以根据输入源控制输出响度,但对数据集质量的要求更高。**仅支持vec768l12编码器**", value=False)
    -            with gr.Row():
    -                skip_loudnorm = gr.Checkbox(label="是否跳过响度匹配,如果你已经用音频处理软件做过响度匹配,请勾选此处")
    -                num_processes = gr.Slider(label="预处理使用的CPU线程数,可以大幅加快预处理速度,但线程数过大容易爆显存,建议12G显存设置为2", minimum=1, maximum=multiprocessing.cpu_count(), value=1, step=1)
    -            with gr.Row():
    -                raw_preprocess=gr.Button("数据预处理", variant="primary")
    -                regenerate_config_btn=gr.Button("重新生成配置文件", variant="primary")
    -            preprocess_output=gr.Textbox(label="预处理输出信息,完成后请检查一下是否有报错信息,如无则可以进行下一步", max_lines=999)
    -            clear_preprocess_output=gr.Button("清空输出信息")
    -            with gr.Group():
    -                gr.Markdown(value="""填写训练设置和超参数""")
    -                with gr.Row():
    -                    gr.Textbox(label="当前使用显卡信息", value=gpu_info)
    -                    gpu_selection=gr.Textbox(label="多卡用户请指定希望训练使用的显卡ID(0,1,2...)", value=gpus, interactive=True)
    -                with gr.Row():
    -                    log_interval=gr.Textbox(label="每隔多少步(steps)生成一次评估日志", value=sovits_params['log_interval'])
    -                    eval_interval=gr.Textbox(label="每隔多少步(steps)验证并保存一次模型", value=sovits_params['eval_interval'])
    -                    keep_ckpts=gr.Textbox(label="仅保留最新的X个模型,超出该数字的旧模型会被删除。设置为0则永不删除", value=sovits_params['keep_ckpts'])
    -                with gr.Row():
    -                    batch_size=gr.Textbox(label="批量大小,每步取多少条数据进行训练,大batch有助于训练但显著增加显存占用。6G显存建议设定为4", value=sovits_params['batch_size'])
    -                    lr=gr.Textbox(label="学习率,一般不用动,批量大小较大时可以适当增大学习率,但强烈不建议超过0.0002,有炸炉风险", value=sovits_params['learning_rate'])
    -                    fp16_run=gr.Checkbox(label="是否使用fp16混合精度训练,fp16训练可能降低显存占用和训练时间,但对模型质量的影响尚未查证", value=sovits_params['fp16_run'])
    -                    all_in_mem=gr.Checkbox(label="是否加载所有数据集到内存中,硬盘IO过于低下、同时内存容量远大于数据集体积时可以启用,能显著加快训练速度", value=sovits_params['all_in_mem'])
    -                with gr.Row():
    -                    gr.Markdown("请检查右侧的说话人列表是否和你要训练的目标说话人一致,确认无误后点击写入配置文件,然后就可以开始训练了")
    -                    speakers=gr.Textbox(label="说话人列表")
    -            with gr.Accordion(label = "扩散模型配置(训练扩散模型需要写入此处)", open=True):
    -                with gr.Row():
    -                    diff_num_workers = gr.Number(label="num_workers, 如果你的电脑配置较高,可以将这里设置为0加快训练速度", value=diff_params['num_workers'])
    -                    diff_cache_all_data = gr.Checkbox(label="是否缓存数据,启用后可以加快训练速度,关闭后可以节省显存或内存,但会减慢训练速度", value=diff_params['cache_all_data'])
    -                    diff_cache_device = gr.Radio(label="若启用缓存数据,使用显存(cuda)还是内存(cpu)缓存,如果显卡显存充足,选择cuda以加快训练速度", choices=["cuda","cpu"], value=diff_params['cache_device'])
    -                    diff_amp_dtype = gr.Radio(label="训练数据类型,fp16可能会有更快的训练速度,前提是你的显卡支持", choices=["fp32","fp16"], value=diff_params['amp_dtype'])
    -                with gr.Row():
    -                    diff_batch_size = gr.Number(label="批量大小(batch_size),根据显卡显存设置,小显存适当降低该项,6G显存可以设定为48,但该数值不要超过数据集总数量的1/4", value=diff_params['diff_batch_size'])
    -                    diff_lr = gr.Number(label="学习率(一般不需要动)", value=diff_params['diff_lr'])
    -                    diff_interval_log = gr.Number(label="每隔多少步(steps)生成一次评估日志", value = diff_params['diff_interval_log'])
    -                    diff_interval_val = gr.Number(label="每隔多少步(steps)验证并保存一次模型,如果你的批量大小较大,可以适当减少这里的数字,但不建议设置为1000以下", value=diff_params['diff_interval_val'])
    -                    diff_force_save = gr.Number(label="每隔多少步强制保留模型,只有该步数的倍数保存的模型会被保留,其余会被删除。设置为与验证步数相同的值则每个模型都会被保留", value=diff_params['diff_force_save'])
    -            with gr.Row():
    -                save_params=gr.Button("将当前设置保存为默认设置", variant="primary")
    -                write_config=gr.Button("写入配置文件", variant="primary")
    -            write_config_output=gr.Textbox(label="输出信息")
    -
    -            gr.Markdown(value="""**点击从头开始训练**将会自动将已有的训练进度保存到models_backup文件夹,并自动装载预训练模型。
    -                **继续上一次的训练进度**将从上一个保存模型的进度继续训练。继续训练进度无需重新预处理和写入配置文件。
    -                关于扩散、聚类和特征检索的详细说明请看[此处](https://www.yuque.com/umoubuton/ueupp5/kmui02dszo5zrqkz)。
    -                """)
    -            with gr.Row():
    -                with gr.Column():
    -                    start_training=gr.Button("从头开始训练", variant="primary")
    -                    training_output=gr.Textbox(label="训练输出信息")
    -                with gr.Column():
    -                    continue_training_btn=gr.Button("继续上一次的训练进度", variant="primary")
    -                    continue_training_output=gr.Textbox(label="训练输出信息")
    -            with gr.Row():
    -                with gr.Column():
    -                    diff_training_btn=gr.Button("从头训练扩散模型", variant="primary")
    -                    diff_training_output=gr.Textbox(label="训练输出信息")
    -                with gr.Column():
    -                    diff_continue_training_btn=gr.Button("继续训练扩散模型", variant="primary")
    -                    diff_continue_training_output=gr.Textbox(label="训练输出信息") 
    -            with gr.Accordion(label = "聚类、特征检索训练", open=False):
    -                with gr.Row():               
    -                    with gr.Column():
    -                        kmeans_button=gr.Button("训练聚类模型", variant="primary")
    -                        kmeans_gpu = gr.Checkbox(label="使用GPU训练", value=True)
    -                        kmeans_output=gr.Textbox(label="训练输出信息")
    -                    with gr.Column():
    -                        index_button=gr.Button("训练特征检索模型", variant="primary")
    -                        index_output=gr.Textbox(label="训练输出信息")
    -            '''
    -        with gr.TabItem("小工具/实验室特性"):
    -            gr.Markdown(value="""
    -                        ### So-vits-svc 4.1 小工具/实验室特性
    -                        提供了一些有趣或实用的小工具,可以自行探索
    -                        """)
    -            with gr.Tabs():
    -                with gr.TabItem("静态声线融合"):
    -                    gr.Markdown(value="""
    -                         介绍:该功能可以将多个声音模型合成为一个声音模型(多个模型参数的凸组合或线性组合),从而制造出现实中不存在的声线 
    -                                          注意:
    -                                          1.该功能仅支持单说话人的模型
-                                          2.如果强行使用多说话人模型,需要保证多个模型的说话人数量相同,这样可以混合同一个SpeakerID下的声音
    -                                          3.保证所有待混合模型的config.json中的model字段是相同的
    -                                          4.输出的混合模型可以使用待合成模型的任意一个config.json,但聚类模型将不能使用
    -                                          5.批量上传模型的时候最好把模型放到一个文件夹选中后一起上传
    -                                          6.混合比例调整建议大小在0-100之间,也可以调为其他数字,但在线性组合模式下会出现未知的效果
    -                                          7.混合完毕后,文件将会保存在项目根目录中,文件名为output.pth
    -                                          8.凸组合模式会将混合比例执行Softmax使混合比例相加为1,而线性组合模式不会
    -                        
    -                        """)
    -                    mix_model_path = gr.Files(label="选择需要混合模型文件")
    -                    mix_model_upload_button = gr.UploadButton("选择/追加需要混合模型文件", file_count="multiple")
    -                    mix_model_output1 = gr.Textbox(
    -                                            label="混合比例调整,单位/%",
    -                                            interactive = True
    -                                         )
    -                    mix_mode = gr.Radio(choices=["凸组合", "线性组合"], label="融合模式",value="凸组合",interactive = True)
    -                    mix_submit = gr.Button("声线融合启动", variant="primary")
    -                    mix_model_output2 = gr.Textbox(
    -                                            label="Output Message"
    -                                         )
    -                with gr.TabItem("onnx转换"):
    -                    gr.Markdown(value="""
    -                        提供了将.pth模型(批量)转换为.onnx模型的功能
    -                        源项目本身自带转换的功能,但不支持批量,操作也不够简单,这个工具可以支持在WebUI中以可视化的操作方式批量转换.onnx模型
    -                        有人可能会问,转.onnx模型有什么作用呢?相信我,如果你问出了这个问题,说明这个工具你应该用不上
    -
    -                        ### Step 1: 
    -                        在整合包根目录下新建一个"checkpoints"文件夹,将pth模型和对应的json配置文件按目录分别放置到checkpoints文件夹下
    -                        看起来应该像这样:
    -                        checkpoints
    -                        ├───xxxx
    -                        │   ├───xxxx.pth
    -                        │   └───xxxx.json
    -                        ├───xxxx
    -                        │   ├───xxxx.pth
    -                        │   └───xxxx.json
    -                        └───……
    -                        """)
    -                    pth_dir_msg = gr.Textbox(label="识别待转换模型", placeholder="请将模型和配置文件按上述说明放置在正确位置")
    -                    pth_dir_identify_btn = gr.Button("识别", variant="primary")
    -                    gr.Markdown(value="""
    -                        ### Step 2:
    -                        识别正确后点击下方开始转换,转换一个模型可能需要一分钟甚至更久
    -                        """)
    -                    pth2onnx_btn = gr.Button("开始转换", variant="primary")
    -                    pth2onnx_msg = gr.Textbox(label="输出信息")
    -
    -                with gr.TabItem("智能音频切片"):
    -                    gr.Markdown(value="""
    -                        该工具可以实现对音频的切片,无需调整参数即可完成符合要求的数据集制作。
    -                        数据集要求的音频切片约在2-15秒内,用传统的Slicer-GUI切片工具需要精准调参和二次切片才能符合要求,该工具省去了上述繁琐的操作,只要上传原始音频即可一键制作数据集。
    -                    """)
    -                    with gr.Row():
    -                        raw_audio_path = gr.Textbox(label="原始音频文件夹", placeholder="包含所有待切片音频的文件夹,示例: D:\干声\speakers")
    -                        load_raw_audio_btn = gr.Button("加载原始音频", variant = "primary")
    -                    load_raw_audio_output = gr.Textbox(label = "输出信息")
    -                    raw_audio_dataset = gr.Textbox(label = "音频列表", value = "")
    -                    slicer_output_dir = gr.Textbox(label = "输出目录", placeholder = "选择输出目录")
    -                    with gr.Row():
    -                        process_method = gr.Radio(label = "对过短音频的处理方式", choices = ["丢弃","将过短音频整合为长音频"], value = "丢弃")
    -                        max_sec = gr.Number(label = "切片的最长秒数", value = 15)
    -                        min_sec = gr.Number(label = "切片的最短秒数", value = 2)
    -                    slicer_btn = gr.Button("开始切片", variant = "primary")
    -                    slicer_output_msg = gr.Textbox(label = "输出信息")
    -
-                    mix_model_path.change(update_mix_info, [mix_model_path], [mix_model_output1])
    -                    mix_model_upload_button.upload(upload_mix_append_file, [mix_model_upload_button,mix_model_path], [mix_model_path,mix_model_output1])
    -                    mix_submit.click(mix_submit_click, [mix_model_output1,mix_mode], [mix_model_output2])
    -                    pth_dir_identify_btn.click(pth_identify, [], [pth_dir_msg])
    -                    pth2onnx_btn.click(onnx_export, [], [pth2onnx_msg])
    -                    load_raw_audio_btn.click(load_raw_audio, [raw_audio_path], [load_raw_audio_output, raw_audio_dataset])
    -                    slicer_btn.click(slicer_fn, [raw_audio_path, slicer_output_dir, process_method, max_sec, min_sec], [slicer_output_msg])
    -                
    -                with gr.TabItem("模型压缩工具"):
    -                    gr.Markdown(value="""
    -                        该工具可以实现对模型的体积压缩,在**不影响模型推理功能**的情况下,将原本约600M的So-VITS模型压缩至约200M, 大大减少了硬盘的压力。
    -                        **注意:压缩后的模型将无法继续训练,请在确认封炉后再压缩。**
    -                        将模型文件放置在logs/44k下,然后选择需要压缩的模型
    -                    """)
    -                    model_to_compress = gr.Dropdown(label="模型选择", choices=ckpt_list, value="")
    -                    compress_model_btn = gr.Button("压缩模型", variant="primary")
    -                    compress_model_output = gr.Textbox(label="输出信息", value="")
    -
    -                    compress_model_btn.click(model_compression, [model_to_compress], [compress_model_output])
    -        """
    -        get_raw_dirs.click(load_raw_dirs,[],[raw_dirs_list])
    -        raw_preprocess.click(dataset_preprocess,[branch_selection, f0_predictor_selection, use_diff, vol_aug, skip_loudnorm, num_processes],[preprocess_output, speakers])
    -        regenerate_config_btn.click(regenerate_config,[branch_selection, vol_aug],[preprocess_output])
    -        clear_preprocess_output.click(clear_output,[],[preprocess_output])
    -        save_params.click(save_default_settings, [log_interval,eval_interval,keep_ckpts,batch_size,lr,fp16_run,all_in_mem,diff_num_workers,diff_cache_all_data,diff_cache_device,diff_amp_dtype,diff_batch_size,diff_lr,diff_interval_log,diff_interval_val,diff_force_save], [write_config_output])
    -        write_config.click(config_fn,[log_interval, eval_interval, keep_ckpts, batch_size, lr, fp16_run, all_in_mem, diff_num_workers, diff_cache_all_data, diff_batch_size, diff_lr, diff_interval_log, diff_interval_val, diff_cache_device, diff_amp_dtype, diff_force_save],[write_config_output])
    -        start_training.click(training,[gpu_selection, branch_selection],[training_output])
    -        diff_training_btn.click(diff_training,[branch_selection],[diff_training_output])
    -        continue_training_btn.click(continue_training,[gpu_selection, branch_selection],[continue_training_output])
    -        diff_continue_training_btn.click(diff_continue_training,[branch_selection],[diff_continue_training_output])
    -        kmeans_button.click(kmeans_training,[kmeans_gpu],[kmeans_output])
    -        index_button.click(index_training, [], [index_output])
    -        """
    -    with gr.Tabs():
    -        with gr.Row(variant="panel"):
    -            with gr.Column():
    -                gr.Markdown(value="""
    -                     WebUI设置
    -                    """)
    -                debug_button = gr.Checkbox(label="Debug模式,反馈BUG需要打开,打开后控制台可以显示具体错误提示", value=debug)
    -
    -        debug_button.change(debug_change,[],[])
    -
    -        app.queue(concurrency_count=1022, max_size=2044).launch()
    diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/train_index.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/train_index.py
    deleted file mode 100644
    index a8d8cae451b9c2a18dce3db6e2023bc29d48a021..0000000000000000000000000000000000000000
    --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/train_index.py
    +++ /dev/null
    @@ -1,30 +0,0 @@
    -import utils
    -import pickle
    -import os
    -import argparse
    -
    -
    -if __name__ == "__main__":
    -    parser = argparse.ArgumentParser()
    -    parser.add_argument(
    -        "--root_dir", type=str, default="dataset/44k", help="path to root dir"
    -    )
    -    parser.add_argument('-c', '--config', type=str, default="./configs/config.json",
    -                    help='JSON file for configuration')
    -    parser.add_argument(
    -        "--output_dir", type=str, default="logs/44k", help="path to output dir"
    -    )
    -
    -    args = parser.parse_args()
    -
    -    hps = utils.get_hparams_from_file(args.config)
    -    spk_dic = hps.spk
    -    result = {}
    -    
    -    for k,v in spk_dic.items():
    -        print(f"now, index {k} feature...")
    -        index = utils.train_index(k,args.root_dir)
    -        result[v] = index
    -
    -    with open(os.path.join(args.output_dir,"feature_and_index.pkl"),"wb") as f:
    -        pickle.dump(result,f)
    \ No newline at end of file
    diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/vdecoder/hifiganwithsnake/models.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/vdecoder/hifiganwithsnake/models.py
    deleted file mode 100644
    index 64f0e4dc985afd7993f78bb1b9743139990fa4d1..0000000000000000000000000000000000000000
    --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/vdecoder/hifiganwithsnake/models.py
    +++ /dev/null
    @@ -1,518 +0,0 @@
    -import os
    -import json
    -from .env import AttrDict
    -import numpy as np
    -import torch
    -import torch.nn.functional as F
    -import torch.nn as nn
    -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
    -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
    -from .utils import init_weights, get_padding
    -from vdecoder.hifiganwithsnake.alias.act import SnakeAlias
    -
    -LRELU_SLOPE = 0.1
    -
    -
    -def load_model(model_path, device='cuda'):
    -    config_file = os.path.join(os.path.split(model_path)[0], 'config.json')
    -    with open(config_file) as f:
    -        data = f.read()
    -
    -    global h
    -    json_config = json.loads(data)
    -    h = AttrDict(json_config)
    -
    -    generator = Generator(h).to(device)
    -
    -    cp_dict = torch.load(model_path)
    -    generator.load_state_dict(cp_dict['generator'])
    -    generator.eval()
    -    generator.remove_weight_norm()
    -    del cp_dict
    -    return generator, h
    -
    -
    -class ResBlock1(torch.nn.Module):
    -    def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)):
    -        super(ResBlock1, self).__init__()
    -        self.h = h
    -        self.convs1 = nn.ModuleList([
    -            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
    -                               padding=get_padding(kernel_size, dilation[0]))),
    -            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
    -                               padding=get_padding(kernel_size, dilation[1]))),
    -            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
    -                               padding=get_padding(kernel_size, dilation[2])))
    -        ])
    -        self.convs1.apply(init_weights)
    -
    -        self.convs2 = nn.ModuleList([
    -            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
    -                               padding=get_padding(kernel_size, 1))),
    -            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
    -                               padding=get_padding(kernel_size, 1))),
    -            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
    -                               padding=get_padding(kernel_size, 1)))
    -        ])
    -        self.convs2.apply(init_weights)
    -
    -        self.num_layers = len(self.convs1) + len(self.convs2)
    -        self.activations = nn.ModuleList([
    -            SnakeAlias(channels) for _ in range(self.num_layers)
    -        ])
    -
    -    def forward(self, x):
    -        acts1, acts2 = self.activations[::2], self.activations[1::2]
    -        for c1, c2, a1, a2 in zip(self.convs1, self.convs2, acts1, acts2):
    -            xt = a1(x)
    -            xt = c1(xt)
    -            xt = a2(xt)
    -            xt = c2(xt)
    -            x = xt + x
    -        return x
    -
    -    def remove_weight_norm(self):
    -        for l in self.convs1:
    -            remove_weight_norm(l)
    -        for l in self.convs2:
    -            remove_weight_norm(l)
    -
    -
    -class ResBlock2(torch.nn.Module):
    -    def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)):
    -        super(ResBlock2, self).__init__()
    -        self.h = h
    -        self.convs = nn.ModuleList([
    -            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
    -                               padding=get_padding(kernel_size, dilation[0]))),
    -            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
    -                               padding=get_padding(kernel_size, dilation[1])))
    -        ])
    -        self.convs.apply(init_weights)
    -        
    -        self.num_layers = len(self.convs)
    -        self.activations = nn.ModuleList([
    -            SnakeAlias(channels) for _ in range(self.num_layers)
    -        ])
    -
    -    def forward(self, x):
    -        for c,a in zip(self.convs, self.activations):
    -            xt = a(x)
    -            xt = c(xt)
    -            x = xt + x
    -        return x
    -
    -    def remove_weight_norm(self):
    -        for l in self.convs:
    -            remove_weight_norm(l)
    -
    -
    -def padDiff(x):
    -    return F.pad(F.pad(x, (0,0,-1,1), 'constant', 0) - x, (0,0,0,-1), 'constant', 0)
    -
    -class SineGen(torch.nn.Module):
    -    """ Definition of sine generator
    -    SineGen(samp_rate, harmonic_num = 0,
    -            sine_amp = 0.1, noise_std = 0.003,
    -            voiced_threshold = 0,
    -            flag_for_pulse=False)
    -    samp_rate: sampling rate in Hz
    -    harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
    -    Note: when flag_for_pulse is True, the first time step of a voiced
    -        segment is always sin(np.pi) or cos(0)
    -    """
    -
    -    def __init__(self, samp_rate, harmonic_num=0,
    -                 sine_amp=0.1, noise_std=0.003,
    -                 voiced_threshold=0,
    -                 flag_for_pulse=False):
    -        super(SineGen, self).__init__()
    -        self.sine_amp = sine_amp
    -        self.noise_std = noise_std
    -        self.harmonic_num = harmonic_num
    -        self.dim = self.harmonic_num + 1
    -        self.sampling_rate = samp_rate
    -        self.voiced_threshold = voiced_threshold
    -        self.flag_for_pulse = flag_for_pulse
    -
    -    def _f02uv(self, f0):
    -        # generate uv signal
    -        uv = (f0 > self.voiced_threshold).type(torch.float32)
    -        return uv
    -
    -    def _f02sine(self, f0_values):
    -        """ f0_values: (batchsize, length, dim)
    -            where dim indicates fundamental tone and overtones
    -        """
-        # convert to F0 in rad. The integer part n can be ignored
    -        # because 2 * np.pi * n doesn't affect phase
    -        rad_values = (f0_values / self.sampling_rate) % 1
    -
    -        # initial phase noise (no noise for fundamental component)
    -        rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], \
    -                              device=f0_values.device)
    -        rand_ini[:, 0] = 0
    -        rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
    -
-        # instantaneous phase sine[t] = sin(2*pi \sum_{i=1}^{t} rad)
    -        if not self.flag_for_pulse:
    -            # for normal case
    -
    -            # To prevent torch.cumsum numerical overflow,
    -            # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1.
    -            # Buffer tmp_over_one_idx indicates the time step to add -1.
    -            # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi
    -            tmp_over_one = torch.cumsum(rad_values, 1) % 1
    -            tmp_over_one_idx = (padDiff(tmp_over_one)) < 0
    -            cumsum_shift = torch.zeros_like(rad_values)
    -            cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
    -
    -            sines = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1)
    -                              * 2 * np.pi)
    -        else:
    -            # If necessary, make sure that the first time step of every
    -            # voiced segments is sin(pi) or cos(0)
    -            # This is used for pulse-train generation
    -
    -            # identify the last time step in unvoiced segments
    -            uv = self._f02uv(f0_values)
    -            uv_1 = torch.roll(uv, shifts=-1, dims=1)
    -            uv_1[:, -1, :] = 1
    -            u_loc = (uv < 1) * (uv_1 > 0)
    -
-            # get the instantaneous phase
    -            tmp_cumsum = torch.cumsum(rad_values, dim=1)
    -            # different batch needs to be processed differently
    -            for idx in range(f0_values.shape[0]):
    -                temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :]
    -                temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :]
    -                # stores the accumulation of i.phase within
    -                # each voiced segments
    -                tmp_cumsum[idx, :, :] = 0
    -                tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum
    -
    -            # rad_values - tmp_cumsum: remove the accumulation of i.phase
    -            # within the previous voiced segment.
    -            i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1)
    -
    -            # get the sines
    -            sines = torch.cos(i_phase * 2 * np.pi)
    -        return sines
    -
    -    def forward(self, f0):
    -        """ sine_tensor, uv = forward(f0)
    -        input F0: tensor(batchsize=1, length, dim=1)
    -                  f0 for unvoiced steps should be 0
    -        output sine_tensor: tensor(batchsize=1, length, dim)
    -        output uv: tensor(batchsize=1, length, 1)
    -        """
    -        with torch.no_grad():
    -            f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim,
    -                                 device=f0.device)
    -            # fundamental component
    -            fn = torch.multiply(f0, torch.FloatTensor([[range(1, self.harmonic_num + 2)]]).to(f0.device))
    -
    -            # generate sine waveforms
    -            sine_waves = self._f02sine(fn) * self.sine_amp
    -
    -            # generate uv signal
    -            # uv = torch.ones(f0.shape)
    -            # uv = uv * (f0 > self.voiced_threshold)
    -            uv = self._f02uv(f0)
    -
-            # noise: for unvoiced frames the std should be similar to sine_amp
-            #        (std = self.sine_amp/3 -> max value ~ self.sine_amp);
-            #        for voiced regions the std is self.noise_std
    -            noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
    -            noise = noise_amp * torch.randn_like(sine_waves)
    -
    -            # first: set the unvoiced part to 0 by uv
    -            # then: additive noise
    -            sine_waves = sine_waves * uv + noise
    -        return sine_waves, uv, noise
    -
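-# Illustrative usage sketch (added for clarity, not part of the original module;
-# run with `python -m vdecoder.hifiganwithsnake.models` so the relative imports
-# resolve): SineGen turns a (batch, frames, 1) F0 track into per-harmonic sine
-# waves plus a U/V mask and the matching noise component.
-if __name__ == "__main__":
-    _gen = SineGen(samp_rate=44100, harmonic_num=8)
-    _f0 = torch.full((1, 100, 1), 220.0)  # 100 frames of a steady 220 Hz tone
-    _sines, _uv, _noise = _gen(_f0)
-    print(_sines.shape, _uv.shape)  # torch.Size([1, 100, 9]) torch.Size([1, 100, 1])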
    -
    -class SourceModuleHnNSF(torch.nn.Module):
    -    """ SourceModule for hn-nsf
    -    SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
-                 add_noise_std=0.003, voiced_threshold=0)
    -    sampling_rate: sampling_rate in Hz
    -    harmonic_num: number of harmonic above F0 (default: 0)
    -    sine_amp: amplitude of sine source signal (default: 0.1)
    -    add_noise_std: std of additive Gaussian noise (default: 0.003)
    -        note that amplitude of noise in unvoiced is decided
    -        by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
    -    Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
    -    F0_sampled (batchsize, length, 1)
    -    Sine_source (batchsize, length, 1)
    -    noise_source (batchsize, length 1)
    -    uv (batchsize, length, 1)
    -    """
    -
    -    def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1,
-                 add_noise_std=0.003, voiced_threshold=0):
    -        super(SourceModuleHnNSF, self).__init__()
    -
    -        self.sine_amp = sine_amp
    -        self.noise_std = add_noise_std
    -
    -        # to produce sine waveforms
    -        self.l_sin_gen = SineGen(sampling_rate, harmonic_num,
-                                 sine_amp, add_noise_std, voiced_threshold)
    -
    -        # to merge source harmonics into a single excitation
    -        self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
    -        self.l_tanh = torch.nn.Tanh()
    -
    -    def forward(self, x):
    -        """
    -        Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
    -        F0_sampled (batchsize, length, 1)
    -        Sine_source (batchsize, length, 1)
-        noise_source (batchsize, length, 1)
    -        """
    -        # source for harmonic branch
    -        sine_wavs, uv, _ = self.l_sin_gen(x)
    -        sine_merge = self.l_tanh(self.l_linear(sine_wavs))
    -
    -        # source for noise branch, in the same shape as uv
    -        noise = torch.randn_like(uv) * self.sine_amp / 3
    -        return sine_merge, noise, uv
    -
    -
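A minimal usage sketch of the source module above (shapes follow the class docstring; the sampling rate and the constant 220 Hz contour are illustrative assumptions, and the F0 contour must already be upsampled to the waveform rate):

```python
import torch

source = SourceModuleHnNSF(sampling_rate=44100, harmonic_num=8)
f0 = torch.full((1, 16000, 1), 220.0)           # (batch, length, 1); 0 marks unvoiced steps
sine_merge, noise, uv = source(f0)
print(sine_merge.shape, noise.shape, uv.shape)  # each (1, 16000, 1)
```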
    -class Generator(torch.nn.Module):
    -    def __init__(self, h):
    -        super(Generator, self).__init__()
    -        self.h = h
    -
    -        self.num_kernels = len(h["resblock_kernel_sizes"])
    -        self.num_upsamples = len(h["upsample_rates"])
    -        self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h["upsample_rates"]))
    -        self.m_source = SourceModuleHnNSF(
    -            sampling_rate=h["sampling_rate"],
    -            harmonic_num=8)
    -        self.noise_convs = nn.ModuleList()
    -        self.conv_pre = weight_norm(Conv1d(h["inter_channels"], h["upsample_initial_channel"], 7, 1, padding=3))
    -        resblock = ResBlock1 if h["resblock"] == '1' else ResBlock2
    -        self.ups = nn.ModuleList()
    -        for i, (u, k) in enumerate(zip(h["upsample_rates"], h["upsample_kernel_sizes"])):
    -            c_cur = h["upsample_initial_channel"] // (2 ** (i + 1))
    -            self.ups.append(weight_norm(
    -                ConvTranspose1d(h["upsample_initial_channel"] // (2 ** i), h["upsample_initial_channel"] // (2 ** (i + 1)),
    -                                k, u, padding=(k - u + 1) // 2)))
-            if i + 1 < len(h["upsample_rates"]):
    -                stride_f0 = np.prod(h["upsample_rates"][i + 1:])
    -                self.noise_convs.append(Conv1d(
-                    1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=(stride_f0 + 1) // 2))
    -            else:
    -                self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
    -        self.resblocks = nn.ModuleList()
    -        self.snakes = nn.ModuleList()
    -        for i in range(len(self.ups)):
    -            ch = h["upsample_initial_channel"] // (2 ** (i + 1))
    -            self.snakes.append(SnakeAlias(h["upsample_initial_channel"] // (2 ** (i))))
    -            for j, (k, d) in enumerate(zip(h["resblock_kernel_sizes"], h["resblock_dilation_sizes"])):
    -                self.resblocks.append(resblock(h, ch, k, d))
    -
    -        self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3))
    -        self.ups.apply(init_weights)
    -        self.conv_post.apply(init_weights)
    -        self.snake_post = SnakeAlias(ch)
    -        self.cond = nn.Conv1d(h['gin_channels'], h['upsample_initial_channel'], 1)
    -        
    -    def forward(self, x, f0, g=None):
    -        # print(1,x.shape,f0.shape,f0[:, None].shape)
    -        f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2)  # bs,n,t
    -        # print(2,f0.shape)
    -        har_source, noi_source, uv = self.m_source(f0)
    -        har_source = har_source.transpose(1, 2)
    -        x = self.conv_pre(x)
    -        x = x + self.cond(g)
    -        # print(124,x.shape,har_source.shape)
    -        for i in range(self.num_upsamples):
    -            x = self.snakes[i](x)
    -            # print(3,x.shape)
    -            x = self.ups[i](x)
    -            x_source = self.noise_convs[i](har_source)
    -            # print(4,x_source.shape,har_source.shape,x.shape)
    -            x = x + x_source
    -            xs = None
    -            for j in range(self.num_kernels):
    -                if xs is None:
    -                    xs = self.resblocks[i * self.num_kernels + j](x)
    -                else:
    -                    xs += self.resblocks[i * self.num_kernels + j](x)
    -            x = xs / self.num_kernels
    -        x = self.snake_post(x)
    -        x = self.conv_post(x)
    -        x = torch.tanh(x)
    -
    -        return x
    -
    -    def remove_weight_norm(self):
    -        print('Removing weight norm...')
    -        for l in self.ups:
    -            remove_weight_norm(l)
    -        for l in self.resblocks:
    -            l.remove_weight_norm()
    -        remove_weight_norm(self.conv_pre)
    -        remove_weight_norm(self.conv_post)
    -
    -
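For reference, a hypothetical hyper-parameter dict `h` covering every key the constructor above reads; the values mirror common 44.1 kHz HiFi-GAN/NSF configs and are assumptions, not this repo's exact settings:

```python
h = {
    "resblock": "1",
    "resblock_kernel_sizes": [3, 7, 11],
    "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
    "upsample_rates": [8, 8, 2, 2],          # product = 256 = hop length
    "upsample_kernel_sizes": [16, 16, 4, 4],
    "upsample_initial_channel": 512,
    "inter_channels": 192,
    "gin_channels": 256,
    "sampling_rate": 44100,
}
gen = Generator(h)  # forward(x, f0, g) then produces the waveform
```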
    -class DiscriminatorP(torch.nn.Module):
    -    def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
    -        super(DiscriminatorP, self).__init__()
    -        self.period = period
-        norm_f = weight_norm if not use_spectral_norm else spectral_norm
    -        self.convs = nn.ModuleList([
    -            norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
    -            norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
    -            norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
    -            norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
    -            norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))),
    -        ])
    -        self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
    -
    -    def forward(self, x):
    -        fmap = []
    -
    -        # 1d to 2d
    -        b, c, t = x.shape
    -        if t % self.period != 0:  # pad first
    -            n_pad = self.period - (t % self.period)
    -            x = F.pad(x, (0, n_pad), "reflect")
    -            t = t + n_pad
    -        x = x.view(b, c, t // self.period, self.period)
    -
    -        for l in self.convs:
    -            x = l(x)
    -            x = F.leaky_relu(x, LRELU_SLOPE)
    -            fmap.append(x)
    -        x = self.conv_post(x)
    -        fmap.append(x)
    -        x = torch.flatten(x, 1, -1)
    -
    -        return x, fmap
    -
    -
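The 1d-to-2d step in `DiscriminatorP.forward` is the heart of the multi-period design: the waveform is padded to a multiple of the period and folded so that equally spaced samples line up along one axis. A standalone sketch of just that reshaping:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 22050)          # (B, C, T) waveform
period = 3
t = x.shape[-1]
if t % period != 0:                   # pad so T divides evenly
    x = F.pad(x, (0, period - t % period), "reflect")
x2d = x.view(1, 1, -1, period)        # (B, C, T//period, period)
print(x2d.shape)                      # torch.Size([1, 1, 7350, 3])
```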
    -class MultiPeriodDiscriminator(torch.nn.Module):
    -    def __init__(self, periods=None):
    -        super(MultiPeriodDiscriminator, self).__init__()
    -        self.periods = periods if periods is not None else [2, 3, 5, 7, 11]
    -        self.discriminators = nn.ModuleList()
    -        for period in self.periods:
    -            self.discriminators.append(DiscriminatorP(period))
    -
    -    def forward(self, y, y_hat):
    -        y_d_rs = []
    -        y_d_gs = []
    -        fmap_rs = []
    -        fmap_gs = []
    -        for i, d in enumerate(self.discriminators):
    -            y_d_r, fmap_r = d(y)
    -            y_d_g, fmap_g = d(y_hat)
    -            y_d_rs.append(y_d_r)
    -            fmap_rs.append(fmap_r)
    -            y_d_gs.append(y_d_g)
    -            fmap_gs.append(fmap_g)
    -
    -        return y_d_rs, y_d_gs, fmap_rs, fmap_gs
    -
    -
    -class DiscriminatorS(torch.nn.Module):
    -    def __init__(self, use_spectral_norm=False):
    -        super(DiscriminatorS, self).__init__()
-        norm_f = weight_norm if not use_spectral_norm else spectral_norm
    -        self.convs = nn.ModuleList([
    -            norm_f(Conv1d(1, 128, 15, 1, padding=7)),
    -            norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)),
    -            norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)),
    -            norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)),
    -            norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)),
    -            norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)),
    -            norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
    -        ])
    -        self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
    -
    -    def forward(self, x):
    -        fmap = []
    -        for l in self.convs:
    -            x = l(x)
    -            x = F.leaky_relu(x, LRELU_SLOPE)
    -            fmap.append(x)
    -        x = self.conv_post(x)
    -        fmap.append(x)
    -        x = torch.flatten(x, 1, -1)
    -
    -        return x, fmap
    -
    -
    -class MultiScaleDiscriminator(torch.nn.Module):
    -    def __init__(self):
    -        super(MultiScaleDiscriminator, self).__init__()
    -        self.discriminators = nn.ModuleList([
    -            DiscriminatorS(use_spectral_norm=True),
    -            DiscriminatorS(),
    -            DiscriminatorS(),
    -        ])
    -        self.meanpools = nn.ModuleList([
    -            AvgPool1d(4, 2, padding=2),
    -            AvgPool1d(4, 2, padding=2)
    -        ])
    -
    -    def forward(self, y, y_hat):
    -        y_d_rs = []
    -        y_d_gs = []
    -        fmap_rs = []
    -        fmap_gs = []
    -        for i, d in enumerate(self.discriminators):
    -            if i != 0:
    -                y = self.meanpools[i - 1](y)
    -                y_hat = self.meanpools[i - 1](y_hat)
    -            y_d_r, fmap_r = d(y)
    -            y_d_g, fmap_g = d(y_hat)
    -            y_d_rs.append(y_d_r)
    -            fmap_rs.append(fmap_r)
    -            y_d_gs.append(y_d_g)
    -            fmap_gs.append(fmap_g)
    -
    -        return y_d_rs, y_d_gs, fmap_rs, fmap_gs
    -
    -
    -def feature_loss(fmap_r, fmap_g):
    -    loss = 0
    -    for dr, dg in zip(fmap_r, fmap_g):
    -        for rl, gl in zip(dr, dg):
    -            loss += torch.mean(torch.abs(rl - gl))
    -
    -    return loss * 2
    -
    -
    -def discriminator_loss(disc_real_outputs, disc_generated_outputs):
    -    loss = 0
    -    r_losses = []
    -    g_losses = []
    -    for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
    -        r_loss = torch.mean((1 - dr) ** 2)
    -        g_loss = torch.mean(dg ** 2)
    -        loss += (r_loss + g_loss)
    -        r_losses.append(r_loss.item())
    -        g_losses.append(g_loss.item())
    -
    -    return loss, r_losses, g_losses
    -
    -
    -def generator_loss(disc_outputs):
    -    loss = 0
    -    gen_losses = []
    -    for dg in disc_outputs:
    -        l = torch.mean((1 - dg) ** 2)
    -        gen_losses.append(l)
    -        loss += l
    -
    -    return loss, gen_losses
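Taken together these are the standard LSGAN-style HiFi-GAN objectives. A hedged sketch of how they combine in one training step, with random tensors standing in for real and generated audio:

```python
import torch

y, y_hat = torch.randn(1, 1, 8192), torch.randn(1, 1, 8192)
mpd = MultiPeriodDiscriminator()

# Discriminator update: push real scores toward 1 and fake scores toward 0.
y_d_rs, y_d_gs, _, _ = mpd(y, y_hat.detach())
loss_disc, _, _ = discriminator_loss(y_d_rs, y_d_gs)

# Generator update: adversarial term plus feature matching on intermediate maps.
y_d_rs, y_d_gs, fmap_rs, fmap_gs = mpd(y, y_hat)
loss_gen, _ = generator_loss(y_d_gs)
loss_fm = feature_loss(fmap_rs, fmap_gs)
```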
    diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/vencoder/dphubert/utils/import_huggingface_wavlm.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/vencoder/dphubert/utils/import_huggingface_wavlm.py
    deleted file mode 100644
    index 1a2ea31c14df5450298ddc5e1f56c98769144828..0000000000000000000000000000000000000000
    --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/vencoder/dphubert/utils/import_huggingface_wavlm.py
    +++ /dev/null
    @@ -1,129 +0,0 @@
    -"""Import Hugging Face transformers's wav2vec2.0 pretrained weights to torchaudios's format.
    -
    -Originally from:
    -https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/utils/import_huggingface.py
    -
    -"""
    -
    -import logging
    -from typing import Any, Dict
    -
    -from torch.nn import Module
    -
    -from ..model import wav2vec2_model, Wav2Vec2Model, wavlm_model
    -
    -_LG = logging.getLogger(__name__)
    -
    -
    -def _get_config(cfg):
    -    config = {
    -        "extractor_mode": f"{cfg.feat_extract_norm}_norm",
    -        "extractor_conv_layer_config": list(zip(cfg.conv_dim, cfg.conv_kernel, cfg.conv_stride)),
    -        "extractor_conv_bias": cfg.conv_bias,
    -        "encoder_embed_dim": cfg.hidden_size,
    -        "encoder_projection_dropout": cfg.feat_proj_dropout,
    -        "encoder_pos_conv_kernel": cfg.num_conv_pos_embeddings,
    -        "encoder_pos_conv_groups": cfg.num_conv_pos_embedding_groups,
    -        "encoder_num_layers": cfg.num_hidden_layers,
    -        "encoder_num_heads": cfg.num_attention_heads,
    -        "encoder_attention_dropout": cfg.attention_dropout,
    -        "encoder_ff_interm_features": cfg.intermediate_size,
    -        "encoder_ff_interm_dropout": cfg.activation_dropout,
    -        "encoder_dropout": cfg.hidden_dropout,
    -        "encoder_layer_norm_first": cfg.do_stable_layer_norm,
    -        "encoder_layer_drop": cfg.layerdrop,
    -    }
    -    return config
    -
    -
    -def _get_config_wavlm(cfg):
    -    config = {
    -        "extractor_mode": f"{cfg.feat_extract_norm}_norm",
    -        "extractor_conv_layer_config": list(zip(cfg.conv_dim, cfg.conv_kernel, cfg.conv_stride)),
    -        "extractor_conv_bias": cfg.conv_bias,
    -        "encoder_embed_dim": cfg.hidden_size,
    -        "encoder_projection_dropout": cfg.feat_proj_dropout,
    -        "encoder_pos_conv_kernel": cfg.num_conv_pos_embeddings,
    -        "encoder_pos_conv_groups": cfg.num_conv_pos_embedding_groups,
    -        "encoder_num_layers": cfg.num_hidden_layers,
    -        "encoder_use_attention": [True] * cfg.num_hidden_layers,
    -        "encoder_use_feed_forward": [True] * cfg.num_hidden_layers,
    -        "encoder_total_num_heads": [cfg.num_attention_heads for _ in range(cfg.num_hidden_layers)],
    -        "encoder_remaining_heads": [list(range(cfg.num_attention_heads)) for _ in range(cfg.num_hidden_layers)],
    -        "encoder_num_buckets": cfg.num_buckets,
    -        "encoder_max_distance": cfg.max_bucket_distance,
    -        "encoder_attention_dropout": cfg.attention_dropout,
    -        "encoder_ff_interm_features": [cfg.intermediate_size for _ in range(cfg.num_hidden_layers)],
    -        "encoder_ff_interm_dropout": cfg.activation_dropout,
    -        "encoder_dropout": cfg.hidden_dropout,
    -        "encoder_layer_norm_first": cfg.do_stable_layer_norm,
    -        "encoder_layer_drop": cfg.layerdrop,
    -        "normalize_waveform": cfg.feat_extract_norm == "layer",
    -    }
    -    return config
    -
    -
    -def _build(config, original):
    -    is_for_ctc = original.__class__.__name__ in ["Wav2Vec2ForCTC", "WavLMForCTC"]
    -    if is_for_ctc:
    -        aux_num_out = original.config.vocab_size
    -        wav2vec2 = original.wav2vec2
    -    else:
    -        _LG.warning(
    -            "The model is not an instance of Wav2Vec2ForCTC or WavLMForCTC. " '"lm_head" module is not imported.'
    -        )
    -        aux_num_out = None
    -        wav2vec2 = original
    -    is_wavlm = original.__class__.__name__ in ["WavLMModel", "WavLMForCTC"]
    -    if is_wavlm:
    -        imported = wavlm_model(**config, aux_num_out=aux_num_out)
    -    else:
    -        imported = wav2vec2_model(**config, aux_num_out=aux_num_out)
    -    print(imported.feature_extractor.load_state_dict(wav2vec2.feature_extractor.state_dict(), strict=False))
    -    print(imported.encoder.feature_projection.load_state_dict(wav2vec2.feature_projection.state_dict(), strict=False))
    -    encoder_state_dict = wav2vec2.encoder.state_dict()
-    if is_wavlm:  # Rename parameters of linear transformations for compatibility with the HF model
    -        transform_wavlm_encoder_state(encoder_state_dict, config["encoder_num_layers"])
    -    print(imported.encoder.transformer.load_state_dict(encoder_state_dict, strict=False))
    -    if is_for_ctc:
    -        imported.aux.load_state_dict(original.lm_head.state_dict())
    -    return imported
    -
    -
-def transform_wavlm_encoder_state(state: Dict[str, Any], encoder_num_layers: int):
-    """Converts WavLM encoder state from HuggingFace format. In particular, concatenates linear projection weights and
-    biases to align with the structure of ``torch.nn.MultiheadAttention``.
-    """
-    # No-op in this copy: the docstring describes the transformation performed
-    # by the upstream torchaudio importer.
-    pass
    -    
    -
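For reference, the upstream torchaudio version of this helper concatenates the separate q/k/v projections into the fused parameters of ``torch.nn.MultiheadAttention``; a sketch of that transformation (the key names follow torchaudio's encoder layout and are an assumption here):

```python
import torch

def _upstream_transform_sketch(state, encoder_num_layers):
    for i in range(encoder_num_layers):
        # Pop the per-projection tensors written by the HF model ...
        q_w = state.pop(f"layers.{i}.attention.q_proj.weight")
        k_w = state.pop(f"layers.{i}.attention.k_proj.weight")
        v_w = state.pop(f"layers.{i}.attention.v_proj.weight")
        q_b = state.pop(f"layers.{i}.attention.q_proj.bias")
        k_b = state.pop(f"layers.{i}.attention.k_proj.bias")
        v_b = state.pop(f"layers.{i}.attention.v_proj.bias")
        # ... and re-insert them as the fused in_proj parameters.
        state[f"layers.{i}.attention.attention.in_proj_weight"] = torch.cat((q_w, k_w, v_w))
        state[f"layers.{i}.attention.attention.in_proj_bias"] = torch.cat((q_b, k_b, v_b))
```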
    -def import_huggingface_model(original: Module) -> Wav2Vec2Model:
    -    """Builds :class:`Wav2Vec2Model` from the corresponding model object of
-    `Transformers <https://huggingface.co/transformers>`_.
    -
    -    Args:
    -        original (torch.nn.Module): An instance of ``Wav2Vec2ForCTC`` from ``transformers``.
    -
    -    Returns:
    -        Wav2Vec2Model: Imported model.
    -
    -    Example
    -        >>> from torchaudio.models.wav2vec2.utils import import_huggingface_model
    -        >>>
    -        >>> original = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
    -        >>> model = import_huggingface_model(original)
    -        >>>
    -        >>> waveforms, _ = torchaudio.load("audio.wav")
    -        >>> logits, _ = model(waveforms)
    -    """
    -    _LG.info("Importing model.")
    -    _LG.info("Loading model configuration.")
    -    is_wavlm = original.__class__.__name__ in ["WavLMModel", "WavLMForCTC"]
    -    if is_wavlm:
    -        config = _get_config_wavlm(original.config)
    -    else:
    -        config = _get_config(original.config)
    -    _LG.debug("  - config: %s", config)
    -    _LG.info("Building model.")
    -    imported = _build(config, original)
    -    return imported
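Mirroring the wav2vec2 example in the docstring, the same entry point also covers WavLM; a hedged usage sketch (the checkpoint name is an example):

```python
from transformers import WavLMModel

original = WavLMModel.from_pretrained("microsoft/wavlm-base-plus")
imported = import_huggingface_model(original)  # routes through _get_config_wavlm
imported.eval()
```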
    diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/fpn_p5.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/fpn_p5.py
    deleted file mode 100644
    index e991f9c7be095e2a40e12c849b35e246cd0344bd..0000000000000000000000000000000000000000
    --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/fpn_p5.py
    +++ /dev/null
    @@ -1,78 +0,0 @@
    -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
    -import math
    -import fvcore.nn.weight_init as weight_init
    -import torch.nn.functional as F
    -from torch import nn
    -
    -from detectron2.layers import Conv2d, ShapeSpec, get_norm
    -
    -from detectron2.modeling.backbone import Backbone
    -from detectron2.modeling.backbone.fpn import FPN 
    -from detectron2.modeling.backbone.build import BACKBONE_REGISTRY
    -from detectron2.modeling.backbone.resnet import build_resnet_backbone
    -
    -
    -class LastLevelP6P7_P5(nn.Module):
    -    """
    -    This module is used in RetinaNet to generate extra layers, P6 and P7 from
    -    C5 feature.
    -    """
    -
    -    def __init__(self, in_channels, out_channels):
    -        super().__init__()
    -        self.num_levels = 2
    -        self.in_feature = "p5"
    -        self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1)
    -        self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1)
    -        for module in [self.p6, self.p7]:
    -            weight_init.c2_xavier_fill(module)
    -
    -    def forward(self, c5):
    -        p6 = self.p6(c5)
    -        p7 = self.p7(F.relu(p6))
    -        return [p6, p7]
    -
    -
    -@BACKBONE_REGISTRY.register()
    -def build_p67_resnet_fpn_backbone(cfg, input_shape: ShapeSpec):
    -    """
    -    Args:
    -        cfg: a detectron2 CfgNode
    -
    -    Returns:
    -        backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`.
    -    """
    -    bottom_up = build_resnet_backbone(cfg, input_shape)
    -    in_features = cfg.MODEL.FPN.IN_FEATURES
    -    out_channels = cfg.MODEL.FPN.OUT_CHANNELS
    -    backbone = FPN(
    -        bottom_up=bottom_up,
    -        in_features=in_features,
    -        out_channels=out_channels,
    -        norm=cfg.MODEL.FPN.NORM,
    -        top_block=LastLevelP6P7_P5(out_channels, out_channels),
    -        fuse_type=cfg.MODEL.FPN.FUSE_TYPE,
    -    )
    -    return backbone
    -
    -@BACKBONE_REGISTRY.register()
    -def build_p35_resnet_fpn_backbone(cfg, input_shape: ShapeSpec):
    -    """
    -    Args:
    -        cfg: a detectron2 CfgNode
    -
    -    Returns:
    -        backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`.
    -    """
    -    bottom_up = build_resnet_backbone(cfg, input_shape)
    -    in_features = cfg.MODEL.FPN.IN_FEATURES
    -    out_channels = cfg.MODEL.FPN.OUT_CHANNELS
    -    backbone = FPN(
    -        bottom_up=bottom_up,
    -        in_features=in_features,
    -        out_channels=out_channels,
    -        norm=cfg.MODEL.FPN.NORM,
    -        top_block=None,
    -        fuse_type=cfg.MODEL.FPN.FUSE_TYPE,
    -    )
    -    return backbone
    \ No newline at end of file
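Once registered, either variant is selected purely through the config. A minimal sketch (a real config would also set the ResNet options; the feature names are the usual detectron2 defaults and an assumption here):

```python
from detectron2.config import get_cfg
from detectron2.modeling import build_backbone

cfg = get_cfg()
cfg.MODEL.BACKBONE.NAME = "build_p67_resnet_fpn_backbone"
cfg.MODEL.FPN.IN_FEATURES = ["res3", "res4", "res5"]
backbone = build_backbone(cfg)  # FPN with P6/P7 generated from P5
```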
    diff --git a/spaces/yo2266911/uma_voice/train_val_divide.py b/spaces/yo2266911/uma_voice/train_val_divide.py
    deleted file mode 100644
    index 183abb58d6054a98dd92fcffa107ea0571502130..0000000000000000000000000000000000000000
    --- a/spaces/yo2266911/uma_voice/train_val_divide.py
    +++ /dev/null
    @@ -1,18 +0,0 @@
    -import os
    -import numpy as np
    -filename = 'E:/uma_voice/output.txt'
    -split ='|'
    -with open(filename, encoding='utf-8') as f:
    -    filepaths_and_text = [line.strip().split(split) for line in f]
    -
    -train_filename = filename.split('.')[0] + '_train' + '.txt'
    -val_filename = filename.split('.')[0] + '_val' + '.txt'
    -
    -train_split_ratio = 0.99
    -train_f = open(train_filename, 'w', encoding='utf-8')
    -val_f = open(val_filename, 'w', encoding='utf-8')
    -for i in range(len(filepaths_and_text)):
    -    if np.random.rand() < train_split_ratio:
    -        train_f.writelines('|'.join(filepaths_and_text[i]) + '\n')
    -    else:
    -        val_f.writelines('|'.join(filepaths_and_text[i]) + '\n')
    \ No newline at end of file
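The split above is random on every run; a seeded variant (a sketch, assuming the same pipe-separated file format) makes it reproducible:

```python
import numpy as np

rng = np.random.default_rng(1234)  # fixed seed -> identical split every run
with open(filename, encoding='utf-8') as f:
    lines = [line.strip() for line in f]
is_train = rng.random(len(lines)) < 0.99
train_lines = [l for l, keep in zip(lines, is_train) if keep]
val_lines = [l for l, keep in zip(lines, is_train) if not keep]
```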
    diff --git a/spaces/ysharma/ChatGPT-Plugins-UI-with-Langchain/app.py b/spaces/ysharma/ChatGPT-Plugins-UI-with-Langchain/app.py
    deleted file mode 100644
    index 9de2b3f3129e2d9b2c00a73b410fcfe9ec90fb2c..0000000000000000000000000000000000000000
    --- a/spaces/ysharma/ChatGPT-Plugins-UI-with-Langchain/app.py
    +++ /dev/null
    @@ -1,439 +0,0 @@
    -import os
    -import openai
    -import gradio as gr
    -import json
    -import requests
    -import shutil
    -import random
    -import time
    -
    -from gradio_client import Client
    -from newsapi import NewsApiClient
    -
    -from PIL import Image
    -import matplotlib.pyplot as plt
    -
    -# import all defined functions, their definitions and a dictionary
    -from gpt_function_definitions import generate_image, generate_caption, get_news, bored_api
    -
    -#OpenaI Chat Completions endpoint
    -API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream"
    -
    -# Import things that are needed generically from langchain
    -from langchain import LLMMathChain, SerpAPIWrapper
    -from langchain.agents import AgentType, initialize_agent, load_tools
    -from langchain.chat_models import ChatOpenAI
    -from langchain.tools import BaseTool, StructuredTool, Tool, tool
    -from langchain.tools import MoveFileTool, format_tool_to_openai_function
    -from langchain.schema import (
    -    AIMessage,
    -    HumanMessage,
    -    SystemMessage
    -)
    -from langchain.utilities import WikipediaAPIWrapper
    -from langchain.tools import AIPluginTool
    -
    -# Get the value of the openai_api_key from environment variable
    -openai.api_key = os.getenv("OPENAI_API_KEY")
    -search = SerpAPIWrapper()
    -
    -
    -# LANGCHAIN
    -
    -# Load the tool configs that are needed.
-# Langchain's 'Tool' dataclass wraps functions that accept a single string input and return a string output.
    -tools = [
    -    #image generation 
    -    Tool.from_function(
    -        func=generate_image,
    -        name="generate_image",
    -        description="generate an image based on the prompt provided"
    -        # coroutine= ... <- you can specify an async method if desired as well
    -    ),
    -
    -    # Describe an image
    -    Tool.from_function(
    -        func=generate_caption,
    -        name="generate_caption",
    -        description="generate caption for the image present at the filepath provided"
    -        # coroutine= ... <- you can specify an async method if desired as well
    -    ),
    -
-    # Get latest top news
    -    Tool.from_function(
    -        func=get_news,
    -        name="get_news",
-        description="get top three English news items for a given query, sorted by relevancy"
    -        # coroutine= ... <- you can specify an async method if desired as well
    -    ),
    -
    -    # Search the web using Google search 
    -    Tool.from_function(
    -        func=search.run,
    -        name="Search",
    -        description="useful for when you need to answer questions about current events"
    -        # coroutine= ... <- you can specify an async method if desired as well
    -    ),   
    -
    -    #The Bored API
    -    Tool.from_function(
    -    func=bored_api,
    -    name="bored_api",
    -    description="Get a random activity to do based on the activity type"
    -    # coroutine= ... <- you can specify an async method if desired as well
    -    ),  
    -    ]
    -
    -
-# Handling Plugin conversations
    -def run_conversation(user_input, plugins, tools, chat):
    -    
    -    print(f"Plugins are - {plugins}")
    -    print(f"Total available PLUGINS/Tools are - {tools}")
    -    
    -    # Load the tool configs that are needed.
    -    tools = [val for val, flag in zip(tools, plugins) if flag]
    -    print(f"PLUGINS/Tools enabled in this run are - {tools}")
    -
    -    try:
    -        # defining agents using tools and openai functions
    -        agent = initialize_agent(tools, chat, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)
    -
    -        # calling the agent
    -        function_response = agent.run(user_input)
    -        print(f"function_response is - {function_response}")
    -            
    -        image_file_extns = ['.png', '.jpg', '.gif', '.tiff', '.tif', '.svg', '.bmp']
    -        literal_terms = ['caption', 'captions']
    -        if any(extn in function_response for extn in image_file_extns) and not any(term in function_response for term in literal_terms) :
    -            image_file = function_response.replace('sandbox:',"").split('(')[-1].split(')')[0]
    -            print(f"image_file is -{image_file}")
    -            return function_response, image_file
    -
    -        return function_response, None
    -    
    -    except Exception as e:
-        print(f"An error occurred while calling agents using 'Function Calling': {e}")
    -        return None, None
    -
    -
    -# Setting up a system message for our Chatbot
-system = SystemMessage(content = "You are a helpful AI assistant")  # alternative: "... that translates English to Pirate English."
    -
    -# driver
    -def predict(user_input, temperature, stable_diff, image_cap, top_news, google_search, bored, file_output, chatbot):
    -
    -    print(f"chatbot - {chatbot}")
    -    print(f"user_input - {user_input}")
    -
    -    # file handling
    -    print(f"Logging: files in the file directory is -{file_output}")
    -    if file_output is not None:
    -      files_avail =  [f.name for f in file_output ]
    -      print(f"files_available are -{files_avail} ")
    -    else:
    -      print("No files available at the moment!")
    -
    -
    -    chat = ChatOpenAI(
    -    #openai_api_key=openai_api_key,
    -    temperature=temperature, #1.0
    -    streaming=True,
    -    model='gpt-3.5-turbo-0613')
    -    messages = [system]
-    # image, caption, news, search
    -    plugins = [stable_diff, image_cap, top_news, google_search, bored] 
-    function_call_decision = any(plugins)
    -
    -    if len(chatbot) != 0:
    -        for conv in chatbot:
    -            human = HumanMessage(content=conv[0])
    -            ai = AIMessage(content=conv[1])
    -            messages.append(human)
    -            messages.append(ai)
    -        messages.append(HumanMessage(content=user_input))
    -        print(f"messages list is - {messages}")
    -
    -        if function_call_decision:
-            # getting openAI function agent response
    -            function_response, image_file = run_conversation(user_input, plugins, tools, chat)
    -            if function_response is not None:
    -                gpt_response = AIMessage(content= function_response)
    -                bot_message = gpt_response.content
    -                print(f"bot_message - {bot_message}")
    -                chatbot.append((user_input, bot_message))
    -                return "", chatbot, image_file
    -    else: # for first user message
    -        messages.append(HumanMessage(content=user_input))
    -        print(f"messages list is - {messages}")
    -
    -        if function_call_decision:
-            # getting openAI function agent response
    -            function_response, image_file = run_conversation(user_input, plugins, tools, chat)
    -            if function_response is not None:
    -                gpt_response = AIMessage(content= function_response)
    -                bot_message = gpt_response.content
    -                print(f"bot_message - {bot_message}")
    -                chatbot.append((user_input, bot_message))
    -                return "", chatbot, image_file
    -
    -    # getting gpt3.5's response
    -    gpt_response = chat(messages)
    -    print(f"gpt_response - {gpt_response}")
    -    bot_message = gpt_response.content
    -    print(f"bot_message - {bot_message}")
    -
    -    chatbot.append((user_input, bot_message))
    -
    -    return "", chatbot, None #"", chatbot
    -
    -
    -# Helper functions for file handling
    -def add_image(file_to_save, file_output):
    -    print(f"image file_to_save is - {file_to_save}")
    -    print(f"files available in directory are -{file_output}")
    -
    -    if file_output is not None:
    -      file_output = [f.name for f in file_output]
    -    if file_to_save is None:
    -      return file_output
    -    file_output = [file_to_save] if file_output is None else file_output + [file_to_save]
    -    print(f"Logging: Updated file directory - {file_output}")
    -    return file_output #gr.update(value="dog1.jpg")
    -
    -def add_audio(file_to_save, file_output):
    -    print(f"audio file_to_save is - {file_to_save}")
    -    print(f"files available in directory are -{file_output}")
    -
    -    if file_output is not None:
    -      file_output = [f.name for f in file_output]
    -    if file_to_save is None:
    -      return file_output
    -    file_output = [file_to_save] if file_output is None else file_output + [file_to_save]
    -    print(f"Logging: Updated file directory - {file_output}")
    -    return file_output #gr.update(value="dog1.jpg")
    -
    -def upload_file(file, file_output):
    -    print(f"Logging: all files available - {file_output}")
    -    print(f"Logging: file uploaded is - {file}")
    -
    -    img_orig_name = file.name.split('/')[-1]
    -    shutil.copy2(file.name, img_orig_name)
    -
    -    file_output = [file] if file_output is None else file_output + [file]
    -    file_output = [f.name for f in file_output]
    -    print(f"Logging: Updated file list is - {file_output}")
    -    return file_output
    -
    -
    -# What is happening with function calling, langchain, and Gradio
    -messaging = """
-How does a Language Model like GPT make discerning choices regarding which plugins to run? Well, this is done by using the Language Model as a reasoning agent and allowing it to assess and process information intelligently.
-
-- Langchain & OpenAI Function Calling: AI models like gpt-3.5-turbo-0613 and gpt-4-0613 are designed to identify when and how to activate functions through API calls. These function-specific APIs generate a JSON object with necessary arguments, aiming to surpass the efficacy of traditional chat or text completion APIs.
-
-- Gradio Chatbots: Gradio provides a super easy way to build a Chatbot UI; refer to our Docs. Using Langchain's OpenAI Functions Agent you can create chatbots designed to respond to queries by communicating with external APIs. The API responses are fed back to the Language Model for processing, and a new response is generated for the user. The versatility of using Gradio to build LLM applications is immense. For example, in this Gradio app you can have an array of Plugins based on functions which are tailored for various purposes (image, video, audio, text generation, utilities, etc.). This enhances the breadth and depth of interactions with your Language Model.
-"""
-
-
-# How to use this Demo effectively
-howto = """
-Welcome to the ChatGPT-Plugins WebUI, built using Gradio and Langchain! This interactive Gradio chatbot uses the gpt-3.5-turbo-0613 model from OpenAI and boasts the ability to USE, as well as BUILD, Custom Plugins to enhance your chatbot experience.
-
-Here's a quick guide to get you started:
-
-To get Started: Simply type your messages in the textbox to chat with ChatGPT and press Enter!
-
-How to use Plugins: Plugins are provided as checkboxes. If you want to try out a plugin, just select that checkbox.
-
--- DIFFUSERS PLUGIN:
-What it does: Generates images based on your text prompt.
-How to use: Type a prompt for the image you want to generate, and the Diffusers plugin will create it for you.
-Example input: "Generate an image of a sunset over the mountains."
-
--- IMAGE CAPTION PLUGIN:
-What it does: Describes images that you upload.
-How to use: Upload an image using the 'Upload' button, then ask ChatGPT to describe the image; make sure to mention the image name to it.
-Example input: "Describe the image cat2.jpg."
-
--- NEWS PLUGIN:
-What it does: Provides the top 3 news articles based on your search query.
-How to use: Just type in a search query and the NewsAPI plugin will present the top 3 news items sorted by relevance.
-Example input: "Show me the top news about space exploration."
-
--- SEARCH PLUGIN:
-What it does: Searches the internet for your queries, so you no longer need to limit yourself to a knowledge cut-off of 2021.
-How to use: Type a user message in the chatbot. The Google Search plugin will search the internet and present a concise result for you like magic!
-Example input: "Who is the current girlfriend of Leonardo Di Caprio."
-
--- BORED API PLUGIN:
-What it does: Suggests activities of different types.
-How to use: Mention that you are bored and want some activities to do, or simply ask it to generate an activity.
-Example input: "Can you suggest me something to do, I am totally bored."
-
-Access Generated Content: Find all generated images in the Gradio Files component located below the input textbox.
-
-Have Fun!: Explore and enjoy the versatile features of this ChatGPT-Plugin WebUI.
-Now you're all set to make the most of this ChatGPT demo. Happy chatting!
-"""
-
-
-# Guide to add new Plugins
-add_plugin_steps = """
-## Steps to add new Plugins to your Langchain-Gradio ChatGPT PLUGIN WebUI
-
-1. **Acquire the API Endpoint**
-   - You need an API which you can query, and for this example let's consider using The Bored API.
-   - **API Endpoint**: [https://www.boredapi.com/api/activity/?type=](https://www.boredapi.com/api/activity/?type=)
-
-2. **Create a Function to Query the API**
-   - You can access any Gradio demo as an API via the Gradio Python Client.
-   ```python
-   def bored_api(activity_type) -> str:
-       '''
-       Get a random activity to do based on the activity type.
-       '''
-       activity_type_list = ["education", "recreational", "social", "diy", "charity", "cooking", "relaxation", "music", "busywork"]
-       activity_type = activity_type.lower()
-       if activity_type not in activity_type_list:
-           activity_type = random.choice(activity_type_list)
-
-       api_url = "https://www.boredapi.com/api/activity/?type=" + activity_type
-       response = requests.get(api_url)
-       return response.json()['activity']
-   ```
-
-3. **Add Function definitions**
-   - Add the function definition to the `gpt_function_definitions.py` file (simply copy and paste). Don't forget to add the function description in the docstring.
-   - Add the required imports:
-   ```python
-   from gpt_function_definitions import generate_image, generate_caption, get_news, bored_api
-   ```
-
-4. **Add the function to the Tools list**
-   - Add a description - describe what your function does. Models like GPT3.5/4 support Function Calling. The OpenAI Functions Agent from Langchain is designed to work with these functions and models.
-   - Name - add a name for your function; don't include spaces.
-   ```python
-   tools = [
-       # image generation
-       ...
-
-       # Describe an image
-       ...
-
-       # Get latest top news
-       ...
-
-       # Bored API
-       Tool.from_function(
-           func=bored_api,
-           name="bored_api",
-           description="Get a random activity to do based on the activity type"
-           # coroutine= ... <- you can specify an async method if desired as well
-       ),
-   ]
-   ```
-
-5. **Update the Chatbot Layout**
-   - Go to the Blocks Chatbot layout and add a new checkbox for your plugin as:
-   ```python
-   bored = gr.Checkbox(label="🙄bored", value=False)
-   ```
-   - Add the new checkbox component (example - bored) to the submit and click events of your chatbot, and to the predict function accordingly.
-   - And also to the `plugins` list in `predict`:
-   ```python
-   plugins = [stable_diff, image_cap, top_news, search, bored]
-   ```
-
-**That's it! You have added your own brand new ChatGPT Plugin for yourself. Go PLAY!!**
-"""
-
-
-second_headline = """
-🔥This Plugins WebUI is built using Gradio,
-Langchain,
-and the ChatGPT Function Calling API.
-You don't need an OPENAI API key to run this demo, as Huggingface has provided one for community use🙌
-"""
-
-
-# Gradio block
-with gr.Blocks(css = """#col_container { margin-left: auto; margin-right: auto;}
-                #chatbot {height: 520px; overflow: auto;}""") as demo:
-
-    gr.HTML('🚀ChatGPT-Plugins🧩 WebUI using Langchain & Gradio')
-    gr.HTML(second_headline)
-    gr.HTML('''Duplicate Space: Duplicate the Space and run securely with your OpenAI API Key''')
-
-    with gr.Accordion("Follow these Steps to use the Gradio WebUI OR simply Click any of the given Examples! ", open=False):
-        gr.HTML(howto)
-    with gr.Accordion("What is happening?", open=False):
-        gr.HTML(messaging)
-
-    gr.HTML("""Bonus! Steps to build and add your own ChatGPT Plugins to the WebUI using Langchain : Add new Plugins to ChatGPT WebUI in 5 mins!!""")
-
-    with gr.Row():
-        with gr.Column():
-            openai_api_key_tb = gr.Textbox(label="Enter your OpenAI API key here",
-                                           value="🎁ChatGPT Keys are provided by HuggingFace for Free🥳 You don't need to enter yours!😉🙌",
-                                           container=False)
-            #plugin_message = gr.HTML()
-
-            with gr.Accordion("Plugins🛠️ Available", open=True):
-                with gr.Row():
-                    stable_diff = gr.Checkbox(label="🖼️Diffusers", value=False)
-                    image_cap = gr.Checkbox(label="🎨Describe Image", value=False)
-                    top_news = gr.Checkbox(label="📰News", value=False)
-                    google_search = gr.Checkbox(label="🌐Google Search", value=False)
-                    bored = gr.Checkbox(label="🙄Bored API", value=False)
-                    #music_gen = gr.Checkbox(label="🎵MusicGen", value=False)
-                    #texttospeech = gr.Checkbox(label="📝🗣️Text-To-Speech", value=False)
-                    #gr.CheckboxGroup(["🎵MusicGen", "🖼️Diffusers", "🎨Describe Image", "📰News", "📝🗣️Text-To-Speech"], label="Plug-ins", info="enhance your ChatGPT experience using Plugins : Powered by Gradio!")
-
-        with gr.Column():
-            gen_image = gr.Image(label="generated image", type="filepath", interactive=False)
-
-    with gr.Row():
-        chatbot = gr.Chatbot(elem_id='chatbot', show_share_button=True)
-
-    with gr.Row():
-        with gr.Column(scale=0.70):
-            inputs = gr.Textbox(placeholder="Hi there!", label="Type an input and press Enter")
-        with gr.Column(scale=0.15, min_width=0):
-            b1 = gr.Button("🏃Run")
-        with gr.Column(scale=0.15, min_width=0):
-            btn = gr.UploadButton("📁Upload", file_types=["image", "audio"], file_count="single")
-
-    with gr.Row():
-        with gr.Accordion("Parameters", open=False):
-            top_p = gr.Slider(minimum=0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",)
-            temperature = gr.Slider(minimum=0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",)
-        with gr.Accordion("Available Files", open=False):
-            file_output = gr.File(file_count="multiple", file_types=["image", "audio"], label="Files Available")
-
-    inputs.submit(predict,
-                  [inputs, temperature, stable_diff, image_cap, top_news, google_search, bored, file_output, chatbot],
-                  [inputs, chatbot, gen_image])
-    b1.click(predict,
-             [inputs, temperature, stable_diff, image_cap, top_news, google_search, bored, file_output, chatbot],
-             [inputs, chatbot, gen_image])
-
-    btn.upload(upload_file, [btn, file_output], file_output)
-    gen_image.change(add_image, [gen_image, file_output], file_output)
-    #gen_audio.change(add_audio, [gen_audio, file_output], file_output)
-
-    gr.HTML("")
-    gr.Examples(label="To get started quickly - Click on any example below and press Enter/Run:",
-                examples=[["What is the latest top news on Inflation in Europe", 1.0, False, False, True, False, False, None],
-                          ["What is Europe's stand on the ongoing generative AI revolution?", 1.0, False, False, False, True, False, None],
-                          ["Write a very short poem on 'sparkling water'", 1.0, False, False, False, False, False, None],
-                          ["What is the weather in LA and SF?", 1.0, False, False, False, True, False, None],
-                          ["generate an image of a puppy", 1.0, True, False, False, False, False, None],
-                          ["generate a caption for the image cat2.jpg", 1.0, False, True, False, False, False, "cat2.jpg"],
-                          ["Who is the present CEO of Twitter? Are there any new competitors to Twitter?", 1.0, True, True, True, True, False, None],
-                          ["Can you suggest me something to do, I am totally bored", 1.0, False, False, False, False, True, None]],
-                inputs=[inputs, temperature, stable_diff, image_cap, top_news, google_search, bored, file_output]
-                )
-
-    with gr.Accordion("Use Langchain to build and add your own Plugins to this ChatGPT WebUI", open=False):
-        gr.Markdown(add_plugin_steps)
-
-demo.queue().launch(debug=True)  # height = '1000'
diff --git a/spaces/ysheng/SSN-Soft-Shadow-Network-for-Image-Composition/models/Sparse_PH.py b/spaces/ysheng/SSN-Soft-Shadow-Network-for-Image-Composition/models/Sparse_PH.py
deleted file mode 100644
index 2dfe0ee6253de9574710a9982970c855c0d719cc..0000000000000000000000000000000000000000
--- a/spaces/ysheng/SSN-Soft-Shadow-Network-for-Image-Composition/models/Sparse_PH.py
+++ /dev/null
@@ -1,185 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torchvision import utils
-from torchvision.transforms import Resize
-from collections import OrderedDict
-import numpy as np
-import matplotlib.cm as cm
-import matplotlib as mpl
-from torchvision.transforms import InterpolationMode
-
-
-from .abs_model import abs_model
-from .blocks import *
-from .SSN import SSN
-from .SSN_v1 import SSN_v1
-from .Loss.Loss import norm_loss, grad_loss
-from .Attention_Unet import Attention_Unet
-
-class Sparse_PH(abs_model):
-    def __init__(self, opt):
-        mid_act = opt['model']['mid_act']
-        out_act = opt['model']['out_act']
-        in_channels = opt['model']['in_channels']
-        out_channels = opt['model']['out_channels']
-        resnet = opt['model']['resnet']
-        backbone = opt['model']['backbone']
-
-        self.ncols = opt['hyper_params']['n_cols']
-        self.focal = opt['model']['focal']
-        self.clip = opt['model']['clip']
-
-        self.norm_loss_ = opt['model']['norm_loss']
-        self.grad_loss_ = opt['model']['grad_loss']
-        self.ggrad_loss_ = opt['model']['ggrad_loss']
-        self.lap_loss = opt['model']['lap_loss']
-
-        self.clip_range = opt['dataset']['linear_scale'] + opt['dataset']['linear_offset']
-
-        if backbone == 'Default':
-            self.model = SSN_v1(in_channels=in_channels,
-                                out_channels=out_channels,
-                                mid_act=mid_act,
-                                out_act=out_act,
-                                resnet=resnet)
-        elif backbone == 'ATTN':
-            self.model = Attention_Unet(in_channels, out_channels, mid_act=mid_act, out_act=out_act)
-
-        self.optimizer = get_optimizer(opt, self.model)
-        self.visualization = {}
-
-        self.norm_loss = norm_loss()
-        self.grad_loss = grad_loss()
-
-    def setup_input(self, x):
-        return x
-
-    def forward(self, x):
-        return self.model(x)
-
-    def compute_loss(self, y, pred):
-        b = y.shape[0]
-
-        # total_loss = avg_norm_loss(y, pred)
-        nloss = self.norm_loss.loss(y, pred) * self.norm_loss_
-        gloss = self.grad_loss.loss(pred) * self.grad_loss_
-        ggloss = self.grad_loss.gloss(y, pred) * self.ggrad_loss_
-        laploss = self.grad_loss.laploss(pred) * self.lap_loss
-
-        total_loss = nloss + gloss + ggloss + laploss
-
-        self.loss_log = {
-            'norm_loss': nloss.item(),
-            'grad_loss': gloss.item(),
-            'grad_l1_loss': ggloss.item(),
-            'lap_loss': laploss.item(),
-        }
-
-        if self.focal:
-            total_loss = torch.pow(total_loss, 3)
-
-        return total_loss
-
-    def supervise(self, input_x, y, is_training: bool) -> float:
-        optimizer = self.optimizer
-        model = self.model
-
-        x = input_x['x']
-
-        optimizer.zero_grad()
-        pred = self.forward(x)
-        if self.clip:
-            pred = torch.clip(pred, 0.0, self.clip_range)
-
-        loss = self.compute_loss(y, pred)
-        if is_training:
-            loss.backward()
-            optimizer.step()
-
-        xc = x.shape[1]
-        for i in range(xc):
-            self.visualization['x{}'.format(i)] = x[:, i:i+1].detach()
-
-        self.visualization['y_fore'] = y[:, 0:1].detach()
-        self.visualization['y_back'] = y[:, 1:2].detach()
-        self.visualization['pred_fore'] = pred[:, 0:1].detach()
-        self.visualization['pred_back'] = pred[:, 1:2].detach()
-
-        return loss.item()
-
-    def get_visualize(self) -> OrderedDict:
-        """ Convert to visualization numpy array
-        """
-        nrows = self.ncols
-        visualizations = self.visualization
-        ret_vis = OrderedDict()
-
-        for k, v in visualizations.items():
-            batch = v.shape[0]
-            n = min(nrows, batch)
-
-            plot_v = v[:n]
-            ret_vis[k] = np.clip(utils.make_grid(plot_v.cpu(), nrow=nrows).numpy().transpose(1,2,0), 0.0, 1.0)
-            ret_vis[k] = self.plasma(ret_vis[k])
-
-        return ret_vis
-
-    def get_logs(self):
-        return self.loss_log
-
-    def inference(self, x):
-        x, device = x['x'], x['device']
-        x = torch.from_numpy(x.transpose((2,0,1))).unsqueeze(dim=0).float().to(device)
-        pred = self.forward(x)
-
-        pred = pred[0].detach().cpu().numpy().transpose((1,2,0))
-
-        return pred
-
-    def batch_inference(self, x):
-        x = x['x']
-        pred = self.forward(x)
-        return pred
-
-    """ Getter & Setter
-    """
-    def get_models(self) -> dict:
-        return {'model': self.model}
-
-    def get_optimizers(self) -> dict:
-        return {'optimizer': self.optimizer}
-
-    def set_models(self, models: dict):
-        # input test
-        if 'model' not in models.keys():
-            raise ValueError('{} not in self.model'.format('model'))
-
-        self.model = models['model']
-
-    def set_optimizers(self, optimizer: dict):
-        self.optimizer = optimizer['optimizer']
-
-    ####################
-    # Personal Methods #
-    ####################
-    def plasma(self, x):
-        norm = mpl.colors.Normalize(vmin=0.0, vmax=1)
-        mapper = cm.ScalarMappable(norm=norm, cmap='plasma')
-        bimg = mapper.to_rgba(x[:,:,0])[:,:,:3]
-
-        return bimg
diff --git a/spaces/yuan1615/EmpathyTTS/text/pinyin.py b/spaces/yuan1615/EmpathyTTS/text/pinyin.py
deleted file mode 100644
index 3e8663ce19b7813818f997ae5c18f2d972dd26d6..0000000000000000000000000000000000000000
--- a/spaces/yuan1615/EmpathyTTS/text/pinyin.py
+++ /dev/null
@@ -1,241 +0,0 @@
-initials = [
-    "b", "c", "ch", "d", "f", "g", "h", "j", "k", "l", "m", "n",
-    "p", "q", "r", "s", "sh", "t", "w", "x", "y", "z", "zh",
-]
-finals = [
-    "a1", "a2", "a3", "a4", "a5",
-    "ai1", "ai2", "ai3", "ai4", "ai5",
-    "an1", "an2", "an3", "an4", "an5",
-    "ang1", "ang2", "ang3", "ang4", "ang5",
-    "ao1", "ao2", "ao3", "ao4", "ao5",
-    "e1", "e2", "e3", "e4", "e5",
-    "ei1", "ei2", "ei3", "ei4", "ei5",
-    "en1", "en2", "en3", "en4", "en5",
-    "eng1", "eng2", "eng3", "eng4", "eng5",
-    "er1", "er2", "er3", "er4", "er5",
-    "i1", "i2", "i3", "i4", "i5",
-    "ia1", "ia2", "ia3", "ia4", "ia5",
-    "ian1", "ian2", "ian3", "ian4", "ian5",
-    "iang1", "iang2", "iang3", "iang4", "iang5",
-    "iao1", "iao2", "iao3", "iao4", "iao5",
-    "ie1", "ie2", "ie3", "ie4", "ie5",
-    "ii1", "ii2", "ii3", "ii4", "ii5",
-    "iii1", "iii2", "iii3", "iii4", "iii5",
-    "in1", "in2", "in3", "in4", "in5",
-    "ing1", "ing2", "ing3", "ing4", "ing5",
-    "iong1", "iong2", "iong3", "iong4", "iong5",
-    "iou1", "iou2", "iou3", "iou4", "iou5",
-    "o1", "o2", "o3", "o4", "o5",
-    "ong1", "ong2", "ong3", "ong4", "ong5",
-    "ou1", "ou2", "ou3", "ou4", "ou5",
-    "u1", "u2", "u3", "u4", "u5",
-    "ua1", "ua2", "ua3", "ua4", "ua5",
-    "uai1", "uai2", "uai3", "uai4", "uai5",
-    "uan1", "uan2", "uan3", "uan4", "uan5",
-    "uang1", "uang2", "uang3", "uang4", "uang5",
-    "uei1", "uei2", "uei3", "uei4", "uei5",
-    "uen1", "uen2", "uen3", "uen4", "uen5",
-    "uo1", "uo2", "uo3", "uo4", "uo5",
-    "v1", "v2", "v3", "v4", "v5",
-    "van1", "van2", "van3", "van4", "van5",
-    "ve1", "ve2", "ve3", "ve4", "ve5",
-    "vn1", "vn2", "vn3", "vn4", "vn5",
-]
-alphabet = [
-    "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
-    "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
-]
-valid_symbols = initials + finals + ["rr"]
diff --git a/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/hubert/__init__.py b/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/hubert/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/llama_lora/utils/__init__.py b/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/llama_lora/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/zhang-wei-jian/docker/node_modules/has-symbols/test/tests.js b/spaces/zhang-wei-jian/docker/node_modules/has-symbols/test/tests.js
deleted file mode 100644
index 89edd1291ca79ff85ca71ced9a65e4a2b4443fd9..0000000000000000000000000000000000000000
--- a/spaces/zhang-wei-jian/docker/node_modules/has-symbols/test/tests.js
+++ /dev/null
@@ -1,56 +0,0 @@
-'use strict';
-
-// eslint-disable-next-line consistent-return
-module.exports = function runSymbolTests(t) {
-    t.equal(typeof Symbol, 'function', 'global Symbol is a function');
-
-    if (typeof Symbol !== 'function') { return false; }
-
-    t.notEqual(Symbol(), Symbol(), 'two symbols are not equal');
-
-    /*
-    t.equal(
-        Symbol.prototype.toString.call(Symbol('foo')),
-        Symbol.prototype.toString.call(Symbol('foo')),
-        'two symbols with the same description stringify the same'
-    );
-    */
-
-    /*
-    var foo = Symbol('foo');
-
-    t.notEqual(
-        String(foo),
-        String(Symbol('bar')),
-        'two symbols with different descriptions do not stringify the same'
-    );
-    */
-
-    t.equal(typeof Symbol.prototype.toString, 'function', 'Symbol#toString is a function');
-    // t.equal(String(foo), Symbol.prototype.toString.call(foo), 'Symbol#toString equals String of the same symbol');
-
-    t.equal(typeof Object.getOwnPropertySymbols, 'function', 'Object.getOwnPropertySymbols is a function');
-
-    var obj = {};
-    var sym = Symbol('test');
-    var symObj = Object(sym);
-    t.notEqual(typeof sym, 'string', 'Symbol is not a string');
-    t.equal(Object.prototype.toString.call(sym), '[object Symbol]', 'symbol primitive Object#toStrings properly');
-    t.equal(Object.prototype.toString.call(symObj), '[object Symbol]', 'symbol object Object#toStrings properly');
-
-    var symVal = 42;
-    obj[sym] = symVal;
-    // eslint-disable-next-line no-restricted-syntax
-    for (sym in obj) { t.fail('symbol property key was found in for..in of object'); }
-
-    t.deepEqual(Object.keys(obj), [], 'no enumerable own keys on symbol-valued object');
-    t.deepEqual(Object.getOwnPropertyNames(obj), [], 'no own names on symbol-valued object');
-    t.deepEqual(Object.getOwnPropertySymbols(obj), [sym], 'one own symbol on symbol-valued object');
-    t.equal(Object.prototype.propertyIsEnumerable.call(obj, sym), true, 'symbol is enumerable');
-    t.deepEqual(Object.getOwnPropertyDescriptor(obj, sym), {
-        configurable: true,
-        enumerable: true,
-        value: 42,
-        writable: true
-    }, 'property descriptor is correct');
-};
diff --git a/spaces/zhanghaohui/szu-gpt-academic/request_llm/bridge_chatglm.py b/spaces/zhanghaohui/szu-gpt-academic/request_llm/bridge_chatglm.py
deleted file mode 100644
index deaacd276cc53937bd68fd6e579e737375ef3582..0000000000000000000000000000000000000000
--- a/spaces/zhanghaohui/szu-gpt-academic/request_llm/bridge_chatglm.py
+++ /dev/null
@@ -1,161 +0,0 @@
-
-from transformers import AutoModel, AutoTokenizer
-import time
-import threading
-import importlib
-from toolbox import update_ui, get_conf
-from multiprocessing import Process, Pipe
-
-load_message = "ChatGLM has not been loaded yet; loading takes a while. Note that, depending on `config.py`, ChatGLM consumes a lot of memory (CPU) or VRAM (GPU), which may freeze low-end machines ……"
-
-#################################################################################
-class GetGLMHandle(Process):
-    def __init__(self):
-        super().__init__(daemon=True)
-        self.parent, self.child = Pipe()
-        self.chatglm_model = None
-        self.chatglm_tokenizer = None
-        self.info = ""
-        self.success = True
-        self.check_dependency()
-        self.start()
-        self.threadLock = threading.Lock()
-
-    def check_dependency(self):
-        try:
-            import sentencepiece
-            self.info = "Dependency check passed"
-            self.success = True
-        except:
-            self.info = "Missing ChatGLM dependencies. Besides the basic pip requirements, you also need to run `pip install -r request_llm/requirements_chatglm.txt` to install ChatGLM's dependencies."
-            self.success = False
-
-    def ready(self):
-        return self.chatglm_model is not None
-
-    def run(self):
-        # Runs in the child process.
-        # First run: load the parameters.
-        retry = 0
-        while True:
-            try:
-                if self.chatglm_model is None:
-                    self.chatglm_tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
-                    device, = get_conf('LOCAL_MODEL_DEVICE')
-                    if device=='cpu':
-                        self.chatglm_model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).float()
-                    else:
-                        self.chatglm_model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
-                    self.chatglm_model = self.chatglm_model.eval()
-                    break
-                else:
-                    break
-            except:
-                retry += 1
-                if retry > 3:
-                    self.child.send('[Local Message] Call ChatGLM fail: cannot load ChatGLM parameters.')
-                    raise RuntimeError("Cannot load ChatGLM parameters!")
-
-        while True:
-            # Wait for the next task.
-            kwargs = self.child.recv()
-            # Message received; start the request.
-            try:
-                for response, history in self.chatglm_model.stream_chat(self.chatglm_tokenizer, **kwargs):
-                    self.child.send(response)
-                    # # Optionally receive a termination command midway (if any)
-                    # if self.child.poll():
-                    #     command = self.child.recv()
-                    #     if command == '[Terminate]': break
-            except:
-                from toolbox import trimmed_format_exc
-                self.child.send('[Local Message] Call ChatGLM fail.' + '\n```\n' + trimmed_format_exc() + '\n```\n')
-            # Request finished; start the next loop.
-            self.child.send('[Finish]')
-
-    def stream_chat(self, **kwargs):
-        # Runs in the parent process.
-        self.threadLock.acquire()
-        self.parent.send(kwargs)
-        while True:
-            res = self.parent.recv()
-            if res != '[Finish]':
-                yield res
-            else:
-                break
-        self.threadLock.release()
-
-global glm_handle
-glm_handle = None
-#################################################################################
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
-    """
-    Multi-threaded method; see request_llm/bridge_all.py for documentation.
-    """
-    global glm_handle
-    if glm_handle is None:
-        glm_handle = GetGLMHandle()
-        if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + glm_handle.info
-        if not glm_handle.success:
-            error = glm_handle.info
-            glm_handle = None
-            raise RuntimeError(error)
-
-    # chatglm has no sys_prompt interface, so the prompt is added to the history.
-    history_feedin = []
-    history_feedin.append(["What can I do?", sys_prompt])
-    for i in range(len(history)//2):
-        history_feedin.append([history[2*i], history[2*i+1]])
-
-    watch_dog_patience = 5  # watchdog patience; 5 seconds is enough
-    response = ""
-    for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
-        if len(observe_window) >= 1: observe_window[0] = response
-        if len(observe_window) >= 2:
-            if (time.time()-observe_window[1]) > watch_dog_patience:
-                raise RuntimeError("Program terminated.")
-    return response
-
-
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
-    """
-    Single-threaded method; see request_llm/bridge_all.py for documentation.
-    """
-    chatbot.append((inputs, ""))
-
-    global glm_handle
-    if glm_handle is None:
-        glm_handle = GetGLMHandle()
-        chatbot[-1] = (inputs, load_message + "\n\n" + glm_handle.info)
-        yield from update_ui(chatbot=chatbot, history=[])
-        if not glm_handle.success:
-            glm_handle = None
-            return
-
-    if additional_fn is not None:
-        import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt
-        core_functional = core_functional.get_core_functions()
-        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)  # fetch the preprocessing function (if any)
-        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
-
-    # Process the conversation history.
-    history_feedin = []
-    history_feedin.append(["What can I do?", system_prompt])
-    for i in range(len(history)//2):
-        history_feedin.append([history[2*i], history[2*i+1]])
-
-    # Start receiving ChatGLM's reply.
-    response = "[Local Message]: Waiting for ChatGLM response ..."
-    for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
-        chatbot[-1] = (inputs, response)
-        yield from update_ui(chatbot=chatbot, history=history)
-
-    # Final output.
-    if response == "[Local Message]: Waiting for ChatGLM response ...":
-        response = "[Local Message]: ChatGLM response error ..."
-    history.extend([inputs, response])
-    yield from update_ui(chatbot=chatbot, history=history)
diff --git a/spaces/zhenwusw/JoJoGAN/e4e/criteria/__init__.py b/spaces/zhenwusw/JoJoGAN/e4e/criteria/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/dataset/communal/__init__.py b/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/dataset/communal/__init__.py
deleted file mode 100644
index 8ea6021ad3c5c3d080e03089095aec34106e5541..0000000000000000000000000000000000000000
--- a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/dataset/communal/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-"""
-@Date: 2021/09/22
-@description:
-"""