diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Acronis Backup 12.5.1 Build 14240 Crack !EXCLUSIVE!.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Acronis Backup 12.5.1 Build 14240 Crack !EXCLUSIVE!.md deleted file mode 100644 index e9728b076a521f3b357720d50727e11cfc7ffe50..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Acronis Backup 12.5.1 Build 14240 Crack !EXCLUSIVE!.md +++ /dev/null @@ -1,36 +0,0 @@ - -

Acronis Backup 12.5.1 Build 14240 Crack: A Reliable and Flexible Solution for Data Protection

-

Acronis Backup 12.5.1 Build 14240 Crack is a powerful and versatile program that provides comprehensive data protection for any environment, including physical, virtual, cloud, mobile, and applications. With Acronis Backup 12.5.1 Build 14240 Crack, you can easily back up and restore your data, manage your backup policies, monitor your backup activities, and recover your data in minutes.

-

Acronis Backup 12.5.1 Build 14240 Crack


DOWNLOAD >>> https://byltly.com/2uKAdS



-

Acronis Backup 12.5.1 Build 14240 Crack is the latest update of Acronis Backup 12.5, which was released in August 2019. This update introduces several new features and enhancements, such as:

- -

Acronis Backup 12.5.1 Build 14240 Crack supports a wide range of operating systems, platforms, and applications, such as Windows, Linux, Mac OS X, VMware, Hyper-V, Citrix XenServer, Oracle VM Server, Microsoft Exchange Server, Microsoft SQL Server, Microsoft SharePoint Server, Microsoft Active Directory, Microsoft Office 365, Google G Suite, Amazon EC2, Azure VMs, iOS, Android, and more[^2^].

-

Acronis Backup 12.5.1 Build 14240 Crack is a reliable and flexible solution for data protection that can meet the needs of any business size and complexity. With Acronis Backup 12.5.1 Build 14240 Crack, you can ensure the availability and security of your data while saving time and money.

-

Acronis Backup 12.5.1 Build 14240 Crack: What Customers Say

-

Acronis Backup 12.5.1 Build 14240 Crack is not only a powerful and versatile data protection program, but also a solution that customers who have used it rate and recommend highly. According to TrustRadius, a platform for verified user reviews, Acronis Backup 12.5 has an average rating of 7.7 out of 10 based on 136 reviews and ratings[^3^]. Here are some of the pros and cons that customers have shared about Acronis Backup 12.5.1 Build 14240 Crack:

-

Pros

- -

Cons

- -

Overall, customers are satisfied with Acronis Backup 12.5.1 Build 14240 Crack and its features, performance, reliability, and support. Many customers have praised Acronis Backup 12.5.1 Build 14240 Crack as a solid solution for data protection that can meet the needs of any business size and complexity[^3^].

7b8c122e87
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Navisworks Exporter for Revit and Boost Your Collaboration and Coordination.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Navisworks Exporter for Revit and Boost Your Collaboration and Coordination.md deleted file mode 100644 index bc39300b516826bd4d0454abc9f164d77957010f..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Navisworks Exporter for Revit and Boost Your Collaboration and Coordination.md +++ /dev/null @@ -1,31 +0,0 @@ - -

How to Download and Install Navisworks Exporter for Revit

-

Navisworks Exporter for Revit is a plug-in that allows you to export Revit models as NWC files that can be opened and viewed in Navisworks. NWC files are optimized for performance and collaboration, and can be used for clash detection, coordination, and simulation.

-

download navisworks exporter for revit


DOWNLOAD >>> https://byltly.com/2uKyw1



-

If you want to download and install Navisworks Exporter for Revit, you can follow these steps:

-
1. Go to this page: Where to find the Navisworks Exporter for Revit.
2. Scroll down to the section entitled Navisworks NWC Export Utility and click on the link that matches your Revit version and operating system.
3. Save the file to your computer and run the installer. Follow the instructions on the screen to complete the installation.
4. Restart Revit if it was running during the installation.
5. To export a Revit model as an NWC file, click Add-Ins > External Tools > Autodesk Navisworks. In the Export Scene As dialog box, click the Autodesk Navisworks Settings button. Adjust the settings for your export and click OK. Then choose a location and a name for your NWC file and click Save.

Congratulations! You have successfully downloaded and installed Navisworks Exporter for Revit and exported your first NWC file.


-

NWC files are a great way to share and collaborate on Revit models with other stakeholders. You can use Navisworks to open and view NWC files, as well as combine them with other NWC files from different disciplines and sources. You can also use Navisworks to perform various tasks on the NWC files, such as:

-

- -

To view an NWC file in Navisworks, you need to have Navisworks installed on your computer. You can download a free trial version of Navisworks from this page: Navisworks Free Trial. Once you have Navisworks installed, you can open an NWC file by clicking File > Open and browsing to the location of the file. You can also drag and drop the file into the Navisworks window.

-

You can adjust the settings for future exports of NWC files from Revit by using the Options Editor in Navisworks. To access the Options Editor, click File > Options. Expand the File Exporters node and click the Revit page. Here you can change various options for your export, such as:

- -

You can also save your export settings as a profile and load it later for convenience.

ddb901b051
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Native Instruments Battery 4 Crack.md b/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Native Instruments Battery 4 Crack.md deleted file mode 100644 index 8eeab23ab59c8744d64c133f4aea8c5313a95cc4..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Native Instruments Battery 4 Crack.md +++ /dev/null @@ -1,6 +0,0 @@ - -

When you choose a cell in Battery, you can create a new Hit or Hit Mix which includes various samples. Battery provides an offline sampler which makes it possible for the user to import the audio sample of his or her choice. There is also a host of products and advanced effects to use with the samples in the collection.

-

Descargar Native Instruments Battery 4 Crack


Download >>> https://imgfil.com/2uy1b0



-

Battery is available in a free trial version, but not all features are available in it. In the trial version, you can load the samples that are already installed and can download sounds from the online sampler. The trial version also lets the user preview the recorded drum, effects, and EQ details and provides full access to the extensive online sampler. In the trial version, however, no additional modules, multi-track editing, importing of audio clips, or creating custom kits are available. In order to use the trial version, you must register for a free NIN account, which has its own unique limitations. It is not possible to download the free trial version to the desktop. Battery 3 offers 16 new percussion instruments from the most popular electronic percussion instruments. There are presets for everything from traditional acoustic drum kits to entire electronic drum kits. Some of the drums include modern drums such as the hi-hat, ride, toms, cymbals, an A/D core, and much more. These drums are made using a specially designed rack with thousands of samples for real-time performance to make the user an expert at creating real drum sounds. Production is easy with the space for mixing with real-time samples and also virtual racks to create your own kits. Battery also enables the user to get started quickly and easily.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Acronis True Image 2018 Download and Try the Most Reliable Backup Tool.md b/spaces/1phancelerku/anime-remove-background/Acronis True Image 2018 Download and Try the Most Reliable Backup Tool.md deleted file mode 100644 index 541796ca90648f615492cc7f2ef1ec6808ed7ab8..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Acronis True Image 2018 Download and Try the Most Reliable Backup Tool.md +++ /dev/null @@ -1,131 +0,0 @@ -
-

How to Download Acronis True Image 2018

-

If you are looking for a reliable and easy-to-use backup software that can protect your data and system from any disaster, you might want to consider Acronis True Image 2018. This software is one of the best in the market, offering a comprehensive set of features and tools that can help you create, manage, and restore backups of your files, disks, partitions, or entire machines. In this article, we will show you how to download Acronis True Image 2018, how to install and activate it, how to use its main functions, and how to get help and support if you need it.

-

What is Acronis True Image 2018 and why you need it

-

Acronis True Image 2018 is a personal cyber protection solution that delivers easy-to-use, efficient, and secure backup and recovery of your data and system. It can help you prevent data loss due to hardware failure, malware infection, accidental deletion, theft, or natural disaster. It can also help you migrate your data to a new device, clone your disk to a new drive, archive your files to save space, or verify the authenticity of your data with blockchain technology.

-

download acronis true image 2018


Download File ———>>> https://jinyurl.com/2uNLN7



-

Features of Acronis True Image 2018

-

Acronis True Image 2018 offers a rich set of features that can meet your backup needs. Some of the main features are:

- -

System requirements for Acronis True Image 2018

-

To use Acronis True Image 2018, you need to have a device that meets the following minimum system requirements:

| Operating system | Hardware |
| ---------------- | -------- |
| Windows 7 SP1 or later (32-bit and 64-bit) | 1 GHz processor or faster |
| macOS 10.11 or later | 2 GB RAM or more |
| iOS 10.0 or later | 1.5 GB free disk space or more |
| Android 4.1 or later | A high-speed internet connection for cloud backup and recovery |
-

How to purchase and activate Acronis True Image 2018

-

To use Acronis True Image 2018, you need to purchase a subscription plan and activate the software with a license key. Here is how you can do that:

-

Pricing and subscription plans

-

Acronis True Image 2018 offers three subscription plans that vary in terms of features, cloud storage, and number of devices. You can choose the plan that suits your needs and budget. The plans are:

- -

Activation and licensing process

-

To activate Acronis True Image 2018, you need to have a license key that corresponds to your subscription plan. You can get the license key in one of the following ways:

- -

To activate Acronis True Image 2018, you need to enter the license key in the software interface after installing it on your device. You can also activate it online by logging in to your Acronis account and entering the license key there.

-

How to download and install Acronis True Image 2018

-

To download and install Acronis True Image 2018, you need to have a valid license key and an internet connection. Here is how you can do that:

-

Download link and installation file

-

You can download Acronis True Image 2018 from the official website or from the email that you received after purchasing or registering for the trial. The download link will direct you to the appropriate version of the software for your operating system (Windows, macOS, iOS, or Android). The installation file is a .exe file for Windows, a .dmg file for macOS, an .ipa file for iOS, and an .apk file for Android. The file size is about 500 MB for Windows and macOS, and about 100 MB for iOS and Android. You can save the file to your device or run it directly from the browser.

-

How to download acronis true image 2018 for free
-Download acronis true image 2018 full version with crack
-Acronis true image 2018 download link
-Download acronis true image 2018 iso
-Acronis true image 2018 bootable usb download
-Download acronis true image 2018 offline installer
-Acronis true image 2018 trial download
-Download acronis true image 2018 for windows 10
-Acronis true image 2018 mac download
-Download acronis true image 2018 serial key
-Acronis true image 2018 activation key download
-Download acronis true image 2018 user guide
-Acronis true image 2018 backup software download
-Download acronis true image 2018 update
-Acronis true image 2018 cloud download
-Download acronis true image 2018 recovery disk
-Acronis true image 2018 clone disk download
-Download acronis true image 2018 license key
-Acronis true image 2018 coupon code download
-Download acronis true image 2018 portable
-Acronis true image 2018 linux download
-Download acronis true image 2018 for android
-Acronis true image 2018 review download
-Download acronis true image 2018 patch
-Acronis true image 2018 keygen download
-Download acronis true image 2018 for pc
-Acronis true image 2018 system requirements download
-Download acronis true image 2018 latest version
-Acronis true image 2018 features download
-Download acronis true image 2018 comparison chart
-Acronis true image 2018 upgrade download
-Download acronis true image 2018 tutorial
-Acronis true image 2018 support download
-Download acronis true image 2018 forum
-Acronis true image 2018 problems download
-Download acronis true image 2018 tips and tricks
-Acronis true image 2018 alternatives download
-Download acronis true image 2018 vs norton ghost
-Acronis true image 2018 vs windows backup download
-Download acronis true image 2018 vs macrium reflect

-

Installation steps and options

-

To install Acronis True Image 2018, you need to run the installation file and follow the instructions on the screen. The installation process is similar for all operating systems, but there may be some differences in the options and settings. Here are the general steps and options for installing Acronis True Image 2018:

-
1. Accept the license agreement: You need to read and accept the terms and conditions of the license agreement before proceeding with the installation.
2. Choose the installation type: You can choose between a typical installation or a custom installation. The typical installation will install the software with the default settings and options, while the custom installation will allow you to change some of them, such as the installation location, the components to install, and the language.
3. Enter the license key: You need to enter the license key that you received after purchasing or registering for the trial. The license key will activate the software and determine the features and subscription plan that you can use.
4. Sign in to your Acronis account: You need to sign in to your Acronis account or create one if you don't have one. Your Acronis account will allow you to manage your subscription, access your cloud backups, sync your data across devices, and get help and support.
5. Complete the installation: The installation will take a few minutes to complete. You may need to restart your device after the installation is finished.

How to use Acronis True Image 2018

-

After installing and activating Acronis True Image 2018, you can start using it to backup and protect your data and system. The software has a user-friendly interface that allows you to access its main functions and settings. Here is how you can use Acronis True Image 2018:

-

Backup and recovery options

-

To create a backup of your data or system, you need to select the source (the data or disk that you want to back up) and the destination (the location where you want to store the backup). You can also choose the backup type, frequency, encryption, notification, and other options. To restore your data or system from a backup, you need to select the backup source (the location where the backup is stored) and the recovery destination (the location where you want to restore the data or disk). You can also choose the recovery mode, options, and verification.
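
If the source/destination idea is new to you, the minimal Python sketch below shows the general pattern. It is only an illustration of the concept, not Acronis's actual engine, and the folder names are hypothetical:

```python
import shutil
from datetime import datetime
from pathlib import Path

def create_backup(source: str, destination: str) -> Path:
    """Copy the source folder into a new timestamped subfolder of the destination."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = Path(destination) / f"backup-{stamp}"
    shutil.copytree(source, target)  # a full copy; real backup tools also offer incremental modes
    return target

def restore_backup(backup: str, recovery_destination: str) -> None:
    """Copy a previously created backup back to the chosen recovery destination."""
    shutil.copytree(backup, recovery_destination, dirs_exist_ok=True)

if __name__ == "__main__":
    made = create_backup("data", "backups")        # hypothetical source and destination
    restore_backup(str(made), "restored-data")     # hypothetical recovery destination
```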

-

Cloning and archiving options

-

To clone your disk to another disk, you need to select the source disk (the disk that you want to clone) and the destination disk (the disk where you want to copy the data). You can also choose the cloning mode (automatic or manual) and options (such as resizing partitions or excluding files). To archive your files to another location, you need to select the source files (the files that you want to archive) and the destination location (the local drive or cloud storage where you want to store the archived files). You can also choose the archiving options (such as compression, encryption, or scheduling).

-

Active protection and notary options

-

To protect your data from ransomware attacks, you need to enable Acronis Active Protection in the software settings. This feature will monitor your system for suspicious activity and block any unauthorized encryption attempts. It will also allow you to recover any affected files from a backup. To verify the integrity and authenticity of your data, you need to use Acronis Notary in the software interface. This feature will create a unique digital fingerprint for your data and store it in a public ledger. You can then use this fingerprint to prove that your data has not been altered or tampered with.
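
Under the hood, a "digital fingerprint" of this kind is a cryptographic hash of the file. Acronis does not document its exact pipeline, so the following Python sketch only illustrates the underlying idea, with a hypothetical file name:

```python
import hashlib

def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of a file; any change to the file changes the digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # read in chunks to handle large files
            digest.update(chunk)
    return digest.hexdigest()

# Record the digest when the file is notarized, then recompute it later:
# a matching digest is evidence the file has not been altered or tampered with.
original = fingerprint("contract.pdf")  # hypothetical file
assert fingerprint("contract.pdf") == original
```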

-

How to get help and support for Acronis True Image 2018

-

If you have any questions or issues with Acronis True Image 2018, you can get help and support from various sources. Some of the main sources are:

-

Documentation and tutorials

-

You can find the user guide, the quick start guide, the FAQ, and the video tutorials for Acronis True Image 2018 on the official website. These resources will provide you with detailed information and instructions on how to use the software and its features.

-

Knowledge base and community forum

-

You can search for answers and solutions to common problems and errors in the knowledge base and the community forum on the official website. These resources will provide you with articles, tips, tricks, and advice from Acronis experts and other users.

-

Technical support and initial setup service

-

You can contact the technical support team by phone, email, or chat if you need assistance with installation, activation, configuration, or troubleshooting. The technical support team is available 24/7 and can help you resolve any issues or errors. You can also purchase the initial setup service if you want an Acronis technician to remotely install and configure the software for you.

-

Conclusion and FAQs

-

Acronis True Image 2018 is a powerful backup software that can protect your data and system from any disaster. It offers a comprehensive set of features and tools that can help you create, manage, and restore backups of your files, disks, partitions, or entire machines. It also offers cloud backup, active protection, notary, and other advanced features that can enhance your data security and integrity. To use Acronis True Image 2018, you need to purchase a subscription plan, activate the software with a license key, download and install the software on your device, and start using its main functions. You can also get help and support from various sources if you need it.

-

Here are some FAQs that you might have about Acronis True Image 2018:

- -

I hope this article has helped you learn how to download Acronis True Image 2018 and use it to backup and protect your data and system. If you have any feedback or suggestions, please let me know in the comments below. Thank you for reading!

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Very Very Very by I.O.I - The Song That Broke the Charts.md b/spaces/1phancelerku/anime-remove-background/Download Very Very Very by I.O.I - The Song That Broke the Charts.md deleted file mode 100644 index 3ddb6a40ef6a1e838b594043ffaaf0877b69d2e5..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Very Very Very by I.O.I - The Song That Broke the Charts.md +++ /dev/null @@ -1,132 +0,0 @@ - -

How to Download I.O.I's Very Very Very Song and Enjoy Its Catchy Melody

-

If you are a fan of K-pop, you might have heard of I.O.I, a girl group project that was formed through a survival reality show called Produce 101. The group consisted of 11 members who were selected from different agencies and debuted in 2016. They released two mini-albums and several singles before disbanding in 2017.

-

One of their most popular songs is Very Very Very, which was released as the title track of their second mini-album Miss Me? in October 2016. The song was composed by Park Jin-young, the founder of JYP Entertainment, and has a catchy melody and lyrics that express a girl's feelings for a boy. The song topped various music charts in South Korea and won several music awards.

-

download ioi very very very


Download File >>> https://jinyurl.com/2uNULO



-

If you love this song and want to listen to it anytime and anywhere, you might want to download it and enjoy it offline. Downloading the song can save you data and battery, as well as allow you to play it on different devices without an internet connection. In this article, we will show you how to download I.O.I's Very Very Very song from different platforms, and how to enjoy it offline.

-

How to Download the Song from Different Platforms

-

There are many platforms where you can stream or download I.O.I's Very Very Very song, such as YouTube, Spotify, Apple Music, etc. However, not all of them offer free downloads or easy access. Here are some ways you can download the song from these platforms:

-

YouTube

-

YouTube is one of the most popular platforms where you can watch I.O.I's Very Very Very music video and listen to their song. However, if you want to download the song from YouTube, you have two options:

- -

Spotify

-

Spotify is another popular platform where you can stream or download I.O.I's Very Very Very song, as well as other songs from their albums and playlists. However, if you want to download the song from Spotify, you also have two options:

-

Apple Music

-

Apple Music is another popular platform where you can stream or download I.O.I's Very Very Very song, as well as other songs from their albums and playlists. However, if you want to download the song from Apple Music, you also have two options:

-

download ioi very very very mp3
-download ioi very very very lyrics
-download ioi very very very album
-download ioi very very very mv
-download ioi very very very dance practice
-download ioi very very very instrumental
-download ioi very very very live performance
-download ioi very very very ringtone
-download ioi very very very english cover
-download ioi very very very remix
-download ioi miss me album with very very very
-download ioi park jin young produced song very very very
-download ioi final comeback song very very very
-download ioi somi center song very very very
-download ioi addictive song very very very
-download ioi electropop song very very very
-download ioi bubblegum pop song very very very
-download ioi drum and bass song very very very
-download ioi number one song on gaon chart for 2016 october week 3 - 4, 2016, november week 1 - 2, 2016, december week 1 - 2, 2016, january week 1 - 2, 2017, february week 1 - 2, 2017, march week 1 - 2, 2017, april week 1 - 2, 2017, may week 1 - 2, 2017, june week 1 - 2, 2017, july week 1 - 2, 2017, august week 1 - 2, 2017, september week 1 - 2, 2017, october week 1 - 2, 2017, november week 1 - 2, 2017 and december week 1 - 2, 2017.
-download ioi most viewed kpop music video on youtube in america and worldwide for october month of year two thousand and sixteen according to billboard magazine article titled "Most Viewed K-Pop Videos in America & Around the World: October Month of Year Two Thousand and Sixteen" published on november month of year two thousand and sixteen date fourteen.
-download ioi song that sold over four hundred and twenty three thousand four hundred and ninety one downloads as of october month of year two thousand and sixteen according to gaon chart.
-download ioi song that won first place on mbc music show champion on october month of year two thousand and sixteen date twenty six and on mnet music show m countdown on october month of year two thousand and sixteen date twenty seven.
-download ioi song that was performed on mnet i.o.i x jyp special show and on the showcase for the mini album miss me release.
-download ioi song that was composed by park jin young who also wrote the lyrics.
-download ioi song that has a catchy chorus with the repeated phrase "neomu neomu neomu" which means "very very very" in korean language.
-download ioi song that expresses the feelings of a girl who wants to hear the confession from the guy she likes.
-download ioi song that has a colorful and cute music video with various outfits and props.
-download ioi song that has a fun and energetic dance choreography with a lot of jumping and waving.
-download ioi song that features all eleven members of the group including nayoung chungha sejeong chaeyeon kyulkyung sohye yeonjung yoojung mina doyeon and somi.
-download ioi song that is the title track of their second mini album miss me which was released on october month of year two thousand and sixteen date seventeen.

- -

How to Enjoy the Song Offline

-

Now that you have downloaded I.O.I's Very Very Very song from your preferred platform, you can enjoy it offline anytime and anywhere. Here are some ways you can enjoy the song offline:

-

Transfer the Song to Your Devices

-

If you want to listen to the song on different devices, such as your phone, tablet, laptop, etc., you need to transfer the song from your original device to your other devices. There are several ways you can do this:

- -

Play the Song with Your Favorite Music Player

-

If you want to listen to the song with your favorite music player, such as VLC, Winamp, iTunes, etc., you need to open the song file with your music player and enjoy its features and settings. Here are some tips you can follow:

- -

Sing Along with the Lyrics and Learn Some Korean Words

-

If you want to sing along with I.O.I's Very Very Very song and learn some Korean words from it, you need to find the lyrics of the song online or offline. You can use the following table to compare the sources of the lyrics and their features:

| Source | Features |
| ------ | -------- |
| [Color Coded Lyrics](^1^) | Provides the lyrics in Korean, Romanization, and English translation. Also provides the color codes for each member's parts and some background information about the song. |
| [Genius](^2^) | Provides the lyrics in Korean and English translation. Also provides some annotations, explanations, and trivia about the song. |
| [AZLyrics](^3^) | Provides the lyrics in English translation only. |

You can choose the source that suits your preference and needs, and then follow these steps to sing along with the lyrics and learn some Korean words:

- Open the source of the lyrics on your device and search for I.O.I's Very Very Very song.
- Play the song with your music player and follow the lyrics on your screen.
- Try to sing along with the song and pronounce the Korean words correctly. You can also use the Romanization or the English translation to help you understand the meaning of the words.
- Pay attention to some common or useful Korean words and phrases from the song, such as 너무 (very), 좋아하다 (to like), 말해줘 (tell me), 자꾸 (keep), 떠오르다 (to come to mind), 조심하다 (to be careful), etc. You can also use a dictionary or a translator to look up more words or phrases that interest you.
- Repeat the steps until you can sing along with the song confidently and learn some Korean words fluently.

Conclusion

-

In this article, we have shown you how to download I.O.I's Very Very Very song from different platforms, such as YouTube, Spotify, Apple Music, etc., and how to enjoy it offline, such as transferring it to your devices, playing it with your favorite music player, singing along with the lyrics, and learning some Korean words. We hope you have found this article helpful and informative, and that you have enjoyed listening to I.O.I's Very Very Very song.

-

I.O.I was a talented and charming girl group that left a lasting impression on many fans with their songs and performances. Although they have disbanded, their music lives on and can still bring joy and happiness to many listeners. If you are one of them, we encourage you to download and enjoy their Very Very Very song offline, as well as their other songs from their albums and playlists.

-

Thank you for reading this article. If you have any questions or feedback, please feel free to leave them in the comments section below. We would love to hear from you.

-

FAQs

-

Here are some frequently asked questions about I.O.I's Very Very Very song and how to download and enjoy it offline:

-
1. Q: When was I.O.I's Very Very Very song released?
   A: I.O.I's Very Very Very song was released on October 17, 2016 as the title track of their second mini-album Miss Me?
2. Q: Who composed I.O.I's Very Very Very song?
   A: I.O.I's Very Very Very song was composed by Park Jin-young, the founder of JYP Entertainment, who also produced other songs for I.O.I.
3. Q: How many members were in I.O.I?
   A: I.O.I had 11 members who were selected from different agencies through a survival reality show called Produce 101. They were Nayoung, Chungha, Sejeong, Chaeyeon, Kyulkyung, Sohye, Yeonjung, Yoojung, Mina, Doyeon, and Somi.
4. Q: Why did I.O.I disband?
   A: I.O.I disbanded in 2017 because they were a project group that had a limited contract period. The members returned to their original agencies and pursued their individual careers.
5. Q: Where can I find more songs by I.O.I?
   A: You can find more songs by I.O.I on various platforms, such as YouTube, Spotify, Apple Music, etc. You can also check out their albums and playlists, such as Chrysalis, Miss Me?, Whatta Man, etc.
401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/2023Liu2023/bingo/src/components/markdown.tsx b/spaces/2023Liu2023/bingo/src/components/markdown.tsx deleted file mode 100644 index d4491467a1f14d1d72e535caac9c40636054e5df..0000000000000000000000000000000000000000 --- a/spaces/2023Liu2023/bingo/src/components/markdown.tsx +++ /dev/null @@ -1,9 +0,0 @@ -import { FC, memo } from 'react' -import ReactMarkdown, { Options } from 'react-markdown' - -export const MemoizedReactMarkdown: FC = memo( - ReactMarkdown, - (prevProps, nextProps) => - prevProps.children === nextProps.children && - prevProps.className === nextProps.className -) diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/inference/svs/base_svs_infer.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/inference/svs/base_svs_infer.py deleted file mode 100644 index 39ed74f29f7526d5149e4f0079a3681a3bac2582..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/inference/svs/base_svs_infer.py +++ /dev/null @@ -1,265 +0,0 @@ -import os - -import torch -import numpy as np -from modules.hifigan.hifigan import HifiGanGenerator -from vocoders.hifigan import HifiGAN -from inference.svs.opencpop.map import cpop_pinyin2ph_func - -from utils import load_ckpt -from utils.hparams import set_hparams, hparams -from utils.text_encoder import TokenTextEncoder -from pypinyin import pinyin, lazy_pinyin, Style -import librosa -import glob -import re - - -class BaseSVSInfer: - def __init__(self, hparams, device=None): - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.hparams = hparams - self.device = device - - phone_list = ["AP", "SP", "a", "ai", "an", "ang", "ao", "b", "c", "ch", "d", "e", "ei", "en", "eng", "er", "f", "g", - "h", "i", "ia", "ian", "iang", "iao", "ie", "in", "ing", "iong", "iu", "j", "k", "l", "m", "n", "o", - "ong", "ou", "p", "q", "r", "s", "sh", "t", "u", "ua", "uai", "uan", "uang", "ui", "un", "uo", "v", - "van", "ve", "vn", "w", "x", "y", "z", "zh"] - self.ph_encoder = TokenTextEncoder(None, vocab_list=phone_list, replace_oov=',') - self.pinyin2phs = cpop_pinyin2ph_func() - self.spk_map = {'opencpop': 0} - - self.model = self.build_model() - self.model.eval() - self.model.to(self.device) - self.vocoder = self.build_vocoder() - self.vocoder.eval() - self.vocoder.to(self.device) - - def build_model(self): - raise NotImplementedError - - def forward_model(self, inp): - raise NotImplementedError - - def build_vocoder(self): - base_dir = hparams['vocoder_ckpt'] - config_path = f'{base_dir}/config.yaml' - ckpt = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'), key= - lambda x: int(re.findall(f'{base_dir}/model_ckpt_steps_(\d+).ckpt', x)[0]))[-1] - print('| load HifiGAN: ', ckpt) - ckpt_dict = torch.load(ckpt, map_location="cpu") - config = set_hparams(config_path, global_hparams=False) - state = ckpt_dict["state_dict"]["model_gen"] - vocoder = HifiGanGenerator(config) - vocoder.load_state_dict(state, strict=True) - vocoder.remove_weight_norm() - vocoder = vocoder.eval().to(self.device) - return vocoder - - def run_vocoder(self, c, **kwargs): - c = c.transpose(2, 1) # [B, 80, T] - f0 = kwargs.get('f0') # [B, T] - if f0 is not None and hparams.get('use_nsf'): - # f0 = torch.FloatTensor(f0).to(self.device) - y = self.vocoder(c, f0).view(-1) - else: - y = self.vocoder(c).view(-1) - # [T] - return y[None] - - def preprocess_word_level_input(self, inp): - # Pypinyin can't solve polyphonic words - text_raw = inp['text'].replace('最长', '最常').replace('长睫毛', '常睫毛') \ - .replace('那么长', 
'那么常').replace('多长', '多常') \ - .replace('很长', '很常') # We hope someone could provide a better g2p module for us by opening pull requests. - - # lyric - pinyins = lazy_pinyin(text_raw, strict=False) - ph_per_word_lst = [self.pinyin2phs[pinyin.strip()] for pinyin in pinyins if pinyin.strip() in self.pinyin2phs] - - # Note - note_per_word_lst = [x.strip() for x in inp['notes'].split('|') if x.strip() != ''] - mididur_per_word_lst = [x.strip() for x in inp['notes_duration'].split('|') if x.strip() != ''] - - if len(note_per_word_lst) == len(ph_per_word_lst) == len(mididur_per_word_lst): - print('Pass word-notes check.') - else: - print('The number of words does\'t match the number of notes\' windows. ', - 'You should split the note(s) for each word by | mark.') - print(ph_per_word_lst, note_per_word_lst, mididur_per_word_lst) - print(len(ph_per_word_lst), len(note_per_word_lst), len(mididur_per_word_lst)) - return None - - note_lst = [] - ph_lst = [] - midi_dur_lst = [] - is_slur = [] - for idx, ph_per_word in enumerate(ph_per_word_lst): - # for phs in one word: - # single ph like ['ai'] or multiple phs like ['n', 'i'] - ph_in_this_word = ph_per_word.split() - - # for notes in one word: - # single note like ['D4'] or multiple notes like ['D4', 'E4'] which means a 'slur' here. - note_in_this_word = note_per_word_lst[idx].split() - midi_dur_in_this_word = mididur_per_word_lst[idx].split() - # process for the model input - # Step 1. - # Deal with note of 'not slur' case or the first note of 'slur' case - # j ie - # F#4/Gb4 F#4/Gb4 - # 0 0 - for ph in ph_in_this_word: - ph_lst.append(ph) - note_lst.append(note_in_this_word[0]) - midi_dur_lst.append(midi_dur_in_this_word[0]) - is_slur.append(0) - # step 2. - # Deal with the 2nd, 3rd... notes of 'slur' case - # j ie ie - # F#4/Gb4 F#4/Gb4 C#4/Db4 - # 0 0 1 - if len(note_in_this_word) > 1: # is_slur = True, we should repeat the YUNMU to match the 2nd, 3rd... notes. - for idx in range(1, len(note_in_this_word)): - ph_lst.append(ph_in_this_word[-1]) - note_lst.append(note_in_this_word[idx]) - midi_dur_lst.append(midi_dur_in_this_word[idx]) - is_slur.append(1) - ph_seq = ' '.join(ph_lst) - - if len(ph_lst) == len(note_lst) == len(midi_dur_lst): - print(len(ph_lst), len(note_lst), len(midi_dur_lst)) - print('Pass word-notes check.') - else: - print('The number of words does\'t match the number of notes\' windows. ', - 'You should split the note(s) for each word by | mark.') - return None - return ph_seq, note_lst, midi_dur_lst, is_slur - - def preprocess_phoneme_level_input(self, inp): - ph_seq = inp['ph_seq'] - note_lst = inp['note_seq'].split() - midi_dur_lst = inp['note_dur_seq'].split() - is_slur = [float(x) for x in inp['is_slur_seq'].split()] - print(len(note_lst), len(ph_seq.split()), len(midi_dur_lst)) - if len(note_lst) == len(ph_seq.split()) == len(midi_dur_lst): - print('Pass word-notes check.') - else: - print('The number of words does\'t match the number of notes\' windows. ', - 'You should split the note(s) for each word by | mark.') - return None - return ph_seq, note_lst, midi_dur_lst, is_slur - - def preprocess_input(self, inp, input_type='word'): - """ - - :param inp: {'text': str, 'item_name': (str, optional), 'spk_name': (str, optional)} - :return: - """ - - item_name = inp.get('item_name', '') - spk_name = inp.get('spk_name', 'opencpop') - - # single spk - spk_id = self.spk_map[spk_name] - - # get ph seq, note lst, midi dur lst, is slur lst. 
- if input_type == 'word': - ret = self.preprocess_word_level_input(inp) - elif input_type == 'phoneme': # like transcriptions.txt in Opencpop dataset. - ret = self.preprocess_phoneme_level_input(inp) - else: - print('Invalid input type.') - return None - - if ret: - ph_seq, note_lst, midi_dur_lst, is_slur = ret - else: - print('==========> Preprocess_word_level or phone_level input wrong.') - return None - - # convert note lst to midi id; convert note dur lst to midi duration - try: - midis = [librosa.note_to_midi(x.split("/")[0]) if x != 'rest' else 0 - for x in note_lst] - midi_dur_lst = [float(x) for x in midi_dur_lst] - except Exception as e: - print(e) - print('Invalid Input Type.') - return None - - ph_token = self.ph_encoder.encode(ph_seq) - item = {'item_name': item_name, 'text': inp['text'], 'ph': ph_seq, 'spk_id': spk_id, - 'ph_token': ph_token, 'pitch_midi': np.asarray(midis), 'midi_dur': np.asarray(midi_dur_lst), - 'is_slur': np.asarray(is_slur), } - item['ph_len'] = len(item['ph_token']) - return item - - def input_to_batch(self, item): - item_names = [item['item_name']] - text = [item['text']] - ph = [item['ph']] - txt_tokens = torch.LongTensor(item['ph_token'])[None, :].to(self.device) - txt_lengths = torch.LongTensor([txt_tokens.shape[1]]).to(self.device) - spk_ids = torch.LongTensor(item['spk_id'])[None, :].to(self.device) - - pitch_midi = torch.LongTensor(item['pitch_midi'])[None, :hparams['max_frames']].to(self.device) - midi_dur = torch.FloatTensor(item['midi_dur'])[None, :hparams['max_frames']].to(self.device) - is_slur = torch.LongTensor(item['is_slur'])[None, :hparams['max_frames']].to(self.device) - - batch = { - 'item_name': item_names, - 'text': text, - 'ph': ph, - 'txt_tokens': txt_tokens, - 'txt_lengths': txt_lengths, - 'spk_ids': spk_ids, - 'pitch_midi': pitch_midi, - 'midi_dur': midi_dur, - 'is_slur': is_slur - } - return batch - - def postprocess_output(self, output): - return output - - def infer_once(self, inp): - inp = self.preprocess_input(inp, input_type=inp['input_type'] if inp.get('input_type') else 'word') - output = self.forward_model(inp) - output = self.postprocess_output(output) - return output - - @classmethod - def example_run(cls, inp): - from utils.audio import save_wav - set_hparams(print_hparams=False) - infer_ins = cls(hparams) - out = infer_ins.infer_once(inp) - os.makedirs('infer_out', exist_ok=True) - save_wav(out, f'infer_out/example_out.wav', hparams['audio_sample_rate']) - - -# if __name__ == '__main__': - # debug - # a = BaseSVSInfer(hparams) - # a.preprocess_input({'text': '你 说 你 不 SP 懂 为 何 在 这 时 牵 手 AP', - # 'notes': 'D#4/Eb4 | D#4/Eb4 | D#4/Eb4 | D#4/Eb4 | rest | D#4/Eb4 | D4 | D4 | D4 | D#4/Eb4 | F4 | D#4/Eb4 | D4 | rest', - # 'notes_duration': '0.113740 | 0.329060 | 0.287950 | 0.133480 | 0.150900 | 0.484730 | 0.242010 | 0.180820 | 0.343570 | 0.152050 | 0.266720 | 0.280310 | 0.633300 | 0.444590' - # }) - - # b = { - # 'text': '小酒窝长睫毛AP是你最美的记号', - # 'notes': 'C#4/Db4 | F#4/Gb4 | G#4/Ab4 | A#4/Bb4 F#4/Gb4 | F#4/Gb4 C#4/Db4 | C#4/Db4 | rest | C#4/Db4 | A#4/Bb4 | G#4/Ab4 | A#4/Bb4 | G#4/Ab4 | F4 | C#4/Db4', - # 'notes_duration': '0.407140 | 0.376190 | 0.242180 | 0.509550 0.183420 | 0.315400 0.235020 | 0.361660 | 0.223070 | 0.377270 | 0.340550 | 0.299620 | 0.344510 | 0.283770 | 0.323390 | 0.360340' - # } - # c = { - # 'text': '小酒窝长睫毛AP是你最美的记号', - # 'ph_seq': 'x iao j iu w o ch ang ang j ie ie m ao AP sh i n i z ui m ei d e j i h ao', - # 'note_seq': 'C#4/Db4 C#4/Db4 F#4/Gb4 F#4/Gb4 G#4/Ab4 G#4/Ab4 A#4/Bb4 A#4/Bb4 F#4/Gb4 F#4/Gb4 
F#4/Gb4 C#4/Db4 C#4/Db4 C#4/Db4 rest C#4/Db4 C#4/Db4 A#4/Bb4 A#4/Bb4 G#4/Ab4 G#4/Ab4 A#4/Bb4 A#4/Bb4 G#4/Ab4 G#4/Ab4 F4 F4 C#4/Db4 C#4/Db4', - # 'note_dur_seq': '0.407140 0.407140 0.376190 0.376190 0.242180 0.242180 0.509550 0.509550 0.183420 0.315400 0.315400 0.235020 0.361660 0.361660 0.223070 0.377270 0.377270 0.340550 0.340550 0.299620 0.299620 0.344510 0.344510 0.283770 0.283770 0.323390 0.323390 0.360340 0.360340', - # 'is_slur_seq': '0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0' - # } # input like Opencpop dataset. - # a.preprocess_input(b) - # a.preprocess_input(c, input_type='phoneme') \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/brainstorming.py b/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/brainstorming.py deleted file mode 100644 index a6db1a5f6a963dee1736aa7ad4af2310b43b3a51..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/brainstorming.py +++ /dev/null @@ -1,67 +0,0 @@ -from __future__ import annotations -import asyncio -from colorama import Fore - -from typing import TYPE_CHECKING, List - -from . import decision_maker_registry -from .base import BaseDecisionMaker -from agentverse.logging import logger - -from agentverse.message import Message - -if TYPE_CHECKING: - from agentverse.agents.base import BaseAgent - from agentverse.message import CriticMessage - - -@decision_maker_registry.register("brainstorming") -class BrainstormingDecisionMaker(BaseDecisionMaker): - """ - Much like the horizontal decision maker, but with some twists: - (1) Solver acts as a summarizer, summarizing the discussion of this turn - (2) After summarizing, all the agents' memory are cleared, and replaced with - the summary (to avoid exceeding maximum context length of the model too fast) - """ - - name: str = "brainstorming" - - async def astep( - self, - agents: List[BaseAgent], - task_description: str, - previous_plan: str = "No solution yet.", - advice: str = "No advice yet.", - *args, - **kwargs, - ) -> List[str]: - if advice != "No advice yet.": - self.broadcast_messages( - agents, [Message(content=advice, sender="Evaluator")] - ) - for agent in agents[1:]: - review: CriticMessage = await agent.astep( - previous_plan, advice, task_description - ) - if review.content != "": - self.broadcast_messages(agents, [review]) - - logger.info("", "Reviews:", Fore.YELLOW) - logger.info( - "", - f"[{review.sender}]: {review.content}", - Fore.YELLOW, - ) - - result = agents[0].step(previous_plan, advice, task_description) - for agent in agents: - agent.memory.reset() - self.broadcast_messages( - agents, - [ - Message( - content=result.content, sender="Summary From Previous Discussion" - ) - ], - ) - return [result] diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/AddChildMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/AddChildMethods.js deleted file mode 100644 index deb49535fdbb2ea02abf872ffb42a229c64c6eb3..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/AddChildMethods.js +++ /dev/null @@ -1,112 +0,0 @@ -import AddChild from '../basesizer/utils/AddChild.js'; -import GetBoundsConfig from '../utils/GetBoundsConfig.js'; -import ALIGNMODE from '../utils/AlignConst.js'; - -const IsPlainObject = 
Phaser.Utils.Objects.IsPlainObject; -const GetValue = Phaser.Utils.Objects.GetValue; -const ALIGN_CENTER = Phaser.Display.Align.CENTER; - - -var GetEmptyCellIndex = function (columnIndex, rowIndex, cells, columnCount, rowCount) { - if ((typeof (columnIndex) === 'number') || (typeof (rowIndex) === 'number')) { - if (columnIndex === undefined) { - var idx; - for (var i = 0; i < columnCount; i++) { - idx = (rowIndex * columnCount) + i; - if (!cells[idx]) { - return idx; - } - } - } else if (rowIndex === undefined) { - var idx; - for (var i = 0; i < rowCount; i++) { - idx = (i * columnCount) + columnIndex; - if (!cells[idx]) { - return idx; - } - } - } else { - var idx = (rowIndex * columnCount) + columnIndex; - if (!cells[idx]) { - return idx; - } - } - - } else if (rowIndex === true) { - var idx; - for (var i = 0; i < columnCount; i++) { - for (var j = 0; j < rowCount; j++) { - idx = (j * columnCount) + i; - if (!cells[idx]) { - return idx; - } - } - } - } else { - for (var i = 0, cnt = cells.length; i < cnt; i++) { - if (!cells[i]) { - return i; - } - } - } - return null; -} - -var Add = function (gameObject, columnIndex, rowIndex, align, paddingConfig, expand, childKey) { - AddChild.call(this, gameObject); - if (IsPlainObject(columnIndex)) { - var config = columnIndex; - columnIndex = GetValue(config, 'column', undefined); - rowIndex = GetValue(config, 'row', undefined); - align = GetValue(config, 'align', ALIGN_CENTER); - paddingConfig = GetValue(config, 'padding', 0); - expand = GetValue(config, 'expand', false); - childKey = GetValue(config, 'key', undefined); - } - - // Get insert index - var itemIndex = GetEmptyCellIndex(columnIndex, rowIndex, this.sizerChildren, this.columnCount, this.rowCount); - if (itemIndex === null) { - // Specific index mode - if ((typeof (columnIndex) === 'number') && (typeof (rowIndex) === 'number')) { - return this; - } - - if ((rowIndex === true) || (typeof (rowIndex) === 'number')) { - this.addEmptyColumn(); - } else { - this.addEmptyRow(); - } - - // Get insert index again - itemIndex = GetEmptyCellIndex(columnIndex, rowIndex, this.sizerChildren, this.columnCount, this.rowCount); - } - - if (typeof (align) === 'string') { - align = ALIGNMODE[align]; - } - if (align === undefined) { - align = ALIGN_CENTER; - } - if (paddingConfig === undefined) { - paddingConfig = 0; - } - if (expand === undefined) { - expand = true; - } - - var config = this.getSizerConfig(gameObject); - config.align = align; - config.padding = GetBoundsConfig(paddingConfig); - config.expand = expand; - this.sizerChildren[itemIndex] = gameObject; - - if (childKey !== undefined) { - this.addChildrenMap(childKey, gameObject) - } - return this; -} - -export default { - add: Add -} \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogresscanvas/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogresscanvas/Factory.js deleted file mode 100644 index e182045fa63b8a22d7195f48a411d1ed3abb2478..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogresscanvas/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import LineProgressCanvas from './LineProgressCanvas.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('circularProgressCanvas', function (x, y, width, height, barColor, value, config) { - var gameObject = new 
LineProgressCanvas(this.scene, x, y, width, height, barColor, value, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.LineProgressCanvas', LineProgressCanvas); - -export default LineProgressCanvas; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/ParseEaseConfig.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/ParseEaseConfig.js deleted file mode 100644 index c891bac21de5bd16ec694c5f6abb9eaaf3280751..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/ParseEaseConfig.js +++ /dev/null @@ -1,18 +0,0 @@ -import GetOrientationMode from '../../utils/GetOrientationMode.js'; -var ParseEaseConfig = function (menu, easeConfig) { - if (typeof (easeConfig) === 'number') { - easeConfig = { - duration: easeConfig - }; - } - - if (easeConfig.hasOwnProperty('orientation') && (easeConfig.orientation !== undefined)) { - easeConfig.sameOrientation = GetOrientationMode(easeConfig.orientation) === menu.orientation; - } else { - easeConfig.sameOrientation = true; - } - easeConfig.destroy = false; - return easeConfig; -} - -export default ParseEaseConfig; \ No newline at end of file diff --git a/spaces/Akshat-1812/Dog-Vision/README.md b/spaces/Akshat-1812/Dog-Vision/README.md deleted file mode 100644 index 633232a1cf41801385786f8af13211cfa49dae52..0000000000000000000000000000000000000000 --- a/spaces/Akshat-1812/Dog-Vision/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Dog Vision -emoji: 📉 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: unknown ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AlexWang/lama/saicinpainting/training/visualizers/noop.py b/spaces/AlexWang/lama/saicinpainting/training/visualizers/noop.py deleted file mode 100644 index 4175089a54a8484d51e6c879c1a99c4e4d961d15..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/saicinpainting/training/visualizers/noop.py +++ /dev/null @@ -1,9 +0,0 @@ -from saicinpainting.training.visualizers.base import BaseVisualizer - - -class NoopVisualizer(BaseVisualizer): - def __init__(self, *args, **kwargs): - pass - - def __call__(self, epoch_i, batch_i, batch, suffix='', rank=None): - pass diff --git a/spaces/Alpaca233/ChatGPT-PPT-Generate/app.py b/spaces/Alpaca233/ChatGPT-PPT-Generate/app.py deleted file mode 100644 index af37444df6200a4202a9675bcda7d6f9e82be170..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/ChatGPT-PPT-Generate/app.py +++ /dev/null @@ -1,245 +0,0 @@ -import glob -import os -import random -import re -import string - -import gradio as gr - -import openai -from icrawler import ImageDownloader -from icrawler.builtin import GoogleImageCrawler, BingImageCrawler -from uuid import uuid4 -from pptx import Presentation - -bad_coding_practice = ''.join(random.choice(string.ascii_uppercase + string.ascii_lowercase + string.digits) for _ in - range(16)) - - -def refresh_bad_coding_practice(): - global bad_coding_practice - bad_coding_practice = ''.join(random.choice(string.ascii_uppercase + string.ascii_lowercase + string.digits) - for _ in range(16)) - return - - -class PrefixNameDownloader(ImageDownloader): - - def get_filename(self, task, default_ext): - filename = super(PrefixNameDownloader, self).get_filename( - task, 
default_ext) - print(bad_coding_practice) - return 'prefix_' + bad_coding_practice + filename - - -def generate_ppt(file, topic, slide_length, api_key): - print(file.name) - - root = Presentation(file.name) - - openai.api_key = api_key - - message = f""" - Create content for a slideshow presentation. - The content's topic is {topic}. - The slideshow is {slide_length} slides long. - The content is written in the language of the content I give you above. - - - You are allowed to use the following slide types: - - Slide types: - Title Slide - (Title, Subtitle) - Content Slide - (Title, Content) - Image Slide - (Title, Content, Image) - Thanks Slide - (Title) - - Put this tag before the Title Slide: [L_TS] - Put this tag before the Content Slide: [L_CS] - Put this tag before the Image Slide: [L_IS] - Put this tag before the Thanks Slide: [L_THS] - - Put "[SLIDEBREAK]" after each slide - - For example: - [L_TS] - [TITLE]Mental Health[/TITLE] - - [SLIDEBREAK] - - [L_CS] - [TITLE]Mental Health Definition[/TITLE] - [CONTENT] - 1. Definition: A person’s condition with regard to their psychological and emotional well-being - 2. Can impact one's physical health - 3. Stigmatized too often. - [/CONTENT] - - [SLIDEBREAK] - - Put this tag before the Title: [TITLE] - Put this tag after the Title: [/TITLE] - Put this tag before the Subitle: [SUBTITLE] - Put this tag after the Subtitle: [/SUBTITLE] - Put this tag before the Content: [CONTENT] - Put this tag after the Content: [/CONTENT] - Put this tag before the Image: [IMAGE] - Put this tag after the Image: [/IMAGE] - - Elaborate on the Content, provide as much information as possible. - You put a [/CONTENT] at the end of the Content. - Do not reply as if you are talking about the slideshow itself. (ex. "Include pictures here about...") - Do not include any special characters (?, !, ., :, ) in the Title. 
- Do not include any additional information in your response and stick to the format.""" - - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "user", "content": message} - ] - ) - - # """ Ref for slide types: - # 0 -> title and subtitle - # 1 -> title and content - # 2 -> section header - # 3 -> two content - # 4 -> Comparison - # 5 -> Title only - # 6 -> Blank - # 7 -> Content with caption - # 8 -> Pic with caption - # """ - - def delete_all_slides(): - for i in range(len(root.slides) - 1, -1, -1): - r_id = root.slides._sldIdLst[i].rId - root.part.drop_rel(r_id) - del root.slides._sldIdLst[i] - - def create_title_slide(title, subtitle): - layout = root.slide_layouts[0] - slide = root.slides.add_slide(layout) - slide.shapes.title.text = title - slide.placeholders[1].text = subtitle - - def create_section_header_slide(title): - layout = root.slide_layouts[2] - slide = root.slides.add_slide(layout) - slide.shapes.title.text = title - - def create_title_and_content_slide(title, content): - layout = root.slide_layouts[1] - slide = root.slides.add_slide(layout) - slide.shapes.title.text = title - slide.placeholders[1].text = content - - def create_title_and_content_and_image_slide(title, content, image_query): - layout = root.slide_layouts[8] - slide = root.slides.add_slide(layout) - slide.shapes.title.text = title - slide.placeholders[2].text = content - refresh_bad_coding_practice() - bing_crawler = GoogleImageCrawler(downloader_cls=PrefixNameDownloader, storage={'root_dir': os.getcwd()}) - bing_crawler.crawl(keyword=image_query, max_num=1) - dir_path = os.path.dirname(os.path.realpath(__file__)) - file_name = glob.glob(f"prefix_{bad_coding_practice}*") - print(file_name) - img_path = os.path.join(dir_path, file_name[0]) - slide.shapes.add_picture(img_path, slide.placeholders[1].left, slide.placeholders[1].top, - slide.placeholders[1].width, slide.placeholders[1].height) - - def find_text_in_between_tags(text, start_tag, end_tag): - start_pos = text.find(start_tag) - end_pos = text.find(end_tag) - result = [] - while start_pos > -1 and end_pos > -1: - text_between_tags = text[start_pos + len(start_tag):end_pos] - result.append(text_between_tags) - start_pos = text.find(start_tag, end_pos + len(end_tag)) - end_pos = text.find(end_tag, start_pos) - res1 = "".join(result) - res2 = re.sub(r"\[IMAGE\].*?\[/IMAGE\]", '', res1) - if len(result) > 0: - return res2 - else: - return "" - - def search_for_slide_type(text): - tags = ["[L_TS]", "[L_CS]", "[L_IS]", "[L_THS]"] - found_text = next((s for s in tags if s in text), None) - return found_text - - def parse_response(reply): - list_of_slides = reply.split("[SLIDEBREAK]") - for slide in list_of_slides: - slide_type = search_for_slide_type(slide) - if slide_type == "[L_TS]": - create_title_slide(find_text_in_between_tags(str(slide), "[TITLE]", "[/TITLE]"), - find_text_in_between_tags(str(slide), "[SUBTITLE]", "[/SUBTITLE]")) - elif slide_type == "[L_CS]": - create_title_and_content_slide("".join(find_text_in_between_tags(str(slide), "[TITLE]", "[/TITLE]")), - "".join(find_text_in_between_tags(str(slide), "[CONTENT]", - "[/CONTENT]"))) - elif slide_type == "[L_IS]": - create_title_and_content_and_image_slide("".join(find_text_in_between_tags(str(slide), "[TITLE]", - "[/TITLE]")), - "".join(find_text_in_between_tags(str(slide), "[CONTENT]", - "[/CONTENT]")), - "".join(find_text_in_between_tags(str(slide), "[IMAGE]", - "[/IMAGE]"))) - elif slide_type == "[L_THS]": - 
create_section_header_slide("".join(find_text_in_between_tags(str(slide), "[TITLE]", "[/TITLE]"))) - - def find_title(): - return root.slides[0].shapes.title.text - - delete_all_slides() - - print(response) - - parse_response(response['choices'][0]['message']['content']) - - name_ = str(uuid4()).replace('-', '') - - root.save(f"./{name_}.pptx") - - print("done") - - dir_path = "./" - prefix = "prefix_" - - for file_name in os.listdir(dir_path): - if file_name.startswith(prefix): - file_path = os.path.join(dir_path, file_name) - if os.path.isfile(file_path): - os.remove(file_path) - - return f"./{name_}.pptx" - - -with gr.Blocks(title="ChatGPT PPT Outline Generator") as demo: - gr.Markdown("""

ChatGPT PPT Outline Generator

""") - with gr.Row(): - with gr.Column(): - openai_token = gr.Textbox(label="OpenAI API Key") - topic = gr.Textbox(label="PPT的主题或内容") - length = gr.Slider(minimum=1, maximum=20, value=6, label="生成的PPT页数", step=1) - theme = gr.File(value="./theme.pptx", file_types=['pptx', 'ppt'], label="PPT模版") - output_file = gr.File(interactive=False) - - topic.submit( - fn=generate_ppt, - inputs=[theme, topic, length, openai_token], - outputs=[output_file] - ) - - submit = gr.Button("生成") - submit.click( - fn=generate_ppt, - inputs=[theme, topic, length, openai_token], - outputs=[output_file] - ) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_mbf.py b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_mbf.py deleted file mode 100644 index 46ae777cc97af41a531cba4e5d1ff31f2efcb468..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_mbf.py +++ /dev/null @@ -1,26 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "cosface" -config.network = "mbf" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.1 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 2e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "/train_tmp/glint360k" -config.num_classes = 360232 -config.num_image = 17091657 -config.num_epoch = 20 -config.warmup_epoch = -1 -config.decay_epoch = [8, 12, 15, 18] -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r101_caffe_fpn_mstrain_2x.py b/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r101_caffe_fpn_mstrain_2x.py deleted file mode 100644 index 85fa2f5d73a896e09d7b1f72202d0a100eaca821..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r101_caffe_fpn_mstrain_2x.py +++ /dev/null @@ -1,167 +0,0 @@ -_base_ = '../_base_/default_runtime.py' - -# model settings -model = dict( - type='RetinaNet', - pretrained='open-mmlab://detectron2/resnet101_caffe', - backbone=dict( - type='ResNet', - depth=101, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='caffe'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - start_level=1, - add_extra_convs=True, - num_outs=5), - bbox_head=dict( - type='GARetinaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[4], - strides=[8, 16, 32, 64, 128]), - anchor_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loc_filter_thr=0.01, - loss_loc=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0), - 
loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=0.04, loss_weight=1.0))) -# training and testing settings -train_cfg = dict( - ga_assigner=dict( - type='ApproxMaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - ga_sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.0, - ignore_iof_thr=-1), - allowed_border=-1, - pos_weight=-1, - center_ratio=0.2, - ignore_ratio=0.5, - debug=False) -test_cfg = dict( - nms_pre=1000, - min_bbox_size=0, - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100) -# dataset settings -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 480), (1333, 960)], - keep_ratio=True, - multiscale_mode='range'), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_train2017.json', - img_prefix=data_root + 'train2017/', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline)) -evaluation = dict(interval=1, metric='bbox') -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=1.0 / 3, - step=[16, 22]) -checkpoint_config = dict(interval=1) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - # dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/deeplabv3_r50-d8.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/deeplabv3_r50-d8.py deleted file mode 100644 index d7a43bee01422ad4795dd27874e0cd4bb6cbfecf..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/deeplabv3_r50-d8.py +++ /dev/null @@ -1,44 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - 
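# ResNetV1c replaces the 7x7 stem convolution with three 3x3 convolutions; strides (1, 2, 1, 1) with dilations (1, 1, 2, 4) keep the last two stages at input/8 resolution, the "d8" in the config name. - 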
type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='ASPPHead', - in_channels=2048, - in_index=3, - channels=512, - dilations=(1, 12, 24, 36), - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_20k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_20k_voc12aug.py deleted file mode 100644 index 1056ad4d1e2a4f956d12f6daf506620fab27dd17..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_20k_voc12aug.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = [ - '../_base_/models/deeplabv3plus_r50-d8.py', - '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_20k.py' -] -model = dict( - decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fp16/deeplabv3plus_r101-d8_512x1024_80k_fp16_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fp16/deeplabv3plus_r101-d8_512x1024_80k_fp16_cityscapes.py deleted file mode 100644 index eaf569d4d76af2e498c039899c01f9960b1158d9..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fp16/deeplabv3plus_r101-d8_512x1024_80k_fp16_cityscapes.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = '../deeplabv3plus/deeplabv3plus_r101-d8_512x1024_80k_cityscapes.py' -# fp16 settings -optimizer_config = dict(type='Fp16OptimizerHook', loss_scale=512.) 
-# fp16 placeholder -fp16 = dict() diff --git a/spaces/Apex-X/GODROOP/roop/processors/__init__.py b/spaces/Apex-X/GODROOP/roop/processors/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Apex-X/ROOPOK/roop/metadata.py b/spaces/Apex-X/ROOPOK/roop/metadata.py deleted file mode 100644 index aea9e16d897ede57f566ccc773d0d2ee17905dfb..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/ROOPOK/roop/metadata.py +++ /dev/null @@ -1,2 +0,0 @@ -name = 'roop' -version = '1.3.2' diff --git a/spaces/ArcanAlt/arcanDream/README.md b/spaces/ArcanAlt/arcanDream/README.md deleted file mode 100644 index d82ee8d9d75a65ba4810f04d0f9cf2c771b44f36..0000000000000000000000000000000000000000 --- a/spaces/ArcanAlt/arcanDream/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: ArcanDream -emoji: 💻 -colorFrom: green -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/nap.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/nap.py deleted file mode 100644 index 72aa5bfd4b60d8e6ef6ed0cf2ae4f763d12195cc..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/nap.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright 2016 Étienne Bersac -# Copyright 2016 Julien Danjou -# Copyright 2016 Joshua Harlow -# Copyright 2013-2014 Ray Holder -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import time -import typing - -if typing.TYPE_CHECKING: - import threading - - -def sleep(seconds: float) -> None: - """ - Sleep strategy that delays execution for a given number of seconds. - - This is the default strategy, and may be mocked out for unit testing. - """ - time.sleep(seconds) - - -class sleep_using_event: - """Sleep strategy that waits on an event to be set.""" - - def __init__(self, event: "threading.Event") -> None: - self.event = event - - def __call__(self, timeout: typing.Optional[float]) -> None: - # NOTE(harlowja): this may *not* actually wait for timeout - # seconds if the event is set (ie this may eject out early). 
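- # The return value of Event.wait() is discarded here, so callers cannot tell a timeout apart from an early wake-up.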
- self.event.wait(timeout=timeout) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/more_itertools/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/more_itertools/__init__.py deleted file mode 100644 index ea38bef1f661e62d577b3c2207386d901d851c72..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/more_itertools/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .more import * # noqa -from .recipes import * # noqa - -__version__ = '8.12.0' diff --git a/spaces/Audio-AGI/AudioSep/utils.py b/spaces/Audio-AGI/AudioSep/utils.py deleted file mode 100644 index abfb28500aa2c7f7cf395a869245d4c2061f9ca5..0000000000000000000000000000000000000000 --- a/spaces/Audio-AGI/AudioSep/utils.py +++ /dev/null @@ -1,384 +0,0 @@ -import os -import datetime -import json -import logging -import librosa -import pickle -from typing import Dict -import numpy as np -import torch -import torch.nn as nn -import yaml -from models.audiosep import AudioSep, get_model_class - - -def ignore_warnings(): - import warnings - # Ignore UserWarning from torch.meshgrid - warnings.filterwarnings('ignore', category=UserWarning, module='torch.functional') - - # Refined regex pattern to capture variations in the warning message - pattern = r"Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: \['lm_head\..*'\].*" - warnings.filterwarnings('ignore', message=pattern) - - - -def create_logging(log_dir, filemode): - os.makedirs(log_dir, exist_ok=True) - i1 = 0 - - while os.path.isfile(os.path.join(log_dir, "{:04d}.log".format(i1))): - i1 += 1 - - log_path = os.path.join(log_dir, "{:04d}.log".format(i1)) - logging.basicConfig( - level=logging.DEBUG, - format="%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s", - datefmt="%a, %d %b %Y %H:%M:%S", - filename=log_path, - filemode=filemode, - ) - - # Print to console - console = logging.StreamHandler() - console.setLevel(logging.INFO) - formatter = logging.Formatter("%(name)-12s: %(levelname)-8s %(message)s") - console.setFormatter(formatter) - logging.getLogger("").addHandler(console) - - return logging - - -def float32_to_int16(x: float) -> int: - x = np.clip(x, a_min=-1, a_max=1) - return (x * 32767.0).astype(np.int16) - - -def int16_to_float32(x: int) -> float: - return (x / 32767.0).astype(np.float32) - - -def parse_yaml(config_yaml: str) -> Dict: - r"""Parse yaml file. - - Args: - config_yaml (str): config yaml path - - Returns: - yaml_dict (Dict): parsed yaml file - """ - - with open(config_yaml, "r") as fr: - return yaml.load(fr, Loader=yaml.FullLoader) - - -def get_audioset632_id_to_lb(ontology_path: str) -> Dict: - r"""Get AudioSet 632 classes ID to label mapping.""" - - audioset632_id_to_lb = {} - - with open(ontology_path) as f: - data_list = json.load(f) - - for e in data_list: - audioset632_id_to_lb[e["id"]] = e["name"] - - return audioset632_id_to_lb - - -def load_pretrained_panns( - model_type: str, - checkpoint_path: str, - freeze: bool -) -> nn.Module: - r"""Load pretrained audio neural networks (PANNs). 
- - Args: - model_type: str, e.g., "Cnn14" - checkpoint_path, str, e.g., "Cnn14_mAP=0.431.pth" - freeze: bool - - Returns: - model: nn.Module - """ - - if model_type == "Cnn14": - Model = Cnn14 - - elif model_type == "Cnn14_DecisionLevelMax": - Model = Cnn14_DecisionLevelMax - - else: - raise NotImplementedError - - model = Model(sample_rate=32000, window_size=1024, hop_size=320, - mel_bins=64, fmin=50, fmax=14000, classes_num=527) - - if checkpoint_path: - checkpoint = torch.load(checkpoint_path, map_location="cpu") - model.load_state_dict(checkpoint["model"]) - - if freeze: - for param in model.parameters(): - param.requires_grad = False - - return model - - -def energy(x): - return torch.mean(x ** 2) - - -def magnitude_to_db(x): - eps = 1e-10 - return 20. * np.log10(max(x, eps)) - - -def db_to_magnitude(x): - return 10. ** (x / 20) - - -def ids_to_hots(ids, classes_num, device): - hots = torch.zeros(classes_num).to(device) - for id in ids: - hots[id] = 1 - return hots - - -def calculate_sdr( - ref: np.ndarray, - est: np.ndarray, - eps=1e-10 -) -> float: - r"""Calculate SDR between reference and estimation. - - Args: - ref (np.ndarray), reference signal - est (np.ndarray), estimated signal - """ - reference = ref - noise = est - reference - - - numerator = np.clip(a=np.mean(reference ** 2), a_min=eps, a_max=None) - - denominator = np.clip(a=np.mean(noise ** 2), a_min=eps, a_max=None) - - sdr = 10. * np.log10(numerator / denominator) - - return sdr - - -def calculate_sisdr(ref, est): - r"""Calculate SDR between reference and estimation. - - Args: - ref (np.ndarray), reference signal - est (np.ndarray), estimated signal - """ - - eps = np.finfo(ref.dtype).eps - - reference = ref.copy() - estimate = est.copy() - - reference = reference.reshape(reference.size, 1) - estimate = estimate.reshape(estimate.size, 1) - - Rss = np.dot(reference.T, reference) - # get the scaling factor for clean sources - a = (eps + np.dot(reference.T, estimate)) / (Rss + eps) - - e_true = a * reference - e_res = estimate - e_true - - Sss = (e_true**2).sum() - Snn = (e_res**2).sum() - - sisdr = 10 * np.log10((eps+ Sss)/(eps + Snn)) - - return sisdr - - -class StatisticsContainer(object): - def __init__(self, statistics_path): - self.statistics_path = statistics_path - - self.backup_statistics_path = "{}_{}.pkl".format( - os.path.splitext(self.statistics_path)[0], - datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S"), - ) - - self.statistics_dict = {"balanced_train": [], "test": []} - - def append(self, steps, statistics, split, flush=True): - statistics["steps"] = steps - self.statistics_dict[split].append(statistics) - - if flush: - self.flush() - - def flush(self): - pickle.dump(self.statistics_dict, open(self.statistics_path, "wb")) - pickle.dump(self.statistics_dict, open(self.backup_statistics_path, "wb")) - logging.info(" Dump statistics to {}".format(self.statistics_path)) - logging.info(" Dump statistics to {}".format(self.backup_statistics_path)) - - -def get_mean_sdr_from_dict(sdris_dict): - mean_sdr = np.nanmean(list(sdris_dict.values())) - return mean_sdr - - -def remove_silence(audio: np.ndarray, sample_rate: int) -> np.ndarray: - r"""Remove silent frames.""" - window_size = int(sample_rate * 0.1) - threshold = 0.02 - - frames = librosa.util.frame(x=audio, frame_length=window_size, hop_length=window_size).T - # shape: (frames_num, window_size) - - new_frames = get_active_frames(frames, threshold) - # shape: (new_frames_num, window_size) - - new_audio = new_frames.flatten() - # shape: 
(new_audio_samples,) - - return new_audio - - -def get_active_frames(frames: np.ndarray, threshold: float) -> np.ndarray: - r"""Get active frames.""" - - energy = np.max(np.abs(frames), axis=-1) - # shape: (frames_num,) - - active_indexes = np.where(energy > threshold)[0] - # shape: (new_frames_num,) - - new_frames = frames[active_indexes] - # shape: (new_frames_num,) - - return new_frames - - -def repeat_to_length(audio: np.ndarray, segment_samples: int) -> np.ndarray: - r"""Repeat audio to length.""" - - repeats_num = (segment_samples // audio.shape[-1]) + 1 - audio = np.tile(audio, repeats_num)[0 : segment_samples] - - return audio - -def calculate_segmentwise_sdr(ref, est, hop_samples, return_sdr_list=False): - min_len = min(ref.shape[-1], est.shape[-1]) - pointer = 0 - sdrs = [] - while pointer + hop_samples < min_len: - sdr = calculate_sdr( - ref=ref[:, pointer : pointer + hop_samples], - est=est[:, pointer : pointer + hop_samples], - ) - sdrs.append(sdr) - pointer += hop_samples - - sdr = np.nanmedian(sdrs) - - if return_sdr_list: - return sdr, sdrs - else: - return sdr - - -def loudness(data, input_loudness, target_loudness): - """ Loudness normalize a signal. - - Normalize an input signal to a user loudness in dB LKFS. - - Params - ------- - data : torch.Tensor - Input multichannel audio data. - input_loudness : float - Loudness of the input in dB LUFS. - target_loudness : float - Target loudness of the output in dB LUFS. - - Returns - ------- - output : torch.Tensor - Loudness normalized output data. - """ - - # calculate the gain needed to scale to the desired loudness level - delta_loudness = target_loudness - input_loudness - gain = torch.pow(10.0, delta_loudness / 20.0) - - output = gain * data - - # check for potentially clipped samples - # if torch.max(torch.abs(output)) >= 1.0: - # warnings.warn("Possible clipped samples in output.") - - return output - - -def load_ss_model( - configs: Dict, - checkpoint_path: str, - query_encoder: nn.Module -) -> nn.Module: - r"""Load trained universal source separation model. - - Args: - configs (Dict) - checkpoint_path (str): path of the checkpoint to load - device (str): e.g., "cpu" | "cuda" - - Returns: - pl_model: pl.LightningModule - """ - - ss_model_type = configs["model"]["model_type"] - input_channels = configs["model"]["input_channels"] - output_channels = configs["model"]["output_channels"] - condition_size = configs["model"]["condition_size"] - - # Initialize separation model - SsModel = get_model_class(model_type=ss_model_type) - - ss_model = SsModel( - input_channels=input_channels, - output_channels=output_channels, - condition_size=condition_size, - ) - - # Load PyTorch Lightning model - pl_model = AudioSep.load_from_checkpoint( - checkpoint_path=checkpoint_path, - strict=False, - ss_model=ss_model, - waveform_mixer=None, - query_encoder=query_encoder, - loss_function=None, - optimizer_type=None, - learning_rate=None, - lr_lambda_func=None, - map_location=torch.device('cpu'), - ) - - return pl_model - - -def parse_yaml(config_yaml: str) -> Dict: - r"""Parse yaml file. 
- - Args: - config_yaml (str): config yaml path - - Returns: - yaml_dict (Dict): parsed yaml file - """ - - with open(config_yaml, "r") as fr: - return yaml.load(fr, Loader=yaml.FullLoader) \ No newline at end of file diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_model_zoo.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_model_zoo.py deleted file mode 100644 index e3360a74864e0c00ed92ffbc8531c8d36e8be379..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_model_zoo.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import unittest - -from detectron2 import model_zoo -from detectron2.config import instantiate -from detectron2.modeling import FPN, GeneralizedRCNN - -logger = logging.getLogger(__name__) - - -class TestModelZoo(unittest.TestCase): - def test_get_returns_model(self): - model = model_zoo.get("Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.yaml", trained=False) - self.assertIsInstance(model, GeneralizedRCNN) - self.assertIsInstance(model.backbone, FPN) - - def test_get_invalid_model(self): - self.assertRaises(RuntimeError, model_zoo.get, "Invalid/config.yaml") - - def test_get_url(self): - url = model_zoo.get_checkpoint_url("Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.yaml") - self.assertEqual( - url, - "https://dl.fbaipublicfiles.com/detectron2/Misc/scratch_mask_rcnn_R_50_FPN_3x_gn/138602908/model_final_01ca85.pkl", # noqa - ) - url2 = model_zoo.get_checkpoint_url("Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.py") - self.assertEqual(url, url2) - - def _build_lazy_model(self, name): - cfg = model_zoo.get_config("common/models/" + name) - instantiate(cfg.model) - - def test_mask_rcnn_fpn(self): - self._build_lazy_model("mask_rcnn_fpn.py") - - def test_mask_rcnn_c4(self): - self._build_lazy_model("mask_rcnn_c4.py") - - def test_panoptic_fpn(self): - self._build_lazy_model("panoptic_fpn.py") - - def test_schedule(self): - cfg = model_zoo.get_config("common/coco_schedule.py") - for _, v in cfg.items(): - instantiate(v) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/Bart92/RVC_HF/infer_batch_rvc.py b/spaces/Bart92/RVC_HF/infer_batch_rvc.py deleted file mode 100644 index 15c862a3d6bf815fa68003cc7054b694cae50c2a..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/infer_batch_rvc.py +++ /dev/null @@ -1,215 +0,0 @@ -""" -v1 -runtime\python.exe myinfer-v2-0528.py 0 "E:\codes\py39\RVC-beta\todo-songs" "E:\codes\py39\logs\mi-test\added_IVF677_Flat_nprobe_7.index" harvest "E:\codes\py39\RVC-beta\output" "E:\codes\py39\test-20230416b\weights\mi-test.pth" 0.66 cuda:0 True 3 0 1 0.33 -v2 -runtime\python.exe myinfer-v2-0528.py 0 "E:\codes\py39\RVC-beta\todo-songs" "E:\codes\py39\test-20230416b\logs\mi-test-v2\aadded_IVF677_Flat_nprobe_1_v2.index" harvest "E:\codes\py39\RVC-beta\output_v2" "E:\codes\py39\test-20230416b\weights\mi-test-v2.pth" 0.66 cuda:0 True 3 0 1 0.33 -""" -import os, sys, pdb, torch - -now_dir = os.getcwd() -sys.path.append(now_dir) -import sys -import torch -import tqdm as tq -from multiprocessing import cpu_count - - -class Config: - def __init__(self, device, is_half): - self.device = device - self.is_half = is_half - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = 
int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("16-series/10-series GPUs and P40 are forced to single precision") - self.is_half = False - for config_file in ["32k.json", "40k.json", "48k.json"]: - with open(f"configs/{config_file}", "r") as f: - strr = f.read().replace("true", "false") - with open(f"configs/{config_file}", "w") as f: - f.write(strr) - with open("infer/modules/train/preprocess.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("infer/modules/train/preprocess.py", "w") as f: - f.write(strr) - else: - self.gpu_name = None - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - if self.gpu_mem <= 4: - with open("infer/modules/train/preprocess.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("infer/modules/train/preprocess.py", "w") as f: - f.write(strr) - elif torch.backends.mps.is_available(): - print("No supported NVIDIA GPU found, using MPS for inference") - self.device = "mps" - else: - print("No supported NVIDIA GPU found, using CPU for inference") - self.device = "cpu" - self.is_half = True - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # Config for 6GB of VRAM - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # Config for 5GB of VRAM - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem is not None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max - - -f0up_key = sys.argv[1] -input_path = sys.argv[2] -index_path = sys.argv[3] -f0method = sys.argv[4] # harvest or pm -opt_path = sys.argv[5] -model_path = sys.argv[6] -index_rate = float(sys.argv[7]) -device = sys.argv[8] -is_half = sys.argv[9].lower() != "false" -filter_radius = int(sys.argv[10]) -resample_sr = int(sys.argv[11]) -rms_mix_rate = float(sys.argv[12]) -protect = float(sys.argv[13]) -print(sys.argv) -config = Config(device, is_half) -now_dir = os.getcwd() -sys.path.append(now_dir) -from infer.modules.vc.modules import VC -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from infer.lib.audio import load_audio -from fairseq import checkpoint_utils -from scipy.io import wavfile - -hubert_model = None - - -def load_hubert(): - global hubert_model - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - - -def vc_single(sid, input_audio, f0_up_key, f0_file, f0_method, file_index, index_rate): - global tgt_sr, net_g, vc, hubert_model, version - if input_audio is None: - return "You need to upload an audio", None - f0_up_key = int(f0_up_key) - audio = load_audio(input_audio, 16000) - times = [0, 0, 0] - if hubert_model is None: - load_hubert() - if_f0 = cpt.get("f0", 1) - # audio_opt=vc.pipeline(hubert_model,net_g,sid,audio,times,f0_up_key,f0_method,file_index,file_big_npy,index_rate,if_f0,f0_file=f0_file) - audio_opt = vc.pipeline( - hubert_model, - net_g, - sid, - audio, - input_audio, - times, - f0_up_key, - f0_method, - file_index, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - 
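# `protect` appears to guard voiceless consonants and breaths during conversion (lower values protect more, 0.5 disables it), mirroring the slider in the RVC web UI. - 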
protect, - f0_file=f0_file, - ) - print(times) - return audio_opt - - -def get_vc(model_path): - global n_spk, tgt_sr, net_g, vc, cpt, device, is_half, version - print("loading pth %s" % model_path) - cpt = torch.load(model_path, map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: # - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # without this line the weights do not load cleanly, oddly enough - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - n_spk = cpt["config"][-3] - # return {"visible": True,"maximum": n_spk, "__type__": "update"} - - -get_vc(model_path) -audios = os.listdir(input_path) -for file in tq.tqdm(audios): - if file.endswith(".wav"): - file_path = input_path + "/" + file - wav_opt = vc_single( - 0, file_path, f0up_key, None, f0method, index_path, index_rate - ) - out_path = opt_path + "/" + file - wavfile.write(out_path, tgt_sr, wav_opt) diff --git a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/dataset.py b/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/dataset.py deleted file mode 100644 index cfd01a174978d97180a897e40cb59ecadec1d12e..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/dataset.py +++ /dev/null @@ -1,183 +0,0 @@ -import os -import random - -import numpy as np -import torch -import torch.utils.data -from tqdm import tqdm - -from .
import spec_utils - - -class VocalRemoverValidationSet(torch.utils.data.Dataset): - def __init__(self, patch_list): - self.patch_list = patch_list - - def __len__(self): - return len(self.patch_list) - - def __getitem__(self, idx): - path = self.patch_list[idx] - data = np.load(path) - - X, y = data["X"], data["y"] - - X_mag = np.abs(X) - y_mag = np.abs(y) - - return X_mag, y_mag - - -def make_pair(mix_dir, inst_dir): - input_exts = [".wav", ".m4a", ".mp3", ".mp4", ".flac"] - - X_list = sorted( - [ - os.path.join(mix_dir, fname) - for fname in os.listdir(mix_dir) - if os.path.splitext(fname)[1] in input_exts - ] - ) - y_list = sorted( - [ - os.path.join(inst_dir, fname) - for fname in os.listdir(inst_dir) - if os.path.splitext(fname)[1] in input_exts - ] - ) - - filelist = list(zip(X_list, y_list)) - - return filelist - - -def train_val_split(dataset_dir, split_mode, val_rate, val_filelist): - if split_mode == "random": - filelist = make_pair( - os.path.join(dataset_dir, "mixtures"), - os.path.join(dataset_dir, "instruments"), - ) - - random.shuffle(filelist) - - if len(val_filelist) == 0: - val_size = int(len(filelist) * val_rate) - train_filelist = filelist[:-val_size] - val_filelist = filelist[-val_size:] - else: - train_filelist = [ - pair for pair in filelist if list(pair) not in val_filelist - ] - elif split_mode == "subdirs": - if len(val_filelist) != 0: - raise ValueError( - "The `val_filelist` option is not available in `subdirs` mode" - ) - - train_filelist = make_pair( - os.path.join(dataset_dir, "training/mixtures"), - os.path.join(dataset_dir, "training/instruments"), - ) - - val_filelist = make_pair( - os.path.join(dataset_dir, "validation/mixtures"), - os.path.join(dataset_dir, "validation/instruments"), - ) - - return train_filelist, val_filelist - - -def augment(X, y, reduction_rate, reduction_mask, mixup_rate, mixup_alpha): - perm = np.random.permutation(len(X)) - for i, idx in enumerate(tqdm(perm)): - if np.random.uniform() < reduction_rate: - y[idx] = spec_utils.reduce_vocal_aggressively( - X[idx], y[idx], reduction_mask - ) - - if np.random.uniform() < 0.5: - # swap channel - X[idx] = X[idx, ::-1] - y[idx] = y[idx, ::-1] - if np.random.uniform() < 0.02: - # mono - X[idx] = X[idx].mean(axis=0, keepdims=True) - y[idx] = y[idx].mean(axis=0, keepdims=True) - if np.random.uniform() < 0.02: - # inst - X[idx] = y[idx] - - if np.random.uniform() < mixup_rate and i < len(perm) - 1: - lam = np.random.beta(mixup_alpha, mixup_alpha) - X[idx] = lam * X[idx] + (1 - lam) * X[perm[i + 1]] - y[idx] = lam * y[idx] + (1 - lam) * y[perm[i + 1]] - - return X, y - - -def make_padding(width, cropsize, offset): - left = offset - roi_size = cropsize - left * 2 - if roi_size == 0: - roi_size = cropsize - right = roi_size - (width % roi_size) + left - - return left, right, roi_size - - -def make_training_set(filelist, cropsize, patches, sr, hop_length, n_fft, offset): - len_dataset = patches * len(filelist) - - X_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64) - y_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64) - - for i, (X_path, y_path) in enumerate(tqdm(filelist)): - X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft) - coef = np.max([np.abs(X).max(), np.abs(y).max()]) - X, y = X / coef, y / coef - - l, r, roi_size = make_padding(X.shape[2], cropsize, offset) - X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant") - y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant") - - starts = 
np.random.randint(0, X_pad.shape[2] - cropsize, patches) - ends = starts + cropsize - for j in range(patches): - idx = i * patches + j - X_dataset[idx] = X_pad[:, :, starts[j] : ends[j]] - y_dataset[idx] = y_pad[:, :, starts[j] : ends[j]] - - return X_dataset, y_dataset - - -def make_validation_set(filelist, cropsize, sr, hop_length, n_fft, offset): - patch_list = [] - patch_dir = "cs{}_sr{}_hl{}_nf{}_of{}".format( - cropsize, sr, hop_length, n_fft, offset - ) - os.makedirs(patch_dir, exist_ok=True) - - for i, (X_path, y_path) in enumerate(tqdm(filelist)): - basename = os.path.splitext(os.path.basename(X_path))[0] - - X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft) - coef = np.max([np.abs(X).max(), np.abs(y).max()]) - X, y = X / coef, y / coef - - l, r, roi_size = make_padding(X.shape[2], cropsize, offset) - X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant") - y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant") - - len_dataset = int(np.ceil(X.shape[2] / roi_size)) - for j in range(len_dataset): - outpath = os.path.join(patch_dir, "{}_p{}.npz".format(basename, j)) - start = j * roi_size - if not os.path.exists(outpath): - np.savez( - outpath, - X=X_pad[:, :, start : start + cropsize], - y=y_pad[:, :, start : start + cropsize], - ) - patch_list.append(outpath) - - return VocalRemoverValidationSet(patch_list) diff --git a/spaces/Belshia/shia/README.md b/spaces/Belshia/shia/README.md deleted file mode 100644 index d648421b8ee540f3bcef13291fa6200bf34345cb..0000000000000000000000000000000000000000 --- a/spaces/Belshia/shia/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Shia -emoji: 🌍 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/_version.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/_version.py deleted file mode 100644 index b723056a756af22aaf1a4709c5122bea9fb279ee..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/_version.py +++ /dev/null @@ -1,5 +0,0 @@ -# coding: utf-8 -# file generated by setuptools_scm -# don't change, don't track in version control -version = '2.8.2' -version_tuple = (2, 8, 2) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/poolmanager.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/poolmanager.py deleted file mode 100644 index ca4ec341184adb3d30f3cd825b49a81b87d29b08..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/poolmanager.py +++ /dev/null @@ -1,537 +0,0 @@ -from __future__ import absolute_import - -import collections -import functools -import logging - -from ._collections import RecentlyUsedContainer -from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, port_by_scheme -from .exceptions import ( - LocationValueError, - MaxRetryError, - ProxySchemeUnknown, - ProxySchemeUnsupported, - URLSchemeUnknown, -) -from .packages import six -from .packages.six.moves.urllib.parse import urljoin -from .request import RequestMethods -from .util.proxy import connection_requires_http_tunnel -from .util.retry import Retry -from .util.url import parse_url - -__all__ = ["PoolManager", "ProxyManager", "proxy_from_url"] - - -log = logging.getLogger(__name__) - -SSL_KEYWORDS = ( - "key_file", - "cert_file", - "cert_reqs", 
- "ca_certs", - "ssl_version", - "ca_cert_dir", - "ssl_context", - "key_password", - "server_hostname", -) - -# All known keyword arguments that could be provided to the pool manager, its -# pools, or the underlying connections. This is used to construct a pool key. -_key_fields = ( - "key_scheme", # str - "key_host", # str - "key_port", # int - "key_timeout", # int or float or Timeout - "key_retries", # int or Retry - "key_strict", # bool - "key_block", # bool - "key_source_address", # str - "key_key_file", # str - "key_key_password", # str - "key_cert_file", # str - "key_cert_reqs", # str - "key_ca_certs", # str - "key_ssl_version", # str - "key_ca_cert_dir", # str - "key_ssl_context", # instance of ssl.SSLContext or urllib3.util.ssl_.SSLContext - "key_maxsize", # int - "key_headers", # dict - "key__proxy", # parsed proxy url - "key__proxy_headers", # dict - "key__proxy_config", # class - "key_socket_options", # list of (level (int), optname (int), value (int or str)) tuples - "key__socks_options", # dict - "key_assert_hostname", # bool or string - "key_assert_fingerprint", # str - "key_server_hostname", # str -) - -#: The namedtuple class used to construct keys for the connection pool. -#: All custom key schemes should include the fields in this key at a minimum. -PoolKey = collections.namedtuple("PoolKey", _key_fields) - -_proxy_config_fields = ("ssl_context", "use_forwarding_for_https") -ProxyConfig = collections.namedtuple("ProxyConfig", _proxy_config_fields) - - -def _default_key_normalizer(key_class, request_context): - """ - Create a pool key out of a request context dictionary. - - According to RFC 3986, both the scheme and host are case-insensitive. - Therefore, this function normalizes both before constructing the pool - key for an HTTPS request. If you wish to change this behaviour, provide - alternate callables to ``key_fn_by_scheme``. - - :param key_class: - The class to use when constructing the key. This should be a namedtuple - with the ``scheme`` and ``host`` keys at a minimum. - :type key_class: namedtuple - :param request_context: - A dictionary-like object that contain the context for a request. - :type request_context: dict - - :return: A namedtuple that can be used as a connection pool key. - :rtype: PoolKey - """ - # Since we mutate the dictionary, make a copy first - context = request_context.copy() - context["scheme"] = context["scheme"].lower() - context["host"] = context["host"].lower() - - # These are both dictionaries and need to be transformed into frozensets - for key in ("headers", "_proxy_headers", "_socks_options"): - if key in context and context[key] is not None: - context[key] = frozenset(context[key].items()) - - # The socket_options key may be a list and needs to be transformed into a - # tuple. - socket_opts = context.get("socket_options") - if socket_opts is not None: - context["socket_options"] = tuple(socket_opts) - - # Map the kwargs to the names in the namedtuple - this is necessary since - # namedtuples can't have fields starting with '_'. - for key in list(context.keys()): - context["key_" + key] = context.pop(key) - - # Default to ``None`` for keys missing from the context - for field in key_class._fields: - if field not in context: - context[field] = None - - return key_class(**context) - - -#: A dictionary that maps a scheme to a callable that creates a pool key. -#: This can be used to alter the way pool keys are constructed, if desired. 
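-#: For example, mapping "https" to a normalizer that drops the headers field would make pools shared across header variations.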
-#: Each PoolManager makes a copy of this dictionary so they can be configured -#: globally here, or individually on the instance. -key_fn_by_scheme = { - "http": functools.partial(_default_key_normalizer, PoolKey), - "https": functools.partial(_default_key_normalizer, PoolKey), -} - -pool_classes_by_scheme = {"http": HTTPConnectionPool, "https": HTTPSConnectionPool} - - -class PoolManager(RequestMethods): - """ - Allows for arbitrary requests while transparently keeping track of - necessary connection pools for you. - - :param num_pools: - Number of connection pools to cache before discarding the least - recently used pool. - - :param headers: - Headers to include with all requests, unless other headers are given - explicitly. - - :param \\**connection_pool_kw: - Additional parameters are used to create fresh - :class:`urllib3.connectionpool.ConnectionPool` instances. - - Example:: - - >>> manager = PoolManager(num_pools=2) - >>> r = manager.request('GET', 'http://google.com/') - >>> r = manager.request('GET', 'http://google.com/mail') - >>> r = manager.request('GET', 'http://yahoo.com/') - >>> len(manager.pools) - 2 - - """ - - proxy = None - proxy_config = None - - def __init__(self, num_pools=10, headers=None, **connection_pool_kw): - RequestMethods.__init__(self, headers) - self.connection_pool_kw = connection_pool_kw - self.pools = RecentlyUsedContainer(num_pools, dispose_func=lambda p: p.close()) - - # Locally set the pool classes and keys so other PoolManagers can - # override them. - self.pool_classes_by_scheme = pool_classes_by_scheme - self.key_fn_by_scheme = key_fn_by_scheme.copy() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.clear() - # Return False to re-raise any potential exceptions - return False - - def _new_pool(self, scheme, host, port, request_context=None): - """ - Create a new :class:`urllib3.connectionpool.ConnectionPool` based on host, port, scheme, and - any additional pool keyword arguments. - - If ``request_context`` is provided, it is provided as keyword arguments - to the pool class used. This method is used to actually create the - connection pools handed out by :meth:`connection_from_url` and - companion methods. It is intended to be overridden for customization. - """ - pool_cls = self.pool_classes_by_scheme[scheme] - if request_context is None: - request_context = self.connection_pool_kw.copy() - - # Although the context has everything necessary to create the pool, - # this function has historically only used the scheme, host, and port - # in the positional args. When an API change is acceptable these can - # be removed. - for key in ("scheme", "host", "port"): - request_context.pop(key, None) - - if scheme == "http": - for kw in SSL_KEYWORDS: - request_context.pop(kw, None) - - return pool_cls(host, port, **request_context) - - def clear(self): - """ - Empty our store of pools and direct them all to close. - - This will not affect in-flight connections, but they will not be - re-used after completion. - """ - self.pools.clear() - - def connection_from_host(self, host, port=None, scheme="http", pool_kwargs=None): - """ - Get a :class:`urllib3.connectionpool.ConnectionPool` based on the host, port, and scheme. - - If ``port`` isn't given, it will be derived from the ``scheme`` using - ``urllib3.connectionpool.port_by_scheme``. If ``pool_kwargs`` is - provided, it is merged with the instance's ``connection_pool_kw`` - variable and used to create the new connection pool, if one is - needed. 
- """ - - if not host: - raise LocationValueError("No host specified.") - - request_context = self._merge_pool_kwargs(pool_kwargs) - request_context["scheme"] = scheme or "http" - if not port: - port = port_by_scheme.get(request_context["scheme"].lower(), 80) - request_context["port"] = port - request_context["host"] = host - - return self.connection_from_context(request_context) - - def connection_from_context(self, request_context): - """ - Get a :class:`urllib3.connectionpool.ConnectionPool` based on the request context. - - ``request_context`` must at least contain the ``scheme`` key and its - value must be a key in ``key_fn_by_scheme`` instance variable. - """ - scheme = request_context["scheme"].lower() - pool_key_constructor = self.key_fn_by_scheme.get(scheme) - if not pool_key_constructor: - raise URLSchemeUnknown(scheme) - pool_key = pool_key_constructor(request_context) - - return self.connection_from_pool_key(pool_key, request_context=request_context) - - def connection_from_pool_key(self, pool_key, request_context=None): - """ - Get a :class:`urllib3.connectionpool.ConnectionPool` based on the provided pool key. - - ``pool_key`` should be a namedtuple that only contains immutable - objects. At a minimum it must have the ``scheme``, ``host``, and - ``port`` fields. - """ - with self.pools.lock: - # If the scheme, host, or port doesn't match existing open - # connections, open a new ConnectionPool. - pool = self.pools.get(pool_key) - if pool: - return pool - - # Make a fresh ConnectionPool of the desired type - scheme = request_context["scheme"] - host = request_context["host"] - port = request_context["port"] - pool = self._new_pool(scheme, host, port, request_context=request_context) - self.pools[pool_key] = pool - - return pool - - def connection_from_url(self, url, pool_kwargs=None): - """ - Similar to :func:`urllib3.connectionpool.connection_from_url`. - - If ``pool_kwargs`` is not provided and a new pool needs to be - constructed, ``self.connection_pool_kw`` is used to initialize - the :class:`urllib3.connectionpool.ConnectionPool`. If ``pool_kwargs`` - is provided, it is used instead. Note that if a new pool does not - need to be created for the request, the provided ``pool_kwargs`` are - not used. - """ - u = parse_url(url) - return self.connection_from_host( - u.host, port=u.port, scheme=u.scheme, pool_kwargs=pool_kwargs - ) - - def _merge_pool_kwargs(self, override): - """ - Merge a dictionary of override values for self.connection_pool_kw. - - This does not modify self.connection_pool_kw and returns a new dict. - Any keys in the override dictionary with a value of ``None`` are - removed from the merged dictionary. - """ - base_pool_kwargs = self.connection_pool_kw.copy() - if override: - for key, value in override.items(): - if value is None: - try: - del base_pool_kwargs[key] - except KeyError: - pass - else: - base_pool_kwargs[key] = value - return base_pool_kwargs - - def _proxy_requires_url_absolute_form(self, parsed_url): - """ - Indicates if the proxy requires the complete destination URL in the - request. Normally this is only needed when not using an HTTP CONNECT - tunnel. - """ - if self.proxy is None: - return False - - return not connection_requires_http_tunnel( - self.proxy, self.proxy_config, parsed_url.scheme - ) - - def _validate_proxy_scheme_url_selection(self, url_scheme): - """ - Validates that were not attempting to do TLS in TLS connections on - Python2 or with unsupported SSL implementations. 
- """ - if self.proxy is None or url_scheme != "https": - return - - if self.proxy.scheme != "https": - return - - if six.PY2 and not self.proxy_config.use_forwarding_for_https: - raise ProxySchemeUnsupported( - "Contacting HTTPS destinations through HTTPS proxies " - "'via CONNECT tunnels' is not supported in Python 2" - ) - - def urlopen(self, method, url, redirect=True, **kw): - """ - Same as :meth:`urllib3.HTTPConnectionPool.urlopen` - with custom cross-host redirect logic and only sends the request-uri - portion of the ``url``. - - The given ``url`` parameter must be absolute, such that an appropriate - :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. - """ - u = parse_url(url) - self._validate_proxy_scheme_url_selection(u.scheme) - - conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) - - kw["assert_same_host"] = False - kw["redirect"] = False - - if "headers" not in kw: - kw["headers"] = self.headers.copy() - - if self._proxy_requires_url_absolute_form(u): - response = conn.urlopen(method, url, **kw) - else: - response = conn.urlopen(method, u.request_uri, **kw) - - redirect_location = redirect and response.get_redirect_location() - if not redirect_location: - return response - - # Support relative URLs for redirecting. - redirect_location = urljoin(url, redirect_location) - - # RFC 7231, Section 6.4.4 - if response.status == 303: - method = "GET" - - retries = kw.get("retries") - if not isinstance(retries, Retry): - retries = Retry.from_int(retries, redirect=redirect) - - # Strip headers marked as unsafe to forward to the redirected location. - # Check remove_headers_on_redirect to avoid a potential network call within - # conn.is_same_host() which may use socket.gethostbyname() in the future. - if retries.remove_headers_on_redirect and not conn.is_same_host( - redirect_location - ): - headers = list(six.iterkeys(kw["headers"])) - for header in headers: - if header.lower() in retries.remove_headers_on_redirect: - kw["headers"].pop(header, None) - - try: - retries = retries.increment(method, url, response=response, _pool=conn) - except MaxRetryError: - if retries.raise_on_redirect: - response.drain_conn() - raise - return response - - kw["retries"] = retries - kw["redirect"] = redirect - - log.info("Redirecting %s -> %s", url, redirect_location) - - response.drain_conn() - return self.urlopen(method, redirect_location, **kw) - - -class ProxyManager(PoolManager): - """ - Behaves just like :class:`PoolManager`, but sends all requests through - the defined proxy, using the CONNECT method for HTTPS URLs. - - :param proxy_url: - The URL of the proxy to be used. - - :param proxy_headers: - A dictionary containing headers that will be sent to the proxy. In case - of HTTP they are being sent with each request, while in the - HTTPS/CONNECT case they are sent only once. Could be used for proxy - authentication. - - :param proxy_ssl_context: - The proxy SSL context is used to establish the TLS connection to the - proxy when using HTTPS proxies. - - :param use_forwarding_for_https: - (Defaults to False) If set to True will forward requests to the HTTPS - proxy to be made on behalf of the client instead of creating a TLS - tunnel via the CONNECT method. **Enabling this flag means that request - and response headers and content will be visible from the HTTPS proxy** - whereas tunneling keeps request and response headers and content - private. IP address, target hostname, SNI, and port are always visible - to an HTTPS proxy even when this flag is disabled. 
- - Example: - >>> proxy = urllib3.ProxyManager('http://localhost:3128/') - >>> r1 = proxy.request('GET', 'http://google.com/') - >>> r2 = proxy.request('GET', 'http://httpbin.org/') - >>> len(proxy.pools) - 1 - >>> r3 = proxy.request('GET', 'https://httpbin.org/') - >>> r4 = proxy.request('GET', 'https://twitter.com/') - >>> len(proxy.pools) - 3 - - """ - - def __init__( - self, - proxy_url, - num_pools=10, - headers=None, - proxy_headers=None, - proxy_ssl_context=None, - use_forwarding_for_https=False, - **connection_pool_kw - ): - - if isinstance(proxy_url, HTTPConnectionPool): - proxy_url = "%s://%s:%i" % ( - proxy_url.scheme, - proxy_url.host, - proxy_url.port, - ) - proxy = parse_url(proxy_url) - - if proxy.scheme not in ("http", "https"): - raise ProxySchemeUnknown(proxy.scheme) - - if not proxy.port: - port = port_by_scheme.get(proxy.scheme, 80) - proxy = proxy._replace(port=port) - - self.proxy = proxy - self.proxy_headers = proxy_headers or {} - self.proxy_ssl_context = proxy_ssl_context - self.proxy_config = ProxyConfig(proxy_ssl_context, use_forwarding_for_https) - - connection_pool_kw["_proxy"] = self.proxy - connection_pool_kw["_proxy_headers"] = self.proxy_headers - connection_pool_kw["_proxy_config"] = self.proxy_config - - super(ProxyManager, self).__init__(num_pools, headers, **connection_pool_kw) - - def connection_from_host(self, host, port=None, scheme="http", pool_kwargs=None): - if scheme == "https": - return super(ProxyManager, self).connection_from_host( - host, port, scheme, pool_kwargs=pool_kwargs - ) - - return super(ProxyManager, self).connection_from_host( - self.proxy.host, self.proxy.port, self.proxy.scheme, pool_kwargs=pool_kwargs - ) - - def _set_proxy_headers(self, url, headers=None): - """ - Sets headers needed by proxies: specifically, the Accept and Host - headers. Only sets headers not provided by the user. - """ - headers_ = {"Accept": "*/*"} - - netloc = parse_url(url).netloc - if netloc: - headers_["Host"] = netloc - - if headers: - headers_.update(headers) - return headers_ - - def urlopen(self, method, url, redirect=True, **kw): - "Same as HTTP(S)ConnectionPool.urlopen, ``url`` must be absolute." - u = parse_url(url) - if not connection_requires_http_tunnel(self.proxy, self.proxy_config, u.scheme): - # For connections using HTTP CONNECT, httplib sets the necessary - # headers on the CONNECT to the proxy. If we're not using CONNECT, - # we'll definitely need to set 'Host' at the very least. 
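- # When forwarding rather than tunneling, PoolManager.urlopen (above) also sends the absolute URL as the request target, as plain-HTTP proxies expect.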
- headers = kw.get("headers", self.headers) - kw["headers"] = self._set_proxy_headers(url, headers) - - return super(ProxyManager, self).urlopen(method, url, redirect=redirect, **kw) - - -def proxy_from_url(url, **kw): - return ProxyManager(proxy_url=url, **kw) diff --git a/spaces/Boadiwaa/Recipes/openai/api_resources/experimental/completion_config.py b/spaces/Boadiwaa/Recipes/openai/api_resources/experimental/completion_config.py deleted file mode 100644 index 5d4feb40e1bcba470690e888473d9b7623b4282d..0000000000000000000000000000000000000000 --- a/spaces/Boadiwaa/Recipes/openai/api_resources/experimental/completion_config.py +++ /dev/null @@ -1,11 +0,0 @@ -from openai.api_resources.abstract import ( - CreateableAPIResource, - DeletableAPIResource, - ListableAPIResource, -) - - -class CompletionConfig( - CreateableAPIResource, ListableAPIResource, DeletableAPIResource -): - OBJECT_NAME = "experimental.completion_configs" diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/unique_by_key.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/unique_by_key.h deleted file mode 100644 index 6ab8578407e1cd90aeaba982780b966b4aee013e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/unique_by_key.h +++ /dev/null @@ -1,67 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace tbb -{ -namespace detail -{ - - -template - thrust::pair - unique_by_key(execution_policy &exec, - ForwardIterator1 keys_first, - ForwardIterator1 keys_last, - ForwardIterator2 values_first, - BinaryPredicate binary_pred); - - -template - thrust::pair - unique_by_key_copy(execution_policy &exec, - InputIterator1 keys_first, - InputIterator1 keys_last, - InputIterator2 values_first, - OutputIterator1 keys_output, - OutputIterator2 values_output, - BinaryPredicate binary_pred); - - -} // end namespace detail -} // end namespace tbb -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/unicl-zero-shot-img-recog/model/text_encoder/build.py b/spaces/CVPR/unicl-zero-shot-img-recog/model/text_encoder/build.py deleted file mode 100644 index 21717b73146f2be5fa823e5bd8f4dd0b144d188c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/unicl-zero-shot-img-recog/model/text_encoder/build.py +++ /dev/null @@ -1,31 +0,0 @@ -import os - -from transformers import CLIPTokenizer -from transformers import AutoTokenizer - -from .registry import lang_encoders -from .registry import is_lang_encoder - - -def build_lang_encoder(config_encoder, tokenizer, verbose, **kwargs): - model_name = config_encoder['NAME'] - - if not is_lang_encoder(model_name): - raise ValueError(f'Unknown model: {model_name}') - - return lang_encoders(model_name)(config_encoder, tokenizer, verbose, **kwargs) - - -def build_tokenizer(config_encoder): - tokenizer = None - os.environ['TOKENIZERS_PARALLELISM'] = 'true' - if config_encoder['TOKENIZER'] == 'clip': - pretrained_tokenizer = config_encoder.get( - 'PRETRAINED_TOKENIZER', 'openai/clip-vit-base-patch32' - ) - tokenizer = CLIPTokenizer.from_pretrained(pretrained_tokenizer) - tokenizer.add_special_tokens({'cls_token': tokenizer.eos_token}) - else: - tokenizer = AutoTokenizer.from_pretrained(config_encoder['TOKENIZER']) - - return tokenizer diff --git a/spaces/CognitiveLabs/GPT-auto-webscraping/app.py b/spaces/CognitiveLabs/GPT-auto-webscraping/app.py deleted file mode 100644 index a4d5d1583ac478bfc206a8c1b1bbcdc8edecd647..0000000000000000000000000000000000000000 --- a/spaces/CognitiveLabs/GPT-auto-webscraping/app.py +++ /dev/null @@ -1,107 +0,0 @@ -from AssistantService import GPTAssistant -from openai.error import AuthenticationError -import streamlit as st -from langsmith.run_helpers import traceable -import configparser -import os - -config = configparser.ConfigParser() -config.read('config.ini') -if 'DEFAULT' in config: - assistant_api_key = config['DEFAULT'].get('API-KEY', '') - -os.environ["LANGCHAIN_TRACING_V2"]="true" -os.environ["LANGCHAIN_ENDPOINT"]="https://api.smith.langchain.com" -os.environ["LANGCHAIN_API_KEY"]=st.secrets["LANGCHAIN_API_KEY"] -os.environ["LANGCHAIN_PROJECT"]=st.secrets["LANGCHAIN_PROJECT"] - -@traceable(run_type="tool") -def start_session(session_started): - st.session_state['session_started'] = session_started - return session_started - -# change session_started to True -if 'session_started' not in st.session_state: - start_session(True) - -st.write("This app helps you to extract data from HTML code using web scraping. It uses *GPT-3.5-turbo-16k* to generate the code for you. 
\n *Contribute to this project on [GitHub](https://github.com/CognitiveLabs/GPT-auto-webscraping)*") - -with st.expander(label="Check out the video demo"): - yt_video = st.video("https://www.youtube.com/watch?v=_zeCun4OlCc") - -info_text = """ -**Quick start** \n -Fill the input with . -- Choose a repeating element on the page, like a product on a list. -- Inspect the HTML code and copy the element. -- After generating the "output format" and the code, paste the complete HTML code of the page in the last input to test it -""" -st.write(info_text) -st.image("https://j.gifs.com/gpqvPl.gif", width=600) - - - -if assistant_api_key == '': - assistant_api_key = st.secrets["API_KEY"] - if assistant_api_key: - gpt_assistant = GPTAssistant(assistant_api_key) -else: - gpt_assistant = GPTAssistant(assistant_api_key) - -# get the html content -html_content = st.text_input("Paste the HTML tags of the item you want to extract:", max_chars=10000, help="example:
<li>Product 1</li>
  • , watch the video above") -# check if html_content is an url, and show error if it is -if html_content: - if html_content.startswith("http"): - st.write("Please paste the HTML piece code, not the URL") - html_content = None - -extract_button = st.button("Generate output format & code") - - -if html_content and extract_button: - try: - st.write("1/2: Generating the output format...") - output = gpt_assistant.chain_response_format(html_content) - st.session_state['output_format'] = output - except NameError: - st.write("Complete the API key field") - except AuthenticationError: - st.write("Invalid API key") - -if 'output_format' in st.session_state: - output_format = st.code(st.session_state['output_format'], language="json") - - try: - st.write("2/2: Generating the code...") - python_code = gpt_assistant.chain_code_generator(st.session_state['output_format'], html_content) - st.session_state['code_generated'] = python_code - st.session_state['code_generated_exec'] = python_code + "\nresult = extract_info(html_data)" - - except NameError: - st.write("Complete the API key field") - except AuthenticationError: - st.write("Invalid API key") - -@traceable(run_type="tool") -def test_the_code(code, full_content): - exec(code, globals()) - if result: - st.write("data extracted successfully") - # show data in table - st.table(result) - else: - st.write("error extracting data") - - return result or "error" - - -if 'code_generated' in st.session_state: - python_function_label = st.write("Here is your python function:") - code_generated = st.code(st.session_state['code_generated'],language="python") - full_content = st.text_input("Paste your complete HTML here:") - test_code = st.button("Test the code") - if full_content and test_code: - html_data = full_content - result = None - test_the_code(st.session_state['code_generated_exec'], full_content=full_content) \ No newline at end of file diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/tf.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/tf.py deleted file mode 100644 index 5db3b39e69a20717c7d840e537027ce0d833306c..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/tf.py +++ /dev/null @@ -1,269 +0,0 @@ -from __future__ import print_function - - -try: - import tensorflow as tf - from tensorflow.python.ops import nn - relu = nn.relu - slim = tf.contrib.slim - sigmoid = nn.sigmoid - softmax = nn.softmax -except: - print("tensorflow is not installed, util.tf can not be used.") - -def is_gpu_available(cuda_only=True): - """ - code from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/platform/test.py - Returns whether TensorFlow can access a GPU. - Args: - cuda_only: limit the search to CUDA gpus. - Returns: - True iff a gpu device of the requested kind is available. - """ - from tensorflow.python.client import device_lib as _device_lib - - if cuda_only: - return any((x.device_type == 'GPU') - for x in _device_lib.list_local_devices()) - else: - return any((x.device_type == 'GPU' or x.device_type == 'SYCL') - for x in _device_lib.list_local_devices()) - - - -def get_available_gpus(num_gpus = None): - """ - Modified on http://stackoverflow.com/questions/38559755/how-to-get-current-available-gpus-in-tensorflow - However, the original code will occupy all available gpu memory. - The modified code need a parameter: num_gpus. 
It does nothing but return the device handler name - It will work well on single-maching-training, but I don't know whether it will work well on a cluster. - """ - if num_gpus == None: - from tensorflow.python.client import device_lib as _device_lib - local_device_protos = _device_lib.list_local_devices() - return [x.name for x in local_device_protos if x.device_type == 'GPU'] - else: - return ['/gpu:%d'%(idx) for idx in xrange(num_gpus)] - -def get_latest_ckpt(path): -# tf.train.latest_checkpoint - import util - path = util.io.get_absolute_path(path) - if util.io.is_dir(path): - ckpt = tf.train.get_checkpoint_state(path) - if ckpt is not None: - ckpt_path = ckpt.model_checkpoint_path - else: - ckpt_path = None - else: - ckpt_path = path; - return ckpt_path - -def get_all_ckpts(path): - ckpt = tf.train.get_checkpoint_state(path) - all_ckpts = ckpt.all_model_checkpoint_paths - ckpts = [str(c) for c in all_ckpts] - return ckpts - -def get_iter(ckpt): - import util - iter_ = int(util.str.find_all(ckpt, '.ckpt-\d+')[0].split('-')[-1]) - return iter_ - -def get_init_fn(checkpoint_path, train_dir, ignore_missing_vars = False, - checkpoint_exclude_scopes = None, model_name = None, checkpoint_model_scope = None): - """ - code from github/SSD-tensorflow/tf_utils.py - Returns a function run by the chief worker to warm-start the training. - Note that the init_fn is only run when initializing the model during the very - first global step. - - checkpoint_path: the checkpoint to be restored - train_dir: the directory where checkpoints are stored during training. - ignore_missing_vars: if False and there are variables in the model but not in the checkpoint, an error will be raised. - checkpoint_model_scope and model_name: if the root scope of checkpoints and the model in session is different, - (but the sub-scopes are all the same), specify them clearly - checkpoint_exclude_scopes: variables to be excluded when restoring from checkpoint_path. - Returns: - An init function run by the supervisor. - """ - import util - if util.str.is_none_or_empty(checkpoint_path): - return None - # Warn the user if a checkpoint exists in the train_dir. Then ignore. - if tf.train.latest_checkpoint(train_dir): - tf.logging.info( - 'Ignoring --checkpoint_path because a checkpoint already exists in %s' - % train_dir) - return None - - exclusions = [] - if checkpoint_exclude_scopes: - exclusions = [scope.strip() - for scope in checkpoint_exclude_scopes.split(',')] - - # TODO(sguada) variables.filter_variables() - variables_to_restore = [] - for var in slim.get_model_variables(): - excluded = False - for exclusion in exclusions: - if var.op.name.startswith(exclusion): - excluded = True - break - if not excluded: - variables_to_restore.append(var) - # Change model scope if necessary. - if checkpoint_model_scope is not None: - variables_to_restore = {checkpoint_model_scope + '/' + var.op.name : var for var in variables_to_restore} - tf.logging.info('variables_to_restore: %r'%(variables_to_restore)) - checkpoint_path = get_latest_ckpt(checkpoint_path) - tf.logging.info('Fine-tuning from %s. Ignoring missing vars: %s' % (checkpoint_path, ignore_missing_vars)) - print ('checkpoint_path', checkpoint_path) - return slim.assign_from_checkpoint_fn( - checkpoint_path, - variables_to_restore, - ignore_missing_vars=ignore_missing_vars) - - -def get_variables_to_train(flags = None): - """code from github/SSD-tensorflow/tf_utils.py - Returns a list of variables to train. - - Returns: - A list of variables to train by the optimizer. 
- """ - if flags is None or flags.trainable_scopes is None: - return tf.trainable_variables() - else: - scopes = [scope.strip() for scope in flags.trainable_scopes.split(',')] - - variables_to_train = [] - for scope in scopes: - variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope) - variables_to_train.extend(variables) - return variables_to_train - -def Print(tensor, data, msg = '', file = None, mode = 'w'): - from tensorflow.python.ops import control_flow_ops - import util - def np_print(*args): - if util.str.contains(msg, '%'): - message = msg%tuple(args) - else: - message = msg + ' %'*len(args)%tuple(args) - if file is not None: - file_path = util.io.get_absolute_path(file) - print('writting message to file(%s):'%(file_path), message) - with open(file_path, mode) as f: - print(message, file = f) - else: - print(message) - return control_flow_ops.with_dependencies([tf.py_func(np_print, data, [])], tensor) - -def get_variable_names_in_checkpoint(path, return_shapes = False, return_reader = False): - """ - Args: - path: the path to training directory containing checkpoints, - or path to checkpoint - Return: - a list of variable names in the checkpoint - """ - import util - ckpt = get_latest_ckpt(path) - ckpt_reader = tf.train.NewCheckpointReader(ckpt) - ckpt_vars = ckpt_reader.get_variable_to_shape_map() - names = [var for var in ckpt_vars] - if return_shapes: - return names, ckpt_vars - def get(name): - return ckpt_reader.get_tensor(name) - if return_reader: - return names, get - return names - - - -def min_area_rect(xs, ys): - import util - rects = tf.py_func(util.img.min_area_rect, [xs, ys], xs.dtype) - rects.set_shape([None, 5]) - return rects - - -def gpu_config(config = None, allow_growth = None, gpu_memory_fraction = None): - if config is None: - config = tf.ConfigProto() - - if allow_growth is not None: - config.gpu_options.allow_growth = allow_growth - - if gpu_memory_fraction is not None: - config.gpu_options.per_process_gpu_memory_fraction = gpu_memory_fraction - - return config - -def wait_for_checkpoint(path): - from tensorflow.contrib.training.python.training import evaluation - return evaluation.checkpoints_iterator(path) - -def focal_loss(labels, logits, gamma = 2.0, alpha = 0.75, normalize = True): - labels = tf.where(labels > 0, tf.ones_like(labels), tf.zeros_like(labels)) - labels = tf.cast(labels, tf.float32) - probs = tf.sigmoid(logits) - CE = tf.nn.sigmoid_cross_entropy_with_logits(labels = labels, logits = logits) - - alpha_t = tf.ones_like(logits) * alpha - alpha_t = tf.where(labels > 0, alpha_t, 1.0 - alpha_t) - probs_t = tf.where(labels > 0, probs, 1.0 - probs) - - focal_matrix = alpha_t * tf.pow((1.0 - probs_t), gamma) - fl = focal_matrix * CE - - fl = tf.reduce_sum(fl) - if normalize: - #n_pos = tf.reduce_sum(labels) - #fl = fl / tf.cast(n_pos, tf.float32) - total_weights = tf.stop_gradient(tf.reduce_sum(focal_matrix)) - fl = fl / total_weights - return fl - - -def focal_loss_layer_initializer(sigma = 0.01, pi = 0.01): - import numpy as np - b0 = - np.log((1 - pi) / pi) - return tf.random_normal_initializer(stddev = sigma), \ - tf.constant_initializer(b0) - - -def sum_gradients(clone_grads, do_summary = False): - averaged_grads = [] - for grad_and_vars in zip(*clone_grads): - grads = [] - var = grad_and_vars[0][1] - try: - for g, v in grad_and_vars: - assert v == var - grads.append(g) - grad = tf.add_n(grads, name = v.op.name + '_summed_gradients') - except: - import pdb - pdb.set_trace() - - averaged_grads.append((grad, v)) - - if do_summary: 
- tf.summary.histogram("variables_and_gradients_" + grad.op.name, grad) - tf.summary.histogram("variables_and_gradients_" + v.op.name, v) - tf.summary.scalar("variables_and_gradients_" + grad.op.name+\ - '_mean/var_mean', tf.reduce_mean(grad)/tf.reduce_mean(var)) - tf.summary.scalar("variables_and_gradients_" + v.op.name+'_mean',tf.reduce_mean(var)) - return averaged_grads - -def get_update_op(): - """ - Extremely important for BatchNorm - """ - update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) - if update_ops: - return tf.group(*update_ops) - return None diff --git a/spaces/DHEIVER/ImageClassifierCataract/README.md b/spaces/DHEIVER/ImageClassifierCataract/README.md deleted file mode 100644 index 72ae983b1124a5748a98053a6d48daf9e695ac55..0000000000000000000000000000000000000000 --- a/spaces/DHEIVER/ImageClassifierCataract/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ImageClassifierCataract -emoji: 📊 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/http.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/http.py deleted file mode 100644 index ca9dc54b215f7977970658250f23e3be137f1b3e..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/http.py +++ /dev/null @@ -1,70 +0,0 @@ -import http.server -import sys -from typing import Mapping, Tuple - -from . import __version__ -from .http_exceptions import HttpProcessingError as HttpProcessingError -from .http_parser import ( - HeadersParser as HeadersParser, - HttpParser as HttpParser, - HttpRequestParser as HttpRequestParser, - HttpResponseParser as HttpResponseParser, - RawRequestMessage as RawRequestMessage, - RawResponseMessage as RawResponseMessage, -) -from .http_websocket import ( - WS_CLOSED_MESSAGE as WS_CLOSED_MESSAGE, - WS_CLOSING_MESSAGE as WS_CLOSING_MESSAGE, - WS_KEY as WS_KEY, - WebSocketError as WebSocketError, - WebSocketReader as WebSocketReader, - WebSocketWriter as WebSocketWriter, - WSCloseCode as WSCloseCode, - WSMessage as WSMessage, - WSMsgType as WSMsgType, - ws_ext_gen as ws_ext_gen, - ws_ext_parse as ws_ext_parse, -) -from .http_writer import ( - HttpVersion as HttpVersion, - HttpVersion10 as HttpVersion10, - HttpVersion11 as HttpVersion11, - StreamWriter as StreamWriter, -) - -__all__ = ( - "HttpProcessingError", - "RESPONSES", - "SERVER_SOFTWARE", - # .http_writer - "StreamWriter", - "HttpVersion", - "HttpVersion10", - "HttpVersion11", - # .http_parser - "HeadersParser", - "HttpParser", - "HttpRequestParser", - "HttpResponseParser", - "RawRequestMessage", - "RawResponseMessage", - # .http_websocket - "WS_CLOSED_MESSAGE", - "WS_CLOSING_MESSAGE", - "WS_KEY", - "WebSocketReader", - "WebSocketWriter", - "ws_ext_gen", - "ws_ext_parse", - "WSMessage", - "WebSocketError", - "WSMsgType", - "WSCloseCode", -) - - -SERVER_SOFTWARE: str = "Python/{0[0]}.{0[1]} aiohttp/{1}".format( - sys.version_info, __version__ -) - -RESPONSES: Mapping[int, Tuple[str, str]] = http.server.BaseHTTPRequestHandler.responses diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/patch_feature_extractor.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/patch_feature_extractor.py deleted file mode 100644 index 
8901b123d2845bfaecc1a42f66be13fdf1ddd349..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/patch_feature_extractor.py +++ /dev/null @@ -1,57 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -from einops.layers.torch import Rearrange - - -class PatchFeatureExtractor(nn.Module): - x_mean = torch.FloatTensor(np.array([0.485, 0.456, 0.406])[None, :, None, None]) - x_std = torch.FloatTensor(np.array([0.229, 0.224, 0.225])[None, :, None, None]) - - def __init__(self, patch_num=256, input_shape=None): - super(PatchFeatureExtractor, self).__init__() - - if input_shape is None: - input_shape = [3, 512, 1024] - self.patch_dim = 1024 - self.patch_num = patch_num - - img_channel = input_shape[0] - img_h = input_shape[1] - img_w = input_shape[2] - - p_h, p_w = img_h, img_w // self.patch_num - p_dim = p_h * p_w * img_channel - - self.patch_embedding = nn.Sequential( - Rearrange('b c h (p_n p_w) -> b p_n (h p_w c)', p_w=p_w), - nn.Linear(p_dim, self.patch_dim) - ) - - self.x_mean.requires_grad = False - self.x_std.requires_grad = False - - def _prepare_x(self, x): - x = x.clone() - if self.x_mean.device != x.device: - self.x_mean = self.x_mean.to(x.device) - self.x_std = self.x_std.to(x.device) - x[:, :3] = (x[:, :3] - self.x_mean) / self.x_std - - return x - - def forward(self, x): - # x [b 3 512 1024] - x = self._prepare_x(x) # [b 3 512 1024] - x = self.patch_embedding(x) # [b 256(patch_num) 1024(d)] - x = x.permute(0, 2, 1) # [b 1024(d) 256(patch_num)] - return x - - -if __name__ == '__main__': - from PIL import Image - extractor = PatchFeatureExtractor() - img = np.array(Image.open("../../src/demo.png")).transpose((2, 0, 1)) - input = torch.Tensor([img]) # 1 3 512 1024 - feature = extractor(input) - print(feature.shape) # 1, 1024, 256 diff --git a/spaces/DeepDrivePL/PaddleSeg-Matting/README.md b/spaces/DeepDrivePL/PaddleSeg-Matting/README.md deleted file mode 100644 index 80f05c6854496d0c806297a00a77da5f480fec81..0000000000000000000000000000000000000000 --- a/spaces/DeepDrivePL/PaddleSeg-Matting/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: PaddleSeg Matting -emoji: 📊 -colorFrom: indigo -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/models.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/models.py deleted file mode 100644 index ceb6f2ce21720722d5d8c9ee4f7e015ad06a9647..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/models.py +++ /dev/null @@ -1,558 +0,0 @@ -import torch -import torch.nn as nn -import torchvision -from . 
import resnet, resnext -try: - from lib.nn import SynchronizedBatchNorm2d -except ImportError: - from torch.nn import BatchNorm2d as SynchronizedBatchNorm2d - - -class SegmentationModuleBase(nn.Module): - def __init__(self): - super(SegmentationModuleBase, self).__init__() - - def pixel_acc(self, pred, label): - _, preds = torch.max(pred, dim=1) - valid = (label >= 0).long() - acc_sum = torch.sum(valid * (preds == label).long()) - pixel_sum = torch.sum(valid) - acc = acc_sum.float() / (pixel_sum.float() + 1e-10) - return acc - - -class SegmentationModule(SegmentationModuleBase): - def __init__(self, net_enc, net_dec, crit, deep_sup_scale=None): - super(SegmentationModule, self).__init__() - self.encoder = net_enc - self.decoder = net_dec - self.crit = crit - self.deep_sup_scale = deep_sup_scale - - def forward(self, feed_dict, *, segSize=None): - if segSize is None: # training - if self.deep_sup_scale is not None: # use deep supervision technique - (pred, pred_deepsup) = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True)) - else: - pred = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True)) - - loss = self.crit(pred, feed_dict['seg_label']) - if self.deep_sup_scale is not None: - loss_deepsup = self.crit(pred_deepsup, feed_dict['seg_label']) - loss = loss + loss_deepsup * self.deep_sup_scale - - acc = self.pixel_acc(pred, feed_dict['seg_label']) - return loss, acc - else: # inference - pred = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True), segSize=segSize) - return pred - - -def conv3x3(in_planes, out_planes, stride=1, has_bias=False): - "3x3 convolution with padding" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=has_bias) - - -def conv3x3_bn_relu(in_planes, out_planes, stride=1): - return nn.Sequential( - conv3x3(in_planes, out_planes, stride), - SynchronizedBatchNorm2d(out_planes), - nn.ReLU(inplace=True), - ) - - -class ModelBuilder(): - # custom weights initialization - def weights_init(self, m): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - nn.init.kaiming_normal_(m.weight.data) - elif classname.find('BatchNorm') != -1: - m.weight.data.fill_(1.) 
- m.bias.data.fill_(1e-4) - #elif classname.find('Linear') != -1: - # m.weight.data.normal_(0.0, 0.0001) - - def build_encoder(self, arch='resnet50_dilated8', fc_dim=512, weights=''): - pretrained = True if len(weights) == 0 else False - if arch == 'resnet34': - raise NotImplementedError - orig_resnet = resnet.__dict__['resnet34'](pretrained=pretrained) - net_encoder = Resnet(orig_resnet) - elif arch == 'resnet34_dilated8': - raise NotImplementedError - orig_resnet = resnet.__dict__['resnet34'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, - dilate_scale=8) - elif arch == 'resnet34_dilated16': - raise NotImplementedError - orig_resnet = resnet.__dict__['resnet34'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, - dilate_scale=16) - elif arch == 'resnet50': - orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained) - net_encoder = Resnet(orig_resnet) - elif arch == 'resnet50_dilated8': - orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, - dilate_scale=8) - elif arch == 'resnet50_dilated16': - orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, - dilate_scale=16) - elif arch == 'resnet101': - orig_resnet = resnet.__dict__['resnet101'](pretrained=pretrained) - net_encoder = Resnet(orig_resnet) - elif arch == 'resnet101_dilated8': - orig_resnet = resnet.__dict__['resnet101'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, - dilate_scale=8) - elif arch == 'resnet101_dilated16': - orig_resnet = resnet.__dict__['resnet101'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, - dilate_scale=16) - elif arch == 'resnext101': - orig_resnext = resnext.__dict__['resnext101'](pretrained=pretrained) - net_encoder = Resnet(orig_resnext) # we can still use class Resnet - else: - raise Exception('Architecture undefined!') - - # net_encoder.apply(self.weights_init) - if len(weights) > 0: - # print('Loading weights for net_encoder') - net_encoder.load_state_dict( - torch.load(weights, map_location=lambda storage, loc: storage), strict=False) - return net_encoder - - def build_decoder(self, arch='ppm_bilinear_deepsup', - fc_dim=512, num_class=150, - weights='', inference=False, use_softmax=False): - if arch == 'c1_bilinear_deepsup': - net_decoder = C1BilinearDeepSup( - num_class=num_class, - fc_dim=fc_dim, - inference=inference, - use_softmax=use_softmax) - elif arch == 'c1_bilinear': - net_decoder = C1Bilinear( - num_class=num_class, - fc_dim=fc_dim, - inference=inference, - use_softmax=use_softmax) - elif arch == 'ppm_bilinear': - net_decoder = PPMBilinear( - num_class=num_class, - fc_dim=fc_dim, - inference=inference, - use_softmax=use_softmax) - elif arch == 'ppm_bilinear_deepsup': - net_decoder = PPMBilinearDeepsup( - num_class=num_class, - fc_dim=fc_dim, - inference=inference, - use_softmax=use_softmax) - elif arch == 'upernet_lite': - net_decoder = UPerNet( - num_class=num_class, - fc_dim=fc_dim, - inference=inference, - use_softmax=use_softmax, - fpn_dim=256) - elif arch == 'upernet': - net_decoder = UPerNet( - num_class=num_class, - fc_dim=fc_dim, - inference=inference, - use_softmax=use_softmax, - fpn_dim=512) - elif arch == 'upernet_tmp': - net_decoder = UPerNetTmp( - num_class=num_class, - fc_dim=fc_dim, - inference=inference, - use_softmax=use_softmax, - fpn_dim=512) - else: - raise Exception('Architecture undefined!') - - net_decoder.apply(self.weights_init) - if len(weights) > 0: - # print('Loading weights 
for net_decoder') - net_decoder.load_state_dict( - torch.load(weights, map_location=lambda storage, loc: storage), strict=False) - return net_decoder - - -class Resnet(nn.Module): - def __init__(self, orig_resnet): - super(Resnet, self).__init__() - - # take pretrained resnet, except AvgPool and FC - self.conv1 = orig_resnet.conv1 - self.bn1 = orig_resnet.bn1 - self.relu1 = orig_resnet.relu1 - self.conv2 = orig_resnet.conv2 - self.bn2 = orig_resnet.bn2 - self.relu2 = orig_resnet.relu2 - self.conv3 = orig_resnet.conv3 - self.bn3 = orig_resnet.bn3 - self.relu3 = orig_resnet.relu3 - self.maxpool = orig_resnet.maxpool - self.layer1 = orig_resnet.layer1 - self.layer2 = orig_resnet.layer2 - self.layer3 = orig_resnet.layer3 - self.layer4 = orig_resnet.layer4 - - def forward(self, x, return_feature_maps=False): - conv_out = [] - - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.maxpool(x) - - x = self.layer1(x); conv_out.append(x); - x = self.layer2(x); conv_out.append(x); - x = self.layer3(x); conv_out.append(x); - x = self.layer4(x); conv_out.append(x); - - if return_feature_maps: - return conv_out - return [x] - - -class ResnetDilated(nn.Module): - def __init__(self, orig_resnet, dilate_scale=8): - super(ResnetDilated, self).__init__() - from functools import partial - - if dilate_scale == 8: - orig_resnet.layer3.apply( - partial(self._nostride_dilate, dilate=2)) - orig_resnet.layer4.apply( - partial(self._nostride_dilate, dilate=4)) - elif dilate_scale == 16: - orig_resnet.layer4.apply( - partial(self._nostride_dilate, dilate=2)) - - # take pretrained resnet, except AvgPool and FC - self.conv1 = orig_resnet.conv1 - self.bn1 = orig_resnet.bn1 - self.relu1 = orig_resnet.relu1 - self.conv2 = orig_resnet.conv2 - self.bn2 = orig_resnet.bn2 - self.relu2 = orig_resnet.relu2 - self.conv3 = orig_resnet.conv3 - self.bn3 = orig_resnet.bn3 - self.relu3 = orig_resnet.relu3 - self.maxpool = orig_resnet.maxpool - self.layer1 = orig_resnet.layer1 - self.layer2 = orig_resnet.layer2 - self.layer3 = orig_resnet.layer3 - self.layer4 = orig_resnet.layer4 - - def _nostride_dilate(self, m, dilate): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - # the convolution with stride - if m.stride == (2, 2): - m.stride = (1, 1) - if m.kernel_size == (3, 3): - m.dilation = (dilate//2, dilate//2) - m.padding = (dilate//2, dilate//2) - # other convoluions - else: - if m.kernel_size == (3, 3): - m.dilation = (dilate, dilate) - m.padding = (dilate, dilate) - - def forward(self, x, return_feature_maps=False): - conv_out = [] - - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.maxpool(x) - - x = self.layer1(x); conv_out.append(x); - x = self.layer2(x); conv_out.append(x); - x = self.layer3(x); conv_out.append(x); - x = self.layer4(x); conv_out.append(x); - - if return_feature_maps: - return conv_out - return [x] - - -# last conv, bilinear upsample -class C1BilinearDeepSup(nn.Module): - def __init__(self, num_class=150, fc_dim=2048, inference=False, use_softmax=False): - super(C1BilinearDeepSup, self).__init__() - self.use_softmax = use_softmax - self.inference = inference - - self.cbr = conv3x3_bn_relu(fc_dim, fc_dim // 4, 1) - self.cbr_deepsup = conv3x3_bn_relu(fc_dim // 2, fc_dim // 4, 1) - - # last conv - self.conv_last = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - self.conv_last_deepsup = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - 
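-    # Training returns (log-probs, deep-supervision log-probs); at inference
-    # the logits are upsampled to segSize and optionally softmaxed.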
- def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - - x = self.cbr(conv5) - x = self.conv_last(x) - - if self.inference or self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - if self.use_softmax: - x = nn.functional.softmax(x, dim=1) - return x - - # deep sup - conv4 = conv_out[-2] - _ = self.cbr_deepsup(conv4) - _ = self.conv_last_deepsup(_) - - x = nn.functional.log_softmax(x, dim=1) - _ = nn.functional.log_softmax(_, dim=1) - - return (x, _) - - -# last conv, bilinear upsample -class C1Bilinear(nn.Module): - def __init__(self, num_class=150, fc_dim=2048, inference=False, use_softmax=False): - super(C1Bilinear, self).__init__() - self.use_softmax = use_softmax - self.inference = inference - - self.cbr = conv3x3_bn_relu(fc_dim, fc_dim // 4, 1) - - # last conv - self.conv_last = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - x = self.cbr(conv5) - x = self.conv_last(x) - - if self.inference or self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - if self.use_softmax: - x = nn.functional.softmax(x, dim=1) - else: - x = nn.functional.log_softmax(x, dim=1) - - return x - - -# pyramid pooling, bilinear upsample -class PPMBilinear(nn.Module): - def __init__(self, num_class=150, fc_dim=4096, - inference=False, use_softmax=False, pool_scales=(1, 2, 3, 6)): - super(PPMBilinear, self).__init__() - self.use_softmax = use_softmax - self.inference = inference - - self.ppm = [] - for scale in pool_scales: - self.ppm.append(nn.Sequential( - nn.AdaptiveAvgPool2d(scale), - nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False), - SynchronizedBatchNorm2d(512), - nn.ReLU(inplace=True) - )) - self.ppm = nn.ModuleList(self.ppm) - - self.conv_last = nn.Sequential( - nn.Conv2d(fc_dim+len(pool_scales)*512, 512, - kernel_size=3, padding=1, bias=False), - SynchronizedBatchNorm2d(512), - nn.ReLU(inplace=True), - nn.Dropout2d(0.1), - nn.Conv2d(512, num_class, kernel_size=1) - ) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - - input_size = conv5.size() - ppm_out = [conv5] - for pool_scale in self.ppm: - ppm_out.append(nn.functional.interpolate( - pool_scale(conv5), - (input_size[2], input_size[3]), - mode='bilinear', align_corners=False)) - ppm_out = torch.cat(ppm_out, 1) - - x = self.conv_last(ppm_out) - - if self.inference or self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - if self.use_softmax: - x = nn.functional.softmax(x, dim=1) - else: - x = nn.functional.log_softmax(x, dim=1) - return x - - -# pyramid pooling, bilinear upsample -class PPMBilinearDeepsup(nn.Module): - def __init__(self, num_class=150, fc_dim=4096, - inference=False, use_softmax=False, pool_scales=(1, 2, 3, 6)): - super(PPMBilinearDeepsup, self).__init__() - self.use_softmax = use_softmax - self.inference = inference - - self.ppm = [] - for scale in pool_scales: - self.ppm.append(nn.Sequential( - nn.AdaptiveAvgPool2d(scale), - nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False), - SynchronizedBatchNorm2d(512), - nn.ReLU(inplace=True) - )) - self.ppm = nn.ModuleList(self.ppm) - self.cbr_deepsup = conv3x3_bn_relu(fc_dim // 2, fc_dim // 4, 1) - - self.conv_last = nn.Sequential( - nn.Conv2d(fc_dim+len(pool_scales)*512, 512, - kernel_size=3, padding=1, bias=False), - SynchronizedBatchNorm2d(512), - 
nn.ReLU(inplace=True), - nn.Dropout2d(0.1), - nn.Conv2d(512, num_class, kernel_size=1) - ) - self.conv_last_deepsup = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - self.dropout_deepsup = nn.Dropout2d(0.1) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - - input_size = conv5.size() - ppm_out = [conv5] - for pool_scale in self.ppm: - ppm_out.append(nn.functional.interpolate( - pool_scale(conv5), - (input_size[2], input_size[3]), - mode='bilinear', align_corners=False)) - ppm_out = torch.cat(ppm_out, 1) - - x = self.conv_last(ppm_out) - - if self.inference or self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - if self.use_softmax: - x = nn.functional.softmax(x, dim=1) - return x - - # deep sup - conv4 = conv_out[-2] - _ = self.cbr_deepsup(conv4) - _ = self.dropout_deepsup(_) - _ = self.conv_last_deepsup(_) - - x = nn.functional.log_softmax(x, dim=1) - _ = nn.functional.log_softmax(_, dim=1) - - return (x, _) - - -# upernet -class UPerNet(nn.Module): - def __init__(self, num_class=150, fc_dim=4096, - inference=False, use_softmax=False, pool_scales=(1, 2, 3, 6), - fpn_inplanes=(256,512,1024,2048), fpn_dim=256): - super(UPerNet, self).__init__() - self.use_softmax = use_softmax - self.inference = inference - - # PPM Module - self.ppm_pooling = [] - self.ppm_conv = [] - - for scale in pool_scales: - self.ppm_pooling.append(nn.AdaptiveAvgPool2d(scale)) - self.ppm_conv.append(nn.Sequential( - nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False), - SynchronizedBatchNorm2d(512), - nn.ReLU(inplace=True) - )) - self.ppm_pooling = nn.ModuleList(self.ppm_pooling) - self.ppm_conv = nn.ModuleList(self.ppm_conv) - self.ppm_last_conv = conv3x3_bn_relu(fc_dim + len(pool_scales)*512, fpn_dim, 1) - - # FPN Module - self.fpn_in = [] - for fpn_inplane in fpn_inplanes[:-1]: # skip the top layer - self.fpn_in.append(nn.Sequential( - nn.Conv2d(fpn_inplane, fpn_dim, kernel_size=1, bias=False), - SynchronizedBatchNorm2d(fpn_dim), - nn.ReLU(inplace=True) - )) - self.fpn_in = nn.ModuleList(self.fpn_in) - - self.fpn_out = [] - for i in range(len(fpn_inplanes) - 1): # skip the top layer - self.fpn_out.append(nn.Sequential( - conv3x3_bn_relu(fpn_dim, fpn_dim, 1), - )) - self.fpn_out = nn.ModuleList(self.fpn_out) - - self.conv_last = nn.Sequential( - conv3x3_bn_relu(len(fpn_inplanes) * fpn_dim, fpn_dim, 1), - nn.Conv2d(fpn_dim, num_class, kernel_size=1) - ) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - - input_size = conv5.size() - ppm_out = [conv5] - for pool_scale, pool_conv in zip(self.ppm_pooling, self.ppm_conv): - ppm_out.append(pool_conv(nn.functional.interploate( - pool_scale(conv5), - (input_size[2], input_size[3]), - mode='bilinear', align_corners=False))) - ppm_out = torch.cat(ppm_out, 1) - f = self.ppm_last_conv(ppm_out) - - fpn_feature_list = [f] - for i in reversed(range(len(conv_out) - 1)): - conv_x = conv_out[i] - conv_x = self.fpn_in[i](conv_x) # lateral branch - - f = nn.functional.interpolate( - f, size=conv_x.size()[2:], mode='bilinear', align_corners=False) # top-down branch - f = conv_x + f - - fpn_feature_list.append(self.fpn_out[i](f)) - - fpn_feature_list.reverse() # [P2 - P5] - output_size = fpn_feature_list[0].size()[2:] - fusion_list = [fpn_feature_list[0]] - for i in range(1, len(fpn_feature_list)): - fusion_list.append(nn.functional.interpolate( - fpn_feature_list[i], - output_size, - mode='bilinear', align_corners=False)) - fusion_out = torch.cat(fusion_list, 1) - x 
= self.conv_last(fusion_out) - - if self.inference or self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - if self.use_softmax: - x = nn.functional.softmax(x, dim=1) - return x - - x = nn.functional.log_softmax(x, dim=1) - - return x diff --git a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/__init__.py b/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/__init__.py deleted file mode 100644 index ece0ea08fe2e939cc260a1dafc0ab5b391b773d9..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/ECCV2022/PSG/OpenPSG/configs/motifs/panoptic_fpn_r101_fpn_1x_predcls_psg.py b/spaces/ECCV2022/PSG/OpenPSG/configs/motifs/panoptic_fpn_r101_fpn_1x_predcls_psg.py deleted file mode 100644 index d125d475b96e26c7862d16b5335798ee9defab44..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/PSG/OpenPSG/configs/motifs/panoptic_fpn_r101_fpn_1x_predcls_psg.py +++ /dev/null @@ -1,28 +0,0 @@ -_base_ = './panoptic_fpn_r50_fpn_1x_predcls_psg.py' - -model = dict(backbone=dict( - depth=101, - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet101'))) - -# Log config -project_name = 'openpsg' -expt_name = 'motifs_panoptic_fpn_r101_fpn_1x_predcls_psg' -work_dir = f'./work_dirs/{expt_name}' - -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - # dict(type='TensorboardLoggerHook') - dict( - type='WandbLoggerHook', - init_kwargs=dict( - project=project_name, - name=expt_name, - # config=work_dir + "/cfg.yaml" - ), - ), - ], -) - -load_from = 'work_dirs/checkpoints/panoptic_fpn_r101_fpn_1x_coco_20210820_193950-ab9157a2.pth' diff --git a/spaces/ECE1786-AG/ArtIstic-GENREator/app.py b/spaces/ECE1786-AG/ArtIstic-GENREator/app.py deleted file mode 100644 index 59b128133abe1ffcef71db27df3792e64722b180..0000000000000000000000000000000000000000 --- a/spaces/ECE1786-AG/ArtIstic-GENREator/app.py +++ /dev/null @@ -1,91 +0,0 @@ -import torch -import gradio as gr -from transformers import pipeline, T5ForConditionalGeneration, T5Tokenizer -from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler - -# generate lyrics -lyrics_generator = pipeline("text-generation", "ECE1786-AG/lyrics-generator") - -# summarize lyrics -model = T5ForConditionalGeneration.from_pretrained("Michau/t5-base-en-generate-headline") -tokenizer = T5Tokenizer.from_pretrained("Michau/t5-base-en-generate-headline") - -# generate single cover -scheduler = EulerDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-2", subfolder="scheduler") -pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", scheduler=scheduler, revision="fp16", torch_dtype=torch.float16) -device = "cuda" if torch.cuda.is_available() else "cpu" -pipe = pipe.to(device) - -def generate_lyrics(genre, prompt): - complete_prompt = " <{0}>\n{1}".format(genre, prompt) - lyrics = lyrics_generator(complete_prompt, max_length=1024) - lyrics = lyrics[0]['generated_text'] - lyrics = 
lyrics.split('\n', 1)[1] # remove first line from the generated lyrics - - return lyrics - -def summarize_lyrics(lyrics): - text = "headline: " + lyrics - encoding = tokenizer.encode_plus(text, return_tensors = "pt") - input_ids = encoding["input_ids"] - attention_masks = encoding["attention_mask"] - beam_outputs = model.generate( - input_ids = input_ids, - attention_mask = attention_masks, - max_length = 100, - num_beams = 5, - early_stopping = True, - ) - result = tokenizer.decode(beam_outputs[0]) - result = result.replace('', '') - result = result.replace('', '') - - return result - -def generate_cover(prompt, style, effect): - prompt = summarize_lyrics(prompt) # call function summarize_lyrics to summarize lyrics - prompt = prompt + ", " + effect + ", album cover, artistic, " + style - print(prompt) - image = pipe(prompt).images[0] - return image - -demo = gr.Blocks() -with demo: - gr.HTML( - """ -
-        <h1>ArtIstic GENREator</h1>
-        <h3>Generate Inspirational Lyrics and Single Cover</h3>
    - """ - ) - - with gr.Row(): - - # Left column (lyrics generation) - with gr.Column(): - with gr.Accordion("Step 1. Generate Lyrics"): - gr.Markdown("Enter the starting text and select genre to generate lyrics") - with gr.Row(): - input_start_text = gr.Textbox(placeholder='I am', label="Starting Text") - input_lyrics_type = gr.Radio(choices=['pop', 'rap', 'country', 'rock', 'r&b'], value='pop', label="Lyrics Genre") - button_gen_lyrics = gr.Button("Generate Lyrics", variant="primary") - output_generated_lyrics = gr.Textbox(label="Generated Lyrics", lines=8) - - # Right column (single cover generation) - with gr.Column(): - with gr.Accordion("Step 2. Generate Single Cover"): - gr.Markdown("Cover will be generated based on style, effect and generated lyrics") - with gr.Row(): - input_cover_style = gr.Dropdown(choices=['painted', 'abstract', 'minimalist', 'illustrated', 'photographic', 'vintage'], value='painted', label="Track Cover Style") - input_cover_effect = gr.Radio(choices=['black and white', 'highly detailed', 'blurred'], value='highly detailed', label="Track Cover Effect") - button_gen_cover = gr.Button("Generate Cover", variant="primary") - output_generated_cover = gr.Image(label="Generated Cover") - - # Bind functions to buttons - button_gen_lyrics.click(fn=generate_lyrics, inputs=[input_lyrics_type , input_start_text], outputs=output_generated_lyrics) - button_gen_cover.click(fn=generate_cover, inputs=[output_generated_lyrics, input_cover_style, input_cover_effect], outputs=output_generated_cover) - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules.py b/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules.py deleted file mode 100644 index 2201a58bee9b7808d386b3ef9ac2d1f9630e56ef..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules.py +++ /dev/null @@ -1,521 +0,0 @@ -import copy -import math - -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import AvgPool1d, Conv1d, Conv2d, ConvTranspose1d -from torch.nn import functional as F -from torch.nn.utils import remove_weight_norm, weight_norm - -from infer.lib.infer_pack import commons -from infer.lib.infer_pack.commons import get_padding, init_weights -from infer.lib.infer_pack.transforms import piecewise_rational_quadratic_transform - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
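-        # accumulates the skip-connection half of every layer's output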
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
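-        # split the projection into the unnormalized widths, heights and
-        # derivatives that parameterize the rational-quadratic spline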
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py deleted file mode 100644 index 06f2b79f5e5c6f2049bf8220c29ae20c3f82d524..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py +++ /dev/null @@ -1,98 +0,0 @@ -import numpy as np -import parselmouth - -from infer.lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor - - -class PMF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def compute_f0(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0 - - def compute_f0_uv(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], 
mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0, uv diff --git a/spaces/GT-RIPL/GPT-K/knowledge/retrieve.py b/spaces/GT-RIPL/GPT-K/knowledge/retrieve.py deleted file mode 100644 index 30126aadff6922c192d949feb95a60ef7890bab7..0000000000000000000000000000000000000000 --- a/spaces/GT-RIPL/GPT-K/knowledge/retrieve.py +++ /dev/null @@ -1,105 +0,0 @@ -import h5py -import numpy as np -from tqdm import tqdm -import torch -from knowledge import TextDB - - -class ImageCropsIdx: - def __init__(self, knowledge_idx, topk_w, topk_f, topk_n): - topk = {"whole": topk_w, "five": topk_f, "nine": topk_n} - self.topk = {k: v for k, v in topk.items() if v > 0} - - self.knowledge_idx, self.fdim, self.file_hash = self.load(knowledge_idx, self.topk) - - def load(self, knowledge_idx, topk): - with h5py.File(knowledge_idx, "r") as f: - fdim = f.attrs["fdim"] - file_hash = f.attrs["file_hash"] - - knowledge_idx_ = {} - for i in tqdm(range(len(f)), desc="Load sentence idx", dynamic_ncols=True, mininterval=1.0): - knowledge_idx_[str(i)] = {"image_ids": f[f"{i}/image_ids"][:]} - for k, v in topk.items(): - knowledge_idx_[str(i)][k] = { - "index": f[f"{i}/{k}/index"][:, :, :v], - "score": f[f"{i}/{k}/score"][:, :, :v], - "query": f[f"{i}/{k}/query"][:] - } - - knowledge_idx = {} - for i in knowledge_idx_.keys(): - for j, id in enumerate(knowledge_idx_[i]["image_ids"]): - knowledge_idx[id] = {} - for k in topk.keys(): - knowledge_idx[id][k] = { - "index": knowledge_idx_[i][k]["index"][j], - "score": knowledge_idx_[i][k]["score"][j], - "query": knowledge_idx_[i][k]["query"][j], - } - - return knowledge_idx, fdim, file_hash - - def __getitem__(self, image_id): - return self.knowledge_idx[image_id] - - -class KnowAugImageCrops: - def __init__(self, knowledge_db: TextDB, knowledge_idx: ImageCropsIdx, return_txt=False): - self.knowledge_db = knowledge_db - self.knowledge_idx = knowledge_idx - assert knowledge_db.file_hash == knowledge_idx.file_hash - - self.ncrop = {"whole": 1, "five": 5, "nine": 9} - self.topk = knowledge_idx.topk - self.fdim = knowledge_idx.fdim - - self.return_txt = return_txt - - def __call__(self, image_id): - ret = {} - for k in self.topk.keys(): - ki = self.knowledge_idx[image_id][k]["index"].flatten() - ke, kt = self.knowledge_db[ki] - kq = self.knowledge_idx[image_id][k]["query"] - kp = np.tile(np.arange(self.ncrop[k])[:, None], (1, self.topk[k])).flatten() - ks = self.knowledge_idx[image_id][k]["score"].flatten() - - ke = torch.FloatTensor(ke) - kq = torch.FloatTensor(kq) - kp = torch.LongTensor(kp) - ks = torch.FloatTensor(ks) - - ret[k] = {"embed": ke, "query": kq, "pos": kp, "score": ks} - if self.return_txt: - ret[k]["text"] = kt - - return ret - - -class KnowAugImageCropsCombined: - def __init__( - self, - knwl_aug_obj: KnowAugImageCrops, - knwl_aug_attr: KnowAugImageCrops, - knwl_aug_act: KnowAugImageCrops - ): - self.knwl_aug_obj = knwl_aug_obj - self.knwl_aug_act = knwl_aug_act - self.knwl_aug_attr = knwl_aug_attr - self.fdim = knwl_aug_obj.fdim - - def __call__(self, image_id): - knwl_obj = self.knwl_aug_obj(image_id) - knwl_attr = self.knwl_aug_attr(image_id) - knwl_act = self.knwl_aug_act(image_id) - - ret = {} - for k in knwl_obj.keys(): - ret[k] = { - "obj": knwl_obj[k], - "attr": knwl_attr[k], - "act": knwl_act[k] - } - - return ret diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/models/streams/two_stream_transport_lang_fusion.py b/spaces/Gen-Sim/Gen-Sim/cliport/models/streams/two_stream_transport_lang_fusion.py deleted file mode 100644 index 
b20a28c446071ed50dad3ce7977ae6c9b459fec3..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/models/streams/two_stream_transport_lang_fusion.py +++ /dev/null @@ -1,196 +0,0 @@ -import torch -import numpy as np - -import cliport.models as models -import cliport.models.core.fusion as fusion -from cliport.models.core.transport import Transport - - -class TwoStreamTransportLangFusion(Transport): - """Two Stream Transport (a.k.a Place) module""" - - def __init__(self, stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device): - self.fusion_type = cfg['train']['trans_stream_fusion_type'] - super().__init__(stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device) - - def _build_nets(self): - stream_one_fcn, stream_two_fcn = self.stream_fcn - stream_one_model = models.names[stream_one_fcn] - stream_two_model = models.names[stream_two_fcn] - - self.key_stream_one = stream_one_model(self.in_shape, self.output_dim, self.cfg, self.device, self.preprocess) - self.key_stream_two = stream_two_model(self.in_shape, self.output_dim, self.cfg, self.device, self.preprocess) - self.query_stream_one = stream_one_model(self.kernel_shape, self.kernel_dim, self.cfg, self.device, self.preprocess) - self.query_stream_two = stream_two_model(self.kernel_shape, self.kernel_dim, self.cfg, self.device, self.preprocess) - self.fusion_key = fusion.names[self.fusion_type](input_dim=self.kernel_dim) - self.fusion_query = fusion.names[self.fusion_type](input_dim=self.kernel_dim) - - print(f"Transport FCN - Stream One: {stream_one_fcn}, Stream Two: {stream_two_fcn}, Stream Fusion: {self.fusion_type}") - - def transport2(self, in_tensor, crop, l): - logits = self.fusion_key(self.key_stream_one(in_tensor), self.key_stream_two(in_tensor, l)) - kernel = self.fusion_query(self.query_stream_one(crop), self.query_stream_two(crop, l)) - return logits, kernel - - def forward(self, inp_img, p, lang_goal, softmax=True): - """Forward pass.""" - if len(inp_img.shape) < 4: - inp_img = inp_img[None] - - if type(inp_img) is not torch.Tensor: - in_data = inp_img # .reshape(in_shape) - in_tens = torch.from_numpy(in_data).to(dtype=torch.float, device=self.device) # [B W H 6] - else: - in_data = inp_img - in_tens = in_data - - in_tensor = torch.nn.functional.pad(in_tens, tuple(self.padding[[2,1,0]].reshape(-1)), mode='constant') - if type(p[0]) is not torch.Tensor: - p = torch.FloatTensor(p)[None] - - in_tensors = [] - crops = [] - - # this for loop is fast. - for i in range(len(in_tensor)): - in_tensor_i = in_tensor[[i]] - # Rotation pivot. - pv = p[i] + self.pad_size - - # Crop before network (default for Transporters CoRL 2020). 
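-            # Note (added for clarity): the padded heightmap is rotated
-            # n_rotations times about the pivot pv, and a square window of side
-            # 2 * pad_size (pad_size pixels on each side of the pivot) is cut
-            # from every rotation; these crops become the correlation kernels
-            # consumed by the transport (place) head in self.correlate below.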
- hcrop = self.pad_size - in_tensor_i = in_tensor_i.permute(0, 3, 1, 2) - - crop = [in_tensor_i] * self.n_rotations - crop = self.rotator(crop, pivot=pv.float()) - crop = torch.cat(crop, dim=0) - crop = crop[:, :, int(pv[0]-hcrop):int(pv[0]+hcrop), int(pv[1]-hcrop):int(pv[1]+hcrop)] - - in_tensors.append(in_tensor_i) - crops.append(crop) - - logits, kernels = self.transport(torch.cat(in_tensors,dim=0), torch.cat(crops, dim=0), lang_goal) #crops.shape:(8, 36, 6, 64, 64) - res = self.correlate(logits, kernels, softmax) - return res - -class TwoStreamTransportLangFusionLat(TwoStreamTransportLangFusion): - """Two Stream Transport (a.k.a Place) module with lateral connections""" - - def __init__(self, stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device): - - self.fusion_type = cfg['train']['trans_stream_fusion_type'] - super().__init__(stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device) - - def transport(self, in_tensor, crop, l): - key_out_one, key_lat_one = self.key_stream_one(in_tensor) - key_out_two = self.key_stream_two(in_tensor, key_lat_one, l) - logits = self.fusion_key(key_out_one, key_out_two) - - query_out_one, query_lat_one = self.query_stream_one(crop) - query_out_two = self.query_stream_two(crop, query_lat_one, l) - kernel = self.fusion_query(query_out_one, query_out_two) - - return logits, kernel - - -class TwoStreamTransportLangFusionLatReduce(TwoStreamTransportLangFusionLat): - """Two Stream Transport (a.k.a Place) module with lateral connections""" - - def __init__(self, stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device): - - self.fusion_type = cfg['train']['trans_stream_fusion_type'] - super().__init__(stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device) - - del self.query_stream_one - del self.query_stream_two - # del self.key_stream_one - # del self.key_stream_two - - stream_one_fcn = 'plain_resnet_reduce_lat' - stream_one_model = models.names[stream_one_fcn] - stream_two_fcn = 'clip_ling' - stream_two_model = models.names[stream_two_fcn] - - - - # self.key_stream_one = stream_one_model(self.in_shape, self.output_dim, self.cfg, self.device, self.preprocess) - # self.key_stream_two = stream_two_model(self.in_shape, self.output_dim, self.cfg, self.device, self.preprocess) - - self.query_stream_one = stream_one_model(self.kernel_shape, self.kernel_dim, self.cfg, self.device, self.preprocess) - self.query_stream_two = stream_two_model(self.kernel_shape, self.kernel_dim, self.cfg, self.device, self.preprocess) - - def transport(self, in_tensor, crop, l): - key_out_one, key_lat_one = self.key_stream_one(in_tensor) - key_out_two = self.key_stream_two(in_tensor, key_lat_one, l) - logits = self.fusion_key(key_out_one, key_out_two) - - query_out_one, query_lat_one = self.query_stream_one(crop) - query_out_two = self.query_stream_two(crop, query_lat_one, l) - kernel = self.fusion_query(query_out_one, query_out_two) - - return logits, kernel - - - - - -class TwoStreamTransportLangFusionLatReduceOneStream(TwoStreamTransportLangFusionLatReduce): - """Two Stream Transport (a.k.a Place) module with lateral connections""" - - def __init__(self, stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device): - - self.fusion_type = cfg['train']['trans_stream_fusion_type'] - super().__init__(stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device) - - del self.query_stream_one - del self.query_stream_two - - - - def transport(self, in_tensor, crop, l): - key_out_one, key_lat_one = 
self.key_stream_one(in_tensor) - key_out_two = self.key_stream_two(in_tensor, key_lat_one, l) - logits = self.fusion_key(key_out_one, key_out_two) - - query_out_one, query_lat_one = self.key_stream_one(crop) - query_out_two = self.key_stream_two(crop, query_lat_one, l) - kernel = self.fusion_query(query_out_one, query_out_two) - - return logits, kernel - - - - -class TwoStreamTransportLangFusionLatPretrained18(TwoStreamTransportLangFusionLat): - """Two Stream Transport (a.k.a Place) module with lateral connections""" - - def __init__(self, stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device): - - self.fusion_type = cfg['train']['trans_stream_fusion_type'] - super().__init__(stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device) - - del self.query_stream_one - del self.query_stream_two - # del self.key_stream_one - # del self.key_stream_two - stream_one_fcn = 'pretrained_resnet18' - stream_one_model = models.names[stream_one_fcn] - stream_two_fcn = 'clip_ling' - stream_two_model = models.names[stream_two_fcn] - - # self.key_stream_one = stream_one_model(self.in_shape, self.output_dim, self.cfg, self.device, self.preprocess) - # self.key_stream_two = stream_two_model(self.in_shape, self.output_dim, self.cfg, self.device, self.preprocess) - - self.query_stream_one = stream_one_model(self.kernel_shape, self.kernel_dim, self.cfg, self.device, self.preprocess) - self.query_stream_two = stream_two_model(self.kernel_shape, self.kernel_dim, self.cfg, self.device, self.preprocess) - - def transport(self, in_tensor, crop, l): - key_out_one, key_lat_one = self.key_stream_one(in_tensor) - key_out_two = self.key_stream_two(in_tensor, key_lat_one, l) - logits = self.fusion_key(key_out_one, key_out_two) - - query_out_one, query_lat_one = self.query_stream_one(crop) - query_out_two = self.query_stream_two(crop, query_lat_one, l) - kernel = self.fusion_query(query_out_one, query_out_two) - - return logits, kernel \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/utils/__init__.py b/spaces/Gen-Sim/Gen-Sim/cliport/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/GeorgeOrville/bingo/postcss.config.js b/spaces/GeorgeOrville/bingo/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/GeorgeOrville/bingo/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: {}, - autoprefixer: {}, - }, -} diff --git a/spaces/Gradio-Blocks/HairCLIP/model.py b/spaces/Gradio-Blocks/HairCLIP/model.py deleted file mode 100644 index a16120b23a7a88c0c63fd9c74fe89fa8867b16eb..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/HairCLIP/model.py +++ /dev/null @@ -1,160 +0,0 @@ -from __future__ import annotations - -import argparse -import os -import pathlib -import subprocess -import sys -from typing import Callable, Union - -import dlib -import huggingface_hub -import numpy as np -import PIL.Image -import torch -import torch.nn as nn -import torchvision.transforms as T - -if os.getenv('SYSTEM') == 'spaces' and not torch.cuda.is_available(): - with open('patch.e4e') as f: - subprocess.run('patch -p1'.split(), cwd='encoder4editing', stdin=f) - with open('patch.hairclip') as f: - subprocess.run('patch -p1'.split(), cwd='HairCLIP', stdin=f) - -app_dir = pathlib.Path(__file__).parent - -e4e_dir = app_dir / 'encoder4editing' 
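-# Put the bundled encoder4editing directory on sys.path so its modules
-# (e.g. models.psp, utils.alignment) can be imported below.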
-sys.path.insert(0, e4e_dir.as_posix()) - -from models.psp import pSp -from utils.alignment import align_face - -hairclip_dir = app_dir / 'HairCLIP' -mapper_dir = hairclip_dir / 'mapper' -sys.path.insert(0, hairclip_dir.as_posix()) -sys.path.insert(0, mapper_dir.as_posix()) - -from mapper.datasets.latents_dataset_inference import LatentsDatasetInference -from mapper.hairclip_mapper import HairCLIPMapper - - -class Model: - def __init__(self): - self.device = torch.device( - 'cuda:0' if torch.cuda.is_available() else 'cpu') - self.landmark_model = self._create_dlib_landmark_model() - self.e4e = self._load_e4e() - self.hairclip = self._load_hairclip() - self.transform = self._create_transform() - - @staticmethod - def _create_dlib_landmark_model(): - path = huggingface_hub.hf_hub_download( - 'public-data/dlib_face_landmark_model', - 'shape_predictor_68_face_landmarks.dat') - return dlib.shape_predictor(path) - - def _load_e4e(self) -> nn.Module: - ckpt_path = huggingface_hub.hf_hub_download('public-data/e4e', - 'e4e_ffhq_encode.pt') - ckpt = torch.load(ckpt_path, map_location='cpu') - opts = ckpt['opts'] - opts['device'] = self.device.type - opts['checkpoint_path'] = ckpt_path - opts = argparse.Namespace(**opts) - model = pSp(opts) - model.to(self.device) - model.eval() - return model - - def _load_hairclip(self) -> nn.Module: - ckpt_path = huggingface_hub.hf_hub_download('public-data/HairCLIP', - 'hairclip.pt') - ckpt = torch.load(ckpt_path, map_location='cpu') - opts = ckpt['opts'] - opts['device'] = self.device.type - opts['checkpoint_path'] = ckpt_path - opts['editing_type'] = 'both' - opts['input_type'] = 'text' - opts['hairstyle_description'] = 'HairCLIP/mapper/hairstyle_list.txt' - opts['color_description'] = 'red' - opts = argparse.Namespace(**opts) - model = HairCLIPMapper(opts) - model.to(self.device) - model.eval() - return model - - @staticmethod - def _create_transform() -> Callable: - transform = T.Compose([ - T.Resize(256), - T.CenterCrop(256), - T.ToTensor(), - T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]), - ]) - return transform - - def detect_and_align_face(self, image: str) -> PIL.Image.Image: - image = align_face(filepath=image, predictor=self.landmark_model) - return image - - @staticmethod - def denormalize(tensor: torch.Tensor) -> torch.Tensor: - return torch.clamp((tensor + 1) / 2 * 255, 0, 255).to(torch.uint8) - - def postprocess(self, tensor: torch.Tensor) -> np.ndarray: - tensor = self.denormalize(tensor) - return tensor.cpu().numpy().transpose(1, 2, 0) - - @torch.inference_mode() - def reconstruct_face( - self, image: PIL.Image.Image) -> tuple[np.ndarray, torch.Tensor]: - input_data = self.transform(image).unsqueeze(0).to(self.device) - reconstructed_images, latents = self.e4e(input_data, - randomize_noise=False, - return_latents=True) - reconstructed = torch.clamp(reconstructed_images[0].detach(), -1, 1) - reconstructed = self.postprocess(reconstructed) - return reconstructed, latents[0] - - @torch.inference_mode() - def generate(self, editing_type: str, hairstyle_index: int, - color_description: str, latent: torch.Tensor) -> np.ndarray: - opts = self.hairclip.opts - opts.editing_type = editing_type - opts.color_description = color_description - - if editing_type == 'color': - hairstyle_index = 0 - - device = torch.device(opts.device) - - dataset = LatentsDatasetInference(latents=latent.unsqueeze(0).cpu(), - opts=opts) - w, hairstyle_text_inputs_list, color_text_inputs_list = dataset[0][:3] - - w = w.unsqueeze(0).to(device) - hairstyle_text_inputs = 
hairstyle_text_inputs_list[ - hairstyle_index].unsqueeze(0).to(device) - color_text_inputs = color_text_inputs_list[0].unsqueeze(0).to(device) - - hairstyle_tensor_hairmasked = torch.Tensor([0]).unsqueeze(0).to(device) - color_tensor_hairmasked = torch.Tensor([0]).unsqueeze(0).to(device) - - w_hat = w + 0.1 * self.hairclip.mapper( - w, - hairstyle_text_inputs, - color_text_inputs, - hairstyle_tensor_hairmasked, - color_tensor_hairmasked, - ) - x_hat, _ = self.hairclip.decoder( - [w_hat], - input_is_latent=True, - return_latents=True, - randomize_noise=False, - truncation=1, - ) - res = torch.clamp(x_hat[0].detach(), -1, 1) - res = self.postprocess(res) - return res diff --git a/spaces/Gradio-Blocks/Story_and_Video_Generation/README.md b/spaces/Gradio-Blocks/Story_and_Video_Generation/README.md deleted file mode 100644 index 5f7e90e9f81574e342ac4af6100d154a1ac807d9..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/Story_and_Video_Generation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Story_and_Video_Generation -emoji: 📖🎬 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.0.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/resnest.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/resnest.py deleted file mode 100644 index 48e1d8bfa47348a13f0da0b9ecf32354fa270340..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/resnest.py +++ /dev/null @@ -1,317 +0,0 @@ -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNetV1d - - -class RSoftmax(nn.Module): - """Radix Softmax module in ``SplitAttentionConv2d``. - - Args: - radix (int): Radix of input. - groups (int): Groups of input. - """ - - def __init__(self, radix, groups): - super().__init__() - self.radix = radix - self.groups = groups - - def forward(self, x): - batch = x.size(0) - if self.radix > 1: - x = x.view(batch, self.groups, self.radix, -1).transpose(1, 2) - x = F.softmax(x, dim=1) - x = x.reshape(batch, -1) - else: - x = torch.sigmoid(x) - return x - - -class SplitAttentionConv2d(nn.Module): - """Split-Attention Conv2d in ResNeSt. - - Args: - in_channels (int): Number of channels in the input feature map. - channels (int): Number of intermediate channels. - kernel_size (int | tuple[int]): Size of the convolution kernel. - stride (int | tuple[int]): Stride of the convolution. - padding (int | tuple[int]): Zero-padding added to both sides of - dilation (int | tuple[int]): Spacing between kernel elements. - groups (int): Number of blocked connections from input channels to - output channels. - groups (int): Same as nn.Conv2d. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels. Default: 4. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - dcn (dict): Config dict for DCN. Default: None. 
- """ - - def __init__(self, - in_channels, - channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - radix=2, - reduction_factor=4, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None): - super(SplitAttentionConv2d, self).__init__() - inter_channels = max(in_channels * radix // reduction_factor, 32) - self.radix = radix - self.groups = groups - self.channels = channels - self.with_dcn = dcn is not None - self.dcn = dcn - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_dcn and not fallback_on_stride: - assert conv_cfg is None, 'conv_cfg must be None for DCN' - conv_cfg = dcn - self.conv = build_conv_layer( - conv_cfg, - in_channels, - channels * radix, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups * radix, - bias=False) - # To be consistent with original implementation, starting from 0 - self.norm0_name, norm0 = build_norm_layer( - norm_cfg, channels * radix, postfix=0) - self.add_module(self.norm0_name, norm0) - self.relu = nn.ReLU(inplace=True) - self.fc1 = build_conv_layer( - None, channels, inter_channels, 1, groups=self.groups) - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, inter_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.fc2 = build_conv_layer( - None, inter_channels, channels * radix, 1, groups=self.groups) - self.rsoftmax = RSoftmax(radix, groups) - - @property - def norm0(self): - """nn.Module: the normalization layer named "norm0" """ - return getattr(self, self.norm0_name) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def forward(self, x): - x = self.conv(x) - x = self.norm0(x) - x = self.relu(x) - - batch, rchannel = x.shape[:2] - batch = x.size(0) - if self.radix > 1: - splits = x.view(batch, self.radix, -1, *x.shape[2:]) - gap = splits.sum(dim=1) - else: - gap = x - gap = F.adaptive_avg_pool2d(gap, 1) - gap = self.fc1(gap) - - gap = self.norm1(gap) - gap = self.relu(gap) - - atten = self.fc2(gap) - atten = self.rsoftmax(atten).view(batch, -1, 1, 1) - - if self.radix > 1: - attens = atten.view(batch, self.radix, -1, *atten.shape[2:]) - out = torch.sum(attens * splits, dim=1) - else: - out = atten * x - return out.contiguous() - - -class Bottleneck(_Bottleneck): - """Bottleneck block for ResNeSt. - - Args: - inplane (int): Input planes of this block. - planes (int): Middle planes of this block. - groups (int): Groups of conv2. - base_width (int): Base of width in terms of base channels. Default: 4. - base_channels (int): Base of channels for calculating width. - Default: 64. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Key word arguments for base class. 
- """ - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - """Bottleneck block for ResNeSt.""" - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.avg_down_stride = avg_down_stride and self.conv2_stride > 1 - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - self.with_modulated_dcn = False - self.conv2 = SplitAttentionConv2d( - width, - width, - kernel_size=3, - stride=1 if self.avg_down_stride else self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - radix=radix, - reduction_factor=reduction_factor, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=self.dcn) - delattr(self, self.norm2_name) - - if self.avg_down_stride: - self.avd_layer = nn.AvgPool2d(3, self.conv2_stride, padding=1) - - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - def forward(self, x): - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - - if self.avg_down_stride: - out = self.avd_layer(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNeSt(ResNetV1d): - """ResNeSt backbone. - - Args: - groups (int): Number of groups of Bottleneck. Default: 1 - base_width (int): Base width of Bottleneck. Default: 4 - radix (int): Radix of SplitAttentionConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Keyword arguments for ResNet. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)), - 200: (Bottleneck, (3, 24, 36, 3)) - } - - def __init__(self, - groups=1, - base_width=4, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - self.groups = groups - self.base_width = base_width - self.radix = radix - self.reduction_factor = reduction_factor - self.avg_down_stride = avg_down_stride - super(ResNeSt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - radix=self.radix, - reduction_factor=self.reduction_factor, - avg_down_stride=self.avg_down_stride, - **kwargs) diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/server.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/server.py deleted file mode 100644 index d8422a2bad5ac2a09d4582a98da4f962dac1a911..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/server.py +++ /dev/null @@ -1,185 +0,0 @@ -#!/usr/bin/env python - -import argparse, connexion, os, sys, yaml, json, socket -from netdissect.easydict import EasyDict -from flask import send_from_directory, redirect -from flask_cors import CORS - - -from netdissect.serverstate import DissectionProject - -__author__ = 'Hendrik Strobelt, David Bau' - -CONFIG_FILE_NAME = 'dissect.json' -projects = {} - -app = connexion.App(__name__, debug=False) - - -def get_all_projects(): - res = [] - for key, project in projects.items(): - # print key - res.append({ - 'project': key, - 'info': { - 'layers': [layer['layer'] for layer in project.get_layers()] - } - }) - return sorted(res, key=lambda x: x['project']) - -def get_layers(project): - return { - 'request': {'project': project}, - 'res': projects[project].get_layers() - } - -def get_units(project, layer): - return { - 'request': {'project': project, 'layer': layer}, - 'res': projects[project].get_units(layer) - } - -def get_rankings(project, layer): - return { - 'request': {'project': project, 'layer': layer}, - 'res': projects[project].get_rankings(layer) - } - -def get_levels(project, layer, quantiles): - return { - 'request': {'project': project, 'layer': layer, 'quantiles': quantiles}, - 'res': projects[project].get_levels(layer, quantiles) - } - -def get_channels(project, layer): - answer = dict(channels=projects[project].get_channels(layer)) - return { - 'request': {'project': project, 'layer': layer}, - 'res': answer - } - -def post_generate(gen_req): - project = gen_req['project'] - zs = gen_req.get('zs', None) - ids = gen_req.get('ids', None) - return_urls = gen_req.get('return_urls', False) - assert (zs is None) != (ids is None) # one or the other, not both - ablations = gen_req.get('ablations', []) - interventions = gen_req.get('interventions', None) - # no z avilable if ablations - generated = projects[project].generate_images(zs, ids, interventions, - return_urls=return_urls) - return { - 'request': gen_req, - 'res': generated - } - -def post_features(feat_req): - project = feat_req['project'] - ids = feat_req['ids'] - masks = feat_req.get('masks', None) - layers = feat_req.get('layers', None) - interventions = feat_req.get('interventions', None) - features = projects[project].get_features( - ids, masks, layers, interventions) - return { - 'request': feat_req, - 'res': features - } - -def post_featuremaps(feat_req): - project = 
feat_req['project'] - ids = feat_req['ids'] - layers = feat_req.get('layers', None) - interventions = feat_req.get('interventions', None) - featuremaps = projects[project].get_featuremaps( - ids, layers, interventions) - return { - 'request': feat_req, - 'res': featuremaps - } - -@app.route('/client/') -def send_static(path): - """ serves all files from ./client/ to ``/client/`` - - :param path: path from api call - """ - return send_from_directory(args.client, path) - -@app.route('/data/') -def send_data(path): - """ serves all files from the data dir to ``/dissect/`` - - :param path: path from api call - """ - print('Got the data route for', path) - return send_from_directory(args.data, path) - - -@app.route('/') -def redirect_home(): - return redirect('/client/index.html', code=302) - - -def load_projects(directory): - """ - searches for CONFIG_FILE_NAME in all subdirectories of directory - and creates data handlers for all of them - - :param directory: scan directory - :return: null - """ - project_dirs = [] - # Don't search more than 2 dirs deep. - search_depth = 2 + directory.count(os.path.sep) - for root, dirs, files in os.walk(directory): - if CONFIG_FILE_NAME in files: - project_dirs.append(root) - # Don't get subprojects under a project dir. - del dirs[:] - elif root.count(os.path.sep) >= search_depth: - del dirs[:] - for p_dir in project_dirs: - print('Loading %s' % os.path.join(p_dir, CONFIG_FILE_NAME)) - with open(os.path.join(p_dir, CONFIG_FILE_NAME), 'r') as jf: - config = EasyDict(json.load(jf)) - dh_id = os.path.split(p_dir)[1] - projects[dh_id] = DissectionProject( - config=config, - project_dir=p_dir, - path_url='data/' + os.path.relpath(p_dir, directory), - public_host=args.public_host) - -app.add_api('server.yaml') - -# add CORS support -CORS(app.app, headers='Content-Type') - -parser = argparse.ArgumentParser() -parser.add_argument("--nodebug", default=False) -parser.add_argument("--address", default="127.0.0.1") # 0.0.0.0 for nonlocal use -parser.add_argument("--port", default="5001") -parser.add_argument("--public_host", default=None) -parser.add_argument("--nocache", default=False) -parser.add_argument("--data", type=str, default='dissect') -parser.add_argument("--client", type=str, default='client_dist') - -if __name__ == '__main__': - args = parser.parse_args() - for d in [args.data, args.client]: - if not os.path.isdir(d): - print('No directory %s' % d) - sys.exit(1) - args.data = os.path.abspath(args.data) - args.client = os.path.abspath(args.client) - if args.public_host is None: - args.public_host = '%s:%d' % (socket.getfqdn(), int(args.port)) - app.run(port=int(args.port), debug=not args.nodebug, host=args.address, - use_reloader=False) -else: - args, _ = parser.parse_known_args() - if args.public_host is None: - args.public_host = '%s:%d' % (socket.getfqdn(), int(args.port)) - load_projects(args.data) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/tacotron2_loss.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/tacotron2_loss.py deleted file mode 100644 index 8c7b655c8c52f8fa478b4568850ec8f741dab78e..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/tacotron2_loss.py +++ /dev/null @@ -1,210 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. 
An additional grant of patent rights -# can be found in the PATENTS file in the same directory. - -import logging -from typing import Any, Dict, List -from functools import lru_cache -from dataclasses import dataclass, field - -import torch -from omegaconf import II - -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from fairseq.data.data_utils import lengths_to_mask -import torch.nn.functional as F - - -logger = logging.getLogger(__name__) - - -@dataclass -class Tacotron2CriterionConfig(FairseqDataclass): - bce_pos_weight: float = field( - default=1.0, - metadata={"help": "weight of positive examples for BCE loss"}, - ) - n_frames_per_step: int = field( - default=0, - metadata={"help": "Number of frames per decoding step"}, - ) - use_guided_attention_loss: bool = field( - default=False, - metadata={"help": "use guided attention loss"}, - ) - guided_attention_loss_sigma: float = field( - default=0.4, - metadata={"help": "weight of positive examples for BCE loss"}, - ) - ctc_weight: float = field( - default=0.0, metadata={"help": "weight for CTC loss"} - ) - sentence_avg: bool = II("optimization.sentence_avg") - - -class GuidedAttentionLoss(torch.nn.Module): - """ - Efficiently Trainable Text-to-Speech System Based on Deep Convolutional - Networks with Guided Attention (https://arxiv.org/abs/1710.08969) - """ - - def __init__(self, sigma): - super().__init__() - self.sigma = sigma - - @staticmethod - @lru_cache(maxsize=8) - def _get_weight(s_len, t_len, sigma): - grid_x, grid_y = torch.meshgrid(torch.arange(t_len), torch.arange(s_len)) - grid_x = grid_x.to(s_len.device) - grid_y = grid_y.to(s_len.device) - w = (grid_y.float() / s_len - grid_x.float() / t_len) ** 2 - return 1.0 - torch.exp(-w / (2 * (sigma ** 2))) - - def _get_weights(self, src_lens, tgt_lens): - bsz, max_s_len, max_t_len = len(src_lens), max(src_lens), max(tgt_lens) - weights = torch.zeros((bsz, max_t_len, max_s_len)) - for i, (s_len, t_len) in enumerate(zip(src_lens, tgt_lens)): - weights[i, :t_len, :s_len] = self._get_weight(s_len, t_len, - self.sigma) - return weights - - @staticmethod - def _get_masks(src_lens, tgt_lens): - in_masks = lengths_to_mask(src_lens) - out_masks = lengths_to_mask(tgt_lens) - return out_masks.unsqueeze(2) & in_masks.unsqueeze(1) - - def forward(self, attn, src_lens, tgt_lens, reduction="mean"): - weights = self._get_weights(src_lens, tgt_lens).to(attn.device) - masks = self._get_masks(src_lens, tgt_lens).to(attn.device) - loss = (weights * attn.transpose(1, 2)).masked_select(masks) - loss = torch.sum(loss) if reduction == "sum" else torch.mean(loss) - return loss - - -@register_criterion("tacotron2", dataclass=Tacotron2CriterionConfig) -class Tacotron2Criterion(FairseqCriterion): - def __init__(self, task, sentence_avg, n_frames_per_step, - use_guided_attention_loss, guided_attention_loss_sigma, - bce_pos_weight, ctc_weight): - super().__init__(task) - self.sentence_avg = sentence_avg - self.n_frames_per_step = n_frames_per_step - self.bce_pos_weight = bce_pos_weight - - self.guided_attn = None - if use_guided_attention_loss: - self.guided_attn = GuidedAttentionLoss(guided_attention_loss_sigma) - self.ctc_weight = ctc_weight - - def forward(self, model, sample, reduction="mean"): - bsz, max_len, _ = sample["target"].size() - feat_tgt = sample["target"] - feat_len = sample["target_lengths"].view(bsz, 1).expand(-1, max_len) - eos_tgt = torch.arange(max_len).to(sample["target"].device) - 
eos_tgt = eos_tgt.view(1, max_len).expand(bsz, -1) - eos_tgt = (eos_tgt == (feat_len - 1)).float() - src_tokens = sample["net_input"]["src_tokens"] - src_lens = sample["net_input"]["src_lengths"] - tgt_lens = sample["target_lengths"] - - feat_out, eos_out, extra = model( - src_tokens=src_tokens, - src_lengths=src_lens, - prev_output_tokens=sample["net_input"]["prev_output_tokens"], - incremental_state=None, - target_lengths=tgt_lens, - speaker=sample["speaker"] - ) - - l1_loss, mse_loss, eos_loss = self.compute_loss( - extra["feature_out"], feat_out, eos_out, feat_tgt, eos_tgt, - tgt_lens, reduction, - ) - attn_loss = torch.tensor(0.).type_as(l1_loss) - if self.guided_attn is not None: - attn_loss = self.guided_attn(extra['attn'], src_lens, tgt_lens, reduction) - ctc_loss = torch.tensor(0.).type_as(l1_loss) - if self.ctc_weight > 0.: - net_output = (feat_out, eos_out, extra) - lprobs = model.get_normalized_probs(net_output, log_probs=True) - lprobs = lprobs.transpose(0, 1) # T x B x C - src_mask = lengths_to_mask(src_lens) - src_tokens_flat = src_tokens.masked_select(src_mask) - ctc_loss = F.ctc_loss( - lprobs, src_tokens_flat, tgt_lens, src_lens, - reduction=reduction, zero_infinity=True - ) * self.ctc_weight - loss = l1_loss + mse_loss + eos_loss + attn_loss + ctc_loss - - sample_size = sample["nsentences"] if self.sentence_avg \ - else sample["ntokens"] - logging_output = { - "loss": utils.item(loss.data), - "ntokens": sample["ntokens"], - "nsentences": sample["nsentences"], - "sample_size": sample_size, - "l1_loss": utils.item(l1_loss.data), - "mse_loss": utils.item(mse_loss.data), - "eos_loss": utils.item(eos_loss.data), - "attn_loss": utils.item(attn_loss.data), - "ctc_loss": utils.item(ctc_loss.data), - } - return loss, sample_size, logging_output - - def compute_loss(self, feat_out, feat_out_post, eos_out, feat_tgt, - eos_tgt, tgt_lens, reduction="mean"): - mask = lengths_to_mask(tgt_lens) - _eos_out = eos_out[mask].squeeze() - _eos_tgt = eos_tgt[mask] - _feat_tgt = feat_tgt[mask] - _feat_out = feat_out[mask] - _feat_out_post = feat_out_post[mask] - - l1_loss = ( - F.l1_loss(_feat_out, _feat_tgt, reduction=reduction) + - F.l1_loss(_feat_out_post, _feat_tgt, reduction=reduction) - ) - mse_loss = ( - F.mse_loss(_feat_out, _feat_tgt, reduction=reduction) + - F.mse_loss(_feat_out_post, _feat_tgt, reduction=reduction) - ) - eos_loss = F.binary_cross_entropy_with_logits( - _eos_out, _eos_tgt, pos_weight=torch.tensor(self.bce_pos_weight), - reduction=reduction - ) - return l1_loss, mse_loss, eos_loss - - @classmethod - def reduce_metrics(cls, logging_outputs: List[Dict[str, Any]]) -> None: - ns = [log.get("sample_size", 0) for log in logging_outputs] - ntot = sum(ns) - ws = [n / (ntot + 1e-8) for n in ns] - for key in ["loss", "l1_loss", "mse_loss", "eos_loss", "attn_loss", "ctc_loss"]: - vals = [log.get(key, 0) for log in logging_outputs] - val = sum(val * w for val, w in zip(vals, ws)) - metrics.log_scalar(key, val, ntot, round=3) - metrics.log_scalar("sample_size", ntot, len(logging_outputs)) - - # inference metrics - if "targ_frames" not in logging_outputs[0]: - return - n = sum(log.get("targ_frames", 0) for log in logging_outputs) - for key, new_key in [ - ("mcd_loss", "mcd_loss"), - ("pred_frames", "pred_ratio"), - ("nins", "ins_rate"), - ("ndel", "del_rate"), - ]: - val = sum(log.get(key, 0) for log in logging_outputs) - metrics.log_scalar(new_key, val / n, n, round=3) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - return False diff --git 
a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/ema/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/ema/__init__.py deleted file mode 100644 index 503ceaa609b092e48bd32a0031f4e2ffb875483f..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/ema/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - -from .ema import EMA - - -def build_ema(model, cfg, device): - return EMA(model, cfg, device) - - -# automatically import any Python files in the models/ema/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - file_name = file[: file.find(".py")] - importlib.import_module("fairseq.models.ema." + file_name) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/wav2vec/wav2vec.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/wav2vec/wav2vec.py deleted file mode 100644 index af6604da10f504baabff50bf14a6eb2214bffef3..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/wav2vec/wav2vec.py +++ /dev/null @@ -1,630 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field -import logging -import math -from typing import Optional, Tuple -from omegaconf import II -import sys - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.modules import ( - Fp32GroupNorm, - Fp32LayerNorm, - GumbelVectorQuantizer, - KmeansVectorQuantizer, - TransposeLast, -) -from fairseq.tasks import FairseqTask -from fairseq.utils import buffered_arange - - -logger = logging.getLogger(__name__) - - -AGGREGATOR_CHOICES = ChoiceEnum(["cnn", "gru"]) -PROJECT_FEATURES_CHOICES = ChoiceEnum(["none", "same", "new"]) -ACTIVATION_CHOICES = ChoiceEnum(["relu", "gelu"]) -VQ_TYPE_CHOICES = ChoiceEnum(["none", "gumbel", "kmeans"]) - - -@dataclass -class Wav2VecConfig(FairseqDataclass): - prediction_steps: int = field( - default=12, metadata={"help": "number of steps ahead to predict"} - ) - sample_distance: Optional[int] = field( - default=None, - metadata={ - "help": "sample distance from target. 
does not work properly with cross-sampling" - }, - ) - cross_sample_negatives: int = field( - default=0, metadata={"help": "num of cross sampled negatives"} - ) - num_negatives: int = field( - default=10, metadata={"help": "num of sampled negatives"} - ) - conv_feature_layers: str = field( - default="[(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1), (512, 1, 1)]", - metadata={ - "help": "convolutional feature extraction layers [(dim, kernel_size, stride), ...]" - }, - ) - conv_aggregator_layers: str = field( - default="[(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)]", - metadata={ - "help": "convolutional aggregator layers [(dim, kernel_size, stride), ...]" - }, - ) - dropout: float = field( - default=0.0, metadata={"help": "dropout to apply within the model"} - ) - dropout_features: float = field( - default=0.0, metadata={"help": "dropout to apply to the features"} - ) - dropout_agg: float = field( - default=0.0, metadata={"help": "dropout to apply after aggregation step"} - ) - aggregator: AGGREGATOR_CHOICES = field( - default="cnn", metadata={"help": "type of aggregator to use"} - ) - gru_dim: int = field(default=512, metadata={"help": "GRU dimensionality"}) - no_conv_bias: bool = field( - default=False, metadata={"help": "if set, does not learn bias for conv layers"} - ) - agg_zero_pad: bool = field( - default=False, - metadata={"help": "if set, zero pads in aggregator instead of repl pad"}, - ) - skip_connections_feat: bool = field( - default=False, - metadata={"help": "if set, adds skip connections to the feature extractor"}, - ) - skip_connections_agg: bool = field( - default=True, - metadata={"help": "if set, adds skip connections to the aggregator"}, - ) - residual_scale: float = field( - default=0.5, metadata={"help": "scales residual by sqrt(value)"} - ) - log_compression: bool = field( - default=True, - metadata={"help": "if set, adds a log compression to feature extractor"}, - ) - balanced_classes: bool = field( - default=False, - metadata={"help": "if set, loss is scaled to balance for number of negatives"}, - ) - project_features: PROJECT_FEATURES_CHOICES = field( - default="none", - metadata={ - "help": "if not none, features are projected using the (same or new) aggregator" - }, - ) - non_affine_group_norm: bool = field( - default=False, metadata={"help": "if set, group norm is not affine"} - ) - offset: str = field( - default="auto", - metadata={ - "help": "if set to 'auto', it is computed automatically from the receptive field, else set to int value" - }, - ) - activation: ACTIVATION_CHOICES = field( - default="relu", - metadata={ - "help": "if set to 'auto', it is computed automatically from the receptive field, else set to int value" - }, - ) - vq_type: VQ_TYPE_CHOICES = field( - default="none", metadata={"help": "which type of quantizer to use"} - ) - vq_vars: int = field( - default=320, - metadata={"help": "project to this many vector quantized variables per group"}, - ) - vq_groups: int = field( - default=2, metadata={"help": "number of groups of latent variables"} - ) - vq_dim: int = field( - default=0, - metadata={ - "help": "uses this dimensionality for quantized vectors. 
0 to use model dim // groups" - }, - ) - vq_depth: int = field( - default=1, metadata={"help": "number of layers for vq weight projection"} - ) - combine_groups: bool = field( - default=False, metadata={"help": "if set, variables are shared among groups"} - ) - vq_temp: Tuple[float, float, float] = field( - default=(2.0, 0.5, 0.999995), - metadata={ - "help": "temperature for latent variable sampling with gumbel softmax. should be a tuple of 3 values (start, end, decay)" - }, - ) - vq_gamma: float = field( - default=0.25, - metadata={"help": "gamma parameter for kmeans style vector quantization"}, - ) - infonce: bool = II("criterion.infonce") - - -@register_model("wav2vec", dataclass=Wav2VecConfig) -class Wav2VecModel(BaseFairseqModel): - @classmethod - def build_model(cls, cfg: Wav2VecConfig, task: FairseqTask): - """Build a new model instance.""" - - model = Wav2VecModel(cfg) - logger.info(model) - return model - - def __init__(self, cfg: Wav2VecConfig): - super().__init__() - - self.prediction_steps = cfg.prediction_steps - offset = cfg.offset - - if cfg.activation == "relu": - activation = nn.ReLU() - elif cfg.activation == "gelu": - activation = nn.GELU() - else: - raise Exception("unknown activation " + cfg.activation) - - feature_enc_layers = eval(cfg.conv_feature_layers) - self.feature_extractor = ConvFeatureExtractionModel( - conv_layers=feature_enc_layers, - dropout=0.0, - log_compression=cfg.log_compression, - skip_connections=cfg.skip_connections_feat, - residual_scale=cfg.residual_scale, - non_affine_group_norm=cfg.non_affine_group_norm, - activation=activation, - ) - embed = feature_enc_layers[-1][0] - - self.vector_quantizer = None - if cfg.vq_type == "gumbel": - self.vector_quantizer = GumbelVectorQuantizer( - dim=embed, - num_vars=cfg.vq_vars, - temp=cfg.vq_temp, - groups=cfg.vq_groups, - combine_groups=cfg.combine_groups, - vq_dim=cfg.vq_dim if cfg.vq_dim > 0 else embed, - time_first=False, - activation=activation, - weight_proj_depth=cfg.vq_depth, - weight_proj_factor=2, - ) - elif cfg.vq_type == "kmeans": - self.vector_quantizer = KmeansVectorQuantizer( - dim=embed, - num_vars=cfg.vq_vars, - groups=cfg.vq_groups, - combine_groups=cfg.combine_groups, - vq_dim=cfg.vq_dim if cfg.vq_dim > 0 else embed, - time_first=False, - gamma=cfg.vq_gamma, - ) - else: - assert ( - cfg.vq_type == "none" or cfg.vq_type is None - ), "Unknown quantizer type" - - if cfg.offset == "auto": - jin = 0 - rin = 0 - for _, k, stride in feature_enc_layers: - if rin == 0: - rin = k - rin = rin + (k - 1) * jin - if jin == 0: - jin = stride - else: - jin *= stride - offset = math.ceil(rin / jin) - - offset = int(offset) - - def make_aggregator(): - if cfg.aggregator == "cnn": - agg_layers = eval(cfg.conv_aggregator_layers) - agg_dim = agg_layers[-1][0] - feature_aggregator = ConvAggegator( - conv_layers=agg_layers, - embed=embed, - dropout=cfg.dropout, - skip_connections=cfg.skip_connections_agg, - residual_scale=cfg.residual_scale, - non_affine_group_norm=cfg.non_affine_group_norm, - conv_bias=not cfg.no_conv_bias, - zero_pad=cfg.agg_zero_pad, - activation=activation, - ) - elif cfg.aggregator == "gru": - agg_dim = cfg.gru_dim - feature_aggregator = nn.Sequential( - TransposeLast(), - nn.GRU( - input_size=embed, - hidden_size=agg_dim, - num_layers=1, - dropout=cfg.dropout, - ), - TransposeLast(deconstruct_idx=0), - ) - else: - raise Exception("unknown aggregator type " + cfg.aggregator) - - return feature_aggregator, agg_dim - - self.feature_aggregator, agg_dim = make_aggregator() - - 
self.wav2vec_predictions = Wav2VecPredictionsModel( - in_dim=agg_dim, - out_dim=embed, - prediction_steps=cfg.prediction_steps, - n_negatives=cfg.num_negatives, - cross_sample_negatives=cfg.cross_sample_negatives, - sample_distance=cfg.sample_distance, - dropout=cfg.dropout, - offset=offset, - balanced_classes=cfg.balanced_classes, - infonce=cfg.infonce, - ) - - self.dropout_feats = nn.Dropout(p=cfg.dropout_features) - self.dropout_agg = nn.Dropout(p=cfg.dropout_agg) - - if cfg.project_features == "none": - self.project_features = None - elif cfg.project_features == "same": - self.project_features = self.feature_aggregator - elif cfg.project_features == "new": - self.project_features, _ = make_aggregator() - - def forward(self, source): - result = {} - - features = self.feature_extractor(source) - if self.vector_quantizer: - q_res = self.vector_quantizer(features) - features = q_res["x"] - for k in q_res.keys(): - if k != "x": - result[k] = q_res[k] - - x = self.dropout_feats(features) - x = self.feature_aggregator(x) - x = self.dropout_agg(x) - - if self.project_features is not None: - features = self.project_features(features) - x, targets = self.wav2vec_predictions(x, features) - result["cpc_logits"] = x - result["cpc_targets"] = targets - - return result - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - - def max_positions(self): - """Maximum length supported by the model.""" - return sys.maxsize - - def get_logits(self, net_output): - logits = net_output["cpc_logits"] - return logits - - def get_targets(self, sample, net_output): - t = net_output["cpc_targets"] - if isinstance(t, tuple): - t = t[0] - return t.contiguous() - - def get_target_weights(self, targets, net_output): - targets = net_output["cpc_targets"] - if isinstance(targets, tuple) and targets[-1] is not None: - return targets[-1] - return None - - def get_extra_losses(self, net_output): - loss = None - if "prob_perplexity" in net_output: - loss = net_output["num_vars"] - net_output["prob_perplexity"] - elif "kmeans_loss" in net_output: - loss = net_output["kmeans_loss"] - - return loss - - -def norm_block(is_layer_norm, dim, affine=True): - if is_layer_norm: - mod = nn.Sequential( - TransposeLast(), - Fp32LayerNorm(dim, elementwise_affine=affine), - TransposeLast(), - ) - else: - mod = Fp32GroupNorm(1, dim, affine=affine) - - return mod - - -class ConvFeatureExtractionModel(nn.Module): - def __init__( - self, - conv_layers, - dropout, - log_compression, - skip_connections, - residual_scale, - non_affine_group_norm, - activation, - ): - super().__init__() - - def block(n_in, n_out, k, stride): - return nn.Sequential( - nn.Conv1d(n_in, n_out, k, stride=stride, bias=False), - nn.Dropout(p=dropout), - norm_block( - is_layer_norm=False, dim=n_out, affine=not non_affine_group_norm - ), - activation, - ) - - in_d = 1 - self.conv_layers = nn.ModuleList() - for dim, k, stride in conv_layers: - self.conv_layers.append(block(in_d, dim, k, stride)) - in_d = dim - - self.log_compression = log_compression - self.skip_connections = skip_connections - self.residual_scale = math.sqrt(residual_scale) - - def forward(self, x): - # BxT -> BxCxT - x = x.unsqueeze(1) - - for conv in self.conv_layers: - residual = x - x = conv(x) - if self.skip_connections and x.size(1) == residual.size(1): - tsz = x.size(2) - r_tsz = residual.size(2) - residual = residual[..., :: r_tsz // tsz][..., :tsz] - x = (x + residual) * self.residual_scale - - if self.log_compression: - x = x.abs() - x = x 
+ 1 - x = x.log() - - return x - - -class ZeroPad1d(nn.Module): - def __init__(self, pad_left, pad_right): - super().__init__() - self.pad_left = pad_left - self.pad_right = pad_right - - def forward(self, x): - return F.pad(x, (self.pad_left, self.pad_right)) - - -class ConvAggegator(nn.Module): - def __init__( - self, - conv_layers, - embed, - dropout, - skip_connections, - residual_scale, - non_affine_group_norm, - conv_bias, - zero_pad, - activation, - ): - super().__init__() - - def block(n_in, n_out, k, stride): - # padding dims only really make sense for stride = 1 - ka = k // 2 - kb = ka - 1 if k % 2 == 0 else ka - - pad = ( - ZeroPad1d(ka + kb, 0) if zero_pad else nn.ReplicationPad1d((ka + kb, 0)) - ) - - return nn.Sequential( - pad, - nn.Conv1d(n_in, n_out, k, stride=stride, bias=conv_bias), - nn.Dropout(p=dropout), - norm_block(False, n_out, affine=not non_affine_group_norm), - activation, - ) - - in_d = embed - self.conv_layers = nn.ModuleList() - self.residual_proj = nn.ModuleList() - for dim, k, stride in conv_layers: - if in_d != dim and skip_connections: - self.residual_proj.append(nn.Conv1d(in_d, dim, 1, bias=False)) - else: - self.residual_proj.append(None) - - self.conv_layers.append(block(in_d, dim, k, stride)) - in_d = dim - self.conv_layers = nn.Sequential(*self.conv_layers) - self.skip_connections = skip_connections - self.residual_scale = math.sqrt(residual_scale) - - def forward(self, x): - for rproj, conv in zip(self.residual_proj, self.conv_layers): - residual = x - x = conv(x) - if self.skip_connections: - if rproj is not None: - residual = rproj(residual) - x = (x + residual) * self.residual_scale - return x - - -class Wav2VecPredictionsModel(nn.Module): - def __init__( - self, - in_dim, - out_dim, - prediction_steps, - n_negatives, - cross_sample_negatives, - sample_distance, - dropout, - offset, - balanced_classes, - infonce, - ): - super().__init__() - - self.n_negatives = n_negatives - self.cross_sample_negatives = cross_sample_negatives - self.sample_distance = sample_distance - self.project_to_steps = nn.ConvTranspose2d( - in_dim, out_dim, (1, prediction_steps) - ) - self.dropout = nn.Dropout(p=dropout) - self.offset = offset - self.balanced_classes = balanced_classes - self.infonce = infonce - - def sample_negatives(self, y): - bsz, fsz, tsz = y.shape - - y = y.transpose(0, 1) # BCT -> CBT - y = y.contiguous().view(fsz, -1) # CBT => C(BxT) - - cross_high = tsz * bsz - high = tsz if self.sample_distance is None else min(tsz, self.sample_distance) - assert high > 1 - - neg_idxs = torch.randint(low=0, high=high, size=(bsz, self.n_negatives * tsz)) - - with torch.no_grad(): - if self.n_negatives > 0: - tszs = ( - buffered_arange(tsz) - .unsqueeze(-1) - .expand(-1, self.n_negatives) - .flatten() - ) - - neg_idxs = torch.randint( - low=0, high=high - 1, size=(bsz, self.n_negatives * tsz) - ) - neg_idxs[neg_idxs >= tszs] += 1 - - if self.cross_sample_negatives > 0: - tszs = ( - buffered_arange(tsz) - .unsqueeze(-1) - .expand(-1, self.cross_sample_negatives) - .flatten() - ) - - cross_neg_idxs = torch.randint( - low=0, - high=cross_high - 1, - size=(bsz, self.cross_sample_negatives * tsz), - ) - cross_neg_idxs[cross_neg_idxs >= tszs] += 1 - - if self.n_negatives > 0: - for i in range(1, bsz): - neg_idxs[i] += i * high - else: - neg_idxs = cross_neg_idxs - - if self.cross_sample_negatives > 0 and self.n_negatives > 0: - neg_idxs = torch.cat([neg_idxs, cross_neg_idxs], dim=1) - - negs = y[..., neg_idxs.view(-1)] - negs = negs.view( - fsz, bsz, self.n_negatives + 
self.cross_sample_negatives, tsz - ).permute( - 2, 1, 0, 3 - ) # to NxBxCxT - - return negs - - def forward(self, x, y): - - x = x.unsqueeze(-1) - x = self.project_to_steps(x) # BxCxTxS - x = self.dropout(x) - - negatives = self.sample_negatives(y) - y = y.unsqueeze(0) - targets = torch.cat([y, negatives], dim=0) # Copies x B x C x T - - copies = targets.size(0) - bsz, dim, tsz, steps = x.shape - steps = min(steps, tsz - self.offset) - - predictions = x.new( - bsz * copies * (tsz - self.offset + 1) * steps - - ((steps + 1) * steps // 2) * copies * bsz - ) - if self.infonce: - labels = predictions.new_full( - (predictions.shape[0] // copies,), 0, dtype=torch.long - ) - else: - labels = torch.zeros_like(predictions) - weights = ( - torch.full_like(labels, 1 / self.n_negatives) - if self.balanced_classes and not self.infonce - else None - ) - - start = end = 0 - for i in range(steps): - offset = i + self.offset - end = start + (tsz - offset) * bsz * copies - if self.infonce: - predictions[start:end] = torch.einsum( - "bct,nbct->tbn", x[..., :-offset, i], targets[..., offset:] - ).flatten() - else: - pos_num = (end - start) // copies - predictions[start:end] = torch.einsum( - "bct,nbct->nbt", x[..., :-offset, i], targets[..., offset:] - ).flatten() - labels[start : start + pos_num] = 1.0 - if weights is not None: - weights[start : start + pos_num] = 1.0 - start = end - assert end == predictions.numel(), "{} != {}".format(end, predictions.numel()) - - if self.infonce: - predictions = predictions.view(-1, copies) - else: - if weights is not None: - labels = (labels, weights) - - return predictions, labels diff --git a/spaces/Harsha86390/mygenaichatgpt/README.md b/spaces/Harsha86390/mygenaichatgpt/README.md deleted file mode 100644 index 192da35f7b1f227368d9f48815239c531ca28743..0000000000000000000000000000000000000000 --- a/spaces/Harsha86390/mygenaichatgpt/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mygenaichatgpt -emoji: 😻 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Heisenberg08/Ai_Portrait_Mode/dataset.py b/spaces/Heisenberg08/Ai_Portrait_Mode/dataset.py deleted file mode 100644 index f4138ea75f1587fb8a53f75adf02ab2c33751b4c..0000000000000000000000000000000000000000 --- a/spaces/Heisenberg08/Ai_Portrait_Mode/dataset.py +++ /dev/null @@ -1,36 +0,0 @@ -import torch -from torch.utils.data.dataloader import DataLoader,Dataset -import torch.optim as optim -import albumentations as A -from albumentations.pytorch import ToTensorV2 - -import numpy as np -import matplotlib.pyplot as plt -import os -from PIL import Image - -class Segmentation_Dataset(Dataset): - def __init__(self,img_dir,mask_dir,transform=None): - self.img_dir=img_dir - self.mask_dir=mask_dir - self.transform=transform - self.images=os.listdir(img_dir) - self.images=[im for im in self.images if ".jpg" in im] - def __len__(self): - return len(self.images) - - def __getitem__(self,idx): - img_path=os.path.join(self.img_dir,self.images[idx]) - mask_path=os.path.join(self.mask_dir,self.images[idx].replace(".jpg",".png")) - - image=np.array(Image.open(img_path).convert("RGB")) - mask=np.array(Image.open(mask_path).convert("L"),dtype=np.float32) - mask[mask==255]=1.0 - - if self.transform is not None: - augmentations=self.transform(image=image,mask=mask) - image=augmentations["image"] - mask=augmentations["mask"] - - return image, mask - \ No newline 
at end of file diff --git a/spaces/Hila/RobustViT/ViT/weight_init.py b/spaces/Hila/RobustViT/ViT/weight_init.py deleted file mode 100644 index 616373c3c1d0e9dc9cac51f85d791346e2240c99..0000000000000000000000000000000000000000 --- a/spaces/Hila/RobustViT/ViT/weight_init.py +++ /dev/null @@ -1,60 +0,0 @@ -import torch -import math -import warnings - - -def _no_grad_trunc_normal_(tensor, mean, std, a, b): - # Cut & paste from PyTorch official master until it's in a few official releases - RW - # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn("mean is more than 2 std from [a, b] in nn.init.trunc_normal_. " - "The distribution of values may be incorrect.", - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - l = norm_cdf((a - mean) / std) - u = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [l, u], then translate to - # [2l-1, 2u-1]. - tensor.uniform_(2 * l - 1, 2 * u - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.): - # type: (Tensor, float, float, float, float) -> Tensor - r"""Fills the input Tensor with values drawn from a truncated - normal distribution. The values are effectively drawn from the - normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` - with values outside :math:`[a, b]` redrawn until they are within - the bounds. The method used for generating the random values works - best when :math:`a \leq \text{mean} \leq b`. 
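-    (In this implementation the values are in fact drawn directly via the inverse CDF, so no rejection or redraw loop is involved; see the comments in _no_grad_trunc_normal_ above.)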
- Args: - tensor: an n-dimensional `torch.Tensor` - mean: the mean of the normal distribution - std: the standard deviation of the normal distribution - a: the minimum cutoff value - b: the maximum cutoff value - Examples: - >>> w = torch.empty(3, 5) - >>> nn.init.trunc_normal_(w) - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) \ No newline at end of file diff --git a/spaces/Hila/RobustViT/segmentation_dataset.py b/spaces/Hila/RobustViT/segmentation_dataset.py deleted file mode 100644 index 285400bffbeb5aa24121e13dfefb220fad01d22a..0000000000000000000000000000000000000000 --- a/spaces/Hila/RobustViT/segmentation_dataset.py +++ /dev/null @@ -1,141 +0,0 @@ -import json -from torch.utils import data -from torchvision.datasets import ImageFolder -import torch -import os -from PIL import Image -import numpy as np -import argparse -from tqdm import tqdm -from munkres import Munkres -import multiprocessing -from multiprocessing import Process, Manager -import collections -import torchvision.transforms as transforms -import torchvision.transforms.functional as TF -import random -import torchvision -import cv2 -import random -torch.manual_seed(0) - -SegItem = collections.namedtuple('SegItem', ('image_name', 'tag')) - -normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5], - std=[0.5, 0.5, 0.5]) - -TRANSFORM_TRAIN = transforms.Compose([ - transforms.RandomResizedCrop(224), - transforms.RandomHorizontalFlip(), - ]) - -TRANSFORM_EVAL = transforms.Compose([ - transforms.Resize(256), - transforms.CenterCrop(224), -]) - -IMAGE_TRANSFORMS = transforms.Compose([ - transforms.ToTensor(), - normalize -]) - -MERGED_TAGS = {'n04356056', 'n04355933', - 'n04493381', 'n02808440', - 'n03642806', 'n03832673', - 'n04008634', 'n03773504', - 'n03887697', 'n15075141'} - -TRAIN_PARTITION = "train" -VAL_PARTITION = "val" -LEGAL_PARTITIONS = {TRAIN_PARTITION, VAL_PARTITION} - -# TRAIN_CLASSES = 500 - -class SegmentationDataset(ImageFolder): - def __init__(self, seg_path, imagenet_path, partition=TRAIN_PARTITION, num_samples=2, train_classes=500 - , imagenet_classes_path='imagenet_classes.json', seed=None): - assert partition in LEGAL_PARTITIONS - self._partition = partition - self._seg_path = seg_path - self._imagenet_path = imagenet_path - with open(imagenet_classes_path, 'r') as f: - self._imagenet_classes = json.load(f) - self._tag_list = [tag for tag in os.listdir(self._seg_path) if tag not in MERGED_TAGS] - if seed: - print(f'Shuffling training classes with seed {seed}') - random.seed(seed) - random.shuffle(self._tag_list) - if partition == TRAIN_PARTITION: - # Skip merged tags - self._tag_list = self._tag_list[:train_classes] - elif partition == VAL_PARTITION: - # Skip merged tags - self._tag_list = self._tag_list[train_classes:] - for tag in self._tag_list: - assert tag in self._imagenet_classes - self._all_segementations = [] - for tag in self._tag_list: - base_dir = os.path.join(self._seg_path, tag) - for i, seg in enumerate(os.listdir(base_dir)): - if i >= num_samples: - break - self._all_segementations.append(SegItem(seg.split('.')[0], tag)) - - def __getitem__(self, item): - seg_item = self._all_segementations[item] - - seg_path = os.path.join(self._seg_path, seg_item.tag, seg_item.image_name + ".png") - image_path = os.path.join(self._imagenet_path, seg_item.tag, seg_item.image_name + ".JPEG") - - seg_map = Image.open(seg_path) - image = Image.open(image_path) - image = image.convert('RGB') - - seg_map = np.array(seg_map) - seg_map = seg_map[:, :, 1] * 256 + seg_map[:, :, 0] - - assert 
len([cand for cand in np.unique(seg_map) if cand != 0 and cand != 1000]) == 1 - - # Convert to binary seg maps - seg_map[seg_map == 1000] = 0 - seg_map[seg_map != 0] = 1 - - seg_map = torch.from_numpy(seg_map.astype(np.float32)) - - # transforms - start - seg_map = seg_map.reshape(1, seg_map.shape[-2], seg_map.shape[-1]) - - if self._partition == VAL_PARTITION: - image = TRANSFORM_EVAL(image) - seg_map = TRANSFORM_EVAL(seg_map) - - elif self._partition == TRAIN_PARTITION: - # Resize - resize = transforms.Resize(size=(256, 256)) - image = resize(image) - seg_map = resize(seg_map) - - # Random crop - i, j, h, w = transforms.RandomCrop.get_params( - image, output_size=(224, 224)) - image = TF.crop(image, i, j, h, w) - seg_map = TF.crop(seg_map, i, j, h, w) - - # RandomHorizontalFlip - if random.random() > 0.5: - image = TF.hflip(image) - seg_map = TF.hflip(seg_map) - - else: - raise Exception(f"Unsupported partition type {self._partition}") - - # normalize original image and turn to tensor - image_ten = IMAGE_TRANSFORMS(image) - # transforms - end - - class_name = int(self._imagenet_classes[seg_item.tag]) - - return seg_map, image_ten, class_name - - def __len__(self): - return len(self._all_segementations) \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/docs/covost_example.md b/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/docs/covost_example.md deleted file mode 100644 index 16447f041e4751f79d9f7848b33ef2ff943d63c2..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/docs/covost_example.md +++ /dev/null @@ -1,102 +0,0 @@ -[[Back]](..) - -# S2T Example: ST on CoVoST -We replicate the experiments in -[CoVoST 2 and Massively Multilingual Speech-to-Text Translation (Wang et al., 2020)](https://arxiv.org/abs/2007.10310). - -## Data Preparation -[Download](https://commonvoice.mozilla.org/en/datasets) and unpack Common Voice v4 to a path -`${COVOST_ROOT}/${SOURCE_LANG_ID}`, then preprocess it with -```bash -# additional Python packages for S2T data processing/model training -pip install pandas torchaudio sentencepiece - -# En ASR -python examples/speech_to_text/prep_covost_data.py \ - --data-root ${COVOST_ROOT} --vocab-type char --src-lang en -# ST -python examples/speech_to_text/prep_covost_data.py \ - --data-root ${COVOST_ROOT} --vocab-type char \ - --src-lang fr --tgt-lang en -``` -The generated files (manifest, features, vocabulary and data configuration) will be added to -`${COVOST_ROOT}/${SOURCE_LANG_ID}`. 
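-
-To prepare several ST pairs in one go, the same command can be looped. A minimal sketch, assuming `${COVOST_ROOT}` is set as above and that the Fr/De/Es/Ca-to-En pairs from the results tables below are the ones you need (adjust the list accordingly):
-```bash
-for SRC in fr de es ca; do
-    python examples/speech_to_text/prep_covost_data.py \
-        --data-root ${COVOST_ROOT} --vocab-type char \
-        --src-lang ${SRC} --tgt-lang en
-done
-```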
- -Download our vocabulary files if you want to use our pre-trained models: -- ASR: [En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_asr_vocab_char.zip) -- ST: [Fr-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_fr_en_st_vocab_char.zip), [De-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_de_en_st_vocab_char.zip), [Es-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_es_en_st_vocab_char.zip), [Ca-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_ca_en_st_vocab_char.zip), [En-De](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_de_st_vocab_char.zip), [En-Ca](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_ca_st_vocab_char.zip), [En-Fa](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_fa_st_vocab_char.zip), [En-Et](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_et_st_vocab_char.zip) - -## ASR -#### Training -We train an En ASR model for encoder pre-training of all ST models: -```bash -fairseq-train ${COVOST_ROOT}/en \ - --config-yaml config_asr_en.yaml --train-subset train_asr_en --valid-subset dev_asr_en \ - --save-dir ${ASR_SAVE_DIR} --num-workers 4 --max-tokens 50000 --max-update 60000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --report-accuracy --arch s2t_transformer_s --dropout 0.15 --optimizer adam --lr 2e-3 \ - --lr-scheduler inverse_sqrt --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 -``` -where `ASR_SAVE_DIR` is the checkpoint root path. We set `--update-freq 8` to simulate 8 GPUs with 1 GPU. -You may want to update it accordingly when using more than 1 GPU. - -#### Inference & Evaluation -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ASR_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}" -fairseq-generate ${COVOST_ROOT}/en \ - --config-yaml config_asr_en.yaml --gen-subset test_asr_en --task speech_to_text \ - --path ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \ - --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct -``` -#### Results -| --arch | Params | En | Model | -|---|---|---|---| -| s2t_transformer_s | 31M | 25.6 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_asr_transformer_s.pt) | - -## ST -#### Training -Fr-En as example: -```bash -fairseq-train ${COVOST_ROOT}/fr \ - --config-yaml config_st_fr_en.yaml --train-subset train_st_fr_en --valid-subset dev_st_fr_en \ - --save-dir ${ST_SAVE_DIR} --num-workers 4 --max-update 30000 --max-tokens 40000 \ # --max-tokens 50000 for en-* - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --encoder-freezing-updates 1000 --optimizer adam --lr 2e-3 \ - --lr-scheduler inverse_sqrt --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 \ - --load-pretrained-encoder-from ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} -``` -where `ST_SAVE_DIR` is the checkpoint root path. The ST encoder is pre-trained by En ASR for faster training and better -performance: `--load-pretrained-encoder-from `. We set `--update-freq 8` to simulate 8 GPUs with 1 GPU. -You may want to update it accordingly when using more than 1 GPU. 
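-
-For example, to keep the effective batch at 8 GPU-equivalents (a sketch; `NUM_GPUS` is a hypothetical placeholder for however many GPUs your machine actually has):
-```bash
-NUM_GPUS=2                       # hypothetical GPU count
-UPDATE_FREQ=$((8 / NUM_GPUS))    # keeps NUM_GPUS x update-freq = 8
-# then pass --update-freq ${UPDATE_FREQ} to fairseq-train in place of --update-freq 8
-```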
- -#### Inference & Evaluation -Average the last 10 checkpoints and evaluate on test split: -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ST_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ST_SAVE_DIR}/${CHECKPOINT_FILENAME}" -fairseq-generate ${COVOST_ROOT}/fr \ - --config-yaml config_st_fr_en.yaml --gen-subset test_st_fr_en --task speech_to_text \ - --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 --scoring sacrebleu -``` - -## Interactive Decoding -Launch the interactive console via -```bash -fairseq-interactive ${COVOST_ROOT}/fr --config-yaml config_st_fr_en.yaml \ - --task speech_to_text --path ${SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 -``` -Type in WAV/FLAC/OGG audio paths (one per line) after the prompt. - -#### Results -| --arch | Params | Fr-En | De-En | Es-En | Ca-En | En-De | En-Ca | En-Fa | En-Et | Model | -|---|---|---|---|---|---|---|---|---|---|---| -| s2t_transformer_s | 31M | [27.2](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_fr_en_st_transformer_s.pt) | [17.7](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_de_en_st_transformer_s.pt) | [23.1](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_es_en_st_transformer_s.pt) | [19.3](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_ca_en_st_transformer_s.pt) | [16.1](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_de_st_transformer_s.pt) | [21.6](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_ca_st_transformer_s.pt) | [12.9](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_fa_st_transformer_s.pt) | [12.8](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_et_st_transformer_s.pt) | (<-Download) | - -[[Back]](..) diff --git a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/sort_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/sort_dataset.py deleted file mode 100644 index b3890e7279e1f26db2e48ec0a91c639e9299d60f..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/sort_dataset.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np - -from . 
import BaseWrapperDataset - - -class SortDataset(BaseWrapperDataset): - def __init__(self, dataset, sort_order): - super().__init__(dataset) - if not isinstance(sort_order, (list, tuple)): - sort_order = [sort_order] - self.sort_order = sort_order - - assert all(len(so) == len(dataset) for so in sort_order) - - def ordered_indices(self): - return np.lexsort(self.sort_order) diff --git a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/transforms.py b/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - 
inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * 
theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Illumotion/Koboldcpp/examples/chat-vicuna.sh b/spaces/Illumotion/Koboldcpp/examples/chat-vicuna.sh deleted file mode 100644 index 8c7b7bef42784d3037c377e71fc20e08a7302883..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/chat-vicuna.sh +++ /dev/null @@ -1,41 +0,0 @@ -#!/bin/bash - -set -e - -cd "$(dirname "$0")/.." || exit - -MODEL="${MODEL:-./models/ggml-vic13b-uncensored-q5_0.bin}" -PROMPT_TEMPLATE=${PROMPT_TEMPLATE:-./prompts/chat.txt} -USER_NAME="### Human" -AI_NAME="### Assistant" - -# Adjust to the number of CPU cores you want to use. -N_THREAD="${N_THREAD:-8}" -# Number of tokens to predict (made it larger than default because we want a long interaction) -N_PREDICTS="${N_PREDICTS:-2048}" - -# Note: you can also override the generation options by specifying them on the command line: -# For example, override the context size by doing: ./chatLLaMa --ctx_size 1024 -GEN_OPTIONS="${GEN_OPTIONS:---ctx_size 2048 --temp 0.7 --top_k 40 --top_p 0.5 --repeat_last_n 256 --batch_size 1024 --repeat_penalty 1.17647}" - -DATE_TIME=$(date +%H:%M) -DATE_YEAR=$(date +%Y) - -PROMPT_FILE=$(mktemp -t llamacpp_prompt.XXXXXXX.txt) - -sed -e "s/\[\[USER_NAME\]\]/$USER_NAME/g" \ - -e "s/\[\[AI_NAME\]\]/$AI_NAME/g" \ - -e "s/\[\[DATE_TIME\]\]/$DATE_TIME/g" \ - -e "s/\[\[DATE_YEAR\]\]/$DATE_YEAR/g" \ - $PROMPT_TEMPLATE > $PROMPT_FILE - -# shellcheck disable=SC2086 # Intended splitting of GEN_OPTIONS -./bin/main $GEN_OPTIONS \ - --model "$MODEL" \ - --threads "$N_THREAD" \ - --n_predict "$N_PREDICTS" \ - --color --interactive \ - --file ${PROMPT_FILE} \ - --reverse-prompt "### Human:" \ - --in-prefix ' ' \ - "$@" diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/utils.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/utils.py deleted file mode 100644 index f337db7db54c82be041698d694e1403e8918c4c0..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/utils.py +++ /dev/null @@ -1,40 +0,0 @@ -"""Modified from https://github.com/CSAILVision/semantic-segmentation-pytorch""" - -import os -import sys - -import numpy as np -import torch - -try: - from urllib import urlretrieve -except ImportError: - from urllib.request import urlretrieve - - -def load_url(url, model_dir='./pretrained', map_location=None): - if not os.path.exists(model_dir): - os.makedirs(model_dir) - filename = url.split('/')[-1] - cached_file = os.path.join(model_dir, filename) - if not os.path.exists(cached_file): - sys.stderr.write('Downloading: "{}" to {}\n'.format(url, cached_file)) - urlretrieve(url, cached_file) - return torch.load(cached_file, map_location=map_location) - - -def color_encode(labelmap, colors, mode='RGB'): - labelmap = labelmap.astype('int') - labelmap_rgb = np.zeros((labelmap.shape[0], labelmap.shape[1], 3), - dtype=np.uint8) - for label in np.unique(labelmap): - if label < 0: - continue - labelmap_rgb += (labelmap == label)[:, :, np.newaxis] * \ - np.tile(colors[label], - (labelmap.shape[0], labelmap.shape[1], 1)) - - if mode == 'BGR': - return labelmap_rgb[:, :, ::-1] - else: - return labelmap_rgb diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/monkey_patch_non_inplace.py b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/monkey_patch_non_inplace.py deleted file mode 100644 index 
9661d70751261a11bbc33b57967efcf09d3cbe0c..0000000000000000000000000000000000000000 --- a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/monkey_patch_non_inplace.py +++ /dev/null @@ -1,118 +0,0 @@ -""" -Monkey patch the llama implementation in the huggingface/transformers library. -Avoid bugs in mps backend by not using in-place operations. -""" -import math -from typing import List, Optional, Tuple - -import torch -from torch import nn -import transformers - - -def rotate_half(x): - """Rotates half the hidden dims of the input.""" - x1 = x[..., : x.shape[-1] // 2].clone() - x2 = x[..., x.shape[-1] // 2 :].clone() - return torch.cat((-x2, x1), dim=-1) - - -def apply_rotary_pos_emb(q, k, cos, sin, position_ids): - gather_indices = position_ids[:, None, :, None] # [bs, 1, seq_len, 1] - gather_indices = gather_indices.repeat(1, cos.shape[1], 1, cos.shape[3]) - cos = torch.gather(cos.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices) - sin = torch.gather(sin.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices) - q_embed = (q * cos) + (rotate_half(q) * sin) - k_embed = (k * cos) + (rotate_half(k) * sin) - return q_embed, k_embed - - -def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: bool = False, - use_cache: bool = False, -) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - bsz, q_len, _ = hidden_states.size() - - query_states = ( - self.q_proj(hidden_states) - .view(bsz, q_len, self.num_heads, self.head_dim) - .transpose(1, 2) - ) - key_states = ( - self.k_proj(hidden_states) - .view(bsz, q_len, self.num_heads, self.head_dim) - .transpose(1, 2) - ) - value_states = ( - self.v_proj(hidden_states) - .view(bsz, q_len, self.num_heads, self.head_dim) - .transpose(1, 2) - ) - - kv_seq_len = key_states.shape[-2] - if past_key_value is not None: - kv_seq_len += past_key_value[0].shape[-2] - cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) - query_states, key_states = apply_rotary_pos_emb( - query_states, key_states, cos, sin, position_ids - ) - # [bsz, nh, t, hd] - - if past_key_value is not None: - # reuse k, v, self_attention - key_states = torch.cat([past_key_value[0], key_states], dim=2) - value_states = torch.cat([past_key_value[1], value_states], dim=2) - - past_key_value = (key_states, value_states) if use_cache else None - - attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt( - self.head_dim - ) - - if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len): - raise ValueError( - f"Attention weights should be of size {(bsz * self.num_heads, q_len, kv_seq_len)}, but is" - f" {attn_weights.size()}" - ) - - if attention_mask is not None: - if attention_mask.size() != (bsz, 1, q_len, kv_seq_len): - raise ValueError( - f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}" - ) - attn_weights = attn_weights + attention_mask - attn_weights = torch.max( - attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min) - ) - - # upcast attention to fp32 - attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to( - query_states.dtype - ) - attn_output = torch.matmul(attn_weights, value_states) - - if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim): - raise ValueError( - f"`attn_output` should be of size {(bsz, self.num_heads, q_len, 
self.head_dim)}, but is" - f" {attn_output.size()}" - ) - - attn_output = attn_output.transpose(1, 2) - attn_output = attn_output.reshape(bsz, q_len, self.hidden_size) - - attn_output = self.o_proj(attn_output) - - if not output_attentions: - attn_weights = None - - return attn_output, attn_weights, past_key_value - - -def replace_llama_attn_with_non_inplace_operations(): - """Avoid bugs in mps backend by not using in-place operations.""" - transformers.models.llama.modeling_llama.LlamaAttention.forward = forward diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/test_throughput.py b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/test_throughput.py deleted file mode 100644 index 9cc5f45c7e06deb596b51213cd2667fd8361dbfd..0000000000000000000000000000000000000000 --- a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/test_throughput.py +++ /dev/null @@ -1,115 +0,0 @@ -"""Benchmarking script to test the throughput of serving workers.""" -import argparse -import json - -import requests -import threading -import time - -from fastchat.conversation import default_conversation - - -def main(): - if args.worker_address: - worker_addr = args.worker_address - else: - controller_addr = args.controller_address - ret = requests.post(controller_addr + "/refresh_all_workers") - ret = requests.post(controller_addr + "/list_models") - models = ret.json()["models"] - models.sort() - print(f"Models: {models}") - - ret = requests.post( - controller_addr + "/get_worker_address", json={"model": args.model_name} - ) - worker_addr = ret.json()["address"] - print(f"worker_addr: {worker_addr}") - - if worker_addr == "": - return - - conv = default_conversation.copy() - conv.append_message(conv.roles[0], "Tell me a story with more than 1000 words") - prompt_template = conv.get_prompt() - prompts = [prompt_template for _ in range(args.n_thread)] - - headers = {"User-Agent": "fastchat Client"} - ploads = [ - { - "model": args.model_name, - "prompt": prompts[i], - "max_new_tokens": args.max_new_tokens, - "temperature": 0.0, - # "stop": conv.sep, - } - for i in range(len(prompts)) - ] - - def send_request(results, i): - if args.test_dispatch: - ret = requests.post( - controller_addr + "/get_worker_address", json={"model": args.model_name} - ) - thread_worker_addr = ret.json()["address"] - else: - thread_worker_addr = worker_addr - print(f"thread {i} goes to {thread_worker_addr}") - response = requests.post( - thread_worker_addr + "/worker_generate_stream", - headers=headers, - json=ploads[i], - stream=False, - ) - k = list( - response.iter_lines(chunk_size=8192, decode_unicode=False, delimiter=b"\0") - ) - # print(k) - response_new_words = json.loads(k[-2].decode("utf-8"))["text"] - error_code = json.loads(k[-2].decode("utf-8"))["error_code"] - # print(f"=== Thread {i} ===, words: {1}, error code: {error_code}") - results[i] = len(response_new_words.split(" ")) - len(prompts[i].split(" ")) - - # use N threads to prompt the backend - tik = time.time() - threads = [] - results = [None] * args.n_thread - for i in range(args.n_thread): - t = threading.Thread(target=send_request, args=(results, i)) - t.start() - # time.sleep(0.5) - threads.append(t) - - for t in threads: - t.join() - - print(f"Time (POST): {time.time() - tik} s") - # n_words = 0 - # for i, response in enumerate(results): - # # print(prompt[i].replace(conv.sep, "\n"), end="") - # # make sure the streaming finishes at EOS or stopping criteria - # k = list(response.iter_lines(chunk_size=8192, decode_unicode=False, delimiter=b"\0")) - # 
response_new_words = json.loads(k[-2].decode("utf-8"))["text"] - # # print(response_new_words) - # n_words += len(response_new_words.split(" ")) - len(prompts[i].split(" ")) - n_words = sum(results) - time_seconds = time.time() - tik - print( - f"Time (Completion): {time_seconds}, n threads: {args.n_thread}, " - f"throughput: {n_words / time_seconds} words/s." - ) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument( - "--controller-address", type=str, default="http://localhost:21001" - ) - parser.add_argument("--worker-address", type=str) - parser.add_argument("--model-name", type=str, default="vicuna") - parser.add_argument("--max-new-tokens", type=int, default=2048) - parser.add_argument("--n-thread", type=int, default=8) - parser.add_argument("--test-dispatch", action="store_true") - args = parser.parse_args() - - main() diff --git a/spaces/JUNGU/pixera_gen/examples/pixelArt/combine.py b/spaces/JUNGU/pixera_gen/examples/pixelArt/combine.py deleted file mode 100644 index 669a3752045c556f3bcd7aaa2c8b35bc536be136..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/pixera_gen/examples/pixelArt/combine.py +++ /dev/null @@ -1,29 +0,0 @@ -import cv2 -import numpy as np - -class combine: - #Author: Alican Akca - def __init__(self, size = (400,300),images = [],background_image = None): - self.size = size - self.images = images - self.background_image = background_image - - def combiner(self,images,background_image): - original = images[0] - masked = images[1] - background = cv2.resize(background_image,(images[0].shape[1],images[0].shape[0])) - result = blend_images_using_mask(original, background, masked) - return result - -def mix_pixel(pix_1, pix_2, perc): - - return (perc/255 * pix_1) + ((255 - perc)/255 * pix_2) - -def blend_images_using_mask(img_orig, img_for_overlay, img_mask): - - if len(img_mask.shape) != 3: - img_mask = cv2.cvtColor(img_mask, cv2.COLOR_GRAY2BGR) - - img_res = mix_pixel(img_orig, img_for_overlay, img_mask) - - return cv2.cvtColor(img_res.astype(np.uint8), cv2.COLOR_BGR2RGB) \ No newline at end of file diff --git a/spaces/Jamkonams/AutoGPT/autogpt/workspace.py b/spaces/Jamkonams/AutoGPT/autogpt/workspace.py deleted file mode 100644 index 6fb0e3113eb2c1338edf7f86c6e162fc27c61e50..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/autogpt/workspace.py +++ /dev/null @@ -1,47 +0,0 @@ -from __future__ import annotations - -import os -from pathlib import Path - -from autogpt.config import Config - -CFG = Config() - -# Set a dedicated folder for file I/O -WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace" - -# Create the directory if it doesn't exist -if not os.path.exists(WORKSPACE_PATH): - os.makedirs(WORKSPACE_PATH) - - -def path_in_workspace(relative_path: str | Path) -> Path: - """Get full path for item in workspace - - Parameters: - relative_path (str | Path): Path to translate into the workspace - - Returns: - Path: Absolute path for the given path in the workspace - """ - return safe_path_join(WORKSPACE_PATH, relative_path) - - -def safe_path_join(base: Path, *paths: str | Path) -> Path: - """Join one or more path components, asserting the resulting path is within the workspace. 
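-    A ValueError is raised if CFG.restrict_to_workspace is enabled and the resolved path falls outside the base path.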
- - Args: - base (Path): The base path - *paths (str): The paths to join to the base path - - Returns: - Path: The joined path - """ - joined_path = base.joinpath(*paths).resolve() - - if CFG.restrict_to_workspace and not joined_path.is_relative_to(base): - raise ValueError( - f"Attempted to access path '{joined_path}' outside of workspace '{base}'." - ) - - return joined_path diff --git a/spaces/JeffJing/ZookChatBot/steamship/base/mime_types.py b/spaces/JeffJing/ZookChatBot/steamship/base/mime_types.py deleted file mode 100644 index 9b3c94ac2dc3aab12e402c5588c56f56769ef59f..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/steamship/base/mime_types.py +++ /dev/null @@ -1,42 +0,0 @@ -from enum import Enum - - -class MimeTypes(str, Enum): - UNKNOWN = "unknown" - TXT = "text/plain" - JSON = "application/json" - MKD = "text/markdown" - EPUB = "application/epub+zip" - PDF = "application/pdf" - JPG = "image/jpeg" - PNG = "image/png" - TIFF = "image/tiff" - GIF = "image/gif" - HTML = "text/html" - DOC = "application/msword" - DOCX = "application/vnd.openxmlformats-officedocument.wordprocessingml.document" - PPT = "applicatino/ms-powerpoint" - PPTX = "application/vnd.openxmlformats-officedocument.presentationml.presentation" - RTF = "application/rtf" - BINARY = "application/octet-stream" - STEAMSHIP_BLOCK_JSON = "application/vnd.steamship-block.json.v1" - WAV = "audio/wav" - MP3 = "audio/mp3" - MP4_VIDEO = "video/mp4" - MP4_AUDIO = "audio/mp4" - WEBM_VIDEO = "video/webm" - WEBM_AUDIO = "audio/webm" - FILE_JSON = "fileJson" - - -class ContentEncodings: - BASE64 = "base64" - - -TEXT_MIME_TYPES = [ - MimeTypes.TXT, - MimeTypes.MKD, - MimeTypes.HTML, - MimeTypes.DOCX, - MimeTypes.PPTX, -] diff --git a/spaces/Jerkinjankins/ogkalu-Comic-Diffusion/README.md b/spaces/Jerkinjankins/ogkalu-Comic-Diffusion/README.md deleted file mode 100644 index 314c1fceab39311153e965a8c9b6ba501997bccb..0000000000000000000000000000000000000000 --- a/spaces/Jerkinjankins/ogkalu-Comic-Diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Ogkalu Comic Diffusion -emoji: 😻 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Jerry0203/sentence_embedding/README.md b/spaces/Jerry0203/sentence_embedding/README.md deleted file mode 100644 index 3ca227c7ee84ea9e3bf4d7c34b24224a2f456e6b..0000000000000000000000000000000000000000 --- a/spaces/Jerry0203/sentence_embedding/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sentence Embedding -emoji: 📉 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/JoPmt/Txt-to-video/README.md b/spaces/JoPmt/Txt-to-video/README.md deleted file mode 100644 index c978e0d78b22fd146e1dee39e3655601dba4bfea..0000000000000000000000000000000000000000 --- a/spaces/JoPmt/Txt-to-video/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Txt To Video -emoji: ⚡ -colorFrom: blue -colorTo: gray -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/JohnC26/ChatGPTwithAPI/README.md b/spaces/JohnC26/ChatGPTwithAPI/README.md deleted file mode 100644 index 5e9db9ee137f91124dc76c9ed996db9fff3477d5..0000000000000000000000000000000000000000 --- 
a/spaces/JohnC26/ChatGPTwithAPI/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChatGPTwithAPI -emoji: 🚀 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.20.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ysharma/ChatGPTwithAPI ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/utils/abs_model.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/utils/abs_model.py deleted file mode 100644 index b6d27a6df74c6988dd4355cbef149ed90f3a36cf..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/utils/abs_model.py +++ /dev/null @@ -1,23 +0,0 @@ -from abc import ABC -from abc import abstractmethod - -import torch - -class AbsMelDecoder(torch.nn.Module, ABC): - """The abstract PPG-based voice conversion class - This "model" is one of mediator objects for "Task" class. - - """ - - @abstractmethod - def forward( - self, - bottle_neck_features: torch.Tensor, - feature_lengths: torch.Tensor, - speech: torch.Tensor, - speech_lengths: torch.Tensor, - logf0_uv: torch.Tensor = None, - spembs: torch.Tensor = None, - styleembs: torch.Tensor = None, - ) -> torch.Tensor: - raise NotImplementedError diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/utils/modelutils.py b/spaces/Kevin676/Real-Time-Voice-Cloning/utils/modelutils.py deleted file mode 100644 index 6acaa984e0c7876f9149fc1ff99001b7761dc80b..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Real-Time-Voice-Cloning/utils/modelutils.py +++ /dev/null @@ -1,17 +0,0 @@ -from pathlib import Path - -def check_model_paths(encoder_path: Path, synthesizer_path: Path, vocoder_path: Path): - # This function tests the model paths and makes sure at least one is valid. - if encoder_path.is_file() or encoder_path.is_dir(): - return - if synthesizer_path.is_file() or synthesizer_path.is_dir(): - return - if vocoder_path.is_file() or vocoder_path.is_dir(): - return - - # If none of the paths exist, remind the user to download models if needed - print("********************************************************************************") - print("Error: Model files not found. Follow these instructions to get and install the models:") - print("https://github.com/CorentinJ/Real-Time-Voice-Cloning/wiki/Pretrained-models") - print("********************************************************************************\n") - quit(-1) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/utils/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/models/utils/__init__.py deleted file mode 100644 index af3b2448dbeae8eed8e0b579b7bbc159a623fa3c..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/utils/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
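-# Convenience aggregator for mmdet.models.utils: re-exports the gaussian-target, make_divisible, misc, panoptic-GT and point-sample helpers listed in __all__ below.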
-from .gaussian_target import (gather_feat, gaussian_radius, - gen_gaussian_target, get_local_maximum, - get_topk_from_heatmap, transpose_and_gather_feat) -from .make_divisible import make_divisible -from .misc import (aligned_bilinear, center_of_mass, empty_instances, - filter_gt_instances, filter_scores_and_topk, flip_tensor, - generate_coordinate, images_to_levels, interpolate_as, - levels_to_images, mask2ndarray, multi_apply, - relative_coordinate_maps, rename_loss_dict, - reweight_loss_dict, samplelist_boxtype2tensor, - select_single_mlvl, sigmoid_geometric_mean, - unfold_wo_center, unmap, unpack_gt_instances) -from .panoptic_gt_processing import preprocess_panoptic_gt -from .point_sample import (get_uncertain_point_coords_with_randomness, - get_uncertainty) - -__all__ = [ - 'gaussian_radius', 'gen_gaussian_target', 'make_divisible', - 'get_local_maximum', 'get_topk_from_heatmap', 'transpose_and_gather_feat', - 'interpolate_as', 'sigmoid_geometric_mean', 'gather_feat', - 'preprocess_panoptic_gt', 'get_uncertain_point_coords_with_randomness', - 'get_uncertainty', 'unpack_gt_instances', 'empty_instances', - 'center_of_mass', 'filter_scores_and_topk', 'flip_tensor', - 'generate_coordinate', 'levels_to_images', 'mask2ndarray', 'multi_apply', - 'select_single_mlvl', 'unmap', 'images_to_levels', - 'samplelist_boxtype2tensor', 'filter_gt_instances', 'rename_loss_dict', - 'reweight_loss_dict', 'relative_coordinate_maps', 'aligned_bilinear', - 'unfold_wo_center' -] diff --git a/spaces/LanguageBind/LanguageBind/languagebind/video/processing_video.py b/spaces/LanguageBind/LanguageBind/languagebind/video/processing_video.py deleted file mode 100644 index fdea0fd4fffa8eb6d4fff6b600ee02e7abe45c06..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/languagebind/video/processing_video.py +++ /dev/null @@ -1,161 +0,0 @@ -import cv2 -import decord -import numpy as np -import torch -from PIL import Image -from decord import VideoReader, cpu -from torchvision import transforms -from transformers import ProcessorMixin, BatchEncoding -from transformers.image_processing_utils import BatchFeature -from pytorchvideo.data.encoded_video import EncodedVideo -from torchvision.transforms import Compose, Lambda, ToTensor -from torchvision.transforms._transforms_video import NormalizeVideo, RandomCropVideo, RandomHorizontalFlipVideo, CenterCropVideo -from pytorchvideo.transforms import ApplyTransformToKey, ShortSideScale, UniformTemporalSubsample - -decord.bridge.set_bridge('torch') - -OPENAI_DATASET_MEAN = (0.48145466, 0.4578275, 0.40821073) -OPENAI_DATASET_STD = (0.26862954, 0.26130258, 0.27577711) - -def make_list_of_images(x): - if not isinstance(x, list): - return [x] - return x - -def get_video_transform(config): - config = config.vision_config - if config.video_decode_backend == 'pytorchvideo': - transform = ApplyTransformToKey( - key="video", - transform=Compose( - [ - UniformTemporalSubsample(config.num_frames), - Lambda(lambda x: x / 255.0), - NormalizeVideo(mean=OPENAI_DATASET_MEAN, std=OPENAI_DATASET_STD), - ShortSideScale(size=224), - CenterCropVideo(224), - RandomHorizontalFlipVideo(p=0.5), - ] - ), - ) - - elif config.video_decode_backend == 'decord': - - transform = Compose( - [ - # UniformTemporalSubsample(num_frames), - Lambda(lambda x: x / 255.0), - NormalizeVideo(mean=OPENAI_DATASET_MEAN, std=OPENAI_DATASET_STD), - ShortSideScale(size=224), - CenterCropVideo(224), - RandomHorizontalFlipVideo(p=0.5), - ] - ) - - elif config.video_decode_backend == 'opencv': - transform 
= Compose( - [ - # UniformTemporalSubsample(num_frames), - Lambda(lambda x: x / 255.0), - NormalizeVideo(mean=OPENAI_DATASET_MEAN, std=OPENAI_DATASET_STD), - ShortSideScale(size=224), - CenterCropVideo(224), - RandomHorizontalFlipVideo(p=0.5), - ] - ) - else: - raise NameError('video_decode_backend should specify in (pytorchvideo, decord, opencv)') - return transform - - -def load_and_transform_video( - video_path, - transform, - video_decode_backend='opencv', - clip_start_sec=0.0, - clip_end_sec=None, - num_frames=8, -): - if video_decode_backend == 'pytorchvideo': - # decord pyav - video = EncodedVideo.from_path(video_path, decoder="decord", decode_audio=False) - duration = video.duration - start_sec = clip_start_sec # secs - end_sec = clip_end_sec if clip_end_sec is not None else duration # secs - video_data = video.get_clip(start_sec=start_sec, end_sec=end_sec) - video_outputs = transform(video_data) - - elif video_decode_backend == 'decord': - decord.bridge.set_bridge('torch') - decord_vr = VideoReader(video_path, ctx=cpu(0)) - duration = len(decord_vr) - frame_id_list = np.linspace(0, duration-1, num_frames, dtype=int) - video_data = decord_vr.get_batch(frame_id_list) - video_data = video_data.permute(3, 0, 1, 2) # (T, H, W, C) -> (C, T, H, W) - video_outputs = transform(video_data) - - elif video_decode_backend == 'opencv': - cv2_vr = cv2.VideoCapture(video_path) - duration = int(cv2_vr.get(cv2.CAP_PROP_FRAME_COUNT)) - frame_id_list = np.linspace(0, duration-1, num_frames, dtype=int) - - video_data = [] - for frame_idx in frame_id_list: - cv2_vr.set(1, frame_idx) - _, frame = cv2_vr.read() - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - video_data.append(torch.from_numpy(frame).permute(2, 0, 1)) - cv2_vr.release() - video_data = torch.stack(video_data, dim=1) - video_outputs = transform(video_data) - else: - raise NameError('video_decode_backend should specify in (pytorchvideo, decord, opencv)') - return video_outputs - -class LanguageBindVideoProcessor(ProcessorMixin): - attributes = [] - tokenizer_class = ("LanguageBindVideoTokenizer") - - def __init__(self, config, tokenizer=None, **kwargs): - super().__init__(**kwargs) - self.config = config - self.transform = get_video_transform(config) - self.image_processor = load_and_transform_video - self.tokenizer = tokenizer - - def __call__(self, images=None, text=None, context_length=77, return_tensors=None, **kwargs): - if text is None and images is None: - raise ValueError("You have to specify either text or images. Both cannot be none.") - - if text is not None: - encoding = self.tokenizer(text, max_length=context_length, padding='max_length', - truncation=True, return_tensors=return_tensors, **kwargs) - - if images is not None: - images = make_list_of_images(images) - image_features = [self.image_processor(image, self.transform, - video_decode_backend=self.config.vision_config.video_decode_backend, - num_frames=self.config.vision_config.num_frames) for image in images] - image_features = torch.stack(image_features) - - if text is not None and images is not None: - encoding["pixel_values"] = image_features - return encoding - elif text is not None: - return encoding - else: - return {"pixel_values": image_features} - - def batch_decode(self, skip_special_tokens=True, *args, **kwargs): - """ - This method forwards all its arguments to CLIPTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please - refer to the docstring of this method for more information. 
- """ - return self.tokenizer.batch_decode(*args, skip_special_tokens=skip_special_tokens, **kwargs) - - def decode(self, skip_special_tokens=True, *args, **kwargs): - """ - This method forwards all its arguments to CLIPTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to - the docstring of this method for more information. - """ - return self.tokenizer.decode(*args, skip_special_tokens=skip_special_tokens, **kwargs) diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/dlmodels.bat b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/dlmodels.bat deleted file mode 100644 index 5d80f50369b1f3ed37c045d07a9e2ce8954f09d4..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/dlmodels.bat +++ /dev/null @@ -1,348 +0,0 @@ -@echo off && chcp 65001 - -echo working dir is %cd% -echo downloading requirement aria2 check. -echo= -dir /a:d/b | findstr "aria2" > flag.txt -findstr "aria2" flag.txt >nul -if %errorlevel% ==0 ( - echo aria2 checked. - echo= -) else ( - echo failed. please downloading aria2 from webpage! - echo unzip it and put in this directory! - timeout /T 5 - start https://github.com/aria2/aria2/releases/tag/release-1.36.0 - echo= - goto end -) - -echo envfiles checking start. -echo= - -for /f %%x in ('findstr /i /c:"aria2" "flag.txt"') do (set aria2=%%x)&goto endSch -:endSch - -set d32=f0D32k.pth -set d40=f0D40k.pth -set d48=f0D48k.pth -set g32=f0G32k.pth -set g40=f0G40k.pth -set g48=f0G48k.pth - -set d40v2=f0D40k.pth -set g40v2=f0G40k.pth - -set dld32=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth -set dld40=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth -set dld48=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth -set dlg32=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth -set dlg40=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth -set dlg48=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth - -set dld40v2=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth -set dlg40v2=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth - -set hp2_all=HP2_all_vocals.pth -set hp3_all=HP3_all_vocals.pth -set hp5_only=HP5_only_main_vocal.pth -set VR_DeEchoAggressive=VR-DeEchoAggressive.pth -set VR_DeEchoDeReverb=VR-DeEchoDeReverb.pth -set VR_DeEchoNormal=VR-DeEchoNormal.pth -set onnx_dereverb=vocals.onnx - -set dlhp2_all=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2_all_vocals.pth -set dlhp3_all=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP3_all_vocals.pth -set dlhp5_only=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5_only_main_vocal.pth -set dlVR_DeEchoAggressive=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoAggressive.pth -set dlVR_DeEchoDeReverb=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoDeReverb.pth -set dlVR_DeEchoNormal=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoNormal.pth -set dlonnx_dereverb=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/onnx_dereverb_By_FoxJoy/vocals.onnx - -set hb=hubert_base.pt - -set 
dlhb=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt - -echo dir check start. -echo= - -if exist "%~dp0assets\pretrained" ( - echo dir .\assets\pretrained checked. - ) else ( - echo failed. generating dir .\assets\pretrained. - mkdir pretrained - ) -if exist "%~dp0assets\pretrained_v2" ( - echo dir .\assets\pretrained_v2 checked. - ) else ( - echo failed. generating dir .\assets\pretrained_v2. - mkdir pretrained_v2 - ) -if exist "%~dp0assets\uvr5_weights" ( - echo dir .\assets\uvr5_weights checked. - ) else ( - echo failed. generating dir .\assets\uvr5_weights. - mkdir uvr5_weights - ) -if exist "%~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy" ( - echo dir .\assets\uvr5_weights\onnx_dereverb_By_FoxJoy checked. - ) else ( - echo failed. generating dir .\assets\uvr5_weights\onnx_dereverb_By_FoxJoy. - mkdir uvr5_weights\onnx_dereverb_By_FoxJoy - ) - -echo= -echo dir check finished. - -echo= -echo required files check start. - -echo checking D32k.pth -if exist "%~dp0assets\pretrained\D32k.pth" ( - echo D32k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d %~dp0assets\pretrained -o D32k.pth - if exist "%~dp0assets\pretrained\D32k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking D40k.pth -if exist "%~dp0assets\pretrained\D40k.pth" ( - echo D40k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d %~dp0assets\pretrained -o D40k.pth - if exist "%~dp0assets\pretrained\D40k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking D40k.pth -if exist "%~dp0assets\pretrained_v2\D40k.pth" ( - echo D40k.pth in .\assets\pretrained_v2 checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d %~dp0assets\pretrained_v2 -o D40k.pth - if exist "%~dp0assets\pretrained_v2\D40k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking D48k.pth -if exist "%~dp0assets\pretrained\D48k.pth" ( - echo D48k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d %~dp0assets\pretrained -o D48k.pth - if exist "%~dp0assets\pretrained\D48k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking G32k.pth -if exist "%~dp0assets\pretrained\G32k.pth" ( - echo G32k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d %~dp0assets\pretrained -o G32k.pth - if exist "%~dp0assets\pretrained\G32k.pth" (echo download successful.) else (echo please try again! 
- echo=) - ) -echo checking G40k.pth -if exist "%~dp0assets\pretrained\G40k.pth" ( - echo G40k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d %~dp0assets\pretrained -o G40k.pth - if exist "%~dp0assets\pretrained\G40k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking G40k.pth -if exist "%~dp0assets\pretrained_v2\G40k.pth" ( - echo G40k.pth in .\assets\pretrained_v2 checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d %~dp0assets\pretrained_v2 -o G40k.pth - if exist "%~dp0assets\pretrained_v2\G40k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking G48k.pth -if exist "%~dp0assets\pretrained\G48k.pth" ( - echo G48k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d %~dp0assets\pretrained -o G48k.pth - if exist "%~dp0assets\pretrained\G48k.pth" (echo download successful.) else (echo please try again! - echo=) - ) - -echo checking %d32% -if exist "%~dp0assets\pretrained\%d32%" ( - echo %d32% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld32% -d %~dp0assets\pretrained -o %d32% - if exist "%~dp0assets\pretrained\%d32%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %d40% -if exist "%~dp0assets\pretrained\%d40%" ( - echo %d40% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld40% -d %~dp0assets\pretrained -o %d40% - if exist "%~dp0assets\pretrained\%d40%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %d40v2% -if exist "%~dp0assets\pretrained_v2\%d40v2%" ( - echo %d40v2% in .\assets\pretrained_v2 checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld40v2% -d %~dp0assets\pretrained_v2 -o %d40v2% - if exist "%~dp0assets\pretrained_v2\%d40v2%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %d48% -if exist "%~dp0assets\pretrained\%d48%" ( - echo %d48% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld48% -d %~dp0assets\pretrained -o %d48% - if exist "%~dp0assets\pretrained\%d48%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %g32% -if exist "%~dp0assets\pretrained\%g32%" ( - echo %g32% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg32% -d %~dp0assets\pretrained -o %g32% - if exist "%~dp0assets\pretrained\%g32%" (echo download successful.) else (echo please try again! 
- echo=) - ) -echo checking %g40% -if exist "%~dp0assets\pretrained\%g40%" ( - echo %g40% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg40% -d %~dp0assets\pretrained -o %g40% - if exist "%~dp0assets\pretrained\%g40%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %g40v2% -if exist "%~dp0assets\pretrained_v2\%g40v2%" ( - echo %g40v2% in .\assets\pretrained_v2 checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg40v2% -d %~dp0assets\pretrained_v2 -o %g40v2% - if exist "%~dp0assets\pretrained_v2\%g40v2%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %g48% -if exist "%~dp0assets\pretrained\%g48%" ( - echo %g48% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg48% -d %~dp0assets\pretrained -o %g48% - if exist "%~dp0assets\pretrained\%g48%" (echo download successful.) else (echo please try again! - echo=) - ) - -echo checking %hp2_all% -if exist "%~dp0assets\uvr5_weights\%hp2_all%" ( - echo %hp2_all% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhp2_all% -d %~dp0assets\uvr5_weights -o %hp2_all% - if exist "%~dp0assets\uvr5_weights\%hp2_all%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %hp3_all% -if exist "%~dp0assets\uvr5_weights\%hp3_all%" ( - echo %hp3_all% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhp3_all% -d %~dp0assets\uvr5_weights -o %hp3_all% - if exist "%~dp0assets\uvr5_weights\%hp3_all%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %hp5_only% -if exist "%~dp0assets\uvr5_weights\%hp5_only%" ( - echo %hp5_only% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhp5_only% -d %~dp0assets\uvr5_weights -o %hp5_only% - if exist "%~dp0assets\uvr5_weights\%hp5_only%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %VR_DeEchoAggressive% -if exist "%~dp0assets\uvr5_weights\%VR_DeEchoAggressive%" ( - echo %VR_DeEchoAggressive% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlVR_DeEchoAggressive% -d %~dp0assets\uvr5_weights -o %VR_DeEchoAggressive% - if exist "%~dp0assets\uvr5_weights\%VR_DeEchoAggressive%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %VR_DeEchoDeReverb% -if exist "%~dp0assets\uvr5_weights\%VR_DeEchoDeReverb%" ( - echo %VR_DeEchoDeReverb% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlVR_DeEchoDeReverb% -d %~dp0assets\uvr5_weights -o %VR_DeEchoDeReverb% - if exist "%~dp0assets\uvr5_weights\%VR_DeEchoDeReverb%" (echo download successful.) else (echo please try again! 
-        echo=)
-    )
-echo checking %VR_DeEchoNormal%
-if exist "%~dp0assets\uvr5_weights\%VR_DeEchoNormal%" (
-        echo %VR_DeEchoNormal% in .\assets\uvr5_weights checked.
-        echo=
-    ) else (
-        echo failed. starting download from huggingface.
-        %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlVR_DeEchoNormal% -d %~dp0assets\uvr5_weights -o %VR_DeEchoNormal%
-        if exist "%~dp0assets\uvr5_weights\%VR_DeEchoNormal%" (echo download successful.) else (echo please try again!
-        echo=)
-    )
-echo checking %onnx_dereverb%
-if exist "%~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy\%onnx_dereverb%" (
-        echo %onnx_dereverb% in .\assets\uvr5_weights\onnx_dereverb_By_FoxJoy checked.
-        echo=
-    ) else (
-        echo failed. starting download from huggingface.
-        %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlonnx_dereverb% -d %~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy -o %onnx_dereverb%
-        if exist "%~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy\%onnx_dereverb%" (echo download successful.) else (echo please try again!
-        echo=)
-    )
-
-echo checking %hb%
-if exist "%~dp0assets\hubert\%hb%" (
-        echo %hb% in .\assets\hubert checked.
-        echo=
-    ) else (
-        echo failed. starting download from huggingface.
-        %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhb% -d %~dp0assets\hubert\ -o %hb%
-        if exist "%~dp0assets\hubert\%hb%" (echo download successful.) else (echo please try again!
-        echo=)
-    )
-
-echo required files check finished.
-echo envfiles check complete.
-pause
-:end
-del flag.txt
diff --git a/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/utils.py b/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/utils.py
deleted file mode 100644
index 6459bd5510ce770b6ec7d13e03cf0ebf92d67974..0000000000000000000000000000000000000000
--- a/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/utils.py
+++ /dev/null
@@ -1,394 +0,0 @@
-from __future__ import print_function, unicode_literals, absolute_import, division
-
-import numpy as np
-import warnings
-import os
-import datetime
-from tqdm import tqdm
-from collections import defaultdict
-from zipfile import ZipFile, ZIP_DEFLATED
-from scipy.ndimage.morphology import distance_transform_edt, binary_fill_holes
-from scipy.ndimage.measurements import find_objects
-from scipy.optimize import minimize_scalar
-from skimage.measure import regionprops
-from csbdeep.utils import _raise
-from csbdeep.utils.six import Path
-from collections.abc import Iterable
-
-from .matching import matching_dataset, _check_label_array
-
-
-try:
-    from edt import edt
-    _edt_available = True
-    try: _edt_parallel_max = len(os.sched_getaffinity(0))
-    except: _edt_parallel_max = 128
-    _edt_parallel_default = 4
-    _edt_parallel = os.environ.get('STARDIST_EDT_NUM_THREADS', _edt_parallel_default)
-    try:
-        _edt_parallel = min(_edt_parallel_max, int(_edt_parallel))
-    except ValueError as e:
-        warnings.warn(f"Invalid value ({_edt_parallel}) for STARDIST_EDT_NUM_THREADS. Using default value ({_edt_parallel_default}) instead.")
-        _edt_parallel = _edt_parallel_default
-    del _edt_parallel_default, _edt_parallel_max
-except ImportError:
-    _edt_available = False
-    # warnings.warn("Could not find package edt... 
\nConsider installing it with \n  pip install edt\nto improve training data generation performance.")
-    pass
-
-
-def gputools_available():
-    try:
-        import gputools
-    except:
-        return False
-    return True
-
-
-def path_absolute(path_relative):
-    """ Get absolute path to resource"""
-    base_path = os.path.abspath(os.path.dirname(__file__))
-    return os.path.join(base_path, path_relative)
-
-
-def _is_power_of_2(i):
-    assert i > 0
-    e = np.log2(i)
-    return e == int(e)
-
-
-def _normalize_grid(grid,n):
-    try:
-        grid = tuple(grid)
-        (len(grid) == n and
-         all(map(np.isscalar,grid)) and
-         all(map(_is_power_of_2,grid))) or _raise(TypeError())
-        return tuple(int(g) for g in grid)
-    except (TypeError, AssertionError):
-        raise ValueError("grid = {grid} must be a list/tuple of length {n} with values that are power of 2".format(grid=grid, n=n))
-
-
-def edt_prob(lbl_img, anisotropy=None):
-    if _edt_available:
-        return _edt_prob_edt(lbl_img, anisotropy=anisotropy)
-    else:
-        # warnings.warn("Could not find package edt... \nConsider installing it with \n  pip install edt\nto improve training data generation performance.")
-        return _edt_prob_scipy(lbl_img, anisotropy=anisotropy)
-
-def _edt_prob_edt(lbl_img, anisotropy=None):
-    """Perform EDT on each labeled object and normalize.
-    Internally uses https://github.com/seung-lab/euclidean-distance-transform-3d
-    that can handle multiple labels at once
-    """
-    lbl_img = np.ascontiguousarray(lbl_img)
-    constant_img = lbl_img.min() == lbl_img.max() and lbl_img.flat[0] > 0
-    if constant_img:
-        warnings.warn("EDT of constant label image is ill-defined. (Assuming background around it.)")
-    # we just need to compute the edt once but then normalize it for each object
-    prob = edt(lbl_img, anisotropy=anisotropy, black_border=constant_img, parallel=_edt_parallel)
-    objects = find_objects(lbl_img)
-    for i,sl in enumerate(objects,1):
-        # i: object label id, sl: slices of object in lbl_img
-        if sl is None: continue
-        _mask = lbl_img[sl]==i
-        # normalize it
-        prob[sl][_mask] /= np.max(prob[sl][_mask]+1e-10)
-    return prob
-
-def _edt_prob_scipy(lbl_img, anisotropy=None):
-    """Perform EDT on each labeled object and normalize."""
-    def grow(sl,interior):
-        return tuple(slice(s.start-int(w[0]),s.stop+int(w[1])) for s,w in zip(sl,interior))
-    def shrink(interior):
-        return tuple(slice(int(w[0]),(-1 if w[1] else None)) for w in interior)
-    constant_img = lbl_img.min() == lbl_img.max() and lbl_img.flat[0] > 0
-    if constant_img:
-        lbl_img = np.pad(lbl_img, ((1,1),)*lbl_img.ndim, mode='constant')
-        warnings.warn("EDT of constant label image is ill-defined. (Assuming background around it.)")
-    objects = find_objects(lbl_img)
-    prob = np.zeros(lbl_img.shape,np.float32)
-    for i,sl in enumerate(objects,1):
-        # i: object label id, sl: slices of object in lbl_img
-        if sl is None: continue
-        interior = [(s.start>0,s.stop<sz) for s,sz in zip(sl,lbl_img.shape)]
-        # grow the object slice by 1 pixel on each non-boundary side, then shrink back
-        shrink_slice = shrink(interior)
-        grown_mask = lbl_img[grow(sl,interior)]==i
-        mask = grown_mask[shrink_slice]
-        edt = distance_transform_edt(grown_mask, sampling=anisotropy)[shrink_slice][mask]
-        prob[sl][mask] = edt/(np.max(edt)+1e-10)
-    if constant_img:
-        prob = prob[(slice(1,-1),)*lbl_img.ndim].copy()
-    return prob
-
-
-def fill_label_holes(lbl_img, **kwargs):
-    """Fill small holes in label image."""
-    def grow(sl,interior):
-        return tuple(slice(s.start-int(w[0]),s.stop+int(w[1])) for s,w in zip(sl,interior))
-    def shrink(interior):
-        return tuple(slice(int(w[0]),(-1 if w[1] else None)) for w in interior)
-    objects = find_objects(lbl_img)
-    lbl_img_filled = np.zeros_like(lbl_img)
-    for i,sl in enumerate(objects,1):
-        if sl is None: continue
-        interior = [(s.start>0,s.stop<sz) for s,sz in zip(sl,lbl_img.shape)]
-        shrink_slice = shrink(interior)
-        grown_mask = lbl_img[grow(sl,interior)]==i
-        mask_filled = binary_fill_holes(grown_mask,**kwargs)[shrink_slice]
-        lbl_img_filled[sl][mask_filled] = i
-    return lbl_img_filled
-
-
-def sample_points(n_samples, mask, prob=None, b=2):
-    """sample points to draw some of the associated polygons"""
-    if b is not None and b > 0:
-        # ignore image boundary, since predictions may not be reliable
-        mask_b = np.zeros_like(mask)
-        mask_b[b:-b,b:-b] = True
-    else:
-        mask_b = True
-
-    points = np.nonzero(mask & mask_b)
-
-    if prob is not None:
-        # weighted sampling via prob
-        w = prob[points[0],points[1]].astype(np.float64)
-        w /= np.sum(w)
-        ind = np.random.choice(len(points[0]), n_samples, replace=True, p=w)
-    else:
-        ind = np.random.choice(len(points[0]), n_samples, replace=True)
-
-    points = points[0][ind], points[1][ind]
-    points = np.stack(points,axis=-1)
-    return points
-
-
-def calculate_extents(lbl, func=np.median):
-    """ Aggregate bounding box sizes of objects in label images. 
""" - if (isinstance(lbl,np.ndarray) and lbl.ndim==4) or (not isinstance(lbl,np.ndarray) and isinstance(lbl,Iterable)): - return func(np.stack([calculate_extents(_lbl,func) for _lbl in lbl], axis=0), axis=0) - - n = lbl.ndim - n in (2,3) or _raise(ValueError("label image should be 2- or 3-dimensional (or pass a list of these)")) - - regs = regionprops(lbl) - if len(regs) == 0: - return np.zeros(n) - else: - extents = np.array([np.array(r.bbox[n:])-np.array(r.bbox[:n]) for r in regs]) - return func(extents, axis=0) - - -def polyroi_bytearray(x,y,pos=None,subpixel=True): - """ Byte array of polygon roi with provided x and y coordinates - See https://github.com/imagej/imagej1/blob/master/ij/io/RoiDecoder.java - """ - import struct - def _int16(x): - return int(x).to_bytes(2, byteorder='big', signed=True) - def _uint16(x): - return int(x).to_bytes(2, byteorder='big', signed=False) - def _int32(x): - return int(x).to_bytes(4, byteorder='big', signed=True) - def _float(x): - return struct.pack(">f", x) - - subpixel = bool(subpixel) - # add offset since pixel center is at (0.5,0.5) in ImageJ - x_raw = np.asarray(x).ravel() + 0.5 - y_raw = np.asarray(y).ravel() + 0.5 - x = np.round(x_raw) - y = np.round(y_raw) - assert len(x) == len(y) - top, left, bottom, right = y.min(), x.min(), y.max(), x.max() # bbox - - n_coords = len(x) - bytes_header = 64 - bytes_total = bytes_header + n_coords*2*2 + subpixel*n_coords*2*4 - B = [0] * bytes_total - B[ 0: 4] = map(ord,'Iout') # magic start - B[ 4: 6] = _int16(227) # version - B[ 6: 8] = _int16(0) # roi type (0 = polygon) - B[ 8:10] = _int16(top) # bbox top - B[10:12] = _int16(left) # bbox left - B[12:14] = _int16(bottom) # bbox bottom - B[14:16] = _int16(right) # bbox right - B[16:18] = _uint16(n_coords) # number of coordinates - if subpixel: - B[50:52] = _int16(128) # subpixel resolution (option flag) - if pos is not None: - B[56:60] = _int32(pos) # position (C, Z, or T) - - for i,(_x,_y) in enumerate(zip(x,y)): - xs = bytes_header + 2*i - ys = xs + 2*n_coords - B[xs:xs+2] = _int16(_x - left) - B[ys:ys+2] = _int16(_y - top) - - if subpixel: - base1 = bytes_header + n_coords*2*2 - base2 = base1 + n_coords*4 - for i,(_x,_y) in enumerate(zip(x_raw,y_raw)): - xs = base1 + 4*i - ys = base2 + 4*i - B[xs:xs+4] = _float(_x) - B[ys:ys+4] = _float(_y) - - return bytearray(B) - - -def export_imagej_rois(fname, polygons, set_position=True, subpixel=True, compression=ZIP_DEFLATED): - """ polygons assumed to be a list of arrays with shape (id,2,c) """ - - if isinstance(polygons,np.ndarray): - polygons = (polygons,) - - fname = Path(fname) - if fname.suffix == '.zip': - fname = fname.with_suffix('') - - with ZipFile(str(fname)+'.zip', mode='w', compression=compression) as roizip: - for pos,polygroup in enumerate(polygons,start=1): - for i,poly in enumerate(polygroup,start=1): - roi = polyroi_bytearray(poly[1],poly[0], pos=(pos if set_position else None), subpixel=subpixel) - roizip.writestr('{pos:03d}_{i:03d}.roi'.format(pos=pos,i=i), roi) - - -def optimize_threshold(Y, Yhat, model, nms_thresh, measure='accuracy', iou_threshs=[0.3,0.5,0.7], bracket=None, tol=1e-2, maxiter=20, verbose=1): - """ Tune prob_thresh for provided (fixed) nms_thresh to maximize matching score (for given measure and averaged over iou_threshs). 
""" - np.isscalar(nms_thresh) or _raise(ValueError("nms_thresh must be a scalar")) - iou_threshs = [iou_threshs] if np.isscalar(iou_threshs) else iou_threshs - values = dict() - - if bracket is None: - max_prob = max([np.max(prob) for prob, dist in Yhat]) - bracket = max_prob/2, max_prob - # print("bracket =", bracket) - - with tqdm(total=maxiter, disable=(verbose!=1), desc="NMS threshold = %g" % nms_thresh) as progress: - - def fn(thr): - prob_thresh = np.clip(thr, *bracket) - value = values.get(prob_thresh) - if value is None: - Y_instances = [model._instances_from_prediction(y.shape, *prob_dist, prob_thresh=prob_thresh, nms_thresh=nms_thresh)[0] for y,prob_dist in zip(Y,Yhat)] - stats = matching_dataset(Y, Y_instances, thresh=iou_threshs, show_progress=False, parallel=True) - values[prob_thresh] = value = np.mean([s._asdict()[measure] for s in stats]) - if verbose > 1: - print("{now} thresh: {prob_thresh:f} {measure}: {value:f}".format( - now = datetime.datetime.now().strftime('%H:%M:%S'), - prob_thresh = prob_thresh, - measure = measure, - value = value, - ), flush=True) - else: - progress.update() - progress.set_postfix_str("{prob_thresh:.3f} -> {value:.3f}".format(prob_thresh=prob_thresh, value=value)) - progress.refresh() - return -value - - opt = minimize_scalar(fn, method='golden', bracket=bracket, tol=tol, options={'maxiter': maxiter}) - - verbose > 1 and print('\n',opt, flush=True) - return opt.x, -opt.fun - - -def _invert_dict(d): - """ return v-> [k_1,k_2,k_3....] for k,v in d""" - res = defaultdict(list) - for k,v in d.items(): - res[v].append(k) - return res - - -def mask_to_categorical(y, n_classes, classes, return_cls_dict=False): - """generates a multi-channel categorical class map - - Parameters - ---------- - y : n-dimensional ndarray - integer label array - n_classes : int - Number of different classes (without background) - classes: dict, integer, or None - the label to class assignment - can be - - dict {label -> class_id} - the value of class_id can be - 0 -> background class - 1...n_classes -> the respective object class (1 ... 
n_classes)
-            None -> ignore object (prob is set to -1 for the pixels of the object, except for background class)
-        - single integer value or None -> broadcast value to all labels
-
-    Returns
-    -------
-    probability map of shape y.shape+(n_classes+1,) (first channel is background)
-
-    """
-
-    _check_label_array(y, 'y')
-    if not (np.issubdtype(type(n_classes), np.integer) and n_classes>=1):
-        raise ValueError(f"n_classes is '{n_classes}' but should be a positive integer")
-
-    y_labels = np.unique(y[y>0]).tolist()
-
-    # build dict class_id -> labels (inverse of classes)
-    if np.issubdtype(type(classes), np.integer) or classes is None:
-        classes = dict((k,classes) for k in y_labels)
-    elif isinstance(classes, dict):
-        pass
-    else:
-        raise ValueError("classes should be dict, single scalar, or None!")
-
-    if not set(y_labels).issubset(set(classes.keys())):
-        raise ValueError(f"all gt labels should be present in class dict provided \ngt_labels found\n{set(y_labels)}\nclass dict labels provided\n{set(classes.keys())}")
-
-    cls_dict = _invert_dict(classes)
-
-    # prob map
-    y_mask = np.zeros(y.shape+(n_classes+1,), np.float32)
-
-    for cls, labels in cls_dict.items():
-        if cls is None:
-            # prob == -1 will be used in the loss to ignore object
-            y_mask[np.isin(y, labels)] = -1
-        elif np.issubdtype(type(cls), np.integer) and 0 <= cls <= n_classes:
-            y_mask[...,cls] = np.isin(y, labels)
-        else:
-            raise ValueError(f"Wrong class id '{cls}' (for n_classes={n_classes})")
-
-    # set 0/1 background prob (unaffected by None values for class ids)
-    y_mask[...,0] = (y==0)
-
-    if return_cls_dict:
-        return y_mask, cls_dict
-    else:
-        return y_mask
-
-
-def _is_floatarray(x):
-    return isinstance(x.dtype.type(0),np.floating)
-
-
-def abspath(root, relpath):
-    from pathlib import Path
-    root = Path(root)
-    if root.is_dir():
-        path = root/relpath
-    else:
-        path = root.parent/relpath
-    return str(path.absolute())
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/version.py b/spaces/Lianjd/stock_dashboard/backtrader/version.py
deleted file mode 100644
index 9e8a77310aeba15fe0f1d61b9640d8eff707c0dc..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/version.py
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - - -__version__ = '1.9.76.123' - -__btversion__ = tuple(int(x) for x in __version__.split('.')) diff --git a/spaces/Lightxr/sd-diffusers-webui/modules/prompt_parser.py b/spaces/Lightxr/sd-diffusers-webui/modules/prompt_parser.py deleted file mode 100644 index 42cbbb3038612a44571765905e8526553f462663..0000000000000000000000000000000000000000 --- a/spaces/Lightxr/sd-diffusers-webui/modules/prompt_parser.py +++ /dev/null @@ -1,391 +0,0 @@ - -import re -import math -import numpy as np -import torch - -# Code from https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/8e2aeee4a127b295bfc880800e4a312e0f049b85, modified. - -class PromptChunk: - """ - This object contains token ids, weight (multipliers:1.4) and textual inversion embedding info for a chunk of prompt. - If a prompt is short, it is represented by one PromptChunk, otherwise, multiple are necessary. - Each PromptChunk contains an exact amount of tokens - 77, which includes one for start and end token, - so just 75 tokens from prompt. - """ - - def __init__(self): - self.tokens = [] - self.multipliers = [] - self.fixes = [] - - -class FrozenCLIPEmbedderWithCustomWordsBase(torch.nn.Module): - """A pytorch module that is a wrapper for FrozenCLIPEmbedder module. it enhances FrozenCLIPEmbedder, making it possible to - have unlimited prompt length and assign weights to tokens in prompt. - """ - - def __init__(self, text_encoder, enable_emphasis=True): - super().__init__() - - self.device = lambda: text_encoder.device - self.enable_emphasis = enable_emphasis - """Original FrozenCLIPEmbedder module; can also be FrozenOpenCLIPEmbedder or xlmr.BertSeriesModelWithTransformation, - depending on model.""" - - self.chunk_length = 75 - - def empty_chunk(self): - """creates an empty PromptChunk and returns it""" - - chunk = PromptChunk() - chunk.tokens = [self.id_start] + [self.id_end] * (self.chunk_length + 1) - chunk.multipliers = [1.0] * (self.chunk_length + 2) - return chunk - - def get_target_prompt_token_count(self, token_count): - """returns the maximum number of tokens a prompt of a known length can have before it requires one more PromptChunk to be represented""" - - return math.ceil(max(token_count, 1) / self.chunk_length) * self.chunk_length - - def tokenize_line(self, line): - """ - this transforms a single prompt into a list of PromptChunk objects - as many as needed to - represent the prompt. - Returns the list and the total number of tokens in the prompt. 
- """ - - if self.enable_emphasis: - parsed = parse_prompt_attention(line) - else: - parsed = [[line, 1.0]] - - tokenized = self.tokenize([text for text, _ in parsed]) - - chunks = [] - chunk = PromptChunk() - token_count = 0 - last_comma = -1 - - def next_chunk(is_last=False): - """puts current chunk into the list of results and produces the next one - empty; - if is_last is true, tokens tokens at the end won't add to token_count""" - nonlocal token_count - nonlocal last_comma - nonlocal chunk - - if is_last: - token_count += len(chunk.tokens) - else: - token_count += self.chunk_length - - to_add = self.chunk_length - len(chunk.tokens) - if to_add > 0: - chunk.tokens += [self.id_end] * to_add - chunk.multipliers += [1.0] * to_add - - chunk.tokens = [self.id_start] + chunk.tokens + [self.id_end] - chunk.multipliers = [1.0] + chunk.multipliers + [1.0] - - last_comma = -1 - chunks.append(chunk) - chunk = PromptChunk() - - comma_padding_backtrack = 20 # default value in https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/6cff4401824299a983c8e13424018efc347b4a2b/modules/shared.py#L410 - for tokens, (text, weight) in zip(tokenized, parsed): - if text == "BREAK" and weight == -1: - next_chunk() - continue - - position = 0 - while position < len(tokens): - token = tokens[position] - - if token == self.comma_token: - last_comma = len(chunk.tokens) - - # this is when we are at the end of alloted 75 tokens for the current chunk, and the current token is not a comma. opts.comma_padding_backtrack - # is a setting that specifies that if there is a comma nearby, the text after the comma should be moved out of this chunk and into the next. - elif ( - comma_padding_backtrack != 0 - and len(chunk.tokens) == self.chunk_length - and last_comma != -1 - and len(chunk.tokens) - last_comma <= comma_padding_backtrack - ): - break_location = last_comma + 1 - - reloc_tokens = chunk.tokens[break_location:] - reloc_mults = chunk.multipliers[break_location:] - - chunk.tokens = chunk.tokens[:break_location] - chunk.multipliers = chunk.multipliers[:break_location] - - next_chunk() - chunk.tokens = reloc_tokens - chunk.multipliers = reloc_mults - - if len(chunk.tokens) == self.chunk_length: - next_chunk() - - chunk.tokens.append(token) - chunk.multipliers.append(weight) - position += 1 - - if len(chunk.tokens) > 0 or len(chunks) == 0: - next_chunk(is_last=True) - - return chunks, token_count - - def process_texts(self, texts): - """ - Accepts a list of texts and calls tokenize_line() on each, with cache. Returns the list of results and maximum - length, in tokens, of all texts. - """ - - token_count = 0 - - cache = {} - batch_chunks = [] - for line in texts: - if line in cache: - chunks = cache[line] - else: - chunks, current_token_count = self.tokenize_line(line) - token_count = max(current_token_count, token_count) - - cache[line] = chunks - - batch_chunks.append(chunks) - - return batch_chunks, token_count - - def forward(self, texts): - """ - Accepts an array of texts; Passes texts through transformers network to create a tensor with numerical representation of those texts. - Returns a tensor with shape of (B, T, C), where B is length of the array; T is length, in tokens, of texts (including padding) - T will - be a multiple of 77; and C is dimensionality of each token - for SD1 it's 768, and for SD2 it's 1024. - An example shape returned by this function can be: (2, 77, 768). 
- Webui usually sends just one text at a time through this function - the only time when texts is an array with more than one elemenet - is when you do prompt editing: "a picture of a [cat:dog:0.4] eating ice cream" - """ - - batch_chunks, token_count = self.process_texts(texts) - chunk_count = max([len(x) for x in batch_chunks]) - - zs = [] - ts = [] - for i in range(chunk_count): - batch_chunk = [ - chunks[i] if i < len(chunks) else self.empty_chunk() - for chunks in batch_chunks - ] - - tokens = [x.tokens for x in batch_chunk] - multipliers = [x.multipliers for x in batch_chunk] - # self.embeddings.fixes = [x.fixes for x in batch_chunk] - - # for fixes in self.embeddings.fixes: - # for position, embedding in fixes: - # used_embeddings[embedding.name] = embedding - - z = self.process_tokens(tokens, multipliers) - zs.append(z) - ts.append(tokens) - - return np.hstack(ts), torch.hstack(zs) - - def process_tokens(self, remade_batch_tokens, batch_multipliers): - """ - sends one single prompt chunk to be encoded by transformers neural network. - remade_batch_tokens is a batch of tokens - a list, where every element is a list of tokens; usually - there are exactly 77 tokens in the list. batch_multipliers is the same but for multipliers instead of tokens. - Multipliers are used to give more or less weight to the outputs of transformers network. Each multiplier - corresponds to one token. - """ - tokens = torch.asarray(remade_batch_tokens).to(self.device()) - - # this is for SD2: SD1 uses the same token for padding and end of text, while SD2 uses different ones. - if self.id_end != self.id_pad: - for batch_pos in range(len(remade_batch_tokens)): - index = remade_batch_tokens[batch_pos].index(self.id_end) - tokens[batch_pos, index + 1 : tokens.shape[1]] = self.id_pad - - z = self.encode_with_transformers(tokens) - - # restoring original mean is likely not correct, but it seems to work well to prevent artifacts that happen otherwise - batch_multipliers = torch.asarray(batch_multipliers).to(self.device()) - original_mean = z.mean() - z = z * batch_multipliers.reshape(batch_multipliers.shape + (1,)).expand(z.shape) - new_mean = z.mean() - z = z * (original_mean / new_mean) - - return z - - -class FrozenCLIPEmbedderWithCustomWords(FrozenCLIPEmbedderWithCustomWordsBase): - def __init__(self, tokenizer, text_encoder): - super().__init__(text_encoder) - self.tokenizer = tokenizer - self.text_encoder = text_encoder - - vocab = self.tokenizer.get_vocab() - - self.comma_token = vocab.get(",", None) - - self.token_mults = {} - tokens_with_parens = [ - (k, v) - for k, v in vocab.items() - if "(" in k or ")" in k or "[" in k or "]" in k - ] - for text, ident in tokens_with_parens: - mult = 1.0 - for c in text: - if c == "[": - mult /= 1.1 - if c == "]": - mult *= 1.1 - if c == "(": - mult *= 1.1 - if c == ")": - mult /= 1.1 - - if mult != 1.0: - self.token_mults[ident] = mult - - self.id_start = self.tokenizer.bos_token_id - self.id_end = self.tokenizer.eos_token_id - self.id_pad = self.id_end - - def tokenize(self, texts): - tokenized = self.tokenizer( - texts, truncation=False, add_special_tokens=False - )["input_ids"] - - return tokenized - - def encode_with_transformers(self, tokens): - CLIP_stop_at_last_layers = 1 - tokens = tokens.to(self.text_encoder.device) - outputs = self.text_encoder(tokens, output_hidden_states=True) - - if CLIP_stop_at_last_layers > 1: - z = outputs.hidden_states[-CLIP_stop_at_last_layers] - z = self.text_encoder.text_model.final_layer_norm(z) - else: - z = 
outputs.last_hidden_state - - return z - - -re_attention = re.compile( - r""" -\\\(| -\\\)| -\\\[| -\\]| -\\\\| -\\| -\(| -\[| -:([+-]?[.\d]+)\)| -\)| -]| -[^\\()\[\]:]+| -: -""", - re.X, -) - -re_break = re.compile(r"\s*\bBREAK\b\s*", re.S) - - -def parse_prompt_attention(text): - """ - Parses a string with attention tokens and returns a list of pairs: text and its associated weight. - Accepted tokens are: - (abc) - increases attention to abc by a multiplier of 1.1 - (abc:3.12) - increases attention to abc by a multiplier of 3.12 - [abc] - decreases attention to abc by a multiplier of 1.1 - \( - literal character '(' - \[ - literal character '[' - \) - literal character ')' - \] - literal character ']' - \\ - literal character '\' - anything else - just text - - >>> parse_prompt_attention('normal text') - [['normal text', 1.0]] - >>> parse_prompt_attention('an (important) word') - [['an ', 1.0], ['important', 1.1], [' word', 1.0]] - >>> parse_prompt_attention('(unbalanced') - [['unbalanced', 1.1]] - >>> parse_prompt_attention('\(literal\]') - [['(literal]', 1.0]] - >>> parse_prompt_attention('(unnecessary)(parens)') - [['unnecessaryparens', 1.1]] - >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).') - [['a ', 1.0], - ['house', 1.5730000000000004], - [' ', 1.1], - ['on', 1.0], - [' a ', 1.1], - ['hill', 0.55], - [', sun, ', 1.1], - ['sky', 1.4641000000000006], - ['.', 1.1]] - """ - - res = [] - round_brackets = [] - square_brackets = [] - - round_bracket_multiplier = 1.1 - square_bracket_multiplier = 1 / 1.1 - - def multiply_range(start_position, multiplier): - for p in range(start_position, len(res)): - res[p][1] *= multiplier - - for m in re_attention.finditer(text): - text = m.group(0) - weight = m.group(1) - - if text.startswith("\\"): - res.append([text[1:], 1.0]) - elif text == "(": - round_brackets.append(len(res)) - elif text == "[": - square_brackets.append(len(res)) - elif weight is not None and len(round_brackets) > 0: - multiply_range(round_brackets.pop(), float(weight)) - elif text == ")" and len(round_brackets) > 0: - multiply_range(round_brackets.pop(), round_bracket_multiplier) - elif text == "]" and len(square_brackets) > 0: - multiply_range(square_brackets.pop(), square_bracket_multiplier) - else: - parts = re.split(re_break, text) - for i, part in enumerate(parts): - if i > 0: - res.append(["BREAK", -1]) - res.append([part, 1.0]) - - for pos in round_brackets: - multiply_range(pos, round_bracket_multiplier) - - for pos in square_brackets: - multiply_range(pos, square_bracket_multiplier) - - if len(res) == 0: - res = [["", 1.0]] - - # merge runs of identical weights - i = 0 - while i + 1 < len(res): - if res[i][1] == res[i + 1][1]: - res[i][0] += res[i + 1][0] - res.pop(i + 1) - else: - i += 1 - - return res diff --git a/spaces/Liu-LAB/GPT-academic/docs/README.md.German.md b/spaces/Liu-LAB/GPT-academic/docs/README.md.German.md deleted file mode 100644 index d514de30f54bd8931568c029a3bbd3aa3eacdbb1..0000000000000000000000000000000000000000 --- a/spaces/Liu-LAB/GPT-academic/docs/README.md.German.md +++ /dev/null @@ -1,307 +0,0 @@ -> **Hinweis** -> -> Bei der Installation von Abhängigkeiten sollten nur die in **requirements.txt** **angegebenen Versionen** streng ausgewählt werden. 
->
-> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
-
-# GPT Academic (academically optimized GPT)
-
-**If you like this project, please give it a star; if you have developed better keyboard shortcuts or function plugins, feel free to open a pull request.**
-
-If you like this project, please give it a star. If you have developed further useful academic shortcuts or function plugins, feel free to open an issue or a pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) that was translated by this project itself.
-To translate this project into any language with GPT, read `multi_language.py` (experimental).
-
-> **Note**
->
-> 1. Please note that only function plugins (buttons) marked in **red** can read files, and some plugins are found in the **dropdown menu** of the plugin area. In addition, we welcome any new function plugin and handle it with **highest priority**.
->
-> 2. The functionality of each file in this project is described in detail in the self-analysis [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As versions evolve, you can call the related function plugins at any time to have GPT generate a fresh self-analysis report of the project. Frequently asked questions are collected in the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation instructions](#Installation).
->
-> 3. This project is compatible with and encourages the use of domestic language models such as ChatGLM, RWKV, Pangu, etc. Multiple api-keys may coexist and can be specified in the configuration file like this: `API_KEY="openai-key1,openai-key2,api2d-key3"`. To change an `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter to apply it.
-
-Function | Description
---- | ---
-One-click polishing | Supports one-click polishing and one-click search for grammatical errors in academic papers
-One-click Chinese-English translation | One-click Chinese-English translation
-One-click code explanation | Shows code, explains code, generates code, and adds comments to code
-[Custom keyboard shortcuts](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom keyboard shortcuts
-Modular design | Supports powerful custom [function plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions). 
Plugins support [hot updates](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
-[Self-analysis of the program](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] [One-click understanding](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of the source code of this project
-[Program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One-click analysis of the project tree of other Python/C/C++/Java/Lua/... projects
-Paper reading, paper [translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] One-click explanation of a full LaTeX/PDF paper and generation of a summary
-LaTeX full-text translation and [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plugin] One-click translation or polishing of a LaTeX paper
-Batch comment generation | [Function plugin] One-click batch generation of function comments
-Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plugin] Have you seen the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in the five languages mentioned above?
-Chat analysis report generation | [Function plugin] Automatically generates a summary after execution
-[Full-text PDF translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extracts the title and abstract of PDF papers and translates the full text (multi-threaded)
-[Arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter an arxiv paper URL and click for one-click translation of the abstract + PDF download
-[Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plugin] Enter any Google Scholar search URL and let GPT help you write the [related works](https://www.bilibili.com/video/BV1GP411U7Az/) section
-Internet information aggregation + GPT | [Function plugin] Let GPT answer a question by [first gathering information from the internet](https://www.bilibili.com/video/BV1om4y127ck/), so the information never goes stale
-Display of formulas / images / tables | Shows formulas in both [TeX form and rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), supports formula and code highlighting
-Multi-threaded plugin support | Supports calling ChatGPT with multiple threads to [batch-process](https://www.bilibili.com/video/BV1FT411H7c5/) text or programs
-Dark Gradio [theme](https://github.com/binary-husky/gpt_academic/issues/173) | Append ```/?__theme=dark``` to the end of the browser URL to activate the dark theme
-[Support for multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) interface support | The feeling of being served simultaneously by GPT-3.5, GPT-4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B) and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) must be great, right? 
-Access to more LLM models, support for [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Added the Newbing interface (new Bing), introduced support for Tsinghua's [Jittorllms](https://github.com/Jittor/JittorLLMs), [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Pangu alpha](https://openi.org.cn/pangu/)
-More new features (such as image generation) …… | See the end of this document ……
-
-- New interface (change the LAYOUT option in `config.py` to switch between the "side-by-side layout" and the "top-bottom layout")
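-
-  The layout switch mentioned above is a single assignment in `config.py`; a rough sketch, assuming the option values match this README's wording:
-
-```
-# hypothetical excerpt from config.py -- adjust to the names your copy actually uses
-LAYOUT = "LEFT-RIGHT"   # side-by-side layout
-# LAYOUT = "TOP-DOWN"   # top-bottom layout
-```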
    - -
    - All buttons are dynamically generated by reading `functional.py`, and custom functions can be easily added, freeing up the clipboard. -
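-
-  A minimal sketch of the kind of entry such a button table contains; the field names mirror the `core_functional.py` example shown later in this README, and this particular entry is hypothetical:
-
-```
-"Find grammar errors": {
-    # Prefix: prepended to your input to describe the task (hypothetical entry)
-    "Prefix": "Below is a paragraph from an academic paper. Find all grammar mistakes and correct them one by one:\n\n",
-    # Suffix: appended after your input, e.g. to wrap your input in quotes
-    "Suffix": "",
-},
-```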
    - -
    - -- Proofreading/Correcting -
    - -
    - -- If the output contains formulas, they will be displayed in both tex format and rendered format for easy copying and reading. -
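-
-  For instance, an answer containing the TeX source below is shown once as copyable raw TeX and once rendered (the formula itself is an arbitrary illustration):
-
-```
-$$ P(w_{1:T}) = \prod_{t=1}^{T} P(w_t \mid w_{1:t-1}) $$
-```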
    - -
    - -- Don't feel like reading the project code? Show off the entire project to chatgpt. -
    - -
    - -- Multiple large language models are mixed and called together (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4). -
    - -
-
----
-# Installation
-## Installation-Method 1: Run directly (Windows, Linux or MacOS)
-
-1. Download the project
-```sh
-git clone https://github.com/binary-husky/gpt_academic.git
-cd gpt_academic
-```
-
-2. Configure API_KEY
-
-Configure API KEY and other settings in `config.py`. [Special Network Environment Settings](https://github.com/binary-husky/gpt_academic/issues/1).
-
-(P.S. When the program is running, it will first check whether there is a "config_private.py" private configuration file, and use the configuration defined in it to override the configuration of "config.py". Therefore, if you understand our configuration reading logic, we strongly recommend that you create a new configuration file named "config_private.py" next to "config.py" and transfer (copy) the configurations in "config.py" to "config_private.py". "config_private.py" is not controlled by git, which keeps your private information more secure. P.S. The project also supports configuring most options through `environment variables`; the format of the environment variables follows the `docker-compose` file. Reading priority: `environment variable` > `config_private.py` > `config.py`. A sketch of this lookup order appears after the dependency step below.)
-
-3. Install dependencies
-```sh
-# (Option I: If familiar with Python) (Python version 3.9 or above, the newer the better). Note: use the official pip source or the Ali pip source; temporary switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-python -m pip install -r requirements.txt
-
-# (Option II: If not familiar with Python) Use anaconda with similar steps (https://www.bilibili.com/video/BV1rc411W7Dr):
-conda create -n gptac_venv python=3.11    # Create an anaconda environment
-conda activate gptac_venv                 # Activate the anaconda environment
-python -m pip install -r requirements.txt # Same step as the pip installation
-```
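-
-A minimal sketch of the configuration reading priority described in step 2; the real loader lives elsewhere in the project, and the helper below is only illustrative:
-
-```
-import importlib, os
-
-def read_single_conf(key, default=None):
-    # 1) an environment variable wins over everything
-    if key in os.environ:
-        return os.environ[key]
-    # 2) config_private.py (not tracked by git) overrides config.py
-    try:
-        private = importlib.import_module("config_private")
-        if hasattr(private, key):
-            return getattr(private, key)
-    except ImportError:
-        pass
-    # 3) fall back to the tracked config.py
-    return getattr(importlib.import_module("config"), key, default)
-```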
    Click to expand if supporting Tsinghua ChatGLM/Fudan MOSS as backend -

    - -[Optional Step] If supporting Tsinghua ChatGLM/Fudan MOSS as backend, additional dependencies need to be installed (Prerequisites: Familiar with Python + Used Pytorch + Sufficient computer configuration): -```sh -# [Optional Step I] Support Tsinghua ChatGLM. Remark: If encountering "Call ChatGLM fail Cannot load ChatGLM parameters", please refer to the following: 1: The above default installation is torch+cpu version. To use cuda, uninstall torch and reinstall torch+cuda; 2: If the model cannot be loaded due to insufficient machine configuration, you can modify the model precision in `request_llm/bridge_chatglm.py`, and modify all AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True) -python -m pip install -r request_llm/requirements_chatglm.txt - -# [Optional Step II] Support Fudan MOSS -python -m pip install -r request_llm/requirements_moss.txt -git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # When executing this line of code, you must be in the project root path - -# [Optional Step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file contains the expected models. Currently supported models are as follows (jittorllms series currently only supports docker solutions): -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"] -``` - -
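-
-The precision switch mentioned in Optional Step I boils down to swapping the model id; a sketch using the standard Hugging Face transformers API (the surrounding loading code in `request_llm/bridge_chatglm.py` may differ):
-
-```
-from transformers import AutoModel, AutoTokenizer
-
-# full-precision variant (needs more memory):
-# tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
-# int4-quantized variant for machines with less memory:
-tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
-model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).half()
-```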

    -
-
-
-
-4. Run
-```sh
-python main.py
-```
-
-5. Testing Function Plugin
-```
-- Test function plugin template function (requires gpt to answer what happened today in history); you can use this function as a template to implement more complex functions
-    Click "[Function Plugin Template Demo] Today in History"
-```
-
-## Installation-Method 2: Using Docker
-
-1. Only ChatGPT (Recommended for most people)
-
-``` sh
-git clone https://github.com/binary-husky/gpt_academic.git  # Download the project
-cd gpt_academic                                 # Enter the path
-nano config.py                                      # Edit config.py with any text editor; configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923) etc.
-docker build -t gpt-academic .                      # Install
-
-# (Last step - option 1) Under a Linux environment, using `--net=host` is more convenient and quicker
-docker run --rm -it --net=host gpt-academic
-# (Last step - option 2) Under a macOS/Windows environment, you can only use the -p option to expose the container's port (e.g. 50923) to a port on the host.
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (Requires familiarity with Docker)
-
-``` sh
-# Modify docker-compose.yml, delete solution 1 and solution 3, and retain solution 2. Modify the configuration of solution 2 in docker-compose.yml, referring to the comments in it.
-docker-compose up
-```
-
-3. ChatGPT + LLAMA + Pangu + RWKV (Requires familiarity with Docker)
-``` sh
-# Modify docker-compose.yml, delete solution 1 and solution 2, and retain solution 3. Modify the configuration of solution 3 in docker-compose.yml, referring to the comments in it.
-docker-compose up
-```
-
-
-## Installation-Method 3: Other Deployment Options
-
-1. How to use a reverse proxy URL / Microsoft Azure API
-Configure API_URL_REDIRECT according to the instructions in `config.py`.
-
-2. Remote cloud server deployment (requires cloud server knowledge and experience)
-Please visit [Deployment wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-3. Using WSL 2 (Windows Subsystem for Linux)
-Please visit [Deployment wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-4. How to run at a secondary URL (such as `http://localhost/subpath`)
-Please visit the [FastAPI operating instructions](docs/WithFastapi.md)
-
-5. Use docker-compose to run
-Please read docker-compose.yml and follow the prompts to operate.
-
----
-# Advanced Usage
-## Customize new convenience buttons / custom function plugins.
-
-1. Customize new convenience buttons (Academic Shortcut Keys)
-Open `core_functional.py` with any text editor, add an entry as follows, and then restart the program. (If the button has been added successfully and is visible, then the prefix and suffix can be hot-modified, and it will take effect without restarting the program.)
-For example
-```
-"Super English to Chinese": {
-    # Prefix, will be added before your input. For example, used to describe your requirements, such as translation, explaining code, polishing, etc.
-    "Prefix": "Please translate the following content into Chinese, and then use a markdown table to explain the proper nouns that appear in the text one by one:\n\n",
-
-    # Suffix, will be added after your input. For example, combined with prefix, you can enclose your input content in quotes.
-    "Suffix": "",
-},
-```
    - -
-
-2. Custom function plugins
-
-Write powerful function plugins to perform any task you want, even tasks you have not yet thought of.
-The difficulty of writing and debugging plugins is very low in this project. As long as you have some knowledge of Python, you can implement your own plugin functions by imitating the template we provide.
-For more information, please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
-
----
-# Latest Update
-## New feature dynamics
-
-1. Dialogue saving function. Call "Save current dialogue" in the function plugin area to save the current dialogue as a readable and restorable HTML file. In addition, you can call "Load dialogue history archive" in the function plugin area (dropdown menu) to restore a previous dialogue. Tip: if you do not specify a file and simply click "Load dialogue history archive", you can view the cached HTML archive. Clicking "Delete all local dialogue history records" deletes all HTML archive caches.
    - -
-
-2. Report generation. Most plugins generate a work report after they finish executing.
    - - - -
-
-3. Modularized function design: simple interfaces that support powerful functions.
    - - -
-
-4. This is an open-source project that can "translate itself".
    - -
-
-5. Translating other open-source projects is no problem either.
    - -
    - -
    - -
-
-6. A small feature that decorates the UI with [`live2d`](https://github.com/fghrsh/live2d_demo) (disabled by default; requires changes to `config.py`, see the sketch below).
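-
-   A one-line sketch of the switch involved (the option name is an assumption and may differ in your `config.py`):
-
-```
-# hypothetical excerpt from config.py
-ADD_WAIFU = True   # enable the live2d decoration (disabled by default)
-```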
    - -
-
-7. New MOSS language model support.
    - -
-
-8. OpenAI image generation.
    - -
-
-9. OpenAI audio analysis and summarization.
    - -
-
-10. LaTeX proofreading of the entire text.
    - -
-
-
-## Version:
-- Version 3.5 (Todo): Call all function plugins of this project with natural language (high priority).
-- Version 3.4 (Todo): Improved multi-threading support for local large language models (LLM).
-- Version 3.3: + Internet information synthesis function
-- Version 3.2: Function plugins support more parameter interfaces (dialogue saving, interpreting code in any language + querying any LLM combination at the same time)
-- Version 3.1: Support for querying multiple GPT models at the same time! Support for API2D, support for load balancing across multiple API keys.
-- Version 3.0: Support for ChatGLM and other small LLMs
-- Version 2.6: Restructured the plugin architecture for better interactivity, introduced more plugins
-- Version 2.5: Auto-update; fixes the problem of overly long text or token overflow when summarizing the source code of large projects.
-- Version 2.4: (1) New full-text PDF translation feature; (2) New feature for switching the position of the input area; (3) New vertical layout option; (4) Optimized multi-threaded function plugins.
-- Version 2.3: Improved multi-threaded interactivity
-- Version 2.2: Function plugins support "hot reload"
-- Version 2.1: Collapsible layout
-- Version 2.0: Introduction of modular function plugins
-- Version 1.0: Basic functions
-
-gpt_academic developer QQ group 2: 610599535
-
-- Known issues
-    - Some browser translation plugins can interfere with the frontend of this software.
-    - Both a too-new and a too-old Gradio version lead to various exceptions.
-
-## Reference and learning
-
-```
-The code references the designs of many other excellent projects, in particular:
-
-# Project 1: Tsinghua University's ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-
-# Project 2: Tsinghua University's JittorLLMs:
-https://github.com/Jittor/JittorLLMs
-
-# Project 3: Edge-GPT:
-https://github.com/acheong08/EdgeGPT
-
-# Project 4: ChuanhuChatGPT:
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Project 5: ChatPaper:
-https://github.com/kaixindelele/ChatPaper
-
-# More:
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
\ No newline at end of file
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py
deleted file mode 100644
index d22ba52640bebd805b3b8d07025e276dfb023759..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
-    type='EncoderDecoder',
-    pretrained='open-mmlab://resnet50_v1c',
-    backbone=dict(
-        type='ResNetV1c',
-        depth=50,
-        num_stages=4,
-        out_indices=(0, 1, 2, 3),
-        dilations=(1, 1, 2, 4),
-        strides=(1, 2, 1, 1),
-        norm_cfg=norm_cfg,
-        norm_eval=False,
-        style='pytorch',
-        contract_dilation=True),
-    decode_head=dict(
-        type='DMHead',
-        in_channels=2048,
-        in_index=3,
-        channels=512,
-        filter_sizes=(1, 3, 5, 7),
-        dropout_ratio=0.1,
-        num_classes=19,
-        norm_cfg=dict(type='SyncBN', requires_grad=True),
-        align_corners=False,
-        loss_decode=dict(
-            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
-    auxiliary_head=dict(
-        type='FCNHead',
-        
in_channels=1024,
-        in_index=2,
-        channels=256,
-        num_convs=1,
-        concat_input=False,
-        dropout_ratio=0.1,
-        num_classes=19,
-        norm_cfg=norm_cfg,
-        align_corners=False,
-        loss_decode=dict(
-            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
-    # model training and testing settings
-    train_cfg=dict(),
-    test_cfg=dict(mode='whole'))
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/core/evaluation/metrics.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/core/evaluation/metrics.py
deleted file mode 100644
index 16c7dd47cadd53cf1caaa194e28a343f2aacc599..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/core/evaluation/metrics.py
+++ /dev/null
@@ -1,326 +0,0 @@
-from collections import OrderedDict
-
-import annotator.uniformer.mmcv as mmcv
-import numpy as np
-import torch
-
-
-def f_score(precision, recall, beta=1):
-    """Calculate the f-score value.
-
-    Args:
-        precision (float | torch.Tensor): The precision value.
-        recall (float | torch.Tensor): The recall value.
-        beta (int): Determines the weight of recall in the combined score.
-            Default: 1.
-
-    Returns:
-        [torch.tensor]: The f-score value.
-    """
-    score = (1 + beta**2) * (precision * recall) / (
-        (beta**2 * precision) + recall)
-    return score
-
-
-def intersect_and_union(pred_label,
-                        label,
-                        num_classes,
-                        ignore_index,
-                        label_map=dict(),
-                        reduce_zero_label=False):
-    """Calculate intersection and union.
-
-    Args:
-        pred_label (ndarray | str): Prediction segmentation map
-            or predict result filename.
-        label (ndarray | str): Ground truth segmentation map
-            or label filename.
-        num_classes (int): Number of categories.
-        ignore_index (int): Index that will be ignored in evaluation.
-        label_map (dict): Mapping old labels to new labels. The parameter will
-            work only when label is str. Default: dict().
-        reduce_zero_label (bool): Whether to ignore the zero label. The parameter
-            will work only when label is str. Default: False.
-
-    Returns:
-        torch.Tensor: The intersection of prediction and ground truth
-            histogram on all classes.
-        torch.Tensor: The union of prediction and ground truth histogram on
-            all classes.
-        torch.Tensor: The prediction histogram on all classes.
-        torch.Tensor: The ground truth histogram on all classes. 
- """ - - if isinstance(pred_label, str): - pred_label = torch.from_numpy(np.load(pred_label)) - else: - pred_label = torch.from_numpy((pred_label)) - - if isinstance(label, str): - label = torch.from_numpy( - mmcv.imread(label, flag='unchanged', backend='pillow')) - else: - label = torch.from_numpy(label) - - if label_map is not None: - for old_id, new_id in label_map.items(): - label[label == old_id] = new_id - if reduce_zero_label: - label[label == 0] = 255 - label = label - 1 - label[label == 254] = 255 - - mask = (label != ignore_index) - pred_label = pred_label[mask] - label = label[mask] - - intersect = pred_label[pred_label == label] - area_intersect = torch.histc( - intersect.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_pred_label = torch.histc( - pred_label.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_label = torch.histc( - label.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_union = area_pred_label + area_label - area_intersect - return area_intersect, area_union, area_pred_label, area_label - - -def total_intersect_and_union(results, - gt_seg_maps, - num_classes, - ignore_index, - label_map=dict(), - reduce_zero_label=False): - """Calculate Total Intersection and Union. - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. Default: False. - - Returns: - ndarray: The intersection of prediction and ground truth histogram - on all classes. - ndarray: The union of prediction and ground truth histogram on all - classes. - ndarray: The prediction histogram on all classes. - ndarray: The ground truth histogram on all classes. - """ - num_imgs = len(results) - assert len(gt_seg_maps) == num_imgs - total_area_intersect = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_union = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_pred_label = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_label = torch.zeros((num_classes, ), dtype=torch.float64) - for i in range(num_imgs): - area_intersect, area_union, area_pred_label, area_label = \ - intersect_and_union( - results[i], gt_seg_maps[i], num_classes, ignore_index, - label_map, reduce_zero_label) - total_area_intersect += area_intersect - total_area_union += area_union - total_area_pred_label += area_pred_label - total_area_label += area_label - return total_area_intersect, total_area_union, total_area_pred_label, \ - total_area_label - - -def mean_iou(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False): - """Calculate Mean Intersection and Union (mIoU) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. 
Default: dict(). - reduce_zero_label (bool): Whether to ignore the zero label. Default: False. - - Returns: - dict[str, float | ndarray]: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category IoU, shape (num_classes, ). - """ - iou_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mIoU'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label) - return iou_result - - -def mean_dice(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False): - """Calculate Mean Dice (mDice) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether to ignore the zero label. Default: False. - - Returns: - dict[str, float | ndarray]: Default metrics. - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category dice, shape (num_classes, ). - """ - - dice_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mDice'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label) - return dice_result - - -def mean_fscore(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False, - beta=1): - """Calculate Mean F-Score (mFscore) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether to ignore the zero label. Default: False. - beta (int): Determines the weight of recall in the combined score. - Default: 1. - - Returns: - dict[str, float | ndarray]: Default metrics. - float: Overall accuracy on all images. - ndarray: Per category recall, shape (num_classes, ). - ndarray: Per category precision, shape (num_classes, ). - ndarray: Per category f-score, shape (num_classes, ). 
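- - Example (an editor's illustrative sketch; the inputs and class count are placeholders):: - - >>> ret = mean_fscore(results, gt_seg_maps, num_classes=19, - ... ignore_index=255) - >>> sorted(ret.keys()) - ['Fscore', 'Precision', 'Recall', 'aAcc']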
- """ - fscore_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mFscore'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label, - beta=beta) - return fscore_result - - -def eval_metrics(results, - gt_seg_maps, - num_classes, - ignore_index, - metrics=['mIoU'], - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False, - beta=1): - """Calculate evaluation metrics - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - metrics (list[str] | str): Metrics to be evaluated, 'mIoU' and 'mDice'. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. Default: False. - Returns: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category evaluation metrics, shape (num_classes, ). - """ - if isinstance(metrics, str): - metrics = [metrics] - allowed_metrics = ['mIoU', 'mDice', 'mFscore'] - if not set(metrics).issubset(set(allowed_metrics)): - raise KeyError('metrics {} is not supported'.format(metrics)) - - total_area_intersect, total_area_union, total_area_pred_label, \ - total_area_label = total_intersect_and_union( - results, gt_seg_maps, num_classes, ignore_index, label_map, - reduce_zero_label) - all_acc = total_area_intersect.sum() / total_area_label.sum() - ret_metrics = OrderedDict({'aAcc': all_acc}) - for metric in metrics: - if metric == 'mIoU': - iou = total_area_intersect / total_area_union - acc = total_area_intersect / total_area_label - ret_metrics['IoU'] = iou - ret_metrics['Acc'] = acc - elif metric == 'mDice': - dice = 2 * total_area_intersect / ( - total_area_pred_label + total_area_label) - acc = total_area_intersect / total_area_label - ret_metrics['Dice'] = dice - ret_metrics['Acc'] = acc - elif metric == 'mFscore': - precision = total_area_intersect / total_area_pred_label - recall = total_area_intersect / total_area_label - f_value = torch.tensor( - [f_score(x[0], x[1], beta) for x in zip(precision, recall)]) - ret_metrics['Fscore'] = f_value - ret_metrics['Precision'] = precision - ret_metrics['Recall'] = recall - - ret_metrics = { - metric: value.numpy() - for metric, value in ret_metrics.items() - } - if nan_to_num is not None: - ret_metrics = OrderedDict({ - metric: np.nan_to_num(metric_value, nan=nan_to_num) - for metric, metric_value in ret_metrics.items() - }) - return ret_metrics diff --git a/spaces/NN520/AI/src/components/settings.tsx b/spaces/NN520/AI/src/components/settings.tsx deleted file mode 100644 index e18aa5b484852bb5d047442a06e7143b6893cb0d..0000000000000000000000000000000000000000 --- a/spaces/NN520/AI/src/components/settings.tsx +++ /dev/null @@ -1,141 +0,0 @@ -import { useEffect, useState } from 'react' -import { useAtom } from 'jotai' -import { Switch } from '@headlessui/react' -import { toast } from 'react-hot-toast' -import { hashAtom, voiceAtom } from '@/state' -import { - Dialog, - DialogContent, - DialogDescription, - DialogFooter, - DialogHeader, - DialogTitle -} from 
'@/components/ui/dialog' -import { Button } from './ui/button' -import { Input } from './ui/input' -import { ChunkKeys, parseCookies, extraCurlFromCookie, randomIP, encodeHeadersToCookie } from '@/lib/utils' -import { ExternalLink } from './external-link' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - -export function Settings() { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - const [loc, setLoc] = useAtom(hashAtom) - const [curlValue, setCurlValue] = useState(extraCurlFromCookie(parseCookies(document.cookie, ChunkKeys))) - const [enableTTS, setEnableTTS] = useAtom(voiceAtom) - - useEffect(() => { - if (isCopied) { - toast.success('Copied successfully') - } - }, [isCopied]) - - if (loc === 'settings') { - return ( - setLoc('')} modal> - - - Set up your user info - - Please use the Edge browser - - to open and sign in to Bing - - , then open the - Challenge endpoint - right-click > Inspect. In the developer tools, find the Create request under the Network tab, right-click > Copy > Copy as cURL (bash), paste it here, and save. -
    - Illustrated guide: - How to get BING_HEADER - - 
    - -
    - setCurlValue(e.target.value)} - /> - - - - - - -
    - ) } else if (loc === 'voice') { - return ( - setLoc('')} modal> - - - Voice settings - - Currently, only desktop Edge and Chrome browsers are supported - - - 
    - Enable voice responses - setEnableTTS(checked)} > - - 
    - - - - -
    -
    - ) - } - return null -} diff --git a/spaces/Najaf-Zawar/Old_Image-Restoration/README.md b/spaces/Najaf-Zawar/Old_Image-Restoration/README.md deleted file mode 100644 index bff8aecb8b4da59d670a9cecb0b6596b45ea5439..0000000000000000000000000000000000000000 --- a/spaces/Najaf-Zawar/Old_Image-Restoration/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Old Image-Restoration -emoji: 💻 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/NimaKL/FireWatch5k/app.py b/spaces/NimaKL/FireWatch5k/app.py deleted file mode 100644 index a1e236c71742cef91edd503d2f69b1b85abdb22d..0000000000000000000000000000000000000000 --- a/spaces/NimaKL/FireWatch5k/app.py +++ /dev/null @@ -1,38 +0,0 @@ -import gradio as gr -from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification - -model_name = "NimaKL/FireWatch_tiny_75k" -tokenizer = AutoTokenizer.from_pretrained(model_name) -model = AutoModelForSequenceClassification.from_pretrained(model_name) - -def predict(text): - inputs = tokenizer(text, return_tensors="pt") - outputs = model(**inputs) - logits = outputs.logits - label_id = logits.argmax(axis=1).item() - return "Danger of fire hazard!" if label_id == 1 else "It is unlikely that a fire will start in this area." - -# Define a custom CSS style -custom_style = """ - body { - background-color: #F262626; - } -""" - -# Define a function to generate HTML for embedding the Google Sheets document -def get_sheet_html(): - return f'' - -io = gr.Interface( - fn=predict, - inputs="text", - outputs="text", - title="FireWatch", - description="

    Predict whether a data row describes a fire hazard or not.
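    Each input row is four comma-separated values, for example: -26.76123, 147.15512, 393.02, 203.63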

    \ -

    Here is a Google Sheets document containing sample data (you can use it for testing). It is a large document, so it may take a while to load.

    ", - output_description="Prediction", - examples=[['-26.76123, 147.15512, 393.02, 203.63'], ['-26.7598, 147.14514, 361.54, 79.4'], ['-25.70059, 149.48932, 313.9, 5.15'], ['-24.4318, 151.83102, 307.98, 8.79'], ['-23.21878, 148.91298, 314.08, 7.4'], ['7.87518, 19.9241, 316.32, 39.63'], ['-20.10942, 148.14326, 314.39, 8.8'], ['7.87772, 19.9048, 304.14, 13.43'], ['-20.79866, 124.46834, 366.74, 89.06']], - theme="Streamlit", - css=custom_style -) -io.launch() diff --git a/spaces/Norod78/SillyTedTalkSnippetGenerator/README.md b/spaces/Norod78/SillyTedTalkSnippetGenerator/README.md deleted file mode 100644 index e733521a0fa2780c765b1c90ba666a1ffbf1726a..0000000000000000000000000000000000000000 --- a/spaces/Norod78/SillyTedTalkSnippetGenerator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Silly Ted-Talk Snippet Generator -emoji: 🧑‍🏫 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.1.1 -app_file: app.py -pinned: false -license: cc-by-nc-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/data/ofa_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/data/ofa_dataset.py deleted file mode 100644 index 02d856c28016b3a1c020fed483afe0aa797bf50f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/data/ofa_dataset.py +++ /dev/null @@ -1,74 +0,0 @@ -import logging -import re -import torch.utils.data -from fairseq.data import FairseqDataset - -logger = logging.getLogger(__name__) - - -class OFADataset(FairseqDataset): - def __init__(self, split, dataset, bpe, src_dict, tgt_dict): - self.split = split - self.dataset = dataset - self.bpe = bpe - self.src_dict = src_dict - self.tgt_dict = tgt_dict - - self.bos = src_dict.bos() - self.eos = src_dict.eos() - self.pad = src_dict.pad() - self.bos_item = torch.LongTensor([self.bos]) - self.eos_item = torch.LongTensor([self.eos]) - - def __len__(self): - return len(self.dataset) - - def encode_text(self, text, length=None, append_bos=False, append_eos=False, use_bpe=True): - s = self.tgt_dict.encode_line( - line=self.bpe.encode(text) if use_bpe else text, - add_if_not_exist=False, - append_eos=False - ).long() - if length is not None: - s = s[:length] - if append_bos: - s = torch.cat([self.bos_item, s]) - if append_eos: - s = torch.cat([s, self.eos_item]) - return s - - def pre_question(self, question, max_ques_words): - question = question.lower().lstrip(",.!?*#:;~").replace('-', ' ').replace('/', ' ') - - question = re.sub( - r"\s{2,}", - ' ', - question, - ) - question = question.rstrip('\n') - question = question.strip(' ') - - # truncate question - question_words = question.split(' ') - if len(question_words) > max_ques_words: - question = ' '.join(question_words[:max_ques_words]) - - return question - - def pre_caption(self, caption, max_words): - caption = caption.lower().lstrip(",.!?*#:;~").replace('-', ' ').replace('/', ' ').replace('', 'person') - - caption = re.sub( - r"\s{2,}", - ' ', - caption, - ) - caption = caption.rstrip('\n') - caption = caption.strip(' ') - - # truncate caption - caption_words = caption.split(' ') - if len(caption_words) > max_words: - caption = ' '.join(caption_words[:max_words]) - - return caption diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py deleted file mode 100644 index 
6fff4faf44a92d42504559ecea8ec1047d2e5f14..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import csv -import io -import logging -import os -import os.path as op -import sys - -from dump_hubert_feature import HubertFeatureReader -from feature_utils import get_shard_range, dump_feature -from fairseq.data.audio.audio_utils import get_waveform -from fairseq.data.audio.speech_to_text_dataset import ( - read_from_uncompressed_zip, -) - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("dump_hubert_feature_s2t") - - -class HubertFeatureReaderS2T(HubertFeatureReader): - def read_audio(self, path, ref_len=None): - path, *extra = path.split(":") - assert len(extra) == 2 - assert path.endswith(".zip") - - data = read_from_uncompressed_zip(path, int(extra[0]), int(extra[1])) - f = io.BytesIO(data) - wav, sr = get_waveform(f) - assert sr == self.task.cfg.sample_rate, sr - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - if ref_len is not None and abs(ref_len - len(wav)) > 160: - logging.warning(f"ref {ref_len} != read {len(wav)} ({path})") - return wav - - -def get_path_iterator(root, tsv, nshard, rank): - with open(tsv) as f: - reader = csv.DictReader( - f, - delimiter="\t", - quotechar=None, - doublequote=False, - lineterminator="\n", - quoting=csv.QUOTE_NONE, - ) - subpaths = [op.join(root, e["audio"]) for e in reader] - start, end = get_shard_range(len(subpaths), nshard, rank) - subpaths = subpaths[start:end] - def iterate(): - for subpath in subpaths: - yield op.join(root, subpath), None - return iterate, len(subpaths) - - -def main( - root, tsv_path, ckpt_path, layer, nshard, rank, feat_dir, split, max_chunk -): - reader = HubertFeatureReaderS2T(ckpt_path, layer, max_chunk) - generator, num = get_path_iterator(root, tsv_path, nshard, rank) - dump_feature(reader, generator, num, split, nshard, rank, feat_dir) - - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("root") - parser.add_argument("tsv_path") - parser.add_argument("ckpt_path") - parser.add_argument("layer", type=int) - parser.add_argument("nshard", type=int) - parser.add_argument("rank", type=int) - parser.add_argument("feat_dir") - parser.add_argument("split") - parser.add_argument("--max_chunk", type=int, default=1600000) - args = parser.parse_args() - logger.info(args) - - main(**vars(args)) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoise_and_vad_audio.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoise_and_vad_audio.py deleted file mode 100644 index 4e13b38a5d3fb44dd3969e6afcb8f202274ee3b7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoise_and_vad_audio.py +++ /dev/null @@ -1,204 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
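- # Example invocation (an editor's sketch; the file names are hypothetical): - #   python denoise_and_vad_audio.py -i manifest.tsv -o /tmp/out --denoise --vad -a 2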
 - -import argparse -import logging -import os -import csv -import tempfile -from collections import defaultdict -from pathlib import Path - -import torchaudio -try: - import webrtcvad -except ImportError: - raise ImportError("Please install py-webrtcvad: pip install webrtcvad") -import pandas as pd -from tqdm import tqdm - -from examples.speech_synthesis.preprocessing.denoiser.pretrained import master64 -import examples.speech_synthesis.preprocessing.denoiser.utils as utils -from examples.speech_synthesis.preprocessing.vad import ( - frame_generator, vad_collector, read_wave, write_wave, FS_MS, THRESHOLD, - SCALE -) -from examples.speech_to_text.data_utils import save_df_to_tsv - - -log = logging.getLogger(__name__) - -PATHS = ["after_denoise", "after_vad"] -MIN_T = 0.05 - - -def generate_tmp_filename(extension="txt"): - return tempfile._get_default_tempdir() + "/" + \ - next(tempfile._get_candidate_names()) + "." + extension - - -def convert_sr(inpath, sr, output_path=None): - if not output_path: - output_path = generate_tmp_filename("wav") - cmd = f"sox {inpath} -r {sr} {output_path}" - os.system(cmd) - return output_path - - -def apply_vad(vad, inpath): - audio, sample_rate = read_wave(inpath) - frames = frame_generator(FS_MS, audio, sample_rate) - frames = list(frames) - segments = vad_collector(sample_rate, FS_MS, 300, vad, frames) - merge_segments = list() - timestamp_start = 0.0 - timestamp_end = 0.0 - # remove silence at the start and end, and shorten long runs of silence - for i, segment in enumerate(segments): - merge_segments.append(segment[0]) - if i and timestamp_start: - sil_duration = segment[1] - timestamp_end - if sil_duration > THRESHOLD: - merge_segments.append(int(THRESHOLD / SCALE) * (b'\x00')) - else: - merge_segments.append(int((sil_duration / SCALE)) * (b'\x00')) - timestamp_start = segment[1] - timestamp_end = segment[2] - segment = b''.join(merge_segments) - return segment, sample_rate - - -def write(wav, filename, sr=16_000): - # Normalize audio to prevent clipping - wav = wav / max(wav.abs().max().item(), 1) - torchaudio.save(filename, wav.cpu(), sr, encoding="PCM_S", - bits_per_sample=16) - - -def process(args): - # make sure either denoise or vad was requested - if not args.denoise and not args.vad: - log.error("No denoise or vad is requested.") - return - - log.info("Creating out directories...") - if args.denoise: - out_denoise = Path(args.output_dir).absolute().joinpath(PATHS[0]) - out_denoise.mkdir(parents=True, exist_ok=True) - if args.vad: - out_vad = Path(args.output_dir).absolute().joinpath(PATHS[1]) - out_vad.mkdir(parents=True, exist_ok=True) - - log.info("Loading pre-trained speech enhancement model...") - model = master64().to(args.device) - - log.info("Building the VAD model...") - vad = webrtcvad.Vad(int(args.vad_agg_level)) - - # preparing the output dict - output_dict = defaultdict(list) - - log.info(f"Parsing input manifest: {args.audio_manifest}") - with open(args.audio_manifest, "r") as f: - manifest_dict = csv.DictReader(f, delimiter="\t") - for row in tqdm(manifest_dict): - filename = str(row["audio"]) - - final_output = filename - keep_sample = True - n_frames = row["n_frames"] - snr = -1 - if args.denoise: - output_path_denoise = out_denoise.joinpath(Path(filename).name) - # convert to 16 kHz in case we use a different sr - tmp_path = convert_sr(final_output, 16000) - - # loading audio file and generating the enhanced version - out, sr = torchaudio.load(tmp_path) - out = out.to(args.device) - estimate = model(out) - estimate = (1 - args.dry_wet) * 
estimate + args.dry_wet * out - write(estimate[0], str(output_path_denoise), sr) - - snr = utils.cal_snr(out, estimate) - snr = snr.cpu().detach().numpy()[0][0] - final_output = str(output_path_denoise) - - if args.vad: - output_path_vad = out_vad.joinpath(Path(filename).name) - sr = torchaudio.info(final_output).sample_rate - if sr in [16000, 32000, 48000]: - tmp_path = final_output - elif sr < 16000: - tmp_path = convert_sr(final_output, 16000) - elif sr < 32000: - tmp_path = convert_sr(final_output, 32000) - else: - tmp_path = convert_sr(final_output, 48000) - # apply VAD - segment, sample_rate = apply_vad(vad, tmp_path) - if len(segment) < sample_rate * MIN_T: - keep_sample = False - print(( - f"WARNING: skip {filename} because it is too short " - f"after VAD ({len(segment) / sample_rate} < {MIN_T})" - )) - else: - if sample_rate != sr: - tmp_path = generate_tmp_filename("wav") - write_wave(tmp_path, segment, sample_rate) - convert_sr(tmp_path, sr, - output_path=str(output_path_vad)) - else: - write_wave(str(output_path_vad), segment, sample_rate) - final_output = str(output_path_vad) - segment, _ = torchaudio.load(final_output) - n_frames = segment.size(1) - - if keep_sample: - output_dict["id"].append(row["id"]) - output_dict["audio"].append(final_output) - output_dict["n_frames"].append(n_frames) - output_dict["tgt_text"].append(row["tgt_text"]) - output_dict["speaker"].append(row["speaker"]) - output_dict["src_text"].append(row["src_text"]) - output_dict["snr"].append(snr) - - out_tsv_path = Path(args.output_dir) / Path(args.audio_manifest).name - log.info(f"Saving manifest to {out_tsv_path.as_posix()}") - save_df_to_tsv(pd.DataFrame.from_dict(output_dict), out_tsv_path) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--audio-manifest", "-i", required=True, - type=str, help="path to the input manifest.") - parser.add_argument( - "--output-dir", "-o", required=True, type=str, - help="path to the output dir. it will contain files after denoising and" - " vad" - ) - parser.add_argument("--vad-agg-level", "-a", type=int, default=2, - help="the aggressiveness level of the vad [0-3].") - parser.add_argument( - "--dry-wet", "-dw", type=float, default=0.01, - help="the level of linear interpolation between noisy and enhanced " - "files." - ) - parser.add_argument( - "--device", "-d", type=str, default="cpu", - help="the device to be used for the speech enhancement model: " - "cpu | cuda." - ) - parser.add_argument("--denoise", action="store_true", - help="apply denoising") - parser.add_argument("--vad", action="store_true", help="apply VAD") - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/gpt2_bpe.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/gpt2_bpe.py deleted file mode 100644 index b7426b249bbbabd8e20bbe8ca5449809efdf85fc..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/gpt2_bpe.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
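- # Usage sketch (an editor's illustration, not part of the original file): - #   bpe = GPT2BPE(GPT2BPEConfig())    # downloads/caches encoder.json and vocab.bpe - #   ids = bpe.encode("Hello world")  # a string of space-joined token ids, e.g. "15496 995" - #   assert bpe.decode(ids) == "Hello world"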
 - -from dataclasses import dataclass, field - -from fairseq import file_utils -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass - -from .gpt2_bpe_utils import get_encoder - - -DEFAULT_ENCODER_JSON = "https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json" -DEFAULT_VOCAB_BPE = "https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe" - - -@dataclass -class GPT2BPEConfig(FairseqDataclass): - gpt2_encoder_json: str = field( - default=DEFAULT_ENCODER_JSON, metadata={"help": "path to encoder.json"} - ) - gpt2_vocab_bpe: str = field( - default=DEFAULT_VOCAB_BPE, metadata={"help": "path to vocab.bpe"} - ) - - -@register_bpe("gpt2", dataclass=GPT2BPEConfig) -class GPT2BPE(object): - def __init__(self, cfg): - encoder_json = file_utils.cached_path(cfg.gpt2_encoder_json) - vocab_bpe = file_utils.cached_path(cfg.gpt2_vocab_bpe) - self.bpe = get_encoder(encoder_json, vocab_bpe) - - def encode(self, x: str) -> str: - return " ".join(map(str, self.bpe.encode(x))) - - def decode(self, x: str) -> str: - return self.bpe.decode( - [int(tok) if tok not in {"<unk>", "<mask>"} and not tok.startswith('<') else tok for tok in x.split()] - ) - - def is_beginning_of_word(self, x: str) -> bool: - return self.decode(x).startswith(" ") diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/resampling_dataset.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/resampling_dataset.py deleted file mode 100644 index 3d3b993164dc3962df48bacff26714328e843e80..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/resampling_dataset.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import numpy as np -from fairseq.data import BaseWrapperDataset, plasma_utils - - -logger = logging.getLogger(__name__) - - -class ResamplingDataset(BaseWrapperDataset): - """Randomly samples from a given dataset at each epoch. - - Sampling is done with or without replacement, depending on the "replace" - parameter. - - Optionally, the epoch size can be rescaled. This is potentially desirable - to increase per-epoch coverage of the base dataset (since sampling with - replacement means that many items in the dataset will be left out). In the - case of sampling without replacement, size_ratio should be strictly less - than 1. - - Args: - dataset (~torch.utils.data.Dataset): dataset on which to sample. - weights (List[float]): list of probability weights - (default: None, which corresponds to uniform sampling). - replace (bool): sampling mode; True for "with replacement", or False - for "without replacement" (default: True) - size_ratio (float): the ratio to subsample to; must be positive - (default: 1.0). - batch_by_size (bool): whether or not to batch by sequence length - (default: True). - seed (int): RNG seed to use (default: 0). - epoch (int): starting epoch number (default: 1). 
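- - Example (an editor's illustrative sketch; ``base`` stands in for any - FairseqDataset):: - - >>> ds = ResamplingDataset(base, size_ratio=0.5, replace=False, seed=0) - >>> ds.set_epoch(2)  # re-draws the subsample for the new epoch - >>> len(ds) == int(np.ceil(len(base) * 0.5)) - True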
- """ - - def __init__( - self, - dataset, - weights=None, - replace=True, - size_ratio=1.0, - batch_by_size=True, - seed=0, - epoch=1, - ): - super().__init__(dataset) - - if weights is None: - self.weights = None - - else: - assert len(weights) == len(dataset) - weights_arr = np.array(weights, dtype=np.float64) - weights_arr /= weights_arr.sum() - self.weights = plasma_utils.PlasmaArray(weights_arr) - - self.replace = replace - - assert size_ratio > 0.0 - if not self.replace: - assert size_ratio < 1.0 - self.size_ratio = float(size_ratio) - self.actual_size = np.ceil(len(dataset) * self.size_ratio).astype(int) - - self.batch_by_size = batch_by_size - self.seed = seed - - self._cur_epoch = None - self._cur_indices = None - - self.set_epoch(epoch) - - def __getitem__(self, index): - return self.dataset[self._cur_indices.array[index]] - - def __len__(self): - return self.actual_size - - @property - def sizes(self): - if isinstance(self.dataset.sizes, list): - return [s[self._cur_indices.array] for s in self.dataset.sizes] - return self.dataset.sizes[self._cur_indices.array] - - def num_tokens(self, index): - return self.dataset.num_tokens(self._cur_indices.array[index]) - - def size(self, index): - return self.dataset.size(self._cur_indices.array[index]) - - def ordered_indices(self): - if self.batch_by_size: - order = [ - np.arange(len(self)), - self.sizes, - ] # No need to handle `self.shuffle == True` - return np.lexsort(order) - else: - return np.arange(len(self)) - - def prefetch(self, indices): - self.dataset.prefetch(self._cur_indices.array[indices]) - - @property - def can_reuse_epoch_itr_across_epochs(self): - return False - - def set_epoch(self, epoch): - logger.debug("ResamplingDataset.set_epoch: {}".format(epoch)) - super().set_epoch(epoch) - - if epoch == self._cur_epoch: - return - - self._cur_epoch = epoch - - # Generate a weighted sample of indices as a function of the - # random seed and the current epoch. - - rng = np.random.RandomState( - [ - 42, # magic number - self.seed % (2 ** 32), # global seed - self._cur_epoch, # epoch index - ] - ) - self._cur_indices = plasma_utils.PlasmaArray( - rng.choice( - len(self.dataset), - self.actual_size, - replace=self.replace, - p=(None if self.weights is None else self.weights.array), - ) - ) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/scalar/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/scalar/__init__.py deleted file mode 100644 index 143834f3d036780eb6844c82f0c6f2d10cfe2f61..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/scalar/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .utils import quantize_model_ # NOQA diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/laser/laser_src/multitask_data_utils.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/laser/laser_src/multitask_data_utils.py deleted file mode 100644 index b05caea26793bf5112a7abc29d76225f578f3ebe..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/laser/laser_src/multitask_data_utils.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from collections import OrderedDict - -import numpy as np - -from fairseq.data import BaseWrapperDataset, FairseqDataset, iterators - - -class MultiItr(object): - def __init__(self, itr): - self.itr = itr - self._counts = [0 for x in itr] - - def __len__(self): - return sum(len(itr) for itr in self.itr) - - def __iter__(self): - return self - - def __next__(self): - ratios = [count / len(itr) for count, itr in zip(self._counts, self.itr)] - idx = ratios.index(min(ratios)) - self._counts[idx] += 1 - return next(self.itr[idx]) - - -class MultidatasetEpochBatchIterator(iterators.EpochBatchIterating): - """A wrapper around multiple epoch batch iterators.""" - - def __init__( - self, - dataset, - batch_sampler, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - ): - - assert isinstance(dataset, OrderedDict) - assert len(dataset) - assert isinstance(dataset[next(iter(dataset))], FairseqDataset) - - self.iterators = [] - - self.epoch = epoch - for key, dt in dataset.items(): - epoch_iter = iterators.EpochBatchIterator( - dataset=dt, - collate_fn=dt.collater, - batch_sampler=batch_sampler[key], - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - num_workers=0, - epoch=epoch, - ) - self.iterators.append(epoch_iter) - - def __len__(self): - return sum(len(itr) for itr in self.iterators) - - def next_epoch_itr(self, shuffle=True, fix_batches_to_gpus=False): - # `self.epoch += 1` should be handled by underlying `EpochBatchIterator`s. - return MultiItr( - [ - itr.next_epoch_itr( - shuffle=shuffle, fix_batches_to_gpus=fix_batches_to_gpus - ) - for itr in self.iterators - ] - ) - - def end_of_epoch(self): - return all(itr.end_of_epoch() for itr in self.iterators) - - @property - def next_epoch_idx(self): - """Return the epoch index after *next_epoch_itr* is called.""" - - epochs = [itr.next_epoch_idx for itr in self.iterators] - self.epoch = epochs[0] - assert all(epoch == self.epoch for epoch in epochs) - - return self.epoch - - @property - def iterations_in_epoch(self): - return sum(itr.iterations_in_epoch for itr in self.iterators) - - def state_dict(self): - return { - "iterators": [it.state_dict() for it in self.iterators], - "epoch": self.epoch, - } - - def load_state_dict(self, state_dict): - self.epoch = state_dict["epoch"] - for it, d in zip(self.iterators, state_dict["iterators"]): - it.load_state_dict(d) - - -class MultitaskDatasetWrapper(BaseWrapperDataset): - """A wrapper for a multitask dataset.""" - - def __init__(self, dataset, target_language_id, sample=1.0, name=""): - super().__init__(dataset) - self.target_language_id = target_language_id - self.sample = sample - self.name = name - - def collater(self, *args, **kwargs): - ans = self.dataset.collater(*args, **kwargs) - if "net_input" in ans: - ans["net_input"]["target_language_id"] = self.target_language_id - ans["net_input"]["dataset_name"] = self.name - return ans - - def num_tokens(self, *args, **kwargs): - return self.dataset.num_tokens(*args, **kwargs) - - def ordered_indices(self, *args, **kwargs): - indices = self.dataset.ordered_indices(*args, **kwargs) - # Hacky solution for sampling - size = int(self.sample * indices.shape[0]) - - return indices.take(np.sort(np.random.permutation(indices.shape[0])[:size])) - - def size(self, index: int): - return self.dataset.size(index) - - @property - def supports_prefetch(self): - """Whether this dataset supports prefetching.""" - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.dataset.prefetch(indices) 
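- - -# Editor's note (an illustrative sketch, not part of the original file): -# MultiItr always advances the sub-iterator whose progress ratio count/len -# is smallest, so datasets of different sizes are consumed evenly and -# finish together: -# -#     itrs = [iter(range(2)), iter(range(4))] -#     counts = [0, 0] -#     for _ in range(6): -#         ratios = [c / n for c, n in zip(counts, (2, 4))] -#         i = ratios.index(min(ratios)) -#         counts[i] += 1 -#         next(itrs[i]) -#     assert counts == [2, 4]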
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/__init__.py deleted file mode 100644 index 89f1aef4f6328d25425e0bcabb42dfffd2ed35f0..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .rerank_options import * # noqa diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/transformer_layer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/transformer_layer.py deleted file mode 100644 index 347b8118daa2818af5e0230a793f2fa8fcd63b3a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/transformer_layer.py +++ /dev/null @@ -1,459 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Dict, List, Optional - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.modules import LayerNorm, MultiheadAttention -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise -from torch import Tensor -from fairseq.models.transformer import ( - TransformerConfig, -) - - -class TransformerEncoderLayerBase(nn.Module): - """Encoder layer block. - - In the original paper each operation (multi-head attention or FFN) is - postprocessed with: `dropout -> add residual -> layernorm`. In the - tensor2tensor code they suggest that learning is more robust when - preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *cfg.encoder.normalize_before* to ``True``. 
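- - In short (an editor's illustration):: - - post-norm (default):         x = LayerNorm(x + Sublayer(x)) - pre-norm (normalize_before): x = x + Sublayer(LayerNorm(x))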
- - Args: - args (argparse.Namespace): parsed command-line arguments - """ - - def __init__(self, cfg): - super().__init__() - self.cfg = cfg - self.embed_dim = cfg.encoder.embed_dim - self.quant_noise = cfg.quant_noise.pq - self.quant_noise_block_size = cfg.quant_noise.pq_block_size - self.self_attn = self.build_self_attention(self.embed_dim, cfg) - self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - self.dropout_module = FairseqDropout( - cfg.dropout, module_name=self.__class__.__name__ - ) - self.activation_fn = utils.get_activation_fn(activation=cfg.activation_fn) - activation_dropout_p = cfg.activation_dropout - if activation_dropout_p == 0: - # for backwards compatibility with models that use cfg.relu_dropout - activation_dropout_p = cfg.relu_dropout or 0 - self.activation_dropout_module = FairseqDropout( - float(activation_dropout_p), module_name=self.__class__.__name__ - ) - self.normalize_before = cfg.encoder.normalize_before - self.fc1 = self.build_fc1( - self.embed_dim, - cfg.encoder.ffn_embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - self.fc2 = self.build_fc2( - cfg.encoder.ffn_embed_dim, - self.embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - - self.final_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise( - nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size - ) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise( - nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size - ) - - def build_self_attention(self, embed_dim, cfg): - return MultiheadAttention( - embed_dim, - cfg.encoder.attention_heads, - dropout=cfg.attention_dropout, - self_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - ) - - def residual_connection(self, x, residual): - return residual + x - - def upgrade_state_dict_named(self, state_dict, name): - """ - Rename layer norm states from `...layer_norms.0.weight` to - `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to - `...final_layer_norm.weight` - """ - layer_norm_map = {"0": "self_attn_layer_norm", "1": "final_layer_norm"} - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layer_norms.{}.{}".format(name, old, m) - if k in state_dict: - state_dict["{}.{}.{}".format(name, new, m)] = state_dict[k] - del state_dict[k] - - def forward( - self, - x, - encoder_padding_mask: Optional[Tensor], - attn_mask: Optional[Tensor] = None, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, seq_len)` where padding elements are indicated by ``1``. - attn_mask (ByteTensor): binary tensor of shape `(tgt_len, src_len)`, - where `tgt_len` is the length of output and `src_len` is the - length of input, though here both are equal to `seq_len`. - `attn_mask[tgt_i, src_j] = 1` means that when calculating the - embedding for `tgt_i`, we exclude (mask out) `src_j`. This is - useful for strided self-attention. 
- - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - # anything in original attn_mask = 1, becomes -1e8 - # anything in original attn_mask = 0, becomes 0 - # Note that we cannot use -inf here, because at some edge cases, - # the attention weight (before softmax) for some padded element in query - # will become -inf, which results in NaN in model parameters - if attn_mask is not None: - attn_mask = attn_mask.masked_fill( - attn_mask.to(torch.bool), - -1e8 if x.dtype == torch.float32 else -1e4 - ) - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - x, _ = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=encoder_padding_mask, - need_weights=False, - attn_mask=attn_mask, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - return x - - -# backward compatible with the legacy argparse format -class TransformerEncoderLayer(TransformerEncoderLayerBase): - def __init__(self, args): - super().__init__(TransformerConfig.from_namespace(args)) - self.args = args - - def build_self_attention(self, embed_dim, args): - return super().build_self_attention( - embed_dim, TransformerConfig.from_namespace(args) - ) - - -class TransformerDecoderLayerBase(nn.Module): - """Decoder layer block. - - In the original paper each operation (multi-head attention, encoder - attention or FFN) is postprocessed with: `dropout -> add residual -> - layernorm`. In the tensor2tensor code they suggest that learning is more - robust when preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *cfg.decoder.normalize_before* to ``True``. - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). 
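- - Sublayer order in ``forward`` (an editor's summary of the code below):: - - x = x + SelfAttn(x) - x = x + EncoderAttn(x, encoder_out)  # skipped if no_encoder_attn - x = x + FFN(x) - - with LayerNorm applied before or after each sublayer depending on - *cfg.decoder.normalize_before*.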
- """ - - def __init__( - self, cfg, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False - ): - super().__init__() - self.embed_dim = cfg.decoder.embed_dim - self.dropout_module = FairseqDropout( - cfg.dropout, module_name=self.__class__.__name__ - ) - self.quant_noise = cfg.quant_noise.pq - self.quant_noise_block_size = cfg.quant_noise.pq_block_size - - self.cross_self_attention = cfg.cross_self_attention - - self.self_attn = self.build_self_attention( - self.embed_dim, - cfg, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - - self.activation_fn = utils.get_activation_fn(activation=cfg.activation_fn) - activation_dropout_p = cfg.activation_dropout - if activation_dropout_p == 0: - # for backwards compatibility with models that use cfg.relu_dropout - activation_dropout_p = cfg.relu_dropout or 0 - self.activation_dropout_module = FairseqDropout( - float(activation_dropout_p), module_name=self.__class__.__name__ - ) - self.normalize_before = cfg.decoder.normalize_before - - self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = self.build_encoder_attention(self.embed_dim, cfg) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - - self.fc1 = self.build_fc1( - self.embed_dim, - cfg.decoder.ffn_embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - self.fc2 = self.build_fc2( - cfg.decoder.ffn_embed_dim, - self.embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - - self.final_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - self.need_attn = True - - self.onnx_trace = False - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_self_attention( - self, embed_dim, cfg, add_bias_kv=False, add_zero_attn=False - ): - return MultiheadAttention( - embed_dim, - cfg.decoder.attention_heads, - dropout=cfg.attention_dropout, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - self_attention=not cfg.cross_self_attention, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - ) - - def build_encoder_attention(self, embed_dim, cfg): - return MultiheadAttention( - embed_dim, - cfg.decoder.attention_heads, - kdim=cfg.encoder.embed_dim, - vdim=cfg.encoder.embed_dim, - dropout=cfg.attention_dropout, - encoder_decoder_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - ) - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def residual_connection(self, x, residual): - return residual + x - - def forward( - self, - x, - encoder_out: Optional[torch.Tensor] = None, - encoder_padding_mask: Optional[torch.Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - prev_self_attn_state: Optional[List[torch.Tensor]] = None, - prev_attn_state: Optional[List[torch.Tensor]] = None, - self_attn_mask: Optional[torch.Tensor] = None, - self_attn_padding_mask: Optional[torch.Tensor] = None, - need_attn: bool = False, - need_head_weights: bool = False, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor, optional): binary - ByteTensor of shape `(batch, 
src_len)` where padding - elements are indicated by ``1``. - need_attn (bool, optional): return attention weights - need_head_weights (bool, optional): return attention weights - for each head (default: return average over heads). - - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - if need_head_weights: - need_attn = True - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - if prev_self_attn_state is not None: - prev_key, prev_value = prev_self_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_self_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_self_attn_state[2] - assert incremental_state is not None - self.self_attn._set_input_buffer(incremental_state, saved_state) - _self_attn_input_buffer = self.self_attn._get_input_buffer(incremental_state) - if self.cross_self_attention and not ( - incremental_state is not None - and _self_attn_input_buffer is not None - and "prev_key" in _self_attn_input_buffer - ): - if self_attn_mask is not None: - assert encoder_out is not None - self_attn_mask = torch.cat( - (x.new_zeros(x.size(0), encoder_out.size(0)), self_attn_mask), dim=1 - ) - if self_attn_padding_mask is not None: - if encoder_padding_mask is None: - assert encoder_out is not None - encoder_padding_mask = self_attn_padding_mask.new_zeros( - encoder_out.size(1), encoder_out.size(0) - ) - self_attn_padding_mask = torch.cat( - (encoder_padding_mask, self_attn_padding_mask), dim=1 - ) - assert encoder_out is not None - y = torch.cat((encoder_out, x), dim=0) - else: - y = x - - x, attn = self.self_attn( - query=x, - key=y, - value=y, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - need_weights=False, - attn_mask=self_attn_mask, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - if self.encoder_attn is not None and encoder_out is not None: - residual = x - if self.normalize_before: - x = self.encoder_attn_layer_norm(x) - if prev_attn_state is not None: - prev_key, prev_value = prev_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_attn_state[2] - assert incremental_state is not None - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=need_attn or (not self.training and self.need_attn), - need_head_weights=need_head_weights, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.encoder_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - if self.onnx_trace and incremental_state is not None: - saved_state = self.self_attn._get_input_buffer(incremental_state) - assert saved_state is not None - if self_attn_padding_mask is not None: - self_attn_state = [ - saved_state["prev_key"], - saved_state["prev_value"], - 
saved_state["prev_key_padding_mask"], - ] - else: - self_attn_state = [saved_state["prev_key"], saved_state["prev_value"]] - return x, attn, self_attn_state - return x, attn, None - - def make_generation_fast_(self, need_attn: bool = False, **kwargs): - self.need_attn = need_attn - - -# backward compatible with the legacy argparse format -class TransformerDecoderLayer(TransformerDecoderLayerBase): - def __init__( - self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False - ): - super().__init__( - TransformerConfig.from_namespace(args), - no_encoder_attn=no_encoder_attn, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - self.args = args - - def build_self_attention( - self, embed_dim, args, add_bias_kv=False, add_zero_attn=False - ): - return super().build_self_attention( - embed_dim, - TransformerConfig.from_namespace(args), - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - - def build_encoder_attention(self, embed_dim, args): - return super().build_encoder_attention( - embed_dim, - TransformerConfig.from_namespace(args), - ) diff --git a/spaces/ORI-Muchim/BarKeYaeTTS/text/__init__.py b/spaces/ORI-Muchim/BarKeYaeTTS/text/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/BarKeYaeTTS/text/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/Omnibus/summarize-long-text/summarize.py b/spaces/Omnibus/summarize-long-text/summarize.py deleted file mode 100644 index 35ee78e06538ac6826887be8dc0d5c9694760aaa..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/summarize-long-text/summarize.py +++ /dev/null @@ -1,152 +0,0 @@ -import logging -import pprint as pp - -from utils import validate_pytorch2 - -logging.basicConfig(level=logging.INFO) -import torch -from tqdm.auto import tqdm -from transformers import AutoModelForSeq2SeqLM, AutoTokenizer - - -def load_model_and_tokenizer(model_name: str) -> tuple: - """ - load_model_and_tokenizer - load a model and tokenizer from a model name/ID on the hub - :param str model_name: the model name/ID on the hub - :return tuple: a tuple containing the model and tokenizer - """ - logger = logging.getLogger(__name__) - device = "cuda" if torch.cuda.is_available() else "cpu" - model = AutoModelForSeq2SeqLM.from_pretrained( - model_name, - ).to(device) - model = model.eval() - - tokenizer = AutoTokenizer.from_pretrained(model_name) - - logger.info(f"Loaded model {model_name} to {device}") - - if validate_pytorch2(): - try: - logger.info("Compiling model with Torch 2.0") - model = torch.compile(model) 
 - except Exception as e: - logger.warning(f"Could not compile model with Torch 2.0: {e}") - else: - logger.info("Torch 2.0 not detected, skipping compilation") - - return model, tokenizer - - -def summarize_and_score(ids, mask, model, tokenizer, **kwargs): - """ - summarize_and_score - given a batch of ids and a mask, return a summary and a score for the summary - - Args: - ids (): the batch of ids - mask (): the attention mask for the batch - model (): the model to use for summarization - tokenizer (): the tokenizer to use for summarization - - Returns: - tuple: the summary of the batch and its score - """ - - ids = ids[None, :] - mask = mask[None, :] - - input_ids = ids.to("cuda") if torch.cuda.is_available() else ids - attention_mask = mask.to("cuda") if torch.cuda.is_available() else mask - - global_attention_mask = torch.zeros_like(attention_mask) - # put global attention on the <s> token - global_attention_mask[:, 0] = 1 - - summary_pred_ids = model.generate( - input_ids, - attention_mask=attention_mask, - global_attention_mask=global_attention_mask, - output_scores=True, - return_dict_in_generate=True, - **kwargs, - ) - summary = tokenizer.batch_decode( - summary_pred_ids.sequences, - skip_special_tokens=True, - remove_invalid_values=True, - ) - score = round(summary_pred_ids.sequences_scores.cpu().numpy()[0], 4) - - return summary, score - - -def summarize_via_tokenbatches( - input_text: str, - model, - tokenizer, - batch_length=2048, - batch_stride=16, - min_batch_length: int = 512, - **kwargs, -): - """ - summarize_via_tokenbatches - a function that takes a string and returns a summary - - Args: - input_text (str): the text to summarize - model (): the model to use for summarization - tokenizer (): the tokenizer to use for summarization - batch_length (int, optional): the length of each batch. Defaults to 2048. - batch_stride (int, optional): the stride of each batch. Defaults to 16. The stride is the number of tokens that overlap between batches. - - Returns: - list: a list of dicts, one per batch, with the input tokens, summary text, and summary score - """ - # log all input parameters - logger = logging.getLogger(__name__) - if batch_length < min_batch_length: - logger.warning( - f"batch_length must be at least {min_batch_length}. 
Setting batch_length to {min_batch_length}" - ) - batch_length = min_batch_length - - logger.info(f"input parameters:\n{pp.pformat(kwargs)}") - logger.info(f"batch_length: {batch_length}, batch_stride: {batch_stride}") - encoded_input = tokenizer( - input_text, - padding="max_length", - truncation=True, - max_length=batch_length, - stride=batch_stride, - return_overflowing_tokens=True, - add_special_tokens=False, - return_tensors="pt", - ) - - in_id_arr, att_arr = encoded_input.input_ids, encoded_input.attention_mask - gen_summaries = [] - - pbar = tqdm(total=len(in_id_arr), desc="Summarizing") - - for _id, _mask in zip(in_id_arr, att_arr): - result, score = summarize_and_score( - ids=_id, - mask=_mask, - model=model, - tokenizer=tokenizer, - **kwargs, - ) - score = round(float(score), 4) - _sum = { - "input_tokens": _id, - "summary": result, - "summary_score": score, - } - gen_summaries.append(_sum) - logger.info(f"SCore {score} for summary:\n\t{result}") - pbar.update() - - pbar.close() - logger.debug(f"Generated summaries:\n{pp.pformat(gen_summaries)}") - return gen_summaries diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/flatten.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/flatten.py deleted file mode 100644 index f5ba4297567d650f147eebeed361e9d62fab899d..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/flatten.py +++ /dev/null @@ -1,330 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import collections -from dataclasses import dataclass -from typing import Callable, List, Optional, Tuple -import torch -from torch import nn - -from detectron2.structures import Boxes, Instances, ROIMasks -from detectron2.utils.registry import _convert_target_to_string, locate - -from .torchscript_patch import patch_builtin_len - - -@dataclass -class Schema: - """ - A Schema defines how to flatten a possibly hierarchical object into tuple of - primitive objects, so it can be used as inputs/outputs of PyTorch's tracing. - - PyTorch does not support tracing a function that produces rich output - structures (e.g. dict, Instances, Boxes). To trace such a function, we - flatten the rich object into tuple of tensors, and return this tuple of tensors - instead. Meanwhile, we also need to know how to "rebuild" the original object - from the flattened results, so we can evaluate the flattened results. - A Schema defines how to flatten an object, and while flattening it, it records - necessary schemas so that the object can be rebuilt using the flattened outputs. - - The flattened object and the schema object is returned by ``.flatten`` classmethod. - Then the original object can be rebuilt with the ``__call__`` method of schema. - - A Schema is a dataclass that can be serialized easily. - """ - - # inspired by FetchMapper in tensorflow/python/client/session.py - - @classmethod - def flatten(cls, obj): - raise NotImplementedError - - def __call__(self, values): - raise NotImplementedError - - @staticmethod - def _concat(values): - ret = () - sizes = [] - for v in values: - assert isinstance(v, tuple), "Flattened results must be a tuple" - ret = ret + v - sizes.append(len(v)) - return ret, sizes - - @staticmethod - def _split(values, sizes): - if len(sizes): - expected_len = sum(sizes) - assert ( - len(values) == expected_len - ), f"Values has length {len(values)} but expect length {expected_len}." 
- ret = [] - for k in range(len(sizes)): - begin, end = sum(sizes[:k]), sum(sizes[: k + 1]) - ret.append(values[begin:end]) - return ret - - -@dataclass -class ListSchema(Schema): - schemas: List[Schema] # the schemas that define how to flatten each element in the list - sizes: List[int] # the flattened length of each element - - def __call__(self, values): - values = self._split(values, self.sizes) - if len(values) != len(self.schemas): - raise ValueError( - f"Values has length {len(values)} but schemas " f"has length {len(self.schemas)}!" - ) - values = [m(v) for m, v in zip(self.schemas, values)] - return list(values) - - @classmethod - def flatten(cls, obj): - res = [flatten_to_tuple(k) for k in obj] - values, sizes = cls._concat([k[0] for k in res]) - return values, cls([k[1] for k in res], sizes) - - -@dataclass -class TupleSchema(ListSchema): - def __call__(self, values): - return tuple(super().__call__(values)) - - -@dataclass -class IdentitySchema(Schema): - def __call__(self, values): - return values[0] - - @classmethod - def flatten(cls, obj): - return (obj,), cls() - - -@dataclass -class DictSchema(ListSchema): - keys: List[str] - - def __call__(self, values): - values = super().__call__(values) - return dict(zip(self.keys, values)) - - @classmethod - def flatten(cls, obj): - for k in obj.keys(): - if not isinstance(k, str): - raise KeyError("Only support flattening dictionaries if keys are str.") - keys = sorted(obj.keys()) - values = [obj[k] for k in keys] - ret, schema = ListSchema.flatten(values) - return ret, cls(schema.schemas, schema.sizes, keys) - - -@dataclass -class InstancesSchema(DictSchema): - def __call__(self, values): - image_size, fields = values[-1], values[:-1] - fields = super().__call__(fields) - return Instances(image_size, **fields) - - @classmethod - def flatten(cls, obj): - ret, schema = super().flatten(obj.get_fields()) - size = obj.image_size - if not isinstance(size, torch.Tensor): - size = torch.tensor(size) - return ret + (size,), schema - - -@dataclass -class TensorWrapSchema(Schema): - """ - For classes that are simple wrapper of tensors, e.g. - Boxes, RotatedBoxes, BitMasks - """ - - class_name: str - - def __call__(self, values): - return locate(self.class_name)(values[0]) - - @classmethod - def flatten(cls, obj): - return (obj.tensor,), cls(_convert_target_to_string(type(obj))) - - -# if more custom structures needed in the future, can allow -# passing in extra schemas for custom types -def flatten_to_tuple(obj): - """ - Flatten an object so it can be used for PyTorch tracing. - Also returns how to rebuild the original object from the flattened outputs. - - Returns: - res (tuple): the flattened results that can be used as tracing outputs - schema: an object with a ``__call__`` method such that ``schema(res) == obj``. - It is a pure dataclass that can be serialized. - """ - schemas = [ - ((str, bytes), IdentitySchema), - (list, ListSchema), - (tuple, TupleSchema), - (collections.abc.Mapping, DictSchema), - (Instances, InstancesSchema), - ((Boxes, ROIMasks), TensorWrapSchema), - ] - for klass, schema in schemas: - if isinstance(obj, klass): - F = schema - break - else: - F = IdentitySchema - - return F.flatten(obj) - - -class TracingAdapter(nn.Module): - """ - A model may take rich input/output format (e.g. dict or custom classes), - but `torch.jit.trace` requires tuple of tensors as input/output. - This adapter flattens input/output format of a model so it becomes traceable. 
- - It also records the necessary schema to rebuild model's inputs/outputs from flattened - inputs/outputs. - - Example: - :: - outputs = model(inputs) # inputs/outputs may be rich structure - adapter = TracingAdapter(model, inputs) - - # can now trace the model, with adapter.flattened_inputs, or another - # tuple of tensors with the same length and meaning - traced = torch.jit.trace(adapter, adapter.flattened_inputs) - - # traced model can only produce flattened outputs (tuple of tensors) - flattened_outputs = traced(*adapter.flattened_inputs) - # adapter knows the schema to convert it back (new_outputs == outputs) - new_outputs = adapter.outputs_schema(flattened_outputs) - """ - - flattened_inputs: Tuple[torch.Tensor] = None - """ - Flattened version of inputs given to this class's constructor. - """ - - inputs_schema: Schema = None - """ - Schema of the inputs given to this class's constructor. - """ - - outputs_schema: Schema = None - """ - Schema of the output produced by calling the given model with inputs. - """ - - def __init__( - self, - model: nn.Module, - inputs, - inference_func: Optional[Callable] = None, - allow_non_tensor: bool = False, - ): - """ - Args: - model: an nn.Module - inputs: An input argument or a tuple of input arguments used to call model. - After flattening, it has to only consist of tensors. - inference_func: a callable that takes (model, *inputs), calls the - model with inputs, and return outputs. By default it - is ``lambda model, *inputs: model(*inputs)``. Can be override - if you need to call the model differently. - allow_non_tensor: allow inputs/outputs to contain non-tensor objects. - This option will filter out non-tensor objects to make the - model traceable, but ``inputs_schema``/``outputs_schema`` cannot be - used anymore because inputs/outputs cannot be rebuilt from pure tensors. - This is useful when you're only interested in the single trace of - execution (e.g. for flop count), but not interested in - generalizing the traced graph to new inputs. - """ - super().__init__() - if isinstance(model, (nn.parallel.distributed.DistributedDataParallel, nn.DataParallel)): - model = model.module - self.model = model - if not isinstance(inputs, tuple): - inputs = (inputs,) - self.inputs = inputs - self.allow_non_tensor = allow_non_tensor - - if inference_func is None: - inference_func = lambda model, *inputs: model(*inputs) # noqa - self.inference_func = inference_func - - self.flattened_inputs, self.inputs_schema = flatten_to_tuple(inputs) - - if all(isinstance(x, torch.Tensor) for x in self.flattened_inputs): - return - if self.allow_non_tensor: - self.flattened_inputs = tuple( - [x for x in self.flattened_inputs if isinstance(x, torch.Tensor)] - ) - self.inputs_schema = None - else: - for input in self.flattened_inputs: - if not isinstance(input, torch.Tensor): - raise ValueError( - "Inputs for tracing must only contain tensors. " - f"Got a {type(input)} instead." - ) - - def forward(self, *args: torch.Tensor): - with torch.no_grad(), patch_builtin_len(): - if self.inputs_schema is not None: - inputs_orig_format = self.inputs_schema(args) - else: - if len(args) != len(self.flattened_inputs) or any( - x is not y for x, y in zip(args, self.flattened_inputs) - ): - raise ValueError( - "TracingAdapter does not contain valid inputs_schema." - " So it cannot generalize to other inputs and must be" - " traced with `.flattened_inputs`." 
- ) - inputs_orig_format = self.inputs - - outputs = self.inference_func(self.model, *inputs_orig_format) - flattened_outputs, schema = flatten_to_tuple(outputs) - - flattened_output_tensors = tuple( - [x for x in flattened_outputs if isinstance(x, torch.Tensor)] - ) - if len(flattened_output_tensors) < len(flattened_outputs): - if self.allow_non_tensor: - flattened_outputs = flattened_output_tensors - self.outputs_schema = None - else: - raise ValueError( - "Model cannot be traced because some model outputs " - "cannot flatten to tensors." - ) - else: # schema is valid - if self.outputs_schema is None: - self.outputs_schema = schema - else: - assert self.outputs_schema == schema, ( - "Model should always return outputs with the same " - "structure so it can be traced!" - ) - return flattened_outputs - - def _create_wrapper(self, traced_model): - """ - Return a function that has an input/output interface the same as the - original model, but it calls the given traced model under the hood. - """ - - def forward(*args): - flattened_inputs, _ = flatten_to_tuple(args) - flattened_outputs = traced_model(*flattened_inputs) - return self.outputs_schema(flattened_outputs) - - return forward diff --git a/spaces/OptimalScale/Robin-33b/lmflow/datasets/dataset.py b/spaces/OptimalScale/Robin-33b/lmflow/datasets/dataset.py deleted file mode 100644 index 8228d20ab4165515c2d1d09ae679473a53dbb6ed..0000000000000000000000000000000000000000 --- a/spaces/OptimalScale/Robin-33b/lmflow/datasets/dataset.py +++ /dev/null @@ -1,308 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -"""This Python code defines a class Dataset with methods for initializing, loading, -and manipulating datasets from different backends such as Hugging Face and JSON. - -The `Dataset` class includes methods for loading datasets from a dictionary and a Hugging -Face dataset, mapping datasets, and retrieving the backend dataset and arguments. -""" - - - -# Importing necessary libraries and modules -import json -from pathlib import Path -from typing import Optional - -from datasets import load_dataset -from datasets import Dataset as HFDataset - -from lmflow.args import DatasetArguments - -DATASET_TYPES = [ - "text_only", - "text2text", -] - -KEY_TYPE = "type" -KEY_INSTANCES = "instances" - -class Dataset: - r""" - Initializes the Dataset object with the given parameters. - - Parameters - ------------ - data_args : DatasetArguments object. - Contains the arguments required to load the dataset. - - backend : str, default="huggingface" - A string representing the dataset backend. Defaults to "huggingface". - - args : Optional. - Positional arguments. - - kwargs : Optional. - Keyword arguments. - """ - def __init__(self, data_args=None, backend: str="huggingface", *args, **kwargs): - self.data_args = data_args - self.backend = backend - self.backend_dataset = None - self.type = None # Original type of the dataset - self.dataset_path = data_args.dataset_path - - if data_args.dataset_path is None: - return - - if backend == "huggingface": - data_files = [ - x.absolute().as_posix() - for x in Path(self.dataset_path).glob("*.json") - ] - - # Iterate through all the files and ensure they have the same data type - for single_file in data_files: - with open(single_file) as fin: - json_data = json.load(fin) - if KEY_TYPE not in json_data.keys(): - raise ValueError( - f'"{KEY_TYPE}" field must be specified for data, e.g.' - '{\n' - f' "{KEY_TYPE}: "text_only",\n' - f' "{KEY_INSTANCES}": [\n' - ' { "text": "Sentence 1: This is a sentence." 
}\n'
-                            '        { "text": "Sentence 2: This is another sentence." }\n'
-                            f'    ]\n'
-                            '}'
-                        )
-
-                    if self.type is None:
-                        self.type = json_data[KEY_TYPE]
-                    elif self.type != json_data[KEY_TYPE]:
-                        raise ValueError(
-                            'All task files must have the same data type. Previous'
-                            f' files have type "{self.type}", but in file'
-                            f' {single_file}, it has type "{json_data[KEY_TYPE]}".'
-                        )
-
-            # Load the dataset using the HuggingFace dataset library
-            extensions = "json"
-            raw_dataset = load_dataset(
-                extensions,
-                data_files=data_files,
-                field=KEY_INSTANCES,
-                split="train",
-                use_auth_token=None,
-            )
-            self.backend_dataset = raw_dataset
-        elif backend == "json":
-            # TODO (@Jiachun)
-            pass
-        else:
-            raise NotImplementedError(f'Unsupported dataset backend "{backend}"')
-
-
-    def _check_data_type(self):
-        # TODO: check if data type and data structure matches, raise messages
-        # with hints
-        pass
-
-
-    def from_dict(self, dict_obj: dict, *args, **kwargs):
-        r"""
-        Create a Dataset object from a dictionary.
-
-        Return a Dataset given a dict with format:
-            {
-                "type": TYPE,
-                "instances": [
-                    {
-                        "key_1": VALUE_1.1,
-                        "key_2": VALUE_1.2,
-                        ...
-                    },
-                    {
-                        "key_1": VALUE_2.1,
-                        "key_2": VALUE_2.2,
-                        ...
-                    },
-                    ...
-                ]
-            }
-
-        Parameters
-        -----------
-
-        dict_obj : dict.
-            A dictionary containing the dataset information.
-
-        args : Optional.
-            Positional arguments.
-
-        kwargs : Optional.
-            Keyword arguments.
-
-        Returns
-        ---------
-
-        self : Dataset object.
-        """
-        if self.backend == "huggingface":
-            if KEY_TYPE not in dict_obj:
-                raise ValueError(
-                    f'"{KEY_TYPE}" must be provided to initialize a dataset'
-                )
-            if KEY_INSTANCES not in dict_obj:
-                raise ValueError(
-                    f'"{KEY_INSTANCES}" must be provided to initialize a dataset'
-                )
-
-            self.type = dict_obj[KEY_TYPE]
-
-            hf_dict = {}
-            if len(dict_obj[KEY_INSTANCES]) > 0:
-                for key in dict_obj[KEY_INSTANCES][0].keys():
-                    hf_dict[key] = [ instance[key] for instance in dict_obj[KEY_INSTANCES] ]
-
-            self.backend_dataset = HFDataset.from_dict(hf_dict, *args, **kwargs)
-            return self
-        else:
-            raise NotImplementedError(
-                f'Currently .from_dict is not supported for backend "{self.backend}"'
-            )
-
-
-    @classmethod
-    def create_from_dict(cls, dict_obj, *args, **kwargs):
-        r"""
-        Returns
-        --------
-
-        Returns a Dataset object given a dict.
-        """
-        empty_data_args = DatasetArguments(dataset_path=None)
-        dataset = Dataset(empty_data_args)
-        return dataset.from_dict(dict_obj)
-
-
-    def to_dict(self):
-        r"""
-        Returns
-        ---------
-
-        Return a dict that represents the dataset:
-            {
-                "type": TYPE,
-                "instances": [
-                    {
-                        "key_1": VALUE_1.1,
-                        "key_2": VALUE_1.2,
-                        ...
-                    },
-                    {
-                        "key_1": VALUE_2.1,
-                        "key_2": VALUE_2.2,
-                        ...
-                    },
-                    ...
-                ]
-            }
-
-        A python dict object representing the content of this dataset.
-        """
-        if self.backend == "huggingface":
-            dict_obj = {}
-            dict_obj[KEY_TYPE] = self.get_type()
-
-            hf_dict = self.backend_dataset.to_dict()
-            dict_obj[KEY_INSTANCES] = []
-
-            first_key = None
-            for key in hf_dict.keys():
-                first_key = key
-                break
-
-            if first_key is not None:
-                num_instances = len(hf_dict[first_key])
-                dict_obj[KEY_INSTANCES] = [
-                    {
-                        key: hf_dict[key][i] for key in hf_dict.keys()
-                    }
-                    for i in range(num_instances)
-                ]
-
-            return dict_obj
-        else:
-            raise NotImplementedError(
-                f'Currently .to_dict is not supported for backend "{self.backend}"'
-            )
-
-
-    def map(self, *args, **kwargs):
-        r"""
-        Parameters
-        ------------
-        args : Optional.
-            Positional arguments.
-
-        kwargs : Optional.
-            Keyword arguments.
-
-        Returns
-        ---------
-
-        self : Dataset object.
-        """
-        # If the dataset uses Hugging Face as the backend,
-        # call the `map()` function of the Hugging Face backend dataset
-        if self.backend == "huggingface":
-            # Set the mapped dataset as the backend dataset of the current dataset
-            mapped_backend_dataset = self.backend_dataset.map(*args, **kwargs)
-            self.backend_dataset = mapped_backend_dataset
-            return self
-        else:
-            # If the backend is not Hugging Face, raise a NotImplementedError
-            raise NotImplementedError(
-                f'Currently .map is not supported for backend "{self.backend}"'
-            )
-
-
-    def get_backend(self) -> Optional[str]:
-        r"""
-        Returns
-        ---------
-
-        self.backend
-        """
-        return self.backend
-
-
-    def get_backend_dataset(self):
-        r"""
-        Returns
-        ---------
-
-        self.backend_dataset
-        """
-        return self.backend_dataset
-
-
-    def get_data_args(self):
-        r"""
-        Returns
-        ---------
-
-        self.data_args
-        """
-        return self.data_args
-
-
-    def get_type(self):
-        r"""
-        Returns
-        ---------
-
-        self.type
-        """
-        return self.type
diff --git a/spaces/Osborn-bh/ChatGLM3-6B-Osborn/composite_demo/conversation.py b/spaces/Osborn-bh/ChatGLM3-6B-Osborn/composite_demo/conversation.py
deleted file mode 100644
index 1ac13e774775dfad8e18e728e8b33ca2a40b8f65..0000000000000000000000000000000000000000
--- a/spaces/Osborn-bh/ChatGLM3-6B-Osborn/composite_demo/conversation.py
+++ /dev/null
@@ -1,119 +0,0 @@
-from dataclasses import dataclass
-from enum import auto, Enum
-import json
-
-from PIL.Image import Image
-import streamlit as st
-from streamlit.delta_generator import DeltaGenerator
-
-TOOL_PROMPT = 'Answer the following questions as best as you can. You have access to the following tools:\n'
-
-class Role(Enum):
-    SYSTEM = auto()
-    USER = auto()
-    ASSISTANT = auto()
-    TOOL = auto()
-    INTERPRETER = auto()
-    OBSERVATION = auto()
-
-    def __str__(self):
-        match self:
-            case Role.SYSTEM:
-                return "<|system|>"
-            case Role.USER:
-                return "<|user|>"
-            case Role.ASSISTANT | Role.TOOL | Role.INTERPRETER:
-                return "<|assistant|>"
-            case Role.OBSERVATION:
-                return "<|observation|>"
-
-    # Get the message block for the given role
-    def get_message(self):
-        # Compare by value here, because the enum object in the session state
-        # is not the same as the enum cases here, due to streamlit's rerunning
-        # behavior.
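-        # Matching on self.value (rather than on the Role members themselves) therefore stays correct across reruns.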
- match self.value: - case Role.SYSTEM.value: - return - case Role.USER.value: - return st.chat_message(name="user", avatar="user") - case Role.ASSISTANT.value: - return st.chat_message(name="assistant", avatar="assistant") - case Role.TOOL.value: - return st.chat_message(name="tool", avatar="assistant") - case Role.INTERPRETER.value: - return st.chat_message(name="interpreter", avatar="assistant") - case Role.OBSERVATION.value: - return st.chat_message(name="observation", avatar="user") - case _: - st.error(f'Unexpected role: {self}') - -@dataclass -class Conversation: - role: Role - content: str - tool: str | None = None - image: Image | None = None - - def __str__(self) -> str: - print(self.role, self.content, self.tool) - match self.role: - case Role.SYSTEM | Role.USER | Role.ASSISTANT | Role.OBSERVATION: - return f'{self.role}\n{self.content}' - case Role.TOOL: - return f'{self.role}{self.tool}\n{self.content}' - case Role.INTERPRETER: - return f'{self.role}interpreter\n{self.content}' - - # Human readable format - def get_text(self) -> str: - text = postprocess_text(self.content) - match self.role.value: - case Role.TOOL.value: - text = f'Calling tool `{self.tool}`:\n{text}' - case Role.INTERPRETER.value: - text = f'{text}' - case Role.OBSERVATION.value: - text = f'Observation:\n```\n{text}\n```' - return text - - # Display as a markdown block - def show(self, placeholder: DeltaGenerator | None=None) -> str: - if placeholder: - message = placeholder - else: - message = self.role.get_message() - if self.image: - message.image(self.image) - else: - text = self.get_text() - message.markdown(text) - -def preprocess_text( - system: str | None, - tools: list[dict] | None, - history: list[Conversation], -) -> str: - if tools: - tools = json.dumps(tools, indent=4, ensure_ascii=False) - - prompt = f"{Role.SYSTEM}\n" - prompt += system if not tools else TOOL_PROMPT - if tools: - tools = json.loads(tools) - prompt += json.dumps(tools, ensure_ascii=False) - for conversation in history: - prompt += f'{conversation}' - prompt += f'{Role.ASSISTANT}\n' - return prompt - -def postprocess_text(text: str) -> str: - text = text.replace("\(", "$") - text = text.replace("\)", "$") - text = text.replace("\[", "$$") - text = text.replace("\]", "$$") - text = text.replace("<|assistant|>", "") - text = text.replace("<|observation|>", "") - text = text.replace("<|system|>", "") - text = text.replace("<|user|>", "") - return text.strip() \ No newline at end of file diff --git a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/data/__init__.py b/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/backbone/__init__.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/backbone/__init__.py deleted file mode 100644 index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/backbone/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
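For orientation, here is a minimal usage sketch of the ChatGLM3 `Conversation` helpers defined above. The `conversation` import path and the sample history are illustrative assumptions, not part of the original demo:

```python
# Hypothetical sketch of how preprocess_text assembles a ChatGLM3-style prompt
# from the Role/Conversation definitions above; the history below is made up.
from conversation import Conversation, Role, preprocess_text

history = [
    Conversation(Role.USER, "What is the capital of France?"),
    Conversation(Role.ASSISTANT, "The capital of France is Paris."),
]

# With tools=None the system text is used verbatim; if tools were given,
# TOOL_PROMPT plus the JSON-serialized tool list would be emitted instead.
prompt = preprocess_text(
    system="You are a helpful assistant.",
    tools=None,
    history=history,
)
print(prompt)
# <|system|>
# You are a helpful assistant.<|user|>
# What is the capital of France?<|assistant|>
# The capital of France is Paris.<|assistant|>
```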
diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/oneformer_model.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/oneformer_model.py deleted file mode 100644 index 01508df74cfe8a722dd937b7f54b12296258c5a1..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/oneformer_model.py +++ /dev/null @@ -1,486 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/maskformer_model.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -from typing import Tuple - -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.data import MetadataCatalog -from detectron2.modeling import META_ARCH_REGISTRY, build_backbone, build_sem_seg_head -from detectron2.modeling.backbone import Backbone -from detectron2.modeling.postprocessing import sem_seg_postprocess -from detectron2.structures import Boxes, ImageList, Instances, BitMasks -from detectron2.utils.memory import retry_if_cuda_oom - -from .modeling.criterion import SetCriterion -from .modeling.matcher import HungarianMatcher -from einops import rearrange -from .modeling.transformer_decoder.text_transformer import TextTransformer -from .modeling.transformer_decoder.oneformer_transformer_decoder import MLP -from oneformer.data.tokenizer import SimpleTokenizer, Tokenize - -@META_ARCH_REGISTRY.register() -class OneFormer(nn.Module): - """ - Main class for mask classification semantic segmentation architectures. - """ - - @configurable - def __init__( - self, - *, - backbone: Backbone, - sem_seg_head: nn.Module, - task_mlp: nn.Module, - text_encoder: nn.Module, - text_projector: nn.Module, - criterion: nn.Module, - prompt_ctx: nn.Embedding, - num_queries: int, - object_mask_threshold: float, - overlap_threshold: float, - metadata, - size_divisibility: int, - sem_seg_postprocess_before_inference: bool, - pixel_mean: Tuple[float], - pixel_std: Tuple[float], - # inference - semantic_on: bool, - panoptic_on: bool, - instance_on: bool, - detection_on: bool, - test_topk_per_image: int, - task_seq_len: int, - max_seq_len: int, - is_demo: bool, - ): - """ - Args: - backbone: a backbone module, must follow detectron2's backbone interface - sem_seg_head: a module that predicts semantic segmentation from backbone features - criterion: a module that defines the loss - num_queries: int, number of queries - object_mask_threshold: float, threshold to filter query based on classification score - for panoptic segmentation inference - overlap_threshold: overlap threshold used in general inference for panoptic segmentation - metadata: dataset meta, get `thing` and `stuff` category names for panoptic - segmentation inference - size_divisibility: Some backbones require the input height and width to be divisible by a - specific integer. We can use this to override such requirement. - sem_seg_postprocess_before_inference: whether to resize the prediction back - to original input size before semantic segmentation inference or after. - For high-resolution dataset like Mapillary, resizing predictions before - inference will cause OOM error. 
- pixel_mean, pixel_std: list or tuple with #channels element, representing - the per-channel mean and std to be used to normalize the input image - semantic_on: bool, whether to output semantic segmentation prediction - instance_on: bool, whether to output instance segmentation prediction - panoptic_on: bool, whether to output panoptic segmentation prediction - test_topk_per_image: int, instance segmentation parameter, keep topk instances per image - """ - super().__init__() - self.backbone = backbone - self.sem_seg_head = sem_seg_head - self.task_mlp = task_mlp - self.text_encoder = text_encoder - self.text_projector = text_projector - self.prompt_ctx = prompt_ctx - self.criterion = criterion - self.num_queries = num_queries - self.overlap_threshold = overlap_threshold - self.object_mask_threshold = object_mask_threshold - self.metadata = metadata - if size_divisibility < 0: - # use backbone size_divisibility if not set - size_divisibility = self.backbone.size_divisibility - self.size_divisibility = size_divisibility - self.sem_seg_postprocess_before_inference = sem_seg_postprocess_before_inference - self.register_buffer("pixel_mean", torch.Tensor(pixel_mean).view(-1, 1, 1), False) - self.register_buffer("pixel_std", torch.Tensor(pixel_std).view(-1, 1, 1), False) - - # additional args - self.semantic_on = semantic_on - self.instance_on = instance_on - self.panoptic_on = panoptic_on - self.detection_on = detection_on - self.test_topk_per_image = test_topk_per_image - - self.text_tokenizer = Tokenize(SimpleTokenizer(), max_seq_len=max_seq_len) - self.task_tokenizer = Tokenize(SimpleTokenizer(), max_seq_len=task_seq_len) - self.is_demo = is_demo - - self.thing_indices = [k for k in self.metadata.thing_dataset_id_to_contiguous_id.keys()] - - if not self.semantic_on: - assert self.sem_seg_postprocess_before_inference - - @classmethod - def from_config(cls, cfg): - backbone = build_backbone(cfg) - sem_seg_head = build_sem_seg_head(cfg, backbone.output_shape()) - - if cfg.MODEL.IS_TRAIN: - text_encoder = TextTransformer(context_length=cfg.MODEL.TEXT_ENCODER.CONTEXT_LENGTH, - width=cfg.MODEL.TEXT_ENCODER.WIDTH, - layers=cfg.MODEL.TEXT_ENCODER.NUM_LAYERS, - vocab_size=cfg.MODEL.TEXT_ENCODER.VOCAB_SIZE) - text_projector = MLP(text_encoder.width, cfg.MODEL.ONE_FORMER.HIDDEN_DIM, - cfg.MODEL.ONE_FORMER.HIDDEN_DIM, cfg.MODEL.TEXT_ENCODER.PROJ_NUM_LAYERS) - if cfg.MODEL.TEXT_ENCODER.N_CTX > 0: - prompt_ctx = nn.Embedding(cfg.MODEL.TEXT_ENCODER.N_CTX, cfg.MODEL.TEXT_ENCODER.WIDTH) - else: - prompt_ctx = None - else: - text_encoder = None - text_projector = None - prompt_ctx = None - - task_mlp = MLP(cfg.INPUT.TASK_SEQ_LEN, cfg.MODEL.ONE_FORMER.HIDDEN_DIM, - cfg.MODEL.ONE_FORMER.HIDDEN_DIM, 2) - - # Loss parameters: - deep_supervision = cfg.MODEL.ONE_FORMER.DEEP_SUPERVISION - no_object_weight = cfg.MODEL.ONE_FORMER.NO_OBJECT_WEIGHT - - # loss weights - class_weight = cfg.MODEL.ONE_FORMER.CLASS_WEIGHT - dice_weight = cfg.MODEL.ONE_FORMER.DICE_WEIGHT - mask_weight = cfg.MODEL.ONE_FORMER.MASK_WEIGHT - contrastive_weight = cfg.MODEL.ONE_FORMER.CONTRASTIVE_WEIGHT - - # building criterion - matcher = HungarianMatcher( - cost_class=class_weight, - cost_mask=mask_weight, - cost_dice=dice_weight, - num_points=cfg.MODEL.ONE_FORMER.TRAIN_NUM_POINTS, - ) - - weight_dict = {"loss_ce": class_weight, "loss_mask": mask_weight, - "loss_dice": dice_weight, "loss_contrastive": contrastive_weight} - - - if deep_supervision: - dec_layers = cfg.MODEL.ONE_FORMER.DEC_LAYERS - aux_weight_dict = {} - for i in range(dec_layers - 
1): - aux_weight_dict.update({k + f"_{i}": v for k, v in weight_dict.items()}) - weight_dict.update(aux_weight_dict) - - losses = ["labels", "masks", "contrastive"] - - criterion = SetCriterion( - sem_seg_head.num_classes, - matcher=matcher, - weight_dict=weight_dict, - eos_coef=no_object_weight, - contrast_temperature=cfg.MODEL.ONE_FORMER.CONTRASTIVE_TEMPERATURE, - losses=losses, - num_points=cfg.MODEL.ONE_FORMER.TRAIN_NUM_POINTS, - oversample_ratio=cfg.MODEL.ONE_FORMER.OVERSAMPLE_RATIO, - importance_sample_ratio=cfg.MODEL.ONE_FORMER.IMPORTANCE_SAMPLE_RATIO, - ) - - return { - "backbone": backbone, - "sem_seg_head": sem_seg_head, - "task_mlp": task_mlp, - "prompt_ctx": prompt_ctx, - "text_encoder": text_encoder, - "text_projector": text_projector, - "criterion": criterion, - "num_queries": cfg.MODEL.ONE_FORMER.NUM_OBJECT_QUERIES, - "object_mask_threshold": cfg.MODEL.TEST.OBJECT_MASK_THRESHOLD, - "overlap_threshold": cfg.MODEL.TEST.OVERLAP_THRESHOLD, - "metadata": MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), - "size_divisibility": cfg.MODEL.ONE_FORMER.SIZE_DIVISIBILITY, - "sem_seg_postprocess_before_inference": ( - cfg.MODEL.TEST.SEM_SEG_POSTPROCESSING_BEFORE_INFERENCE - or cfg.MODEL.TEST.PANOPTIC_ON - or cfg.MODEL.TEST.INSTANCE_ON - ), - "pixel_mean": cfg.MODEL.PIXEL_MEAN, - "pixel_std": cfg.MODEL.PIXEL_STD, - # inference - "semantic_on": cfg.MODEL.TEST.SEMANTIC_ON, - "instance_on": cfg.MODEL.TEST.INSTANCE_ON, - "panoptic_on": cfg.MODEL.TEST.PANOPTIC_ON, - "detection_on": cfg.MODEL.TEST.DETECTION_ON, - "test_topk_per_image": cfg.TEST.DETECTIONS_PER_IMAGE, - "task_seq_len": cfg.INPUT.TASK_SEQ_LEN, - "max_seq_len": cfg.INPUT.MAX_SEQ_LEN, - "is_demo": cfg.MODEL.IS_DEMO, - } - - @property - def device(self): - return self.pixel_mean.device - - def encode_text(self, text): - assert text.ndim in [2, 3], text.ndim - b = text.shape[0] - squeeze_dim = False - num_text = 1 - if text.ndim == 3: - num_text = text.shape[1] - text = rearrange(text, 'b n l -> (b n) l', n=num_text) - squeeze_dim = True - - # [B, C] - x = self.text_encoder(text) - - text_x = self.text_projector(x) - - if squeeze_dim: - text_x = rearrange(text_x, '(b n) c -> b n c', n=num_text) - if self.prompt_ctx is not None: - text_ctx = self.prompt_ctx.weight.unsqueeze(0).repeat(text_x.shape[0], 1, 1) - text_x = torch.cat([text_x, text_ctx], dim=1) - - return {"texts": text_x} - - def forward(self, batched_inputs): - """ - Args: - batched_inputs: a list, batched outputs of :class:`DatasetMapper`. - Each item in the list contains the inputs for one image. - For now, each item in the list is a dict that contains: - * "image": Tensor, image in (C, H, W) format. - * "instances": per-region ground truth - * Other information that's included in the original dicts, such as: - "height", "width" (int): the output resolution of the model (may be different - from input resolution), used in inference. - Returns: - list[dict]: - each dict has the results for one image. The dict contains the following keys: - * "sem_seg": - A Tensor that represents the - per-pixel segmentation prediced by the head. - The prediction has shape KxHxW that represents the logits of - each class for each pixel. - * "panoptic_seg": - A tuple that represent panoptic output - panoptic_seg (Tensor): of shape (height, width) where the values are ids for each segment. - segments_info (list[dict]): Describe each segment in `panoptic_seg`. - Each dict contains keys "id", "category_id", "isthing". 
- """ - images = [x["image"].to(self.device) for x in batched_inputs] - images = [(x - self.pixel_mean) / self.pixel_std for x in images] - images = ImageList.from_tensors(images, self.size_divisibility) - - tasks = torch.cat([self.task_tokenizer(x["task"]).to(self.device).unsqueeze(0) for x in batched_inputs], dim=0) - tasks = self.task_mlp(tasks.float()) - - features = self.backbone(images.tensor) - outputs = self.sem_seg_head(features, tasks) - - if self.training: - texts = torch.cat([self.text_tokenizer(x["text"]).to(self.device).unsqueeze(0) for x in batched_inputs], dim=0) - texts_x = self.encode_text(texts) - - outputs = {**outputs, **texts_x} - - # mask classification target - if "instances" in batched_inputs[0]: - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - targets = self.prepare_targets(gt_instances, images) - else: - targets = None - - # bipartite matching-based loss - losses = self.criterion(outputs, targets) - - for k in list(losses.keys()): - if k in self.criterion.weight_dict: - losses[k] *= self.criterion.weight_dict[k] - else: - # remove this loss if not specified in `weight_dict` - losses.pop(k) - return losses - else: - mask_cls_results = outputs["pred_logits"] - mask_pred_results = outputs["pred_masks"] - # upsample masks - mask_pred_results = F.interpolate( - mask_pred_results, - size=(images.tensor.shape[-2], images.tensor.shape[-1]), - mode="bilinear", - align_corners=False, - ) - - del outputs - - processed_results = [] - for i, data in enumerate(zip( - mask_cls_results, mask_pred_results, batched_inputs, images.image_sizes - )): - mask_cls_result, mask_pred_result, input_per_image, image_size = data - height = input_per_image.get("height", image_size[0]) - width = input_per_image.get("width", image_size[1]) - processed_results.append({}) - - if self.sem_seg_postprocess_before_inference: - mask_pred_result = retry_if_cuda_oom(sem_seg_postprocess)( - mask_pred_result, image_size, height, width - ) - mask_cls_result = mask_cls_result.to(mask_pred_result) - - # semantic segmentation inference - if self.semantic_on: - r = retry_if_cuda_oom(self.semantic_inference)(mask_cls_result, mask_pred_result) - if not self.sem_seg_postprocess_before_inference: - r = retry_if_cuda_oom(sem_seg_postprocess)(r, image_size, height, width) - processed_results[-1]["sem_seg"] = r - - # panoptic segmentation inference - if self.panoptic_on: - panoptic_r = retry_if_cuda_oom(self.panoptic_inference)(mask_cls_result, mask_pred_result) - processed_results[-1]["panoptic_seg"] = panoptic_r - - # instance segmentation inference - if self.instance_on: - instance_r = retry_if_cuda_oom(self.instance_inference)(mask_cls_result, mask_pred_result, input_per_image["task"]) - processed_results[-1]["instances"] = instance_r - - if self.detection_on: - bbox_r = retry_if_cuda_oom(self.instance_inference)(mask_cls_result, mask_pred_result, input_per_image["task"]) - processed_results[-1]["box_instances"] = bbox_r - - return processed_results - - def prepare_targets(self, targets, images): - h_pad, w_pad = images.tensor.shape[-2:] - new_targets = [] - for targets_per_image in targets: - # pad gt - gt_masks = targets_per_image.gt_masks - padded_masks = torch.zeros((gt_masks.shape[0], h_pad, w_pad), dtype=gt_masks.dtype, device=gt_masks.device) - padded_masks[:, : gt_masks.shape[1], : gt_masks.shape[2]] = gt_masks - new_targets.append( - { - "labels": targets_per_image.gt_classes, - "masks": padded_masks, - } - ) - return new_targets - - def semantic_inference(self, mask_cls, 
mask_pred): - mask_cls = F.softmax(mask_cls, dim=-1)[..., :-1] - mask_pred = mask_pred.sigmoid() - semseg = torch.einsum("qc,qhw->chw", mask_cls, mask_pred) - return semseg - - def panoptic_inference(self, mask_cls, mask_pred): - scores, labels = F.softmax(mask_cls, dim=-1).max(-1) - mask_pred = mask_pred.sigmoid() - - keep = labels.ne(self.sem_seg_head.num_classes) & (scores > self.object_mask_threshold) - cur_scores = scores[keep] - cur_classes = labels[keep] - cur_masks = mask_pred[keep] - cur_mask_cls = mask_cls[keep] - cur_mask_cls = cur_mask_cls[:, :-1] - - cur_prob_masks = cur_scores.view(-1, 1, 1) * cur_masks - - h, w = cur_masks.shape[-2:] - panoptic_seg = torch.zeros((h, w), dtype=torch.int32, device=cur_masks.device) - segments_info = [] - - current_segment_id = 0 - - if cur_masks.shape[0] == 0: - # We didn't detect any mask :( - return panoptic_seg, segments_info - else: - # take argmax - cur_mask_ids = cur_prob_masks.argmax(0) - stuff_memory_list = {} - for k in range(cur_classes.shape[0]): - pred_class = cur_classes[k].item() - isthing = pred_class in self.metadata.thing_dataset_id_to_contiguous_id.values() - mask_area = (cur_mask_ids == k).sum().item() - original_area = (cur_masks[k] >= 0.5).sum().item() - mask = (cur_mask_ids == k) & (cur_masks[k] >= 0.5) - - if mask_area > 0 and original_area > 0 and mask.sum().item() > 0: - if mask_area / original_area < self.overlap_threshold: - continue - - # merge stuff regions - if not isthing: - if int(pred_class) in stuff_memory_list.keys(): - panoptic_seg[mask] = stuff_memory_list[int(pred_class)] - continue - else: - stuff_memory_list[int(pred_class)] = current_segment_id + 1 - - current_segment_id += 1 - panoptic_seg[mask] = current_segment_id - - segments_info.append( - { - "id": current_segment_id, - "isthing": bool(isthing), - "category_id": int(pred_class), - } - ) - - return panoptic_seg, segments_info - - def instance_inference(self, mask_cls, mask_pred, task_type): - # mask_pred is already processed to have the same shape as original input - image_size = mask_pred.shape[-2:] - - # [Q, K] - scores = F.softmax(mask_cls, dim=-1)[:, :-1] - labels = torch.arange(self.sem_seg_head.num_classes, device=self.device).unsqueeze(0).repeat(self.num_queries, 1).flatten(0, 1) - - # scores_per_image, topk_indices = scores.flatten(0, 1).topk(self.num_queries, sorted=False) - scores_per_image, topk_indices = scores.flatten(0, 1).topk(self.test_topk_per_image, sorted=False) - labels_per_image = labels[topk_indices] - - topk_indices = topk_indices // self.sem_seg_head.num_classes - # mask_pred = mask_pred.unsqueeze(1).repeat(1, self.sem_seg_head.num_classes, 1).flatten(0, 1) - mask_pred = mask_pred[topk_indices] - - # Only consider scores with confidence over [self.object_mask_threshold] for demo - if self.is_demo: - keep = scores_per_image > self.object_mask_threshold - scores_per_image = scores_per_image[keep] - labels_per_image = labels_per_image[keep] - mask_pred = mask_pred[keep] - - # if this is panoptic segmentation, we only keep the "thing" classes - if self.panoptic_on: - keep = torch.zeros_like(scores_per_image).bool() - for i, lab in enumerate(labels_per_image): - keep[i] = lab in self.metadata.thing_dataset_id_to_contiguous_id.values() - - scores_per_image = scores_per_image[keep] - labels_per_image = labels_per_image[keep] - mask_pred = mask_pred[keep] - - if 'ade20k' in self.metadata.name and not self.is_demo and "instance" in task_type: - for i in range(labels_per_image.shape[0]): - labels_per_image[i] = 
self.thing_indices.index(labels_per_image[i].item()) - - result = Instances(image_size) - # mask (before sigmoid) - result.pred_masks = (mask_pred > 0).float() - if self.detection_on: - # Uncomment the following to get boxes from masks (this is slow) - result.pred_boxes = BitMasks(mask_pred > 0).get_bounding_boxes() - else: - result.pred_boxes = Boxes(torch.zeros(mask_pred.size(0), 4)) - - # calculate average mask prob - mask_scores_per_image = (mask_pred.sigmoid().flatten(1) * result.pred_masks.flatten(1)).sum(1) / (result.pred_masks.flatten(1).sum(1) + 1e-6) - result.scores = scores_per_image * mask_scores_per_image - result.pred_classes = labels_per_image - return result \ No newline at end of file diff --git a/spaces/PVIT/pvit/Home.py b/spaces/PVIT/pvit/Home.py deleted file mode 100644 index 212349d3feab42d296b1cc2583840ccb3b346b49..0000000000000000000000000000000000000000 --- a/spaces/PVIT/pvit/Home.py +++ /dev/null @@ -1,616 +0,0 @@ -import os -import re -import copy -import json -import yaml -import random -import streamlit as st -from PIL import Image, ImageDraw -import requests -import base64 -from io import BytesIO -import seaborn as sns -import matplotlib.pyplot as plt -import pandas as pd - -from collections import defaultdict -import datetime -import json -import os -import time - -import gradio as gr -import requests - -import hashlib -import time - -import streamlit as st -import streamlit.components.v1 as components -from streamlit_chat import message as st_message -from streamlit_drawable_canvas import st_canvas - -st.set_page_config(page_title="Model Chat", page_icon="🌍", layout="wide", initial_sidebar_state="collapsed") - -col_img, col_chat = st.columns([1, 1]) -with col_chat: - with st.container(): - input_area = st.container() - chatbox = st.container() - -# ==================== Conversation =================== # -import dataclasses -from enum import auto, Enum -from typing import List, Tuple - - -class SeparatorStyle(Enum): - """Different separator style.""" - SINGLE = auto() - TWO = auto() - -import re -# Hack for displaying Region in Chatbot -def convert_region_tags(text): - pattern = r'(.*?)<\/Region>' - replaced_text = re.sub(pattern, lambda m: '<Region>' + m.group(1).replace('<', '<').replace('>', '>') + '</Region>', text) - return replaced_text - -@dataclasses.dataclass -class Conversation: - """A class that keeps all conversation history.""" - system: str - roles: List[str] - messages: List[List[str]] - offset: int - sep_style: SeparatorStyle = SeparatorStyle.SINGLE - sep: str = "###" - sep2: str = None - version: str = "Unknown" - - skip_next: bool = False - - def get_prompt(self): - if self.sep_style == SeparatorStyle.SINGLE: - ret = self.system + self.sep - for role, message in self.messages: - if message: - if type(message) is tuple: - message, _, _ = message - ret += role + ": " + message + self.sep - else: - ret += role + ":" - return ret - elif self.sep_style == SeparatorStyle.TWO: - seps = [self.sep, self.sep2] - ret = self.system + seps[0] - for i, (role, message) in enumerate(self.messages): - if message: - if type(message) is tuple: - message, _, _ = message - ret += role + ": " + message + seps[i % 2] - else: - ret += role + ":" - return ret - else: - raise ValueError(f"Invalid style: {self.sep_style}") - - def append_message(self, role, message): - self.messages.append([role, message]) - - def get_images(self, return_pil=False): - images = [] - for i, (role, msg) in enumerate(self.messages[self.offset:]): - if i % 2 == 0: - if type(msg) is tuple: - 
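-                    # The local imports below are only needed when the message tuple actually carries an image.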
import base64 - from io import BytesIO - from PIL import Image - msg, image, image_process_mode = msg - if image_process_mode == "Pad": - def expand2square(pil_img, background_color=(122, 116, 104)): - width, height = pil_img.size - if width == height: - return pil_img - elif width > height: - result = Image.new(pil_img.mode, (width, width), background_color) - result.paste(pil_img, (0, (width - height) // 2)) - return result - else: - result = Image.new(pil_img.mode, (height, height), background_color) - result.paste(pil_img, ((height - width) // 2, 0)) - return result - image = expand2square(image) - elif image_process_mode == "Crop": - pass - elif image_process_mode == "Resize": - image = image.resize((224, 224)) - else: - raise ValueError(f"Invalid image_process_mode: {image_process_mode}") - max_hw, min_hw = max(image.size), min(image.size) - aspect_ratio = max_hw / min_hw - max_len, min_len = 800, 400 - shortest_edge = int(min(max_len / aspect_ratio, min_len, min_hw)) - longest_edge = int(shortest_edge * aspect_ratio) - W, H = image.size - if H > W: - H, W = longest_edge, shortest_edge - else: - H, W = shortest_edge, longest_edge - image = image.resize((W, H)) - if return_pil: - images.append(image) - else: - buffered = BytesIO() - image.save(buffered, format="JPEG") - img_b64_str = base64.b64encode(buffered.getvalue()).decode() - images.append(img_b64_str) - return images - - def to_gradio_chatbot(self): - ret = [] - for i, (role, msg) in enumerate(self.messages[self.offset:]): - if i % 2 == 0: - if type(msg) is tuple: - import base64 - from io import BytesIO - msg, image, image_process_mode = msg - msg = convert_region_tags(msg) - max_hw, min_hw = max(image.size), min(image.size) - aspect_ratio = max_hw / min_hw - max_len, min_len = 800, 400 - shortest_edge = int(min(max_len / aspect_ratio, min_len, min_hw)) - longest_edge = int(shortest_edge * aspect_ratio) - W, H = image.size - if H > W: - H, W = longest_edge, shortest_edge - else: - H, W = shortest_edge, longest_edge - image = image.resize((W, H)) - # image = image.resize((224, 224)) - buffered = BytesIO() - image.save(buffered, format="JPEG") - img_b64_str = base64.b64encode(buffered.getvalue()).decode() - img_str = f'user upload image' - msg = msg.replace('', img_str) - else: - msg = convert_region_tags(msg) - ret.append([msg, None]) - else: - if isinstance(msg, str): - msg = convert_region_tags(msg) - ret[-1][-1] = msg - return ret - - def copy(self): - return Conversation( - system=self.system, - roles=self.roles, - messages=[[x, y] for x, y in self.messages], - offset=self.offset, - sep_style=self.sep_style, - sep=self.sep, - sep2=self.sep2) - - def dict(self): - if len(self.get_images()) > 0: - return { - "system": self.system, - "roles": self.roles, - "messages": [[x, y[0] if type(y) is tuple else y] for x, y in self.messages], - "offset": self.offset, - "sep": self.sep, - "sep2": self.sep2, - } - return { - "system": self.system, - "roles": self.roles, - "messages": self.messages, - "offset": self.offset, - "sep": self.sep, - "sep2": self.sep2, - } - -conv_vicuna_v1_1 = Conversation( - system="A chat between a curious user and an artificial intelligence assistant. 
" - "The assistant gives helpful, detailed, and polite answers to the user's questions.", - roles=("USER", "ASSISTANT"), - version="v1", - messages=(), - offset=0, - sep_style=SeparatorStyle.TWO, - sep=" ", - sep2="", -) - -default_conversation = conv_vicuna_v1_1 - -# ==================== Chat =================== # - - -def convert_bbox_to_region(bbox_xywh, image_width, image_height): - bbox_x, bbox_y, bbox_w, bbox_h = bbox_xywh - x1 = bbox_x - y1 = bbox_y - x2 = bbox_x + bbox_w - y2 = bbox_y + bbox_h - - x1_normalized = x1 / image_width - y1_normalized = y1 / image_height - x2_normalized = x2 / image_width - y2_normalized = y2 / image_height - - x1_norm = int(x1_normalized * 1000) - y1_norm = int(y1_normalized * 1000) - x2_norm = int(x2_normalized * 1000) - y2_norm = int(y2_normalized * 1000) - - region_format = "".format(x1_norm, y1_norm, x2_norm, y2_norm) - return region_format - -def load_config(config_fn, field='chat'): - config = yaml.load(open(config_fn), Loader=yaml.Loader) - return config[field] - -chat_config = load_config('configs/chat.yaml') - -def get_model_list(): - return ['PVIT_v1.0'] - -def change_model(model_name): - if model_name != st.session_state.get('model_name', ''): - st.session_state['model_name'] = 'PVIT_v1.0' - st.session_state['model_addr'] = chat_config['model_addr'] - st.session_state['messages'] = [] - - -def init_chat(image=None): - st.session_state['image'] = image - if 'input_message' not in st.session_state: - st.session_state['input_message'] = '' - if 'messages' not in st.session_state: - st.session_state['messages'] = [] - -def clear_messages(): - st.session_state['messages'] = [] - st.session_state['input_message'] = '' - -def encode_img(img): - if isinstance(img, str): - img = Image.open(img).convert('RGB') - im_file = BytesIO() - img.save(im_file, format="JPEG") - elif isinstance(img, Image.Image): - im_file = BytesIO() - img.save(im_file, format="JPEG") - else: - im_file = img - im_bytes = im_file.getvalue() # im_bytes: image in binary format. 
- im_b64 = base64.b64encode(im_bytes).decode() - return im_b64 - - -def send_one_message(message, max_new_tokens=32, temperature=0.7): - conv = default_conversation.copy() - # for role, msg in st.session_state['messages']: - # with chatbox: - # st_message(msg.lstrip('\n'), is_user=(role==conv.roles[0])) - - # # show message - # with chatbox: - # st_message(message, is_user=True) - if 'messages' not in st.session_state: - st.session_state['messages'] = [] - if len(st.session_state['messages']) == 0: - if '' not in message: - message = '\n' + message - st.session_state['messages'].append([conv.roles[0], message]) - conv.messages = copy.deepcopy(st.session_state['messages']) - # conv.append_message(conv.roles[0], message) - conv.append_message(conv.roles[1], None) - prompt = conv.get_prompt() - - if 'canvas_result' in st.session_state: - objects = st.session_state['canvas_result'].get('objects', []) - for i, obj in enumerate(objects): - prompt = prompt.replace(f'[REGION-{i}]', obj['bbox_label']) - - headers = {"User-Agent": "LLaVA Client"} - pload = { - "prompt": prompt, - "images": [st.session_state['image']], - "max_new_tokens": max_new_tokens, - "temperature": temperature, - "stop": conv.sep2, - } - print(prompt) - response = requests.post(st.session_state['model_addr'] + "/worker_generate_stream", headers=headers, - json=pload, stream=True) - result = "" - for chunk in response.iter_lines(chunk_size=8192, decode_unicode=False, delimiter=b"\0"): - if chunk: - data_t = json.loads(chunk.decode("utf-8")) - output = data_t["text"].split(conv.roles[1]+':')[-1] - result = output - - # # show response - # with chatbox: - # st_message(result) - st.session_state['messages'].append([conv.roles[1], result]) - - -# Customize Streamlit UI using CSS # background-color: #eb5424; -st.markdown(""" - -""", unsafe_allow_html=True) - -# ==================== Draw Bounding Boxes =================== # - -COLORS = sns.color_palette("tab10", n_colors=10).as_hex() -random.Random(32).shuffle(COLORS) - -def update_annotation_states(canvas_result, ratio, img_size): - for obj in canvas_result['objects']: - top = obj["top"] * ratio - left = obj["left"] * ratio - width = obj["width"] * ratio - height = obj["height"] * ratio - obj['bbox_label'] = convert_bbox_to_region([left, top, width, height], img_size[0], img_size[1]) - st.session_state['canvas_result'] = canvas_result - st.session_state['label_color'] = COLORS[len(st.session_state['canvas_result']['objects'])+1] - -def init_canvas(): - if 'canvas_result' not in st.session_state: - st.session_state['canvas_result'] = None - if 'label_color' not in st.session_state: - st.session_state['label_color'] = COLORS[0] - -def input_message(msg): - st.session_state['input_message'] = msg - - -def get_objects(): - canvas_result = st.session_state.get('canvas_result', {}) - if canvas_result is not None: - objects = canvas_result.get('objects', []) - else: - objects = [] - return objects - -def format_object_str(input_str): - if 'canvas_result' in st.session_state: - objects = st.session_state['canvas_result'].get('objects', []) - for i, obj in enumerate(objects): - input_str = input_str.replace(f'[REGION-{i}]', obj['bbox_label']) - return input_str - -# select model -model_list = get_model_list() -with col_img: - model_name = st.selectbox( - 'Choose a model to chat with', - model_list - ) -change_model(model_name) - -css = '' -# upload image -with col_img: - image = st.file_uploader("Chat with Image", type=["png", "jpg", "jpeg"], on_change=clear_messages) - img_fn = image.name if 
image is not None else None -if image: - init_chat(encode_img(image)) - init_canvas() - - img = Image.open(image).convert('RGB') - - width = 700 - height = round(width * img.size[1] * 1.0 / img.size[0]) - ratio = img.size[0] / width - - with st.sidebar: - max_new_tokens = st.number_input('max_new_tokens', min_value=1, max_value=1024, value=128) - temperature = st.number_input('temperature', min_value=0.0, max_value=1.0, value=0.0) - drawing_mode = st.selectbox( - "Drawing tool:", ("rect", "point", "line", "circle"), - ) - drawing_mode = "transform" if st.checkbox("Move ROIs", False) else drawing_mode - stroke_width = st.slider("Stroke width: ", 1, 25, 3) - # bg_color = st.color_picker("Background color: ", "#eee", key="bg_color") - - # save_file = st.text_input("Save File", value="saved.jsonl") - # save_button = st.button(label='Save') - - # if save_button: - # if img_fn is None: - # st.warning("Please upload an image first!") - # else: - # conversations_to_save = [{'from': role, 'value': format_object_str(conv)} for (role, conv) in st.session_state['messages']] - # model_name = st.session_state['model_name'] - # save_dict = { - # 'image': img_fn, - # 'conversations': conversations_to_save, - # 'info': { - # 'model_name': model_name - # } - # } - - # save_image_path = os.path.join(chat_config['save_path'], 'images') - # os.makedirs(save_image_path, exist_ok=True) - - # img.save(os.path.join(save_image_path, img_fn)) - - # chat_save_path = os.path.join(chat_config['save_path'], save_file) - # with open(chat_save_path, 'a+') as fout: - # fout.write(json.dumps(save_dict) + '\n') - - # st.success('Save successfully!') - - with col_img: - canvas_result = st_canvas( - fill_color=st.session_state['label_color'] + "77", # Fixed fill color with some opacity - stroke_width=stroke_width, - stroke_color=st.session_state['label_color'] + "77", - background_color="#eee", - background_image=Image.open(image) if image else None, - update_streamlit=True, - width=width, - height=height, - drawing_mode=drawing_mode, - point_display_radius=3 if drawing_mode == 'point' else 0, - key="canvas" - ) - - if canvas_result.json_data is not None: - update_annotation_states(canvas_result.json_data, ratio, img.size) - - if st.session_state.get('submit_btn', False): - send_one_message(st.session_state['input_message'], max_new_tokens=max_new_tokens, temperature=temperature) - st.session_state['input_message'] = "" - - with input_area: - col3, col4, col5 = st.columns([5, 1, 1]) - - with col3: - message = st.text_input('User', key="input_message") - - with col4: - submit_btn = st.button(label='submit', key='submit_btn') - - components.html( - """ - - """, - height=0, - width=0, - ) - - with col5: - clear_btn = st.button(label='clear', on_click=clear_messages) - - - objects = get_objects() - - if len(objects): - bbox_cols = st.columns([1 for _ in range(len(objects))]) - - def on_bbox_button_click(str): - def f(): - st.session_state['input_message'] += str - return f - - for i, (obj, bbox_col) in enumerate(zip(objects, bbox_cols)): - with bbox_col: - st.button(label=f'Region-{i}', on_click=on_bbox_button_click(f'[REGION-{i}]')) - # css += f"#root > div:nth-child(1) > div.withScreencast > div > div > div > section.main.css-uf99v8.e1g8pov65 > div.block-container.css-z5fcl4.e1g8pov64 > div:nth-child(1) > div > div.css-ocqkz7.esravye3 > div:nth-child(2) > div:nth-child(1) > div > div:nth-child(1) > div > div:nth-child(1) > div > div:nth-child(2) > div:nth-child({i+1}) > div:nth-child(1) > div > div > div > button 
{{background-color:{obj['stroke'][:7]}; bottom: 0px}} \n" + '\n' - css += f"#root > div:nth-child(1) > div.withScreencast > div > div > div > section.main.css-uf99v8.ea3mdgi5 > div.block-container.css-awvpbp.ea3mdgi4 > div:nth-child(1) > div > div.css-ocqkz7.e1f1d6gn3 > div:nth-child(2) > div:nth-child(1) > div > div:nth-child(1) > div > div:nth-child(1) > div > div:nth-child(3) > div:nth-child({i+1}) > div:nth-child(1) > div > div > div > button {{background-color:{obj['stroke'][:7]}; bottom: 0px}} \n" + '\n' - # css += f"#root > div:nth-child(1) > div.withScreencast > div > div > div > section.main.css-uf99v8.ea3mdgi5 > div.block-container.css-awvpbp.ea3mdgi4 > div:nth-child(1) > div > div.css-ocqkz7.e1f1d6gn3 > div:nth-child(2) > div:nth-child(1) > div > div:nth-child(1) > div > div:nth-child(1) > div > div:nth-child(2) > div:nth-child({i+1}) > div:nth-child(1) > div > div > div > button {{background-color:{obj['stroke'][:7]}; bottom: 0px}} \n" + '\n' - - for i, (role, msg) in enumerate(st.session_state['messages']): - with chatbox: - st_message(msg.lstrip('\n'), is_user=(role==default_conversation.roles[0]), key=f'{i}-{msg}') - -st.markdown("", unsafe_allow_html=True) - -st.markdown( -""" --------------------- -### User Manual - -- **Step 1.** Upload an image here -""") - -st.image("figures/upload_image.png") - -st.markdown( -""" -- **Step 2.** (Optional) You can draw bounding boxes on the image. Each box you draw creates a corresponding button of the same color. -""") - -st.image("figures/bbox.png", width=512) - -st.markdown( -""" -- **Step 3.** Ask questions. Insert region tokens in the question by clicking on the `Region-i` button. For example: - -> What color is the dog in [REGION-0]? - -> What is the relationship between the dog in [REGION-0] and the dog in [REGION-1]? - -**Note**: This demo is in its experimental stage, and we are actively working on improvements. 
- -""") \ No newline at end of file diff --git a/spaces/PaddlePaddle/resnext101_32x16d_wsl/README.md b/spaces/PaddlePaddle/resnext101_32x16d_wsl/README.md deleted file mode 100644 index b2d7f59e4797ad8ef5ff596d95fb0b3175dabfdc..0000000000000000000000000000000000000000 --- a/spaces/PaddlePaddle/resnext101_32x16d_wsl/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Resnext101_32x16d_wsl -emoji: 😻 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/self-references.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/self-references.go deleted file mode 100644 index 98bb1c1b69ef6b8e2527feacd43ecba1c5a28a02..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/self-references.go and /dev/null differ diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/amp.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/amp.py deleted file mode 100644 index ed97eb5b413a7f8375c3faa2135b0e3f3add230a..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/amp.py +++ /dev/null @@ -1,14 +0,0 @@ -from contextlib import contextmanager - -@contextmanager -def nullcontext(enter_result=None, **kwargs): - yield enter_result - -try: - from torch.cuda.amp import autocast, GradScaler, custom_fwd, custom_bwd -except: - print('[Warning] Library for automatic mixed precision is not found, AMP is disabled!!') - GradScaler = nullcontext - autocast = nullcontext - custom_fwd = nullcontext - custom_bwd = nullcontext \ No newline at end of file diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/quantization/vq.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/quantization/vq.py deleted file mode 100644 index aa57bea59db95ddae35e0657f723ca3a29ee943b..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/quantization/vq.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import typing as tp - -import torch - -from .base import BaseQuantizer, QuantizedResult -from .core_vq import ResidualVectorQuantization - - -class ResidualVectorQuantizer(BaseQuantizer): - """Residual Vector Quantizer. - - Args: - dimension (int): Dimension of the codebooks. - n_q (int): Number of residual vector quantizers used. - q_dropout (bool): Random quantizer drop out at train time. - bins (int): Codebook size. - decay (float): Decay for exponential moving average over the codebooks. - kmeans_init (bool): Whether to use kmeans to initialize the codebooks. - kmeans_iters (int): Number of iterations used for kmeans initialization. - threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes - that have an exponential moving average cluster size less than the specified threshold with - randomly selected vector from the current batch. - orthogonal_reg_weight (float): Orthogonal regularization weights. 
- orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes. - orthogonal_reg_max_codes (optional int): Maximum number of codes to consider. - for orthogonal regularization. - """ - def __init__( - self, - dimension: int = 256, - n_q: int = 8, - q_dropout: bool = False, - bins: int = 1024, - decay: float = 0.99, - kmeans_init: bool = True, - kmeans_iters: int = 10, - threshold_ema_dead_code: int = 2, - orthogonal_reg_weight: float = 0.0, - orthogonal_reg_active_codes_only: bool = False, - orthogonal_reg_max_codes: tp.Optional[int] = None, - ): - super().__init__() - self.max_n_q = n_q - self.n_q = n_q - self.q_dropout = q_dropout - self.dimension = dimension - self.bins = bins - self.decay = decay - self.kmeans_init = kmeans_init - self.kmeans_iters = kmeans_iters - self.threshold_ema_dead_code = threshold_ema_dead_code - self.orthogonal_reg_weight = orthogonal_reg_weight - self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only - self.orthogonal_reg_max_codes = orthogonal_reg_max_codes - self.vq = ResidualVectorQuantization( - dim=self.dimension, - codebook_size=self.bins, - num_quantizers=self.n_q, - decay=self.decay, - kmeans_init=self.kmeans_init, - kmeans_iters=self.kmeans_iters, - threshold_ema_dead_code=self.threshold_ema_dead_code, - orthogonal_reg_weight=self.orthogonal_reg_weight, - orthogonal_reg_active_codes_only=self.orthogonal_reg_active_codes_only, - orthogonal_reg_max_codes=self.orthogonal_reg_max_codes, - channels_last=False - ) - - def forward(self, x: torch.Tensor, frame_rate: int): - n_q = self.n_q - if self.training and self.q_dropout: - n_q = int(torch.randint(1, self.n_q + 1, (1,)).item()) - bw_per_q = math.log2(self.bins) * frame_rate / 1000 - quantized, codes, commit_loss = self.vq(x, n_q=n_q) - codes = codes.transpose(0, 1) - # codes is [B, K, T], with T frames, K nb of codebooks. - bw = torch.tensor(n_q * bw_per_q).to(x) - return QuantizedResult(quantized, codes, bw, penalty=torch.mean(commit_loss)) - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified frame rate at the given bandwidth. - The RVQ encode method sets the appropriate number of quantizer to use - and returns indices for each quantizer. - """ - n_q = self.n_q - codes = self.vq.encode(x, n_q=n_q) - codes = codes.transpose(0, 1) - # codes is [B, K, T], with T frames, K nb of codebooks. - return codes - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation.""" - # codes is [B, K, T], with T frames, K nb of codebooks, vq.decode expects [K, B, T]. 
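-        # Illustrative note (an assumption about the core_vq internals, not
-        # code from this file): residual decoding sums the per-stage codebook
-        # lookups, roughly
-        #   quantized = sum(codebook_k[codes[k]] for k in range(n_q))
-        # once the transpose below has put codes into the [K, B, T] layout.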
- codes = codes.transpose(0, 1) - quantized = self.vq.decode(codes) - return quantized - - @property - def total_codebooks(self): - return self.max_n_q - - @property - def num_codebooks(self): - return self.n_q - - def set_num_codebooks(self, n: int): - assert n > 0 and n <= self.max_n_q - self.n_q = n diff --git a/spaces/PunGrumpy/text-generation/README.md b/spaces/PunGrumpy/text-generation/README.md deleted file mode 100644 index cf6a8c1091f47d56b7dda8f7bb088cef8b963459..0000000000000000000000000000000000000000 --- a/spaces/PunGrumpy/text-generation/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Text Generation -emoji: 🐨 -colorFrom: red -colorTo: pink -sdk: docker -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/inject_securetransport.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/inject_securetransport.py deleted file mode 100644 index 276aa79bb81356cdca73af0a5851b448707784a4..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/inject_securetransport.py +++ /dev/null @@ -1,35 +0,0 @@ -"""A helper module that injects SecureTransport, on import. - -The import should be done as early as possible, to ensure all requests and -sessions (or whatever) are created after injecting SecureTransport. - -Note that we only do the injection on macOS, when the linked OpenSSL is too -old to handle TLSv1.2. -""" - -import sys - - -def inject_securetransport() -> None: - # Only relevant on macOS - if sys.platform != "darwin": - return - - try: - import ssl - except ImportError: - return - - # Checks for OpenSSL 1.0.1 - if ssl.OPENSSL_VERSION_NUMBER >= 0x1000100F: - return - - try: - from pip._vendor.urllib3.contrib import securetransport - except (ImportError, OSError): - return - - securetransport.inject_into_urllib3() - - -inject_securetransport() diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/utf8prober.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/utf8prober.py deleted file mode 100644 index 3aae09e863036b6185cf115047e441b15ea8c5e8..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/utf8prober.py +++ /dev/null @@ -1,80 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is mozilla.org code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. 
-# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from .charsetprober import CharSetProber -from .codingstatemachine import CodingStateMachine -from .enums import MachineState, ProbingState -from .mbcssm import UTF8_SM_MODEL - - -class UTF8Prober(CharSetProber): - ONE_CHAR_PROB = 0.5 - - def __init__(self): - super().__init__() - self.coding_sm = CodingStateMachine(UTF8_SM_MODEL) - self._num_mb_chars = None - self.reset() - - def reset(self): - super().reset() - self.coding_sm.reset() - self._num_mb_chars = 0 - - @property - def charset_name(self): - return "utf-8" - - @property - def language(self): - return "" - - def feed(self, byte_str): - for c in byte_str: - coding_state = self.coding_sm.next_state(c) - if coding_state == MachineState.ERROR: - self._state = ProbingState.NOT_ME - break - if coding_state == MachineState.ITS_ME: - self._state = ProbingState.FOUND_IT - break - if coding_state == MachineState.START: - if self.coding_sm.get_current_charlen() >= 2: - self._num_mb_chars += 1 - - if self.state == ProbingState.DETECTING: - if self.get_confidence() > self.SHORTCUT_THRESHOLD: - self._state = ProbingState.FOUND_IT - - return self.state - - def get_confidence(self): - unlike = 0.99 - if self._num_mb_chars < 6: - unlike *= self.ONE_CHAR_PROB**self._num_mb_chars - return 1.0 - unlike - return unlike diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/ema.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/ema.py deleted file mode 100644 index 15c7e68088f019802a59e7ae41cc1fe0c7f28f96..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/ema.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...parallel import is_module_wrapper -from ..hooks.hook import HOOKS, Hook - - -@HOOKS.register_module() -class EMAHook(Hook): - r"""Exponential Moving Average Hook. - - Use Exponential Moving Average on all parameters of model in training - process. All parameters have a ema backup, which update by the formula - as below. EMAHook takes priority over EvalHook and CheckpointSaverHook. - - .. math:: - - \text{Xema\_{t+1}} = (1 - \text{momentum}) \times - \text{Xema\_{t}} + \text{momentum} \times X_t - - Args: - momentum (float): The momentum used for updating ema parameter. - Defaults to 0.0002. - interval (int): Update ema parameter every interval iteration. - Defaults to 1. - warm_up (int): During first warm_up steps, we may use smaller momentum - to update ema parameters more slowly. Defaults to 100. - resume_from (str): The checkpoint path. Defaults to None. - """ - - def __init__(self, - momentum=0.0002, - interval=1, - warm_up=100, - resume_from=None): - assert isinstance(interval, int) and interval > 0 - self.warm_up = warm_up - self.interval = interval - assert momentum > 0 and momentum < 1 - self.momentum = momentum**interval - self.checkpoint = resume_from - - def before_run(self, runner): - """To resume model with it's ema parameters more friendly. 
- - Register ema parameter as ``named_buffer`` to model - """ - model = runner.model - if is_module_wrapper(model): - model = model.module - self.param_ema_buffer = {} - self.model_parameters = dict(model.named_parameters(recurse=True)) - for name, value in self.model_parameters.items(): - # "." is not allowed in module's buffer name - buffer_name = f"ema_{name.replace('.', '_')}" - self.param_ema_buffer[name] = buffer_name - model.register_buffer(buffer_name, value.data.clone()) - self.model_buffers = dict(model.named_buffers(recurse=True)) - if self.checkpoint is not None: - runner.resume(self.checkpoint) - - def after_train_iter(self, runner): - """Update ema parameter every self.interval iterations.""" - curr_step = runner.iter - # We warm up the momentum considering the instability at beginning - momentum = min(self.momentum, - (1 + curr_step) / (self.warm_up + curr_step)) - if curr_step % self.interval != 0: - return - for name, parameter in self.model_parameters.items(): - buffer_name = self.param_ema_buffer[name] - buffer_parameter = self.model_buffers[buffer_name] - buffer_parameter.mul_(1 - momentum).add_(momentum, parameter.data) - - def after_train_epoch(self, runner): - """We load parameter values from ema backup to model before the - EvalHook.""" - self._swap_ema_parameters() - - def before_train_epoch(self, runner): - """We recover model's parameter from ema backup after last epoch's - EvalHook.""" - self._swap_ema_parameters() - - def _swap_ema_parameters(self): - """Swap the parameter of model with parameter in ema_buffer.""" - for name, value in self.model_parameters.items(): - temp = value.data.clone() - ema_buffer = self.model_buffers[self.param_ema_buffer[name]] - value.data.copy_(ema_buffer.data) - ema_buffer.data.copy_(temp) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/__init__.py deleted file mode 100644 index f004dd95d97df16167f932587b3ce73b05b04a37..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/__init__.py +++ /dev/null @@ -1,41 +0,0 @@ -from .anchor_free_head import AnchorFreeHead -from .anchor_head import AnchorHead -from .atss_head import ATSSHead -from .cascade_rpn_head import CascadeRPNHead, StageCascadeRPNHead -from .centripetal_head import CentripetalHead -from .corner_head import CornerHead -from .embedding_rpn_head import EmbeddingRPNHead -from .fcos_head import FCOSHead -from .fovea_head import FoveaHead -from .free_anchor_retina_head import FreeAnchorRetinaHead -from .fsaf_head import FSAFHead -from .ga_retina_head import GARetinaHead -from .ga_rpn_head import GARPNHead -from .gfl_head import GFLHead -from .guided_anchor_head import FeatureAdaption, GuidedAnchorHead -from .ld_head import LDHead -from .nasfcos_head import NASFCOSHead -from .paa_head import PAAHead -from .pisa_retinanet_head import PISARetinaHead -from .pisa_ssd_head import PISASSDHead -from .reppoints_head import RepPointsHead -from .retina_head import RetinaHead -from .retina_sepbn_head import RetinaSepBNHead -from .rpn_head import RPNHead -from .sabl_retina_head import SABLRetinaHead -from .ssd_head import SSDHead -from .transformer_head import TransformerHead -from .vfnet_head import VFNetHead -from .yolact_head import YOLACTHead, YOLACTProtonet, YOLACTSegmHead -from .yolo_head import YOLOV3Head - -__all__ = [ - 'AnchorFreeHead', 'AnchorHead', 'GuidedAnchorHead', 
'FeatureAdaption', - 'RPNHead', 'GARPNHead', 'RetinaHead', 'RetinaSepBNHead', 'GARetinaHead', - 'SSDHead', 'FCOSHead', 'RepPointsHead', 'FoveaHead', - 'FreeAnchorRetinaHead', 'ATSSHead', 'FSAFHead', 'NASFCOSHead', - 'PISARetinaHead', 'PISASSDHead', 'GFLHead', 'CornerHead', 'YOLACTHead', - 'YOLACTSegmHead', 'YOLACTProtonet', 'YOLOV3Head', 'PAAHead', - 'SABLRetinaHead', 'CentripetalHead', 'VFNetHead', 'TransformerHead', - 'StageCascadeRPNHead', 'CascadeRPNHead', 'EmbeddingRPNHead', 'LDHead' -] diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/necks/pafpn.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/necks/pafpn.py deleted file mode 100644 index d7c0b50f29e882aacb5158b33ead3d4566d0ce0b..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/necks/pafpn.py +++ /dev/null @@ -1,142 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import auto_fp16 - -from ..builder import NECKS -from .fpn import FPN - - -@NECKS.register_module() -class PAFPN(FPN): - """Path Aggregation Network for Instance Segmentation. - - This is an implementation of the `PAFPN in Path Aggregation Network - `_. - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool): Whether to add conv layers on top of the - original feature maps. Default: False. - extra_convs_on_inputs (bool): Whether to apply extra conv on - the original feature from the backbone. Default: False. - relu_before_extra_convs (bool): Whether to apply relu before the extra - conv. Default: False. - no_norm_on_lateral (bool): Whether to apply norm on lateral. - Default: False. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (str): Config dict for activation layer in ConvModule. - Default: None. 
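-
-    Example (illustrative, mirroring the FPN docstring convention; the
-    channel and scale numbers are assumptions, chosen so the strided
-    bottom-up convolutions divide evenly, not values from the original file):
-        >>> import torch
-        >>> in_channels = [2, 3, 5, 7]
-        >>> scales = [64, 32, 16, 8]
-        >>> inputs = [torch.rand(1, c, s, s)
-        ...           for c, s in zip(in_channels, scales)]
-        >>> self = PAFPN(in_channels, 11, len(in_channels)).eval()
-        >>> outputs = self.forward(inputs)
-        >>> for i in range(len(outputs)):
-        ...     print(f'outputs[{i}].shape = {outputs[i].shape}')
-        outputs[0].shape = torch.Size([1, 11, 64, 64])
-        outputs[1].shape = torch.Size([1, 11, 32, 32])
-        outputs[2].shape = torch.Size([1, 11, 16, 16])
-        outputs[3].shape = torch.Size([1, 11, 8, 8])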
- """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=0, - end_level=-1, - add_extra_convs=False, - extra_convs_on_inputs=True, - relu_before_extra_convs=False, - no_norm_on_lateral=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None): - super(PAFPN, - self).__init__(in_channels, out_channels, num_outs, start_level, - end_level, add_extra_convs, extra_convs_on_inputs, - relu_before_extra_convs, no_norm_on_lateral, - conv_cfg, norm_cfg, act_cfg) - # add extra bottom up pathway - self.downsample_convs = nn.ModuleList() - self.pafpn_convs = nn.ModuleList() - for i in range(self.start_level + 1, self.backbone_end_level): - d_conv = ConvModule( - out_channels, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - pafpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - self.downsample_convs.append(d_conv) - self.pafpn_convs.append(pafpn_conv) - - @auto_fp16() - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == len(self.in_channels) - - # build laterals - laterals = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - - # build top-down path - used_backbone_levels = len(laterals) - for i in range(used_backbone_levels - 1, 0, -1): - prev_shape = laterals[i - 1].shape[2:] - laterals[i - 1] += F.interpolate( - laterals[i], size=prev_shape, mode='nearest') - - # build outputs - # part 1: from original levels - inter_outs = [ - self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels) - ] - - # part 2: add bottom-up path - for i in range(0, used_backbone_levels - 1): - inter_outs[i + 1] += self.downsample_convs[i](inter_outs[i]) - - outs = [] - outs.append(inter_outs[0]) - outs.extend([ - self.pafpn_convs[i - 1](inter_outs[i]) - for i in range(1, used_backbone_levels) - ]) - - # part 3: add extra levels - if self.num_outs > len(outs): - # use max pool to get more levels on top of outputs - # (e.g., Faster R-CNN, Mask R-CNN) - if not self.add_extra_convs: - for i in range(self.num_outs - used_backbone_levels): - outs.append(F.max_pool2d(outs[-1], 1, stride=2)) - # add conv layers on top of original feature maps (RetinaNet) - else: - if self.add_extra_convs == 'on_input': - orig = inputs[self.backbone_end_level - 1] - outs.append(self.fpn_convs[used_backbone_levels](orig)) - elif self.add_extra_convs == 'on_lateral': - outs.append(self.fpn_convs[used_backbone_levels]( - laterals[-1])) - elif self.add_extra_convs == 'on_output': - outs.append(self.fpn_convs[used_backbone_levels](outs[-1])) - else: - raise NotImplementedError - for i in range(used_backbone_levels + 1, self.num_outs): - if self.relu_before_extra_convs: - outs.append(self.fpn_convs[i](F.relu(outs[-1]))) - else: - outs.append(self.fpn_convs[i](outs[-1])) - return tuple(outs) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/backbones/uniformer.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/backbones/uniformer.py deleted file mode 100644 index 0c4bb88e4c928540cca9ab609988b916520f5b7a..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/backbones/uniformer.py +++ /dev/null @@ -1,422 +0,0 @@ -# -------------------------------------------------------- -# UniFormer -# Copyright (c) 2022 SenseTime X-Lab -# Licensed under The MIT License [see 
LICENSE for details] -# Written by Kunchang Li -# -------------------------------------------------------- - -from collections import OrderedDict -import math - -from functools import partial -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -import numpy as np -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from annotator.uniformer.mmcv_custom import load_checkpoint -from annotator.uniformer.mmseg.utils import get_root_logger -from ..builder import BACKBONES - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class CMlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Conv2d(in_features, hidden_features, 1) - self.act = act_layer() - self.fc2 = nn.Conv2d(hidden_features, out_features, 1) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class CBlock(nn.Module): - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = nn.BatchNorm2d(dim) - self.conv1 = nn.Conv2d(dim, dim, 1) - self.conv2 = nn.Conv2d(dim, dim, 1) - self.attn = nn.Conv2d(dim, dim, 5, padding=2, groups=dim) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = nn.BatchNorm2d(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = CMlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - x = x + self.drop_path(self.conv2(self.attn(self.conv1(self.norm1(x))))) - x = x + self.drop_path(self.mlp(self.norm2(x))) - return x - - -class Attention(nn.Module): - def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights - self.scale = qk_scale or head_dim ** -0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - B, N, C = x.shape - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SABlock(nn.Module): - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, - num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, - attn_drop=attn_drop, proj_drop=drop) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - B, N, H, W = x.shape - x = x.flatten(2).transpose(1, 2) - x = x + self.drop_path(self.attn(self.norm1(x))) - x = x + self.drop_path(self.mlp(self.norm2(x))) - x = x.transpose(1, 2).reshape(B, N, H, W) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class SABlock_Windows(nn.Module): - def __init__(self, dim, num_heads, window_size=14, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.window_size=window_size - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, - num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, - attn_drop=attn_drop, proj_drop=drop) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - x = x.permute(0, 2, 3, 1) - B, H, W, C = x.shape - shortcut = x - x = self.norm1(x) - - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - x_windows = window_partition(x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - x = x.permute(0, 3, 1, 2).reshape(B, C, H, W) - return x - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - """ - def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0]) - self.img_size = img_size - self.patch_size = patch_size - self.num_patches = num_patches - self.norm = nn.LayerNorm(embed_dim) - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x): - B, _, H, W = x.shape - x = self.proj(x) - B, _, H, W = x.shape - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous() - return x - - -@BACKBONES.register_module() -class UniFormer(nn.Module): - """ Vision Transformer - A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` - - https://arxiv.org/abs/2010.11929 - """ - def __init__(self, layers=[3, 4, 8, 3], img_size=224, in_chans=3, num_classes=80, embed_dim=[64, 128, 320, 512], - head_dim=64, mlp_ratio=4., qkv_bias=True, qk_scale=None, representation_size=None, - drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_layer=partial(nn.LayerNorm, eps=1e-6), - pretrained_path=None, use_checkpoint=False, checkpoint_num=[0, 0, 0, 0], - windows=False, hybrid=False, window_size=14): - """ - Args: - layer (list): number of block in each layer - img_size (int, tuple): input image size - in_chans (int): number of input channels - num_classes (int): number of classes for classification head - embed_dim (int): embedding dimension - head_dim (int): dimension of attention heads - mlp_ratio (int): ratio of mlp hidden dim to embedding dim - qkv_bias (bool): enable bias for qkv if True - qk_scale (float): override default qk scale of head_dim ** -0.5 if set - representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set - drop_rate (float): dropout rate - attn_drop_rate (float): attention dropout rate - drop_path_rate (float): stochastic depth rate - norm_layer (nn.Module): normalization layer - pretrained_path (str): path of pretrained model - use_checkpoint (bool): whether use checkpoint - checkpoint_num (list): 
index for using checkpoint in every stage - windows (bool): whether use window MHRA - hybrid (bool): whether use hybrid MHRA - window_size (int): size of window (>14) - """ - super().__init__() - self.num_classes = num_classes - self.use_checkpoint = use_checkpoint - self.checkpoint_num = checkpoint_num - self.windows = windows - print(f'Use Checkpoint: {self.use_checkpoint}') - print(f'Checkpoint Number: {self.checkpoint_num}') - self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models - norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6) - - self.patch_embed1 = PatchEmbed( - img_size=img_size, patch_size=4, in_chans=in_chans, embed_dim=embed_dim[0]) - self.patch_embed2 = PatchEmbed( - img_size=img_size // 4, patch_size=2, in_chans=embed_dim[0], embed_dim=embed_dim[1]) - self.patch_embed3 = PatchEmbed( - img_size=img_size // 8, patch_size=2, in_chans=embed_dim[1], embed_dim=embed_dim[2]) - self.patch_embed4 = PatchEmbed( - img_size=img_size // 16, patch_size=2, in_chans=embed_dim[2], embed_dim=embed_dim[3]) - - self.pos_drop = nn.Dropout(p=drop_rate) - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(layers))] # stochastic depth decay rule - num_heads = [dim // head_dim for dim in embed_dim] - self.blocks1 = nn.ModuleList([ - CBlock( - dim=embed_dim[0], num_heads=num_heads[0], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer) - for i in range(layers[0])]) - self.norm1=norm_layer(embed_dim[0]) - self.blocks2 = nn.ModuleList([ - CBlock( - dim=embed_dim[1], num_heads=num_heads[1], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]], norm_layer=norm_layer) - for i in range(layers[1])]) - self.norm2 = norm_layer(embed_dim[1]) - if self.windows: - print('Use local window for all blocks in stage3') - self.blocks3 = nn.ModuleList([ - SABlock_Windows( - dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer) - for i in range(layers[2])]) - elif hybrid: - print('Use hybrid window for blocks in stage3') - block3 = [] - for i in range(layers[2]): - if (i + 1) % 4 == 0: - block3.append(SABlock( - dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)) - else: - block3.append(SABlock_Windows( - dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)) - self.blocks3 = nn.ModuleList(block3) - else: - print('Use global window for all blocks in stage3') - self.blocks3 = nn.ModuleList([ - SABlock( - dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer) - for i in range(layers[2])]) - self.norm3 = norm_layer(embed_dim[2]) - self.blocks4 = nn.ModuleList([ - SABlock( - dim=embed_dim[3], num_heads=num_heads[3], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, 
drop_path=dpr[i+layers[0]+layers[1]+layers[2]], norm_layer=norm_layer) - for i in range(layers[3])]) - self.norm4 = norm_layer(embed_dim[3]) - - # Representation layer - if representation_size: - self.num_features = representation_size - self.pre_logits = nn.Sequential(OrderedDict([ - ('fc', nn.Linear(embed_dim, representation_size)), - ('act', nn.Tanh()) - ])) - else: - self.pre_logits = nn.Identity() - - self.apply(self._init_weights) - self.init_weights(pretrained=pretrained_path) - - def init_weights(self, pretrained): - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, map_location='cpu', strict=False, logger=logger) - print(f'Load pretrained model from {pretrained}') - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {'pos_embed', 'cls_token'} - - def get_classifier(self): - return self.head - - def reset_classifier(self, num_classes, global_pool=''): - self.num_classes = num_classes - self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity() - - def forward_features(self, x): - out = [] - x = self.patch_embed1(x) - x = self.pos_drop(x) - for i, blk in enumerate(self.blocks1): - if self.use_checkpoint and i < self.checkpoint_num[0]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm1(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed2(x) - for i, blk in enumerate(self.blocks2): - if self.use_checkpoint and i < self.checkpoint_num[1]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm2(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed3(x) - for i, blk in enumerate(self.blocks3): - if self.use_checkpoint and i < self.checkpoint_num[2]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm3(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed4(x) - for i, blk in enumerate(self.blocks4): - if self.use_checkpoint and i < self.checkpoint_num[3]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm4(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - return tuple(out) - - def forward(self, x): - x = self.forward_features(x) - return x diff --git a/spaces/Sa-m/Neural-Style-Transfer-Image-Stylization/README.md b/spaces/Sa-m/Neural-Style-Transfer-Image-Stylization/README.md deleted file mode 100644 index e49db46d27d14c1512535a66844b7f44667fef13..0000000000000000000000000000000000000000 --- a/spaces/Sa-m/Neural-Style-Transfer-Image-Stylization/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Neural Style Transfer Image Stylization -emoji: 🌍 -colorFrom: red -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Salesforce/EDICT/my_diffusers/commands/env.py b/spaces/Salesforce/EDICT/my_diffusers/commands/env.py deleted file mode 100644 index 81a878bff6688d3c510b53c60ac9d0e51e4aebcc..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_diffusers/commands/env.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright 2022 
The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import platform -from argparse import ArgumentParser - -import huggingface_hub - -from .. import __version__ as version -from ..utils import is_torch_available, is_transformers_available -from . import BaseDiffusersCLICommand - - -def info_command_factory(_): - return EnvironmentCommand() - - -class EnvironmentCommand(BaseDiffusersCLICommand): - @staticmethod - def register_subcommand(parser: ArgumentParser): - download_parser = parser.add_parser("env") - download_parser.set_defaults(func=info_command_factory) - - def run(self): - hub_version = huggingface_hub.__version__ - - pt_version = "not installed" - pt_cuda_available = "NA" - if is_torch_available(): - import torch - - pt_version = torch.__version__ - pt_cuda_available = torch.cuda.is_available() - - transformers_version = "not installed" - if is_transformers_available: - import transformers - - transformers_version = transformers.__version__ - - info = { - "`diffusers` version": version, - "Platform": platform.platform(), - "Python version": platform.python_version(), - "PyTorch version (GPU?)": f"{pt_version} ({pt_cuda_available})", - "Huggingface_hub version": hub_version, - "Transformers version": transformers_version, - "Using GPU in script?": "", - "Using distributed or parallel set-up in script?": "", - } - - print("\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\n") - print(self.format_dict(info)) - - return info - - @staticmethod - def format_dict(d): - return "\n".join([f"- {prop}: {val}" for prop, val in d.items()]) + "\n" diff --git a/spaces/Sandiago21/speech-to-speech-translation-german/app.py b/spaces/Sandiago21/speech-to-speech-translation-german/app.py deleted file mode 100644 index e0c7bc8ac90450eeea216bb5a3333ffe10be347c..0000000000000000000000000000000000000000 --- a/spaces/Sandiago21/speech-to-speech-translation-german/app.py +++ /dev/null @@ -1,142 +0,0 @@ -import gradio as gr -import numpy as np -import torch -from datasets import load_dataset -from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor, pipeline - - -device = "cuda:0" if torch.cuda.is_available() else "cpu" - -# load speech translation checkpoint -asr_pipe = pipeline("automatic-speech-recognition", model="openai/whisper-large-v2", device=device) - -# load text-to-speech checkpoint and speaker embeddings -model_id = "Sandiago21/speecht5_finetuned_mozilla_foundation_common_voice_13_german" # update with your model id -# pipe = pipeline("automatic-speech-recognition", model=model_id) -model = SpeechT5ForTextToSpeech.from_pretrained(model_id) -processor = SpeechT5Processor.from_pretrained(model_id) -vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan") -embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation") -speaker_embeddings = torch.tensor(embeddings_dataset[7440]["xvector"]).unsqueeze(0) - -replacements = [ - ("Ä", "E"), - ("Æ", "E"), - ("Ç", 
"C"), - ("É", "E"), - ("Í", "I"), - ("Ó", "O"), - ("Ö", "E"), - ("Ü", "Y"), - ("ß", "S"), - ("à", "a"), - ("á", "a"), - ("ã", "a"), - ("ä", "e"), - ("å", "a"), - ("ë", "e"), - ("í", "i"), - ("ï", "i"), - ("ð", "o"), - ("ñ", "n"), - ("ò", "o"), - ("ó", "o"), - ("ô", "o"), - ("ö", "u"), - ("ú", "u"), - ("ü", "y"), - ("ý", "y"), - ("Ā", "A"), - ("ā", "a"), - ("ă", "a"), - ("ą", "a"), - ("ć", "c"), - ("Č", "C"), - ("č", "c"), - ("ď", "d"), - ("Đ", "D"), - ("ę", "e"), - ("ě", "e"), - ("ğ", "g"), - ("İ", "I"), - ("О", "O"), - ("Ł", "L"), - ("ń", "n"), - ("ň", "n"), - ("Ō", "O"), - ("ō", "o"), - ("ő", "o"), - ("ř", "r"), - ("Ś", "S"), - ("ś", "s"), - ("Ş", "S"), - ("ş", "s"), - ("Š", "S"), - ("š", "s"), - ("ū", "u"), - ("ź", "z"), - ("Ż", "Z"), - ("Ž", "Z"), - ("ǐ", "i"), - ("ǐ", "i"), - ("ș", "s"), - ("ț", "t"), -] - - -def cleanup_text(text): - for src, dst in replacements: - text = text.replace(src, dst) - return text - - -def transcribe_to_german(audio): - outputs = asr_pipe(audio, max_new_tokens=256, generate_kwargs={"task": "transcribe", "language": "german"}) - return outputs["text"] - - -def synthesise_from_german(text): - text = cleanup_text(text) - inputs = processor(text=text, return_tensors="pt") - speech = model.generate_speech(inputs["input_ids"].to(device), speaker_embeddings.to(device), vocoder=vocoder) - return speech.cpu() - - -def speech_to_speech_translation(audio): - translated_text = transcribe_to_german(audio) - synthesised_speech = synthesise_from_german(translated_text) - synthesised_speech = (synthesised_speech.numpy() * 32767).astype(np.int16) - return ((16000, synthesised_speech), translated_text) - - -title = "Cascaded STST" -description = """ -Demo for cascaded speech-to-speech translation (STST), mapping from source speech in any language to target speech in German. 
Demo uses OpenAI's [Whisper Large v2](https://huggingface.co/openai/whisper-large-v2) model for speech translation, and [Sandiago21/speecht5_finetuned_mozilla_foundation_common_voice_13_german](https://huggingface.co/Sandiago21/speecht5_finetuned_mozilla_foundation_common_voice_13_german) checkpoint for text-to-speech, which is based on Microsoft's -[SpeechT5 TTS](https://huggingface.co/microsoft/speecht5_tts) model for text-to-speech, fine-tuned in German Audio dataset: -![Cascaded STST](https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/s2st_cascaded.png "Diagram of cascaded speech to speech translation") -""" - -demo = gr.Blocks() - -mic_translate = gr.Interface( - fn=speech_to_speech_translation, - inputs=gr.Audio(source="microphone", type="filepath"), - outputs=[gr.Audio(label="Generated Speech", type="numpy"), gr.outputs.Textbox()], - title=title, - description=description, -) - -file_translate = gr.Interface( - fn=speech_to_speech_translation, - inputs=gr.Audio(source="upload", type="filepath"), - outputs=[gr.Audio(label="Generated Speech", type="numpy"), gr.outputs.Textbox()], - examples=[["./example.wav"]], - title=title, - description=description, -) - -with demo: - gr.TabbedInterface([mic_translate, file_translate], ["Microphone", "Audio File"]) - -demo.launch() - diff --git a/spaces/Sapiensia/MakerDiffusion/README.md b/spaces/Sapiensia/MakerDiffusion/README.md deleted file mode 100644 index 73a674b4248e0f183def5706750b386cbc39e86b..0000000000000000000000000000000000000000 --- a/spaces/Sapiensia/MakerDiffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Basilisk-AI Maker Diffusion V-4.0 -emoji: 👁 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/mycotoxicosis.md b/spaces/SarthakSidhant/Go-Cattle/diseases/mycotoxicosis.md deleted file mode 100644 index 72da57ca6e45604280b8adddcd265749b89d543e..0000000000000000000000000000000000000000 --- a/spaces/SarthakSidhant/Go-Cattle/diseases/mycotoxicosis.md +++ /dev/null @@ -1,36 +0,0 @@ -## Mycotoxicosis - -**Information:** Mycotoxicosis is a disease caused by the consumption of feed or forage contaminated with mycotoxins. Mycotoxins are poisonous substances produced by fungi, which can grow on a variety of crops, including corn, wheat, and hay. - -**Symptoms:** - -* The symptoms of mycotoxicosis can vary depending on the type of mycotoxin ingested, the amount ingested, and the animal's individual susceptibility. Some common symptoms include: - * Loss of appetite - * Weight loss - * Diarrhea - * Vomiting - * Jaundice - * Impaired reproduction - * Death - -**Remedies:** - -* There is no specific treatment for mycotoxicosis. Treatment is usually supportive and may include: - * Administering activated charcoal to absorb the mycotoxin - * Providing fluids and electrolytes - * Treating other underlying conditions - -**Causes:** - -* Mycotoxicosis is caused by the consumption of feed or forage contaminated with mycotoxins. Mycotoxins are produced by fungi, which can grow on a variety of crops, including corn, wheat, and hay. -* Mycotoxins can be produced in the field, during storage, or during processing of feed and forage. -* The risk of mycotoxicosis is increased in warm, humid conditions. 
- -**Prevention:** - -* The best way to prevent mycotoxicosis is to: - * Feed cattle a balanced diet - * Store feed and forage properly - * Test feed and forage for mycotoxins - * Use mycotoxin binders to reduce the absorption of mycotoxins - diff --git a/spaces/ServerX/PorcoDiaz/train/data_utils.py b/spaces/ServerX/PorcoDiaz/train/data_utils.py deleted file mode 100644 index 71c0eff1815469a52399dc90a093a2f8a29223eb..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/train/data_utils.py +++ /dev/null @@ -1,512 +0,0 @@ -import os, traceback -import numpy as np -import torch -import torch.utils.data - -from mel_processing import spectrogram_torch -from utils import load_wav_to_torch, load_filepaths_and_text - - -class TextAudioLoaderMultiNSFsid(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 5000) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text, pitch, pitchf, dv in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text, pitch, pitchf, dv]) - lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - file = audiopath_and_text[0] - phone = audiopath_and_text[1] - pitch = audiopath_and_text[2] - pitchf = audiopath_and_text[3] - dv = audiopath_and_text[4] - - phone, pitch, pitchf = self.get_labels(phone, pitch, pitchf) - spec, wav = self.get_audio(file) - dv = self.get_sid(dv) - - len_phone = phone.size()[0] - len_spec = spec.size()[-1] - # print(123,phone.shape,pitch.shape,spec.shape) - if len_phone != len_spec: - len_min = min(len_phone, len_spec) - # amor - len_wav = len_min * self.hop_length - - spec = spec[:, :len_min] - wav = wav[:, :len_wav] - - phone = phone[:len_min, :] - pitch = pitch[:len_min] - pitchf = pitchf[:len_min] - - return (spec, wav, phone, pitch, pitchf, dv) - - def get_labels(self, phone, pitch, pitchf): - phone = np.load(phone) - phone = np.repeat(phone, 2, axis=0) - pitch = np.load(pitch) - pitchf = np.load(pitchf) - n_num = min(phone.shape[0], 900) # DistributedBucketSampler - # print(234,phone.shape,pitch.shape) - phone = phone[:n_num, :] - pitch = pitch[:n_num] - pitchf = pitchf[:n_num] - phone = torch.FloatTensor(phone) - pitch = torch.LongTensor(pitch) - pitchf = torch.FloatTensor(pitchf) - return phone, pitch, pitchf - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if 
sampling_rate != self.sampling_rate: - raise ValueError( - "{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate - ) - ) - audio_norm = audio - # audio_norm = audio / self.max_wav_value - # audio_norm = audio / np.abs(audio).max() - - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - try: - spec = torch.load(spec_filename) - except: - print(spec_filename, traceback.format_exc()) - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - else: - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - return spec, audio_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollateMultiNSFsid: - """Zero-pads model inputs and targets""" - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True - ) - - max_spec_len = max([x[0].size(1) for x in batch]) - max_wave_len = max([x[1].size(1) for x in batch]) - spec_lengths = torch.LongTensor(len(batch)) - wave_lengths = torch.LongTensor(len(batch)) - spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len) - wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len) - spec_padded.zero_() - wave_padded.zero_() - - max_phone_len = max([x[2].size(0) for x in batch]) - phone_lengths = torch.LongTensor(len(batch)) - phone_padded = torch.FloatTensor( - len(batch), max_phone_len, batch[0][2].shape[1] - ) # (spec, wav, phone, pitch) - pitch_padded = torch.LongTensor(len(batch), max_phone_len) - pitchf_padded = torch.FloatTensor(len(batch), max_phone_len) - phone_padded.zero_() - pitch_padded.zero_() - pitchf_padded.zero_() - # dv = torch.FloatTensor(len(batch), 256)#gin=256 - sid = torch.LongTensor(len(batch)) - - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - spec = row[0] - spec_padded[i, :, : spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wave = row[1] - wave_padded[i, :, : wave.size(1)] = wave - wave_lengths[i] = wave.size(1) - - phone = row[2] - phone_padded[i, : phone.size(0), :] = phone - phone_lengths[i] = phone.size(0) - - pitch = row[3] - pitch_padded[i, : pitch.size(0)] = pitch - pitchf = row[4] - pitchf_padded[i, : pitchf.size(0)] = pitchf - - # dv[i] = row[5] - sid[i] = row[5] - - return ( - phone_padded, - phone_lengths, - pitch_padded, - pitchf_padded, - spec_padded, - spec_lengths, - wave_padded, - wave_lengths, - # dv - sid, - ) - - -class TextAudioLoader(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
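-    4) caches each computed spectrogram next to its wav file as .spec.pt for reuse.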
- """ - - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 5000) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text, dv in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text, dv]) - lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - file = audiopath_and_text[0] - phone = audiopath_and_text[1] - dv = audiopath_and_text[2] - - phone = self.get_labels(phone) - spec, wav = self.get_audio(file) - dv = self.get_sid(dv) - - len_phone = phone.size()[0] - len_spec = spec.size()[-1] - if len_phone != len_spec: - len_min = min(len_phone, len_spec) - len_wav = len_min * self.hop_length - spec = spec[:, :len_min] - wav = wav[:, :len_wav] - phone = phone[:len_min, :] - return (spec, wav, phone, dv) - - def get_labels(self, phone): - phone = np.load(phone) - phone = np.repeat(phone, 2, axis=0) - n_num = min(phone.shape[0], 900) # DistributedBucketSampler - phone = phone[:n_num, :] - phone = torch.FloatTensor(phone) - return phone - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError( - "{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate - ) - ) - audio_norm = audio - # audio_norm = audio / self.max_wav_value - # audio_norm = audio / np.abs(audio).max() - - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - try: - spec = torch.load(spec_filename) - except: - print(spec_filename, traceback.format_exc()) - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - else: - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - return spec, audio_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollate: - """Zero-pads model inputs and targets""" - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: 
[text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True - ) - - max_spec_len = max([x[0].size(1) for x in batch]) - max_wave_len = max([x[1].size(1) for x in batch]) - spec_lengths = torch.LongTensor(len(batch)) - wave_lengths = torch.LongTensor(len(batch)) - spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len) - wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len) - spec_padded.zero_() - wave_padded.zero_() - - max_phone_len = max([x[2].size(0) for x in batch]) - phone_lengths = torch.LongTensor(len(batch)) - phone_padded = torch.FloatTensor( - len(batch), max_phone_len, batch[0][2].shape[1] - ) - phone_padded.zero_() - sid = torch.LongTensor(len(batch)) - - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - spec = row[0] - spec_padded[i, :, : spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wave = row[1] - wave_padded[i, :, : wave.size(1)] = wave - wave_lengths[i] = wave.size(1) - - phone = row[2] - phone_padded[i, : phone.size(0), :] = phone - phone_lengths[i] = phone.size(0) - - sid[i] = row[3] - - return ( - phone_padded, - phone_lengths, - spec_padded, - spec_lengths, - wave_padded, - wave_lengths, - sid, - ) - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. 
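-
-    Ex) boundaries = [32, 64, 128] -> a sample of length 50 lands in the
-    (32, 64] bucket, while samples of length <= 32 or > 128 are dropped.
-
-    Usage sketch (illustrative; the dataset and collate names here are
-    assumptions, but the constructor signature matches this class):
-        sampler = DistributedBucketSampler(
-            train_dataset, batch_size=4, boundaries=[100, 200, 300, 400],
-            num_replicas=world_size, rank=rank, shuffle=True)
-        loader = torch.utils.data.DataLoader(
-            train_dataset, batch_sampler=sampler, collate_fn=TextAudioCollate())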
- """ - - def __init__( - self, - dataset, - batch_size, - boundaries, - num_replicas=None, - rank=None, - shuffle=True, - ): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, -1, -1): # - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = ( - total_batch_size - (len_bucket % total_batch_size) - ) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ( - ids_bucket - + ids_bucket * (rem // len_bucket) - + ids_bucket[: (rem % len_bucket)] - ) - - # subsample - ids_bucket = ids_bucket[self.rank :: self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [ - bucket[idx] - for idx in ids_bucket[ - j * self.batch_size : (j + 1) * self.batch_size - ] - ] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/SkyYeXianer/vits-uma-genshin-honkai/modules.py b/spaces/SkyYeXianer/vits-uma-genshin-honkai/modules.py deleted file mode 100644 index 56ea4145eddf19dd330a3a41ab0183efc1686d83..0000000000000000000000000000000000000000 --- a/spaces/SkyYeXianer/vits-uma-genshin-honkai/modules.py +++ /dev/null @@ -1,388 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - 
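-# LRELU_SLOPE is the negative slope passed to F.leaky_relu in the ResBlocks below.
-# Illustrative sketch of the channels-first LayerNorm defined next (shapes assumed):
-#   ln = LayerNorm(192)
-#   y = ln(torch.randn(4, 192, 100))  # [batch, channels, time] -> same shape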
-
-
-class LayerNorm(nn.Module):
-  def __init__(self, channels, eps=1e-5):
-    super().__init__()
-    self.channels = channels
-    self.eps = eps
-
-    self.gamma = nn.Parameter(torch.ones(channels))
-    self.beta = nn.Parameter(torch.zeros(channels))
-
-  def forward(self, x):
-    x = x.transpose(1, -1)
-    x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
-    return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
-  def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
-    super().__init__()
-    self.in_channels = in_channels
-    self.hidden_channels = hidden_channels
-    self.out_channels = out_channels
-    self.kernel_size = kernel_size
-    self.n_layers = n_layers
-    self.p_dropout = p_dropout
-    assert n_layers > 1, "Number of layers should be larger than 1."
-
-    self.conv_layers = nn.ModuleList()
-    self.norm_layers = nn.ModuleList()
-    self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
-    self.norm_layers.append(LayerNorm(hidden_channels))
-    self.relu_drop = nn.Sequential(
-        nn.ReLU(),
-        nn.Dropout(p_dropout))
-    for _ in range(n_layers-1):
-      self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
-      self.norm_layers.append(LayerNorm(hidden_channels))
-    self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-    self.proj.weight.data.zero_()
-    self.proj.bias.data.zero_()
-
-  def forward(self, x, x_mask):
-    x_org = x
-    for i in range(self.n_layers):
-      x = self.conv_layers[i](x * x_mask)
-      x = self.norm_layers[i](x)
-      x = self.relu_drop(x)
-    x = x_org + self.proj(x)
-    return x * x_mask
-
-
-class DDSConv(nn.Module):
-  """
-  Dilated and Depth-Separable Convolution
-  """
-  def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
-    super().__init__()
-    self.channels = channels
-    self.kernel_size = kernel_size
-    self.n_layers = n_layers
-    self.p_dropout = p_dropout
-
-    self.drop = nn.Dropout(p_dropout)
-    self.convs_sep = nn.ModuleList()
-    self.convs_1x1 = nn.ModuleList()
-    self.norms_1 = nn.ModuleList()
-    self.norms_2 = nn.ModuleList()
-    for i in range(n_layers):
-      dilation = kernel_size ** i
-      padding = (kernel_size * dilation - dilation) // 2
-      self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
-          groups=channels, dilation=dilation, padding=padding
-      ))
-      self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
-      self.norms_1.append(LayerNorm(channels))
-      self.norms_2.append(LayerNorm(channels))
-
-  def forward(self, x, x_mask, g=None):
-    if g is not None:
-      x = x + g
-    for i in range(self.n_layers):
-      y = self.convs_sep[i](x * x_mask)
-      y = self.norms_1[i](y)
-      y = F.gelu(y)
-      y = self.convs_1x1[i](y)
-      y = self.norms_2[i](y)
-      y = F.gelu(y)
-      y = self.drop(y)
-      x = x + y
-    return x * x_mask
-
-
-class WN(torch.nn.Module):
-  def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
-    super(WN, self).__init__()
-    assert kernel_size % 2 == 1
-    self.hidden_channels = hidden_channels
-    self.kernel_size = kernel_size
-    self.dilation_rate = dilation_rate
-    self.n_layers = n_layers
-    self.gin_channels = gin_channels
-    self.p_dropout = p_dropout
-
-    self.in_layers = torch.nn.ModuleList()
-    self.res_skip_layers = torch.nn.ModuleList()
-    self.drop = nn.Dropout(p_dropout)
-
-    if gin_channels != 0:
-      cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
-      self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
-    for i in range(n_layers):
-      dilation = 
dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, 
dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) 
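-    # Explanatory note: the projection below emits 3 * num_bins - 1 values per
-    # half-channel: num_bins spline widths, num_bins heights, and num_bins - 1
-    # interior-knot derivatives for the rational-quadratic transform with
-    # linear tails used in forward().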
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_tokenutil.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_tokenutil.py deleted file mode 100644 index c4539d1fc7e330bfcde2086562c10f0f03161402..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_tokenutil.py +++ /dev/null @@ -1,141 +0,0 @@ -"""Tests for tokenutil""" -# Copyright (c) IPython Development Team. -# Distributed under the terms of the Modified BSD License. - -import pytest - -from IPython.utils.tokenutil import token_at_cursor, line_at_cursor - -def expect_token(expected, cell, cursor_pos): - token = token_at_cursor(cell, cursor_pos) - offset = 0 - for line in cell.splitlines(): - if offset + len(line) >= cursor_pos: - break - else: - offset += len(line)+1 - column = cursor_pos - offset - line_with_cursor = "%s|%s" % (line[:column], line[column:]) - assert token == expected, "Expected %r, got %r in: %r (pos %i)" % ( - expected, - token, - line_with_cursor, - cursor_pos, - ) - - -def test_simple(): - cell = "foo" - for i in range(len(cell)): - expect_token("foo", cell, i) - -def test_function(): - cell = "foo(a=5, b='10')" - expected = 'foo' - # up to `foo(|a=` - for i in range(cell.find('a=') + 1): - expect_token("foo", cell, i) - # find foo after `=` - for i in [cell.find('=') + 1, cell.rfind('=') + 1]: - expect_token("foo", cell, i) - # in between `5,|` and `|b=` - for i in range(cell.find(','), cell.find('b=')): - expect_token("foo", cell, i) - -def test_multiline(): - cell = '\n'.join([ - 'a = 5', - 'b = hello("string", there)' - ]) - expected = 'hello' - start = cell.index(expected) + 1 - for i in range(start, start + len(expected)): - expect_token(expected, cell, i) - expected = 'hello' - start = cell.index(expected) + 1 - for i in range(start, start + len(expected)): - expect_token(expected, cell, i) - -def test_multiline_token(): - cell = '\n'.join([ - '"""\n\nxxxxxxxxxx\n\n"""', - '5, """', - 'docstring', - 'multiline token', - '""", [', - '2, 3, "complicated"]', - 'b = hello("string", there)' - ]) - expected = 'hello' - start = cell.index(expected) + 1 - for i in range(start, start + len(expected)): - expect_token(expected, cell, i) - expected = 'hello' - start = cell.index(expected) + 1 - for i in range(start, start + len(expected)): - expect_token(expected, cell, i) - -def test_nested_call(): - cell = "foo(bar(a=5), b=10)" - expected = 'foo' - start = 
cell.index('bar') + 1 - for i in range(start, start + 3): - expect_token(expected, cell, i) - expected = 'bar' - start = cell.index('a=') - for i in range(start, start + 3): - expect_token(expected, cell, i) - expected = 'foo' - start = cell.index(')') + 1 - for i in range(start, len(cell)-1): - expect_token(expected, cell, i) - -def test_attrs(): - cell = "a = obj.attr.subattr" - expected = 'obj' - idx = cell.find('obj') + 1 - for i in range(idx, idx + 3): - expect_token(expected, cell, i) - idx = cell.find('.attr') + 2 - expected = 'obj.attr' - for i in range(idx, idx + 4): - expect_token(expected, cell, i) - idx = cell.find('.subattr') + 2 - expected = 'obj.attr.subattr' - for i in range(idx, len(cell)): - expect_token(expected, cell, i) - -def test_line_at_cursor(): - cell = "" - (line, offset) = line_at_cursor(cell, cursor_pos=11) - assert line == "" - assert offset == 0 - - # The position after a newline should be the start of the following line. - cell = "One\nTwo\n" - (line, offset) = line_at_cursor(cell, cursor_pos=4) - assert line == "Two\n" - assert offset == 4 - - # The end of a cell should be on the last line - cell = "pri\npri" - (line, offset) = line_at_cursor(cell, cursor_pos=7) - assert line == "pri" - assert offset == 4 - - -@pytest.mark.parametrize( - "c, token", - zip( - list(range(16, 22)) + list(range(22, 28)), - ["int"] * (22 - 16) + ["map"] * (28 - 22), - ), -) -def test_multiline_statement(c, token): - cell = """a = (1, - 3) - -int() -map() -""" - expect_token(token, cell, c) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/__init__.py deleted file mode 100644 index 690d64e63bc40a6006318cd70535017d41643def..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# ruff: noqa -from .v5 import * diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/db/migrations/00001-migration-1.sqlite.sql b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/db/migrations/00001-migration-1.sqlite.sql deleted file mode 100644 index a214bae8d5b0d6482fedd18265d4dfc756d47485..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/db/migrations/00001-migration-1.sqlite.sql +++ /dev/null @@ -1,3 +0,0 @@ -CREATE TABLE table1 ( - name TEXT PRIMARY KEY -); diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/property/strategies.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/property/strategies.py deleted file mode 100644 index b082e033d49f451f806eae9887026914a9e74413..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/property/strategies.py +++ /dev/null @@ -1,545 +0,0 @@ -import hashlib -import hypothesis -import hypothesis.strategies as st -from typing import Any, Optional, List, Dict, Union -from typing_extensions import TypedDict -import numpy as np -import numpy.typing as npt -import chromadb.api.types as types -import re -from hypothesis.strategies._internal.strategies import SearchStrategy -from hypothesis.errors import InvalidDefinition -from hypothesis.stateful import RuleBasedStateMachine - -from dataclasses import dataclass - -from chromadb.api.types import Documents, Embeddings, Metadata - -# Set the random seed for 
reproducibility -np.random.seed(0) # unnecessary, hypothesis does this for us - -# See Hypothesis documentation for creating strategies at -# https://hypothesis.readthedocs.io/en/latest/data.html - -# NOTE: Because these strategies are used in state machines, we need to -# work around an issue with state machines, in which strategies that frequently -# are marked as invalid (i.e. through the use of `assume` or `.filter`) can cause the -# state machine tests to fail with an hypothesis.errors.Unsatisfiable. - -# Ultimately this is because the entire state machine is run as a single Hypothesis -# example, which ends up drawing from the same strategies an enormous number of times. -# Whenever a strategy marks itself as invalid, Hypothesis tries to start the entire -# state machine run over. See https://github.com/HypothesisWorks/hypothesis/issues/3618 - -# Because strategy generation is all interrelated, seemingly small changes (especially -# ones called early in a test) can have an outside effect. Generating lists with -# unique=True, or dictionaries with a min size seems especially bad. - -# Please make changes to these strategies incrementally, testing to make sure they don't -# start generating unsatisfiable examples. - -test_hnsw_config = { - "hnsw:construction_ef": 128, - "hnsw:search_ef": 128, - "hnsw:M": 128, -} - - -class RecordSet(TypedDict): - """ - A generated set of embeddings, ids, metadatas, and documents that - represent what a user would pass to the API. - """ - - ids: Union[types.ID, List[types.ID]] - embeddings: Optional[Union[types.Embeddings, types.Embedding]] - metadatas: Optional[Union[List[types.Metadata], types.Metadata]] - documents: Optional[Union[List[types.Document], types.Document]] - - -class NormalizedRecordSet(TypedDict): - """ - A RecordSet, with all fields normalized to lists. - """ - - ids: List[types.ID] - embeddings: Optional[types.Embeddings] - metadatas: Optional[List[types.Metadata]] - documents: Optional[List[types.Document]] - - -class StateMachineRecordSet(TypedDict): - """ - Represents the internal state of a state machine in hypothesis tests. - """ - - ids: List[types.ID] - embeddings: types.Embeddings - metadatas: List[Optional[types.Metadata]] - documents: List[Optional[types.Document]] - - -class Record(TypedDict): - """ - A single generated record. - """ - - id: types.ID - embedding: Optional[types.Embedding] - metadata: Optional[types.Metadata] - document: Optional[types.Document] - - -# TODO: support arbitrary text everywhere so we don't SQL-inject ourselves. 
-# TODO: support empty strings everywhere -sql_alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_" -safe_text = st.text(alphabet=sql_alphabet, min_size=1) - -# Workaround for FastAPI json encoding peculiarities -# https://github.com/tiangolo/fastapi/blob/8ac8d70d52bb0dd9eb55ba4e22d3e383943da05c/fastapi/encoders.py#L104 -safe_text = safe_text.filter(lambda s: not s.startswith("_sa")) - -safe_integers = st.integers( - min_value=-(2**31), max_value=2**31 - 1 -) # TODO: handle longs -safe_floats = st.floats( - allow_infinity=False, - allow_nan=False, - allow_subnormal=False, - min_value=-1e6, - max_value=1e6, -) # TODO: handle infinity and NAN - -safe_values: List[SearchStrategy[Union[int, float, str]]] = [ - safe_text, - safe_integers, - safe_floats, -] - - -def one_or_both( - strategy_a: st.SearchStrategy[Any], strategy_b: st.SearchStrategy[Any] -) -> st.SearchStrategy[Any]: - return st.one_of( - st.tuples(strategy_a, strategy_b), - st.tuples(strategy_a, st.none()), - st.tuples(st.none(), strategy_b), - ) - - -# Temporarily generate only these to avoid SQL formatting issues. -legal_id_characters = ( - "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_./+" -) - -float_types = [np.float16, np.float32, np.float64] -int_types = [np.int16, np.int32, np.int64] # TODO: handle int types - - -@st.composite -def collection_name(draw: st.DrawFn) -> str: - _collection_name_re = re.compile(r"^[a-zA-Z][a-zA-Z0-9-]{1,60}[a-zA-Z0-9]$") - _ipv4_address_re = re.compile(r"^([0-9]{1,3}\.){3}[0-9]{1,3}$") - _two_periods_re = re.compile(r"\.\.") - - name: str = draw(st.from_regex(_collection_name_re)) - hypothesis.assume(not _ipv4_address_re.match(name)) - hypothesis.assume(not _two_periods_re.search(name)) - - return name - - -collection_metadata = st.one_of( - st.none(), st.dictionaries(safe_text, st.one_of(*safe_values)) -) - - -# TODO: Use a hypothesis strategy while maintaining embedding uniqueness -# Or handle duplicate embeddings within a known epsilon -def create_embeddings( - dim: int, - count: int, - dtype: npt.DTypeLike, -) -> types.Embeddings: - embeddings: types.Embeddings = ( - np.random.uniform( - low=-1.0, - high=1.0, - size=(count, dim), - ) - .astype(dtype) - .tolist() - ) - - return embeddings - - -class hashing_embedding_function(types.EmbeddingFunction): - def __init__(self, dim: int, dtype: npt.DTypeLike) -> None: - self.dim = dim - self.dtype = dtype - - def __call__(self, texts: types.Documents) -> types.Embeddings: - # Hash the texts and convert to hex strings - hashed_texts = [ - list(hashlib.sha256(text.encode("utf-8")).hexdigest()) for text in texts - ] - # Pad with repetition, or truncate the hex strings to the desired dimension - padded_texts = [ - text * (self.dim // len(text)) + text[: self.dim % len(text)] - for text in hashed_texts - ] - - # Convert the hex strings to dtype - embeddings: types.Embeddings = np.array( - [[int(char, 16) / 15.0 for char in text] for text in padded_texts], - dtype=self.dtype, - ).tolist() - - return embeddings - - -class not_implemented_embedding_function(types.EmbeddingFunction): - def __call__(self, texts: Documents) -> Embeddings: - assert False, "This embedding function is not implemented" - - -def embedding_function_strategy( - dim: int, dtype: npt.DTypeLike -) -> st.SearchStrategy[types.EmbeddingFunction]: - return st.just(hashing_embedding_function(dim, dtype)) - - -@dataclass -class Collection: - name: str - metadata: Optional[types.Metadata] - dimension: int - dtype: npt.DTypeLike - 
known_metadata_keys: types.Metadata - known_document_keywords: List[str] - has_documents: bool = False - has_embeddings: bool = False - embedding_function: Optional[types.EmbeddingFunction] = None - - -@st.composite -def collections( - draw: st.DrawFn, - add_filterable_data: bool = False, - with_hnsw_params: bool = False, - has_embeddings: Optional[bool] = None, - has_documents: Optional[bool] = None, -) -> Collection: - """Strategy to generate a Collection object. If add_filterable_data is True, then known_metadata_keys and known_document_keywords will be populated with consistent data.""" - - assert not ((has_embeddings is False) and (has_documents is False)) - - name = draw(collection_name()) - metadata = draw(collection_metadata) - dimension = draw(st.integers(min_value=2, max_value=2048)) - dtype = draw(st.sampled_from(float_types)) - - if with_hnsw_params: - if metadata is None: - metadata = {} - metadata.update(test_hnsw_config) - # Sometimes, select a space at random - if draw(st.booleans()): - # TODO: pull the distance functions from a source of truth that lives not - # in tests once https://github.com/chroma-core/issues/issues/61 lands - metadata["hnsw:space"] = draw(st.sampled_from(["cosine", "l2", "ip"])) - - known_metadata_keys: Dict[str, Union[int, str, float]] = {} - if add_filterable_data: - while len(known_metadata_keys) < 5: - key = draw(safe_text) - known_metadata_keys[key] = draw(st.one_of(*safe_values)) - - if has_documents is None: - has_documents = draw(st.booleans()) - assert has_documents is not None - if has_documents and add_filterable_data: - known_document_keywords = draw(st.lists(safe_text, min_size=5, max_size=5)) - else: - known_document_keywords = [] - - if not has_documents: - has_embeddings = True - else: - if has_embeddings is None: - has_embeddings = draw(st.booleans()) - assert has_embeddings is not None - - embedding_function = draw(embedding_function_strategy(dimension, dtype)) - - return Collection( - name=name, - metadata=metadata, - dimension=dimension, - dtype=dtype, - known_metadata_keys=known_metadata_keys, - has_documents=has_documents, - known_document_keywords=known_document_keywords, - has_embeddings=has_embeddings, - embedding_function=embedding_function, - ) - - -@st.composite -def metadata(draw: st.DrawFn, collection: Collection) -> types.Metadata: - """Strategy for generating metadata that could be a part of the given collection""" - # First draw a random dictionary. - metadata: types.Metadata = draw(st.dictionaries(safe_text, st.one_of(*safe_values))) - # Then, remove keys that overlap with the known keys for the coll - # to avoid type errors when comparing. 
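-    # (e.g. a freshly drawn string under a key whose known value is numeric
-    # would make the range comparisons drawn in where_clause below ill-typed)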
-    if collection.known_metadata_keys:
-        for key in collection.known_metadata_keys.keys():
-            if key in metadata:
-                del metadata[key]
-    # Finally, add in some of the known keys for the collection
-    sampling_dict: Dict[str, st.SearchStrategy[Union[str, int, float]]] = {
-        k: st.just(v) for k, v in collection.known_metadata_keys.items()
-    }
-    metadata.update(draw(st.fixed_dictionaries({}, optional=sampling_dict)))
-    return metadata
-
-
-@st.composite
-def document(draw: st.DrawFn, collection: Collection) -> types.Document:
-    """Strategy for generating documents that could be a part of the given collection"""
-
-    if collection.known_document_keywords:
-        known_words_st = st.sampled_from(collection.known_document_keywords)
-    else:
-        known_words_st = st.text(min_size=1)
-
-    random_words_st = st.text(min_size=1)
-    words = draw(st.lists(st.one_of(known_words_st, random_words_st), min_size=1))
-    return " ".join(words)
-
-
-@st.composite
-def recordsets(
-    draw: st.DrawFn,
-    collection_strategy: SearchStrategy[Collection] = collections(),
-    id_strategy: SearchStrategy[str] = safe_text,
-    min_size: int = 1,
-    max_size: int = 50,
-) -> RecordSet:
-    collection = draw(collection_strategy)
-
-    ids = list(
-        draw(st.lists(id_strategy, min_size=min_size, max_size=max_size, unique=True))
-    )
-
-    embeddings: Optional[Embeddings] = None
-    if collection.has_embeddings:
-        embeddings = create_embeddings(collection.dimension, len(ids), collection.dtype)
-    metadatas = draw(
-        st.lists(metadata(collection), min_size=len(ids), max_size=len(ids))
-    )
-    documents: Optional[Documents] = None
-    if collection.has_documents:
-        documents = draw(
-            st.lists(document(collection), min_size=len(ids), max_size=len(ids))
-        )
-
-    # in the case where we have a single record, sometimes exercise
-    # the code that handles individual values rather than lists.
-    # In this case, any field may be a list or a single value.
-    if len(ids) == 1:
-        single_id: Union[str, List[str]] = ids[0] if draw(st.booleans()) else ids
-        single_embedding = (
-            embeddings[0]
-            if embeddings is not None and draw(st.booleans())
-            else embeddings
-        )
-        single_metadata: Union[Metadata, List[Metadata]] = (
-            metadatas[0] if draw(st.booleans()) else metadatas
-        )
-        single_document = (
-            documents[0] if documents is not None and draw(st.booleans()) else documents
-        )
-        return {
-            "ids": single_id,
-            "embeddings": single_embedding,
-            "metadatas": single_metadata,
-            "documents": single_document,
-        }
-
-    return {
-        "ids": ids,
-        "embeddings": embeddings,
-        "metadatas": metadatas,
-        "documents": documents,
-    }
-
-
-# This class is mostly cloned from hypothesis.stateful.RuleStrategy,
-# but always runs all the rules, instead of using a FeatureStrategy to
-# enable/disable rules. Disabled rules cause the entire test to be marked invalid and,
-# combined with the complexity of our other strategies, lead to an
-# unacceptably increased incidence of hypothesis.errors.Unsatisfiable.
-class DeterministicRuleStrategy(SearchStrategy):  # type: ignore
-    def __init__(self, machine: RuleBasedStateMachine) -> None:
-        super().__init__()  # type: ignore
-        self.machine = machine
-        self.rules = list(machine.rules())  # type: ignore
-
-        # The order is a bit arbitrary. Primarily we're trying to group rules
-        # that write to the same location together, and to put rules with no
-        # target first as they have less effect on the structure. We order from
-        # fewer to more arguments on grounds that it will plausibly need less
-        # data. 
This probably won't work especially well and we could be - # smarter about it, but it's better than just doing it in definition - # order. - self.rules.sort( - key=lambda rule: ( - sorted(rule.targets), - len(rule.arguments), - rule.function.__name__, - ) - ) - - def __repr__(self) -> str: - return "{}(machine={}({{...}}))".format( - self.__class__.__name__, - self.machine.__class__.__name__, - ) - - def do_draw(self, data): # type: ignore - if not any(self.is_valid(rule) for rule in self.rules): - msg = f"No progress can be made from state {self.machine!r}" - raise InvalidDefinition(msg) from None - - rule = data.draw(st.sampled_from([r for r in self.rules if self.is_valid(r)])) - argdata = data.draw(rule.arguments_strategy) - return (rule, argdata) - - def is_valid(self, rule) -> bool: # type: ignore - if not all(precond(self.machine) for precond in rule.preconditions): - return False - - for b in rule.bundles: - bundle = self.machine.bundle(b.name) # type: ignore - if not bundle: - return False - return True - - -@st.composite -def where_clause(draw: st.DrawFn, collection: Collection) -> types.Where: - """Generate a filter that could be used in a query against the given collection""" - - known_keys = sorted(collection.known_metadata_keys.keys()) - - key = draw(st.sampled_from(known_keys)) - value = collection.known_metadata_keys[key] - - legal_ops: List[Optional[str]] = [None, "$eq", "$ne"] - if not isinstance(value, str): - legal_ops.extend(["$gt", "$lt", "$lte", "$gte"]) - if isinstance(value, float): - # Add or subtract a small number to avoid floating point rounding errors - value = value + draw(st.sampled_from([1e-6, -1e-6])) - - op: types.WhereOperator = draw(st.sampled_from(legal_ops)) - - if op is None: - return {key: value} - else: - return {key: {op: value}} - - -@st.composite -def where_doc_clause(draw: st.DrawFn, collection: Collection) -> types.WhereDocument: - """Generate a where_document filter that could be used against the given collection""" - if collection.known_document_keywords: - word = draw(st.sampled_from(collection.known_document_keywords)) - else: - word = draw(safe_text) - return {"$contains": word} - - -def binary_operator_clause( - base_st: SearchStrategy[types.Where], -) -> SearchStrategy[types.Where]: - op: SearchStrategy[types.LogicalOperator] = st.sampled_from(["$and", "$or"]) - return st.dictionaries( - keys=op, - values=st.lists(base_st, max_size=2, min_size=2), - min_size=1, - max_size=1, - ) - - -def binary_document_operator_clause( - base_st: SearchStrategy[types.WhereDocument], -) -> SearchStrategy[types.WhereDocument]: - op: SearchStrategy[types.LogicalOperator] = st.sampled_from(["$and", "$or"]) - return st.dictionaries( - keys=op, - values=st.lists(base_st, max_size=2, min_size=2), - min_size=1, - max_size=1, - ) - - -@st.composite -def recursive_where_clause(draw: st.DrawFn, collection: Collection) -> types.Where: - base_st = where_clause(collection) - where: types.Where = draw(st.recursive(base_st, binary_operator_clause)) - return where - - -@st.composite -def recursive_where_doc_clause( - draw: st.DrawFn, collection: Collection -) -> types.WhereDocument: - base_st = where_doc_clause(collection) - where: types.WhereDocument = draw( - st.recursive(base_st, binary_document_operator_clause) - ) - return where - - -class Filter(TypedDict): - where: Optional[types.Where] - ids: Optional[Union[str, List[str]]] - where_document: Optional[types.WhereDocument] - - -@st.composite -def filters( - draw: st.DrawFn, - collection_st: 
st.SearchStrategy[Collection], - recordset_st: st.SearchStrategy[RecordSet], - include_all_ids: bool = False, -) -> Filter: - collection = draw(collection_st) - recordset = draw(recordset_st) - - where_clause = draw(st.one_of(st.none(), recursive_where_clause(collection))) - where_document_clause = draw( - st.one_of(st.none(), recursive_where_doc_clause(collection)) - ) - - ids: Optional[Union[List[types.ID], types.ID]] - # Record sets can be a value instead of a list of values if there is only one record - if isinstance(recordset["ids"], str): - ids = [recordset["ids"]] - else: - ids = recordset["ids"] - - if not include_all_ids: - ids = draw(st.one_of(st.none(), st.lists(st.sampled_from(ids)))) - if ids is not None: - # Remove duplicates since hypothesis samples with replacement - ids = list(set(ids)) - - # Test both the single value list and the unwrapped single value case - if ids is not None and len(ids) == 1 and draw(st.booleans()): - ids = ids[0] - - return {"where": where_clause, "where_document": where_document_clause, "ids": ids} diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/container.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/container.py deleted file mode 100644 index a96d570970177c0ab91447d8411e4ec09a9994cb..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/container.py +++ /dev/null @@ -1,294 +0,0 @@ -import array -import logging -from typing import Sequence, Collection - -from clickhouse_connect.driver.insert import InsertContext -from clickhouse_connect.driver.query import QueryContext -from clickhouse_connect.driver.types import ByteSource -from clickhouse_connect.json_impl import any_to_json -from clickhouse_connect.datatypes.base import ClickHouseType, TypeDef -from clickhouse_connect.driver.common import must_swap -from clickhouse_connect.datatypes.registry import get_from_name - - -logger = logging.getLogger(__name__) - - -class Array(ClickHouseType): - __slots__ = ('element_type',) - python_type = list - - def __init__(self, type_def: TypeDef): - super().__init__(type_def) - self.element_type = get_from_name(type_def.values[0]) - self._name_suffix = f'({self.element_type.name})' - - def read_column_prefix(self, source: ByteSource): - return self.element_type.read_column_prefix(source) - - def _data_size(self, sample: Sequence) -> int: - if len(sample) == 0: - return 8 - total = 0 - for x in sample: - total += self.element_type.data_size(x) - return total // len(sample) + 8 - - # pylint: disable=too-many-locals - def read_column_data(self, source: ByteSource, num_rows: int, ctx: QueryContext): - final_type = self.element_type - depth = 1 - while isinstance(final_type, Array): - depth += 1 - final_type = final_type.element_type - level_size = num_rows - offset_sizes = [] - for _ in range(depth): - level_offsets = source.read_array('Q', level_size) - offset_sizes.append(level_offsets) - level_size = level_offsets[-1] if level_offsets else 0 - if level_size: - all_values = final_type.read_column_data(source, level_size, ctx) - else: - all_values = [] - column = all_values if isinstance(all_values, list) else list(all_values) - for offset_range in reversed(offset_sizes): - data = [] - last = 0 - for x in offset_range: - data.append(column[last: x]) - last = x - column = data - return column - - def write_column_prefix(self, dest: bytearray): - self.element_type.write_column_prefix(dest) - - def 
write_column_data(self, column: Sequence, dest: bytearray, ctx: InsertContext): - final_type = self.element_type - depth = 1 - while isinstance(final_type, Array): - depth += 1 - final_type = final_type.element_type - for _ in range(depth): - total = 0 - data = [] - offsets = array.array('Q') - for x in column: - total += len(x) - offsets.append(total) - data.extend(x) - if must_swap: - offsets.byteswap() - dest += offsets.tobytes() - column = data - final_type.write_column_data(column, dest, ctx) - - -class Tuple(ClickHouseType): - _slots = 'element_names', 'element_types' - python_type = tuple - valid_formats = 'tuple', 'json', 'native' # native is 'tuple' for unnamed tuples, and dict for named tuples - - def __init__(self, type_def: TypeDef): - super().__init__(type_def) - self.element_names = type_def.keys - self.element_types = [get_from_name(name) for name in type_def.values] - if self.element_names: - self._name_suffix = f"({', '.join(k + ' ' + str(v) for k, v in zip(type_def.keys, type_def.values))})" - else: - self._name_suffix = type_def.arg_str - - def _data_size(self, sample: Collection) -> int: - if len(sample) == 0: - return 0 - elem_size = 0 - for ix, e_type in enumerate(self.element_types): - if e_type.byte_size > 0: - elem_size += e_type.byte_size - else: - elem_size += e_type.data_size([x[ix] for x in sample]) - return elem_size - - def read_column_prefix(self, source: ByteSource): - for e_type in self.element_types: - e_type.read_column_prefix(source) - - def read_column_data(self, source: ByteSource, num_rows: int, ctx: QueryContext): - columns = [] - e_names = self.element_names - for e_type in self.element_types: - column = e_type.read_column_data(source, num_rows, ctx) - columns.append(column) - if e_names and self.read_format(ctx) != 'tuple': - dicts = [{} for _ in range(num_rows)] - for ix, x in enumerate(dicts): - for y, key in enumerate(e_names): - x[key] = columns[y][ix] - if self.read_format(ctx) == 'json': - to_json = any_to_json - return [to_json(x) for x in dicts] - return dicts - return tuple(zip(*columns)) - - def write_column_prefix(self, dest: bytearray): - for e_type in self.element_types: - e_type.write_column_prefix(dest) - - def write_column_data(self, column: Sequence, dest: bytearray, ctx: InsertContext): - columns = list(zip(*column)) - for e_type, elem_column in zip(self.element_types, columns): - e_type.write_column_data(elem_column, dest, ctx) - - -class Map(ClickHouseType): - _slots = 'key_type', 'value_type' - python_type = dict - - def __init__(self, type_def: TypeDef): - super().__init__(type_def) - self.key_type = get_from_name(type_def.values[0]) - self.value_type = get_from_name(type_def.values[1]) - self._name_suffix = type_def.arg_str - - def _data_size(self, sample: Collection) -> int: - total = 0 - if len(sample) == 0: - return 0 - for x in sample: - total += self.key_type.data_size(x.keys()) - total += self.value_type.data_size(x.values()) - return total // len(sample) - - def read_column_prefix(self, source: ByteSource): - self.key_type.read_column_prefix(source) - self.value_type.read_column_prefix(source) - - # pylint: disable=too-many-locals - def read_column_data(self, source: ByteSource, num_rows: int, ctx: QueryContext): - offsets = source.read_array('Q', num_rows) - total_rows = offsets[-1] - keys = self.key_type.read_column_data(source, total_rows, ctx) - values = self.value_type.read_column_data(source, total_rows, ctx) - all_pairs = tuple(zip(keys, values)) - column = [] - app = column.append - last = 0 - for offset in 
offsets: - app(dict(all_pairs[last: offset])) - last = offset - return column - - def write_column_prefix(self, dest: bytearray): - self.key_type.write_column_prefix(dest) - self.value_type.write_column_prefix(dest) - - def write_column_data(self, column: Sequence, dest: bytearray, ctx: InsertContext): - offsets = array.array('Q') - keys = [] - values = [] - total = 0 - for v in column: - total += len(v) - offsets.append(total) - keys.extend(v.keys()) - values.extend(v.values()) - if must_swap: - offsets.byteswap() - dest += offsets.tobytes() - self.key_type.write_column_data(keys, dest, ctx) - self.value_type.write_column_data(values, dest, ctx) - - -class Nested(ClickHouseType): - __slots__ = 'tuple_array', 'element_names', 'element_types' - python_type = Sequence[dict] - - def __init__(self, type_def): - super().__init__(type_def) - self.element_names = type_def.keys - self.tuple_array = get_from_name(f"Array(Tuple({','.join(type_def.values)}))") - self.element_types = self.tuple_array.element_type.element_types - cols = [f'{x[0]} {x[1].name}' for x in zip(type_def.keys, self.element_types)] - self._name_suffix = f"({', '.join(cols)})" - - def _data_size(self, sample: Collection) -> int: - keys = self.element_names - array_sample = [[tuple(sub_row[key] for key in keys) for sub_row in row] for row in sample] - return self.tuple_array.data_size(array_sample) - - def read_column_prefix(self, source: ByteSource): - self.tuple_array.read_column_prefix(source) - - def read_column_data(self, source: ByteSource, num_rows: int, ctx: QueryContext): - keys = self.element_names - data = self.tuple_array.read_column_data(source, num_rows, ctx) - return [[dict(zip(keys, x)) for x in row] for row in data] - - def write_column_prefix(self, dest: bytearray): - self.tuple_array.write_column_prefix(dest) - - def write_column_data(self, column: Sequence, dest: bytearray, ctx: InsertContext): - keys = self.element_names - data = [[tuple(sub_row[key] for key in keys) for sub_row in row] for row in column] - self.tuple_array.write_column_data(data, dest, ctx) - - -class JSON(ClickHouseType): - python_type = dict - # Native is a Python type (primitive, dict, array), string is an actual JSON string - valid_formats = 'string', 'native' - - def write_column_prefix(self, dest: bytearray): - dest.append(0x01) - - def _data_size(self, sample: Collection) -> int: - if len(sample) == 0: - return 0 - total = 0 - for x in sample: - if isinstance(x, str): - total += len(x) - elif x: - total += len(any_to_json(x)) - return total // len(sample) + 1 - - # pylint: disable=duplicate-code - def write_column_data(self, column: Sequence, dest: bytearray, ctx: InsertContext): - app = dest.append - first = self._first_value(column) - if isinstance(first, str) or self.write_format(ctx) == 'string': - for x in column: - v = x.encode() - sz = len(v) - while True: - b = sz & 0x7f - sz >>= 7 - if sz == 0: - app(b) - break - app(0x80 | b) - dest += v - else: - to_json = any_to_json - for x in column: - v = to_json(x) - sz = len(v) - while True: - b = sz & 0x7f - sz >>= 7 - if sz == 0: - app(b) - break - app(0x80 | b) - dest += v - - -class Object(JSON): - python_type = dict - - def __init__(self, type_def): - if type_def.values[0].lower() != "'json'": - raise NotImplementedError('Only json Object type is currently supported') - super().__init__(type_def) - self._name_suffix = type_def.arg_str diff --git a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/streaming.py 
b/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/streaming.py
deleted file mode 100644
index fdbdf5e90fc0c6560873d66bf273460b38e5ed7e..0000000000000000000000000000000000000000
--- a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/streaming.py
+++ /dev/null
@@ -1,135 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Streaming module API that should be implemented by all Streaming components.
-"""
-
-from contextlib import contextmanager
-import typing as tp
-from torch import nn
-import torch
-
-
-State = tp.Dict[str, torch.Tensor]
-
-
-class StreamingModule(nn.Module):
-    """Common API for streaming components.
-
-    Each streaming component has a streaming state, which is just a dict[str, Tensor].
-    By convention, the first dim of each tensor must be the batch size.
-    Don't use dots in the key names, as this would clash with submodules
-    (like in state_dict).
-
-    If `self._is_streaming` is True, the component should use and remember
-    the proper state inside `self._streaming_state`.
-
-    To set a streaming component in streaming state, use
-
-        with module.streaming():
-            ...
-
-    This will automatically reset the streaming state when exiting the context manager.
-    This also automatically propagates to all streaming child modules.
-
-    Some modules might also implement the `StreamingModule.flush` method, although
-    this one is trickier, as all parent modules must be StreamingModule and implement
-    it as well for it to work properly. See `StreamingSequential` below.
-    """
-    def __init__(self) -> None:
-        super().__init__()
-        self._streaming_state: State = {}
-        self._is_streaming = False
-
-    def _apply_named_streaming(self, fn: tp.Any):
-        for name, module in self.named_modules():
-            if isinstance(module, StreamingModule):
-                fn(name, module)
-
-    def _set_streaming(self, streaming: bool):
-        def _set_streaming(name, module):
-            module._is_streaming = streaming
-        self._apply_named_streaming(_set_streaming)
-
-    @contextmanager
-    def streaming(self):
-        """Context manager to enter streaming mode. Reset streaming state on exit.
-        """
-        self._set_streaming(True)
-        try:
-            yield
-        finally:
-            self._set_streaming(False)
-            self.reset_streaming()
-
-    def reset_streaming(self):
-        """Reset the streaming state.
-        """
-        def _reset(name: str, module: StreamingModule):
-            module._streaming_state.clear()
-
-        self._apply_named_streaming(_reset)
-
-    def get_streaming_state(self) -> State:
-        """Return the streaming state, including that of sub-modules.
-        """
-        state: State = {}
-
-        def _add(name: str, module: StreamingModule):
-            if name:
-                name += "."
-            for key, value in module._streaming_state.items():
-                state[name + key] = value
-
-        self._apply_named_streaming(_add)
-        return state
-
-    def set_streaming_state(self, state: State):
-        """Set the streaming state, including that of sub-modules.
-        """
-        state = dict(state)
-
-        def _set(name: str, module: StreamingModule):
-            if name:
-                name += "."
-            module._streaming_state.clear()
-            for key, value in list(state.items()):
-                # complexity is not ideal here, but probably fine.
-                if key.startswith(name):
-                    local_key = key[len(name):]
-                    if '.' not in local_key:
-                        module._streaming_state[local_key] = value
-                        del state[key]
-
-        self._apply_named_streaming(_set)
-        assert len(state) == 0, list(state.keys())
-
-    def flush(self, x: tp.Optional[torch.Tensor] = None):
-        """Flush any remaining outputs that were waiting for completion.
-        Typically, for convolutions, this will add the final padding
-        and process the last buffer.
-
-        This should take an optional argument `x`, which will be provided
-        if a module before this one in the streaming pipeline has already
-        emitted a flushed-out buffer.
-        """
-        if x is None:
-            return None
-        else:
-            return self(x)
-
-
-class StreamingSequential(StreamingModule, nn.Sequential):
-    """A streaming-compatible alternative to `nn.Sequential`.
-    """
-    def flush(self, x: tp.Optional[torch.Tensor] = None):
-        for module in self:
-            if isinstance(module, StreamingModule):
-                x = module.flush(x)
-            elif x is not None:
-                x = module(x)
-        return x
diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/meta_arch/oneformer_head.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/meta_arch/oneformer_head.py
deleted file mode 100644
index f8f8eb11b95838d2b61de5fa249a318877182c01..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/meta_arch/oneformer_head.py
+++ /dev/null
@@ -1,135 +0,0 @@
-# ------------------------------------------------------------------------------
-# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/meta_arch/mask_former_head.py
-# Modified by Jitesh Jain (https://github.com/praeclarumjj3)
-# ------------------------------------------------------------------------------
-
-import logging
-from copy import deepcopy
-from typing import Callable, Dict, List, Optional, Tuple, Union
-
-import fvcore.nn.weight_init as weight_init
-from torch import nn
-from torch.nn import functional as F
-
-from annotator.oneformer.detectron2.config import configurable
-from annotator.oneformer.detectron2.layers import Conv2d, ShapeSpec, get_norm
-from annotator.oneformer.detectron2.modeling import SEM_SEG_HEADS_REGISTRY
-from ..pixel_decoder.fpn import build_pixel_decoder
-from ..transformer_decoder.oneformer_transformer_decoder import build_transformer_decoder
-
-@SEM_SEG_HEADS_REGISTRY.register()
-class OneFormerHead(nn.Module):
-
-    _version = 2
-
-    def _load_from_state_dict(
-        self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
-    ):
-        version = local_metadata.get("version", None)
-        if version is None or version < 2:
-            # Do not warn if training from scratch
-            scratch = True
-            logger = logging.getLogger(__name__)
-            for k in list(state_dict.keys()):
-                newk = k
-                if "sem_seg_head" in k and not k.startswith(prefix + "predictor"):
-                    newk = k.replace(prefix, prefix + "pixel_decoder.")
-                    # logger.debug(f"{k} ==> {newk}")
-                if newk != k:
-                    state_dict[newk] = state_dict[k]
-                    del state_dict[k]
-                    scratch = False
-
-            if not scratch:
-                logger.warning(
-                    f"Weight format of {self.__class__.__name__} has changed! "
-                    "Please upgrade your models. Applying automatic conversion now ..."
-                )
-
-    @configurable
-    def __init__(
-        self,
-        input_shape: Dict[str, ShapeSpec],
-        *,
-        num_classes: int,
-        pixel_decoder: nn.Module,
-        loss_weight: float = 1.0,
-        ignore_value: int = -1,
-        # extra parameters
-        transformer_predictor: nn.Module,
-        transformer_in_feature: str,
-    ):
-        """
-        NOTE: this interface is experimental.
- Args: - input_shape: shapes (channels and stride) of the input features - num_classes: number of classes to predict - pixel_decoder: the pixel decoder module - loss_weight: loss weight - ignore_value: category id to be ignored during training. - transformer_predictor: the transformer decoder that makes prediction - transformer_in_feature: input feature name to the transformer_predictor - """ - super().__init__() - input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride) - self.in_features = [k for k, v in input_shape] - feature_strides = [v.stride for k, v in input_shape] - feature_channels = [v.channels for k, v in input_shape] - - self.ignore_value = ignore_value - self.common_stride = 4 - self.loss_weight = loss_weight - - self.pixel_decoder = pixel_decoder - self.predictor = transformer_predictor - self.transformer_in_feature = transformer_in_feature - - self.num_classes = num_classes - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - # figure out in_channels to transformer predictor - if cfg.MODEL.ONE_FORMER.TRANSFORMER_IN_FEATURE == "transformer_encoder": - transformer_predictor_in_channels = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM - elif cfg.MODEL.ONE_FORMER.TRANSFORMER_IN_FEATURE == "pixel_embedding": - transformer_predictor_in_channels = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM - elif cfg.MODEL.ONE_FORMER.TRANSFORMER_IN_FEATURE == "multi_scale_pixel_decoder": - transformer_predictor_in_channels = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM - else: - transformer_predictor_in_channels = input_shape[cfg.MODEL.ONE_FORMER.TRANSFORMER_IN_FEATURE].channels - - return { - "input_shape": { - k: v for k, v in input_shape.items() if k in cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES - }, - "ignore_value": cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE, - "num_classes": cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES, - "pixel_decoder": build_pixel_decoder(cfg, input_shape), - "loss_weight": cfg.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT, - "transformer_in_feature": cfg.MODEL.ONE_FORMER.TRANSFORMER_IN_FEATURE, - "transformer_predictor": build_transformer_decoder( - cfg, - transformer_predictor_in_channels, - mask_classification=True, - ), - } - - def forward(self, features, tasks, mask=None): - return self.layers(features, tasks, mask) - - def layers(self, features, tasks, mask=None): - mask_features, transformer_encoder_features, multi_scale_features, _, _ = self.pixel_decoder.forward_features(features) - - if self.transformer_in_feature == "multi_scale_pixel_decoder": - predictions = self.predictor(multi_scale_features, mask_features, tasks, mask) - else: - if self.transformer_in_feature == "transformer_encoder": - assert ( - transformer_encoder_features is not None - ), "Please use the TransformerEncoderPixelDecoder." 
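-                # Explanatory note: unlike the multi-scale branch above, the
-                # predictor here receives a single feature map rather than the
-                # multi-scale feature list.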
- predictions = self.predictor(transformer_encoder_features, mask_features, mask) - elif self.transformer_in_feature == "pixel_embedding": - predictions = self.predictor(mask_features, mask_features, mask) - else: - predictions = self.predictor(features[self.transformer_in_feature], mask_features, mask) - return predictions diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py deleted file mode 100644 index d02122ca0e68743b1bf7a893afae96042f23838c..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py +++ /dev/null @@ -1,57 +0,0 @@ -from abc import ABCMeta, abstractmethod - -from .decode_head import BaseDecodeHead - - -class BaseCascadeDecodeHead(BaseDecodeHead, metaclass=ABCMeta): - """Base class for cascade decode head used in - :class:`CascadeEncoderDecoder.""" - - def __init__(self, *args, **kwargs): - super(BaseCascadeDecodeHead, self).__init__(*args, **kwargs) - - @abstractmethod - def forward(self, inputs, prev_output): - """Placeholder of forward function.""" - pass - - def forward_train(self, inputs, prev_output, img_metas, gt_semantic_seg, - train_cfg): - """Forward function for training. - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - train_cfg (dict): The training config. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - seg_logits = self.forward(inputs, prev_output) - losses = self.losses(seg_logits, gt_semantic_seg) - - return losses - - def forward_test(self, inputs, prev_output, img_metas, test_cfg): - """Forward function for testing. - - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - test_cfg (dict): The testing config. - - Returns: - Tensor: Output segmentation map. 
- """ - return self.forward(inputs, prev_output) diff --git a/spaces/TEnngal/bingo/src/pages/api/blob.ts b/spaces/TEnngal/bingo/src/pages/api/blob.ts deleted file mode 100644 index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/pages/api/blob.ts +++ /dev/null @@ -1,40 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { Readable } from 'node:stream' -import { fetch } from '@/lib/isomorphic' - -const API_DOMAIN = 'https://www.bing.com' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { bcid } = req.query - - const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`, - { - method: 'GET', - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referrer-Policy": "origin-when-cross-origin", - }, - }, - ) - - res.writeHead(200, { - 'Content-Length': headers.get('content-length')!, - 'Content-Type': headers.get('content-type')!, - }) - // @ts-ignore - return Readable.fromWeb(body!).pipe(res) - } catch (e) { - console.log('Error', e) - return res.json({ - result: { - value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/groff.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/groff.py deleted file mode 100644 index 30a528e668f8e8bcbde9b466c95a2a34bffbef8f..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/groff.py +++ /dev/null @@ -1,170 +0,0 @@ -""" - pygments.formatters.groff - ~~~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter for groff output. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import math -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.util import get_bool_opt, get_int_opt - -__all__ = ['GroffFormatter'] - - -class GroffFormatter(Formatter): - """ - Format tokens with groff escapes to change their color and font style. - - .. versionadded:: 2.11 - - Additional options accepted: - - `style` - The style to use, can be a string or a Style subclass (default: - ``'default'``). - - `monospaced` - If set to true, monospace font will be used (default: ``true``). - - `linenos` - If set to true, print the line numbers (default: ``false``). - - `wrap` - Wrap lines to the specified number of characters. Disabled if set to 0 - (default: ``0``). 
- """ - - name = 'groff' - aliases = ['groff','troff','roff'] - filenames = [] - - def __init__(self, **options): - Formatter.__init__(self, **options) - - self.monospaced = get_bool_opt(options, 'monospaced', True) - self.linenos = get_bool_opt(options, 'linenos', False) - self._lineno = 0 - self.wrap = get_int_opt(options, 'wrap', 0) - self._linelen = 0 - - self.styles = {} - self._make_styles() - - - def _make_styles(self): - regular = '\\f[CR]' if self.monospaced else '\\f[R]' - bold = '\\f[CB]' if self.monospaced else '\\f[B]' - italic = '\\f[CI]' if self.monospaced else '\\f[I]' - - for ttype, ndef in self.style: - start = end = '' - if ndef['color']: - start += '\\m[%s]' % ndef['color'] - end = '\\m[]' + end - if ndef['bold']: - start += bold - end = regular + end - if ndef['italic']: - start += italic - end = regular + end - if ndef['bgcolor']: - start += '\\M[%s]' % ndef['bgcolor'] - end = '\\M[]' + end - - self.styles[ttype] = start, end - - - def _define_colors(self, outfile): - colors = set() - for _, ndef in self.style: - if ndef['color'] is not None: - colors.add(ndef['color']) - - for color in sorted(colors): - outfile.write('.defcolor ' + color + ' rgb #' + color + '\n') - - - def _write_lineno(self, outfile): - self._lineno += 1 - outfile.write("%s% 4d " % (self._lineno != 1 and '\n' or '', self._lineno)) - - - def _wrap_line(self, line): - length = len(line.rstrip('\n')) - space = ' ' if self.linenos else '' - newline = '' - - if length > self.wrap: - for i in range(0, math.floor(length / self.wrap)): - chunk = line[i*self.wrap:i*self.wrap+self.wrap] - newline += (chunk + '\n' + space) - remainder = length % self.wrap - if remainder > 0: - newline += line[-remainder-1:] - self._linelen = remainder - elif self._linelen + length > self.wrap: - newline = ('\n' + space) + line - self._linelen = length - else: - newline = line - self._linelen += length - - return newline - - - def _escape_chars(self, text): - text = text.replace('\\', '\\[u005C]'). \ - replace('.', '\\[char46]'). \ - replace('\'', '\\[u0027]'). \ - replace('`', '\\[u0060]'). 
\ - replace('~', '\\[u007E]') - copy = text - - for char in copy: - if len(char) != len(char.encode()): - uni = char.encode('unicode_escape') \ - .decode()[1:] \ - .replace('x', 'u00') \ - .upper() - text = text.replace(char, '\\[u' + uni[1:] + ']') - - return text - - - def format_unencoded(self, tokensource, outfile): - self._define_colors(outfile) - - outfile.write('.nf\n\\f[CR]\n') - - if self.linenos: - self._write_lineno(outfile) - - for ttype, value in tokensource: - while ttype not in self.styles: - ttype = ttype.parent - start, end = self.styles[ttype] - - for line in value.splitlines(True): - if self.wrap > 0: - line = self._wrap_line(line) - - if start and end: - text = self._escape_chars(line.rstrip('\n')) - if text != '': - outfile.write(''.join((start, text, end))) - else: - outfile.write(self._escape_chars(line.rstrip('\n'))) - - if line.endswith('\n'): - if self.linenos: - self._write_lineno(outfile) - self._linelen = 0 - else: - outfile.write('\n') - self._linelen = 0 - - outfile.write('\n.fi') diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/ansi.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/ansi.py deleted file mode 100644 index 66365e6536080bd9372d2a7a58b8ffa3447fec34..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/ansi.py +++ /dev/null @@ -1,240 +0,0 @@ -import re -import sys -from contextlib import suppress -from typing import Iterable, NamedTuple, Optional - -from .color import Color -from .style import Style -from .text import Text - -re_ansi = re.compile( - r""" -(?:\x1b\](.*?)\x1b\\)| -(?:\x1b([(@-Z\\-_]|\[[0-?]*[ -/]*[@-~])) -""", - re.VERBOSE, -) - - -class _AnsiToken(NamedTuple): - """Result of ansi tokenized string.""" - - plain: str = "" - sgr: Optional[str] = "" - osc: Optional[str] = "" - - -def _ansi_tokenize(ansi_text: str) -> Iterable[_AnsiToken]: - """Tokenize a string in to plain text and ANSI codes. - - Args: - ansi_text (str): A String containing ANSI codes. 
- - Yields: - AnsiToken: A named tuple of (plain, sgr, osc) - """ - - position = 0 - sgr: Optional[str] - osc: Optional[str] - for match in re_ansi.finditer(ansi_text): - start, end = match.span(0) - osc, sgr = match.groups() - if start > position: - yield _AnsiToken(ansi_text[position:start]) - if sgr: - if sgr == "(": - position = end + 1 - continue - if sgr.endswith("m"): - yield _AnsiToken("", sgr[1:-1], osc) - else: - yield _AnsiToken("", sgr, osc) - position = end - if position < len(ansi_text): - yield _AnsiToken(ansi_text[position:]) - - -SGR_STYLE_MAP = { - 1: "bold", - 2: "dim", - 3: "italic", - 4: "underline", - 5: "blink", - 6: "blink2", - 7: "reverse", - 8: "conceal", - 9: "strike", - 21: "underline2", - 22: "not dim not bold", - 23: "not italic", - 24: "not underline", - 25: "not blink", - 26: "not blink2", - 27: "not reverse", - 28: "not conceal", - 29: "not strike", - 30: "color(0)", - 31: "color(1)", - 32: "color(2)", - 33: "color(3)", - 34: "color(4)", - 35: "color(5)", - 36: "color(6)", - 37: "color(7)", - 39: "default", - 40: "on color(0)", - 41: "on color(1)", - 42: "on color(2)", - 43: "on color(3)", - 44: "on color(4)", - 45: "on color(5)", - 46: "on color(6)", - 47: "on color(7)", - 49: "on default", - 51: "frame", - 52: "encircle", - 53: "overline", - 54: "not frame not encircle", - 55: "not overline", - 90: "color(8)", - 91: "color(9)", - 92: "color(10)", - 93: "color(11)", - 94: "color(12)", - 95: "color(13)", - 96: "color(14)", - 97: "color(15)", - 100: "on color(8)", - 101: "on color(9)", - 102: "on color(10)", - 103: "on color(11)", - 104: "on color(12)", - 105: "on color(13)", - 106: "on color(14)", - 107: "on color(15)", -} - - -class AnsiDecoder: - """Translate ANSI code in to styled Text.""" - - def __init__(self) -> None: - self.style = Style.null() - - def decode(self, terminal_text: str) -> Iterable[Text]: - """Decode ANSI codes in an iterable of lines. - - Args: - lines (Iterable[str]): An iterable of lines of terminal output. - - Yields: - Text: Marked up Text. - """ - for line in terminal_text.splitlines(): - yield self.decode_line(line) - - def decode_line(self, line: str) -> Text: - """Decode a line containing ansi codes. - - Args: - line (str): A line of terminal output. - - Returns: - Text: A Text instance marked up according to ansi codes. 
- """ - from_ansi = Color.from_ansi - from_rgb = Color.from_rgb - _Style = Style - text = Text() - append = text.append - line = line.rsplit("\r", 1)[-1] - for plain_text, sgr, osc in _ansi_tokenize(line): - if plain_text: - append(plain_text, self.style or None) - elif osc is not None: - if osc.startswith("8;"): - _params, semicolon, link = osc[2:].partition(";") - if semicolon: - self.style = self.style.update_link(link or None) - elif sgr is not None: - # Translate in to semi-colon separated codes - # Ignore invalid codes, because we want to be lenient - codes = [ - min(255, int(_code) if _code else 0) - for _code in sgr.split(";") - if _code.isdigit() or _code == "" - ] - iter_codes = iter(codes) - for code in iter_codes: - if code == 0: - # reset - self.style = _Style.null() - elif code in SGR_STYLE_MAP: - # styles - self.style += _Style.parse(SGR_STYLE_MAP[code]) - elif code == 38: - #  Foreground - with suppress(StopIteration): - color_type = next(iter_codes) - if color_type == 5: - self.style += _Style.from_color( - from_ansi(next(iter_codes)) - ) - elif color_type == 2: - self.style += _Style.from_color( - from_rgb( - next(iter_codes), - next(iter_codes), - next(iter_codes), - ) - ) - elif code == 48: - # Background - with suppress(StopIteration): - color_type = next(iter_codes) - if color_type == 5: - self.style += _Style.from_color( - None, from_ansi(next(iter_codes)) - ) - elif color_type == 2: - self.style += _Style.from_color( - None, - from_rgb( - next(iter_codes), - next(iter_codes), - next(iter_codes), - ), - ) - - return text - - -if sys.platform != "win32" and __name__ == "__main__": # pragma: no cover - import io - import os - import pty - import sys - - decoder = AnsiDecoder() - - stdout = io.BytesIO() - - def read(fd: int) -> bytes: - data = os.read(fd, 1024) - stdout.write(data) - return data - - pty.spawn(sys.argv[1:], read) - - from .console import Console - - console = Console(record=True) - - stdout_result = stdout.getvalue().decode("utf-8") - print(stdout_result) - - for line in decoder.decode(stdout_result): - console.print(line) - - console.save_html("stdout.html") diff --git a/spaces/TechnoByte/soft-improved/theme_dropdown.py b/spaces/TechnoByte/soft-improved/theme_dropdown.py deleted file mode 100644 index 6235388fd00549553df44028f3ccf03e946994ea..0000000000000000000000000000000000000000 --- a/spaces/TechnoByte/soft-improved/theme_dropdown.py +++ /dev/null @@ -1,57 +0,0 @@ -import os -import pathlib - -from gradio.themes.utils import ThemeAsset - - -def create_theme_dropdown(): - import gradio as gr - - asset_path = pathlib.Path(__file__).parent / "themes" - themes = [] - for theme_asset in os.listdir(str(asset_path)): - themes.append( - (ThemeAsset(theme_asset), gr.Theme.load(str(asset_path / theme_asset))) - ) - - def make_else_if(theme_asset): - return f""" - else if (theme == '{str(theme_asset[0].version)}') {{ - var theme_css = `{theme_asset[1]._get_theme_css()}` - }}""" - - head, tail = themes[0], themes[1:] - if_statement = f""" - if (theme == "{str(head[0].version)}") {{ - var theme_css = `{head[1]._get_theme_css()}` - }} {" ".join(make_else_if(t) for t in tail)} - """ - - latest_to_oldest = sorted([t[0] for t in themes], key=lambda asset: asset.version)[ - ::-1 - ] - latest_to_oldest = [str(t.version) for t in latest_to_oldest] - - component = gr.Dropdown( - choices=latest_to_oldest, - value=latest_to_oldest[0], - render=False, - label="Select Version", - ).style(container=False) - - return ( - component, - f""" - (theme) => {{ - if 
(!document.querySelector('.theme-css')) {{ - var theme_elem = document.createElement('style'); - theme_elem.classList.add('theme-css'); - document.head.appendChild(theme_elem); - }} else {{ - var theme_elem = document.querySelector('.theme-css'); - }} - {if_statement} - theme_elem.innerHTML = theme_css; - }} - """, - ) diff --git a/spaces/TencentARC/Caption-Anything/README.md b/spaces/TencentARC/Caption-Anything/README.md deleted file mode 100644 index 5cf7a4f5679a7d7037957243442d7aba615993f9..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/Caption-Anything/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Caption Anything -emoji: 📚 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.26.0 -python_version: 3.8.9 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ThomasSimonini/Conversation-in-a-Tavern/app.py b/spaces/ThomasSimonini/Conversation-in-a-Tavern/app.py deleted file mode 100644 index 401574fd656a308a56018a1f9fc3c4ad366cb5e0..0000000000000000000000000000000000000000 --- a/spaces/ThomasSimonini/Conversation-in-a-Tavern/app.py +++ /dev/null @@ -1,106 +0,0 @@ -import gradio as gr -from gradio.inputs import Textbox, Slider - -import requests - -# Template -title = "A conversation with some NPC in a Tavern 🍻" -description = "" -article = """ -

    If you liked it, don't forget to 💖 the project 🥰

    Parameters:

    • message: what you want to say to the NPC.
    • npc_name: name of the NPC.
    • npc_prompt: prompt of the NPC; we can modify it to see if results are better.
    • top_p: controls how deterministic the model is in generating a response.
    • temperature: (sampling temperature) higher values mean the model will take more risks.
    • max_new_tokens: maximum number of tokens in the generation.

    Replies are generated by EleutherAI's GPT-J-6B through the Hugging Face Inference API.
    -Gandalf""" -theme="huggingface" - - -# Builds the prompt from what previously happened -def build_prompt(conversation, context, interlocutor_names): - prompt = context + "\n" - for player_msg, npc_msg in conversation: - line = "\n- " + interlocutor_names[0] + ":" + player_msg - prompt += line - line = "\n- " + interlocutor_names[1] + ":" + npc_msg - prompt += line - prompt += "" - return prompt - -# Recognize what the model said, if it used the correct format -def clean_chat_output(txt, prompt, interlocutor_names): - delimiter = "\n- "+interlocutor_names[0] - output = txt.replace(prompt, '') - output = output[:output.find(delimiter)] - return output - -# GPT-J-6B API -API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-j-6B" -def query(payload): - response = requests.post(API_URL, json=payload) - return response.json() - -def chat(message, npc_name, initial_prompt, top_p, temperature, max_new_tokens, history=[]): - interlocutor_names = ["Player", npc_name] - - print("message", message) - print("npc_name", npc_name) - print("initial_prompt", initial_prompt) - print("top_p", top_p) - print("temperature", temperature) - print("max_new_tokens", max_new_tokens) - print("history", history) - response = "Test" - history.append((message, "")) - conversation = history - - # Build the prompt - prompt = build_prompt(conversation, initial_prompt, interlocutor_names) - - # Build JSON - json_req = {"inputs": prompt, - "parameters": - { - "top_p": top_p, - "temperature": temperature, - "max_new_tokens": max_new_tokens, - "return_full_text": False - }} - - # Get the output - output = query(json_req) - output = output[0]['generated_text'] - print("output", output) - - answer = clean_chat_output(output, prompt, interlocutor_names) - response = answer - print("response", answer) - history[-1] = (message, response) - return history, history - - -#io = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B") - -iface = gr.Interface(fn=chat, -inputs=[Textbox(label="message", placeholder="Hello!"), - Textbox(label="npc_name", placeholder="Antoine"), - Textbox(label="initial_prompt", placeholder="The following is a conversation with Antoine, a guard for Northfall that's drinking in the Tavern."), - Slider(minimum=0.5, maximum=1, step=0.05, default=0.9, label="top_p"), - Slider(minimum=0.5, maximum=1.5, step=0.1, default=1.1, label="temperature"), - Slider(minimum=20, maximum=250, step=10, default=50, label="max_new_tokens"), - "state"], - outputs=["chatbot","state"], - #examples = [["Hello!", "", , 0.9, 1.1, 50, iface.state]], - allow_screenshot=True, - allow_flagging=True, - title=title, - article=article, - theme=theme) - -if __name__ == "__main__": - iface.launch() \ No newline at end of file diff --git a/spaces/TushDeMort/yolo/utils/torch_utils.py b/spaces/TushDeMort/yolo/utils/torch_utils.py deleted file mode 100644 index 1e631b555508457a4944c11a479176463719c0e8..0000000000000000000000000000000000000000 --- a/spaces/TushDeMort/yolo/utils/torch_utils.py +++ /dev/null @@ -1,374 +0,0 @@ -# YOLOR PyTorch utils - -import datetime -import logging -import math -import os -import platform -import subprocess -import time -from contextlib import contextmanager -from copy import deepcopy -from pathlib import Path - -import torch -import torch.backends.cudnn as cudnn -import torch.nn as nn -import torch.nn.functional as F -import torchvision - -try: - import thop # for FLOPS computation -except ImportError: - thop = None -logger = logging.getLogger(__name__) - - -@contextmanager -def 
torch_distributed_zero_first(local_rank: int): - """ - Decorator to make all processes in distributed training wait for each local_master to do something. - """ - if local_rank not in [-1, 0]: - torch.distributed.barrier() - yield - if local_rank == 0: - torch.distributed.barrier() - - -def init_torch_seeds(seed=0): - # Speed-reproducibility tradeoff https://pytorch.org/docs/stable/notes/randomness.html - torch.manual_seed(seed) - if seed == 0: # slower, more reproducible - cudnn.benchmark, cudnn.deterministic = False, True - else: # faster, less reproducible - cudnn.benchmark, cudnn.deterministic = True, False - - -def date_modified(path=__file__): - # return human-readable file modification date, i.e. '2021-3-26' - t = datetime.datetime.fromtimestamp(Path(path).stat().st_mtime) - return f'{t.year}-{t.month}-{t.day}' - - -def git_describe(path=Path(__file__).parent): # path must be a directory - # return human-readable git description, i.e. v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe - s = f'git -C {path} describe --tags --long --always' - try: - return subprocess.check_output(s, shell=True, stderr=subprocess.STDOUT).decode()[:-1] - except subprocess.CalledProcessError as e: - return '' # not a git repository - - -def select_device(device='', batch_size=None): - # device = 'cpu' or '0' or '0,1,2,3' - s = f'YOLOR 🚀 {git_describe() or date_modified()} torch {torch.__version__} ' # string - cpu = device.lower() == 'cpu' - if cpu: - os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False - elif device: # non-cpu device requested - os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable - assert torch.cuda.is_available(), f'CUDA unavailable, invalid device {device} requested' # check availability - - cuda = not cpu and torch.cuda.is_available() - if cuda: - n = torch.cuda.device_count() - if n > 1 and batch_size: # check that batch_size is compatible with device_count - assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}' - space = ' ' * len(s) - for i, d in enumerate(device.split(',') if device else range(n)): - p = torch.cuda.get_device_properties(i) - s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / 1024 ** 2}MB)\n" # bytes to MB - else: - s += 'CPU\n' - - logger.info(s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s) # emoji-safe - return torch.device('cuda:0' if cuda else 'cpu') - - -def time_synchronized(): - # pytorch-accurate time - if torch.cuda.is_available(): - torch.cuda.synchronize() - return time.time() - - -def profile(x, ops, n=100, device=None): - # profile a pytorch module or list of modules. Example usage: - # x = torch.randn(16, 3, 640, 640) # input - # m1 = lambda x: x * torch.sigmoid(x) - # m2 = nn.SiLU() - # profile(x, [m1, m2], n=100) # profile speed over 100 iterations - - device = device or torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') - x = x.to(device) - x.requires_grad = True - print(torch.__version__, device.type, torch.cuda.get_device_properties(0) if device.type == 'cuda' else '') - print(f"\n{'Params':>12s}{'GFLOPS':>12s}{'forward (ms)':>16s}{'backward (ms)':>16s}{'input':>24s}{'output':>24s}") - for m in ops if isinstance(ops, list) else [ops]: - m = m.to(device) if hasattr(m, 'to') else m # device - m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m # type - dtf, dtb, t = 0., 0., [0., 0., 0.] 
# dt forward, backward - try: - flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPS - except: - flops = 0 - - for _ in range(n): - t[0] = time_synchronized() - y = m(x) - t[1] = time_synchronized() - try: - _ = y.sum().backward() - t[2] = time_synchronized() - except: # no backward method - t[2] = float('nan') - dtf += (t[1] - t[0]) * 1000 / n # ms per op forward - dtb += (t[2] - t[1]) * 1000 / n # ms per op backward - - s_in = tuple(x.shape) if isinstance(x, torch.Tensor) else 'list' - s_out = tuple(y.shape) if isinstance(y, torch.Tensor) else 'list' - p = sum(list(x.numel() for x in m.parameters())) if isinstance(m, nn.Module) else 0 # parameters - print(f'{p:12}{flops:12.4g}{dtf:16.4g}{dtb:16.4g}{str(s_in):>24s}{str(s_out):>24s}') - - -def is_parallel(model): - return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel) - - -def intersect_dicts(da, db, exclude=()): - # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values - return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape} - - -def initialize_weights(model): - for m in model.modules(): - t = type(m) - if t is nn.Conv2d: - pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif t is nn.BatchNorm2d: - m.eps = 1e-3 - m.momentum = 0.03 - elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6]: - m.inplace = True - - -def find_modules(model, mclass=nn.Conv2d): - # Finds layer indices matching module class 'mclass' - return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)] - - -def sparsity(model): - # Return global model sparsity - a, b = 0., 0. - for p in model.parameters(): - a += p.numel() - b += (p == 0).sum() - return b / a - - -def prune(model, amount=0.3): - # Prune model to requested global sparsity - import torch.nn.utils.prune as prune - print('Pruning model... ', end='') - for name, m in model.named_modules(): - if isinstance(m, nn.Conv2d): - prune.l1_unstructured(m, name='weight', amount=amount) # prune - prune.remove(m, 'weight') # make permanent - print(' %.3g global sparsity' % sparsity(model)) - - -def fuse_conv_and_bn(conv, bn): - # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/ - fusedconv = nn.Conv2d(conv.in_channels, - conv.out_channels, - kernel_size=conv.kernel_size, - stride=conv.stride, - padding=conv.padding, - groups=conv.groups, - bias=True).requires_grad_(False).to(conv.weight.device) - - # prepare filters - w_conv = conv.weight.clone().view(conv.out_channels, -1) - w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var))) - fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape)) - - # prepare spatial bias - b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias - b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps)) - fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn) - - return fusedconv - - -def model_info(model, verbose=False, img_size=640): - # Model information. img_size may be int or list, i.e. 
img_size=640 or img_size=[640, 320] - n_p = sum(x.numel() for x in model.parameters()) # number parameters - n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients - if verbose: - print('%5s %40s %9s %12s %20s %10s %10s' % ('layer', 'name', 'gradient', 'parameters', 'shape', 'mu', 'sigma')) - for i, (name, p) in enumerate(model.named_parameters()): - name = name.replace('module_list.', '') - print('%5g %40s %9s %12g %20s %10.3g %10.3g' % - (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std())) - - try: # FLOPS - from thop import profile - stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32 - img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device) # input - flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2 # stride GFLOPS - img_size = img_size if isinstance(img_size, list) else [img_size, img_size] # expand if int/float - fs = ', %.1f GFLOPS' % (flops * img_size[0] / stride * img_size[1] / stride) # 640x640 GFLOPS - except (ImportError, Exception): - fs = '' - - logger.info(f"Model Summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}") - - -def load_classifier(name='resnet101', n=2): - # Loads a pretrained model reshaped to n-class output - model = torchvision.models.__dict__[name](pretrained=True) - - # ResNet model properties - # input_size = [3, 224, 224] - # input_space = 'RGB' - # input_range = [0, 1] - # mean = [0.485, 0.456, 0.406] - # std = [0.229, 0.224, 0.225] - - # Reshape output to n classes - filters = model.fc.weight.shape[1] - model.fc.bias = nn.Parameter(torch.zeros(n), requires_grad=True) - model.fc.weight = nn.Parameter(torch.zeros(n, filters), requires_grad=True) - model.fc.out_features = n - return model - - -def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416) - # scales img(bs,3,y,x) by ratio constrained to gs-multiple - if ratio == 1.0: - return img - else: - h, w = img.shape[2:] - s = (int(h * ratio), int(w * ratio)) # new size - img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize - if not same_shape: # pad/crop img - h, w = [math.ceil(x * ratio / gs) * gs for x in (h, w)] - return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean - - -def copy_attr(a, b, include=(), exclude=()): - # Copy attributes from b to a, options to only include [...] and to exclude [...] - for k, v in b.__dict__.items(): - if (len(include) and k not in include) or k.startswith('_') or k in exclude: - continue - else: - setattr(a, k, v) - - -class ModelEMA: - """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models - Keep a moving average of everything in the model state_dict (parameters and buffers). - This is intended to allow functionality like - https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage - A smoothed version of the weights is necessary for some training schemes to perform well. - This class is sensitive where it is initialized in the sequence of model init, - GPU assignment and distributed training wrappers. 
- """ - - def __init__(self, model, decay=0.9999, updates=0): - # Create EMA - self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA - # if next(model.parameters()).device.type != 'cpu': - # self.ema.half() # FP16 EMA - self.updates = updates # number of EMA updates - self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs) - for p in self.ema.parameters(): - p.requires_grad_(False) - - def update(self, model): - # Update EMA parameters - with torch.no_grad(): - self.updates += 1 - d = self.decay(self.updates) - - msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict - for k, v in self.ema.state_dict().items(): - if v.dtype.is_floating_point: - v *= d - v += (1. - d) * msd[k].detach() - - def update_attr(self, model, include=(), exclude=('process_group', 'reducer')): - # Update EMA attributes - copy_attr(self.ema, model, include, exclude) - - -class BatchNormXd(torch.nn.modules.batchnorm._BatchNorm): - def _check_input_dim(self, input): - # The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc - # is this method that is overwritten by the sub-class - # This original goal of this method was for tensor sanity checks - # If you're ok bypassing those sanity checks (eg. if you trust your inference - # to provide the right dimensional inputs), then you can just use this method - # for easy conversion from SyncBatchNorm - # (unfortunately, SyncBatchNorm does not store the original class - if it did - # we could return the one that was originally created) - return - -def revert_sync_batchnorm(module): - # this is very similar to the function that it is trying to revert: - # https://github.com/pytorch/pytorch/blob/c8b3686a3e4ba63dc59e5dcfe5db3430df256833/torch/nn/modules/batchnorm.py#L679 - module_output = module - if isinstance(module, torch.nn.modules.batchnorm.SyncBatchNorm): - new_cls = BatchNormXd - module_output = BatchNormXd(module.num_features, - module.eps, module.momentum, - module.affine, - module.track_running_stats) - if module.affine: - with torch.no_grad(): - module_output.weight = module.weight - module_output.bias = module.bias - module_output.running_mean = module.running_mean - module_output.running_var = module.running_var - module_output.num_batches_tracked = module.num_batches_tracked - if hasattr(module, "qconfig"): - module_output.qconfig = module.qconfig - for name, child in module.named_children(): - module_output.add_module(name, revert_sync_batchnorm(child)) - del module - return module_output - - -class TracedModel(nn.Module): - - def __init__(self, model=None, device=None, img_size=(640,640)): - super(TracedModel, self).__init__() - - print(" Convert model to Traced-model... ") - self.stride = model.stride - self.names = model.names - self.model = model - - self.model = revert_sync_batchnorm(self.model) - self.model.to('cpu') - self.model.eval() - - self.detect_layer = self.model.model[-1] - self.model.traced = True - - rand_example = torch.rand(1, 3, img_size, img_size) - - traced_script_module = torch.jit.trace(self.model, rand_example, strict=False) - #traced_script_module = torch.jit.script(self.model) - traced_script_module.save("traced_model.pt") - print(" traced_script_module saved! ") - self.model = traced_script_module - self.model.to(device) - self.detect_layer.to(device) - print(" model is traced! 
\n") - - def forward(self, x, augment=False, profile=False): - out = self.model(x) - out = self.detect_layer(out) - return out \ No newline at end of file diff --git a/spaces/UzNutq/README/README.md b/spaces/UzNutq/README/README.md deleted file mode 100644 index 67bde24474b2ddeb20470f34662efd907766da3e..0000000000000000000000000000000000000000 --- a/spaces/UzNutq/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: README -emoji: 💻 -colorFrom: yellow -colorTo: blue -sdk: static -pinned: true ---- - -Edit this `README.md` markdown file to author your organization card 🔥 diff --git a/spaces/Venafi/Vikram-Explorer/README.md b/spaces/Venafi/Vikram-Explorer/README.md deleted file mode 100644 index e0d49ac46a1c41b31bcb504432018f8144e4c561..0000000000000000000000000000000000000000 --- a/spaces/Venafi/Vikram-Explorer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Vikram Explorer Project -emoji: 🚀 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/VickyKira/NASAGPT/g4f/Provider/Provider.py b/spaces/VickyKira/NASAGPT/g4f/Provider/Provider.py deleted file mode 100644 index d24df76b6a6ccfc9b244f13a51bfc124b398a271..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/g4f/Provider/Provider.py +++ /dev/null @@ -1,16 +0,0 @@ -import os -from ..typing import sha256, Dict, get_type_hints - -url = None -model = None -supports_stream = False -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - return - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/VideoCrafter/VideoCrafter/lvdm/modules/attention.py b/spaces/VideoCrafter/VideoCrafter/lvdm/modules/attention.py deleted file mode 100644 index 3d3140c216b85a14a33d9b1fa09eb6d19465f741..0000000000000000000000000000000000000000 --- a/spaces/VideoCrafter/VideoCrafter/lvdm/modules/attention.py +++ /dev/null @@ -1,475 +0,0 @@ -from functools import partial -import torch -from torch import nn, einsum -import torch.nn.functional as F -from einops import rearrange, repeat -try: - import xformers - import xformers.ops - XFORMERS_IS_AVAILBLE = True -except: - XFORMERS_IS_AVAILBLE = False -from lvdm.common import ( - checkpoint, - exists, - default, -) -from lvdm.basics import ( - zero_module, -) - -class RelativePosition(nn.Module): - """ https://github.com/evelinehong/Transformer_Relative_Position_PyTorch/blob/master/relative_position.py """ - - def __init__(self, num_units, max_relative_position): - super().__init__() - self.num_units = num_units - self.max_relative_position = max_relative_position - self.embeddings_table = nn.Parameter(torch.Tensor(max_relative_position * 2 + 1, num_units)) - nn.init.xavier_uniform_(self.embeddings_table) - - def forward(self, length_q, length_k): - device = self.embeddings_table.device - range_vec_q = torch.arange(length_q, device=device) - range_vec_k = torch.arange(length_k, device=device) - distance_mat = range_vec_k[None, :] - range_vec_q[:, None] - distance_mat_clipped = torch.clamp(distance_mat, -self.max_relative_position, self.max_relative_position) - final_mat = distance_mat_clipped + 
self.max_relative_position
-        final_mat = final_mat.long()
-        embeddings = self.embeddings_table[final_mat]
-        return embeddings
-
-
-class CrossAttention(nn.Module):
-
-    def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.,
-                 relative_position=False, temporal_length=None, img_cross_attention=False):
-        super().__init__()
-        inner_dim = dim_head * heads
-        context_dim = default(context_dim, query_dim)
-
-        self.scale = dim_head**-0.5
-        self.heads = heads
-        self.dim_head = dim_head
-        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
-        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
-        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
-        self.to_out = nn.Sequential(nn.Linear(inner_dim, query_dim), nn.Dropout(dropout))
-
-        self.image_cross_attention_scale = 1.0
-        self.text_context_len = 77
-        self.img_cross_attention = img_cross_attention
-        if self.img_cross_attention:
-            self.to_k_ip = nn.Linear(context_dim, inner_dim, bias=False)
-            self.to_v_ip = nn.Linear(context_dim, inner_dim, bias=False)
-
-        self.relative_position = relative_position
-        if self.relative_position:
-            assert(temporal_length is not None)
-            self.relative_position_k = RelativePosition(num_units=dim_head, max_relative_position=temporal_length)
-            self.relative_position_v = RelativePosition(num_units=dim_head, max_relative_position=temporal_length)
-        else:
-            ## only used for spatial attention, while NOT for temporal attention
-            if XFORMERS_IS_AVAILBLE and temporal_length is None:
-                self.forward = self.efficient_forward
-
-    def forward(self, x, context=None, mask=None):
-        h = self.heads
-
-        q = self.to_q(x)
-        context = default(context, x)
-        ## considering image token additionally
-        if context is not None and self.img_cross_attention:
-            context, context_img = context[:,:self.text_context_len,:], context[:,self.text_context_len:,:]
-            k = self.to_k(context)
-            v = self.to_v(context)
-            k_ip = self.to_k_ip(context_img)
-            v_ip = self.to_v_ip(context_img)
-        else:
-            k = self.to_k(context)
-            v = self.to_v(context)
-
-        q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))
-        sim = torch.einsum('b i d, b j d -> b i j', q, k) * self.scale
-        if self.relative_position:
-            len_q, len_k, len_v = q.shape[1], k.shape[1], v.shape[1]
-            k2 = self.relative_position_k(len_q, len_k)
-            sim2 = einsum('b t d, t s d -> b t s', q, k2) * self.scale # TODO check
-            sim += sim2
-        del k
-
-        if exists(mask):
-            ## feasible for causal attention mask only
-            max_neg_value = -torch.finfo(sim.dtype).max
-            mask = repeat(mask, 'b i j -> (b h) i j', h=h)
-            sim.masked_fill_(~(mask>0.5), max_neg_value)
-
-        # attention, what we cannot get enough of
-        sim = sim.softmax(dim=-1)
-        out = torch.einsum('b i j, b j d -> b i d', sim, v)
-        if self.relative_position:
-            v2 = self.relative_position_v(len_q, len_v)
-            out2 = einsum('b t s, t s d -> b t d', sim, v2) # TODO check
-            out += out2
-        out = rearrange(out, '(b h) n d -> b n (h d)', h=h)
-
-        ## considering image token additionally
-        if context is not None and self.img_cross_attention:
-            k_ip, v_ip = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (k_ip, v_ip))
-            sim_ip = torch.einsum('b i d, b j d -> b i j', q, k_ip) * self.scale
-            del k_ip
-            sim_ip = sim_ip.softmax(dim=-1)
-            out_ip = torch.einsum('b i j, b j d -> b i d', sim_ip, v_ip)
-            out_ip = rearrange(out_ip, '(b h) n d -> b n (h d)', h=h)
-            out = out + self.image_cross_attention_scale * out_ip
-        del q
-
-        return self.to_out(out)
-
-    def efficient_forward(self, x, context=None, mask=None):
-        q = self.to_q(x)
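-        # same attention math as forward() minus relative-position and mask support:
-        # q/k/v are reshaped to (b*h, n, d) and handed to xformers'
-        # memory_efficient_attention, which evaluates softmax(QK^T / sqrt(d)) V
-        # without materialising the full (b*h, n, n) attention matrix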
- context = default(context, x) - - ## considering image token additionally - if context is not None and self.img_cross_attention: - context, context_img = context[:,:self.text_context_len,:], context[:,self.text_context_len:,:] - k = self.to_k(context) - v = self.to_v(context) - k_ip = self.to_k_ip(context_img) - v_ip = self.to_v_ip(context_img) - else: - k = self.to_k(context) - v = self.to_v(context) - - b, _, _ = q.shape - q, k, v = map( - lambda t: t.unsqueeze(3) - .reshape(b, t.shape[1], self.heads, self.dim_head) - .permute(0, 2, 1, 3) - .reshape(b * self.heads, t.shape[1], self.dim_head) - .contiguous(), - (q, k, v), - ) - # actually compute the attention, what we cannot get enough of - out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=None) - - ## considering image token additionally - if context is not None and self.img_cross_attention: - k_ip, v_ip = map( - lambda t: t.unsqueeze(3) - .reshape(b, t.shape[1], self.heads, self.dim_head) - .permute(0, 2, 1, 3) - .reshape(b * self.heads, t.shape[1], self.dim_head) - .contiguous(), - (k_ip, v_ip), - ) - out_ip = xformers.ops.memory_efficient_attention(q, k_ip, v_ip, attn_bias=None, op=None) - out_ip = ( - out_ip.unsqueeze(0) - .reshape(b, self.heads, out.shape[1], self.dim_head) - .permute(0, 2, 1, 3) - .reshape(b, out.shape[1], self.heads * self.dim_head) - ) - - if exists(mask): - raise NotImplementedError - out = ( - out.unsqueeze(0) - .reshape(b, self.heads, out.shape[1], self.dim_head) - .permute(0, 2, 1, 3) - .reshape(b, out.shape[1], self.heads * self.dim_head) - ) - if context is not None and self.img_cross_attention: - out = out + self.image_cross_attention_scale * out_ip - return self.to_out(out) - - -class BasicTransformerBlock(nn.Module): - - def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True, - disable_self_attn=False, attention_cls=None, img_cross_attention=False): - super().__init__() - attn_cls = CrossAttention if attention_cls is None else attention_cls - self.disable_self_attn = disable_self_attn - self.attn1 = attn_cls(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout, - context_dim=context_dim if self.disable_self_attn else None) - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = attn_cls(query_dim=dim, context_dim=context_dim, heads=n_heads, dim_head=d_head, dropout=dropout, - img_cross_attention=img_cross_attention) - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def forward(self, x, context=None, mask=None): - ## implementation tricks: because checkpointing doesn't support non-tensor (e.g. 
None or scalar) arguments - input_tuple = (x,) ## should not be (x), otherwise *input_tuple will decouple x into multiple arguments - if context is not None: - input_tuple = (x, context) - if mask is not None: - forward_mask = partial(self._forward, mask=mask) - return checkpoint(forward_mask, (x,), self.parameters(), self.checkpoint) - if context is not None and mask is not None: - input_tuple = (x, context, mask) - return checkpoint(self._forward, input_tuple, self.parameters(), self.checkpoint) - - def _forward(self, x, context=None, mask=None): - x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None, mask=mask) + x - x = self.attn2(self.norm2(x), context=context, mask=mask) + x - x = self.ff(self.norm3(x)) + x - return x - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data in spatial axis. - First, project the input (aka embedding) - and reshape to b, t, d. - Then apply standard transformer action. - Finally, reshape to image - NEW: use_linear for more efficiency instead of the 1x1 convs - """ - - def __init__(self, in_channels, n_heads, d_head, depth=1, dropout=0., context_dim=None, - use_checkpoint=True, disable_self_attn=False, use_linear=False, img_cross_attention=False): - super().__init__() - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - if not use_linear: - self.proj_in = nn.Conv2d(in_channels, inner_dim, kernel_size=1, stride=1, padding=0) - else: - self.proj_in = nn.Linear(in_channels, inner_dim) - - self.transformer_blocks = nn.ModuleList([ - BasicTransformerBlock( - inner_dim, - n_heads, - d_head, - dropout=dropout, - context_dim=context_dim, - img_cross_attention=img_cross_attention, - disable_self_attn=disable_self_attn, - checkpoint=use_checkpoint) for d in range(depth) - ]) - if not use_linear: - self.proj_out = zero_module(nn.Conv2d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0)) - else: - self.proj_out = zero_module(nn.Linear(inner_dim, in_channels)) - self.use_linear = use_linear - - - def forward(self, x, context=None): - b, c, h, w = x.shape - x_in = x - x = self.norm(x) - if not self.use_linear: - x = self.proj_in(x) - x = rearrange(x, 'b c h w -> b (h w) c').contiguous() - if self.use_linear: - x = self.proj_in(x) - for i, block in enumerate(self.transformer_blocks): - x = block(x, context=context) - if self.use_linear: - x = self.proj_out(x) - x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w).contiguous() - if not self.use_linear: - x = self.proj_out(x) - return x + x_in - - -class TemporalTransformer(nn.Module): - """ - Transformer block for image-like data in temporal axis. - First, reshape to b, t, d. - Then apply standard transformer action. 
- Finally, reshape to image - """ - def __init__(self, in_channels, n_heads, d_head, depth=1, dropout=0., context_dim=None, - use_checkpoint=True, use_linear=False, only_self_att=True, causal_attention=False, - relative_position=False, temporal_length=None): - super().__init__() - self.only_self_att = only_self_att - self.relative_position = relative_position - self.causal_attention = causal_attention - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - self.proj_in = nn.Conv1d(in_channels, inner_dim, kernel_size=1, stride=1, padding=0) - if not use_linear: - self.proj_in = nn.Conv1d(in_channels, inner_dim, kernel_size=1, stride=1, padding=0) - else: - self.proj_in = nn.Linear(in_channels, inner_dim) - - if relative_position: - assert(temporal_length is not None) - attention_cls = partial(CrossAttention, relative_position=True, temporal_length=temporal_length) - else: - attention_cls = None - if self.causal_attention: - assert(temporal_length is not None) - self.mask = torch.tril(torch.ones([1, temporal_length, temporal_length])) - - if self.only_self_att: - context_dim = None - self.transformer_blocks = nn.ModuleList([ - BasicTransformerBlock( - inner_dim, - n_heads, - d_head, - dropout=dropout, - context_dim=context_dim, - attention_cls=attention_cls, - checkpoint=use_checkpoint) for d in range(depth) - ]) - if not use_linear: - self.proj_out = zero_module(nn.Conv1d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0)) - else: - self.proj_out = zero_module(nn.Linear(inner_dim, in_channels)) - self.use_linear = use_linear - - def forward(self, x, context=None): - b, c, t, h, w = x.shape - x_in = x - x = self.norm(x) - x = rearrange(x, 'b c t h w -> (b h w) c t').contiguous() - if not self.use_linear: - x = self.proj_in(x) - x = rearrange(x, 'bhw c t -> bhw t c').contiguous() - if self.use_linear: - x = self.proj_in(x) - - if self.causal_attention: - mask = self.mask.to(x.device) - mask = repeat(mask, 'l i j -> (l bhw) i j', bhw=b*h*w) - else: - mask = None - - if self.only_self_att: - ## note: if no context is given, cross-attention defaults to self-attention - for i, block in enumerate(self.transformer_blocks): - x = block(x, mask=mask) - x = rearrange(x, '(b hw) t c -> b hw t c', b=b).contiguous() - else: - x = rearrange(x, '(b hw) t c -> b hw t c', b=b).contiguous() - context = rearrange(context, '(b t) l con -> b t l con', t=t).contiguous() - for i, block in enumerate(self.transformer_blocks): - # calculate each batch one by one (since number in shape could not greater then 65,535 for some package) - for j in range(b): - context_j = repeat( - context[j], - 't l con -> (t r) l con', r=(h * w) // t, t=t).contiguous() - ## note: causal mask will not applied in cross-attention case - x[j] = block(x[j], context=context_j) - - if self.use_linear: - x = self.proj_out(x) - x = rearrange(x, 'b (h w) t c -> b c t h w', h=h, w=w).contiguous() - if not self.use_linear: - x = rearrange(x, 'b hw t c -> (b hw) c t').contiguous() - x = self.proj_out(x) - x = rearrange(x, '(b h w) c t -> b c t h w', b=b, h=h, w=w).contiguous() - - return x + x_in - - -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.): - 
super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - project_in = nn.Sequential( - nn.Linear(dim, inner_dim), - nn.GELU() - ) if not glu else GEGLU(dim, inner_dim) - - self.net = nn.Sequential( - project_in, - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - def forward(self, x): - return self.net(x) - - -class LinearAttention(nn.Module): - def __init__(self, dim, heads=4, dim_head=32): - super().__init__() - self.heads = heads - hidden_dim = dim_head * heads - self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias = False) - self.to_out = nn.Conv2d(hidden_dim, dim, 1) - - def forward(self, x): - b, c, h, w = x.shape - qkv = self.to_qkv(x) - q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads = self.heads, qkv=3) - k = k.softmax(dim=-1) - context = torch.einsum('bhdn,bhen->bhde', k, v) - out = torch.einsum('bhde,bhdn->bhen', context, q) - out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w) - return self.to_out(out) - - -class SpatialSelfAttention(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = rearrange(q, 'b c h w -> b (h w) c') - k = rearrange(k, 'b c h w -> b c (h w)') - w_ = torch.einsum('bij,bjk->bik', q, k) - - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = rearrange(v, 'b c h w -> b c (h w)') - w_ = rearrange(w_, 'b i j -> b j i') - h_ = torch.einsum('bij,bjk->bik', v, w_) - h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h) - h_ = self.proj_out(h_) - - return x+h_ diff --git a/spaces/Willow123/InternLM-XComposer/demo_asset/conversation.py b/spaces/Willow123/InternLM-XComposer/demo_asset/conversation.py deleted file mode 100644 index ce285299a1281a93ff24a7226c101b5dd9ba75b9..0000000000000000000000000000000000000000 --- a/spaces/Willow123/InternLM-XComposer/demo_asset/conversation.py +++ /dev/null @@ -1,160 +0,0 @@ -from PIL import Image - -import torch -from transformers import StoppingCriteria, StoppingCriteriaList - -import dataclasses -from enum import auto, Enum -from typing import List, Any - - -class SeparatorStyle(Enum): - """Different separator style.""" - SINGLE = auto() - TWO = auto() - - -@dataclasses.dataclass -class Conversation: - """A class that keeps all conversation history.""" - system: str - roles: List[str] - messages: List[List[str]] - offset: int - # system_img: List[Image.Image] = [] - sep_style: SeparatorStyle = SeparatorStyle.SINGLE - sep: str = "###" - sep2: str = None - - skip_next: bool = False - conv_id: Any = None - - def get_prompt(self): - if self.sep_style == SeparatorStyle.SINGLE: - ret = self.system + self.sep - for role, message in self.messages: - if message: - #ret += role + ": " + message + self.sep - ret += role + ":" + message + self.sep - else: - ret += role + ":" - return ret - elif self.sep_style 
== SeparatorStyle.TWO:
-            seps = [self.sep, self.sep2]
-            ret = self.system + seps[0]
-            for i, (role, message) in enumerate(self.messages):
-                if message:
-                    ret += role + ": " + message[0] + seps[i % 2] if isinstance(message, list) else role + ": " + message + seps[i % 2]
-                else:
-                    ret += role + ":"
-            return ret
-        elif self.sep_style == "7132":
-            seps = [self.sep, self.sep2]
-            ret = self.system
-            for i, (role, message) in enumerate(self.messages):
-                if message:
-                    ret += role + ": " + message[0] + seps[i % 2] if isinstance(message, list) else role + ": " + message + seps[i % 2]
-                else:
-                    ret += role + ":"
-            return ret
-        elif self.sep_style == "raw":
-            seps = [self.sep, self.sep2]
-            ret = self.system
-            for i, (role, message) in enumerate(self.messages):
-                if message:
-                    ret += role + message + seps[i % 2]
-                else:
-                    ret += role
-            return ret
-
-        else:
-            raise ValueError(f"Invalid style: {self.sep_style}")
-
-    def append_message(self, role, message):
-        self.messages.append([role, message])
-
-    def to_gradio_chatbot(self):
-        ret = []
-        for i, (role, msg) in enumerate(self.messages[self.offset:]):
-            if i % 2 == 0:
-                if type(msg) is tuple or type(msg) is list:
-                    import base64
-                    from io import BytesIO
-                    msg, image = msg
-
-                    max_hw, min_hw = max(image.size), min(image.size)
-                    aspect_ratio = max_hw / min_hw
-                    max_len, min_len = 800, 400
-                    shortest_edge = int(min(max_len / aspect_ratio, min_len, min_hw))
-                    longest_edge = int(shortest_edge * aspect_ratio)
-                    W, H = image.size
-                    if H > W:
-                        H, W = longest_edge, shortest_edge
-                    else:
-                        H, W = shortest_edge, longest_edge
-                    image = image.resize((W, H))
-                    # image = image.resize((224, 224))
-                    buffered = BytesIO()
-                    image.save(buffered, format="JPEG")
-                    img_b64_str = base64.b64encode(buffered.getvalue()).decode()
-                    img_str = f'<img src="data:image/jpeg;base64,{img_b64_str}" alt="user upload image" />'
-                    msg = msg.replace('<image>', img_str)
-                ret.append([msg, None])
-            else:
-                ret[-1][-1] = msg
-        return ret
-
-    def copy(self):
-        return Conversation(
-            system=self.system,
-            # system_img=self.system_img,
-            roles=self.roles,
-            messages=[[x, y] for x, y in self.messages],
-            offset=self.offset,
-            sep_style=self.sep_style,
-            sep=self.sep,
-            sep2=self.sep2,
-            conv_id=self.conv_id)
-
-    def dict(self):
-        return {
-            "system": self.system,
-            # "system_img": self.system_img,
-            "roles": self.roles,
-            "messages": self.messages,
-            "offset": self.offset,
-            "sep": self.sep,
-            "sep2": self.sep2,
-            "conv_id": self.conv_id,
-        }
-
-
-class StoppingCriteriaSub(StoppingCriteria):
-
-    def __init__(self, stops=[], encounters=1):
-        super().__init__()
-        self.stops = stops
-
-    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
-        for stop in self.stops:
-            if torch.all((stop == input_ids[0][-len(stop):])).item():
-                return True
-
-        return False
-
-
-meta = """meta instruction
-You are an AI assistant whose name is 浦语.
-- 浦语 is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
-- 浦语 can understand and communicate fluently in the language chosen by the user such as English and 中文.
-
-conversation
-"""
-CONV_VISION_7132_v2 = Conversation(
-    system=meta,
-    roles=(" <|User|>", " <|Bot|>"),
-    messages=(),
-    offset=0,
-    sep_style="7132",
-    sep="",
-    sep2="",
-)
diff --git a/spaces/Yan233th/so-vits-svc-models/resample.py b/spaces/Yan233th/so-vits-svc-models/resample.py
deleted file mode 100644
index f84119cd239b49d260ed1d9e367206adcc3aa03d..0000000000000000000000000000000000000000
--- a/spaces/Yan233th/so-vits-svc-models/resample.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import os
-import argparse
-import librosa
-import numpy as np
-from multiprocessing import Pool, cpu_count
-from scipy.io import wavfile
-from tqdm import tqdm
-
-
-def process(item):
-    spkdir, wav_name, args = item
-    # speaker 's5', 'p280', 'p315' are excluded,
-    speaker = spkdir.replace("\\", "/").split("/")[-1]
-    wav_path = os.path.join(args.in_dir, speaker, wav_name)
-    if os.path.exists(wav_path) and '.wav' in wav_path:
-        os.makedirs(os.path.join(args.out_dir2, speaker), exist_ok=True)
-        wav, sr = librosa.load(wav_path, sr=None)
-        wav, _ = librosa.effects.trim(wav, top_db=20)
-        peak = np.abs(wav).max()
-        if peak > 1.0:
-            wav = 0.98 * wav / peak
-        wav2 = librosa.resample(wav, orig_sr=sr, target_sr=args.sr2)
-        wav2 /= max(wav2.max(), -wav2.min())
-        save_name = wav_name
-        save_path2 = os.path.join(args.out_dir2, speaker, save_name)
-        wavfile.write(
-            save_path2,
-            args.sr2,
-            (wav2 * np.iinfo(np.int16).max).astype(np.int16)
-        )
-
-
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser()
-    parser.add_argument("--sr2", type=int, default=44100, help="sampling rate")
-    parser.add_argument("--in_dir", type=str, default="./dataset_raw", help="path to source dir")
-    parser.add_argument("--out_dir2", type=str, default="./dataset/44k", help="path to target dir")
-    args = parser.parse_args()
-    num_processes = cpu_count() - 2 if cpu_count() > 4 else 1
-    pool = Pool(processes=num_processes)
-
-    for speaker in os.listdir(args.in_dir):
-        spk_dir = os.path.join(args.in_dir, speaker)
-        if os.path.isdir(spk_dir):
-            print(spk_dir)
-            for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])):
-                pass
diff --git a/spaces/Yiqin/ChatVID/model/fastchat/conversation.py b/spaces/Yiqin/ChatVID/model/fastchat/conversation.py
deleted file mode 100644
index 6d5555dfe30df5c0193ffd7edf0a0e03f51b78ed..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/fastchat/conversation.py
+++ /dev/null
@@ -1,289 +0,0 @@
-"""
-Conversation prompt template.
- -Now we support -- Vicuna -- Koala -- OpenAssistant/oasst-sft-1-pythia-12b -- StabilityAI/stablelm-tuned-alpha-7b -- databricks/dolly-v2-12b -- THUDM/chatglm-6b -- project-baize/baize-lora-7B -- Alpaca/LLaMa -""" - -import dataclasses -from enum import auto, Enum -from typing import List, Tuple, Any - - -class SeparatorStyle(Enum): - """Different separator style.""" - - SINGLE = auto() - TWO = auto() - DOLLY = auto() - OASST_PYTHIA = auto() - BAIZE = auto() - - -@dataclasses.dataclass -class Conversation: - """A class that keeps all conversation history.""" - - system: str - roles: List[str] - messages: List[List[str]] - offset: int - sep_style: SeparatorStyle = SeparatorStyle.SINGLE - sep: str = "###" - sep2: str = None - - # Used for gradio server - skip_next: bool = False - conv_id: Any = None - - def get_prompt(self): - if self.sep_style == SeparatorStyle.SINGLE: - ret = self.system - for role, message in self.messages: - if message: - ret += self.sep + " " + role + ": " + message - else: - ret += self.sep + " " + role + ":" - return ret - elif self.sep_style == SeparatorStyle.TWO: - seps = [self.sep, self.sep2] - ret = self.system + seps[0] - for i, (role, message) in enumerate(self.messages): - if message: - ret += role + ": " + message + seps[i % 2] - else: - ret += role + ":" - return ret - elif self.sep_style == SeparatorStyle.DOLLY: - seps = [self.sep, self.sep2] - ret = self.system - for i, (role, message) in enumerate(self.messages): - if message: - ret += role + ":\n" + message + seps[i % 2] - if i % 2 == 1: - ret += "\n\n" - else: - ret += role + ":\n" - return ret - elif self.sep_style == SeparatorStyle.OASST_PYTHIA: - ret = self.system - for role, message in self.messages: - if message: - ret += role + message + self.sep - else: - ret += role - return ret - elif self.sep_style == SeparatorStyle.BAIZE: - ret = self.system - for role, message in self.messages: - if message: - ret += "\n" + role + message - else: - ret += "\n" + role - return ret - else: - raise ValueError(f"Invalid style: {self.sep_style}") - - def append_message(self, role, message): - self.messages.append([role, message]) - - def to_gradio_chatbot(self): - ret = [] - for i, (role, msg) in enumerate(self.messages[self.offset :]): - if i % 2 == 0: - ret.append([msg, None]) - else: - ret[-1][-1] = msg - return ret - - def copy(self): - return Conversation( - system=self.system, - roles=self.roles, - messages=[[x, y] for x, y in self.messages], - offset=self.offset, - sep_style=self.sep_style, - sep=self.sep, - sep2=self.sep2, - conv_id=self.conv_id, - ) - - def dict(self): - return { - "system": self.system, - "roles": self.roles, - "messages": self.messages, - "offset": self.offset, - "sep": self.sep, - "sep2": self.sep2, - "conv_id": self.conv_id, - } - - -conv_one_shot = Conversation( - system="A chat between a curious human and an artificial intelligence assistant. " - "The assistant gives helpful, detailed, and polite answers to the human's questions.", - roles=("Human", "Assistant"), - messages=( - ( - "Human", - "What are the key differences between renewable and non-renewable energy sources?", - ), - ( - "Assistant", - "Renewable energy sources are those that can be replenished naturally in a relatively " - "short amount of time, such as solar, wind, hydro, geothermal, and biomass. " - "Non-renewable energy sources, on the other hand, are finite and will eventually be " - "depleted, such as coal, oil, and natural gas. 
Here are some key differences between "
-            "renewable and non-renewable energy sources:\n"
-            "1. Availability: Renewable energy sources are virtually inexhaustible, while non-renewable "
-            "energy sources are finite and will eventually run out.\n"
-            "2. Environmental impact: Renewable energy sources have a much lower environmental impact "
-            "than non-renewable sources, which can lead to air and water pollution, greenhouse gas emissions, "
-            "and other negative effects.\n"
-            "3. Cost: Renewable energy sources can be more expensive to initially set up, but they typically "
-            "have lower operational costs than non-renewable sources.\n"
-            "4. Reliability: Renewable energy sources are often more reliable and can be used in more remote "
-            "locations than non-renewable sources.\n"
-            "5. Flexibility: Renewable energy sources are often more flexible and can be adapted to different "
-            "situations and needs, while non-renewable sources are more rigid and inflexible.\n"
-            "6. Sustainability: Renewable energy sources are more sustainable over the long term, while "
-            "non-renewable sources are not, and their depletion can lead to economic and social instability.",
-        ),
-    ),
-    offset=2,
-    sep_style=SeparatorStyle.SINGLE,
-    sep="###",
-)
-
-
-conv_vicuna_v1_1 = Conversation(
-    system="A chat between a curious user and an artificial intelligence assistant. "
-    "The assistant gives helpful, detailed, and polite answers to the user's questions.",
-    roles=("USER", "ASSISTANT"),
-    messages=(),
-    offset=0,
-    sep_style=SeparatorStyle.TWO,
-    sep=" ",
-    sep2="</s>",
-)
-
-
-conv_koala_v1 = Conversation(
-    system="BEGINNING OF CONVERSATION:",
-    roles=("USER", "GPT"),
-    messages=(),
-    offset=0,
-    sep_style=SeparatorStyle.TWO,
-    sep=" ",
-    sep2="</s>",
-)
-
-conv_dolly = Conversation(
-    system="Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n",
-    roles=("### Instruction", "### Response"),
-    messages=(),
-    offset=0,
-    sep_style=SeparatorStyle.DOLLY,
-    sep="\n\n",
-    sep2="### End",
-)
-
-conv_oasst = Conversation(
-    system="",
-    roles=("<|prompter|>", "<|assistant|>"),
-    messages=(),
-    offset=0,
-    sep_style=SeparatorStyle.OASST_PYTHIA,
-    sep="<|endoftext|>",
-)
-
-conv_stablelm = Conversation(
-    system="""<|SYSTEM|># StableLM Tuned (Alpha version)
-- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
-- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
-- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
-- StableLM will refuse to participate in anything that could harm a human.
-""",
-    roles=("<|USER|>", "<|ASSISTANT|>"),
-    messages=(),
-    offset=0,
-    sep_style=SeparatorStyle.OASST_PYTHIA,
-    sep="",
-)
-
-conv_baize = Conversation(
-    system="The following is a conversation between a human and an AI assistant named Baize (named after a mythical creature in Chinese folklore). Baize is an open-source AI assistant developed by UCSD and Sun Yat-Sen University. The human and the AI assistant take turns chatting. Human statements start with [|Human|] and AI assistant statements start with [|AI|]. The AI assistant always provides responses in as much detail as possible, and in Markdown format. The AI assistant always declines to engage with topics, questions and instructions related to unethical, controversial, or sensitive issues.
Complete the transcript in exactly that format.",
-    roles=("[|Human|]", "[|AI|]"),
-    messages=(
-        ("[|Human|]", "Hello!"),
-        ("[|AI|]", "Hi!"),
-    ),
-    offset=2,
-    sep_style=SeparatorStyle.BAIZE,
-    sep="[|Human|]",
-)
-
-
-conv_templates = {
-    "conv_one_shot": conv_one_shot,
-    "vicuna_v1.1": conv_vicuna_v1_1,
-    "koala_v1": conv_koala_v1,
-    "dolly": conv_dolly,
-    "oasst": conv_oasst,
-    "baize": conv_baize,
-}
-
-
-def get_default_conv_template(model_name):
-    model_name = model_name.lower()
-    if "vicuna" in model_name or "output" in model_name:
-        return conv_vicuna_v1_1
-    elif "koala" in model_name:
-        return conv_koala_v1
-    elif "dolly-v2" in model_name:
-        return conv_dolly
-    elif "oasst" in model_name and "pythia" in model_name:
-        return conv_oasst
-    elif "baize" in model_name:
-        return conv_baize
-    elif "stablelm" in model_name:
-        return conv_stablelm
-    return conv_one_shot
-
-
-def compute_skip_echo_len(model_name, conv, prompt):
-    model_name = model_name.lower()
-    if "chatglm" in model_name:
-        skip_echo_len = len(conv.messages[-2][1]) + 1
-    elif "dolly-v2" in model_name:
-        special_toks = ["### Instruction:", "### Response:", "### End"]
-        skip_echo_len = len(prompt)
-        for tok in special_toks:
-            skip_echo_len -= prompt.count(tok) * len(tok)
-    elif "oasst" in model_name and "pythia" in model_name:
-        special_toks = ["<|prompter|>", "<|assistant|>", "<|endoftext|>"]
-        skip_echo_len = len(prompt)
-        for tok in special_toks:
-            skip_echo_len -= prompt.count(tok) * len(tok)
-    elif "stablelm" in model_name:
-        special_toks = ["<|SYSTEM|>", "<|USER|>", "<|ASSISTANT|>"]
-        skip_echo_len = len(prompt)
-        for tok in special_toks:
-            skip_echo_len -= prompt.count(tok) * len(tok)
-    elif "baize" in model_name:
-        skip_echo_len = len(prompt)
-    else:
-        skip_echo_len = len(prompt) + 1 - prompt.count("</s>") * 3
-    return skip_echo_len
-
-
-if __name__ == "__main__":
-    print(conv_one_shot.get_prompt())
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/data/transforms/custom_augmentation_impl.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/data/transforms/custom_augmentation_impl.py
deleted file mode 100644
index 6b9637f3ad41e3ba513636219e49371296d9ab9f..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/data/transforms/custom_augmentation_impl.py
+++ /dev/null
@@ -1,52 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# Part of the code is from https://github.com/rwightman/efficientdet-pytorch/blob/master/effdet/data/transforms.py
-# Modified by Xingyi Zhou
-# The original code is under Apache-2.0 License
-import numpy as np
-from PIL import Image
-
-from detectron2.data.transforms.augmentation import Augmentation
-from .custom_transform import EfficientDetResizeCropTransform
-
-__all__ = [
-    "EfficientDetResizeCrop",
-]
-
-
-class EfficientDetResizeCrop(Augmentation):
-    """
-    Scale the shorter edge to the given size, with a limit of `max_size` on the longer edge.
-    If `max_size` is reached, then downscale so that the longer edge does not exceed max_size.
-    """
-
-    def __init__(
-        self, size, scale, interp=Image.BILINEAR
-    ):
-        """
-        """
-        super().__init__()
-        self.target_size = (size, size)
-        self.scale = scale
-        self.interp = interp
-
-    def get_transform(self, img):
-        # Select a random scale factor.
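-        # The three steps below, in order: (1) jitter the square target size
-        # by a factor drawn uniformly from `self.scale`; (2) recompute the
-        # aspect-ratio-preserving resize factor from the rounded scaled size;
-        # (3) spend any overhang beyond `target_size` on a uniformly random
-        # crop offset, so larger-than-target scales become a random crop.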
-        scale_factor = np.random.uniform(*self.scale)
-        scaled_target_height = scale_factor * self.target_size[0]
-        scaled_target_width = scale_factor * self.target_size[1]
-        # Recompute the accurate scale_factor using rounded scaled image size.
-        width, height = img.shape[1], img.shape[0]
-        img_scale_y = scaled_target_height / height
-        img_scale_x = scaled_target_width / width
-        img_scale = min(img_scale_y, img_scale_x)
-
-        # Select non-zero random offset (x, y) if scaled image is larger than target size
-        scaled_h = int(height * img_scale)
-        scaled_w = int(width * img_scale)
-        offset_y = scaled_h - self.target_size[0]
-        offset_x = scaled_w - self.target_size[1]
-        offset_y = int(max(0.0, float(offset_y)) * np.random.uniform(0, 1))
-        offset_x = int(max(0.0, float(offset_x)) * np.random.uniform(0, 1))
-        return EfficientDetResizeCropTransform(
-            scaled_h, scaled_w, offset_y, offset_x, img_scale, self.target_size, self.interp)
diff --git a/spaces/abdvl/datahub_qa_bot/docs/platform-instances.md b/spaces/abdvl/datahub_qa_bot/docs/platform-instances.md
deleted file mode 100644
index b88b9501b4e0a29f012c5325e509e4935d920e04..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/platform-instances.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Working With Platform Instances
-
-DataHub's metadata model for Datasets supports a three-part key currently:
-- Data Platform (e.g. urn:li:dataPlatform:mysql)
-- Name (e.g. db.schema.name)
-- Env or Fabric (e.g. DEV, PROD, etc.)
-
-This naming scheme unfortunately does not allow for easy representation of the multiplicity of platforms (or technologies) that might be deployed at an organization within the same environment or fabric. For example, an organization might have multiple Redshift instances in Production and would want to see all the data assets located in those instances inside the DataHub metadata repository.
-
-As part of the `v0.8.24+` releases, we are unlocking the first phase of supporting Platform Instances in the metadata model. This is done via two main additions:
-- The `dataPlatformInstance` aspect that has been added to Datasets which allows datasets to be associated to an instance of a platform
-- Enhancements to all ingestion sources that allow them to attach a platform instance to the recipe that changes the generated urns to go from `urn:li:dataset:(urn:li:dataPlatform:<platform>,<table_name>,ENV)` format to `urn:li:dataset:(urn:li:dataPlatform:<platform>,<platform_instance>.<table_name>,ENV)` format. Sources that produce lineage to datasets in other platforms (e.g. Looker, Superset etc) also have specific configuration additions that allow the recipe author to specify the mapping between a platform and the instance name that it should be mapped to.
-
-![./imgs/platform-instances-for-ingestion.png](./imgs/platform-instances-for-ingestion.png)
-
-## Naming Platform Instances
-
-When configuring a platform instance, choose an instance name that is understandable and will be stable for the foreseeable future. e.g. `core_warehouse` or `finance_redshift` are allowed names, as are pure guids like `a37dc708-c512-4fe4-9829-401cd60ed789`. Remember that whatever instance name you choose, you will need to specify it in more than one recipe to ensure that the identifiers produced by different sources will line up.
-
-## Enabling Platform Instances
-
-Read the Ingestion source specific guides for how to enable platform instances in each of them.
-The general pattern is to add an additional optional configuration parameter called `platform_instance`.
-
-e.g.
here is how you would configure a recipe to ingest a mysql instance that you want to call `core_finance` -```yaml -source: - type: mysql - config: - # Coordinates - host_port: localhost:3306 - platform_instance: core_finance - database: dbname - - # Credentials - username: root - password: example - -sink: - # sink configs -``` - - -## diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/point_sample.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/point_sample.py deleted file mode 100644 index 267f4b3c56630acd85f9bdc630b7be09abab0aba..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/point_sample.py +++ /dev/null @@ -1,336 +0,0 @@ -# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend # noqa - -from os import path as osp - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.modules.utils import _pair -from torch.onnx.operators import shape_as_tensor - - -def bilinear_grid_sample(im, grid, align_corners=False): - """Given an input and a flow-field grid, computes the output using input - values and pixel locations from grid. Supported only bilinear interpolation - method to sample the input pixels. - - Args: - im (torch.Tensor): Input feature map, shape (N, C, H, W) - grid (torch.Tensor): Point coordinates, shape (N, Hg, Wg, 2) - align_corners {bool}: If set to True, the extrema (-1 and 1) are - considered as referring to the center points of the input’s - corner pixels. If set to False, they are instead considered as - referring to the corner points of the input’s corner pixels, - making the sampling more resolution agnostic. - Returns: - torch.Tensor: A tensor with sampled points, shape (N, C, Hg, Wg) - """ - n, c, h, w = im.shape - gn, gh, gw, _ = grid.shape - assert n == gn - - x = grid[:, :, :, 0] - y = grid[:, :, :, 1] - - if align_corners: - x = ((x + 1) / 2) * (w - 1) - y = ((y + 1) / 2) * (h - 1) - else: - x = ((x + 1) * w - 1) / 2 - y = ((y + 1) * h - 1) / 2 - - x = x.view(n, -1) - y = y.view(n, -1) - - x0 = torch.floor(x).long() - y0 = torch.floor(y).long() - x1 = x0 + 1 - y1 = y0 + 1 - - wa = ((x1 - x) * (y1 - y)).unsqueeze(1) - wb = ((x1 - x) * (y - y0)).unsqueeze(1) - wc = ((x - x0) * (y1 - y)).unsqueeze(1) - wd = ((x - x0) * (y - y0)).unsqueeze(1) - - # Apply default for grid_sample function zero padding - im_padded = F.pad(im, pad=[1, 1, 1, 1], mode='constant', value=0) - padded_h = h + 2 - padded_w = w + 2 - # save points positions after padding - x0, x1, y0, y1 = x0 + 1, x1 + 1, y0 + 1, y1 + 1 - - # Clip coordinates to padded image size - x0 = torch.where(x0 < 0, torch.tensor(0), x0) - x0 = torch.where(x0 > padded_w - 1, torch.tensor(padded_w - 1), x0) - x1 = torch.where(x1 < 0, torch.tensor(0), x1) - x1 = torch.where(x1 > padded_w - 1, torch.tensor(padded_w - 1), x1) - y0 = torch.where(y0 < 0, torch.tensor(0), y0) - y0 = torch.where(y0 > padded_h - 1, torch.tensor(padded_h - 1), y0) - y1 = torch.where(y1 < 0, torch.tensor(0), y1) - y1 = torch.where(y1 > padded_h - 1, torch.tensor(padded_h - 1), y1) - - im_padded = im_padded.view(n, c, -1) - - x0_y0 = (x0 + y0 * padded_w).unsqueeze(1).expand(-1, c, -1) - x0_y1 = (x0 + y1 * padded_w).unsqueeze(1).expand(-1, c, -1) - x1_y0 = (x1 + y0 * padded_w).unsqueeze(1).expand(-1, c, -1) - x1_y1 = (x1 + y1 * padded_w).unsqueeze(1).expand(-1, c, -1) - - Ia = torch.gather(im_padded, 2, x0_y0) - Ib = torch.gather(im_padded, 2, x0_y1) - Ic = 
torch.gather(im_padded, 2, x1_y0) - Id = torch.gather(im_padded, 2, x1_y1) - - return (Ia * wa + Ib * wb + Ic * wc + Id * wd).reshape(n, c, gh, gw) - - -def is_in_onnx_export_without_custom_ops(): - from annotator.uniformer.mmcv.ops import get_onnxruntime_op_path - ort_custom_op_path = get_onnxruntime_op_path() - return torch.onnx.is_in_onnx_export( - ) and not osp.exists(ort_custom_op_path) - - -def normalize(grid): - """Normalize input grid from [-1, 1] to [0, 1] - Args: - grid (Tensor): The grid to be normalize, range [-1, 1]. - Returns: - Tensor: Normalized grid, range [0, 1]. - """ - - return (grid + 1.0) / 2.0 - - -def denormalize(grid): - """Denormalize input grid from range [0, 1] to [-1, 1] - Args: - grid (Tensor): The grid to be denormalize, range [0, 1]. - Returns: - Tensor: Denormalized grid, range [-1, 1]. - """ - - return grid * 2.0 - 1.0 - - -def generate_grid(num_grid, size, device): - """Generate regular square grid of points in [0, 1] x [0, 1] coordinate - space. - - Args: - num_grid (int): The number of grids to sample, one for each region. - size (tuple(int, int)): The side size of the regular grid. - device (torch.device): Desired device of returned tensor. - - Returns: - (torch.Tensor): A tensor of shape (num_grid, size[0]*size[1], 2) that - contains coordinates for the regular grids. - """ - - affine_trans = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]], device=device) - grid = F.affine_grid( - affine_trans, torch.Size((1, 1, *size)), align_corners=False) - grid = normalize(grid) - return grid.view(1, -1, 2).expand(num_grid, -1, -1) - - -def rel_roi_point_to_abs_img_point(rois, rel_roi_points): - """Convert roi based relative point coordinates to image based absolute - point coordinates. - - Args: - rois (Tensor): RoIs or BBoxes, shape (N, 4) or (N, 5) - rel_roi_points (Tensor): Point coordinates inside RoI, relative to - RoI, location, range (0, 1), shape (N, P, 2) - Returns: - Tensor: Image based absolute point coordinates, shape (N, P, 2) - """ - - with torch.no_grad(): - assert rel_roi_points.size(0) == rois.size(0) - assert rois.dim() == 2 - assert rel_roi_points.dim() == 3 - assert rel_roi_points.size(2) == 2 - # remove batch idx - if rois.size(1) == 5: - rois = rois[:, 1:] - abs_img_points = rel_roi_points.clone() - # To avoid an error during exporting to onnx use independent - # variables instead inplace computation - xs = abs_img_points[:, :, 0] * (rois[:, None, 2] - rois[:, None, 0]) - ys = abs_img_points[:, :, 1] * (rois[:, None, 3] - rois[:, None, 1]) - xs += rois[:, None, 0] - ys += rois[:, None, 1] - abs_img_points = torch.stack([xs, ys], dim=2) - return abs_img_points - - -def get_shape_from_feature_map(x): - """Get spatial resolution of input feature map considering exporting to - onnx mode. - - Args: - x (torch.Tensor): Input tensor, shape (N, C, H, W) - Returns: - torch.Tensor: Spatial resolution (width, height), shape (1, 1, 2) - """ - if torch.onnx.is_in_onnx_export(): - img_shape = shape_as_tensor(x)[2:].flip(0).view(1, 1, 2).to( - x.device).float() - else: - img_shape = torch.tensor(x.shape[2:]).flip(0).view(1, 1, 2).to( - x.device).float() - return img_shape - - -def abs_img_point_to_rel_img_point(abs_img_points, img, spatial_scale=1.): - """Convert image based absolute point coordinates to image based relative - coordinates for sampling. - - Args: - abs_img_points (Tensor): Image based absolute point coordinates, - shape (N, P, 2) - img (tuple/Tensor): (height, width) of image or feature map. - spatial_scale (float): Scale points by this factor. 
Default: 1.
-
-    Returns:
-        Tensor: Image based relative point coordinates for sampling,
-            shape (N, P, 2)
-    """
-
-    assert (isinstance(img, tuple) and len(img) == 2) or \
-           (isinstance(img, torch.Tensor) and len(img.shape) == 4)
-
-    if isinstance(img, tuple):
-        h, w = img
-        scale = torch.tensor([w, h],
-                             dtype=torch.float,
-                             device=abs_img_points.device)
-        scale = scale.view(1, 1, 2)
-    else:
-        scale = get_shape_from_feature_map(img)
-
-    return abs_img_points / scale * spatial_scale
-
-
-def rel_roi_point_to_rel_img_point(rois,
-                                   rel_roi_points,
-                                   img,
-                                   spatial_scale=1.):
-    """Convert roi based relative point coordinates to image based relative
-    point coordinates.
-
-    Args:
-        rois (Tensor): RoIs or BBoxes, shape (N, 4) or (N, 5)
-        rel_roi_points (Tensor): Point coordinates inside RoI, relative to
-            RoI, location, range (0, 1), shape (N, P, 2)
-        img (tuple/Tensor): (height, width) of image or feature map.
-        spatial_scale (float): Scale points by this factor. Default: 1.
-
-    Returns:
-        Tensor: Image based relative point coordinates for sampling,
-            shape (N, P, 2)
-    """
-
-    abs_img_point = rel_roi_point_to_abs_img_point(rois, rel_roi_points)
-    rel_img_point = abs_img_point_to_rel_img_point(abs_img_point, img,
-                                                   spatial_scale)
-
-    return rel_img_point
-
-
-def point_sample(input, points, align_corners=False, **kwargs):
-    """A wrapper around :func:`grid_sample` to support 3D point_coords tensors
-    Unlike :func:`torch.nn.functional.grid_sample` it assumes point_coords to
-    lie inside ``[0, 1] x [0, 1]`` square.
-
-    Args:
-        input (Tensor): Feature map, shape (N, C, H, W).
-        points (Tensor): Image based absolute point coordinates (normalized),
-            range [0, 1] x [0, 1], shape (N, P, 2) or (N, Hgrid, Wgrid, 2).
-        align_corners (bool): Whether align_corners. Default: False
-
-    Returns:
-        Tensor: Features of `point` on `input`, shape (N, C, P) or
-            (N, C, Hgrid, Wgrid).
-    """
-
-    add_dim = False
-    if points.dim() == 3:
-        add_dim = True
-        points = points.unsqueeze(2)
-    if is_in_onnx_export_without_custom_ops():
-        # If custom ops for onnx runtime not compiled use python
-        # implementation of grid_sample function to make onnx graph
-        # with supported nodes
-        output = bilinear_grid_sample(
-            input, denormalize(points), align_corners=align_corners)
-    else:
-        output = F.grid_sample(
-            input, denormalize(points), align_corners=align_corners, **kwargs)
-    if add_dim:
-        output = output.squeeze(3)
-    return output
-
-
-class SimpleRoIAlign(nn.Module):
-
-    def __init__(self, output_size, spatial_scale, aligned=True):
-        """Simple RoI align in PointRend, faster than standard RoIAlign.
-
-        Args:
-            output_size (tuple[int]): h, w
-            spatial_scale (float): scale the input boxes by this number
-            aligned (bool): if False, use the legacy implementation in
-                MMDetection, align_corners=True will be used in F.grid_sample.
-                If True, align the results more perfectly.
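-
-        A minimal usage sketch (shapes here are illustrative assumptions,
-        not part of the original code)::
-
-            >>> roi_align = SimpleRoIAlign(output_size=7, spatial_scale=1 / 16.)
-            >>> feats = torch.rand(2, 256, 50, 50)  # (N, C, H, W) feature map
-            >>> rois = torch.tensor([[0., 4., 4., 36., 36.]])  # (batch_idx, x1, y1, x2, y2)
-            >>> roi_feats = roi_align(feats, rois)  # -> (num_rois, C, 7, 7)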
- """ - - super(SimpleRoIAlign, self).__init__() - self.output_size = _pair(output_size) - self.spatial_scale = float(spatial_scale) - # to be consistent with other RoI ops - self.use_torchvision = False - self.aligned = aligned - - def forward(self, features, rois): - num_imgs = features.size(0) - num_rois = rois.size(0) - rel_roi_points = generate_grid( - num_rois, self.output_size, device=rois.device) - - if torch.onnx.is_in_onnx_export(): - rel_img_points = rel_roi_point_to_rel_img_point( - rois, rel_roi_points, features, self.spatial_scale) - rel_img_points = rel_img_points.reshape(num_imgs, -1, - *rel_img_points.shape[1:]) - point_feats = point_sample( - features, rel_img_points, align_corners=not self.aligned) - point_feats = point_feats.transpose(1, 2) - else: - point_feats = [] - for batch_ind in range(num_imgs): - # unravel batch dim - feat = features[batch_ind].unsqueeze(0) - inds = (rois[:, 0].long() == batch_ind) - if inds.any(): - rel_img_points = rel_roi_point_to_rel_img_point( - rois[inds], rel_roi_points[inds], feat, - self.spatial_scale).unsqueeze(0) - point_feat = point_sample( - feat, rel_img_points, align_corners=not self.aligned) - point_feat = point_feat.squeeze(0).transpose(0, 1) - point_feats.append(point_feat) - - point_feats = torch.cat(point_feats, dim=0) - - channels = features.size(1) - roi_feats = point_feats.reshape(num_rois, channels, *self.output_size) - - return roi_feats - - def __repr__(self): - format_str = self.__class__.__name__ - format_str += '(output_size={}, spatial_scale={}'.format( - self.output_size, self.spatial_scale) - return format_str diff --git a/spaces/akhaliq/Music_Source_Separation/scripts/2_create_indexes/vctk-musdb18/create_indexes.sh b/spaces/akhaliq/Music_Source_Separation/scripts/2_create_indexes/vctk-musdb18/create_indexes.sh deleted file mode 100644 index e2a85230b2745cedb2c98a34ed303082bb1ec48a..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Music_Source_Separation/scripts/2_create_indexes/vctk-musdb18/create_indexes.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/bash -WORKSPACE=${1:-"./workspaces/bytesep"} # Default workspace directory - -echo "WORKSPACE=${WORKSPACE}" - -# Users can modify the following config file. -INDEXES_CONFIG_YAML="scripts/2_create_indexes/vctk-musdb18/configs/speech-accompaniment,sr=44100,chn=2.yaml" - -# Create indexes for training. -python3 bytesep/dataset_creation/create_indexes/create_indexes.py \ - --workspace=$WORKSPACE \ - --config_yaml=$INDEXES_CONFIG_YAML diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/parallel_wavegan/distributed/launch.py b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/parallel_wavegan/distributed/launch.py deleted file mode 100644 index 292f2a92287bfd201815748465727b76d9a5008e..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/parallel_wavegan/distributed/launch.py +++ /dev/null @@ -1,163 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -"""Distributed process launcher. - -This code is modified from https://github.com/pytorch/pytorch/blob/v1.3.0/torch/distributed/launch.py. 
-
-"""
-import os
-import subprocess
-import sys
-
-from argparse import ArgumentParser
-from argparse import REMAINDER
-
-
-def parse_args():
-    """Parse arguments."""
-    parser = ArgumentParser(
-        description="PyTorch distributed training launch "
-        "helper utility that will spawn up "
-        "multiple distributed processes"
-    )
-
-    # Optional arguments for the launch helper
-    parser.add_argument(
-        "--nnodes",
-        type=int,
-        default=1,
-        help="The number of nodes to use for distributed " "training",
-    )
-    parser.add_argument(
-        "--node_rank",
-        type=int,
-        default=0,
-        help="The rank of the node for multi-node distributed " "training",
-    )
-    parser.add_argument(
-        "--nproc_per_node",
-        type=int,
-        default=1,
-        help="The number of processes to launch on each node, "
-        "for GPU training, this is recommended to be set "
-        "to the number of GPUs in your system so that "
-        "each process can be bound to a single GPU.",
-    )
-    parser.add_argument(
-        "--master_addr",
-        default="127.0.0.1",
-        type=str,
-        help="Master node (rank 0)'s address, should be either "
-        "the IP address or the hostname of node 0, for "
-        "single node multi-proc training, the "
-        "--master_addr can simply be 127.0.0.1",
-    )
-    parser.add_argument(
-        "--master_port",
-        default=29500,
-        type=int,
-        help="Master node (rank 0)'s free port that needs to "
-        "be used for communication during distributed "
-        "training",
-    )
-    parser.add_argument(
-        "--use_env",
-        default=False,
-        action="store_true",
-        help="Use environment variable to pass "
-        "'local rank'. For legacy reasons, the default value is False. "
-        "If set to True, the script will not pass "
-        "--local_rank as argument, and will instead set LOCAL_RANK.",
-    )
-    parser.add_argument(
-        "-m",
-        "--module",
-        default=False,
-        action="store_true",
-        help="Changes each process to interpret the launch script "
-        "as a python module, executing with the same behavior as"
-        "'python -m'.",
-    )
-    parser.add_argument(
-        "-c",
-        "--command",
-        default=False,
-        action="store_true",
-        help="Changes each process to interpret the launch script " "as a command.",
-    )
-
-    # positional
-    parser.add_argument(
-        "training_script",
-        type=str,
-        help="The full path to the single GPU training "
-        "program/script/command to be launched in parallel, "
-        "followed by all the arguments for the "
-        "training script",
-    )
-
-    # rest from the training program
-    parser.add_argument("training_script_args", nargs=REMAINDER)
-    return parser.parse_args()
-
-
-def main():
-    """Launch distributed processes."""
-    args = parse_args()
-
-    # world size in terms of number of processes
-    dist_world_size = args.nproc_per_node * args.nnodes
-
-    # set PyTorch distributed related environmental variables
-    current_env = os.environ.copy()
-    current_env["MASTER_ADDR"] = args.master_addr
-    current_env["MASTER_PORT"] = str(args.master_port)
-    current_env["WORLD_SIZE"] = str(dist_world_size)
-
-    processes = []
-
-    if "OMP_NUM_THREADS" not in os.environ and args.nproc_per_node > 1:
-        current_env["OMP_NUM_THREADS"] = str(1)
-        print(
-            "*****************************************\n"
-            "Setting OMP_NUM_THREADS environment variable for each process "
-            "to be {} by default, to avoid your system being overloaded, "
-            "please further tune the variable for optimal performance in "
-            "your application as needed.
\n" - "*****************************************".format( - current_env["OMP_NUM_THREADS"] - ) - ) - - for local_rank in range(0, args.nproc_per_node): - # each process's rank - dist_rank = args.nproc_per_node * args.node_rank + local_rank - current_env["RANK"] = str(dist_rank) - current_env["LOCAL_RANK"] = str(local_rank) - - # spawn the processes - if args.command: - cmd = [args.training_script] - else: - cmd = [sys.executable, "-u"] - if args.module: - cmd.append("-m") - cmd.append(args.training_script) - - if not args.use_env: - cmd.append("--local_rank={}".format(local_rank)) - - cmd.extend(args.training_script_args) - - process = subprocess.Popen(cmd, env=current_env) - processes.append(process) - - for process in processes: - process.wait() - if process.returncode != 0: - raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd) - - -if __name__ == "__main__": - main() diff --git a/spaces/akhaliq/lama/bin/make_checkpoint.py b/spaces/akhaliq/lama/bin/make_checkpoint.py deleted file mode 100644 index 322147483915bef758770ae931e705e56083fa8d..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/bin/make_checkpoint.py +++ /dev/null @@ -1,79 +0,0 @@ -#!/usr/bin/env python3 - -import os -import shutil - -import torch - - -def get_checkpoint_files(s): - s = s.strip() - if ',' in s: - return [get_checkpoint_files(chunk) for chunk in s.split(',')] - return 'last.ckpt' if s == 'last' else f'{s}.ckpt' - - -def main(args): - checkpoint_fnames = get_checkpoint_files(args.epochs) - if isinstance(checkpoint_fnames, str): - checkpoint_fnames = [checkpoint_fnames] - assert len(checkpoint_fnames) >= 1 - - checkpoint_path = os.path.join(args.indir, 'models', checkpoint_fnames[0]) - checkpoint = torch.load(checkpoint_path, map_location='cpu') - del checkpoint['optimizer_states'] - - if len(checkpoint_fnames) > 1: - for fname in checkpoint_fnames[1:]: - print('sum', fname) - sum_tensors_cnt = 0 - other_cp = torch.load(os.path.join(args.indir, 'models', fname), map_location='cpu') - for k in checkpoint['state_dict'].keys(): - if checkpoint['state_dict'][k].dtype is torch.float: - checkpoint['state_dict'][k].data.add_(other_cp['state_dict'][k].data) - sum_tensors_cnt += 1 - print('summed', sum_tensors_cnt, 'tensors') - - for k in checkpoint['state_dict'].keys(): - if checkpoint['state_dict'][k].dtype is torch.float: - checkpoint['state_dict'][k].data.mul_(1 / float(len(checkpoint_fnames))) - - state_dict = checkpoint['state_dict'] - - if not args.leave_discriminators: - for k in list(state_dict.keys()): - if k.startswith('discriminator.'): - del state_dict[k] - - if not args.leave_losses: - for k in list(state_dict.keys()): - if k.startswith('loss_'): - del state_dict[k] - - out_checkpoint_path = os.path.join(args.outdir, 'models', 'best.ckpt') - os.makedirs(os.path.dirname(out_checkpoint_path), exist_ok=True) - - torch.save(checkpoint, out_checkpoint_path) - - shutil.copy2(os.path.join(args.indir, 'config.yaml'), - os.path.join(args.outdir, 'config.yaml')) - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('indir', - help='Path to directory with output of training ' - '(i.e. directory, which has samples, modules, config.yaml and train.log') - aparser.add_argument('outdir', - help='Where to put minimal checkpoint, which can be consumed by "bin/predict.py"') - aparser.add_argument('--epochs', type=str, default='last', - help='Which checkpoint to take. 
' - 'Can be "last" or integer - number of epoch') - aparser.add_argument('--leave-discriminators', action='store_true', - help='If enabled, the state of discriminators will not be removed from the checkpoint') - aparser.add_argument('--leave-losses', action='store_true', - help='If enabled, weights of nn-based losses (e.g. perceptual) will not be removed') - - main(aparser.parse_args()) diff --git a/spaces/akhaliq/neural-waveshaping-synthesis/neural_waveshaping_synthesis/__init__.py b/spaces/akhaliq/neural-waveshaping-synthesis/neural_waveshaping_synthesis/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/akhaliq/supermarionation/README.md b/spaces/akhaliq/supermarionation/README.md deleted file mode 100644 index d29e178c9e1d52ff9f6314426a783e72253c1668..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/supermarionation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Supermarionation -emoji: 🐨 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/charsetgroupprober.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/charsetgroupprober.py deleted file mode 100644 index 5812cef0b5924db9af2da77f0abe4e63decee4cf..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/charsetgroupprober.py +++ /dev/null @@ -1,107 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Communicator client code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. 
-
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from .enums import ProbingState
-from .charsetprober import CharSetProber
-
-
-class CharSetGroupProber(CharSetProber):
-    def __init__(self, lang_filter=None):
-        super(CharSetGroupProber, self).__init__(lang_filter=lang_filter)
-        self._active_num = 0
-        self.probers = []
-        self._best_guess_prober = None
-
-    def reset(self):
-        super(CharSetGroupProber, self).reset()
-        self._active_num = 0
-        for prober in self.probers:
-            if prober:
-                prober.reset()
-                prober.active = True
-                self._active_num += 1
-        self._best_guess_prober = None
-
-    @property
-    def charset_name(self):
-        if not self._best_guess_prober:
-            self.get_confidence()
-            if not self._best_guess_prober:
-                return None
-        return self._best_guess_prober.charset_name
-
-    @property
-    def language(self):
-        if not self._best_guess_prober:
-            self.get_confidence()
-            if not self._best_guess_prober:
-                return None
-        return self._best_guess_prober.language
-
-    def feed(self, byte_str):
-        for prober in self.probers:
-            if not prober:
-                continue
-            if not prober.active:
-                continue
-            state = prober.feed(byte_str)
-            if not state:
-                continue
-            if state == ProbingState.FOUND_IT:
-                self._best_guess_prober = prober
-                self._state = ProbingState.FOUND_IT
-                return self.state
-            elif state == ProbingState.NOT_ME:
-                prober.active = False
-                self._active_num -= 1
-                if self._active_num <= 0:
-                    self._state = ProbingState.NOT_ME
-                    return self.state
-        return self.state
-
-    def get_confidence(self):
-        state = self.state
-        if state == ProbingState.FOUND_IT:
-            return 0.99
-        elif state == ProbingState.NOT_ME:
-            return 0.01
-        best_conf = 0.0
-        self._best_guess_prober = None
-        for prober in self.probers:
-            if not prober:
-                continue
-            if not prober.active:
-                self.logger.debug('%s not active', prober.charset_name)
-                continue
-            conf = prober.get_confidence()
-            self.logger.debug('%s %s confidence = %s', prober.charset_name, prober.language, conf)
-            if best_conf < conf:
-                best_conf = conf
-                self._best_guess_prober = prober
-        if not self._best_guess_prober:
-            return 0.0
-        return best_conf
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/metadata/languages.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/metadata/languages.py
deleted file mode 100644
index 3237d5abf60122e0cea5463ff34f2256b11b5a81..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/metadata/languages.py
+++ /dev/null
@@ -1,310 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-Metadata about languages used by our model training code for our
-SingleByteCharSetProbers. Could be used for other things in the future.
-
-This code is based on the language metadata from the uchardet project.
-"""
-from __future__ import absolute_import, print_function
-
-from string import ascii_letters
-
-
-# TODO: Add Ukrainian (KOI8-U)
-
-class Language(object):
-    """Metadata about a language useful for training models
-
-    :ivar name: The human name for the language, in English.
-    :type name: str
-    :ivar iso_code: 2-letter ISO 639-1 if possible, 3-letter ISO code otherwise,
-                    or use another catalog as a last resort.
- :type iso_code: str - :ivar use_ascii: Whether or not ASCII letters should be included in trained - models. - :type use_ascii: bool - :ivar charsets: The charsets we want to support and create data for. - :type charsets: list of str - :ivar alphabet: The characters in the language's alphabet. If `use_ascii` is - `True`, you only need to add those not in the ASCII set. - :type alphabet: str - :ivar wiki_start_pages: The Wikipedia pages to start from if we're crawling - Wikipedia for training data. - :type wiki_start_pages: list of str - """ - def __init__(self, name=None, iso_code=None, use_ascii=True, charsets=None, - alphabet=None, wiki_start_pages=None): - super(Language, self).__init__() - self.name = name - self.iso_code = iso_code - self.use_ascii = use_ascii - self.charsets = charsets - if self.use_ascii: - if alphabet: - alphabet += ascii_letters - else: - alphabet = ascii_letters - elif not alphabet: - raise ValueError('Must supply alphabet if use_ascii is False') - self.alphabet = ''.join(sorted(set(alphabet))) if alphabet else None - self.wiki_start_pages = wiki_start_pages - - def __repr__(self): - return '{}({})'.format(self.__class__.__name__, - ', '.join('{}={!r}'.format(k, v) - for k, v in self.__dict__.items() - if not k.startswith('_'))) - - -LANGUAGES = {'Arabic': Language(name='Arabic', - iso_code='ar', - use_ascii=False, - # We only support encodings that use isolated - # forms, because the current recommendation is - # that the rendering system handles presentation - # forms. This means we purposefully skip IBM864. - charsets=['ISO-8859-6', 'WINDOWS-1256', - 'CP720', 'CP864'], - alphabet=u'ءآأؤإئابةتثجحخدذرزسشصضطظعغػؼؽؾؿـفقكلمنهوىيًٌٍَُِّ', - wiki_start_pages=[u'الصفحة_الرئيسية']), - 'Belarusian': Language(name='Belarusian', - iso_code='be', - use_ascii=False, - charsets=['ISO-8859-5', 'WINDOWS-1251', - 'IBM866', 'MacCyrillic'], - alphabet=(u'АБВГДЕЁЖЗІЙКЛМНОПРСТУЎФХЦЧШЫЬЭЮЯ' - u'абвгдеёжзійклмнопрстуўфхцчшыьэюяʼ'), - wiki_start_pages=[u'Галоўная_старонка']), - 'Bulgarian': Language(name='Bulgarian', - iso_code='bg', - use_ascii=False, - charsets=['ISO-8859-5', 'WINDOWS-1251', - 'IBM855'], - alphabet=(u'АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯ' - u'абвгдежзийклмнопрстуфхцчшщъьюя'), - wiki_start_pages=[u'Начална_страница']), - 'Czech': Language(name='Czech', - iso_code='cz', - use_ascii=True, - charsets=['ISO-8859-2', 'WINDOWS-1250'], - alphabet=u'áčďéěíňóřšťúůýžÁČĎÉĚÍŇÓŘŠŤÚŮÝŽ', - wiki_start_pages=[u'Hlavní_strana']), - 'Danish': Language(name='Danish', - iso_code='da', - use_ascii=True, - charsets=['ISO-8859-1', 'ISO-8859-15', - 'WINDOWS-1252'], - alphabet=u'æøåÆØÅ', - wiki_start_pages=[u'Forside']), - 'German': Language(name='German', - iso_code='de', - use_ascii=True, - charsets=['ISO-8859-1', 'WINDOWS-1252'], - alphabet=u'äöüßÄÖÜ', - wiki_start_pages=[u'Wikipedia:Hauptseite']), - 'Greek': Language(name='Greek', - iso_code='el', - use_ascii=False, - charsets=['ISO-8859-7', 'WINDOWS-1253'], - alphabet=(u'αβγδεζηθικλμνξοπρσςτυφχψωάέήίόύώ' - u'ΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΣΤΥΦΧΨΩΆΈΉΊΌΎΏ'), - wiki_start_pages=[u'Πύλη:Κύρια']), - 'English': Language(name='English', - iso_code='en', - use_ascii=True, - charsets=['ISO-8859-1', 'WINDOWS-1252'], - wiki_start_pages=[u'Main_Page']), - 'Esperanto': Language(name='Esperanto', - iso_code='eo', - # Q, W, X, and Y not used at all - use_ascii=False, - charsets=['ISO-8859-3'], - alphabet=(u'abcĉdefgĝhĥijĵklmnoprsŝtuŭvz' - u'ABCĈDEFGĜHĤIJĴKLMNOPRSŜTUŬVZ'), - wiki_start_pages=[u'Vikipedio:Ĉefpaĝo']), - 'Spanish': Language(name='Spanish', - iso_code='es', 
- use_ascii=True, - charsets=['ISO-8859-1', 'ISO-8859-15', - 'WINDOWS-1252'], - alphabet=u'ñáéíóúüÑÁÉÍÓÚÜ', - wiki_start_pages=[u'Wikipedia:Portada']), - 'Estonian': Language(name='Estonian', - iso_code='et', - use_ascii=False, - charsets=['ISO-8859-4', 'ISO-8859-13', - 'WINDOWS-1257'], - # C, F, Š, Q, W, X, Y, Z, Ž are only for - # loanwords - alphabet=(u'ABDEGHIJKLMNOPRSTUVÕÄÖÜ' - u'abdeghijklmnoprstuvõäöü'), - wiki_start_pages=[u'Esileht']), - 'Finnish': Language(name='Finnish', - iso_code='fi', - use_ascii=True, - charsets=['ISO-8859-1', 'ISO-8859-15', - 'WINDOWS-1252'], - alphabet=u'ÅÄÖŠŽåäöšž', - wiki_start_pages=[u'Wikipedia:Etusivu']), - 'French': Language(name='French', - iso_code='fr', - use_ascii=True, - charsets=['ISO-8859-1', 'ISO-8859-15', - 'WINDOWS-1252'], - alphabet=u'œàâçèéîïùûêŒÀÂÇÈÉÎÏÙÛÊ', - wiki_start_pages=[u'Wikipédia:Accueil_principal', - u'Bœuf (animal)']), - 'Hebrew': Language(name='Hebrew', - iso_code='he', - use_ascii=False, - charsets=['ISO-8859-8', 'WINDOWS-1255'], - alphabet=u'אבגדהוזחטיךכלםמןנסעףפץצקרשתװױײ', - wiki_start_pages=[u'עמוד_ראשי']), - 'Croatian': Language(name='Croatian', - iso_code='hr', - # Q, W, X, Y are only used for foreign words. - use_ascii=False, - charsets=['ISO-8859-2', 'WINDOWS-1250'], - alphabet=(u'abcčćdđefghijklmnoprsštuvzž' - u'ABCČĆDĐEFGHIJKLMNOPRSŠTUVZŽ'), - wiki_start_pages=[u'Glavna_stranica']), - 'Hungarian': Language(name='Hungarian', - iso_code='hu', - # Q, W, X, Y are only used for foreign words. - use_ascii=False, - charsets=['ISO-8859-2', 'WINDOWS-1250'], - alphabet=(u'abcdefghijklmnoprstuvzáéíóöőúüű' - u'ABCDEFGHIJKLMNOPRSTUVZÁÉÍÓÖŐÚÜŰ'), - wiki_start_pages=[u'Kezdőlap']), - 'Italian': Language(name='Italian', - iso_code='it', - use_ascii=True, - charsets=['ISO-8859-1', 'ISO-8859-15', - 'WINDOWS-1252'], - alphabet=u'ÀÈÉÌÒÓÙàèéìòóù', - wiki_start_pages=[u'Pagina_principale']), - 'Lithuanian': Language(name='Lithuanian', - iso_code='lt', - use_ascii=False, - charsets=['ISO-8859-13', 'WINDOWS-1257', - 'ISO-8859-4'], - # Q, W, and X not used at all - alphabet=(u'AĄBCČDEĘĖFGHIĮYJKLMNOPRSŠTUŲŪVZŽ' - u'aąbcčdeęėfghiįyjklmnoprsštuųūvzž'), - wiki_start_pages=[u'Pagrindinis_puslapis']), - 'Latvian': Language(name='Latvian', - iso_code='lv', - use_ascii=False, - charsets=['ISO-8859-13', 'WINDOWS-1257', - 'ISO-8859-4'], - # Q, W, X, Y are only for loanwords - alphabet=(u'AĀBCČDEĒFGĢHIĪJKĶLĻMNŅOPRSŠTUŪVZŽ' - u'aābcčdeēfgģhiījkķlļmnņoprsštuūvzž'), - wiki_start_pages=[u'Sākumlapa']), - 'Macedonian': Language(name='Macedonian', - iso_code='mk', - use_ascii=False, - charsets=['ISO-8859-5', 'WINDOWS-1251', - 'MacCyrillic', 'IBM855'], - alphabet=(u'АБВГДЃЕЖЗЅИЈКЛЉМНЊОПРСТЌУФХЦЧЏШ' - u'абвгдѓежзѕијклљмнњопрстќуфхцчџш'), - wiki_start_pages=[u'Главна_страница']), - 'Dutch': Language(name='Dutch', - iso_code='nl', - use_ascii=True, - charsets=['ISO-8859-1', 'WINDOWS-1252'], - wiki_start_pages=[u'Hoofdpagina']), - 'Polish': Language(name='Polish', - iso_code='pl', - # Q and X are only used for foreign words. 
- use_ascii=False, - charsets=['ISO-8859-2', 'WINDOWS-1250'], - alphabet=(u'AĄBCĆDEĘFGHIJKLŁMNŃOÓPRSŚTUWYZŹŻ' - u'aąbcćdeęfghijklłmnńoóprsśtuwyzźż'), - wiki_start_pages=[u'Wikipedia:Strona_główna']), - 'Portuguese': Language(name='Portuguese', - iso_code='pt', - use_ascii=True, - charsets=['ISO-8859-1', 'ISO-8859-15', - 'WINDOWS-1252'], - alphabet=u'ÁÂÃÀÇÉÊÍÓÔÕÚáâãàçéêíóôõú', - wiki_start_pages=[u'Wikipédia:Página_principal']), - 'Romanian': Language(name='Romanian', - iso_code='ro', - use_ascii=True, - charsets=['ISO-8859-2', 'WINDOWS-1250'], - alphabet=u'ăâîșțĂÂÎȘȚ', - wiki_start_pages=[u'Pagina_principală']), - 'Russian': Language(name='Russian', - iso_code='ru', - use_ascii=False, - charsets=['ISO-8859-5', 'WINDOWS-1251', - 'KOI8-R', 'MacCyrillic', 'IBM866', - 'IBM855'], - alphabet=(u'абвгдеёжзийклмнопрстуфхцчшщъыьэюя' - u'АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯ'), - wiki_start_pages=[u'Заглавная_страница']), - 'Slovak': Language(name='Slovak', - iso_code='sk', - use_ascii=True, - charsets=['ISO-8859-2', 'WINDOWS-1250'], - alphabet=u'áäčďéíĺľňóôŕšťúýžÁÄČĎÉÍĹĽŇÓÔŔŠŤÚÝŽ', - wiki_start_pages=[u'Hlavná_stránka']), - 'Slovene': Language(name='Slovene', - iso_code='sl', - # Q, W, X, Y are only used for foreign words. - use_ascii=False, - charsets=['ISO-8859-2', 'WINDOWS-1250'], - alphabet=(u'abcčdefghijklmnoprsštuvzž' - u'ABCČDEFGHIJKLMNOPRSŠTUVZŽ'), - wiki_start_pages=[u'Glavna_stran']), - # Serbian can be written in both Latin and Cyrillic, but there's no - # simple way to get the Latin alphabet pages from Wikipedia through - # the API, so for now we just support Cyrillic. - 'Serbian': Language(name='Serbian', - iso_code='sr', - alphabet=(u'АБВГДЂЕЖЗИЈКЛЉМНЊОПРСТЋУФХЦЧЏШ' - u'абвгдђежзијклљмнњопрстћуфхцчџш'), - charsets=['ISO-8859-5', 'WINDOWS-1251', - 'MacCyrillic', 'IBM855'], - wiki_start_pages=[u'Главна_страна']), - 'Thai': Language(name='Thai', - iso_code='th', - use_ascii=False, - charsets=['ISO-8859-11', 'TIS-620', 'CP874'], - alphabet=u'กขฃคฅฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลฦวศษสหฬอฮฯะัาำิีึืฺุู฿เแโใไๅๆ็่้๊๋์ํ๎๏๐๑๒๓๔๕๖๗๘๙๚๛', - wiki_start_pages=[u'หน้าหลัก']), - 'Turkish': Language(name='Turkish', - iso_code='tr', - # Q, W, and X are not used by Turkish - use_ascii=False, - charsets=['ISO-8859-3', 'ISO-8859-9', - 'WINDOWS-1254'], - alphabet=(u'abcçdefgğhıijklmnoöprsştuüvyzâîû' - u'ABCÇDEFGĞHIİJKLMNOÖPRSŞTUÜVYZÂÎÛ'), - wiki_start_pages=[u'Ana_Sayfa']), - 'Vietnamese': Language(name='Vietnamese', - iso_code='vi', - use_ascii=False, - # Windows-1258 is the only common 8-bit - # Vietnamese encoding supported by Python. - # From Wikipedia: - # For systems that lack support for Unicode, - # dozens of 8-bit Vietnamese code pages are - # available.[1] The most common are VISCII - # (TCVN 5712:1993), VPS, and Windows-1258.[3] - # Where ASCII is required, such as when - # ensuring readability in plain text e-mail, - # Vietnamese letters are often encoded - # according to Vietnamese Quoted-Readable - # (VIQR) or VSCII Mnemonic (VSCII-MNEM),[4] - # though usage of either variable-width - # scheme has declined dramatically following - # the adoption of Unicode on the World Wide - # Web. 
- charsets=['WINDOWS-1258'], - alphabet=(u'aăâbcdđeêghiklmnoôơpqrstuưvxy' - u'AĂÂBCDĐEÊGHIKLMNOÔƠPQRSTUƯVXY'), - wiki_start_pages=[u'Chữ_Quốc_ngữ']), - } diff --git a/spaces/allandclive/Uganda_MMS/vits/transforms.py b/spaces/allandclive/Uganda_MMS/vits/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/allandclive/Uganda_MMS/vits/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise 
ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/allknowingroger/Image-Models-Test96/app.py 
b/spaces/allknowingroger/Image-Models-Test96/app.py deleted file mode 100644 index 2d1754152087dab970148115f78f9ef9256bb20e..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test96/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "Jbddai/lora-trained-xl-colab_potatohead", - "GodSpeed15/my-pet-dog", - "MakAttack/653bbca65b1b03cb7810faff", - "LinoyTsaban/lora-trained-xl-colab-cam-0.0001-1000-4-text-encoder", - "Jbddai/lora-trained-xl-colab_gieskanne", - "craigdsouza/my-uig-racecar", - "MakAttack/653cc69ec6b4bef9fcd3f9c9", - "kycocotree/lora-trained-xl", - "ThanhMai/lora-trained-xl-colab", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: what you want to draw (English words, e.g. a cat; separating terms with English commas works better; click the Improve button to refine the prompt)\n- Real prompt: the refined prompt; once it appears, click the Run button on the right to start""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # 
get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/andresgtn/sidewalk-semantic-segmentation/README.md b/spaces/andresgtn/sidewalk-semantic-segmentation/README.md deleted file mode 100644 index 2700d6e4f163cab1543e6ffb799bca9f99e3d046..0000000000000000000000000000000000000000 --- a/spaces/andresgtn/sidewalk-semantic-segmentation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sidewalk Semantic Segmentation -emoji: 🌍 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/aodianyun/panoptic-segment-anything/segment_anything/setup.py b/spaces/aodianyun/panoptic-segment-anything/segment_anything/setup.py deleted file mode 100644 index 2c0986317eb576a14ec774205c88fdee3cc6c0b3..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/panoptic-segment-anything/segment_anything/setup.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from setuptools import find_packages, setup - -setup( - name="segment_anything", - version="1.0", - install_requires=[], - packages=find_packages(exclude="notebooks"), - extras_require={ - "all": ["matplotlib", "pycocotools", "opencv-python", "onnx", "onnxruntime"], - "dev": ["flake8", "isort", "black", "mypy"], - }, -) diff --git a/spaces/arsalagrey/audio-classification-vue/index.js b/spaces/arsalagrey/audio-classification-vue/index.js deleted file mode 100644 index 6f8537e805a0397fd139cd3926fd42d19898eb77..0000000000000000000000000000000000000000 --- a/spaces/arsalagrey/audio-classification-vue/index.js +++ /dev/null @@ -1,81 +0,0 @@ -const { createApp, ref, onMounted, computed, watch } = Vue; -import { HfInference } from "https://cdn.skypack.dev/@huggingface/inference@latest"; - -const app = createApp({ - setup() { - const token = ref(localStorage.getItem("token") || ""); - const models = ref(["MIT/ast-finetuned-audioset-10-10-0.4593"]); - const selectedAudio = ref("airplane-landing.mp3"); - const selectedModel = ref(""); - const loading = ref(false); - const didErrorOccur = ref(false) - const audioFiles = ref(['airplane-landing.mp3', - 'alien-spaceship.mp3', - 'hard_shoes.mp3', - 'labrador-barking.mp3', - 'old-car-engine.mp3', - 'tolling-bell.mp3']); - const classificationLabels = ref([]) - - - const statusMessage = computed(() => { - if (loading.value) return "Loading..." 
- return "Ready" - }) - - const run = async () => { - reset() - loading.value = true; - try { - const hf = new HfInference(token.value); - const audioData = await (await fetch(`sounds/${selectedAudio.value}`)).arrayBuffer() - const result = await hf.audioClassification({ - data: audioData, - model: selectedModel.value - }); - console.log(result) - classificationLabels.value = result - loading.value = false; - } catch (e) { - console.error(e); - loading.value = false; - didErrorOccur.value = true - } - }; - const reset = () => { - didErrorOccur.value = false - loading.value = false - classificationLabels.value = [] - } - - watch(selectedAudio, () => { - reset() - }) - - watch(selectedModel, () => { - reset() - }) - - onMounted(async () => { - const localStorageToken = localStorage.getItem("token") - if (localStorageToken) { - token.value = localStorageToken; - } - selectedModel.value = models.value[0] - }); - - return { - token, - run, - audioFiles, - selectedAudio, - models, - selectedModel, - loading, - statusMessage, - classificationLabels - }; - }, -}); - -app.mount("#app"); diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/configs/bark_config.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/configs/bark_config.py deleted file mode 100644 index 4d1cd1374afe8d5f0b9e87ed81db25d7e4032af9..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/configs/bark_config.py +++ /dev/null @@ -1,105 +0,0 @@ -import os -from dataclasses import dataclass, field -from typing import Dict - -from TTS.tts.configs.shared_configs import BaseTTSConfig -from TTS.tts.layers.bark.model import GPTConfig -from TTS.tts.layers.bark.model_fine import FineGPTConfig -from TTS.tts.models.bark import BarkAudioConfig -from TTS.utils.generic_utils import get_user_data_dir - - -@dataclass -class BarkConfig(BaseTTSConfig): - """Bark TTS configuration - - Args: - model (str): model name that registers the model. - audio (BarkAudioConfig): audio configuration. Defaults to BarkAudioConfig(). - num_chars (int): number of characters in the alphabet. Defaults to 0. - semantic_config (GPTConfig): semantic configuration. Defaults to GPTConfig(). - fine_config (FineGPTConfig): fine configuration. Defaults to FineGPTConfig(). - coarse_config (GPTConfig): coarse configuration. Defaults to GPTConfig(). - CONTEXT_WINDOW_SIZE (int): GPT context window size. Defaults to 1024. - SEMANTIC_RATE_HZ (float): semantic tokens rate in Hz. Defaults to 49.9. - SEMANTIC_VOCAB_SIZE (int): semantic vocabulary size. Defaults to 10_000. - CODEBOOK_SIZE (int): encodec codebook size. Defaults to 1024. - N_COARSE_CODEBOOKS (int): number of coarse codebooks. Defaults to 2. - N_FINE_CODEBOOKS (int): number of fine codebooks. Defaults to 8. - COARSE_RATE_HZ (int): coarse tokens rate in Hz. Defaults to 75. - SAMPLE_RATE (int): sample rate. Defaults to 24_000. - USE_SMALLER_MODELS (bool): use smaller models. Defaults to False. - TEXT_ENCODING_OFFSET (int): text encoding offset. Defaults to 10_048. - SEMANTIC_PAD_TOKEN (int): semantic pad token. Defaults to 10_000. - TEXT_PAD_TOKEN (int): text pad token. Defaults to 129_595. - SEMANTIC_INFER_TOKEN (int): semantic infer token. Defaults to 129_599. - COARSE_SEMANTIC_PAD_TOKEN (int): coarse semantic pad token. Defaults to 12_048. - COARSE_INFER_TOKEN (int): coarse infer token. Defaults to 12_050. 
- REMOTE_BASE_URL (str): remote base url. Defaults to "https://huggingface.co/erogol/bark/tree/main/". - REMOTE_MODEL_PATHS (Dict): remote model paths. Defaults to None. - LOCAL_MODEL_PATHS (Dict): local model paths. Defaults to None. - SMALL_REMOTE_MODEL_PATHS (Dict): small remote model paths. Defaults to None. - CACHE_DIR (str): local cache directory. Defaults to get_user_data_dir(). - DEF_SPEAKER_DIR (str): default speaker directory to store speaker values for voice cloning. Defaults to get_user_data_dir(). - """ - - model: str = "bark" - audio: BarkAudioConfig = field(default_factory=BarkAudioConfig) - num_chars: int = 0 - semantic_config: GPTConfig = field(default_factory=GPTConfig) - fine_config: FineGPTConfig = field(default_factory=FineGPTConfig) - coarse_config: GPTConfig = field(default_factory=GPTConfig) - CONTEXT_WINDOW_SIZE: int = 1024 - SEMANTIC_RATE_HZ: float = 49.9 - SEMANTIC_VOCAB_SIZE: int = 10_000 - CODEBOOK_SIZE: int = 1024 - N_COARSE_CODEBOOKS: int = 2 - N_FINE_CODEBOOKS: int = 8 - COARSE_RATE_HZ: int = 75 - SAMPLE_RATE: int = 24_000 - USE_SMALLER_MODELS: bool = False - - TEXT_ENCODING_OFFSET: int = 10_048 - SEMANTIC_PAD_TOKEN: int = 10_000 - TEXT_PAD_TOKEN: int = 129_595 - SEMANTIC_INFER_TOKEN: int = 129_599 - COARSE_SEMANTIC_PAD_TOKEN: int = 12_048 - COARSE_INFER_TOKEN: int = 12_050 - - REMOTE_BASE_URL = "https://huggingface.co/erogol/bark/tree/main/" - REMOTE_MODEL_PATHS: Dict = None - LOCAL_MODEL_PATHS: Dict = None - SMALL_REMOTE_MODEL_PATHS: Dict = None - CACHE_DIR: str = str(get_user_data_dir("tts/suno/bark_v0")) - DEF_SPEAKER_DIR: str = str(get_user_data_dir("tts/bark_v0/speakers")) - - def __post_init__(self): - self.REMOTE_MODEL_PATHS = { - "text": { - "path": os.path.join(self.REMOTE_BASE_URL, "text_2.pt"), - "checksum": "54afa89d65e318d4f5f80e8e8799026a", - }, - "coarse": { - "path": os.path.join(self.REMOTE_BASE_URL, "coarse_2.pt"), - "checksum": "8a98094e5e3a255a5c9c0ab7efe8fd28", - }, - "fine": { - "path": os.path.join(self.REMOTE_BASE_URL, "fine_2.pt"), - "checksum": "59d184ed44e3650774a2f0503a48a97b", - }, - } - self.LOCAL_MODEL_PATHS = { - "text": os.path.join(self.CACHE_DIR, "text_2.pt"), - "coarse": os.path.join(self.CACHE_DIR, "coarse_2.pt"), - "fine": os.path.join(self.CACHE_DIR, "fine_2.pt"), - "hubert_tokenizer": os.path.join(self.CACHE_DIR, "tokenizer.pth"), - "hubert": os.path.join(self.CACHE_DIR, "hubert.pt"), - } - self.SMALL_REMOTE_MODEL_PATHS = { - "text": {"path": os.path.join(self.REMOTE_BASE_URL, "text.pt")}, - "coarse": {"path": os.path.join(self.REMOTE_BASE_URL, "coarse.pt")}, - "fine": {"path": os.path.join(self.REMOTE_BASE_URL, "fine.pt")}, - } - self.sample_rate = self.SAMPLE_RATE # pylint: disable=attribute-defined-outside-init diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/generic/res_conv_bn.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/generic/res_conv_bn.py deleted file mode 100644 index 4beda291aa15398024b5b16cd6bf12b88898a0a9..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/generic/res_conv_bn.py +++ /dev/null @@ -1,127 +0,0 @@ -from torch import nn - - -class ZeroTemporalPad(nn.Module): - """Pad sequences to equal length in the temporal dimension""" - - def __init__(self, kernel_size, dilation): - super().__init__() - total_pad = dilation * (kernel_size - 1) - begin = total_pad // 2 - end = total_pad - begin - self.pad_layer = nn.ZeroPad2d((0, 0, begin, end)) - - def forward(self, x): - return self.pad_layer(x) - - -class 
Conv1dBN(nn.Module): - """1d convolutional with batch norm. - conv1d -> relu -> BN blocks. - - Note: - Batch normalization is applied after ReLU following the original implementation. - - Args: - in_channels (int): number of input channels. - out_channels (int): number of output channels. - kernel_size (int): kernel size for convolutional filters. - dilation (int): dilation for convolution layers. - """ - - def __init__(self, in_channels, out_channels, kernel_size, dilation): - super().__init__() - padding = dilation * (kernel_size - 1) - pad_s = padding // 2 - pad_e = padding - pad_s - self.conv1d = nn.Conv1d(in_channels, out_channels, kernel_size, dilation=dilation) - self.pad = nn.ZeroPad2d((pad_s, pad_e, 0, 0)) # uneven left and right padding - self.norm = nn.BatchNorm1d(out_channels) - - def forward(self, x): - o = self.conv1d(x) - o = self.pad(o) - o = nn.functional.relu(o) - o = self.norm(o) - return o - - -class Conv1dBNBlock(nn.Module): - """1d convolutional block with batch norm. It is a set of conv1d -> relu -> BN blocks. - - Args: - in_channels (int): number of input channels. - out_channels (int): number of output channels. - hidden_channels (int): number of inner convolution channels. - kernel_size (int): kernel size for convolutional filters. - dilation (int): dilation for convolution layers. - num_conv_blocks (int, optional): number of convolutional blocks. Defaults to 2. - """ - - def __init__(self, in_channels, out_channels, hidden_channels, kernel_size, dilation, num_conv_blocks=2): - super().__init__() - self.conv_bn_blocks = [] - for idx in range(num_conv_blocks): - layer = Conv1dBN( - in_channels if idx == 0 else hidden_channels, - out_channels if idx == (num_conv_blocks - 1) else hidden_channels, - kernel_size, - dilation, - ) - self.conv_bn_blocks.append(layer) - self.conv_bn_blocks = nn.Sequential(*self.conv_bn_blocks) - - def forward(self, x): - """ - Shapes: - x: (B, D, T) - """ - return self.conv_bn_blocks(x) - - -class ResidualConv1dBNBlock(nn.Module): - """Residual Convolutional Blocks with BN - Each block has 'num_conv_blocks' conv layers and 'num_res_blocks' such blocks are connected - with residual connections. - - conv_block = (conv1d -> relu -> bn) x 'num_conv_blocks' - residual_conv_block = (x -> conv_block -> + ->) x 'num_res_blocks' - ' - - - - - - - - - ^ - Args: - in_channels (int): number of input channels. - out_channels (int): number of output channels. - hidden_channels (int): number of inner convolution channels. - kernel_size (int): kernel size for convolutional filters. - dilations (list): dilations for each convolution layer. - num_res_blocks (int, optional): number of residual blocks. Defaults to 13. - num_conv_blocks (int, optional): number of convolutional blocks in each residual block. Defaults to 2. 
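- - Example (an illustrative sketch added for this doc, not part of the original file; channel sizes are chosen so in_channels == hidden_channels and the residual additions line up): - - >>> import torch - >>> block = ResidualConv1dBNBlock(128, 128, 128, kernel_size=3, dilations=[1] * 13) - >>> x = torch.randn(8, 128, 200) # (batch, channels, time); padding keeps T fixed - >>> block(x).shape - torch.Size([8, 128, 200])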
- """ - - def __init__( - self, in_channels, out_channels, hidden_channels, kernel_size, dilations, num_res_blocks=13, num_conv_blocks=2 - ): - super().__init__() - assert len(dilations) == num_res_blocks - self.res_blocks = nn.ModuleList() - for idx, dilation in enumerate(dilations): - block = Conv1dBNBlock( - in_channels if idx == 0 else hidden_channels, - out_channels if (idx + 1) == len(dilations) else hidden_channels, - hidden_channels, - kernel_size, - dilation, - num_conv_blocks, - ) - self.res_blocks.append(block) - - def forward(self, x, x_mask=None): - if x_mask is None: - x_mask = 1.0 - o = x * x_mask - for block in self.res_blocks: - res = o - o = block(o) - o = o + res - if x_mask is not None: - o = o * x_mask - return o diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/_mode_cbc.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/_mode_cbc.py deleted file mode 100644 index 79c871ac79f7d6f096fcd77269781e3a6a2a9fb5..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/_mode_cbc.py +++ /dev/null @@ -1,293 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -""" -Ciphertext Block Chaining (CBC) mode. -""" - -__all__ = ['CbcMode'] - -from Crypto.Util.py3compat import _copy_bytes -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer, - create_string_buffer, get_raw_buffer, - SmartPointer, c_size_t, c_uint8_ptr, - is_writeable_buffer) - -from Crypto.Random import get_random_bytes - -raw_cbc_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_cbc", """ - int CBC_start_operation(void *cipher, - const uint8_t iv[], - size_t iv_len, - void **pResult); - int CBC_encrypt(void *cbcState, - const uint8_t *in, - uint8_t *out, - size_t data_len); - int CBC_decrypt(void *cbcState, - const uint8_t *in, - uint8_t *out, - size_t data_len); - int CBC_stop_operation(void *state); - """ - ) - - -class CbcMode(object): - """*Cipher-Block Chaining (CBC)*. 
- - Each of the ciphertext blocks depends on the current - and all previous plaintext blocks. - - An Initialization Vector (*IV*) is required. - - See `NIST SP800-38A`_ , Section 6.2 . - - .. _`NIST SP800-38A` : http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf - - :undocumented: __init__ - """ - - def __init__(self, block_cipher, iv): - """Create a new block cipher, configured in CBC mode. - - :Parameters: - block_cipher : C pointer - A smart pointer to the low-level block cipher instance. - - iv : bytes/bytearray/memoryview - The initialization vector to use for encryption or decryption. - It is as long as the cipher block. - - **The IV must be unpredictable**. Ideally it is picked randomly. - - Reusing the *IV* for encryptions performed with the same key - compromises confidentiality. - """ - - self._state = VoidPointer() - result = raw_cbc_lib.CBC_start_operation(block_cipher.get(), - c_uint8_ptr(iv), - c_size_t(len(iv)), - self._state.address_of()) - if result: - raise ValueError("Error %d while instantiating the CBC mode" - % result) - - # Ensure that object disposal of this Python object will (eventually) - # free the memory allocated by the raw library for the cipher mode - self._state = SmartPointer(self._state.get(), - raw_cbc_lib.CBC_stop_operation) - - # Memory allocated for the underlying block cipher is now owned - # by the cipher mode - block_cipher.release() - - self.block_size = len(iv) - """The block size of the underlying cipher, in bytes.""" - - self.iv = _copy_bytes(None, None, iv) - """The Initialization Vector originally used to create the object. - The value does not change.""" - - self.IV = self.iv - """Alias for `iv`""" - - self._next = [ self.encrypt, self.decrypt ] - - def encrypt(self, plaintext, output=None): - """Encrypt data with the key and the parameters set at initialization. - - A cipher object is stateful: once you have encrypted a message - you cannot encrypt (or decrypt) another message using the same - object. - - The data to encrypt can be broken up in two or - more pieces and `encrypt` can be called multiple times. - - That is, the statement: - - >>> c.encrypt(a) + c.encrypt(b) - - is equivalent to: - - >>> c.encrypt(a+b) - - That also means that you cannot reuse an object for encrypting - or decrypting other data with the same key. - - This function does not add any padding to the plaintext. - - :Parameters: - plaintext : bytes/bytearray/memoryview - The piece of data to encrypt. - Its length must be a multiple of the cipher block size. - :Keywords: - output : bytearray/memoryview - The location where the ciphertext must be written to. - If ``None``, the ciphertext is returned. - :Return: - If ``output`` is ``None``, the ciphertext is returned as ``bytes``. - Otherwise, ``None``. 
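- - Example (an illustrative sketch, not part of the original file; since CBC adds no padding, ``Crypto.Util.Padding.pad`` is used to reach a block boundary): - - >>> from Crypto.Cipher import AES - >>> from Crypto.Util.Padding import pad - >>> cipher = AES.new(b"Sixteen byte key", AES.MODE_CBC) - >>> ct = cipher.encrypt(pad(b"attack at dawn", AES.block_size))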
- """ - - if self.encrypt not in self._next: - raise TypeError("encrypt() cannot be called after decrypt()") - self._next = [ self.encrypt ] - - if output is None: - ciphertext = create_string_buffer(len(plaintext)) - else: - ciphertext = output - - if not is_writeable_buffer(output): - raise TypeError("output must be a bytearray or a writeable memoryview") - - if len(plaintext) != len(output): - raise ValueError("output must have the same length as the input" - " (%d bytes)" % len(plaintext)) - - result = raw_cbc_lib.CBC_encrypt(self._state.get(), - c_uint8_ptr(plaintext), - c_uint8_ptr(ciphertext), - c_size_t(len(plaintext))) - if result: - if result == 3: - raise ValueError("Data must be padded to %d byte boundary in CBC mode" % self.block_size) - raise ValueError("Error %d while encrypting in CBC mode" % result) - - if output is None: - return get_raw_buffer(ciphertext) - else: - return None - - def decrypt(self, ciphertext, output=None): - """Decrypt data with the key and the parameters set at initialization. - - A cipher object is stateful: once you have decrypted a message - you cannot decrypt (or encrypt) another message with the same - object. - - The data to decrypt can be broken up in two or - more pieces and `decrypt` can be called multiple times. - - That is, the statement: - - >>> c.decrypt(a) + c.decrypt(b) - - is equivalent to: - - >>> c.decrypt(a+b) - - This function does not remove any padding from the plaintext. - - :Parameters: - ciphertext : bytes/bytearray/memoryview - The piece of data to decrypt. - Its length must be multiple of the cipher block size. - :Keywords: - output : bytearray/memoryview - The location where the plaintext must be written to. - If ``None``, the plaintext is returned. - :Return: - If ``output`` is ``None``, the plaintext is returned as ``bytes``. - Otherwise, ``None``. - """ - - if self.decrypt not in self._next: - raise TypeError("decrypt() cannot be called after encrypt()") - self._next = [ self.decrypt ] - - if output is None: - plaintext = create_string_buffer(len(ciphertext)) - else: - plaintext = output - - if not is_writeable_buffer(output): - raise TypeError("output must be a bytearray or a writeable memoryview") - - if len(ciphertext) != len(output): - raise ValueError("output must have the same length as the input" - " (%d bytes)" % len(plaintext)) - - result = raw_cbc_lib.CBC_decrypt(self._state.get(), - c_uint8_ptr(ciphertext), - c_uint8_ptr(plaintext), - c_size_t(len(ciphertext))) - if result: - if result == 3: - raise ValueError("Data must be padded to %d byte boundary in CBC mode" % self.block_size) - raise ValueError("Error %d while decrypting in CBC mode" % result) - - if output is None: - return get_raw_buffer(plaintext) - else: - return None - - -def _create_cbc_cipher(factory, **kwargs): - """Instantiate a cipher object that performs CBC encryption/decryption. - - :Parameters: - factory : module - The underlying block cipher, a module from ``Crypto.Cipher``. - - :Keywords: - iv : bytes/bytearray/memoryview - The IV to use for CBC. - - IV : bytes/bytearray/memoryview - Alias for ``iv``. - - Any other keyword will be passed to the underlying block cipher. - See the relevant documentation for details (at least ``key`` will need - to be present). 
- """ - - cipher_state = factory._create_base_cipher(kwargs) - iv = kwargs.pop("IV", None) - IV = kwargs.pop("iv", None) - - if (None, None) == (iv, IV): - iv = get_random_bytes(factory.block_size) - if iv is not None: - if IV is not None: - raise TypeError("You must either use 'iv' or 'IV', not both") - else: - iv = IV - - if len(iv) != factory.block_size: - raise ValueError("Incorrect IV length (it must be %d bytes long)" % - factory.block_size) - - if kwargs: - raise TypeError("Unknown parameters for CBC: %s" % str(kwargs)) - - return CbcMode(cipher_state, iv) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Future.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Future.py deleted file mode 100644 index 848792e00bf21d57e7cb680ab5199123093ca96c..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Future.py +++ /dev/null @@ -1,15 +0,0 @@ -def _get_feature(name): - import __future__ - # fall back to a unique fake object for earlier Python versions or Python 3 - return getattr(__future__, name, object()) - -unicode_literals = _get_feature("unicode_literals") -with_statement = _get_feature("with_statement") # dummy -division = _get_feature("division") -print_function = _get_feature("print_function") -absolute_import = _get_feature("absolute_import") -nested_scopes = _get_feature("nested_scopes") # dummy -generators = _get_feature("generators") # dummy -generator_stop = _get_feature("generator_stop") - -del _get_feature diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/Image.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/Image.py deleted file mode 100644 index 7faf0c2481ba1832303757d578d62b8594332713..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/Image.py +++ /dev/null @@ -1,3760 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# the Image class wrapper -# -# partial release history: -# 1995-09-09 fl Created -# 1996-03-11 fl PIL release 0.0 (proof of concept) -# 1996-04-30 fl PIL release 0.1b1 -# 1999-07-28 fl PIL release 1.0 final -# 2000-06-07 fl PIL release 1.1 -# 2000-10-20 fl PIL release 1.1.1 -# 2001-05-07 fl PIL release 1.1.2 -# 2002-03-15 fl PIL release 1.1.3 -# 2003-05-10 fl PIL release 1.1.4 -# 2005-03-28 fl PIL release 1.1.5 -# 2006-12-02 fl PIL release 1.1.6 -# 2009-11-15 fl PIL release 1.1.7 -# -# Copyright (c) 1997-2009 by Secret Labs AB. All rights reserved. -# Copyright (c) 1995-2009 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -import atexit -import builtins -import io -import logging -import math -import os -import re -import struct -import sys -import tempfile -import warnings -from collections.abc import Callable, MutableMapping -from enum import IntEnum -from pathlib import Path - -try: - import defusedxml.ElementTree as ElementTree -except ImportError: - ElementTree = None - -# VERSION was removed in Pillow 6.0.0. -# PILLOW_VERSION was removed in Pillow 9.0.0. -# Use __version__ instead. -from . 
import ImageMode, TiffTags, UnidentifiedImageError, __version__, _plugins -from ._binary import i32le, o32be, o32le -from ._deprecate import deprecate -from ._util import DeferredError, is_path - - -def __getattr__(name): - categories = {"NORMAL": 0, "SEQUENCE": 1, "CONTAINER": 2} - if name in categories: - deprecate("Image categories", 10, "is_animated", plural=True) - return categories[name] - elif name in ("NEAREST", "NONE"): - deprecate(name, 10, "Resampling.NEAREST or Dither.NONE") - return 0 - old_resampling = { - "LINEAR": "BILINEAR", - "CUBIC": "BICUBIC", - "ANTIALIAS": "LANCZOS", - } - if name in old_resampling: - deprecate(name, 10, f"Resampling.{old_resampling[name]}") - return Resampling[old_resampling[name]] - for enum in (Transpose, Transform, Resampling, Dither, Palette, Quantize): - if name in enum.__members__: - deprecate(name, 10, f"{enum.__name__}.{name}") - return enum[name] - raise AttributeError(f"module '{__name__}' has no attribute '{name}'") - - -logger = logging.getLogger(__name__) - - -class DecompressionBombWarning(RuntimeWarning): - pass - - -class DecompressionBombError(Exception): - pass - - -# Limit to around a quarter gigabyte for a 24-bit (3 bpp) image -MAX_IMAGE_PIXELS = int(1024 * 1024 * 1024 // 4 // 3) - - -try: - # If the _imaging C module is not present, Pillow will not load. - # Note that other modules should not refer to _imaging directly; - # import Image and use the Image.core variable instead. - # Also note that Image.core is not a publicly documented interface, - # and should be considered private and subject to change. - from . import _imaging as core - - if __version__ != getattr(core, "PILLOW_VERSION", None): - raise ImportError( - "The _imaging extension was built for another version of Pillow or PIL:\n" - f"Core version: {getattr(core, 'PILLOW_VERSION', None)}\n" - f"Pillow version: {__version__}" - ) - -except ImportError as v: - core = DeferredError(ImportError("The _imaging C module is not installed.")) - # Explanations for ways that we know we might have an import error - if str(v).startswith("Module use of python"): - # The _imaging C module is present, but not compiled for - # the right version (windows only). Print a warning, if - # possible. - warnings.warn( - "The _imaging extension was built for another version of Python.", - RuntimeWarning, - ) - elif str(v).startswith("The _imaging extension"): - warnings.warn(str(v), RuntimeWarning) - # Fail here anyway. Don't let people run with a mostly broken Pillow. - # see docs/porting.rst - raise - - -# works everywhere, win for pypy, not cpython -USE_CFFI_ACCESS = hasattr(sys, "pypy_version_info") -try: - import cffi -except ImportError: - cffi = None - - -def isImageType(t): - """ - Checks if an object is an image object. - - .. warning:: - - This function is for internal use only. 
- - :param t: object to check if it's an image - :returns: True if the object is an image - """ - return hasattr(t, "im") - - -# -# Constants - -# transpose -class Transpose(IntEnum): - FLIP_LEFT_RIGHT = 0 - FLIP_TOP_BOTTOM = 1 - ROTATE_90 = 2 - ROTATE_180 = 3 - ROTATE_270 = 4 - TRANSPOSE = 5 - TRANSVERSE = 6 - - -# transforms (also defined in Imaging.h) -class Transform(IntEnum): - AFFINE = 0 - EXTENT = 1 - PERSPECTIVE = 2 - QUAD = 3 - MESH = 4 - - -# resampling filters (also defined in Imaging.h) -class Resampling(IntEnum): - NEAREST = 0 - BOX = 4 - BILINEAR = 2 - HAMMING = 5 - BICUBIC = 3 - LANCZOS = 1 - - -_filters_support = { - Resampling.BOX: 0.5, - Resampling.BILINEAR: 1.0, - Resampling.HAMMING: 1.0, - Resampling.BICUBIC: 2.0, - Resampling.LANCZOS: 3.0, -} - - -# dithers -class Dither(IntEnum): - NONE = 0 - ORDERED = 1 # Not yet implemented - RASTERIZE = 2 # Not yet implemented - FLOYDSTEINBERG = 3 # default - - -# palettes/quantizers -class Palette(IntEnum): - WEB = 0 - ADAPTIVE = 1 - - -class Quantize(IntEnum): - MEDIANCUT = 0 - MAXCOVERAGE = 1 - FASTOCTREE = 2 - LIBIMAGEQUANT = 3 - - -if hasattr(core, "DEFAULT_STRATEGY"): - DEFAULT_STRATEGY = core.DEFAULT_STRATEGY - FILTERED = core.FILTERED - HUFFMAN_ONLY = core.HUFFMAN_ONLY - RLE = core.RLE - FIXED = core.FIXED - - -# -------------------------------------------------------------------- -# Registries - -ID = [] -OPEN = {} -MIME = {} -SAVE = {} -SAVE_ALL = {} -EXTENSION = {} -DECODERS = {} -ENCODERS = {} - -# -------------------------------------------------------------------- -# Modes - -_ENDIAN = "<" if sys.byteorder == "little" else ">" - - -def _conv_type_shape(im): - m = ImageMode.getmode(im.mode) - shape = (im.height, im.width) - extra = len(m.bands) - if extra != 1: - shape += (extra,) - return shape, m.typestr - - -MODES = ["1", "CMYK", "F", "HSV", "I", "L", "LAB", "P", "RGB", "RGBA", "RGBX", "YCbCr"] - -# raw modes that may be memory mapped. NOTE: if you change this, you -# may have to modify the stride calculation in map.c too! -_MAPMODES = ("L", "P", "RGBX", "RGBA", "CMYK", "I;16", "I;16L", "I;16B") - - -def getmodebase(mode): - """ - Gets the "base" mode for given mode. This function returns "L" for - images that contain grayscale data, and "RGB" for images that - contain color data. - - :param mode: Input mode. - :returns: "L" or "RGB". - :exception KeyError: If the input mode was not a standard mode. - """ - return ImageMode.getmode(mode).basemode - - -def getmodetype(mode): - """ - Gets the storage type mode. Given a mode, this function returns a - single-layer mode suitable for storing individual bands. - - :param mode: Input mode. - :returns: "L", "I", or "F". - :exception KeyError: If the input mode was not a standard mode. - """ - return ImageMode.getmode(mode).basetype - - -def getmodebandnames(mode): - """ - Gets a list of individual band names. Given a mode, this function returns - a tuple containing the names of individual bands (use - :py:method:`~PIL.Image.getmodetype` to get the mode used to store each - individual band. - - :param mode: Input mode. - :returns: A tuple containing band names. The length of the tuple - gives the number of bands in an image of the given mode. - :exception KeyError: If the input mode was not a standard mode. - """ - return ImageMode.getmode(mode).bands - - -def getmodebands(mode): - """ - Gets the number of individual bands for this mode. - - :param mode: Input mode. - :returns: The number of bands in this mode. 
- :exception KeyError: If the input mode was not a standard mode. - """ - return len(ImageMode.getmode(mode).bands) - - -# -------------------------------------------------------------------- -# Helpers - -_initialized = 0 - - -def preinit(): - """Explicitly load standard file format drivers.""" - - global _initialized - if _initialized >= 1: - return - - try: - from . import BmpImagePlugin - - assert BmpImagePlugin - except ImportError: - pass - try: - from . import GifImagePlugin - - assert GifImagePlugin - except ImportError: - pass - try: - from . import JpegImagePlugin - - assert JpegImagePlugin - except ImportError: - pass - try: - from . import PpmImagePlugin - - assert PpmImagePlugin - except ImportError: - pass - try: - from . import PngImagePlugin - - assert PngImagePlugin - except ImportError: - pass - # try: - # import TiffImagePlugin - # assert TiffImagePlugin - # except ImportError: - # pass - - _initialized = 1 - - -def init(): - """ - Explicitly initializes the Python Imaging Library. This function - loads all available file format drivers. - """ - - global _initialized - if _initialized >= 2: - return 0 - - for plugin in _plugins: - try: - logger.debug("Importing %s", plugin) - __import__(f"PIL.{plugin}", globals(), locals(), []) - except ImportError as e: - logger.debug("Image: failed to import %s: %s", plugin, e) - - if OPEN or SAVE: - _initialized = 2 - return 1 - - -# -------------------------------------------------------------------- -# Codec factories (used by tobytes/frombytes and ImageFile.load) - - -def _getdecoder(mode, decoder_name, args, extra=()): - - # tweak arguments - if args is None: - args = () - elif not isinstance(args, tuple): - args = (args,) - - try: - decoder = DECODERS[decoder_name] - except KeyError: - pass - else: - return decoder(mode, *args + extra) - - try: - # get decoder - decoder = getattr(core, decoder_name + "_decoder") - except AttributeError as e: - raise OSError(f"decoder {decoder_name} not available") from e - return decoder(mode, *args + extra) - - -def _getencoder(mode, encoder_name, args, extra=()): - - # tweak arguments - if args is None: - args = () - elif not isinstance(args, tuple): - args = (args,) - - try: - encoder = ENCODERS[encoder_name] - except KeyError: - pass - else: - return encoder(mode, *args + extra) - - try: - # get encoder - encoder = getattr(core, encoder_name + "_encoder") - except AttributeError as e: - raise OSError(f"encoder {encoder_name} not available") from e - return encoder(mode, *args + extra) - - -# -------------------------------------------------------------------- -# Simple expression analyzer - - -def coerce_e(value): - deprecate("coerce_e", 10) - return value if isinstance(value, _E) else _E(1, value) - - -# _E(scale, offset) represents the affine transformation scale * x + offset. -# The "data" field is named for compatibility with the old implementation, -# and should be renamed once coerce_e is removed. 
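-# For illustration (added here, not in the original source): a point()-style -# expression such as lambda x: x * 1.5 + 10 is analyzed by evaluating it on -# _E(1, 0), which yields _E(1.5, 10), so -# _getscaleoffset(lambda x: x * 1.5 + 10) returns (1.5, 10).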
-class _E: - def __init__(self, scale, data): - self.scale = scale - self.data = data - - def __neg__(self): - return _E(-self.scale, -self.data) - - def __add__(self, other): - if isinstance(other, _E): - return _E(self.scale + other.scale, self.data + other.data) - return _E(self.scale, self.data + other) - - __radd__ = __add__ - - def __sub__(self, other): - return self + -other - - def __rsub__(self, other): - return other + -self - - def __mul__(self, other): - if isinstance(other, _E): - return NotImplemented - return _E(self.scale * other, self.data * other) - - __rmul__ = __mul__ - - def __truediv__(self, other): - if isinstance(other, _E): - return NotImplemented - return _E(self.scale / other, self.data / other) - - -def _getscaleoffset(expr): - a = expr(_E(1, 0)) - return (a.scale, a.data) if isinstance(a, _E) else (0, a) - - -# -------------------------------------------------------------------- -# Implementation wrapper - - -class Image: - """ - This class represents an image object. To create - :py:class:`~PIL.Image.Image` objects, use the appropriate factory - functions. There's hardly ever any reason to call the Image constructor - directly. - - * :py:func:`~PIL.Image.open` - * :py:func:`~PIL.Image.new` - * :py:func:`~PIL.Image.frombytes` - """ - - format = None - format_description = None - _close_exclusive_fp_after_loading = True - - def __init__(self): - # FIXME: take "new" parameters / other image? - # FIXME: turn mode and size into delegating properties? - self.im = None - self.mode = "" - self._size = (0, 0) - self.palette = None - self.info = {} - self._category = 0 - self.readonly = 0 - self.pyaccess = None - self._exif = None - - def __getattr__(self, name): - if name == "category": - deprecate("Image categories", 10, "is_animated", plural=True) - return self._category - raise AttributeError(name) - - @property - def width(self): - return self.size[0] - - @property - def height(self): - return self.size[1] - - @property - def size(self): - return self._size - - def _new(self, im): - new = Image() - new.im = im - new.mode = im.mode - new._size = im.size - if im.mode in ("P", "PA"): - if self.palette: - new.palette = self.palette.copy() - else: - from . import ImagePalette - - new.palette = ImagePalette.ImagePalette() - new.info = self.info.copy() - return new - - # Context manager support - def __enter__(self): - return self - - def __exit__(self, *args): - if hasattr(self, "fp") and getattr(self, "_exclusive_fp", False): - if getattr(self, "_fp", False): - if self._fp != self.fp: - self._fp.close() - self._fp = DeferredError(ValueError("Operation on closed image")) - if self.fp: - self.fp.close() - self.fp = None - - def close(self): - """ - Closes the file pointer, if possible. - - This operation will destroy the image core and release its memory. - The image data will be unusable afterward. - - This function is required to close images that have multiple frames or - have not had their file read and closed by the - :py:meth:`~PIL.Image.Image.load` method. See :ref:`file-handling` for - more information. 
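- - Example (an illustrative sketch, not part of the original file; the filename is hypothetical): - - >>> im = Image.open("hopper.png") - >>> im.load() - >>> im.close() # or use ``with Image.open(...) as im:`` instead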
- """ - try: - if getattr(self, "_fp", False): - if self._fp != self.fp: - self._fp.close() - self._fp = DeferredError(ValueError("Operation on closed image")) - if self.fp: - self.fp.close() - self.fp = None - except Exception as msg: - logger.debug("Error closing: %s", msg) - - if getattr(self, "map", None): - self.map = None - - # Instead of simply setting to None, we're setting up a - # deferred error that will better explain that the core image - # object is gone. - self.im = DeferredError(ValueError("Operation on closed image")) - - def _copy(self): - self.load() - self.im = self.im.copy() - self.pyaccess = None - self.readonly = 0 - - def _ensure_mutable(self): - if self.readonly: - self._copy() - else: - self.load() - - def _dump(self, file=None, format=None, **options): - suffix = "" - if format: - suffix = "." + format - - if not file: - f, filename = tempfile.mkstemp(suffix) - os.close(f) - else: - filename = file - if not filename.endswith(suffix): - filename = filename + suffix - - self.load() - - if not format or format == "PPM": - self.im.save_ppm(filename) - else: - self.save(filename, format, **options) - - return filename - - def __eq__(self, other): - return ( - self.__class__ is other.__class__ - and self.mode == other.mode - and self.size == other.size - and self.info == other.info - and self._category == other._category - and self.getpalette() == other.getpalette() - and self.tobytes() == other.tobytes() - ) - - def __repr__(self): - return "<%s.%s image mode=%s size=%dx%d at 0x%X>" % ( - self.__class__.__module__, - self.__class__.__name__, - self.mode, - self.size[0], - self.size[1], - id(self), - ) - - def _repr_pretty_(self, p, cycle): - """IPython plain text display support""" - - # Same as __repr__ but without unpredictable id(self), - # to keep Jupyter notebook `text/plain` output stable. - p.text( - "<%s.%s image mode=%s size=%dx%d>" - % ( - self.__class__.__module__, - self.__class__.__name__, - self.mode, - self.size[0], - self.size[1], - ) - ) - - def _repr_png_(self): - """iPython display hook support - - :returns: png version of the image as bytes - """ - b = io.BytesIO() - try: - self.save(b, "PNG") - except Exception as e: - raise ValueError("Could not save to PNG for display") from e - return b.getvalue() - - @property - def __array_interface__(self): - # numpy array interface support - new = {} - shape, typestr = _conv_type_shape(self) - new["shape"] = shape - new["typestr"] = typestr - new["version"] = 3 - try: - if self.mode == "1": - # Binary images need to be extended from bits to bytes - # See: https://github.com/python-pillow/Pillow/issues/350 - new["data"] = self.tobytes("raw", "L") - else: - new["data"] = self.tobytes() - except Exception as e: - if not isinstance(e, (MemoryError, RecursionError)): - try: - import numpy - from packaging.version import parse as parse_version - except ImportError: - pass - else: - if parse_version(numpy.__version__) < parse_version("1.23"): - warnings.warn(e) - raise - return new - - def __getstate__(self): - return [self.info, self.mode, self.size, self.getpalette(), self.tobytes()] - - def __setstate__(self, state): - Image.__init__(self) - self.tile = [] - info, mode, size, palette, data = state - self.info = info - self.mode = mode - self._size = size - self.im = core.new(mode, size) - if mode in ("L", "LA", "P", "PA") and palette: - self.putpalette(palette) - self.frombytes(data) - - def tobytes(self, encoder_name="raw", *args): - """ - Return image as a bytes object. - - .. 
warning:: - - This method returns the raw image data from the internal - storage. For compressed image data (e.g. PNG, JPEG) use - :meth:`~.save`, with a BytesIO parameter for in-memory - data. - - :param encoder_name: What encoder to use. The default is to - use the standard "raw" encoder. - - A list of C encoders can be seen under - codecs section of the function array in - :file:`_imaging.c`. Python encoders are - registered within the relevant plugins. - :param args: Extra arguments to the encoder. - :returns: A :py:class:`bytes` object. - """ - - # may pass tuple instead of argument list - if len(args) == 1 and isinstance(args[0], tuple): - args = args[0] - - if encoder_name == "raw" and args == (): - args = self.mode - - self.load() - - if self.width == 0 or self.height == 0: - return b"" - - # unpack data - e = _getencoder(self.mode, encoder_name, args) - e.setimage(self.im) - - bufsize = max(65536, self.size[0] * 4) # see RawEncode.c - - data = [] - while True: - l, s, d = e.encode(bufsize) - data.append(d) - if s: - break - if s < 0: - raise RuntimeError(f"encoder error {s} in tobytes") - - return b"".join(data) - - def tobitmap(self, name="image"): - """ - Returns the image converted to an X11 bitmap. - - .. note:: This method only works for mode "1" images. - - :param name: The name prefix to use for the bitmap variables. - :returns: A string containing an X11 bitmap. - :raises ValueError: If the mode is not "1" - """ - - self.load() - if self.mode != "1": - raise ValueError("not a bitmap") - data = self.tobytes("xbm") - return b"".join( - [ - f"#define {name}_width {self.size[0]}\n".encode("ascii"), - f"#define {name}_height {self.size[1]}\n".encode("ascii"), - f"static char {name}_bits[] = {{\n".encode("ascii"), - data, - b"};", - ] - ) - - def frombytes(self, data, decoder_name="raw", *args): - """ - Loads this image with pixel data from a bytes object. - - This method is similar to the :py:func:`~PIL.Image.frombytes` function, - but loads data into this image instead of creating a new image object. - """ - - # may pass tuple instead of argument list - if len(args) == 1 and isinstance(args[0], tuple): - args = args[0] - - # default format - if decoder_name == "raw" and args == (): - args = self.mode - - # unpack data - d = _getdecoder(self.mode, decoder_name, args) - d.setimage(self.im) - s = d.decode(data) - - if s[0] >= 0: - raise ValueError("not enough image data") - if s[1] != 0: - raise ValueError("cannot decode image data") - - def load(self): - """ - Allocates storage for the image and loads the pixel data. In - normal cases, you don't need to call this method, since the - Image class automatically loads an opened image when it is - accessed for the first time. - - If the file associated with the image was opened by Pillow, then this - method will close it. The exception to this is if the image has - multiple frames, in which case the file will be left open for seek - operations. See :ref:`file-handling` for more information. - - :returns: An image access object. 
- :rtype: :ref:`PixelAccess` or :py:class:`PIL.PyAccess` - """ - if self.im is not None and self.palette and self.palette.dirty: - # realize palette - mode, arr = self.palette.getdata() - self.im.putpalette(mode, arr) - self.palette.dirty = 0 - self.palette.rawmode = None - if "transparency" in self.info and mode in ("LA", "PA"): - if isinstance(self.info["transparency"], int): - self.im.putpalettealpha(self.info["transparency"], 0) - else: - self.im.putpalettealphas(self.info["transparency"]) - self.palette.mode = "RGBA" - else: - palette_mode = "RGBA" if mode.startswith("RGBA") else "RGB" - self.palette.mode = palette_mode - self.palette.palette = self.im.getpalette(palette_mode, palette_mode) - - if self.im is not None: - if cffi and USE_CFFI_ACCESS: - if self.pyaccess: - return self.pyaccess - from . import PyAccess - - self.pyaccess = PyAccess.new(self, self.readonly) - if self.pyaccess: - return self.pyaccess - return self.im.pixel_access(self.readonly) - - def verify(self): - """ - Verifies the contents of a file. For data read from a file, this - method attempts to determine if the file is broken, without - actually decoding the image data. If this method finds any - problems, it raises suitable exceptions. If you need to load - the image after using this method, you must reopen the image - file. - """ - pass - - def convert( - self, mode=None, matrix=None, dither=None, palette=Palette.WEB, colors=256 - ): - """ - Returns a converted copy of this image. For the "P" mode, this - method translates pixels through the palette. If mode is - omitted, a mode is chosen so that all information in the image - and the palette can be represented without a palette. - - The current version supports all possible conversions between - "L", "RGB" and "CMYK". The ``matrix`` argument only supports "L" - and "RGB". - - When translating a color image to greyscale (mode "L"), - the library uses the ITU-R 601-2 luma transform:: - - L = R * 299/1000 + G * 587/1000 + B * 114/1000 - - The default method of converting a greyscale ("L") or "RGB" - image into a bilevel (mode "1") image uses Floyd-Steinberg - dither to approximate the original image luminosity levels. If - dither is ``None``, all values larger than 127 are set to 255 (white), - all other values to 0 (black). To use other thresholds, use the - :py:meth:`~PIL.Image.Image.point` method. - - When converting from "RGBA" to "P" without a ``matrix`` argument, - this passes the operation to :py:meth:`~PIL.Image.Image.quantize`, - and ``dither`` and ``palette`` are ignored. - - When converting from "PA", if an "RGBA" palette is present, the alpha - channel from the image will be used instead of the values from the palette. - - :param mode: The requested mode. See: :ref:`concept-modes`. - :param matrix: An optional conversion matrix. If given, this - should be 4- or 12-tuple containing floating point values. - :param dither: Dithering method, used when converting from - mode "RGB" to "P" or from "RGB" or "L" to "1". - Available methods are :data:`Dither.NONE` or :data:`Dither.FLOYDSTEINBERG` - (default). Note that this is not used when ``matrix`` is supplied. - :param palette: Palette to use when converting from mode "RGB" - to "P". Available palettes are :data:`Palette.WEB` or - :data:`Palette.ADAPTIVE`. - :param colors: Number of colors to use for the :data:`Palette.ADAPTIVE` - palette. Defaults to 256. - :rtype: :py:class:`~PIL.Image.Image` - :returns: An :py:class:`~PIL.Image.Image` object. 
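- - Example (an illustrative sketch, not part of the original file; the filename is hypothetical): - - >>> with Image.open("hopper.png") as im: - ... grey = im.convert("L") # ITU-R 601-2 luma transform - ... bw = grey.convert("1") # Floyd-Steinberg dither by default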
- """ - - self.load() - - has_transparency = self.info.get("transparency") is not None - if not mode and self.mode == "P": - # determine default mode - if self.palette: - mode = self.palette.mode - else: - mode = "RGB" - if mode == "RGB" and has_transparency: - mode = "RGBA" - if not mode or (mode == self.mode and not matrix): - return self.copy() - - if matrix: - # matrix conversion - if mode not in ("L", "RGB"): - raise ValueError("illegal conversion") - im = self.im.convert_matrix(mode, matrix) - new = self._new(im) - if has_transparency and self.im.bands == 3: - transparency = new.info["transparency"] - - def convert_transparency(m, v): - v = m[0] * v[0] + m[1] * v[1] + m[2] * v[2] + m[3] * 0.5 - return max(0, min(255, int(v))) - - if mode == "L": - transparency = convert_transparency(matrix, transparency) - elif len(mode) == 3: - transparency = tuple( - convert_transparency(matrix[i * 4 : i * 4 + 4], transparency) - for i in range(0, len(transparency)) - ) - new.info["transparency"] = transparency - return new - - if mode == "P" and self.mode == "RGBA": - return self.quantize(colors) - - trns = None - delete_trns = False - # transparency handling - if has_transparency: - if (self.mode in ("1", "L", "I") and mode in ("LA", "RGBA")) or ( - self.mode == "RGB" and mode == "RGBA" - ): - # Use transparent conversion to promote from transparent - # color to an alpha channel. - new_im = self._new( - self.im.convert_transparent(mode, self.info["transparency"]) - ) - del new_im.info["transparency"] - return new_im - elif self.mode in ("L", "RGB", "P") and mode in ("L", "RGB", "P"): - t = self.info["transparency"] - if isinstance(t, bytes): - # Dragons. This can't be represented by a single color - warnings.warn( - "Palette images with Transparency expressed in bytes should be " - "converted to RGBA images" - ) - delete_trns = True - else: - # get the new transparency color. - # use existing conversions - trns_im = Image()._new(core.new(self.mode, (1, 1))) - if self.mode == "P": - trns_im.putpalette(self.palette) - if isinstance(t, tuple): - err = "Couldn't allocate a palette color for transparency" - try: - t = trns_im.palette.getcolor(t, self) - except ValueError as e: - if str(e) == "cannot allocate more than 256 colors": - # If all 256 colors are in use, - # then there is no need for transparency - t = None - else: - raise ValueError(err) from e - if t is None: - trns = None - else: - trns_im.putpixel((0, 0), t) - - if mode in ("L", "RGB"): - trns_im = trns_im.convert(mode) - else: - # can't just retrieve the palette number, got to do it - # after quantization. - trns_im = trns_im.convert("RGB") - trns = trns_im.getpixel((0, 0)) - - elif self.mode == "P" and mode in ("LA", "PA", "RGBA"): - t = self.info["transparency"] - delete_trns = True - - if isinstance(t, bytes): - self.im.putpalettealphas(t) - elif isinstance(t, int): - self.im.putpalettealpha(t, 0) - else: - raise ValueError("Transparency for P mode should be bytes or int") - - if mode == "P" and palette == Palette.ADAPTIVE: - im = self.im.quantize(colors) - new = self._new(im) - from . import ImagePalette - - new.palette = ImagePalette.ImagePalette("RGB", new.im.getpalette("RGB")) - if delete_trns: - # This could possibly happen if we requantize to fewer colors. - # The transparency would be totally off in that case. 
- del new.info["transparency"] - if trns is not None: - try: - new.info["transparency"] = new.palette.getcolor(trns, new) - except Exception: - # if we can't make a transparent color, don't leave the old - # transparency hanging around to mess us up. - del new.info["transparency"] - warnings.warn("Couldn't allocate palette entry for transparency") - return new - - if "LAB" in (self.mode, mode): - other_mode = mode if self.mode == "LAB" else self.mode - if other_mode in ("RGB", "RGBA", "RGBX"): - from . import ImageCms - - srgb = ImageCms.createProfile("sRGB") - lab = ImageCms.createProfile("LAB") - profiles = [lab, srgb] if self.mode == "LAB" else [srgb, lab] - transform = ImageCms.buildTransform( - profiles[0], profiles[1], self.mode, mode - ) - return transform.apply(self) - - # colorspace conversion - if dither is None: - dither = Dither.FLOYDSTEINBERG - - try: - im = self.im.convert(mode, dither) - except ValueError: - try: - # normalize source image and try again - modebase = getmodebase(self.mode) - if modebase == self.mode: - raise - im = self.im.convert(modebase) - im = im.convert(mode, dither) - except KeyError as e: - raise ValueError("illegal conversion") from e - - new_im = self._new(im) - if mode == "P" and palette != Palette.ADAPTIVE: - from . import ImagePalette - - new_im.palette = ImagePalette.ImagePalette("RGB", list(range(256)) * 3) - if delete_trns: - # crash fail if we leave a bytes transparency in an rgb/l mode. - del new_im.info["transparency"] - if trns is not None: - if new_im.mode == "P": - try: - new_im.info["transparency"] = new_im.palette.getcolor(trns, new_im) - except ValueError as e: - del new_im.info["transparency"] - if str(e) != "cannot allocate more than 256 colors": - # If all 256 colors are in use, - # then there is no need for transparency - warnings.warn( - "Couldn't allocate palette entry for transparency" - ) - else: - new_im.info["transparency"] = trns - return new_im - - def quantize( - self, - colors=256, - method=None, - kmeans=0, - palette=None, - dither=Dither.FLOYDSTEINBERG, - ): - """ - Convert the image to 'P' mode with the specified number - of colors. - - :param colors: The desired number of colors, <= 256 - :param method: :data:`Quantize.MEDIANCUT` (median cut), - :data:`Quantize.MAXCOVERAGE` (maximum coverage), - :data:`Quantize.FASTOCTREE` (fast octree), - :data:`Quantize.LIBIMAGEQUANT` (libimagequant; check support - using :py:func:`PIL.features.check_feature` with - ``feature="libimagequant"``). - - By default, :data:`Quantize.MEDIANCUT` will be used. - - The exception to this is RGBA images. :data:`Quantize.MEDIANCUT` - and :data:`Quantize.MAXCOVERAGE` do not support RGBA images, so - :data:`Quantize.FASTOCTREE` is used by default instead. - :param kmeans: Integer - :param palette: Quantize to the palette of given - :py:class:`PIL.Image.Image`. - :param dither: Dithering method, used when converting from - mode "RGB" to "P" or from "RGB" or "L" to "1". - Available methods are :data:`Dither.NONE` or :data:`Dither.FLOYDSTEINBERG` - (default). - :returns: A new image - - """ - - self.load() - - if method is None: - # defaults: - method = Quantize.MEDIANCUT - if self.mode == "RGBA": - method = Quantize.FASTOCTREE - - if self.mode == "RGBA" and method not in ( - Quantize.FASTOCTREE, - Quantize.LIBIMAGEQUANT, - ): - # Caller specified an invalid mode. 
- raise ValueError( - "Fast Octree (method == 2) and libimagequant (method == 3) " - "are the only valid methods for quantizing RGBA images" - ) - - if palette: - # use palette from reference image - palette.load() - if palette.mode != "P": - raise ValueError("bad mode for palette image") - if self.mode != "RGB" and self.mode != "L": - raise ValueError( - "only RGB or L mode images can be quantized to a palette" - ) - im = self.im.convert("P", dither, palette.im) - new_im = self._new(im) - new_im.palette = palette.palette.copy() - return new_im - - im = self._new(self.im.quantize(colors, method, kmeans)) - - from . import ImagePalette - - mode = im.im.getpalettemode() - palette = im.im.getpalette(mode, mode)[: colors * len(mode)] - im.palette = ImagePalette.ImagePalette(mode, palette) - - return im - - def copy(self): - """ - Copies this image. Use this method if you wish to paste things - into an image, but still retain the original. - - :rtype: :py:class:`~PIL.Image.Image` - :returns: An :py:class:`~PIL.Image.Image` object. - """ - self.load() - return self._new(self.im.copy()) - - __copy__ = copy - - def crop(self, box=None): - """ - Returns a rectangular region from this image. The box is a - 4-tuple defining the left, upper, right, and lower pixel - coordinate. See :ref:`coordinate-system`. - - Note: Prior to Pillow 3.4.0, this was a lazy operation. - - :param box: The crop rectangle, as a (left, upper, right, lower)-tuple. - :rtype: :py:class:`~PIL.Image.Image` - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - if box is None: - return self.copy() - - if box[2] < box[0]: - raise ValueError("Coordinate 'right' is less than 'left'") - elif box[3] < box[1]: - raise ValueError("Coordinate 'lower' is less than 'upper'") - - self.load() - return self._new(self._crop(self.im, box)) - - def _crop(self, im, box): - """ - Returns a rectangular region from the core image object im. - - This is equivalent to calling im.crop((x0, y0, x1, y1)), but - includes additional sanity checks. - - :param im: a core image object - :param box: The crop rectangle, as a (left, upper, right, lower)-tuple. - :returns: A core image object. - """ - - x0, y0, x1, y1 = map(int, map(round, box)) - - absolute_values = (abs(x1 - x0), abs(y1 - y0)) - - _decompression_bomb_check(absolute_values) - - return im.crop((x0, y0, x1, y1)) - - def draft(self, mode, size): - """ - Configures the image file loader so it returns a version of the - image that as closely as possible matches the given mode and - size. For example, you can use this method to convert a color - JPEG to greyscale while loading it. - - If any changes are made, returns a tuple with the chosen ``mode`` and - ``box`` with coordinates of the original image within the altered one. - - Note that this method modifies the :py:class:`~PIL.Image.Image` object - in place. If the image has already been loaded, this method has no - effect. - - Note: This method is not implemented for most images. It is - currently implemented only for JPEG and MPO images. - - :param mode: The requested mode. - :param size: The requested size. - """ - pass - - def _expand(self, xmargin, ymargin=None): - if ymargin is None: - ymargin = xmargin - self.load() - return self._new(self.im.expand(xmargin, ymargin, 0)) - - def filter(self, filter): - """ - Filters this image using the given filter. For a list of - available filters, see the :py:mod:`~PIL.ImageFilter` module. - - :param filter: Filter kernel. - :returns: An :py:class:`~PIL.Image.Image` object.""" - - from . 
import ImageFilter - - self.load() - - if isinstance(filter, Callable): - filter = filter() - if not hasattr(filter, "filter"): - raise TypeError( - "filter argument should be ImageFilter.Filter instance or class" - ) - - multiband = isinstance(filter, ImageFilter.MultibandFilter) - if self.im.bands == 1 or multiband: - return self._new(filter.filter(self.im)) - - ims = [] - for c in range(self.im.bands): - ims.append(self._new(filter.filter(self.im.getband(c)))) - return merge(self.mode, ims) - - def getbands(self): - """ - Returns a tuple containing the name of each band in this image. - For example, ``getbands`` on an RGB image returns ("R", "G", "B"). - - :returns: A tuple containing band names. - :rtype: tuple - """ - return ImageMode.getmode(self.mode).bands - - def getbbox(self): - """ - Calculates the bounding box of the non-zero regions in the - image. - - :returns: The bounding box is returned as a 4-tuple defining the - left, upper, right, and lower pixel coordinate. See - :ref:`coordinate-system`. If the image is completely empty, this - method returns None. - - """ - - self.load() - return self.im.getbbox() - - def getcolors(self, maxcolors=256): - """ - Returns a list of colors used in this image. - - The colors will be in the image's mode. For example, an RGB image will - return a tuple of (red, green, blue) color values, and a P image will - return the index of the color in the palette. - - :param maxcolors: Maximum number of colors. If this number is - exceeded, this method returns None. The default limit is - 256 colors. - :returns: An unsorted list of (count, pixel) values. - """ - - self.load() - if self.mode in ("1", "L", "P"): - h = self.im.histogram() - out = [] - for i in range(256): - if h[i]: - out.append((h[i], i)) - if len(out) > maxcolors: - return None - return out - return self.im.getcolors(maxcolors) - - def getdata(self, band=None): - """ - Returns the contents of this image as a sequence object - containing pixel values. The sequence object is flattened, so - that values for line one follow directly after the values of - line zero, and so on. - - Note that the sequence object returned by this method is an - internal PIL data type, which only supports certain sequence - operations. To convert it to an ordinary sequence (e.g. for - printing), use ``list(im.getdata())``. - - :param band: What band to return. The default is to return - all bands. To return a single band, pass in the index - value (e.g. 0 to get the "R" band from an "RGB" image). - :returns: A sequence-like object. - """ - - self.load() - if band is not None: - return self.im.getband(band) - return self.im # could be abused - - def getextrema(self): - """ - Gets the minimum and maximum pixel values for each band in - the image. - - :returns: For a single-band image, a 2-tuple containing the - minimum and maximum pixel value. For a multi-band image, - a tuple containing one 2-tuple for each band. 
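-
-        For example, on an "RGB" image (a sketch; the exact values depend
-        on the file)::
-
-            from PIL import Image
-
-            im = Image.open("hopper.jpg")
-            im.getextrema()  # e.g. ((0, 255), (0, 255), (0, 255)), one pair per band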
- """ - - self.load() - if self.im.bands > 1: - extrema = [] - for i in range(self.im.bands): - extrema.append(self.im.getband(i).getextrema()) - return tuple(extrema) - return self.im.getextrema() - - def _getxmp(self, xmp_tags): - def get_name(tag): - return tag.split("}")[1] - - def get_value(element): - value = {get_name(k): v for k, v in element.attrib.items()} - children = list(element) - if children: - for child in children: - name = get_name(child.tag) - child_value = get_value(child) - if name in value: - if not isinstance(value[name], list): - value[name] = [value[name]] - value[name].append(child_value) - else: - value[name] = child_value - elif value: - if element.text: - value["text"] = element.text - else: - return element.text - return value - - if ElementTree is None: - warnings.warn("XMP data cannot be read without defusedxml dependency") - return {} - else: - root = ElementTree.fromstring(xmp_tags) - return {get_name(root.tag): get_value(root)} - - def getexif(self): - if self._exif is None: - self._exif = Exif() - self._exif._loaded = False - elif self._exif._loaded: - return self._exif - self._exif._loaded = True - - exif_info = self.info.get("exif") - if exif_info is None: - if "Raw profile type exif" in self.info: - exif_info = bytes.fromhex( - "".join(self.info["Raw profile type exif"].split("\n")[3:]) - ) - elif hasattr(self, "tag_v2"): - self._exif.bigtiff = self.tag_v2._bigtiff - self._exif.endian = self.tag_v2._endian - self._exif.load_from_fp(self.fp, self.tag_v2._offset) - if exif_info is not None: - self._exif.load(exif_info) - - # XMP tags - if 0x0112 not in self._exif: - xmp_tags = self.info.get("XML:com.adobe.xmp") - if xmp_tags: - match = re.search(r'tiff:Orientation(="|>)([0-9])', xmp_tags) - if match: - self._exif[0x0112] = int(match[2]) - - return self._exif - - def _reload_exif(self): - if self._exif is None or not self._exif._loaded: - return - self._exif._loaded = False - self.getexif() - - def getim(self): - """ - Returns a capsule that points to the internal image memory. - - :returns: A capsule object. - """ - - self.load() - return self.im.ptr - - def getpalette(self, rawmode="RGB"): - """ - Returns the image palette as a list. - - :param rawmode: The mode in which to return the palette. ``None`` will - return the palette in its current mode. - - .. versionadded:: 9.1.0 - - :returns: A list of color values [r, g, b, ...], or None if the - image has no palette. - """ - - self.load() - try: - mode = self.im.getpalettemode() - except ValueError: - return None # no palette - if rawmode is None: - rawmode = mode - return list(self.im.getpalette(mode, rawmode)) - - def apply_transparency(self): - """ - If a P mode image has a "transparency" key in the info dictionary, - remove the key and apply the transparency to the palette instead. - """ - if self.mode != "P" or "transparency" not in self.info: - return - - from . import ImagePalette - - palette = self.getpalette("RGBA") - transparency = self.info["transparency"] - if isinstance(transparency, bytes): - for i, alpha in enumerate(transparency): - palette[i * 4 + 3] = alpha - else: - palette[transparency * 4 + 3] = 0 - self.palette = ImagePalette.ImagePalette("RGBA", bytes(palette)) - self.palette.dirty = 1 - - del self.info["transparency"] - - def getpixel(self, xy): - """ - Returns the pixel value at a given position. - - :param xy: The coordinate, given as (x, y). See - :ref:`coordinate-system`. - :returns: The pixel value. If the image is a multi-layer image, - this method returns a tuple. 
- """ - - self.load() - if self.pyaccess: - return self.pyaccess.getpixel(xy) - return self.im.getpixel(xy) - - def getprojection(self): - """ - Get projection to x and y axes - - :returns: Two sequences, indicating where there are non-zero - pixels along the X-axis and the Y-axis, respectively. - """ - - self.load() - x, y = self.im.getprojection() - return list(x), list(y) - - def histogram(self, mask=None, extrema=None): - """ - Returns a histogram for the image. The histogram is returned as a - list of pixel counts, one for each pixel value in the source - image. Counts are grouped into 256 bins for each band, even if - the image has more than 8 bits per band. If the image has more - than one band, the histograms for all bands are concatenated (for - example, the histogram for an "RGB" image contains 768 values). - - A bilevel image (mode "1") is treated as a greyscale ("L") image - by this method. - - If a mask is provided, the method returns a histogram for those - parts of the image where the mask image is non-zero. The mask - image must have the same size as the image, and be either a - bi-level image (mode "1") or a greyscale image ("L"). - - :param mask: An optional mask. - :param extrema: An optional tuple of manually-specified extrema. - :returns: A list containing pixel counts. - """ - self.load() - if mask: - mask.load() - return self.im.histogram((0, 0), mask.im) - if self.mode in ("I", "F"): - if extrema is None: - extrema = self.getextrema() - return self.im.histogram(extrema) - return self.im.histogram() - - def entropy(self, mask=None, extrema=None): - """ - Calculates and returns the entropy for the image. - - A bilevel image (mode "1") is treated as a greyscale ("L") - image by this method. - - If a mask is provided, the method employs the histogram for - those parts of the image where the mask image is non-zero. - The mask image must have the same size as the image, and be - either a bi-level image (mode "1") or a greyscale image ("L"). - - :param mask: An optional mask. - :param extrema: An optional tuple of manually-specified extrema. - :returns: A float value representing the image entropy - """ - self.load() - if mask: - mask.load() - return self.im.entropy((0, 0), mask.im) - if self.mode in ("I", "F"): - if extrema is None: - extrema = self.getextrema() - return self.im.entropy(extrema) - return self.im.entropy() - - def paste(self, im, box=None, mask=None): - """ - Pastes another image into this image. The box argument is either - a 2-tuple giving the upper left corner, a 4-tuple defining the - left, upper, right, and lower pixel coordinate, or None (same as - (0, 0)). See :ref:`coordinate-system`. If a 4-tuple is given, the size - of the pasted image must match the size of the region. - - If the modes don't match, the pasted image is converted to the mode of - this image (see the :py:meth:`~PIL.Image.Image.convert` method for - details). - - Instead of an image, the source can be a integer or tuple - containing pixel values. The method then fills the region - with the given color. When creating RGB images, you can - also use color strings as supported by the ImageColor module. - - If a mask is given, this method updates only the regions - indicated by the mask. You can use either "1", "L", "LA", "RGBA" - or "RGBa" images (if present, the alpha band is used as mask). - Where the mask is 255, the given image is copied as is. Where - the mask is 0, the current value is preserved. 
Intermediate - values will mix the two images together, including their alpha - channels if they have them. - - See :py:meth:`~PIL.Image.Image.alpha_composite` if you want to - combine images with respect to their alpha channels. - - :param im: Source image or pixel value (integer or tuple). - :param box: An optional 4-tuple giving the region to paste into. - If a 2-tuple is used instead, it's treated as the upper left - corner. If omitted or None, the source is pasted into the - upper left corner. - - If an image is given as the second argument and there is no - third, the box defaults to (0, 0), and the second argument - is interpreted as a mask image. - :param mask: An optional mask image. - """ - - if isImageType(box) and mask is None: - # abbreviated paste(im, mask) syntax - mask = box - box = None - - if box is None: - box = (0, 0) - - if len(box) == 2: - # upper left corner given; get size from image or mask - if isImageType(im): - size = im.size - elif isImageType(mask): - size = mask.size - else: - # FIXME: use self.size here? - raise ValueError("cannot determine region size; use 4-item box") - box += (box[0] + size[0], box[1] + size[1]) - - if isinstance(im, str): - from . import ImageColor - - im = ImageColor.getcolor(im, self.mode) - - elif isImageType(im): - im.load() - if self.mode != im.mode: - if self.mode != "RGB" or im.mode not in ("LA", "RGBA", "RGBa"): - # should use an adapter for this! - im = im.convert(self.mode) - im = im.im - - self._ensure_mutable() - - if mask: - mask.load() - self.im.paste(im, box, mask.im) - else: - self.im.paste(im, box) - - def alpha_composite(self, im, dest=(0, 0), source=(0, 0)): - """'In-place' analog of Image.alpha_composite. Composites an image - onto this image. - - :param im: image to composite over this one - :param dest: Optional 2 tuple (left, top) specifying the upper - left corner in this (destination) image. - :param source: Optional 2 (left, top) tuple for the upper left - corner in the overlay source image, or 4 tuple (left, top, right, - bottom) for the bounds of the source rectangle - - Performance Note: Not currently implemented in-place in the core layer. - """ - - if not isinstance(source, (list, tuple)): - raise ValueError("Source must be a tuple") - if not isinstance(dest, (list, tuple)): - raise ValueError("Destination must be a tuple") - if not len(source) in (2, 4): - raise ValueError("Source must be a 2 or 4-tuple") - if not len(dest) == 2: - raise ValueError("Destination must be a 2-tuple") - if min(source) < 0: - raise ValueError("Source must be non-negative") - - if len(source) == 2: - source = source + im.size - - # over image, crop if it's not the whole thing. - if source == (0, 0) + im.size: - overlay = im - else: - overlay = im.crop(source) - - # target for the paste - box = dest + (dest[0] + overlay.width, dest[1] + overlay.height) - - # destination image. don't copy if we're using the whole image. - if box == (0, 0) + self.size: - background = self - else: - background = self.crop(box) - - result = alpha_composite(background, overlay) - self.paste(result, box) - - def point(self, lut, mode=None): - """ - Maps this image through a lookup table or function. - - :param lut: A lookup table, containing 256 (or 65536 if - self.mode=="I" and mode == "L") values per band in the - image. A function can be used instead, it should take a - single argument. The function is called once for each - possible pixel value, and the resulting table is applied to - all bands of the image. 
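-
-           For example, to darken an image by halving every band value::
-
-               out = im.point(lambda i: i // 2)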
- - It may also be an :py:class:`~PIL.Image.ImagePointHandler` - object:: - - class Example(Image.ImagePointHandler): - def point(self, data): - # Return result - :param mode: Output mode (default is same as input). In the - current version, this can only be used if the source image - has mode "L" or "P", and the output has mode "1" or the - source image mode is "I" and the output mode is "L". - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - self.load() - - if isinstance(lut, ImagePointHandler): - return lut.point(self) - - if callable(lut): - # if it isn't a list, it should be a function - if self.mode in ("I", "I;16", "F"): - # check if the function can be used with point_transform - # UNDONE wiredfool -- I think this prevents us from ever doing - # a gamma function point transform on > 8bit images. - scale, offset = _getscaleoffset(lut) - return self._new(self.im.point_transform(scale, offset)) - # for other modes, convert the function to a table - lut = [lut(i) for i in range(256)] * self.im.bands - - if self.mode == "F": - # FIXME: _imaging returns a confusing error message for this case - raise ValueError("point operation not supported for this mode") - - if mode != "F": - lut = [round(i) for i in lut] - return self._new(self.im.point(lut, mode)) - - def putalpha(self, alpha): - """ - Adds or replaces the alpha layer in this image. If the image - does not have an alpha layer, it's converted to "LA" or "RGBA". - The new layer must be either "L" or "1". - - :param alpha: The new alpha layer. This can either be an "L" or "1" - image having the same size as this image, or an integer or - other color value. - """ - - self._ensure_mutable() - - if self.mode not in ("LA", "PA", "RGBA"): - # attempt to promote self to a matching alpha mode - try: - mode = getmodebase(self.mode) + "A" - try: - self.im.setmode(mode) - except (AttributeError, ValueError) as e: - # do things the hard way - im = self.im.convert(mode) - if im.mode not in ("LA", "PA", "RGBA"): - raise ValueError from e # sanity check - self.im = im - self.pyaccess = None - self.mode = self.im.mode - except KeyError as e: - raise ValueError("illegal image mode") from e - - if self.mode in ("LA", "PA"): - band = 1 - else: - band = 3 - - if isImageType(alpha): - # alpha layer - if alpha.mode not in ("1", "L"): - raise ValueError("illegal image mode") - alpha.load() - if alpha.mode == "1": - alpha = alpha.convert("L") - else: - # constant alpha - try: - self.im.fillband(band, alpha) - except (AttributeError, ValueError): - # do things the hard way - alpha = new("L", self.size, alpha) - else: - return - - self.im.putband(alpha.im, band) - - def putdata(self, data, scale=1.0, offset=0.0): - """ - Copies pixel data from a flattened sequence object into the image. The - values should start at the upper left corner (0, 0), continue to the - end of the line, followed directly by the first value of the second - line, and so on. Data will be read until either the image or the - sequence ends. The scale and offset values are used to adjust the - sequence values: **pixel = value*scale + offset**. - - :param data: A flattened sequence object. - :param scale: An optional scale value. The default is 1.0. - :param offset: An optional offset value. The default is 0.0. - """ - - self._ensure_mutable() - - self.im.putdata(data, scale, offset) - - def putpalette(self, data, rawmode="RGB"): - """ - Attaches a palette to this image. The image must be a "P", "PA", "L" - or "LA" image. 
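-
-        For example, attaching a greyscale ramp to an "L" image turns it
-        into an equivalent "P" image (a sketch)::
-
-            im.putpalette([v for v in range(256) for _ in range(3)])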
- - The palette sequence must contain at most 256 colors, made up of one - integer value for each channel in the raw mode. - For example, if the raw mode is "RGB", then it can contain at most 768 - values, made up of red, green and blue values for the corresponding pixel - index in the 256 colors. - If the raw mode is "RGBA", then it can contain at most 1024 values, - containing red, green, blue and alpha values. - - Alternatively, an 8-bit string may be used instead of an integer sequence. - - :param data: A palette sequence (either a list or a string). - :param rawmode: The raw mode of the palette. Either "RGB", "RGBA", or a mode - that can be transformed to "RGB" or "RGBA" (e.g. "R", "BGR;15", "RGBA;L"). - """ - from . import ImagePalette - - if self.mode not in ("L", "LA", "P", "PA"): - raise ValueError("illegal image mode") - if isinstance(data, ImagePalette.ImagePalette): - palette = ImagePalette.raw(data.rawmode, data.palette) - else: - if not isinstance(data, bytes): - data = bytes(data) - palette = ImagePalette.raw(rawmode, data) - self.mode = "PA" if "A" in self.mode else "P" - self.palette = palette - self.palette.mode = "RGB" - self.load() # install new palette - - def putpixel(self, xy, value): - """ - Modifies the pixel at the given position. The color is given as - a single numerical value for single-band images, and a tuple for - multi-band images. In addition to this, RGB and RGBA tuples are - accepted for P and PA images. - - Note that this method is relatively slow. For more extensive changes, - use :py:meth:`~PIL.Image.Image.paste` or the :py:mod:`~PIL.ImageDraw` - module instead. - - See: - - * :py:meth:`~PIL.Image.Image.paste` - * :py:meth:`~PIL.Image.Image.putdata` - * :py:mod:`~PIL.ImageDraw` - - :param xy: The pixel coordinate, given as (x, y). See - :ref:`coordinate-system`. - :param value: The pixel value. - """ - - if self.readonly: - self._copy() - self.load() - - if self.pyaccess: - return self.pyaccess.putpixel(xy, value) - - if ( - self.mode in ("P", "PA") - and isinstance(value, (list, tuple)) - and len(value) in [3, 4] - ): - # RGB or RGBA value for a P or PA image - if self.mode == "PA": - alpha = value[3] if len(value) == 4 else 255 - value = value[:3] - value = self.palette.getcolor(value, self) - if self.mode == "PA": - value = (value, alpha) - return self.im.putpixel(xy, value) - - def remap_palette(self, dest_map, source_palette=None): - """ - Rewrites the image to reorder the palette. - - :param dest_map: A list of indexes into the original palette. - e.g. ``[1,0]`` would swap a two item palette, and ``list(range(256))`` - is the identity transform. - :param source_palette: Bytes or None. - :returns: An :py:class:`~PIL.Image.Image` object. - - """ - from . 
import ImagePalette - - if self.mode not in ("L", "P"): - raise ValueError("illegal image mode") - - bands = 3 - palette_mode = "RGB" - if source_palette is None: - if self.mode == "P": - self.load() - palette_mode = self.im.getpalettemode() - if palette_mode == "RGBA": - bands = 4 - source_palette = self.im.getpalette(palette_mode, palette_mode) - else: # L-mode - source_palette = bytearray(i // 3 for i in range(768)) - - palette_bytes = b"" - new_positions = [0] * 256 - - # pick only the used colors from the palette - for i, oldPosition in enumerate(dest_map): - palette_bytes += source_palette[ - oldPosition * bands : oldPosition * bands + bands - ] - new_positions[oldPosition] = i - - # replace the palette color id of all pixel with the new id - - # Palette images are [0..255], mapped through a 1 or 3 - # byte/color map. We need to remap the whole image - # from palette 1 to palette 2. New_positions is - # an array of indexes into palette 1. Palette 2 is - # palette 1 with any holes removed. - - # We're going to leverage the convert mechanism to use the - # C code to remap the image from palette 1 to palette 2, - # by forcing the source image into 'L' mode and adding a - # mapping 'L' mode palette, then converting back to 'L' - # sans palette thus converting the image bytes, then - # assigning the optimized RGB palette. - - # perf reference, 9500x4000 gif, w/~135 colors - # 14 sec prepatch, 1 sec postpatch with optimization forced. - - mapping_palette = bytearray(new_positions) - - m_im = self.copy() - m_im.mode = "P" - - m_im.palette = ImagePalette.ImagePalette( - palette_mode, palette=mapping_palette * bands - ) - # possibly set palette dirty, then - # m_im.putpalette(mapping_palette, 'L') # converts to 'P' - # or just force it. - # UNDONE -- this is part of the general issue with palettes - m_im.im.putpalette(palette_mode + ";L", m_im.palette.tobytes()) - - m_im = m_im.convert("L") - - m_im.putpalette(palette_bytes, palette_mode) - m_im.palette = ImagePalette.ImagePalette(palette_mode, palette=palette_bytes) - - if "transparency" in self.info: - try: - m_im.info["transparency"] = dest_map.index(self.info["transparency"]) - except ValueError: - if "transparency" in m_im.info: - del m_im.info["transparency"] - - return m_im - - def _get_safe_box(self, size, resample, box): - """Expands the box so it includes adjacent pixels - that may be used by resampling with the given resampling filter. - """ - filter_support = _filters_support[resample] - 0.5 - scale_x = (box[2] - box[0]) / size[0] - scale_y = (box[3] - box[1]) / size[1] - support_x = filter_support * scale_x - support_y = filter_support * scale_y - - return ( - max(0, int(box[0] - support_x)), - max(0, int(box[1] - support_y)), - min(self.size[0], math.ceil(box[2] + support_x)), - min(self.size[1], math.ceil(box[3] + support_y)), - ) - - def resize(self, size, resample=None, box=None, reducing_gap=None): - """ - Returns a resized copy of this image. - - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - :param resample: An optional resampling filter. This can be - one of :py:data:`Resampling.NEAREST`, :py:data:`Resampling.BOX`, - :py:data:`Resampling.BILINEAR`, :py:data:`Resampling.HAMMING`, - :py:data:`Resampling.BICUBIC` or :py:data:`Resampling.LANCZOS`. - If the image has mode "1" or "P", it is always set to - :py:data:`Resampling.NEAREST`. If the image mode specifies a number - of bits, such as "I;16", then the default filter is - :py:data:`Resampling.NEAREST`. 
Otherwise, the default filter is - :py:data:`Resampling.BICUBIC`. See: :ref:`concept-filters`. - :param box: An optional 4-tuple of floats providing - the source image region to be scaled. - The values must be within (0, 0, width, height) rectangle. - If omitted or None, the entire source is used. - :param reducing_gap: Apply optimization by resizing the image - in two steps. First, reducing the image by integer times - using :py:meth:`~PIL.Image.Image.reduce`. - Second, resizing using regular resampling. The last step - changes size no less than by ``reducing_gap`` times. - ``reducing_gap`` may be None (no first step is performed) - or should be greater than 1.0. The bigger ``reducing_gap``, - the closer the result to the fair resampling. - The smaller ``reducing_gap``, the faster resizing. - With ``reducing_gap`` greater or equal to 3.0, the result is - indistinguishable from fair resampling in most cases. - The default value is None (no optimization). - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - if resample is None: - type_special = ";" in self.mode - resample = Resampling.NEAREST if type_special else Resampling.BICUBIC - elif resample not in ( - Resampling.NEAREST, - Resampling.BILINEAR, - Resampling.BICUBIC, - Resampling.LANCZOS, - Resampling.BOX, - Resampling.HAMMING, - ): - message = f"Unknown resampling filter ({resample})." - - filters = [ - f"{filter[1]} ({filter[0]})" - for filter in ( - (Resampling.NEAREST, "Image.Resampling.NEAREST"), - (Resampling.LANCZOS, "Image.Resampling.LANCZOS"), - (Resampling.BILINEAR, "Image.Resampling.BILINEAR"), - (Resampling.BICUBIC, "Image.Resampling.BICUBIC"), - (Resampling.BOX, "Image.Resampling.BOX"), - (Resampling.HAMMING, "Image.Resampling.HAMMING"), - ) - ] - raise ValueError( - message + " Use " + ", ".join(filters[:-1]) + " or " + filters[-1] - ) - - if reducing_gap is not None and reducing_gap < 1.0: - raise ValueError("reducing_gap must be 1.0 or greater") - - size = tuple(size) - - self.load() - if box is None: - box = (0, 0) + self.size - else: - box = tuple(box) - - if self.size == size and box == (0, 0) + self.size: - return self.copy() - - if self.mode in ("1", "P"): - resample = Resampling.NEAREST - - if self.mode in ["LA", "RGBA"] and resample != Resampling.NEAREST: - im = self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode]) - im = im.resize(size, resample, box) - return im.convert(self.mode) - - self.load() - - if reducing_gap is not None and resample != Resampling.NEAREST: - factor_x = int((box[2] - box[0]) / size[0] / reducing_gap) or 1 - factor_y = int((box[3] - box[1]) / size[1] / reducing_gap) or 1 - if factor_x > 1 or factor_y > 1: - reduce_box = self._get_safe_box(size, resample, box) - factor = (factor_x, factor_y) - if callable(self.reduce): - self = self.reduce(factor, box=reduce_box) - else: - self = Image.reduce(self, factor, box=reduce_box) - box = ( - (box[0] - reduce_box[0]) / factor_x, - (box[1] - reduce_box[1]) / factor_y, - (box[2] - reduce_box[0]) / factor_x, - (box[3] - reduce_box[1]) / factor_y, - ) - - return self._new(self.im.resize(size, resample, box)) - - def reduce(self, factor, box=None): - """ - Returns a copy of the image reduced ``factor`` times. - If the size of the image is not dividable by ``factor``, - the resulting size will be rounded up. - - :param factor: A greater than 0 integer or tuple of two integers - for width and height separately. - :param box: An optional 4-tuple of ints providing - the source image region to be reduced. 
- The values must be within ``(0, 0, width, height)`` rectangle. - If omitted or ``None``, the entire source is used. - """ - if not isinstance(factor, (list, tuple)): - factor = (factor, factor) - - if box is None: - box = (0, 0) + self.size - else: - box = tuple(box) - - if factor == (1, 1) and box == (0, 0) + self.size: - return self.copy() - - if self.mode in ["LA", "RGBA"]: - im = self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode]) - im = im.reduce(factor, box) - return im.convert(self.mode) - - self.load() - - return self._new(self.im.reduce(factor, box)) - - def rotate( - self, - angle, - resample=Resampling.NEAREST, - expand=0, - center=None, - translate=None, - fillcolor=None, - ): - """ - Returns a rotated copy of this image. This method returns a - copy of this image, rotated the given number of degrees counter - clockwise around its centre. - - :param angle: In degrees counter clockwise. - :param resample: An optional resampling filter. This can be - one of :py:data:`Resampling.NEAREST` (use nearest neighbour), - :py:data:`Resampling.BILINEAR` (linear interpolation in a 2x2 - environment), or :py:data:`Resampling.BICUBIC` (cubic spline - interpolation in a 4x4 environment). If omitted, or if the image has - mode "1" or "P", it is set to :py:data:`Resampling.NEAREST`. - See :ref:`concept-filters`. - :param expand: Optional expansion flag. If true, expands the output - image to make it large enough to hold the entire rotated image. - If false or omitted, make the output image the same size as the - input image. Note that the expand flag assumes rotation around - the center and no translation. - :param center: Optional center of rotation (a 2-tuple). Origin is - the upper left corner. Default is the center of the image. - :param translate: An optional post-rotate translation (a 2-tuple). - :param fillcolor: An optional color for area outside the rotated image. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - angle = angle % 360.0 - - # Fast paths regardless of filter, as long as we're not - # translating or changing the center. - if not (center or translate): - if angle == 0: - return self.copy() - if angle == 180: - return self.transpose(Transpose.ROTATE_180) - if angle in (90, 270) and (expand or self.width == self.height): - return self.transpose( - Transpose.ROTATE_90 if angle == 90 else Transpose.ROTATE_270 - ) - - # Calculate the affine matrix. Note that this is the reverse - # transformation (from destination image to source) because we - # want to interpolate the (discrete) destination pixel from - # the local area around the (floating) source pixel. - - # The matrix we actually want (note that it operates from the right): - # (1, 0, tx) (1, 0, cx) ( cos a, sin a, 0) (1, 0, -cx) - # (0, 1, ty) * (0, 1, cy) * (-sin a, cos a, 0) * (0, 1, -cy) - # (0, 0, 1) (0, 0, 1) ( 0, 0, 1) (0, 0, 1) - - # The reverse matrix is thus: - # (1, 0, cx) ( cos -a, sin -a, 0) (1, 0, -cx) (1, 0, -tx) - # (0, 1, cy) * (-sin -a, cos -a, 0) * (0, 1, -cy) * (0, 1, -ty) - # (0, 0, 1) ( 0, 0, 1) (0, 0, 1) (0, 0, 1) - - # In any case, the final translation may be updated at the end to - # compensate for the expand flag. - - w, h = self.size - - if translate is None: - post_trans = (0, 0) - else: - post_trans = translate - if center is None: - # FIXME These should be rounded to ints? 
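-            # default to the exact geometric centre; fractional coordinates
-            # are fine here, as they only feed the affine matrix below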
- rotn_center = (w / 2.0, h / 2.0) - else: - rotn_center = center - - angle = -math.radians(angle) - matrix = [ - round(math.cos(angle), 15), - round(math.sin(angle), 15), - 0.0, - round(-math.sin(angle), 15), - round(math.cos(angle), 15), - 0.0, - ] - - def transform(x, y, matrix): - (a, b, c, d, e, f) = matrix - return a * x + b * y + c, d * x + e * y + f - - matrix[2], matrix[5] = transform( - -rotn_center[0] - post_trans[0], -rotn_center[1] - post_trans[1], matrix - ) - matrix[2] += rotn_center[0] - matrix[5] += rotn_center[1] - - if expand: - # calculate output size - xx = [] - yy = [] - for x, y in ((0, 0), (w, 0), (w, h), (0, h)): - x, y = transform(x, y, matrix) - xx.append(x) - yy.append(y) - nw = math.ceil(max(xx)) - math.floor(min(xx)) - nh = math.ceil(max(yy)) - math.floor(min(yy)) - - # We multiply a translation matrix from the right. Because of its - # special form, this is the same as taking the image of the - # translation vector as new translation vector. - matrix[2], matrix[5] = transform(-(nw - w) / 2.0, -(nh - h) / 2.0, matrix) - w, h = nw, nh - - return self.transform( - (w, h), Transform.AFFINE, matrix, resample, fillcolor=fillcolor - ) - - def save(self, fp, format=None, **params): - """ - Saves this image under the given filename. If no format is - specified, the format to use is determined from the filename - extension, if possible. - - Keyword options can be used to provide additional instructions - to the writer. If a writer doesn't recognise an option, it is - silently ignored. The available options are described in the - :doc:`image format documentation - <../handbook/image-file-formats>` for each writer. - - You can use a file object instead of a filename. In this case, - you must always specify the format. The file object must - implement the ``seek``, ``tell``, and ``write`` - methods, and be opened in binary mode. - - :param fp: A filename (string), pathlib.Path object or file object. - :param format: Optional format override. If omitted, the - format to use is determined from the filename extension. - If a file object was used instead of a filename, this - parameter should always be used. - :param params: Extra parameters to the image writer. - :returns: None - :exception ValueError: If the output format could not be determined - from the file name. Use the format option to solve this. - :exception OSError: If the file could not be written. The file - may have been created, and may contain partial data. - """ - - filename = "" - open_fp = False - if isinstance(fp, Path): - filename = str(fp) - open_fp = True - elif is_path(fp): - filename = fp - open_fp = True - elif fp == sys.stdout: - try: - fp = sys.stdout.buffer - except AttributeError: - pass - if not filename and hasattr(fp, "name") and is_path(fp.name): - # only set the name for metadata purposes - filename = fp.name - - # may mutate self! 
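-        # _ensure_mutable() loads the pixel data and, if the image is
-        # read-only (e.g. memory-mapped from the source file), swaps in a
-        # private copy so the writer sees an owned, stable buffer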
- self._ensure_mutable() - - save_all = params.pop("save_all", False) - self.encoderinfo = params - self.encoderconfig = () - - preinit() - - ext = os.path.splitext(filename)[1].lower() - - if not format: - if ext not in EXTENSION: - init() - try: - format = EXTENSION[ext] - except KeyError as e: - raise ValueError(f"unknown file extension: {ext}") from e - - if format.upper() not in SAVE: - init() - if save_all: - save_handler = SAVE_ALL[format.upper()] - else: - save_handler = SAVE[format.upper()] - - created = False - if open_fp: - created = not os.path.exists(filename) - if params.get("append", False): - # Open also for reading ("+"), because TIFF save_all - # writer needs to go back and edit the written data. - fp = builtins.open(filename, "r+b") - else: - fp = builtins.open(filename, "w+b") - - try: - save_handler(self, fp, filename) - except Exception: - if open_fp: - fp.close() - if created: - try: - os.remove(filename) - except PermissionError: - pass - raise - if open_fp: - fp.close() - - def seek(self, frame): - """ - Seeks to the given frame in this sequence file. If you seek - beyond the end of the sequence, the method raises an - ``EOFError`` exception. When a sequence file is opened, the - library automatically seeks to frame 0. - - See :py:meth:`~PIL.Image.Image.tell`. - - If defined, :attr:`~PIL.Image.Image.n_frames` refers to the - number of available frames. - - :param frame: Frame number, starting at 0. - :exception EOFError: If the call attempts to seek beyond the end - of the sequence. - """ - - # overridden by file handlers - if frame != 0: - raise EOFError - - def show(self, title=None): - """ - Displays this image. This method is mainly intended for debugging purposes. - - This method calls :py:func:`PIL.ImageShow.show` internally. You can use - :py:func:`PIL.ImageShow.register` to override its default behaviour. - - The image is first saved to a temporary file. By default, it will be in - PNG format. - - On Unix, the image is then opened using the **display**, **eog** or - **xv** utility, depending on which one can be found. - - On macOS, the image is opened with the native Preview application. - - On Windows, the image is opened with the standard PNG display utility. - - :param title: Optional title to use for the image window, where possible. - """ - - _show(self, title=title) - - def split(self): - """ - Split this image into individual bands. This method returns a - tuple of individual image bands from an image. For example, - splitting an "RGB" image creates three new images each - containing a copy of one of the original bands (red, green, - blue). - - If you need only one band, :py:meth:`~PIL.Image.Image.getchannel` - method can be more convenient and faster. - - :returns: A tuple containing bands. - """ - - self.load() - if self.im.bands == 1: - ims = [self.copy()] - else: - ims = map(self._new, self.im.split()) - return tuple(ims) - - def getchannel(self, channel): - """ - Returns an image containing a single channel of the source image. - - :param channel: What channel to return. Could be index - (0 for "R" channel of "RGB") or channel name - ("A" for alpha channel of "RGBA"). - :returns: An image in "L" mode. - - .. versionadded:: 4.3.0 - """ - self.load() - - if isinstance(channel, str): - try: - channel = self.getbands().index(channel) - except ValueError as e: - raise ValueError(f'The image has no channel "{channel}"') from e - - return self._new(self.im.getband(channel)) - - def tell(self): - """ - Returns the current frame number. 
See :py:meth:`~PIL.Image.Image.seek`. - - If defined, :attr:`~PIL.Image.Image.n_frames` refers to the - number of available frames. - - :returns: Frame number, starting with 0. - """ - return 0 - - def thumbnail(self, size, resample=Resampling.BICUBIC, reducing_gap=2.0): - """ - Make this image into a thumbnail. This method modifies the - image to contain a thumbnail version of itself, no larger than - the given size. This method calculates an appropriate thumbnail - size to preserve the aspect of the image, calls the - :py:meth:`~PIL.Image.Image.draft` method to configure the file reader - (where applicable), and finally resizes the image. - - Note that this function modifies the :py:class:`~PIL.Image.Image` - object in place. If you need to use the full resolution image as well, - apply this method to a :py:meth:`~PIL.Image.Image.copy` of the original - image. - - :param size: Requested size. - :param resample: Optional resampling filter. This can be one - of :py:data:`Resampling.NEAREST`, :py:data:`Resampling.BOX`, - :py:data:`Resampling.BILINEAR`, :py:data:`Resampling.HAMMING`, - :py:data:`Resampling.BICUBIC` or :py:data:`Resampling.LANCZOS`. - If omitted, it defaults to :py:data:`Resampling.BICUBIC`. - (was :py:data:`Resampling.NEAREST` prior to version 2.5.0). - See: :ref:`concept-filters`. - :param reducing_gap: Apply optimization by resizing the image - in two steps. First, reducing the image by integer times - using :py:meth:`~PIL.Image.Image.reduce` or - :py:meth:`~PIL.Image.Image.draft` for JPEG images. - Second, resizing using regular resampling. The last step - changes size no less than by ``reducing_gap`` times. - ``reducing_gap`` may be None (no first step is performed) - or should be greater than 1.0. The bigger ``reducing_gap``, - the closer the result to the fair resampling. - The smaller ``reducing_gap``, the faster resizing. - With ``reducing_gap`` greater or equal to 3.0, the result is - indistinguishable from fair resampling in most cases. - The default value is 2.0 (very close to fair resampling - while still being faster in many cases). - :returns: None - """ - - provided_size = tuple(map(math.floor, size)) - - def preserve_aspect_ratio(): - def round_aspect(number, key): - return max(min(math.floor(number), math.ceil(number), key=key), 1) - - x, y = provided_size - if x >= self.width and y >= self.height: - return - - aspect = self.width / self.height - if x / y >= aspect: - x = round_aspect(y * aspect, key=lambda n: abs(aspect - n / y)) - else: - y = round_aspect( - x / aspect, key=lambda n: 0 if n == 0 else abs(aspect - x / n) - ) - return x, y - - box = None - if reducing_gap is not None: - size = preserve_aspect_ratio() - if size is None: - return - - res = self.draft(None, (size[0] * reducing_gap, size[1] * reducing_gap)) - if res is not None: - box = res[1] - if box is None: - self.load() - - # load() may have changed the size of the image - size = preserve_aspect_ratio() - if size is None: - return - - if self.size != size: - im = self.resize(size, resample, box=box, reducing_gap=reducing_gap) - - self.im = im.im - self._size = size - self.mode = self.im.mode - - self.readonly = 0 - self.pyaccess = None - - # FIXME: the different transform methods need further explanation - # instead of bloating the method docs, add a separate chapter. - def transform( - self, - size, - method, - data=None, - resample=Resampling.NEAREST, - fill=1, - fillcolor=None, - ): - """ - Transforms this image. 
This method creates a new image with the - given size, and the same mode as the original, and copies data - to the new image using the given transform. - - :param size: The output size. - :param method: The transformation method. This is one of - :py:data:`Transform.EXTENT` (cut out a rectangular subregion), - :py:data:`Transform.AFFINE` (affine transform), - :py:data:`Transform.PERSPECTIVE` (perspective transform), - :py:data:`Transform.QUAD` (map a quadrilateral to a rectangle), or - :py:data:`Transform.MESH` (map a number of source quadrilaterals - in one operation). - - It may also be an :py:class:`~PIL.Image.ImageTransformHandler` - object:: - - class Example(Image.ImageTransformHandler): - def transform(self, size, data, resample, fill=1): - # Return result - - It may also be an object with a ``method.getdata`` method - that returns a tuple supplying new ``method`` and ``data`` values:: - - class Example: - def getdata(self): - method = Image.Transform.EXTENT - data = (0, 0, 100, 100) - return method, data - :param data: Extra data to the transformation method. - :param resample: Optional resampling filter. It can be one of - :py:data:`Resampling.NEAREST` (use nearest neighbour), - :py:data:`Resampling.BILINEAR` (linear interpolation in a 2x2 - environment), or :py:data:`Resampling.BICUBIC` (cubic spline - interpolation in a 4x4 environment). If omitted, or if the image - has mode "1" or "P", it is set to :py:data:`Resampling.NEAREST`. - See: :ref:`concept-filters`. - :param fill: If ``method`` is an - :py:class:`~PIL.Image.ImageTransformHandler` object, this is one of - the arguments passed to it. Otherwise, it is unused. - :param fillcolor: Optional fill color for the area outside the - transform in the output image. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - if self.mode in ("LA", "RGBA") and resample != Resampling.NEAREST: - return ( - self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode]) - .transform(size, method, data, resample, fill, fillcolor) - .convert(self.mode) - ) - - if isinstance(method, ImageTransformHandler): - return method.transform(size, self, resample=resample, fill=fill) - - if hasattr(method, "getdata"): - # compatibility w. old-style transform objects - method, data = method.getdata() - - if data is None: - raise ValueError("missing method data") - - im = new(self.mode, size, fillcolor) - if self.mode == "P" and self.palette: - im.palette = self.palette.copy() - im.info = self.info.copy() - if method == Transform.MESH: - # list of quads - for box, quad in data: - im.__transformer( - box, self, Transform.QUAD, quad, resample, fillcolor is None - ) - else: - im.__transformer( - (0, 0) + size, self, method, data, resample, fillcolor is None - ) - - return im - - def __transformer( - self, box, image, method, data, resample=Resampling.NEAREST, fill=1 - ): - w = box[2] - box[0] - h = box[3] - box[1] - - if method == Transform.AFFINE: - data = data[:6] - - elif method == Transform.EXTENT: - # convert extent to an affine transform - x0, y0, x1, y1 = data - xs = (x1 - x0) / w - ys = (y1 - y0) / h - method = Transform.AFFINE - data = (xs, 0, x0, 0, ys, y0) - - elif method == Transform.PERSPECTIVE: - data = data[:8] - - elif method == Transform.QUAD: - # quadrilateral warp. data specifies the four corners - # given as NW, SW, SE, and NE. 
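-            # The eight coefficients implement a bilinear map from the
-            # target rectangle onto the quad: for a target point (x, y),
-            #   source = nw + (ne - nw) * (x / w) + (sw - nw) * (y / h)
-            #          + (se - sw - ne + nw) * (x / w) * (y / h)
-            # (As and At pre-divide by the box width and height.)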
- nw = data[:2] - sw = data[2:4] - se = data[4:6] - ne = data[6:8] - x0, y0 = nw - As = 1.0 / w - At = 1.0 / h - data = ( - x0, - (ne[0] - x0) * As, - (sw[0] - x0) * At, - (se[0] - sw[0] - ne[0] + x0) * As * At, - y0, - (ne[1] - y0) * As, - (sw[1] - y0) * At, - (se[1] - sw[1] - ne[1] + y0) * As * At, - ) - - else: - raise ValueError("unknown transformation method") - - if resample not in ( - Resampling.NEAREST, - Resampling.BILINEAR, - Resampling.BICUBIC, - ): - if resample in (Resampling.BOX, Resampling.HAMMING, Resampling.LANCZOS): - message = { - Resampling.BOX: "Image.Resampling.BOX", - Resampling.HAMMING: "Image.Resampling.HAMMING", - Resampling.LANCZOS: "Image.Resampling.LANCZOS", - }[resample] + f" ({resample}) cannot be used." - else: - message = f"Unknown resampling filter ({resample})." - - filters = [ - f"{filter[1]} ({filter[0]})" - for filter in ( - (Resampling.NEAREST, "Image.Resampling.NEAREST"), - (Resampling.BILINEAR, "Image.Resampling.BILINEAR"), - (Resampling.BICUBIC, "Image.Resampling.BICUBIC"), - ) - ] - raise ValueError( - message + " Use " + ", ".join(filters[:-1]) + " or " + filters[-1] - ) - - image.load() - - self.load() - - if image.mode in ("1", "P"): - resample = Resampling.NEAREST - - self.im.transform2(box, image.im, method, data, resample, fill) - - def transpose(self, method): - """ - Transpose image (flip or rotate in 90 degree steps) - - :param method: One of :py:data:`Transpose.FLIP_LEFT_RIGHT`, - :py:data:`Transpose.FLIP_TOP_BOTTOM`, :py:data:`Transpose.ROTATE_90`, - :py:data:`Transpose.ROTATE_180`, :py:data:`Transpose.ROTATE_270`, - :py:data:`Transpose.TRANSPOSE` or :py:data:`Transpose.TRANSVERSE`. - :returns: Returns a flipped or rotated copy of this image. - """ - - self.load() - return self._new(self.im.transpose(method)) - - def effect_spread(self, distance): - """ - Randomly spread pixels in an image. - - :param distance: Distance to spread pixels. - """ - self.load() - return self._new(self.im.effect_spread(distance)) - - def toqimage(self): - """Returns a QImage copy of this image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - raise ImportError("Qt bindings are not installed") - return ImageQt.toqimage(self) - - def toqpixmap(self): - """Returns a QPixmap copy of this image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - raise ImportError("Qt bindings are not installed") - return ImageQt.toqpixmap(self) - - -# -------------------------------------------------------------------- -# Abstract handlers. 
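-
-# A point handler only needs a point(self, im) method; a hedged sketch
-# (not part of the library):
-#
-#     class Invert(ImagePointHandler):
-#         def point(self, im):
-#             return im.point(lambda i: 255 - i)
-#
-# Image.point() detects such an instance and dispatches to it, so
-# im.point(Invert()) calls Invert().point(im).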
- - -class ImagePointHandler: - """ - Used as a mixin by point transforms - (for use with :py:meth:`~PIL.Image.Image.point`) - """ - - pass - - -class ImageTransformHandler: - """ - Used as a mixin by geometry transforms - (for use with :py:meth:`~PIL.Image.Image.transform`) - """ - - pass - - -# -------------------------------------------------------------------- -# Factories - -# -# Debugging - - -def _wedge(): - """Create greyscale wedge (for debugging only)""" - - return Image()._new(core.wedge("L")) - - -def _check_size(size): - """ - Common check to enforce type and sanity check on size tuples - - :param size: Should be a 2 tuple of (width, height) - :returns: True, or raises a ValueError - """ - - if not isinstance(size, (list, tuple)): - raise ValueError("Size must be a tuple") - if len(size) != 2: - raise ValueError("Size must be a tuple of length 2") - if size[0] < 0 or size[1] < 0: - raise ValueError("Width and height must be >= 0") - - return True - - -def new(mode, size, color=0): - """ - Creates a new image with the given mode and size. - - :param mode: The mode to use for the new image. See: - :ref:`concept-modes`. - :param size: A 2-tuple, containing (width, height) in pixels. - :param color: What color to use for the image. Default is black. - If given, this should be a single integer or floating point value - for single-band modes, and a tuple for multi-band modes (one value - per band). When creating RGB images, you can also use color - strings as supported by the ImageColor module. If the color is - None, the image is not initialised. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - _check_size(size) - - if color is None: - # don't initialize - return Image()._new(core.new(mode, size)) - - if isinstance(color, str): - # css3-style specifier - - from . import ImageColor - - color = ImageColor.getcolor(color, mode) - - im = Image() - if mode == "P" and isinstance(color, (list, tuple)) and len(color) in [3, 4]: - # RGB or RGBA value for a P image - from . import ImagePalette - - im.palette = ImagePalette.ImagePalette() - color = im.palette.getcolor(color) - return im._new(core.fill(mode, size, color)) - - -def frombytes(mode, size, data, decoder_name="raw", *args): - """ - Creates a copy of an image memory from pixel data in a buffer. - - In its simplest form, this function takes three arguments - (mode, size, and unpacked pixel data). - - You can also use any pixel decoder supported by PIL. For more - information on available decoders, see the section - :ref:`Writing Your Own File Codec `. - - Note that this function decodes pixel data only, not entire images. - If you have an entire image in a string, wrap it in a - :py:class:`~io.BytesIO` object, and use :py:func:`~PIL.Image.open` to load - it. - - :param mode: The image mode. See: :ref:`concept-modes`. - :param size: The image size. - :param data: A byte buffer containing raw data for the given mode. - :param decoder_name: What decoder to use. - :param args: Additional parameters for the given decoder. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - _check_size(size) - - # may pass tuple instead of argument list - if len(args) == 1 and isinstance(args[0], tuple): - args = args[0] - - if decoder_name == "raw" and args == (): - args = mode - - im = new(mode, size) - im.frombytes(data, decoder_name, args) - return im - - -def frombuffer(mode, size, data, decoder_name="raw", *args): - """ - Creates an image memory referencing pixel data in a byte buffer. 
- - This function is similar to :py:func:`~PIL.Image.frombytes`, but uses data - in the byte buffer, where possible. This means that changes to the - original buffer object are reflected in this image). Not all modes can - share memory; supported modes include "L", "RGBX", "RGBA", and "CMYK". - - Note that this function decodes pixel data only, not entire images. - If you have an entire image file in a string, wrap it in a - :py:class:`~io.BytesIO` object, and use :py:func:`~PIL.Image.open` to load it. - - In the current version, the default parameters used for the "raw" decoder - differs from that used for :py:func:`~PIL.Image.frombytes`. This is a - bug, and will probably be fixed in a future release. The current release - issues a warning if you do this; to disable the warning, you should provide - the full set of parameters. See below for details. - - :param mode: The image mode. See: :ref:`concept-modes`. - :param size: The image size. - :param data: A bytes or other buffer object containing raw - data for the given mode. - :param decoder_name: What decoder to use. - :param args: Additional parameters for the given decoder. For the - default encoder ("raw"), it's recommended that you provide the - full set of parameters:: - - frombuffer(mode, size, data, "raw", mode, 0, 1) - - :returns: An :py:class:`~PIL.Image.Image` object. - - .. versionadded:: 1.1.4 - """ - - _check_size(size) - - # may pass tuple instead of argument list - if len(args) == 1 and isinstance(args[0], tuple): - args = args[0] - - if decoder_name == "raw": - if args == (): - args = mode, 0, 1 - if args[0] in _MAPMODES: - im = new(mode, (1, 1)) - im = im._new(core.map_buffer(data, size, decoder_name, 0, args)) - if mode == "P": - from . import ImagePalette - - im.palette = ImagePalette.ImagePalette("RGB", im.im.getpalette("RGB")) - im.readonly = 1 - return im - - return frombytes(mode, size, data, decoder_name, args) - - -def fromarray(obj, mode=None): - """ - Creates an image memory from an object exporting the array interface - (using the buffer protocol). - - If ``obj`` is not contiguous, then the ``tobytes`` method is called - and :py:func:`~PIL.Image.frombuffer` is used. - - If you have an image in NumPy:: - - from PIL import Image - import numpy as np - im = Image.open("hopper.jpg") - a = np.asarray(im) - - Then this can be used to convert it to a Pillow image:: - - im = Image.fromarray(a) - - :param obj: Object with array interface - :param mode: Optional mode to use when reading ``obj``. Will be determined from - type if ``None``. - - This will not be used to convert the data after reading, but will be used to - change how the data is read:: - - from PIL import Image - import numpy as np - a = np.full((1, 1), 300) - im = Image.fromarray(a, mode="L") - im.getpixel((0, 0)) # 44 - im = Image.fromarray(a, mode="RGB") - im.getpixel((0, 0)) # (44, 1, 0) - - See: :ref:`concept-modes` for general information about modes. - :returns: An image object. - - .. 
versionadded:: 1.1.6 - """ - arr = obj.__array_interface__ - shape = arr["shape"] - ndim = len(shape) - strides = arr.get("strides", None) - if mode is None: - try: - typekey = (1, 1) + shape[2:], arr["typestr"] - except KeyError as e: - raise TypeError("Cannot handle this data type") from e - try: - mode, rawmode = _fromarray_typemap[typekey] - except KeyError as e: - raise TypeError("Cannot handle this data type: %s, %s" % typekey) from e - else: - rawmode = mode - if mode in ["1", "L", "I", "P", "F"]: - ndmax = 2 - elif mode == "RGB": - ndmax = 3 - else: - ndmax = 4 - if ndim > ndmax: - raise ValueError(f"Too many dimensions: {ndim} > {ndmax}.") - - size = 1 if ndim == 1 else shape[1], shape[0] - if strides is not None: - if hasattr(obj, "tobytes"): - obj = obj.tobytes() - else: - obj = obj.tostring() - - return frombuffer(mode, size, obj, "raw", rawmode, 0, 1) - - -def fromqimage(im): - """Creates an image instance from a QImage image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - raise ImportError("Qt bindings are not installed") - return ImageQt.fromqimage(im) - - -def fromqpixmap(im): - """Creates an image instance from a QPixmap image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - raise ImportError("Qt bindings are not installed") - return ImageQt.fromqpixmap(im) - - -_fromarray_typemap = { - # (shape, typestr) => mode, rawmode - # first two members of shape are set to one - ((1, 1), "|b1"): ("1", "1;8"), - ((1, 1), "|u1"): ("L", "L"), - ((1, 1), "|i1"): ("I", "I;8"), - ((1, 1), "u2"): ("I", "I;16B"), - ((1, 1), "i2"): ("I", "I;16BS"), - ((1, 1), "u4"): ("I", "I;32B"), - ((1, 1), "i4"): ("I", "I;32BS"), - ((1, 1), "f4"): ("F", "F;32BF"), - ((1, 1), "f8"): ("F", "F;64BF"), - ((1, 1, 2), "|u1"): ("LA", "LA"), - ((1, 1, 3), "|u1"): ("RGB", "RGB"), - ((1, 1, 4), "|u1"): ("RGBA", "RGBA"), - # shortcuts: - ((1, 1), _ENDIAN + "i4"): ("I", "I"), - ((1, 1), _ENDIAN + "f4"): ("F", "F"), -} - - -def _decompression_bomb_check(size): - if MAX_IMAGE_PIXELS is None: - return - - pixels = size[0] * size[1] - - if pixels > 2 * MAX_IMAGE_PIXELS: - raise DecompressionBombError( - f"Image size ({pixels} pixels) exceeds limit of {2 * MAX_IMAGE_PIXELS} " - "pixels, could be decompression bomb DOS attack." - ) - - if pixels > MAX_IMAGE_PIXELS: - warnings.warn( - f"Image size ({pixels} pixels) exceeds limit of {MAX_IMAGE_PIXELS} pixels, " - "could be decompression bomb DOS attack.", - DecompressionBombWarning, - ) - - -def open(fp, mode="r", formats=None): - """ - Opens and identifies the given image file. - - This is a lazy operation; this function identifies the file, but - the file remains open and the actual image data is not read from - the file until you try to process the data (or call the - :py:meth:`~PIL.Image.Image.load` method). See - :py:func:`~PIL.Image.new`. See :ref:`file-handling`. - - :param fp: A filename (string), pathlib.Path object or a file object. - The file object must implement ``file.read``, - ``file.seek``, and ``file.tell`` methods, - and be opened in binary mode. - :param mode: The mode. If given, this argument must be "r". - :param formats: A list or tuple of formats to attempt to load the file in. - This can be used to restrict the set of formats checked. - Pass ``None`` to try all supported formats. You can print the set of - available formats by running ``python3 -m PIL`` or using - the :py:func:`PIL.features.pilinfo` function. - :returns: An :py:class:`~PIL.Image.Image` object. 
- :exception FileNotFoundError: If the file cannot be found. - :exception PIL.UnidentifiedImageError: If the image cannot be opened and - identified. - :exception ValueError: If the ``mode`` is not "r", or if a ``StringIO`` - instance is used for ``fp``. - :exception TypeError: If ``formats`` is not ``None``, a list or a tuple. - """ - - if mode != "r": - raise ValueError(f"bad mode {repr(mode)}") - elif isinstance(fp, io.StringIO): - raise ValueError( - "StringIO cannot be used to open an image. " - "Binary data must be used instead." - ) - - if formats is None: - formats = ID - elif not isinstance(formats, (list, tuple)): - raise TypeError("formats must be a list or tuple") - - exclusive_fp = False - filename = "" - if isinstance(fp, Path): - filename = str(fp.resolve()) - elif is_path(fp): - filename = fp - - if filename: - fp = builtins.open(filename, "rb") - exclusive_fp = True - - try: - fp.seek(0) - except (AttributeError, io.UnsupportedOperation): - fp = io.BytesIO(fp.read()) - exclusive_fp = True - - prefix = fp.read(16) - - preinit() - - accept_warnings = [] - - def _open_core(fp, filename, prefix, formats): - for i in formats: - i = i.upper() - if i not in OPEN: - init() - try: - factory, accept = OPEN[i] - result = not accept or accept(prefix) - if type(result) in [str, bytes]: - accept_warnings.append(result) - elif result: - fp.seek(0) - im = factory(fp, filename) - _decompression_bomb_check(im.size) - return im - except (SyntaxError, IndexError, TypeError, struct.error): - # Leave disabled by default, spams the logs with image - # opening failures that are entirely expected. - # logger.debug("", exc_info=True) - continue - except BaseException: - if exclusive_fp: - fp.close() - raise - return None - - im = _open_core(fp, filename, prefix, formats) - - if im is None: - if init(): - im = _open_core(fp, filename, prefix, formats) - - if im: - im._exclusive_fp = exclusive_fp - return im - - if exclusive_fp: - fp.close() - for message in accept_warnings: - warnings.warn(message) - raise UnidentifiedImageError( - "cannot identify image file %r" % (filename if filename else fp) - ) - - -# -# Image processing. - - -def alpha_composite(im1, im2): - """ - Alpha composite im2 over im1. - - :param im1: The first image. Must have mode RGBA. - :param im2: The second image. Must have mode RGBA, and the same size as - the first image. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - im1.load() - im2.load() - return im1._new(core.alpha_composite(im1.im, im2.im)) - - -def blend(im1, im2, alpha): - """ - Creates a new image by interpolating between two input images, using - a constant alpha:: - - out = image1 * (1.0 - alpha) + image2 * alpha - - :param im1: The first image. - :param im2: The second image. Must have the same mode and size as - the first image. - :param alpha: The interpolation alpha factor. If alpha is 0.0, a - copy of the first image is returned. If alpha is 1.0, a copy of - the second image is returned. There are no restrictions on the - alpha value. If necessary, the result is clipped to fit into - the allowed output range. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - im1.load() - im2.load() - return im1._new(core.blend(im1.im, im2.im, alpha)) - - -def composite(image1, image2, mask): - """ - Create composite image by blending images using a transparency mask. - - :param image1: The first image. - :param image2: The second image. Must have the same mode and - size as the first image. - :param mask: A mask image. 
This image can have mode - "1", "L", or "RGBA", and must have the same size as the - other two images. - """ - - image = image2.copy() - image.paste(image1, None, mask) - return image - - -def eval(image, *args): - """ - Applies the function (which should take one argument) to each pixel - in the given image. If the image has more than one band, the same - function is applied to each band. Note that the function is - evaluated once for each possible pixel value, so you cannot use - random components or other generators. - - :param image: The input image. - :param function: A function object, taking one integer argument. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - return image.point(args[0]) - - -def merge(mode, bands): - """ - Merge a set of single band images into a new multiband image. - - :param mode: The mode to use for the output image. See: - :ref:`concept-modes`. - :param bands: A sequence containing one single-band image for - each band in the output image. All bands must have the - same size. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - if getmodebands(mode) != len(bands) or "*" in mode: - raise ValueError("wrong number of bands") - for band in bands[1:]: - if band.mode != getmodetype(mode): - raise ValueError("mode mismatch") - if band.size != bands[0].size: - raise ValueError("size mismatch") - for band in bands: - band.load() - return bands[0]._new(core.merge(mode, *[b.im for b in bands])) - - -# -------------------------------------------------------------------- -# Plugin registry - - -def register_open(id, factory, accept=None): - """ - Register an image file plugin. This function should not be used - in application code. - - :param id: An image format identifier. - :param factory: An image file factory method. - :param accept: An optional function that can be used to quickly - reject images having another format. - """ - id = id.upper() - ID.append(id) - OPEN[id] = factory, accept - - -def register_mime(id, mimetype): - """ - Registers an image MIME type. This function should not be used - in application code. - - :param id: An image format identifier. - :param mimetype: The image MIME type for this format. - """ - MIME[id.upper()] = mimetype - - -def register_save(id, driver): - """ - Registers an image save function. This function should not be - used in application code. - - :param id: An image format identifier. - :param driver: A function to save images in this format. - """ - SAVE[id.upper()] = driver - - -def register_save_all(id, driver): - """ - Registers an image function to save all the frames - of a multiframe format. This function should not be - used in application code. - - :param id: An image format identifier. - :param driver: A function to save images in this format. - """ - SAVE_ALL[id.upper()] = driver - - -def register_extension(id, extension): - """ - Registers an image extension. This function should not be - used in application code. - - :param id: An image format identifier. - :param extension: An extension used for this format. - """ - EXTENSION[extension.lower()] = id.upper() - - -def register_extensions(id, extensions): - """ - Registers image extensions. This function should not be - used in application code. - - :param id: An image format identifier. - :param extensions: A list of extensions used for this format. 
- """ - for extension in extensions: - register_extension(id, extension) - - -def registered_extensions(): - """ - Returns a dictionary containing all file extensions belonging - to registered plugins - """ - if not EXTENSION: - init() - return EXTENSION - - -def register_decoder(name, decoder): - """ - Registers an image decoder. This function should not be - used in application code. - - :param name: The name of the decoder - :param decoder: A callable(mode, args) that returns an - ImageFile.PyDecoder object - - .. versionadded:: 4.1.0 - """ - DECODERS[name] = decoder - - -def register_encoder(name, encoder): - """ - Registers an image encoder. This function should not be - used in application code. - - :param name: The name of the encoder - :param encoder: A callable(mode, args) that returns an - ImageFile.PyEncoder object - - .. versionadded:: 4.1.0 - """ - ENCODERS[name] = encoder - - -# -------------------------------------------------------------------- -# Simple display support. - - -def _show(image, **options): - from . import ImageShow - - ImageShow.show(image, **options) - - -# -------------------------------------------------------------------- -# Effects - - -def effect_mandelbrot(size, extent, quality): - """ - Generate a Mandelbrot set covering the given extent. - - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - :param extent: The extent to cover, as a 4-tuple: - (x0, y0, x1, y1). - :param quality: Quality. - """ - return Image()._new(core.effect_mandelbrot(size, extent, quality)) - - -def effect_noise(size, sigma): - """ - Generate Gaussian noise centered around 128. - - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - :param sigma: Standard deviation of noise. - """ - return Image()._new(core.effect_noise(size, sigma)) - - -def linear_gradient(mode): - """ - Generate 256x256 linear gradient from black to white, top to bottom. - - :param mode: Input mode. - """ - return Image()._new(core.linear_gradient(mode)) - - -def radial_gradient(mode): - """ - Generate 256x256 radial gradient from black to white, centre to edge. - - :param mode: Input mode. 
- """ - return Image()._new(core.radial_gradient(mode)) - - -# -------------------------------------------------------------------- -# Resources - - -def _apply_env_variables(env=None): - if env is None: - env = os.environ - - for var_name, setter in [ - ("PILLOW_ALIGNMENT", core.set_alignment), - ("PILLOW_BLOCK_SIZE", core.set_block_size), - ("PILLOW_BLOCKS_MAX", core.set_blocks_max), - ]: - if var_name not in env: - continue - - var = env[var_name].lower() - - units = 1 - for postfix, mul in [("k", 1024), ("m", 1024 * 1024)]: - if var.endswith(postfix): - units = mul - var = var[: -len(postfix)] - - try: - var = int(var) * units - except ValueError: - warnings.warn(f"{var_name} is not int") - continue - - try: - setter(var) - except ValueError as e: - warnings.warn(f"{var_name}: {e}") - - -_apply_env_variables() -atexit.register(core.clear_cache) - - -class Exif(MutableMapping): - endian = None - bigtiff = False - - def __init__(self): - self._data = {} - self._ifds = {} - self._info = None - self._loaded_exif = None - - def _fixup(self, value): - try: - if len(value) == 1 and isinstance(value, tuple): - return value[0] - except Exception: - pass - return value - - def _fixup_dict(self, src_dict): - # Helper function - # returns a dict with any single item tuples/lists as individual values - return {k: self._fixup(v) for k, v in src_dict.items()} - - def _get_ifd_dict(self, offset): - try: - # an offset pointer to the location of the nested embedded IFD. - # It should be a long, but may be corrupted. - self.fp.seek(offset) - except (KeyError, TypeError): - pass - else: - from . import TiffImagePlugin - - info = TiffImagePlugin.ImageFileDirectory_v2(self.head) - info.load(self.fp) - return self._fixup_dict(info) - - def _get_head(self): - version = b"\x2B" if self.bigtiff else b"\x2A" - if self.endian == "<": - head = b"II" + version + b"\x00" + o32le(8) - else: - head = b"MM\x00" + version + o32be(8) - if self.bigtiff: - head += o32le(8) if self.endian == "<" else o32be(8) - head += b"\x00\x00\x00\x00" - return head - - def load(self, data): - # Extract EXIF information. This is highly experimental, - # and is likely to be replaced with something better in a future - # version. - - # The EXIF record consists of a TIFF file embedded in a JPEG - # application marker (!). - if data == self._loaded_exif: - return - self._loaded_exif = data - self._data.clear() - self._ifds.clear() - if data and data.startswith(b"Exif\x00\x00"): - data = data[6:] - if not data: - self._info = None - return - - self.fp = io.BytesIO(data) - self.head = self.fp.read(8) - # process dictionary - from . import TiffImagePlugin - - self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head) - self.endian = self._info._endian - self.fp.seek(self._info.next) - self._info.load(self.fp) - - def load_from_fp(self, fp, offset=None): - self._loaded_exif = None - self._data.clear() - self._ifds.clear() - - # process dictionary - from . 
import TiffImagePlugin - - self.fp = fp - if offset is not None: - self.head = self._get_head() - else: - self.head = self.fp.read(8) - self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head) - if self.endian is None: - self.endian = self._info._endian - if offset is None: - offset = self._info.next - self.fp.seek(offset) - self._info.load(self.fp) - - def _get_merged_dict(self): - merged_dict = dict(self) - - # get EXIF extension - if 0x8769 in self: - ifd = self._get_ifd_dict(self[0x8769]) - if ifd: - merged_dict.update(ifd) - - # GPS - if 0x8825 in self: - merged_dict[0x8825] = self._get_ifd_dict(self[0x8825]) - - return merged_dict - - def tobytes(self, offset=8): - from . import TiffImagePlugin - - head = self._get_head() - ifd = TiffImagePlugin.ImageFileDirectory_v2(ifh=head) - for tag, value in self.items(): - if tag in [0x8769, 0x8225, 0x8825] and not isinstance(value, dict): - value = self.get_ifd(tag) - if ( - tag == 0x8769 - and 0xA005 in value - and not isinstance(value[0xA005], dict) - ): - value = value.copy() - value[0xA005] = self.get_ifd(0xA005) - ifd[tag] = value - return b"Exif\x00\x00" + head + ifd.tobytes(offset) - - def get_ifd(self, tag): - if tag not in self._ifds: - if tag in [0x8769, 0x8825]: - # exif, gpsinfo - if tag in self: - self._ifds[tag] = self._get_ifd_dict(self[tag]) - elif tag in [0xA005, 0x927C]: - # interop, makernote - if 0x8769 not in self._ifds: - self.get_ifd(0x8769) - tag_data = self._ifds[0x8769][tag] - if tag == 0x927C: - # makernote - from .TiffImagePlugin import ImageFileDirectory_v2 - - if tag_data[:8] == b"FUJIFILM": - ifd_offset = i32le(tag_data, 8) - ifd_data = tag_data[ifd_offset:] - - makernote = {} - for i in range(0, struct.unpack("<H", ifd_data[:2])[0]): - ifd_tag, typ, count, data = struct.unpack( - "<HHL4s", ifd_data[i * 12 + 2 : (i + 1) * 12 + 2] - ) - try: - ( - unit_size, - handler, - ) = ImageFileDirectory_v2._load_dispatch[typ] - except KeyError: - continue - size = count * unit_size - if size > 4: - (offset,) = struct.unpack("<L", data) - data = ifd_data[offset - 12 : offset + size - 12] - else: - data = data[:size] - - if len(data) != size: - warnings.warn( - "Possibly corrupt EXIF MakerNote data.  " - f"Expecting to read {size} bytes but only got " - f"{len(data)}. Skipping tag {ifd_tag}" - ) - continue - - if not data: - continue - - makernote[ifd_tag] = handler( - ImageFileDirectory_v2(), data, False - ) - self._ifds[tag] = dict(self._fixup_dict(makernote)) - elif self.get(0x010F) == "Nintendo": - makernote = {} - for i in range(0, struct.unpack(">H", tag_data[:2])[0]): - ifd_tag, typ, count, data = struct.unpack( - ">HHL4s", tag_data[i * 12 + 2 : (i + 1) * 12 + 2] - ) - if ifd_tag == 0x1101: - # CameraInfo - (offset,) = struct.unpack(">L", data) - self.fp.seek(offset) - - camerainfo = {"ModelID": self.fp.read(4)} - - self.fp.read(4) - # Seconds since 2000 - camerainfo["TimeStamp"] = i32le(self.fp.read(12)) - - self.fp.read(4) - camerainfo["InternalSerialNumber"] = self.fp.read(4) - - self.fp.read(12) - parallax = self.fp.read(4) - handler = ImageFileDirectory_v2._load_dispatch[ - TiffTags.FLOAT - ][1] - camerainfo["Parallax"] = handler( - ImageFileDirectory_v2(), parallax, False - ) - - self.fp.read(4) - camerainfo["Category"] = self.fp.read(2) - - makernote = {0x1101: dict(self._fixup_dict(camerainfo))} - self._ifds[tag] = makernote - else: - # interop - self._ifds[tag] = self._get_ifd_dict(tag_data) - return self._ifds.get(tag, {}) - - def __str__(self): - if self._info is not None: - # Load all keys into self._data - for tag in self._info.keys(): - self[tag] - - return str(self._data) - - def __len__(self): - keys = set(self._data) - if self._info is not None: - keys.update(self._info) - return len(keys) - - def __getitem__(self, tag): - if self._info is not None and tag not in self._data and tag in self._info: - self._data[tag] = self._fixup(self._info[tag]) - del self._info[tag] - return self._data[tag] - - def __contains__(self, tag): - return tag in self._data or (self._info is not None and tag in self._info) - - def __setitem__(self, tag, value): - if self._info is not None and tag in self._info: - del self._info[tag] - self._data[tag] = value - - def __delitem__(self, tag): - if self._info is not None and tag in self._info: - del self._info[tag] - else: - del self._data[tag] - - def __iter__(self): - 
keys = set(self._data) - if self._info is not None: - keys.update(self._info) - return iter(keys) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/us_population_pyramid_over_time.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/us_population_pyramid_over_time.py deleted file mode 100644 index 624db71d5421a00400113467a76c7a23b4f25c9e..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/us_population_pyramid_over_time.py +++ /dev/null @@ -1,55 +0,0 @@ -''' -US Population Pyramid Over Time -=============================== -A population pyramid shows the distribution of age groups within a population. -It uses a slider widget that is bound to the year to visualize the age -distribution over time. -''' -# category: case studies -import altair as alt -from vega_datasets import data - -source = data.population.url - -slider = alt.binding_range(min=1850, max=2000, step=10) -select_year = alt.selection_single(name='year', fields=['year'], - bind=slider, init={'year': 2000}) - -base = alt.Chart(source).add_selection( - select_year -).transform_filter( - select_year -).transform_calculate( - gender=alt.expr.if_(alt.datum.sex == 1, 'Male', 'Female') -).properties( - width=250 -) - - -color_scale = alt.Scale(domain=['Male', 'Female'], - range=['#1f77b4', '#e377c2']) - -left = base.transform_filter( - alt.datum.gender == 'Female' -).encode( - y=alt.Y('age:O', axis=None), - x=alt.X('sum(people):Q', - title='population', - sort=alt.SortOrder('descending')), - color=alt.Color('gender:N', scale=color_scale, legend=None) -).mark_bar().properties(title='Female') - -middle = base.encode( - y=alt.Y('age:O', axis=None), - text=alt.Text('age:Q'), -).mark_text().properties(width=20) - -right = base.transform_filter( - alt.datum.gender == 'Male' -).encode( - y=alt.Y('age:O', axis=None), - x=alt.X('sum(people):Q', title='population'), - color=alt.Color('gender:N', scale=color_scale, legend=None) -).mark_bar().properties(title='Male') - -alt.concat(left, middle, right, spacing=5) \ No newline at end of file diff --git a/spaces/asteph/harrywang-pokemon-lora/app.py b/spaces/asteph/harrywang-pokemon-lora/app.py deleted file mode 100644 index c9837672ddc0c4e17dd4cf655c786fd4ef92317d..0000000000000000000000000000000000000000 --- a/spaces/asteph/harrywang-pokemon-lora/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/harrywang/pokemon-lora").launch() \ No newline at end of file diff --git a/spaces/avivdm1/AutoGPT/autogpt/json_utils/__init__.py b/spaces/avivdm1/AutoGPT/autogpt/json_utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/avivdm1/AutoGPT/run_continuous.bat b/spaces/avivdm1/AutoGPT/run_continuous.bat deleted file mode 100644 index 812aa01c1c5506c452665610c0e9e83a17c426f2..0000000000000000000000000000000000000000 --- a/spaces/avivdm1/AutoGPT/run_continuous.bat +++ /dev/null @@ -1,3 +0,0 @@ -@echo off -set argument=--continuous -call run.bat %argument% diff --git a/spaces/awacke1/2-LiveASR/app.py b/spaces/awacke1/2-LiveASR/app.py deleted file mode 100644 index b19b04136d7b2ab879c98b3d38b872a735352641..0000000000000000000000000000000000000000 --- a/spaces/awacke1/2-LiveASR/app.py +++ /dev/null @@ -1,138 +0,0 @@ -import gradio as gr -import torch -import time -import librosa -import soundfile -import nemo.collections.asr as nemo_asr -import tempfile 
-import os -import uuid - -from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration -import torch - -# PersistDataset ----- -import os -import csv -import gradio as gr -from gradio import inputs, outputs -import huggingface_hub -from huggingface_hub import Repository, hf_hub_download, upload_file -from datetime import datetime - -# --------------------------------------------- -# Dataset and Token links - change awacke1 to your own HF id, and add a HF_TOKEN copy to your repo for write permissions -# This should allow you to save your results to your own Dataset hosted on HF. - -DATASET_REPO_URL = "https://huggingface.co/datasets/awacke1/ASRLive.csv" -DATASET_REPO_ID = "awacke1/ASRLive.csv" -DATA_FILENAME = "ASRLive.csv" -DATA_DIRNAME = "data" -DATA_FILE = os.path.join(DATA_DIRNAME, DATA_FILENAME) -HF_TOKEN = os.environ.get("HF_TOKEN") - -PersistToDataset = False -#PersistToDataset = True # uncomment to save inference output to ASRLive.csv dataset - -if PersistToDataset: - try: - hf_hub_download( - repo_id=DATASET_REPO_ID, - filename=DATA_FILENAME, - cache_dir=DATA_DIRNAME, - force_filename=DATA_FILENAME - ) - except: - print("file not found") - repo = Repository( - local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN - ) - -def store_message(name: str, message: str): - if name and message: - with open(DATA_FILE, "a") as csvfile: - writer = csv.DictWriter(csvfile, fieldnames=["name", "message", "time"]) - writer.writerow( - {"name": name.strip(), "message": message.strip(), "time": str(datetime.now())} - ) - # push the updated CSV back to the dataset repo on HF - commit_url = repo.push_to_hub() - ret = "" - with open(DATA_FILE, "r") as csvfile: - reader = csv.DictReader(csvfile) - - for row in reader: - ret += str(row) # each row is a dict; stringify before concatenating - ret += "\r\n" - return ret - -# main ------------------------- -mname = "facebook/blenderbot-400M-distill" -model = BlenderbotForConditionalGeneration.from_pretrained(mname) -tokenizer = BlenderbotTokenizer.from_pretrained(mname) - -def take_last_tokens(inputs, note_history, history): - filterTokenCount = 128 # filter last 128 tokens - if inputs['input_ids'].shape[1] > filterTokenCount: - inputs['input_ids'] = torch.tensor([inputs['input_ids'][0][-filterTokenCount:].tolist()]) - inputs['attention_mask'] = torch.tensor([inputs['attention_mask'][0][-filterTokenCount:].tolist()]) - note_history = [' '.join(note_history[0].split(' ')[2:])] - history = history[1:] - return inputs, note_history, history - -def add_note_to_history(note, note_history): - note_history.append(note) - note_history = ' '.join(note_history) - return [note_history] - - - -SAMPLE_RATE = 16000 -model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_en_conformer_transducer_xlarge") -model.change_decoding_strategy(None) -model.eval() - -def process_audio_file(file): - data, sr = librosa.load(file) - if sr != SAMPLE_RATE: - data = librosa.resample(data, orig_sr=sr, target_sr=SAMPLE_RATE) - data = librosa.to_mono(data) - return data - - -def transcribe(audio, state = ""): - if state is None: - state = "" - audio_data = process_audio_file(audio) - with tempfile.TemporaryDirectory() as tmpdir: - audio_path = os.path.join(tmpdir, f'audio_{uuid.uuid4()}.wav') - soundfile.write(audio_path, audio_data, SAMPLE_RATE) - transcriptions = model.transcribe([audio_path]) - if type(transcriptions) == tuple and len(transcriptions) == 2: - transcriptions = transcriptions[0] - transcriptions = transcriptions[0] - - if PersistToDataset: - ret = store_message(transcriptions, state) # Save to dataset - uncomment to 
store into a dataset - hint you will need your HF_TOKEN - state = state + transcriptions + " " + ret - else: - state = state + transcriptions - return state, state - -gr.Interface( - fn=transcribe, - inputs=[ - gr.Audio(source="microphone", type='filepath', streaming=True), - "state", - ], - outputs=[ - "textbox", - "state" - ], - layout="horizontal", - theme="huggingface", - title="🗣️ASR-Gradio-Live🧠💾", - description=f"Live Automatic Speech Recognition (ASR).", - allow_flagging='never', - live=True, - article=f"Result💾 Dataset: [{DATASET_REPO_URL}]({DATASET_REPO_URL})" -).launch(debug=True) diff --git a/spaces/awacke1/ASR-openai-whisper-base/app.py b/spaces/awacke1/ASR-openai-whisper-base/app.py deleted file mode 100644 index ddedf9ed0e7c4809c5dea4b633a52d5975f8f4c4..0000000000000000000000000000000000000000 --- a/spaces/awacke1/ASR-openai-whisper-base/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/openai/whisper-base").launch() \ No newline at end of file diff --git a/spaces/awacke1/FirestorePersistence/app.py b/spaces/awacke1/FirestorePersistence/app.py deleted file mode 100644 index 25d864111289f672836321b3e8228edb498673cf..0000000000000000000000000000000000000000 --- a/spaces/awacke1/FirestorePersistence/app.py +++ /dev/null @@ -1,85 +0,0 @@ -import streamlit as st -import firebase_admin -from firebase_admin import credentials -from firebase_admin import firestore -from datetime import datetime - -now = datetime.now() # current date and time -year = now.strftime("%Y") -st.write("year:", year) -month = now.strftime("%m") -st.write("month:", month) -day = now.strftime("%d") -st.write("day:", day) -time = now.strftime("%H:%M:%S") -st.write("time:", time) -date_time = now.strftime("%m/%d/%Y, %H:%M:%S") -st.write("date and time:",date_time) - -@st.experimental_singleton -def get_db_firestore(): - cred = credentials.Certificate('test.json') - firebase_admin.initialize_app(cred, {'projectId': u'clinical-nlp-b9117',}) - db = firestore.client() - return db - -#add data to the beastie with a generic reusable upsert function -def upsert(collection, document, firefield, first, last, born): - doc_ref = db.collection(collection).document(document) - doc_ref.set({u'firefield': firefield, u'first': first, u'last': last, u'born': born -}) - -#read data back in firecollection -def selectCollection(collection): - users_ref = db.collection(collection) - docs = users_ref.stream() - for doc in docs: - st.write(f'{doc.id} => {doc.to_dict()}') - -def selectCollectionDocument(collection, document): - doc_ref = db.collection(collection).document(document) - doc = doc_ref.get() - st.write("The id is: ", doc.id) - st.write("The contents are: ", doc.to_dict()) - -#add data to the beastie with a generic reusable upsert function -def upsertoftheminute(collection, document, firefield, first, last, born): - date_time = now.strftime("%m/%d/%Y, %H:%M") - doc_ref = db.collection(collection).document(document) - doc_ref.set({u'firefield': firefield, u'first': first, u'last': last, u'born': date_time,}) - - -st.write("singleton stateful connection to cloud firestore") -st.write(u"spin up some awesome 🤯 - episodic and semantic memory 🧠 for AI - here we come") -db = get_db_firestore() - -# perceptual system processing agent that can store model -upsert(u'firecollection', u'firedocument', u'users1', u'Ada', u'Lovelace', 1815) -upsert(u'firecollection', u'firedocument', u'users2', u'Aaron', u'Wacker', 1971) -upsert(u'firecollection1', u'firedocument3', u'users1', u'2022 - AI, Cognitive 
and Neuroscience to Assist and Augment Behavioral and Medical Health', u'https://www.youtube.com/watch?v=lvh3g7eszVQ&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L', 2022) -upsert(u'firecollection2', u'firedocument2', u'users2', u'2022 - AI art sci-fi movies and stories 🎭🎞️🍿 by Aaron Wacker 🎬 🧠 🎨', u'https://www.youtube.com/playlist?list=PLHgX2IExbFotUCOCZgpj-5HZBzXOpFMYc', 2022) -upsert(u'firecollection3', u'firedocument3', u'users3', u'😶‍🌫️ 🤯Engineering Consciousness🧠 😶‍🌫️', u'https://youtu.be/rIpUf-Vy2JA?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=3622', 2022) -upsert(u'firecollection4', u'firedocument4', u'users4', u'🧠🌳Yggdrasil🌳🧠', u'https://github.com/AaronCWacker/Yggdrasil', 2022) - -# it's all stored here: https://console.firebase.google.com/u/0/project/clinical-nlp-b9117/firestore/data/~2FStreamlitSpaces - - -selectCollection(u'firecollection') -selectCollection(u'firecollection1') -selectCollection(u'firecollection2') -selectCollection(u'firecollection3') -selectCollection(u'firecollection4') -selectCollectionDocument(u"firecollection", u"firedocument") -selectCollectionDocument(u"firecollection1", u"firedocument3") -selectCollectionDocument(u"firecollection3", u"firedocument3") - - -# from https://huggingface.co/spaces/awacke1/RealTimeVoiceASR -selectCollectionDocument(u"ASRCollection", u"ASRDocument") - - -upsert(u'firecollection4', u'firedocument4', u'users4', u'🧠🌳Yggdrasil🌳🧠', u'https://github.com/AaronCWacker/Yggdrasil', 2022) - -# intent - upsert at granularity of minute an aggregate document representing fields used in recent activity to replay shared state memory events -upsertoftheminute(u'TimeSeries', u'DocumentofMinute', u'TestUser1', u'🧠🌳Yggdrasil🌳🧠', u'https://huggingface.co/spaces/awacke1/FirestorePersistence', 2022) -selectCollectionDocument(u"TimeSeries", u"DocumentofMinute") \ No newline at end of file diff --git a/spaces/awacke1/NLPStoryWriterWithMemory/app.py b/spaces/awacke1/NLPStoryWriterWithMemory/app.py deleted file mode 100644 index 463e122620440fcafd00fc0582d1b908ab8de7fd..0000000000000000000000000000000000000000 --- a/spaces/awacke1/NLPStoryWriterWithMemory/app.py +++ /dev/null @@ -1,103 +0,0 @@ -import gradio as gr -import os - -# PersistDataset ----- -import os -import csv -import gradio as gr -from gradio import inputs, outputs -import huggingface_hub -from huggingface_hub import Repository, hf_hub_download, upload_file -from datetime import datetime - -# created new dataset as awacke1/MindfulStory.csv -DATASET_REPO_URL = "https://huggingface.co/datasets/awacke1/MindfulStory.csv" -DATASET_REPO_ID = "awacke1/MindfulStory.csv" -DATA_FILENAME = "MindfulStory.csv" -DATA_DIRNAME = "data" -DATA_FILE = os.path.join(DATA_DIRNAME, DATA_FILENAME) -HF_TOKEN = os.environ.get("HF_TOKEN") -# Download dataset repo using hub download -try: - hf_hub_download( - repo_id=DATASET_REPO_ID, - filename=DATA_FILENAME, - cache_dir=DATA_DIRNAME, - force_filename=DATA_FILENAME - ) -except: - print("file not found") - -def AIMemory(title: str, story: str): - if title and story: - with open(DATA_FILE, "a") as csvfile: - writer = csv.DictWriter(csvfile, fieldnames=["title", "story", "time"]) - writer.writerow({"title": title, "story": story, "time": str(datetime.now())}) - commit_url = repo.push_to_hub() - return "" - - -# Set up cloned dataset from repo for operations -repo = Repository( - local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN -) - -generator1 = gr.Interface.load("huggingface/gpt2-large", api_key=HF_TOKEN) -generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B", 
api_key=HF_TOKEN) -generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B", api_key=HF_TOKEN) - -def calculator(intro, operator, outro): - if operator == "add": - output = generator2(intro) + generator3(outro) - title = intro + " " + outro - saved = AIMemory(title, output) - return output - elif operator == "subtract": - output = generator2(outro) + generator3(intro) - title = outro + " " + intro - saved = AIMemory(title, output) - output = output.replace(intro, "").replace(outro, "") - return output - elif operator == "multiply": - output = generator1(intro) + generator2(outro) + generator3(intro) - title = intro + " " + outro + " " + intro - saved = AIMemory(title, output) - return output - elif operator == "divide": - output = generator1(outro) + generator2(intro) + generator3(outro) - title = outro + " " + intro + " " + outro - saved = AIMemory(title, output) - output = output.replace(intro, "").replace(outro, "") - return output - -#with open('Mindfulness.txt', 'r') as file: -# context = file.read() -#contextBox = gr.Textbox(lines=3, default=context, label="Story starter") - -examples = [ - ["Music and art make me feel", "add", "Path to Health and Happiness"], - ["Feel better each day when you awake by", "add", "Mental Body Scan"], - ["Feel better physically by", "add", "Stretch, Calm, Breath"], - ["Practicing mindfulness each day", "add", "Walk Feel"], - ["Be happier by", "add", "Brain gamification"], - ["Meditation can improve health", "add", "Deep Breaths"], - ["Spending time outdoors", "add", "Find Joy"], - ["Stress is relieved by quieting your mind, getting exercise and time with nature", "add", "Relieve Pain"], - ["Break the cycle of stress and anxiety", "add", "Yoga and Meditation"], - ["Feel calm in stressful situations", "add", "Neocortex Tools and Techniques"], - ["Deal with work pressure", "add", "Strengthen Attention"], - ["Learn to reduce feelings of overwhelmed", "add", "Easy Daily Activities"] -] - -demo = gr.Interface( - calculator, - [ - "text", - gr.Radio(["add", "subtract", "multiply", "divide"]), - "text" - ], - "text", - examples=examples, - article="Saved story memory dataset: https://huggingface.co/datasets/awacke1/MindfulStory.csv with available models to use from text gen: https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads", - live=True, -) -demo.launch() \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/dat.gui.min.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/dat.gui.min.js deleted file mode 100644 index 5b69be5aae03edb7be84df6398fb28e66c331086..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/dat.gui.min.js +++ /dev/null @@ -1,14 +0,0 @@ -/** - * dat-gui JavaScript Controller Library - * https://github.com/dataarts/dat.gui - * - * Copyright 2016 Data Arts Team, Google Creative Lab - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - */ -!function(e,t){"object"==typeof exports&&"object"==typeof module?module.exports=t():"function"==typeof define&&define.amd?define([],t):"object"==typeof exports?exports.dat=t():e.dat=t()}(this,function(){return function(e){function t(o){if(n[o])return n[o].exports;var i=n[o]={exports:{},id:o,loaded:!1};return e[o].call(i.exports,i,i.exports,t),i.loaded=!0,i.exports}var n={};return t.m=e,t.c=n,t.p="",t(0)}([function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}var i=n(1),r=o(i);e.exports=r["default"]},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}t.__esModule=!0;var i=n(2),r=o(i),a=n(6),l=o(a),s=n(3),u=o(s),d=n(7),c=o(d),f=n(8),_=o(f),p=n(10),h=o(p),m=n(11),b=o(m),g=n(12),v=o(g),y=n(13),w=o(y),x=n(14),E=o(x),C=n(15),A=o(C),S=n(16),k=o(S),O=n(9),T=o(O),R=n(17),L=o(R);t["default"]={color:{Color:r["default"],math:l["default"],interpret:u["default"]},controllers:{Controller:c["default"],BooleanController:_["default"],OptionController:h["default"],StringController:b["default"],NumberController:v["default"],NumberControllerBox:w["default"],NumberControllerSlider:E["default"],FunctionController:A["default"],ColorController:k["default"]},dom:{dom:T["default"]},gui:{GUI:L["default"]},GUI:L["default"]}},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function r(e,t,n){Object.defineProperty(e,t,{get:function(){return"RGB"===this.__state.space?this.__state[t]:(h.recalculateRGB(this,t,n),this.__state[t])},set:function(e){"RGB"!==this.__state.space&&(h.recalculateRGB(this,t,n),this.__state.space="RGB"),this.__state[t]=e}})}function a(e,t){Object.defineProperty(e,t,{get:function(){return"HSV"===this.__state.space?this.__state[t]:(h.recalculateHSV(this),this.__state[t])},set:function(e){"HSV"!==this.__state.space&&(h.recalculateHSV(this),this.__state.space="HSV"),this.__state[t]=e}})}t.__esModule=!0;var l=n(3),s=o(l),u=n(6),d=o(u),c=n(4),f=o(c),_=n(5),p=o(_),h=function(){function e(){if(i(this,e),this.__state=s["default"].apply(this,arguments),this.__state===!1)throw new Error("Failed to interpret color arguments");this.__state.a=this.__state.a||1}return e.prototype.toString=function(){return(0,f["default"])(this)},e.prototype.toHexString=function(){return(0,f["default"])(this,!0)},e.prototype.toOriginal=function(){return this.__state.conversion.write(this)},e}();h.recalculateRGB=function(e,t,n){if("HEX"===e.__state.space)e.__state[t]=d["default"].component_from_hex(e.__state.hex,n);else{if("HSV"!==e.__state.space)throw new Error("Corrupted color state");p["default"].extend(e.__state,d["default"].hsv_to_rgb(e.__state.h,e.__state.s,e.__state.v))}},h.recalculateHSV=function(e){var t=d["default"].rgb_to_hsv(e.r,e.g,e.b);p["default"].extend(e.__state,{s:t.s,v:t.v}),p["default"].isNaN(t.h)?p["default"].isUndefined(e.__state.h)&&(e.__state.h=0):e.__state.h=t.h},h.COMPONENTS=["r","g","b","h","s","v","hex","a"],r(h.prototype,"r",2),r(h.prototype,"g",1),r(h.prototype,"b",0),a(h.prototype,"h"),a(h.prototype,"s"),a(h.prototype,"v"),Object.defineProperty(h.prototype,"a",{get:function(){return 
this.__state.a},set:function(e){this.__state.a=e}}),Object.defineProperty(h.prototype,"hex",{get:function(){return"HEX"!==!this.__state.space&&(this.__state.hex=d["default"].rgb_to_hex(this.r,this.g,this.b)),this.__state.hex},set:function(e){this.__state.space="HEX",this.__state.hex=e}}),t["default"]=h},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}t.__esModule=!0;var i=n(4),r=o(i),a=n(5),l=o(a),s=[{litmus:l["default"].isString,conversions:{THREE_CHAR_HEX:{read:function(e){var t=e.match(/^#([A-F0-9])([A-F0-9])([A-F0-9])$/i);return null!==t&&{space:"HEX",hex:parseInt("0x"+t[1].toString()+t[1].toString()+t[2].toString()+t[2].toString()+t[3].toString()+t[3].toString(),0)}},write:r["default"]},SIX_CHAR_HEX:{read:function(e){var t=e.match(/^#([A-F0-9]{6})$/i);return null!==t&&{space:"HEX",hex:parseInt("0x"+t[1].toString(),0)}},write:r["default"]},CSS_RGB:{read:function(e){var t=e.match(/^rgb\(\s*(.+)\s*,\s*(.+)\s*,\s*(.+)\s*\)/);return null!==t&&{space:"RGB",r:parseFloat(t[1]),g:parseFloat(t[2]),b:parseFloat(t[3])}},write:r["default"]},CSS_RGBA:{read:function(e){var t=e.match(/^rgba\(\s*(.+)\s*,\s*(.+)\s*,\s*(.+)\s*,\s*(.+)\s*\)/);return null!==t&&{space:"RGB",r:parseFloat(t[1]),g:parseFloat(t[2]),b:parseFloat(t[3]),a:parseFloat(t[4])}},write:r["default"]}}},{litmus:l["default"].isNumber,conversions:{HEX:{read:function(e){return{space:"HEX",hex:e,conversionName:"HEX"}},write:function(e){return e.hex}}}},{litmus:l["default"].isArray,conversions:{RGB_ARRAY:{read:function(e){return 3===e.length&&{space:"RGB",r:e[0],g:e[1],b:e[2]}},write:function(e){return[e.r,e.g,e.b]}},RGBA_ARRAY:{read:function(e){return 4===e.length&&{space:"RGB",r:e[0],g:e[1],b:e[2],a:e[3]}},write:function(e){return[e.r,e.g,e.b,e.a]}}}},{litmus:l["default"].isObject,conversions:{RGBA_OBJ:{read:function(e){return!!(l["default"].isNumber(e.r)&&l["default"].isNumber(e.g)&&l["default"].isNumber(e.b)&&l["default"].isNumber(e.a))&&{space:"RGB",r:e.r,g:e.g,b:e.b,a:e.a}},write:function(e){return{r:e.r,g:e.g,b:e.b,a:e.a}}},RGB_OBJ:{read:function(e){return!!(l["default"].isNumber(e.r)&&l["default"].isNumber(e.g)&&l["default"].isNumber(e.b))&&{space:"RGB",r:e.r,g:e.g,b:e.b}},write:function(e){return{r:e.r,g:e.g,b:e.b}}},HSVA_OBJ:{read:function(e){return!!(l["default"].isNumber(e.h)&&l["default"].isNumber(e.s)&&l["default"].isNumber(e.v)&&l["default"].isNumber(e.a))&&{space:"HSV",h:e.h,s:e.s,v:e.v,a:e.a}},write:function(e){return{h:e.h,s:e.s,v:e.v,a:e.a}}},HSV_OBJ:{read:function(e){return!!(l["default"].isNumber(e.h)&&l["default"].isNumber(e.s)&&l["default"].isNumber(e.v))&&{space:"HSV",h:e.h,s:e.s,v:e.v}},write:function(e){return{h:e.h,s:e.s,v:e.v}}}}}],u=void 0,d=void 0,c=function(){d=!1;var e=arguments.length>1?l["default"].toArray(arguments):arguments[0];return l["default"].each(s,function(t){if(t.litmus(e))return l["default"].each(t.conversions,function(t,n){if(u=t.read(e),d===!1&&u!==!1)return d=u,u.conversionName=n,u.conversion=t,l["default"].BREAK}),l["default"].BREAK}),d};t["default"]=c},function(e,t){"use strict";t.__esModule=!0,t["default"]=function(e,t){var n=e.__state.conversionName.toString(),o=Math.round(e.r),i=Math.round(e.g),r=Math.round(e.b),a=e.a,l=Math.round(e.h),s=e.s.toFixed(1),u=e.v.toFixed(1);if(t||"THREE_CHAR_HEX"===n||"SIX_CHAR_HEX"===n){for(var 
d=e.hex.toString(16);d.length<6;)d="0"+d;return"#"+d}return"CSS_RGB"===n?"rgb("+o+","+i+","+r+")":"CSS_RGBA"===n?"rgba("+o+","+i+","+r+","+a+")":"HEX"===n?"0x"+e.hex.toString(16):"RGB_ARRAY"===n?"["+o+","+i+","+r+"]":"RGBA_ARRAY"===n?"["+o+","+i+","+r+","+a+"]":"RGB_OBJ"===n?"{r:"+o+",g:"+i+",b:"+r+"}":"RGBA_OBJ"===n?"{r:"+o+",g:"+i+",b:"+r+",a:"+a+"}":"HSV_OBJ"===n?"{h:"+l+",s:"+s+",v:"+u+"}":"HSVA_OBJ"===n?"{h:"+l+",s:"+s+",v:"+u+",a:"+a+"}":"unknown format"}},function(e,t){"use strict";t.__esModule=!0;var n=Array.prototype.forEach,o=Array.prototype.slice,i={BREAK:{},extend:function(e){return this.each(o.call(arguments,1),function(t){var n=this.isObject(t)?Object.keys(t):[];n.forEach(function(n){this.isUndefined(t[n])||(e[n]=t[n])}.bind(this))},this),e},defaults:function(e){return this.each(o.call(arguments,1),function(t){var n=this.isObject(t)?Object.keys(t):[];n.forEach(function(n){this.isUndefined(e[n])&&(e[n]=t[n])}.bind(this))},this),e},compose:function(){var e=o.call(arguments);return function(){for(var t=o.call(arguments),n=e.length-1;n>=0;n--)t=[e[n].apply(this,t)];return t[0]}},each:function(e,t,o){if(e)if(n&&e.forEach&&e.forEach===n)e.forEach(t,o);else if(e.length===e.length+0){var i=void 0,r=void 0;for(i=0,r=e.length;i>8*t&255},hex_with_component:function(e,t,o){return o<<(n=8*t)|e&~(255<-1?t.length-t.indexOf(".")-1:0}t.__esModule=!0;var s=n(7),u=o(s),d=n(5),c=o(d),f=function(e){function t(n,o,a){i(this,t);var s=r(this,e.call(this,n,o)),u=a||{};return s.__min=u.min,s.__max=u.max,s.__step=u.step,c["default"].isUndefined(s.__step)?0===s.initialValue?s.__impliedStep=1:s.__impliedStep=Math.pow(10,Math.floor(Math.log(Math.abs(s.initialValue))/Math.LN10))/10:s.__impliedStep=s.__step,s.__precision=l(s.__impliedStep),s}return a(t,e),t.prototype.setValue=function(t){var n=t;return void 0!==this.__min&&nthis.__max&&(n=this.__max),void 0!==this.__step&&n%this.__step!==0&&(n=Math.round(n/this.__step)*this.__step),e.prototype.setValue.call(this,n)},t.prototype.min=function(e){return this.__min=e,this},t.prototype.max=function(e){return this.__max=e,this},t.prototype.step=function(e){return this.__step=e,this.__impliedStep=e,this.__precision=l(e),this},t}(u["default"]);t["default"]=f},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function r(e,t){if(!e)throw new ReferenceError("this hasn't been initialised - super() hasn't been called");return!t||"object"!=typeof t&&"function"!=typeof t?e:t}function a(e,t){if("function"!=typeof t&&null!==t)throw new TypeError("Super expression must either be null or a function, not "+typeof t);e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}),t&&(Object.setPrototypeOf?Object.setPrototypeOf(e,t):e.__proto__=t)}function l(e,t){var n=Math.pow(10,t);return Math.round(e*n)/n}t.__esModule=!0;var s=n(12),u=o(s),d=n(9),c=o(d),f=n(5),_=o(f),p=function(e){function t(n,o,a){function l(){var e=parseFloat(m.__input.value);_["default"].isNaN(e)||m.setValue(e)}function s(){m.__onFinishChange&&m.__onFinishChange.call(m,m.getValue())}function u(){s()}function d(e){var t=b-e.clientY;m.setValue(m.getValue()+t*m.__impliedStep),b=e.clientY}function f(){c["default"].unbind(window,"mousemove",d),c["default"].unbind(window,"mouseup",f),s()}function p(e){c["default"].bind(window,"mousemove",d),c["default"].bind(window,"mouseup",f),b=e.clientY}i(this,t);var 
h=r(this,e.call(this,n,o,a));h.__truncationSuspended=!1;var m=h,b=void 0;return h.__input=document.createElement("input"),h.__input.setAttribute("type","text"),c["default"].bind(h.__input,"change",l),c["default"].bind(h.__input,"blur",u),c["default"].bind(h.__input,"mousedown",p),c["default"].bind(h.__input,"keydown",function(e){13===e.keyCode&&(m.__truncationSuspended=!0,this.blur(),m.__truncationSuspended=!1,s())}),h.updateDisplay(),h.domElement.appendChild(h.__input),h}return a(t,e),t.prototype.updateDisplay=function(){return this.__input.value=this.__truncationSuspended?this.getValue():l(this.getValue(),this.__precision),e.prototype.updateDisplay.call(this)},t}(u["default"]);t["default"]=p},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function r(e,t){if(!e)throw new ReferenceError("this hasn't been initialised - super() hasn't been called");return!t||"object"!=typeof t&&"function"!=typeof t?e:t}function a(e,t){if("function"!=typeof t&&null!==t)throw new TypeError("Super expression must either be null or a function, not "+typeof t);e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}),t&&(Object.setPrototypeOf?Object.setPrototypeOf(e,t):e.__proto__=t)}function l(e,t,n,o,i){return o+(i-o)*((e-t)/(n-t))}t.__esModule=!0;var s=n(12),u=o(s),d=n(9),c=o(d),f=function(e){function t(n,o,a,s,u){function d(e){document.activeElement.blur(),c["default"].bind(window,"mousemove",f),c["default"].bind(window,"mouseup",_),f(e)}function f(e){e.preventDefault();var t=h.__background.getBoundingClientRect();return h.setValue(l(e.clientX,t.left,t.right,h.__min,h.__max)),!1}function _(){c["default"].unbind(window,"mousemove",f),c["default"].unbind(window,"mouseup",_),h.__onFinishChange&&h.__onFinishChange.call(h,h.getValue())}i(this,t);var p=r(this,e.call(this,n,o,{min:a,max:s,step:u})),h=p;return p.__background=document.createElement("div"),p.__foreground=document.createElement("div"),c["default"].bind(p.__background,"mousedown",d),c["default"].addClass(p.__background,"slider"),c["default"].addClass(p.__foreground,"slider-fg"),p.updateDisplay(),p.__background.appendChild(p.__foreground),p.domElement.appendChild(p.__background),p}return a(t,e),t.prototype.updateDisplay=function(){var t=(this.getValue()-this.__min)/(this.__max-this.__min);return this.__foreground.style.width=100*t+"%",e.prototype.updateDisplay.call(this)},t}(u["default"]);t["default"]=f},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function r(e,t){if(!e)throw new ReferenceError("this hasn't been initialised - super() hasn't been called");return!t||"object"!=typeof t&&"function"!=typeof t?e:t}function a(e,t){if("function"!=typeof t&&null!==t)throw new TypeError("Super expression must either be null or a function, not "+typeof t);e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}),t&&(Object.setPrototypeOf?Object.setPrototypeOf(e,t):e.__proto__=t)}t.__esModule=!0;var l=n(7),s=o(l),u=n(9),d=o(u),c=function(e){function t(n,o,a){i(this,t);var l=r(this,e.call(this,n,o)),s=l;return l.__button=document.createElement("div"),l.__button.innerHTML=void 0===a?"Fire":a,d["default"].bind(l.__button,"click",function(e){return 
e.preventDefault(),s.fire(),!1}),d["default"].addClass(l.__button,"button"),l.domElement.appendChild(l.__button),l}return a(t,e),t.prototype.fire=function(){this.__onChange&&this.__onChange.call(this),this.getValue().call(this.object),this.__onFinishChange&&this.__onFinishChange.call(this,this.getValue())},t}(s["default"]);t["default"]=c},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function r(e,t){if(!e)throw new ReferenceError("this hasn't been initialised - super() hasn't been called");return!t||"object"!=typeof t&&"function"!=typeof t?e:t}function a(e,t){if("function"!=typeof t&&null!==t)throw new TypeError("Super expression must either be null or a function, not "+typeof t);e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}),t&&(Object.setPrototypeOf?Object.setPrototypeOf(e,t):e.__proto__=t)}function l(e,t,n,o){e.style.background="",g["default"].each(y,function(i){e.style.cssText+="background: "+i+"linear-gradient("+t+", "+n+" 0%, "+o+" 100%); "})}function s(e){e.style.background="",e.style.cssText+="background: -moz-linear-gradient(top, #ff0000 0%, #ff00ff 17%, #0000ff 34%, #00ffff 50%, #00ff00 67%, #ffff00 84%, #ff0000 100%);",e.style.cssText+="background: -webkit-linear-gradient(top, #ff0000 0%,#ff00ff 17%,#0000ff 34%,#00ffff 50%,#00ff00 67%,#ffff00 84%,#ff0000 100%);",e.style.cssText+="background: -o-linear-gradient(top, #ff0000 0%,#ff00ff 17%,#0000ff 34%,#00ffff 50%,#00ff00 67%,#ffff00 84%,#ff0000 100%);",e.style.cssText+="background: -ms-linear-gradient(top, #ff0000 0%,#ff00ff 17%,#0000ff 34%,#00ffff 50%,#00ff00 67%,#ffff00 84%,#ff0000 100%);",e.style.cssText+="background: linear-gradient(top, #ff0000 0%,#ff00ff 17%,#0000ff 34%,#00ffff 50%,#00ff00 67%,#ffff00 84%,#ff0000 100%);"}t.__esModule=!0;var u=n(7),d=o(u),c=n(9),f=o(c),_=n(2),p=o(_),h=n(3),m=o(h),b=n(5),g=o(b),v=function(e){function t(n,o){function a(e){h(e),f["default"].bind(window,"mousemove",h),f["default"].bind(window,"mouseup",u)}function u(){f["default"].unbind(window,"mousemove",h),f["default"].unbind(window,"mouseup",u),_()}function d(){var e=(0,m["default"])(this.value);e!==!1?(y.__color.__state=e,y.setValue(y.__color.toOriginal())):this.value=y.__color.toString()}function c(){f["default"].unbind(window,"mousemove",b),f["default"].unbind(window,"mouseup",c),_()}function _(){y.__onFinishChange&&y.__onFinishChange.call(y,y.__color.toOriginal())}function h(e){e.preventDefault();var t=y.__saturation_field.getBoundingClientRect(),n=(e.clientX-t.left)/(t.right-t.left),o=1-(e.clientY-t.top)/(t.bottom-t.top);return o>1?o=1:o<0&&(o=0),n>1?n=1:n<0&&(n=0),y.__color.v=o,y.__color.s=n,y.setValue(y.__color.toOriginal()),!1}function b(e){e.preventDefault();var t=y.__hue_field.getBoundingClientRect(),n=1-(e.clientY-t.top)/(t.bottom-t.top);return n>1?n=1:n<0&&(n=0),y.__color.h=360*n,y.setValue(y.__color.toOriginal()),!1}i(this,t);var v=r(this,e.call(this,n,o));v.__color=new p["default"](v.getValue()),v.__temp=new p["default"](0);var y=v;v.domElement=document.createElement("div"),f["default"].makeSelectable(v.domElement,!1),v.__selector=document.createElement("div"),v.__selector.className="selector",v.__saturation_field=document.createElement("div"),v.__saturation_field.className="saturation-field",v.__field_knob=document.createElement("div"),v.__field_knob.className="field-knob",v.__field_knob_border="2px solid 
",v.__hue_knob=document.createElement("div"),v.__hue_knob.className="hue-knob",v.__hue_field=document.createElement("div"),v.__hue_field.className="hue-field",v.__input=document.createElement("input"),v.__input.type="text",v.__input_textShadow="0 1px 1px ",f["default"].bind(v.__input,"keydown",function(e){13===e.keyCode&&d.call(this)}),f["default"].bind(v.__input,"blur",d),f["default"].bind(v.__selector,"mousedown",function(){f["default"].addClass(this,"drag").bind(window,"mouseup",function(){f["default"].removeClass(y.__selector,"drag")})});var w=document.createElement("div");return g["default"].extend(v.__selector.style,{width:"122px",height:"102px",padding:"3px",backgroundColor:"#222",boxShadow:"0px 1px 3px rgba(0,0,0,0.3)"}),g["default"].extend(v.__field_knob.style,{position:"absolute",width:"12px",height:"12px",border:v.__field_knob_border+(v.__color.v<.5?"#fff":"#000"),boxShadow:"0px 1px 3px rgba(0,0,0,0.5)",borderRadius:"12px",zIndex:1}),g["default"].extend(v.__hue_knob.style,{position:"absolute",width:"15px",height:"2px",borderRight:"4px solid #fff",zIndex:1}),g["default"].extend(v.__saturation_field.style,{width:"100px",height:"100px",border:"1px solid #555",marginRight:"3px",display:"inline-block",cursor:"pointer"}),g["default"].extend(w.style,{width:"100%",height:"100%",background:"none"}),l(w,"top","rgba(0,0,0,0)","#000"),g["default"].extend(v.__hue_field.style,{width:"15px",height:"100px",border:"1px solid #555",cursor:"ns-resize",position:"absolute",top:"3px",right:"3px"}),s(v.__hue_field),g["default"].extend(v.__input.style,{outline:"none",textAlign:"center",color:"#fff",border:0,fontWeight:"bold",textShadow:v.__input_textShadow+"rgba(0,0,0,0.7)"}),f["default"].bind(v.__saturation_field,"mousedown",a),f["default"].bind(v.__field_knob,"mousedown",a),f["default"].bind(v.__hue_field,"mousedown",function(e){b(e),f["default"].bind(window,"mousemove",b),f["default"].bind(window,"mouseup",c)}),v.__saturation_field.appendChild(w),v.__selector.appendChild(v.__field_knob),v.__selector.appendChild(v.__saturation_field),v.__selector.appendChild(v.__hue_field),v.__hue_field.appendChild(v.__hue_knob),v.domElement.appendChild(v.__input),v.domElement.appendChild(v.__selector),v.updateDisplay(),v}return a(t,e),t.prototype.updateDisplay=function(){var e=(0,m["default"])(this.getValue());if(e!==!1){var t=!1;g["default"].each(p["default"].COMPONENTS,function(n){if(!g["default"].isUndefined(e[n])&&!g["default"].isUndefined(this.__color.__state[n])&&e[n]!==this.__color.__state[n])return t=!0,{}},this),t&&g["default"].extend(this.__color.__state,e)}g["default"].extend(this.__temp.__state,this.__color.__state),this.__temp.a=1;var n=this.__color.v<.5||this.__color.s>.5?255:0,o=255-n;g["default"].extend(this.__field_knob.style,{marginLeft:100*this.__color.s-7+"px",marginTop:100*(1-this.__color.v)-7+"px",backgroundColor:this.__temp.toHexString(),border:this.__field_knob_border+"rgb("+n+","+n+","+n+")"}),this.__hue_knob.style.marginTop=100*(1-this.__color.h/360)+"px",this.__temp.s=1,this.__temp.v=1,l(this.__saturation_field,"left","#fff",this.__temp.toHexString()),this.__input.value=this.__color.toString(),g["default"].extend(this.__input.style,{backgroundColor:this.__color.toHexString(),color:"rgb("+n+","+n+","+n+")",textShadow:this.__input_textShadow+"rgba("+o+","+o+","+o+",.7)"})},t}(d["default"]),y=["-moz-","-o-","-webkit-","-ms-",""];t["default"]=v},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t,n){var o=document.createElement("li");return 
t&&o.appendChild(t),n?e.__ul.insertBefore(o,n):e.__ul.appendChild(o),e.onResize(),o}function r(e,t){var n=e.__preset_select[e.__preset_select.selectedIndex];t?n.innerHTML=n.value+"*":n.innerHTML=n.value}function a(e,t,n){if(n.__li=t,n.__gui=e,U["default"].extend(n,{options:function(t){if(arguments.length>1){var o=n.__li.nextElementSibling;return n.remove(),s(e,n.object,n.property,{before:o,factoryArgs:[U["default"].toArray(arguments)]})}if(U["default"].isArray(t)||U["default"].isObject(t)){var i=n.__li.nextElementSibling;return n.remove(),s(e,n.object,n.property,{before:i,factoryArgs:[t]})}},name:function(e){return n.__li.firstElementChild.firstElementChild.innerHTML=e,n},listen:function(){return n.__gui.listen(n),n},remove:function(){ -return n.__gui.remove(n),n}}),n instanceof B["default"])!function(){var e=new N["default"](n.object,n.property,{min:n.__min,max:n.__max,step:n.__step});U["default"].each(["updateDisplay","onChange","onFinishChange","step"],function(t){var o=n[t],i=e[t];n[t]=e[t]=function(){var t=Array.prototype.slice.call(arguments);return i.apply(e,t),o.apply(n,t)}}),z["default"].addClass(t,"has-slider"),n.domElement.insertBefore(e.domElement,n.domElement.firstElementChild)}();else if(n instanceof N["default"]){var o=function(t){if(U["default"].isNumber(n.__min)&&U["default"].isNumber(n.__max)){var o=n.__li.firstElementChild.firstElementChild.innerHTML,i=n.__gui.__listening.indexOf(n)>-1;n.remove();var r=s(e,n.object,n.property,{before:n.__li.nextElementSibling,factoryArgs:[n.__min,n.__max,n.__step]});return r.name(o),i&&r.listen(),r}return t};n.min=U["default"].compose(o,n.min),n.max=U["default"].compose(o,n.max)}else n instanceof O["default"]?(z["default"].bind(t,"click",function(){z["default"].fakeEvent(n.__checkbox,"click")}),z["default"].bind(n.__checkbox,"click",function(e){e.stopPropagation()})):n instanceof R["default"]?(z["default"].bind(t,"click",function(){z["default"].fakeEvent(n.__button,"click")}),z["default"].bind(t,"mouseover",function(){z["default"].addClass(n.__button,"hover")}),z["default"].bind(t,"mouseout",function(){z["default"].removeClass(n.__button,"hover")})):n instanceof j["default"]&&(z["default"].addClass(t,"color"),n.updateDisplay=U["default"].compose(function(e){return t.style.borderLeftColor=n.__color.toString(),e},n.updateDisplay),n.updateDisplay());n.setValue=U["default"].compose(function(t){return e.getRoot().__preset_select&&n.isModified()&&r(e.getRoot(),!0),t},n.setValue)}function l(e,t){var n=e.getRoot(),o=n.__rememberedObjects.indexOf(t.object);if(o!==-1){var i=n.__rememberedObjectIndecesToControllers[o];if(void 0===i&&(i={},n.__rememberedObjectIndecesToControllers[o]=i),i[t.property]=t,n.load&&n.load.remembered){var r=n.load.remembered,a=void 0;if(r[e.preset])a=r[e.preset];else{if(!r[Q])return;a=r[Q]}if(a[o]&&void 0!==a[o][t.property]){var l=a[o][t.property];t.initialValue=l,t.setValue(l)}}}}function s(e,t,n,o){if(void 0===t[n])throw new Error('Object "'+t+'" has no property "'+n+'"');var r=void 0;if(o.color)r=new j["default"](t,n);else{var s=[t,n].concat(o.factoryArgs);r=C["default"].apply(e,s)}o.before instanceof S["default"]&&(o.before=o.before.__li),l(e,r),z["default"].addClass(r.domElement,"c");var u=document.createElement("span");z["default"].addClass(u,"property-name"),u.innerHTML=r.property;var d=document.createElement("div");d.appendChild(u),d.appendChild(r.domElement);var c=i(e,d,o.before);return z["default"].addClass(c,oe.CLASS_CONTROLLER_ROW),r instanceof 
j["default"]?z["default"].addClass(c,"color"):z["default"].addClass(c,g(r.getValue())),a(e,c,r),e.__controllers.push(r),r}function u(e,t){return document.location.href+"."+t}function d(e,t,n){var o=document.createElement("option");o.innerHTML=t,o.value=t,e.__preset_select.appendChild(o),n&&(e.__preset_select.selectedIndex=e.__preset_select.length-1)}function c(e,t){t.style.display=e.useLocalStorage?"block":"none"}function f(e){var t=e.__save_row=document.createElement("li");z["default"].addClass(e.domElement,"has-save"),e.__ul.insertBefore(t,e.__ul.firstChild),z["default"].addClass(t,"save-row");var n=document.createElement("span");n.innerHTML=" ",z["default"].addClass(n,"button gears");var o=document.createElement("span");o.innerHTML="Save",z["default"].addClass(o,"button"),z["default"].addClass(o,"save");var i=document.createElement("span");i.innerHTML="New",z["default"].addClass(i,"button"),z["default"].addClass(i,"save-as");var r=document.createElement("span");r.innerHTML="Revert",z["default"].addClass(r,"button"),z["default"].addClass(r,"revert");var a=e.__preset_select=document.createElement("select");e.load&&e.load.remembered?U["default"].each(e.load.remembered,function(t,n){d(e,n,n===e.preset)}):d(e,Q,!1),z["default"].bind(a,"change",function(){for(var t=0;t0&&(e.preset=this.preset,e.remembered||(e.remembered={}),e.remembered[this.preset]=h(this)),e.folders={},U["default"].each(this.__folders,function(t,n){e.folders[n]=t.getSaveObject()}),e},save:function(){this.load.remembered||(this.load.remembered={}),this.load.remembered[this.preset]=h(this),r(this,!1),this.saveToLocalStorageIfPossible()},saveAs:function(e){this.load.remembered||(this.load.remembered={},this.load.remembered[Q]=h(this,!0)),this.load.remembered[e]=h(this),this.preset=e,d(this,e,!0),this.saveToLocalStorageIfPossible()},revert:function(e){U["default"].each(this.__controllers,function(t){this.getRoot().load.remembered?l(e||this.getRoot(),t):t.setValue(t.initialValue),t.__onFinishChange&&t.__onFinishChange.call(t,t.getValue())},this),U["default"].each(this.__folders,function(e){e.revert(e)}),e||r(this.getRoot(),!1)},listen:function(e){var t=0===this.__listening.length;this.__listening.push(e),t&&b(this.__listening)},updateDisplay:function(){U["default"].each(this.__controllers,function(e){e.updateDisplay()}),U["default"].each(this.__folders,function(e){e.updateDisplay()})}}),e.exports=oe},function(e,t){"use strict";e.exports={load:function(e,t){var n=t||document,o=n.createElement("link");o.type="text/css",o.rel="stylesheet",o.href=e,n.getElementsByTagName("head")[0].appendChild(o)},inject:function(e,t){var n=t||document,o=document.createElement("style");o.type="text/css",o.innerHTML=e;var i=n.getElementsByTagName("head")[0];try{i.appendChild(o)}catch(r){}}}},function(e,t){e.exports="
    Here's the new load parameter for your GUI's constructor:
    Automatically save values to localStorage on exit.
    The values saved to localStorage will override those passed to dat.GUI's constructor. This makes it easier to work incrementally, but localStorage is fragile, and your friends may not see the same values you do.
    "},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}t.__esModule=!0;var i=n(10),r=o(i),a=n(13),l=o(a),s=n(14),u=o(s),d=n(11),c=o(d),f=n(15),_=o(f),p=n(8),h=o(p),m=n(5),b=o(m),g=function(e,t){var n=e[t];return b["default"].isArray(arguments[2])||b["default"].isObject(arguments[2])?new r["default"](e,t,arguments[2]):b["default"].isNumber(n)?b["default"].isNumber(arguments[2])&&b["default"].isNumber(arguments[3])?b["default"].isNumber(arguments[4])?new u["default"](e,t,arguments[2],arguments[3],arguments[4]):new u["default"](e,t,arguments[2],arguments[3]):b["default"].isNumber(arguments[4])?new l["default"](e,t,{min:arguments[2],max:arguments[3],step:arguments[4]}):new l["default"](e,t,{min:arguments[2],max:arguments[3]}):b["default"].isString(n)?new c["default"](e,t):b["default"].isFunction(n)?new _["default"](e,t,""):b["default"].isBoolean(n)?new h["default"](e,t):null};t["default"]=g},function(e,t){"use strict";function n(e){setTimeout(e,1e3/60)}t.__esModule=!0,t["default"]=window.requestAnimationFrame||window.webkitRequestAnimationFrame||window.mozRequestAnimationFrame||window.oRequestAnimationFrame||window.msRequestAnimationFrame||n},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}t.__esModule=!0;var r=n(9),a=o(r),l=n(5),s=o(l),u=function(){function e(){i(this,e),this.backgroundElement=document.createElement("div"),s["default"].extend(this.backgroundElement.style,{backgroundColor:"rgba(0,0,0,0.8)",top:0,left:0,display:"none",zIndex:"1000",opacity:0,WebkitTransition:"opacity 0.2s linear",transition:"opacity 0.2s linear"}),a["default"].makeFullscreen(this.backgroundElement),this.backgroundElement.style.position="fixed",this.domElement=document.createElement("div"),s["default"].extend(this.domElement.style,{position:"fixed",display:"none",zIndex:"1001",opacity:0,WebkitTransition:"-webkit-transform 0.2s ease-out, opacity 0.2s linear",transition:"transform 0.2s ease-out, opacity 0.2s linear"}),document.body.appendChild(this.backgroundElement),document.body.appendChild(this.domElement);var t=this;a["default"].bind(this.backgroundElement,"click",function(){t.hide()})}return e.prototype.show=function(){var e=this;this.backgroundElement.style.display="block",this.domElement.style.display="block",this.domElement.style.opacity=0,this.domElement.style.webkitTransform="scale(1.1)",this.layout(),s["default"].defer(function(){e.backgroundElement.style.opacity=1,e.domElement.style.opacity=1,e.domElement.style.webkitTransform="scale(1)"})},e.prototype.hide=function t(){var e=this,t=function n(){e.domElement.style.display="none",e.backgroundElement.style.display="none",a["default"].unbind(e.domElement,"webkitTransitionEnd",n),a["default"].unbind(e.domElement,"transitionend",n),a["default"].unbind(e.domElement,"oTransitionEnd",n)};a["default"].bind(this.domElement,"webkitTransitionEnd",t),a["default"].bind(this.domElement,"transitionend",t),a["default"].bind(this.domElement,"oTransitionEnd",t),this.backgroundElement.style.opacity=0,this.domElement.style.opacity=0,this.domElement.style.webkitTransform="scale(1.1)"},e.prototype.layout=function(){this.domElement.style.left=window.innerWidth/2-a["default"].getWidth(this.domElement)/2+"px",this.domElement.style.top=window.innerHeight/2-a["default"].getHeight(this.domElement)/2+"px"},e}();t["default"]=u},function(e,t,n){t=e.exports=n(24)(),t.push([e.id,".dg 
ul{list-style:none;margin:0;padding:0;width:100%;clear:both}.dg.ac{position:fixed;top:0;left:0;right:0;height:0;z-index:0}.dg:not(.ac) .main{overflow:hidden}.dg.main{-webkit-transition:opacity .1s linear;transition:opacity .1s linear}.dg.main.taller-than-window{overflow-y:auto}.dg.main.taller-than-window .close-button{opacity:1;margin-top:-1px;border-top:1px solid #2c2c2c}.dg.main ul.closed .close-button{opacity:1!important}.dg.main .close-button.drag,.dg.main:hover .close-button{opacity:1}.dg.main .close-button{-webkit-transition:opacity .1s linear;transition:opacity .1s linear;border:0;position:absolute;line-height:19px;height:20px;cursor:pointer;text-align:center;background-color:#000}.dg.main .close-button:hover{background-color:#111}.dg.a{float:right;margin-right:15px;overflow-x:hidden}.dg.a.has-save>ul{margin-top:27px}.dg.a.has-save>ul.closed{margin-top:0}.dg.a .save-row{position:fixed;top:0;z-index:1002}.dg li{-webkit-transition:height .1s ease-out;transition:height .1s ease-out}.dg li:not(.folder){cursor:auto;height:27px;line-height:27px;overflow:hidden;padding:0 4px 0 5px}.dg li.folder{padding:0;border-left:4px solid transparent}.dg li.title{cursor:pointer;margin-left:-4px}.dg .closed li:not(.title),.dg .closed ul li,.dg .closed ul li>*{height:0;overflow:hidden;border:0}.dg .cr{clear:both;padding-left:3px;height:27px}.dg .property-name{cursor:default;float:left;clear:left;width:40%;overflow:hidden;text-overflow:ellipsis}.dg .c{float:left;width:60%}.dg .c input[type=text]{border:0;margin-top:4px;padding:3px;width:100%;float:right}.dg .has-slider input[type=text]{width:30%;margin-left:0}.dg .slider{float:left;width:66%;margin-left:-5px;margin-right:0;height:19px;margin-top:4px}.dg .slider-fg{height:100%}.dg .c input[type=checkbox]{margin-top:9px}.dg .c select{margin-top:5px}.dg .cr.boolean,.dg .cr.boolean *,.dg .cr.function,.dg .cr.function *,.dg .cr.function .property-name{cursor:pointer}.dg .selector{display:none;position:absolute;margin-left:-9px;margin-top:23px;z-index:10}.dg .c:hover .selector,.dg .selector.drag{display:block}.dg li.save-row{padding:0}.dg li.save-row .button{display:inline-block;padding:0 6px}.dg.dialogue{background-color:#222;width:460px;padding:15px;font-size:13px;line-height:15px}#dg-new-constructor{padding:10px;color:#222;font-family:Monaco,monospace;font-size:10px;border:0;resize:none;box-shadow:inset 1px 1px 1px #888;word-wrap:break-word;margin:12px 0;display:block;width:440px;overflow-y:scroll;height:100px;position:relative}#dg-local-explain{display:none;font-size:11px;line-height:17px;border-radius:3px;background-color:#333;padding:8px;margin-top:10px}#dg-local-explain code{font-size:10px}#dat-gui-save-locally{display:none}.dg{color:#eee;font:11px Lucida Grande,sans-serif;text-shadow:0 -1px 0 #111}.dg.main::-webkit-scrollbar{width:5px;background:#1a1a1a}.dg.main::-webkit-scrollbar-corner{height:0;display:none}.dg.main::-webkit-scrollbar-thumb{border-radius:5px;background:#676767}.dg li:not(.folder){background:#1a1a1a;border-bottom:1px solid #2c2c2c}.dg li.save-row{line-height:25px;background:#dad5cb;border:0}.dg li.save-row select{margin-left:5px;width:108px}.dg li.save-row .button{margin-left:5px;margin-top:1px;border-radius:2px;font-size:9px;line-height:7px;padding:4px 4px 5px;background:#c5bdad;color:#fff;text-shadow:0 1px 0 #b0a58f;box-shadow:0 -1px 0 #b0a58f;cursor:pointer}.dg li.save-row .button.gears{background:#c5bdad 
url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAsAAAANCAYAAAB/9ZQ7AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAQJJREFUeNpiYKAU/P//PwGIC/ApCABiBSAW+I8AClAcgKxQ4T9hoMAEUrxx2QSGN6+egDX+/vWT4e7N82AMYoPAx/evwWoYoSYbACX2s7KxCxzcsezDh3evFoDEBYTEEqycggWAzA9AuUSQQgeYPa9fPv6/YWm/Acx5IPb7ty/fw+QZblw67vDs8R0YHyQhgObx+yAJkBqmG5dPPDh1aPOGR/eugW0G4vlIoTIfyFcA+QekhhHJhPdQxbiAIguMBTQZrPD7108M6roWYDFQiIAAv6Aow/1bFwXgis+f2LUAynwoIaNcz8XNx3Dl7MEJUDGQpx9gtQ8YCueB+D26OECAAQDadt7e46D42QAAAABJRU5ErkJggg==) 2px 1px no-repeat;height:7px;width:8px}.dg li.save-row .button:hover{background-color:#bab19e;box-shadow:0 -1px 0 #b0a58f}.dg li.folder{border-bottom:0}.dg li.title{padding-left:16px;background:#000 url(data:image/gif;base64,R0lGODlhBQAFAJEAAP////Pz8////////yH5BAEAAAIALAAAAAAFAAUAAAIIlI+hKgFxoCgAOw==) 6px 10px no-repeat;cursor:pointer;border-bottom:1px solid hsla(0,0%,100%,.2)}.dg .closed li.title{background-image:url(data:image/gif;base64,R0lGODlhBQAFAJEAAP////Pz8////////yH5BAEAAAIALAAAAAAFAAUAAAIIlGIWqMCbWAEAOw==)}.dg .cr.boolean{border-left:3px solid #806787}.dg .cr.color{border-left:3px solid}.dg .cr.function{border-left:3px solid #e61d5f}.dg .cr.number{border-left:3px solid #2fa1d6}.dg .cr.number input[type=text]{color:#2fa1d6}.dg .cr.string{border-left:3px solid #1ed36f}.dg .cr.string input[type=text]{color:#1ed36f}.dg .cr.boolean:hover,.dg .cr.function:hover{background:#111}.dg .c input[type=text]{background:#303030;outline:none}.dg .c input[type=text]:hover{background:#3c3c3c}.dg .c input[type=text]:focus{background:#494949;color:#fff}.dg .c .slider{background:#303030;cursor:ew-resize}.dg .c .slider-fg{background:#2fa1d6;max-width:100%}.dg .c .slider:hover{background:#3c3c3c}.dg .c .slider:hover .slider-fg{background:#44abda}",""])},function(e,t){e.exports=function(){var e=[];return e.toString=function(){for(var e=[],t=0;t/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/13 things mentally strong people dont do pdf free download The best-selling book that will change your life.md b/spaces/bioriAsaeru/text-to-voice/13 things mentally strong people dont do pdf free download The best-selling book that will change your life.md deleted file mode 100644 index 2adc5fcd4c677eddee223484168cb0a01de21468..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/13 things mentally strong people dont do pdf free download The best-selling book that will change your life.md +++ /dev/null @@ -1,6 +0,0 @@ -

    13 things mentally strong people don't do pdf free download


    DOWNLOAD https://urloso.com/2uyS93
    



    - -
    
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Aastha In The Prison Of Spring part 2 full movie online free - Rekha and Om Puris controversial role.md b/spaces/bioriAsaeru/text-to-voice/Aastha In The Prison Of Spring part 2 full movie online free - Rekha and Om Puris controversial role.md deleted file mode 100644 index ce6854f323c2e92624a9f67ec3923de66f61d6c7..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Aastha In The Prison Of Spring part 2 full movie online free - Rekha and Om Puris controversial role.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Aastha: In The Prison Of Spring part 2 full movie online free


    DOWNLOAD https://urloso.com/2uyOAK
    



    -
    -
    
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Brickhouse Betty Site Rip Extra Quality.md b/spaces/bioriAsaeru/text-to-voice/Brickhouse Betty Site Rip Extra Quality.md deleted file mode 100644 index 7d0c258a5dfa4444c214b673acdee49845f16a0c..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Brickhouse Betty Site Rip Extra Quality.md +++ /dev/null @@ -1,16 +0,0 @@ - -

    170199 A.R.A. v. Commonwealth 03/01/2018 In a petition to expunge a felony arrest record, the trial court erred in concluding that the existence of this record may not cause the petitioner a manifest injustice. The facts underlying the arrest are irrelevant and the petitioner need not show actual prejudice to prevail on her expungement petition. She needs only to demonstrate that the continued existence of an arrest record may cause a manifest injustice. On this record, there is a reasonable possibility that a felony arrest record would hinder her career and her educational opportunities. It is concluded that the petitioner made the requisite showing of a manifest injustice. The judgment is reversed and the matter is remanded for entry of an order expunging the felony arrest record at issue.

    -

    Brickhouse Betty Site Rip


    Download ::: https://urloso.com/2uyPsb



    -

    101006 Livingston v. Va. Dep't of Transportation 06/07/2012 (Revised 08/02/2012) In a suit for property damage under the Just Compensation Clause in Article I, Section 11 of the Constitution of Virginia, it is held that a single event of flooding can support an inverse condemnation claim, and that the plaintiffs' allegations that their homes and various items of personal property were damaged for a public use under Article I, Section 11 are sufficient to withstand demurrer. When VDOT constructs an improvement for the public benefit, it does not thereby become an insurer in perpetuity against flood damage to neighboring property, but a property owner may be entitled to compensation under Article I, Section 11 if VDOT's operation of that improvement causes damage to real or personal property. Thus, where VDOT relocated the channel of a waterway in order to permit highway construction, but failed to maintain the relocated channel via dredging or otherwise, and that failure is alleged to have impacted the magnitude of the damage plaintiffs suffered as the result of the single flooding event at issue, VDOT's choice not to maintain the relocated channel evinced its election to use the highway and nearby residential developments as makeshift storage sites for excess stormwater instead of allocating its resources to maintain the relocated channel. The contentions that plaintiffs lack standing to maintain an inverse condemnation suit and that they cannot recover under Article I, Section 11 for damage to personal property, are rejected. The circuit court's judgment is reversed and the case is remanded for further proceedings.

    -

    100149 Scott v. Burwell's Bay Improvement Ass'n 04/21/2011 In a case involving riparian rights, the circuit court did not err in ruling that a party seeking to establish ownership of riparian rights by adverse possession, or, alternately, a prescriptive easement to use those rights, failed to prove these claims by clear and convincing evidence. The evidence to show that the use of the riparian rights was exclusive and continuous for the required period of time fell well below the clear and convincing standard required to prove adverse possession or prescriptive use of the riparian rights by the immediate prior occupants. Thus tacking was not available to establish the requisite time periods. The judgment of the circuit court is affirmed.

    -

    100303 Condominium Services v. First Owners' Ass'n 04/21/2011 (Revised 05/25/2011) In a lawsuit between a condominium owners' association and a management services company, the circuit court did not err in sustaining the association's demurrers and striking an affirmative defense. The agreement between the parties, although it referenced the association's bylaws, did not require a three-fourths vote of the unit owners before the association could terminate the services of the management agent. The circuit court also did not err in denying a motion to dismiss the association's conversion claim, because the agreement had been terminated at the time the management company caused over $90,000 in fees to be deposited to its own bank account, and it was not error to grant summary judgment on the conversion claim. Expert witness designations, testimony regarding damages, punitive damages and remittitur are also discussed. The judgment is affirmed.

    -

    -

    091430 Commonwealth v. AMEC Civil 09/16/2010 In a construction contract dispute in which the plaintiff contractor and the defendant Virginia Department of Transportation both assign error, issues are discussed concerning timely notice of claims, whether sustained elevated lake water levels constitute a differing site condition under the contract, entitlement to home office overhead damages, calculation of actual costs as a basis for an award of damages, and entitlement to pre-judgment interest as an element of damages. The judgment of the Court of Appeals is affirmed in part and reversed in part, and the cases are remanded for the circuit court to recalculate damages.

    -

    Bill was passionate about being active in the community he loved. In the past he served for eight years on the Middletown Town Council. He was the president of the Potter League for Animals, and was instrumental in its relocation to its present site. He served on the Boards of Newport Hospital, The Newport County YMCA, The Preservation Society of Newport County, The Rhode Island Foundation, and Newport Federal Savings Bank. He was actively involved in capital campaigns for The Maher Center and Newport Hospital. Most recently, he served on the Board of Savings Institute Bank & Trust, as legal counsel to The Newport County Chamber of Commerce, and as a member of the YMCA finance committee.
    

    -

    Subject: Andrea
    Date: Wed, 23 Jun 2004 16:40:27 -0400
    From: "hannahrichard" hannahrichard@cox.net
    To: Bryan@DenProductions.com

    Dear Bryan,
    One idea I had for the Great Bridge High School site was some sort of memorial for one of my friends, Andrea B. Deller (rising sophomore), who passed away on June 19, 2004. I think it would be something that many people would appreciate, because so many of us were friends with her. In case you do not know, the details are found below. Please consider this suggestion, because she meant so much to many of us.
    
    Yours sincerely,
    Hannah Richard

    -

    Along with Andrea, our class has also lost two other people taken too soon, Miss April Carolyn Townes and Mr Nicholas Stephen Rosso. Both were dear to our class and will be greatly missed during our senior year. I'm sure many of their friends and family would appreciate their being added to this site, because they are surely not forgotten.
    
    Thanks,
    Kate Welsh

    -

    Hi Bryan, my name is Sarah Devincenzi Ferguson. I was a dropout, which I would advise no one to follow me in doing. However, I went to school with Carole and knew her from the first grade. She always was a beautiful person inside and out. I still have a valentine card that she gave me! I have cherished every memory of everyone I have met. She was special and I hope she did well in her life. The card was signed "to my best friend ever!" Well, a lot of the past has changed, but never the memories of all the nice people I met in school. I never remember bad things going on at Great Bridge High School in the 1960's. I went on to educate myself and I still am doing that. Stay in school, kids; even though I did not, I am still married to the love of my life and it will be 52 years of ups and downs. I was lucky, but not all are. So my advice is: if you do not want to struggle through life, get an education. Respect yourself and others. Thank you Bryan.
    
    well wishes to all
    Sarah D Ferguson

    Hi Bryan. This is Sarah Ferguson again. My husband Jim Ferguson graduated from Great Bridge High in 1961. I recently replied to the request from Carole Shelton's family. Now I am told by Jim Ferguson that many of our classmates and neighbors have passed. I am so sad to hear that they left us so early in their lives. Angela Sabato and I went to school together and were the quiet, shy ones, lol. Well, I moved to Sparrow Road and met Jimmy, who lived next door. We did not have to get married, but back then it was a time when you had to move on and let your friends not be influenced by your actions. It was the best of times when I knew all these nice people. Bryan, do keep this site going. It helps us all to remember, to reflect, to pray, and to realize that every second on earth is precious. All of the young people on here seem to be very happy and doing well with their lives, and I hope that life will continue to be precious to them. Thank you again Bryan and good luck to you also! Sincerely, Sarah D Ferguson. P.S. I did choose a very good man and life, and I continued my education, and Jim received his master's degree from North Carolina State. Sometimes God has plans for us that we do not even know about.
    

    -

    Hello Bryan,
    I came across your website while doing an internet search on my Dad's high school. His name was Lawrence Wayne Proctor, class of 1965, and he passed away on August 26, 2011. If you would, could you please include him on the "In Memoriam" page? Thank you.
    I do not have any high school pictures of him, such as yearbook pictures, but maybe some classmates have pictures with him. Could you post a note on your website asking those who knew my Dad to send me any yearbook pictures, etc., they might have? Also, I would be most interested to hear any stories of my Dad that anyone would like to share. My email is mwpphx@cox.net
    
    Thank you,
    Mark Proctor

    -

    Hi, I was looking up the name of one of my classmates, Chris Ivey, who passed too soon. I noticed that another student we lost while in middle school, Vincent Mercer, who should have graduated with us (Chris Ivey and myself) in 1993, was not on your page. I agree with whoever gave you the idea to start this page. These wonderful people were in our lives and need to be remembered!!!!
    Here are 2 pictures from my yearbook from 6th grade.
    
    Thanks
    RIP VINCENT AND CHRIS you two are gone but NEVER NEVER forgotten!!!

    I wrote you yesterday about adding Vincent Mercer, whom I attended school with. I was looking on your website and didn't see another friend of mine who passed too soon: Warren T. Trueblood Jr. He graduated in 1989 from Great Bridge. His birthday was May 24, 1970, and his death date was Jan. 14, 1994; he was 23 yrs old. He was like a brother to me; we were best friends growing up, and I am still friends with his sister. Her daughter and my daughter/children have grown up together, and it would be awesome to have him added to this board. He was a great, sweet, gentle person!!! I think of him often and all the times I shared with him and his sister. Well, his whole family... Thank you for your time. God Bless.
    
    Betty J Lewis-Kendrick CO 93 GBH

    
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Comsol Multiphysics 4.3 Crack _BEST_ License File Torrent. Process Titl.md b/spaces/bioriAsaeru/text-to-voice/Comsol Multiphysics 4.3 Crack _BEST_ License File Torrent. Process Titl.md deleted file mode 100644 index 7e213786920cdcb934ba674c6e8199b5124a0d28..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Comsol Multiphysics 4.3 Crack _BEST_ License File Torrent. Process Titl.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Comsol Multiphysics 4.3 Crack License File Torrent. process titl


    Download ✓✓✓ https://urloso.com/2uyReH



    -
    -
    
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Download Ccboot 2.1 Full Crack How to Install and Configure the Best Diskless Boot Solution.md b/spaces/bioriAsaeru/text-to-voice/Download Ccboot 2.1 Full Crack How to Install and Configure the Best Diskless Boot Solution.md deleted file mode 100644 index 14f413d5e80458b149d21cd2a53caea542a7642f..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Download Ccboot 2.1 Full Crack How to Install and Configure the Best Diskless Boot Solution.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Download Ccboot 2.1 Full Crack


    DOWNLOAD 🗹 https://urloso.com/2uyOpg



    -
    -
    
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Economics Lipsey And Chrystal 12th Edition Free Download98.md b/spaces/bioriAsaeru/text-to-voice/Economics Lipsey And Chrystal 12th Edition Free Download98.md deleted file mode 100644 index 914b414c3760fb266f80f0f68ce0e19d8bf6ec04..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Economics Lipsey And Chrystal 12th Edition Free Download98.md +++ /dev/null @@ -1,16 +0,0 @@ - -

    How to Download Economics by Lipsey and Chrystal 12th Edition for Free

    -

    Economics by Lipsey and Chrystal is a popular textbook that covers the essential principles and theory of microeconomics and macroeconomics. The 12th edition of this book was published in 2011 and provides a comprehensive and up-to-date review of the field. It also includes extensive practical applications and examples that help students understand the real-world relevance of the material.

    -

    Economics Lipsey And Chrystal 12th Edition Free Download98


    Download File --->>> https://urloso.com/2uyOqT



    -

    If you are looking for a free download of Economics by Lipsey and Chrystal 12th edition, you may have a hard time finding a legitimate and legal source. Many websites that claim to offer free downloads of this book are scams, distribute malware, or infringe on the authors' copyrights. Therefore, you should be careful and avoid clicking on suspicious links or downloading files from unknown sources.
    

    -

    One way to access Economics by Lipsey and Chrystal 12th edition for free is to use your library's online resources. Many libraries have subscriptions to e-book platforms that allow you to borrow or read digital copies of textbooks online. You can check your library's website or catalog to see if they have Economics by Lipsey and Chrystal 12th edition available as an e-book. If they do, you can use your library card number and password to access it.

    -

    Another way to access Economics by Lipsey and Chrystal 12th edition for free is to use a reputable academic website that offers open access to textbooks. For example, you can try OpenStax, which is a non-profit organization that provides free, high-quality textbooks for various subjects. You can browse their catalog and see if they have a book that covers the same topics as Economics by Lipsey and Chrystal 12th edition. You can then download or read the book online for free.

    -

    A third way to access Economics by Lipsey and Chrystal 12th edition for free is to use a peer-to-peer sharing platform that allows you to exchange files with other users. For example, you can try Library Genesis, which is a website that hosts millions of books, articles, and other documents. You can search for Economics by Lipsey and Chrystal 12th edition on their website and see if they have a PDF or EPUB file that you can download. However, you should be aware that this method may be illegal in some countries or regions, as it may violate the authors' intellectual property rights. Therefore, you should use this method at your own risk and discretion.

    -

    -

    In conclusion, there are several ways to download Economics by Lipsey and Chrystal 12th edition for free, but not all of them are safe, legal, or ethical. You should always respect the authors' work and rights, and only use sources that are authorized or licensed to distribute their books. If you want to support the authors and publishers, you can also consider buying a copy of the book from a reputable online or offline bookstore.

    - -

    Economics by Lipsey and Chrystal 12th edition is a comprehensive textbook that covers both microeconomics and macroeconomics. Microeconomics is the branch of economics that studies the behavior and decisions of individual agents, such as consumers, firms, and households. Macroeconomics is the branch of economics that studies the behavior and performance of the aggregate economy, such as national income, inflation, unemployment, and growth.

    -

    The book is divided into six parts. The first part introduces the basic concepts and tools of economics, such as scarcity, opportunity cost, demand and supply, elasticity, and market equilibrium. The second part covers the theory of consumer behavior, production and costs, market structures, and market failure. The third part covers the measurement and determination of national income, aggregate demand and supply, money and banking, and inflation and unemployment. The fourth part covers the theory of economic growth, fiscal policy, monetary policy, and international trade and finance. The fifth part covers the issues and challenges of economic development, poverty and inequality, environmental economics, and public choice. The sixth part covers the history of economic thought, from Adam Smith to John Maynard Keynes.
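    To give a concrete taste of the material in the first part, here is a small illustrative sketch (our own example, not one taken from the book) that solves for the market equilibrium of a hypothetical linear demand curve Qd = a - bP and supply curve Qs = c + dP:

    ```python
    # Illustrative only: hypothetical linear demand and supply curves of the kind
    # introduced in part one of the textbook (not an example from the book itself).

    def equilibrium(a: float, b: float, c: float, d: float) -> tuple[float, float]:
        """Solve Qd = a - b*P and Qs = c + d*P for the equilibrium price and quantity.

        Setting Qd = Qs gives a - b*P = c + d*P, so P* = (a - c) / (b + d).
        """
        price = (a - c) / (b + d)
        quantity = a - b * price  # substitute P* back into the demand curve
        return price, quantity

    # Example: Qd = 100 - 2P and Qs = 10 + 4P give P* = 15 and Q* = 70.
    p_star, q_star = equilibrium(a=100, b=2, c=10, d=4)
    print(f"Equilibrium price: {p_star}, equilibrium quantity: {q_star}")
    ```
    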

    -

    The book is written in a clear and engaging style, with numerous examples and applications from various countries and regions. It also includes graphs, tables, diagrams, boxes, summaries, exercises, and review questions to help students understand and apply the material. The book is suitable for undergraduate students who are taking introductory or intermediate courses in economics. It is also a useful reference for anyone who wants to learn more about the principles and practice of economics.

    
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Fundamentos De Fisica Frank Blatt.md b/spaces/bioriAsaeru/text-to-voice/Fundamentos De Fisica Frank Blatt.md deleted file mode 100644 index 10c0f9f7591631302dc81d4efaeade24f8c8fef3..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Fundamentos De Fisica Frank Blatt.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Fundamentos De Fisica Frank Blatt


    Download Zip ✶✶✶ https://urloso.com/2uyQbm



    -
    -
    
    -
    -
    -

    diff --git a/spaces/bla/tranny/App/Embedding/utils/Initialize.py b/spaces/bla/tranny/App/Embedding/utils/Initialize.py deleted file mode 100644 index 5420886869e95906bef34e737c435308b6f04f1f..0000000000000000000000000000000000000000 --- a/spaces/bla/tranny/App/Embedding/utils/Initialize.py +++ /dev/null @@ -1,66 +0,0 @@ -from langchain.embeddings import HuggingFaceEmbeddings -from langchain.docstore.document import Document -from langchain.vectorstores import Pinecone -import pinecone -import os - -# get api key from app.pinecone.io -PINECONE_API_KEY = os.environ.get("PINECONE_API_KEY") -# find your environment next to the api key in pinecone console -PINECONE_ENV = os.environ.get("PINECONE_ENVIRONMENT") - - -index_name = "transcript-bits" -model_name = "thenlper/gte-base" -embeddings = HuggingFaceEmbeddings(model_name=model_name) - - -pinecone.init(api_key=PINECONE_API_KEY, environment=PINECONE_ENV) -vector_index = pinecone.Index(index_name=index_name) -docsearch = Pinecone.from_existing_index(index_name, embeddings) - - -async def delete_documents(task_id): - docsearch.delete( - filter={ - "task_id": {"$eq": task_id},  # fixed: the original quoted the variable and so matched the literal string "task_id" - } - ) - - - -def generateChunks(chunks, task_id, n=100):  # merge every n transcript segments into one Document with start/end timestamps - combined = [chunks[i : i + n] for i in range(0, len(chunks), n)] - result = [] - for chunk in combined: - data = {"text": ""} - for i, item in enumerate(chunk):  # enumerate() instead of chunk.index(item), which mis-indexes duplicate segments - if i == 0: - data["start"] = item["start"] - if i == len(chunk) - 1: - data["end"] = item["end"] - data["text"] += " " + item["text"] - - temp = Document( - page_content=data["text"], - metadata={"start": data["start"], "end": data["end"], "task_id": task_id}, - ) - result.append(temp) - return result - - -def search(query: str, task_id: str): - filtering_conditions = { - "task_id": {"$eq": task_id},  # same fix as in delete_documents - } - data = docsearch.similarity_search(query, k=4, filter=filtering_conditions) - return [ - {"text": d.page_content, "start": d.metadata["start"], "end": d.metadata["end"]} - for d in data - ] - - - -def encode(temp: list[Document]): - docsearch.add_documents(temp) - # return embeddings.embed_documents(texts = [d.page_content for d in temp]) diff --git a/spaces/blmdsydm/faster-whisper-webui/app-shared.py b/spaces/blmdsydm/faster-whisper-webui/app-shared.py deleted file mode 100644 index 63cac1a8adaf90784c5f5f178f86243ad2149ee4..0000000000000000000000000000000000000000 --- a/spaces/blmdsydm/faster-whisper-webui/app-shared.py +++ /dev/null @@ -1,5 +0,0 @@ -# Run the app with no audio file restrictions -from app import create_ui -from src.config import ApplicationConfig - -create_ui(ApplicationConfig.create_default(input_audio_max_duration=-1, share=True)) \ No newline at end of file diff --git a/spaces/bunkalab/bunka-map/maps/violence_men_women.html b/spaces/bunkalab/bunka-map/maps/violence_men_women.html deleted file mode 100644 index 54d0f0198417c48d84a4978318cb81727ccbfcc2..0000000000000000000000000000000000000000 --- a/spaces/bunkalab/bunka-map/maps/violence_men_women.html +++ /dev/null @@ -1,14 +0,0 @@ - - - -
    
    -
    - - \ No newline at end of file diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/test_packaging.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/test_packaging.py deleted file mode 100644 index a5b1661e8f341fe66a6e02c59fe172bce445782b..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/test_packaging.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import unittest - -from detectron2.utils.collect_env import collect_env_info - - -class TestProjects(unittest.TestCase): - def test_import(self): - from detectron2.projects import point_rend - - _ = point_rend.add_pointrend_config - - import detectron2.projects.deeplab as deeplab - - _ = deeplab.add_deeplab_config - - # import detectron2.projects.panoptic_deeplab as panoptic_deeplab - - # _ = panoptic_deeplab.add_panoptic_deeplab_config - - -class TestCollectEnv(unittest.TestCase): - def test(self): - _ = collect_env_info() diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/deploy/README.md b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/deploy/README.md deleted file mode 100644 index e33cbeb54c003a5738da68c838fdaa4e0d218501..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/deploy/README.md +++ /dev/null @@ -1,66 +0,0 @@ -See [deployment tutorial](https://detectron2.readthedocs.io/tutorials/deployment.html) -for some high-level background about deployment. - -This directory contains the following examples: - -1. An example script `export_model.py` - that exports a detectron2 model for deployment using different methods and formats. - -2. A C++ example that runs inference with Mask R-CNN model in TorchScript format. - -## Build -Deployment depends on libtorch and OpenCV. Some require more dependencies: - -* Running TorchScript-format models produced by `--export-method=caffe2_tracing` requires libtorch - to be built with caffe2 enabled. -* Running TorchScript-format models produced by `--export-method=tracing/scripting` requires libtorchvision (C++ library of torchvision). - -All methods are supported in one C++ file that requires all the above dependencies. -Adjust it and remove code you don't need. -As a reference, we provide a [Dockerfile](../../docker/deploy.Dockerfile) that installs all the above dependencies and builds the C++ example. - -## Use - -We show a few example commands to export and execute a Mask R-CNN model in C++. 
- -* `export-method=tracing, format=torchscript`: -``` -./export_model.py --config-file ../../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ - --output ./output --export-method tracing --format torchscript \ - MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl \ - MODEL.DEVICE cuda - -./build/torchscript_mask_rcnn output/model.ts input.jpg tracing -``` - -* `export-method=scripting, format=torchscript`: -``` -./export_model.py --config-file ../../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ - --output ./output --export-method scripting --format torchscript \ - MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl \ - -./build/torchscript_mask_rcnn output/model.ts input.jpg scripting -``` - -* `export-method=caffe2_tracing, format=torchscript`: - -``` -./export_model.py --config-file ../../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ - --output ./output --export-method caffe2_tracing --format torchscript \ - MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl \ - -./build/torchscript_mask_rcnn output/model.ts input.jpg caffe2_tracing -``` - - -## Notes: - -1. Tracing/Caffe2-tracing requires valid weights & sample inputs. - Therefore the above commands require pre-trained models and [COCO dataset](https://detectron2.readthedocs.io/tutorials/builtin_datasets.html). - You can modify the script to obtain sample inputs in other ways instead of from COCO. - -2. `--run-eval` is implemented only for tracing mode - to evaluate the exported model using the dataset in the config. - It's recommended to always verify the accuracy in case the conversion is not successful. - Evaluation can be slow if model is exported to CPU or dataset is too large ("coco_2017_val_100" is a small subset of COCO useful for evaluation). - `caffe2_tracing` accuracy may be slightly different (within 0.1 AP) from original model due to numerical precisions between different runtime. diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/models/albert/modeling_flax_albert.py b/spaces/chendl/compositional_test/transformers/src/transformers/models/albert/modeling_flax_albert.py deleted file mode 100644 index 0ff1b9276a19d6a6ee101520f375a2af04a1defe..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/src/transformers/models/albert/modeling_flax_albert.py +++ /dev/null @@ -1,1119 +0,0 @@ -# coding=utf-8 -# Copyright 2021 Google AI, Google Brain and the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from typing import Callable, Optional, Tuple - -import flax -import flax.linen as nn -import jax -import jax.numpy as jnp -import numpy as np -from flax.core.frozen_dict import FrozenDict, freeze, unfreeze -from flax.linen.attention import dot_product_attention_weights -from flax.traverse_util import flatten_dict, unflatten_dict -from jax import lax - -from ...modeling_flax_outputs import ( - FlaxBaseModelOutput, - FlaxBaseModelOutputWithPooling, - FlaxMaskedLMOutput, - FlaxMultipleChoiceModelOutput, - FlaxQuestionAnsweringModelOutput, - FlaxSequenceClassifierOutput, - FlaxTokenClassifierOutput, -) -from ...modeling_flax_utils import ( - ACT2FN, - FlaxPreTrainedModel, - append_call_sample_docstring, - append_replace_return_docstrings, - overwrite_call_docstring, -) -from ...utils import ModelOutput, add_start_docstrings, add_start_docstrings_to_model_forward, logging -from .configuration_albert import AlbertConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "albert-base-v2" -_CONFIG_FOR_DOC = "AlbertConfig" - - -@flax.struct.dataclass -class FlaxAlbertForPreTrainingOutput(ModelOutput): - """ - Output type of [`FlaxAlbertForPreTraining`]. - - Args: - prediction_logits (`jnp.ndarray` of shape `(batch_size, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - sop_logits (`jnp.ndarray` of shape `(batch_size, 2)`): - Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation - before SoftMax). - hidden_states (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - prediction_logits: jnp.ndarray = None - sop_logits: jnp.ndarray = None - hidden_states: Optional[Tuple[jnp.ndarray]] = None - attentions: Optional[Tuple[jnp.ndarray]] = None - - -ALBERT_START_DOCSTRING = r""" - - This model inherits from [`FlaxPreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading, saving and converting weights from PyTorch models) - - This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) - subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to - general usage and behavior. 
- - Finally, this model supports inherent JAX features such as: - - - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit) - - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation) - - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap) - - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap) - - Parameters: - config ([`AlbertConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~FlaxPreTrainedModel.from_pretrained`] method to load the model weights. - dtype (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`): - The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and - `jax.numpy.bfloat16` (on TPUs). - - This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If - specified all the computation will be performed with the given `dtype`. - - **Note that this only specifies the dtype of the computation and does not influence the dtype of model - parameters.** - - If you wish to change the dtype of the model parameters, see [`~FlaxPreTrainedModel.to_fp16`] and - [`~FlaxPreTrainedModel.to_bf16`]. -""" - -ALBERT_INPUTS_DOCSTRING = r""" - Args: - input_ids (`numpy.ndarray` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`numpy.ndarray` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`numpy.ndarray` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`numpy.ndarray` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
- -""" - - -class FlaxAlbertEmbeddings(nn.Module): - """Construct the embeddings from word, position and token_type embeddings.""" - - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.word_embeddings = nn.Embed( - self.config.vocab_size, - self.config.embedding_size, - embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range), - ) - self.position_embeddings = nn.Embed( - self.config.max_position_embeddings, - self.config.embedding_size, - embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range), - ) - self.token_type_embeddings = nn.Embed( - self.config.type_vocab_size, - self.config.embedding_size, - embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range), - ) - self.LayerNorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) - self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) - - # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertEmbeddings.__call__ - def __call__(self, input_ids, token_type_ids, position_ids, deterministic: bool = True): - # Embed - inputs_embeds = self.word_embeddings(input_ids.astype("i4")) - position_embeds = self.position_embeddings(position_ids.astype("i4")) - token_type_embeddings = self.token_type_embeddings(token_type_ids.astype("i4")) - - # Sum all embeddings - hidden_states = inputs_embeds + token_type_embeddings + position_embeds - - # Layer Norm - hidden_states = self.LayerNorm(hidden_states) - hidden_states = self.dropout(hidden_states, deterministic=deterministic) - return hidden_states - - -class FlaxAlbertSelfAttention(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - if self.config.hidden_size % self.config.num_attention_heads != 0: - raise ValueError( - "`config.hidden_size`: {self.config.hidden_size} has to be a multiple of `config.num_attention_heads` " - " : {self.config.num_attention_heads}" - ) - - self.query = nn.Dense( - self.config.hidden_size, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - ) - self.key = nn.Dense( - self.config.hidden_size, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - ) - self.value = nn.Dense( - self.config.hidden_size, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - ) - self.dense = nn.Dense( - self.config.hidden_size, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - dtype=self.dtype, - ) - self.LayerNorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) - self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) - - def __call__(self, hidden_states, attention_mask, deterministic=True, output_attentions: bool = False): - head_dim = self.config.hidden_size // self.config.num_attention_heads - - query_states = self.query(hidden_states).reshape( - hidden_states.shape[:2] + (self.config.num_attention_heads, head_dim) - ) - value_states = self.value(hidden_states).reshape( - hidden_states.shape[:2] + (self.config.num_attention_heads, head_dim) - ) - key_states = self.key(hidden_states).reshape( - hidden_states.shape[:2] + (self.config.num_attention_heads, head_dim) - ) - - # Convert the boolean attention mask to an attention bias. 
- if attention_mask is not None: - # attention mask in the form of attention bias - attention_mask = jnp.expand_dims(attention_mask, axis=(-3, -2)) - attention_bias = lax.select( - attention_mask > 0, - jnp.full(attention_mask.shape, 0.0).astype(self.dtype), - jnp.full(attention_mask.shape, jnp.finfo(self.dtype).min).astype(self.dtype), - ) - else: - attention_bias = None - - dropout_rng = None - if not deterministic and self.config.attention_probs_dropout_prob > 0.0: - dropout_rng = self.make_rng("dropout") - - attn_weights = dot_product_attention_weights( - query_states, - key_states, - bias=attention_bias, - dropout_rng=dropout_rng, - dropout_rate=self.config.attention_probs_dropout_prob, - broadcast_dropout=True, - deterministic=deterministic, - dtype=self.dtype, - precision=None, - ) - - attn_output = jnp.einsum("...hqk,...khd->...qhd", attn_weights, value_states) - attn_output = attn_output.reshape(attn_output.shape[:2] + (-1,)) - - projected_attn_output = self.dense(attn_output) - projected_attn_output = self.dropout(projected_attn_output, deterministic=deterministic) - layernormed_attn_output = self.LayerNorm(projected_attn_output + hidden_states) - outputs = (layernormed_attn_output, attn_weights) if output_attentions else (layernormed_attn_output,) - return outputs - - -class FlaxAlbertLayer(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.attention = FlaxAlbertSelfAttention(self.config, dtype=self.dtype) - self.ffn = nn.Dense( - self.config.intermediate_size, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - dtype=self.dtype, - ) - self.activation = ACT2FN[self.config.hidden_act] - self.ffn_output = nn.Dense( - self.config.hidden_size, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - dtype=self.dtype, - ) - self.full_layer_layer_norm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) - self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) - - def __call__( - self, - hidden_states, - attention_mask, - deterministic: bool = True, - output_attentions: bool = False, - ): - attention_outputs = self.attention( - hidden_states, attention_mask, deterministic=deterministic, output_attentions=output_attentions - ) - attention_output = attention_outputs[0] - ffn_output = self.ffn(attention_output) - ffn_output = self.activation(ffn_output) - ffn_output = self.ffn_output(ffn_output) - ffn_output = self.dropout(ffn_output, deterministic=deterministic) - hidden_states = self.full_layer_layer_norm(ffn_output + attention_output) - - outputs = (hidden_states,) - - if output_attentions: - outputs += (attention_outputs[1],) - return outputs - - -class FlaxAlbertLayerCollection(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.layers = [ - FlaxAlbertLayer(self.config, name=str(i), dtype=self.dtype) for i in range(self.config.inner_group_num) - ] - - def __call__( - self, - hidden_states, - attention_mask, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - ): - layer_hidden_states = () - layer_attentions = () - - for layer_index, albert_layer in enumerate(self.layers): - layer_output = albert_layer( - hidden_states, - attention_mask, - deterministic=deterministic, - output_attentions=output_attentions, - ) - hidden_states = layer_output[0] - - if output_attentions: - layer_attentions = layer_attentions + 
(layer_output[1],) - - if output_hidden_states: - layer_hidden_states = layer_hidden_states + (hidden_states,) - - outputs = (hidden_states,) - if output_hidden_states: - outputs = outputs + (layer_hidden_states,) - if output_attentions: - outputs = outputs + (layer_attentions,) - return outputs # last-layer hidden state, (layer hidden states), (layer attentions) - - -class FlaxAlbertLayerCollections(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - layer_index: Optional[str] = None - - def setup(self): - self.albert_layers = FlaxAlbertLayerCollection(self.config, dtype=self.dtype) - - def __call__( - self, - hidden_states, - attention_mask, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - ): - outputs = self.albert_layers( - hidden_states, - attention_mask, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - ) - return outputs - - -class FlaxAlbertLayerGroups(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.layers = [ - FlaxAlbertLayerCollections(self.config, name=str(i), layer_index=str(i), dtype=self.dtype) - for i in range(self.config.num_hidden_groups) - ] - - def __call__( - self, - hidden_states, - attention_mask, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - all_attentions = () if output_attentions else None - all_hidden_states = (hidden_states,) if output_hidden_states else None - - for i in range(self.config.num_hidden_layers): - # Index of the hidden group - group_idx = int(i / (self.config.num_hidden_layers / self.config.num_hidden_groups)) - layer_group_output = self.layers[group_idx]( - hidden_states, - attention_mask, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - ) - hidden_states = layer_group_output[0] - - if output_attentions: - all_attentions = all_attentions + layer_group_output[-1] - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None) - return FlaxBaseModelOutput( - last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions - ) - - -class FlaxAlbertEncoder(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.embedding_hidden_mapping_in = nn.Dense( - self.config.hidden_size, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - dtype=self.dtype, - ) - self.albert_layer_groups = FlaxAlbertLayerGroups(self.config, dtype=self.dtype) - - def __call__( - self, - hidden_states, - attention_mask, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - hidden_states = self.embedding_hidden_mapping_in(hidden_states) - return self.albert_layer_groups( - hidden_states, - attention_mask, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - ) - - -class FlaxAlbertOnlyMLMHead(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 - bias_init: Callable[..., np.ndarray] = jax.nn.initializers.zeros - - def setup(self): - self.dense = 
nn.Dense(self.config.embedding_size, dtype=self.dtype) - self.activation = ACT2FN[self.config.hidden_act] - self.LayerNorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) - self.decoder = nn.Dense(self.config.vocab_size, dtype=self.dtype, use_bias=False) - self.bias = self.param("bias", self.bias_init, (self.config.vocab_size,)) - - def __call__(self, hidden_states, shared_embedding=None): - hidden_states = self.dense(hidden_states) - hidden_states = self.activation(hidden_states) - hidden_states = self.LayerNorm(hidden_states) - - if shared_embedding is not None: - hidden_states = self.decoder.apply({"params": {"kernel": shared_embedding.T}}, hidden_states) - else: - hidden_states = self.decoder(hidden_states) - - hidden_states += self.bias - return hidden_states - - -class FlaxAlbertSOPHead(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.dropout = nn.Dropout(self.config.classifier_dropout_prob) - self.classifier = nn.Dense(2, dtype=self.dtype) - - def __call__(self, pooled_output, deterministic=True): - pooled_output = self.dropout(pooled_output, deterministic=deterministic) - logits = self.classifier(pooled_output) - return logits - - -class FlaxAlbertPreTrainedModel(FlaxPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = AlbertConfig - base_model_prefix = "albert" - module_class: nn.Module = None - - def __init__( - self, - config: AlbertConfig, - input_shape: Tuple = (1, 1), - seed: int = 0, - dtype: jnp.dtype = jnp.float32, - _do_init: bool = True, - **kwargs, - ): - module = self.module_class(config=config, dtype=dtype, **kwargs) - super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype, _do_init=_do_init) - - def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple, params: FrozenDict = None) -> FrozenDict: - # init input tensors - input_ids = jnp.zeros(input_shape, dtype="i4") - token_type_ids = jnp.zeros_like(input_ids) - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_shape) - attention_mask = jnp.ones_like(input_ids) - - params_rng, dropout_rng = jax.random.split(rng) - rngs = {"params": params_rng, "dropout": dropout_rng} - - random_params = self.module.init( - rngs, input_ids, attention_mask, token_type_ids, position_ids, return_dict=False - )["params"] - - if params is not None: - random_params = flatten_dict(unfreeze(random_params)) - params = flatten_dict(unfreeze(params)) - for missing_key in self._missing_keys: - params[missing_key] = random_params[missing_key] - self._missing_keys = set() - return freeze(unflatten_dict(params)) - else: - return random_params - - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - def __call__( - self, - input_ids, - attention_mask=None, - token_type_ids=None, - position_ids=None, - params: dict = None, - dropout_rng: jax.random.PRNGKey = None, - train: bool = False, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ): - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.return_dict - - # init 
input tensors if not passed - if token_type_ids is None: - token_type_ids = jnp.zeros_like(input_ids) - - if position_ids is None: - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape) - - if attention_mask is None: - attention_mask = jnp.ones_like(input_ids) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - return self.module.apply( - {"params": params or self.params}, - jnp.array(input_ids, dtype="i4"), - jnp.array(attention_mask, dtype="i4"), - jnp.array(token_type_ids, dtype="i4"), - jnp.array(position_ids, dtype="i4"), - not train, - output_attentions, - output_hidden_states, - return_dict, - rngs=rngs, - ) - - -class FlaxAlbertModule(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - add_pooling_layer: bool = True - - def setup(self): - self.embeddings = FlaxAlbertEmbeddings(self.config, dtype=self.dtype) - self.encoder = FlaxAlbertEncoder(self.config, dtype=self.dtype) - if self.add_pooling_layer: - self.pooler = nn.Dense( - self.config.hidden_size, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - dtype=self.dtype, - name="pooler", - ) - self.pooler_activation = nn.tanh - else: - self.pooler = None - self.pooler_activation = None - - def __call__( - self, - input_ids, - attention_mask, - token_type_ids: Optional[np.ndarray] = None, - position_ids: Optional[np.ndarray] = None, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - # make sure `token_type_ids` is correctly initialized when not passed - if token_type_ids is None: - token_type_ids = jnp.zeros_like(input_ids) - - # make sure `position_ids` is correctly initialized when not passed - if position_ids is None: - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape) - - hidden_states = self.embeddings(input_ids, token_type_ids, position_ids, deterministic=deterministic) - - outputs = self.encoder( - hidden_states, - attention_mask, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = outputs[0] - if self.add_pooling_layer: - pooled = self.pooler(hidden_states[:, 0]) - pooled = self.pooler_activation(pooled) - else: - pooled = None - - if not return_dict: - # if pooled is None, don't return it - if pooled is None: - return (hidden_states,) + outputs[1:] - return (hidden_states, pooled) + outputs[1:] - - return FlaxBaseModelOutputWithPooling( - last_hidden_state=hidden_states, - pooler_output=pooled, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - "The bare Albert Model transformer outputting raw hidden-states without any specific head on top.", - ALBERT_START_DOCSTRING, -) -class FlaxAlbertModel(FlaxAlbertPreTrainedModel): - module_class = FlaxAlbertModule - - -append_call_sample_docstring(FlaxAlbertModel, _CHECKPOINT_FOR_DOC, FlaxBaseModelOutputWithPooling, _CONFIG_FOR_DOC) - - -class FlaxAlbertForPreTrainingModule(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.albert = FlaxAlbertModule(config=self.config, dtype=self.dtype) - self.predictions = FlaxAlbertOnlyMLMHead(config=self.config, dtype=self.dtype) - self.sop_classifier = FlaxAlbertSOPHead(config=self.config, dtype=self.dtype) - - def __call__( - self, - 
input_ids, - attention_mask, - token_type_ids, - position_ids, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - # Model - outputs = self.albert( - input_ids, - attention_mask, - token_type_ids, - position_ids, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - if self.config.tie_word_embeddings: - shared_embedding = self.albert.variables["params"]["embeddings"]["word_embeddings"]["embedding"] - else: - shared_embedding = None - - hidden_states = outputs[0] - pooled_output = outputs[1] - - prediction_scores = self.predictions(hidden_states, shared_embedding=shared_embedding) - sop_scores = self.sop_classifier(pooled_output, deterministic=deterministic) - - if not return_dict: - return (prediction_scores, sop_scores) + outputs[2:] - - return FlaxAlbertForPreTrainingOutput( - prediction_logits=prediction_scores, - sop_logits=sop_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Albert Model with two heads on top as done during the pretraining: a `masked language modeling` head and a - `sentence order prediction (classification)` head. - """, - ALBERT_START_DOCSTRING, -) -class FlaxAlbertForPreTraining(FlaxAlbertPreTrainedModel): - module_class = FlaxAlbertForPreTrainingModule - - -FLAX_ALBERT_FOR_PRETRAINING_DOCSTRING = """ - Returns: - - Example: - - ```python - >>> from transformers import AutoTokenizer, FlaxAlbertForPreTraining - - >>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2") - >>> model = FlaxAlbertForPreTraining.from_pretrained("albert-base-v2") - - >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np") - >>> outputs = model(**inputs) - - >>> prediction_logits = outputs.prediction_logits - >>> seq_relationship_logits = outputs.sop_logits - ``` -""" - -overwrite_call_docstring( - FlaxAlbertForPreTraining, - ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length") + FLAX_ALBERT_FOR_PRETRAINING_DOCSTRING, -) -append_replace_return_docstrings( - FlaxAlbertForPreTraining, output_type=FlaxAlbertForPreTrainingOutput, config_class=_CONFIG_FOR_DOC -) - - -class FlaxAlbertForMaskedLMModule(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.albert = FlaxAlbertModule(config=self.config, add_pooling_layer=False, dtype=self.dtype) - self.predictions = FlaxAlbertOnlyMLMHead(config=self.config, dtype=self.dtype) - - def __call__( - self, - input_ids, - attention_mask, - token_type_ids, - position_ids, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - # Model - outputs = self.albert( - input_ids, - attention_mask, - token_type_ids, - position_ids, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - if self.config.tie_word_embeddings: - shared_embedding = self.albert.variables["params"]["embeddings"]["word_embeddings"]["embedding"] - else: - shared_embedding = None - - # Compute the prediction scores - logits = self.predictions(hidden_states, shared_embedding=shared_embedding) - - if not return_dict: - return (logits,) + outputs[1:] - - return FlaxMaskedLMOutput( - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - 
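A quick aside on the tied-embedding branch above: when `config.tie_word_embeddings` is set, `FlaxAlbertForMaskedLMModule` pulls the word-embedding table out of the ALBERT variables and `FlaxAlbertOnlyMLMHead` binds its transpose as the decoder's `Dense` kernel at call time, instead of using the decoder's own parameters. A minimal, self-contained sketch of that Flax pattern (sizes are made up for illustration; this is a sketch of the idiom, not the module itself):

```python
import jax.numpy as jnp
import flax.linen as nn

vocab_size, embed_dim = 30000, 128                 # illustrative sizes
embedding_table = jnp.ones((vocab_size, embed_dim))  # word-embedding table (vocab, embed)

# Output projection from hidden states back to vocabulary logits.
decoder = nn.Dense(vocab_size, use_bias=False)

hidden_states = jnp.zeros((1, 5, embed_dim))       # (batch, seq_len, embed_dim)

# Bind the transposed embedding table as the Dense kernel -- the same trick
# FlaxAlbertOnlyMLMHead uses when word embeddings are tied.
logits = decoder.apply({"params": {"kernel": embedding_table.T}}, hidden_states)
print(logits.shape)                                 # (1, 5, 30000)
```

The transpose is needed because an embedding table is stored as `(vocab_size, embed_dim)`, while a `Dense` kernel is laid out as `(in_features, out_features)`.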
-@add_start_docstrings("""Albert Model with a `language modeling` head on top.""", ALBERT_START_DOCSTRING) -class FlaxAlbertForMaskedLM(FlaxAlbertPreTrainedModel): - module_class = FlaxAlbertForMaskedLMModule - - -append_call_sample_docstring(FlaxAlbertForMaskedLM, _CHECKPOINT_FOR_DOC, FlaxMaskedLMOutput, _CONFIG_FOR_DOC) - - -class FlaxAlbertForSequenceClassificationModule(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.albert = FlaxAlbertModule(config=self.config, dtype=self.dtype) - classifier_dropout = ( - self.config.classifier_dropout_prob - if self.config.classifier_dropout_prob is not None - else self.config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(rate=classifier_dropout) - self.classifier = nn.Dense( - self.config.num_labels, - dtype=self.dtype, - ) - - def __call__( - self, - input_ids, - attention_mask, - token_type_ids, - position_ids, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - # Model - outputs = self.albert( - input_ids, - attention_mask, - token_type_ids, - position_ids, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooled_output = outputs[1] - pooled_output = self.dropout(pooled_output, deterministic=deterministic) - logits = self.classifier(pooled_output) - - if not return_dict: - return (logits,) + outputs[2:] - - return FlaxSequenceClassifierOutput( - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Albert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled - output) e.g. for GLUE tasks. 
- """, - ALBERT_START_DOCSTRING, -) -class FlaxAlbertForSequenceClassification(FlaxAlbertPreTrainedModel): - module_class = FlaxAlbertForSequenceClassificationModule - - -append_call_sample_docstring( - FlaxAlbertForSequenceClassification, - _CHECKPOINT_FOR_DOC, - FlaxSequenceClassifierOutput, - _CONFIG_FOR_DOC, -) - - -class FlaxAlbertForMultipleChoiceModule(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.albert = FlaxAlbertModule(config=self.config, dtype=self.dtype) - self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) - self.classifier = nn.Dense(1, dtype=self.dtype) - - def __call__( - self, - input_ids, - attention_mask, - token_type_ids, - position_ids, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - num_choices = input_ids.shape[1] - input_ids = input_ids.reshape(-1, input_ids.shape[-1]) if input_ids is not None else None - attention_mask = attention_mask.reshape(-1, attention_mask.shape[-1]) if attention_mask is not None else None - token_type_ids = token_type_ids.reshape(-1, token_type_ids.shape[-1]) if token_type_ids is not None else None - position_ids = position_ids.reshape(-1, position_ids.shape[-1]) if position_ids is not None else None - - # Model - outputs = self.albert( - input_ids, - attention_mask, - token_type_ids, - position_ids, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooled_output = outputs[1] - pooled_output = self.dropout(pooled_output, deterministic=deterministic) - logits = self.classifier(pooled_output) - - reshaped_logits = logits.reshape(-1, num_choices) - - if not return_dict: - return (reshaped_logits,) + outputs[2:] - - return FlaxMultipleChoiceModelOutput( - logits=reshaped_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Albert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a - softmax) e.g. for RocStories/SWAG tasks. 
- """, - ALBERT_START_DOCSTRING, -) -class FlaxAlbertForMultipleChoice(FlaxAlbertPreTrainedModel): - module_class = FlaxAlbertForMultipleChoiceModule - - -overwrite_call_docstring( - FlaxAlbertForMultipleChoice, ALBERT_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length") -) -append_call_sample_docstring( - FlaxAlbertForMultipleChoice, - _CHECKPOINT_FOR_DOC, - FlaxMultipleChoiceModelOutput, - _CONFIG_FOR_DOC, -) - - -class FlaxAlbertForTokenClassificationModule(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.albert = FlaxAlbertModule(config=self.config, dtype=self.dtype, add_pooling_layer=False) - classifier_dropout = ( - self.config.classifier_dropout_prob - if self.config.classifier_dropout_prob is not None - else self.config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(rate=classifier_dropout) - self.classifier = nn.Dense(self.config.num_labels, dtype=self.dtype) - - def __call__( - self, - input_ids, - attention_mask, - token_type_ids, - position_ids, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - # Model - outputs = self.albert( - input_ids, - attention_mask, - token_type_ids, - position_ids, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - hidden_states = self.dropout(hidden_states, deterministic=deterministic) - logits = self.classifier(hidden_states) - - if not return_dict: - return (logits,) + outputs[1:] - - return FlaxTokenClassifierOutput( - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Albert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for - Named-Entity-Recognition (NER) tasks. 
- """, - ALBERT_START_DOCSTRING, -) -class FlaxAlbertForTokenClassification(FlaxAlbertPreTrainedModel): - module_class = FlaxAlbertForTokenClassificationModule - - -append_call_sample_docstring( - FlaxAlbertForTokenClassification, - _CHECKPOINT_FOR_DOC, - FlaxTokenClassifierOutput, - _CONFIG_FOR_DOC, -) - - -class FlaxAlbertForQuestionAnsweringModule(nn.Module): - config: AlbertConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.albert = FlaxAlbertModule(config=self.config, dtype=self.dtype, add_pooling_layer=False) - self.qa_outputs = nn.Dense(self.config.num_labels, dtype=self.dtype) - - def __call__( - self, - input_ids, - attention_mask, - token_type_ids, - position_ids, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - # Model - outputs = self.albert( - input_ids, - attention_mask, - token_type_ids, - position_ids, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - - logits = self.qa_outputs(hidden_states) - start_logits, end_logits = logits.split(self.config.num_labels, axis=-1) - start_logits = start_logits.squeeze(-1) - end_logits = end_logits.squeeze(-1) - - if not return_dict: - return (start_logits, end_logits) + outputs[1:] - - return FlaxQuestionAnsweringModelOutput( - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Albert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear - layers on top of the hidden-states output to compute `span start logits` and `span end logits`). - """, - ALBERT_START_DOCSTRING, -) -class FlaxAlbertForQuestionAnswering(FlaxAlbertPreTrainedModel): - module_class = FlaxAlbertForQuestionAnsweringModule - - -append_call_sample_docstring( - FlaxAlbertForQuestionAnswering, - _CHECKPOINT_FOR_DOC, - FlaxQuestionAnsweringModelOutput, - _CONFIG_FOR_DOC, -) diff --git a/spaces/chenyangqi/FateZero/FateZero/video_diffusion/pipelines/DDIMSpatioTemporalStableDiffusionPipeline.py b/spaces/chenyangqi/FateZero/FateZero/video_diffusion/pipelines/DDIMSpatioTemporalStableDiffusionPipeline.py deleted file mode 100644 index 5228dfa27bdf54081b9a075f3d4d7ea7a437d42f..0000000000000000000000000000000000000000 --- a/spaces/chenyangqi/FateZero/FateZero/video_diffusion/pipelines/DDIMSpatioTemporalStableDiffusionPipeline.py +++ /dev/null @@ -1,300 +0,0 @@ -# code mostly taken from https://github.com/huggingface/diffusers -import inspect -from typing import Callable, List, Optional, Union -import PIL -import torch -import numpy as np -from einops import rearrange -from tqdm import trange, tqdm - -from diffusers.utils import deprecate, logging -from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput - -from ..models.unet_3d_condition import UNetPseudo3DConditionModel -from .stable_diffusion import SpatioTemporalStableDiffusionPipeline - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class DDIMSpatioTemporalStableDiffusionPipeline(SpatioTemporalStableDiffusionPipeline): - r""" - Pipeline for text-to-video generation using Spatio-Temporal Stable Diffusion. 
- """ - - def check_inputs(self, prompt, height, width, callback_steps, strength=None): - if not isinstance(prompt, str) and not isinstance(prompt, list): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - if strength is not None: - if strength <= 0 or strength > 1: - raise ValueError(f"The value of strength should in (0.0, 1.0] but is {strength}") - - if height % 8 != 0 or width % 8 != 0: - raise ValueError( - f"`height` and `width` have to be divisible by 8 but are {height} and {width}." - ) - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - - - def prepare_latents_ddim_inverted(self, image, batch_size, num_images_per_prompt, - # dtype, device, - text_embeddings, - generator=None): - - # Not sure if image need to change device and type - # image = image.to(device=device, dtype=dtype) - - batch_size = batch_size * num_images_per_prompt - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - if isinstance(generator, list): - init_latents = [ - self.vae.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size) - ] - init_latents = torch.cat(init_latents, dim=0) - else: - init_latents = self.vae.encode(image).latent_dist.sample(generator) - init_latents = 0.18215 * init_latents - - if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] == 0: - # expand init_latents for batch_size - deprecation_message = ( - f"You have passed {batch_size} text prompts (`prompt`), but only {init_latents.shape[0]} initial" - " images (`image`). Initial images are now duplicating to match the number of text prompts. Note" - " that this behavior is deprecated and will be removed in a version 1.0.0. Please make sure to update" - " your script to pass as many initial images as text prompts to suppress this warning." - ) - deprecate("len(prompt) != len(image)", "1.0.0", deprecation_message, standard_warn=False) - additional_image_per_prompt = batch_size // init_latents.shape[0] - init_latents = torch.cat([init_latents] * additional_image_per_prompt, dim=0) - elif batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0: - raise ValueError( - f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts." 
- ) - else: - init_latents = torch.cat([init_latents], dim=0) - - # get latents - init_latents_bcfhw = rearrange(init_latents, "(b f) c h w -> b c f h w", b=batch_size) - ddim_latents_all_step = self.ddim_clean2noisy_loop(init_latents_bcfhw, text_embeddings) - return ddim_latents_all_step - - @torch.no_grad() - def ddim_clean2noisy_loop(self, latent, text_embeddings): - weight_dtype = latent.dtype - uncond_embeddings, cond_embeddings = text_embeddings.chunk(2) - all_latent = [latent] - latent = latent.clone().detach() - print(' Invert clean image to noise latents by DDIM and Unet') - for i in trange(len(self.scheduler.timesteps)): - t = self.scheduler.timesteps[len(self.scheduler.timesteps) - i - 1] - # noise_pred = self.get_noise_pred_single(latent, t, cond_embeddings) - noise_pred = self.unet(latent, t, encoder_hidden_states=cond_embeddings)["sample"] # [1, 4, 8, 64, 64] -> [1, 4, 8, 64, 64]) - latent = self.next_clean2noise_step(noise_pred, t, latent) - all_latent.append(latent.to(dtype=weight_dtype)) - - return all_latent - - def next_clean2noise_step(self, model_output: Union[torch.FloatTensor, np.ndarray], timestep: int, sample: Union[torch.FloatTensor, np.ndarray]): - """ - Assume the eta in DDIM=0 - """ - timestep, next_timestep = min(timestep - self.scheduler.config.num_train_timesteps // self.scheduler.num_inference_steps, 999), timestep - alpha_prod_t = self.scheduler.alphas_cumprod[timestep] if timestep >= 0 else self.scheduler.final_alpha_cumprod - alpha_prod_t_next = self.scheduler.alphas_cumprod[next_timestep] - beta_prod_t = 1 - alpha_prod_t - next_original_sample = (sample - beta_prod_t ** 0.5 * model_output) / alpha_prod_t ** 0.5 - next_sample_direction = (1 - alpha_prod_t_next) ** 0.5 * model_output - next_sample = alpha_prod_t_next ** 0.5 * next_original_sample + next_sample_direction - return next_sample - - def get_timesteps(self, num_inference_steps, strength, device): - # get the original timestep using init_timestep - init_timestep = min(int(num_inference_steps * strength), num_inference_steps) - - t_start = max(num_inference_steps - init_timestep, 0) - timesteps = self.scheduler.timesteps[t_start:] - - return timesteps, num_inference_steps - t_start - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - image: Union[torch.FloatTensor, PIL.Image.Image] = None, - height: Optional[int] = None, - width: Optional[int] = None, - strength: float = None, - num_inference_steps: int = 50, - clip_length: int = 8, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: Optional[int] = 1, - **args - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - image (`torch.FloatTensor` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, that will be used as the starting point for the - process. Only used in DDIM or strength<1.0 - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. 
-            width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
-                The width in pixels of the generated image.
-            strength (`float`, *optional*, defaults to 1.0):
-                Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
-                will be used as a starting point, adding more noise to it the larger the `strength`. The number of
-                denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
-                be maximum and the denoising process will run for the full number of iterations specified in
-                `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
-            num_inference_steps (`int`, *optional*, defaults to 50):
-                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
-                expense of slower inference.
-            guidance_scale (`float`, *optional*, defaults to 7.5):
-                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
-                `guidance_scale` is defined as `w` of equation 2 of the [Imagen
-                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages the model to generate images that are closely linked to the
-                text `prompt`, usually at the expense of lower image quality.
-            negative_prompt (`str` or `List[str]`, *optional*):
-                The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
-                if `guidance_scale` is less than `1`).
-            num_images_per_prompt (`int`, *optional*, defaults to 1):
-                The number of images to generate per prompt.
-            eta (`float`, *optional*, defaults to 0.0):
-                Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
-                [`schedulers.DDIMScheduler`]; it is ignored for other schedulers.
-            generator (`torch.Generator`, *optional*):
-                One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
-                to make generation deterministic.
-            latents (`torch.FloatTensor`, *optional*):
-                Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
-                generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`.
-            output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
-                [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
-            return_dict (`bool`, *optional*, defaults to `True`):
-                Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
-                plain tuple.
-            callback (`Callable`, *optional*):
-                A function that will be called every `callback_steps` steps during inference. The function will be
-                called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
-            callback_steps (`int`, *optional*, defaults to 1):
-                The frequency at which the `callback` function will be called. If not specified, the callback will be
-                called at every step.
-
-        Returns:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs(prompt, height, width, callback_steps, strength) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_embeddings = self._encode_prompt( - prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - # if strength <1.0: - # timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device) - timesteps = self.scheduler.timesteps - latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt) - if latents is None: - ddim_latents_all_step = self.prepare_latents_ddim_inverted( - image, batch_size, num_images_per_prompt, - # text_embeddings.dtype, device, - text_embeddings, - generator, - ) - latents = ddim_latents_all_step[-1] - - latents_dtype = latents.dtype - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(tqdm(timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, t, encoder_hidden_states=text_embeddings - ).sample.to(dtype=latents_dtype) - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * ( - noise_pred_text - noise_pred_uncond - ) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ( - (i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0 - ): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. Run safety checker - has_nsfw_concept = None - - # 10. 
Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - torch.cuda.empty_cache() - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/context.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/context.py deleted file mode 100644 index 7984fbeebbe84f3b1d0b99ecb4f160a6affc423a..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/context.py +++ /dev/null @@ -1,72 +0,0 @@ -import logging -import re -from datetime import datetime -from typing import Optional, Dict, Union, Any - -import pytz - -logger = logging.getLogger(__name__) - -_empty_map = {} - - -# pylint: disable=too-many-instance-attributes -class BaseQueryContext: - local_tz: pytz.timezone - - def __init__(self, - settings: Optional[Dict[str, Any]] = None, - query_formats: Optional[Dict[str, str]] = None, - column_formats: Optional[Dict[str, Union[str, Dict[str, str]]]] = None, - encoding: Optional[str] = None, - use_extended_dtypes: bool = False, - use_numpy: bool = False): - self.settings = settings or {} - if query_formats is None: - self.type_formats = _empty_map - else: - self.type_formats = {re.compile(type_name.replace('*', '.*'), re.IGNORECASE): fmt - for type_name, fmt in query_formats.items()} - if column_formats is None: - self.col_simple_formats = _empty_map - self.col_type_formats = _empty_map - else: - self.col_simple_formats = {col_name: fmt for col_name, fmt in column_formats.items() if - isinstance(fmt, str)} - self.col_type_formats = {} - for col_name, fmt in column_formats.items(): - if not isinstance(fmt, str): - self.col_type_formats[col_name] = {re.compile(type_name.replace('*', '.*'), re.IGNORECASE): fmt - for type_name, fmt in fmt.items()} - self.query_formats = query_formats or {} - self.column_formats = column_formats or {} - self.encoding = encoding - self.use_numpy = use_numpy - self.use_extended_dtypes = use_extended_dtypes - self._active_col_fmt = None - self._active_col_type_fmts = _empty_map - - def start_column(self, name: str): - self._active_col_fmt = self.col_simple_formats.get(name) - self._active_col_type_fmts = self.col_type_formats.get(name, _empty_map) - - def active_fmt(self, ch_type): - if self._active_col_fmt: - return self._active_col_fmt - for type_pattern, fmt in self._active_col_type_fmts.items(): - if type_pattern.match(ch_type): - return fmt - for type_pattern, fmt in self.type_formats.items(): - if type_pattern.match(ch_type): - return fmt - return None - - -def _init_context_cls(): - local_tz = datetime.now().astimezone().tzinfo - if local_tz.tzname(datetime.now()) in ('UTC', 'GMT', 'Universal', 'GMT-0', 'Zulu', 'Greenwich'): - local_tz = pytz.UTC - BaseQueryContext.local_tz = local_tz - - -_init_context_cls() diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/contrib/ondisk.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/contrib/ondisk.py deleted file mode 100644 index 26a95f44f5b36e3400e486c8d82cd0759b18d8ae..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/contrib/ondisk.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import List -import faiss -import logging - -LOG = logging.getLogger(__name__) - - -def merge_ondisk( - trained_index: faiss.Index, shard_fnames: List[str], ivfdata_fname: str -) -> None: - """Add the contents of the indexes stored in shard_fnames into the index - trained_index. The on-disk data is stored in ivfdata_fname""" - assert not isinstance( - trained_index, faiss.IndexIVFPQR - ), "IndexIVFPQR is not supported as an on disk index." - # merge the images into an on-disk index - # first load the inverted lists - ivfs = [] - for fname in shard_fnames: - # the IO_FLAG_MMAP is to avoid actually loading the data thus - # the total size of the inverted lists can exceed the - # available RAM - LOG.info("read " + fname) - index = faiss.read_index(fname, faiss.IO_FLAG_MMAP) - index_ivf = faiss.extract_index_ivf(index) - ivfs.append(index_ivf.invlists) - - # avoid that the invlists get deallocated with the index - index_ivf.own_invlists = False - - # construct the output index - index = trained_index - index_ivf = faiss.extract_index_ivf(index) - - assert index.ntotal == 0, "works only on empty index" - - # prepare the output inverted lists. They will be written - # to merged_index.ivfdata - invlists = faiss.OnDiskInvertedLists( - index_ivf.nlist, index_ivf.code_size, ivfdata_fname - ) - - # merge all the inverted lists - ivf_vector = faiss.InvertedListsPtrVector() - for ivf in ivfs: - ivf_vector.push_back(ivf) - - LOG.info("merge %d inverted lists " % ivf_vector.size()) - ntotal = invlists.merge_from(ivf_vector.data(), ivf_vector.size()) - - # now replace the inverted lists in the output index - index.ntotal = index_ivf.ntotal = ntotal - index_ivf.replace_invlists(invlists, True) - invlists.this.disown() diff --git a/spaces/cihyFjudo/fairness-paper-search/Bol Bachchan Dual Audio Eng Hindi 720p REPACK Download In Kickass Torrent Geerorlo.md b/spaces/cihyFjudo/fairness-paper-search/Bol Bachchan Dual Audio Eng Hindi 720p REPACK Download In Kickass Torrent Geerorlo.md deleted file mode 100644 index 650480d7452aed6afe67282efdcc95c7efe80971..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Bol Bachchan Dual Audio Eng Hindi 720p REPACK Download In Kickass Torrent Geerorlo.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Bol Bachchan Dual Audio Eng Hindi 720p Download In Kickass Torrent geerorlo


    Download File ☆☆☆ https://tinurli.com/2uwkK1



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/CZECH HUNTER 60.md b/spaces/cihyFjudo/fairness-paper-search/CZECH HUNTER 60.md deleted file mode 100644 index e622813d8c47f4d31eb91a96d67e242ad70674a0..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/CZECH HUNTER 60.md +++ /dev/null @@ -1,6 +0,0 @@ -

    CZECH HUNTER 60


    Download Ziphttps://tinurli.com/2uwiKI



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/How to Find and Download Korean Movies With Tagalog Dubbed In Torrent for Free.md b/spaces/cihyFjudo/fairness-paper-search/How to Find and Download Korean Movies With Tagalog Dubbed In Torrent for Free.md deleted file mode 100644 index 39f081741f18fdc54832764e9abdc70af273af32..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/How to Find and Download Korean Movies With Tagalog Dubbed In Torrent for Free.md +++ /dev/null @@ -1,5 +0,0 @@ -
    -

    Although YouTube is the largest video sharing platform in the world, you might be surprised by why YouTube is on the list. In fact, YouTube also offers tons of free Pinoy movies for online watching, especially channels like VIVA Films, ABS-CBN Star Cinema and Regal Entertainment, Inc. However, if you want to download Pinoy movies, you have to subscribe to a premium membership. Fortunately, YouTube offers a 4-month free trial period, during which all downloads will be free. Also, you can refer to the next section to download YouTube videos without subscription.

    -

    Korean Movies With Tagalog Dubbed Download In Torrent


    Download Zip - https://tinurli.com/2uwhF7



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Oxycube For Nokia S40 A Must-Have Software for Nokia Users Who Want to Customize Their Phones Look and Feel.md b/spaces/cihyFjudo/fairness-paper-search/Oxycube For Nokia S40 A Must-Have Software for Nokia Users Who Want to Customize Their Phones Look and Feel.md deleted file mode 100644 index 2be6c1556d939ee8a50c998cb53aea0f0361d77d..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Oxycube For Nokia S40 A Must-Have Software for Nokia Users Who Want to Customize Their Phones Look and Feel.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Oxycube For Nokia S40


    Download File ->>> https://tinurli.com/2uwhBE



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Sex POV Daddy Blowjobs... [PORTABLE].md b/spaces/cihyFjudo/fairness-paper-search/Sex POV Daddy Blowjobs... [PORTABLE].md deleted file mode 100644 index b7b1c8c676781a90c61123dea9e92e446ea98d08..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Sex POV Daddy Blowjobs... [PORTABLE].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Sex POV daddy blowjobs...


    Download –––––>>> https://tinurli.com/2uwhKc



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Solucionario Principios De Operaciones Unitarias Alan Foust 110.md b/spaces/cihyFjudo/fairness-paper-search/Solucionario Principios De Operaciones Unitarias Alan Foust 110.md deleted file mode 100644 index 613d7e119d43666ca99fe174c9afc8c2e3bf660c..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Solucionario Principios De Operaciones Unitarias Alan Foust 110.md +++ /dev/null @@ -1,10 +0,0 @@ -
    -

    Libro Foust Operaciones Unitarias Pdf - astiane.comPrincipios de Operaciones Unitarias Alan S.. Foust, Leonard A.. Wenzel, .. Descargar Libro y Solucionario de Principios de Operaciones Unitarias 2da Edicion .astiane.com/view/libro-foust-operaciones-unitarias-pdfHealth & BeautyDescargar Gratis en PDF Libro y Solucionario de Transferencia de Calor y Masa .. Principios de Operaciones Unitarias Alan S. -de-operaciones-unitarias...Principios de Operaciones Unitarias 2da Edicion Alan s.Principios de Operaciones Unitarias 2da Edicion Alan s.. Foust - Ebook download as PDF File (.pdf), Text File (.txt) or read book online. -de...Solucionario Principios De Operaciones Unitarias Alan .Subject: Solucionario Principios De Operaciones .. Solucionario Principios De Operaciones .. Solucionario Principios De Operaciones Unitarias Alan .www.voy.com/8547/54.htmlsolucionario geankoplis procesos de transporte y .Libro Anterior Procesos de Transporte y Operaciones .. Ese no es el solucionario de .. de la zona pdf alan foust principios operaciones unitarias pdf libros .millingequipment.co/Feb-06/solucionario-geankoplis-procesos-de...Principios de Operaciones Unitarias Alan Foust, Leonard .Principios de Operaciones Unitarias Alan Foust, .. 6 Operaciones de Multietapas a .. Solucionario 6ta .wwwblogfippuni.blogspot.com/2011/09/principios-de-operaciones...Libros de Alan S.El tratamiento que se les da en este libro a las operaciones unitarias, enfatiza los principios cientficos sobre los que -s-foust-2EL SOLUCIONARIO - Principios de Operaciones Unitarias .Principios de Operaciones Unitarias - Alan S. de Operaciones Unitarias 2da Edicion Alan s .Descargar Libro de Principios de Operaciones Unitarias 2da Edicion Alan s.. Foust, Leonard A.. Wenzel, Curtis w.librosysolucionarios.net/principios-de-operaciones-unitarias-2da...Operaciones Unitarias en Ingenieria Quimica Mccabe 6 Ed .Operaciones Unitarias en Ingenieria Quimica .. Operaciones Unitarias en Ingenieria Quimica Mccabe 6 .. Solucionario Procesos de Transporte y Principios de . -Unitarias-en... 7286bcadf1

    -

    solucionario principios de operaciones unitarias alan foust.zip · Webroot SecureAnywhere Antivirus 2020 Crack Serial Key · Portable Microsoft .... Logix Pro 500 Plc Simulator Crack >> Plc Simulator Logixpro Allen Bradley (I, Logix ... solucionario principios de operaciones unitarias alan foust.zip.

    -

    Solucionario Principios De Operaciones Unitarias Alan Foust | 110


    Download File === https://tinurli.com/2uwkHW



    -

    HPLA PUEDEN PORFAVOR SUBIR EL SOLUCIONARIO DE PRINCIPIOS DE OPERACINES UNITARIAS DE ALAN FOUST.. New York .... Solucionario Operaciones Unitarias Alan Foust 1 Favor si alguien tiene el solucionarios,.. 9 Nov 2012 . Principios de Operaciones Unitarias Alan S. Foust, .... cogniview pdf2xl enterprise crack keygen torrent car insurance forums solucionario principios de operaciones unitarias alan foust.zip. 34b41eb7bc. 4 / 5 .... wilcom 2006 sp4 r2 crack.29 · solucionario principios de operaciones unitarias alan foust.zip · squishing nemo mishka. Tags: lego island 2 cd ... HD Online Player (Don 2 Eng Sub 720p Movies)

    -

The treatment given to the unit operations in this book emphasizes the scientific principles on which the operations are based, and groups together those that share similar physical foundations so they can be analyzed jointly.

    -

To keep the presentation clear at an elementary level, the refinement of the physical models and the elaborate mathematical expressions needed for a rigorous treatment of complex situations are usually omitted; and, in order to emphasize the similarities among the various unit operations, the descriptions of equipment and the specialized calculation methods are presented in condensed form.

    -

    Comedy full movie on .... Charlie (2015) ... Solucionario Principios De Operaciones Unitarias Alan Foust. Exceptional Over Ella Descubre Prov.. Alguien tiene el solucionario de principio de operaciones unitarias del foust ... Exceptional Over Ella Descubre Prov ->>->>->> DOWNLOAD Principios de ... MacOSXLionServer1071dmg

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Tum Se Achcha Kaun Hai HD A Must-Watch Movie for Bollywood Lovers.md b/spaces/cihyFjudo/fairness-paper-search/Tum Se Achcha Kaun Hai HD A Must-Watch Movie for Bollywood Lovers.md deleted file mode 100644 index c22b45f2303d09a0b6c304ec1bfb2baeecccda80..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Tum Se Achcha Kaun Hai HD A Must-Watch Movie for Bollywood Lovers.md +++ /dev/null @@ -1,6 +0,0 @@ - -

    Adipurush full movie download is available in Hindi on Filmyhit, Moviesflix, Filmywap and Mp4moviez in Hindi dubbed. Adipurush Movie Download Hindi Filmyzilla, Adipurush Full Movie Download, Adipurush Movie Download (2022) 480p 720p 1080p,

    -

    Adipurush full movie in Hindi free download on Pagalmovies & Pagalworld in 1080p. PagalMovies & Pagalworld may be a piracy website to download Movies HD, Hindi Movies, and PagalMovies Telugu Tamil online lawlessly at no cost to its users. PagalMovies website permits its users to observe and download movies from its PagalMovies com, Pagalworld website for free.

    -

    movies hd 1080p full Tum Se Achcha Kaun Hai hd


    Download Zip ✓✓✓ https://tinurli.com/2uwiSE



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/clarin-pl/datasets-explorer/clarin_datasets/utils.py b/spaces/clarin-pl/datasets-explorer/clarin_datasets/utils.py deleted file mode 100644 index f9ca52eee5e86200adfee7c1777569d026b87bc0..0000000000000000000000000000000000000000 --- a/spaces/clarin-pl/datasets-explorer/clarin_datasets/utils.py +++ /dev/null @@ -1,70 +0,0 @@ -import re -from typing import List - -from embeddings.embedding.auto_flair import AutoFlairDocumentEmbedding -from flair.data import Sentence -from numpy import typing as nt -from unidecode import unidecode - -embedding = AutoFlairDocumentEmbedding.from_hub("clarin-pl/word2vec-kgr10") - -PLOT_COLOR_PALETTE = [ - "#FAEBD7", - "#00FFFF", - "#7FFFD4", - "#000000", - "#0000FF", - "#8A2BE2", - "#A52A2A", - "#DEB887", - "#5F9EA0", - "#7FFF00", - "#D2691E", - "#FF7F50", - "#6495ED", - "#FFF8DC", - "#DC143C", - "#00FFFF", - "#00008B", - "#008B8B", - "#B8860B", - "#A9A9A9", - "#006400", - "#BDB76B", - "#8B008B", - "#556B2F", - "#FF8C00", - "#9932CC", - "#8B0000", - "#E9967A", - "#8FBC8F", - "#2F4F4F", - "#00CED1", - "#FFD700", - "#DAA520", - "#808080", - "#FF69B4", - "#4B0082", - "#CD5C5C", - "#7CFC00", - "#F08080", - "#66CDAA", -] - - -def flatten_list(main_list: List[List]) -> List: - return [item for sublist in main_list for item in sublist] - - -def count_num_of_characters(text: str) -> int: - return len(re.sub(r"[^a-zA-Z]", "", unidecode(text))) - - -def count_num_of_words(text: str) -> int: - return len(re.sub(r"[^a-zA-Z ]", "", unidecode(text)).split(" ")) - - -def embed_sentence(sentence: str) -> nt.NDArray: - sentence = Sentence(sentence) - embedding.embed([sentence]) - return sentence.embedding.numpy() diff --git a/spaces/cncanon/chud/README.md b/spaces/cncanon/chud/README.md deleted file mode 100644 index 1bfe9faab27353cc39c4f9a7fca3b48f53b04c2c..0000000000000000000000000000000000000000 --- a/spaces/cncanon/chud/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Chud -emoji: 🦀 -colorFrom: green -colorTo: pink -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/codeparrot/apps_metric/tests.py b/spaces/codeparrot/apps_metric/tests.py deleted file mode 100644 index 8ee99c4c8a7900319c32594b7113e4c411852605..0000000000000000000000000000000000000000 --- a/spaces/codeparrot/apps_metric/tests.py +++ /dev/null @@ -1,14 +0,0 @@ -import json -from evaluate import load - -solution_sample1 = json.load(open("test_examples/solutions_problem_1.json", "r")) -solution_sample2 = json.load(open("test_examples/solutions_problem_2.json", "r")) -single_solutions = [solution_sample1[:1], solution_sample2[:1]] -multiple_solutions = [solution_sample1[:3], solution_sample2[:3]] - -metric = load("codeparrot/apps_metric") -result_1 = metric.compute(predictions=single_solutions, level="all") -result_2 = metric.compute(predictions=multiple_solutions, level="all", k_list=[1, 2, 3]) - -assert result_1 == {'avg_accuracy': 1.0, 'strict_accuracy': 1.0, 'pass_at_k': None} -assert result_2 == {'avg_accuracy': None, 'strict_accuracy': None, 'pass_at_k': {'pass@1': 1.0, 'pass@2': 1.0, 'pass@3': 1.0}} \ No newline at end of file diff --git a/spaces/colakin/video-generater/public/ffmpeg/fftools/ffprobe.c b/spaces/colakin/video-generater/public/ffmpeg/fftools/ffprobe.c deleted file mode 100644 index 6e72c37721be5aa1a908962379457e560c522bea..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/fftools/ffprobe.c +++ 
/dev/null @@ -1,4232 +0,0 @@ -/* - * Copyright (c) 2007-2010 Stefano Sabatini - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * simple media prober based on the FFmpeg libraries - */ - -#include "config.h" -#include "libavutil/ffversion.h" - -#include -#include - -#include "libavformat/avformat.h" -#include "libavformat/version.h" -#include "libavcodec/avcodec.h" -#include "libavcodec/version.h" -#include "libavutil/ambient_viewing_environment.h" -#include "libavutil/avassert.h" -#include "libavutil/avstring.h" -#include "libavutil/bprint.h" -#include "libavutil/channel_layout.h" -#include "libavutil/display.h" -#include "libavutil/hash.h" -#include "libavutil/hdr_dynamic_metadata.h" -#include "libavutil/mastering_display_metadata.h" -#include "libavutil/hdr_dynamic_vivid_metadata.h" -#include "libavutil/dovi_meta.h" -#include "libavutil/opt.h" -#include "libavutil/pixdesc.h" -#include "libavutil/spherical.h" -#include "libavutil/stereo3d.h" -#include "libavutil/dict.h" -#include "libavutil/intreadwrite.h" -#include "libavutil/libm.h" -#include "libavutil/parseutils.h" -#include "libavutil/timecode.h" -#include "libavutil/timestamp.h" -#include "libavdevice/avdevice.h" -#include "libavdevice/version.h" -#include "libswscale/swscale.h" -#include "libswscale/version.h" -#include "libswresample/swresample.h" -#include "libswresample/version.h" -#include "libpostproc/postprocess.h" -#include "libpostproc/version.h" -#include "libavfilter/version.h" -#include "cmdutils.h" -#include "opt_common.h" - -#include "libavutil/thread.h" - -#if !HAVE_THREADS -# ifdef pthread_mutex_lock -# undef pthread_mutex_lock -# endif -# define pthread_mutex_lock(a) do{}while(0) -# ifdef pthread_mutex_unlock -# undef pthread_mutex_unlock -# endif -# define pthread_mutex_unlock(a) do{}while(0) -#endif - -// attached as opaque_ref to packets/frames -typedef struct FrameData { - int64_t pkt_pos; - int pkt_size; -} FrameData; - -typedef struct InputStream { - AVStream *st; - - AVCodecContext *dec_ctx; -} InputStream; - -typedef struct InputFile { - AVFormatContext *fmt_ctx; - - InputStream *streams; - int nb_streams; -} InputFile; - -const char program_name[] = "ffprobe"; -const int program_birth_year = 2007; - -static int do_bitexact = 0; -static int do_count_frames = 0; -static int do_count_packets = 0; -static int do_read_frames = 0; -static int do_read_packets = 0; -static int do_show_chapters = 0; -static int do_show_error = 0; -static int do_show_format = 0; -static int do_show_frames = 0; -static int do_show_packets = 0; -static int do_show_programs = 0; -static int do_show_streams = 0; -static int do_show_stream_disposition = 0; -static int do_show_data = 0; -static int do_show_program_version = 0; -static int do_show_library_versions = 0; -static int do_show_pixel_formats = 
0; -static int do_show_pixel_format_flags = 0; -static int do_show_pixel_format_components = 0; -static int do_show_log = 0; - -static int do_show_chapter_tags = 0; -static int do_show_format_tags = 0; -static int do_show_frame_tags = 0; -static int do_show_program_tags = 0; -static int do_show_stream_tags = 0; -static int do_show_packet_tags = 0; - -static int show_value_unit = 0; -static int use_value_prefix = 0; -static int use_byte_value_binary_prefix = 0; -static int use_value_sexagesimal_format = 0; -static int show_private_data = 1; - -#define SHOW_OPTIONAL_FIELDS_AUTO -1 -#define SHOW_OPTIONAL_FIELDS_NEVER 0 -#define SHOW_OPTIONAL_FIELDS_ALWAYS 1 -static int show_optional_fields = SHOW_OPTIONAL_FIELDS_AUTO; - -static char *print_format; -static char *stream_specifier; -static char *show_data_hash; - -typedef struct ReadInterval { - int id; ///< identifier - int64_t start, end; ///< start, end in second/AV_TIME_BASE units - int has_start, has_end; - int start_is_offset, end_is_offset; - int duration_frames; -} ReadInterval; - -static ReadInterval *read_intervals; -static int read_intervals_nb = 0; - -static int find_stream_info = 1; - -/* section structure definition */ - -#define SECTION_MAX_NB_CHILDREN 10 - -struct section { - int id; ///< unique id identifying a section - const char *name; - -#define SECTION_FLAG_IS_WRAPPER 1 ///< the section only contains other sections, but has no data at its own level -#define SECTION_FLAG_IS_ARRAY 2 ///< the section contains an array of elements of the same type -#define SECTION_FLAG_HAS_VARIABLE_FIELDS 4 ///< the section may contain a variable number of fields with variable keys. - /// For these sections the element_name field is mandatory. - int flags; - int children_ids[SECTION_MAX_NB_CHILDREN+1]; ///< list of children section IDS, terminated by -1 - const char *element_name; ///< name of the contained element, if provided - const char *unique_name; ///< unique section name, in case the name is ambiguous - AVDictionary *entries_to_show; - int show_all_entries; -}; - -typedef enum { - SECTION_ID_NONE = -1, - SECTION_ID_CHAPTER, - SECTION_ID_CHAPTER_TAGS, - SECTION_ID_CHAPTERS, - SECTION_ID_ERROR, - SECTION_ID_FORMAT, - SECTION_ID_FORMAT_TAGS, - SECTION_ID_FRAME, - SECTION_ID_FRAMES, - SECTION_ID_FRAME_TAGS, - SECTION_ID_FRAME_SIDE_DATA_LIST, - SECTION_ID_FRAME_SIDE_DATA, - SECTION_ID_FRAME_SIDE_DATA_TIMECODE_LIST, - SECTION_ID_FRAME_SIDE_DATA_TIMECODE, - SECTION_ID_FRAME_SIDE_DATA_COMPONENT_LIST, - SECTION_ID_FRAME_SIDE_DATA_COMPONENT, - SECTION_ID_FRAME_SIDE_DATA_PIECE_LIST, - SECTION_ID_FRAME_SIDE_DATA_PIECE, - SECTION_ID_FRAME_LOG, - SECTION_ID_FRAME_LOGS, - SECTION_ID_LIBRARY_VERSION, - SECTION_ID_LIBRARY_VERSIONS, - SECTION_ID_PACKET, - SECTION_ID_PACKET_TAGS, - SECTION_ID_PACKETS, - SECTION_ID_PACKETS_AND_FRAMES, - SECTION_ID_PACKET_SIDE_DATA_LIST, - SECTION_ID_PACKET_SIDE_DATA, - SECTION_ID_PIXEL_FORMAT, - SECTION_ID_PIXEL_FORMAT_FLAGS, - SECTION_ID_PIXEL_FORMAT_COMPONENT, - SECTION_ID_PIXEL_FORMAT_COMPONENTS, - SECTION_ID_PIXEL_FORMATS, - SECTION_ID_PROGRAM_STREAM_DISPOSITION, - SECTION_ID_PROGRAM_STREAM_TAGS, - SECTION_ID_PROGRAM, - SECTION_ID_PROGRAM_STREAMS, - SECTION_ID_PROGRAM_STREAM, - SECTION_ID_PROGRAM_TAGS, - SECTION_ID_PROGRAM_VERSION, - SECTION_ID_PROGRAMS, - SECTION_ID_ROOT, - SECTION_ID_STREAM, - SECTION_ID_STREAM_DISPOSITION, - SECTION_ID_STREAMS, - SECTION_ID_STREAM_TAGS, - SECTION_ID_STREAM_SIDE_DATA_LIST, - SECTION_ID_STREAM_SIDE_DATA, - SECTION_ID_SUBTITLE, -} SectionID; - -static struct section sections[] = { - 
[SECTION_ID_CHAPTERS] = { SECTION_ID_CHAPTERS, "chapters", SECTION_FLAG_IS_ARRAY, { SECTION_ID_CHAPTER, -1 } }, - [SECTION_ID_CHAPTER] = { SECTION_ID_CHAPTER, "chapter", 0, { SECTION_ID_CHAPTER_TAGS, -1 } }, - [SECTION_ID_CHAPTER_TAGS] = { SECTION_ID_CHAPTER_TAGS, "tags", SECTION_FLAG_HAS_VARIABLE_FIELDS, { -1 }, .element_name = "tag", .unique_name = "chapter_tags" }, - [SECTION_ID_ERROR] = { SECTION_ID_ERROR, "error", 0, { -1 } }, - [SECTION_ID_FORMAT] = { SECTION_ID_FORMAT, "format", 0, { SECTION_ID_FORMAT_TAGS, -1 } }, - [SECTION_ID_FORMAT_TAGS] = { SECTION_ID_FORMAT_TAGS, "tags", SECTION_FLAG_HAS_VARIABLE_FIELDS, { -1 }, .element_name = "tag", .unique_name = "format_tags" }, - [SECTION_ID_FRAMES] = { SECTION_ID_FRAMES, "frames", SECTION_FLAG_IS_ARRAY, { SECTION_ID_FRAME, SECTION_ID_SUBTITLE, -1 } }, - [SECTION_ID_FRAME] = { SECTION_ID_FRAME, "frame", 0, { SECTION_ID_FRAME_TAGS, SECTION_ID_FRAME_SIDE_DATA_LIST, SECTION_ID_FRAME_LOGS, -1 } }, - [SECTION_ID_FRAME_TAGS] = { SECTION_ID_FRAME_TAGS, "tags", SECTION_FLAG_HAS_VARIABLE_FIELDS, { -1 }, .element_name = "tag", .unique_name = "frame_tags" }, - [SECTION_ID_FRAME_SIDE_DATA_LIST] ={ SECTION_ID_FRAME_SIDE_DATA_LIST, "side_data_list", SECTION_FLAG_IS_ARRAY, { SECTION_ID_FRAME_SIDE_DATA, -1 }, .element_name = "side_data", .unique_name = "frame_side_data_list" }, - [SECTION_ID_FRAME_SIDE_DATA] = { SECTION_ID_FRAME_SIDE_DATA, "side_data", 0, { SECTION_ID_FRAME_SIDE_DATA_TIMECODE_LIST, SECTION_ID_FRAME_SIDE_DATA_COMPONENT_LIST, -1 }, .unique_name = "frame_side_data" }, - [SECTION_ID_FRAME_SIDE_DATA_TIMECODE_LIST] = { SECTION_ID_FRAME_SIDE_DATA_TIMECODE_LIST, "timecodes", SECTION_FLAG_IS_ARRAY, { SECTION_ID_FRAME_SIDE_DATA_TIMECODE, -1 } }, - [SECTION_ID_FRAME_SIDE_DATA_TIMECODE] = { SECTION_ID_FRAME_SIDE_DATA_TIMECODE, "timecode", 0, { -1 } }, - [SECTION_ID_FRAME_SIDE_DATA_COMPONENT_LIST] = { SECTION_ID_FRAME_SIDE_DATA_COMPONENT_LIST, "components", SECTION_FLAG_IS_ARRAY, { SECTION_ID_FRAME_SIDE_DATA_COMPONENT, -1 } }, - [SECTION_ID_FRAME_SIDE_DATA_COMPONENT] = { SECTION_ID_FRAME_SIDE_DATA_COMPONENT, "component", 0, { SECTION_ID_FRAME_SIDE_DATA_PIECE_LIST, -1 } }, - [SECTION_ID_FRAME_SIDE_DATA_PIECE_LIST] = { SECTION_ID_FRAME_SIDE_DATA_PIECE_LIST, "pieces", SECTION_FLAG_IS_ARRAY, { SECTION_ID_FRAME_SIDE_DATA_PIECE, -1 } }, - [SECTION_ID_FRAME_SIDE_DATA_PIECE] = { SECTION_ID_FRAME_SIDE_DATA_PIECE, "section", 0, { -1 } }, - [SECTION_ID_FRAME_LOGS] = { SECTION_ID_FRAME_LOGS, "logs", SECTION_FLAG_IS_ARRAY, { SECTION_ID_FRAME_LOG, -1 } }, - [SECTION_ID_FRAME_LOG] = { SECTION_ID_FRAME_LOG, "log", 0, { -1 }, }, - [SECTION_ID_LIBRARY_VERSIONS] = { SECTION_ID_LIBRARY_VERSIONS, "library_versions", SECTION_FLAG_IS_ARRAY, { SECTION_ID_LIBRARY_VERSION, -1 } }, - [SECTION_ID_LIBRARY_VERSION] = { SECTION_ID_LIBRARY_VERSION, "library_version", 0, { -1 } }, - [SECTION_ID_PACKETS] = { SECTION_ID_PACKETS, "packets", SECTION_FLAG_IS_ARRAY, { SECTION_ID_PACKET, -1} }, - [SECTION_ID_PACKETS_AND_FRAMES] = { SECTION_ID_PACKETS_AND_FRAMES, "packets_and_frames", SECTION_FLAG_IS_ARRAY, { SECTION_ID_PACKET, -1} }, - [SECTION_ID_PACKET] = { SECTION_ID_PACKET, "packet", 0, { SECTION_ID_PACKET_TAGS, SECTION_ID_PACKET_SIDE_DATA_LIST, -1 } }, - [SECTION_ID_PACKET_TAGS] = { SECTION_ID_PACKET_TAGS, "tags", SECTION_FLAG_HAS_VARIABLE_FIELDS, { -1 }, .element_name = "tag", .unique_name = "packet_tags" }, - [SECTION_ID_PACKET_SIDE_DATA_LIST] ={ SECTION_ID_PACKET_SIDE_DATA_LIST, "side_data_list", SECTION_FLAG_IS_ARRAY, { SECTION_ID_PACKET_SIDE_DATA, -1 }, .element_name = 
"side_data", .unique_name = "packet_side_data_list" }, - [SECTION_ID_PACKET_SIDE_DATA] = { SECTION_ID_PACKET_SIDE_DATA, "side_data", 0, { -1 }, .unique_name = "packet_side_data" }, - [SECTION_ID_PIXEL_FORMATS] = { SECTION_ID_PIXEL_FORMATS, "pixel_formats", SECTION_FLAG_IS_ARRAY, { SECTION_ID_PIXEL_FORMAT, -1 } }, - [SECTION_ID_PIXEL_FORMAT] = { SECTION_ID_PIXEL_FORMAT, "pixel_format", 0, { SECTION_ID_PIXEL_FORMAT_FLAGS, SECTION_ID_PIXEL_FORMAT_COMPONENTS, -1 } }, - [SECTION_ID_PIXEL_FORMAT_FLAGS] = { SECTION_ID_PIXEL_FORMAT_FLAGS, "flags", 0, { -1 }, .unique_name = "pixel_format_flags" }, - [SECTION_ID_PIXEL_FORMAT_COMPONENTS] = { SECTION_ID_PIXEL_FORMAT_COMPONENTS, "components", SECTION_FLAG_IS_ARRAY, {SECTION_ID_PIXEL_FORMAT_COMPONENT, -1 }, .unique_name = "pixel_format_components" }, - [SECTION_ID_PIXEL_FORMAT_COMPONENT] = { SECTION_ID_PIXEL_FORMAT_COMPONENT, "component", 0, { -1 } }, - [SECTION_ID_PROGRAM_STREAM_DISPOSITION] = { SECTION_ID_PROGRAM_STREAM_DISPOSITION, "disposition", 0, { -1 }, .unique_name = "program_stream_disposition" }, - [SECTION_ID_PROGRAM_STREAM_TAGS] = { SECTION_ID_PROGRAM_STREAM_TAGS, "tags", SECTION_FLAG_HAS_VARIABLE_FIELDS, { -1 }, .element_name = "tag", .unique_name = "program_stream_tags" }, - [SECTION_ID_PROGRAM] = { SECTION_ID_PROGRAM, "program", 0, { SECTION_ID_PROGRAM_TAGS, SECTION_ID_PROGRAM_STREAMS, -1 } }, - [SECTION_ID_PROGRAM_STREAMS] = { SECTION_ID_PROGRAM_STREAMS, "streams", SECTION_FLAG_IS_ARRAY, { SECTION_ID_PROGRAM_STREAM, -1 }, .unique_name = "program_streams" }, - [SECTION_ID_PROGRAM_STREAM] = { SECTION_ID_PROGRAM_STREAM, "stream", 0, { SECTION_ID_PROGRAM_STREAM_DISPOSITION, SECTION_ID_PROGRAM_STREAM_TAGS, -1 }, .unique_name = "program_stream" }, - [SECTION_ID_PROGRAM_TAGS] = { SECTION_ID_PROGRAM_TAGS, "tags", SECTION_FLAG_HAS_VARIABLE_FIELDS, { -1 }, .element_name = "tag", .unique_name = "program_tags" }, - [SECTION_ID_PROGRAM_VERSION] = { SECTION_ID_PROGRAM_VERSION, "program_version", 0, { -1 } }, - [SECTION_ID_PROGRAMS] = { SECTION_ID_PROGRAMS, "programs", SECTION_FLAG_IS_ARRAY, { SECTION_ID_PROGRAM, -1 } }, - [SECTION_ID_ROOT] = { SECTION_ID_ROOT, "root", SECTION_FLAG_IS_WRAPPER, - { SECTION_ID_CHAPTERS, SECTION_ID_FORMAT, SECTION_ID_FRAMES, SECTION_ID_PROGRAMS, SECTION_ID_STREAMS, - SECTION_ID_PACKETS, SECTION_ID_ERROR, SECTION_ID_PROGRAM_VERSION, SECTION_ID_LIBRARY_VERSIONS, - SECTION_ID_PIXEL_FORMATS, -1} }, - [SECTION_ID_STREAMS] = { SECTION_ID_STREAMS, "streams", SECTION_FLAG_IS_ARRAY, { SECTION_ID_STREAM, -1 } }, - [SECTION_ID_STREAM] = { SECTION_ID_STREAM, "stream", 0, { SECTION_ID_STREAM_DISPOSITION, SECTION_ID_STREAM_TAGS, SECTION_ID_STREAM_SIDE_DATA_LIST, -1 } }, - [SECTION_ID_STREAM_DISPOSITION] = { SECTION_ID_STREAM_DISPOSITION, "disposition", 0, { -1 }, .unique_name = "stream_disposition" }, - [SECTION_ID_STREAM_TAGS] = { SECTION_ID_STREAM_TAGS, "tags", SECTION_FLAG_HAS_VARIABLE_FIELDS, { -1 }, .element_name = "tag", .unique_name = "stream_tags" }, - [SECTION_ID_STREAM_SIDE_DATA_LIST] ={ SECTION_ID_STREAM_SIDE_DATA_LIST, "side_data_list", SECTION_FLAG_IS_ARRAY, { SECTION_ID_STREAM_SIDE_DATA, -1 }, .element_name = "side_data", .unique_name = "stream_side_data_list" }, - [SECTION_ID_STREAM_SIDE_DATA] = { SECTION_ID_STREAM_SIDE_DATA, "side_data", 0, { -1 }, .unique_name = "stream_side_data" }, - [SECTION_ID_SUBTITLE] = { SECTION_ID_SUBTITLE, "subtitle", 0, { -1 } }, -}; - -static const OptionDef *options; - -/* FFprobe context */ -static const char *input_filename; -static const char *print_input_filename; -static const 
AVInputFormat *iformat = NULL; -static const char *output_filename = NULL; - -static struct AVHashContext *hash; - -static const struct { - double bin_val; - double dec_val; - const char *bin_str; - const char *dec_str; -} si_prefixes[] = { - { 1.0, 1.0, "", "" }, - { 1.024e3, 1e3, "Ki", "K" }, - { 1.048576e6, 1e6, "Mi", "M" }, - { 1.073741824e9, 1e9, "Gi", "G" }, - { 1.099511627776e12, 1e12, "Ti", "T" }, - { 1.125899906842624e15, 1e15, "Pi", "P" }, -}; - -static const char unit_second_str[] = "s" ; -static const char unit_hertz_str[] = "Hz" ; -static const char unit_byte_str[] = "byte" ; -static const char unit_bit_per_second_str[] = "bit/s"; - -static int nb_streams; -static uint64_t *nb_streams_packets; -static uint64_t *nb_streams_frames; -static int *selected_streams; - -#if HAVE_THREADS -pthread_mutex_t log_mutex; -#endif -typedef struct LogBuffer { - char *context_name; - int log_level; - char *log_message; - AVClassCategory category; - char *parent_name; - AVClassCategory parent_category; -}LogBuffer; - -static LogBuffer *log_buffer; -static int log_buffer_size; - -static void log_callback(void *ptr, int level, const char *fmt, va_list vl) -{ - AVClass* avc = ptr ? *(AVClass **) ptr : NULL; - va_list vl2; - char line[1024]; - static int print_prefix = 1; - void *new_log_buffer; - - va_copy(vl2, vl); - av_log_default_callback(ptr, level, fmt, vl); - av_log_format_line(ptr, level, fmt, vl2, line, sizeof(line), &print_prefix); - va_end(vl2); - -#if HAVE_THREADS - pthread_mutex_lock(&log_mutex); - - new_log_buffer = av_realloc_array(log_buffer, log_buffer_size + 1, sizeof(*log_buffer)); - if (new_log_buffer) { - char *msg; - int i; - - log_buffer = new_log_buffer; - memset(&log_buffer[log_buffer_size], 0, sizeof(log_buffer[log_buffer_size])); - log_buffer[log_buffer_size].context_name= avc ? av_strdup(avc->item_name(ptr)) : NULL; - if (avc) { - if (avc->get_category) log_buffer[log_buffer_size].category = avc->get_category(ptr); - else log_buffer[log_buffer_size].category = avc->category; - } - log_buffer[log_buffer_size].log_level = level; - msg = log_buffer[log_buffer_size].log_message = av_strdup(line); - for (i=strlen(msg) - 1; i>=0 && msg[i] == '\n'; i--) { - msg[i] = 0; - } - if (avc && avc->parent_log_context_offset) { - AVClass** parent = *(AVClass ***) (((uint8_t *) ptr) + - avc->parent_log_context_offset); - if (parent && *parent) { - log_buffer[log_buffer_size].parent_name = av_strdup((*parent)->item_name(parent)); - log_buffer[log_buffer_size].parent_category = - (*parent)->get_category ? 
(*parent)->get_category(parent) :(*parent)->category; - } - } - log_buffer_size ++; - } - - pthread_mutex_unlock(&log_mutex); -#endif -} - -static void ffprobe_cleanup(int ret) -{ - int i; - for (i = 0; i < FF_ARRAY_ELEMS(sections); i++) - av_dict_free(&(sections[i].entries_to_show)); - -#if HAVE_THREADS - pthread_mutex_destroy(&log_mutex); -#endif -} - -struct unit_value { - union { double d; long long int i; } val; - const char *unit; -}; - -static char *value_string(char *buf, int buf_size, struct unit_value uv) -{ - double vald; - long long int vali; - int show_float = 0; - - if (uv.unit == unit_second_str) { - vald = uv.val.d; - show_float = 1; - } else { - vald = vali = uv.val.i; - } - - if (uv.unit == unit_second_str && use_value_sexagesimal_format) { - double secs; - int hours, mins; - secs = vald; - mins = (int)secs / 60; - secs = secs - mins * 60; - hours = mins / 60; - mins %= 60; - snprintf(buf, buf_size, "%d:%02d:%09.6f", hours, mins, secs); - } else { - const char *prefix_string = ""; - - if (use_value_prefix && vald > 1) { - long long int index; - - if (uv.unit == unit_byte_str && use_byte_value_binary_prefix) { - index = (long long int) (log2(vald)) / 10; - index = av_clip(index, 0, FF_ARRAY_ELEMS(si_prefixes) - 1); - vald /= si_prefixes[index].bin_val; - prefix_string = si_prefixes[index].bin_str; - } else { - index = (long long int) (log10(vald)) / 3; - index = av_clip(index, 0, FF_ARRAY_ELEMS(si_prefixes) - 1); - vald /= si_prefixes[index].dec_val; - prefix_string = si_prefixes[index].dec_str; - } - vali = vald; - } - - if (show_float || (use_value_prefix && vald != (long long int)vald)) - snprintf(buf, buf_size, "%f", vald); - else - snprintf(buf, buf_size, "%lld", vali); - av_strlcatf(buf, buf_size, "%s%s%s", *prefix_string || show_value_unit ? " " : "", - prefix_string, show_value_unit ? 
uv.unit : ""); - } - - return buf; -} - -/* WRITERS API */ - -typedef struct WriterContext WriterContext; - -#define WRITER_FLAG_DISPLAY_OPTIONAL_FIELDS 1 -#define WRITER_FLAG_PUT_PACKETS_AND_FRAMES_IN_SAME_CHAPTER 2 - -typedef enum { - WRITER_STRING_VALIDATION_FAIL, - WRITER_STRING_VALIDATION_REPLACE, - WRITER_STRING_VALIDATION_IGNORE, - WRITER_STRING_VALIDATION_NB -} StringValidation; - -typedef struct Writer { - const AVClass *priv_class; ///< private class of the writer, if any - int priv_size; ///< private size for the writer context - const char *name; - - int (*init) (WriterContext *wctx); - void (*uninit)(WriterContext *wctx); - - void (*print_section_header)(WriterContext *wctx); - void (*print_section_footer)(WriterContext *wctx); - void (*print_integer) (WriterContext *wctx, const char *, long long int); - void (*print_rational) (WriterContext *wctx, AVRational *q, char *sep); - void (*print_string) (WriterContext *wctx, const char *, const char *); - int flags; ///< a combination or WRITER_FLAG_* -} Writer; - -#define SECTION_MAX_NB_LEVELS 10 - -struct WriterContext { - const AVClass *class; ///< class of the writer - const Writer *writer; ///< the Writer of which this is an instance - AVIOContext *avio; ///< the I/O context used to write - - void (* writer_w8)(WriterContext *wctx, int b); - void (* writer_put_str)(WriterContext *wctx, const char *str); - void (* writer_printf)(WriterContext *wctx, const char *fmt, ...); - - char *name; ///< name of this writer instance - void *priv; ///< private data for use by the filter - - const struct section *sections; ///< array containing all sections - int nb_sections; ///< number of sections - - int level; ///< current level, starting from 0 - - /** number of the item printed in the given section, starting from 0 */ - unsigned int nb_item[SECTION_MAX_NB_LEVELS]; - - /** section per each level */ - const struct section *section[SECTION_MAX_NB_LEVELS]; - AVBPrint section_pbuf[SECTION_MAX_NB_LEVELS]; ///< generic print buffer dedicated to each section, - /// used by various writers - - unsigned int nb_section_packet; ///< number of the packet section in case we are in "packets_and_frames" section - unsigned int nb_section_frame; ///< number of the frame section in case we are in "packets_and_frames" section - unsigned int nb_section_packet_frame; ///< nb_section_packet or nb_section_frame according if is_packets_and_frames - - int string_validation; - char *string_validation_replacement; - unsigned int string_validation_utf8_flags; -}; - -static const char *writer_get_name(void *p) -{ - WriterContext *wctx = p; - return wctx->writer->name; -} - -#define OFFSET(x) offsetof(WriterContext, x) - -static const AVOption writer_options[] = { - { "string_validation", "set string validation mode", - OFFSET(string_validation), AV_OPT_TYPE_INT, {.i64=WRITER_STRING_VALIDATION_REPLACE}, 0, WRITER_STRING_VALIDATION_NB-1, .unit = "sv" }, - { "sv", "set string validation mode", - OFFSET(string_validation), AV_OPT_TYPE_INT, {.i64=WRITER_STRING_VALIDATION_REPLACE}, 0, WRITER_STRING_VALIDATION_NB-1, .unit = "sv" }, - { "ignore", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = WRITER_STRING_VALIDATION_IGNORE}, .unit = "sv" }, - { "replace", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = WRITER_STRING_VALIDATION_REPLACE}, .unit = "sv" }, - { "fail", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = WRITER_STRING_VALIDATION_FAIL}, .unit = "sv" }, - { "string_validation_replacement", "set string validation replacement string", OFFSET(string_validation_replacement), AV_OPT_TYPE_STRING, {.str=""}}, 
- { "svr", "set string validation replacement string", OFFSET(string_validation_replacement), AV_OPT_TYPE_STRING, {.str="\xEF\xBF\xBD"}}, - { NULL } -}; - -static void *writer_child_next(void *obj, void *prev) -{ - WriterContext *ctx = obj; - if (!prev && ctx->writer && ctx->writer->priv_class && ctx->priv) - return ctx->priv; - return NULL; -} - -static const AVClass writer_class = { - .class_name = "Writer", - .item_name = writer_get_name, - .option = writer_options, - .version = LIBAVUTIL_VERSION_INT, - .child_next = writer_child_next, -}; - -static int writer_close(WriterContext **wctx) -{ - int i; - int ret = 0; - - if (!*wctx) - return -1; - - if ((*wctx)->writer->uninit) - (*wctx)->writer->uninit(*wctx); - for (i = 0; i < SECTION_MAX_NB_LEVELS; i++) - av_bprint_finalize(&(*wctx)->section_pbuf[i], NULL); - if ((*wctx)->writer->priv_class) - av_opt_free((*wctx)->priv); - av_freep(&((*wctx)->priv)); - av_opt_free(*wctx); - if ((*wctx)->avio) { - avio_flush((*wctx)->avio); - ret = avio_close((*wctx)->avio); - } - av_freep(wctx); - return ret; -} - -static void bprint_bytes(AVBPrint *bp, const uint8_t *ubuf, size_t ubuf_size) -{ - int i; - av_bprintf(bp, "0X"); - for (i = 0; i < ubuf_size; i++) - av_bprintf(bp, "%02X", ubuf[i]); -} - -static inline void writer_w8_avio(WriterContext *wctx, int b) -{ - avio_w8(wctx->avio, b); -} - -static inline void writer_put_str_avio(WriterContext *wctx, const char *str) -{ - avio_write(wctx->avio, str, strlen(str)); -} - -static inline void writer_printf_avio(WriterContext *wctx, const char *fmt, ...) -{ - va_list ap; - - va_start(ap, fmt); - avio_vprintf(wctx->avio, fmt, ap); - va_end(ap); -} - -static inline void writer_w8_printf(WriterContext *wctx, int b) -{ - printf("%c", b); -} - -static inline void writer_put_str_printf(WriterContext *wctx, const char *str) -{ - printf("%s", str); -} - -static inline void writer_printf_printf(WriterContext *wctx, const char *fmt, ...) 
-{ - va_list ap; - - va_start(ap, fmt); - vprintf(fmt, ap); - va_end(ap); -} - -static int writer_open(WriterContext **wctx, const Writer *writer, const char *args, - const struct section *sections, int nb_sections, const char *output) -{ - int i, ret = 0; - - if (!(*wctx = av_mallocz(sizeof(WriterContext)))) { - ret = AVERROR(ENOMEM); - goto fail; - } - - if (!((*wctx)->priv = av_mallocz(writer->priv_size))) { - ret = AVERROR(ENOMEM); - goto fail; - } - - (*wctx)->class = &writer_class; - (*wctx)->writer = writer; - (*wctx)->level = -1; - (*wctx)->sections = sections; - (*wctx)->nb_sections = nb_sections; - - av_opt_set_defaults(*wctx); - - if (writer->priv_class) { - void *priv_ctx = (*wctx)->priv; - *((const AVClass **)priv_ctx) = writer->priv_class; - av_opt_set_defaults(priv_ctx); - } - - /* convert options to dictionary */ - if (args) { - AVDictionary *opts = NULL; - const AVDictionaryEntry *opt = NULL; - - if ((ret = av_dict_parse_string(&opts, args, "=", ":", 0)) < 0) { - av_log(*wctx, AV_LOG_ERROR, "Failed to parse option string '%s' provided to writer context\n", args); - av_dict_free(&opts); - goto fail; - } - - while ((opt = av_dict_iterate(opts, opt))) { - if ((ret = av_opt_set(*wctx, opt->key, opt->value, AV_OPT_SEARCH_CHILDREN)) < 0) { - av_log(*wctx, AV_LOG_ERROR, "Failed to set option '%s' with value '%s' provided to writer context\n", - opt->key, opt->value); - av_dict_free(&opts); - goto fail; - } - } - - av_dict_free(&opts); - } - - /* validate replace string */ - { - const uint8_t *p = (*wctx)->string_validation_replacement; - const uint8_t *endp = p + strlen(p); - while (*p) { - const uint8_t *p0 = p; - int32_t code; - ret = av_utf8_decode(&code, &p, endp, (*wctx)->string_validation_utf8_flags); - if (ret < 0) { - AVBPrint bp; - av_bprint_init(&bp, 0, AV_BPRINT_SIZE_AUTOMATIC); - bprint_bytes(&bp, p0, p-p0), - av_log(wctx, AV_LOG_ERROR, - "Invalid UTF8 sequence %s found in string validation replace '%s'\n", - bp.str, (*wctx)->string_validation_replacement); - return ret; - } - } - } - - if (!output_filename) { - (*wctx)->writer_w8 = writer_w8_printf; - (*wctx)->writer_put_str = writer_put_str_printf; - (*wctx)->writer_printf = writer_printf_printf; - } else { - if ((ret = avio_open(&(*wctx)->avio, output, AVIO_FLAG_WRITE)) < 0) { - av_log(*wctx, AV_LOG_ERROR, - "Failed to open output '%s' with error: %s\n", output, av_err2str(ret)); - goto fail; - } - (*wctx)->writer_w8 = writer_w8_avio; - (*wctx)->writer_put_str = writer_put_str_avio; - (*wctx)->writer_printf = writer_printf_avio; - } - - for (i = 0; i < SECTION_MAX_NB_LEVELS; i++) - av_bprint_init(&(*wctx)->section_pbuf[i], 1, AV_BPRINT_SIZE_UNLIMITED); - - if ((*wctx)->writer->init) - ret = (*wctx)->writer->init(*wctx); - if (ret < 0) - goto fail; - - return 0; - -fail: - writer_close(wctx); - return ret; -} - -static inline void writer_print_section_header(WriterContext *wctx, - int section_id) -{ - int parent_section_id; - wctx->level++; - av_assert0(wctx->level < SECTION_MAX_NB_LEVELS); - parent_section_id = wctx->level ? - (wctx->section[wctx->level-1])->id : SECTION_ID_NONE; - - wctx->nb_item[wctx->level] = 0; - wctx->section[wctx->level] = &wctx->sections[section_id]; - - if (section_id == SECTION_ID_PACKETS_AND_FRAMES) { - wctx->nb_section_packet = wctx->nb_section_frame = - wctx->nb_section_packet_frame = 0; - } else if (parent_section_id == SECTION_ID_PACKETS_AND_FRAMES) { - wctx->nb_section_packet_frame = section_id == SECTION_ID_PACKET ? 
- wctx->nb_section_packet : wctx->nb_section_frame; - } - - if (wctx->writer->print_section_header) - wctx->writer->print_section_header(wctx); -} - -static inline void writer_print_section_footer(WriterContext *wctx) -{ - int section_id = wctx->section[wctx->level]->id; - int parent_section_id = wctx->level ? - wctx->section[wctx->level-1]->id : SECTION_ID_NONE; - - if (parent_section_id != SECTION_ID_NONE) - wctx->nb_item[wctx->level-1]++; - if (parent_section_id == SECTION_ID_PACKETS_AND_FRAMES) { - if (section_id == SECTION_ID_PACKET) wctx->nb_section_packet++; - else wctx->nb_section_frame++; - } - if (wctx->writer->print_section_footer) - wctx->writer->print_section_footer(wctx); - wctx->level--; -} - -static inline void writer_print_integer(WriterContext *wctx, - const char *key, long long int val) -{ - const struct section *section = wctx->section[wctx->level]; - - if (section->show_all_entries || av_dict_get(section->entries_to_show, key, NULL, 0)) { - wctx->writer->print_integer(wctx, key, val); - wctx->nb_item[wctx->level]++; - } -} - -static inline int validate_string(WriterContext *wctx, char **dstp, const char *src) -{ - const uint8_t *p, *endp; - AVBPrint dstbuf; - int invalid_chars_nb = 0, ret = 0; - - av_bprint_init(&dstbuf, 0, AV_BPRINT_SIZE_UNLIMITED); - - endp = src + strlen(src); - for (p = (uint8_t *)src; *p;) { - uint32_t code; - int invalid = 0; - const uint8_t *p0 = p; - - if (av_utf8_decode(&code, &p, endp, wctx->string_validation_utf8_flags) < 0) { - AVBPrint bp; - av_bprint_init(&bp, 0, AV_BPRINT_SIZE_AUTOMATIC); - bprint_bytes(&bp, p0, p-p0); - av_log(wctx, AV_LOG_DEBUG, - "Invalid UTF-8 sequence %s found in string '%s'\n", bp.str, src); - invalid = 1; - } - - if (invalid) { - invalid_chars_nb++; - - switch (wctx->string_validation) { - case WRITER_STRING_VALIDATION_FAIL: - av_log(wctx, AV_LOG_ERROR, - "Invalid UTF-8 sequence found in string '%s'\n", src); - ret = AVERROR_INVALIDDATA; - goto end; - break; - - case WRITER_STRING_VALIDATION_REPLACE: - av_bprintf(&dstbuf, "%s", wctx->string_validation_replacement); - break; - } - } - - if (!invalid || wctx->string_validation == WRITER_STRING_VALIDATION_IGNORE) - av_bprint_append_data(&dstbuf, p0, p-p0); - } - - if (invalid_chars_nb && wctx->string_validation == WRITER_STRING_VALIDATION_REPLACE) { - av_log(wctx, AV_LOG_WARNING, - "%d invalid UTF-8 sequence(s) found in string '%s', replaced with '%s'\n", - invalid_chars_nb, src, wctx->string_validation_replacement); - } - -end: - av_bprint_finalize(&dstbuf, dstp); - return ret; -} - -#define PRINT_STRING_OPT 1 -#define PRINT_STRING_VALIDATE 2 - -static inline int writer_print_string(WriterContext *wctx, - const char *key, const char *val, int flags) -{ - const struct section *section = wctx->section[wctx->level]; - int ret = 0; - - if (show_optional_fields == SHOW_OPTIONAL_FIELDS_NEVER || - (show_optional_fields == SHOW_OPTIONAL_FIELDS_AUTO - && (flags & PRINT_STRING_OPT) - && !(wctx->writer->flags & WRITER_FLAG_DISPLAY_OPTIONAL_FIELDS))) - return 0; - - if (section->show_all_entries || av_dict_get(section->entries_to_show, key, NULL, 0)) { - if (flags & PRINT_STRING_VALIDATE) { - char *key1 = NULL, *val1 = NULL; - ret = validate_string(wctx, &key1, key); - if (ret < 0) goto end; - ret = validate_string(wctx, &val1, val); - if (ret < 0) goto end; - wctx->writer->print_string(wctx, key1, val1); - end: - if (ret < 0) { - av_log(wctx, AV_LOG_ERROR, - "Invalid key=value string combination %s=%s in section %s\n", - key, val, section->unique_name); - } - av_free(key1); - 
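/* both temporaries are freed unconditionally: validate_string() finalizes its buffer into the destination even on failure, and av_free(NULL) is a no-op for the one never reached */ -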
av_free(val1); - } else { - wctx->writer->print_string(wctx, key, val); - } - - wctx->nb_item[wctx->level]++; - } - - return ret; -} - -static inline void writer_print_rational(WriterContext *wctx, - const char *key, AVRational q, char sep) -{ - AVBPrint buf; - av_bprint_init(&buf, 0, AV_BPRINT_SIZE_AUTOMATIC); - av_bprintf(&buf, "%d%c%d", q.num, sep, q.den); - writer_print_string(wctx, key, buf.str, 0); -} - -static void writer_print_time(WriterContext *wctx, const char *key, - int64_t ts, const AVRational *time_base, int is_duration) -{ - char buf[128]; - - if ((!is_duration && ts == AV_NOPTS_VALUE) || (is_duration && ts == 0)) { - writer_print_string(wctx, key, "N/A", PRINT_STRING_OPT); - } else { - double d = ts * av_q2d(*time_base); - struct unit_value uv; - uv.val.d = d; - uv.unit = unit_second_str; - value_string(buf, sizeof(buf), uv); - writer_print_string(wctx, key, buf, 0); - } -} - -static void writer_print_ts(WriterContext *wctx, const char *key, int64_t ts, int is_duration) -{ - if ((!is_duration && ts == AV_NOPTS_VALUE) || (is_duration && ts == 0)) { - writer_print_string(wctx, key, "N/A", PRINT_STRING_OPT); - } else { - writer_print_integer(wctx, key, ts); - } -} - -static void writer_print_data(WriterContext *wctx, const char *name, - const uint8_t *data, int size) -{ - AVBPrint bp; - int offset = 0, l, i; - - av_bprint_init(&bp, 0, AV_BPRINT_SIZE_UNLIMITED); - av_bprintf(&bp, "\n"); - while (size) { - av_bprintf(&bp, "%08x: ", offset); - l = FFMIN(size, 16); - for (i = 0; i < l; i++) { - av_bprintf(&bp, "%02x", data[i]); - if (i & 1) - av_bprintf(&bp, " "); - } - av_bprint_chars(&bp, ' ', 41 - 2 * i - i / 2); - for (i = 0; i < l; i++) - av_bprint_chars(&bp, data[i] - 32U < 95 ? data[i] : '.', 1); - av_bprintf(&bp, "\n"); - offset += l; - data += l; - size -= l; - } - writer_print_string(wctx, name, bp.str, 0); - av_bprint_finalize(&bp, NULL); -} - -static void writer_print_data_hash(WriterContext *wctx, const char *name, - const uint8_t *data, int size) -{ - char *p, buf[AV_HASH_MAX_SIZE * 2 + 64] = { 0 }; - - if (!hash) - return; - av_hash_init(hash); - av_hash_update(hash, data, size); - snprintf(buf, sizeof(buf), "%s:", av_hash_get_name(hash)); - p = buf + strlen(buf); - av_hash_final_hex(hash, p, buf + sizeof(buf) - p); - writer_print_string(wctx, name, buf, 0); -} - -static void writer_print_integers(WriterContext *wctx, const char *name, - uint8_t *data, int size, const char *format, - int columns, int bytes, int offset_add) -{ - AVBPrint bp; - int offset = 0, l, i; - - av_bprint_init(&bp, 0, AV_BPRINT_SIZE_UNLIMITED); - av_bprintf(&bp, "\n"); - while (size) { - av_bprintf(&bp, "%08x: ", offset); - l = FFMIN(size, columns); - for (i = 0; i < l; i++) { - if (bytes == 1) av_bprintf(&bp, format, *data); - else if (bytes == 2) av_bprintf(&bp, format, AV_RN16(data)); - else if (bytes == 4) av_bprintf(&bp, format, AV_RN32(data)); - data += bytes; - size --; - } - av_bprintf(&bp, "\n"); - offset += offset_add; - } - writer_print_string(wctx, name, bp.str, 0); - av_bprint_finalize(&bp, NULL); -} - -#define writer_w8(wctx_, b_) (wctx_)->writer_w8(wctx_, b_) -#define writer_put_str(wctx_, str_) (wctx_)->writer_put_str(wctx_, str_) -#define writer_printf(wctx_, fmt_, ...) 
(wctx_)->writer_printf(wctx_, fmt_, __VA_ARGS__) - -#define MAX_REGISTERED_WRITERS_NB 64 - -static const Writer *registered_writers[MAX_REGISTERED_WRITERS_NB + 1]; - -static int writer_register(const Writer *writer) -{ - static int next_registered_writer_idx = 0; - - if (next_registered_writer_idx == MAX_REGISTERED_WRITERS_NB) - return AVERROR(ENOMEM); - - registered_writers[next_registered_writer_idx++] = writer; - return 0; -} - -static const Writer *writer_get_by_name(const char *name) -{ - int i; - - for (i = 0; registered_writers[i]; i++) - if (!strcmp(registered_writers[i]->name, name)) - return registered_writers[i]; - - return NULL; -} - - -/* WRITERS */ - -#define DEFINE_WRITER_CLASS(name) \ -static const char *name##_get_name(void *ctx) \ -{ \ - return #name ; \ -} \ -static const AVClass name##_class = { \ - .class_name = #name, \ - .item_name = name##_get_name, \ - .option = name##_options \ -} - -/* Default output */ - -typedef struct DefaultContext { - const AVClass *class; - int nokey; - int noprint_wrappers; - int nested_section[SECTION_MAX_NB_LEVELS]; -} DefaultContext; - -#undef OFFSET -#define OFFSET(x) offsetof(DefaultContext, x) - -static const AVOption default_options[] = { - { "noprint_wrappers", "do not print headers and footers", OFFSET(noprint_wrappers), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1 }, - { "nw", "do not print headers and footers", OFFSET(noprint_wrappers), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1 }, - { "nokey", "force no key printing", OFFSET(nokey), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1 }, - { "nk", "force no key printing", OFFSET(nokey), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1 }, - {NULL}, -}; - -DEFINE_WRITER_CLASS(default); - -/* lame uppercasing routine, assumes the string is lower case ASCII */ -static inline char *upcase_string(char *dst, size_t dst_size, const char *src) -{ - int i; - for (i = 0; src[i] && i < dst_size-1; i++) - dst[i] = av_toupper(src[i]); - dst[i] = 0; - return dst; -} - -static void default_print_section_header(WriterContext *wctx) -{ - DefaultContext *def = wctx->priv; - char buf[32]; - const struct section *section = wctx->section[wctx->level]; - const struct section *parent_section = wctx->level ? 
- wctx->section[wctx->level-1] : NULL; - - av_bprint_clear(&wctx->section_pbuf[wctx->level]); - if (parent_section && - !(parent_section->flags & (SECTION_FLAG_IS_WRAPPER|SECTION_FLAG_IS_ARRAY))) { - def->nested_section[wctx->level] = 1; - av_bprintf(&wctx->section_pbuf[wctx->level], "%s%s:", - wctx->section_pbuf[wctx->level-1].str, - upcase_string(buf, sizeof(buf), - av_x_if_null(section->element_name, section->name))); - } - - if (def->noprint_wrappers || def->nested_section[wctx->level]) - return; - - if (!(section->flags & (SECTION_FLAG_IS_WRAPPER|SECTION_FLAG_IS_ARRAY))) - writer_printf(wctx, "[%s]\n", upcase_string(buf, sizeof(buf), section->name)); -} - -static void default_print_section_footer(WriterContext *wctx) -{ - DefaultContext *def = wctx->priv; - const struct section *section = wctx->section[wctx->level]; - char buf[32]; - - if (def->noprint_wrappers || def->nested_section[wctx->level]) - return; - - if (!(section->flags & (SECTION_FLAG_IS_WRAPPER|SECTION_FLAG_IS_ARRAY))) - writer_printf(wctx, "[/%s]\n", upcase_string(buf, sizeof(buf), section->name)); -} - -static void default_print_str(WriterContext *wctx, const char *key, const char *value) -{ - DefaultContext *def = wctx->priv; - - if (!def->nokey) - writer_printf(wctx, "%s%s=", wctx->section_pbuf[wctx->level].str, key); - writer_printf(wctx, "%s\n", value); -} - -static void default_print_int(WriterContext *wctx, const char *key, long long int value) -{ - DefaultContext *def = wctx->priv; - - if (!def->nokey) - writer_printf(wctx, "%s%s=", wctx->section_pbuf[wctx->level].str, key); - writer_printf(wctx, "%lld\n", value); -} - -static const Writer default_writer = { - .name = "default", - .priv_size = sizeof(DefaultContext), - .print_section_header = default_print_section_header, - .print_section_footer = default_print_section_footer, - .print_integer = default_print_int, - .print_string = default_print_str, - .flags = WRITER_FLAG_DISPLAY_OPTIONAL_FIELDS, - .priv_class = &default_class, -}; - -/* Compact output */ - -/** - * Apply C-language-like string escaping. - */ -static const char *c_escape_str(AVBPrint *dst, const char *src, const char sep, void *log_ctx) -{ - const char *p; - - for (p = src; *p; p++) { - switch (*p) { - case '\b': av_bprintf(dst, "%s", "\\b"); break; - case '\f': av_bprintf(dst, "%s", "\\f"); break; - case '\n': av_bprintf(dst, "%s", "\\n"); break; - case '\r': av_bprintf(dst, "%s", "\\r"); break; - case '\\': av_bprintf(dst, "%s", "\\\\"); break; - default: - if (*p == sep) - av_bprint_chars(dst, '\\', 1); - av_bprint_chars(dst, *p, 1); - } - } - return dst->str; -} - -/** - * Quote fields containing special characters, check RFC4180. 
- */ -static const char *csv_escape_str(AVBPrint *dst, const char *src, const char sep, void *log_ctx) -{ - char meta_chars[] = { sep, '"', '\n', '\r', '\0' }; - int needs_quoting = !!src[strcspn(src, meta_chars)]; - - if (needs_quoting) - av_bprint_chars(dst, '"', 1); - - for (; *src; src++) { - if (*src == '"') - av_bprint_chars(dst, '"', 1); - av_bprint_chars(dst, *src, 1); - } - if (needs_quoting) - av_bprint_chars(dst, '"', 1); - return dst->str; -} - -static const char *none_escape_str(AVBPrint *dst, const char *src, const char sep, void *log_ctx) -{ - return src; -} - -typedef struct CompactContext { - const AVClass *class; - char *item_sep_str; - char item_sep; - int nokey; - int print_section; - char *escape_mode_str; - const char * (*escape_str)(AVBPrint *dst, const char *src, const char sep, void *log_ctx); - int nested_section[SECTION_MAX_NB_LEVELS]; - int has_nested_elems[SECTION_MAX_NB_LEVELS]; - int terminate_line[SECTION_MAX_NB_LEVELS]; -} CompactContext; - -#undef OFFSET -#define OFFSET(x) offsetof(CompactContext, x) - -static const AVOption compact_options[]= { - {"item_sep", "set item separator", OFFSET(item_sep_str), AV_OPT_TYPE_STRING, {.str="|"}, 0, 0 }, - {"s", "set item separator", OFFSET(item_sep_str), AV_OPT_TYPE_STRING, {.str="|"}, 0, 0 }, - {"nokey", "force no key printing", OFFSET(nokey), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1 }, - {"nk", "force no key printing", OFFSET(nokey), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1 }, - {"escape", "set escape mode", OFFSET(escape_mode_str), AV_OPT_TYPE_STRING, {.str="c"}, 0, 0 }, - {"e", "set escape mode", OFFSET(escape_mode_str), AV_OPT_TYPE_STRING, {.str="c"}, 0, 0 }, - {"print_section", "print section name", OFFSET(print_section), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 }, - {"p", "print section name", OFFSET(print_section), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 }, - {NULL}, -}; - -DEFINE_WRITER_CLASS(compact); - -static av_cold int compact_init(WriterContext *wctx) -{ - CompactContext *compact = wctx->priv; - - if (strlen(compact->item_sep_str) != 1) { - av_log(wctx, AV_LOG_ERROR, "Item separator '%s' specified, but must contain a single character\n", - compact->item_sep_str); - return AVERROR(EINVAL); - } - compact->item_sep = compact->item_sep_str[0]; - - if (!strcmp(compact->escape_mode_str, "none")) compact->escape_str = none_escape_str; - else if (!strcmp(compact->escape_mode_str, "c" )) compact->escape_str = c_escape_str; - else if (!strcmp(compact->escape_mode_str, "csv" )) compact->escape_str = csv_escape_str; - else { - av_log(wctx, AV_LOG_ERROR, "Unknown escape mode '%s'\n", compact->escape_mode_str); - return AVERROR(EINVAL); - } - - return 0; -} - -static void compact_print_section_header(WriterContext *wctx) -{ - CompactContext *compact = wctx->priv; - const struct section *section = wctx->section[wctx->level]; - const struct section *parent_section = wctx->level ? 
- wctx->section[wctx->level-1] : NULL; - compact->terminate_line[wctx->level] = 1; - compact->has_nested_elems[wctx->level] = 0; - - av_bprint_clear(&wctx->section_pbuf[wctx->level]); - if (!(section->flags & SECTION_FLAG_IS_ARRAY) && parent_section && - !(parent_section->flags & (SECTION_FLAG_IS_WRAPPER|SECTION_FLAG_IS_ARRAY))) { - compact->nested_section[wctx->level] = 1; - compact->has_nested_elems[wctx->level-1] = 1; - av_bprintf(&wctx->section_pbuf[wctx->level], "%s%s:", - wctx->section_pbuf[wctx->level-1].str, - (char *)av_x_if_null(section->element_name, section->name)); - wctx->nb_item[wctx->level] = wctx->nb_item[wctx->level-1]; - } else { - if (parent_section && compact->has_nested_elems[wctx->level-1] && - (section->flags & SECTION_FLAG_IS_ARRAY)) { - compact->terminate_line[wctx->level-1] = 0; - } - if (parent_section && !(parent_section->flags & (SECTION_FLAG_IS_WRAPPER|SECTION_FLAG_IS_ARRAY)) && - wctx->level && wctx->nb_item[wctx->level-1]) - writer_w8(wctx, compact->item_sep); - if (compact->print_section && - !(section->flags & (SECTION_FLAG_IS_WRAPPER|SECTION_FLAG_IS_ARRAY))) - writer_printf(wctx, "%s%c", section->name, compact->item_sep); - } -} - -static void compact_print_section_footer(WriterContext *wctx) -{ - CompactContext *compact = wctx->priv; - - if (!compact->nested_section[wctx->level] && - compact->terminate_line[wctx->level] && - !(wctx->section[wctx->level]->flags & (SECTION_FLAG_IS_WRAPPER|SECTION_FLAG_IS_ARRAY))) - writer_w8(wctx, '\n'); -} - -static void compact_print_str(WriterContext *wctx, const char *key, const char *value) -{ - CompactContext *compact = wctx->priv; - AVBPrint buf; - - if (wctx->nb_item[wctx->level]) writer_w8(wctx, compact->item_sep); - if (!compact->nokey) - writer_printf(wctx, "%s%s=", wctx->section_pbuf[wctx->level].str, key); - av_bprint_init(&buf, 1, AV_BPRINT_SIZE_UNLIMITED); - writer_put_str(wctx, compact->escape_str(&buf, value, compact->item_sep, wctx)); - av_bprint_finalize(&buf, NULL); -} - -static void compact_print_int(WriterContext *wctx, const char *key, long long int value) -{ - CompactContext *compact = wctx->priv; - - if (wctx->nb_item[wctx->level]) writer_w8(wctx, compact->item_sep); - if (!compact->nokey) - writer_printf(wctx, "%s%s=", wctx->section_pbuf[wctx->level].str, key); - writer_printf(wctx, "%lld", value); -} - -static const Writer compact_writer = { - .name = "compact", - .priv_size = sizeof(CompactContext), - .init = compact_init, - .print_section_header = compact_print_section_header, - .print_section_footer = compact_print_section_footer, - .print_integer = compact_print_int, - .print_string = compact_print_str, - .flags = WRITER_FLAG_DISPLAY_OPTIONAL_FIELDS, - .priv_class = &compact_class, -}; - -/* CSV output */ - -#undef OFFSET -#define OFFSET(x) offsetof(CompactContext, x) - -static const AVOption csv_options[] = { - {"item_sep", "set item separator", OFFSET(item_sep_str), AV_OPT_TYPE_STRING, {.str=","}, 0, 0 }, - {"s", "set item separator", OFFSET(item_sep_str), AV_OPT_TYPE_STRING, {.str=","}, 0, 0 }, - {"nokey", "force no key printing", OFFSET(nokey), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 }, - {"nk", "force no key printing", OFFSET(nokey), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 }, - {"escape", "set escape mode", OFFSET(escape_mode_str), AV_OPT_TYPE_STRING, {.str="csv"}, 0, 0 }, - {"e", "set escape mode", OFFSET(escape_mode_str), AV_OPT_TYPE_STRING, {.str="csv"}, 0, 0 }, - {"print_section", "print section name", OFFSET(print_section), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 }, - {"p", "print section name", 
OFFSET(print_section), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 }, - {NULL}, -}; - -DEFINE_WRITER_CLASS(csv); - -static const Writer csv_writer = { - .name = "csv", - .priv_size = sizeof(CompactContext), - .init = compact_init, - .print_section_header = compact_print_section_header, - .print_section_footer = compact_print_section_footer, - .print_integer = compact_print_int, - .print_string = compact_print_str, - .flags = WRITER_FLAG_DISPLAY_OPTIONAL_FIELDS, - .priv_class = &csv_class, -}; - -/* Flat output */ - -typedef struct FlatContext { - const AVClass *class; - const char *sep_str; - char sep; - int hierarchical; -} FlatContext; - -#undef OFFSET -#define OFFSET(x) offsetof(FlatContext, x) - -static const AVOption flat_options[]= { - {"sep_char", "set separator", OFFSET(sep_str), AV_OPT_TYPE_STRING, {.str="."}, 0, 0 }, - {"s", "set separator", OFFSET(sep_str), AV_OPT_TYPE_STRING, {.str="."}, 0, 0 }, - {"hierarchical", "specify if the section specification should be hierarchical", OFFSET(hierarchical), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 }, - {"h", "specify if the section specification should be hierarchical", OFFSET(hierarchical), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 }, - {NULL}, -}; - -DEFINE_WRITER_CLASS(flat); - -static av_cold int flat_init(WriterContext *wctx) -{ - FlatContext *flat = wctx->priv; - - if (strlen(flat->sep_str) != 1) { - av_log(wctx, AV_LOG_ERROR, "Item separator '%s' specified, but must contain a single character\n", - flat->sep_str); - return AVERROR(EINVAL); - } - flat->sep = flat->sep_str[0]; - - return 0; -} - -static const char *flat_escape_key_str(AVBPrint *dst, const char *src, const char sep) -{ - const char *p; - - for (p = src; *p; p++) { - if (!((*p >= '0' && *p <= '9') || - (*p >= 'a' && *p <= 'z') || - (*p >= 'A' && *p <= 'Z'))) - av_bprint_chars(dst, '_', 1); - else - av_bprint_chars(dst, *p, 1); - } - return dst->str; -} - -static const char *flat_escape_value_str(AVBPrint *dst, const char *src) -{ - const char *p; - - for (p = src; *p; p++) { - switch (*p) { - case '\n': av_bprintf(dst, "%s", "\\n"); break; - case '\r': av_bprintf(dst, "%s", "\\r"); break; - case '\\': av_bprintf(dst, "%s", "\\\\"); break; - case '"': av_bprintf(dst, "%s", "\\\""); break; - case '`': av_bprintf(dst, "%s", "\\`"); break; - case '$': av_bprintf(dst, "%s", "\\$"); break; - default: av_bprint_chars(dst, *p, 1); break; - } - } - return dst->str; -} - -static void flat_print_section_header(WriterContext *wctx) -{ - FlatContext *flat = wctx->priv; - AVBPrint *buf = &wctx->section_pbuf[wctx->level]; - const struct section *section = wctx->section[wctx->level]; - const struct section *parent_section = wctx->level ? - wctx->section[wctx->level-1] : NULL; - - /* build section header */ - av_bprint_clear(buf); - if (!parent_section) - return; - av_bprintf(buf, "%s", wctx->section_pbuf[wctx->level-1].str); - - if (flat->hierarchical || - !(section->flags & (SECTION_FLAG_IS_ARRAY|SECTION_FLAG_IS_WRAPPER))) { - av_bprintf(buf, "%s%s", wctx->section[wctx->level]->name, flat->sep_str); - - if (parent_section->flags & SECTION_FLAG_IS_ARRAY) { - int n = parent_section->id == SECTION_ID_PACKETS_AND_FRAMES ? 
- wctx->nb_section_packet_frame : wctx->nb_item[wctx->level-1]; - av_bprintf(buf, "%d%s", n, flat->sep_str); - } - } -} - -static void flat_print_int(WriterContext *wctx, const char *key, long long int value) -{ - writer_printf(wctx, "%s%s=%lld\n", wctx->section_pbuf[wctx->level].str, key, value); -} - -static void flat_print_str(WriterContext *wctx, const char *key, const char *value) -{ - FlatContext *flat = wctx->priv; - AVBPrint buf; - - writer_put_str(wctx, wctx->section_pbuf[wctx->level].str); - av_bprint_init(&buf, 1, AV_BPRINT_SIZE_UNLIMITED); - writer_printf(wctx, "%s=", flat_escape_key_str(&buf, key, flat->sep)); - av_bprint_clear(&buf); - writer_printf(wctx, "\"%s\"\n", flat_escape_value_str(&buf, value)); - av_bprint_finalize(&buf, NULL); -} - -static const Writer flat_writer = { - .name = "flat", - .priv_size = sizeof(FlatContext), - .init = flat_init, - .print_section_header = flat_print_section_header, - .print_integer = flat_print_int, - .print_string = flat_print_str, - .flags = WRITER_FLAG_DISPLAY_OPTIONAL_FIELDS|WRITER_FLAG_PUT_PACKETS_AND_FRAMES_IN_SAME_CHAPTER, - .priv_class = &flat_class, -}; - -/* INI format output */ - -typedef struct INIContext { - const AVClass *class; - int hierarchical; -} INIContext; - -#undef OFFSET -#define OFFSET(x) offsetof(INIContext, x) - -static const AVOption ini_options[] = { - {"hierarchical", "specify if the section specification should be hierarchical", OFFSET(hierarchical), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 }, - {"h", "specify if the section specification should be hierarchical", OFFSET(hierarchical), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 }, - {NULL}, -}; - -DEFINE_WRITER_CLASS(ini); - -static char *ini_escape_str(AVBPrint *dst, const char *src) -{ - int i = 0; - char c = 0; - - while (c = src[i++]) { - switch (c) { - case '\b': av_bprintf(dst, "%s", "\\b"); break; - case '\f': av_bprintf(dst, "%s", "\\f"); break; - case '\n': av_bprintf(dst, "%s", "\\n"); break; - case '\r': av_bprintf(dst, "%s", "\\r"); break; - case '\t': av_bprintf(dst, "%s", "\\t"); break; - case '\\': - case '#' : - case '=' : - case ':' : av_bprint_chars(dst, '\\', 1); - default: - if ((unsigned char)c < 32) - av_bprintf(dst, "\\x00%02x", c & 0xff); - else - av_bprint_chars(dst, c, 1); - break; - } - } - return dst->str; -} - -static void ini_print_section_header(WriterContext *wctx) -{ - INIContext *ini = wctx->priv; - AVBPrint *buf = &wctx->section_pbuf[wctx->level]; - const struct section *section = wctx->section[wctx->level]; - const struct section *parent_section = wctx->level ? - wctx->section[wctx->level-1] : NULL; - - av_bprint_clear(buf); - if (!parent_section) { - writer_put_str(wctx, "# ffprobe output\n\n"); - return; - } - - if (wctx->nb_item[wctx->level-1]) - writer_w8(wctx, '\n'); - - av_bprintf(buf, "%s", wctx->section_pbuf[wctx->level-1].str); - if (ini->hierarchical || - !(section->flags & (SECTION_FLAG_IS_ARRAY|SECTION_FLAG_IS_WRAPPER))) { - av_bprintf(buf, "%s%s", buf->str[0] ? "." : "", wctx->section[wctx->level]->name); - - if (parent_section->flags & SECTION_FLAG_IS_ARRAY) { - int n = parent_section->id == SECTION_ID_PACKETS_AND_FRAMES ? 
- wctx->nb_section_packet_frame : wctx->nb_item[wctx->level-1]; - av_bprintf(buf, ".%d", n); - } - } - - if (!(section->flags & (SECTION_FLAG_IS_ARRAY|SECTION_FLAG_IS_WRAPPER))) - writer_printf(wctx, "[%s]\n", buf->str); -} - -static void ini_print_str(WriterContext *wctx, const char *key, const char *value) -{ - AVBPrint buf; - - av_bprint_init(&buf, 1, AV_BPRINT_SIZE_UNLIMITED); - writer_printf(wctx, "%s=", ini_escape_str(&buf, key)); - av_bprint_clear(&buf); - writer_printf(wctx, "%s\n", ini_escape_str(&buf, value)); - av_bprint_finalize(&buf, NULL); -} - -static void ini_print_int(WriterContext *wctx, const char *key, long long int value) -{ - writer_printf(wctx, "%s=%lld\n", key, value); -} - -static const Writer ini_writer = { - .name = "ini", - .priv_size = sizeof(INIContext), - .print_section_header = ini_print_section_header, - .print_integer = ini_print_int, - .print_string = ini_print_str, - .flags = WRITER_FLAG_DISPLAY_OPTIONAL_FIELDS|WRITER_FLAG_PUT_PACKETS_AND_FRAMES_IN_SAME_CHAPTER, - .priv_class = &ini_class, -}; - -/* JSON output */ - -typedef struct JSONContext { - const AVClass *class; - int indent_level; - int compact; - const char *item_sep, *item_start_end; -} JSONContext; - -#undef OFFSET -#define OFFSET(x) offsetof(JSONContext, x) - -static const AVOption json_options[]= { - { "compact", "enable compact output", OFFSET(compact), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1 }, - { "c", "enable compact output", OFFSET(compact), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1 }, - { NULL } -}; - -DEFINE_WRITER_CLASS(json); - -static av_cold int json_init(WriterContext *wctx) -{ - JSONContext *json = wctx->priv; - - json->item_sep = json->compact ? ", " : ",\n"; - json->item_start_end = json->compact ? " " : "\n"; - - return 0; -} - -static const char *json_escape_str(AVBPrint *dst, const char *src, void *log_ctx) -{ - static const char json_escape[] = {'"', '\\', '\b', '\f', '\n', '\r', '\t', 0}; - static const char json_subst[] = {'"', '\\', 'b', 'f', 'n', 'r', 't', 0}; - const char *p; - - for (p = src; *p; p++) { - char *s = strchr(json_escape, *p); - if (s) { - av_bprint_chars(dst, '\\', 1); - av_bprint_chars(dst, json_subst[s - json_escape], 1); - } else if ((unsigned char)*p < 32) { - av_bprintf(dst, "\\u00%02x", *p & 0xff); - } else { - av_bprint_chars(dst, *p, 1); - } - } - return dst->str; -} - -#define JSON_INDENT() writer_printf(wctx, "%*c", json->indent_level * 4, ' ') - -static void json_print_section_header(WriterContext *wctx) -{ - JSONContext *json = wctx->priv; - AVBPrint buf; - const struct section *section = wctx->section[wctx->level]; - const struct section *parent_section = wctx->level ? 
- wctx->section[wctx->level-1] : NULL; - - if (wctx->level && wctx->nb_item[wctx->level-1]) - writer_put_str(wctx, ",\n"); - - if (section->flags & SECTION_FLAG_IS_WRAPPER) { - writer_put_str(wctx, "{\n"); - json->indent_level++; - } else { - av_bprint_init(&buf, 1, AV_BPRINT_SIZE_UNLIMITED); - json_escape_str(&buf, section->name, wctx); - JSON_INDENT(); - - json->indent_level++; - if (section->flags & SECTION_FLAG_IS_ARRAY) { - writer_printf(wctx, "\"%s\": [\n", buf.str); - } else if (parent_section && !(parent_section->flags & SECTION_FLAG_IS_ARRAY)) { - writer_printf(wctx, "\"%s\": {%s", buf.str, json->item_start_end); - } else { - writer_printf(wctx, "{%s", json->item_start_end); - - /* this is required so the parser can distinguish between packets and frames */ - if (parent_section && parent_section->id == SECTION_ID_PACKETS_AND_FRAMES) { - if (!json->compact) - JSON_INDENT(); - writer_printf(wctx, "\"type\": \"%s\"", section->name); - wctx->nb_item[wctx->level]++; - } - } - av_bprint_finalize(&buf, NULL); - } -} - -static void json_print_section_footer(WriterContext *wctx) -{ - JSONContext *json = wctx->priv; - const struct section *section = wctx->section[wctx->level]; - - if (wctx->level == 0) { - json->indent_level--; - writer_put_str(wctx, "\n}\n"); - } else if (section->flags & SECTION_FLAG_IS_ARRAY) { - writer_w8(wctx, '\n'); - json->indent_level--; - JSON_INDENT(); - writer_w8(wctx, ']'); - } else { - writer_put_str(wctx, json->item_start_end); - json->indent_level--; - if (!json->compact) - JSON_INDENT(); - writer_w8(wctx, '}'); - } -} - -static inline void json_print_item_str(WriterContext *wctx, - const char *key, const char *value) -{ - AVBPrint buf; - - av_bprint_init(&buf, 1, AV_BPRINT_SIZE_UNLIMITED); - writer_printf(wctx, "\"%s\":", json_escape_str(&buf, key, wctx)); - av_bprint_clear(&buf); - writer_printf(wctx, " \"%s\"", json_escape_str(&buf, value, wctx)); - av_bprint_finalize(&buf, NULL); -} - -static void json_print_str(WriterContext *wctx, const char *key, const char *value) -{ - JSONContext *json = wctx->priv; - const struct section *parent_section = wctx->level ? - wctx->section[wctx->level-1] : NULL; - - if (wctx->nb_item[wctx->level] || (parent_section && parent_section->id == SECTION_ID_PACKETS_AND_FRAMES)) - writer_put_str(wctx, json->item_sep); - if (!json->compact) - JSON_INDENT(); - json_print_item_str(wctx, key, value); -} - -static void json_print_int(WriterContext *wctx, const char *key, long long int value) -{ - JSONContext *json = wctx->priv; - const struct section *parent_section = wctx->level ? 
- wctx->section[wctx->level-1] : NULL; - AVBPrint buf; - - if (wctx->nb_item[wctx->level] || (parent_section && parent_section->id == SECTION_ID_PACKETS_AND_FRAMES)) - writer_put_str(wctx, json->item_sep); - if (!json->compact) - JSON_INDENT(); - - av_bprint_init(&buf, 1, AV_BPRINT_SIZE_UNLIMITED); - writer_printf(wctx, "\"%s\": %lld", json_escape_str(&buf, key, wctx), value); - av_bprint_finalize(&buf, NULL); -} - -static const Writer json_writer = { - .name = "json", - .priv_size = sizeof(JSONContext), - .init = json_init, - .print_section_header = json_print_section_header, - .print_section_footer = json_print_section_footer, - .print_integer = json_print_int, - .print_string = json_print_str, - .flags = WRITER_FLAG_PUT_PACKETS_AND_FRAMES_IN_SAME_CHAPTER, - .priv_class = &json_class, -}; - -/* XML output */ - -typedef struct XMLContext { - const AVClass *class; - int within_tag; - int indent_level; - int fully_qualified; - int xsd_strict; -} XMLContext; - -#undef OFFSET -#define OFFSET(x) offsetof(XMLContext, x) - -static const AVOption xml_options[] = { - {"fully_qualified", "specify if the output should be fully qualified", OFFSET(fully_qualified), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1 }, - {"q", "specify if the output should be fully qualified", OFFSET(fully_qualified), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1 }, - {"xsd_strict", "ensure that the output is XSD compliant", OFFSET(xsd_strict), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1 }, - {"x", "ensure that the output is XSD compliant", OFFSET(xsd_strict), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1 }, - {NULL}, -}; - -DEFINE_WRITER_CLASS(xml); - -static av_cold int xml_init(WriterContext *wctx) -{ - XMLContext *xml = wctx->priv; - - if (xml->xsd_strict) { - xml->fully_qualified = 1; -#define CHECK_COMPLIANCE(opt, opt_name) \ - if (opt) { \ - av_log(wctx, AV_LOG_ERROR, \ - "XSD-compliant output selected but option '%s' was selected, XML output may be non-compliant.\n" \ - "You need to disable such option with '-no%s'\n", opt_name, opt_name); \ - return AVERROR(EINVAL); \ - } - CHECK_COMPLIANCE(show_private_data, "private"); - CHECK_COMPLIANCE(show_value_unit, "unit"); - CHECK_COMPLIANCE(use_value_prefix, "prefix"); - } - - return 0; -} - -#define XML_INDENT() writer_printf(wctx, "%*c", xml->indent_level * 4, ' ') - -static void xml_print_section_header(WriterContext *wctx) -{ - XMLContext *xml = wctx->priv; - const struct section *section = wctx->section[wctx->level]; - const struct section *parent_section = wctx->level ? - wctx->section[wctx->level-1] : NULL; - - if (wctx->level == 0) { - const char *qual = " xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" " - "xmlns:ffprobe=\"http://www.ffmpeg.org/schema/ffprobe\" " - "xsi:schemaLocation=\"http://www.ffmpeg.org/schema/ffprobe ffprobe.xsd\""; - - writer_put_str(wctx, "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"); - writer_printf(wctx, "<%sffprobe%s>\n", - xml->fully_qualified ? "ffprobe:" : "", - xml->fully_qualified ?
qual : ""); - return; - } - - if (xml->within_tag) { - xml->within_tag = 0; - writer_put_str(wctx, ">\n"); - } - if (section->flags & SECTION_FLAG_HAS_VARIABLE_FIELDS) { - xml->indent_level++; - } else { - if (parent_section && (parent_section->flags & SECTION_FLAG_IS_WRAPPER) && - wctx->level && wctx->nb_item[wctx->level-1]) - writer_w8(wctx, '\n'); - xml->indent_level++; - - if (section->flags & SECTION_FLAG_IS_ARRAY) { - XML_INDENT(); writer_printf(wctx, "<%s>\n", section->name); - } else { - XML_INDENT(); writer_printf(wctx, "<%s ", section->name); - xml->within_tag = 1; - } - } -} - -static void xml_print_section_footer(WriterContext *wctx) -{ - XMLContext *xml = wctx->priv; - const struct section *section = wctx->section[wctx->level]; - - if (wctx->level == 0) { - writer_printf(wctx, "\n", xml->fully_qualified ? "ffprobe:" : ""); - } else if (xml->within_tag) { - xml->within_tag = 0; - writer_put_str(wctx, "/>\n"); - xml->indent_level--; - } else if (section->flags & SECTION_FLAG_HAS_VARIABLE_FIELDS) { - xml->indent_level--; - } else { - XML_INDENT(); writer_printf(wctx, "\n", section->name); - xml->indent_level--; - } -} - -static void xml_print_str(WriterContext *wctx, const char *key, const char *value) -{ - AVBPrint buf; - XMLContext *xml = wctx->priv; - const struct section *section = wctx->section[wctx->level]; - - av_bprint_init(&buf, 1, AV_BPRINT_SIZE_UNLIMITED); - - if (section->flags & SECTION_FLAG_HAS_VARIABLE_FIELDS) { - XML_INDENT(); - av_bprint_escape(&buf, key, NULL, - AV_ESCAPE_MODE_XML, AV_ESCAPE_FLAG_XML_DOUBLE_QUOTES); - writer_printf(wctx, "<%s key=\"%s\"", - section->element_name, buf.str); - av_bprint_clear(&buf); - - av_bprint_escape(&buf, value, NULL, - AV_ESCAPE_MODE_XML, AV_ESCAPE_FLAG_XML_DOUBLE_QUOTES); - writer_printf(wctx, " value=\"%s\"/>\n", buf.str); - } else { - if (wctx->nb_item[wctx->level]) - writer_w8(wctx, ' '); - - av_bprint_escape(&buf, value, NULL, - AV_ESCAPE_MODE_XML, AV_ESCAPE_FLAG_XML_DOUBLE_QUOTES); - writer_printf(wctx, "%s=\"%s\"", key, buf.str); - } - - av_bprint_finalize(&buf, NULL); -} - -static void xml_print_int(WriterContext *wctx, const char *key, long long int value) -{ - if (wctx->nb_item[wctx->level]) - writer_w8(wctx, ' '); - writer_printf(wctx, "%s=\"%lld\"", key, value); -} - -static Writer xml_writer = { - .name = "xml", - .priv_size = sizeof(XMLContext), - .init = xml_init, - .print_section_header = xml_print_section_header, - .print_section_footer = xml_print_section_footer, - .print_integer = xml_print_int, - .print_string = xml_print_str, - .flags = WRITER_FLAG_PUT_PACKETS_AND_FRAMES_IN_SAME_CHAPTER, - .priv_class = &xml_class, -}; - -static void writer_register_all(void) -{ - static int initialized; - - if (initialized) - return; - initialized = 1; - - writer_register(&default_writer); - writer_register(&compact_writer); - writer_register(&csv_writer); - writer_register(&flat_writer); - writer_register(&ini_writer); - writer_register(&json_writer); - writer_register(&xml_writer); -} - -#define print_fmt(k, f, ...) do { \ - av_bprint_clear(&pbuf); \ - av_bprintf(&pbuf, f, __VA_ARGS__); \ - writer_print_string(w, k, pbuf.str, 0); \ -} while (0) - -#define print_list_fmt(k, f, n, m, ...) 
do { \ - av_bprint_clear(&pbuf); \ - for (int idx = 0; idx < n; idx++) { \ - for (int idx2 = 0; idx2 < m; idx2++) { \ - if (idx > 0 || idx2 > 0) \ - av_bprint_chars(&pbuf, ' ', 1); \ - av_bprintf(&pbuf, f, __VA_ARGS__); \ - } \ - } \ - writer_print_string(w, k, pbuf.str, 0); \ -} while (0) - -#define print_int(k, v) writer_print_integer(w, k, v) -#define print_q(k, v, s) writer_print_rational(w, k, v, s) -#define print_str(k, v) writer_print_string(w, k, v, 0) -#define print_str_opt(k, v) writer_print_string(w, k, v, PRINT_STRING_OPT) -#define print_str_validate(k, v) writer_print_string(w, k, v, PRINT_STRING_VALIDATE) -#define print_time(k, v, tb) writer_print_time(w, k, v, tb, 0) -#define print_ts(k, v) writer_print_ts(w, k, v, 0) -#define print_duration_time(k, v, tb) writer_print_time(w, k, v, tb, 1) -#define print_duration_ts(k, v) writer_print_ts(w, k, v, 1) -#define print_val(k, v, u) do { \ - struct unit_value uv; \ - uv.val.i = v; \ - uv.unit = u; \ - writer_print_string(w, k, value_string(val_str, sizeof(val_str), uv), 0); \ -} while (0) - -#define print_section_header(s) writer_print_section_header(w, s) -#define print_section_footer(s) writer_print_section_footer(w, s) - -#define REALLOCZ_ARRAY_STREAM(ptr, cur_n, new_n) \ -{ \ - ret = av_reallocp_array(&(ptr), (new_n), sizeof(*(ptr))); \ - if (ret < 0) \ - goto end; \ - memset( (ptr) + (cur_n), 0, ((new_n) - (cur_n)) * sizeof(*(ptr)) ); \ -} - -static inline int show_tags(WriterContext *w, AVDictionary *tags, int section_id) -{ - const AVDictionaryEntry *tag = NULL; - int ret = 0; - - if (!tags) - return 0; - writer_print_section_header(w, section_id); - - while ((tag = av_dict_iterate(tags, tag))) { - if ((ret = print_str_validate(tag->key, tag->value)) < 0) - break; - } - writer_print_section_footer(w); - - return ret; -} - -static void print_dovi_metadata(WriterContext *w, const AVDOVIMetadata *dovi) -{ - if (!dovi) - return; - - { - const AVDOVIRpuDataHeader *hdr = av_dovi_get_header(dovi); - const AVDOVIDataMapping *mapping = av_dovi_get_mapping(dovi); - const AVDOVIColorMetadata *color = av_dovi_get_color(dovi); - AVBPrint pbuf; - - av_bprint_init(&pbuf, 1, AV_BPRINT_SIZE_UNLIMITED); - - // header - print_int("rpu_type", hdr->rpu_type); - print_int("rpu_format", hdr->rpu_format); - print_int("vdr_rpu_profile", hdr->vdr_rpu_profile); - print_int("vdr_rpu_level", hdr->vdr_rpu_level); - print_int("chroma_resampling_explicit_filter_flag", - hdr->chroma_resampling_explicit_filter_flag); - print_int("coef_data_type", hdr->coef_data_type); - print_int("coef_log2_denom", hdr->coef_log2_denom); - print_int("vdr_rpu_normalized_idc", hdr->vdr_rpu_normalized_idc); - print_int("bl_video_full_range_flag", hdr->bl_video_full_range_flag); - print_int("bl_bit_depth", hdr->bl_bit_depth); - print_int("el_bit_depth", hdr->el_bit_depth); - print_int("vdr_bit_depth", hdr->vdr_bit_depth); - print_int("spatial_resampling_filter_flag", - hdr->spatial_resampling_filter_flag); - print_int("el_spatial_resampling_filter_flag", - hdr->el_spatial_resampling_filter_flag); - print_int("disable_residual_flag", hdr->disable_residual_flag); - - // data mapping values - print_int("vdr_rpu_id", mapping->vdr_rpu_id); - print_int("mapping_color_space", mapping->mapping_color_space); - print_int("mapping_chroma_format_idc", - mapping->mapping_chroma_format_idc); - - print_int("nlq_method_idc", mapping->nlq_method_idc); - switch (mapping->nlq_method_idc) { - case AV_DOVI_NLQ_NONE: - print_str("nlq_method_idc_name", "none"); - break; - case AV_DOVI_NLQ_LINEAR_DZ: 
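/* linear dead-zone quantization; the only NLQ method with the extra slope/threshold parameters printed further below */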
- print_str("nlq_method_idc_name", "linear_dz"); - break; - default: - print_str("nlq_method_idc_name", "unknown"); - break; - } - - print_int("num_x_partitions", mapping->num_x_partitions); - print_int("num_y_partitions", mapping->num_y_partitions); - - writer_print_section_header(w, SECTION_ID_FRAME_SIDE_DATA_COMPONENT_LIST); - - for (int c = 0; c < 3; c++) { - const AVDOVIReshapingCurve *curve = &mapping->curves[c]; - writer_print_section_header(w, SECTION_ID_FRAME_SIDE_DATA_COMPONENT); - - print_list_fmt("pivots", "%"PRIu16, curve->num_pivots, 1, curve->pivots[idx]); - - writer_print_section_header(w, SECTION_ID_FRAME_SIDE_DATA_PIECE_LIST); - for (int i = 0; i < curve->num_pivots - 1; i++) { - - writer_print_section_header(w, SECTION_ID_FRAME_SIDE_DATA_PIECE); - print_int("mapping_idc", curve->mapping_idc[i]); - switch (curve->mapping_idc[i]) { - case AV_DOVI_MAPPING_POLYNOMIAL: - print_str("mapping_idc_name", "polynomial"); - print_int("poly_order", curve->poly_order[i]); - print_list_fmt("poly_coef", "%"PRIi64, - curve->poly_order[i] + 1, 1, - curve->poly_coef[i][idx]); - break; - case AV_DOVI_MAPPING_MMR: - print_str("mapping_idc_name", "mmr"); - print_int("mmr_order", curve->mmr_order[i]); - print_int("mmr_constant", curve->mmr_constant[i]); - print_list_fmt("mmr_coef", "%"PRIi64, - curve->mmr_order[i], 7, - curve->mmr_coef[i][idx][idx2]); - break; - default: - print_str("mapping_idc_name", "unknown"); - break; - } - - // SECTION_ID_FRAME_SIDE_DATA_PIECE - writer_print_section_footer(w); - } - - // SECTION_ID_FRAME_SIDE_DATA_PIECE_LIST - writer_print_section_footer(w); - - if (mapping->nlq_method_idc != AV_DOVI_NLQ_NONE) { - const AVDOVINLQParams *nlq = &mapping->nlq[c]; - print_int("nlq_offset", nlq->nlq_offset); - print_int("vdr_in_max", nlq->vdr_in_max); - - switch (mapping->nlq_method_idc) { - case AV_DOVI_NLQ_LINEAR_DZ: - print_int("linear_deadzone_slope", nlq->linear_deadzone_slope); - print_int("linear_deadzone_threshold", nlq->linear_deadzone_threshold); - break; - } - } - - // SECTION_ID_FRAME_SIDE_DATA_COMPONENT - writer_print_section_footer(w); - } - - // SECTION_ID_FRAME_SIDE_DATA_COMPONENT_LIST - writer_print_section_footer(w); - - // color metadata - print_int("dm_metadata_id", color->dm_metadata_id); - print_int("scene_refresh_flag", color->scene_refresh_flag); - print_list_fmt("ycc_to_rgb_matrix", "%d/%d", - FF_ARRAY_ELEMS(color->ycc_to_rgb_matrix), 1, - color->ycc_to_rgb_matrix[idx].num, - color->ycc_to_rgb_matrix[idx].den); - print_list_fmt("ycc_to_rgb_offset", "%d/%d", - FF_ARRAY_ELEMS(color->ycc_to_rgb_offset), 1, - color->ycc_to_rgb_offset[idx].num, - color->ycc_to_rgb_offset[idx].den); - print_list_fmt("rgb_to_lms_matrix", "%d/%d", - FF_ARRAY_ELEMS(color->rgb_to_lms_matrix), 1, - color->rgb_to_lms_matrix[idx].num, - color->rgb_to_lms_matrix[idx].den); - print_int("signal_eotf", color->signal_eotf); - print_int("signal_eotf_param0", color->signal_eotf_param0); - print_int("signal_eotf_param1", color->signal_eotf_param1); - print_int("signal_eotf_param2", color->signal_eotf_param2); - print_int("signal_bit_depth", color->signal_bit_depth); - print_int("signal_color_space", color->signal_color_space); - print_int("signal_chroma_format", color->signal_chroma_format); - print_int("signal_full_range_flag", color->signal_full_range_flag); - print_int("source_min_pq", color->source_min_pq); - print_int("source_max_pq", color->source_max_pq); - print_int("source_diagonal", color->source_diagonal); - - av_bprint_finalize(&pbuf, NULL); - } -} - -static void 
print_dynamic_hdr10_plus(WriterContext *w, const AVDynamicHDRPlus *metadata)
-{
-    if (!metadata)
-        return;
-    print_int("application version", metadata->application_version);
-    print_int("num_windows", metadata->num_windows);
-    for (int n = 1; n < metadata->num_windows; n++) {
-        const AVHDRPlusColorTransformParams *params = &metadata->params[n];
-        print_q("window_upper_left_corner_x",
-                params->window_upper_left_corner_x, '/');
-        print_q("window_upper_left_corner_y",
-                params->window_upper_left_corner_y, '/');
-        print_q("window_lower_right_corner_x",
-                params->window_lower_right_corner_x, '/');
-        print_q("window_lower_right_corner_y",
-                params->window_lower_right_corner_y, '/');
-        print_int("center_of_ellipse_x",
-                  params->center_of_ellipse_x);
-        print_int("center_of_ellipse_y",
-                  params->center_of_ellipse_y);
-        print_int("rotation_angle",
-                  params->rotation_angle);
-        print_int("semimajor_axis_internal_ellipse",
-                  params->semimajor_axis_internal_ellipse);
-        print_int("semimajor_axis_external_ellipse",
-                  params->semimajor_axis_external_ellipse);
-        print_int("semiminor_axis_external_ellipse",
-                  params->semiminor_axis_external_ellipse);
-        print_int("overlap_process_option",
-                  params->overlap_process_option);
-    }
-    print_q("targeted_system_display_maximum_luminance",
-            metadata->targeted_system_display_maximum_luminance, '/');
-    if (metadata->targeted_system_display_actual_peak_luminance_flag) {
-        print_int("num_rows_targeted_system_display_actual_peak_luminance",
-                  metadata->num_rows_targeted_system_display_actual_peak_luminance);
-        print_int("num_cols_targeted_system_display_actual_peak_luminance",
-                  metadata->num_cols_targeted_system_display_actual_peak_luminance);
-        for (int i = 0; i < metadata->num_rows_targeted_system_display_actual_peak_luminance; i++) {
-            for (int j = 0; j < metadata->num_cols_targeted_system_display_actual_peak_luminance; j++) {
-                print_q("targeted_system_display_actual_peak_luminance",
-                        metadata->targeted_system_display_actual_peak_luminance[i][j], '/');
-            }
-        }
-    }
-    for (int n = 0; n < metadata->num_windows; n++) {
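-        /* This second pass prints per-window pixel statistics; the geometry
-         * loop above starts at n = 1 because window 0 covers the whole
-         * picture (a reading of SMPTE ST 2094-40 via
-         * libavutil/hdr_dynamic_metadata.h; treat as an assumption). */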
-        const AVHDRPlusColorTransformParams *params = &metadata->params[n];
-        for (int i = 0; i < 3; i++) {
-            print_q("maxscl", params->maxscl[i], '/');
-        }
-        print_q("average_maxrgb",
-                params->average_maxrgb, '/');
-        print_int("num_distribution_maxrgb_percentiles",
-                  params->num_distribution_maxrgb_percentiles);
-        for (int i = 0; i < params->num_distribution_maxrgb_percentiles; i++) {
-            print_int("distribution_maxrgb_percentage",
-                      params->distribution_maxrgb[i].percentage);
-            print_q("distribution_maxrgb_percentile",
-                    params->distribution_maxrgb[i].percentile, '/');
-        }
-        print_q("fraction_bright_pixels",
-                params->fraction_bright_pixels, '/');
-    }
-    if (metadata->mastering_display_actual_peak_luminance_flag) {
-        print_int("num_rows_mastering_display_actual_peak_luminance",
-                  metadata->num_rows_mastering_display_actual_peak_luminance);
-        print_int("num_cols_mastering_display_actual_peak_luminance",
-                  metadata->num_cols_mastering_display_actual_peak_luminance);
-        for (int i = 0; i < metadata->num_rows_mastering_display_actual_peak_luminance; i++) {
-            for (int j = 0; j < metadata->num_cols_mastering_display_actual_peak_luminance; j++) {
-                print_q("mastering_display_actual_peak_luminance",
-                        metadata->mastering_display_actual_peak_luminance[i][j], '/');
-            }
-        }
-    }
-
-    for (int n = 0; n < metadata->num_windows; n++) {
-        const AVHDRPlusColorTransformParams *params = &metadata->params[n];
-        if (params->tone_mapping_flag) {
-            print_q("knee_point_x", params->knee_point_x, '/');
-            print_q("knee_point_y", params->knee_point_y, '/');
-            print_int("num_bezier_curve_anchors",
-                      params->num_bezier_curve_anchors);
-            for (int i = 0; i < params->num_bezier_curve_anchors; i++) {
-                print_q("bezier_curve_anchors",
-                        params->bezier_curve_anchors[i], '/');
-            }
-        }
-        if (params->color_saturation_mapping_flag) {
-            print_q("color_saturation_weight",
-                    params->color_saturation_weight, '/');
-        }
-    }
-}
-
-static void print_dynamic_hdr_vivid(WriterContext *w, const AVDynamicHDRVivid *metadata)
-{
-    if (!metadata)
-        return;
-    print_int("system_start_code", metadata->system_start_code);
-    print_int("num_windows", metadata->num_windows);
-
-    for (int n = 0; n < metadata->num_windows; n++) {
-        const AVHDRVividColorTransformParams *params = &metadata->params[n];
-
-        print_q("minimum_maxrgb",  params->minimum_maxrgb,  '/');
-        print_q("average_maxrgb",  params->average_maxrgb,  '/');
-        print_q("variance_maxrgb", params->variance_maxrgb, '/');
-        print_q("maximum_maxrgb",  params->maximum_maxrgb,  '/');
-    }
-
-    for (int n = 0; n < metadata->num_windows; n++) {
-        const AVHDRVividColorTransformParams *params = &metadata->params[n];
-
-        print_int("tone_mapping_mode_flag", params->tone_mapping_mode_flag);
-        if (params->tone_mapping_mode_flag) {
-            print_int("tone_mapping_param_num", params->tone_mapping_param_num);
-            for (int i = 0; i < params->tone_mapping_param_num; i++) {
-                const AVHDRVividColorToneMappingParams *tm_params = &params->tm_params[i];
-
-                print_q("targeted_system_display_maximum_luminance",
-                        tm_params->targeted_system_display_maximum_luminance, '/');
-                print_int("base_enable_flag", tm_params->base_enable_flag);
-                if (tm_params->base_enable_flag) {
-                    print_q("base_param_m_p", tm_params->base_param_m_p, '/');
-                    print_q("base_param_m_m", tm_params->base_param_m_m, '/');
-                    print_q("base_param_m_a", tm_params->base_param_m_a, '/');
-                    print_q("base_param_m_b", tm_params->base_param_m_b, '/');
-                    print_q("base_param_m_n", tm_params->base_param_m_n, '/');
-
-                    print_int("base_param_k1", tm_params->base_param_k1);
-                    print_int("base_param_k2", tm_params->base_param_k2);
-                    print_int("base_param_k3", tm_params->base_param_k3);
-                    print_int("base_param_Delta_enable_mode",
-                              tm_params->base_param_Delta_enable_mode);
-                    print_q("base_param_Delta", tm_params->base_param_Delta, '/');
-                }
-                print_int("3Spline_enable_flag", tm_params->three_Spline_enable_flag);
-                if (tm_params->three_Spline_enable_flag) {
-                    print_int("3Spline_num", tm_params->three_Spline_num);
-
-                    for (int j = 0; j < tm_params->three_Spline_num; j++) {
-                        const AVHDRVivid3SplineParams *three_spline = &tm_params->three_spline[j];
-                        print_int("3Spline_TH_mode", three_spline->th_mode);
-                        if (three_spline->th_mode == 0 || three_spline->th_mode == 2)
-                            print_q("3Spline_TH_enable_MB", three_spline->th_enable_mb, '/');
-                        print_q("3Spline_TH_enable",  three_spline->th_enable,  '/');
-                        print_q("3Spline_TH_Delta1",  three_spline->th_delta1,  '/');
-                        print_q("3Spline_TH_Delta2",  three_spline->th_delta2,  '/');
-                        print_q("3Spline_enable_Strength", three_spline->enable_strength, '/');
-                    }
-                }
-            }
-        }
-
-        print_int("color_saturation_mapping_flag", params->color_saturation_mapping_flag);
-        if (params->color_saturation_mapping_flag) {
-            print_int("color_saturation_num", params->color_saturation_num);
-            for (int i = 0; i < params->color_saturation_num; i++) {
print_q("color_saturation_gain", params->color_saturation_gain[i], '/'); - } - } - } -} - -static void print_ambient_viewing_environment(WriterContext *w, - const AVAmbientViewingEnvironment *env) -{ - if (!env) - return; - - print_q("ambient_illuminance", env->ambient_illuminance, '/'); - print_q("ambient_light_x", env->ambient_light_x, '/'); - print_q("ambient_light_y", env->ambient_light_y, '/'); -} - -static void print_pkt_side_data(WriterContext *w, - AVCodecParameters *par, - const AVPacketSideData *side_data, - int nb_side_data, - SectionID id_data_list, - SectionID id_data) -{ - int i; - - writer_print_section_header(w, id_data_list); - for (i = 0; i < nb_side_data; i++) { - const AVPacketSideData *sd = &side_data[i]; - const char *name = av_packet_side_data_name(sd->type); - - writer_print_section_header(w, id_data); - print_str("side_data_type", name ? name : "unknown"); - if (sd->type == AV_PKT_DATA_DISPLAYMATRIX && sd->size >= 9*4) { - double rotation = av_display_rotation_get((int32_t *)sd->data); - if (isnan(rotation)) - rotation = 0; - writer_print_integers(w, "displaymatrix", sd->data, 9, " %11d", 3, 4, 1); - print_int("rotation", rotation); - } else if (sd->type == AV_PKT_DATA_STEREO3D) { - const AVStereo3D *stereo = (AVStereo3D *)sd->data; - print_str("type", av_stereo3d_type_name(stereo->type)); - print_int("inverted", !!(stereo->flags & AV_STEREO3D_FLAG_INVERT)); - } else if (sd->type == AV_PKT_DATA_SPHERICAL) { - const AVSphericalMapping *spherical = (AVSphericalMapping *)sd->data; - print_str("projection", av_spherical_projection_name(spherical->projection)); - if (spherical->projection == AV_SPHERICAL_CUBEMAP) { - print_int("padding", spherical->padding); - } else if (spherical->projection == AV_SPHERICAL_EQUIRECTANGULAR_TILE) { - size_t l, t, r, b; - av_spherical_tile_bounds(spherical, par->width, par->height, - &l, &t, &r, &b); - print_int("bound_left", l); - print_int("bound_top", t); - print_int("bound_right", r); - print_int("bound_bottom", b); - } - - print_int("yaw", (double) spherical->yaw / (1 << 16)); - print_int("pitch", (double) spherical->pitch / (1 << 16)); - print_int("roll", (double) spherical->roll / (1 << 16)); - } else if (sd->type == AV_PKT_DATA_SKIP_SAMPLES && sd->size == 10) { - print_int("skip_samples", AV_RL32(sd->data)); - print_int("discard_padding", AV_RL32(sd->data + 4)); - print_int("skip_reason", AV_RL8(sd->data + 8)); - print_int("discard_reason", AV_RL8(sd->data + 9)); - } else if (sd->type == AV_PKT_DATA_MASTERING_DISPLAY_METADATA) { - AVMasteringDisplayMetadata *metadata = (AVMasteringDisplayMetadata *)sd->data; - - if (metadata->has_primaries) { - print_q("red_x", metadata->display_primaries[0][0], '/'); - print_q("red_y", metadata->display_primaries[0][1], '/'); - print_q("green_x", metadata->display_primaries[1][0], '/'); - print_q("green_y", metadata->display_primaries[1][1], '/'); - print_q("blue_x", metadata->display_primaries[2][0], '/'); - print_q("blue_y", metadata->display_primaries[2][1], '/'); - - print_q("white_point_x", metadata->white_point[0], '/'); - print_q("white_point_y", metadata->white_point[1], '/'); - } - - if (metadata->has_luminance) { - print_q("min_luminance", metadata->min_luminance, '/'); - print_q("max_luminance", metadata->max_luminance, '/'); - } - } else if (sd->type == AV_PKT_DATA_CONTENT_LIGHT_LEVEL) { - AVContentLightMetadata *metadata = (AVContentLightMetadata *)sd->data; - print_int("max_content", metadata->MaxCLL); - print_int("max_average", metadata->MaxFALL); - } else if (sd->type == 
AV_PKT_DATA_DYNAMIC_HDR10_PLUS) {
-            AVDynamicHDRPlus *metadata = (AVDynamicHDRPlus *)sd->data;
-            print_dynamic_hdr10_plus(w, metadata);
-        } else if (sd->type == AV_PKT_DATA_DOVI_CONF) {
-            AVDOVIDecoderConfigurationRecord *dovi = (AVDOVIDecoderConfigurationRecord *)sd->data;
-            print_int("dv_version_major", dovi->dv_version_major);
-            print_int("dv_version_minor", dovi->dv_version_minor);
-            print_int("dv_profile", dovi->dv_profile);
-            print_int("dv_level", dovi->dv_level);
-            print_int("rpu_present_flag", dovi->rpu_present_flag);
-            print_int("el_present_flag", dovi->el_present_flag);
-            print_int("bl_present_flag", dovi->bl_present_flag);
-            print_int("dv_bl_signal_compatibility_id", dovi->dv_bl_signal_compatibility_id);
-        } else if (sd->type == AV_PKT_DATA_AUDIO_SERVICE_TYPE) {
-            enum AVAudioServiceType *t = (enum AVAudioServiceType *)sd->data;
-            print_int("service_type", *t);
-        } else if (sd->type == AV_PKT_DATA_MPEGTS_STREAM_ID) {
-            print_int("id", *sd->data);
-        } else if (sd->type == AV_PKT_DATA_CPB_PROPERTIES) {
-            const AVCPBProperties *prop = (AVCPBProperties *)sd->data;
-            print_int("max_bitrate", prop->max_bitrate);
-            print_int("min_bitrate", prop->min_bitrate);
-            print_int("avg_bitrate", prop->avg_bitrate);
-            print_int("buffer_size", prop->buffer_size);
-            print_int("vbv_delay", prop->vbv_delay);
-        } else if (sd->type == AV_PKT_DATA_WEBVTT_IDENTIFIER ||
-                   sd->type == AV_PKT_DATA_WEBVTT_SETTINGS) {
-            if (do_show_data)
-                writer_print_data(w, "data", sd->data, sd->size);
-            writer_print_data_hash(w, "data_hash", sd->data, sd->size);
-        } else if (sd->type == AV_PKT_DATA_AFD && sd->size > 0) {
-            print_int("active_format", *sd->data);
-        }
-        writer_print_section_footer(w);
-    }
-    writer_print_section_footer(w);
-}
-
-static void print_color_range(WriterContext *w, enum AVColorRange color_range)
-{
-    const char *val = av_color_range_name(color_range);
-    if (!val || color_range == AVCOL_RANGE_UNSPECIFIED) {
-        print_str_opt("color_range", "unknown");
-    } else {
-        print_str("color_range", val);
-    }
-}
-
-static void print_color_space(WriterContext *w, enum AVColorSpace color_space)
-{
-    const char *val = av_color_space_name(color_space);
-    if (!val || color_space == AVCOL_SPC_UNSPECIFIED) {
-        print_str_opt("color_space", "unknown");
-    } else {
-        print_str("color_space", val);
-    }
-}
-
-static void print_primaries(WriterContext *w, enum AVColorPrimaries color_primaries)
-{
-    const char *val = av_color_primaries_name(color_primaries);
-    if (!val || color_primaries == AVCOL_PRI_UNSPECIFIED) {
-        print_str_opt("color_primaries", "unknown");
-    } else {
-        print_str("color_primaries", val);
-    }
-}
-
-static void print_color_trc(WriterContext *w, enum AVColorTransferCharacteristic color_trc)
-{
-    const char *val = av_color_transfer_name(color_trc);
-    if (!val || color_trc == AVCOL_TRC_UNSPECIFIED) {
-        print_str_opt("color_transfer", "unknown");
-    } else {
-        print_str("color_transfer", val);
-    }
-}
-
-static void print_chroma_location(WriterContext *w, enum AVChromaLocation chroma_location)
-{
-    const char *val = av_chroma_location_name(chroma_location);
-    if (!val || chroma_location == AVCHROMA_LOC_UNSPECIFIED) {
-        print_str_opt("chroma_location", "unspecified");
-    } else {
-        print_str("chroma_location", val);
-    }
-}
-
-static void clear_log(int need_lock)
-{
-    int i;
-
-    if (need_lock)
-        pthread_mutex_lock(&log_mutex);
-    for (i = 0; i < log_buffer_size; i++) {
-        av_freep(&log_buffer[i].context_name);
-        av_freep(&log_buffer[i].parent_name);
-        av_freep(&log_buffer[i].log_message);
-    }
-    log_buffer_size = 0;
-    if (need_lock)
-        pthread_mutex_unlock(&log_mutex);
-}
-
-static int show_log(WriterContext *w, int section_ids, int section_id, int log_level)
-{
-    int i;
-    pthread_mutex_lock(&log_mutex);
-    if (!log_buffer_size) {
-        pthread_mutex_unlock(&log_mutex);
-        return 0;
-    }
-    writer_print_section_header(w, section_ids);
-
-    for (i = 0; i < log_buffer_size; i++) {
-        if (log_buffer[i].log_level <= log_level) {
-            writer_print_section_header(w, section_id);
-            print_str("context", log_buffer[i].context_name);
-            print_int("level", log_buffer[i].log_level);
-            print_int("category", log_buffer[i].category);
-            if (log_buffer[i].parent_name) {
-                print_str("parent_context", log_buffer[i].parent_name);
-                print_int("parent_category", log_buffer[i].parent_category);
-            } else {
-                print_str_opt("parent_context", "N/A");
-                print_str_opt("parent_category", "N/A");
-            }
-            print_str("message", log_buffer[i].log_message);
-            writer_print_section_footer(w);
-        }
-    }
-    clear_log(0);
-    pthread_mutex_unlock(&log_mutex);
-
-    writer_print_section_footer(w);
-
-    return 0;
-}
-
-static void show_packet(WriterContext *w, InputFile *ifile, AVPacket *pkt, int packet_idx)
-{
-    char val_str[128];
-    AVStream *st = ifile->streams[pkt->stream_index].st;
-    AVBPrint pbuf;
-    const char *s;
-
-    av_bprint_init(&pbuf, 1, AV_BPRINT_SIZE_UNLIMITED);
-
-    writer_print_section_header(w, SECTION_ID_PACKET);
-
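-    /* Illustrative shape of one packet section as the JSON writer might
-     * render the fields printed below (a hedged example, not verbatim
-     * output):
-     *   { "codec_type": "video", "stream_index": 0,
-     *     "pts": 1024, "pts_time": "0.042667",
-     *     "size": "4138", "flags": "K__" }
-     */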
- s = av_get_media_type_string(st->codecpar->codec_type); - if (s) print_str ("codec_type", s); - else print_str_opt("codec_type", "unknown"); - print_int("stream_index", pkt->stream_index); - print_ts ("pts", pkt->pts); - print_time("pts_time", pkt->pts, &st->time_base); - print_ts ("dts", pkt->dts); - print_time("dts_time", pkt->dts, &st->time_base); - print_duration_ts("duration", pkt->duration); - print_duration_time("duration_time", pkt->duration, &st->time_base); - print_val("size", pkt->size, unit_byte_str); - if (pkt->pos != -1) print_fmt ("pos", "%"PRId64, pkt->pos); - else print_str_opt("pos", "N/A"); - print_fmt("flags", "%c%c%c", pkt->flags & AV_PKT_FLAG_KEY ? 'K' : '_', - pkt->flags & AV_PKT_FLAG_DISCARD ? 'D' : '_', - pkt->flags & AV_PKT_FLAG_CORRUPT ? 'C' : '_'); - if (do_show_data) - writer_print_data(w, "data", pkt->data, pkt->size); - writer_print_data_hash(w, "data_hash", pkt->data, pkt->size); - - if (pkt->side_data_elems) { - size_t size; - const uint8_t *side_metadata; - - side_metadata = av_packet_get_side_data(pkt, AV_PKT_DATA_STRINGS_METADATA, &size); - if (side_metadata && size && do_show_packet_tags) { - AVDictionary *dict = NULL; - if (av_packet_unpack_dictionary(side_metadata, size, &dict) >= 0) - show_tags(w, dict, SECTION_ID_PACKET_TAGS); - av_dict_free(&dict); - } - - print_pkt_side_data(w, st->codecpar, pkt->side_data, pkt->side_data_elems, - SECTION_ID_PACKET_SIDE_DATA_LIST, - SECTION_ID_PACKET_SIDE_DATA); - } - - writer_print_section_footer(w); - - av_bprint_finalize(&pbuf, NULL); - fflush(stdout); -} - -static void show_subtitle(WriterContext *w, AVSubtitle *sub, AVStream *stream, - AVFormatContext *fmt_ctx) -{ - AVBPrint pbuf; - - av_bprint_init(&pbuf, 1, AV_BPRINT_SIZE_UNLIMITED); - - writer_print_section_header(w, SECTION_ID_SUBTITLE); - - print_str ("media_type", "subtitle"); - print_ts ("pts", sub->pts); - print_time("pts_time", sub->pts, &AV_TIME_BASE_Q); - print_int ("format", sub->format); - print_int ("start_display_time", sub->start_display_time); - print_int ("end_display_time", sub->end_display_time); - print_int ("num_rects", sub->num_rects); - - writer_print_section_footer(w); - - av_bprint_finalize(&pbuf, NULL); - fflush(stdout); -} - -static void show_frame(WriterContext *w, AVFrame *frame, AVStream *stream, - AVFormatContext *fmt_ctx) -{ - FrameData *fd = frame->opaque_ref ? 
(FrameData*)frame->opaque_ref->data : NULL; - AVBPrint pbuf; - char val_str[128]; - const char *s; - int i; - - av_bprint_init(&pbuf, 1, AV_BPRINT_SIZE_UNLIMITED); - - writer_print_section_header(w, SECTION_ID_FRAME); - - s = av_get_media_type_string(stream->codecpar->codec_type); - if (s) print_str ("media_type", s); - else print_str_opt("media_type", "unknown"); - print_int("stream_index", stream->index); - print_int("key_frame", frame->key_frame); - print_ts ("pts", frame->pts); - print_time("pts_time", frame->pts, &stream->time_base); - print_ts ("pkt_dts", frame->pkt_dts); - print_time("pkt_dts_time", frame->pkt_dts, &stream->time_base); - print_ts ("best_effort_timestamp", frame->best_effort_timestamp); - print_time("best_effort_timestamp_time", frame->best_effort_timestamp, &stream->time_base); -#if LIBAVUTIL_VERSION_MAJOR < 59 - AV_NOWARN_DEPRECATED( - print_duration_ts ("pkt_duration", frame->pkt_duration); - print_duration_time("pkt_duration_time", frame->pkt_duration, &stream->time_base); - ) -#endif - print_duration_ts ("duration", frame->duration); - print_duration_time("duration_time", frame->duration, &stream->time_base); - if (fd && fd->pkt_pos != -1) print_fmt ("pkt_pos", "%"PRId64, fd->pkt_pos); - else print_str_opt("pkt_pos", "N/A"); - if (fd && fd->pkt_size != -1) print_val ("pkt_size", fd->pkt_size, unit_byte_str); - else print_str_opt("pkt_size", "N/A"); - - switch (stream->codecpar->codec_type) { - AVRational sar; - - case AVMEDIA_TYPE_VIDEO: - print_int("width", frame->width); - print_int("height", frame->height); - print_int("crop_top", frame->crop_top); - print_int("crop_bottom", frame->crop_bottom); - print_int("crop_left", frame->crop_left); - print_int("crop_right", frame->crop_right); - s = av_get_pix_fmt_name(frame->format); - if (s) print_str ("pix_fmt", s); - else print_str_opt("pix_fmt", "unknown"); - sar = av_guess_sample_aspect_ratio(fmt_ctx, stream, frame); - if (sar.num) { - print_q("sample_aspect_ratio", sar, ':'); - } else { - print_str_opt("sample_aspect_ratio", "N/A"); - } - print_fmt("pict_type", "%c", av_get_picture_type_char(frame->pict_type)); -#if LIBAVUTIL_VERSION_MAJOR < 59 - AV_NOWARN_DEPRECATED( - print_int("coded_picture_number", frame->coded_picture_number); - print_int("display_picture_number", frame->display_picture_number); - ) -#endif - print_int("interlaced_frame", frame->interlaced_frame); - print_int("top_field_first", frame->top_field_first); - print_int("repeat_pict", frame->repeat_pict); - - print_color_range(w, frame->color_range); - print_color_space(w, frame->colorspace); - print_primaries(w, frame->color_primaries); - print_color_trc(w, frame->color_trc); - print_chroma_location(w, frame->chroma_location); - break; - - case AVMEDIA_TYPE_AUDIO: - s = av_get_sample_fmt_name(frame->format); - if (s) print_str ("sample_fmt", s); - else print_str_opt("sample_fmt", "unknown"); - print_int("nb_samples", frame->nb_samples); - print_int("channels", frame->ch_layout.nb_channels); - if (frame->ch_layout.order != AV_CHANNEL_ORDER_UNSPEC) { - av_channel_layout_describe(&frame->ch_layout, val_str, sizeof(val_str)); - print_str ("channel_layout", val_str); - } else - print_str_opt("channel_layout", "unknown"); - break; - } - if (do_show_frame_tags) - show_tags(w, frame->metadata, SECTION_ID_FRAME_TAGS); - if (do_show_log) - show_log(w, SECTION_ID_FRAME_LOGS, SECTION_ID_FRAME_LOG, do_show_log); - if (frame->nb_side_data) { - writer_print_section_header(w, SECTION_ID_FRAME_SIDE_DATA_LIST); - for (i = 0; i < frame->nb_side_data; i++) { - 
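-            /* Dispatch on the side-data type: display matrix, AFD, GOP and
-               S12M timecodes, mastering-display and content-light metadata,
-               HDR10+, ICC profiles, Dolby Vision, HDR Vivid and ambient
-               viewing environment each get a dedicated printer below;
-               anything else is reported only by its type name. */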
AVFrameSideData *sd = frame->side_data[i]; - const char *name; - - writer_print_section_header(w, SECTION_ID_FRAME_SIDE_DATA); - name = av_frame_side_data_name(sd->type); - print_str("side_data_type", name ? name : "unknown"); - if (sd->type == AV_FRAME_DATA_DISPLAYMATRIX && sd->size >= 9*4) { - double rotation = av_display_rotation_get((int32_t *)sd->data); - if (isnan(rotation)) - rotation = 0; - writer_print_integers(w, "displaymatrix", sd->data, 9, " %11d", 3, 4, 1); - print_int("rotation", rotation); - } else if (sd->type == AV_FRAME_DATA_AFD && sd->size > 0) { - print_int("active_format", *sd->data); - } else if (sd->type == AV_FRAME_DATA_GOP_TIMECODE && sd->size >= 8) { - char tcbuf[AV_TIMECODE_STR_SIZE]; - av_timecode_make_mpeg_tc_string(tcbuf, *(int64_t *)(sd->data)); - print_str("timecode", tcbuf); - } else if (sd->type == AV_FRAME_DATA_S12M_TIMECODE && sd->size == 16) { - uint32_t *tc = (uint32_t*)sd->data; - int m = FFMIN(tc[0],3); - writer_print_section_header(w, SECTION_ID_FRAME_SIDE_DATA_TIMECODE_LIST); - for (int j = 1; j <= m ; j++) { - char tcbuf[AV_TIMECODE_STR_SIZE]; - av_timecode_make_smpte_tc_string2(tcbuf, stream->avg_frame_rate, tc[j], 0, 0); - writer_print_section_header(w, SECTION_ID_FRAME_SIDE_DATA_TIMECODE); - print_str("value", tcbuf); - writer_print_section_footer(w); - } - writer_print_section_footer(w); - } else if (sd->type == AV_FRAME_DATA_MASTERING_DISPLAY_METADATA) { - AVMasteringDisplayMetadata *metadata = (AVMasteringDisplayMetadata *)sd->data; - - if (metadata->has_primaries) { - print_q("red_x", metadata->display_primaries[0][0], '/'); - print_q("red_y", metadata->display_primaries[0][1], '/'); - print_q("green_x", metadata->display_primaries[1][0], '/'); - print_q("green_y", metadata->display_primaries[1][1], '/'); - print_q("blue_x", metadata->display_primaries[2][0], '/'); - print_q("blue_y", metadata->display_primaries[2][1], '/'); - - print_q("white_point_x", metadata->white_point[0], '/'); - print_q("white_point_y", metadata->white_point[1], '/'); - } - - if (metadata->has_luminance) { - print_q("min_luminance", metadata->min_luminance, '/'); - print_q("max_luminance", metadata->max_luminance, '/'); - } - } else if (sd->type == AV_FRAME_DATA_DYNAMIC_HDR_PLUS) { - AVDynamicHDRPlus *metadata = (AVDynamicHDRPlus *)sd->data; - print_dynamic_hdr10_plus(w, metadata); - } else if (sd->type == AV_FRAME_DATA_CONTENT_LIGHT_LEVEL) { - AVContentLightMetadata *metadata = (AVContentLightMetadata *)sd->data; - print_int("max_content", metadata->MaxCLL); - print_int("max_average", metadata->MaxFALL); - } else if (sd->type == AV_FRAME_DATA_ICC_PROFILE) { - const AVDictionaryEntry *tag = av_dict_get(sd->metadata, "name", NULL, AV_DICT_MATCH_CASE); - if (tag) - print_str(tag->key, tag->value); - print_int("size", sd->size); - } else if (sd->type == AV_FRAME_DATA_DOVI_METADATA) { - print_dovi_metadata(w, (const AVDOVIMetadata *)sd->data); - } else if (sd->type == AV_FRAME_DATA_DYNAMIC_HDR_VIVID) { - AVDynamicHDRVivid *metadata = (AVDynamicHDRVivid *)sd->data; - print_dynamic_hdr_vivid(w, metadata); - } else if (sd->type == AV_FRAME_DATA_AMBIENT_VIEWING_ENVIRONMENT) { - print_ambient_viewing_environment( - w, (const AVAmbientViewingEnvironment *)sd->data); - } - writer_print_section_footer(w); - } - writer_print_section_footer(w); - } - - writer_print_section_footer(w); - - av_bprint_finalize(&pbuf, NULL); - fflush(stdout); -} - -static av_always_inline int process_frame(WriterContext *w, - InputFile *ifile, - AVFrame *frame, const AVPacket *pkt, - int *packet_new) 
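-/* Runs one packet through the send/receive decoding API (or
-   avcodec_decode_subtitle2() for subtitles): returns > 0 while this packet
-   may still yield output, 0 once it is fully consumed, negative on error. */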
-{ - AVFormatContext *fmt_ctx = ifile->fmt_ctx; - AVCodecContext *dec_ctx = ifile->streams[pkt->stream_index].dec_ctx; - AVCodecParameters *par = ifile->streams[pkt->stream_index].st->codecpar; - AVSubtitle sub; - int ret = 0, got_frame = 0; - - clear_log(1); - if (dec_ctx) { - switch (par->codec_type) { - case AVMEDIA_TYPE_VIDEO: - case AVMEDIA_TYPE_AUDIO: - if (*packet_new) { - ret = avcodec_send_packet(dec_ctx, pkt); - if (ret == AVERROR(EAGAIN)) { - ret = 0; - } else if (ret >= 0 || ret == AVERROR_EOF) { - ret = 0; - *packet_new = 0; - } - } - if (ret >= 0) { - ret = avcodec_receive_frame(dec_ctx, frame); - if (ret >= 0) { - got_frame = 1; - } else if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) { - ret = 0; - } - } - break; - - case AVMEDIA_TYPE_SUBTITLE: - if (*packet_new) - ret = avcodec_decode_subtitle2(dec_ctx, &sub, &got_frame, pkt); - *packet_new = 0; - break; - default: - *packet_new = 0; - } - } else { - *packet_new = 0; - } - - if (ret < 0) - return ret; - if (got_frame) { - int is_sub = (par->codec_type == AVMEDIA_TYPE_SUBTITLE); - nb_streams_frames[pkt->stream_index]++; - if (do_show_frames) - if (is_sub) - show_subtitle(w, &sub, ifile->streams[pkt->stream_index].st, fmt_ctx); - else - show_frame(w, frame, ifile->streams[pkt->stream_index].st, fmt_ctx); - if (is_sub) - avsubtitle_free(&sub); - } - return got_frame || *packet_new; -} - -static void log_read_interval(const ReadInterval *interval, void *log_ctx, int log_level) -{ - av_log(log_ctx, log_level, "id:%d", interval->id); - - if (interval->has_start) { - av_log(log_ctx, log_level, " start:%s%s", interval->start_is_offset ? "+" : "", - av_ts2timestr(interval->start, &AV_TIME_BASE_Q)); - } else { - av_log(log_ctx, log_level, " start:N/A"); - } - - if (interval->has_end) { - av_log(log_ctx, log_level, " end:%s", interval->end_is_offset ? 
"+" : ""); - if (interval->duration_frames) - av_log(log_ctx, log_level, "#%"PRId64, interval->end); - else - av_log(log_ctx, log_level, "%s", av_ts2timestr(interval->end, &AV_TIME_BASE_Q)); - } else { - av_log(log_ctx, log_level, " end:N/A"); - } - - av_log(log_ctx, log_level, "\n"); -} - -static int read_interval_packets(WriterContext *w, InputFile *ifile, - const ReadInterval *interval, int64_t *cur_ts) -{ - AVFormatContext *fmt_ctx = ifile->fmt_ctx; - AVPacket *pkt = NULL; - AVFrame *frame = NULL; - int ret = 0, i = 0, frame_count = 0; - int64_t start = -INT64_MAX, end = interval->end; - int has_start = 0, has_end = interval->has_end && !interval->end_is_offset; - - av_log(NULL, AV_LOG_VERBOSE, "Processing read interval "); - log_read_interval(interval, NULL, AV_LOG_VERBOSE); - - if (interval->has_start) { - int64_t target; - if (interval->start_is_offset) { - if (*cur_ts == AV_NOPTS_VALUE) { - av_log(NULL, AV_LOG_ERROR, - "Could not seek to relative position since current " - "timestamp is not defined\n"); - ret = AVERROR(EINVAL); - goto end; - } - target = *cur_ts + interval->start; - } else { - target = interval->start; - } - - av_log(NULL, AV_LOG_VERBOSE, "Seeking to read interval start point %s\n", - av_ts2timestr(target, &AV_TIME_BASE_Q)); - if ((ret = avformat_seek_file(fmt_ctx, -1, -INT64_MAX, target, INT64_MAX, 0)) < 0) { - av_log(NULL, AV_LOG_ERROR, "Could not seek to position %"PRId64": %s\n", - interval->start, av_err2str(ret)); - goto end; - } - } - - frame = av_frame_alloc(); - if (!frame) { - ret = AVERROR(ENOMEM); - goto end; - } - pkt = av_packet_alloc(); - if (!pkt) { - ret = AVERROR(ENOMEM); - goto end; - } - while (!av_read_frame(fmt_ctx, pkt)) { - if (fmt_ctx->nb_streams > nb_streams) { - REALLOCZ_ARRAY_STREAM(nb_streams_frames, nb_streams, fmt_ctx->nb_streams); - REALLOCZ_ARRAY_STREAM(nb_streams_packets, nb_streams, fmt_ctx->nb_streams); - REALLOCZ_ARRAY_STREAM(selected_streams, nb_streams, fmt_ctx->nb_streams); - nb_streams = fmt_ctx->nb_streams; - } - if (selected_streams[pkt->stream_index]) { - AVRational tb = ifile->streams[pkt->stream_index].st->time_base; - int64_t pts = pkt->pts != AV_NOPTS_VALUE ? 
pkt->pts : pkt->dts; - - if (pts != AV_NOPTS_VALUE) - *cur_ts = av_rescale_q(pts, tb, AV_TIME_BASE_Q); - - if (!has_start && *cur_ts != AV_NOPTS_VALUE) { - start = *cur_ts; - has_start = 1; - } - - if (has_start && !has_end && interval->end_is_offset) { - end = start + interval->end; - has_end = 1; - } - - if (interval->end_is_offset && interval->duration_frames) { - if (frame_count >= interval->end) - break; - } else if (has_end && *cur_ts != AV_NOPTS_VALUE && *cur_ts >= end) { - break; - } - - frame_count++; - if (do_read_packets) { - if (do_show_packets) - show_packet(w, ifile, pkt, i++); - nb_streams_packets[pkt->stream_index]++; - } - if (do_read_frames) { - int packet_new = 1; - FrameData *fd; - - pkt->opaque_ref = av_buffer_allocz(sizeof(*fd)); - if (!pkt->opaque_ref) - return AVERROR(ENOMEM); - fd = (FrameData*)pkt->opaque_ref->data; - fd->pkt_pos = pkt->pos; - fd->pkt_size = pkt->size; - - while (process_frame(w, ifile, frame, pkt, &packet_new) > 0); - } - } - av_packet_unref(pkt); - } - av_packet_unref(pkt); - //Flush remaining frames that are cached in the decoder - for (i = 0; i < ifile->nb_streams; i++) { - pkt->stream_index = i; - if (do_read_frames) { - while (process_frame(w, ifile, frame, pkt, &(int){1}) > 0); - if (ifile->streams[i].dec_ctx) - avcodec_flush_buffers(ifile->streams[i].dec_ctx); - } - } - -end: - av_frame_free(&frame); - av_packet_free(&pkt); - if (ret < 0) { - av_log(NULL, AV_LOG_ERROR, "Could not read packets in interval "); - log_read_interval(interval, NULL, AV_LOG_ERROR); - } - return ret; -} - -static int read_packets(WriterContext *w, InputFile *ifile) -{ - AVFormatContext *fmt_ctx = ifile->fmt_ctx; - int i, ret = 0; - int64_t cur_ts = fmt_ctx->start_time; - - if (read_intervals_nb == 0) { - ReadInterval interval = (ReadInterval) { .has_start = 0, .has_end = 0 }; - ret = read_interval_packets(w, ifile, &interval, &cur_ts); - } else { - for (i = 0; i < read_intervals_nb; i++) { - ret = read_interval_packets(w, ifile, &read_intervals[i], &cur_ts); - if (ret < 0) - break; - } - } - - return ret; -} - -static int show_stream(WriterContext *w, AVFormatContext *fmt_ctx, int stream_idx, InputStream *ist, int in_program) -{ - AVStream *stream = ist->st; - AVCodecParameters *par; - AVCodecContext *dec_ctx; - char val_str[128]; - const char *s; - AVRational sar, dar; - AVBPrint pbuf; - const AVCodecDescriptor *cd; - int ret = 0; - const char *profile = NULL; - - av_bprint_init(&pbuf, 1, AV_BPRINT_SIZE_UNLIMITED); - - writer_print_section_header(w, in_program ? SECTION_ID_PROGRAM_STREAM : SECTION_ID_STREAM); - - print_int("index", stream->index); - - par = stream->codecpar; - dec_ctx = ist->dec_ctx; - if (cd = avcodec_descriptor_get(par->codec_id)) { - print_str("codec_name", cd->name); - if (!do_bitexact) { - print_str("codec_long_name", - cd->long_name ? 
cd->long_name : "unknown"); - } - } else { - print_str_opt("codec_name", "unknown"); - if (!do_bitexact) { - print_str_opt("codec_long_name", "unknown"); - } - } - - if (!do_bitexact && (profile = avcodec_profile_name(par->codec_id, par->profile))) - print_str("profile", profile); - else { - if (par->profile != FF_PROFILE_UNKNOWN) { - char profile_num[12]; - snprintf(profile_num, sizeof(profile_num), "%d", par->profile); - print_str("profile", profile_num); - } else - print_str_opt("profile", "unknown"); - } - - s = av_get_media_type_string(par->codec_type); - if (s) print_str ("codec_type", s); - else print_str_opt("codec_type", "unknown"); - - /* print AVI/FourCC tag */ - print_str("codec_tag_string", av_fourcc2str(par->codec_tag)); - print_fmt("codec_tag", "0x%04"PRIx32, par->codec_tag); - - switch (par->codec_type) { - case AVMEDIA_TYPE_VIDEO: - print_int("width", par->width); - print_int("height", par->height); - if (dec_ctx) { - print_int("coded_width", dec_ctx->coded_width); - print_int("coded_height", dec_ctx->coded_height); - print_int("closed_captions", !!(dec_ctx->properties & FF_CODEC_PROPERTY_CLOSED_CAPTIONS)); - print_int("film_grain", !!(dec_ctx->properties & FF_CODEC_PROPERTY_FILM_GRAIN)); - } - print_int("has_b_frames", par->video_delay); - sar = av_guess_sample_aspect_ratio(fmt_ctx, stream, NULL); - if (sar.num) { - print_q("sample_aspect_ratio", sar, ':'); - av_reduce(&dar.num, &dar.den, - par->width * sar.num, - par->height * sar.den, - 1024*1024); - print_q("display_aspect_ratio", dar, ':'); - } else { - print_str_opt("sample_aspect_ratio", "N/A"); - print_str_opt("display_aspect_ratio", "N/A"); - } - s = av_get_pix_fmt_name(par->format); - if (s) print_str ("pix_fmt", s); - else print_str_opt("pix_fmt", "unknown"); - print_int("level", par->level); - - print_color_range(w, par->color_range); - print_color_space(w, par->color_space); - print_color_trc(w, par->color_trc); - print_primaries(w, par->color_primaries); - print_chroma_location(w, par->chroma_location); - - if (par->field_order == AV_FIELD_PROGRESSIVE) - print_str("field_order", "progressive"); - else if (par->field_order == AV_FIELD_TT) - print_str("field_order", "tt"); - else if (par->field_order == AV_FIELD_BB) - print_str("field_order", "bb"); - else if (par->field_order == AV_FIELD_TB) - print_str("field_order", "tb"); - else if (par->field_order == AV_FIELD_BT) - print_str("field_order", "bt"); - else - print_str_opt("field_order", "unknown"); - - if (dec_ctx) - print_int("refs", dec_ctx->refs); - break; - - case AVMEDIA_TYPE_AUDIO: - s = av_get_sample_fmt_name(par->format); - if (s) print_str ("sample_fmt", s); - else print_str_opt("sample_fmt", "unknown"); - print_val("sample_rate", par->sample_rate, unit_hertz_str); - print_int("channels", par->ch_layout.nb_channels); - - if (par->ch_layout.order != AV_CHANNEL_ORDER_UNSPEC) { - av_channel_layout_describe(&par->ch_layout, val_str, sizeof(val_str)); - print_str ("channel_layout", val_str); - } else { - print_str_opt("channel_layout", "unknown"); - } - - print_int("bits_per_sample", av_get_bits_per_sample(par->codec_id)); - - print_int("initial_padding", par->initial_padding); - break; - - case AVMEDIA_TYPE_SUBTITLE: - if (par->width) - print_int("width", par->width); - else - print_str_opt("width", "N/A"); - if (par->height) - print_int("height", par->height); - else - print_str_opt("height", "N/A"); - break; - } - - if (dec_ctx && dec_ctx->codec->priv_class && show_private_data) { - const AVOption *opt = NULL; - while (opt = 
av_opt_next(dec_ctx->priv_data,opt)) { - uint8_t *str; - if (!(opt->flags & AV_OPT_FLAG_EXPORT)) continue; - if (av_opt_get(dec_ctx->priv_data, opt->name, 0, &str) >= 0) { - print_str(opt->name, str); - av_free(str); - } - } - } - - if (fmt_ctx->iformat->flags & AVFMT_SHOW_IDS) print_fmt ("id", "0x%x", stream->id); - else print_str_opt("id", "N/A"); - print_q("r_frame_rate", stream->r_frame_rate, '/'); - print_q("avg_frame_rate", stream->avg_frame_rate, '/'); - print_q("time_base", stream->time_base, '/'); - print_ts ("start_pts", stream->start_time); - print_time("start_time", stream->start_time, &stream->time_base); - print_ts ("duration_ts", stream->duration); - print_time("duration", stream->duration, &stream->time_base); - if (par->bit_rate > 0) print_val ("bit_rate", par->bit_rate, unit_bit_per_second_str); - else print_str_opt("bit_rate", "N/A"); - if (dec_ctx && dec_ctx->rc_max_rate > 0) - print_val ("max_bit_rate", dec_ctx->rc_max_rate, unit_bit_per_second_str); - else - print_str_opt("max_bit_rate", "N/A"); - if (dec_ctx && dec_ctx->bits_per_raw_sample > 0) print_fmt("bits_per_raw_sample", "%d", dec_ctx->bits_per_raw_sample); - else print_str_opt("bits_per_raw_sample", "N/A"); - if (stream->nb_frames) print_fmt ("nb_frames", "%"PRId64, stream->nb_frames); - else print_str_opt("nb_frames", "N/A"); - if (nb_streams_frames[stream_idx]) print_fmt ("nb_read_frames", "%"PRIu64, nb_streams_frames[stream_idx]); - else print_str_opt("nb_read_frames", "N/A"); - if (nb_streams_packets[stream_idx]) print_fmt ("nb_read_packets", "%"PRIu64, nb_streams_packets[stream_idx]); - else print_str_opt("nb_read_packets", "N/A"); - if (do_show_data) - writer_print_data(w, "extradata", par->extradata, - par->extradata_size); - - if (par->extradata_size > 0) { - print_int("extradata_size", par->extradata_size); - writer_print_data_hash(w, "extradata_hash", par->extradata, - par->extradata_size); - } - - /* Print disposition information */ -#define PRINT_DISPOSITION(flagname, name) do { \ - print_int(name, !!(stream->disposition & AV_DISPOSITION_##flagname)); \ - } while (0) - - if (do_show_stream_disposition) { - writer_print_section_header(w, in_program ? SECTION_ID_PROGRAM_STREAM_DISPOSITION : SECTION_ID_STREAM_DISPOSITION); - PRINT_DISPOSITION(DEFAULT, "default"); - PRINT_DISPOSITION(DUB, "dub"); - PRINT_DISPOSITION(ORIGINAL, "original"); - PRINT_DISPOSITION(COMMENT, "comment"); - PRINT_DISPOSITION(LYRICS, "lyrics"); - PRINT_DISPOSITION(KARAOKE, "karaoke"); - PRINT_DISPOSITION(FORCED, "forced"); - PRINT_DISPOSITION(HEARING_IMPAIRED, "hearing_impaired"); - PRINT_DISPOSITION(VISUAL_IMPAIRED, "visual_impaired"); - PRINT_DISPOSITION(CLEAN_EFFECTS, "clean_effects"); - PRINT_DISPOSITION(ATTACHED_PIC, "attached_pic"); - PRINT_DISPOSITION(TIMED_THUMBNAILS, "timed_thumbnails"); - PRINT_DISPOSITION(CAPTIONS, "captions"); - PRINT_DISPOSITION(DESCRIPTIONS, "descriptions"); - PRINT_DISPOSITION(METADATA, "metadata"); - PRINT_DISPOSITION(DEPENDENT, "dependent"); - PRINT_DISPOSITION(STILL_IMAGE, "still_image"); - writer_print_section_footer(w); - } - - if (do_show_stream_tags) - ret = show_tags(w, stream->metadata, in_program ? 
SECTION_ID_PROGRAM_STREAM_TAGS : SECTION_ID_STREAM_TAGS); - - if (stream->nb_side_data) { - print_pkt_side_data(w, stream->codecpar, stream->side_data, stream->nb_side_data, - SECTION_ID_STREAM_SIDE_DATA_LIST, - SECTION_ID_STREAM_SIDE_DATA); - } - - writer_print_section_footer(w); - av_bprint_finalize(&pbuf, NULL); - fflush(stdout); - - return ret; -} - -static int show_streams(WriterContext *w, InputFile *ifile) -{ - AVFormatContext *fmt_ctx = ifile->fmt_ctx; - int i, ret = 0; - - writer_print_section_header(w, SECTION_ID_STREAMS); - for (i = 0; i < ifile->nb_streams; i++) - if (selected_streams[i]) { - ret = show_stream(w, fmt_ctx, i, &ifile->streams[i], 0); - if (ret < 0) - break; - } - writer_print_section_footer(w); - - return ret; -} - -static int show_program(WriterContext *w, InputFile *ifile, AVProgram *program) -{ - AVFormatContext *fmt_ctx = ifile->fmt_ctx; - int i, ret = 0; - - writer_print_section_header(w, SECTION_ID_PROGRAM); - print_int("program_id", program->id); - print_int("program_num", program->program_num); - print_int("nb_streams", program->nb_stream_indexes); - print_int("pmt_pid", program->pmt_pid); - print_int("pcr_pid", program->pcr_pid); - if (do_show_program_tags) - ret = show_tags(w, program->metadata, SECTION_ID_PROGRAM_TAGS); - if (ret < 0) - goto end; - - writer_print_section_header(w, SECTION_ID_PROGRAM_STREAMS); - for (i = 0; i < program->nb_stream_indexes; i++) { - if (selected_streams[program->stream_index[i]]) { - ret = show_stream(w, fmt_ctx, program->stream_index[i], &ifile->streams[program->stream_index[i]], 1); - if (ret < 0) - break; - } - } - writer_print_section_footer(w); - -end: - writer_print_section_footer(w); - return ret; -} - -static int show_programs(WriterContext *w, InputFile *ifile) -{ - AVFormatContext *fmt_ctx = ifile->fmt_ctx; - int i, ret = 0; - - writer_print_section_header(w, SECTION_ID_PROGRAMS); - for (i = 0; i < fmt_ctx->nb_programs; i++) { - AVProgram *program = fmt_ctx->programs[i]; - if (!program) - continue; - ret = show_program(w, ifile, program); - if (ret < 0) - break; - } - writer_print_section_footer(w); - return ret; -} - -static int show_chapters(WriterContext *w, InputFile *ifile) -{ - AVFormatContext *fmt_ctx = ifile->fmt_ctx; - int i, ret = 0; - - writer_print_section_header(w, SECTION_ID_CHAPTERS); - for (i = 0; i < fmt_ctx->nb_chapters; i++) { - AVChapter *chapter = fmt_ctx->chapters[i]; - - writer_print_section_header(w, SECTION_ID_CHAPTER); - print_int("id", chapter->id); - print_q ("time_base", chapter->time_base, '/'); - print_int("start", chapter->start); - print_time("start_time", chapter->start, &chapter->time_base); - print_int("end", chapter->end); - print_time("end_time", chapter->end, &chapter->time_base); - if (do_show_chapter_tags) - ret = show_tags(w, chapter->metadata, SECTION_ID_CHAPTER_TAGS); - writer_print_section_footer(w); - } - writer_print_section_footer(w); - - return ret; -} - -static int show_format(WriterContext *w, InputFile *ifile) -{ - AVFormatContext *fmt_ctx = ifile->fmt_ctx; - char val_str[128]; - int64_t size = fmt_ctx->pb ? 
avio_size(fmt_ctx->pb) : -1; - int ret = 0; - - writer_print_section_header(w, SECTION_ID_FORMAT); - print_str_validate("filename", fmt_ctx->url); - print_int("nb_streams", fmt_ctx->nb_streams); - print_int("nb_programs", fmt_ctx->nb_programs); - print_str("format_name", fmt_ctx->iformat->name); - if (!do_bitexact) { - if (fmt_ctx->iformat->long_name) print_str ("format_long_name", fmt_ctx->iformat->long_name); - else print_str_opt("format_long_name", "unknown"); - } - print_time("start_time", fmt_ctx->start_time, &AV_TIME_BASE_Q); - print_time("duration", fmt_ctx->duration, &AV_TIME_BASE_Q); - if (size >= 0) print_val ("size", size, unit_byte_str); - else print_str_opt("size", "N/A"); - if (fmt_ctx->bit_rate > 0) print_val ("bit_rate", fmt_ctx->bit_rate, unit_bit_per_second_str); - else print_str_opt("bit_rate", "N/A"); - print_int("probe_score", fmt_ctx->probe_score); - if (do_show_format_tags) - ret = show_tags(w, fmt_ctx->metadata, SECTION_ID_FORMAT_TAGS); - - writer_print_section_footer(w); - fflush(stdout); - return ret; -} - -static void show_error(WriterContext *w, int err) -{ - writer_print_section_header(w, SECTION_ID_ERROR); - print_int("code", err); - print_str("string", av_err2str(err)); - writer_print_section_footer(w); -} - -static int open_input_file(InputFile *ifile, const char *filename, - const char *print_filename) -{ - int err, i; - AVFormatContext *fmt_ctx = NULL; - const AVDictionaryEntry *t = NULL; - int scan_all_pmts_set = 0; - - fmt_ctx = avformat_alloc_context(); - if (!fmt_ctx) - report_and_exit(AVERROR(ENOMEM)); - - if (!av_dict_get(format_opts, "scan_all_pmts", NULL, AV_DICT_MATCH_CASE)) { - av_dict_set(&format_opts, "scan_all_pmts", "1", AV_DICT_DONT_OVERWRITE); - scan_all_pmts_set = 1; - } - if ((err = avformat_open_input(&fmt_ctx, filename, - iformat, &format_opts)) < 0) { - print_error(filename, err); - return err; - } - if (print_filename) { - av_freep(&fmt_ctx->url); - fmt_ctx->url = av_strdup(print_filename); - } - ifile->fmt_ctx = fmt_ctx; - if (scan_all_pmts_set) - av_dict_set(&format_opts, "scan_all_pmts", NULL, AV_DICT_MATCH_CASE); - while ((t = av_dict_iterate(format_opts, t))) - av_log(NULL, AV_LOG_WARNING, "Option %s skipped - not known to demuxer.\n", t->key); - - if (find_stream_info) { - AVDictionary **opts = setup_find_stream_info_opts(fmt_ctx, codec_opts); - int orig_nb_streams = fmt_ctx->nb_streams; - - err = avformat_find_stream_info(fmt_ctx, opts); - - for (i = 0; i < orig_nb_streams; i++) - av_dict_free(&opts[i]); - av_freep(&opts); - - if (err < 0) { - print_error(filename, err); - return err; - } - } - - av_dump_format(fmt_ctx, 0, filename, 0); - - ifile->streams = av_calloc(fmt_ctx->nb_streams, sizeof(*ifile->streams)); - if (!ifile->streams) - exit(1); - ifile->nb_streams = fmt_ctx->nb_streams; - - /* bind a decoder to each input stream */ - for (i = 0; i < fmt_ctx->nb_streams; i++) { - InputStream *ist = &ifile->streams[i]; - AVStream *stream = fmt_ctx->streams[i]; - const AVCodec *codec; - - ist->st = stream; - - if (stream->codecpar->codec_id == AV_CODEC_ID_PROBE) { - av_log(NULL, AV_LOG_WARNING, - "Failed to probe codec for input stream %d\n", - stream->index); - continue; - } - - codec = avcodec_find_decoder(stream->codecpar->codec_id); - if (!codec) { - av_log(NULL, AV_LOG_WARNING, - "Unsupported codec with id %d for input stream %d\n", - stream->codecpar->codec_id, stream->index); - continue; - } - { - AVDictionary *opts = filter_codec_opts(codec_opts, stream->codecpar->codec_id, - fmt_ctx, stream, codec); - - ist->dec_ctx = 
avcodec_alloc_context3(codec); - if (!ist->dec_ctx) - exit(1); - - err = avcodec_parameters_to_context(ist->dec_ctx, stream->codecpar); - if (err < 0) - exit(1); - - if (do_show_log) { - // For loging it is needed to disable at least frame threads as otherwise - // the log information would need to be reordered and matches up to contexts and frames - // That is in fact possible but not trivial - av_dict_set(&codec_opts, "threads", "1", 0); - } - - av_dict_set(&opts, "flags", "+copy_opaque", AV_DICT_MULTIKEY); - - ist->dec_ctx->pkt_timebase = stream->time_base; - - if (avcodec_open2(ist->dec_ctx, codec, &opts) < 0) { - av_log(NULL, AV_LOG_WARNING, "Could not open codec for input stream %d\n", - stream->index); - exit(1); - } - - if ((t = av_dict_get(opts, "", NULL, AV_DICT_IGNORE_SUFFIX))) { - av_log(NULL, AV_LOG_ERROR, "Option %s for input stream %d not found\n", - t->key, stream->index); - return AVERROR_OPTION_NOT_FOUND; - } - } - } - - ifile->fmt_ctx = fmt_ctx; - return 0; -} - -static void close_input_file(InputFile *ifile) -{ - int i; - - /* close decoder for each stream */ - for (i = 0; i < ifile->nb_streams; i++) - avcodec_free_context(&ifile->streams[i].dec_ctx); - - av_freep(&ifile->streams); - ifile->nb_streams = 0; - - avformat_close_input(&ifile->fmt_ctx); -} - -static int probe_file(WriterContext *wctx, const char *filename, - const char *print_filename) -{ - InputFile ifile = { 0 }; - int ret, i; - int section_id; - - do_read_frames = do_show_frames || do_count_frames; - do_read_packets = do_show_packets || do_count_packets; - - ret = open_input_file(&ifile, filename, print_filename); - if (ret < 0) - goto end; - -#define CHECK_END if (ret < 0) goto end - - nb_streams = ifile.fmt_ctx->nb_streams; - REALLOCZ_ARRAY_STREAM(nb_streams_frames,0,ifile.fmt_ctx->nb_streams); - REALLOCZ_ARRAY_STREAM(nb_streams_packets,0,ifile.fmt_ctx->nb_streams); - REALLOCZ_ARRAY_STREAM(selected_streams,0,ifile.fmt_ctx->nb_streams); - - for (i = 0; i < ifile.fmt_ctx->nb_streams; i++) { - if (stream_specifier) { - ret = avformat_match_stream_specifier(ifile.fmt_ctx, - ifile.fmt_ctx->streams[i], - stream_specifier); - CHECK_END; - else - selected_streams[i] = ret; - ret = 0; - } else { - selected_streams[i] = 1; - } - if (!selected_streams[i]) - ifile.fmt_ctx->streams[i]->discard = AVDISCARD_ALL; - } - - if (do_read_frames || do_read_packets) { - if (do_show_frames && do_show_packets && - wctx->writer->flags & WRITER_FLAG_PUT_PACKETS_AND_FRAMES_IN_SAME_CHAPTER) - section_id = SECTION_ID_PACKETS_AND_FRAMES; - else if (do_show_packets && !do_show_frames) - section_id = SECTION_ID_PACKETS; - else // (!do_show_packets && do_show_frames) - section_id = SECTION_ID_FRAMES; - if (do_show_frames || do_show_packets) - writer_print_section_header(wctx, section_id); - ret = read_packets(wctx, &ifile); - if (do_show_frames || do_show_packets) - writer_print_section_footer(wctx); - CHECK_END; - } - - if (do_show_programs) { - ret = show_programs(wctx, &ifile); - CHECK_END; - } - - if (do_show_streams) { - ret = show_streams(wctx, &ifile); - CHECK_END; - } - if (do_show_chapters) { - ret = show_chapters(wctx, &ifile); - CHECK_END; - } - if (do_show_format) { - ret = show_format(wctx, &ifile); - CHECK_END; - } - -end: - if (ifile.fmt_ctx) - close_input_file(&ifile); - av_freep(&nb_streams_frames); - av_freep(&nb_streams_packets); - av_freep(&selected_streams); - - return ret; -} - -static void show_usage(void) -{ - av_log(NULL, AV_LOG_INFO, "Simple multimedia streams analyzer\n"); - av_log(NULL, AV_LOG_INFO, "usage: 
%s [OPTIONS] INPUT_FILE\n", program_name); - av_log(NULL, AV_LOG_INFO, "\n"); -} - -static void ffprobe_show_program_version(WriterContext *w) -{ - AVBPrint pbuf; - av_bprint_init(&pbuf, 1, AV_BPRINT_SIZE_UNLIMITED); - - writer_print_section_header(w, SECTION_ID_PROGRAM_VERSION); - print_str("version", FFMPEG_VERSION); - print_fmt("copyright", "Copyright (c) %d-%d the FFmpeg developers", - program_birth_year, CONFIG_THIS_YEAR); - print_str("compiler_ident", CC_IDENT); - print_str("configuration", FFMPEG_CONFIGURATION); - writer_print_section_footer(w); - - av_bprint_finalize(&pbuf, NULL); -} - -#define SHOW_LIB_VERSION(libname, LIBNAME) \ - do { \ - if (CONFIG_##LIBNAME) { \ - unsigned int version = libname##_version(); \ - writer_print_section_header(w, SECTION_ID_LIBRARY_VERSION); \ - print_str("name", "lib" #libname); \ - print_int("major", LIB##LIBNAME##_VERSION_MAJOR); \ - print_int("minor", LIB##LIBNAME##_VERSION_MINOR); \ - print_int("micro", LIB##LIBNAME##_VERSION_MICRO); \ - print_int("version", version); \ - print_str("ident", LIB##LIBNAME##_IDENT); \ - writer_print_section_footer(w); \ - } \ - } while (0) - -static void ffprobe_show_library_versions(WriterContext *w) -{ - writer_print_section_header(w, SECTION_ID_LIBRARY_VERSIONS); - SHOW_LIB_VERSION(avutil, AVUTIL); - SHOW_LIB_VERSION(avcodec, AVCODEC); - SHOW_LIB_VERSION(avformat, AVFORMAT); - SHOW_LIB_VERSION(avdevice, AVDEVICE); - SHOW_LIB_VERSION(avfilter, AVFILTER); - SHOW_LIB_VERSION(swscale, SWSCALE); - SHOW_LIB_VERSION(swresample, SWRESAMPLE); - SHOW_LIB_VERSION(postproc, POSTPROC); - writer_print_section_footer(w); -} - -#define PRINT_PIX_FMT_FLAG(flagname, name) \ - do { \ - print_int(name, !!(pixdesc->flags & AV_PIX_FMT_FLAG_##flagname)); \ - } while (0) - -static void ffprobe_show_pixel_formats(WriterContext *w) -{ - const AVPixFmtDescriptor *pixdesc = NULL; - int i, n; - - writer_print_section_header(w, SECTION_ID_PIXEL_FORMATS); - while (pixdesc = av_pix_fmt_desc_next(pixdesc)) { - writer_print_section_header(w, SECTION_ID_PIXEL_FORMAT); - print_str("name", pixdesc->name); - print_int("nb_components", pixdesc->nb_components); - if ((pixdesc->nb_components >= 3) && !(pixdesc->flags & AV_PIX_FMT_FLAG_RGB)) { - print_int ("log2_chroma_w", pixdesc->log2_chroma_w); - print_int ("log2_chroma_h", pixdesc->log2_chroma_h); - } else { - print_str_opt("log2_chroma_w", "N/A"); - print_str_opt("log2_chroma_h", "N/A"); - } - n = av_get_bits_per_pixel(pixdesc); - if (n) print_int ("bits_per_pixel", n); - else print_str_opt("bits_per_pixel", "N/A"); - if (do_show_pixel_format_flags) { - writer_print_section_header(w, SECTION_ID_PIXEL_FORMAT_FLAGS); - PRINT_PIX_FMT_FLAG(BE, "big_endian"); - PRINT_PIX_FMT_FLAG(PAL, "palette"); - PRINT_PIX_FMT_FLAG(BITSTREAM, "bitstream"); - PRINT_PIX_FMT_FLAG(HWACCEL, "hwaccel"); - PRINT_PIX_FMT_FLAG(PLANAR, "planar"); - PRINT_PIX_FMT_FLAG(RGB, "rgb"); - PRINT_PIX_FMT_FLAG(ALPHA, "alpha"); - writer_print_section_footer(w); - } - if (do_show_pixel_format_components && (pixdesc->nb_components > 0)) { - writer_print_section_header(w, SECTION_ID_PIXEL_FORMAT_COMPONENTS); - for (i = 0; i < pixdesc->nb_components; i++) { - writer_print_section_header(w, SECTION_ID_PIXEL_FORMAT_COMPONENT); - print_int("index", i + 1); - print_int("bit_depth", pixdesc->comp[i].depth); - writer_print_section_footer(w); - } - writer_print_section_footer(w); - } - writer_print_section_footer(w); - } - writer_print_section_footer(w); -} - -static int opt_show_optional_fields(void *optctx, const char *opt, const char *arg) -{ - 
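-    /* Accepts "always", "never" or "auto"; any other value is parsed as a
-       raw integer between SHOW_OPTIONAL_FIELDS_AUTO and
-       SHOW_OPTIONAL_FIELDS_ALWAYS by the fallback below. */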
if (!av_strcasecmp(arg, "always")) show_optional_fields = SHOW_OPTIONAL_FIELDS_ALWAYS; - else if (!av_strcasecmp(arg, "never")) show_optional_fields = SHOW_OPTIONAL_FIELDS_NEVER; - else if (!av_strcasecmp(arg, "auto")) show_optional_fields = SHOW_OPTIONAL_FIELDS_AUTO; - - if (show_optional_fields == SHOW_OPTIONAL_FIELDS_AUTO && av_strcasecmp(arg, "auto")) - show_optional_fields = parse_number_or_die("show_optional_fields", arg, OPT_INT, SHOW_OPTIONAL_FIELDS_AUTO, SHOW_OPTIONAL_FIELDS_ALWAYS); - return 0; -} - -static int opt_format(void *optctx, const char *opt, const char *arg) -{ - iformat = av_find_input_format(arg); - if (!iformat) { - av_log(NULL, AV_LOG_ERROR, "Unknown input format: %s\n", arg); - return AVERROR(EINVAL); - } - return 0; -} - -static inline void mark_section_show_entries(SectionID section_id, - int show_all_entries, AVDictionary *entries) -{ - struct section *section = §ions[section_id]; - - section->show_all_entries = show_all_entries; - if (show_all_entries) { - SectionID *id; - for (id = section->children_ids; *id != -1; id++) - mark_section_show_entries(*id, show_all_entries, entries); - } else { - av_dict_copy(§ion->entries_to_show, entries, 0); - } -} - -static int match_section(const char *section_name, - int show_all_entries, AVDictionary *entries) -{ - int i, ret = 0; - - for (i = 0; i < FF_ARRAY_ELEMS(sections); i++) { - const struct section *section = §ions[i]; - if (!strcmp(section_name, section->name) || - (section->unique_name && !strcmp(section_name, section->unique_name))) { - av_log(NULL, AV_LOG_DEBUG, - "'%s' matches section with unique name '%s'\n", section_name, - (char *)av_x_if_null(section->unique_name, section->name)); - ret++; - mark_section_show_entries(section->id, show_all_entries, entries); - } - } - return ret; -} - -static int opt_show_entries(void *optctx, const char *opt, const char *arg) -{ - const char *p = arg; - int ret = 0; - - while (*p) { - AVDictionary *entries = NULL; - char *section_name = av_get_token(&p, "=:"); - int show_all_entries = 0; - - if (!section_name) { - av_log(NULL, AV_LOG_ERROR, - "Missing section name for option '%s'\n", opt); - return AVERROR(EINVAL); - } - - if (*p == '=') { - p++; - while (*p && *p != ':') { - char *entry = av_get_token(&p, ",:"); - if (!entry) - break; - av_log(NULL, AV_LOG_VERBOSE, - "Adding '%s' to the entries to show in section '%s'\n", - entry, section_name); - av_dict_set(&entries, entry, "", AV_DICT_DONT_STRDUP_KEY); - if (*p == ',') - p++; - } - } else { - show_all_entries = 1; - } - - ret = match_section(section_name, show_all_entries, entries); - if (ret == 0) { - av_log(NULL, AV_LOG_ERROR, "No match for section '%s'\n", section_name); - ret = AVERROR(EINVAL); - } - av_dict_free(&entries); - av_free(section_name); - - if (ret <= 0) - break; - if (*p) - p++; - } - - return ret; -} - -static void opt_input_file(void *optctx, const char *arg) -{ - if (input_filename) { - av_log(NULL, AV_LOG_ERROR, - "Argument '%s' provided as input filename, but '%s' was already specified.\n", - arg, input_filename); - exit_program(1); - } - if (!strcmp(arg, "-")) - arg = "fd:"; - input_filename = arg; -} - -static int opt_input_file_i(void *optctx, const char *opt, const char *arg) -{ - opt_input_file(optctx, arg); - return 0; -} - -static void opt_output_file(void *optctx, const char *arg) -{ - if (output_filename) { - av_log(NULL, AV_LOG_ERROR, - "Argument '%s' provided as output filename, but '%s' was already specified.\n", - arg, output_filename); - exit_program(1); - } - if (!strcmp(arg, "-")) 
- arg = "fd:"; - output_filename = arg; -} - -static int opt_output_file_o(void *optctx, const char *opt, const char *arg) -{ - opt_output_file(optctx, arg); - return 0; -} - -static int opt_print_filename(void *optctx, const char *opt, const char *arg) -{ - print_input_filename = arg; - return 0; -} - -void show_help_default(const char *opt, const char *arg) -{ - av_log_set_callback(log_callback_help); - show_usage(); - show_help_options(options, "Main options:", 0, 0, 0); - printf("\n"); - - show_help_children(avformat_get_class(), AV_OPT_FLAG_DECODING_PARAM); - show_help_children(avcodec_get_class(), AV_OPT_FLAG_DECODING_PARAM); -} - -/** - * Parse interval specification, according to the format: - * INTERVAL ::= [START|+START_OFFSET][%[END|+END_OFFSET]] - * INTERVALS ::= INTERVAL[,INTERVALS] -*/ -static int parse_read_interval(const char *interval_spec, - ReadInterval *interval) -{ - int ret = 0; - char *next, *p, *spec = av_strdup(interval_spec); - if (!spec) - return AVERROR(ENOMEM); - - if (!*spec) { - av_log(NULL, AV_LOG_ERROR, "Invalid empty interval specification\n"); - ret = AVERROR(EINVAL); - goto end; - } - - p = spec; - next = strchr(spec, '%'); - if (next) - *next++ = 0; - - /* parse first part */ - if (*p) { - interval->has_start = 1; - - if (*p == '+') { - interval->start_is_offset = 1; - p++; - } else { - interval->start_is_offset = 0; - } - - ret = av_parse_time(&interval->start, p, 1); - if (ret < 0) { - av_log(NULL, AV_LOG_ERROR, "Invalid interval start specification '%s'\n", p); - goto end; - } - } else { - interval->has_start = 0; - } - - /* parse second part */ - p = next; - if (p && *p) { - int64_t us; - interval->has_end = 1; - - if (*p == '+') { - interval->end_is_offset = 1; - p++; - } else { - interval->end_is_offset = 0; - } - - if (interval->end_is_offset && *p == '#') { - long long int lli; - char *tail; - interval->duration_frames = 1; - p++; - lli = strtoll(p, &tail, 10); - if (*tail || lli < 0) { - av_log(NULL, AV_LOG_ERROR, - "Invalid or negative value '%s' for duration number of frames\n", p); - goto end; - } - interval->end = lli; - } else { - interval->duration_frames = 0; - ret = av_parse_time(&us, p, 1); - if (ret < 0) { - av_log(NULL, AV_LOG_ERROR, "Invalid interval end/duration specification '%s'\n", p); - goto end; - } - interval->end = us; - } - } else { - interval->has_end = 0; - } - -end: - av_free(spec); - return ret; -} - -static int parse_read_intervals(const char *intervals_spec) -{ - int ret, n, i; - char *p, *spec = av_strdup(intervals_spec); - if (!spec) - return AVERROR(ENOMEM); - - /* preparse specification, get number of intervals */ - for (n = 0, p = spec; *p; p++) - if (*p == ',') - n++; - n++; - - read_intervals = av_malloc_array(n, sizeof(*read_intervals)); - if (!read_intervals) { - ret = AVERROR(ENOMEM); - goto end; - } - read_intervals_nb = n; - - /* parse intervals */ - p = spec; - for (i = 0; p; i++) { - char *next; - - av_assert0(i < read_intervals_nb); - next = strchr(p, ','); - if (next) - *next++ = 0; - - read_intervals[i].id = i; - ret = parse_read_interval(p, &read_intervals[i]); - if (ret < 0) { - av_log(NULL, AV_LOG_ERROR, "Error parsing read interval #%d '%s'\n", - i, p); - goto end; - } - av_log(NULL, AV_LOG_VERBOSE, "Parsed log interval "); - log_read_interval(&read_intervals[i], NULL, AV_LOG_VERBOSE); - p = next; - } - av_assert0(i == read_intervals_nb); - -end: - av_free(spec); - return ret; -} - -static int opt_read_intervals(void *optctx, const char *opt, const char *arg) -{ - return parse_read_intervals(arg); 
-} - -static int opt_pretty(void *optctx, const char *opt, const char *arg) -{ - show_value_unit = 1; - use_value_prefix = 1; - use_byte_value_binary_prefix = 1; - use_value_sexagesimal_format = 1; - return 0; -} - -static void print_section(SectionID id, int level) -{ - const SectionID *pid; - const struct section *section = §ions[id]; - printf("%c%c%c", - section->flags & SECTION_FLAG_IS_WRAPPER ? 'W' : '.', - section->flags & SECTION_FLAG_IS_ARRAY ? 'A' : '.', - section->flags & SECTION_FLAG_HAS_VARIABLE_FIELDS ? 'V' : '.'); - printf("%*c %s", level * 4, ' ', section->name); - if (section->unique_name) - printf("/%s", section->unique_name); - printf("\n"); - - for (pid = section->children_ids; *pid != -1; pid++) - print_section(*pid, level+1); -} - -static int opt_sections(void *optctx, const char *opt, const char *arg) -{ - printf("Sections:\n" - "W.. = Section is a wrapper (contains other sections, no local entries)\n" - ".A. = Section contains an array of elements of the same type\n" - "..V = Section may contain a variable number of fields with variable keys\n" - "FLAGS NAME/UNIQUE_NAME\n" - "---\n"); - print_section(SECTION_ID_ROOT, 0); - return 0; -} - -static int opt_show_versions(void *optctx, const char *opt, const char *arg) -{ - mark_section_show_entries(SECTION_ID_PROGRAM_VERSION, 1, NULL); - mark_section_show_entries(SECTION_ID_LIBRARY_VERSION, 1, NULL); - return 0; -} - -#define DEFINE_OPT_SHOW_SECTION(section, target_section_id) \ - static int opt_show_##section(void *optctx, const char *opt, const char *arg) \ - { \ - mark_section_show_entries(SECTION_ID_##target_section_id, 1, NULL); \ - return 0; \ - } - -DEFINE_OPT_SHOW_SECTION(chapters, CHAPTERS) -DEFINE_OPT_SHOW_SECTION(error, ERROR) -DEFINE_OPT_SHOW_SECTION(format, FORMAT) -DEFINE_OPT_SHOW_SECTION(frames, FRAMES) -DEFINE_OPT_SHOW_SECTION(library_versions, LIBRARY_VERSIONS) -DEFINE_OPT_SHOW_SECTION(packets, PACKETS) -DEFINE_OPT_SHOW_SECTION(pixel_formats, PIXEL_FORMATS) -DEFINE_OPT_SHOW_SECTION(program_version, PROGRAM_VERSION) -DEFINE_OPT_SHOW_SECTION(streams, STREAMS) -DEFINE_OPT_SHOW_SECTION(programs, PROGRAMS) - -static const OptionDef real_options[] = { - CMDUTILS_COMMON_OPTIONS - { "f", HAS_ARG, {.func_arg = opt_format}, "force format", "format" }, - { "unit", OPT_BOOL, {&show_value_unit}, "show unit of the displayed values" }, - { "prefix", OPT_BOOL, {&use_value_prefix}, "use SI prefixes for the displayed values" }, - { "byte_binary_prefix", OPT_BOOL, {&use_byte_value_binary_prefix}, - "use binary prefixes for byte units" }, - { "sexagesimal", OPT_BOOL, {&use_value_sexagesimal_format}, - "use sexagesimal format HOURS:MM:SS.MICROSECONDS for time units" }, - { "pretty", 0, {.func_arg = opt_pretty}, - "prettify the format of displayed values, make it more human readable" }, - { "print_format", OPT_STRING | HAS_ARG, { &print_format }, - "set the output printing format (available formats are: default, compact, csv, flat, ini, json, xml)", "format" }, - { "of", OPT_STRING | HAS_ARG, { &print_format }, "alias for -print_format", "format" }, - { "select_streams", OPT_STRING | HAS_ARG, { &stream_specifier }, "select the specified streams", "stream_specifier" }, - { "sections", OPT_EXIT, {.func_arg = opt_sections}, "print sections structure and section information, and exit" }, - { "show_data", OPT_BOOL, { &do_show_data }, "show packets data" }, - { "show_data_hash", OPT_STRING | HAS_ARG, { &show_data_hash }, "show packets data hash" }, - { "show_error", 0, { .func_arg = &opt_show_error }, "show probing error" }, - { 
"show_format", 0, { .func_arg = &opt_show_format }, "show format/container info" }, - { "show_frames", 0, { .func_arg = &opt_show_frames }, "show frames info" }, - { "show_entries", HAS_ARG, {.func_arg = opt_show_entries}, - "show a set of specified entries", "entry_list" }, -#if HAVE_THREADS - { "show_log", OPT_INT|HAS_ARG, { &do_show_log }, "show log" }, -#endif - { "show_packets", 0, { .func_arg = &opt_show_packets }, "show packets info" }, - { "show_programs", 0, { .func_arg = &opt_show_programs }, "show programs info" }, - { "show_streams", 0, { .func_arg = &opt_show_streams }, "show streams info" }, - { "show_chapters", 0, { .func_arg = &opt_show_chapters }, "show chapters info" }, - { "count_frames", OPT_BOOL, { &do_count_frames }, "count the number of frames per stream" }, - { "count_packets", OPT_BOOL, { &do_count_packets }, "count the number of packets per stream" }, - { "show_program_version", 0, { .func_arg = &opt_show_program_version }, "show ffprobe version" }, - { "show_library_versions", 0, { .func_arg = &opt_show_library_versions }, "show library versions" }, - { "show_versions", 0, { .func_arg = &opt_show_versions }, "show program and library versions" }, - { "show_pixel_formats", 0, { .func_arg = &opt_show_pixel_formats }, "show pixel format descriptions" }, - { "show_optional_fields", HAS_ARG, { .func_arg = &opt_show_optional_fields }, "show optional fields" }, - { "show_private_data", OPT_BOOL, { &show_private_data }, "show private data" }, - { "private", OPT_BOOL, { &show_private_data }, "same as show_private_data" }, - { "bitexact", OPT_BOOL, {&do_bitexact}, "force bitexact output" }, - { "read_intervals", HAS_ARG, {.func_arg = opt_read_intervals}, "set read intervals", "read_intervals" }, - { "i", HAS_ARG, {.func_arg = opt_input_file_i}, "read specified file", "input_file"}, - { "o", HAS_ARG, {.func_arg = opt_output_file_o}, "write to specified output", "output_file"}, - { "print_filename", HAS_ARG, {.func_arg = opt_print_filename}, "override the printed input filename", "print_file"}, - { "find_stream_info", OPT_BOOL | OPT_INPUT | OPT_EXPERT, { &find_stream_info }, - "read and decode the streams to fill missing information with heuristics" }, - { NULL, }, -}; - -static inline int check_section_show_entries(int section_id) -{ - int *id; - struct section *section = §ions[section_id]; - if (sections[section_id].show_all_entries || sections[section_id].entries_to_show) - return 1; - for (id = section->children_ids; *id != -1; id++) - if (check_section_show_entries(*id)) - return 1; - return 0; -} - -#define SET_DO_SHOW(id, varname) do { \ - if (check_section_show_entries(SECTION_ID_##id)) \ - do_show_##varname = 1; \ - } while (0) - -int main(int argc, char **argv) -{ - const Writer *w; - WriterContext *wctx; - char *buf; - char *w_name = NULL, *w_args = NULL; - int ret, input_ret, i; - - init_dynload(); - -#if HAVE_THREADS - ret = pthread_mutex_init(&log_mutex, NULL); - if (ret != 0) { - goto end; - } -#endif - av_log_set_flags(AV_LOG_SKIP_REPEATED); - register_exit(ffprobe_cleanup); - - options = real_options; - parse_loglevel(argc, argv, options); - avformat_network_init(); -#if CONFIG_AVDEVICE - avdevice_register_all(); -#endif - - show_banner(argc, argv, options); - parse_options(NULL, argc, argv, options, opt_input_file); - - if (do_show_log) - av_log_set_callback(log_callback); - - /* mark things to show, based on -show_entries */ - SET_DO_SHOW(CHAPTERS, chapters); - SET_DO_SHOW(ERROR, error); - SET_DO_SHOW(FORMAT, format); - SET_DO_SHOW(FRAMES, frames); - 
SET_DO_SHOW(LIBRARY_VERSIONS, library_versions); - SET_DO_SHOW(PACKETS, packets); - SET_DO_SHOW(PIXEL_FORMATS, pixel_formats); - SET_DO_SHOW(PIXEL_FORMAT_FLAGS, pixel_format_flags); - SET_DO_SHOW(PIXEL_FORMAT_COMPONENTS, pixel_format_components); - SET_DO_SHOW(PROGRAM_VERSION, program_version); - SET_DO_SHOW(PROGRAMS, programs); - SET_DO_SHOW(STREAMS, streams); - SET_DO_SHOW(STREAM_DISPOSITION, stream_disposition); - SET_DO_SHOW(PROGRAM_STREAM_DISPOSITION, stream_disposition); - - SET_DO_SHOW(CHAPTER_TAGS, chapter_tags); - SET_DO_SHOW(FORMAT_TAGS, format_tags); - SET_DO_SHOW(FRAME_TAGS, frame_tags); - SET_DO_SHOW(PROGRAM_TAGS, program_tags); - SET_DO_SHOW(STREAM_TAGS, stream_tags); - SET_DO_SHOW(PROGRAM_STREAM_TAGS, stream_tags); - SET_DO_SHOW(PACKET_TAGS, packet_tags); - - if (do_bitexact && (do_show_program_version || do_show_library_versions)) { - av_log(NULL, AV_LOG_ERROR, - "-bitexact and -show_program_version or -show_library_versions " - "options are incompatible\n"); - ret = AVERROR(EINVAL); - goto end; - } - - writer_register_all(); - - if (!print_format) - print_format = av_strdup("default"); - if (!print_format) { - ret = AVERROR(ENOMEM); - goto end; - } - w_name = av_strtok(print_format, "=", &buf); - if (!w_name) { - av_log(NULL, AV_LOG_ERROR, - "No name specified for the output format\n"); - ret = AVERROR(EINVAL); - goto end; - } - w_args = buf; - - if (show_data_hash) { - if ((ret = av_hash_alloc(&hash, show_data_hash)) < 0) { - if (ret == AVERROR(EINVAL)) { - const char *n; - av_log(NULL, AV_LOG_ERROR, - "Unknown hash algorithm '%s'\nKnown algorithms:", - show_data_hash); - for (i = 0; (n = av_hash_names(i)); i++) - av_log(NULL, AV_LOG_ERROR, " %s", n); - av_log(NULL, AV_LOG_ERROR, "\n"); - } - goto end; - } - } - - w = writer_get_by_name(w_name); - if (!w) { - av_log(NULL, AV_LOG_ERROR, "Unknown output format with name '%s'\n", w_name); - ret = AVERROR(EINVAL); - goto end; - } - - if ((ret = writer_open(&wctx, w, w_args, - sections, FF_ARRAY_ELEMS(sections), output_filename)) >= 0) { - if (w == &xml_writer) - wctx->string_validation_utf8_flags |= AV_UTF8_FLAG_EXCLUDE_XML_INVALID_CONTROL_CODES; - - writer_print_section_header(wctx, SECTION_ID_ROOT); - - if (do_show_program_version) - ffprobe_show_program_version(wctx); - if (do_show_library_versions) - ffprobe_show_library_versions(wctx); - if (do_show_pixel_formats) - ffprobe_show_pixel_formats(wctx); - - if (!input_filename && - ((do_show_format || do_show_programs || do_show_streams || do_show_chapters || do_show_packets || do_show_error) || - (!do_show_program_version && !do_show_library_versions && !do_show_pixel_formats))) { - show_usage(); - av_log(NULL, AV_LOG_ERROR, "You have to specify one input file.\n"); - av_log(NULL, AV_LOG_ERROR, "Use -h to get full help or, even better, run 'man %s'.\n", program_name); - ret = AVERROR(EINVAL); - } else if (input_filename) { - ret = probe_file(wctx, input_filename, print_input_filename); - if (ret < 0 && do_show_error) - show_error(wctx, ret); - } - - input_ret = ret; - - writer_print_section_footer(wctx); - ret = writer_close(&wctx); - if (ret < 0) - av_log(NULL, AV_LOG_ERROR, "Writing output failed: %s\n", av_err2str(ret)); - - ret = FFMIN(ret, input_ret); - } - -end: - av_freep(&print_format); - av_freep(&read_intervals); - av_hash_freep(&hash); - - uninit_opts(); - for (i = 0; i < FF_ARRAY_ELEMS(sections); i++) - av_dict_free(&(sections[i].entries_to_show)); - - avformat_network_deinit(); - - return ret < 0; -} diff --git 
a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/acelp_pitch_delay.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/acelp_pitch_delay.c deleted file mode 100644 index 6cf880e4ac136ac653a363903cd29dd9c4dd2d62..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/acelp_pitch_delay.c +++ /dev/null @@ -1,147 +0,0 @@ -/* - * gain code, gain pitch and pitch delay decoding - * - * Copyright (c) 2008 Vladimir Voroshilov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/common.h" -#include "libavutil/ffmath.h" -#include "libavutil/float_dsp.h" -#include "acelp_pitch_delay.h" -#include "celp_math.h" -#include "audiodsp.h" - -void ff_acelp_update_past_gain( - int16_t* quant_energy, - int gain_corr_factor, - int log2_ma_pred_order, - int erasure) -{ - int i; - int avg_gain=quant_energy[(1 << log2_ma_pred_order) - 1]; // (5.10) - - for(i=(1 << log2_ma_pred_order) - 1; i>0; i--) - { - avg_gain += quant_energy[i-1]; - quant_energy[i] = quant_energy[i-1]; - } - - if(erasure) - quant_energy[0] = FFMAX(avg_gain >> log2_ma_pred_order, -10240) - 4096; // -10 and -4 in (5.10) - else - quant_energy[0] = (6165 * ((ff_log2_q15(gain_corr_factor) >> 2) - (13 << 13))) >> 13; -} - -int16_t ff_acelp_decode_gain_code( - AudioDSPContext *adsp, - int gain_corr_factor, - const int16_t* fc_v, - int mr_energy, - const int16_t* quant_energy, - const int16_t* ma_prediction_coeff, - int subframe_size, - int ma_pred_order) -{ - int i; - - mr_energy <<= 10; - - for(i=0; iscalarproduct_int16(fc_v, fc_v, subframe_size, 0))) >> 3) & ~0x3ff); - - mr_energy = (5439 * (mr_energy >> 15)) >> 8; // (0.15) = (0.15) * (7.23) - - return bidir_sal( - ((ff_exp2(mr_energy & 0x7fff) + 16) >> 5) * (gain_corr_factor >> 1), - (mr_energy >> 15) - 25 - ); -#else - mr_energy = gain_corr_factor * ff_exp10((double)mr_energy / (20 << 23)) / - sqrt(adsp->scalarproduct_int16(fc_v, fc_v, subframe_size)); - return mr_energy >> 12; -#endif -} - -float ff_amr_set_fixed_gain(float fixed_gain_factor, float fixed_mean_energy, - float *prediction_error, float energy_mean, - const float *pred_table) -{ - // Equations 66-69: - // ^g_c = ^gamma_gc * 100.05 (predicted dB + mean dB - dB of fixed vector) - // Note 10^(0.05 * -10log(average x2)) = 1/sqrt((average x2)). - float val = fixed_gain_factor * - ff_exp10(0.05 * - (avpriv_scalarproduct_float_c(pred_table, prediction_error, 4) + - energy_mean)) / - sqrtf(fixed_mean_energy ? 
fixed_mean_energy : 1.0); - - // update quantified prediction error energy history - memmove(&prediction_error[0], &prediction_error[1], - 3 * sizeof(prediction_error[0])); - prediction_error[3] = 20.0 * log10f(fixed_gain_factor); - - return val; -} - -void ff_decode_pitch_lag(int *lag_int, int *lag_frac, int pitch_index, - const int prev_lag_int, const int subframe, - int third_as_first, int resolution) -{ - /* Note n * 10923 >> 15 is floor(x/3) for 0 <= n <= 32767 */ - if (subframe == 0 || (subframe == 2 && third_as_first)) { - - if (pitch_index < 197) - pitch_index += 59; - else - pitch_index = 3 * pitch_index - 335; - - } else { - if (resolution == 4) { - int search_range_min = av_clip(prev_lag_int - 5, PITCH_DELAY_MIN, - PITCH_DELAY_MAX - 9); - - // decoding with 4-bit resolution - if (pitch_index < 4) { - // integer only precision for [search_range_min, search_range_min+3] - pitch_index = 3 * (pitch_index + search_range_min) + 1; - } else if (pitch_index < 12) { - // 1/3 fractional precision for [search_range_min+3 1/3, search_range_min+5 2/3] - pitch_index += 3 * search_range_min + 7; - } else { - // integer only precision for [search_range_min+6, search_range_min+9] - pitch_index = 3 * (pitch_index + search_range_min - 6) + 1; - } - } else { - // decoding with 5 or 6 bit resolution, 1/3 fractional precision - pitch_index--; - - if (resolution == 5) { - pitch_index += 3 * av_clip(prev_lag_int - 10, PITCH_DELAY_MIN, - PITCH_DELAY_MAX - 19); - } else - pitch_index += 3 * av_clip(prev_lag_int - 5, PITCH_DELAY_MIN, - PITCH_DELAY_MAX - 9); - } - } - *lag_int = pitch_index * 10923 >> 15; - *lag_frac = pitch_index - 3 * *lag_int - 1; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/frame_thread_encoder.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/frame_thread_encoder.h deleted file mode 100644 index 201cba2a8f107cc5c443ff39b2ca28fd55b280cc..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/frame_thread_encoder.h +++ /dev/null @@ -1,35 +0,0 @@ -/* - * Copyright (c) 2012 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_FRAME_THREAD_ENCODER_H -#define AVCODEC_FRAME_THREAD_ENCODER_H - -#include "avcodec.h" - -/** - * Initialize frame thread encoder. 
- * @note hardware encoders are not supported - */ -int ff_frame_thread_encoder_init(AVCodecContext *avctx); -void ff_frame_thread_encoder_free(AVCodecContext *avctx); -int ff_thread_video_encode_frame(AVCodecContext *avctx, AVPacket *pkt, - AVFrame *frame, int *got_packet_ptr); - -#endif /* AVCODEC_FRAME_THREAD_ENCODER_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_mc_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_mc_template.c deleted file mode 100644 index d02e2bf580a409aec4a9a88c05c37c2a6cc9e1a3..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_mc_template.c +++ /dev/null @@ -1,165 +0,0 @@ -/* - * H.26L/H.264/AVC/JVT/14496-10/... decoder - * Copyright (c) 2003 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "h264dec.h" - -#undef MCFUNC - -#if CHROMA_IDC == 1 -# define MCFUNC(n) FUNC(n ## _420) -#elif CHROMA_IDC == 2 -# define MCFUNC(n) FUNC(n ## _422) -#elif CHROMA_IDC == 3 -# define MCFUNC(n) FUNC(n ## _444) -#endif - -#undef mc_part -#define mc_part MCFUNC(mc_part) - -static void mc_part(const H264Context *h, H264SliceContext *sl, - int n, int square, - int height, int delta, - uint8_t *dest_y, uint8_t *dest_cb, - uint8_t *dest_cr, - int x_offset, int y_offset, - const qpel_mc_func *qpix_put, - h264_chroma_mc_func chroma_put, - const qpel_mc_func *qpix_avg, - h264_chroma_mc_func chroma_avg, - const h264_weight_func *weight_op, - const h264_biweight_func *weight_avg, - int list0, int list1) -{ - if ((sl->pwt.use_weight == 2 && list0 && list1 && - (sl->pwt.implicit_weight[sl->ref_cache[0][scan8[n]]][sl->ref_cache[1][scan8[n]]][sl->mb_y & 1] != 32)) || - sl->pwt.use_weight == 1) - mc_part_weighted(h, sl, n, square, height, delta, dest_y, dest_cb, dest_cr, - x_offset, y_offset, qpix_put, chroma_put, - weight_op[0], weight_op[1], weight_avg[0], - weight_avg[1], list0, list1, PIXEL_SHIFT, CHROMA_IDC); - else - mc_part_std(h, sl, n, square, height, delta, dest_y, dest_cb, dest_cr, - x_offset, y_offset, qpix_put, chroma_put, qpix_avg, - chroma_avg, list0, list1, PIXEL_SHIFT, CHROMA_IDC); -} - -static void MCFUNC(hl_motion)(const H264Context *h, H264SliceContext *sl, - uint8_t *dest_y, - uint8_t *dest_cb, uint8_t *dest_cr, - const qpel_mc_func(*qpix_put)[16], - const h264_chroma_mc_func(*chroma_put), - const qpel_mc_func(*qpix_avg)[16], - const h264_chroma_mc_func(*chroma_avg), - const h264_weight_func *weight_op, - const h264_biweight_func *weight_avg) -{ - const int mb_xy = sl->mb_xy; - const int mb_type = h->cur_pic.mb_type[mb_xy]; - - av_assert2(IS_INTER(mb_type)); - - if (HAVE_THREADS && (h->avctx->active_thread_type & FF_THREAD_FRAME)) - await_references(h, sl); - if (USES_LIST(mb_type, 0)) - prefetch_motion(h, sl, 0, 
PIXEL_SHIFT, CHROMA_IDC); - - if (IS_16X16(mb_type)) { - mc_part(h, sl, 0, 1, 16, 0, dest_y, dest_cb, dest_cr, 0, 0, - qpix_put[0], chroma_put[0], qpix_avg[0], chroma_avg[0], - weight_op, weight_avg, - IS_DIR(mb_type, 0, 0), IS_DIR(mb_type, 0, 1)); - } else if (IS_16X8(mb_type)) { - mc_part(h, sl, 0, 0, 8, 8 << PIXEL_SHIFT, dest_y, dest_cb, dest_cr, 0, 0, - qpix_put[1], chroma_put[0], qpix_avg[1], chroma_avg[0], - weight_op, weight_avg, - IS_DIR(mb_type, 0, 0), IS_DIR(mb_type, 0, 1)); - mc_part(h, sl, 8, 0, 8, 8 << PIXEL_SHIFT, dest_y, dest_cb, dest_cr, 0, 4, - qpix_put[1], chroma_put[0], qpix_avg[1], chroma_avg[0], - weight_op, weight_avg, - IS_DIR(mb_type, 1, 0), IS_DIR(mb_type, 1, 1)); - } else if (IS_8X16(mb_type)) { - mc_part(h, sl, 0, 0, 16, 8 * sl->mb_linesize, dest_y, dest_cb, dest_cr, 0, 0, - qpix_put[1], chroma_put[1], qpix_avg[1], chroma_avg[1], - &weight_op[1], &weight_avg[1], - IS_DIR(mb_type, 0, 0), IS_DIR(mb_type, 0, 1)); - mc_part(h, sl, 4, 0, 16, 8 * sl->mb_linesize, dest_y, dest_cb, dest_cr, 4, 0, - qpix_put[1], chroma_put[1], qpix_avg[1], chroma_avg[1], - &weight_op[1], &weight_avg[1], - IS_DIR(mb_type, 1, 0), IS_DIR(mb_type, 1, 1)); - } else { - int i; - - av_assert2(IS_8X8(mb_type)); - - for (i = 0; i < 4; i++) { - const int sub_mb_type = sl->sub_mb_type[i]; - const int n = 4 * i; - int x_offset = (i & 1) << 2; - int y_offset = (i & 2) << 1; - - if (IS_SUB_8X8(sub_mb_type)) { - mc_part(h, sl, n, 1, 8, 0, dest_y, dest_cb, dest_cr, - x_offset, y_offset, - qpix_put[1], chroma_put[1], qpix_avg[1], chroma_avg[1], - &weight_op[1], &weight_avg[1], - IS_DIR(sub_mb_type, 0, 0), IS_DIR(sub_mb_type, 0, 1)); - } else if (IS_SUB_8X4(sub_mb_type)) { - mc_part(h, sl, n, 0, 4, 4 << PIXEL_SHIFT, dest_y, dest_cb, dest_cr, - x_offset, y_offset, - qpix_put[2], chroma_put[1], qpix_avg[2], chroma_avg[1], - &weight_op[1], &weight_avg[1], - IS_DIR(sub_mb_type, 0, 0), IS_DIR(sub_mb_type, 0, 1)); - mc_part(h, sl, n + 2, 0, 4, 4 << PIXEL_SHIFT, - dest_y, dest_cb, dest_cr, x_offset, y_offset + 2, - qpix_put[2], chroma_put[1], qpix_avg[2], chroma_avg[1], - &weight_op[1], &weight_avg[1], - IS_DIR(sub_mb_type, 0, 0), IS_DIR(sub_mb_type, 0, 1)); - } else if (IS_SUB_4X8(sub_mb_type)) { - mc_part(h, sl, n, 0, 8, 4 * sl->mb_linesize, - dest_y, dest_cb, dest_cr, x_offset, y_offset, - qpix_put[2], chroma_put[2], qpix_avg[2], chroma_avg[2], - &weight_op[2], &weight_avg[2], - IS_DIR(sub_mb_type, 0, 0), IS_DIR(sub_mb_type, 0, 1)); - mc_part(h, sl, n + 1, 0, 8, 4 * sl->mb_linesize, - dest_y, dest_cb, dest_cr, x_offset + 2, y_offset, - qpix_put[2], chroma_put[2], qpix_avg[2], chroma_avg[2], - &weight_op[2], &weight_avg[2], - IS_DIR(sub_mb_type, 0, 0), IS_DIR(sub_mb_type, 0, 1)); - } else { - int j; - av_assert2(IS_SUB_4X4(sub_mb_type)); - for (j = 0; j < 4; j++) { - int sub_x_offset = x_offset + 2 * (j & 1); - int sub_y_offset = y_offset + (j & 2); - mc_part(h, sl, n + j, 1, 4, 0, - dest_y, dest_cb, dest_cr, sub_x_offset, sub_y_offset, - qpix_put[2], chroma_put[2], qpix_avg[2], chroma_avg[2], - &weight_op[2], &weight_avg[2], - IS_DIR(sub_mb_type, 0, 0), IS_DIR(sub_mb_type, 0, 1)); - } - } - } - } - - if (USES_LIST(mb_type, 1)) - prefetch_motion(h, sl, 1, PIXEL_SHIFT, CHROMA_IDC); -} - diff --git a/spaces/congsaPfin/Manga-OCR/Ioncube-Php-Encoder-Nulled-Io-TOP.md b/spaces/congsaPfin/Manga-OCR/Ioncube-Php-Encoder-Nulled-Io-TOP.md deleted file mode 100644 index a51da69ecea8330b28bf0e3aa084df3e1e89b608..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/Ioncube-Php-Encoder-Nulled-Io-TOP.md 
+++ /dev/null @@ -1,96 +0,0 @@ -## ioncube php encoder nulled io - - - - - - ![Ioncube Php Encoder Nulled Io \/\/TOP\\\\](https://domoticx.com/wp-content/uploads/2017/07/ioncube-icon.jpg) - - - - - -**Download File [https://urlca.com/2txP5o](https://urlca.com/2txP5o)** - - - - - - - - - - - - I'll try to write that for you. Here is a possible title and article: - -# How to Use IonCube PHP Encoder to Protect Your PHP Scripts - - - -If you are a PHP developer, you may want to protect your code from unauthorized use, modification, or theft. One way to do that is by using IonCube PHP Encoder, a powerful tool that can encrypt and license your PHP scripts. - - - -IonCube PHP Encoder is a software that can encode your PHP files into a format that is unreadable by humans, but still executable by the PHP engine. This way, you can prevent anyone from viewing or changing your source code, and also add features such as expiration dates, domain restrictions, IP restrictions, or MAC address restrictions to control how your scripts can be used. - - - -IonCube PHP Encoder supports PHP versions from 4.4 to 8.1, and has unique features such as external and dynamic keys encryption, bytecode protection, binary code obfuscation, and runtime compatibility. It also has a command line interface and a graphical user interface for Windows and macOS. - - - -To use IonCube PHP Encoder, you need to purchase a license from their website[^2^], download the software, and install it on your computer. Then, you can follow these steps to encode your PHP scripts: - - - -1. Select the files or folders that you want to encode. - -2. Choose the encoding options that suit your needs, such as target PHP version, encryption mode, licensing features, etc. - -3. Click on the Encode button to start the encoding process. - -4. Copy the encoded files to your web server, along with the ionCube Loader file that is required to run them. - - - -That's it! You have successfully encoded and protected your PHP scripts with IonCube PHP Encoder. You can also use their online service[^2^] if you don't want to install the software on your computer. - - - -However, be aware that IonCube PHP Encoder is not a free software, and it may not be compatible with some PHP frameworks or extensions. Also, some hackers may try to crack or bypass the encryption using tools such as nulled io[^1^] [^3^] [^4^], which claim to offer cracked versions of IonCube PHP Encoder or other software. These tools are illegal and may contain malware or viruses that can harm your computer or website. Therefore, you should avoid using them and only download IonCube PHP Encoder from their official website[^2^]. - - - -IonCube PHP Encoder is a great solution for protecting your PHP scripts from unauthorized use or modification. It can also help you create secure and flexible licenses for your customers or clients. If you want to learn more about IonCube PHP Encoder, you can visit their website[^2^] or read their documentation[^2^]. - -I'll try to write that for you. Here are some possible paragraphs: - -## Benefits of IonCube PHP Encoder - - - -Using IonCube PHP Encoder can bring many benefits to PHP developers and website owners. Here are some of the main advantages of using this tool: - - - -- It can protect your PHP code from unauthorized use, modification, or theft. By encoding your PHP scripts into bytecode, you can prevent anyone from viewing or changing your source code. This can help you protect your intellectual property, your business logic, and your customer data. 
- -- It can improve your PHP performance and reduce server load. By compiling your PHP scripts into bytecode, you can make them run faster and more efficiently on the PHP engine. This can improve your website speed, user experience, and SEO ranking. - -- It can create secure and flexible licenses for your PHP products. By using the built-in licensing features of the Pro and Cerberus editions, you can control where and for how long your PHP products can be used. You can also add custom license parameters and messages to suit your needs. - -- It can support the latest PHP versions and features. IonCube PHP Encoder supports PHP versions from 4.4 to 8.1, and allows you to use the latest PHP language features in your code. You can also encode your PHP scripts to run on different PHP versions without re-encoding. - -- It can integrate with other ionCube products and services. You can use IonCube PHP Encoder with other ionCube products such as Package Foundry, Bundler, and ionCube24. You can also use their online service if you don't want to install the software on your computer. - - - -These are some of the benefits of using IonCube PHP Encoder to protect your PHP scripts. If you want to try it for yourself, you can download a free trial from their website[^2^] or contact them for more information. - - dfd1c89656 - - - - - diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Vlad and Niki 12 Locks APK and Join the Brothers in Their Adventures.md b/spaces/congsaPfin/Manga-OCR/logs/Download Vlad and Niki 12 Locks APK and Join the Brothers in Their Adventures.md deleted file mode 100644 index cedc8261ff910a5a09cf4a94c89e4dc85512d9a8..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Vlad and Niki 12 Locks APK and Join the Brothers in Their Adventures.md +++ /dev/null @@ -1,88 +0,0 @@ -
    -

    Vlad and Niki 12 Locks APK: A Fun and Challenging Puzzle Game for Kids

    -

    If you are looking for a fun and challenging puzzle game for your kids, you should check out Vlad and Niki 12 Locks APK. This is an "Escape Room" type of game in which you must find all keys and unlock the target door. The game features the popular YouTube stars Vlad and Niki, who are known for their imaginative and adventurous videos. In this game, they will take you on a journey through different quest-rooms with loads of puzzles and mini-games. The game has amazing plasticine graphics, fun music, and easy controls that make it suitable for kids of all ages. Here is everything you need to know about Vlad and Niki 12 Locks APK.

    -

    What is Vlad and Niki 12 Locks APK?

    -

    A brief introduction to the game and its features

    -

    Vlad and Niki 12 Locks APK is a puzzle game that was released in April 2020 by RUD Present, a developer that specializes in creating games for kids. The game is based on the YouTube series "Vlad & Niki", which features the brothers Vlad and Niki having fun in various scenarios. The game follows a similar theme, as Vlad and Niki want some biscuits, but the jar is shut with 12 locks. To get the biscuits, they have to solve puzzles in different rooms, such as a kitchen, a bathroom, a garage, a spaceship, and more. The game has over 10 million downloads on Google Play Store and has received positive feedback from users.

    -

    vlad and niki 12 locks apk


    Download Zip ►►► https://urlca.com/2uOeuL



    -

    A brief introduction to Vlad and Niki, the YouTube stars behind the game

    -

    Vlad and Niki are two brothers from Dubai who have become one of the most popular kidfluencers on YouTube. They started their channel in 2018 and have since gained over 230 million subscribers worldwide across 21 channels in 18 languages. They are known for their creative videos that feature them playing with toys, having adventures, dressing up as superheroes, racing cars, flying planes, going into space, and more. Their videos are produced with a mix of live action, animation, and music to create comedic content for preschoolers. They also have their own merchandise line that includes toys, clothes, books, games, etc.

    -

    How to play Vlad and Niki 12 Locks APK?

    -

    The goal of the game and the basic controls

    -

    The goal of Vlad and Niki 12 Locks APK is to find all the keys and unlock the door that leads to the biscuit jar. The game has simple and intuitive controls that are easy for kids to use. You just have to tap on the screen to interact with objects, drag items to use them, and swipe to move around the room. You can also tap on Vlad and Niki to hear them talk and make funny sounds.

    -

    The different quest-rooms and puzzles in the game

    -

    The game has 12 quest-rooms that are themed after different scenarios that Vlad and Niki have explored in their videos. For example, you can visit a pirate ship, a dinosaur park, a candy land, a circus, a haunted house, and more. Each quest-room has its own unique puzzles that require logic, creativity, and observation skills to solve. Some puzzles involve finding hidden objects, matching colors, shapes, or numbers, solving math problems, cracking codes, etc. You can also use hints if you get stuck on a puzzle.

    -

    The mini-games and rewards in the game

    -

    Besides the puzzles, the game also has mini-games that you can play for fun and rewards. Some of the mini-games include racing cars, flying planes, shooting cannons, playing soccer, etc. You can earn coins by playing these mini-games and use them to buy stickers and decorations for your room. You can also collect stars by completing the quest-rooms and use them to unlock new outfits for Vlad and Niki. The game has a lot of variety and content to keep you entertained for hours.

    -

    Why should you download Vlad and Niki 12 Locks APK?

    -

    The benefits of playing the game for kids

    -

    Vlad and Niki 12 Locks APK is not only a fun game but also an educational one. It can help kids develop their cognitive skills, such as memory, attention, logic, problem-solving, etc. It can also stimulate their imagination and creativity by exposing them to different themes and scenarios. Moreover, it can foster their curiosity and interest in learning new things by presenting them with challenges and rewards. The game is also family-friendly and suitable for kids of all ages.

    -

    The positive reviews and ratings of the game

    -

    Vlad and Niki 12 Locks APK has received a lot of positive reviews and ratings from users who have downloaded and played the game. The game has an average rating of 4.5 out of 5 stars on Google Play Store and has been praised for its graphics, gameplay, sound effects, humor, etc. Some of the comments from users are:

    - "This game is awesome! My kids love it so much! They watch Vlad and Niki every day and they are so happy to play with them in this game!"
    - "This is one of the best games I have ever played! It is so fun and challenging! I love the puzzles and the mini-games! The graphics are amazing and the characters are so cute!"
    - "This game is very educational and entertaining for kids. It helps them learn new things and improve their skills. It is also very funny and colorful. I highly recommend it!"

    -

    The availability and compatibility of the game

    -

    Vlad and Niki 12 Locks APK is available for free download on Google Play Store for Android devices. You can also download it from other sources such as APKPure or APKFab if you want to install it manually on your device. The game requires Android 4.4 or higher to run smoothly and does not take up much space on your device. The game is also compatible with most devices, such as smartphones, tablets, etc.

    -

    Conclusion

    -

    A summary of the main points and a call to action

    -

    Vlad and Niki 12 Locks APK is a fun and challenging puzzle game for kids that features the popular YouTube stars Vlad and Niki. The game has 12 quest-rooms with different themes and puzzles that require logic, creativity, and observation skills to solve. The game also has mini-games that you can play for fun and rewards. The game has amazing plasticine graphics, fun music, easy controls, educational content, positive reviews, free availability, and wide compatibility. If you are looking for a game that will keep your kids entertained and engaged for hours, you should download Vlad and Niki 12 Locks APK today!

    -

    vlad and niki 12 locks game download
    -vlad and niki 12 locks mod apk
    -vlad and niki 12 locks online
    -vlad and niki 12 locks walkthrough
    -vlad and niki 12 locks play store
    -vlad and niki 12 locks free download
    -vlad and niki 12 locks puzzle game
    -vlad and niki 12 locks android
    -vlad and niki 12 locks latest version
    -vlad and niki 12 locks cheats
    -vlad and niki 12 locks review
    -vlad and niki 12 locks gameplay
    -vlad and niki 12 locks for pc
    -vlad and niki 12 locks ios
    -vlad and niki 12 locks apk pure
    -vlad and niki 12 locks apk mirror
    -vlad and niki 12 locks apk combo[^1^]
    -vlad and niki 12 locks apk mob.org[^2^]
    -vlad and niki 12 locks apk uptodown
    -vlad and niki 12 locks apk rexdl
    -vlad and niki 12 locks apk hack
    -vlad and niki 12 locks apk obb
    -vlad and niki 12 locks apk data
    -vlad and niki 12 locks apk revdl
    -vlad and niki 12 locks apk unlimited money
    -vlad and niki 12 locks apk no ads
    -vlad and niki 12 locks apk full version
    -vlad and niki 12 locks apk pro
    -vlad and niki 12 locks apk premium
    -vlad and niki 12 locks apk cracked
    -vlad and niki 12 locks apk mod menu
    -vlad and niki 12 locks apk mod download
    -vlad and niki 12 locks apk mod free shopping
    -vlad and niki 12 locks apk mod unlocked all levels
    -vlad and niki 12 locks apk mod unlimited hints
    -vlad and niki 12 locks apk mod god mode
    -vlad and niki 12 locks apk mod mega mod
    -vlad and niki 12 locks apk mod latest update
    -vlad and niki 12 locks apk mod offline mode
    -vlad and niki 12 locks apk mod no root required
    -how to download vlad and niki 12 locks apk
    -how to install vlad and niki 12 locks apk
    -how to play vlad and niki 12 locks apk
    -how to update vlad and niki 12 locks apk
    -how to uninstall vlad and niki 12 locks apk
    -how to solve puzzles in vlad and niki 12 locks apk
    -how to unlock all levels in vlad and niki 12 locks apk
    -how to get hints in vlad and niki 12 locks apk
    -how to get free coins in vlad and niki 12 locks apk

    -

    FAQs

    -

    Q1: How many levels are there in Vlad and Niki 12 Locks APK?

    -

    A1: There are 12 levels or quest-rooms in Vlad and Niki 12 Locks APK, each with a different theme and puzzle. You can play them in any order you want, but you have to complete all of them to unlock the final door and get the biscuits.

    -

    Q2: How can I download Vlad and Niki 12 Locks APK for free?

    -

    A2: You can download Vlad and Niki 12 Locks APK for free from Google Play Store for Android devices. Just search for the game name and tap on the install button. Alternatively, you can download it from other sources such as APKPure or APKFab if you want to install it manually on your device. Just make sure you download the latest version of the game and enable the unknown sources option in your device settings.

    -

    Q3: Is Vlad and Niki 12 Locks APK safe for kids?

    -

    A3: Yes, Vlad and Niki 12 Locks APK is safe for kids. The game does not contain any violence, gore, profanity, or inappropriate content. The game is also designed for kids of all ages, with easy controls, fun graphics, and educational content. The game is also family-friendly and suitable for playing with parents or siblings.

    -

    Q4: What are some tips and tricks for playing Vlad and Niki 12 Locks APK?

    -

    A4: Some tips and tricks for playing Vlad and Niki 12 Locks APK are:

    - Explore the room thoroughly and look for clues, objects, or hints that can help you solve the puzzles.
    - Use the hints if you get stuck on a puzzle. You can get hints by watching ads or using coins that you earn from playing mini-games.
    - Play the mini-games to earn coins and stars that you can use to buy stickers, decorations, and outfits for your room and characters.
    - Watch Vlad and Niki's videos on YouTube to get familiar with their personalities, humor, and adventures. This can help you enjoy the game more and relate to the characters better.

    -

    Q5: Where can I find more games like Vlad and Niki 12 Locks APK?

    -

    A5: If you like Vlad and Niki 12 Locks APK, you might also like other games by RUD Present, such as:

    - Vlad & Niki Supermarket game for Kids: A game where you can go shopping with Vlad and Niki in a supermarket full of surprises.
    - Vlad & Niki World: A game where you can create your own world with Vlad and Niki using different blocks, items, and characters.
    - Vlad & Niki Run: A game where you can run, jump, slide, and fly with Vlad and Niki in an endless runner adventure.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Honkai Star Rail - A Space Fantasy RPG You Cant Miss.md b/spaces/congsaPfin/Manga-OCR/logs/Honkai Star Rail - A Space Fantasy RPG You Cant Miss.md deleted file mode 100644 index 54e1c3389e26f55a4f648acbc76a3640d78e6c06..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Honkai Star Rail - A Space Fantasy RPG You Cant Miss.md +++ /dev/null @@ -1,99 +0,0 @@ -
    -

    How to Download Honkai Impact Star Rail: A Guide for Space Fantasy RPG Fans

    -

    Are you a fan of space fantasy RPGs? Do you love exploring different worlds and fighting against enemies with your unique skills and weapons? If you answered yes to these questions, then you should definitely check out Honkai Impact Star Rail, a brand-new game from the HoYoverse series. In this article, we will tell you what Honkai Impact Star Rail is, why you should download it, and how to download it on your device. Let's get started!

    -

    download honkai impact star rail


    Download Zip ———>>> https://urlca.com/2uO5Es



    -

    What is Honkai Impact Star Rail?

    -

    Honkai Impact Star Rail is a spin-off game from the popular Honkai Impact 3rd game, which is set in a post-apocalyptic world where humans fight against the Honkai, a mysterious force that can corrupt anything it touches. Honkai Impact Star Rail takes place in a different timeline, where humans have developed advanced technology and traveled to outer space. However, they still face the threat of the Honkai, which can manifest in different forms across the galaxy.

    -

    A brand-new HoYoverse game

    -

    Honkai Impact Star Rail is part of the HoYoverse series, which includes other games such as Genshin Impact and Tears of Themis. The HoYoverse games share some common elements, such as characters, lore, and aesthetics, but they also have their own unique features and stories. Honkai Impact Star Rail is the first HoYoverse game that focuses on space exploration and combat.

    -

    A space fantasy RPG with stunning graphics and gameplay

    -

    Honkai Impact Star Rail is a space fantasy RPG that lets you experience the beauty and mystery of the cosmos. You can explore different planets and star systems, encounter various creatures and enemies, and collect resources and treasures. You can also engage in thrilling battles using your characters' abilities and weapons, which can be customized according to your preferences. The game boasts stunning graphics that immerse you in the HoYoverse atmosphere.

    -

    A story of traveling across the galaxy with your companions

    -

    Honkai Impact Star Rail has a rich and captivating story that follows you as a special traveler who journeys across the galaxy using a device called the Star Rail. Along the way, you will meet different characters who will join you on your journey, such as Kiana, Bronya, Mei, and more. You will also uncover the secrets of the HoYoverse and the origin of the Honkai. The game features voice acting and cutscenes that enhance the storytelling.

    -

    Why should you download Honkai Impact Star Rail?

    -

    If you are still not convinced that Honkai Impact Star Rail is a game worth playing, here are some more reasons why you should download it:

    -

    To experience a new game mode: Galactic Roaming

    -

    One of the most unique features of Honkai Impact Star Rail is the Galactic Roaming mode, which allows you to freely explore the galaxy and discover new planets and star systems. You can use your Star Rail to travel between different locations, and use your scanner to detect anomalies and events. You can also interact with other players and NPCs, and participate in quests and missions. Galactic Roaming is a dynamic and open-ended mode that offers endless possibilities and surprises.

    -

    How to download honkai impact star rail on PC
    -Honkai impact star rail galactic roaming patch notes
    -Honkai impact star rail silver wolf character guide
    -Honkai impact star rail astral express gameplay
    -Honkai impact star rail honey hunter world database
    -Honkai impact star rail best characters tier list
    -Honkai impact star rail official website and news
    -Honkai impact star rail system requirements and compatibility
    -Honkai impact star rail space fantasy RPG review
    -Honkai impact star rail trailblazer rewards and top-up
    -Honkai impact star rail aeon lore and story
    -Honkai impact star rail quantum type characters and skills
    -Honkai impact star rail hoyoverse crossover event
    -Honkai impact star rail tips and tricks for beginners
    -Honkai impact star rail latest update and bug fixes
    -Honkai impact star rail community and fan art
    -Honkai impact star rail nihility path silver wolf build
    -Honkai impact star rail melissa fahn voice actor interview
    -Honkai impact star rail soundtrack and theme song
    -Honkai impact star rail coupon codes and free gems
    -Honkai impact star rail best weapons and equipment
    -Honkai impact star rail reroll guide and strategy
    -Honkai impact star rail wiki and information source
    -Honkai impact star rail discord server and chat
    -Honkai impact star rail youtube gameplay videos and channels
    -Honkai impact star rail reddit discussion and memes
    -Honkai impact star rail twitter news and updates
    -Honkai impact star rail facebook page and group
    -Honkai impact star rail instagram photos and stories
    -Honkai impact star rail tiktok videos and trends

    -

    To collect and customize your characters and weapons

    -

    Honkai Impact Star Rail has a diverse and colorful cast of characters that you can collect and upgrade. Each character has their own personality, backstory, and skills, which can be enhanced by equipping them with different weapons and outfits. You can also customize your weapons by modifying their appearance, performance, and effects. You can mix and match different combinations of characters and weapons to suit your playstyle and preferences.

    -

    To join a community of Trailblazers and explore the HoYoverse

    -

    Honkai Impact Star Rail is not only a single-player game, but also a multiplayer game that lets you connect with other players from around the world. You can join a guild of Trailblazers, which are groups of players who share the same passion for space exploration and adventure. You can chat with your guildmates, cooperate with them in missions, and compete with them in rankings. You can also visit other players' spaceships and see how they decorate them. Honkai Impact Star Rail is a game that fosters a sense of community and friendship among its players.

    -

    How to download Honkai Impact Star Rail?

    -

    Now that you know what Honkai Impact Star Rail is and why you should download it, you might be wondering how to download it on your device. Well, don't worry, because we have got you covered. Here are the steps to download Honkai Impact Star Rail on different platforms:

    -

    For Android users

    -

    If you have an Android device, you can download Honkai Impact Star Rail from the Google Play Store. Here is how:

    -
    1. Open the Google Play Store app on your device.
    2. Search for "Honkai Impact Star Rail" in the search bar.
    3. Select the game from the results and tap on "Install".
    4. Wait for the game to download and install on your device.
    5. Launch the game and enjoy!
    -

    Note: Honkai Impact Star Rail requires Android 5.0 or higher, and at least 4 GB of RAM.

    -

    For iOS users

    -

    If you have an iOS device, you can download Honkai Impact Star Rail from the App Store. Here is how:

    -
    1. Open the App Store app on your device.
    2. Search for "Honkai Impact Star Rail" in the search bar.
    3. Select the game from the results and tap on "Get".
    4. Wait for the game to download and install on your device.
    5. Launch the game and enjoy!
    -

    Note: Honkai Impact Star Rail requires iOS 10.0 or higher, and at least 4 GB of RAM.

    -

    For PC users

    -

    If you have a PC, you can download Honkai Impact Star Rail from the official website. Here is how:

    -
    1. Go to https://star-rail.mihoyo.com/en-us/home.
    2. Click on "Download Now" on the homepage.
    3. Select your region and language from the drop-down menu.
    4. Click on "Download" to start downloading the game installer.
    5. Run the installer and follow the instructions to install the game on your PC.
    6. Launch the game and enjoy!
    -

    Note: Honkai Impact Star Rail requires Windows 7 or higher, Intel Core i5 or higher, NVIDIA GeForce GTX 1050 or higher, and at least 8 GB of RAM.

    -

    Conclusion

    -

    Honkai Impact Star Rail is a game that you should not miss if you are a fan of space fantasy RPGs. It is a game that offers you a stunning and immersive experience of exploring the galaxy, collecting and customizing your characters and weapons, and fighting against the Honkai. It is also a game that lets you connect with other players and join a community of Trailblazers. If you want to download Honkai Impact Star Rail, you can follow the steps we have provided for different platforms. We hope you enjoy the game and have fun!

    -

    FAQs

    -

    Here are some frequently asked questions about Honkai Impact Star Rail:

    -

    Q: Is Honkai Impact Star Rail free to play?

    -

    A: Yes, Honkai Impact Star Rail is free to play. However, it also has some optional in-game purchases that can enhance your gameplay.

    -

    Q: Is Honkai Impact Star Rail connected to Honkai Impact 3rd?

    -

    A: Honkai Impact Star Rail is a spin-off game from Honkai Impact 3rd, which means it has some connections to the main game, such as characters and lore. However, it also has its own independent story and timeline, which means you can play it without playing Honkai Impact 3rd first.

    -

    Q: How can I get more characters and weapons in Honkai Impact Star Rail?

    -

    A: You can get more characters and weapons in Honkai Impact Star Rail by participating in events, completing missions, and using the gacha system. The gacha system is a random draw that lets you obtain different items, such as characters, weapons, outfits, and more. You can use different currencies to perform gacha draws, such as crystals, tickets, and coupons.

    -

    Q: How can I play Honkai Impact Star Rail with my friends?

    -

    A: You can play Honkai Impact Star Rail with your friends by joining a guild of Trailblazers, which are groups of players who share the same passion for space exploration and adventure. You can chat with your guildmates, cooperate with them in missions, and compete with them in rankings. You can also visit other players' spaceships and see how they decorate them.

    -

    Q: What are the system requirements for Honkai Impact Star Rail?

    -

    A: The system requirements for Honkai Impact Star Rail vary depending on the platform you are using. For Android users, you need Android 5.0 or higher, and at least 4 GB of RAM. For iOS users, you need iOS 10.0 or higher, and at least 4 GB of RAM. For PC users, you need Windows 7 or higher, Intel Core i5 or higher, NVIDIA GeForce GTX 1050 or higher, and at least 8 GB of RAM.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Enjoy Gangster Paradise Lofi Song - Free Download and Streaming Options.md b/spaces/congsaPfin/Manga-OCR/logs/How to Enjoy Gangster Paradise Lofi Song - Free Download and Streaming Options.md deleted file mode 100644 index 7bccd931c7d3836c2e52a324ea59206711e1e73a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Enjoy Gangster Paradise Lofi Song - Free Download and Streaming Options.md +++ /dev/null @@ -1,107 +0,0 @@ -
    -

    Gangster Paradise Lofi Song Download: A Guide for Music Fans

    -

    If you are looking for a relaxing and soothing music genre that can help you unwind, study, or work, you might want to try lo-fi music. Lo-fi stands for low-fidelity, which means that the sound quality is intentionally imperfect and rough. Lo-fi music is influenced by hip hop, jazz, funk, and electro genres, and it often features instrumental, looped, and sampled tracks with ambient noises. Many people find lo-fi music to be mood-boosting, stress-reducing, and focus-enhancing.

    -

    One of the most popular lo-fi songs that you can download or stream online is Gangsta's Paradise by Coolio. This song was originally released in 1995 as a hip hop hit that interpolated Stevie Wonder's song Pastime Paradise from 1976. The song was a social commentary on the struggles of inner-city life and violence, and it was a huge commercial and critical success, winning a Grammy and other awards. However, in recent years, many lo-fi artists have remixed and reinterpreted the song in a new genre, giving it a fresh and mellow vibe.

    -

    gangster paradise lofi song download


    Downloadhttps://urlca.com/2uOdDJ



    -

    In this article, we will explore the definition and characteristics of lo-fi music, the origin and meaning of Gangsta's Paradise by Coolio, and the popularity and appeal of its lo-fi remixes. We will also provide some links to download or stream Gangsta's Paradise lo-fi song online. Whether you are a fan of hip hop or lo-fi music, or you are just curious about this musical phenomenon, this article will help you discover more about Gangsta's Paradise lo-fi song download.

    -

    Lo-Fi Music Definition and Characteristics

    -

    Lo-fi music is a type of music that has been made of low quality, embracing imperfections and making them part of the sound. Lo-fi music is characterized by its use of limited instrumentation and production techniques that mimic the sound of older recording technologies. For example, lo-fi music may include incorrect notes, background noise, low hums, tape hiss, distortion, or vinyl crackle.

    -

Lo-fi music emerged out of DIY (do-it-yourself) music culture in the 1950s as an alternative to high-fidelity (hi-fi) music that was produced in professional studios. Lo-fi music was influenced by various genres such as hip hop, jazz, funk, and electro, and it often features instrumental tracks that are looped or sampled. The lo-fi remixes of Gangsta's Paradise have become popular because they pair the original's memorable melody and message with the genre's mellow sound, underlining both its nostalgia and its relevance to contemporary issues. The remixes also showcase the creativity and diversity of lo-fi music, as well as the potential for cross-genre collaborations and innovations.

    -

    Conclusion

    -

    Gangsta's Paradise lo-fi song download is a great choice for music fans who want to enjoy a relaxing and soothing music genre that can help them unwind, study, or work. Lo-fi music is a type of music that has low-fidelity sound quality with intentional imperfections, influenced by hip hop, jazz, funk, and electro genres. Gangsta's Paradise is a song by Coolio that was released in 1995 as a hip hop hit that interpolated Stevie Wonder's song Pastime Paradise. The song was a social commentary on the struggles of inner-city life and violence, and it was a huge commercial and critical success. Many lo-fi artists have remixed and reinterpreted the song in a new genre, giving it a fresh and mellow vibe, while still retaining some elements of the original song. The remixes have gained millions of views and streams on YouTube, Spotify, and other platforms, attracting fans of both hip hop and lo-fi music, as well as new listeners.

    -

    If you want to download or stream Gangsta's Paradise lo-fi song online, you can check out some of these links:

    -
      -
    • [Gangsta's Paradise (Lofi Remix) by Lofi Fruits Music]
    • -
    • [Gangsta's Paradise (Lofiline Remix) by Lofiline]
    • -
    • [Gangsta's Paradise (Chill Fruits Music Remix) by Chill Fruits Music]
    • -
    • [Gangsta's Paradise (Future Λ Ready Remix) by Future Λ Ready]
    • -
    -

    We hope you enjoyed this article and learned more about Gangsta's Paradise lo-fi song download. If you have any questions or comments, feel free to leave them below. Thank you for reading!

    -

    FAQs

    -

    What are some other lo-fi songs that are based on hip hop classics?

    -

    Some other lo-fi songs that are based on hip hop classics are:

    -


    -
      -
    • [Still D.R.E. (Lofi Remix) by Lofi Fruits Music](https://www.youtube.com/watch?v=0FZ9x8lQq9w)
    • -
    • [Juicy (Lofi Remix) by Lofiline](https://www.youtube.com/watch?v=Zy0gXnY6dMw)
    • -
    • [Changes (Lofi Remix) by Chill Fruits Music](https://www.youtube.com/watch?v=8JtWfYcQOaA)
    • -
    • [Lose Yourself (Lofi Remix) by Future Λ Ready](https://www.youtube.com/watch?v=0v7XyWmzT9k)
    • -
    -

    How can I make my own lo-fi music at home?

    -

    You can make your own lo-fi music at home using some simple tools and techniques. Here are some steps to follow:

    -
      -
    1. Choose a music software that allows you to create beats and loops. Some examples are FL Studio, Ableton Live, GarageBand, or Audacity.
    2. -
    3. Find some samples or sounds that you want to use for your lo-fi track. You can use your own recordings, or download some from online sources such as Looperman, Splice, or YouTube.
    4. -
    5. Load your samples or sounds into your music software and arrange them into a pattern or sequence. You can use the piano roll or the step sequencer to do this.
    6. -
7. Add some effects to your track to give it a lo-fi feel. Some common effects are EQ, compression, reverb, delay, distortion, or vinyl simulation; a small code sketch of this step follows the list.
    8. -
    9. Export your track as an audio file and share it with others online or offline.
    10. -
    -
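If you prefer to experiment with the effects step in code rather than inside a music program, here is a minimal sketch using the pydub library. It assumes ffmpeg is installed and that you have a clean recording saved as sample.wav (a hypothetical filename); the 3500 Hz cutoff and the -36 dB hiss level are arbitrary starting points to tweak by ear.

```python
# A rough lo-fi pass: muffle the highs, then layer in fake tape hiss.
# Assumes ffmpeg is installed and "sample.wav" exists next to the script.
from pydub import AudioSegment
from pydub.generators import WhiteNoise

track = AudioSegment.from_file("sample.wav")

# A low-pass filter mimics the dull top end of old tape or vinyl playback.
muffled = track.low_pass_filter(3500)

# Quiet white noise stands in for tape hiss; "- 36" lowers it by 36 dB.
hiss = WhiteNoise().to_audio_segment(duration=len(muffled)) - 36

lofi = muffled.overlay(hiss)
lofi.export("sample_lofi.mp3", format="mp3")
```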

    What are some benefits of listening to lo-fi music while studying or working?

    -

    Some benefits of listening to lo-fi music while studying or working are:

    -
      -
    • Lo-fi music can help you relax and reduce stress by creating a calm and soothing atmosphere.
    • -
    • Lo-fi music can help you focus and improve your concentration by blocking out distractions and providing a steady background noise.
    • -
    • Lo-fi music can help you boost your mood and motivation by stimulating your brain and releasing dopamine.
    • -
    • Lo-fi music can help you enhance your creativity and memory by activating different parts of your brain and stimulating neural connections.
    • -
    -

    Where can I find more lo-fi music online?

    -

    You can find more lo-fi music online on various platforms and channels that specialize in lo-fi music. Some examples are:

    -
      -
    • [YouTube](https://www.youtube.com/): YouTube is one of the most popular and accessible platforms for lo-fi music. You can find many lo-fi music channels, playlists, and live streams on YouTube, such as [ChilledCow](https://www.youtube.com/channel/UCSJ4gkVC6NrvII8umztf0Ow), [Lofi Girl](https://www.youtube.com/channel/UCqGzJ0YixYjxhVHYCQcixEA), [Lofi Fruits Music](https://www.youtube.com/channel/UC8uJaXvHjsq3TCxa_3XsRrw), and [Lofiline](https://www.youtube.com/channel/UC2m4Z7f9w7oJyK1l1t9yQDw).
    • -
    • [Spotify](https://www.spotify.com/): Spotify is one of the most popular and versatile platforms for lo-fi music. You can find many lo-fi music artists, albums, playlists, and podcasts on Spotify, such as [Chillhop Music](https://open.spotify.com/artist/6MpczSUmrdLul28bsaMxTe), [Lofi Fruits Music](https://open.spotify.com/artist/5hAMVKYHFyLbGhXteGOWnU), [Lofiline](https://open.spotify.com/artist/6oBm8HB0yfrxUxzD2lmKsQ), and [Lo-Fi Beats](https://open.spotify.com/playlist/37i9dQZF1DWWQRwui0ExPn).
    • -
    • [SoundCloud](https://soundcloud.com/): SoundCloud is one of the most popular and creative platforms for lo-fi music. You can find many lo-fi music artists, tracks, groups, and stations on SoundCloud, such as [Chillhop Music](https://soundcloud.com/chillhopdotcom), [Lofi Fruits Music](https://soundcloud.com/lofi-fruits-music), [Lofiline](https://soundcloud.com/lofiline), and [Lo-Fi Hip Hop Radio](https://soundcloud.com/stations/track/chillhopdotcom/chillhop-raw-cuts-2-full-album).
    • -
    -

    Who are some of the most popular lo-fi artists today?

    -

    Some of the most popular lo-fi artists today are:

    -
      -
    • [Jinsang](https://open.spotify.com/artist/5FsfZfRtrtXwNn2JYDeaC5): Jinsang is a lo-fi producer from California who creates smooth and soulful beats with influences from jazz, hip hop, and R&B. Some of his popular songs are [Solitude](https://open.spotify.com/track/4ksgWk9duoHjppnWhRfXsj), [Affection](https://open.spotify.com/track/1rZuEht4txBRmWtZdi2qOL), and [Summer's Day v2](https://open.spotify.com/track/6xGlprv9fmlMj2NhdxNH1C).
    • -
    • [Idealism](https://open.spotify.com/artist/6YJ4EgQzDfJnIHRbqIHAdD): Idealism is a lo-fi producer from Germany who creates warm and cozy beats with influences from jazz, hip hop, and ambient. Some of his popular songs are [Controlla](https://open.spotify.com/track/7fRrTyKvE4Skh93v97gtcU), [Hiraeth](https://open.spotify.com/track/3ZffCQKLFLUvYM59XKLbVm), and [Another Perspective](https://open.spotify.com/track/5Sf3GyLEAzJXxZ5mbCPXTu).
    • -
• [Tomppabeats](https://open.spotify.com/artist/0F0MA0ns8oXwGw66B2BSXm): Tomppabeats is a lo-fi producer from Finland who creates nostalgic and dreamy beats with influences from jazz, hip hop, and anime. Some of his popular songs are [U Love](https://open.spotify.com/track/29bIdrNpAToAoznOgNdnGg), [Far Away](https://open.spotify.com/track/7lZauDnRoAC3kmaYae2opv), and [Harbor LP](https://open.spotify.com/album/7x8dCjCr0x6x2lXKHkRnB4).
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Install Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK Data on Your Phone.md b/spaces/congsaPfin/Manga-OCR/logs/How to Install Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK Data on Your Phone.md deleted file mode 100644 index d273cc869d7e90da882f5e46853f06ee1e085330..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Install Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK Data on Your Phone.md +++ /dev/null @@ -1,84 +0,0 @@ - -

    Download Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data

    -

    If you are a fan of first-person shooter games, you might have heard of Modern Combat 5 Blackout, one of the most popular and thrilling games in this genre. But did you know that you can download a mod version of this game that allows you to play offline and enjoy unlimited money and credits? In this article, we will tell you everything you need to know about Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data, including its features and how to download and install it on your Android device.

    -

    download modern combat 5 blackout mod (offline) v1.3.1a apk + data


    Download Ziphttps://urlca.com/2uOelN



    -

    Introduction

    -

    What is Modern Combat 5 Blackout?

    -

    Modern Combat 5 Blackout is a first-person shooter game developed by Gameloft and released in 2014. It is the fifth installment in the Modern Combat series, which is inspired by the Call of Duty franchise. The game features a single-player campaign that follows the story of Caydan Phoenix, a former soldier who is involved in a global conspiracy, as well as a multiplayer mode that allows you to compete with other players online in various modes and maps.

    -

    Why download the mod version?

    -

    While Modern Combat 5 Blackout is a free-to-play game, it requires an internet connection to play and has in-app purchases that can enhance your gameplay experience. However, if you want to play offline or enjoy unlimited money and credits without spending real money, you can download the mod version of this game, which has been modified by third-party developers to unlock these features. The mod version also has some other benefits, such as improved performance and bug fixes.

    -

    Features of Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data

    -

    Unlimited money and credits

    -

    One of the main features of Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data is that it gives you unlimited money and credits, which are the two currencies used in the game. You can use them to buy new weapons, armor, skills, and items that can help you in your missions. You can also upgrade your weapons and armor to make them more powerful and durable.

    -

    Offline mode

    -

    Another feature of Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data is that it allows you to play offline, without needing an internet connection. This means that you can enjoy the game anytime and anywhere, without worrying about data usage or connection issues. You can also play the single-player campaign without any interruptions or ads.

    -

    High-quality graphics and sound

    -

    Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data also maintains the high-quality graphics and sound of the original game, which make it one of the best-looking and sounding games on mobile devices. The game features realistic environments, detailed characters, stunning effects, and immersive soundtracks that will make you feel like you are in the middle of a war zone.

    -

    Customizable controls and loadouts

    -

    Another feature of Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data is that it allows you to customize your controls and loadouts according to your preferences and playstyle. You can choose from different control schemes, such as auto-shoot, virtual joystick, or gyroscope, and adjust the sensitivity and layout of the buttons. You can also create your own loadouts by selecting from different classes, weapons, attachments, skills, and items that suit your strategy and tactics.

    -

    Multiplayer and solo modes

    -

    Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data also offers you the option to play either multiplayer or solo modes, depending on your mood and preference. You can join or create online matches with other players around the world in various modes, such as team deathmatch, free-for-all, capture the flag, and more. You can also chat with your teammates and friends using the in-game voice chat feature. Alternatively, you can play solo missions that challenge your skills and reflexes in different scenarios and difficulties.

    -

    How to download and install Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data

    -

    Step 1: Download the APK and data files

    -

    The first step to download and install Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data is to download the APK and data files from a reliable source. You can use the link below to download them directly to your device or computer.

    -


    -

    Download Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data

    -

    Step 2: Enable unknown sources on your device

    -

    The next step is to enable unknown sources on your device, which will allow you to install apps from sources other than the Google Play Store. To do this, go to your device's settings, then security, then unknown sources, and toggle it on.

    -

    Step 3: Install the APK file

    -

    After enabling unknown sources, you can install the APK file by tapping on it and following the instructions on the screen. It may take a few minutes for the installation to complete.

    -

    Step 4: Extract and copy the data folder to your device's storage

    -

    The next step is to extract and copy the data folder to your device's storage. To do this, you will need a file manager app that can extract zip files, such as ES File Explorer or ZArchiver. Open the file manager app and locate the zip file that contains the data folder. Extract it and copy the folder named "com.gameloft.android.ANMP.GloftM5HM" to your device's storage, under the path "Android/obb". Make sure that the folder is in the correct location before proceeding.
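If your phone is plugged into a computer instead, the same copy can be scripted. Below is a minimal sketch in Python; PHONE_ROOT is a placeholder for wherever your device shows up when mounted, and the script assumes the extracted data folder sits next to it. This is only an alternative to the file manager method above.

```python
# A minimal sketch: copy the extracted data folder into Android/obb on a
# phone mounted as external storage. PHONE_ROOT is a hypothetical mount
# point; replace it with wherever your device appears on your computer.
import shutil
from pathlib import Path

PHONE_ROOT = Path("/media/you/PHONE")              # placeholder mount point
SRC = Path("com.gameloft.android.ANMP.GloftM5HM")  # extracted data folder

dest = PHONE_ROOT / "Android" / "obb" / SRC.name
if dest.exists():
    shutil.rmtree(dest)        # remove any stale copy first
shutil.copytree(SRC, dest)     # copy the whole folder, keeping its name
print(f"Data folder copied to {dest}")
```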

    -

    Step 5: Launch the game and enjoy

    -

    The final step is to launch the game and enjoy playing Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data on your device. You can access all the features of the mod version without any limitations or restrictions.

    -

    Conclusion

    -

    In conclusion, Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data is a great way to experience one of the best first-person shooter games on mobile devices with enhanced features and performance. You can download and install it easily by following the steps above and enjoy playing offline with unlimited money and credits. If you are looking for a thrilling and action-packed game that will keep you entertained for hours, you should definitely try Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data.

    -

    FAQs

    -
      -
    • Is Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data safe to download and install?
    • -

      Yes, Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data is safe to download and install as long as you use a trusted source and follow the instructions carefully. However, you should always be careful when downloading apps from unknown sources and scan them for viruses or malware before installing them.

      -
    • Will Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data work on my device?
    • -

      Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data should work on most Android devices that have at least 2 GB of RAM and Android 4.0 or higher. However, some devices may not be compatible or may experience some issues due to different specifications or settings. You can check the compatibility of your device by visiting the Google Play Store page of the original game and seeing if it is supported.

      -
    • Can I play Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data with my friends?
    • -

      Yes, you can play Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data with your friends online in the multiplayer mode. However, you may not be able to join the same matches or servers as players who are using the original version of the game, as they may have different versions or updates. You can also invite your friends to play with you in the solo mode by using the co-op feature.

      -
    • Will I get banned for using Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data?
    • -

      There is a possibility that you may get banned for using Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data, as it is not an official version of the game and may violate the terms and conditions of Gameloft. However, this is unlikely to happen if you play offline and do not use any cheats or hacks that may affect the gameplay or give you an unfair advantage over other players. You should also avoid logging in with your social media accounts or Google Play Games account, as they may detect that you are using a modded version of the game.

      -
    • Can I update Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data?
    • -

      No, you cannot update Modern Combat 5 Blackout Mod (Offline) v1.3.1a APK + Data, as it is not connected to the official servers of Gameloft and may not receive any updates or patches from them. If you want to update the game, you will have to download and install a newer version of the mod or switch back to the original version of the game from the Google Play Store.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Stick War Legacy - Control Your Stickmen in a War for Peace and Knowledge.md b/spaces/congsaPfin/Manga-OCR/logs/Stick War Legacy - Control Your Stickmen in a War for Peace and Knowledge.md deleted file mode 100644 index b4ef4ca3ba004db634693c46b3569dc0ccb8a65b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Stick War Legacy - Control Your Stickmen in a War for Peace and Knowledge.md +++ /dev/null @@ -1,96 +0,0 @@ -
    -

    Stickman War Legacy: A Fun and Challenging Strategy Game

    -

    If you are looking for a game that combines strategy, action, and stick figures, then you should try Stickman War Legacy. This game is a remake of the original Stick War, one of the most popular and highest rated web games of all time. In this game, you can control your army of stickmen in various ways, build units, mine gold, learn new skills, and fight against different enemies. You can also choose from different game modes, such as classic campaign, endless deads, tournament, missions, and more. In this article, we will tell you more about what Stickman War Legacy is, how to play it, and why you should play it.

    -

    What is Stickman War Legacy?

    -

    A remake of the original Stick War game

    -

    Stickman War Legacy is a mobile game published by Max Games Studios in 2016. It is a remake of the original Stick War game that was released in 2009 in Flash. The original game was created by Jason Whitham and Brock White, who also made other stick figure games such as Stick RPG and Territory War. The original game was praised for its gameplay, graphics, sound effects, and humor. It also had a sequel called Stick War 2: Order Empire in 2012.

    -

    stickman war legacy


    Download Filehttps://urlca.com/2uO96G



    -

    A mobile game with multiple modes and features

    -

    Stickman War Legacy is available for iOS and Android devices. It has several modes and features that make it more fun and diverse than the original game. For example, it has a missions mode where new levels are released every Friday. It also has a saga style map with multiple rewards. You can also unlock crowns for each difficulty level: normal, hard, and insane. Moreover, you can unlock skins for all characters and weapons for each class. You can also customize your own avatar with different hats, faces, hair styles, and colors.

    -

    A game with stick figure graphics and animations

    -

    One of the most distinctive aspects of Stickman War Legacy is its graphics and animations. The game uses simple but colorful stick figure drawings to represent the characters, units, buildings, weapons, and environments. The game also has smooth and realistic animations for the movements, attacks, deaths, and blood effects of the stickmen. The game also has sound effects and voice overs that add to the atmosphere and humor of the game.

    -

    How to play Stickman War Legacy?

    -

    Choose your difficulty level and nation

    -

    The game starts with a tutorial that teaches you the basics of the game. You can choose your difficulty level from easy to insane. You can also choose your nation from four options: Order (peaceful and balanced), Chaos (aggressive and chaotic), Elemental (magical and elemental), or 3rd Party (neutral and diverse). Each nation has its own unique units, skills, strengths, and weaknesses.

    -

    Control your army and units in different ways

    -

    You can control your army in formations or play each unit individually. You can use the buttons on the screen to move left or right, attack, defend, mine gold, build units, or cast spells. You can also tap on a unit to select it or drag it to move it around. You can also use gestures to zoom in or out or rotate the camera angle.

    -

    Build units, mine gold, and learn new skills

    -

You need to build units to fight against the enemy army. You can build different types of units, such as miners, swordwrath, spearton, archidon, magikill, or giants. Each unit has its own cost, speed, health, damage, and range. You need to mine gold from the gold mines to afford the units. You can also learn new skills from the skill tree, such as faster mining, stronger attacks, or special abilities.

    -


    -

    Destroy the enemy statue and capture territories

    -

    The main objective of the game is to destroy the enemy statue before they destroy yours. You need to use your strategy and tactics to overcome the enemy defenses and units. You can also capture territories by destroying the towers and flags in each region. Capturing territories will give you more gold and resources. You can also unlock new units and skills by capturing territories.

    -

    Why play Stickman War Legacy?

    -

    It is fun, challenging, and addicting

    -

    Stickman War Legacy is a game that will keep you entertained for hours. It is fun to control your stickmen army and watch them fight in epic battles. It is challenging to face different enemies and scenarios that require different strategies and skills. It is addicting to progress through the game and unlock new units, skills, skins, and modes.

    -

    It has a variety of game types and levels

    -

    Stickman War Legacy has a lot of content and replay value. It has a classic campaign mode where you can play through 40 levels with different nations and difficulties. It also has an endless deads mode where you can survive waves of zombies and skeletons. It also has a tournament mode where you can compete against other players online. It also has a missions mode where you can complete weekly challenges and earn rewards.

    -

    It has a classic strategy war game style

    -

    Stickman War Legacy is a game that pays homage to the classic strategy war games of the past. It has a simple but effective gameplay that requires you to manage your resources, build your army, and plan your attacks. It also has a retro style graphics and sound effects that create a nostalgic atmosphere. It also has a humorous tone that makes fun of the stick figure genre and war games in general.

    -

    It has a loyal fan base and community

    -

    Stickman War Legacy is a game that has a loyal fan base and community. The game has over 100 million downloads on Google Play Store and over 4 million ratings with an average of 4.5 stars. The game also has an active community on social media platforms such as Facebook, YouTube, Instagram, and Discord. The fans share their gameplay videos, tips, tricks, fan art, memes, and feedback with each other.

    -

    Conclusion

    -

    Stickman War Legacy is a fun and challenging strategy game that you should try if you like stick figure games or war games in general. You can control your army of stickmen in various ways, build units, mine gold, learn new skills, and fight against different enemies. You can also choose from different game modes, such as classic campaign, endless deads, tournament, missions, and more. The game also has stick figure graphics and animations, sound effects and voice overs, humor and nostalgia, and a loyal fan base and community. You can download Stickman War Legacy for free on iOS or Android devices and enjoy this amazing game.

    -

    Frequently Asked Questions

    -
      -
    • How do I unlock new units and skills?
    • -

      You can unlock new units and skills by capturing territories in the campaign mode or by completing missions in the missions mode.

      -
    • How do I change my nation or difficulty level?
    • -

      You can change your nation or difficulty level by tapping on the settings icon on the top right corner of the screen.

      -
    • How do I customize my avatar or use skins?
    • -

      You can customize your avatar or use skins by tapping on the avatar icon on the top left corner of the screen.

      -
    • How do I play online with other players?
    • -

      You can play online with other players by tapping on the tournament icon on the bottom right corner of the screen.

      -
    • How do I contact the developers or give feedback?
    • -

      You can contact the developers or give feedback by tapping on the support icon on the bottom left corner of the screen.

      -
Source: [Stickman War Legacy - Apps on Google Play](https://play.google.com/store/apps/details?id=com.maxgames.stickwarlegacy&hl=en_US&gl=US)

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Stumble Guys 3.8 A Fast-Paced Knockout Game with Amazing Graphics and Physics.md b/spaces/congsaPfin/Manga-OCR/logs/Stumble Guys 3.8 A Fast-Paced Knockout Game with Amazing Graphics and Physics.md deleted file mode 100644 index 6f29cce4030f89009f743e94461e982e9ba92242..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Stumble Guys 3.8 A Fast-Paced Knockout Game with Amazing Graphics and Physics.md +++ /dev/null @@ -1,99 +0,0 @@ -
    -

    How to Download Stumble Guys 3.8 - The Ultimate Knockout Game

    -

    Do you love playing party games with your friends? Do you enjoy racing through chaotic obstacle courses with up to 32 players online? Do you want to have a lot of fun and laughs while stumbling through different levels until one victor is crowned? If you answered yes to any of these questions, then you should definitely download Stumble Guys 3.8 - the ultimate knockout game!

    -

    What is Stumble Guys 3.8?

    -

    Stumble Guys is a massive multiplayer party knockout game that was released in October 2021 by Scopely. It is inspired by popular TV shows like Wipeout and Takeshi's Castle, where contestants have to overcome various physical challenges and eliminate their rivals. The game is available for both PC and mobile devices, and it supports online and offline modes. You can play with your friends or with strangers from all over the world, and you can customize your character with different outfits and emotes. The game is constantly updated with new maps, obstacles, events, and challenges to keep you entertained and challenged.

    -

    download stumble guys 3.8


    Download Ziphttps://urlca.com/2uO6TU



    -

    Why Download Stumble Guys 3.8?

    -

    Stumble Guys 3.8 is the latest version of the game that was released in June 2023. It brings a lot of new features and improvements that make the game even more fun and exciting. Here are some of the reasons why you should download Stumble Guys 3.8:

    -

    New Maps and Obstacles

    -

    Stumble Guys 3.8 introduces six new maps that will test your skills and reflexes in different ways. You will have to navigate through slippery ice, bouncy balls, spinning blades, swinging hammers, flying rockets, and more. Each map has its own theme and style, such as winter wonderland, candy land, medieval castle, space station, pirate ship, and jungle temple. You will never get bored of playing the same map over and over again.

    -

    New Outfits and Emotes

    -

    Stumble Guys 3.8 also adds more than 50 new outfits and emotes that you can use to customize your character and express yourself. You can dress up as a ninja, a cowboy, a superhero, a clown, a robot, a dinosaur, and many more. You can also use different emotes to taunt your opponents, celebrate your victories, or show your frustration. You can unlock these outfits and emotes by playing the game, completing challenges, or buying them with coins or gems.

    -

    New Events and Challenges

    -

    Stumble Guys 3.8 also brings new events and challenges that will keep you engaged and motivated. You can participate in seasonal events that offer special rewards and bonuses, such as Halloween, Christmas, Valentine's Day, Easter, etc. You can also take on daily and weekly challenges that will test your abilities and give you extra coins or gems. You can also join tournaments that will pit you against the best players in the world and give you a chance to win exclusive prizes.

    -

    How to Download Stumble Guys 3.8 for PC?

    -

    If you want to play Stumble Guys 3.8 on your PC, you will need to download it from Steam, which is a digital distribution platform for games. Here are the steps to download Stumble Guys 3.8 for PC:

    -
      -
    1. Create a Steam account if you don't have one already.
    2. -
    3. Download and install the Steam client on your PC.
    4. -
    5. Launch the Steam client and log in with your account.
    6. -
    7. Search for Stumble Guys in the Steam store or click on this link.
    8. -
9. Click on the Play Game button to start the installation. The game is free to play, so there is no checkout step.
    10. -
11. Accept the install prompt and choose where you want the game installed.
    12. -
    13. Wait for the game to download and install on your PC.
    14. -
    15. Launch the game from your Steam library and enjoy!
    16. -
    -

    How to Download Stumble Guys 3.8 for Mobile?

    -

    If you want to play Stumble Guys 3.8 on your mobile device, you will need to download it from Google Play if you have an Android device or from App Store if you have an iOS device. Here are the steps to download Stumble Guys 3.8 for mobile:

    -
      -
    1. Open Google Play or App Store on your device.
    2. -
    3. Search for Stumble Guys or click on this link for Android or this link for iOS.
    4. -
    5. Tap on the Install button and wait for the game to download and install on your device.
    6. -
    7. Launch the game from your home screen or app drawer and enjoy!
    8. -
    -

    How to Play Stumble Guys 3.8?

    -

    Stumble Guys 3.8 is very easy to play but hard to master. You can play online or offline with friends or strangers in different modes. Here is a brief overview of how to play Stumble Guys 3.8:

    -

    Online Mode

    -

    In online mode, you can join or create online matches with up to 32 players from all over the world. You can choose from different maps and settings, such as round limit, time limit, elimination mode, etc. You can also chat with other players using text or voice messages. The goal is to survive as long as possible and reach the finish line before the others. The last player standing wins the match.

    -


    -

    Party Mode

    -

    In party mode, you can invite or join friends in private matches. You can create a party code and share it with your friends, or enter a party code that your friends have created. You can also use the friend list feature to see who is online and invite them directly. You can customize the match settings, such as map selection, round limit, time limit, etc. You can also chat with your friends using text or voice messages. The goal is to have fun and compete with your friends in a friendly way.

    -

    Offline Mode

    -

    In offline mode, you can play solo or local multiplayer in offline matches. You can choose from different maps and settings, such as round limit, time limit, elimination mode, etc. You can also adjust the difficulty level of the bots, from easy to hard. You can play alone against bots, or with up to three other players on the same device using split-screen mode. The goal is to practice your skills and enjoy the game without internet connection.

    -

    Tips and Tricks for Stumble Guys 3.8

    -

    Stumble Guys 3.8 is a game that requires both luck and skill. You will need to be fast, agile, smart, and sometimes ruthless to win the matches. Here are some tips and tricks that will help you improve your performance and have more fun in Stumble Guys 3.8:

    -

    Map Shortcuts

    -

    Some maps have shortcuts that you can use to gain an advantage over your opponents. These shortcuts can save you time, avoid obstacles, or give you a boost. However, they are also risky and sometimes hard to find. Here is a table showing some map shortcuts and how to use them:

-

| Map | Shortcut | How to Use |
| --- | --- | --- |
| Winter Wonderland | Ice Slide | Jump on the ice slide near the start of the map and slide down to the next checkpoint |
| Candy Land | Donut Hole | Jump through the hole in the giant donut near the end of the map and land on the finish line |
| Medieval Castle | Catapult | Jump on the catapult near the middle of the map and launch yourself over the wall to the next checkpoint |
| Space Station | Rocket Boost | Jump on the rocket near the start of the map and fly over the first obstacle |
| Pirate Ship | Cannon | Jump into the cannon near the end of the map and shoot yourself to the finish line |
| Jungle Temple | Vine Swing | Grab the vine near the start of the map and swing over the pit to the next checkpoint |

    Outfit Customization

    -

    You can customize your outfit by changing your head, body, feet, and color. You can unlock more outfit options by playing the game, completing challenges, or buying them with coins or gems. You can also mix and match different outfit parts to create your own unique style. For example, you can wear a ninja head, a superhero body, a clown feet, and a purple color. You can also change your outfit before each match to suit your mood or theme.

    -

    Emote Usage

    -

    You can use emotes to communicate with other players in a fun and expressive way. You can use emotes to taunt your opponents, celebrate your victories, or show your frustration. You can unlock more emotes by playing the game, completing challenges, or buying them with coins or gems. You can also choose which emotes to equip before each match from your emote wheel. You can use emotes at any time during the match by tapping on the emote button on your screen.

    -

    Conclusion

    -

    Stumble Guys 3.8 is a hilarious and addictive party knockout game that you can play with your friends or with strangers online or offline. It is a game that will make you laugh, scream, rage, and cheer as you stumble through different maps and obstacles until one winner is crowned. It is a game that will keep you entertained and challenged with its constant updates and new features. It is a game that you should definitely download and try for yourself.

    -

    So what are you waiting for? Download Stumble Guys 3.8 now and join the ultimate knockout game!

    -

    FAQs

    -

    Here are some frequently asked questions and their answers about Stumble Guys 3.8:

    -
      -
    1. Is Stumble Guys 3.8 free to play?
    2. -

      Yes, Stumble Guys 3.8 is free to play on both PC and mobile devices. However, it does have some optional in-app purchases that you can use to buy coins or gems, which can be used to unlock more outfits and emotes. You can also earn coins or gems by playing the game or completing challenges.

      -
    3. Is Stumble Guys 3.8 cross-platform?
    4. -

      Yes, Stumble Guys 3.8 is cross-platform, which means that you can play with other players who are using different devices or platforms. For example, you can play with your friends who are using PC, Android, or iOS devices, as long as you are connected to the same online server or party code.

      -
    5. How many players can play Stumble Guys 3.8?
    6. -

      Stumble Guys 3.8 can support up to 32 players in online mode, which is the maximum number of players that can join or create an online match. In offline mode, you can play solo or with up to three other players on the same device using split-screen mode.

      -
    7. How do I report a bug or a problem in Stumble Guys 3.8?
    8. -

      If you encounter a bug or a problem in Stumble Guys 3.8, you can report it to the developers by using the feedback feature in the game settings. You can also contact them by email at support@scopely.com or by visiting their website at https://www.scopely.com/.

      -
    9. How do I get more information about Stumble Guys 3.8?
    10. -

      If you want to get more information about Stumble Guys 3.8, you can visit their official website at https://www.stumbleguys.com/, where you can find more details about the game, its features, its updates, and its community. You can also follow them on social media platforms, such as Facebook, Twitter, Instagram, YouTube, and Discord, where you can get the latest news, updates, tips, and tricks about the game.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Violet Messenger A Plugin for Libpurple to Support Facebook Chat.md b/spaces/congsaPfin/Manga-OCR/logs/Violet Messenger A Plugin for Libpurple to Support Facebook Chat.md deleted file mode 100644 index f0a845f06cb42650b9c980820e1bf11d29e5087d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Violet Messenger A Plugin for Libpurple to Support Facebook Chat.md +++ /dev/null @@ -1,95 +0,0 @@ - -

    Violet Download Messenger: How to Stay Connected with Your Friends and Family

    -

    Do you want to hang out with your favorite people anytime, anywhere? Do you want to enjoy unlimited text, voice, video calling and group video chat features? Do you want to customize your chats, watch videos together, send and receive money, and chat with businesses? If you answered yes to any of these questions, then you need to download Violet Download Messenger, the free all-in-one communication app that makes it easy and fun to stay close to your friends and family.

    -

    violet download messenger


    Download Ziphttps://urlca.com/2uO9LQ



    -

    What is Violet Download Messenger?

    -

Violet Download Messenger is a messaging and calling app that lets you connect with your Instagram friends right from Messenger. You can simply search for them by name or username to message or call. You can also use the app to chat with your Facebook friends, even if they're across the world. Violet Download Messenger has a sleek new look that darkens the colors of the chat interface, giving your eyes some rest. You can also record and send voice and video messages when text just won't cut it. You can express yourself with custom stickers, GIFs, and emojis, and add effects and filters to your video calls. You can also send files, photos, and videos with no limit, and plan and make things happen with polls and location sharing. You can even send and request money securely and quickly right in the app, or chat with businesses to make reservations, get customer support, find deals, and more.

    -

    Why should you use Violet Download Messenger?

    -

    Violet Download Messenger has many benefits that make it a great choice for communication and socializing. Here are some of them:

    -

    Cross-app messaging and calling

    -

    You can connect with your Instagram friends right from Messenger. You don't need to switch between apps or create new accounts. You can also chat with your Facebook friends on the same platform.

    -

    Privacy and safety settings

    -

    You can choose who can reach you, and where your messages are delivered. You can also block or report anyone who makes you feel uncomfortable or unsafe.

    -

    Custom reactions and chat themes

    -

    You can customize your reactions, with lots more emojis to choose from. You can also choose from fun themes and colors, like Tie-Dye or Love, to make your chats more personal.

    -


    -

    Watch together and group video chat

    -

    You can watch videos, tv shows, and movies with your friends over Messenger Video Chat and Rooms when you can't be together. You can capture every moment and reaction in real-time.

    -

    Unlimited free text and phone calls

    -

    You can skip exchanging phone numbers and simply send a message to your friends, even if they're across the world. You can enjoy high-quality voice and text messaging on mobile, tablet, and desktop.

    -

    Dark mode and voice and video messages

    -

    You can give your eyes some rest with a sleek new look that darkens the colors of the chat interface. You can also record and send voice and video messages when text just won't cut it.

    -

    Stickers, GIFs, and emojis

    -

You can express yourself with custom stickers, GIFs, and emojis. You can use them to show your creative side, your mood, or your humor.

-

File, photo, and video sharing

    -

    You can send files, photos, and videos with no limit. You can share memories, documents, and more with your friends and family. You can also preview your media before sending and leave a comment if you want.

    -

    Plan and make it happen

    -

    You can create polls to gather opinions from your group. You can also share your location to let people know where you are or where you're going. You can also create events and invite your friends to join.

    -

    Send and request money with no fees

    -

    You can securely send and request money in the app using your debit card or PayPal account. You don't need to pay any fees, and the money will be transferred instantly.

    -

    Chat with businesses

    -

    You can chat with businesses to get things done. You can make reservations, get customer support, find deals, and more. You can also see and respond to messages from your Facebook Page Inbox.

    -

    How to download and install Violet Download Messenger?

    -

    Downloading and installing Violet Download Messenger is easy and fast. Here are the steps to follow:

    -
      -
    1. Go to the Google Play Store or the App Store on your device.
    2. -
    3. Search for "Violet Download Messenger" or click on one of these links: Android or iOS.
    4. -
    5. Tap on "Install" or "Get" and wait for the app to download.
    6. -
    7. Open the app and sign in with your Facebook account or create a new one.
    8. -
    9. Start chatting with your friends and family!
    10. -
    -

    Conclusion

    -

    Violet Download Messenger is a free all-in-one communication app that lets you stay connected with your friends and family. You can enjoy unlimited text, voice, video calling and group video chat features, as well as customize your chats, watch videos together, send and receive money, and chat with businesses. You can also connect with your Instagram friends right from Messenger, without switching between apps. Violet Download Messenger is easy to download and install on your device, and it has a sleek new look that darkens the colors of the chat interface. If you want to have fun and stay close to your favorite people, download Violet Download Messenger today!

    -

    Frequently Asked Questions

    -
      -
    • Q: Is Violet Download Messenger free?
    • -
    • A: Yes, Violet Download Messenger is free to download and use. You don't need to pay any fees or subscriptions to enjoy its features.
    • -
    • Q: How do I switch between Facebook Messenger and Instagram chats?
    • -
    • A: You can switch between Facebook Messenger and Instagram chats by tapping on the profile picture of the person you're chatting with. You'll see a menu that lets you choose which app you want to use.
    • -
    • Q: How do I turn on dark mode on Violet Download Messenger?
    • -
    • A: You can turn on dark mode on Violet Download Messenger by tapping on your profile picture in the top left corner of the app. Then, tap on "Theme" and choose "Dark".
    • -
    • Q: How do I send money on Violet Download Messenger?
    • -
    • A: You can send money on Violet Download Messenger by tapping on the "+" icon in the bottom right corner of the chat. Then, tap on "Pay" and choose a contact or enter an amount. You'll need to link your debit card or PayPal account to use this feature.
    • Q: How do I chat with businesses on Violet Download Messenger?
    • A: You can chat with businesses on Violet Download Messenger by tapping on the search icon in the top right corner of the app. Then, type in the name of the business or browse through the categories. You'll see a list of businesses that you can message or call.

    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/3d Album Cs 3.32 Crack REPACK.md b/spaces/contluForse/HuggingGPT/assets/3d Album Cs 3.32 Crack REPACK.md deleted file mode 100644 index 8eae343f677e3f1177cf912972549aab76e0045b..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/3d Album Cs 3.32 Crack REPACK.md +++ /dev/null @@ -1,12 +0,0 @@ - -

3D-Album CS is a program for creating 3D virtual albums from your images using built-in libraries of animation styles. 3D-Album Commercial Suite is the complete digital imaging solution for commercial production: it combines 3D-Album's unique, creative 3D animated presentation templates with tools to easily create Hollywood-style photo showcases from more than 100 styles. The latest release, 3D-Album Commercial Suite 3.32 (about 1.8 GB), ships with a complete set of styles and is distributed in both installable and portable packages.

    -

    3d Album Cs 3.32 Crack


    Download Filehttps://ssurll.com/2uzycN



    -

etabs 9.7.2 portable
    ReadyFor4GB 1.4.rar
    adobe photoshop CS6 Extended 13.0.rar password txt
    Game of Thrones S02E07 HDTV.x264-ASAP
    Manual Clinical Surgery S Das
    service manual canon ir2520
    1m-tek h16106dfg driver win7
    [PC] Dreams to Reality -ENG
    Hilltop Hoods - Hard Road: Restrung full album zip
    Mavado - Gangsta For Life: The Symphony Of David Brooks - RETAiL CD full album zip

    -

    In 1993, Gibson contributed lyrics and featured as a guest vocalist on Yellow Magic Orchestra's Technodon album,[90][91] and wrote lyrics to the track "Dog Star Girl" for Deborah Harry's Debravation.[92]

    -

    Since its debut in 1992, the mystery of Agrippa remained hidden for 20 years. Although many had tried to hack the code and decrypt the program, the uncompiled source code was lost long ago. Alan Liu and his team at "The Agrippa Files"[112] created an extensive website with tools and resources to crack the Agrippa Code. They collaborated with Matthew Kirschenbaum at the Maryland Institute for Technology in the Humanities and the Digital Forensics Lab, and Quinn DuPont, a PhD student of cryptography from the University of Toronto, in calling for the aid of cryptographers to figure out how the program works by creating "Cracking the Agrippa Code: The Challenge",[113] which enlisted participants to solve the intentional scrambling of the poem in exchange for prizes.[114] The code was successfully cracked by Robert Xiao in late July 2012.[113]

    -

    Gibson's work has influenced several popular musicians: references to his fiction appear in the music of Stuart Hamm,[d] Billy Idol,[e] Warren Zevon,[f] Deltron 3030, Straylight Run (whose name is derived from a sequence in Neuromancer)[140] and Sonic Youth. U2's Zooropa album was heavily influenced by Neuromancer,[44] and the band at one point planned to scroll the text of Neuromancer above them on a concert tour, although this did not end up happening. Members of the band did, however, provide background music for the audiobook version of Neuromancer as well as appearing in No Maps for These Territories, a biographical documentary of Gibson.[141] He returned the favour by writing an article about the band's Vertigo Tour for Wired in August 2005.[142] The band Zeromancer take their name from Neuromancer.[143]

    -

    -

    Neuromancer was written on a "clockwork typewriter," the very one you may recall glimpsing in Julie Deane's office in Chiba City. This machine, a Hermes 2000 manual portable, dates from somewhere in the 1930s. It's a very tough and elegant piece of work, from the factory of E. PAILLARD & Cie S.A. YVERDON (SUISSE). Cased, it weighs slightly less than the Macintosh SE/30 I now write on, and is finished in a curious green- and-black "crackle" paint-job, perhaps meant to suggest the covers of an accountant's ledger. Its keys are green as well, of celluloid, and the letters and symbols on them are canary yellow. (I once happened to brush the shift-key with the tip of a lit cigarette, dramatically confirming the extreme flammability of this early plastic.) In its day, the Hermes 2000 was one of the best portable writing-machines in the world, and one of the most expensive. This one belonged to my wife's step-grandfather, who had been a journalist of sorts and had used it to compose laudatory essays on the poetry of Robert Burns. I used it first to write undergraduate Eng. lit. papers, then my early attempts at short stories, then Neuromancer, all without so much as ever having touched an actual computer.

    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Avril Lavigne-Lets Go B-Sides Full Album Zip Aereo Dreamweaver Sp.md b/spaces/contluForse/HuggingGPT/assets/Avril Lavigne-Lets Go B-Sides Full Album Zip Aereo Dreamweaver Sp.md deleted file mode 100644 index e06433c92d75cbcc8a38266d0aaf66a0c83244b8..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Avril Lavigne-Lets Go B-Sides Full Album Zip Aereo Dreamweaver Sp.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Avril Lavigne-Lets Go B-Sides Full Album Zip aereo dreamweaver sp


    Download Zip 🗸🗸🗸 https://ssurll.com/2uzvGp



    -
-
    -
    -
    -

    diff --git a/spaces/contluForse/HuggingGPT/assets/BioSolveIT SeeSAR Free Download.md b/spaces/contluForse/HuggingGPT/assets/BioSolveIT SeeSAR Free Download.md deleted file mode 100644 index dbbf846a112da831c4b85a49e9b3f0c11e855446..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/BioSolveIT SeeSAR Free Download.md +++ /dev/null @@ -1,100 +0,0 @@ - -

    BioSolveIT SeeSAR Free Download: A Comprehensive Guide

    - -

    If you are looking for a software tool that can help you design, optimize, and analyze chemical compounds, you may want to check out BioSolveIT SeeSAR. It is a software solution developed by BioSolveIT GmbH, a leading company in the field of cheminformatics and drug discovery. In this article, we will give you a comprehensive guide on how to get BioSolveIT SeeSAR free download and what are its features and benefits.

    -

    BioSolveIT SeeSAR Free Download


    DOWNLOAD ––– https://ssurll.com/2uzyd4



    - -

    What is BioSolveIT SeeSAR?

    - -

    BioSolveIT SeeSAR is a software tool for interactive, visual compound prioritization and compound evolution. It is designed for scientists and researchers in the pharmaceutical, biotech, and chemical industries who want to accelerate their drug discovery process and improve their decision making. BioSolveIT SeeSAR allows users to design and optimize chemical compounds based on various criteria, such as binding affinity, selectivity, solubility, toxicity, and synthesis feasibility. It also allows users to analyze chemical structures and properties, such as molecular weight, logP, hydrogen bonds, rotatable bonds, and pharmacophore features. BioSolveIT SeeSAR also supports predictive modeling and simulation of chemical compounds, such as docking, scoring, superposition, pocket exploration, similarity search, and analog search.
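
    These descriptors are standard medicinal-chemistry quantities, and SeeSAR computes them inside its own interface. Purely as an illustration of what the listed properties mean, here is a minimal sketch using the open-source RDKit toolkit; RDKit and the aspirin example molecule are assumptions of this sketch, not part of SeeSAR:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

# Example molecule: aspirin, given as a SMILES string (illustrative input only)
mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")

print("Molecular weight:", Descriptors.MolWt(mol))             # ~180.16 g/mol
print("logP:", Descriptors.MolLogP(mol))                       # Crippen logP estimate
print("H-bond donors:", Lipinski.NumHDonors(mol))              # hydrogen-bond donors
print("H-bond acceptors:", Lipinski.NumHAcceptors(mol))        # hydrogen-bond acceptors
print("Rotatable bonds:", Descriptors.NumRotatableBonds(mol))  # rotatable-bond count
```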

    - -

    BioSolveIT SeeSAR is a user-friendly and intuitive software tool that has a modern and sleek interface. It uses 3D graphics and animations to display chemical compounds and their interactions with targets. It also has a dashboard that shows various information and statistics about the compounds and their optimization progress. Users can easily navigate through the software and perform various tasks with simple mouse clicks and keyboard shortcuts. Users can also collaborate with their team members by sharing their projects and results through email or cloud services.

    - -

    How to Get BioSolveIT SeeSAR Free Download?

    - -

    If you want to get BioSolveIT SeeSAR free download, you have two options: you can either request a free trial or apply for an academic license. Both options require you to register on the BioSolveIT website and provide some basic information about yourself and your organization.

    -

    - -

    The free trial option allows you to use BioSolveIT SeeSAR for 30 days without any limitations or obligations. You can download the latest version of the software from the BioSolveIT website and install it on your Windows, Linux, or macOS device. You can also access various resources, such as tutorials, guides, videos, webinars, and support forums to help you get started with the software.

    - -

The academic license option allows you to use BioSolveIT SeeSAR for free if you are a student, teacher, or researcher affiliated with an academic institution. As with the trial, you can download the latest version from the BioSolveIT website, install it on your Windows, Linux, or macOS device, and access the same tutorials, guides, videos, webinars, and support forums to help you get started.

    - -

    What are the Features and Benefits of BioSolveIT SeeSAR?

    - -

    BioSolveIT SeeSAR is a powerful software tool that has many features and benefits for users who want to design, optimize, and analyze chemical compounds. Some of the main features and benefits are:

    - -
      -
• BioSolveIT SeeSAR allows users to design and optimize chemical compounds based on various criteria, such as binding affinity, selectivity, solubility, toxicity, and synthesis feasibility.
    • It lets users analyze chemical structures and properties, such as molecular weight, logP, hydrogen bonds, rotatable bonds, and pharmacophore features.
    • It supports predictive modeling and simulation of chemical compounds, such as docking, scoring, superposition, pocket exploration, similarity search, and analog search.
    • It has a user-friendly and intuitive interface that uses 3D graphics and animations to display compounds and their interactions with targets.
    • It can be integrated with other software tools, and users can share their projects and results with team members through email or cloud services.

      -

      BioSolveIT SeeSAR Free Download: How to Install and Use the Software

      - -

      Once you have downloaded BioSolveIT SeeSAR from the BioSolveIT website, you can easily install and use the software on your device. Here are the steps to install and use BioSolveIT SeeSAR:

      - -
        -
      1. Run the downloaded file and follow the instructions on the screen to complete the installation process.
      2. Launch BioSolveIT SeeSAR from your desktop or start menu.
      3. Enter your email address and password to log in to the software. If you are using a free trial or an academic license, you will also need to enter the activation code that you received from BioSolveIT.
      4. Select a project or create a new one. You can also import or export projects from or to other software tools.
      5. Add a target protein or a ligand to your project. You can either load them from a file or from a database, such as PDB or PubChem.
      6. Use the various features and tools of BioSolveIT SeeSAR to design, optimize, and analyze your compounds. You can also use the dashboard to view information and statistics about your compounds and their optimization progress.
      7. Save your project and share it with your team members or export it to other software tools.
      - -

      If you need any help or guidance on how to use BioSolveIT SeeSAR, you can access various resources, such as tutorials, guides, videos, webinars, and support forums on the BioSolveIT website. You can also contact BioSolveIT GmbH for any technical issues or feedback.

      - -

      BioSolveIT SeeSAR Free Download: The Pros and Cons of the Software

      - -

      BioSolveIT SeeSAR is a software tool that has many pros and cons for users who want to design, optimize, and analyze chemical compounds. Here are some of the pros and cons of BioSolveIT SeeSAR:

      - -
        -
      • Pros:
          • BioSolveIT SeeSAR is a user-friendly and intuitive software tool that has a modern and sleek interface.
          • BioSolveIT SeeSAR allows users to design and optimize chemical compounds based on various criteria, such as binding affinity, selectivity, solubility, toxicity, and synthesis feasibility.

          BioSolveIT SeeSAR Free Download: The Pricing and Subscription Plans of the Software

          - -

          BioSolveIT SeeSAR is a software tool that has various pricing and subscription plans for different types of users and organizations. Users can choose the plan that suits their needs and budget. Here are the pricing and subscription plans of BioSolveIT SeeSAR:

          - -
            -
          • Free trial: Users can use BioSolveIT SeeSAR for 30 days without any limitations or obligations. Users can download the latest version of the software from the BioSolveIT website and install it on their Windows, Linux, or macOS device. Users can also access various resources, such as tutorials, guides, videos, webinars, and support forums to help them get started with the software.
          • Academic license: Users who are affiliated with an academic institution can use BioSolveIT SeeSAR for free. Users can download the latest version of the software from the BioSolveIT website and install it on their Windows, Linux, or macOS device. Users can also access various resources, such as tutorials, guides, videos, webinars, and support forums to help them get started with the software.
          • Standard license: Users who want to use BioSolveIT SeeSAR for commercial purposes can purchase a standard license. The standard license costs € 3,000 per year per user and includes all features and updates of the software. Users can also get technical support and customer service from BioSolveIT GmbH.
          • Premium license: Users who want to use BioSolveIT SeeSAR for commercial purposes and also want to customize the software for their specific needs can purchase a premium license. The premium license costs € 5,000 per year per user and includes all features and updates of the software as well as customization options. Users can also get technical support and customer service from BioSolveIT GmbH.
          - -

          BioSolveIT SeeSAR Free Download: The Comparison with Other Software Tools

          - -

          BioSolveIT SeeSAR is a software tool that can be compared with other software tools that offer similar or related features and functions. Some of the software tools that can be compared with BioSolveIT SeeSAR are:

          - -
            -
          • MOE: MOE is a software tool developed by Chemical Computing Group that provides a suite of applications for molecular modeling, drug discovery, protein modeling, cheminformatics, and bioinformatics. MOE has features such as docking, scoring, pharmacophore modeling, QSAR, molecular dynamics, and more.
          • Schrödinger: Schrödinger is a software tool developed by Schrödinger LLC that provides a comprehensive platform for computational chemistry and drug discovery. Schrödinger has features such as ligand-based and structure-based design, virtual screening, lead optimization, ADME/Tox prediction, and more.
          • ChemAxon: ChemAxon is a software tool developed by ChemAxon Ltd that provides a range of solutions for cheminformatics and chemical data management. ChemAxon has features such as structure drawing, structure search, property prediction, library design, and more.
          - -

          Each of these software tools has its own strengths and weaknesses, and users can choose the one that best suits their needs and preferences. However, some of the advantages that BioSolveIT SeeSAR has over these software tools are:

          - -
            -
          • BioSolveIT SeeSAR is more user-friendly and intuitive than these software tools. It has a modern and sleek interface that uses 3D graphics and animations to display chemical compounds and their interactions with targets. It also has a dashboard that shows various information and statistics about the compounds and their optimization progress.
          • BioSolveIT SeeSAR is more interactive and visual than these software tools. It allows users to design and optimize chemical compounds in a visual and interactive way. Users can easily navigate through the software and perform various tasks with simple mouse clicks and keyboard shortcuts.
          • BioSolveIT SeeSAR is more flexible and customizable than these software tools. It allows users to integrate with other software tools to enhance its functionality and usability. Users can also customize the software for their specific needs by contacting BioSolveIT GmbH.
          - -

          BioSolveIT SeeSAR Free Download: The Conclusion

          - -

          BioSolveIT SeeSAR is a software tool that can help users design, optimize, and analyze chemical compounds. It is a user-friendly and intuitive software tool that has a modern and sleek interface. It allows users to design and optimize chemical compounds based on various criteria, such as binding affinity, selectivity, solubility, toxicity, and synthesis feasibility. It also allows users to analyze chemical structures and properties, such as molecular weight, logP, hydrogen bonds, rotatable bonds, and pharmacophore features. BioSolveIT SeeSAR also supports predictive modeling and simulation of chemical compounds, such as docking, scoring, superposition, pocket exploration, similarity search, and analog search.

          - -

          BioSolveIT SeeSAR is a software tool that can be integrated with other software tools to enhance its functionality and usability. Users can import or export projects from or to other software tools, such as KNIME, PyMOL, MOE, Schrödinger, ChemAxon, and others. Users can also use various components of BioSolveIT SeeSAR, such as HYDE, FlexX, FlexS, FastGrow, FTrees, SpaceLight, and CoLibri, as standalone tools or as plugins for other software tools.
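
           The article does not specify how such an exchange works in practice. As a hedged sketch, assuming a ligand pose has been exported from SeeSAR as an SD file and the target as a PDB file (the file names and the export step are hypothetical, not SeeSAR defaults), loading both through PyMOL's Python API might look like this:

```python
# Hypothetical hand-off files: "protein.pdb" (target) and "ligand.sdf"
# (exported pose); neither name is a SeeSAR default.
import pymol
from pymol import cmd

pymol.finish_launching(["pymol", "-cq"])  # start a headless, quiet PyMOL session

cmd.load("protein.pdb")           # load the target structure
cmd.load("ligand.sdf")            # load the exported ligand pose
cmd.zoom("ligand")                # center the camera on the ligand object
cmd.png("complex.png", dpi=150)   # write a snapshot of the complex
```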

          - -

          BioSolveIT SeeSAR is a software tool that has various pricing and subscription plans for different types of users and organizations. Users can choose the plan that suits their needs and budget. Users can either request a free trial or apply for an academic license to use BioSolveIT SeeSAR for free. Users can also purchase a standard license or a premium license to use BioSolveIT SeeSAR for commercial purposes and also customize the software for their specific needs.

          - -

          BioSolveIT SeeSAR is a software tool that has received positive reviews and testimonials from its customers and users. Users have praised its features and benefits, such as its user-friendliness, interactivity, flexibility, customization options, integration options, and more. Users have also expressed their satisfaction and appreciation of BioSolveIT SeeSAR.

          - -

BioSolveIT SeeSAR is a software tool that can be compared with other software tools that offer similar or related features and functions. However, BioSolveIT SeeSAR has some advantages over these software tools, such as its user-friendliness, its interactive and visual workflow, and its flexibility and customization options.

          -
          -
          \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Download Film Jannat 2 Movie 4 Subtitle Indonesia alphabetisch mittele The Story of Love Betrayal and Redemption.md b/spaces/contluForse/HuggingGPT/assets/Download Film Jannat 2 Movie 4 Subtitle Indonesia alphabetisch mittele The Story of Love Betrayal and Redemption.md deleted file mode 100644 index 31944a01b3fa4e5ad8c22e5a102a8e51987df9ee..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Download Film Jannat 2 Movie 4 Subtitle Indonesia alphabetisch mittele The Story of Love Betrayal and Redemption.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Download Film Jannat 2 Movie 4 Subtitle Indonesia alphabetisch mittele


          Download > https://ssurll.com/2uzvYZ



          -
-
          -
          -
          -

          diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/bbox.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/bbox.py deleted file mode 100644 index 0c4d58b6c91f652933974f519acd3403a833e906..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/bbox.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['bbox_overlaps']) - - -def bbox_overlaps(bboxes1, bboxes2, mode='iou', aligned=False, offset=0): - """Calculate overlap between two set of bboxes. - - If ``aligned`` is ``False``, then calculate the ious between each bbox - of bboxes1 and bboxes2, otherwise the ious between each aligned pair of - bboxes1 and bboxes2. - - Args: - bboxes1 (Tensor): shape (m, 4) in format or empty. - bboxes2 (Tensor): shape (n, 4) in format or empty. - If aligned is ``True``, then m and n must be equal. - mode (str): "iou" (intersection over union) or iof (intersection over - foreground). - - Returns: - ious(Tensor): shape (m, n) if aligned == False else shape (m, 1) - - Example: - >>> bboxes1 = torch.FloatTensor([ - >>> [0, 0, 10, 10], - >>> [10, 10, 20, 20], - >>> [32, 32, 38, 42], - >>> ]) - >>> bboxes2 = torch.FloatTensor([ - >>> [0, 0, 10, 20], - >>> [0, 10, 10, 19], - >>> [10, 10, 20, 20], - >>> ]) - >>> bbox_overlaps(bboxes1, bboxes2) - tensor([[0.5000, 0.0000, 0.0000], - [0.0000, 0.0000, 1.0000], - [0.0000, 0.0000, 0.0000]]) - - Example: - >>> empty = torch.FloatTensor([]) - >>> nonempty = torch.FloatTensor([ - >>> [0, 0, 10, 9], - >>> ]) - >>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1) - >>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0) - >>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0) - """ - - mode_dict = {'iou': 0, 'iof': 1} - assert mode in mode_dict.keys() - mode_flag = mode_dict[mode] - # Either the boxes are empty or the length of boxes' last dimension is 4 - assert (bboxes1.size(-1) == 4 or bboxes1.size(0) == 0) - assert (bboxes2.size(-1) == 4 or bboxes2.size(0) == 0) - assert offset == 1 or offset == 0 - - rows = bboxes1.size(0) - cols = bboxes2.size(0) - if aligned: - assert rows == cols - - if rows * cols == 0: - return bboxes1.new(rows, 1) if aligned else bboxes1.new(rows, cols) - - if aligned: - ious = bboxes1.new_zeros(rows) - else: - ious = bboxes1.new_zeros((rows, cols)) - ext_module.bbox_overlaps( - bboxes1, bboxes2, ious, mode=mode_flag, aligned=aligned, offset=offset) - return ious diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/pixel_group.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/pixel_group.py deleted file mode 100644 index 2143c75f835a467c802fc3c37ecd3ac0f85bcda4..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/pixel_group.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['pixel_group']) - - -def pixel_group(score, mask, embedding, kernel_label, kernel_contour, - kernel_region_num, distance_threshold): - """Group pixels into text instances, which is widely used text detection - methods. - - Arguments: - score (np.array or Tensor): The foreground score with size hxw. 
- mask (np.array or Tensor): The foreground mask with size hxw. - embedding (np.array or Tensor): The embedding with size hxwxc to - distinguish instances. - kernel_label (np.array or Tensor): The instance kernel index with - size hxw. - kernel_contour (np.array or Tensor): The kernel contour with size hxw. - kernel_region_num (int): The instance kernel region number. - distance_threshold (float): The embedding distance threshold between - kernel and pixel in one instance. - - Returns: - pixel_assignment (List[List[float]]): The instance coordinate list. - Each element consists of averaged confidence, pixel number, and - coordinates (x_i, y_i for all pixels) in order. - """ - assert isinstance(score, (torch.Tensor, np.ndarray)) - assert isinstance(mask, (torch.Tensor, np.ndarray)) - assert isinstance(embedding, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_label, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_contour, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_region_num, int) - assert isinstance(distance_threshold, float) - - if isinstance(score, np.ndarray): - score = torch.from_numpy(score) - if isinstance(mask, np.ndarray): - mask = torch.from_numpy(mask) - if isinstance(embedding, np.ndarray): - embedding = torch.from_numpy(embedding) - if isinstance(kernel_label, np.ndarray): - kernel_label = torch.from_numpy(kernel_label) - if isinstance(kernel_contour, np.ndarray): - kernel_contour = torch.from_numpy(kernel_contour) - - if torch.__version__ == 'parrots': - label = ext_module.pixel_group( - score, - mask, - embedding, - kernel_label, - kernel_contour, - kernel_region_num=kernel_region_num, - distance_threshold=distance_threshold) - label = label.tolist() - label = label[0] - list_index = kernel_region_num - pixel_assignment = [] - for x in range(kernel_region_num): - pixel_assignment.append( - np.array( - label[list_index:list_index + int(label[x])], - dtype=np.float)) - list_index = list_index + int(label[x]) - else: - pixel_assignment = ext_module.pixel_group(score, mask, embedding, - kernel_label, kernel_contour, - kernel_region_num, - distance_threshold) - return pixel_assignment diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/pirenderer/util/trainer.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/pirenderer/util/trainer.py deleted file mode 100644 index 420da43f4fac6566aeb0da9df4f7e09cbf40428d..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/pirenderer/util/trainer.py +++ /dev/null @@ -1,136 +0,0 @@ -import random -import importlib -import numpy as np - -import torch -import torch.nn as nn -from torch.optim import Adam, lr_scheduler - -from util.distributed import master_only_print as print -from util.init_weight import weights_init - -def accumulate(model1, model2, decay=0.999): - par1 = dict(model1.named_parameters()) - par2 = dict(model2.named_parameters()) - - for k in par1.keys(): - par1[k].data.mul_(decay).add_(par2[k].data, alpha=1 - decay) - -def set_random_seed(seed): - r"""Set random seeds for everything. - - Args: - seed (int): Random seed. 
- by_rank (bool): - """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - - - -def get_trainer(opt, net_G, net_G_ema, opt_G, sch_G, train_dataset): - module, trainer_name = opt.trainer.type.split('::') - - trainer_lib = importlib.import_module(module) - trainer_class = getattr(trainer_lib, trainer_name) - trainer = trainer_class(opt, net_G, net_G_ema, opt_G, sch_G, train_dataset) - return trainer - -def get_model_optimizer_and_scheduler(opt): - gen_module, gen_network_name = opt.gen.type.split('::') - lib = importlib.import_module(gen_module) - network = getattr(lib, gen_network_name) - net_G = network(**opt.gen.param).to(opt.device) - init_bias = getattr(opt.trainer.init, 'bias', None) - net_G.apply(weights_init( - opt.trainer.init.type, opt.trainer.init.gain, init_bias)) - - net_G_ema = network(**opt.gen.param).to(opt.device) - net_G_ema.eval() - accumulate(net_G_ema, net_G, 0) - print('net [{}] parameter count: {:,}'.format( - 'net_G', _calculate_model_size(net_G))) - print('Initialize net_G weights using ' - 'type: {} gain: {}'.format(opt.trainer.init.type, - opt.trainer.init.gain)) - - - opt_G = get_optimizer(opt.gen_optimizer, net_G) - - if opt.distributed: - net_G = nn.parallel.DistributedDataParallel( - net_G, - device_ids=[opt.local_rank], - output_device=opt.local_rank, - broadcast_buffers=False, - find_unused_parameters=True, - ) - - # Scheduler - sch_G = get_scheduler(opt.gen_optimizer, opt_G) - return net_G, net_G_ema, opt_G, sch_G - - -def _calculate_model_size(model): - r"""Calculate number of parameters in a PyTorch network. - - Args: - model (obj): PyTorch network. - - Returns: - (int): Number of parameters. - """ - return sum(p.numel() for p in model.parameters() if p.requires_grad) - - -def get_scheduler(opt_opt, opt): - """Return the scheduler object. - - Args: - opt_opt (obj): Config for the specific optimization module (gen/dis). - opt (obj): PyTorch optimizer object. - - Returns: - (obj): Scheduler - """ - if opt_opt.lr_policy.type == 'step': - scheduler = lr_scheduler.StepLR( - opt, - step_size=opt_opt.lr_policy.step_size, - gamma=opt_opt.lr_policy.gamma) - elif opt_opt.lr_policy.type == 'constant': - scheduler = lr_scheduler.LambdaLR(opt, lambda x: 1) - else: - return NotImplementedError('Learning rate policy {} not implemented.'. - format(opt_opt.lr_policy.type)) - return scheduler - - -def get_optimizer(opt_opt, net): - return get_optimizer_for_params(opt_opt, net.parameters()) - - -def get_optimizer_for_params(opt_opt, params): - r"""Return the scheduler object. - - Args: - opt_opt (obj): Config for the specific optimization module (gen/dis). - params (obj): Parameters to be trained by the parameters. - - Returns: - (obj): Optimizer - """ - # We will use fuse optimizers by default. - if opt_opt.type == 'adam': - opt = Adam(params, - lr=opt_opt.lr, - betas=(opt_opt.adam_beta1, opt_opt.adam_beta2)) - else: - raise NotImplementedError( - 'Optimizer {} is not yet implemented.'.format(opt_opt.type)) - return opt - - diff --git a/spaces/danterivers/music-generation-samples/audiocraft/modules/conv.py b/spaces/danterivers/music-generation-samples/audiocraft/modules/conv.py deleted file mode 100644 index 972938ab84712eb06e1b10cea25444eee51d6637..0000000000000000000000000000000000000000 --- a/spaces/danterivers/music-generation-samples/audiocraft/modules/conv.py +++ /dev/null @@ -1,245 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import typing as tp -import warnings - -import torch -from torch import nn -from torch.nn import functional as F -from torch.nn.utils import spectral_norm, weight_norm - - -CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm', - 'time_group_norm']) - - -def apply_parametrization_norm(module: nn.Module, norm: str = 'none'): - assert norm in CONV_NORMALIZATIONS - if norm == 'weight_norm': - return weight_norm(module) - elif norm == 'spectral_norm': - return spectral_norm(module) - else: - # We already check was in CONV_NORMALIZATION, so any other choice - # doesn't need reparametrization. - return module - - -def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs): - """Return the proper normalization module. If causal is True, this will ensure the returned - module is causal, or return an error if the normalization doesn't support causal evaluation. - """ - assert norm in CONV_NORMALIZATIONS - if norm == 'time_group_norm': - if causal: - raise ValueError("GroupNorm doesn't support causal evaluation.") - assert isinstance(module, nn.modules.conv._ConvNd) - return nn.GroupNorm(1, module.out_channels, **norm_kwargs) - else: - return nn.Identity() - - -def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, - padding_total: int = 0) -> int: - """See `pad_for_conv1d`. - """ - length = x.shape[-1] - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length - length - - -def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0): - """Pad for a convolution to make sure that the last window is full. - Extra padding is added at the end. This is required to ensure that we can rebuild - an output of the same length, as otherwise, even with padding, some time steps - might get removed. - For instance, with total padding = 4, kernel size = 4, stride = 2: - 0 0 1 2 3 4 5 0 0 # (0s are padding) - 1 2 3 # (output frames of a convolution, last 0 is never used) - 0 0 1 2 3 4 5 0 # (output of tr. conv., but pos. 5 is going to get removed as padding) - 1 2 3 4 # once you removed padding, we are missing one time step ! - """ - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - return F.pad(x, (0, extra_padding)) - - -def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.): - """Tiny wrapper around F.pad, just to allow for reflect padding on small input. - If this is the case, we insert extra 0 padding to the right before the reflection happen. - """ - length = x.shape[-1] - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - if mode == 'reflect': - max_pad = max(padding_left, padding_right) - extra_pad = 0 - if length <= max_pad: - extra_pad = max_pad - length + 1 - x = F.pad(x, (0, extra_pad)) - padded = F.pad(x, paddings, mode, value) - end = padded.shape[-1] - extra_pad - return padded[..., :end] - else: - return F.pad(x, paddings, mode, value) - - -def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]): - """Remove padding from x, handling properly zero padding. Only for 1d! 
- """ - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - assert (padding_left + padding_right) <= x.shape[-1] - end = x.shape[-1] - padding_right - return x[..., padding_left: end] - - -class NormConv1d(nn.Module): - """Wrapper around Conv1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConv2d(nn.Module): - """Wrapper around Conv2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConvTranspose1d(nn.Module): - """Wrapper around ConvTranspose1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class NormConvTranspose2d(nn.Module): - """Wrapper around ConvTranspose2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs) - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class StreamableConv1d(nn.Module): - """Conv1d with some builtin handling of asymmetric or causal padding - and normalization. 
- """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, dilation: int = 1, - groups: int = 1, bias: bool = True, causal: bool = False, - norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, - pad_mode: str = 'reflect'): - super().__init__() - # warn user on unusual setup between dilation and stride - if stride > 1 and dilation > 1: - warnings.warn('StreamableConv1d has been initialized with stride > 1 and dilation > 1' - f' (kernel_size={kernel_size} stride={stride}, dilation={dilation}).') - self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride, - dilation=dilation, groups=groups, bias=bias, causal=causal, - norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.pad_mode = pad_mode - - def forward(self, x): - B, C, T = x.shape - kernel_size = self.conv.conv.kernel_size[0] - stride = self.conv.conv.stride[0] - dilation = self.conv.conv.dilation[0] - kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations - padding_total = kernel_size - stride - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - if self.causal: - # Left padding for causal - x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode) - return self.conv(x) - - -class StreamableConvTranspose1d(nn.Module): - """ConvTranspose1d with some builtin handling of asymmetric or causal padding - and normalization. - """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, causal: bool = False, - norm: str = 'none', trim_right_ratio: float = 1., - norm_kwargs: tp.Dict[str, tp.Any] = {}): - super().__init__() - self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride, - causal=causal, norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.trim_right_ratio = trim_right_ratio - assert self.causal or self.trim_right_ratio == 1., \ - "`trim_right_ratio` != 1.0 only makes sense for causal convolutions" - assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1. - - def forward(self, x): - kernel_size = self.convtr.convtr.kernel_size[0] - stride = self.convtr.convtr.stride[0] - padding_total = kernel_size - stride - - y = self.convtr(x) - - # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be - # removed at the very end, when keeping only the right length for the output, - # as removing it here would require also passing the length at the matching layer - # in the encoder. 
- if self.causal: - # Trim the padding on the right according to the specified ratio - # if trim_right_ratio = 1.0, trim everything from right - padding_right = math.ceil(padding_total * self.trim_right_ratio) - padding_left = padding_total - padding_right - y = unpad1d(y, (padding_left, padding_right)) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - y = unpad1d(y, (padding_left, padding_right)) - return y diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/XbmImagePlugin.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/XbmImagePlugin.py deleted file mode 100644 index 3c12564c963d8b6342fa6ef1d7fc1892af30ffff..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/XbmImagePlugin.py +++ /dev/null @@ -1,94 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# XBM File handling -# -# History: -# 1995-09-08 fl Created -# 1996-11-01 fl Added save support -# 1997-07-07 fl Made header parser more tolerant -# 1997-07-22 fl Fixed yet another parser bug -# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.4) -# 2001-05-13 fl Added hotspot handling (based on code from Bernhard Herzog) -# 2004-02-24 fl Allow some whitespace before first #define -# -# Copyright (c) 1997-2004 by Secret Labs AB -# Copyright (c) 1996-1997 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import re - -from . import Image, ImageFile - -# XBM header -xbm_head = re.compile( - rb"\s*#define[ \t]+.*_width[ \t]+(?P[0-9]+)[\r\n]+" - b"#define[ \t]+.*_height[ \t]+(?P[0-9]+)[\r\n]+" - b"(?P" - b"#define[ \t]+[^_]*_x_hot[ \t]+(?P[0-9]+)[\r\n]+" - b"#define[ \t]+[^_]*_y_hot[ \t]+(?P[0-9]+)[\r\n]+" - b")?" - rb"[\000-\377]*_bits\[]" -) - - -def _accept(prefix): - return prefix.lstrip()[:7] == b"#define" - - -## -# Image plugin for X11 bitmaps. 
- - -class XbmImageFile(ImageFile.ImageFile): - format = "XBM" - format_description = "X11 Bitmap" - - def _open(self): - m = xbm_head.match(self.fp.read(512)) - - if not m: - msg = "not a XBM file" - raise SyntaxError(msg) - - xsize = int(m.group("width")) - ysize = int(m.group("height")) - - if m.group("hotspot"): - self.info["hotspot"] = (int(m.group("xhot")), int(m.group("yhot"))) - - self.mode = "1" - self._size = xsize, ysize - - self.tile = [("xbm", (0, 0) + self.size, m.end(), None)] - - -def _save(im, fp, filename): - if im.mode != "1": - msg = f"cannot write mode {im.mode} as XBM" - raise OSError(msg) - - fp.write(f"#define im_width {im.size[0]}\n".encode("ascii")) - fp.write(f"#define im_height {im.size[1]}\n".encode("ascii")) - - hotspot = im.encoderinfo.get("hotspot") - if hotspot: - fp.write(f"#define im_x_hot {hotspot[0]}\n".encode("ascii")) - fp.write(f"#define im_y_hot {hotspot[1]}\n".encode("ascii")) - - fp.write(b"static char im_bits[] = {\n") - - ImageFile._save(im, fp, [("xbm", (0, 0) + im.size, 0, None)]) - - fp.write(b"};\n") - - -Image.register_open(XbmImageFile.format, XbmImageFile, _accept) -Image.register_save(XbmImageFile.format, _save) - -Image.register_extension(XbmImageFile.format, ".xbm") - -Image.register_mime(XbmImageFile.format, "image/xbm") diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiohttp/multipart.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiohttp/multipart.py deleted file mode 100644 index 73801f459aa274ca6aae7bf28a2c5bb3bf075d11..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiohttp/multipart.py +++ /dev/null @@ -1,961 +0,0 @@ -import base64 -import binascii -import json -import re -import uuid -import warnings -import zlib -from collections import deque -from types import TracebackType -from typing import ( - TYPE_CHECKING, - Any, - AsyncIterator, - Deque, - Dict, - Iterator, - List, - Mapping, - Optional, - Sequence, - Tuple, - Type, - Union, - cast, -) -from urllib.parse import parse_qsl, unquote, urlencode - -from multidict import CIMultiDict, CIMultiDictProxy, MultiMapping - -from .hdrs import ( - CONTENT_DISPOSITION, - CONTENT_ENCODING, - CONTENT_LENGTH, - CONTENT_TRANSFER_ENCODING, - CONTENT_TYPE, -) -from .helpers import CHAR, TOKEN, parse_mimetype, reify -from .http import HeadersParser -from .payload import ( - JsonPayload, - LookupError, - Order, - Payload, - StringPayload, - get_payload, - payload_type, -) -from .streams import StreamReader - -__all__ = ( - "MultipartReader", - "MultipartWriter", - "BodyPartReader", - "BadContentDispositionHeader", - "BadContentDispositionParam", - "parse_content_disposition", - "content_disposition_filename", -) - - -if TYPE_CHECKING: # pragma: no cover - from .client_reqrep import ClientResponse - - -class BadContentDispositionHeader(RuntimeWarning): - pass - - -class BadContentDispositionParam(RuntimeWarning): - pass - - -def parse_content_disposition( - header: Optional[str], -) -> Tuple[Optional[str], Dict[str, str]]: - def is_token(string: str) -> bool: - return bool(string) and TOKEN >= set(string) - - def is_quoted(string: str) -> bool: - return string[0] == string[-1] == '"' - - def is_rfc5987(string: str) -> bool: - return is_token(string) and string.count("'") == 2 - - def is_extended_param(string: str) -> bool: - return string.endswith("*") - - def is_continuous_param(string: str) -> bool: - pos = string.find("*") + 1 - if not pos: - 
return False - substring = string[pos:-1] if string.endswith("*") else string[pos:] - return substring.isdigit() - - def unescape(text: str, *, chars: str = "".join(map(re.escape, CHAR))) -> str: - return re.sub(f"\\\\([{chars}])", "\\1", text) - - if not header: - return None, {} - - disptype, *parts = header.split(";") - if not is_token(disptype): - warnings.warn(BadContentDispositionHeader(header)) - return None, {} - - params: Dict[str, str] = {} - while parts: - item = parts.pop(0) - - if "=" not in item: - warnings.warn(BadContentDispositionHeader(header)) - return None, {} - - key, value = item.split("=", 1) - key = key.lower().strip() - value = value.lstrip() - - if key in params: - warnings.warn(BadContentDispositionHeader(header)) - return None, {} - - if not is_token(key): - warnings.warn(BadContentDispositionParam(item)) - continue - - elif is_continuous_param(key): - if is_quoted(value): - value = unescape(value[1:-1]) - elif not is_token(value): - warnings.warn(BadContentDispositionParam(item)) - continue - - elif is_extended_param(key): - if is_rfc5987(value): - encoding, _, value = value.split("'", 2) - encoding = encoding or "utf-8" - else: - warnings.warn(BadContentDispositionParam(item)) - continue - - try: - value = unquote(value, encoding, "strict") - except UnicodeDecodeError: # pragma: nocover - warnings.warn(BadContentDispositionParam(item)) - continue - - else: - failed = True - if is_quoted(value): - failed = False - value = unescape(value[1:-1].lstrip("\\/")) - elif is_token(value): - failed = False - elif parts: - # maybe just ; in filename, in any case this is just - # one case fix, for proper fix we need to redesign parser - _value = f"{value};{parts[0]}" - if is_quoted(_value): - parts.pop(0) - value = unescape(_value[1:-1].lstrip("\\/")) - failed = False - - if failed: - warnings.warn(BadContentDispositionHeader(header)) - return None, {} - - params[key] = value - - return disptype.lower(), params - - -def content_disposition_filename( - params: Mapping[str, str], name: str = "filename" -) -> Optional[str]: - name_suf = "%s*" % name - if not params: - return None - elif name_suf in params: - return params[name_suf] - elif name in params: - return params[name] - else: - parts = [] - fnparams = sorted( - (key, value) for key, value in params.items() if key.startswith(name_suf) - ) - for num, (key, value) in enumerate(fnparams): - _, tail = key.split("*", 1) - if tail.endswith("*"): - tail = tail[:-1] - if tail == str(num): - parts.append(value) - else: - break - if not parts: - return None - value = "".join(parts) - if "'" in value: - encoding, _, value = value.split("'", 2) - encoding = encoding or "utf-8" - return unquote(value, encoding, "strict") - return value - - -class MultipartResponseWrapper: - """Wrapper around the MultipartReader. - - It takes care about - underlying connection and close it when it needs in. 
- """ - - def __init__( - self, - resp: "ClientResponse", - stream: "MultipartReader", - ) -> None: - self.resp = resp - self.stream = stream - - def __aiter__(self) -> "MultipartResponseWrapper": - return self - - async def __anext__( - self, - ) -> Union["MultipartReader", "BodyPartReader"]: - part = await self.next() - if part is None: - raise StopAsyncIteration - return part - - def at_eof(self) -> bool: - """Returns True when all response data had been read.""" - return self.resp.content.at_eof() - - async def next( - self, - ) -> Optional[Union["MultipartReader", "BodyPartReader"]]: - """Emits next multipart reader object.""" - item = await self.stream.next() - if self.stream.at_eof(): - await self.release() - return item - - async def release(self) -> None: - """Release the connection gracefully. - - All remaining content is read to the void. - """ - await self.resp.release() - - -class BodyPartReader: - """Multipart reader for single body part.""" - - chunk_size = 8192 - - def __init__( - self, boundary: bytes, headers: "CIMultiDictProxy[str]", content: StreamReader - ) -> None: - self.headers = headers - self._boundary = boundary - self._content = content - self._at_eof = False - length = self.headers.get(CONTENT_LENGTH, None) - self._length = int(length) if length is not None else None - self._read_bytes = 0 - # TODO: typeing.Deque is not supported by Python 3.5 - self._unread: Deque[bytes] = deque() - self._prev_chunk: Optional[bytes] = None - self._content_eof = 0 - self._cache: Dict[str, Any] = {} - - def __aiter__(self) -> AsyncIterator["BodyPartReader"]: - return self # type: ignore[return-value] - - async def __anext__(self) -> bytes: - part = await self.next() - if part is None: - raise StopAsyncIteration - return part - - async def next(self) -> Optional[bytes]: - item = await self.read() - if not item: - return None - return item - - async def read(self, *, decode: bool = False) -> bytes: - """Reads body part data. - - decode: Decodes data following by encoding - method from Content-Encoding header. If it missed - data remains untouched - """ - if self._at_eof: - return b"" - data = bytearray() - while not self._at_eof: - data.extend(await self.read_chunk(self.chunk_size)) - if decode: - return self.decode(data) - return data - - async def read_chunk(self, size: int = chunk_size) -> bytes: - """Reads body part content chunk of the specified size. - - size: chunk size - """ - if self._at_eof: - return b"" - if self._length: - chunk = await self._read_chunk_from_length(size) - else: - chunk = await self._read_chunk_from_stream(size) - - self._read_bytes += len(chunk) - if self._read_bytes == self._length: - self._at_eof = True - if self._at_eof: - clrf = await self._content.readline() - assert ( - b"\r\n" == clrf - ), "reader did not read all the data or it is malformed" - return chunk - - async def _read_chunk_from_length(self, size: int) -> bytes: - # Reads body part content chunk of the specified size. - # The body part must has Content-Length header with proper value. - assert self._length is not None, "Content-Length required for chunked read" - chunk_size = min(size, self._length - self._read_bytes) - chunk = await self._content.read(chunk_size) - return chunk - - async def _read_chunk_from_stream(self, size: int) -> bytes: - # Reads content chunk of body part with unknown length. - # The Content-Length header for body part is not necessary. 
-        assert (
-            size >= len(self._boundary) + 2
-        ), "Chunk size must be greater than or equal to boundary length + 2"
-        first_chunk = self._prev_chunk is None
-        if first_chunk:
-            self._prev_chunk = await self._content.read(size)
-
-        chunk = await self._content.read(size)
-        self._content_eof += int(self._content.at_eof())
-        assert self._content_eof < 3, "Reading after EOF"
-        assert self._prev_chunk is not None
-        window = self._prev_chunk + chunk
-        sub = b"\r\n" + self._boundary
-        if first_chunk:
-            idx = window.find(sub)
-        else:
-            idx = window.find(sub, max(0, len(self._prev_chunk) - len(sub)))
-        if idx >= 0:
-            # pushing boundary back to content
-            with warnings.catch_warnings():
-                warnings.filterwarnings("ignore", category=DeprecationWarning)
-                self._content.unread_data(window[idx:])
-            if size > idx:
-                self._prev_chunk = self._prev_chunk[:idx]
-            chunk = window[len(self._prev_chunk) : idx]
-            if not chunk:
-                self._at_eof = True
-        result = self._prev_chunk
-        self._prev_chunk = chunk
-        return result
-
-    async def readline(self) -> bytes:
-        """Reads the body part line by line."""
-        if self._at_eof:
-            return b""
-
-        if self._unread:
-            line = self._unread.popleft()
-        else:
-            line = await self._content.readline()
-
-        if line.startswith(self._boundary):
-            # the very last boundary may not come with \r\n,
-            # so set single rules for everyone
-            sline = line.rstrip(b"\r\n")
-            boundary = self._boundary
-            last_boundary = self._boundary + b"--"
-            # ensure that we read exactly the boundary, not something alike
-            if sline == boundary or sline == last_boundary:
-                self._at_eof = True
-                self._unread.append(line)
-                return b""
-        else:
-            next_line = await self._content.readline()
-            if next_line.startswith(self._boundary):
-                line = line[:-2]  # strip CRLF but only once
-            self._unread.append(next_line)
-
-        return line
-
-    async def release(self) -> None:
-        """Like read(), but reads all the data to the void."""
-        if self._at_eof:
-            return
-        while not self._at_eof:
-            await self.read_chunk(self.chunk_size)
-
-    async def text(self, *, encoding: Optional[str] = None) -> str:
-        """Like read(), but assumes that the body part contains text data."""
-        data = await self.read(decode=True)
-        # see https://www.w3.org/TR/html5/forms.html#multipart/form-data-encoding-algorithm  # NOQA
-        # and https://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html#dom-xmlhttprequest-send  # NOQA
-        encoding = encoding or self.get_charset(default="utf-8")
-        return data.decode(encoding)
-
-    async def json(self, *, encoding: Optional[str] = None) -> Optional[Dict[str, Any]]:
-        """Like read(), but assumes that the body part contains JSON data."""
-        data = await self.read(decode=True)
-        if not data:
-            return None
-        encoding = encoding or self.get_charset(default="utf-8")
-        return cast(Dict[str, Any], json.loads(data.decode(encoding)))
-
-    async def form(self, *, encoding: Optional[str] = None) -> List[Tuple[str, str]]:
-        """Like read(), but assumes that the body part contains form urlencoded data."""
-        data = await self.read(decode=True)
-        if not data:
-            return []
-        if encoding is not None:
-            real_encoding = encoding
-        else:
-            real_encoding = self.get_charset(default="utf-8")
-        return parse_qsl(
-            data.rstrip().decode(real_encoding),
-            keep_blank_values=True,
-            encoding=real_encoding,
-        )
-
-    def at_eof(self) -> bool:
-        """Returns True if the boundary was reached, False otherwise."""
-        return self._at_eof
-
-    def decode(self, data: bytes) -> bytes:
-        """Decodes data.
-
-        Decoding is done according to the specified Content-Encoding
-        or Content-Transfer-Encoding header value.
- """ - if CONTENT_TRANSFER_ENCODING in self.headers: - data = self._decode_content_transfer(data) - if CONTENT_ENCODING in self.headers: - return self._decode_content(data) - return data - - def _decode_content(self, data: bytes) -> bytes: - encoding = self.headers.get(CONTENT_ENCODING, "").lower() - - if encoding == "deflate": - return zlib.decompress(data, -zlib.MAX_WBITS) - elif encoding == "gzip": - return zlib.decompress(data, 16 + zlib.MAX_WBITS) - elif encoding == "identity": - return data - else: - raise RuntimeError(f"unknown content encoding: {encoding}") - - def _decode_content_transfer(self, data: bytes) -> bytes: - encoding = self.headers.get(CONTENT_TRANSFER_ENCODING, "").lower() - - if encoding == "base64": - return base64.b64decode(data) - elif encoding == "quoted-printable": - return binascii.a2b_qp(data) - elif encoding in ("binary", "8bit", "7bit"): - return data - else: - raise RuntimeError( - "unknown content transfer encoding: {}" "".format(encoding) - ) - - def get_charset(self, default: str) -> str: - """Returns charset parameter from Content-Type header or default.""" - ctype = self.headers.get(CONTENT_TYPE, "") - mimetype = parse_mimetype(ctype) - return mimetype.parameters.get("charset", default) - - @reify - def name(self) -> Optional[str]: - """Returns name specified in Content-Disposition header. - - If the header is missing or malformed, returns None. - """ - _, params = parse_content_disposition(self.headers.get(CONTENT_DISPOSITION)) - return content_disposition_filename(params, "name") - - @reify - def filename(self) -> Optional[str]: - """Returns filename specified in Content-Disposition header. - - Returns None if the header is missing or malformed. - """ - _, params = parse_content_disposition(self.headers.get(CONTENT_DISPOSITION)) - return content_disposition_filename(params, "filename") - - -@payload_type(BodyPartReader, order=Order.try_first) -class BodyPartReaderPayload(Payload): - def __init__(self, value: BodyPartReader, *args: Any, **kwargs: Any) -> None: - super().__init__(value, *args, **kwargs) - - params: Dict[str, str] = {} - if value.name is not None: - params["name"] = value.name - if value.filename is not None: - params["filename"] = value.filename - - if params: - self.set_content_disposition("attachment", True, **params) - - async def write(self, writer: Any) -> None: - field = self._value - chunk = await field.read_chunk(size=2**16) - while chunk: - await writer.write(field.decode(chunk)) - chunk = await field.read_chunk(size=2**16) - - -class MultipartReader: - """Multipart body reader.""" - - #: Response wrapper, used when multipart readers constructs from response. - response_wrapper_cls = MultipartResponseWrapper - #: Multipart reader class, used to handle multipart/* body parts. - #: None points to type(self) - multipart_reader_cls = None - #: Body part reader class for non multipart/* content types. 
- part_reader_cls = BodyPartReader - - def __init__(self, headers: Mapping[str, str], content: StreamReader) -> None: - self.headers = headers - self._boundary = ("--" + self._get_boundary()).encode() - self._content = content - self._last_part: Optional[Union["MultipartReader", BodyPartReader]] = None - self._at_eof = False - self._at_bof = True - self._unread: List[bytes] = [] - - def __aiter__( - self, - ) -> AsyncIterator["BodyPartReader"]: - return self # type: ignore[return-value] - - async def __anext__( - self, - ) -> Optional[Union["MultipartReader", BodyPartReader]]: - part = await self.next() - if part is None: - raise StopAsyncIteration - return part - - @classmethod - def from_response( - cls, - response: "ClientResponse", - ) -> MultipartResponseWrapper: - """Constructs reader instance from HTTP response. - - :param response: :class:`~aiohttp.client.ClientResponse` instance - """ - obj = cls.response_wrapper_cls( - response, cls(response.headers, response.content) - ) - return obj - - def at_eof(self) -> bool: - """Returns True if the final boundary was reached, false otherwise.""" - return self._at_eof - - async def next( - self, - ) -> Optional[Union["MultipartReader", BodyPartReader]]: - """Emits the next multipart body part.""" - # So, if we're at BOF, we need to skip till the boundary. - if self._at_eof: - return None - await self._maybe_release_last_part() - if self._at_bof: - await self._read_until_first_boundary() - self._at_bof = False - else: - await self._read_boundary() - if self._at_eof: # we just read the last boundary, nothing to do there - return None - self._last_part = await self.fetch_next_part() - return self._last_part - - async def release(self) -> None: - """Reads all the body parts to the void till the final boundary.""" - while not self._at_eof: - item = await self.next() - if item is None: - break - await item.release() - - async def fetch_next_part( - self, - ) -> Union["MultipartReader", BodyPartReader]: - """Returns the next body part reader.""" - headers = await self._read_headers() - return self._get_part_reader(headers) - - def _get_part_reader( - self, - headers: "CIMultiDictProxy[str]", - ) -> Union["MultipartReader", BodyPartReader]: - """Dispatches the response by the `Content-Type` header. - - Returns a suitable reader instance. 
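-
-        For instance, a nested part whose Content-Type is itself
-        ``multipart/*`` is dispatched to another ``MultipartReader``,
-        while any other content type gets a ``BodyPartReader``.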
- - :param dict headers: Response headers - """ - ctype = headers.get(CONTENT_TYPE, "") - mimetype = parse_mimetype(ctype) - - if mimetype.type == "multipart": - if self.multipart_reader_cls is None: - return type(self)(headers, self._content) - return self.multipart_reader_cls(headers, self._content) - else: - return self.part_reader_cls(self._boundary, headers, self._content) - - def _get_boundary(self) -> str: - mimetype = parse_mimetype(self.headers[CONTENT_TYPE]) - - assert mimetype.type == "multipart", "multipart/* content type expected" - - if "boundary" not in mimetype.parameters: - raise ValueError( - "boundary missed for Content-Type: %s" % self.headers[CONTENT_TYPE] - ) - - boundary = mimetype.parameters["boundary"] - if len(boundary) > 70: - raise ValueError("boundary %r is too long (70 chars max)" % boundary) - - return boundary - - async def _readline(self) -> bytes: - if self._unread: - return self._unread.pop() - return await self._content.readline() - - async def _read_until_first_boundary(self) -> None: - while True: - chunk = await self._readline() - if chunk == b"": - raise ValueError( - "Could not find starting boundary %r" % (self._boundary) - ) - chunk = chunk.rstrip() - if chunk == self._boundary: - return - elif chunk == self._boundary + b"--": - self._at_eof = True - return - - async def _read_boundary(self) -> None: - chunk = (await self._readline()).rstrip() - if chunk == self._boundary: - pass - elif chunk == self._boundary + b"--": - self._at_eof = True - epilogue = await self._readline() - next_line = await self._readline() - - # the epilogue is expected and then either the end of input or the - # parent multipart boundary, if the parent boundary is found then - # it should be marked as unread and handed to the parent for - # processing - if next_line[:2] == b"--": - self._unread.append(next_line) - # otherwise the request is likely missing an epilogue and both - # lines should be passed to the parent for processing - # (this handles the old behavior gracefully) - else: - self._unread.extend([next_line, epilogue]) - else: - raise ValueError(f"Invalid boundary {chunk!r}, expected {self._boundary!r}") - - async def _read_headers(self) -> "CIMultiDictProxy[str]": - lines = [b""] - while True: - chunk = await self._content.readline() - chunk = chunk.strip() - lines.append(chunk) - if not chunk: - break - parser = HeadersParser() - headers, raw_headers = parser.parse_headers(lines) - return headers - - async def _maybe_release_last_part(self) -> None: - """Ensures that the last read body part is read completely.""" - if self._last_part is not None: - if not self._last_part.at_eof(): - await self._last_part.release() - self._unread.extend(self._last_part._unread) - self._last_part = None - - -_Part = Tuple[Payload, str, str] - - -class MultipartWriter(Payload): - """Multipart body writer.""" - - def __init__(self, subtype: str = "mixed", boundary: Optional[str] = None) -> None: - boundary = boundary if boundary is not None else uuid.uuid4().hex - # The underlying Payload API demands a str (utf-8), not bytes, - # so we need to ensure we don't lose anything during conversion. - # As a result, require the boundary to be ASCII only. - # In both situations. 
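-        # For example, the default uuid.uuid4().hex boundary generated
-        # above is always ASCII-safe, while a boundary containing e.g. "é"
-        # is rejected by the encode below with a ValueError.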
- - try: - self._boundary = boundary.encode("ascii") - except UnicodeEncodeError: - raise ValueError("boundary should contain ASCII only chars") from None - ctype = f"multipart/{subtype}; boundary={self._boundary_value}" - - super().__init__(None, content_type=ctype) - - self._parts: List[_Part] = [] - - def __enter__(self) -> "MultipartWriter": - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - pass - - def __iter__(self) -> Iterator[_Part]: - return iter(self._parts) - - def __len__(self) -> int: - return len(self._parts) - - def __bool__(self) -> bool: - return True - - _valid_tchar_regex = re.compile(rb"\A[!#$%&'*+\-.^_`|~\w]+\Z") - _invalid_qdtext_char_regex = re.compile(rb"[\x00-\x08\x0A-\x1F\x7F]") - - @property - def _boundary_value(self) -> str: - """Wrap boundary parameter value in quotes, if necessary. - - Reads self.boundary and returns a unicode sting. - """ - # Refer to RFCs 7231, 7230, 5234. - # - # parameter = token "=" ( token / quoted-string ) - # token = 1*tchar - # quoted-string = DQUOTE *( qdtext / quoted-pair ) DQUOTE - # qdtext = HTAB / SP / %x21 / %x23-5B / %x5D-7E / obs-text - # obs-text = %x80-FF - # quoted-pair = "\" ( HTAB / SP / VCHAR / obs-text ) - # tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" - # / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~" - # / DIGIT / ALPHA - # ; any VCHAR, except delimiters - # VCHAR = %x21-7E - value = self._boundary - if re.match(self._valid_tchar_regex, value): - return value.decode("ascii") # cannot fail - - if re.search(self._invalid_qdtext_char_regex, value): - raise ValueError("boundary value contains invalid characters") - - # escape %x5C and %x22 - quoted_value_content = value.replace(b"\\", b"\\\\") - quoted_value_content = quoted_value_content.replace(b'"', b'\\"') - - return '"' + quoted_value_content.decode("ascii") + '"' - - @property - def boundary(self) -> str: - return self._boundary.decode("ascii") - - def append(self, obj: Any, headers: Optional[MultiMapping[str]] = None) -> Payload: - if headers is None: - headers = CIMultiDict() - - if isinstance(obj, Payload): - obj.headers.update(headers) - return self.append_payload(obj) - else: - try: - payload = get_payload(obj, headers=headers) - except LookupError: - raise TypeError("Cannot create payload from %r" % obj) - else: - return self.append_payload(payload) - - def append_payload(self, payload: Payload) -> Payload: - """Adds a new body part to multipart writer.""" - # compression - encoding: Optional[str] = payload.headers.get( - CONTENT_ENCODING, - "", - ).lower() - if encoding and encoding not in ("deflate", "gzip", "identity"): - raise RuntimeError(f"unknown content encoding: {encoding}") - if encoding == "identity": - encoding = None - - # te encoding - te_encoding: Optional[str] = payload.headers.get( - CONTENT_TRANSFER_ENCODING, - "", - ).lower() - if te_encoding not in ("", "base64", "quoted-printable", "binary"): - raise RuntimeError( - "unknown content transfer encoding: {}" "".format(te_encoding) - ) - if te_encoding == "binary": - te_encoding = None - - # size - size = payload.size - if size is not None and not (encoding or te_encoding): - payload.headers[CONTENT_LENGTH] = str(size) - - self._parts.append((payload, encoding, te_encoding)) # type: ignore[arg-type] - return payload - - def append_json( - self, obj: Any, headers: Optional[MultiMapping[str]] = None - ) -> Payload: - """Helper to append JSON part.""" - if headers is None: - 
headers = CIMultiDict() - - return self.append_payload(JsonPayload(obj, headers=headers)) - - def append_form( - self, - obj: Union[Sequence[Tuple[str, str]], Mapping[str, str]], - headers: Optional[MultiMapping[str]] = None, - ) -> Payload: - """Helper to append form urlencoded part.""" - assert isinstance(obj, (Sequence, Mapping)) - - if headers is None: - headers = CIMultiDict() - - if isinstance(obj, Mapping): - obj = list(obj.items()) - data = urlencode(obj, doseq=True) - - return self.append_payload( - StringPayload( - data, headers=headers, content_type="application/x-www-form-urlencoded" - ) - ) - - @property - def size(self) -> Optional[int]: - """Size of the payload.""" - total = 0 - for part, encoding, te_encoding in self._parts: - if encoding or te_encoding or part.size is None: - return None - - total += int( - 2 - + len(self._boundary) - + 2 - + part.size # b'--'+self._boundary+b'\r\n' - + len(part._binary_headers) - + 2 # b'\r\n' - ) - - total += 2 + len(self._boundary) + 4 # b'--'+self._boundary+b'--\r\n' - return total - - async def write(self, writer: Any, close_boundary: bool = True) -> None: - """Write body.""" - for part, encoding, te_encoding in self._parts: - await writer.write(b"--" + self._boundary + b"\r\n") - await writer.write(part._binary_headers) - - if encoding or te_encoding: - w = MultipartPayloadWriter(writer) - if encoding: - w.enable_compression(encoding) - if te_encoding: - w.enable_encoding(te_encoding) - await part.write(w) # type: ignore[arg-type] - await w.write_eof() - else: - await part.write(writer) - - await writer.write(b"\r\n") - - if close_boundary: - await writer.write(b"--" + self._boundary + b"--\r\n") - - -class MultipartPayloadWriter: - def __init__(self, writer: Any) -> None: - self._writer = writer - self._encoding: Optional[str] = None - self._compress: Any = None - self._encoding_buffer: Optional[bytearray] = None - - def enable_encoding(self, encoding: str) -> None: - if encoding == "base64": - self._encoding = encoding - self._encoding_buffer = bytearray() - elif encoding == "quoted-printable": - self._encoding = "quoted-printable" - - def enable_compression( - self, encoding: str = "deflate", strategy: int = zlib.Z_DEFAULT_STRATEGY - ) -> None: - zlib_mode = 16 + zlib.MAX_WBITS if encoding == "gzip" else -zlib.MAX_WBITS - self._compress = zlib.compressobj(wbits=zlib_mode, strategy=strategy) - - async def write_eof(self) -> None: - if self._compress is not None: - chunk = self._compress.flush() - if chunk: - self._compress = None - await self.write(chunk) - - if self._encoding == "base64": - if self._encoding_buffer: - await self._writer.write(base64.b64encode(self._encoding_buffer)) - - async def write(self, chunk: bytes) -> None: - if self._compress is not None: - if chunk: - chunk = self._compress.compress(chunk) - if not chunk: - return - - if self._encoding == "base64": - buf = self._encoding_buffer - assert buf is not None - buf.extend(chunk) - - if buf: - div, mod = divmod(len(buf), 3) - enc_chunk, self._encoding_buffer = (buf[: div * 3], buf[div * 3 :]) - if enc_chunk: - b64chunk = base64.b64encode(enc_chunk) - await self._writer.write(b64chunk) - elif self._encoding == "quoted-printable": - await self._writer.write(binascii.b2a_qp(chunk)) - else: - await self._writer.write(chunk) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/number.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/number.py deleted file mode 100644 
index 5d2b9fcccc1469d0fee4c79c77011d4d8a67bcb0..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/number.py +++ /dev/null @@ -1,244 +0,0 @@ -"""gr.Number() component.""" - -from __future__ import annotations - -import math -from typing import Callable, Literal - -import numpy as np -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import NumberSerializable - -from gradio.components.base import FormComponent, IOComponent, _Keywords -from gradio.events import ( - Changeable, - Focusable, - Inputable, - Submittable, -) -from gradio.exceptions import Error -from gradio.interpretation import NeighborInterpretable - -set_documentation_group("component") - - -@document() -class Number( - FormComponent, - Changeable, - Inputable, - Submittable, - Focusable, - IOComponent, - NumberSerializable, - NeighborInterpretable, -): - """ - Creates a numeric field for user to enter numbers as input or display numeric output. - Preprocessing: passes field value as a {float} or {int} into the function, depending on `precision`. - Postprocessing: expects an {int} or {float} returned from the function and sets field value to it. - Examples-format: a {float} or {int} representing the number's value. - - Demos: tax_calculator, titanic_survival, blocks_simple_squares - """ - - def __init__( - self, - value: float | Callable | None = None, - *, - label: str | None = None, - info: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - precision: int | None = None, - minimum: float | None = None, - maximum: float | None = None, - step: float = 1, - **kwargs, - ): - """ - Parameters: - value: default value. If callable, the function will be called whenever the app loads to set the initial value of the component. - label: component name in interface. - info: additional component description. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - interactive: if True, will be editable; if False, editing will be disabled. If not provided, this is inferred based on whether the component is used as an input or output. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. 
- precision: Precision to round input/output to. If set to 0, will round to nearest integer and convert type to int. If None, no rounding happens. - minimum: Minimum value. Only applied when component is used as an input. If a user provides a smaller value, a gr.Error exception is raised by the backend. - maximum: Maximum value. Only applied when component is used as an input. If a user provides a larger value, a gr.Error exception is raised by the backend. - step: The interval between allowed numbers in the component. Can be used along with optional parameters `minimum` and `maximum` to create a range of legal values starting from `minimum` and incrementing according to this parameter. - """ - self.precision = precision - self.minimum = minimum - self.maximum = maximum - self.step = step - - IOComponent.__init__( - self, - label=label, - info=info, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - interactive=interactive, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - NeighborInterpretable.__init__(self) - - @staticmethod - def _round_to_precision(num: float | int, precision: int | None) -> float | int: - """ - Round to a given precision. - - If precision is None, no rounding happens. If 0, num is converted to int. - - Parameters: - num: Number to round. - precision: Precision to round to. - Returns: - rounded number - """ - if precision is None: - return float(num) - elif precision == 0: - return int(round(num, precision)) - else: - return round(num, precision) - - def get_config(self): - return { - "value": self.value, - "minimum": self.minimum, - "maximum": self.maximum, - "step": self.step, - "container": self.container, - **IOComponent.get_config(self), - } - - @staticmethod - def update( - value: float | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE, - minimum: float | None = None, - maximum: float | None = None, - step: float = 1, - label: str | None = None, - info: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - interactive: bool | None = None, - visible: bool | None = None, - ): - return { - "label": label, - "info": info, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "visible": visible, - "value": value, - "minimum": minimum, - "maximum": maximum, - "step": step, - "interactive": interactive, - "__type__": "update", - } - - def preprocess(self, x: float | None) -> float | None: - """ - Parameters: - x: numeric input - Returns: - number representing function input - """ - if x is None: - return None - elif self.minimum is not None and x < self.minimum: - raise Error(f"Value {x} is less than minimum value {self.minimum}.") - elif self.maximum is not None and x > self.maximum: - raise Error(f"Value {x} is greater than maximum value {self.maximum}.") - return self._round_to_precision(x, self.precision) - - def postprocess(self, y: float | None) -> float | None: - """ - Any postprocessing needed to be performed on function output. - - Parameters: - y: numeric output - Returns: - number representing function output - """ - if y is None: - return None - return self._round_to_precision(y, self.precision) - - def set_interpret_parameters( - self, steps: int = 3, delta: float = 1, delta_type: str = "percent" - ): - """ - Calculates interpretation scores of numeric values close to the input number. 
- Parameters: - steps: Number of nearby values to measure in each direction (above and below the input number). - delta: Size of step in each direction between nearby values. - delta_type: "percent" if delta step between nearby values should be a calculated as a percent, or "absolute" if delta should be a constant step change. - """ - self.interpretation_steps = steps - self.interpretation_delta = delta - self.interpretation_delta_type = delta_type - return self - - def get_interpretation_neighbors(self, x: float | int) -> tuple[list[float], dict]: - x = self._round_to_precision(x, self.precision) - if self.interpretation_delta_type == "percent": - delta = 1.0 * self.interpretation_delta * x / 100 - elif self.interpretation_delta_type == "absolute": - delta = self.interpretation_delta - else: - delta = self.interpretation_delta - if self.precision == 0 and math.floor(delta) != delta: - raise ValueError( - f"Delta value {delta} is not an integer and precision=0. Cannot generate valid set of neighbors. " - "If delta_type='percent', pick a value of delta such that x * delta is an integer. " - "If delta_type='absolute', pick a value of delta that is an integer." - ) - # run_interpretation will preprocess the neighbors so no need to convert to int here - negatives = ( - np.array(x) + np.arange(-self.interpretation_steps, 0) * delta - ).tolist() - positives = ( - np.array(x) + np.arange(1, self.interpretation_steps + 1) * delta - ).tolist() - return negatives + positives, {} - - def get_interpretation_scores( - self, x: float, neighbors: list[float], scores: list[float | None], **kwargs - ) -> list[tuple[float, float | None]]: - """ - Returns: - Each tuple set represents a numeric value near the input and its corresponding interpretation score. - """ - interpretation = list(zip(neighbors, scores)) - interpretation.insert(int(len(interpretation) / 2), (x, None)) - return interpretation diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/radio.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/radio.py deleted file mode 100644 index 3e8f79ab9ff1259876ca2ad1fe3f3e078b92681c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/radio.py +++ /dev/null @@ -1,197 +0,0 @@ -"""gr.Radio() component.""" - -from __future__ import annotations - -from typing import Any, Callable, Literal - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import StringSerializable - -from gradio.components.base import FormComponent, IOComponent, _Keywords -from gradio.deprecation import warn_deprecation, warn_style_method_deprecation -from gradio.events import Changeable, EventListenerMethod, Inputable, Selectable -from gradio.interpretation import NeighborInterpretable - -set_documentation_group("component") - - -@document() -class Radio( - FormComponent, - Selectable, - Changeable, - Inputable, - IOComponent, - StringSerializable, - NeighborInterpretable, -): - """ - Creates a set of (string or numeric type) radio buttons of which only one can be selected. - Preprocessing: passes the value of the selected radio button as a {str} or {int} or {float} or its index as an {int} into the function, depending on `type`. - Postprocessing: expects a {str} or {int} or {float} corresponding to the value of the radio button to be selected. 
- Examples-format: a {str} representing the radio option to select. - - Demos: sentence_builder, titanic_survival, blocks_essay - """ - - def __init__( - self, - choices: list[str | int | float] | None = None, - *, - value: str | int | float | Callable | None = None, - type: str = "value", - label: str | None = None, - info: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - **kwargs, - ): - """ - Parameters: - choices: list of options to select from. - value: the button selected by default. If None, no button is selected by default. If callable, the function will be called whenever the app loads to set the initial value of the component. - type: Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected. - label: component name in interface. - info: additional component description. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - interactive: if True, choices in this radio group will be selectable; if False, selection will be disabled. If not provided, this is inferred based on whether the component is used as an input or output. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - """ - self.choices = choices or [] - valid_types = ["value", "index"] - if type not in valid_types: - raise ValueError( - f"Invalid value for parameter `type`: {type}. Please choose from one of: {valid_types}" - ) - self.type = type - self.select: EventListenerMethod - """ - Event listener for when the user selects Radio option. - Uses event data gradio.SelectData to carry `value` referring to label of selected option, and `index` to refer to index. - See EventData documentation on how to use this event data. 
- """ - IOComponent.__init__( - self, - label=label, - info=info, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - interactive=interactive, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - NeighborInterpretable.__init__(self) - - def get_config(self): - return { - "choices": self.choices, - "value": self.value, - **IOComponent.get_config(self), - } - - def example_inputs(self) -> dict[str, Any]: - return { - "raw": self.choices[0] if self.choices else None, - "serialized": self.choices[0] if self.choices else None, - } - - @staticmethod - def update( - value: str - | int - | float - | Literal[_Keywords.NO_VALUE] - | None = _Keywords.NO_VALUE, - choices: list[str | int | float] | None = None, - label: str | None = None, - info: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - interactive: bool | None = None, - visible: bool | None = None, - ): - return { - "choices": choices, - "label": label, - "info": info, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "interactive": interactive, - "visible": visible, - "value": value, - "__type__": "update", - } - - def preprocess(self, x: str | int | float | None) -> str | int | float | None: - """ - Parameters: - x: selected choice - Returns: - selected choice as string or index within choice list - """ - if self.type == "value": - return x - elif self.type == "index": - if x is None: - return None - else: - return self.choices.index(x) - else: - raise ValueError( - f"Unknown type: {self.type}. Please choose from: 'value', 'index'." - ) - - def get_interpretation_neighbors(self, x): - choices = list(self.choices) - choices.remove(x) - return choices, {} - - def get_interpretation_scores( - self, x, neighbors, scores: list[float | None], **kwargs - ) -> list: - """ - Returns: - Each value represents the interpretation score corresponding to each choice. - """ - scores.insert(self.choices.index(x), None) - return scores - - def style( - self, - *, - item_container: bool | None = None, - container: bool | None = None, - **kwargs, - ): - """ - This method is deprecated. Please set these arguments in the constructor instead. - """ - warn_style_method_deprecation() - if item_container is not None: - warn_deprecation("The `item_container` parameter is deprecated.") - if container is not None: - self.container = container - return self diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/_mathtext.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/_mathtext.py deleted file mode 100644 index 3a934c21fd50764515fe4c56810489c50510079b..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/_mathtext.py +++ /dev/null @@ -1,2597 +0,0 @@ -""" -Implementation details for :mod:`.mathtext`. 
-""" - -import copy -from collections import namedtuple -import enum -import functools -import logging -import os -import re -import types -import unicodedata - -import numpy as np -from pyparsing import ( - Empty, Forward, Literal, NotAny, oneOf, OneOrMore, Optional, - ParseBaseException, ParseException, ParseExpression, ParseFatalException, - ParserElement, ParseResults, QuotedString, Regex, StringEnd, ZeroOrMore, - pyparsing_common) - -import matplotlib as mpl -from . import _api, cbook -from ._mathtext_data import ( - latex_to_bakoma, stix_glyph_fixes, stix_virtual_fonts, tex2uni) -from .font_manager import FontProperties, findfont, get_font -from .ft2font import FT2Image, KERNING_DEFAULT - - -ParserElement.enablePackrat() -_log = logging.getLogger("matplotlib.mathtext") - - -############################################################################## -# FONTS - - -@_api.delete_parameter("3.6", "math") -def get_unicode_index(symbol, math=False): # Publicly exported. - r""" - Return the integer index (from the Unicode table) of *symbol*. - - Parameters - ---------- - symbol : str - A single (Unicode) character, a TeX command (e.g. r'\pi') or a Type1 - symbol name (e.g. 'phi'). - math : bool, default: False - If True (deprecated), replace ASCII hyphen-minus by Unicode minus. - """ - # From UTF #25: U+2212 minus sign is the preferred - # representation of the unary and binary minus sign rather than - # the ASCII-derived U+002D hyphen-minus, because minus sign is - # unambiguous and because it is rendered with a more desirable - # length, usually longer than a hyphen. - # Remove this block when the 'math' parameter is deleted. - if math and symbol == '-': - return 0x2212 - try: # This will succeed if symbol is a single Unicode char - return ord(symbol) - except TypeError: - pass - try: # Is symbol a TeX symbol (i.e. \alpha) - return tex2uni[symbol.strip("\\")] - except KeyError as err: - raise ValueError( - "'{}' is not a valid Unicode character or TeX/Type1 symbol" - .format(symbol)) from err - - -VectorParse = namedtuple("VectorParse", "width height depth glyphs rects", - module="matplotlib.mathtext") -VectorParse.__doc__ = r""" -The namedtuple type returned by ``MathTextParser("path").parse(...)``. - -This tuple contains the global metrics (*width*, *height*, *depth*), a list of -*glyphs* (including their positions) and of *rect*\angles. -""" - - -RasterParse = namedtuple("RasterParse", "ox oy width height depth image", - module="matplotlib.mathtext") -RasterParse.__doc__ = r""" -The namedtuple type returned by ``MathTextParser("agg").parse(...)``. - -This tuple contains the global metrics (*width*, *height*, *depth*), and a -raster *image*. The offsets *ox*, *oy* are always zero. -""" - - -class Output: - r""" - Result of `ship`\ping a box: lists of positioned glyphs and rectangles. - - This class is not exposed to end users, but converted to a `VectorParse` or - a `RasterParse` by `.MathTextParser.parse`. 
- """ - - def __init__(self, box): - self.box = box - self.glyphs = [] # (ox, oy, info) - self.rects = [] # (x1, y1, x2, y2) - - def to_vector(self): - w, h, d = map( - np.ceil, [self.box.width, self.box.height, self.box.depth]) - gs = [(info.font, info.fontsize, info.num, ox, h - oy + info.offset) - for ox, oy, info in self.glyphs] - rs = [(x1, h - y2, x2 - x1, y2 - y1) - for x1, y1, x2, y2 in self.rects] - return VectorParse(w, h + d, d, gs, rs) - - def to_raster(self): - # Metrics y's and mathtext y's are oriented in opposite directions, - # hence the switch between ymin and ymax. - xmin = min([*[ox + info.metrics.xmin for ox, oy, info in self.glyphs], - *[x1 for x1, y1, x2, y2 in self.rects], 0]) - 1 - ymin = min([*[oy - info.metrics.ymax for ox, oy, info in self.glyphs], - *[y1 for x1, y1, x2, y2 in self.rects], 0]) - 1 - xmax = max([*[ox + info.metrics.xmax for ox, oy, info in self.glyphs], - *[x2 for x1, y1, x2, y2 in self.rects], 0]) + 1 - ymax = max([*[oy - info.metrics.ymin for ox, oy, info in self.glyphs], - *[y2 for x1, y1, x2, y2 in self.rects], 0]) + 1 - w = xmax - xmin - h = ymax - ymin - self.box.depth - d = ymax - ymin - self.box.height - image = FT2Image(np.ceil(w), np.ceil(h + max(d, 0))) - - # Ideally, we could just use self.glyphs and self.rects here, shifting - # their coordinates by (-xmin, -ymin), but this yields slightly - # different results due to floating point slop; shipping twice is the - # old approach and keeps baseline images backcompat. - shifted = ship(self.box, (-xmin, -ymin)) - - for ox, oy, info in shifted.glyphs: - info.font.draw_glyph_to_bitmap( - image, ox, oy - info.metrics.iceberg, info.glyph, - antialiased=mpl.rcParams['text.antialiased']) - for x1, y1, x2, y2 in shifted.rects: - height = max(int(y2 - y1) - 1, 0) - if height == 0: - center = (y2 + y1) / 2 - y = int(center - (height + 1) / 2) - else: - y = int(y1) - image.draw_rect_filled(int(x1), y, np.ceil(x2), y + height) - return RasterParse(0, 0, w, h + d, d, image) - - -class Fonts: - """ - An abstract base class for a system of fonts to use for mathtext. - - The class must be able to take symbol keys and font file names and - return the character metrics. It also delegates to a backend class - to do the actual drawing. - """ - - def __init__(self, default_font_prop, load_glyph_flags): - """ - Parameters - ---------- - default_font_prop : `~.font_manager.FontProperties` - The default non-math font, or the base font for Unicode (generic) - font rendering. - load_glyph_flags : int - Flags passed to the glyph loader (e.g. ``FT_Load_Glyph`` and - ``FT_Load_Char`` for FreeType-based fonts). - """ - self.default_font_prop = default_font_prop - self.load_glyph_flags = load_glyph_flags - - def get_kern(self, font1, fontclass1, sym1, fontsize1, - font2, fontclass2, sym2, fontsize2, dpi): - """ - Get the kerning distance for font between *sym1* and *sym2*. - - See `~.Fonts.get_metrics` for a detailed description of the parameters. - """ - return 0. - - def get_metrics(self, font, font_class, sym, fontsize, dpi): - r""" - Parameters - ---------- - font : str - One of the TeX font names: "tt", "it", "rm", "cal", "sf", "bf", - "default", "regular", "bb", "frak", "scr". "default" and "regular" - are synonyms and use the non-math font. - font_class : str - One of the TeX font names (as for *font*), but **not** "bb", - "frak", or "scr". This is used to combine two font classes. The - only supported combination currently is ``get_metrics("frak", "bf", - ...)``. 
- sym : str - A symbol in raw TeX form, e.g., "1", "x", or "\sigma". - fontsize : float - Font size in points. - dpi : float - Rendering dots-per-inch. - - Returns - ------- - object - - The returned object has the following attributes (all floats, - except *slanted*): - - - *advance*: The advance distance (in points) of the glyph. - - *height*: The height of the glyph in points. - - *width*: The width of the glyph in points. - - *xmin*, *xmax*, *ymin*, *ymax*: The ink rectangle of the glyph - - *iceberg*: The distance from the baseline to the top of the - glyph. (This corresponds to TeX's definition of "height".) - - *slanted*: Whether the glyph should be considered as "slanted" - (currently used for kerning sub/superscripts). - """ - info = self._get_info(font, font_class, sym, fontsize, dpi) - return info.metrics - - def render_glyph( - self, output, ox, oy, font, font_class, sym, fontsize, dpi): - """ - At position (*ox*, *oy*), draw the glyph specified by the remaining - parameters (see `get_metrics` for their detailed description). - """ - info = self._get_info(font, font_class, sym, fontsize, dpi) - output.glyphs.append((ox, oy, info)) - - def render_rect_filled(self, output, x1, y1, x2, y2): - """ - Draw a filled rectangle from (*x1*, *y1*) to (*x2*, *y2*). - """ - output.rects.append((x1, y1, x2, y2)) - - def get_xheight(self, font, fontsize, dpi): - """ - Get the xheight for the given *font* and *fontsize*. - """ - raise NotImplementedError() - - def get_underline_thickness(self, font, fontsize, dpi): - """ - Get the line thickness that matches the given font. Used as a - base unit for drawing lines such as in a fraction or radical. - """ - raise NotImplementedError() - - def get_used_characters(self): - """ - Get the set of characters that were used in the math - expression. Used by backends that need to subset fonts so - they know which glyphs to include. - """ - return self.used_characters - - def get_sized_alternatives_for_symbol(self, fontname, sym): - """ - Override if your font provides multiple sizes of the same - symbol. Should return a list of symbols matching *sym* in - various sizes. The expression renderer will select the most - appropriate size for a given situation from this list. - """ - return [(fontname, sym)] - - -class TruetypeFonts(Fonts): - """ - A generic base class for all font setups that use Truetype fonts - (through FT2Font). - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - # Per-instance cache. - self._get_info = functools.lru_cache(None)(self._get_info) - self._fonts = {} - - filename = findfont(self.default_font_prop) - default_font = get_font(filename) - self._fonts['default'] = default_font - self._fonts['regular'] = default_font - - def _get_font(self, font): - if font in self.fontmap: - basename = self.fontmap[font] - else: - basename = font - cached_font = self._fonts.get(basename) - if cached_font is None and os.path.exists(basename): - cached_font = get_font(basename) - self._fonts[basename] = cached_font - self._fonts[cached_font.postscript_name] = cached_font - self._fonts[cached_font.postscript_name.lower()] = cached_font - return cached_font - - def _get_offset(self, font, glyph, fontsize, dpi): - if font.postscript_name == 'Cmex10': - return (glyph.height / 64 / 2) + (fontsize/3 * dpi/72) - return 0. - - # The return value of _get_info is cached per-instance. 
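-    # (The wrapping happens in TruetypeFonts.__init__ above:
-    #  ``self._get_info = functools.lru_cache(None)(self._get_info)``
-    #  rebinds the method on the instance, so every Fonts object keeps its
-    #  own cache, which is dropped together with the instance.)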
- def _get_info(self, fontname, font_class, sym, fontsize, dpi): - font, num, slanted = self._get_glyph(fontname, font_class, sym) - font.set_size(fontsize, dpi) - glyph = font.load_char(num, flags=self.load_glyph_flags) - - xmin, ymin, xmax, ymax = [val/64.0 for val in glyph.bbox] - offset = self._get_offset(font, glyph, fontsize, dpi) - metrics = types.SimpleNamespace( - advance = glyph.linearHoriAdvance/65536.0, - height = glyph.height/64.0, - width = glyph.width/64.0, - xmin = xmin, - xmax = xmax, - ymin = ymin+offset, - ymax = ymax+offset, - # iceberg is the equivalent of TeX's "height" - iceberg = glyph.horiBearingY/64.0 + offset, - slanted = slanted - ) - - return types.SimpleNamespace( - font = font, - fontsize = fontsize, - postscript_name = font.postscript_name, - metrics = metrics, - num = num, - glyph = glyph, - offset = offset - ) - - def get_xheight(self, fontname, fontsize, dpi): - font = self._get_font(fontname) - font.set_size(fontsize, dpi) - pclt = font.get_sfnt_table('pclt') - if pclt is None: - # Some fonts don't store the xHeight, so we do a poor man's xHeight - metrics = self.get_metrics( - fontname, mpl.rcParams['mathtext.default'], 'x', fontsize, dpi) - return metrics.iceberg - xHeight = (pclt['xHeight'] / 64.0) * (fontsize / 12.0) * (dpi / 100.0) - return xHeight - - def get_underline_thickness(self, font, fontsize, dpi): - # This function used to grab underline thickness from the font - # metrics, but that information is just too un-reliable, so it - # is now hardcoded. - return ((0.75 / 12.0) * fontsize * dpi) / 72.0 - - def get_kern(self, font1, fontclass1, sym1, fontsize1, - font2, fontclass2, sym2, fontsize2, dpi): - if font1 == font2 and fontsize1 == fontsize2: - info1 = self._get_info(font1, fontclass1, sym1, fontsize1, dpi) - info2 = self._get_info(font2, fontclass2, sym2, fontsize2, dpi) - font = info1.font - return font.get_kerning(info1.num, info2.num, KERNING_DEFAULT) / 64 - return super().get_kern(font1, fontclass1, sym1, fontsize1, - font2, fontclass2, sym2, fontsize2, dpi) - - -class BakomaFonts(TruetypeFonts): - """ - Use the Bakoma TrueType fonts for rendering. - - Symbols are strewn about a number of font files, each of which has - its own proprietary 8-bit encoding. - """ - _fontmap = { - 'cal': 'cmsy10', - 'rm': 'cmr10', - 'tt': 'cmtt10', - 'it': 'cmmi10', - 'bf': 'cmb10', - 'sf': 'cmss10', - 'ex': 'cmex10', - } - - def __init__(self, *args, **kwargs): - self._stix_fallback = StixFonts(*args, **kwargs) - - super().__init__(*args, **kwargs) - self.fontmap = {} - for key, val in self._fontmap.items(): - fullpath = findfont(val) - self.fontmap[key] = fullpath - self.fontmap[val] = fullpath - - _slanted_symbols = set(r"\int \oint".split()) - - def _get_glyph(self, fontname, font_class, sym): - font = None - if fontname in self.fontmap and sym in latex_to_bakoma: - basename, num = latex_to_bakoma[sym] - slanted = (basename == "cmmi10") or sym in self._slanted_symbols - font = self._get_font(basename) - elif len(sym) == 1: - slanted = (fontname == "it") - font = self._get_font(fontname) - if font is not None: - num = ord(sym) - if font is not None and font.get_char_index(num) != 0: - return font, num, slanted - else: - return self._stix_fallback._get_glyph(fontname, font_class, sym) - - # The Bakoma fonts contain many pre-sized alternatives for the - # delimiters. The AutoSizedChar class will use these alternatives - # and select the best (closest sized) glyph. 
- _size_alternatives = { - '(': [('rm', '('), ('ex', '\xa1'), ('ex', '\xb3'), - ('ex', '\xb5'), ('ex', '\xc3')], - ')': [('rm', ')'), ('ex', '\xa2'), ('ex', '\xb4'), - ('ex', '\xb6'), ('ex', '\x21')], - '{': [('cal', '{'), ('ex', '\xa9'), ('ex', '\x6e'), - ('ex', '\xbd'), ('ex', '\x28')], - '}': [('cal', '}'), ('ex', '\xaa'), ('ex', '\x6f'), - ('ex', '\xbe'), ('ex', '\x29')], - # The fourth size of '[' is mysteriously missing from the BaKoMa - # font, so I've omitted it for both '[' and ']' - '[': [('rm', '['), ('ex', '\xa3'), ('ex', '\x68'), - ('ex', '\x22')], - ']': [('rm', ']'), ('ex', '\xa4'), ('ex', '\x69'), - ('ex', '\x23')], - r'\lfloor': [('ex', '\xa5'), ('ex', '\x6a'), - ('ex', '\xb9'), ('ex', '\x24')], - r'\rfloor': [('ex', '\xa6'), ('ex', '\x6b'), - ('ex', '\xba'), ('ex', '\x25')], - r'\lceil': [('ex', '\xa7'), ('ex', '\x6c'), - ('ex', '\xbb'), ('ex', '\x26')], - r'\rceil': [('ex', '\xa8'), ('ex', '\x6d'), - ('ex', '\xbc'), ('ex', '\x27')], - r'\langle': [('ex', '\xad'), ('ex', '\x44'), - ('ex', '\xbf'), ('ex', '\x2a')], - r'\rangle': [('ex', '\xae'), ('ex', '\x45'), - ('ex', '\xc0'), ('ex', '\x2b')], - r'\__sqrt__': [('ex', '\x70'), ('ex', '\x71'), - ('ex', '\x72'), ('ex', '\x73')], - r'\backslash': [('ex', '\xb2'), ('ex', '\x2f'), - ('ex', '\xc2'), ('ex', '\x2d')], - r'/': [('rm', '/'), ('ex', '\xb1'), ('ex', '\x2e'), - ('ex', '\xcb'), ('ex', '\x2c')], - r'\widehat': [('rm', '\x5e'), ('ex', '\x62'), ('ex', '\x63'), - ('ex', '\x64')], - r'\widetilde': [('rm', '\x7e'), ('ex', '\x65'), ('ex', '\x66'), - ('ex', '\x67')], - r'<': [('cal', 'h'), ('ex', 'D')], - r'>': [('cal', 'i'), ('ex', 'E')] - } - - for alias, target in [(r'\leftparen', '('), - (r'\rightparent', ')'), - (r'\leftbrace', '{'), - (r'\rightbrace', '}'), - (r'\leftbracket', '['), - (r'\rightbracket', ']'), - (r'\{', '{'), - (r'\}', '}'), - (r'\[', '['), - (r'\]', ']')]: - _size_alternatives[alias] = _size_alternatives[target] - - def get_sized_alternatives_for_symbol(self, fontname, sym): - return self._size_alternatives.get(sym, [(fontname, sym)]) - - -class UnicodeFonts(TruetypeFonts): - """ - An abstract base class for handling Unicode fonts. - - While some reasonably complete Unicode fonts (such as DejaVu) may - work in some situations, the only Unicode font I'm aware of with a - complete set of math symbols is STIX. - - This class will "fallback" on the Bakoma fonts when a required - symbol can not be found in the font. - """ - - # Some glyphs are not present in the `cmr10` font, and must be brought in - # from `cmsy10`. Map the Unicode indices of those glyphs to the indices at - # which they are found in `cmsy10`. - _cmr10_substitutions = { - 0x00D7: 0x00A3, # Multiplication sign. - 0x2212: 0x00A1, # Minus sign. - } - - def __init__(self, *args, **kwargs): - # This must come first so the backend's owner is set correctly - fallback_rc = mpl.rcParams['mathtext.fallback'] - font_cls = {'stix': StixFonts, - 'stixsans': StixSansFonts, - 'cm': BakomaFonts - }.get(fallback_rc) - self._fallback_font = font_cls(*args, **kwargs) if font_cls else None - - super().__init__(*args, **kwargs) - self.fontmap = {} - for texfont in "cal rm tt it bf sf".split(): - prop = mpl.rcParams['mathtext.' 
+ texfont] - font = findfont(prop) - self.fontmap[texfont] = font - prop = FontProperties('cmex10') - font = findfont(prop) - self.fontmap['ex'] = font - - # include STIX sized alternatives for glyphs if fallback is STIX - if isinstance(self._fallback_font, StixFonts): - stixsizedaltfonts = { - 0: 'STIXGeneral', - 1: 'STIXSizeOneSym', - 2: 'STIXSizeTwoSym', - 3: 'STIXSizeThreeSym', - 4: 'STIXSizeFourSym', - 5: 'STIXSizeFiveSym'} - - for size, name in stixsizedaltfonts.items(): - fullpath = findfont(name) - self.fontmap[size] = fullpath - self.fontmap[name] = fullpath - - _slanted_symbols = set(r"\int \oint".split()) - - def _map_virtual_font(self, fontname, font_class, uniindex): - return fontname, uniindex - - def _get_glyph(self, fontname, font_class, sym): - try: - uniindex = get_unicode_index(sym) - found_symbol = True - except ValueError: - uniindex = ord('?') - found_symbol = False - _log.warning("No TeX to Unicode mapping for {!a}.".format(sym)) - - fontname, uniindex = self._map_virtual_font( - fontname, font_class, uniindex) - - new_fontname = fontname - - # Only characters in the "Letter" class should be italicized in 'it' - # mode. Greek capital letters should be Roman. - if found_symbol: - if fontname == 'it' and uniindex < 0x10000: - char = chr(uniindex) - if (unicodedata.category(char)[0] != "L" - or unicodedata.name(char).startswith("GREEK CAPITAL")): - new_fontname = 'rm' - - slanted = (new_fontname == 'it') or sym in self._slanted_symbols - found_symbol = False - font = self._get_font(new_fontname) - if font is not None: - if (uniindex in self._cmr10_substitutions - and font.family_name == "cmr10"): - font = get_font( - cbook._get_data_path("fonts/ttf/cmsy10.ttf")) - uniindex = self._cmr10_substitutions[uniindex] - glyphindex = font.get_char_index(uniindex) - if glyphindex != 0: - found_symbol = True - - if not found_symbol: - if self._fallback_font: - if (fontname in ('it', 'regular') - and isinstance(self._fallback_font, StixFonts)): - fontname = 'rm' - - g = self._fallback_font._get_glyph(fontname, font_class, sym) - family = g[0].family_name - if family in list(BakomaFonts._fontmap.values()): - family = "Computer Modern" - _log.info("Substituting symbol %s from %s", sym, family) - return g - - else: - if (fontname in ('it', 'regular') - and isinstance(self, StixFonts)): - return self._get_glyph('rm', font_class, sym) - _log.warning("Font {!r} does not have a glyph for {!a} " - "[U+{:x}], substituting with a dummy " - "symbol.".format(new_fontname, sym, uniindex)) - font = self._get_font('rm') - uniindex = 0xA4 # currency char, for lack of anything better - slanted = False - - return font, uniindex, slanted - - def get_sized_alternatives_for_symbol(self, fontname, sym): - if self._fallback_font: - return self._fallback_font.get_sized_alternatives_for_symbol( - fontname, sym) - return [(fontname, sym)] - - -class DejaVuFonts(UnicodeFonts): - - def __init__(self, *args, **kwargs): - # This must come first so the backend's owner is set correctly - if isinstance(self, DejaVuSerifFonts): - self._fallback_font = StixFonts(*args, **kwargs) - else: - self._fallback_font = StixSansFonts(*args, **kwargs) - self.bakoma = BakomaFonts(*args, **kwargs) - TruetypeFonts.__init__(self, *args, **kwargs) - self.fontmap = {} - # Include Stix sized alternatives for glyphs - self._fontmap.update({ - 1: 'STIXSizeOneSym', - 2: 'STIXSizeTwoSym', - 3: 'STIXSizeThreeSym', - 4: 'STIXSizeFourSym', - 5: 'STIXSizeFiveSym', - }) - for key, name in self._fontmap.items(): - fullpath = findfont(name) - 
self.fontmap[key] = fullpath - self.fontmap[name] = fullpath - - def _get_glyph(self, fontname, font_class, sym): - # Override prime symbol to use Bakoma. - if sym == r'\prime': - return self.bakoma._get_glyph(fontname, font_class, sym) - else: - # check whether the glyph is available in the display font - uniindex = get_unicode_index(sym) - font = self._get_font('ex') - if font is not None: - glyphindex = font.get_char_index(uniindex) - if glyphindex != 0: - return super()._get_glyph('ex', font_class, sym) - # otherwise return regular glyph - return super()._get_glyph(fontname, font_class, sym) - - -class DejaVuSerifFonts(DejaVuFonts): - """ - A font handling class for the DejaVu Serif fonts - - If a glyph is not found it will fallback to Stix Serif - """ - _fontmap = { - 'rm': 'DejaVu Serif', - 'it': 'DejaVu Serif:italic', - 'bf': 'DejaVu Serif:weight=bold', - 'sf': 'DejaVu Sans', - 'tt': 'DejaVu Sans Mono', - 'ex': 'DejaVu Serif Display', - 0: 'DejaVu Serif', - } - - -class DejaVuSansFonts(DejaVuFonts): - """ - A font handling class for the DejaVu Sans fonts - - If a glyph is not found it will fallback to Stix Sans - """ - _fontmap = { - 'rm': 'DejaVu Sans', - 'it': 'DejaVu Sans:italic', - 'bf': 'DejaVu Sans:weight=bold', - 'sf': 'DejaVu Sans', - 'tt': 'DejaVu Sans Mono', - 'ex': 'DejaVu Sans Display', - 0: 'DejaVu Sans', - } - - -class StixFonts(UnicodeFonts): - """ - A font handling class for the STIX fonts. - - In addition to what UnicodeFonts provides, this class: - - - supports "virtual fonts" which are complete alpha numeric - character sets with different font styles at special Unicode - code points, such as "Blackboard". - - - handles sized alternative characters for the STIXSizeX fonts. - """ - _fontmap = { - 'rm': 'STIXGeneral', - 'it': 'STIXGeneral:italic', - 'bf': 'STIXGeneral:weight=bold', - 'nonunirm': 'STIXNonUnicode', - 'nonuniit': 'STIXNonUnicode:italic', - 'nonunibf': 'STIXNonUnicode:weight=bold', - 0: 'STIXGeneral', - 1: 'STIXSizeOneSym', - 2: 'STIXSizeTwoSym', - 3: 'STIXSizeThreeSym', - 4: 'STIXSizeFourSym', - 5: 'STIXSizeFiveSym', - } - _fallback_font = False - _sans = False - - def __init__(self, *args, **kwargs): - TruetypeFonts.__init__(self, *args, **kwargs) - self.fontmap = {} - for key, name in self._fontmap.items(): - fullpath = findfont(name) - self.fontmap[key] = fullpath - self.fontmap[name] = fullpath - - def _map_virtual_font(self, fontname, font_class, uniindex): - # Handle these "fonts" that are actually embedded in - # other fonts. - mapping = stix_virtual_fonts.get(fontname) - if (self._sans and mapping is None - and fontname not in ('regular', 'default')): - mapping = stix_virtual_fonts['sf'] - doing_sans_conversion = True - else: - doing_sans_conversion = False - - if mapping is not None: - if isinstance(mapping, dict): - try: - mapping = mapping[font_class] - except KeyError: - mapping = mapping['rm'] - - # Binary search for the source glyph - lo = 0 - hi = len(mapping) - while lo < hi: - mid = (lo+hi)//2 - range = mapping[mid] - if uniindex < range[0]: - hi = mid - elif uniindex <= range[1]: - break - else: - lo = mid + 1 - - if range[0] <= uniindex <= range[1]: - uniindex = uniindex - range[0] + range[3] - fontname = range[2] - elif not doing_sans_conversion: - # This will generate a dummy character - uniindex = 0x1 - fontname = mpl.rcParams['mathtext.default'] - - # Fix some incorrect glyphs. 
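-        # At this point a virtual-font request such as \mathbb{R}
-        # (fontname 'bb' with the code point of 'R') has been remapped by
-        # the range table above to a concrete STIX font and glyph slot;
-        # the block below only patches a few known-bad slots via
-        # stix_glyph_fixes.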
- if fontname in ('rm', 'it'): - uniindex = stix_glyph_fixes.get(uniindex, uniindex) - - # Handle private use area glyphs - if fontname in ('it', 'rm', 'bf') and 0xe000 <= uniindex <= 0xf8ff: - fontname = 'nonuni' + fontname - - return fontname, uniindex - - @functools.lru_cache() - def get_sized_alternatives_for_symbol(self, fontname, sym): - fixes = { - '\\{': '{', '\\}': '}', '\\[': '[', '\\]': ']', - '<': '\N{MATHEMATICAL LEFT ANGLE BRACKET}', - '>': '\N{MATHEMATICAL RIGHT ANGLE BRACKET}', - } - sym = fixes.get(sym, sym) - try: - uniindex = get_unicode_index(sym) - except ValueError: - return [(fontname, sym)] - alternatives = [(i, chr(uniindex)) for i in range(6) - if self._get_font(i).get_char_index(uniindex) != 0] - # The largest size of the radical symbol in STIX has incorrect - # metrics that cause it to be disconnected from the stem. - if sym == r'\__sqrt__': - alternatives = alternatives[:-1] - return alternatives - - -class StixSansFonts(StixFonts): - """ - A font handling class for the STIX fonts (that uses sans-serif - characters by default). - """ - _sans = True - - -############################################################################## -# TeX-LIKE BOX MODEL - -# The following is based directly on the document 'woven' from the -# TeX82 source code. This information is also available in printed -# form: -# -# Knuth, Donald E.. 1986. Computers and Typesetting, Volume B: -# TeX: The Program. Addison-Wesley Professional. -# -# The most relevant "chapters" are: -# Data structures for boxes and their friends -# Shipping pages out (ship()) -# Packaging (hpack() and vpack()) -# Data structures for math mode -# Subroutines for math mode -# Typesetting math formulas -# -# Many of the docstrings below refer to a numbered "node" in that -# book, e.g., node123 -# -# Note that (as TeX) y increases downward, unlike many other parts of -# matplotlib. - -# How much text shrinks when going to the next-smallest level. -SHRINK_FACTOR = 0.7 -# The number of different sizes of chars to use, beyond which they will not -# get any smaller -NUM_SIZE_LEVELS = 6 - - -class FontConstantsBase: - """ - A set of constants that controls how certain things, such as sub- - and superscripts are laid out. These are all metrics that can't - be reliably retrieved from the font metrics in the font itself. - """ - # Percentage of x-height of additional horiz. 
space after sub/superscripts - script_space = 0.05 - - # Percentage of x-height that sub/superscripts drop below the baseline - subdrop = 0.4 - - # Percentage of x-height that superscripts are raised from the baseline - sup1 = 0.7 - - # Percentage of x-height that subscripts drop below the baseline - sub1 = 0.3 - - # Percentage of x-height that subscripts drop below the baseline when a - # superscript is present - sub2 = 0.5 - - # Percentage of x-height that sub/superscripts are offset relative to the - # nucleus edge for non-slanted nuclei - delta = 0.025 - - # Additional percentage of last character height above 2/3 of the - # x-height that superscripts are offset relative to the subscript - # for slanted nuclei - delta_slanted = 0.2 - - # Percentage of x-height that superscripts and subscripts are offset for - # integrals - delta_integral = 0.1 - - -class ComputerModernFontConstants(FontConstantsBase): - script_space = 0.075 - subdrop = 0.2 - sup1 = 0.45 - sub1 = 0.2 - sub2 = 0.3 - delta = 0.075 - delta_slanted = 0.3 - delta_integral = 0.3 - - -class STIXFontConstants(FontConstantsBase): - script_space = 0.1 - sup1 = 0.8 - sub2 = 0.6 - delta = 0.05 - delta_slanted = 0.3 - delta_integral = 0.3 - - -class STIXSansFontConstants(FontConstantsBase): - script_space = 0.05 - sup1 = 0.8 - delta_slanted = 0.6 - delta_integral = 0.3 - - -class DejaVuSerifFontConstants(FontConstantsBase): - pass - - -class DejaVuSansFontConstants(FontConstantsBase): - pass - - -# Maps font family names to the FontConstantBase subclass to use -_font_constant_mapping = { - 'DejaVu Sans': DejaVuSansFontConstants, - 'DejaVu Sans Mono': DejaVuSansFontConstants, - 'DejaVu Serif': DejaVuSerifFontConstants, - 'cmb10': ComputerModernFontConstants, - 'cmex10': ComputerModernFontConstants, - 'cmmi10': ComputerModernFontConstants, - 'cmr10': ComputerModernFontConstants, - 'cmss10': ComputerModernFontConstants, - 'cmsy10': ComputerModernFontConstants, - 'cmtt10': ComputerModernFontConstants, - 'STIXGeneral': STIXFontConstants, - 'STIXNonUnicode': STIXFontConstants, - 'STIXSizeFiveSym': STIXFontConstants, - 'STIXSizeFourSym': STIXFontConstants, - 'STIXSizeThreeSym': STIXFontConstants, - 'STIXSizeTwoSym': STIXFontConstants, - 'STIXSizeOneSym': STIXFontConstants, - # Map the fonts we used to ship, just for good measure - 'Bitstream Vera Sans': DejaVuSansFontConstants, - 'Bitstream Vera': DejaVuSansFontConstants, - } - - -def _get_font_constant_set(state): - constants = _font_constant_mapping.get( - state.fontset._get_font(state.font).family_name, FontConstantsBase) - # STIX sans isn't really its own fonts, just different code points - # in the STIX fonts, so we have to detect this one separately. - if (constants is STIXFontConstants and - isinstance(state.fontset, StixSansFonts)): - return STIXSansFontConstants - return constants - - -class Node: - """A node in the TeX box model.""" - - def __init__(self): - self.size = 0 - - def __repr__(self): - return type(self).__name__ - - def get_kerning(self, next): - return 0.0 - - def shrink(self): - """ - Shrinks one level smaller. There are only three levels of - sizes, after which things will no longer get smaller. 
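# A quick numeric sketch of the shrink model above (illustrative only, not
# part of the module): each call to shrink() bumps `size` and scales the
# dimensions by SHRINK_FACTOR while the new size is still below
# NUM_SIZE_LEVELS, so at most NUM_SIZE_LEVELS - 1 scalings ever apply.
def effective_fontsize(fontsize, times_shrunk,
                       shrink_factor=0.7, num_size_levels=6):
    """Font size of a glyph nested `times_shrunk` script levels deep."""
    return fontsize * shrink_factor ** min(times_shrunk, num_size_levels - 1)

print(effective_fontsize(12, 0))   # 12.0  (nucleus)
print(effective_fontsize(12, 1))   # 8.4   (superscript)
print(effective_fontsize(12, 2))   # 5.88  (superscript of a superscript)
print(effective_fontsize(12, 9))   # ~2.02 (capped after five shrinks)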
- """ - self.size += 1 - - def render(self, output, x, y): - """Render this node.""" - - -class Box(Node): - """A node with a physical location.""" - - def __init__(self, width, height, depth): - super().__init__() - self.width = width - self.height = height - self.depth = depth - - def shrink(self): - super().shrink() - if self.size < NUM_SIZE_LEVELS: - self.width *= SHRINK_FACTOR - self.height *= SHRINK_FACTOR - self.depth *= SHRINK_FACTOR - - def render(self, output, x1, y1, x2, y2): - pass - - -class Vbox(Box): - """A box with only height (zero width).""" - - def __init__(self, height, depth): - super().__init__(0., height, depth) - - -class Hbox(Box): - """A box with only width (zero height and depth).""" - - def __init__(self, width): - super().__init__(width, 0., 0.) - - -class Char(Node): - """ - A single character. - - Unlike TeX, the font information and metrics are stored with each `Char` - to make it easier to lookup the font metrics when needed. Note that TeX - boxes have a width, height, and depth, unlike Type1 and TrueType which use - a full bounding box and an advance in the x-direction. The metrics must - be converted to the TeX model, and the advance (if different from width) - must be converted into a `Kern` node when the `Char` is added to its parent - `Hlist`. - """ - - def __init__(self, c, state): - super().__init__() - self.c = c - self.fontset = state.fontset - self.font = state.font - self.font_class = state.font_class - self.fontsize = state.fontsize - self.dpi = state.dpi - # The real width, height and depth will be set during the - # pack phase, after we know the real fontsize - self._update_metrics() - - def __repr__(self): - return '`%s`' % self.c - - def _update_metrics(self): - metrics = self._metrics = self.fontset.get_metrics( - self.font, self.font_class, self.c, self.fontsize, self.dpi) - if self.c == ' ': - self.width = metrics.advance - else: - self.width = metrics.width - self.height = metrics.iceberg - self.depth = -(metrics.iceberg - metrics.height) - - def is_slanted(self): - return self._metrics.slanted - - def get_kerning(self, next): - """ - Return the amount of kerning between this and the given character. - - This method is called when characters are strung together into `Hlist` - to create `Kern` nodes. - """ - advance = self._metrics.advance - self.width - kern = 0. - if isinstance(next, Char): - kern = self.fontset.get_kern( - self.font, self.font_class, self.c, self.fontsize, - next.font, next.font_class, next.c, next.fontsize, - self.dpi) - return advance + kern - - def render(self, output, x, y): - self.fontset.render_glyph( - output, x, y, - self.font, self.font_class, self.c, self.fontsize, self.dpi) - - def shrink(self): - super().shrink() - if self.size < NUM_SIZE_LEVELS: - self.fontsize *= SHRINK_FACTOR - self.width *= SHRINK_FACTOR - self.height *= SHRINK_FACTOR - self.depth *= SHRINK_FACTOR - - -class Accent(Char): - """ - The font metrics need to be dealt with differently for accents, - since they are already offset correctly from the baseline in - TrueType fonts. 
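# An illustrative sketch of the metric conversion described in the Char
# docstring above, with a hypothetical metrics record standing in for the
# fontset's real one: `iceberg` (distance from the baseline to the glyph
# top) becomes the TeX height, ink below the baseline becomes depth, and
# any advance beyond the width is later emitted as a Kern node.
from types import SimpleNamespace

def to_tex_box(metrics):
    width = metrics.width
    height = metrics.iceberg                      # ink above the baseline
    depth = -(metrics.iceberg - metrics.height)   # ink below the baseline
    trailing_kern = metrics.advance - width       # becomes a Kern in the Hlist
    return width, height, depth, trailing_kern

m = SimpleNamespace(advance=6.0, width=5.5, height=7.0, iceberg=6.0)
print(to_tex_box(m))   # (5.5, 6.0, 1.0, 0.5)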
- """ - def _update_metrics(self): - metrics = self._metrics = self.fontset.get_metrics( - self.font, self.font_class, self.c, self.fontsize, self.dpi) - self.width = metrics.xmax - metrics.xmin - self.height = metrics.ymax - metrics.ymin - self.depth = 0 - - def shrink(self): - super().shrink() - self._update_metrics() - - def render(self, output, x, y): - self.fontset.render_glyph( - output, x - self._metrics.xmin, y + self._metrics.ymin, - self.font, self.font_class, self.c, self.fontsize, self.dpi) - - -class List(Box): - """A list of nodes (either horizontal or vertical).""" - - def __init__(self, elements): - super().__init__(0., 0., 0.) - self.shift_amount = 0. # An arbitrary offset - self.children = elements # The child nodes of this list - # The following parameters are set in the vpack and hpack functions - self.glue_set = 0. # The glue setting of this list - self.glue_sign = 0 # 0: normal, -1: shrinking, 1: stretching - self.glue_order = 0 # The order of infinity (0 - 3) for the glue - - def __repr__(self): - return '%s[%s]' % ( - super().__repr__(), - self.width, self.height, - self.depth, self.shift_amount, - ', '.join([repr(x) for x in self.children])) - - def _set_glue(self, x, sign, totals, error_type): - self.glue_order = o = next( - # Highest order of glue used by the members of this list. - (i for i in range(len(totals))[::-1] if totals[i] != 0), 0) - self.glue_sign = sign - if totals[o] != 0.: - self.glue_set = x / totals[o] - else: - self.glue_sign = 0 - self.glue_ratio = 0. - if o == 0: - if len(self.children): - _log.warning("%s %s: %r", - error_type, type(self).__name__, self) - - def shrink(self): - for child in self.children: - child.shrink() - super().shrink() - if self.size < NUM_SIZE_LEVELS: - self.shift_amount *= SHRINK_FACTOR - self.glue_set *= SHRINK_FACTOR - - -class Hlist(List): - """A horizontal list of boxes.""" - - def __init__(self, elements, w=0., m='additional', do_kern=True): - super().__init__(elements) - if do_kern: - self.kern() - self.hpack(w=w, m=m) - - def kern(self): - """ - Insert `Kern` nodes between `Char` nodes to set kerning. - - The `Char` nodes themselves determine the amount of kerning they need - (in `~Char.get_kerning`), and this function just creates the correct - linked list. - """ - new_children = [] - num_children = len(self.children) - if num_children: - for i in range(num_children): - elem = self.children[i] - if i < num_children - 1: - next = self.children[i + 1] - else: - next = None - - new_children.append(elem) - kerning_distance = elem.get_kerning(next) - if kerning_distance != 0.: - kern = Kern(kerning_distance) - new_children.append(kern) - self.children = new_children - - # This is a failed experiment to fake cross-font kerning. -# def get_kerning(self, next): -# if len(self.children) >= 2 and isinstance(self.children[-2], Char): -# if isinstance(next, Char): -# print "CASE A" -# return self.children[-2].get_kerning(next) -# elif (isinstance(next, Hlist) and len(next.children) -# and isinstance(next.children[0], Char)): -# print "CASE B" -# result = self.children[-2].get_kerning(next.children[0]) -# print result -# return result -# return 0.0 - - def hpack(self, w=0., m='additional'): - r""" - Compute the dimensions of the resulting boxes, and adjust the glue if - one of those dimensions is pre-specified. 
The computed sizes normally -        enclose all of the material inside the new box; but some items may -        stick out if negative glue is used, if the box is overfull, or if a -        ``\vbox`` includes other boxes that have been shifted left. - -        Parameters -        ---------- -        w : float, default: 0 -            A width. -        m : {'exactly', 'additional'}, default: 'additional' -            Whether to produce a box whose width is 'exactly' *w*; or a box -            with the natural width of the contents, plus *w* ('additional'). - -        Notes -        ----- -        The defaults produce a box with the natural width of the contents. -        """ -        # I don't know why these get reset in TeX.  Shift_amount is pretty -        # much useless if we do. -        # self.shift_amount = 0. -        h = 0. -        d = 0. -        x = 0. -        total_stretch = [0.] * 4 -        total_shrink = [0.] * 4 -        for p in self.children: -            if isinstance(p, Char): -                x += p.width -                h = max(h, p.height) -                d = max(d, p.depth) -            elif isinstance(p, Box): -                x += p.width -                if not np.isinf(p.height) and not np.isinf(p.depth): -                    s = getattr(p, 'shift_amount', 0.) -                    h = max(h, p.height - s) -                    d = max(d, p.depth + s) -            elif isinstance(p, Glue): -                glue_spec = p.glue_spec -                x += glue_spec.width -                total_stretch[glue_spec.stretch_order] += glue_spec.stretch -                total_shrink[glue_spec.shrink_order] += glue_spec.shrink -            elif isinstance(p, Kern): -                x += p.width -        self.height = h -        self.depth = d - -        if m == 'additional': -            w += x -        self.width = w -        x = w - x - -        if x == 0.: -            self.glue_sign = 0 -            self.glue_order = 0 -            self.glue_ratio = 0. -            return -        if x > 0.: -            self._set_glue(x, 1, total_stretch, "Overfull") -        else: -            self._set_glue(x, -1, total_shrink, "Underfull") - - -class Vlist(List): -    """A vertical list of boxes.""" - -    def __init__(self, elements, h=0., m='additional'): -        super().__init__(elements) -        self.vpack(h=h, m=m) - -    def vpack(self, h=0., m='additional', l=np.inf): -        """ -        Compute the dimensions of the resulting boxes, and adjust the glue -        if one of those dimensions is pre-specified. - -        Parameters -        ---------- -        h : float, default: 0 -            A height. -        m : {'exactly', 'additional'}, default: 'additional' -            Whether to produce a box whose height is 'exactly' *h*; or a box -            with the natural height of the contents, plus *h* ('additional'). -        l : float, default: np.inf -            The maximum height. - -        Notes -        ----- -        The defaults produce a box with the natural height of the contents. -        """ -        # I don't know why these get reset in TeX.  Shift_amount is pretty -        # much useless if we do. -        # self.shift_amount = 0. -        w = 0. -        d = 0. -        x = 0. -        total_stretch = [0.] * 4 -        total_shrink = [0.] * 4 -        for p in self.children: -            if isinstance(p, Box): -                x += d + p.height -                d = p.depth -                if not np.isinf(p.width): -                    s = getattr(p, 'shift_amount', 0.) -                    w = max(w, p.width + s) -            elif isinstance(p, Glue): -                x += d -                d = 0. -                glue_spec = p.glue_spec -                x += glue_spec.width -                total_stretch[glue_spec.stretch_order] += glue_spec.stretch -                total_shrink[glue_spec.shrink_order] += glue_spec.shrink -            elif isinstance(p, Kern): -                x += d + p.width -                d = 0. -            elif isinstance(p, Char): -                raise RuntimeError( -                    "Internal mathtext error: Char node found in Vlist") - -        self.width = w -        if d > l: -            x += d - l -            self.depth = l -        else: -            self.depth = d - -        if m == 'additional': -            h += x -        self.height = h -        x = h - x - -        if x == 0: -            self.glue_sign = 0 -            self.glue_order = 0 -            self.glue_ratio = 0. -            return - -        if x > 0.: -            self._set_glue(x, 1, total_stretch, "Overfull") -        else: -            self._set_glue(x, -1, total_shrink, "Underfull") - - -class Rule(Box): -    """ -    A solid black rectangle.
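# The glue arithmetic in hpack()/vpack() above, reduced to plain numbers
# (a sketch, not the real Node machinery): with a natural width of 30 and a
# target of 40, the 10 excess units stretch the line (glue_sign = 1) and are
# split among the glues in proportion to their stretch at the highest glue
# order present.
target, natural = 40.0, 30.0
total_stretch = [4.0, 0.0, 0.0, 0.0]       # indexed by stretch order
excess = target - natural                  # > 0 stretches, < 0 shrinks
order = next((i for i in reversed(range(4)) if total_stretch[i]), 0)
glue_set = excess / total_stretch[order]   # the ratio _set_glue() stores
print(glue_set)            # 2.5
print(1.5 * glue_set)      # 3.75 -> extra width for a glue with stretch 1.5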
- - It has *width*, *depth*, and *height* fields just as in an `Hlist`. - However, if any of these dimensions is inf, the actual value will be - determined by running the rule up to the boundary of the innermost - enclosing box. This is called a "running dimension". The width is never - running in an `Hlist`; the height and depth are never running in a `Vlist`. - """ - - def __init__(self, width, height, depth, state): - super().__init__(width, height, depth) - self.fontset = state.fontset - - def render(self, output, x, y, w, h): - self.fontset.render_rect_filled(output, x, y, x + w, y + h) - - -class Hrule(Rule): - """Convenience class to create a horizontal rule.""" - - def __init__(self, state, thickness=None): - if thickness is None: - thickness = state.get_current_underline_thickness() - height = depth = thickness * 0.5 - super().__init__(np.inf, height, depth, state) - - -class Vrule(Rule): - """Convenience class to create a vertical rule.""" - - def __init__(self, state): - thickness = state.get_current_underline_thickness() - super().__init__(thickness, np.inf, np.inf, state) - - -_GlueSpec = namedtuple( - "_GlueSpec", "width stretch stretch_order shrink shrink_order") -_GlueSpec._named = { - 'fil': _GlueSpec(0., 1., 1, 0., 0), - 'fill': _GlueSpec(0., 1., 2, 0., 0), - 'filll': _GlueSpec(0., 1., 3, 0., 0), - 'neg_fil': _GlueSpec(0., 0., 0, 1., 1), - 'neg_fill': _GlueSpec(0., 0., 0, 1., 2), - 'neg_filll': _GlueSpec(0., 0., 0, 1., 3), - 'empty': _GlueSpec(0., 0., 0, 0., 0), - 'ss': _GlueSpec(0., 1., 1, -1., 1), -} - - -class Glue(Node): - """ - Most of the information in this object is stored in the underlying - ``_GlueSpec`` class, which is shared between multiple glue objects. - (This is a memory optimization which probably doesn't matter anymore, but - it's easier to stick to what TeX does.) - """ - - def __init__(self, glue_type): - super().__init__() - if isinstance(glue_type, str): - glue_spec = _GlueSpec._named[glue_type] - elif isinstance(glue_type, _GlueSpec): - glue_spec = glue_type - else: - raise ValueError("glue_type must be a glue spec name or instance") - self.glue_spec = glue_spec - - def shrink(self): - super().shrink() - if self.size < NUM_SIZE_LEVELS: - g = self.glue_spec - self.glue_spec = g._replace(width=g.width * SHRINK_FACTOR) - - -class HCentered(Hlist): - """ - A convenience class to create an `Hlist` whose contents are - centered within its enclosing box. - """ - - def __init__(self, elements): - super().__init__([Glue('ss'), *elements, Glue('ss')], do_kern=False) - - -class VCentered(Vlist): - """ - A convenience class to create a `Vlist` whose contents are - centered within its enclosing box. - """ - - def __init__(self, elements): - super().__init__([Glue('ss'), *elements, Glue('ss')]) - - -class Kern(Node): - """ - A `Kern` node has a width field to specify a (normally - negative) amount of spacing. This spacing correction appears in - horizontal lists between letters like A and V when the font - designer said that it looks better to move them closer together or - further apart. A kern node can also appear in a vertical list, - when its *width* denotes additional spacing in the vertical - direction. 
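# Why HCentered above centers its content -- a numeric sketch of the 'ss'
# spec (illustrative only): 'ss' stretches with weight 1 at order 1, so the
# two glue nodes flanking the content absorb the leftover width in equal
# halves.
box_width, content_width = 100.0, 60.0
leftover = box_width - content_width       # 40.0 to distribute
total_stretch_order1 = 2 * 1.0             # one Glue('ss') on each side
glue_set = leftover / total_stretch_order1
print(1.0 * glue_set)                      # 20.0 on each side -> centered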
- """ - - height = 0 - depth = 0 - - def __init__(self, width): - super().__init__() - self.width = width - - def __repr__(self): - return "k%.02f" % self.width - - def shrink(self): - super().shrink() - if self.size < NUM_SIZE_LEVELS: - self.width *= SHRINK_FACTOR - - -class AutoHeightChar(Hlist): - """ - A character as close to the given height and depth as possible. - - When using a font with multiple height versions of some characters (such as - the BaKoMa fonts), the correct glyph will be selected, otherwise this will - always just return a scaled version of the glyph. - """ - - def __init__(self, c, height, depth, state, always=False, factor=None): - alternatives = state.fontset.get_sized_alternatives_for_symbol( - state.font, c) - - xHeight = state.fontset.get_xheight( - state.font, state.fontsize, state.dpi) - - state = state.copy() - target_total = height + depth - for fontname, sym in alternatives: - state.font = fontname - char = Char(sym, state) - # Ensure that size 0 is chosen when the text is regular sized but - # with descender glyphs by subtracting 0.2 * xHeight - if char.height + char.depth >= target_total - 0.2 * xHeight: - break - - shift = 0 - if state.font != 0 or len(alternatives) == 1: - if factor is None: - factor = target_total / (char.height + char.depth) - state.fontsize *= factor - char = Char(sym, state) - - shift = (depth - char.depth) - - super().__init__([char]) - self.shift_amount = shift - - -class AutoWidthChar(Hlist): - """ - A character as close to the given width as possible. - - When using a font with multiple width versions of some characters (such as - the BaKoMa fonts), the correct glyph will be selected, otherwise this will - always just return a scaled version of the glyph. - """ - - def __init__(self, c, width, state, always=False, char_class=Char): - alternatives = state.fontset.get_sized_alternatives_for_symbol( - state.font, c) - - state = state.copy() - for fontname, sym in alternatives: - state.font = fontname - char = char_class(sym, state) - if char.width >= width: - break - - factor = width / char.width - state.fontsize *= factor - char = char_class(sym, state) - - super().__init__([char]) - self.width = char.width - - -def ship(box, xy=(0, 0)): - """ - Ship out *box* at offset *xy*, converting it to an `Output`. - - Since boxes can be inside of boxes inside of boxes, the main work of `ship` - is done by two mutually recursive routines, `hlist_out` and `vlist_out`, - which traverse the `Hlist` nodes and `Vlist` nodes inside of horizontal - and vertical boxes. The global variables used in TeX to store state as it - processes have become local variables here. - """ - ox, oy = xy - cur_v = 0. - cur_h = 0. - off_h = ox - off_v = oy + box.height - output = Output(box) - - def clamp(value): - return -1e9 if value < -1e9 else +1e9 if value > +1e9 else value - - def hlist_out(box): - nonlocal cur_v, cur_h, off_h, off_v - - cur_g = 0 - cur_glue = 0. 
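# Note on the bookkeeping below: cur_glue accumulates the raw stretch (or
# shrink) of the glue encountered so far at the box's winning glue order,
# and cur_g is that running total scaled by the box's glue_set ratio,
# clamped and rounded.  Rounding the running total instead of each glue
# separately keeps rounding error from accumulating across many glue nodes,
# as in TeX's ship-out routine.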
- glue_order = box.glue_order - glue_sign = box.glue_sign - base_line = cur_v - left_edge = cur_h - - for p in box.children: - if isinstance(p, Char): - p.render(output, cur_h + off_h, cur_v + off_v) - cur_h += p.width - elif isinstance(p, Kern): - cur_h += p.width - elif isinstance(p, List): - # node623 - if len(p.children) == 0: - cur_h += p.width - else: - edge = cur_h - cur_v = base_line + p.shift_amount - if isinstance(p, Hlist): - hlist_out(p) - else: - # p.vpack(box.height + box.depth, 'exactly') - vlist_out(p) - cur_h = edge + p.width - cur_v = base_line - elif isinstance(p, Box): - # node624 - rule_height = p.height - rule_depth = p.depth - rule_width = p.width - if np.isinf(rule_height): - rule_height = box.height - if np.isinf(rule_depth): - rule_depth = box.depth - if rule_height > 0 and rule_width > 0: - cur_v = base_line + rule_depth - p.render(output, - cur_h + off_h, cur_v + off_v, - rule_width, rule_height) - cur_v = base_line - cur_h += rule_width - elif isinstance(p, Glue): - # node625 - glue_spec = p.glue_spec - rule_width = glue_spec.width - cur_g - if glue_sign != 0: # normal - if glue_sign == 1: # stretching - if glue_spec.stretch_order == glue_order: - cur_glue += glue_spec.stretch - cur_g = round(clamp(box.glue_set * cur_glue)) - elif glue_spec.shrink_order == glue_order: - cur_glue += glue_spec.shrink - cur_g = round(clamp(box.glue_set * cur_glue)) - rule_width += cur_g - cur_h += rule_width - - def vlist_out(box): - nonlocal cur_v, cur_h, off_h, off_v - - cur_g = 0 - cur_glue = 0. - glue_order = box.glue_order - glue_sign = box.glue_sign - left_edge = cur_h - cur_v -= box.height - top_edge = cur_v - - for p in box.children: - if isinstance(p, Kern): - cur_v += p.width - elif isinstance(p, List): - if len(p.children) == 0: - cur_v += p.height + p.depth - else: - cur_v += p.height - cur_h = left_edge + p.shift_amount - save_v = cur_v - p.width = box.width - if isinstance(p, Hlist): - hlist_out(p) - else: - vlist_out(p) - cur_v = save_v + p.depth - cur_h = left_edge - elif isinstance(p, Box): - rule_height = p.height - rule_depth = p.depth - rule_width = p.width - if np.isinf(rule_width): - rule_width = box.width - rule_height += rule_depth - if rule_height > 0 and rule_depth > 0: - cur_v += rule_height - p.render(output, - cur_h + off_h, cur_v + off_v, - rule_width, rule_height) - elif isinstance(p, Glue): - glue_spec = p.glue_spec - rule_height = glue_spec.width - cur_g - if glue_sign != 0: # normal - if glue_sign == 1: # stretching - if glue_spec.stretch_order == glue_order: - cur_glue += glue_spec.stretch - cur_g = round(clamp(box.glue_set * cur_glue)) - elif glue_spec.shrink_order == glue_order: # shrinking - cur_glue += glue_spec.shrink - cur_g = round(clamp(box.glue_set * cur_glue)) - rule_height += cur_g - cur_v += rule_height - elif isinstance(p, Char): - raise RuntimeError( - "Internal mathtext error: Char node found in vlist") - - hlist_out(box) - return output - - -############################################################################## -# PARSER - - -def Error(msg): - """Helper class to raise parser errors.""" - def raise_error(s, loc, toks): - raise ParseFatalException(s, loc, msg) - - return Empty().setParseAction(raise_error) - - -class ParserState: - """ - Parser state. - - States are pushed and popped from a stack as necessary, and the "current" - state is always at the top of the stack. - - Upon entering and leaving a group { } or math/non-math, the stack is pushed - and popped accordingly. 
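# End-to-end, the machinery above is driven through matplotlib's public
# mathtext helpers; a minimal sketch (the exact return type of parse()
# varies between matplotlib versions, so treat the details as approximate):
from matplotlib.mathtext import MathTextParser, math_to_image

math_to_image(r"$\sum_{i=0}^{n} x_i^2$", "sum.png", dpi=120)  # parse -> ship -> raster
parser = MathTextParser("path")     # vector flavor of the same pipeline
result = parser.parse(r"$\frac{a}{b}$", dpi=72)
print(result)                       # overall width/height/depth plus glyph data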
- """ - - def __init__(self, fontset, font, font_class, fontsize, dpi): - self.fontset = fontset - self._font = font - self.font_class = font_class - self.fontsize = fontsize - self.dpi = dpi - - def copy(self): - return copy.copy(self) - - @property - def font(self): - return self._font - - @font.setter - def font(self, name): - if name in ('rm', 'it', 'bf'): - self.font_class = name - self._font = name - - def get_current_underline_thickness(self): - """Return the underline thickness for this state.""" - return self.fontset.get_underline_thickness( - self.font, self.fontsize, self.dpi) - - -def cmd(expr, args): - r""" - Helper to define TeX commands. - - ``cmd("\cmd", args)`` is equivalent to - ``"\cmd" - (args | Error("Expected \cmd{arg}{...}"))`` where the names in - the error message are taken from element names in *args*. If *expr* - already includes arguments (e.g. "\cmd{arg}{...}"), then they are stripped - when constructing the parse element, but kept (and *expr* is used as is) in - the error message. - """ - - def names(elt): - if isinstance(elt, ParseExpression): - for expr in elt.exprs: - yield from names(expr) - elif elt.resultsName: - yield elt.resultsName - - csname = expr.split("{", 1)[0] - err = (csname + "".join("{%s}" % name for name in names(args)) - if expr == csname else expr) - return csname - (args | Error(f"Expected {err}")) - - -class Parser: - """ - A pyparsing-based parser for strings containing math expressions. - - Raw text may also appear outside of pairs of ``$``. - - The grammar is based directly on that in TeX, though it cuts a few corners. - """ - - class _MathStyle(enum.Enum): - DISPLAYSTYLE = 0 - TEXTSTYLE = 1 - SCRIPTSTYLE = 2 - SCRIPTSCRIPTSTYLE = 3 - - _binary_operators = set( - '+ * - \N{MINUS SIGN}' - r''' - \pm \sqcap \rhd - \mp \sqcup \unlhd - \times \vee \unrhd - \div \wedge \oplus - \ast \setminus \ominus - \star \wr \otimes - \circ \diamond \oslash - \bullet \bigtriangleup \odot - \cdot \bigtriangledown \bigcirc - \cap \triangleleft \dagger - \cup \triangleright \ddagger - \uplus \lhd \amalg - \dotplus \dotminus'''.split()) - - _relation_symbols = set(r''' - = < > : - \leq \geq \equiv \models - \prec \succ \sim \perp - \preceq \succeq \simeq \mid - \ll \gg \asymp \parallel - \subset \supset \approx \bowtie - \subseteq \supseteq \cong \Join - \sqsubset \sqsupset \neq \smile - \sqsubseteq \sqsupseteq \doteq \frown - \in \ni \propto \vdash - \dashv \dots \doteqdot'''.split()) - - _arrow_symbols = set(r''' - \leftarrow \longleftarrow \uparrow - \Leftarrow \Longleftarrow \Uparrow - \rightarrow \longrightarrow \downarrow - \Rightarrow \Longrightarrow \Downarrow - \leftrightarrow \longleftrightarrow \updownarrow - \Leftrightarrow \Longleftrightarrow \Updownarrow - \mapsto \longmapsto \nearrow - \hookleftarrow \hookrightarrow \searrow - \leftharpoonup \rightharpoonup \swarrow - \leftharpoondown \rightharpoondown \nwarrow - \rightleftharpoons \leadsto'''.split()) - - _spaced_symbols = _binary_operators | _relation_symbols | _arrow_symbols - - _punctuation_symbols = set(r', ; . ! 
\ldotp \cdotp'.split()) - - _overunder_symbols = set(r''' - \sum \prod \coprod \bigcap \bigcup \bigsqcup \bigvee - \bigwedge \bigodot \bigotimes \bigoplus \biguplus - '''.split()) - - _overunder_functions = set("lim liminf limsup sup max min".split()) - - _dropsub_symbols = set(r'''\int \oint'''.split()) - - _fontnames = set("rm cal it tt sf bf default bb frak scr regular".split()) - - _function_names = set(""" - arccos csc ker min arcsin deg lg Pr arctan det lim sec arg dim - liminf sin cos exp limsup sinh cosh gcd ln sup cot hom log tan - coth inf max tanh""".split()) - - _ambi_delims = set(r""" - | \| / \backslash \uparrow \downarrow \updownarrow \Uparrow - \Downarrow \Updownarrow . \vert \Vert""".split()) - _left_delims = set(r"( [ \{ < \lfloor \langle \lceil".split()) - _right_delims = set(r") ] \} > \rfloor \rangle \rceil".split()) - _delims = _left_delims | _right_delims | _ambi_delims - - def __init__(self): - p = types.SimpleNamespace() - - def set_names_and_parse_actions(): - for key, val in vars(p).items(): - if not key.startswith('_'): - # Set names on everything -- very useful for debugging - val.setName(key) - # Set actions - if hasattr(self, key): - val.setParseAction(getattr(self, key)) - - # Root definitions. - - # In TeX parlance, a csname is a control sequence name (a "\foo"). - def csnames(group, names): - ends_with_alpha = [] - ends_with_nonalpha = [] - for name in names: - if name[-1].isalpha(): - ends_with_alpha.append(name) - else: - ends_with_nonalpha.append(name) - return Regex(r"\\(?P<{}>(?:{})(?![A-Za-z]){})".format( - group, - "|".join(map(re.escape, ends_with_alpha)), - "".join(f"|{s}" for s in map(re.escape, ends_with_nonalpha)), - )) - - p.float_literal = Regex(r"[-+]?([0-9]+\.?[0-9]*|\.[0-9]+)") - p.space = oneOf(self._space_widths)("space") - - p.style_literal = oneOf( - [str(e.value) for e in self._MathStyle])("style_literal") - - p.symbol = Regex( - r"[a-zA-Z0-9 +\-*/<>=:,.;!\?&'@()\[\]|\U00000080-\U0001ffff]" - r"|\\[%${}\[\]_|]" - + r"|\\(?:{})(?![A-Za-z])".format( - "|".join(map(re.escape, tex2uni))) - )("sym").leaveWhitespace() - p.unknown_symbol = Regex(r"\\[A-Za-z]*")("name") - - p.font = csnames("font", self._fontnames) - p.start_group = ( - Optional(r"\math" + oneOf(self._fontnames)("font")) + "{") - p.end_group = Literal("}") - - p.delim = oneOf(self._delims) - - set_names_and_parse_actions() # for root definitions. - - # Mutually recursive definitions. (Minimizing the number of Forward - # elements is important for speed.) - p.accent = Forward() - p.auto_delim = Forward() - p.binom = Forward() - p.customspace = Forward() - p.frac = Forward() - p.dfrac = Forward() - p.function = Forward() - p.genfrac = Forward() - p.group = Forward() - p.operatorname = Forward() - p.overline = Forward() - p.overset = Forward() - p.placeable = Forward() - p.required_group = Forward() - p.simple = Forward() - p.optional_group = Forward() - p.sqrt = Forward() - p.subsuper = Forward() - p.token = Forward() - p.underset = Forward() - - set_names_and_parse_actions() # for mutually recursive definitions. 
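# The csnames() trick above in isolation -- a self-contained sketch: a
# single Regex with a negative lookahead recognizes a whole family of
# control sequences, and the lookahead forces backtracking so r"\sinh" is
# never tokenized as r"\sin" followed by a stray "h".
import re
from pyparsing import Regex

names = ["sin", "sinh", "cos", ","]
alpha = [n for n in names if n[-1].isalpha()]
other = [n for n in names if not n[-1].isalpha()]
cs = Regex(r"\\(?P<name>(?:{})(?![A-Za-z]){})".format(
    "|".join(map(re.escape, alpha)),
    "".join("|" + re.escape(s) for s in other)))
print(cs.parseString(r"\sinh")["name"])   # -> 'sinh'
print(cs.parseString(r"\,")["name"])      # -> ','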
- - p.customspace <<= cmd(r"\hspace", "{" + p.float_literal("space") + "}") - - p.accent <<= ( - csnames("accent", [*self._accent_map, *self._wide_accents]) - - p.placeable("sym")) - - p.function <<= csnames("name", self._function_names) - p.operatorname <<= cmd( - r"\operatorname", - "{" + ZeroOrMore(p.simple | p.unknown_symbol)("name") + "}") - - p.group <<= p.start_group + ZeroOrMore(p.token)("group") + p.end_group - - p.optional_group <<= "{" + ZeroOrMore(p.token)("group") + "}" - p.required_group <<= "{" + OneOrMore(p.token)("group") + "}" - - p.frac <<= cmd( - r"\frac", p.required_group("num") + p.required_group("den")) - p.dfrac <<= cmd( - r"\dfrac", p.required_group("num") + p.required_group("den")) - p.binom <<= cmd( - r"\binom", p.required_group("num") + p.required_group("den")) - - p.genfrac <<= cmd( - r"\genfrac", - "{" + Optional(p.delim)("ldelim") + "}" - + "{" + Optional(p.delim)("rdelim") + "}" - + "{" + p.float_literal("rulesize") + "}" - + "{" + Optional(p.style_literal)("style") + "}" - + p.required_group("num") - + p.required_group("den")) - - p.sqrt <<= cmd( - r"\sqrt{value}", - Optional("[" + OneOrMore(NotAny("]") + p.token)("root") + "]") - + p.required_group("value")) - - p.overline <<= cmd(r"\overline", p.required_group("body")) - - p.overset <<= cmd( - r"\overset", - p.optional_group("annotation") + p.optional_group("body")) - p.underset <<= cmd( - r"\underset", - p.optional_group("annotation") + p.optional_group("body")) - - p.placeable <<= ( - p.accent # Must be before symbol as all accents are symbols - | p.symbol # Must be second to catch all named symbols and single - # chars not in a group - | p.function - | p.operatorname - | p.group - | p.frac - | p.dfrac - | p.binom - | p.genfrac - | p.overset - | p.underset - | p.sqrt - | p.overline - ) - - p.simple <<= ( - p.space - | p.customspace - | p.font - | p.subsuper - ) - - p.subsuper <<= ( - (Optional(p.placeable)("nucleus") - + OneOrMore(oneOf(["_", "^"]) - p.placeable)("subsuper") - + Regex("'*")("apostrophes")) - | Regex("'+")("apostrophes") - | (p.placeable("nucleus") + Regex("'*")("apostrophes")) - ) - - p.token <<= ( - p.simple - | p.auto_delim - | p.unknown_symbol # Must be last - ) - - p.auto_delim <<= ( - r"\left" - (p.delim("left") | Error("Expected a delimiter")) - + ZeroOrMore(p.simple | p.auto_delim)("mid") - + r"\right" - (p.delim("right") | Error("Expected a delimiter")) - ) - - # Leaf definitions. - p.math = OneOrMore(p.token) - p.math_string = QuotedString('$', '\\', unquoteResults=False) - p.non_math = Regex(r"(?:(?:\\[$])|[^$])*").leaveWhitespace() - p.main = ( - p.non_math + ZeroOrMore(p.math_string + p.non_math) + StringEnd() - ) - set_names_and_parse_actions() # for leaf definitions. - - self._expression = p.main - self._math_expression = p.math - - # To add space to nucleus operators after sub/superscripts - self._in_subscript_or_superscript = False - - def parse(self, s, fonts_object, fontsize, dpi): - """ - Parse expression *s* using the given *fonts_object* for - output, at the given *fontsize* and *dpi*. - - Returns the parse tree of `Node` instances. - """ - self._state_stack = [ - ParserState(fonts_object, 'default', 'rm', fontsize, dpi)] - self._em_width_cache = {} - try: - result = self._expression.parseString(s) - except ParseBaseException as err: - # explain becomes a plain method on pyparsing 3 (err.explain(0)). 
- raise ValueError("\n" + ParseException.explain(err, 0)) from None - self._state_stack = None - self._in_subscript_or_superscript = False - # prevent operator spacing from leaking into a new expression - self._em_width_cache = {} - self._expression.resetCache() - return result[0] - - def get_state(self): - """Get the current `State` of the parser.""" - return self._state_stack[-1] - - def pop_state(self): - """Pop a `State` off of the stack.""" - self._state_stack.pop() - - def push_state(self): - """Push a new `State` onto the stack, copying the current state.""" - self._state_stack.append(self.get_state().copy()) - - def main(self, s, loc, toks): - return [Hlist(toks)] - - def math_string(self, s, loc, toks): - return self._math_expression.parseString(toks[0][1:-1]) - - def math(self, s, loc, toks): - hlist = Hlist(toks) - self.pop_state() - return [hlist] - - def non_math(self, s, loc, toks): - s = toks[0].replace(r'\$', '$') - symbols = [Char(c, self.get_state()) for c in s] - hlist = Hlist(symbols) - # We're going into math now, so set font to 'it' - self.push_state() - self.get_state().font = mpl.rcParams['mathtext.default'] - return [hlist] - - float_literal = staticmethod(pyparsing_common.convertToFloat) - - def _make_space(self, percentage): - # In TeX, an em (the unit usually used to measure horizontal lengths) - # is not the width of the character 'm'; it is the same in different - # font styles (e.g. roman or italic). Mathtext, however, uses 'm' in - # the italic style so that horizontal spaces don't depend on the - # current font style. - state = self.get_state() - key = (state.font, state.fontsize, state.dpi) - width = self._em_width_cache.get(key) - if width is None: - metrics = state.fontset.get_metrics( - 'it', mpl.rcParams['mathtext.default'], 'm', - state.fontsize, state.dpi) - width = metrics.advance - self._em_width_cache[key] = width - return Kern(width * percentage) - - _space_widths = { - r'\,': 0.16667, # 3/18 em = 3 mu - r'\thinspace': 0.16667, # 3/18 em = 3 mu - r'\/': 0.16667, # 3/18 em = 3 mu - r'\>': 0.22222, # 4/18 em = 4 mu - r'\:': 0.22222, # 4/18 em = 4 mu - r'\;': 0.27778, # 5/18 em = 5 mu - r'\ ': 0.33333, # 6/18 em = 6 mu - r'~': 0.33333, # 6/18 em = 6 mu, nonbreakable - r'\enspace': 0.5, # 9/18 em = 9 mu - r'\quad': 1, # 1 em = 18 mu - r'\qquad': 2, # 2 em = 36 mu - r'\!': -0.16667, # -3/18 em = -3 mu - } - - def space(self, s, loc, toks): - num = self._space_widths[toks["space"]] - box = self._make_space(num) - return [box] - - def customspace(self, s, loc, toks): - return [self._make_space(toks["space"])] - - def symbol(self, s, loc, toks): - c = toks["sym"] - if c == "-": - # "U+2212 minus sign is the preferred representation of the unary - # and binary minus sign rather than the ASCII-derived U+002D - # hyphen-minus, because minus sign is unambiguous and because it - # is rendered with a more desirable length, usually longer than a - # hyphen." (https://www.unicode.org/reports/tr25/) - c = "\N{MINUS SIGN}" - try: - char = Char(c, self.get_state()) - except ValueError as err: - raise ParseFatalException(s, loc, - "Unknown symbol: %s" % c) from err - - if c in self._spaced_symbols: - # iterate until we find previous character, needed for cases - # such as ${ -2}$, $ -2$, or $ -2$. 
- prev_char = next((c for c in s[:loc][::-1] if c != ' '), '') - # Binary operators at start of string should not be spaced - if (c in self._binary_operators and - (len(s[:loc].split()) == 0 or prev_char == '{' or - prev_char in self._left_delims)): - return [char] - else: - return [Hlist([self._make_space(0.2), - char, - self._make_space(0.2)], - do_kern=True)] - elif c in self._punctuation_symbols: - prev_char = next((c for c in s[:loc][::-1] if c != ' '), '') - next_char = next((c for c in s[loc + 1:] if c != ' '), '') - - # Do not space commas between brackets - if c == ',': - if prev_char == '{' and next_char == '}': - return [char] - - # Do not space dots as decimal separators - if c == '.' and prev_char.isdigit() and next_char.isdigit(): - return [char] - else: - return [Hlist([char, self._make_space(0.2)], do_kern=True)] - return [char] - - def unknown_symbol(self, s, loc, toks): - raise ParseFatalException(s, loc, f"Unknown symbol: {toks['name']}") - - _accent_map = { - r'hat': r'\circumflexaccent', - r'breve': r'\combiningbreve', - r'bar': r'\combiningoverline', - r'grave': r'\combininggraveaccent', - r'acute': r'\combiningacuteaccent', - r'tilde': r'\combiningtilde', - r'dot': r'\combiningdotabove', - r'ddot': r'\combiningdiaeresis', - r'dddot': r'\combiningthreedotsabove', - r'ddddot': r'\combiningfourdotsabove', - r'vec': r'\combiningrightarrowabove', - r'"': r'\combiningdiaeresis', - r"`": r'\combininggraveaccent', - r"'": r'\combiningacuteaccent', - r'~': r'\combiningtilde', - r'.': r'\combiningdotabove', - r'^': r'\circumflexaccent', - r'overrightarrow': r'\rightarrow', - r'overleftarrow': r'\leftarrow', - r'mathring': r'\circ', - } - - _wide_accents = set(r"widehat widetilde widebar".split()) - - def accent(self, s, loc, toks): - state = self.get_state() - thickness = state.get_current_underline_thickness() - accent = toks["accent"] - sym = toks["sym"] - if accent in self._wide_accents: - accent_box = AutoWidthChar( - '\\' + accent, sym.width, state, char_class=Accent) - else: - accent_box = Accent(self._accent_map[accent], state) - if accent == 'mathring': - accent_box.shrink() - accent_box.shrink() - centered = HCentered([Hbox(sym.width / 4.0), accent_box]) - centered.hpack(sym.width, 'exactly') - return Vlist([ - centered, - Vbox(0., thickness * 2.0), - Hlist([sym]) - ]) - - def function(self, s, loc, toks): - hlist = self.operatorname(s, loc, toks) - hlist.function_name = toks["name"] - return hlist - - def operatorname(self, s, loc, toks): - self.push_state() - state = self.get_state() - state.font = 'rm' - hlist_list = [] - # Change the font of Chars, but leave Kerns alone - name = toks["name"] - for c in name: - if isinstance(c, Char): - c.font = 'rm' - c._update_metrics() - hlist_list.append(c) - elif isinstance(c, str): - hlist_list.append(Char(c, state)) - else: - hlist_list.append(c) - next_char_loc = loc + len(name) + 1 - if isinstance(name, ParseResults): - next_char_loc += len('operatorname{}') - next_char = next((c for c in s[next_char_loc:] if c != ' '), '') - delimiters = self._delims | {'^', '_'} - if (next_char not in delimiters and - name not in self._overunder_functions): - # Add thin space except when followed by parenthesis, bracket, etc. 
- hlist_list += [self._make_space(self._space_widths[r'\,'])] - self.pop_state() - # if followed by a super/subscript, set flag to true - # This flag tells subsuper to add space after this operator - if next_char in {'^', '_'}: - self._in_subscript_or_superscript = True - else: - self._in_subscript_or_superscript = False - - return Hlist(hlist_list) - - def start_group(self, s, loc, toks): - self.push_state() - # Deal with LaTeX-style font tokens - if toks.get("font"): - self.get_state().font = toks.get("font") - return [] - - def group(self, s, loc, toks): - grp = Hlist(toks.get("group", [])) - return [grp] - - def required_group(self, s, loc, toks): - return Hlist(toks.get("group", [])) - - optional_group = required_group - - def end_group(self, s, loc, toks): - self.pop_state() - return [] - - def font(self, s, loc, toks): - self.get_state().font = toks["font"] - return [] - - def is_overunder(self, nucleus): - if isinstance(nucleus, Char): - return nucleus.c in self._overunder_symbols - elif isinstance(nucleus, Hlist) and hasattr(nucleus, 'function_name'): - return nucleus.function_name in self._overunder_functions - return False - - def is_dropsub(self, nucleus): - if isinstance(nucleus, Char): - return nucleus.c in self._dropsub_symbols - return False - - def is_slanted(self, nucleus): - if isinstance(nucleus, Char): - return nucleus.is_slanted() - return False - - def is_between_brackets(self, s, loc): - return False - - def subsuper(self, s, loc, toks): - nucleus = toks.get("nucleus", Hbox(0)) - subsuper = toks.get("subsuper", []) - napostrophes = len(toks.get("apostrophes", [])) - - if not subsuper and not napostrophes: - return nucleus - - sub = super = None - while subsuper: - op, arg, *subsuper = subsuper - if op == '_': - if sub is not None: - raise ParseFatalException("Double subscript") - sub = arg - else: - if super is not None: - raise ParseFatalException("Double superscript") - super = arg - - state = self.get_state() - rule_thickness = state.fontset.get_underline_thickness( - state.font, state.fontsize, state.dpi) - xHeight = state.fontset.get_xheight( - state.font, state.fontsize, state.dpi) - - if napostrophes: - if super is None: - super = Hlist([]) - for i in range(napostrophes): - super.children.extend(self.symbol(s, loc, {"sym": "\\prime"})) - # kern() and hpack() needed to get the metrics right after - # extending - super.kern() - super.hpack() - - # Handle over/under symbols, such as sum or prod - if self.is_overunder(nucleus): - vlist = [] - shift = 0. - width = nucleus.width - if super is not None: - super.shrink() - width = max(width, super.width) - if sub is not None: - sub.shrink() - width = max(width, sub.width) - - vgap = rule_thickness * 3.0 - if super is not None: - hlist = HCentered([super]) - hlist.hpack(width, 'exactly') - vlist.extend([hlist, Vbox(0, vgap)]) - hlist = HCentered([nucleus]) - hlist.hpack(width, 'exactly') - vlist.append(hlist) - if sub is not None: - hlist = HCentered([sub]) - hlist.hpack(width, 'exactly') - vlist.extend([Vbox(0, vgap), hlist]) - shift = hlist.height + vgap + nucleus.depth - vlist = Vlist(vlist) - vlist.shift_amount = shift - result = Hlist([vlist]) - return [result] - - # We remove kerning on the last character for consistency (otherwise - # it will compute kerning based on non-shrunk characters and may put - # them too close together when superscripted) - # We change the width of the last character to match the advance to - # consider some fonts with weird metrics: e.g. 
stix's f has a width of - # 7.75 and a kerning of -4.0 for an advance of 3.72, and we want to put - # the superscript at the advance - last_char = nucleus - if isinstance(nucleus, Hlist): - new_children = nucleus.children - if len(new_children): - # remove last kern - if (isinstance(new_children[-1], Kern) and - hasattr(new_children[-2], '_metrics')): - new_children = new_children[:-1] - last_char = new_children[-1] - if hasattr(last_char, '_metrics'): - last_char.width = last_char._metrics.advance - # create new Hlist without kerning - nucleus = Hlist(new_children, do_kern=False) - else: - if isinstance(nucleus, Char): - last_char.width = last_char._metrics.advance - nucleus = Hlist([nucleus]) - - # Handle regular sub/superscripts - constants = _get_font_constant_set(state) - lc_height = last_char.height - lc_baseline = 0 - if self.is_dropsub(last_char): - lc_baseline = last_char.depth - - # Compute kerning for sub and super - superkern = constants.delta * xHeight - subkern = constants.delta * xHeight - if self.is_slanted(last_char): - superkern += constants.delta * xHeight - superkern += (constants.delta_slanted * - (lc_height - xHeight * 2. / 3.)) - if self.is_dropsub(last_char): - subkern = (3 * constants.delta - - constants.delta_integral) * lc_height - superkern = (3 * constants.delta + - constants.delta_integral) * lc_height - else: - subkern = 0 - - if super is None: - # node757 - x = Hlist([Kern(subkern), sub]) - x.shrink() - if self.is_dropsub(last_char): - shift_down = lc_baseline + constants.subdrop * xHeight - else: - shift_down = constants.sub1 * xHeight - x.shift_amount = shift_down - else: - x = Hlist([Kern(superkern), super]) - x.shrink() - if self.is_dropsub(last_char): - shift_up = lc_height - constants.subdrop * xHeight - else: - shift_up = constants.sup1 * xHeight - if sub is None: - x.shift_amount = -shift_up - else: # Both sub and superscript - y = Hlist([Kern(subkern), sub]) - y.shrink() - if self.is_dropsub(last_char): - shift_down = lc_baseline + constants.subdrop * xHeight - else: - shift_down = constants.sub2 * xHeight - # If sub and superscript collide, move super up - clr = (2.0 * rule_thickness - - ((shift_up - x.depth) - (y.height - shift_down))) - if clr > 0.: - shift_up += clr - x = Vlist([ - x, - Kern((shift_up - x.depth) - (y.height - shift_down)), - y]) - x.shift_amount = shift_down - - if not self.is_dropsub(last_char): - x.width += constants.script_space * xHeight - - # Do we need to add a space after the nucleus? 
- # To find out, check the flag set by operatorname - spaced_nucleus = [nucleus, x] - if self._in_subscript_or_superscript: - spaced_nucleus += [self._make_space(self._space_widths[r'\,'])] - self._in_subscript_or_superscript = False - - result = Hlist(spaced_nucleus) - return [result] - - def _genfrac(self, ldelim, rdelim, rule, style, num, den): - state = self.get_state() - thickness = state.get_current_underline_thickness() - - for _ in range(style.value): - num.shrink() - den.shrink() - cnum = HCentered([num]) - cden = HCentered([den]) - width = max(num.width, den.width) - cnum.hpack(width, 'exactly') - cden.hpack(width, 'exactly') - vlist = Vlist([cnum, # numerator - Vbox(0, thickness * 2.0), # space - Hrule(state, rule), # rule - Vbox(0, thickness * 2.0), # space - cden # denominator - ]) - - # Shift so the fraction line sits in the middle of the - # equals sign - metrics = state.fontset.get_metrics( - state.font, mpl.rcParams['mathtext.default'], - '=', state.fontsize, state.dpi) - shift = (cden.height - - ((metrics.ymax + metrics.ymin) / 2 - - thickness * 3.0)) - vlist.shift_amount = shift - - result = [Hlist([vlist, Hbox(thickness * 2.)])] - if ldelim or rdelim: - if ldelim == '': - ldelim = '.' - if rdelim == '': - rdelim = '.' - return self._auto_sized_delimiter(ldelim, result, rdelim) - return result - - def style_literal(self, s, loc, toks): - return self._MathStyle(int(toks["style_literal"])) - - def genfrac(self, s, loc, toks): - return self._genfrac( - toks.get("ldelim", ""), toks.get("rdelim", ""), - toks["rulesize"], toks.get("style", self._MathStyle.TEXTSTYLE), - toks["num"], toks["den"]) - - def frac(self, s, loc, toks): - return self._genfrac( - "", "", self.get_state().get_current_underline_thickness(), - self._MathStyle.TEXTSTYLE, toks["num"], toks["den"]) - - def dfrac(self, s, loc, toks): - return self._genfrac( - "", "", self.get_state().get_current_underline_thickness(), - self._MathStyle.DISPLAYSTYLE, toks["num"], toks["den"]) - - def binom(self, s, loc, toks): - return self._genfrac( - "(", ")", 0, - self._MathStyle.TEXTSTYLE, toks["num"], toks["den"]) - - def _genset(self, s, loc, toks): - annotation = toks["annotation"] - body = toks["body"] - thickness = self.get_state().get_current_underline_thickness() - - annotation.shrink() - cannotation = HCentered([annotation]) - cbody = HCentered([body]) - width = max(cannotation.width, cbody.width) - cannotation.hpack(width, 'exactly') - cbody.hpack(width, 'exactly') - - vgap = thickness * 3 - if s[loc + 1] == "u": # \underset - vlist = Vlist([cbody, # body - Vbox(0, vgap), # space - cannotation # annotation - ]) - # Shift so the body sits in the same vertical position - vlist.shift_amount = cbody.depth + cannotation.height + vgap - else: # \overset - vlist = Vlist([cannotation, # annotation - Vbox(0, vgap), # space - cbody # body - ]) - - # To add horizontal gap between symbols: wrap the Vlist into - # an Hlist and extend it with an Hbox(0, horizontal_gap) - return vlist - - overset = underset = _genset - - def sqrt(self, s, loc, toks): - root = toks.get("root") - body = toks["value"] - state = self.get_state() - thickness = state.get_current_underline_thickness() - - # Determine the height of the body, and add a little extra to - # the height so it doesn't seem cramped - height = body.height - body.shift_amount + thickness * 5.0 - depth = body.depth + body.shift_amount - check = AutoHeightChar(r'\__sqrt__', height, depth, state, always=True) - height = check.height - check.shift_amount - depth = check.depth + 
check.shift_amount - - # Put a little extra space to the left and right of the body - padded_body = Hlist([Hbox(2 * thickness), body, Hbox(2 * thickness)]) - rightside = Vlist([Hrule(state), Glue('fill'), padded_body]) - # Stretch the glue between the hrule and the body - rightside.vpack(height + (state.fontsize * state.dpi) / (100.0 * 12.0), - 'exactly', depth) - - # Add the root and shift it upward so it is above the tick. - # The value of 0.6 is a hard-coded hack ;) - if not root: - root = Box(check.width * 0.5, 0., 0.) - else: - root = Hlist(root) - root.shrink() - root.shrink() - - root_vlist = Vlist([Hlist([root])]) - root_vlist.shift_amount = -height * 0.6 - - hlist = Hlist([root_vlist, # Root - # Negative kerning to put root over tick - Kern(-check.width * 0.5), - check, # Check - rightside]) # Body - return [hlist] - - def overline(self, s, loc, toks): - body = toks["body"] - - state = self.get_state() - thickness = state.get_current_underline_thickness() - - height = body.height - body.shift_amount + thickness * 3.0 - depth = body.depth + body.shift_amount - - # Place overline above body - rightside = Vlist([Hrule(state), Glue('fill'), Hlist([body])]) - - # Stretch the glue between the hrule and the body - rightside.vpack(height + (state.fontsize * state.dpi) / (100.0 * 12.0), - 'exactly', depth) - - hlist = Hlist([rightside]) - return [hlist] - - def _auto_sized_delimiter(self, front, middle, back): - state = self.get_state() - if len(middle): - height = max(x.height for x in middle) - depth = max(x.depth for x in middle) - factor = None - else: - height = 0 - depth = 0 - factor = 1.0 - parts = [] - # \left. and \right. aren't supposed to produce any symbols - if front != '.': - parts.append( - AutoHeightChar(front, height, depth, state, factor=factor)) - parts.extend(middle) - if back != '.': - parts.append( - AutoHeightChar(back, height, depth, state, factor=factor)) - hlist = Hlist(parts) - return hlist - - def auto_delim(self, s, loc, toks): - return self._auto_sized_delimiter( - toks["left"], - # if "mid" in toks ... can be removed when requiring pyparsing 3. 
toks["mid"].asList() if "mid" in toks else [], -            toks["right"]) diff --git a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/helpers/theb.py b/spaces/dcq/freegpt-webui/g4f/Provider/Providers/helpers/theb.py deleted file mode 100644 index 71cfd23ff34768092e4dbe3ff6b719a946dceebb..0000000000000000000000000000000000000000 --- a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/helpers/theb.py +++ /dev/null @@ -1,48 +0,0 @@ -import json -import sys -from re import findall -from curl_cffi import requests - -config = json.loads(sys.argv[1]) -prompt = config['messages'][-1]['content'] - -headers = { -    'authority': 'chatbot.theb.ai', -    'accept': 'application/json, text/plain, */*', -    'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3', -    'content-type': 'application/json', -    'origin': 'https://chatbot.theb.ai', -    'referer': 'https://chatbot.theb.ai/', -    'sec-ch-ua': '"Google Chrome";v="113", "Chromium";v="113", "Not-A.Brand";v="24"', -    'sec-ch-ua-mobile': '?0', -    'sec-ch-ua-platform': '"macOS"', -    'sec-fetch-dest': 'empty', -    'sec-fetch-mode': 'cors', -    'sec-fetch-site': 'same-origin', -    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36', -} - -json_data = { -    'prompt': prompt, -    'options': {} -} - -def format(chunk): -    try: -        completion_chunk = findall(r'content":"(.*)"},"fin', chunk.decode())[0] -        print(completion_chunk, flush=True, end='') - -    except Exception as e: -        print(f'[ERROR] an error occurred, retrying... | [[{chunk.decode()}]]', flush=True) -        return - -while True: -    try: -        response = requests.post('https://chatbot.theb.ai/api/chat-process', -                                headers=headers, json=json_data, content_callback=format, impersonate='chrome110') - -        exit(0) - -    except Exception as e: -        print('[ERROR] an error occurred, retrying... |', e, flush=True) -        continue \ No newline at end of file diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/__init__.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/__init__.py deleted file mode 100644 index 6bc2b58b5feffd53a522594406ab5354f5d57927..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/__init__.py +++ /dev/null @@ -1,134 +0,0 @@ -from dataclasses import dataclass -from typing import List, Optional, Union - -import numpy as np -import PIL -from PIL import Image - -from ...utils import ( -    BaseOutput, -    OptionalDependencyNotAvailable, -    is_flax_available, -    is_k_diffusion_available, -    is_k_diffusion_version, -    is_onnx_available, -    is_torch_available, -    is_transformers_available, -    is_transformers_version, -) - - -@dataclass -class StableDiffusionPipelineOutput(BaseOutput): -    """ -    Output class for Stable Diffusion pipelines. - -    Args: -        images (`List[PIL.Image.Image]` or `np.ndarray`) -            List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width, -            num_channels)`. PIL images or numpy array represent the denoised images of the diffusion pipeline. -        nsfw_content_detected (`List[bool]`) -            List of flags denoting whether the corresponding generated image likely represents "not-safe-for-work" -            (nsfw) content, or `None` if safety checking could not be performed.
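# Typical use of the output class documented above -- a sketch assuming the
# diffusers release this file shipped with, plus network access for the
# model download:
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
out = pipe("an astronaut riding a horse")   # returns StableDiffusionPipelineOutput
out.images[0].save("astronaut.png")         # `images` is a list of PIL images
print(out.nsfw_content_detected)            # e.g. [False]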
- """ - - images: Union[List[PIL.Image.Image], np.ndarray] - nsfw_content_detected: Optional[List[bool]] - - -try: - if not (is_transformers_available() and is_torch_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ...utils.dummy_torch_and_transformers_objects import * # noqa F403 -else: - from .pipeline_cycle_diffusion import CycleDiffusionPipeline - from .pipeline_stable_diffusion import StableDiffusionPipeline - from .pipeline_stable_diffusion_attend_and_excite import StableDiffusionAttendAndExcitePipeline - from .pipeline_stable_diffusion_controlnet import StableDiffusionControlNetPipeline - from .pipeline_stable_diffusion_img2img import StableDiffusionImg2ImgPipeline - from .pipeline_stable_diffusion_inpaint import StableDiffusionInpaintPipeline - from .pipeline_stable_diffusion_inpaint_legacy import StableDiffusionInpaintPipelineLegacy - from .pipeline_stable_diffusion_instruct_pix2pix import StableDiffusionInstructPix2PixPipeline - from .pipeline_stable_diffusion_latent_upscale import StableDiffusionLatentUpscalePipeline - from .pipeline_stable_diffusion_model_editing import StableDiffusionModelEditingPipeline - from .pipeline_stable_diffusion_panorama import StableDiffusionPanoramaPipeline - from .pipeline_stable_diffusion_sag import StableDiffusionSAGPipeline - from .pipeline_stable_diffusion_upscale import StableDiffusionUpscalePipeline - from .pipeline_stable_unclip import StableUnCLIPPipeline - from .pipeline_stable_unclip_img2img import StableUnCLIPImg2ImgPipeline - from .safety_checker import StableDiffusionSafetyChecker - from .stable_unclip_image_normalizer import StableUnCLIPImageNormalizer - -try: - if not (is_transformers_available() and is_torch_available() and is_transformers_version(">=", "4.25.0")): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ...utils.dummy_torch_and_transformers_objects import StableDiffusionImageVariationPipeline -else: - from .pipeline_stable_diffusion_image_variation import StableDiffusionImageVariationPipeline - - -try: - if not (is_transformers_available() and is_torch_available() and is_transformers_version(">=", "4.26.0")): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ...utils.dummy_torch_and_transformers_objects import ( - StableDiffusionDepth2ImgPipeline, - StableDiffusionPix2PixZeroPipeline, - ) -else: - from .pipeline_stable_diffusion_depth2img import StableDiffusionDepth2ImgPipeline - from .pipeline_stable_diffusion_pix2pix_zero import StableDiffusionPix2PixZeroPipeline - - -try: - if not ( - is_torch_available() - and is_transformers_available() - and is_k_diffusion_available() - and is_k_diffusion_version(">=", "0.0.12") - ): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ...utils.dummy_torch_and_transformers_and_k_diffusion_objects import * # noqa F403 -else: - from .pipeline_stable_diffusion_k_diffusion import StableDiffusionKDiffusionPipeline - -try: - if not (is_transformers_available() and is_onnx_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ...utils.dummy_onnx_objects import * # noqa F403 -else: - from .pipeline_onnx_stable_diffusion import OnnxStableDiffusionPipeline, StableDiffusionOnnxPipeline - from .pipeline_onnx_stable_diffusion_img2img import OnnxStableDiffusionImg2ImgPipeline - from .pipeline_onnx_stable_diffusion_inpaint import OnnxStableDiffusionInpaintPipeline - from 
.pipeline_onnx_stable_diffusion_inpaint_legacy import OnnxStableDiffusionInpaintPipelineLegacy - from .pipeline_onnx_stable_diffusion_upscale import OnnxStableDiffusionUpscalePipeline - -if is_transformers_available() and is_flax_available(): - import flax - - @flax.struct.dataclass - class FlaxStableDiffusionPipelineOutput(BaseOutput): - """ - Output class for Stable Diffusion pipelines. - - Args: - images (`np.ndarray`) - Array of shape `(batch_size, height, width, num_channels)` with images from the diffusion pipeline. - nsfw_content_detected (`List[bool]`) - List of flags denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content. - """ - - images: np.ndarray - nsfw_content_detected: List[bool] - - from ...schedulers.scheduling_pndm_flax import PNDMSchedulerState - from .pipeline_flax_stable_diffusion import FlaxStableDiffusionPipeline - from .pipeline_flax_stable_diffusion_controlnet import FlaxStableDiffusionControlNetPipeline - from .pipeline_flax_stable_diffusion_img2img import FlaxStableDiffusionImg2ImgPipeline - from .pipeline_flax_stable_diffusion_inpaint import FlaxStableDiffusionInpaintPipeline - from .safety_checker_flax import FlaxStableDiffusionSafetyChecker diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_karras_ve.py b/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_karras_ve.py deleted file mode 100644 index 87f6514a4e93e4a75bd6228ed852306b8c005c3d..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_karras_ve.py +++ /dev/null @@ -1,232 +0,0 @@ -# Copyright 2023 NVIDIA and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import numpy as np -import torch - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput, randn_tensor -from .scheduling_utils import SchedulerMixin - - -@dataclass -class KarrasVeOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - derivative (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - Derivative of predicted original image sample (x_0). - pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - The predicted denoised sample (x_{0}) based on the model output from the current timestep. - `pred_original_sample` can be used to preview progress or for guidance. 
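-
-    Example (an illustrative sketch, not taken from the original module; `out` is assumed
-    to be the value returned by `KarrasVeScheduler.step(..., return_dict=True)`):
-
-        >>> preview = out.pred_original_sample  # rough estimate of x_0 at this step
-        >>> next_sample = out.prev_sample       # input for the next denoising step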
- """ - - prev_sample: torch.FloatTensor - derivative: torch.FloatTensor - pred_original_sample: Optional[torch.FloatTensor] = None - - -class KarrasVeScheduler(SchedulerMixin, ConfigMixin): - """ - Stochastic sampling from Karras et al. [1] tailored to the Variance-Expanding (VE) models [2]. Use Algorithm 2 and - the VE column of Table 1 from [1] for reference. - - [1] Karras, Tero, et al. "Elucidating the Design Space of Diffusion-Based Generative Models." - https://arxiv.org/abs/2206.00364 [2] Song, Yang, et al. "Score-based generative modeling through stochastic - differential equations." https://arxiv.org/abs/2011.13456 - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - For more details on the parameters, see the original paper's Appendix E.: "Elucidating the Design Space of - Diffusion-Based Generative Models." https://arxiv.org/abs/2206.00364. The grid search values used to find the - optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper. - - Args: - sigma_min (`float`): minimum noise magnitude - sigma_max (`float`): maximum noise magnitude - s_noise (`float`): the amount of additional noise to counteract loss of detail during sampling. - A reasonable range is [1.000, 1.011]. - s_churn (`float`): the parameter controlling the overall amount of stochasticity. - A reasonable range is [0, 100]. - s_min (`float`): the start value of the sigma range where we add noise (enable stochasticity). - A reasonable range is [0, 10]. - s_max (`float`): the end value of the sigma range where we add noise. - A reasonable range is [0.2, 80]. - - """ - - order = 2 - - @register_to_config - def __init__( - self, - sigma_min: float = 0.02, - sigma_max: float = 100, - s_noise: float = 1.007, - s_churn: float = 80, - s_min: float = 0.05, - s_max: float = 50, - ): - # standard deviation of the initial noise distribution - self.init_noise_sigma = sigma_max - - # setable values - self.num_inference_steps: int = None - self.timesteps: np.IntTensor = None - self.schedule: torch.FloatTensor = None # sigma(t_i) - - def scale_model_input(self, sample: torch.FloatTensor, timestep: Optional[int] = None) -> torch.FloatTensor: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. - - Args: - sample (`torch.FloatTensor`): input sample - timestep (`int`, optional): current timestep - - Returns: - `torch.FloatTensor`: scaled input sample - """ - return sample - - def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None): - """ - Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. 
- - """ - self.num_inference_steps = num_inference_steps - timesteps = np.arange(0, self.num_inference_steps)[::-1].copy() - self.timesteps = torch.from_numpy(timesteps).to(device) - schedule = [ - ( - self.config.sigma_max**2 - * (self.config.sigma_min**2 / self.config.sigma_max**2) ** (i / (num_inference_steps - 1)) - ) - for i in self.timesteps - ] - self.schedule = torch.tensor(schedule, dtype=torch.float32, device=device) - - def add_noise_to_input( - self, sample: torch.FloatTensor, sigma: float, generator: Optional[torch.Generator] = None - ) -> Tuple[torch.FloatTensor, float]: - """ - Explicit Langevin-like "churn" step of adding noise to the sample according to a factor gamma_i ≥ 0 to reach a - higher noise level sigma_hat = sigma_i + gamma_i*sigma_i. - - TODO Args: - """ - if self.config.s_min <= sigma <= self.config.s_max: - gamma = min(self.config.s_churn / self.num_inference_steps, 2**0.5 - 1) - else: - gamma = 0 - - # sample eps ~ N(0, S_noise^2 * I) - eps = self.config.s_noise * randn_tensor(sample.shape, generator=generator).to(sample.device) - sigma_hat = sigma + gamma * sigma - sample_hat = sample + ((sigma_hat**2 - sigma**2) ** 0.5 * eps) - - return sample_hat, sigma_hat - - def step( - self, - model_output: torch.FloatTensor, - sigma_hat: float, - sigma_prev: float, - sample_hat: torch.FloatTensor, - return_dict: bool = True, - ) -> Union[KarrasVeOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - sigma_hat (`float`): TODO - sigma_prev (`float`): TODO - sample_hat (`torch.FloatTensor`): TODO - return_dict (`bool`): option for returning tuple rather than KarrasVeOutput class - - KarrasVeOutput: updated sample in the diffusion chain and derivative (TODO double check). - Returns: - [`~schedulers.scheduling_karras_ve.KarrasVeOutput`] or `tuple`: - [`~schedulers.scheduling_karras_ve.KarrasVeOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. - - """ - - pred_original_sample = sample_hat + sigma_hat * model_output - derivative = (sample_hat - pred_original_sample) / sigma_hat - sample_prev = sample_hat + (sigma_prev - sigma_hat) * derivative - - if not return_dict: - return (sample_prev, derivative) - - return KarrasVeOutput( - prev_sample=sample_prev, derivative=derivative, pred_original_sample=pred_original_sample - ) - - def step_correct( - self, - model_output: torch.FloatTensor, - sigma_hat: float, - sigma_prev: float, - sample_hat: torch.FloatTensor, - sample_prev: torch.FloatTensor, - derivative: torch.FloatTensor, - return_dict: bool = True, - ) -> Union[KarrasVeOutput, Tuple]: - """ - Correct the predicted sample based on the output model_output of the network. TODO complete description - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - sigma_hat (`float`): TODO - sigma_prev (`float`): TODO - sample_hat (`torch.FloatTensor`): TODO - sample_prev (`torch.FloatTensor`): TODO - derivative (`torch.FloatTensor`): TODO - return_dict (`bool`): option for returning tuple rather than KarrasVeOutput class - - Returns: - prev_sample (TODO): updated sample in the diffusion chain. 
derivative (`torch.FloatTensor`): the corrected derivative (TODO: double check).
-
-        """
-        pred_original_sample = sample_prev + sigma_prev * model_output
-        derivative_corr = (sample_prev - pred_original_sample) / sigma_prev
-        sample_prev = sample_hat + (sigma_prev - sigma_hat) * (0.5 * derivative + 0.5 * derivative_corr)
-
-        if not return_dict:
-            return (sample_prev, derivative)
-
-        return KarrasVeOutput(
-            prev_sample=sample_prev, derivative=derivative, pred_original_sample=pred_original_sample
-        )
-
-    def add_noise(self, original_samples, noise, timesteps):
-        raise NotImplementedError()
diff --git a/spaces/devthedeveloper/Bark-with-Voice-Cloning/util/settings.py b/spaces/devthedeveloper/Bark-with-Voice-Cloning/util/settings.py
deleted file mode 100644
index 2ab66b0c7605d2b877defdd8592097a8a4c6f21a..0000000000000000000000000000000000000000
--- a/spaces/devthedeveloper/Bark-with-Voice-Cloning/util/settings.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import yaml
-
-class Settings:
-    def __init__(self, config_file):
-        self.config_file = config_file
-        self.load()
-
-    def load(self):
-        try:
-            with open(self.config_file, 'r') as f:
-                data = yaml.load(f, Loader=yaml.FullLoader)
-                self.selected_theme = data.get('selected_theme', "gstaff/xkcd")
-                self.server_name = data.get('server_name', "")
-                self.server_port = data.get('server_port', 0)
-                self.server_share = data.get('server_share', False)
-                self.input_text_desired_length = data.get('input_text_desired_length', 110)
-                self.input_text_max_length = data.get('input_text_max_length', 170)
-                self.silence_sentence = data.get('silence_between_sentences', 250)
-                self.silence_speakers = data.get('silence_between_speakers', 500)
-                self.output_folder_path = data.get('output_folder_path', 'outputs')
-
-        except Exception:
-            # Fall back to defaults for every field, not only the theme, so a
-            # missing or corrupt config file cannot leave attributes undefined.
-            self.selected_theme = "gstaff/xkcd"
-            self.server_name = ""
-            self.server_port = 0
-            self.server_share = False
-            self.input_text_desired_length = 110
-            self.input_text_max_length = 170
-            self.silence_sentence = 250
-            self.silence_speakers = 500
-            self.output_folder_path = 'outputs'
-
-    def save(self):
-        data = {
-            'selected_theme': self.selected_theme,
-            'server_name': self.server_name,
-            'server_port': self.server_port,
-            'server_share': self.server_share,
-            'input_text_desired_length' : self.input_text_desired_length,
-            'input_text_max_length' : self.input_text_max_length,
-            'silence_between_sentences': self.silence_sentence,
-            'silence_between_speakers': self.silence_speakers,
-            'output_folder_path': self.output_folder_path
-        }
-        with open(self.config_file, 'w') as f:
-            yaml.dump(data, f)
-
-
diff --git a/spaces/diacanFperku/AutoGPT/DataPC DX11 AC3 Homestead.forge.md b/spaces/diacanFperku/AutoGPT/DataPC DX11 AC3 Homestead.forge.md
deleted file mode 100644
index 20e804e167c132b89f20d7846f0ccc5d9230a7d2..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/DataPC DX11 AC3 Homestead.forge.md
+++ /dev/null
@@ -1,14 +0,0 @@
-

          DataPC DX11 AC3 Homestead.forge


Download >>> https://gohhs.com/2uFSYs



- -I also installed Assassin's Creed III today and, during installation, it gives the following errors: 1) CRC error datapc dx11 ac3 boston.forge 2) CRC error datapc dx11 ac3 boston. -3) through 10) repeat the same CRC installation error for boston. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/diacanFperku/AutoGPT/Korg Pa Manager V3 With Crack REPACK.md b/spaces/diacanFperku/AutoGPT/Korg Pa Manager V3 With Crack REPACK.md deleted file mode 100644 index e4954727ce59cee2381f9051748313b2f47c6b6e..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Korg Pa Manager V3 With Crack REPACK.md +++ /dev/null @@ -1,6 +0,0 @@ -

Korg Pa Manager V3 With Crack


          DOWNLOAD >>> https://gohhs.com/2uFTrf



- -7 Jan 2022 - KORG PA Manager Crack is a capable, user-friendly program for processing music files, suitable for both professional and amateur use. Its features include file management, synthesizer management, MIDI management, MIDI settings management, and more, all behind an easy-to-use multilingual interface, and it can also serve as a music editor or player. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/diagaiwei/ir_chinese_medqa/README.md b/spaces/diagaiwei/ir_chinese_medqa/README.md deleted file mode 100644 index 2de3798f5a72ca09f270879db39709b07dbd6565..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Ir Chinese Medqa -emoji: 📉 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.5 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/diego2554/RemBG_super/rembg/session_cloth.py b/spaces/diego2554/RemBG_super/rembg/session_cloth.py deleted file mode 100644 index 11bcef74378be4d64058772c29ac45240f60a85b..0000000000000000000000000000000000000000 --- a/spaces/diego2554/RemBG_super/rembg/session_cloth.py +++ /dev/null @@ -1,88 +0,0 @@ -from typing import List - -import numpy as np -from PIL import Image -from PIL.Image import Image as PILImage -from scipy.special import log_softmax - -from .session_base import BaseSession - -pallete1 = [ - 0, - 0, - 0, - 255, - 255, - 255, - 0, - 0, - 0, - 0, - 0, - 0, -] - -pallete2 = [ - 0, - 0, - 0, - 0, - 0, - 0, - 255, - 255, - 255, - 0, - 0, - 0, -] - -pallete3 = [ - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 255, - 255, - 255, -] - - -class ClothSession(BaseSession): - def predict(self, img: PILImage) -> List[PILImage]: - ort_outs = self.inner_session.run( - None, self.normalize(img, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), (768, 768)) - ) - - pred = ort_outs - pred = log_softmax(pred[0], 1) - pred = np.argmax(pred, axis=1, keepdims=True) - pred = np.squeeze(pred, 0) - pred = np.squeeze(pred, 0) - - mask = Image.fromarray(pred.astype("uint8"), mode="L") - mask = mask.resize(img.size, Image.LANCZOS) - - masks = [] - - mask1 = mask.copy() - mask1.putpalette(pallete1) - mask1 = mask1.convert("RGB").convert("L") - masks.append(mask1) - - mask2 = mask.copy() - mask2.putpalette(pallete2) - mask2 = mask2.convert("RGB").convert("L") - masks.append(mask2) - - mask3 = mask.copy() - mask3.putpalette(pallete3) - mask3 = mask3.convert("RGB").convert("L") - masks.append(mask3) - - return masks diff --git a/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/monotonic_align/setup.py b/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/monotonic_align/setup.py deleted file mode 100644 index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/monotonic_align/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize -import numpy - -setup( - name = 'monotonic_align', - ext_modules = cythonize("core.pyx"), - include_dirs=[numpy.get_include()] -) diff --git a/spaces/digitalxingtong/Miiu-Bert-Vits2/transcribe_genshin.py b/spaces/digitalxingtong/Miiu-Bert-Vits2/transcribe_genshin.py deleted file mode 100644 index acc98814af6189d129ab85946525bec55419a33f..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Miiu-Bert-Vits2/transcribe_genshin.py +++ /dev/null @@ -1,78 +0,0 @@ -# coding=gbk -import os -import argparse -import librosa -import numpy as np -from multiprocessing import Pool, cpu_count - -import soundfile -from scipy.io import wavfile -from tqdm import tqdm - -global speaker_annos -speaker_annos = [] - -def process(item): - spkdir, wav_name, args = item - speaker = spkdir.replace("\\", "/").split("/")[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - if 
os.path.exists(wav_path) and '.wav' in wav_path: - os.makedirs(os.path.join(args.out_dir, speaker), exist_ok=True) - wav, sr = librosa.load(wav_path, sr=args.sr) - soundfile.write( - os.path.join(args.out_dir, speaker, wav_name), - wav, - sr - ) - -def process_text(item): - spkdir, wav_name, args = item - speaker = spkdir.replace("\\", "/").split("/")[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - global speaker_annos - tr_name = wav_name.replace('.wav', '') - with open(args.out_dir+'/'+speaker+'/'+tr_name+'.lab', "r", encoding="utf-8") as file: - text = file.read() - text = text.replace("{NICKNAME}",'') - text = text.replace("{M#}{F#}",'') - text = text.replace("{M#}{F#}",'') - substring = "{M#}{F#}" - if substring in text: - if tr_name.endswith("a"): - text = text.replace("{M#}{F#}",'') - if tr_name.endswith("b"): - text = text.replace("{M#}{F#}",'') - text = text.replace("#",'') - text = "ZH|" + text + "\n" # - speaker_annos.append(args.out_dir+'/'+speaker+'/'+wav_name+ "|" + speaker + "|" + text) - - - -if __name__ == "__main__": - parent_dir = "./genshin_dataset/" - speaker_names = list(os.walk(parent_dir))[0][1] - parser = argparse.ArgumentParser() - parser.add_argument("--sr", type=int, default=44100, help="sampling rate") - parser.add_argument("--in_dir", type=str, default="./genshin_dataset", help="path to source dir") - parser.add_argument("--out_dir", type=str, default="./genshin_dataset", help="path to target dir") - args = parser.parse_args() - # processs = 8 - processs = cpu_count()-2 if cpu_count() >4 else 1 - pool = Pool(processes=processs) - - for speaker in os.listdir(args.in_dir): - spk_dir = os.path.join(args.in_dir, speaker) - if os.path.isdir(spk_dir): - print(spk_dir) - for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])): - pass - for i in os.listdir(spk_dir): - if i.endswith("wav"): - pro=(spk_dir, i, args) - process_text(pro) - if len(speaker_annos) == 0: - print("transcribe error!!!") - with open("./filelists/short_character_anno.list", 'w', encoding='utf-8') as f: - for line in speaker_annos: - f.write(line) - print("transcript file finished.") diff --git a/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/text/english.py b/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/text/english.py deleted file mode 100644 index 781d0a56cef71f66fc67db51d76538be90d3ddd2..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/text/english.py +++ /dev/null @@ -1,138 +0,0 @@ -import pickle -import os -import re -from g2p_en import G2p -from string import punctuation - -from text import symbols - -current_file_path = os.path.dirname(__file__) -CMU_DICT_PATH = os.path.join(current_file_path, 'cmudict.rep') -CACHE_PATH = os.path.join(current_file_path, 'cmudict_cache.pickle') -_g2p = G2p() - -arpa = {'AH0', 'S', 'AH1', 'EY2', 'AE2', 'EH0', 'OW2', 'UH0', 'NG', 'B', 'G', 'AY0', 'M', 'AA0', 'F', 'AO0', 'ER2', 'UH1', 'IY1', 'AH2', 'DH', 'IY0', 'EY1', 'IH0', 'K', 'N', 'W', 'IY2', 'T', 'AA1', 'ER1', 'EH2', 'OY0', 'UH2', 'UW1', 'Z', 'AW2', 'AW1', 'V', 'UW2', 'AA2', 'ER', 'AW0', 'UW0', 'R', 'OW1', 'EH1', 'ZH', 'AE0', 'IH2', 'IH', 'Y', 'JH', 'P', 'AY1', 'EY0', 'OY2', 'TH', 'HH', 'D', 'ER0', 'CH', 'AO1', 'AE1', 'AO2', 'OY1', 'AY2', 'IH1', 'OW0', 'L', 'SH'} - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = 
rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def read_dict(): - g2p_dict = {} - start_line = 49 - with open(CMU_DICT_PATH) as f: - line = f.readline() - line_index = 1 - while line: - if line_index >= start_line: - line = line.strip() - word_split = line.split(' ') - word = word_split[0] - - syllable_split = word_split[1].split(' - ') - g2p_dict[word] = [] - for syllable in syllable_split: - phone_split = syllable.split(' ') - g2p_dict[word].append(phone_split) - - line_index = line_index + 1 - line = f.readline() - - return g2p_dict - - -def cache_dict(g2p_dict, file_path): - with open(file_path, 'wb') as pickle_file: - pickle.dump(g2p_dict, pickle_file) - - -def get_dict(): - if os.path.exists(CACHE_PATH): - with open(CACHE_PATH, 'rb') as pickle_file: - g2p_dict = pickle.load(pickle_file) - else: - g2p_dict = read_dict() - cache_dict(g2p_dict, CACHE_PATH) - - return g2p_dict - -eng_dict = get_dict() - -def refine_ph(phn): - tone = 0 - if re.search(r'\d$', phn): - tone = int(phn[-1]) + 1 - phn = phn[:-1] - return phn.lower(), tone - -def refine_syllables(syllables): - tones = [] - phonemes = [] - for phn_list in syllables: - for i in range(len(phn_list)): - phn = phn_list[i] - phn, tone = refine_ph(phn) - phonemes.append(phn) - tones.append(tone) - return phonemes, tones - - -def text_normalize(text): - # todo: eng text normalize - return text - -def g2p(text): - - phones = [] - tones = [] - words = re.split(r"([,;.\-\?\!\s+])", text) - for w in words: - if w.upper() in eng_dict: - phns, tns = refine_syllables(eng_dict[w.upper()]) - phones += phns - tones += tns - else: - phone_list = list(filter(lambda p: p != " ", _g2p(w))) - for ph in phone_list: - if ph in arpa: - ph, tn = refine_ph(ph) - phones.append(ph) - tones.append(tn) - else: - phones.append(ph) - tones.append(0) - # todo: implement word2ph - word2ph = [1 for i in phones] - - phones = [post_replace_ph(i) for i in phones] - return phones, tones, word2ph - -if __name__ == "__main__": - # print(get_dict()) - # print(eng_word_to_phoneme("hello")) - print(g2p("In this paper, we propose 1 DSPGAN, a GAN-based universal vocoder.")) - # all_phones = set() - # for k, syllables in eng_dict.items(): - # for group in syllables: - # for ph in group: - # all_phones.add(ph) - # print(all_phones) \ No newline at end of file diff --git a/spaces/dineshreddy/WALT/mmdet/core/bbox/samplers/instance_balanced_pos_sampler.py b/spaces/dineshreddy/WALT/mmdet/core/bbox/samplers/instance_balanced_pos_sampler.py deleted file mode 100644 index c735298487e14e4a0ec42913f25673cccb98a8a0..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/core/bbox/samplers/instance_balanced_pos_sampler.py +++ /dev/null @@ -1,55 +0,0 @@ -import numpy as np -import torch - -from ..builder import BBOX_SAMPLERS -from .random_sampler import RandomSampler - - -@BBOX_SAMPLERS.register_module() -class InstanceBalancedPosSampler(RandomSampler): - """Instance balanced sampler that samples equal number of positive samples - for each instance.""" - - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Sample positive boxes. - - Args: - assign_result (:obj:`AssignResult`): The assigned results of boxes. - num_expected (int): The number of expected positive samples - - Returns: - Tensor or ndarray: sampled indices. 
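-
-        Note (an illustrative worked example, not part of the original docstring):
-        with `num_expected=6` and 3 ground-truth instances, each instance is allotted
-        `num_per_gt = int(round(6 / 3.) + 1) = 3` positives; any shortfall after
-        per-instance sampling is topped up from the remaining positive indices.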
- """ - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - unique_gt_inds = assign_result.gt_inds[pos_inds].unique() - num_gts = len(unique_gt_inds) - num_per_gt = int(round(num_expected / float(num_gts)) + 1) - sampled_inds = [] - for i in unique_gt_inds: - inds = torch.nonzero( - assign_result.gt_inds == i.item(), as_tuple=False) - if inds.numel() != 0: - inds = inds.squeeze(1) - else: - continue - if len(inds) > num_per_gt: - inds = self.random_choice(inds, num_per_gt) - sampled_inds.append(inds) - sampled_inds = torch.cat(sampled_inds) - if len(sampled_inds) < num_expected: - num_extra = num_expected - len(sampled_inds) - extra_inds = np.array( - list(set(pos_inds.cpu()) - set(sampled_inds.cpu()))) - if len(extra_inds) > num_extra: - extra_inds = self.random_choice(extra_inds, num_extra) - extra_inds = torch.from_numpy(extra_inds).to( - assign_result.gt_inds.device).long() - sampled_inds = torch.cat([sampled_inds, extra_inds]) - elif len(sampled_inds) > num_expected: - sampled_inds = self.random_choice(sampled_inds, num_expected) - return sampled_inds diff --git a/spaces/dineshreddy/WALT/mmdet/models/losses/iou_loss.py b/spaces/dineshreddy/WALT/mmdet/models/losses/iou_loss.py deleted file mode 100644 index eba6f18b80981ca891c1add37007e6bf478c651f..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/losses/iou_loss.py +++ /dev/null @@ -1,436 +0,0 @@ -import math - -import mmcv -import torch -import torch.nn as nn - -from mmdet.core import bbox_overlaps -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def iou_loss(pred, target, linear=False, eps=1e-6): - """IoU loss. - - Computing the IoU loss between a set of predicted bboxes and target bboxes. - The loss is calculated as negative log of IoU. - - Args: - pred (torch.Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (torch.Tensor): Corresponding gt bboxes, shape (n, 4). - linear (bool, optional): If True, use linear scale of loss instead of - log scale. Default: False. - eps (float): Eps to avoid log(0). - - Return: - torch.Tensor: Loss tensor. - """ - ious = bbox_overlaps(pred, target, is_aligned=True).clamp(min=eps) - if linear: - loss = 1 - ious - else: - loss = -ious.log() - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def bounded_iou_loss(pred, target, beta=0.2, eps=1e-3): - """BIoULoss. - - This is an implementation of paper - `Improving Object Localization with Fitness NMS and Bounded IoU Loss. - `_. - - Args: - pred (torch.Tensor): Predicted bboxes. - target (torch.Tensor): Target bboxes. - beta (float): beta parameter in smoothl1. - eps (float): eps to avoid NaN. 
- """ - pred_ctrx = (pred[:, 0] + pred[:, 2]) * 0.5 - pred_ctry = (pred[:, 1] + pred[:, 3]) * 0.5 - pred_w = pred[:, 2] - pred[:, 0] - pred_h = pred[:, 3] - pred[:, 1] - with torch.no_grad(): - target_ctrx = (target[:, 0] + target[:, 2]) * 0.5 - target_ctry = (target[:, 1] + target[:, 3]) * 0.5 - target_w = target[:, 2] - target[:, 0] - target_h = target[:, 3] - target[:, 1] - - dx = target_ctrx - pred_ctrx - dy = target_ctry - pred_ctry - - loss_dx = 1 - torch.max( - (target_w - 2 * dx.abs()) / - (target_w + 2 * dx.abs() + eps), torch.zeros_like(dx)) - loss_dy = 1 - torch.max( - (target_h - 2 * dy.abs()) / - (target_h + 2 * dy.abs() + eps), torch.zeros_like(dy)) - loss_dw = 1 - torch.min(target_w / (pred_w + eps), pred_w / - (target_w + eps)) - loss_dh = 1 - torch.min(target_h / (pred_h + eps), pred_h / - (target_h + eps)) - loss_comb = torch.stack([loss_dx, loss_dy, loss_dw, loss_dh], - dim=-1).view(loss_dx.size(0), -1) - - loss = torch.where(loss_comb < beta, 0.5 * loss_comb * loss_comb / beta, - loss_comb - 0.5 * beta) - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def giou_loss(pred, target, eps=1e-7): - r"""`Generalized Intersection over Union: A Metric and A Loss for Bounding - Box Regression `_. - - Args: - pred (torch.Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (torch.Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - - Return: - Tensor: Loss tensor. - """ - gious = bbox_overlaps(pred, target, mode='giou', is_aligned=True, eps=eps) - loss = 1 - gious - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def diou_loss(pred, target, eps=1e-7): - r"""`Implementation of Distance-IoU Loss: Faster and Better - Learning for Bounding Box Regression, https://arxiv.org/abs/1911.08287`_. - - Code is modified from https://github.com/Zzh-tju/DIoU. - - Args: - pred (Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - Return: - Tensor: Loss tensor. - """ - # overlap - lt = torch.max(pred[:, :2], target[:, :2]) - rb = torch.min(pred[:, 2:], target[:, 2:]) - wh = (rb - lt).clamp(min=0) - overlap = wh[:, 0] * wh[:, 1] - - # union - ap = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1]) - ag = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1]) - union = ap + ag - overlap + eps - - # IoU - ious = overlap / union - - # enclose area - enclose_x1y1 = torch.min(pred[:, :2], target[:, :2]) - enclose_x2y2 = torch.max(pred[:, 2:], target[:, 2:]) - enclose_wh = (enclose_x2y2 - enclose_x1y1).clamp(min=0) - - cw = enclose_wh[:, 0] - ch = enclose_wh[:, 1] - - c2 = cw**2 + ch**2 + eps - - b1_x1, b1_y1 = pred[:, 0], pred[:, 1] - b1_x2, b1_y2 = pred[:, 2], pred[:, 3] - b2_x1, b2_y1 = target[:, 0], target[:, 1] - b2_x2, b2_y2 = target[:, 2], target[:, 3] - - left = ((b2_x1 + b2_x2) - (b1_x1 + b1_x2))**2 / 4 - right = ((b2_y1 + b2_y2) - (b1_y1 + b1_y2))**2 / 4 - rho2 = left + right - - # DIoU - dious = ious - rho2 / c2 - loss = 1 - dious - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def ciou_loss(pred, target, eps=1e-7): - r"""`Implementation of paper `Enhancing Geometric Factors into - Model Learning and Inference for Object Detection and Instance - Segmentation `_. - - Code is modified from https://github.com/Zzh-tju/CIoU. - - Args: - pred (Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). 
- target (Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - Return: - Tensor: Loss tensor. - """ - # overlap - lt = torch.max(pred[:, :2], target[:, :2]) - rb = torch.min(pred[:, 2:], target[:, 2:]) - wh = (rb - lt).clamp(min=0) - overlap = wh[:, 0] * wh[:, 1] - - # union - ap = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1]) - ag = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1]) - union = ap + ag - overlap + eps - - # IoU - ious = overlap / union - - # enclose area - enclose_x1y1 = torch.min(pred[:, :2], target[:, :2]) - enclose_x2y2 = torch.max(pred[:, 2:], target[:, 2:]) - enclose_wh = (enclose_x2y2 - enclose_x1y1).clamp(min=0) - - cw = enclose_wh[:, 0] - ch = enclose_wh[:, 1] - - c2 = cw**2 + ch**2 + eps - - b1_x1, b1_y1 = pred[:, 0], pred[:, 1] - b1_x2, b1_y2 = pred[:, 2], pred[:, 3] - b2_x1, b2_y1 = target[:, 0], target[:, 1] - b2_x2, b2_y2 = target[:, 2], target[:, 3] - - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - - left = ((b2_x1 + b2_x2) - (b1_x1 + b1_x2))**2 / 4 - right = ((b2_y1 + b2_y2) - (b1_y1 + b1_y2))**2 / 4 - rho2 = left + right - - factor = 4 / math.pi**2 - v = factor * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) - - # CIoU - cious = ious - (rho2 / c2 + v**2 / (1 - ious + v)) - loss = 1 - cious - return loss - - -@LOSSES.register_module() -class IoULoss(nn.Module): - """IoULoss. - - Computing the IoU loss between a set of predicted bboxes and target bboxes. - - Args: - linear (bool): If True, use linear scale of loss instead of log scale. - Default: False. - eps (float): Eps to avoid log(0). - reduction (str): Options are "none", "mean" and "sum". - loss_weight (float): Weight of loss. - """ - - def __init__(self, - linear=False, - eps=1e-6, - reduction='mean', - loss_weight=1.0): - super(IoULoss, self).__init__() - self.linear = linear - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. Options are "none", "mean" and "sum". 
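-
-        Example (an illustrative sketch, not taken from the original docstring):
-
-            >>> loss_fn = IoULoss()
-            >>> pred = torch.tensor([[0., 0., 10., 10.]])
-            >>> target = torch.tensor([[0., 0., 10., 10.]])
-            >>> loss = loss_fn(pred, target)  # perfect overlap: IoU = 1, so -log(1) = 0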
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if (weight is not None) and (not torch.any(weight > 0)) and ( - reduction != 'none'): - return (pred * weight).sum() # 0 - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # iou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * iou_loss( - pred, - target, - weight, - linear=self.linear, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class BoundedIoULoss(nn.Module): - - def __init__(self, beta=0.2, eps=1e-3, reduction='mean', loss_weight=1.0): - super(BoundedIoULoss, self).__init__() - self.beta = beta - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss = self.loss_weight * bounded_iou_loss( - pred, - target, - weight, - beta=self.beta, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class GIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(GIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * giou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class DIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(DIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * diou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class CIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', 
loss_weight=1.0): - super(CIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * ciou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss diff --git a/spaces/djillegal/illegal_stable_img2img/README.md b/spaces/djillegal/illegal_stable_img2img/README.md deleted file mode 100644 index 51e466958b0bbc4b161b069b694f46c6e150ea12..0000000000000000000000000000000000000000 --- a/spaces/djillegal/illegal_stable_img2img/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Illegal Stable Img2img -emoji: 🐨 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/doevent/3D_Photo_Inpainting/MiDaS/MiDaS_utils.py b/spaces/doevent/3D_Photo_Inpainting/MiDaS/MiDaS_utils.py deleted file mode 100644 index f961acdd797624ee802fdddc3d69344094009887..0000000000000000000000000000000000000000 --- a/spaces/doevent/3D_Photo_Inpainting/MiDaS/MiDaS_utils.py +++ /dev/null @@ -1,192 +0,0 @@ -"""Utils for monoDepth. -""" -import sys -import re -import numpy as np -import cv2 -import torch -import imageio - - -def read_pfm(path): - """Read pfm file. - - Args: - path (str): path to file - - Returns: - tuple: (data, scale) - """ - with open(path, "rb") as file: - - color = None - width = None - height = None - scale = None - endian = None - - header = file.readline().rstrip() - if header.decode("ascii") == "PF": - color = True - elif header.decode("ascii") == "Pf": - color = False - else: - raise Exception("Not a PFM file: " + path) - - dim_match = re.match(r"^(\d+)\s(\d+)\s$", file.readline().decode("ascii")) - if dim_match: - width, height = list(map(int, dim_match.groups())) - else: - raise Exception("Malformed PFM header.") - - scale = float(file.readline().decode("ascii").rstrip()) - if scale < 0: - # little-endian - endian = "<" - scale = -scale - else: - # big-endian - endian = ">" - - data = np.fromfile(file, endian + "f") - shape = (height, width, 3) if color else (height, width) - - data = np.reshape(data, shape) - data = np.flipud(data) - - return data, scale - - -def write_pfm(path, image, scale=1): - """Write pfm file. - - Args: - path (str): pathto file - image (array): data - scale (int, optional): Scale. Defaults to 1. 
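-
-    Example (an illustrative round trip, not taken from the original docstring; the
-    file name is arbitrary):
-
-        >>> depth = np.random.rand(480, 640).astype(np.float32)  # H x W greyscale
-        >>> write_pfm("depth.pfm", depth)
-        >>> data, scale = read_pfm("depth.pfm")
-        >>> data.shape
-        (480, 640)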
- """ - - with open(path, "wb") as file: - color = None - - if image.dtype.name != "float32": - raise Exception("Image dtype must be float32.") - - image = np.flipud(image) - - if len(image.shape) == 3 and image.shape[2] == 3: # color image - color = True - elif ( - len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1 - ): # greyscale - color = False - else: - raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.") - - file.write("PF\n" if color else "Pf\n".encode()) - file.write("%d %d\n".encode() % (image.shape[1], image.shape[0])) - - endian = image.dtype.byteorder - - if endian == "<" or endian == "=" and sys.byteorder == "little": - scale = -scale - - file.write("%f\n".encode() % scale) - - image.tofile(file) - - -def read_image(path): - """Read image and output RGB image (0-1). - - Args: - path (str): path to file - - Returns: - array: RGB image (0-1) - """ - img = cv2.imread(path) - - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0 - - return img - - -def resize_image(img): - """Resize image and make it fit for network. - - Args: - img (array): image - - Returns: - tensor: data ready for network - """ - height_orig = img.shape[0] - width_orig = img.shape[1] - unit_scale = 384. - - if width_orig > height_orig: - scale = width_orig / unit_scale - else: - scale = height_orig / unit_scale - - height = (np.ceil(height_orig / scale / 32) * 32).astype(int) - width = (np.ceil(width_orig / scale / 32) * 32).astype(int) - - img_resized = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA) - - img_resized = ( - torch.from_numpy(np.transpose(img_resized, (2, 0, 1))).contiguous().float() - ) - img_resized = img_resized.unsqueeze(0) - - return img_resized - - -def resize_depth(depth, width, height): - """Resize depth map and bring to CPU (numpy). - - Args: - depth (tensor): depth - width (int): image width - height (int): image height - - Returns: - array: processed depth - """ - depth = torch.squeeze(depth[0, :, :, :]).to("cpu") - depth = cv2.blur(depth.numpy(), (3, 3)) - depth_resized = cv2.resize( - depth, (width, height), interpolation=cv2.INTER_AREA - ) - - return depth_resized - -def write_depth(path, depth, bits=1): - """Write depth map to pfm and png file. 
- - Args: - path (str): filepath without extension - depth (array): depth - """ - # write_pfm(path + ".pfm", depth.astype(np.float32)) - - depth_min = depth.min() - depth_max = depth.max() - - max_val = (2**(8*bits))-1 - - if depth_max - depth_min > np.finfo("float").eps: - out = max_val * (depth - depth_min) / (depth_max - depth_min) - else: - out = 0 - - if bits == 1: - cv2.imwrite(path + ".png", out.astype("uint8")) - elif bits == 2: - cv2.imwrite(path + ".png", out.astype("uint16")) - - return \ No newline at end of file diff --git a/spaces/duycse1603/math2tex/HybridViT/module/component/prediction_head/seq2seq.py b/spaces/duycse1603/math2tex/HybridViT/module/component/prediction_head/seq2seq.py deleted file mode 100644 index 0920f5035bc50914b3414a6c78302be2b83fb1cd..0000000000000000000000000000000000000000 --- a/spaces/duycse1603/math2tex/HybridViT/module/component/prediction_head/seq2seq.py +++ /dev/null @@ -1,268 +0,0 @@ -import random -import torch -import torch.nn as nn -import torch.nn.functional as F -from einops import repeat -from ...converter import AttnLabelConverter as ATTN -from .addon_module import * - -class Attention(nn.Module): - def __init__(self, - kernel_size, - kernel_dim, - input_size, - hidden_size, - num_classes, - embed_dim=None, - attn_type='coverage', - embed_target=False, - enc_init=False, #init hidden state of decoder with enc output - teacher_forcing=1.0, - droprate=0.1, - method='concat', - seqmodel='ViT', - viz_attn: bool = False, - device='cuda' - ): - super(Attention, self).__init__() - if embed_dim is None: embed_dim = input_size - if embed_target: - self.embedding = nn.Embedding(num_classes, embed_dim, padding_idx=ATTN.START()) - - common = { - 'input_size': input_size, - 'hidden_size': hidden_size, - 'num_embeddings': embed_dim if embed_target else num_classes, - 'num_classes': num_classes - } - - if attn_type == 'luong': - common['method'] = method - self.attention_cell = LuongAttention(**common) - elif attn_type == 'loc_aware': - self.attention_cell = LocationAwareAttention(kernel_size=kernel_size, kernel_dim=kernel_dim, **common) - elif attn_type == 'coverage': - self.attention_cell = LocationAwareAttention(kernel_size=kernel_size, kernel_dim=kernel_dim, **common) - else: - self.attention_cell = BahdanauAttention(**common) - - self.dropout = nn.Dropout(droprate) - self.embed_target = embed_target - self.hidden_size = hidden_size - self.input_size = input_size - self.num_classes = num_classes - self.teacher_forcing = teacher_forcing - self.device = device - self.attn_type = attn_type - self.enc_init = enc_init - self.viz_attn = viz_attn - self.seqmodel = seqmodel - - if enc_init: self.init_hidden() - - def _embed_text(self, input_char): - return self.embedding(input_char) - - def _char_to_onehot(self, input_char, onehot_dim=38): - input_char = input_char.unsqueeze(1) - batch_size = input_char.size(0) - one_hot = torch.FloatTensor(batch_size, onehot_dim).zero_().to(self.device) - one_hot = one_hot.scatter_(1, input_char, 1) - return one_hot - - def init_hidden(self): - self.proj_init_h = nn.Linear(self.input_size, self.hidden_size, bias=True) - self.proj_init_c = nn.Linear(self.input_size, self.hidden_size, bias=True) - - def forward_beam( - self, - batch_H: torch.Tensor, - batch_max_length=25, - beam_size=4, - ): - batch_size = batch_H.size(0) - assert batch_size == 1 - num_steps = batch_max_length + 1 - batch_H = batch_H.squeeze(dim=0) - batch_H = repeat(batch_H, "s e -> b s e", b = beam_size) - - if self.enc_init: - if self.seqmodel == 'BiLSTM': 
- init_embedding = batch_H.mean(dim=1) - else: - init_embedding = batch_H[:, 0, :] - h_0 = self.proj_init_h(init_embedding) - c_0 = self.proj_init_c(init_embedding) - hidden = (h_0, c_0) - else: - hidden = (torch.zeros(beam_size, self.hidden_size, dtype=torch.float32, device=self.device), - torch.zeros(beam_size, self.hidden_size, dtype=torch.float32, device=self.device)) - - if self.attn_type == 'coverage': - alpha_cum = torch.zeros(beam_size, batch_H.shape[1], 1, dtype=torch.float32, device=self.device) - self.attention_cell.reset_mem() - - k_prev_words = torch.LongTensor([[ATTN.START()]] * beam_size).to(self.device) - seqs = k_prev_words - targets = k_prev_words.squeeze(dim=-1) - top_k_scores = torch.zeros(beam_size, 1).to(self.device) - - if self.viz_attn: - seqs_alpha = torch.ones(beam_size, 1, batch_H.shape[1]).to(self.device) - - complete_seqs = list() - if self.viz_attn: - complete_seqs_alpha = list() - complete_seqs_scores = list() - - for step in range(num_steps): - embed_text = self._char_to_onehot(targets, onehot_dim=self.num_classes) if not self.embed_target else self._embed_text(targets) - output, hidden, alpha = self.attention_cell(hidden, batch_H, embed_text) - output = self.dropout(output) - vocab_size = output.shape[1] - - scores = F.log_softmax(output, dim=-1) - scores = top_k_scores.expand_as(scores) + scores - if step == 0: - top_k_scores, top_k_words = scores[0].topk(beam_size, 0, True, True) - else: - top_k_scores, top_k_words = scores.view(-1).topk(beam_size, 0, True, True) - - prev_word_inds = top_k_words // vocab_size - next_word_inds = top_k_words % vocab_size - - seqs = torch.cat([seqs[prev_word_inds], next_word_inds.unsqueeze(1)], dim=1) - if self.viz_attn: - seqs_alpha = torch.cat([seqs_alpha[prev_word_inds], alpha[prev_word_inds].permute(0, 2, 1)], - dim=1) - - incomplete_inds = [ind for ind, next_word in enumerate(next_word_inds) if - next_word != ATTN.END()] - - complete_inds = list(set(range(len(next_word_inds))) - set(incomplete_inds)) - - if len(complete_inds) > 0: - complete_seqs.extend(seqs[complete_inds].tolist()) - if self.viz_attn: - complete_seqs_alpha.extend(seqs_alpha[complete_inds]) - complete_seqs_scores.extend(top_k_scores[complete_inds]) - - beam_size = beam_size - len(complete_inds) - if beam_size == 0: - break - - seqs = seqs[incomplete_inds] - if self.viz_attn: - seqs_alpha = seqs_alpha[incomplete_inds] - hidden = hidden[0][prev_word_inds[incomplete_inds]], \ - hidden[1][prev_word_inds[incomplete_inds]] - batch_H = batch_H[prev_word_inds[incomplete_inds]] - top_k_scores = top_k_scores[incomplete_inds].unsqueeze(1) - targets = next_word_inds[incomplete_inds] - - if self.attn_type == 'coverage': - alpha_cum = alpha_cum + alpha - alpha_cum = alpha_cum[incomplete_inds] - self.attention_cell.set_mem(alpha_cum) - elif self.attn_type == 'loc_aware': - self.attention_cell.set_mem(alpha) - - if len(complete_inds) == 0: - seq = seqs[0][1:].tolist() - seq = torch.LongTensor(seq).unsqueeze(0) - score = top_k_scores[0] - if self.viz_attn: - alphas = seqs_alpha[0][1:, ...] - return seq, score, alphas - else: - return seq, score, None - else: - combine_lst = tuple(zip(complete_seqs, complete_seqs_scores)) - best_ind = combine_lst.index(max(combine_lst, key=lambda x: x[1] / len(x[0]))) #https://youtu.be/XXtpJxZBa2c?t=2407 - seq = complete_seqs[best_ind][1:] #not include [GO] token - seq = torch.LongTensor(seq).unsqueeze(0) - score = max(complete_seqs_scores) - - if self.viz_attn: - alphas = complete_seqs_alpha[best_ind][1:, ...] 
- return seq, score, alphas - else: - return seq, score, None - - def forward_greedy(self, batch_H, text, is_train=True, is_test=False, batch_max_length=25): - batch_size = batch_H.size(0) - num_steps = batch_max_length + 1 - if self.enc_init: - if self.seqmodel == 'BiLSTM': - init_embedding = batch_H.mean(dim=1) - encoder_hidden = batch_H - else: - encoder_hidden = batch_H - init_embedding = batch_H[:, 0, :] - h_0 = self.proj_init_h(init_embedding) - c_0 = self.proj_init_c(init_embedding) - hidden = (h_0, c_0) - else: - encoder_hidden = batch_H - hidden = (torch.zeros(batch_size, self.hidden_size, dtype=torch.float32, device=self.device), - torch.zeros(batch_size, self.hidden_size, dtype=torch.float32, device=self.device)) - - targets = torch.zeros(batch_size, dtype=torch.long, device=self.device) # [GO] token - probs = torch.zeros(batch_size, num_steps, self.num_classes, dtype=torch.float32, device=self.device) - - if self.viz_attn: - self.alpha_stores = torch.zeros(batch_size, num_steps, encoder_hidden.shape[1], 1, dtype=torch.float32, device=self.device) - if self.attn_type == 'coverage': - alpha_cum = torch.zeros(batch_size, encoder_hidden.shape[1], 1, dtype=torch.float32, device=self.device) - - self.attention_cell.reset_mem() - - if is_test: - end_flag = torch.zeros(batch_size, dtype=torch.bool, device=self.device) - - for i in range(num_steps): - embed_text = self._char_to_onehot(targets, onehot_dim=self.num_classes) if not self.embed_target else self._embed_text(targets) - output, hidden, alpha = self.attention_cell(hidden, encoder_hidden, embed_text) - output = self.dropout(output) - if self.viz_attn: - self.alpha_stores[:, i] = alpha - if self.attn_type == 'coverage': - alpha_cum = alpha_cum + alpha - self.attention_cell.set_mem(alpha_cum) - elif self.attn_type == 'loc_aware': - self.attention_cell.set_mem(alpha) - - probs_step = output - probs[:, i, :] = probs_step - - if i == num_steps - 1: - break - - if is_train: - if self.teacher_forcing < random.random(): - _, next_input = probs_step.max(1) - targets = next_input - else: - targets = text[:, i+1] - else: - _, next_input = probs_step.max(1) - targets = next_input - - if is_test: - end_flag = end_flag | (next_input == ATTN.END()) - if end_flag.all(): - break - - _, preds_index = probs.max(2) - - return preds_index, probs, None # batch_size x num_steps x num_classes - - def forward(self, beam_size, batch_H, text, batch_max_length, is_train=True, is_test=False): - if is_train: - return self.forward_greedy(batch_H, text, is_train, is_test, batch_max_length) - else: - if beam_size > 1: - return self.forward_beam(batch_H, batch_max_length, beam_size) - else: - return self.forward_greedy(batch_H, text, is_train, is_test, batch_max_length) - diff --git a/spaces/dylanebert/gaussian-viewer/public/index.html b/spaces/dylanebert/gaussian-viewer/public/index.html deleted file mode 100644 index e632cc9db931ec1a61a8aff7782544018e4c480e..0000000000000000000000000000000000000000 --- a/spaces/dylanebert/gaussian-viewer/public/index.html +++ /dev/null @@ -1,48 +0,0 @@ - - - - - - - - - - - - - - - - - -
          Loading...

          paper | - code | - explanation

          - - -
          - - diff --git a/spaces/editing-images/ai-halloween-photobooth/pipeline_semantic_stable_diffusion_xl_img2img_ddpm.py b/spaces/editing-images/ai-halloween-photobooth/pipeline_semantic_stable_diffusion_xl_img2img_ddpm.py deleted file mode 100644 index bd126c85461f2a27b0ff256c1cde018d02474533..0000000000000000000000000000000000000000 --- a/spaces/editing-images/ai-halloween-photobooth/pipeline_semantic_stable_diffusion_xl_img2img_ddpm.py +++ /dev/null @@ -1,1763 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -import os -#from itertools import repeat -from typing import Any, Callable, Dict, List, Optional, Tuple, Union -import numpy as np -from PIL import Image -from tqdm import tqdm -import torch.nn.functional as F -import math - -import torch -from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer - -from diffusers.image_processor import VaeImageProcessor -from diffusers.loaders import FromSingleFileMixin, LoraLoaderMixin, TextualInversionLoaderMixin -from diffusers.models import AutoencoderKL, UNet2DConditionModel -from diffusers.models.attention_processor import ( - AttnProcessor2_0, - LoRAAttnProcessor2_0, - LoRAXFormersAttnProcessor, - XFormersAttnProcessor, - AttnProcessor, - Attention -) -from diffusers.schedulers import DDIMScheduler -from diffusers.utils import ( - is_accelerate_available, - is_accelerate_version, - is_invisible_watermark_available, - logging, - # randn_tensor, - replace_example_docstring, -) - -from diffusers.utils.torch_utils import randn_tensor -from diffusers.pipeline_utils import DiffusionPipeline -from diffusers.pipelines.stable_diffusion_xl import StableDiffusionXLPipelineOutput - - -if is_invisible_watermark_available(): - from diffusers.pipelines.stable_diffusion_xl.watermark import StableDiffusionXLWatermarker - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import torch - >>> from diffusers import StableDiffusionXLPipeline - - >>> pipe = StableDiffusionXLPipeline.from_pretrained( - ... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 - ... 
) - >>> pipe = pipe.to("cuda") - - >>> prompt = "a photo of an astronaut riding a horse on mars" - >>> image = pipe(prompt).images[0] - ``` -""" - - -class AttentionStore(): - @staticmethod - def get_empty_store(): - return {"down_cross": [], "mid_cross": [], "up_cross": [], - "down_self": [], "mid_self": [], "up_self": []} - - def __call__(self, attn, is_cross: bool, place_in_unet: str, editing_prompts): - # attn.shape = batch_size * head_size, seq_len query, seq_len_key - bs = 2 + editing_prompts - source_batch_size = int(attn.shape[0] // bs) - skip = 1 # skip unconditional - self.forward( - attn[skip*source_batch_size:], - is_cross, - place_in_unet) - - def forward(self, attn, is_cross: bool, place_in_unet: str): - key = f"{place_in_unet}_{'cross' if is_cross else 'self'}" - #print(f"{key} : {attn.shape[1]}") - self.step_store[key].append(attn) - - def between_steps(self, store_step=True): - if store_step: - if self.average: - if len(self.attention_store) == 0: - self.attention_store = self.step_store - else: - for key in self.attention_store: - for i in range(len(self.attention_store[key])): - self.attention_store[key][i] += self.step_store[key][i] - else: - if len(self.attention_store) == 0: - self.attention_store = [self.step_store] - else: - self.attention_store.append(self.step_store) - - self.cur_step += 1 - self.step_store = self.get_empty_store() - - def get_attention(self, step: int): - if self.average: - attention = {key: [item / self.cur_step for item in self.attention_store[key]] for key in self.attention_store} - else: - assert(step is not None) - attention = self.attention_store[step] - return attention - - def aggregate_attention(self, attention_maps, prompts, res: int, - from_where: List[str], is_cross: bool, select: int - ): - out = [] - num_pixels = res ** 2 - for location in from_where: - for item in attention_maps[f"{location}_{'cross' if is_cross else 'self'}"]: - if item.shape[1] == num_pixels: - cross_maps = item.reshape(len(prompts), -1, res, res, item.shape[-1])[select] - out.append(cross_maps) - out = torch.cat(out, dim=0) - # average over heads - out = out.sum(0) / out.shape[0] - return out - - def __init__(self, average: bool): - self.step_store = self.get_empty_store() - self.attention_store = [] - self.cur_step = 0 - self.average = average - -class CrossAttnProcessor: - - def __init__(self, attention_store, place_in_unet, editing_prompts): - self.attnstore = attention_store - self.place_in_unet = place_in_unet - self.editing_prompts = editing_prompts - - def __call__( - self, - attn: Attention, - hidden_states, - encoder_hidden_states=None, - attention_mask=None, - temb=None, - ): - assert(not attn.residual_connection) - assert(attn.spatial_norm is None) - assert(attn.group_norm is None) - assert(hidden_states.ndim != 4) - assert(encoder_hidden_states is not None) # is cross - - batch_size, sequence_length, _ = ( - hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape - ) - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - - query = attn.to_q(hidden_states) - - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - key = attn.to_k(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) - - query = attn.head_to_batch_dim(query) - key = attn.head_to_batch_dim(key) - value = attn.head_to_batch_dim(value) - - attention_probs = 
attn.get_attention_scores(query, key, attention_mask) - self.attnstore(attention_probs, - is_cross=True, - place_in_unet=self.place_in_unet, - editing_prompts=self.editing_prompts) - - hidden_states = torch.bmm(attention_probs, value) - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - hidden_states = hidden_states / attn.rescale_output_factor - return hidden_states - - - # Modified from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionAttendAndExcitePipeline.GaussianSmoothing - class GaussianSmoothing(): - - def __init__(self, device): - kernel_size = [3, 3] - sigma = [0.5, 0.5] - - # The gaussian kernel is the product of the gaussian function of each dimension. - kernel = 1 - meshgrids = torch.meshgrid([torch.arange(size, dtype=torch.float32) for size in kernel_size]) - for size, std, mgrid in zip(kernel_size, sigma, meshgrids): - mean = (size - 1) / 2 - kernel *= 1 / (std * math.sqrt(2 * math.pi)) * torch.exp(-(((mgrid - mean) / (2 * std)) ** 2)) - - # Make sure sum of values in gaussian kernel equals 1. - kernel = kernel / torch.sum(kernel) - - # Reshape to depthwise convolutional weight - kernel = kernel.view(1, 1, *kernel.size()) - kernel = kernel.repeat(1, *[1] * (kernel.dim() - 1)) - - self.weight = kernel.to(device) - - def __call__(self, input): - """ - Apply gaussian filter to input. - - Arguments: - input (torch.Tensor): Input to apply gaussian filter on. - Returns: - filtered (torch.Tensor): Filtered output. - """ - return F.conv2d(input, weight=self.weight.to(input.dtype)) - - - def load_image(image_path, size=1024, left=0, right=0, top=0, bottom=0, device=None, dtype=None): - print(f"load image of size {size}x{size}") - if isinstance(image_path, str): - image = np.array(Image.open(image_path).convert('RGB'))[:, :, :3] - else: - image = image_path - h, w, c = image.shape - left = min(left, w-1) - right = min(right, w - left - 1) - top = min(top, h - 1) - bottom = min(bottom, h - top - 1) - image = image[top:h-bottom, left:w-right] - h, w, c = image.shape - if h < w: - offset = (w - h) // 2 - image = image[:, offset:offset + h] - elif w < h: - offset = (h - w) // 2 - image = image[offset:offset + w] - image = np.array(Image.fromarray(image).resize((size, size))) - image = torch.from_numpy(image).float() / 127.5 - 1 - image = image.permute(2, 0, 1).unsqueeze(0) - - image = image.to(device=device, dtype=dtype) - return image - -# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.rescale_noise_cfg -def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0): - """ - Rescale `noise_cfg` according to `guidance_rescale`. Based on findings of [Common Diffusion Noise Schedules and - Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). 
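- Concretely, the code below computes `noise_pred_rescaled = noise_cfg * (std_text / std_cfg)` and returns `guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg`, so `guidance_rescale=0.0` leaves the input unchanged.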
See Section 3.4 - """ - std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True) - std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True) - # rescale the results from guidance (fixes overexposure) - noise_pred_rescaled = noise_cfg * (std_text / std_cfg) - # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images - noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg - return noise_cfg - - -class SemanticStableDiffusionXLImg2ImgPipeline_DDPMInversion(DiffusionPipeline, FromSingleFileMixin, LoraLoaderMixin): - r""" - Pipeline for text-to-image generation using Stable Diffusion XL. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - In addition the pipeline inherits the following loading methods: - - *LoRA*: [`StableDiffusionXLPipeline.load_lora_weights`] - - *Ckpt*: [`loaders.FromSingleFileMixin.from_single_file`] - - as well as the following saving methods: - - *LoRA*: [`loaders.StableDiffusionXLPipeline.save_lora_weights`] - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion XL uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - text_encoder_2 ([` CLIPTextModelWithProjection`]): - Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), - specifically the - [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k) - variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - tokenizer_2 (`CLIPTokenizer`): - Second Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - """ - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - text_encoder_2: CLIPTextModelWithProjection, - tokenizer: CLIPTokenizer, - tokenizer_2: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: DDIMScheduler, - force_zeros_for_empty_prompt: bool = True, - add_watermarker: Optional[bool] = None, - ): - super().__init__() - - if not isinstance(scheduler, DDIMScheduler): - scheduler = DDIMScheduler.from_config(scheduler.config) - logger.warning("This pipeline only supports DDIMScheduler. 
" - "The scheduler has been changed to DDIMScheduler.") - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - text_encoder_2=text_encoder_2, - tokenizer=tokenizer, - tokenizer_2=tokenizer_2, - unet=unet, - scheduler=scheduler, - ) - self.register_to_config(force_zeros_for_empty_prompt=force_zeros_for_empty_prompt) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) - self.default_sample_size = self.unet.config.sample_size - - add_watermarker = add_watermarker if add_watermarker is not None else is_invisible_watermark_available() - - if add_watermarker: - self.watermark = StableDiffusionXLWatermarker() - else: - self.watermark = None - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing - def enable_vae_slicing(self): - r""" - Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to - compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. - """ - self.vae.enable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing - def disable_vae_slicing(self): - r""" - Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to - computing decoding in one step. - """ - self.vae.disable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_tiling - def enable_vae_tiling(self): - r""" - Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to - compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow - processing larger images. - """ - self.vae.enable_tiling() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_tiling - def disable_vae_tiling(self): - r""" - Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to - computing decoding in one step. - """ - self.vae.disable_tiling() - - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared - to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward` - method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with - `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`. 
- """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - model_sequence = ( - [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2] - ) - model_sequence.extend([self.unet, self.vae]) - - hook = None - for cpu_offloaded_model in model_sequence: - _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook) - - # We'll offload the last model manually. - self.final_offload_hook = hook - - def encode_prompt( - self, - prompt: str, - prompt_2: Optional[str] = None, - device: Optional[torch.device] = None, - num_images_per_prompt: int = 1, - do_classifier_free_guidance: bool = True, - negative_prompt: Optional[str] = None, - negative_prompt_2: Optional[str] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - pooled_prompt_embeds: Optional[torch.FloatTensor] = None, - negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None, - lora_scale: Optional[float] = None, - enable_edit_guidance: bool = True, - editing_prompt: Optional[str] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - prompt_2 (`str` or `List[str]`, *optional*): - The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is - used in both text-encoders - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - negative_prompt_2 (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and - `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - pooled_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. - If not provided, pooled text embeddings will be generated from `prompt` input argument. - negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. 
If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt` - input argument. - lora_scale (`float`, *optional*): - A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. - """ - device = device or self._execution_device - - # set lora scale so that monkey patched LoRA - # function of text encoder can correctly access it - if lora_scale is not None and isinstance(self, LoraLoaderMixin): - self._lora_scale = lora_scale - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - # Define tokenizers and text encoders - tokenizers = [self.tokenizer, self.tokenizer_2] if self.tokenizer is not None else [self.tokenizer_2] - text_encoders = ( - [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2] - ) - - if prompt_embeds is None: - prompt_2 = prompt_2 or prompt - # textual inversion: process multi-vector tokens if necessary - prompt_embeds_list = [] - prompts = [prompt, prompt_2] - for prompt, tokenizer, text_encoder in zip(prompts, tokenizers, text_encoders): - if isinstance(self, TextualInversionLoaderMixin): - prompt = self.maybe_convert_prompt(prompt, tokenizer) - - text_inputs = tokenizer( - prompt, - padding="max_length", - max_length=tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - - text_input_ids = text_inputs.input_ids - untruncated_ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = tokenizer.batch_decode(untruncated_ids[:, tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {tokenizer.model_max_length} tokens: {removed_text}" - ) - - prompt_embeds = text_encoder( - text_input_ids.to(device), - output_hidden_states=True, - ) - - # We are only ALWAYS interested in the pooled output of the final text encoder - pooled_prompt_embeds = prompt_embeds[0] - prompt_embeds = prompt_embeds.hidden_states[-2] - - prompt_embeds_list.append(prompt_embeds) - - prompt_embeds = torch.concat(prompt_embeds_list, dim=-1) - - # get unconditional embeddings for classifier free guidance - zero_out_negative_prompt = negative_prompt is None and self.config.force_zeros_for_empty_prompt - if do_classifier_free_guidance and negative_prompt_embeds is None and zero_out_negative_prompt: - negative_prompt_embeds = torch.zeros_like(prompt_embeds) - negative_pooled_prompt_embeds = torch.zeros_like(pooled_prompt_embeds) - elif do_classifier_free_guidance and negative_prompt_embeds is None: - negative_prompt = negative_prompt or "" - negative_prompt_2 = negative_prompt_2 or negative_prompt - - uncond_tokens: List[str] - if prompt is not None and type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt, negative_prompt_2] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. 
Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = [negative_prompt, negative_prompt_2] - - negative_prompt_embeds_list = [] - for negative_prompt, tokenizer, text_encoder in zip(uncond_tokens, tokenizers, text_encoders): - if isinstance(self, TextualInversionLoaderMixin): - negative_prompt = self.maybe_convert_prompt(negative_prompt, tokenizer) - - max_length = prompt_embeds.shape[1] - uncond_input = tokenizer( - negative_prompt, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - negative_prompt_embeds = text_encoder( - uncond_input.input_ids.to(device), - output_hidden_states=True, - ) - # We are only ALWAYS interested in the pooled output of the final text encoder - negative_pooled_prompt_embeds = negative_prompt_embeds[0] - negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2] - - negative_prompt_embeds_list.append(negative_prompt_embeds) - - negative_prompt_embeds = torch.concat(negative_prompt_embeds_list, dim=-1) - - num_edit_tokens = None - if enable_edit_guidance: - editing_prompt_2 = editing_prompt - - editing_prompts = [editing_prompt, editing_prompt_2] - edit_prompt_embeds_list = [] - - for editing_prompt, tokenizer, text_encoder in zip(editing_prompts, tokenizers, text_encoders): - if isinstance(self, TextualInversionLoaderMixin): - editing_prompt = self.maybe_convert_prompt(editing_prompt, tokenizer) - - max_length = prompt_embeds.shape[1] - edit_concepts_input = tokenizer( - #[x for item in editing_prompt for x in repeat(item, batch_size)], - editing_prompt, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - return_length=True - ) - - num_edit_tokens = edit_concepts_input.length -2 # not counting startoftext and endoftext - edit_concepts_input_ids = edit_concepts_input.input_ids - edit_concepts_embeds = text_encoder( - edit_concepts_input.input_ids.to(device), - output_hidden_states=True, - ) - # We are only ALWAYS interested in the pooled output of the final text encoder - edit_pooled_prompt_embeds = edit_concepts_embeds[0] - edit_concepts_embeds = edit_concepts_embeds.hidden_states[-2] - - edit_prompt_embeds_list.append(edit_concepts_embeds) - - edit_concepts_embeds = torch.concat(edit_prompt_embeds_list, dim=-1) - else: - edit_concepts_embeds = None - edit_pooled_prompt_embeds = None - - prompt_embeds = prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device) - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device) - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - if enable_edit_guidance: - bs_embed_edit, seq_len, _ = edit_concepts_embeds.shape - edit_concepts_embeds = edit_concepts_embeds.to(dtype=self.text_encoder_2.dtype, device=device) - edit_concepts_embeds = edit_concepts_embeds.repeat(1, num_images_per_prompt, 1) - edit_concepts_embeds = 
edit_concepts_embeds.view(bs_embed_edit * num_images_per_prompt, seq_len, -1) - - pooled_prompt_embeds = pooled_prompt_embeds.repeat(1, num_images_per_prompt).view( - bs_embed * num_images_per_prompt, -1 - ) - if do_classifier_free_guidance: - negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.repeat(1, num_images_per_prompt).view( - bs_embed * num_images_per_prompt, -1 - ) - - if enable_edit_guidance: - edit_pooled_prompt_embeds = edit_pooled_prompt_embeds.repeat(1, num_images_per_prompt).view( - bs_embed_edit * num_images_per_prompt, -1 - ) - - return (prompt_embeds, negative_prompt_embeds, edit_concepts_embeds, - pooled_prompt_embeds, negative_pooled_prompt_embeds, edit_pooled_prompt_embeds, - num_edit_tokens) - - # Modified from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - return extra_step_kwargs - - def check_inputs( - self, - prompt, - prompt_2, - height, - width, - callback_steps, - negative_prompt=None, - negative_prompt_2=None, - prompt_embeds=None, - negative_prompt_embeds=None, - pooled_prompt_embeds=None, - negative_pooled_prompt_embeds=None, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt_2 is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt_2`: {prompt_2} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - elif prompt_2 is not None and (not isinstance(prompt_2, str) and not isinstance(prompt_2, list)): - raise ValueError(f"`prompt_2` has to be of type `str` or `list` but is {type(prompt_2)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." 
- ) - elif negative_prompt_2 is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt_2`: {negative_prompt_2} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - if prompt_embeds is not None and pooled_prompt_embeds is None: - raise ValueError( - "If `prompt_embeds` are provided, `pooled_prompt_embeds` also have to be passed. Make sure to generate `pooled_prompt_embeds` from the same text encoder that was used to generate `prompt_embeds`." - ) - - if negative_prompt_embeds is not None and negative_pooled_prompt_embeds is None: - raise ValueError( - "If `negative_prompt_embeds` are provided, `negative_pooled_prompt_embeds` also have to be passed. Make sure to generate `negative_pooled_prompt_embeds` from the same text encoder that was used to generate `negative_prompt_embeds`." - ) - - # Modified from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, latents): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - def prepare_unet(self, attention_store, enabled_editing_prompts): - attn_procs = {} - for name in self.unet.attn_processors.keys(): - if name.startswith("mid_block"): - place_in_unet = "mid" - elif name.startswith("up_blocks"): - place_in_unet = "up" - elif name.startswith("down_blocks"): - place_in_unet = "down" - else: - continue - - if "attn2" in name: - attn_procs[name] = CrossAttnProcessor( - attention_store=attention_store, - place_in_unet=place_in_unet, - editing_prompts=enabled_editing_prompts) - else: - attn_procs[name] = AttnProcessor() - - self.unet.set_attn_processor(attn_procs) - - - def _get_add_time_ids(self, original_size, crops_coords_top_left, target_size, dtype): - add_time_ids = list(original_size + crops_coords_top_left + target_size) - - passed_add_embed_dim = ( - self.unet.config.addition_time_embed_dim * len(add_time_ids) + self.text_encoder_2.config.projection_dim - ) - expected_add_embed_dim = self.unet.add_embedding.linear_1.in_features - - if expected_add_embed_dim != passed_add_embed_dim: - raise ValueError( - f"Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. The model has an incorrect config. Please check `unet.config.time_embedding_type` and `text_encoder_2.config.projection_dim`." 
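- # Worked example, assuming the standard SDXL-base configuration: addition_time_embed_dim=256 and six time ids give 256 * 6 = 1536, plus projection_dim=1280 yields 2816, which must match add_embedding.linear_1.in_features.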
- ) - - add_time_ids = torch.tensor([add_time_ids], dtype=dtype) - return add_time_ids - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.StableDiffusionUpscalePipeline.upcast_vae - def upcast_vae(self): - dtype = self.vae.dtype - self.vae.to(dtype=torch.float32) - use_torch_2_0_or_xformers = isinstance( - self.vae.decoder.mid_block.attentions[0].processor, - ( - AttnProcessor2_0, - XFormersAttnProcessor, - LoRAXFormersAttnProcessor, - LoRAAttnProcessor2_0, - ), - ) - # if xformers or torch_2_0 is used attention block does not need - # to be in float32 which can save lots of memory - if use_torch_2_0_or_xformers: - self.vae.post_quant_conv.to(dtype) - self.vae.decoder.conv_in.to(dtype) - self.vae.decoder.mid_block.to(dtype) - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - prompt_2: Optional[Union[str, List[str]]] = None, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - #denoising_end: Optional[float] = None, - guidance_scale: float = 5.0, - negative_prompt: Optional[Union[str, List[str]]] = None, - negative_prompt_2: Optional[Union[str, List[str]]] = None, - #num_images_per_prompt: Optional[int] = 1, - eta: float = 1.0, - #generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - pooled_prompt_embeds: Optional[torch.FloatTensor] = None, - negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - guidance_rescale: float = 0.0, - original_size: Optional[Tuple[int, int]] = None, - crops_coords_top_left: Tuple[int, int] = (0, 0), - target_size: Optional[Tuple[int, int]] = None, - editing_prompt: Optional[Union[str, List[str]]] = None, - editing_prompt_embeddings: Optional[torch.Tensor] = None, - reverse_editing_direction: Optional[Union[bool, List[bool]]] = False, - edit_guidance_scale: Optional[Union[float, List[float]]] = 5, - edit_warmup_steps: Optional[Union[int, List[int]]] = 10, - edit_cooldown_steps: Optional[Union[int, List[int]]] = None, - edit_threshold: Optional[Union[float, List[float]]] = 0.9, - edit_momentum_scale: Optional[float] = 0.1, - edit_mom_beta: Optional[float] = 0.4, - edit_weights: Optional[List[float]] = None, - sem_guidance: Optional[List[torch.Tensor]] = None, - user_mask: Optional[torch.FloatTensor] = None, - use_cross_attn_mask: bool = False, - # Attention store (just for visualization purposes) - attn_store_steps: Optional[List[int]] = [], - store_averaged_over_steps: bool = True, - - zs: Optional[torch.FloatTensor] = None, - wts: Optional[torch.FloatTensor] = None, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - prompt_2 (`str` or `List[str]`, *optional*): - The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. 
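- As a purely illustrative pattern, `prompt` can carry the content description while `prompt_2` carries a style phrase, since each is routed to a different text encoder.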
If not defined, `prompt` is - used in both text-encoders - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - denoising_end (`float`, *optional*): - When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be - completed before it is intentionally prematurely terminated. As a result, the returned sample will - still retain a substantial amount of noise as determined by the discrete timesteps selected by the - scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a - "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image - Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output) - guidance_scale (`float`, *optional*, defaults to 5.0): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2 of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - negative_prompt_2 (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and - `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 1.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will be generated by sampling using the supplied random `generator`. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. 
- pooled_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. - If not provided, pooled text embeddings will be generated from `prompt` input argument. - negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt` - input argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] instead - of a plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). - guidance_rescale (`float`, *optional*, defaults to 0.7): - Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are - Flawed](https://arxiv.org/pdf/2305.08891.pdf) `guidance_scale` is defined as `φ` in equation 16. of - [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). - Guidance rescale factor should fix overexposure when using zero terminal SNR. - original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)): - If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled. - `original_size` defaults to `(width, height)` if not specified. Part of SDXL's micro-conditioning as - explained in section 2.2 of - [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). - crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)): - `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position - `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting - `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of - [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). - target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)): - For most cases, `target_size` should be set to the desired height and width of the generated image. If - not specified it will default to `(width, height)`. Part of SDXL's micro-conditioning as explained in - section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). - editing_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting - `editing_prompt = None`. 
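- For example, `editing_prompt=["sunglasses"]` steers the result toward that concept, while the same prompt with `reverse_editing_direction=True` suppresses it.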
Guidance direction of prompt should be specified via - `reverse_editing_direction`. - editing_prompt_embeddings (`torch.Tensor`, *optional*): - Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be - specified via `reverse_editing_direction`. - reverse_editing_direction (`bool` or `List[bool]`, *optional*, defaults to `False`): - Whether the corresponding prompt in `editing_prompt` should be increased or decreased. - edit_guidance_scale (`float` or `List[float]`, *optional*, defaults to 5): - Guidance scale for semantic guidance. If provided as a list, values should correspond to - `editing_prompt`. - edit_warmup_steps (`float` or `List[float]`, *optional*, defaults to 10): - Number of diffusion steps (for each prompt) for which semantic guidance is not applied. Momentum is - calculated for those steps and applied once all warmup periods are over. - edit_cooldown_steps (`float` or `List[float]`, *optional*, defaults to `None`): - Number of diffusion steps (for each prompt) after which semantic guidance is no longer applied. - edit_threshold (`float` or `List[float]`, *optional*, defaults to 0.9): - Threshold of semantic guidance. - edit_momentum_scale (`float`, *optional*, defaults to 0.1): - Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0, - momentum is disabled. Momentum is already built up during warmup (for diffusion steps smaller than - `edit_warmup_steps`). Momentum is only added to latent guidance once all warmup periods are finished. - edit_mom_beta (`float`, *optional*, defaults to 0.4): - Defines how semantic guidance momentum builds up. `edit_mom_beta` indicates how much of the previous - momentum is kept. Momentum is already built up during warmup (for diffusion steps smaller than - `edit_warmup_steps`). - edit_weights (`List[float]`, *optional*, defaults to `None`): - Indicates how much each individual concept should influence the overall guidance. If no weights are - provided all concepts are applied equally. - sem_guidance (`List[torch.Tensor]`, *optional*): - List of pre-generated guidance vectors to be applied at generation. Length of the list has to - correspond to `num_inference_steps`. - - Examples: - - Returns: - [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] if `return_dict` is True, otherwise a - `tuple`. When returning a tuple, the first element is a list with the generated images. - """ - # eta = self.eta - # num_inference_steps = self.num_inversion_steps - num_images_per_prompt = 1 - # latents = self.init_latents - - use_ddpm = True - # zs = self.zs - # wts = self.wts - - if use_cross_attn_mask: - self.smoothing = GaussianSmoothing(self.device) - - # 0. Default height and width to unet - # height = self.height - # width = self.width - # original_size = self.original_size - # target_size = self.target_size - - height = height or self.default_sample_size * self.vae_scale_factor - width = width or self.default_sample_size * self.vae_scale_factor - original_size = original_size or (height, width) - target_size = target_size or (height, width) - - # 1. Check inputs. Raise error if not correct - self.check_inputs( - prompt, - prompt_2, - height, - width, - callback_steps, - negative_prompt, - negative_prompt_2, - prompt_embeds, - negative_prompt_embeds, - pooled_prompt_embeds, - negative_pooled_prompt_embeds, - ) - - # 2. 
Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - - if editing_prompt: - enable_edit_guidance = True - if isinstance(editing_prompt, str): - editing_prompt = [editing_prompt] - enabled_editing_prompts = len(editing_prompt) - elif editing_prompt_embeddings is not None: - enable_edit_guidance = True - enabled_editing_prompts = editing_prompt_embeddings.shape[0] - else: - enabled_editing_prompts = 0 - enable_edit_guidance = False - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - if prompt == "" and (prompt_2 == "" or prompt_2 is None): - # only use uncond noise pred - guidance_scale = 0.0 - do_classifier_free_guidance = True - else: - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_encoder_lora_scale = ( - cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None - ) - ( - prompt_embeds, - negative_prompt_embeds, - edit_prompt_embeds, - pooled_prompt_embeds, - negative_pooled_prompt_embeds, - pooled_edit_embeds, - num_edit_tokens - ) = self.encode_prompt( - prompt=prompt, - prompt_2=prompt_2, - device=device, - num_images_per_prompt=num_images_per_prompt, - do_classifier_free_guidance=do_classifier_free_guidance, - negative_prompt=negative_prompt, - negative_prompt_2=negative_prompt_2, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - pooled_prompt_embeds=pooled_prompt_embeds, - negative_pooled_prompt_embeds=negative_pooled_prompt_embeds, - lora_scale=text_encoder_lora_scale, - enable_edit_guidance=enable_edit_guidance, - editing_prompt=editing_prompt - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - - timesteps = self.scheduler.timesteps - if use_ddpm: - t_to_idx = {int(v):k for k,v in enumerate(timesteps[-zs.shape[0]:])} - timesteps = timesteps[-zs.shape[0]:] - - self.attention_store = AttentionStore(average=store_averaged_over_steps) - # self.prepare_unet(self.attention_store, enabled_editing_prompts) - - # 5. Prepare latent variables - num_channels_latents = self.unet.config.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - latents, - ) - - if user_mask is not None: - user_mask = user_mask.to(self.device) - assert(latents.shape[-2:] == user_mask.shape) - - # 6. Prepare extra step kwargs. - extra_step_kwargs = self.prepare_extra_step_kwargs(eta) - - # 7. 
Prepare added time ids & embeddings - add_text_embeds = pooled_prompt_embeds - add_time_ids = self._get_add_time_ids( - original_size, crops_coords_top_left, target_size, dtype=prompt_embeds.dtype - ) - - self.text_cross_attention_maps = [prompt] if isinstance(prompt, str) else prompt - if enable_edit_guidance: - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds, edit_prompt_embeds], dim=0) - add_text_embeds = torch.cat([negative_pooled_prompt_embeds, add_text_embeds, pooled_edit_embeds], dim=0) - edit_concepts_time_ids = add_time_ids.repeat(edit_prompt_embeds.shape[0], 1) - add_time_ids = torch.cat([add_time_ids, add_time_ids, edit_concepts_time_ids], dim=0) - - self.text_cross_attention_maps += \ - ([editing_prompt] if isinstance(editing_prompt, str) else editing_prompt) - elif do_classifier_free_guidance: - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0) - add_text_embeds = torch.cat([negative_pooled_prompt_embeds, add_text_embeds], dim=0) - add_time_ids = torch.cat([add_time_ids, add_time_ids], dim=0) - - prompt_embeds = prompt_embeds.to(device) - add_text_embeds = add_text_embeds.to(device) - add_time_ids = add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1) - - # 8. Denoising loop - edit_momentum = None - self.uncond_estimates = None - self.text_estimates = None - self.edit_estimates = None - self.sem_guidance = None - - with self.progress_bar(total=len(timesteps)) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = ( - torch.cat([latents] * (2 + enabled_editing_prompts)) if do_classifier_free_guidance else latents - ) - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids} - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - added_cond_kwargs=added_cond_kwargs, - return_dict=False, - )[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_out = noise_pred.chunk(2 + enabled_editing_prompts) # [b,4, 64, 64] - noise_pred_uncond, noise_pred_text = noise_pred_out[0], noise_pred_out[1] - noise_pred_edit_concepts = noise_pred_out[2:] - - #noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - noise_guidance = guidance_scale * (noise_pred_text - noise_pred_uncond) - - if self.uncond_estimates is None: - self.uncond_estimates = torch.zeros((len(timesteps), *noise_pred_uncond.shape)) - self.uncond_estimates[i] = noise_pred_uncond.detach().cpu() - - if self.text_estimates is None: - self.text_estimates = torch.zeros((len(timesteps), *noise_pred_text.shape)) - self.text_estimates[i] = noise_pred_text.detach().cpu() - - if self.edit_estimates is None and enable_edit_guidance: - self.edit_estimates = torch.zeros( - (len(timesteps), len(noise_pred_edit_concepts), *noise_pred_edit_concepts[0].shape) - ) - - if self.sem_guidance is None: - self.sem_guidance = torch.zeros((len(timesteps), *noise_pred_text.shape)) - - if edit_momentum is None: - edit_momentum = torch.zeros_like(noise_guidance) - - if enable_edit_guidance: - concept_weights = torch.zeros( - (len(noise_pred_edit_concepts), noise_guidance.shape[0]), - device=self.device, - dtype=noise_guidance.dtype, - ) - noise_guidance_edit = torch.zeros( - (len(noise_pred_edit_concepts), *noise_guidance.shape), - 
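- # one guidance tensor per editing concept: shape (num_concepts, *noise_guidance.shape)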
device=self.device, - dtype=noise_guidance.dtype, - ) - # noise_guidance_edit = torch.zeros_like(noise_guidance) - warmup_inds = [] - for c, noise_pred_edit_concept in enumerate(noise_pred_edit_concepts): - self.edit_estimates[i, c] = noise_pred_edit_concept - if isinstance(edit_guidance_scale, list): - edit_guidance_scale_c = edit_guidance_scale[c] - else: - edit_guidance_scale_c = edit_guidance_scale - - if isinstance(edit_threshold, list): - edit_threshold_c = edit_threshold[c] - else: - edit_threshold_c = edit_threshold - if isinstance(reverse_editing_direction, list): - reverse_editing_direction_c = reverse_editing_direction[c] - else: - reverse_editing_direction_c = reverse_editing_direction - if edit_weights: - edit_weight_c = edit_weights[c] - else: - edit_weight_c = 1.0 - if isinstance(edit_warmup_steps, list): - edit_warmup_steps_c = edit_warmup_steps[c] - else: - edit_warmup_steps_c = edit_warmup_steps - - if isinstance(edit_cooldown_steps, list): - edit_cooldown_steps_c = edit_cooldown_steps[c] - elif edit_cooldown_steps is None: - edit_cooldown_steps_c = i + 1 - else: - edit_cooldown_steps_c = edit_cooldown_steps - if i >= edit_warmup_steps_c: - warmup_inds.append(c) - if i >= edit_cooldown_steps_c: - noise_guidance_edit[c, :, :, :, :] = torch.zeros_like(noise_pred_edit_concept) - continue - - noise_guidance_edit_tmp = noise_pred_edit_concept - noise_pred_uncond - # tmp_weights = (noise_pred_text - noise_pred_edit_concept).sum(dim=(1, 2, 3)) - tmp_weights = (noise_guidance - noise_pred_edit_concept).sum(dim=(1, 2, 3)) - - tmp_weights = torch.full_like(tmp_weights, edit_weight_c) # * (1 / enabled_editing_prompts) - if reverse_editing_direction_c: - noise_guidance_edit_tmp = noise_guidance_edit_tmp * -1 - concept_weights[c, :] = tmp_weights - - noise_guidance_edit_tmp = noise_guidance_edit_tmp * edit_guidance_scale_c - - if user_mask is not None: - noise_guidance_edit_tmp = noise_guidance_edit_tmp * user_mask - - if use_cross_attn_mask: - out = self.attention_store.aggregate_attention( - attention_maps=self.attention_store.step_store, - prompts=self.text_cross_attention_maps, - res=32, - from_where=["up","down"], - is_cross=True, - select=self.text_cross_attention_maps.index(editing_prompt[c]), - ) - - attn_map = out[:, :, 1:1+num_edit_tokens[c]] # 0 -> startoftext - - # average over all tokens - assert(attn_map.shape[2]==num_edit_tokens[c]) - attn_map = torch.sum(attn_map, dim=2) - - # gaussian_smoothing - attn_map = F.pad(attn_map.unsqueeze(0).unsqueeze(0), (1, 1, 1, 1), mode="reflect") - attn_map = self.smoothing(attn_map).squeeze(0).squeeze(0) - - # create binary mask - # torch.quantile function expects float32 - if attn_map.dtype == torch.float32: - tmp = torch.quantile( - attn_map.flatten(), - edit_threshold_c - ) - else: - tmp = torch.quantile( - attn_map.flatten().to(torch.float32), - edit_threshold_c - ).to(attn_map.dtype) - - attn_mask = torch.where(attn_map >= tmp, 1.0, 0.0) - - # resolution must match latent space dimension - attn_mask = F.interpolate( - attn_mask.unsqueeze(0).unsqueeze(0), - noise_guidance_edit_tmp.shape[-2:] - )[0,0,:,:] - - noise_guidance_edit_tmp = noise_guidance_edit_tmp * attn_mask - else: - # calculate quantile - noise_guidance_edit_tmp_quantile = torch.abs(noise_guidance_edit_tmp) - noise_guidance_edit_tmp_quantile = torch.sum(noise_guidance_edit_tmp_quantile, dim=1, keepdim=True) - noise_guidance_edit_tmp_quantile = noise_guidance_edit_tmp_quantile.repeat(1,4,1,1) - - # torch.quantile function expects float32 - if 
noise_guidance_edit_tmp_quantile.dtype == torch.float32: - tmp = torch.quantile( - noise_guidance_edit_tmp_quantile.flatten(start_dim=2), - edit_threshold_c, - dim=2, - keepdim=False, - ) - else: - tmp = torch.quantile( - noise_guidance_edit_tmp_quantile.flatten(start_dim=2).to(torch.float32), - edit_threshold_c, - dim=2, - keepdim=False, - ).to(noise_guidance_edit_tmp_quantile.dtype) - - noise_guidance_edit_tmp = torch.where( - noise_guidance_edit_tmp_quantile >= tmp[:, :, None, None], - noise_guidance_edit_tmp, - torch.zeros_like(noise_guidance_edit_tmp), - ) - - noise_guidance_edit[c, :, :, :, :] = noise_guidance_edit_tmp - - warmup_inds = torch.tensor(warmup_inds).to(self.device) - if len(noise_pred_edit_concepts) > warmup_inds.shape[0] > 0: - concept_weights = concept_weights.to("cpu") # Offload to cpu - noise_guidance_edit = noise_guidance_edit.to("cpu") - - concept_weights_tmp = torch.index_select(concept_weights.to(self.device), 0, warmup_inds) - concept_weights_tmp = torch.where( - concept_weights_tmp < 0, torch.zeros_like(concept_weights_tmp), concept_weights_tmp - ) - concept_weights_tmp = concept_weights_tmp / concept_weights_tmp.sum(dim=0) - # concept_weights_tmp = torch.nan_to_num(concept_weights_tmp) - - noise_guidance_edit_tmp = torch.index_select( - noise_guidance_edit.to(self.device), 0, warmup_inds - ) - noise_guidance_edit_tmp = torch.einsum( - "cb,cbijk->bijk", concept_weights_tmp, noise_guidance_edit_tmp - ) - noise_guidance_edit_tmp = noise_guidance_edit_tmp - noise_guidance = noise_guidance + noise_guidance_edit_tmp - - self.sem_guidance[i] = noise_guidance_edit_tmp.detach().cpu() - - del noise_guidance_edit_tmp - del concept_weights_tmp - concept_weights = concept_weights.to(self.device) - noise_guidance_edit = noise_guidance_edit.to(self.device) - - concept_weights = torch.where( - concept_weights < 0, torch.zeros_like(concept_weights), concept_weights - ) - - concept_weights = torch.nan_to_num(concept_weights) - - noise_guidance_edit = torch.einsum("cb,cbijk->bijk", concept_weights, noise_guidance_edit) - - noise_guidance_edit = noise_guidance_edit + edit_momentum_scale * edit_momentum - - edit_momentum = edit_mom_beta * edit_momentum + (1 - edit_mom_beta) * noise_guidance_edit - - if warmup_inds.shape[0] == len(noise_pred_edit_concepts): - noise_guidance = noise_guidance + noise_guidance_edit - self.sem_guidance[i] = noise_guidance_edit.detach().cpu() - - if sem_guidance is not None: - edit_guidance = sem_guidance[i].to(self.device) - noise_guidance = noise_guidance + edit_guidance - - noise_pred = noise_pred_uncond + noise_guidance - - # TODO: compatible with SEGA? - #if do_classifier_free_guidance and guidance_rescale > 0.0: - # # Based on 3.4. 
in https://arxiv.org/pdf/2305.08891.pdf - # noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale) - - # compute the previous noisy sample x_t -> x_t-1 - if use_ddpm: - idx = t_to_idx[int(t)] - latents = self.scheduler.step(noise_pred, t, latents, variance_noise=zs[idx], **extra_step_kwargs).prev_sample - - else: #if not use_ddpm: - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # step callback - store_step = i in attn_store_steps - if store_step: - print(f"storing attention for step {i}") - self.attention_store.between_steps(store_step) - - # call the callback, if provided - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # make sure the VAE is in float32 mode, as it overflows in float16 - if self.vae.dtype == torch.float16 and self.vae.config.force_upcast: - self.upcast_vae() - latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype) - elif self.vae.config.force_upcast: - latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype) - - if not output_type == "latent": - image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] - else: - image = latents - return StableDiffusionXLPipelineOutput(images=image) - - # apply watermark if available - if self.watermark is not None: - image = self.watermark.apply_watermark(image) - - image = self.image_processor.postprocess(image, output_type=output_type) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image,) - - return StableDiffusionXLPipelineOutput(images=image) - - # Override to properly handle the loading and unloading of the additional text encoder. - def load_lora_weights(self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs): - # We could have accessed the unet config from `lora_state_dict()` too. We pass - # it here explicitly to be able to tell that it's coming from an SDXL - # pipeline. - state_dict, network_alphas = self.lora_state_dict( - pretrained_model_name_or_path_or_dict, - unet_config=self.unet.config, - **kwargs, - ) - self.load_lora_into_unet(state_dict, network_alphas=network_alphas, unet=self.unet) - - text_encoder_state_dict = {k: v for k, v in state_dict.items() if "text_encoder." in k} - if len(text_encoder_state_dict) > 0: - self.load_lora_into_text_encoder( - text_encoder_state_dict, - network_alphas=network_alphas, - text_encoder=self.text_encoder, - prefix="text_encoder", - lora_scale=self.lora_scale, - ) - - text_encoder_2_state_dict = {k: v for k, v in state_dict.items() if "text_encoder_2." 
in k} - if len(text_encoder_2_state_dict) > 0: - self.load_lora_into_text_encoder( - text_encoder_2_state_dict, - network_alphas=network_alphas, - text_encoder=self.text_encoder_2, - prefix="text_encoder_2", - lora_scale=self.lora_scale, - ) - - @classmethod - def save_lora_weights( - cls, - save_directory: Union[str, os.PathLike], - unet_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, - text_encoder_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, - text_encoder_2_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, - is_main_process: bool = True, - weight_name: str = None, - save_function: Callable = None, - safe_serialization: bool = True, - ): - state_dict = {} - - def pack_weights(layers, prefix): - layers_weights = layers.state_dict() if isinstance(layers, torch.nn.Module) else layers - layers_state_dict = {f"{prefix}.{module_name}": param for module_name, param in layers_weights.items()} - return layers_state_dict - - state_dict.update(pack_weights(unet_lora_layers, "unet")) - - if text_encoder_lora_layers and text_encoder_2_lora_layers: - state_dict.update(pack_weights(text_encoder_lora_layers, "text_encoder")) - state_dict.update(pack_weights(text_encoder_2_lora_layers, "text_encoder_2")) - - cls.write_lora_layers( - state_dict=state_dict, - save_directory=save_directory, - is_main_process=is_main_process, - weight_name=weight_name, - save_function=save_function, - safe_serialization=safe_serialization, - ) - - def _remove_text_encoder_monkey_patch(self): - self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder) - self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder_2) - - - @torch.no_grad() - def invert(self, - # image_path: str, - x0, - source_prompt: str = "", - source_prompt_2: Optional[str] = None, - source_guidance_scale: float = 3.5, - negative_prompt: Optional[str] = None, - negative_prompt_2: Optional[str] = None, - num_inversion_steps: int = 100, - skip_steps: int = 35, - eta: float = 1.0, - generator: Optional[torch.Generator] = None, - height: Optional[int] = None, - width: Optional[int] = None, - original_size: Optional[Tuple[int, int]] = None, - crops_coords_top_left: Tuple[int, int] = (0, 0), - target_size: Optional[Tuple[int, int]] = None, - ): - """ - Inverts a real image according to Algorithm 1 in https://arxiv.org/pdf/2304.06140.pdf, - based on the code in https://github.com/inbarhub/DDPM_inversion - - Returns: - zs - noise maps - xts - intermediate inverted latents - """ - - # self.eta = eta - # assert(self.eta > 0) - - self.num_inversion_steps = num_inversion_steps - self.scheduler.set_timesteps(self.num_inversion_steps) - timesteps = self.scheduler.timesteps.to(self.device) - - cross_attention_kwargs = None # TODO - batch_size = 1 - num_images_per_prompt = 1 - - device = self._execution_device - - # Reset attn processor, we do not want to store attn maps during inversion - # self.unet.set_default_attn_processor() - - # 0. Ensure that only uncond embedding is used if prompt = "" - if source_prompt == "" and \ - (source_prompt_2 == "" or source_prompt_2 is None): - # noise pred should only be noise_pred_uncond - source_guidance_scale = 0.0 - do_classifier_free_guidance = True - else: - do_classifier_free_guidance = source_guidance_scale > 1.0 - - # 1. 
Default height and width to unet - height = height or self.default_sample_size * self.vae_scale_factor - width = width or self.default_sample_size * self.vae_scale_factor - original_size = original_size or (height, width) - target_size = target_size or (height, width) - - self.height = height - self.width = width - self.original_size = original_size - self.target_size = target_size - - # 2. get embeddings - text_encoder_lora_scale = ( - cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None - ) - - ( - prompt_embeds, - negative_prompt_embeds, - _, - pooled_prompt_embeds, - negative_pooled_prompt_embeds, - _, - _ - ) = self.encode_prompt( - prompt=source_prompt, - prompt_2=source_prompt_2, - device=device, - num_images_per_prompt=num_images_per_prompt, - do_classifier_free_guidance=do_classifier_free_guidance, - negative_prompt=negative_prompt, - negative_prompt_2=negative_prompt_2, - lora_scale=text_encoder_lora_scale, - enable_edit_guidance=False, - ) - - # 3. Prepare added time ids & embeddings - add_text_embeds = pooled_prompt_embeds - add_time_ids = self._get_add_time_ids( - original_size, crops_coords_top_left, target_size, dtype=prompt_embeds.dtype - ) - - if do_classifier_free_guidance: - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0) - add_text_embeds = torch.cat([negative_pooled_prompt_embeds, add_text_embeds], dim=0) - add_time_ids = torch.cat([add_time_ids, add_time_ids], dim=0) - - prompt_embeds = prompt_embeds.to(device) - add_text_embeds = add_text_embeds.to(device) - add_time_ids = add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1) - -# # 4. prepare image -# image = Image.open(image_path) -# size = self.unet.sample_size * self.vae_scale_factor -# image = image.convert("RGB").resize((size,size)) -# image = self.image_processor.preprocess(image) -# image = image.to(device=device, dtype=negative_prompt_embeds.dtype) - -# if image.shape[1] == 4: -# x0 = image -# else: -# if self.vae.config.force_upcast: -# image = image.float() -# self.vae.to(dtype=torch.float32) - -# x0 = self.vae.encode(image).latent_dist.sample(generator) -# x0 = x0.to(negative_prompt_embeds.dtype) -# x0 = self.vae.config.scaling_factor * x0 - - # autoencoder reconstruction - if self.vae.dtype == torch.float16 and self.vae.config.force_upcast: - self.upcast_vae() - x0_tmp = x0.to(next(iter(self.vae.post_quant_conv.parameters())).dtype) - image_rec = self.vae.decode(x0_tmp / self.vae.config.scaling_factor, return_dict=False)[0] - elif self.vae.config.force_upcast: - x0_tmp = x0.to(next(iter(self.vae.post_quant_conv.parameters())).dtype) - image_rec = self.vae.decode(x0_tmp / self.vae.config.scaling_factor, return_dict=False)[0] - else: - image_rec = self.vae.decode(x0 / self.vae.config.scaling_factor, return_dict=False)[0] - - image_rec = self.image_processor.postprocess(image_rec, output_type="pil") - - # 5. 
find zs and xts - variance_noise_shape = ( - self.num_inversion_steps, - self.unet.config.in_channels, - self.unet.sample_size, - self.unet.sample_size) - - # intermediate latents - t_to_idx = {int(v):k for k,v in enumerate(timesteps)} - xts = torch.zeros(size=variance_noise_shape, device=self.device, dtype=negative_prompt_embeds.dtype) - - for t in reversed(timesteps): - idx = t_to_idx[int(t)] - noise = randn_tensor(shape=x0.shape, generator=generator, device=self.device, dtype=x0.dtype) - xts[idx] = self.scheduler.add_noise(x0, noise, t) - xts = torch.cat([xts, x0 ],dim = 0) - - # noise maps - zs = torch.zeros(size=variance_noise_shape, device=self.device, dtype=negative_prompt_embeds.dtype) - - for t in tqdm(timesteps): - idx = t_to_idx[int(t)] - # 1. predict noise residual - xt = xts[idx][None] - - latent_model_input = ( - torch.cat([xt] * 2) if do_classifier_free_guidance else xt - ) - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids} - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - added_cond_kwargs=added_cond_kwargs, - return_dict=False, - )[0] - - # 2. perform guidance - if do_classifier_free_guidance: - noise_pred_out = noise_pred.chunk(2) - noise_pred_uncond, noise_pred_text = noise_pred_out[0], noise_pred_out[1] - noise_pred = noise_pred_uncond + source_guidance_scale * (noise_pred_text - noise_pred_uncond) - - xtm1 = xts[idx+1][None] - z, xtm1_corrected = compute_noise(self.scheduler, xtm1, xt, t, noise_pred, eta) - zs[idx] = z - - # correction to avoid error accumulation - xts[idx+1] = xtm1_corrected - - # TODO: I don't think that the noise map for the last step should be discarded ?! - # if not zs is None: - # zs[-1] = torch.zeros_like(zs[-1]) - - # self.init_latents = xts[skip_steps].expand(1, -1, -1, -1) - # self.zs = zs[skip_steps:] - # self.wts = xts - # self.latents_path = xts[skip_steps:] - # return zs, xts, image_rec - return zs, xts - - -# Copied from pipelines.StableDiffusion.CycleDiffusionPipeline.compute_noise -def compute_noise(scheduler, prev_latents, latents, timestep, noise_pred, eta): - # 1. get previous step value (=t-1) - prev_timestep = timestep - scheduler.config.num_train_timesteps // scheduler.num_inference_steps - - # 2. compute alphas, betas - alpha_prod_t = scheduler.alphas_cumprod[timestep] - alpha_prod_t_prev = ( - scheduler.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else scheduler.final_alpha_cumprod - ) - - beta_prod_t = 1 - alpha_prod_t - - # 3. compute predicted original sample from predicted noise also called - # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - pred_original_sample = (latents - beta_prod_t ** (0.5) * noise_pred) / alpha_prod_t ** (0.5) - - # 4. Clip "predicted x_0" - if scheduler.config.clip_sample: - pred_original_sample = torch.clamp(pred_original_sample, -1, 1) - - # 5. compute variance: "sigma_t(η)" -> see formula (16) - # σ_t = sqrt((1 − α_t−1)/(1 − α_t)) * sqrt(1 − α_t/α_t−1) - variance = scheduler._get_variance(timestep, prev_timestep) - std_dev_t = eta * variance ** (0.5) - - # 6. 
compute "direction pointing to x_t" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** (0.5) * noise_pred - - # modifed so that updated xtm1 is returned as well (to avoid error accumulation) - mu_xt = alpha_prod_t_prev ** (0.5) * pred_original_sample + pred_sample_direction - noise = (prev_latents - mu_xt) / (variance ** (0.5) * eta) - - return noise, mu_xt + ( eta * variance ** 0.5 )*noise diff --git a/spaces/epexVfeibi/Imagedeblurr/Adobe Acrobat Pro DC 2018.011.20055 Full UPDATED With Medicine[BabuPC] Serial Key.md b/spaces/epexVfeibi/Imagedeblurr/Adobe Acrobat Pro DC 2018.011.20055 Full UPDATED With Medicine[BabuPC] Serial Key.md deleted file mode 100644 index 01200f53a0876a2ee644df22a75bc4f90ff6e8fc..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/Adobe Acrobat Pro DC 2018.011.20055 Full UPDATED With Medicine[BabuPC] Serial Key.md +++ /dev/null @@ -1,5 +0,0 @@ - -


          -

          Adobe Acrobat Pro DC 2018.011.20055 Full With Medicine[BabuPC] Serial Key


DOWNLOAD: https://jinyurl.com/2uEpz1



          -
          -
          \ No newline at end of file diff --git a/spaces/eradhea/spanish_chat/README.md b/spaces/eradhea/spanish_chat/README.md deleted file mode 100644 index 7f90a04d957a30337dbf960cf4a04d6e6d12ecc9..0000000000000000000000000000000000000000 --- a/spaces/eradhea/spanish_chat/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Spanish Chat -emoji: 🐨 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.8.2 -app_file: app.py -pinned: false -license: gpl-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/to_v2/test_queue.py b/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/to_v2/test_queue.py deleted file mode 100644 index 6662b9142e5fd8ec81994f53246e2b72de2ad1de..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/to_v2/test_queue.py +++ /dev/null @@ -1,20 +0,0 @@ - -from queue import Queue - -q = Queue(maxsize=0) - -#写入队列数据 -q.put(0) -q.put(1) -q.put(2) - -#输出当前队列所有数据 -print(q.queue) -#删除队列数据,并返回该数据 -q.get() -#输也所有队列数据 -print(q.queue) - -for i in range(10): - print(q.get(), q.qsize()) - diff --git "a/spaces/f2api/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" "b/spaces/f2api/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" deleted file mode 100644 index eada69dc65587782125c0809381260a6bbdce225..0000000000000000000000000000000000000000 --- "a/spaces/f2api/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" +++ /dev/null @@ -1,127 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - - -def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, os - # pip install python-docx 用于docx格式,跨平台 - # pip install pywin32 用于doc格式,仅支持Win平台 - for index, fp in enumerate(file_manifest): - if fp.split(".")[-1] == "docx": - from docx import Document - doc = Document(fp) - file_content = "\n".join([para.text for para in doc.paragraphs]) - else: - import win32com.client - word = win32com.client.Dispatch("Word.Application") - word.visible = False - # 打开文件 - print('fp', os.getcwd()) - doc = word.Documents.Open(os.getcwd() + '/' + fp) - # file_content = doc.Content.Text - doc = word.ActiveDocument - file_content = doc.Range().Text - doc.Close() - word.Quit() - - print(file_content) - # private_upload里面的文件名在解压zip后容易出现乱码(rar和7z格式正常),故可以只分析文章内容,不输入文件名 - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - from request_llm.bridge_all import model_info - max_token = model_info[llm_kwargs['llm_model']]['max_token'] - TOKEN_LIMIT_PER_FRAGMENT = max_token * 3 // 4 - paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf( - txt=file_content, - get_token_fn=model_info[llm_kwargs['llm_model']]['token_cnt'], - limit=TOKEN_LIMIT_PER_FRAGMENT - ) - this_paper_history = [] - for i, paper_frag in enumerate(paper_fragments): - i_say = f'请对下面的文章片段用中文做概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{paper_frag}```' - i_say_show_user = f'请对下面的文章片段做概述: {os.path.abspath(fp)}的第{i+1}/{len(paper_fragments)}个片段。' - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - 
sys_prompt="总结文章。" - ) - - chatbot[-1] = (i_say_show_user, gpt_say) - history.extend([i_say_show_user,gpt_say]) - this_paper_history.extend([i_say_show_user,gpt_say]) - - # 已经对该文章的所有片段总结完毕,如果文章被切分了, - if len(paper_fragments) > 1: - i_say = f"根据以上的对话,总结文章{os.path.abspath(fp)}的主要内容。" - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=this_paper_history, - sys_prompt="总结文章。" - ) - - history.extend([i_say,gpt_say]) - this_paper_history.extend([i_say,gpt_say]) - - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - res = write_results_to_file(history) - chatbot.append(("所有文件都总结完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - -@CatchException -def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结Word文档。函数插件贡献者: JasonGuo1。注意, 如果是.doc文件, 请先转化为.docx格式。"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - from docx import Document - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 检测输入参数,如没有给定输入参数,直接退出 - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 搜索需要处理的文件清单 - if txt.endswith('.docx') or txt.endswith('.doc'): - file_manifest = [txt] - else: - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)] - - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.docx或doc文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 开始正式执行任务 - yield from 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/facebook/ov-seg/open_vocab_seg/modeling/matcher.py b/spaces/facebook/ov-seg/open_vocab_seg/modeling/matcher.py deleted file mode 100644 index a72ba671ad60db078e08046357a6aa0e5e9bd5dc..0000000000000000000000000000000000000000 --- a/spaces/facebook/ov-seg/open_vocab_seg/modeling/matcher.py +++ /dev/null @@ -1,187 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from https://github.com/facebookresearch/detr/blob/master/models/matcher.py -# Copyright (c) Meta Platforms, Inc. All Rights Reserved - -""" -Modules to compute the matching cost and solve the corresponding LSAP. -""" -import torch -import torch.nn.functional as F -from scipy.optimize import linear_sum_assignment -from torch import nn - - -def batch_dice_loss(inputs, targets): - """ - Compute the DICE loss, similar to generalized IOU for masks - Args: - inputs: A float tensor of arbitrary shape. - The predictions for each example. - targets: A float tensor with the same shape as inputs. Stores the binary - classification label for each element in inputs - (0 for the negative class and 1 for the positive class). 
- """ - inputs = inputs.sigmoid() - inputs = inputs.flatten(1) - numerator = 2 * torch.einsum("nc,mc->nm", inputs, targets) - denominator = inputs.sum(-1)[:, None] + targets.sum(-1)[None, :] - loss = 1 - (numerator + 1) / (denominator + 1) - return loss - - -def batch_sigmoid_focal_loss(inputs, targets, alpha: float = 0.25, gamma: float = 2): - """ - Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002. - Args: - inputs: A float tensor of arbitrary shape. - The predictions for each example. - targets: A float tensor with the same shape as inputs. Stores the binary - classification label for each element in inputs - (0 for the negative class and 1 for the positive class). - alpha: (optional) Weighting factor in range (0,1) to balance - positive vs negative examples. Default = -1 (no weighting). - gamma: Exponent of the modulating factor (1 - p_t) to - balance easy vs hard examples. - Returns: - Loss tensor - """ - hw = inputs.shape[1] - - prob = inputs.sigmoid() - focal_pos = ((1 - prob) ** gamma) * F.binary_cross_entropy_with_logits( - inputs, torch.ones_like(inputs), reduction="none" - ) - focal_neg = (prob ** gamma) * F.binary_cross_entropy_with_logits( - inputs, torch.zeros_like(inputs), reduction="none" - ) - if alpha >= 0: - focal_pos = focal_pos * alpha - focal_neg = focal_neg * (1 - alpha) - - loss = torch.einsum("nc,mc->nm", focal_pos, targets) + torch.einsum( - "nc,mc->nm", focal_neg, (1 - targets) - ) - - return loss / hw - - -class HungarianMatcher(nn.Module): - """This class computes an assignment between the targets and the predictions of the network - - For efficiency reasons, the targets don't include the no_object. Because of this, in general, - there are more predictions than targets. In this case, we do a 1-to-1 matching of the best predictions, - while the others are un-matched (and thus treated as non-objects). - """ - - def __init__( - self, cost_class: float = 1, cost_mask: float = 1, cost_dice: float = 1 - ): - """Creates the matcher - - Params: - cost_class: This is the relative weight of the classification error in the matching cost - cost_mask: This is the relative weight of the focal loss of the binary mask in the matching cost - cost_dice: This is the relative weight of the dice loss of the binary mask in the matching cost - """ - super().__init__() - self.cost_class = cost_class - self.cost_mask = cost_mask - self.cost_dice = cost_dice - assert ( - cost_class != 0 or cost_mask != 0 or cost_dice != 0 - ), "all costs cant be 0" - - @torch.no_grad() - def memory_efficient_forward(self, outputs, targets): - """More memory-friendly matching""" - bs, num_queries = outputs["pred_logits"].shape[:2] - - # Work out the mask padding size - masks = [v["masks"] for v in targets] - h_max = max([m.shape[1] for m in masks]) - w_max = max([m.shape[2] for m in masks]) - - indices = [] - - # Iterate through batch size - for b in range(bs): - - out_prob = outputs["pred_logits"][b].softmax( - -1 - ) # [num_queries, num_classes] - out_mask = outputs["pred_masks"][b] # [num_queries, H_pred, W_pred] - - tgt_ids = targets[b]["labels"] - # gt masks are already padded when preparing target - tgt_mask = targets[b]["masks"].to(out_mask) - - # Compute the classification cost. Contrary to the loss, we don't use the NLL, - # but approximate it in 1 - proba[target class]. - # The 1 is a constant that doesn't change the matching, it can be ommitted. 
- cost_class = -out_prob[:, tgt_ids] - - # Downsample gt masks to save memory - tgt_mask = F.interpolate( - tgt_mask[:, None], size=out_mask.shape[-2:], mode="nearest" - ) - - # Flatten spatial dimension - out_mask = out_mask.flatten(1) # [batch_size * num_queries, H*W] - tgt_mask = tgt_mask[:, 0].flatten(1) # [num_total_targets, H*W] - - # Compute the focal loss between masks - cost_mask = batch_sigmoid_focal_loss(out_mask, tgt_mask) - - # Compute the dice loss betwen masks - cost_dice = batch_dice_loss(out_mask, tgt_mask) - - # Final cost matrix - C = ( - self.cost_mask * cost_mask - + self.cost_class * cost_class - + self.cost_dice * cost_dice - ) - C = C.reshape(num_queries, -1).cpu() - - indices.append(linear_sum_assignment(C)) - return [ - ( - torch.as_tensor(i, dtype=torch.int64), - torch.as_tensor(j, dtype=torch.int64), - ) - for i, j in indices - ] - - @torch.no_grad() - def forward(self, outputs, targets): - """Performs the matching - - Params: - outputs: This is a dict that contains at least these entries: - "pred_logits": Tensor of dim [batch_size, num_queries, num_classes] with the classification logits - "pred_masks": Tensor of dim [batch_size, num_queries, H_pred, W_pred] with the predicted masks - - targets: This is a list of targets (len(targets) = batch_size), where each target is a dict containing: - "labels": Tensor of dim [num_target_boxes] (where num_target_boxes is the number of ground-truth - objects in the target) containing the class labels - "masks": Tensor of dim [num_target_boxes, H_gt, W_gt] containing the target masks - - Returns: - A list of size batch_size, containing tuples of (index_i, index_j) where: - - index_i is the indices of the selected predictions (in order) - - index_j is the indices of the corresponding selected targets (in order) - For each batch element, it holds: - len(index_i) = len(index_j) = min(num_queries, num_target_boxes) - """ - return self.memory_efficient_forward(outputs, targets) - - def __repr__(self): - head = "Matcher " + self.__class__.__name__ - body = [ - "cost_class: {}".format(self.cost_class), - "cost_mask: {}".format(self.cost_mask), - "cost_dice: {}".format(self.cost_dice), - ] - _repr_indent = 4 - lines = [head] + [" " * _repr_indent + line for line in body] - return "\n".join(lines) diff --git a/spaces/falcondai/code-as-policies/LICENSE.md b/spaces/falcondai/code-as-policies/LICENSE.md deleted file mode 100644 index 2697cde25676d46a917a2d9362dd0e5495b6d2ca..0000000000000000000000000000000000000000 --- a/spaces/falcondai/code-as-policies/LICENSE.md +++ /dev/null @@ -1,7 +0,0 @@ -Copyright 2021 Google LLC. SPDX-License-Identifier: Apache-2.0 - -Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at - -https://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
\ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Fiind Baiet Paduri Cutreieram De Mihai Eminescu Comentariu Literar.md b/spaces/falterWliame/Face_Mask_Detection/Fiind Baiet Paduri Cutreieram De Mihai Eminescu Comentariu Literar.md deleted file mode 100644 index 25e50fbad9187fab6205a28d95e7fb837a06e145..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Fiind Baiet Paduri Cutreieram De Mihai Eminescu Comentariu Literar.md +++ /dev/null @@ -1,10 +0,0 @@ -
          -

Fiind băiet păduri cutreieram by Mihai Eminescu: a literary commentary. Mihai Eminescu was born in 1850.

          -

Fiind băiet păduri cutreieram by Mihai Eminescu (literary commentary)


DOWNLOAD: https://urlca.com/2uDdyk



          -

Fiind băiet păduri cutreieram by Mihai Eminescu — #eminescu #recitalpoezii #poeziiaudio, audio recital performed by Eduard Ghergheluca. The nature poetry of William Wordsworth and of Mihai Eminescu shares many views. The poem opens: "Fiind băiet păduri cutreieram / Și mă culcam ades lângă izvor, / Iar brațul drept..." ("Being a boy, I roamed the woods / And often lay down by a spring, / My right arm...").

          -

Fiind băiet păduri cutreieram, by Mihai Eminescu (literary commentary), with attention to the figures of speech and literary motifs that make up the poem.

          -

Download (mirror #1)







Fiind băiet păduri cutreieram by Mihai Eminescu (literary commentary)


          -

          -


          -
          -
          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Cmo jugar a Arena Breakout APK en Android Gua completa.md b/spaces/fatiXbelha/sd/Cmo jugar a Arena Breakout APK en Android Gua completa.md deleted file mode 100644 index 1d8f451ec939dff6da2fa7a9b304d0594df939ca..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Cmo jugar a Arena Breakout APK en Android Gua completa.md +++ /dev/null @@ -1,146 +0,0 @@ -
          -

Arena Breakout APK: An Action and Survival Game for Android

          -

Do you like first-person shooter games? Do you want an intense, exciting combat experience on your mobile device? If the answer is yes, let us introduce you to Arena Breakout APK, an action and survival game for Android that will put you to the test across different maps and game modes. In this article, we will tell you everything you need to know about this game: its main features, how to download and install it on your Android device, its pros and cons, and some frequently asked questions you may find useful.

          -

arena breakout apk for android


Download File: https://urllie.com/2uNzaQ



          -

What Is Arena Breakout APK?

          -

Arena Breakout APK is an action and survival game for Android developed by Tencent Games, one of the biggest companies in the video game industry. It combines elements of the shooter, battle royale, and survival genres, offering a dynamic, fun, and competitive gaming experience. In the game, you choose a character from several classes, such as assault troopers, snipers, or paramedics, and equip it with weapons, attachments, and special abilities. You then face other players in different game modes, such as deathmatch, team deathmatch, or capture the flag, on maps set in urban, rural, or industrial areas. The goal is to survive as long as possible, eliminate your enemies, and get the highest score. The game also has a ranking and rewards system that lets you unlock new items for your character and improve your skills.

          -

Main Features of Arena Breakout APK

          -

Arena Breakout APK stands out for the many features that make it appealing and entertaining for fans of action and survival games. Some of these features are:

          -

Impressive, Realistic Graphics

          -

The game has high-quality graphics that make you feel as if you were inside the game. The environments are detailed and varied, with light, shadow, and particle effects that add a touch of realism. The characters and weapons are also well designed and animated, with facial expressions, fluid movements, and authentic sounds. The game is well optimized and runs smoothly on most Android devices.

          -

Varied and Challenging Game Modes

          -

The game offers several game modes that suit every player's tastes and preferences. You can play alone or with your friends in online or local matches. Some of the available game modes are:

          -
            -
• Deathmatch: A classic mode where you have to eliminate as many enemies as possible within a time limit.
          • -
• Team deathmatch: A mode similar to the previous one, but in teams. You have to cooperate with your allies to defeat the rival team.
          • -
• Capture the flag: A mode where you have to capture the enemy team's flag and carry it to your base, while preventing the opposing team from doing the same with yours.
          • -
• Survival: A mode where you have to survive as long as possible on a map full of zombies and other dangers. You can use weapons, vehicles, and objects to defend yourself and escape.
          • -
          -

Customizable and Equippable Characters

          -

The game lets you choose from several characters belonging to different classes, such as assault troopers, snipers, or paramedics. Each character has its own traits, abilities, and advantages that make it unique. You can also customize your character's look with different outfits, accessories, and gestures, and equip it with different weapons, such as pistols, rifles, shotguns, or grenade launchers, which you can upgrade and modify with attachments such as scopes, silencers, and magazines.

          -


          -

Ranking and Rewards System

          -

The game has a ranking system that lets you measure your level and skill against other players. As you play and win matches, you earn experience points and coins that let you level up and unlock new items for your character. You can also earn special rewards by completing daily and weekly missions, or by taking part in limited-time events. These rewards can include exclusive outfits, legendary weapons, loot boxes, and more.

          -

How to Download and Install Arena Breakout APK on Your Android Device

          -

Arena Breakout APK is not available on the official Google Play store, so you will have to download and install it manually from an external source. To do so, follow the steps below.

          -

Minimum System Requirements

          -

Before downloading and installing the game, make sure your Android device meets the minimum system requirements to run it properly. These requirements are:

          -
            -
• Operating system: Android 5.0 or higher
          • -
• RAM: 2 GB or more
          • -
• Storage space: 1 GB or more
          • -
• Internet connection: Wi-Fi or mobile data
          • -
          -

Steps to Download and Install Arena Breakout APK

          -

Once you have checked the system requirements, follow these steps to download and install the game on your Android device (a scripted, computer-side alternative is sketched after the list):

          -
            -
1. Open your device's web browser and search for the Arena Breakout APK file. You can use the following link to download it directly: [Arena Breakout APK].
          2. -
3. Download the APK file to your device and wait for the download to finish.
          4. -
5. Before installing the APK file, you will need to enable the "Unknown sources" option on your device. To do so, go to Settings > Security > Unknown sources and tick the corresponding box.
          6. -
7. Find the downloaded APK file in your device's downloads folder and open it.
          8. -
9. Follow the on-screen instructions to install the game.
          10. -
11. Once the game is installed, you can open it from your device's app menu and enjoy Arena Breakout APK.
          12. -
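If you prefer to install the downloaded APK from a computer instead, the short sketch below does the same job with Android's adb tool. It is only a sketch: it assumes adb (Android platform-tools) is installed and USB debugging is enabled on the device, and the APK file name is a placeholder, not an official one.

```python
# Minimal sketch: install a locally downloaded APK over USB with adb.
# Assumptions: adb is on PATH, USB debugging is enabled on the device,
# and "arena-breakout.apk" is a placeholder file name.
import subprocess

def install_apk(apk_path: str) -> None:
    # "-r" reinstalls over an existing copy while keeping its data
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    install_apk("arena-breakout.apk")
```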

Pros and Cons of Arena Breakout APK

          -

Like any game, Arena Breakout APK has its pros and cons, which you should know before downloading and installing it on your Android device. Here they are, summarized in a comparison table:

| Pros | Cons |
| --- | --- |
| It is a free game with no ads. | It requires a constant internet connection to play. |
| It is a fun and addictive game that offers hours of entertainment. | It can drain a lot of battery and device resources. |
| It has a wide variety of game modes, characters, weapons, and maps. | It may show some errors or technical glitches on some devices. |
| It has impressive, realistic graphics that make you feel as if you were in the game. | It is not available on the official Google Play store and must be downloaded and installed manually. |
| It has a ranking and rewards system that motivates you to improve and compete with other players. | It has no offline mode for playing without an internet connection. |
          -

Conclusion

          -

Arena Breakout APK is an action and survival game for Android that offers a unique and exciting gaming experience. You can choose from several characters, weapons, and game modes, and face other players across different maps. The game has impressive, realistic graphics, a ranking and rewards system, and smooth, dynamic gameplay. If you like first-person shooters, don't hesitate to download and install Arena Breakout APK on your Android device and enjoy this amazing game.

          -

Frequently Asked Questions about Arena Breakout APK

          -

Here are some frequently asked questions about Arena Breakout APK that you may find useful:

          -

Is it safe to download and install Arena Breakout APK?

          -

Yes, it is safe to download and install Arena Breakout APK as long as you do it from a trusted source and follow the steps outlined above. The APK file is free of viruses, malware, or any other kind of threat to your device. Even so, we recommend having an antivirus installed on your device, just in case.

          -

Is it legal to download and install Arena Breakout APK?

          -

Yes, it is legal to download and install Arena Breakout APK as long as you do it for personal, non-commercial use. The game is the property of Tencent Games, the company that holds the legal rights to it. However, since it is not available on the official Google Play store, the game does not come with Google's backing or guarantee. Therefore, download and install it at your own risk.

          -

What should I do if the game doesn't work properly or closes unexpectedly?

          -

If the game doesn't work properly or closes unexpectedly, it may be due to several causes, such as a poor internet connection, an incompatible device, a lack of storage space, or a bug in the game. To fix the problem, you can try the following:

          -
            -
• Check that your internet connection is stable and fast.
          • -
• Make sure your device meets the minimum system requirements to run the game.
          • -
• Clear the game's cache and data from your device's settings.
          • -
• Restart your device and open the game again.
          • -
• Uninstall the game and reinstall it following the steps outlined above.
          • -
• Contact the game's customer support if the problem persists.
          • -
          -

How do I update Arena Breakout APK?

          -

To update Arena Breakout APK, you will have to follow the same steps as for downloading and installing it. That is, search for the APK file of the latest version of the game in your device's web browser and download it. Then open the downloaded APK file and follow the on-screen instructions to install the new version. Remember that before installing the APK file you will have to enable the "Unknown sources" option on your device. You can also remove the previous version of the game if you want to free up storage space.

          -

How do I play Arena Breakout APK with friends?

          -

To play Arena Breakout APK with friends, follow these steps:

          -
            -
1. Open the game from your device's app menu.
          2. -
3. On the main screen, tap the "Friends" icon in the top right corner.
          4. -
5. On the "Friends" screen you can see the list of your friends who also play Arena Breakout APK. To add a new friend, tap the "Add friend" icon in the top right corner and enter their username or scan their QR code.
          6. -
7. To invite a friend to play with you, tap their name in the list and then tap the "Invite" button. Your friend will receive a notification and can accept or decline your invitation.
          8. -
9. Once your friend accepts your invitation, you will see them on your team and you can choose the game mode and map you want to play. Then tap the "Start" button to begin the match.
          10. -
11. Enjoy playing Arena Breakout APK with your friend.
          12. -

          -
          -
          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Live Your Life by Joeboy - A Soulful Song to Inspire You.md b/spaces/fatiXbelha/sd/Download Live Your Life by Joeboy - A Soulful Song to Inspire You.md deleted file mode 100644 index 1df25f32cbbd270b2818ff09f0bb89cf5a61a77a..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Live Your Life by Joeboy - A Soulful Song to Inspire You.md +++ /dev/null @@ -1,130 +0,0 @@ -
          -

          How to Download Joeboy Live Your Life and Enjoy His Music

          -

          If you are a fan of Nigerian music, you may have heard of Joeboy, a talented singer and songwriter who has been making waves in the industry. His latest song, Live Your Life, is a collaboration with MTN Pulse, a mobile service that offers affordable data plans and music streaming. In this article, we will tell you more about Joeboy, his new song, and how you can download it from different sources.

          -

          download joeboy live your life


          DOWNLOAD > https://urllie.com/2uNFLE



          -

          Who is Joeboy and What is Live Your Life?

          -

          Joeboy's Biography and Music Career

          -

          Joeboy, whose real name is Joseph Akinwale Akinfenwa, was born on May 21, 1997, in Lagos, Nigeria. He grew up in a musical family and started singing at a young age. He studied human resource management at the University of Lagos and graduated in 2019.

          -

          Joeboy's music career took off in 2017, when he recorded a cover of Ed Sheeran's Shape of You and uploaded it to SoundCloud. The song caught the attention of Mr Eazi, a Nigerian superstar and founder of emPawa Africa, a talent incubator for African artists. Mr Eazi became Joeboy's mentor and helped him launch his debut single, Baby, in 2019. The song was a huge hit and has over 40 million streams on YouTube.

          -

          Since then, Joeboy has released several other songs, such as Beginning, Don't Call Me Back, All for You, Call, Lonely, Focus, and Show Me. He has also collaborated with other artists, such as DJ Neptune, Major Lazer, Kwesi Arthur, Rayvanny, and E Kelly. He has won several awards, such as Best Artiste in African Pop at the All Africa Music Awards (AFRIMA) in 2019 and Best Pop at the Soundcity MVP Awards Festival in 2020.

          -

          Live Your Life: A New Song by Joeboy and MTN Pulse

          -

          Live Your Life is a new song by Joeboy that was released in December 2022. It is part of a partnership between Joeboy and MTN Pulse, a mobile service that offers affordable data plans and music streaming for young people. The song is an upbeat and inspirational tune that encourages listeners to live their lives to the fullest and enjoy every moment.

          -

The song was produced by Killertunes, a Nigerian producer who has worked with artists such as Wizkid, Tiwa Savage, Olamide, and Naira Marley. The song features catchy lyrics, such as "Live your life like you want it / Don't let nobody tell you nothing / You're the boss of your own life / So live it like you own it". The song also has a catchy chorus that goes "Live your life / Live your life / Live your life / Live your life / Live your life / Live your life".

          -


          -

          Why You Should Download Joeboy Live Your Life

          -

          The Benefits of Downloading Music

          -

          Downloading music is a great way to enjoy your favorite songs anytime and anywhere. By downloading music, you can:

          -
            -
          • Save data and money. Streaming music online can consume a lot of data and cost you money if you don't have an unlimited plan. By downloading music offline, you can save data and money and listen to music without interruptions.
          • -
• Create playlists and mixtapes. Downloading music allows you to create playlists and mixtapes of your favorite songs. You can customize your playlists according to your mood, genre, artist, or occasion.
• -
          • Enjoy music offline. Downloading music allows you to enjoy music offline, even when you don't have internet access or network coverage. You can listen to music on your device, such as your phone, tablet, laptop, or MP3 player, without worrying about connectivity issues.
          • -
          -

          The Features of Joeboy Live Your Life

          -

          Joeboy Live Your Life is a song that you should download and add to your music collection. Here are some of the features of the song that make it worth downloading:

          -
            -
          • It is a motivational and uplifting song. Joeboy Live Your Life is a song that inspires you to live your life to the fullest and enjoy every moment. It is a song that boosts your mood and energy and makes you feel positive and optimistic.
          • -
          • It is a catchy and danceable song. Joeboy Live Your Life is a song that has a catchy melody and rhythm that make you want to dance and sing along. It is a song that is suitable for parties, clubs, or any occasion where you want to have fun and groove.
          • -
          • It is a high-quality and original song. Joeboy Live Your Life is a song that showcases Joeboy's talent and creativity as a singer and songwriter. It is a song that has high-quality production and sound, as well as original lyrics and style. It is a song that stands out from the crowd and reflects Joeboy's personality and vision.
          • -
          -

          How to Download Joeboy Live Your Life from Different Sources

          -

          How to Download from YouTube

          -

One of the sources where you can download Joeboy Live Your Life is YouTube, the popular video-sharing platform. Here are the steps to download the song from YouTube (a scripted alternative is sketched after the list):

          -
            -
          1. Go to YouTube.com and search for Joeboy Live Your Life.
          2. -
          3. Select the video that has the official audio or video of the song.
          4. -
          5. Copy the URL of the video from the address bar of your browser.
          6. -
          7. Go to a YouTube to MP3 converter website, such as ytmp3.cc, y2mate.com, or flvto.biz.
          8. -
          9. Paste the URL of the video into the input box of the converter website.
          10. -
          11. Select MP3 as the output format and click on Convert or Download.
          12. -
          13. Wait for the conversion process to finish and then click on Download or Save to download the MP3 file of the song to your device.
          14. -
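If you would rather not depend on converter websites, the sketch below shows the same fetch-and-convert flow done locally with the open-source yt-dlp library. It is only a sketch: it assumes yt-dlp (pip install yt-dlp) and ffmpeg are installed, and the URL is a placeholder, not a link to the song.

```python
# Minimal sketch: download a video's audio track and convert it to MP3
# with yt-dlp. Assumes yt-dlp and ffmpeg are installed; the URL below is
# a placeholder, not an official link to the song.
import yt_dlp

def download_mp3(url: str) -> None:
    options = {
        "format": "bestaudio/best",        # pick the best available audio stream
        "outtmpl": "%(title)s.%(ext)s",    # name the file after the video title
        "postprocessors": [{
            "key": "FFmpegExtractAudio",   # re-encode the audio via ffmpeg
            "preferredcodec": "mp3",
        }],
    }
    with yt_dlp.YoutubeDL(options) as ydl:
        ydl.download([url])

if __name__ == "__main__":
    download_mp3("https://www.youtube.com/watch?v=VIDEO_ID")
```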
          -

          How to Download from iTunes

          -

          Another source where you can download Joeboy Live Your Life is iTunes, the popular music store and player by Apple. Here are the steps to download the song from iTunes:

          -
            -
          1. Go to iTunes.com and download and install the iTunes software on your device, if you don't have it already.
          2. -
          3. Launch iTunes and sign in with your Apple ID, or create one if you don't have one already.
          4. -
          5. Go to the iTunes Store and search for Joeboy Live Your Life.
          6. -
          7. Select the song from the results and click on Buy Song or Buy Album, depending on whether you want to buy only the song or the whole album that contains it.
          8. -
          9. Enter your payment details and confirm your purchase.
          10. -
          11. Wait for the download process to finish and then go to your Library to find and play the song on your device.
          12. -
          -

          How to Download from Google Play Music

          -

          A third source where you can download Joeboy Live Your Life is Google Play Music, the popular music store and player by Google. Here are the steps to download the song from Google Play Music:

          -
            -
          1. Go to play.google.com/music and sign in with your Google account, or create one if you don't have one already.
          2. -
          3. Go to the Music section and search for Joeboy Live Your Life.
          4. -
          5. Select the song from the results and click on Buy or Subscribe, depending on whether you want to buy only the song or subscribe to Google Play Music Unlimited, which gives you access to millions of songs for a monthly fee.
          6. -
          7. Enter your payment details and confirm your purchase or subscription.
          8. -
          9. Wait for the download process to finish and then go to your Library to find and play the song on your device.
          10. -
          -

          Conclusion and FAQs

          -

          Summary of the Main Points

          -

In conclusion, Joeboy Live Your Life is a new song by Joeboy that was released in December 2022. It is a collaboration with MTN Pulse, a mobile service that offers affordable data plans and music streaming for young people. The song is an upbeat and inspirational tune that encourages listeners to live their lives to the fullest and enjoy every moment. The song has many benefits and features that make it worth downloading, such as being motivational, catchy, danceable, high-quality, and original. You can download the song from different sources, such as YouTube, iTunes, and Google Play Music, by following the steps we have provided in this article. We hope you have learned something new and useful from this article and that you will download Joeboy Live Your Life and enjoy his music.

          -

          FAQs

          -

          Here are some of the frequently asked questions (FAQs) about Joeboy Live Your Life and their answers:

| Question | Answer |
| --- | --- |
| Where can I listen to Joeboy Live Your Life online? | You can listen to Joeboy Live Your Life online on platforms such as YouTube, Spotify, Apple Music, Deezer, Boomplay, Audiomack, and SoundCloud. |
| What is the genre of Joeboy Live Your Life? | Joeboy Live Your Life is a pop song with influences from afrobeat, dancehall, and R&B. |
| What is the duration of Joeboy Live Your Life? | Joeboy Live Your Life is 3 minutes and 15 seconds long. |
| What is the album that contains Joeboy Live Your Life? | Joeboy Live Your Life is not part of any album yet. It is a single that was released independently by Joeboy and MTN Pulse. |
| What are some of the other songs by Joeboy that I should check out? | Some of the other songs by Joeboy that you should check out are Baby, Beginning, Don't Call Me Back, All for You, Call, Lonely, Focus, Show Me, Nobody (with DJ Neptune and Mr Eazi), Door (with Kwesi Arthur), and Bounce (with Major Lazer). |

          -
          -
          \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/examples/summary/pretrain_bart_summary.sh b/spaces/fclong/summary/fengshen/examples/summary/pretrain_bart_summary.sh deleted file mode 100644 index f8a6af24f935cc563891922b8a50cd293231367b..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/summary/pretrain_bart_summary.sh +++ /dev/null @@ -1,124 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=bart_summary -#SBATCH --nodes=1 -#SBATCH --ntasks-per-node=4 -#SBATCH --gres=gpu:4 # number of gpus -#SBATCH -o %x-%j.log - -set -x -e - -echo "START TIME: $(date)" -MODEL_NAME=bart-base -MICRO_BATCH_SIZE=16 -ROOT_DIR=/cognitive_comp/dongxiaoqun/finetune/${MODEL_NAME} - -ZERO_STAGE=1 -export TORCH_EXTENSIONS_DIR=/cognitive_comp/dongxiaoqun/torch_extendsions -config_json="./ds_config.${MODEL_NAME}.json" - -# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size() -cat < $config_json -{ - "train_micro_batch_size_per_gpu": ${MICRO_BATCH_SIZE}, - "steps_per_print": 100, - "gradient_clipping": 1.0, - "zero_optimization": { - "stage": $ZERO_STAGE, - "contiguous_gradients": false, - "overlap_comm": true, - "reduce_scatter": true, - "reduce_bucket_size": 50000000, - "allgather_bucket_size": 500000000 - }, - "optimizer": { - "type": "Adam", - "params": { - "lr": 1e-4, - "betas": [ - 0.9, - 0.95 - ], - "eps": 1e-8, - "weight_decay": 5e-2 - } - }, - "scheduler": { - "type": "WarmupLR", - "params":{ - "warmup_min_lr": 5e-6, - "warmup_max_lr": 1e-4 - } - }, - "zero_allow_untested_optimizer": false, - "fp16": { - "enabled": true, - "loss_scale": 0, - "loss_scale_window": 1000, - "hysteresis": 2, - "min_loss_scale": 1 - }, - "activation_checkpointing": { - "partition_activations": false, - "contiguous_memory_optimization": false - }, - "wall_clock_breakdown": false -} -EOT - -# export PL_DEEPSPEED_CONFIG_PATH=$config_json - -TRAINER_ARGS=" - --max_epochs 2 \ - --gpus 1 \ - --num_nodes 1 \ - --strategy deepspeed_stage_${ZERO_STAGE} \ - --default_root_dir $ROOT_DIR \ - --dirpath $ROOT_DIR/ckpt \ - --save_top_k 3 \ - --monitor val_loss \ - --mode min \ - --save_last \ - --every_n_train_steps 0 \ - --val_check_interval 0.1 \ -" - -prompt='"' -DATA_ARGS=" - --datasets_name lcsts \ - --num_workers 8 \ - --train_batchsize $MICRO_BATCH_SIZE \ - --val_batchsize $MICRO_BATCH_SIZE \ - --test_batchsize $MICRO_BATCH_SIZE \ - --max_enc_length 128 \ - --max_dec_length 64 \ - --val_datasets_field val \ - --prompt $prompt \ -" - -MODEL_ARGS=" - --pretrained_model_path /cognitive_comp/gaoxinyu/pretrained_model/bart-base \ - --output_save_path $ROOT_DIR/${MODEL_NAME}_predict_lcsts.json \ - --learning_rate 1e-4 \ - --weight_decay 0.1 \ - --precision 16 \ -" - -SCRIPTS_PATH=seq2seq_summary.py - -export CMD=" \ - $SCRIPTS_PATH \ - $TRAINER_ARGS \ - $MODEL_ARGS \ - $DATA_ARGS \ - " - -echo $CMD - -#singularity exec --nv -B /cognitive_comp/ganruyi/Megatron/:/cognitive_comp/ganruyi/Megatron/,/cognitive_comp/gaoxinyu/:/cognitive_comp/gaoxinyu/ $SINGULARITY_PATH python $CMD - -# to debug - add echo (it exits and prints what it would have launched) -#run_cmd="$PY_LAUNCHER $CMD" -# srun --nodes=1 --gres=gpu:4 --ntasks-per-node=4 --cpus-per-gpu=20 -source activate -conda activate torchnew -srun --nodes=1 --ntasks-per-node=1 --gres=gpu:1 --cpus-per-task=30 -o ${MODEL_NAME}-%J.log --jobid=229623 bash -c 'python3 $SCRIPT_PATH $CMD' diff --git a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/version.py 
b/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/version.py deleted file mode 100644 index b794fd409a5e3b3b65ad76a43d6a01a318877640..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = '0.1.0' diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/node_modules/debug/src/browser.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/node_modules/debug/src/browser.js deleted file mode 100644 index cd0fc35d1ee11e0d6e15421021a54c18958e04d9..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/node_modules/debug/src/browser.js +++ /dev/null @@ -1,269 +0,0 @@ -/* eslint-env browser */ - -/** - * This is the web browser implementation of `debug()`. - */ - -exports.formatArgs = formatArgs; -exports.save = save; -exports.load = load; -exports.useColors = useColors; -exports.storage = localstorage(); -exports.destroy = (() => { - let warned = false; - - return () => { - if (!warned) { - warned = true; - console.warn('Instance method `debug.destroy()` is deprecated and no longer does anything. It will be removed in the next major version of `debug`.'); - } - }; -})(); - -/** - * Colors. - */ - -exports.colors = [ - '#0000CC', - '#0000FF', - '#0033CC', - '#0033FF', - '#0066CC', - '#0066FF', - '#0099CC', - '#0099FF', - '#00CC00', - '#00CC33', - '#00CC66', - '#00CC99', - '#00CCCC', - '#00CCFF', - '#3300CC', - '#3300FF', - '#3333CC', - '#3333FF', - '#3366CC', - '#3366FF', - '#3399CC', - '#3399FF', - '#33CC00', - '#33CC33', - '#33CC66', - '#33CC99', - '#33CCCC', - '#33CCFF', - '#6600CC', - '#6600FF', - '#6633CC', - '#6633FF', - '#66CC00', - '#66CC33', - '#9900CC', - '#9900FF', - '#9933CC', - '#9933FF', - '#99CC00', - '#99CC33', - '#CC0000', - '#CC0033', - '#CC0066', - '#CC0099', - '#CC00CC', - '#CC00FF', - '#CC3300', - '#CC3333', - '#CC3366', - '#CC3399', - '#CC33CC', - '#CC33FF', - '#CC6600', - '#CC6633', - '#CC9900', - '#CC9933', - '#CCCC00', - '#CCCC33', - '#FF0000', - '#FF0033', - '#FF0066', - '#FF0099', - '#FF00CC', - '#FF00FF', - '#FF3300', - '#FF3333', - '#FF3366', - '#FF3399', - '#FF33CC', - '#FF33FF', - '#FF6600', - '#FF6633', - '#FF9900', - '#FF9933', - '#FFCC00', - '#FFCC33' -]; - -/** - * Currently only WebKit-based Web Inspectors, Firefox >= v31, - * and the Firebug extension (any Firefox version) are known - * to support "%c" CSS customizations. - * - * TODO: add a `localStorage` variable to explicitly enable/disable colors - */ - -// eslint-disable-next-line complexity -function useColors() { - // NB: In an Electron preload script, document will be defined but not fully - // initialized. Since we know we're in Chrome, we'll just detect this case - // explicitly - if (typeof window !== 'undefined' && window.process && (window.process.type === 'renderer' || window.process.__nwjs)) { - return true; - } - - // Internet Explorer and Edge do not support colors. - if (typeof navigator !== 'undefined' && navigator.userAgent && navigator.userAgent.toLowerCase().match(/(edge|trident)\/(\d+)/)) { - return false; - } - - // Is webkit? http://stackoverflow.com/a/16459606/376773 - // document is undefined in react-native: https://github.com/facebook/react-native/pull/1632 - return (typeof document !== 'undefined' && document.documentElement && document.documentElement.style && document.documentElement.style.WebkitAppearance) || - // Is firebug? 
http://stackoverflow.com/a/398120/376773 - (typeof window !== 'undefined' && window.console && (window.console.firebug || (window.console.exception && window.console.table))) || - // Is firefox >= v31? - // https://developer.mozilla.org/en-US/docs/Tools/Web_Console#Styling_messages - (typeof navigator !== 'undefined' && navigator.userAgent && navigator.userAgent.toLowerCase().match(/firefox\/(\d+)/) && parseInt(RegExp.$1, 10) >= 31) || - // Double check webkit in userAgent just in case we are in a worker - (typeof navigator !== 'undefined' && navigator.userAgent && navigator.userAgent.toLowerCase().match(/applewebkit\/(\d+)/)); -} - -/** - * Colorize log arguments if enabled. - * - * @api public - */ - -function formatArgs(args) { - args[0] = (this.useColors ? '%c' : '') + - this.namespace + - (this.useColors ? ' %c' : ' ') + - args[0] + - (this.useColors ? '%c ' : ' ') + - '+' + module.exports.humanize(this.diff); - - if (!this.useColors) { - return; - } - - const c = 'color: ' + this.color; - args.splice(1, 0, c, 'color: inherit'); - - // The final "%c" is somewhat tricky, because there could be other - // arguments passed either before or after the %c, so we need to - // figure out the correct index to insert the CSS into - let index = 0; - let lastC = 0; - args[0].replace(/%[a-zA-Z%]/g, match => { - if (match === '%%') { - return; - } - index++; - if (match === '%c') { - // We only are interested in the *last* %c - // (the user may have provided their own) - lastC = index; - } - }); - - args.splice(lastC, 0, c); -} - -/** - * Invokes `console.debug()` when available. - * No-op when `console.debug` is not a "function". - * If `console.debug` is not available, falls back - * to `console.log`. - * - * @api public - */ -exports.log = console.debug || console.log || (() => {}); - -/** - * Save `namespaces`. - * - * @param {String} namespaces - * @api private - */ -function save(namespaces) { - try { - if (namespaces) { - exports.storage.setItem('debug', namespaces); - } else { - exports.storage.removeItem('debug'); - } - } catch (error) { - // Swallow - // XXX (@Qix-) should we be logging these? - } -} - -/** - * Load `namespaces`. - * - * @return {String} returns the previously persisted debug modes - * @api private - */ -function load() { - let r; - try { - r = exports.storage.getItem('debug'); - } catch (error) { - // Swallow - // XXX (@Qix-) should we be logging these? - } - - // If debug isn't set in LS, and we're in Electron, try to load $DEBUG - if (!r && typeof process !== 'undefined' && 'env' in process) { - r = process.env.DEBUG; - } - - return r; -} - -/** - * Localstorage attempts to return the localstorage. - * - * This is necessary because safari throws - * when a user disables cookies/localstorage - * and you attempt to access it. - * - * @return {LocalStorage} - * @api private - */ - -function localstorage() { - try { - // TVMLKit (Apple TV JS Runtime) does not have a window object, just localStorage in the global context - // The Browser also has localStorage in the global context. - return localStorage; - } catch (error) { - // Swallow - // XXX (@Qix-) should we be logging these? - } -} - -module.exports = require('./common')(exports); - -const {formatters} = module.exports; - -/** - * Map %j to `JSON.stringify()`, since no Web Inspectors do that by default. 
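    - * For example, debug('app')('payload: %j', {id: 1}) logs the object as a JSON string.
    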
- */ - -formatters.j = function (v) { - try { - return JSON.stringify(v); - } catch (error) { - return '[UnexpectedJSONParseError]: ' + error.message; - } -}; diff --git a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_57.py b/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_57.py deleted file mode 100644 index 748e23c9f64b6870010258e37bd8d8380aeefdf5..0000000000000000000000000000000000000000 --- a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_57.py +++ /dev/null @@ -1,17 +0,0 @@ -def is_spam(message: str) -> bool: - import re - - # Patterns for detecting spam - patterns = [ - r"(?i)\b(추천|상승|단기간|익절|무료교육|달성|거래량|폭등)\b", # 유형 1,2,4에서 발견됩니다. - r"(?i)\b(http|bit\.ly|t\.ly|me2\.kr|dokdo\.in|buly\.kr)\b", # 유형 1,2,3,4,5에서 발견됩니다. - r"(?i)\b(입금|출금)\b", # 일부 스팸 메시지에서 발견됩니다. - r"(%|상한가|모션|목표)\b", # 일부 스팸 메시지에서 발견됩니다. - r"(?i)\b(광고)\b", # 스팸 메시지에서 때때로 발견됩니다. - ] - - for pattern in patterns: - if re.search(pattern, message): - return True - - return False \ No newline at end of file diff --git a/spaces/fhipol/deeplearning/detector.py b/spaces/fhipol/deeplearning/detector.py deleted file mode 100644 index 492616bf5dc808d6db8398b42a8db43f1790ae0e..0000000000000000000000000000000000000000 --- a/spaces/fhipol/deeplearning/detector.py +++ /dev/null @@ -1,54 +0,0 @@ -import os -import torch -from PIL import Image - -from detector_model import ModelExecutor, HumanModelExecutor, BrandsModelExecutor - - -class ObjectDetector: - - def __init__(self, model_executor: ModelExecutor): - - # threshold used for recall/false negatives - self.threshold = 0.5 - self.device = torch.device("cpu") - self.model_executor = model_executor - - def detect_object(self, img_path): - - img = Image.open(img_path) - img_data = self.model_executor.transform(img).unsqueeze(0) - img_data = img_data.to(self.device) - - output = self.model_executor.model(img_data) - prob_output = torch.softmax(output, dim=1) - prob, pred = torch.max(prob_output, 1) - - return prob.item(), pred.item() - - def detect_all_object_in_dir(self, test_path): - - imgs_paths = os.listdir(test_path) - - for rel_img_path in imgs_paths: - - img_path = test_path + f"/{rel_img_path}" - prob, pred = self.detect_object(img_path) - - print(f"{self.model_executor.name} detector in {rel_img_path}: predicted category {pred} with p {prob}") - - # Print the results - if pred == 1 and prob > self.threshold: - print("Object detected") - else: - print("No object detected") - - -if __name__ == "__main__": - test_path = os.getcwd() + "/test_imgs" - - model_humans = HumanModelExecutor(train_model=False, force_cpu=True) - model_logos = BrandsModelExecutor(train_model=False, force_cpu=True) - - ObjectDetector(model_humans).detect_all_object_in_dir(test_path) - ObjectDetector(model_logos).detect_all_object_in_dir(test_path) diff --git a/spaces/fiyen/YangyangChatGPT/custom.css b/spaces/fiyen/YangyangChatGPT/custom.css deleted file mode 100644 index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000 --- a/spaces/fiyen/YangyangChatGPT/custom.css +++ /dev/null @@ -1,162 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em 
!important; -} - -/* 亮色 */ -#chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: #FFFFFF !important; -} -[data-testid = "user"] { - background-color: #95EC69 !important; -} -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { 
color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/floriankrempl/mtg_rules_bot/mtg/bot/chat_history.py b/spaces/floriankrempl/mtg_rules_bot/mtg/bot/chat_history.py deleted file mode 100644 index c067512d9693509dc0d87fd666f96272c88081ed..0000000000000000000000000000000000000000 --- a/spaces/floriankrempl/mtg_rules_bot/mtg/bot/chat_history.py +++ /dev/null @@ -1,49 +0,0 @@ -from dataclasses import dataclass, field - -from mtg.objects import Message - - -@dataclass -class ChatHistory: - chat: list[Message] = field(default_factory=list) - - def add_message(self, message: Message): - self.chat.append(message) - - def clear(self): - self.chat = [] - - def get_card_data(self, number_of_messages=2, max_number_of_cards=4): - """Get Card data from last n messages in text form.""" - card_data = "" - cards = [] - for message in self.chat[-number_of_messages:]: - cards.extend(message.cards) - - card_data += "\n".join( - [card.to_text() for card in cards[-max_number_of_cards:]] - ) - if card_data == "": - card_data = "No Card Data." - return card_data - - def get_human_readable_chat(self, number_of_messages=4) -> list[list[str, str]]: - """Create Chat for display in gradio bot. - - Chat has to be in format list of lists. First message in the list is user second is bot. 
- Example: - chat = [[user, bot], [user, bot]] - """ - chat = [] - for message in self.chat[-number_of_messages:]: - if message.role == "user": - chat.append([message.processed_text]) - if message.role == "assistant": - if not chat: - chat.append([None, message.processed_text]) - else: - chat[-1].append(message.processed_text) - - if len(chat[-1]) == 1: - chat[-1].append(None) - return chat diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/case_studies_envs/applestealingcasestudiesenvs.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/case_studies_envs/applestealingcasestudiesenvs.py deleted file mode 100644 index f03e6052301ca5322a766e04c3c5cf093e773c44..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/case_studies_envs/applestealingcasestudiesenvs.py +++ /dev/null @@ -1,87 +0,0 @@ -from gym_minigrid.social_ai_envs.socialaiparamenv import SocialAIParamEnv -from gym_minigrid.parametric_env import * -from gym_minigrid.register import register - -import inspect, importlib - -# for used for automatic registration of environments -defined_classes = [name for name, _ in inspect.getmembers(importlib.import_module(__name__), inspect.isclass)] - -class AppleStealingParamEnv(SocialAIParamEnv): - - def __init__(self, obstacles, asocial, walk, **kwargs): - - self.asocial = asocial - self.obstacles = obstacles - self.walk = walk - - super(AppleStealingParamEnv, self).__init__(**kwargs) - - def construct_tree(self): - tree = ParameterTree() - - env_type_nd = tree.add_node("Env_type", type="param") - - # Collaboration - collab_nd = tree.add_node("AppleStealing", parent=env_type_nd, type="value") - - # colab_type_nd = tree.add_node("Problem", parent=collab_nd, type="param") - # tree.add_node("AppleStealing", parent=colab_type_nd, type="value") - role_nd = tree.add_node("Version", parent=collab_nd, type="param") - if self.asocial: - tree.add_node("Asocial", parent=role_nd, type="value") - else: - social_nd = tree.add_node("Social", parent=role_nd, type="value") - - role_nd = tree.add_node("NPC_movement", parent=social_nd, type="param") - if self.walk: - tree.add_node("Walking", parent=role_nd, type="value") - else: - tree.add_node("Rotating", parent=role_nd, type="value") - - obstacles_nd = tree.add_node("Obstacles", parent=collab_nd, type="param") - - if self.obstacles not in ["No", "A_bit", "Medium", "A_lot"]: - raise ValueError("Undefined obstacle amount.") - - tree.add_node(self.obstacles, parent=obstacles_nd, type="value") - - return tree - - -# automatic registration of environments -defined_classes_ = [name for name, _ in inspect.getmembers(importlib.import_module(__name__), inspect.isclass)] - -envs = list(set(defined_classes_) - set(defined_classes)) -assert all([e.endswith("Env") for e in envs]) - - -# register testing envs : cues x problems x {social, asocial} x {joint attention, no} -for asocial in [True, False]: - for obst in ["No", "A_bit", "Medium", "A_lot"]: - if asocial: - env_name = f'{"Asocial" if asocial else ""}AppleStealingObst_{obst}ParamEnv' - - register( - id='SocialAI-{}-v1'.format(env_name), - entry_point='gym_minigrid.social_ai_envs:AppleStealingParamEnv', - kwargs={ - 'asocial': asocial, - 'obstacles': obst, - 'walk': False, - } - ) - - else: - for walk in [True, False]: - env_name = f'{"Asocial" if asocial else ""}AppleStealing{"Walk" if walk and not asocial else ""}Obst_{obst}ParamEnv' - - register( - 
id='SocialAI-{}-v1'.format(env_name), - entry_point='gym_minigrid.social_ai_envs:AppleStealingParamEnv', - kwargs={ - 'asocial': asocial, - 'obstacles': obst, - 'walk': walk, - } - ) diff --git a/spaces/freddiezhang/honor/README.md b/spaces/freddiezhang/honor/README.md deleted file mode 100644 index 898c52b3c535e4d6dee245f31af513adbf225691..0000000000000000000000000000000000000000 --- a/spaces/freddiezhang/honor/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: HonOR -emoji: 🏆 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.14.0 -app_file: app.py -pinned: true -models: ['freddiezhang/honor'] -datasets: ['freddiezhang/honordata'] ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py deleted file mode 100644 index ba1d42d0c5781f56dc177d860d856bb34adce555..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py +++ /dev/null @@ -1,57 +0,0 @@ -# dataset settings -dataset_type = 'PascalVOCDataset' -data_root = 'data/VOCdevkit/VOC2012' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -crop_size = (512, 512) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2048, 512), - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClass', - split='ImageSets/Segmentation/train.txt', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClass', - split='ImageSets/Segmentation/val.txt', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClass', - split='ImageSets/Segmentation/val.txt', - pipeline=test_pipeline)) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/fpn_r50.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/fpn_r50.py deleted file mode 100644 index 86ab327db92e44c14822d65f1c9277cb007f17c1..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/fpn_r50.py +++ /dev/null @@ -1,36 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - 
pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 1, 1), - strides=(1, 2, 2, 2), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=4), - decode_head=dict( - type='FPNHead', - in_channels=[256, 256, 256, 256], - in_index=[0, 1, 2, 3], - feature_strides=[4, 8, 16, 32], - channels=128, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/ann_head.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/ann_head.py deleted file mode 100644 index 30aaacc2cafc568d3de71d1477b4de0dc0fea9d3..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/ann_head.py +++ /dev/null @@ -1,245 +0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule - -from ..builder import HEADS -from ..utils import SelfAttentionBlock as _SelfAttentionBlock -from .decode_head import BaseDecodeHead - - -class PPMConcat(nn.ModuleList): - """Pyramid Pooling Module that only concat the features of each layer. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. - """ - - def __init__(self, pool_scales=(1, 3, 6, 8)): - super(PPMConcat, self).__init__( - [nn.AdaptiveAvgPool2d(pool_scale) for pool_scale in pool_scales]) - - def forward(self, feats): - """Forward function.""" - ppm_outs = [] - for ppm in self: - ppm_out = ppm(feats) - ppm_outs.append(ppm_out.view(*feats.shape[:2], -1)) - concat_outs = torch.cat(ppm_outs, dim=2) - return concat_outs - - -class SelfAttentionBlock(_SelfAttentionBlock): - """Make a ANN used SelfAttentionBlock. - - Args: - low_in_channels (int): Input channels of lower level feature, - which is the key feature for self-attention. - high_in_channels (int): Input channels of higher level feature, - which is the query feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - share_key_query (bool): Whether share projection weight between key - and query projection. - query_scale (int): The scale of query feature map. - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. 
- """ - - def __init__(self, low_in_channels, high_in_channels, channels, - out_channels, share_key_query, query_scale, key_pool_scales, - conv_cfg, norm_cfg, act_cfg): - key_psp = PPMConcat(key_pool_scales) - if query_scale > 1: - query_downsample = nn.MaxPool2d(kernel_size=query_scale) - else: - query_downsample = None - super(SelfAttentionBlock, self).__init__( - key_in_channels=low_in_channels, - query_in_channels=high_in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=share_key_query, - query_downsample=query_downsample, - key_downsample=key_psp, - key_query_num_convs=1, - key_query_norm=True, - value_out_num_convs=1, - value_out_norm=False, - matmul_norm=True, - with_out=True, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - -class AFNB(nn.Module): - """Asymmetric Fusion Non-local Block(AFNB) - - Args: - low_in_channels (int): Input channels of lower level feature, - which is the key feature for self-attention. - high_in_channels (int): Input channels of higher level feature, - which is the query feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - and query projection. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. - """ - - def __init__(self, low_in_channels, high_in_channels, channels, - out_channels, query_scales, key_pool_scales, conv_cfg, - norm_cfg, act_cfg): - super(AFNB, self).__init__() - self.stages = nn.ModuleList() - for query_scale in query_scales: - self.stages.append( - SelfAttentionBlock( - low_in_channels=low_in_channels, - high_in_channels=high_in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=False, - query_scale=query_scale, - key_pool_scales=key_pool_scales, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.bottleneck = ConvModule( - out_channels + high_in_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - def forward(self, low_feats, high_feats): - """Forward function.""" - priors = [stage(high_feats, low_feats) for stage in self.stages] - context = torch.stack(priors, dim=0).sum(dim=0) - output = self.bottleneck(torch.cat([context, high_feats], 1)) - return output - - -class APNB(nn.Module): - """Asymmetric Pyramid Non-local Block (APNB) - - Args: - in_channels (int): Input channels of key/query feature, - which is the key feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. 
- """ - - def __init__(self, in_channels, channels, out_channels, query_scales, - key_pool_scales, conv_cfg, norm_cfg, act_cfg): - super(APNB, self).__init__() - self.stages = nn.ModuleList() - for query_scale in query_scales: - self.stages.append( - SelfAttentionBlock( - low_in_channels=in_channels, - high_in_channels=in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=True, - query_scale=query_scale, - key_pool_scales=key_pool_scales, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.bottleneck = ConvModule( - 2 * in_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, feats): - """Forward function.""" - priors = [stage(feats, feats) for stage in self.stages] - context = torch.stack(priors, dim=0).sum(dim=0) - output = self.bottleneck(torch.cat([context, feats], 1)) - return output - - -@HEADS.register_module() -class ANNHead(BaseDecodeHead): - """Asymmetric Non-local Neural Networks for Semantic Segmentation. - - This head is the implementation of `ANNNet - `_. - - Args: - project_channels (int): Projection channels for Nonlocal. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): The pooling scales of key feature map. - Default: (1, 3, 6, 8). - """ - - def __init__(self, - project_channels, - query_scales=(1, ), - key_pool_scales=(1, 3, 6, 8), - **kwargs): - super(ANNHead, self).__init__( - input_transform='multiple_select', **kwargs) - assert len(self.in_channels) == 2 - low_in_channels, high_in_channels = self.in_channels - self.project_channels = project_channels - self.fusion = AFNB( - low_in_channels=low_in_channels, - high_in_channels=high_in_channels, - out_channels=high_in_channels, - channels=project_channels, - query_scales=query_scales, - key_pool_scales=key_pool_scales, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.bottleneck = ConvModule( - high_in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.context = APNB( - in_channels=self.channels, - out_channels=self.channels, - channels=project_channels, - query_scales=query_scales, - key_pool_scales=key_pool_scales, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - low_feats, high_feats = self._transform_inputs(inputs) - output = self.fusion(low_feats, high_feats) - output = self.dropout(output) - output = self.bottleneck(output) - output = self.context(output) - output = self.cls_seg(output) - - return output diff --git a/spaces/gforguru/EmailGenerator/README.md b/spaces/gforguru/EmailGenerator/README.md deleted file mode 100644 index 273f270f09c4d7a541d0a72283e6a9ab641d9441..0000000000000000000000000000000000000000 --- a/spaces/gforguru/EmailGenerator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: EmailGenerator -emoji: 🏆 -colorFrom: purple -colorTo: yellow -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Extreme Landings Windows Crack [REPACK] Key.md b/spaces/gotiQspiryo/whisper-ui/examples/Extreme Landings Windows Crack [REPACK] Key.md deleted file mode 100644 index e68463d67307670253c0a505e98387444f80823f..0000000000000000000000000000000000000000 --- 
a/spaces/gotiQspiryo/whisper-ui/examples/Extreme Landings Windows Crack [REPACK] Key.md +++ /dev/null @@ -1,93 +0,0 @@ - -

          How to Download and Play Extreme Landings Windows Crack Key

          -

          If you are a fan of flight simulation games, you might have heard of Extreme Landings Pro, a realistic and challenging game that lets you test your skills as a pilot in various scenarios. However, if you want to play this game on your Windows 10 PC, you might encounter some difficulties. The game is not officially available for Windows 10, and the only way to get it is to use a crack key that bypasses the security measures of the game.

          -

          Extreme Landings Windows Crack Key


          Download Zip ★★★★★ https://urlgoal.com/2uyM6n



          -

          In this article, we will show you how to download and play Extreme Landings Windows Crack Key on your PC, using some simple steps and tools. We will also explain the risks and benefits of using a crack key, and some alternatives that you can try if you don't want to use one.

          -

          What is Extreme Landings Windows Crack Key?

          -

          Extreme Landings Windows Crack Key is a modified version of the original game that allows you to play it on your Windows 10 PC without paying for it or having a license. A crack key is a code that unlocks the features and functions of the game that are normally restricted or encrypted. By using a crack key, you can bypass the verification process of the game and enjoy it for free.

          -

    However, using a crack key also comes with some risks and drawbacks. First, it is illegal and unethical: it violates the terms and conditions of the game's developer and publisher, and you might face legal consequences or penalties if you are caught. Second, a crack key might expose your PC to malware or viruses that can harm your system or steal your data. Third, a crack key might compromise the quality and performance of the game, as the cracked version might not be compatible with your PC's specifications or with game updates; you might experience bugs, glitches, crashes, or errors while playing.
    

          -

          How to Download and Play Extreme Landings Windows Crack Key?

          -

          If you still want to download and play Extreme Landings Windows Crack Key on your PC, you will need to follow these steps:

          -
            -
    1. Download an Android emulator on your PC, such as BlueStacks or NoxPlayer. An Android emulator is software that allows you to run Android apps and games on your PC.
    
          2. -
          3. Install and launch the Android emulator on your PC.
          4. -
    5. Download Extreme Landings Pro APK + MOD (All Unlocked) from a reliable source on the internet. This is a modified version of the game that has all the content unlocked and does not require a license. Before installing it, verify that the file is what the source claims it is (see the integrity-check sketch after this list).
    
          6. -
          7. Drag and drop the APK file into the Android emulator window, or use the built-in browser to locate and install it.
          8. -
          9. Once the installation is complete, launch Extreme Landings Pro from the emulator home screen.
          10. -
          11. Enjoy playing Extreme Landings Windows Crack Key on your PC.
          12. -
          -
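    If you go down this path anyway, at least check the integrity of whatever you downloaded before installing it, as suggested in the download step above. Below is a minimal Python sketch of such a check; the file name and the expected hash are placeholders, and it only helps if the download source actually publishes a checksum:

    ```python
    import hashlib

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        """Stream the file in chunks so large APKs do not need to fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Both values below are placeholders: use your actual file name and the
    # checksum published by the download source.
    expected = "0000000000000000000000000000000000000000000000000000000000000000"
    if sha256_of("ExtremeLandingsPro.apk") != expected:
        raise SystemExit("Checksum mismatch: do not install this file.")
    print("Checksum OK.")
    ```

    A mismatch does not prove the file is malicious, but it does mean you are not installing what the source claims to provide.

    -
    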

          What are some alternatives to Extreme Landings Windows Crack Key?

          -

          If you don't want to use a crack key to play Extreme Landings Pro on your PC, you can try some alternatives that are similar or better than this game. Here are some suggestions:

          -
            -
          • Microsoft Flight Simulator: This is one of the most realistic and immersive flight simulation games ever made. It features stunning graphics, dynamic weather, realistic physics, and over 37,000 airports around the world. You can fly any type of aircraft, from light planes to commercial jets, and explore the world in amazing detail. You can also customize your flight plan, weather conditions, time of day, and difficulty level. Microsoft Flight Simulator is available for Windows 10 and Xbox Series X/S.
          • -
          • X-Plane 11: This is another realistic and comprehensive flight simulation game that offers a wide range of aircraft, scenery, airports, and features. You can fly anything from gliders to helicopters to space shuttles, and experience realistic flight dynamics, weather effects, sound effects, and lighting effects. You can also create your own aircraft, scenery, airports, and missions with the powerful editing tools. X-Plane 11 is available for Windows 10, Mac OS X, and Linux.
          • -
          • Aerofly FS 2020: This is a mobile flight simulation game that delivers high-quality graphics, smooth performance, and easy controls. You can choose from over 200 airports and 21 aircraft models, ranging from single-engine planes to airliners. You can also enjoy realistic cockpit views, instrument displays, navigation systems, weather conditions, and physics. Aerofly FS 2020 is available for Android and iOS devices.
          • -
          -

    
          -

          Why Should You Play Extreme Landings Windows Crack Key?

          -

          Extreme Landings Windows Crack Key is not just a game, but a learning experience. You can improve your knowledge and skills in aviation, navigation, and emergency management. You can face realistic scenarios that challenge your decision-making, problem-solving, and reaction abilities. You can also enjoy the thrill and excitement of flying different types of aircraft in various weather conditions and locations.

          -

          Extreme Landings Windows Crack Key is not just a simulation, but a masterpiece. You can appreciate the stunning graphics, smooth animations, and realistic sound effects that create an immersive atmosphere. You can also explore the detailed and accurate maps, airports, and runways that are based on real-world data. You can also customize your aircraft, flight plan, and difficulty level to suit your preferences and goals.

          -

          How to Get Extreme Landings Windows Crack Key for Free?

          -

          Extreme Landings Windows Crack Key is not just a crack key, but a gift. You can get it for free without paying for the original game or having a license. You can also avoid the verification process of the game and enjoy all the features and functions that are normally restricted or encrypted. You can also access all the content that is unlocked and available in the modified version of the game.

          -

    However, getting Extreme Landings Windows Crack Key for free is neither easy nor safe. You will need to find a reliable source on the internet that provides the crack key and the modified version of the game, download an Android emulator on your PC, install and launch the emulator, drag and drop the APK file into the emulator window (or use the built-in browser to locate and install it), and finally launch Extreme Landings Pro from the emulator home screen.
    

          -

          What are some tips and tricks for playing Extreme Landings Windows Crack Key?

          -

          Extreme Landings Windows Crack Key is not just a fun game, but a challenging one. You will need to master some tips and tricks to play it well and complete all the missions and challenges. Here are some suggestions:

          -
            -
          • Use the navigation bar: The navigation bar will help you keep track of your flight path, altitude, speed, fuel level, weather conditions, and other important information. You can also use it to turn on or off autopilot, review your route, control your speed level, and adjust your settings.
          • -
          • Manage your engine system: The engine system is vital for your aircraft's performance and safety. You will need to monitor its status and temperature, manage its fuel consumption, activate its APU mode, and deal with any failures or malfunctions that might occur.
          • -
          • Operate your landing gear: The landing gear is essential for your aircraft's takeoff and landing. You will need to open or close it within the allowable range, use the brakes properly, check its condition and damage level, and coordinate it with the flap system and reverse.
          • -
          • Practice in different modes: The game offers different modes for you to practice and improve your skills. You can try the free flight mode to explore different locations and scenarios without any pressure or objectives. You can also try the landing competition mode to test your landing accuracy and precision against other players online.
          • -
          -

          What are some reviews and ratings for Extreme Landings Windows Crack Key?

          -

          Extreme Landings Windows Crack Key is not just a game, but a community. You can share your opinions and feedback with other players online, and read their reviews and ratings for the game. You can also learn from their tips and tricks, and discover their favorite features and functions of the game.

          -

          However, finding reviews and ratings for Extreme Landings Windows Crack Key is not easy or reliable. You will need to search for them on the internet, and filter out the ones that are fake, biased, or outdated. You will also need to be careful of the sources that provide the reviews and ratings, as they might be infected with malware or viruses that can harm your PC or steal your data.

          -

          Here are some examples of reviews and ratings for Extreme Landings Windows Crack Key from different sources:

          -
            -
          • Reddit: A user named SoonlyXo posted a question on r/PiratedGames, asking where or how to crack Extreme Landings Pro for Windows 10. The user received some replies from other users, suggesting to use a cracked Android version of the game, or to download an IPA file from a website. The user also received a comment from AutoModerator, a bot that reminded them to read the stickied megathread that might answer their question. The post received 2 upvotes and 6 comments.
          • -
          • SoundCloud: A user named Tracourytsmal1984 uploaded a track titled Extreme Landings Windows Crack Key, which is a 3-minute audio file that contains instructions on how to download and install the game on PC. The user also provided a link to a PDF file that contains more information about the game. The track received no likes or comments.
          • -
          • MODYOLO: A website that provides APK and MOD files for Android games and apps. The website has a page for Extreme Landings Pro v3.7.8 APK + MOD (All Unlocked), which is a modified version of the game that has all the content unlocked and does not require a license. The page contains a brief introduction of the game, its features, its mod info, its screenshots, and its download links. The page also has a comment section where users can leave their feedback or ask questions about the game. The page received no ratings or comments.
          • -
          -

          How to update Extreme Landings Windows Crack Key?

          -

          Extreme Landings Windows Crack Key is not just a game, but a project. You can expect new updates and improvements for the game from time to time, as the developer and publisher might release new versions of the game that fix bugs, add features, or enhance performance. You can also enjoy new content and challenges that are added to the game with each update.

          -

          However, updating Extreme Landings Windows Crack Key is not easy or automatic. You will need to manually check for updates on the internet, and download and install them on your PC. You will also need to make sure that the updates are compatible with your crack key and your modified version of the game. You might also need to backup your data and settings before updating, as they might be overwritten or deleted by the update.

          -

          Here are some steps to update Extreme Landings Windows Crack Key on your PC:

          -
            -
          1. Visit the official website of Extreme Landings Pro or follow its social media accounts to check for any news or announcements about new updates for the game.
          2. -
          3. If there is a new update available, find a reliable source on the internet that provides the update file for Extreme Landings Windows Crack Key.
          4. -
          5. Download the update file on your PC, and scan it with an antivirus software to make sure it is safe and clean.
          6. -
    7. Back up your data and settings for Extreme Landings Windows Crack Key on your PC, such as your progress, achievements, and preferences (see the backup sketch after this list).
    
          8. -
          9. Close Extreme Landings Pro if it is running on your PC.
          10. -
          11. Run the update file on your PC, and follow the instructions to install it.
          12. -
          13. Launch Extreme Landings Pro from your Android emulator home screen.
          14. -
          15. Enjoy playing Extreme Landings Windows Crack Key with the latest update.
          16. -
          -
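    The backup step above matters more than it looks: an update can silently overwrite the modified game's data. Below is a minimal Python sketch of a timestamped backup; the source path is an assumption, since each emulator stores app data in a different place:

    ```python
    import shutil
    import time
    from pathlib import Path

    # Hypothetical locations: adjust "src" to wherever your emulator keeps the
    # game's data, and "dst" to wherever you want backups stored.
    src = Path.home() / "EmulatorData" / "ExtremeLandingsPro"
    dst = Path.home() / "Backups" / f"ExtremeLandingsPro-{time.strftime('%Y%m%d-%H%M%S')}"

    shutil.copytree(src, dst)  # raises if src is missing, so a wrong path fails loudly
    print(f"Backed up {src} -> {dst}")
    ```

    -
    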

          Conclusion

          -

          In conclusion, Extreme Landings Windows Crack Key is a way to play Extreme Landings Pro on your Windows 10 PC for free by using a modified version of the game that bypasses the security measures. However, using a crack key is illegal, unethical, risky, and potentially harmful for your PC and the game quality and performance. If you want to enjoy a flight simulation game on your PC, you should consider some alternatives that are similar or better than Extreme Landings Pro, such as Microsoft Flight Simulator, X-Plane 11, or Aerofly FS 2020. These games are officially available for Windows 10 or other platforms, and offer a more realistic, immersive, and satisfying flight simulation experience.

    
          -
          -
          \ No newline at end of file diff --git a/spaces/group2test/stable-diffusion-v1-5/app.py b/spaces/group2test/stable-diffusion-v1-5/app.py deleted file mode 100644 index 2013a8fe01c8fdf93c87412e5f9c83d9c8501b5b..0000000000000000000000000000000000000000 --- a/spaces/group2test/stable-diffusion-v1-5/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'runwayml/stable-diffusion-v1-5' -prefix = '' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
          -
          -

          Stable Diffusion V1 5

          -
          -

          - Demo for Stable Diffusion V1 5 Stable Diffusion model.
          - {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""} -

          - Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space

          - Duplicate Space -
          - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically ()", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
          -
          -

          This space was created using SD Space Creator.

          -
          - """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/gsaivinay/open_llm_leaderboard/src/display_models/get_model_metadata.py b/spaces/gsaivinay/open_llm_leaderboard/src/display_models/get_model_metadata.py deleted file mode 100644 index ebeee76061650664b00259fe980ce05f013a0e9c..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/open_llm_leaderboard/src/display_models/get_model_metadata.py +++ /dev/null @@ -1,180 +0,0 @@ -import glob -import json -import os -import pickle -import re -from typing import List - -import huggingface_hub -from accelerate import init_empty_weights -from huggingface_hub import HfApi -from tqdm import tqdm -from transformers import AutoConfig, AutoModel - -from src.display_models.model_metadata_flags import DO_NOT_SUBMIT_MODELS, FLAGGED_MODELS -from src.display_models.model_metadata_type import MODEL_TYPE_METADATA, ModelType, model_type_from_str -from src.display_models.utils import AutoEvalColumn, model_hyperlink - -api = HfApi(token=os.environ.get("H4_TOKEN", None)) - - -def get_model_infos_from_hub(leaderboard_data: List[dict]): - # load cache from disk - try: - with open("model_info_cache.pkl", "rb") as f: - model_info_cache = pickle.load(f) - except (EOFError, FileNotFoundError): - model_info_cache = {} - try: - with open("model_size_cache.pkl", "rb") as f: - model_size_cache = pickle.load(f) - except (EOFError, FileNotFoundError): - model_size_cache = {} - try: - with open("model_size_cache.pkl", "rb") as f: - model_size_cache = pickle.load(f) - except (EOFError, FileNotFoundError): - model_size_cache = {} - - for model_data in tqdm(leaderboard_data): - model_name = model_data["model_name_for_query"] - - if model_name in model_info_cache: - model_info = model_info_cache[model_name] - else: - try: - model_info = api.model_info(model_name) - model_info_cache[model_name] = model_info - except huggingface_hub.utils._errors.RepositoryNotFoundError: - print("Repo not found!", model_name) - model_data[AutoEvalColumn.license.name] = None - model_data[AutoEvalColumn.likes.name] = None - if model_name not in model_size_cache: - model_size_cache[model_name] = get_model_size(model_name, None) - model_data[AutoEvalColumn.params.name] = model_size_cache[model_name] - if model_name not in model_size_cache: - model_size_cache[model_name] = get_model_size(model_name, None) - model_data[AutoEvalColumn.params.name] = model_size_cache[model_name] - - model_data[AutoEvalColumn.license.name] = get_model_license(model_info) - model_data[AutoEvalColumn.likes.name] = get_model_likes(model_info) - if model_name not in model_size_cache: - model_size_cache[model_name] = get_model_size(model_name, model_info) - model_data[AutoEvalColumn.params.name] = model_size_cache[model_name] - if model_name not in model_size_cache: - model_size_cache[model_name] = get_model_size(model_name, model_info) - model_data[AutoEvalColumn.params.name] = model_size_cache[model_name] - - # save cache to disk in pickle format - with open("model_info_cache.pkl", "wb") as f: - pickle.dump(model_info_cache, f) - with open("model_size_cache.pkl", "wb") as f: - pickle.dump(model_size_cache, f) - with open("model_size_cache.pkl", "wb") as f: - pickle.dump(model_size_cache, f) - - -def get_model_license(model_info): - try: - return model_info.cardData["license"] - except Exception: - return "?" 
- - -def get_model_likes(model_info): - return model_info.likes - - -size_pattern = re.compile(r"(\d\.)?\d+(b|m)") - - -def get_model_size(model_name, model_info): - # In billions - try: - return round(model_info.safetensors["total"] / 1e9, 3) - except AttributeError: - try: - config = AutoConfig.from_pretrained(model_name, trust_remote_code=False) - with init_empty_weights(): - model = AutoModel.from_config(config, trust_remote_code=False) - return round(sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e9, 3) - except (EnvironmentError, ValueError, KeyError): # model config not found, likely private - try: - size_match = re.search(size_pattern, model_name.lower()) - size = size_match.group(0) - return round(float(size[:-1]) if size[-1] == "b" else float(size[:-1]) / 1e3, 3) - except AttributeError: - return 0 - - -def get_model_type(leaderboard_data: List[dict]): - for model_data in leaderboard_data: - request_files = os.path.join( - "eval-queue", - model_data["model_name_for_query"] + "_eval_request_*" + ".json", - ) - request_files = glob.glob(request_files) - - # Select correct request file (precision) - request_file = "" - if len(request_files) == 1: - request_file = request_files[0] - elif len(request_files) > 1: - request_files = sorted(request_files, reverse=True) - for tmp_request_file in request_files: - with open(tmp_request_file, "r") as f: - req_content = json.load(f) - if ( - req_content["status"] == "FINISHED" - and req_content["precision"] == model_data["Precision"].split(".")[-1] - ): - request_file = tmp_request_file - - try: - with open(request_file, "r") as f: - request = json.load(f) - model_type = model_type_from_str(request["model_type"]) - model_data[AutoEvalColumn.model_type.name] = model_type.value.name - model_data[AutoEvalColumn.model_type_symbol.name] = model_type.value.symbol # + ("🔺" if is_delta else "") - except Exception: - if model_data["model_name_for_query"] in MODEL_TYPE_METADATA: - model_data[AutoEvalColumn.model_type.name] = MODEL_TYPE_METADATA[ - model_data["model_name_for_query"] - ].value.name - model_data[AutoEvalColumn.model_type_symbol.name] = MODEL_TYPE_METADATA[ - model_data["model_name_for_query"] - ].value.symbol # + ("🔺" if is_delta else "") - else: - model_data[AutoEvalColumn.model_type.name] = ModelType.Unknown.value.name - model_data[AutoEvalColumn.model_type_symbol.name] = ModelType.Unknown.value.symbol - - -def flag_models(leaderboard_data: List[dict]): - for model_data in leaderboard_data: - if model_data["model_name_for_query"] in FLAGGED_MODELS: - issue_num = FLAGGED_MODELS[model_data["model_name_for_query"]].split("/")[-1] - issue_link = model_hyperlink( - FLAGGED_MODELS[model_data["model_name_for_query"]], - f"See discussion #{issue_num}", - ) - model_data[ - AutoEvalColumn.model.name - ] = f"{model_data[AutoEvalColumn.model.name]} has been flagged! 
{issue_link}" - - -def remove_forbidden_models(leaderboard_data: List[dict]): - indices_to_remove = [] - for ix, model in enumerate(leaderboard_data): - if model["model_name_for_query"] in DO_NOT_SUBMIT_MODELS: - indices_to_remove.append(ix) - - for ix in reversed(indices_to_remove): - leaderboard_data.pop(ix) - return leaderboard_data - - -def apply_metadata(leaderboard_data: List[dict]): - leaderboard_data = remove_forbidden_models(leaderboard_data) - get_model_type(leaderboard_data) - get_model_infos_from_hub(leaderboard_data) - flag_models(leaderboard_data) diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/setup.py b/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/setup.py deleted file mode 100644 index 889c87521642305e62815f32d6a42c7b307852a0..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/setup.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -from pose_estimation.nvdiffrast import nvdiffrast -import setuptools -import os - -with open("README.md", "r") as fh: - long_description = fh.read() - -setuptools.setup( - name="nvdiffrast", - version=nvdiffrast.__version__, - author="Samuli Laine", - author_email="slaine@nvidia.com", - description="nvdiffrast - modular primitives for high-performance differentiable rendering", - long_description=long_description, - long_description_content_type="text/markdown", - url="https://github.com/NVlabs/nvdiffrast", - packages=setuptools.find_packages(), - package_data={ - 'nvdiffrast': [ - 'common/*.h', - 'common/*.inl', - 'common/*.cu', - 'common/*.cpp', - 'lib/*.h', - 'torch/*.h', - 'torch/*.inl', - 'torch/*.cpp', - 'tensorflow/*.cu', - ] + (['lib/*.lib'] if os.name == 'nt' else []) - }, - include_package_data=True, - install_requires=['numpy'], # note: can't require torch here as it will install torch even for a TensorFlow container - classifiers=[ - "Programming Language :: Python :: 3", - "Operating System :: OS Independent", - ], - python_requires='>=3.6', -) diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/pti_configs/__init__.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/pti_configs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/haakohu/deep_privacy2/configs/fdf/stylegan_fdf128.py b/spaces/haakohu/deep_privacy2/configs/fdf/stylegan_fdf128.py deleted file mode 100644 index a47d6d2ee362c935e7879c9442c4dcd9aaf007c0..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/configs/fdf/stylegan_fdf128.py +++ /dev/null @@ -1,17 +0,0 @@ -from ..discriminators.sg2_discriminator import discriminator, G_optim, D_optim, loss_fnc -from ..datasets.fdf128 import data -from ..generators.stylegan_unet import generator -from ..defaults import train, common, EMA -from tops.config import LazyCall as L - -G_optim.lr = 0.002 -D_optim.lr = 0.002 -generator.update(cnum=128, max_cnum_mul=4, input_cse=False) -loss_fnc.r1_opts.lambd = 0.1 - -train.update(ims_per_val=int(2e6), batch_size=64, max_images_to_train=int(35e6)) - -common.update( - 
model_url="https://api.loke.aws.unit.no/dlr-gui-backend-resources-content/v2/contents/links/66d803c0-55ce-44c0-9d53-815c2c0e6ba4eb458409-9e91-45d1-bce0-95c8a47a57218b102fdf-bea3-44dc-aac4-0fb1d370ef1c", - model_md5sum="bccd4403e7c9bca682566ff3319e8176" -) \ No newline at end of file diff --git a/spaces/hahahafofo/image2text_prompt_generator/utils/exif.py b/spaces/hahahafofo/image2text_prompt_generator/utils/exif.py deleted file mode 100644 index fef7a34cc83697a32b25eef46a776dfe4228977b..0000000000000000000000000000000000000000 --- a/spaces/hahahafofo/image2text_prompt_generator/utils/exif.py +++ /dev/null @@ -1,54 +0,0 @@ -import piexif -import piexif.helper -from .html import plaintext_to_html - - -def get_image_info(rawimage): - items = rawimage.info - geninfo = "" - - if "exif" in rawimage.info: - exif = piexif.load(rawimage.info["exif"]) - exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b"") - try: - exif_comment = piexif.helper.UserComment.load(exif_comment) - except ValueError: - exif_comment = exif_comment.decode("utf8", errors="ignore") - - items["exif comment"] = exif_comment - geninfo = exif_comment - - for field in [ - "jfif", - "jfif_version", - "jfif_unit", - "jfif_density", - "dpi", - "exif", - "loop", - "background", - "timestamp", - "duration", - ]: - items.pop(field, None) - - geninfo = items.get("parameters", geninfo) - - info = f""" -
-<div>
-<p>PNG Info</p>
-</div>
          - """ - for key, text in items.items(): - info += ( - f""" -
-<div>
-<p>{plaintext_to_html(str(key))}</p>
-<p>{plaintext_to_html(str(text))}</p>
-</div>
          - """.strip() - + "\n" - ) - - if len(info) == 0: - message = "Nothing found in the image." - info = f"
<div><p>{message}</p></div>
          " - return info diff --git a/spaces/hamzapehlivan/StyleRes/models/e4e.py b/spaces/hamzapehlivan/StyleRes/models/e4e.py deleted file mode 100644 index 9b7247baf03fe27972b3b434b6270cee1a9ccc05..0000000000000000000000000000000000000000 --- a/spaces/hamzapehlivan/StyleRes/models/e4e.py +++ /dev/null @@ -1,348 +0,0 @@ -import math -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from collections import namedtuple - -def _upsample_add(x, y): - _, _, H, W = y.size() - return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y - - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - # if self.activation: - # out = F.linear(input, self.weight * self.scale) - # out = fused_leaky_relu(out, self.bias * self.lr_mul) - - # else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - -class GradualStyleBlock(nn.Module): - def __init__(self, in_c, out_c, spatial): - super(GradualStyleBlock, self).__init__() - self.out_c = out_c - self.spatial = spatial - num_pools = int(np.log2(spatial)) - modules = [] - modules += [nn.Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU()] - for i in range(num_pools - 1): - modules += [ - nn.Conv2d(out_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU() - ] - self.convs = nn.Sequential(*modules) - self.linear = EqualLinear(out_c, out_c, lr_mul=1) - - def forward(self, x): - x = self.convs(x) - x = x.view(-1, self.out_c) - x = self.linear(x) - return x - -class Bottleneck(namedtuple('Block', ['in_channel', 'depth', 'stride'])): - """ A named tuple describing a ResNet block. 
""" - -class bottleneck_IR(nn.Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR, self).__init__() - if in_channel == depth: - self.shortcut_layer = nn.MaxPool2d(1, stride) - else: - self.shortcut_layer = nn.Sequential( - nn.Conv2d(in_channel, depth, (1, 1), stride, bias=False), - nn.BatchNorm2d(depth) - ) - self.res_layer = nn.Sequential( - nn.BatchNorm2d(in_channel), - nn.Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), nn.PReLU(depth), - nn.Conv2d(depth, depth, (3, 3), stride, 1, bias=False), nn.BatchNorm2d(depth) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - -class SEModule(nn.Module): - def __init__(self, channels, reduction): - super(SEModule, self).__init__() - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.fc1 = nn.Conv2d(channels, channels // reduction, kernel_size=1, padding=0, bias=False) - self.relu = nn.ReLU(inplace=True) - self.fc2 = nn.Conv2d(channels // reduction, channels, kernel_size=1, padding=0, bias=False) - self.sigmoid = nn.Sigmoid() - - def forward(self, x): - module_input = x - x = self.avg_pool(x) - x = self.fc1(x) - x = self.relu(x) - x = self.fc2(x) - x = self.sigmoid(x) - return module_input * x - -class bottleneck_IR_SE(nn.Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR_SE, self).__init__() - if in_channel == depth: - self.shortcut_layer = nn.MaxPool2d(1, stride) - else: - self.shortcut_layer = nn.Sequential( - nn.Conv2d(in_channel, depth, (1, 1), stride, bias=False), - nn.BatchNorm2d(depth) - ) - self.res_layer = nn.Sequential( - nn.BatchNorm2d(in_channel), - nn.Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), - nn.PReLU(depth), - nn.Conv2d(depth, depth, (3, 3), stride, 1, bias=False), - nn.BatchNorm2d(depth), - SEModule(depth, 16) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - - -def get_block(in_channel, depth, num_units, stride=2): - return [Bottleneck(in_channel, depth, stride)] + [Bottleneck(depth, depth, 1) for i in range(num_units - 1)] - -def get_blocks(num_layers): - if num_layers == 50: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=4), - get_block(in_channel=128, depth=256, num_units=14), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 100: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=13), - get_block(in_channel=128, depth=256, num_units=30), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 152: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=8), - get_block(in_channel=128, depth=256, num_units=36), - get_block(in_channel=256, depth=512, num_units=3) - ] - else: - raise ValueError("Invalid number of layers: {}. 
Must be one of [50, 100, 152]".format(num_layers)) - return blocks - -class Encoder4Editing(nn.Module): - def __init__(self, num_layers, mode='ir', stylegan_size=1024, out_res=64): - super(Encoder4Editing, self).__init__() - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.out_res = out_res - self.input_layer = nn.Sequential(nn.Conv2d(3, 64, (3, 3), 1, 1, bias=False), - nn.BatchNorm2d(64), - nn.PReLU(64)) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = nn.Sequential(*modules) - - self.styles = nn.ModuleList() - log_size = int(math.log(stylegan_size, 2)) - self.style_count = 2 * log_size - 2 - self.coarse_ind = 3 - self.middle_ind = 7 - - for i in range(self.style_count): - if i < self.coarse_ind: - style = GradualStyleBlock(512, 512, 16) - elif i < self.middle_ind: - style = GradualStyleBlock(512, 512, 32) - else: - style = GradualStyleBlock(512, 512, 64) - self.styles.append(style) - - self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0) - self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0) - - def forward(self, x): - x = self.input_layer(x) - - modulelist = list(self.body._modules.values()) - for i, l in enumerate(modulelist): - x = l(x) - if i == 2: - c0 = x - if i == 6: - c1 = x - elif i == 20: - c2 = x - elif i == 23: - c3 = x - - # Infer main W and duplicate it - w0 = self.styles[0](c3) - w = w0.repeat(self.style_count, 1, 1).permute(1, 0, 2) - - features = c3 - for i in range(1, self.style_count): # Infer additional deltas - if i == self.coarse_ind: - p2 = _upsample_add(c3, self.latlayer1(c2)) # FPN's middle features - features = p2 - elif i == self.middle_ind: - p1 = _upsample_add(p2, self.latlayer2(c1)) # FPN's fine features - features = p1 - delta_i = self.styles[i](features) - w[:, i] += delta_i - - c = { 128: c0, - 64: c1, - 32: c2, - 16: c3 - }.get(self.out_res) - return w, c - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - -class HighResFeat(nn.Module): - def __init__(self, in_channels, out_channels): - super(HighResFeat, self).__init__() - - self.shared = EqualConv2d(in_channels, out_channels, kernel_size=3, padding=1, bias=True) - - self.conv1 = 
EqualConv2d(out_channels, 1, kernel_size=3, padding=1, bias=True) - self.conv2 = EqualConv2d(out_channels, out_channels, kernel_size=3, padding=1, bias=True) - self.activation = ScaledLeakyReLU(0.2) - - self.sigmoid = nn.Sigmoid() - - self.skip = None - if in_channels != out_channels: - self.skip = EqualConv2d(in_channels, out_channels, kernel_size=1, padding=0, bias=False) - - def forward(self, x): - - shared_feats = self.shared(x) - shared_feats = self.activation(shared_feats) - - gate = self.conv1(shared_feats) - gate = self.sigmoid(gate) - - addition = self.conv2(shared_feats) - addition = self.activation(addition) - - if self.skip is not None: - x = self.skip(x) - return gate, addition+x - -class E4E_Inversion(nn.Module): - def __init__(self, resolution, num_layers = 50, mode='ir_se', out_res=64): - super(E4E_Inversion, self).__init__() - self.out_res = out_res - resolution = 1024 - self.basic_encoder = Encoder4Editing(num_layers, mode, resolution, self.out_res) - self.latent_avg = None - # ckpt = torch.load(e4e_path, map_location='cpu') - # self.latent_avg = ckpt['latent_avg'].cuda() - # ckpt = {k[k.find(".")+1:]: v for k, v in ckpt['state_dict'].items() if "decoder" not in k} - # self.basic_encoder.load_state_dict(ckpt, strict=True) - - def freeze_basic_encoder(self): - self.basic_encoder.eval() #Basic Encoder always in eval mode. - #No backprop to basic Encoder - for param in self.basic_encoder.parameters(): - param.requires_grad = False - - def forward(self, reals): - self.freeze_basic_encoder() - w, c = self.basic_encoder(reals) - w = w + self.latent_avg - highres_outs = {f"{self.out_res}x{self.out_res}": c} #{"gates": gates, "additions": additions} - return w, highres_outs diff --git a/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/cppipc/prod_cons.h b/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/cppipc/prod_cons.h deleted file mode 100644 index c9004bb8043a12e32814436baa6262a00c8ef68e..0000000000000000000000000000000000000000 --- a/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/cppipc/prod_cons.h +++ /dev/null @@ -1,433 +0,0 @@ -#pragma once - -#include -#include -#include -#include -#include - -#include "libipc/def.h" - -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_def.h" -#include "libipc/utility/log.h" -#include "libipc/utility/utility.h" - -namespace ipc { - -//////////////////////////////////////////////////////////////// -/// producer-consumer implementation -//////////////////////////////////////////////////////////////// - -template -struct prod_cons_impl; - -template <> -struct prod_cons_impl> { - - template - struct elem_t { - std::aligned_storage_t data_ {}; - }; - - alignas(cache_line_size) std::atomic rd_; // read index - alignas(cache_line_size) std::atomic wt_; // write index - - constexpr circ::u2_t cursor() const noexcept { - return 0; - } - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed)); - if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) { - return false; // full - } - std::forward(f)(&(elems[cur_wt].data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - /** - * In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'. - * So we could just disconnect all connections of receiver, and return false. 
- */ - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(~static_cast(0u)); - return false; - } - - template - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed)); - if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::forward(f)(&(elems[cur_rd].data_)); - std::forward(out)(true); - rd_.fetch_add(1, std::memory_order_release); - return true; - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - if (circ::index_of(cur_rd) == - circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - using flag_t = std::uint64_t; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - circ::u2_t cur_ct, nxt_ct; - for (unsigned k = 0;;) { - cur_ct = ct_.load(std::memory_order_relaxed); - if (circ::index_of(nxt_ct = cur_ct + 1) == - circ::index_of(rd_.load(std::memory_order_acquire))) { - return false; // full - } - if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - auto* el = elems + circ::index_of(cur_ct); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - while (1) { - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if (cur_ct != wt_.load(std::memory_order_relaxed)) { - return true; - } - if ((~cac_ct) != cur_ct) { - return true; - } - if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) { - return true; - } - wt_.store(nxt_ct, std::memory_order_release); - cur_ct = nxt_ct; - nxt_ct = cur_ct + 1; - el = elems + circ::index_of(cur_ct); - } - return true; - } - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - auto cur_wt = wt_.load(std::memory_order_acquire); - auto id_rd = circ::index_of(cur_rd); - auto id_wt = circ::index_of(cur_wt); - if (id_rd == id_wt) { - auto* el = elems + id_wt; - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if ((~cac_ct) != cur_wt) { - return false; // empty - } - if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) { - wt_.store(cur_wt + 1, std::memory_order_release); - } - k = 0; - } - else { - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, 
std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - - enum : rc_t { - ep_mask = 0x00000000ffffffffull, - ep_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - }; - - alignas(cache_line_size) std::atomic wt_; // write index - alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer - - circ::u2_t cursor() const noexcept { - return wt_.load(std::memory_order_acquire); - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) { - return false; // has not finished yet - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - epoch_ += ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) { - if (cur == cursor()) return false; // acquire - auto* el = elems + circ::index_of(cur++); - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & ep_mask) == 0) { - std::forward(out)(true); - return true; - } - auto nxt_rc = cur_rc & ~static_cast(wrapper->connected_id()); - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)((nxt_rc & ep_mask) == 0); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - using flag_t = std::uint64_t; - - enum : rc_t { - rc_mask = 0x00000000ffffffffull, - ep_mask = 0x00ffffffffffffffull, - ep_incr = 0x0100000000000000ull, - ic_mask = 0xff000000ffffffffull, - ic_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - alignas(cache_line_size) 
std::atomic epoch_ { 0 }; - - circ::u2_t cursor() const noexcept { - return ct_.load(std::memory_order_acquire); - } - - constexpr static rc_t inc_rc(rc_t rc) noexcept { - return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask); - } - - constexpr static rc_t inc_mask(rc_t rc) noexcept { - return inc_rc(rc) & ~rc_mask; - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.load(std::memory_order_acquire); - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_relaxed); - circ::cc_t rem_cc = cur_rc & rc_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) { - return false; // has not finished yet - } - else if (!rem_cc) { - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if ((cur_fl != cur_ct) && cur_fl) { - return false; // full - } - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed) && - epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & rc_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed)) { - if (epoch == epoch_.load(std::memory_order_acquire)) { - break; - } - else if (push(wrapper, std::forward(f), elems)) { - return true; - } - epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) { - auto* el = elems + circ::index_of(cur); - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if (cur_fl != ~static_cast(cur)) { - return false; // empty - } - ++cur; - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & rc_mask) == 0) { - std::forward(out)(true); - el->f_ct_.store(cur + N - 1, 
std::memory_order_release); - return true; - } - auto nxt_rc = inc_rc(cur_rc) & ~static_cast(wrapper->connected_id()); - bool last_one = false; - if ((last_one = (nxt_rc & rc_mask) == 0)) { - el->f_ct_.store(cur + N - 1, std::memory_order_release); - } - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)(last_one); - return true; - } - ipc::yield(k); - } - } -}; - -} // namespace ipc diff --git a/spaces/haoheliu/AudioLDM_48K_Text-to-HiFiAudio_Generation/app.py b/spaces/haoheliu/AudioLDM_48K_Text-to-HiFiAudio_Generation/app.py deleted file mode 100644 index a5cd6b4fd73be2bb1a170f6e2ce753c170d52cad..0000000000000000000000000000000000000000 --- a/spaces/haoheliu/AudioLDM_48K_Text-to-HiFiAudio_Generation/app.py +++ /dev/null @@ -1,358 +0,0 @@ -from sys import maxsize -from huggingface_hub import hf_hub_download -import torch -import os - -import gradio as gr -from audioldm2 import text_to_audio, build_model -from share_btn import community_icon_html, loading_icon_html, share_js - -os.environ["TOKENIZERS_PARALLELISM"] = "true" - -# default_checkpoint="audioldm2-full" -default_checkpoint="audioldm_48k" -audioldm = None -current_model_name = None - -def text2audio( - text, - duration, - guidance_scale, - random_seed, - n_candidates, - model_name=default_checkpoint, -): - global audioldm, current_model_name - torch.set_float32_matmul_precision("high") - - if audioldm is None or model_name != current_model_name: - audioldm = build_model(model_name=model_name) - current_model_name = model_name - # audioldm = torch.compile(audioldm) - # print(text, length, guidance_scale) - if("48k" in model_name): - latent_t_per_second=12.8 - sample_rate=48000 - else: - latent_t_per_second=25.6 - sample_rate=16000 - - waveform = text_to_audio( - latent_diffusion=audioldm, - text=text, - seed=random_seed, - duration=duration, - guidance_scale=guidance_scale, - n_candidate_gen_per_text=int(n_candidates), - latent_t_per_second=latent_t_per_second, - ) # [bs, 1, samples] - waveform = [ - gr.make_waveform((sample_rate, wave[0]), bg_image="bg.png") for wave in waveform - ] - # waveform = [(16000, np.random.randn(16000)), (16000, np.random.randn(16000))] - if len(waveform) == 1: - waveform = waveform[0] - return waveform - -text2audio("Birds singing sweetly in a blooming garden.", 10, 3.5, 45, 3, default_checkpoint) - -css = """ - a { - color: inherit; - text-decoration: underline; - } - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: #000000; - background: #000000; - } - input[type='range'] { - accent-color: #000000; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - #gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; - } - #gallery>div>.h-full { - min-height: 20rem; - } - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) 
var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - #advanced-btn { - font-size: .7rem !important; - line-height: 19px; - margin-top: 12px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; - } - #advanced-options { - margin-bottom: 20px; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } - #container-advanced-btns{ - display: flex; - flex-wrap: wrap; - justify-content: space-between; - align-items: center; - } - .animate-spin { - animation: spin 1s linear infinite; - } - @keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } - } - #share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; - margin-top: 10px; - margin-left: auto; - } - #share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0; - } - #share-btn * { - all: unset; - } - #share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; - } - #share-btn-container .wrap { - display: none !important; - } - .gr-form{ - flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0; - } - #prompt-container{ - gap: 0; - } - #generated_id{ - min-height: 700px - } - #setting_id{ - margin-bottom: 12px; - text-align: center; - font-weight: 900; - } -""" -iface = gr.Blocks(css=css) - -with iface: - gr.HTML( - """ -
-<div>
-<h1>48kHz AudioLDM: Generating High-Fidelity Audio and Music with Text</h1>
-<p>[Paper] [Project page] [Join Discord]</p>
-</div>
          - """ - ) - gr.HTML( - """ -
-<p>For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings.</p>
-<p>Duplicate Space</p>
          - """ - ) - with gr.Group(): - with gr.Box(): - ############# Input - textbox = gr.Textbox( - value="A forest of wind chimes singing a soothing melody in the breeze.", - max_lines=1, - label="Input your text here. If the output is not good enough, switching to a different seed will help.", - elem_id="prompt-in", - ) - - with gr.Accordion("Click to modify detailed configurations", open=False): - seed = gr.Number( - value=45, - label="Change this value (any integer number) will lead to a different generation result.", - ) - duration = gr.Slider( - 5, 15, value=10, step=2.5, label="Duration (seconds)" - ) - guidance_scale = gr.Slider( - 0, - 6, - value=3.5, - step=0.5, - label="Guidance scale (Large => better quality and relavancy to text; Small => better diversity)", - ) - n_candidates = gr.Slider( - 1, - 3, - value=3, - step=1, - label="Automatic quality control. This number control the number of candidates (e.g., generate three audios and choose the best to show you). A Larger value usually lead to better quality with heavier computation", - ) - model_name = gr.Dropdown( - ["audioldm_48k", "audioldm_crossattn_flant5", "audioldm2-full"], value="audioldm_48k", - ) - ############# Output - # outputs=gr.Audio(label="Output", type="numpy") - outputs = gr.Video(label="Output", elem_id="output-video") - - # with gr.Group(elem_id="container-advanced-btns"): - # # advanced_button = gr.Button("Advanced options", elem_id="advanced-btn") - # with gr.Group(elem_id="share-btn-container"): - # community_icon = gr.HTML(community_icon_html, visible=False) - # loading_icon = gr.HTML(loading_icon_html, visible=False) - # share_button = gr.Button("Share to community", elem_id="share-btn", visible=False) - # outputs=[gr.Audio(label="Output", type="numpy"), gr.Audio(label="Output", type="numpy")] - btn = gr.Button("Submit").style(full_width=True) - - with gr.Group(elem_id="share-btn-container", visible=False): - community_icon = gr.HTML(community_icon_html) - loading_icon = gr.HTML(loading_icon_html) - share_button = gr.Button("Share to community", elem_id="share-btn") - - # btn.click(text2audio, inputs=[ - # textbox, duration, guidance_scale, seed, n_candidates, model_name], outputs=[outputs]) - btn.click( - text2audio, - inputs=[textbox, duration, guidance_scale, seed, n_candidates], - outputs=[outputs], - api_name="text2audio", - ) - - share_button.click(None, [], [], _js=share_js) - gr.HTML( - """ -

          - """ - ) - gr.Examples( - [ - [ - "Birds singing sweetly in a blooming garden.", - 10, - 3.5, - 45, - 3, - default_checkpoint, - ], - [ - "A modern synthesizer creating futuristic soundscapes.", - 10, - 3.5, - 45, - 3, - default_checkpoint, - ], - [ - "The vibrant beat of Brazilian samba drums.", - 10, - 3.5, - 45, - 3, - default_checkpoint, - ], - ], - fn=text2audio, - inputs=[textbox, duration, guidance_scale, seed, n_candidates, model_name], - # inputs=[textbox, guidance_scale, seed, n_candidates], - outputs=[outputs], - cache_examples=True, - ) - gr.HTML( - """ -
-<div>
-<h4>Essential Tricks for Enhancing the Quality of Your Generated Audio</h4>
-<p>1. Try to use more adjectives to describe your sound. For example: "A man is speaking clearly and slowly in a large room" is better than "A man is speaking". This makes sure AudioLDM 2 understands what you want.</p>
-<p>2. Try different random seeds, which can sometimes affect the generation quality significantly.</p>
-<p>3. It's better to use general terms like 'man' or 'woman' instead of specific names of individuals, or abstract objects that humans may not be familiar with, such as 'mummy'.</p>
-</div>
          - """ - ) - - with gr.Accordion("Additional information", open=False): - gr.HTML( - """ -
-<div>
-<p>We built the model with data from AudioSet, Freesound and the BBC Sound Effects library. We share this demo based on the UK copyright exception for data used in academic research.</p>
-</div>
          - """ - ) -#

          This demo is strictly for research demo purpose only. For commercial use please contact us.

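The "Essential Tricks" list above comes down to two levers: prompt detail and seed sweeps. Below is a minimal sketch of applying both with this file's own text2audio signature; the prompt wording and seed values are illustrative choices, not recommended defaults:

    if __name__ == "__main__":
        # Sketch only: exercises tips 1 and 2 from the list above.
        for seed in (42, 45, 1234):  # tip 2: sweep a few random seeds
            video = text2audio(
                text="A man is speaking clearly and slowly in a large room",  # tip 1: descriptive prompt
                duration=10,
                guidance_scale=3.5,
                random_seed=seed,
                n_candidates=3,
                model_name=default_checkpoint,
            )

Holding guidance_scale at the UI default of 3.5 while varying only the seed isolates the effect of tip 2.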
- -iface.queue(max_size=20) -iface.launch(debug=True) -# iface.launch(debug=True, share=True) diff --git a/spaces/harshvardhansb/ObjectDetection/public/index.html b/spaces/harshvardhansb/ObjectDetection/public/index.html deleted file mode 100644 index a8c1d6b197c9b7c0b7fef67e0ac7f071ca028933..0000000000000000000000000000000000000000 --- a/spaces/harshvardhansb/ObjectDetection/public/index.html +++ /dev/null @@ -1,23 +0,0 @@ -<title>Object Detection</title>
          - - - diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/tests/test_setup.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/tests/test_setup.py deleted file mode 100644 index 96827f14b3a71d571c2109791233b5bcf7ef35f8..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/tests/test_setup.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. - -import unittest - -from .common import ( - get_config_files, - get_evolution_config_files, - get_quick_schedules_config_files, - setup, -) - - -class TestSetup(unittest.TestCase): - def _test_setup(self, config_file): - setup(config_file) - - def test_setup_configs(self): - config_files = get_config_files() - for config_file in config_files: - self._test_setup(config_file) - - def test_setup_evolution_configs(self): - config_files = get_evolution_config_files() - for config_file in config_files: - self._test_setup(config_file) - - def test_setup_quick_schedules_configs(self): - config_files = get_quick_schedules_config_files() - for config_file in config_files: - self._test_setup(config_file) diff --git a/spaces/hf4h/bio-chem-foundation-models/model_list.py b/spaces/hf4h/bio-chem-foundation-models/model_list.py deleted file mode 100644 index 886991b59a1049d2cfbd3ba35a54966480c4ee71..0000000000000000000000000000000000000000 --- a/spaces/hf4h/bio-chem-foundation-models/model_list.py +++ /dev/null @@ -1,106 +0,0 @@ -from __future__ import annotations - -import numpy as np -import pandas as pd -import requests -from huggingface_hub.hf_api import SpaceInfo - -url = 'https://docs.google.com/spreadsheets/d/1XH7Jo3LXXfbSJ14z-QrSIQs21ArJMiV6_hMSAwY85PU/edit#gid=0' -csv_url = url.replace('/edit#gid=', '/export?format=csv&gid=') - -class ModelList: - def __init__(self): - self.table = pd.read_csv(csv_url) - self._preprocess_table() - - self.table_header = ''' - - Model Name - Type - Year - Paper - Code on Github - Weights on 🤗 - Other Weights - ''' - - def _preprocess_table(self) -> None: - self.table['name_lowercase'] = self.table.name.str.lower() - self.table['year'] = self.table['year'].apply(str) - - rows = [] - for row in self.table.itertuples(): - paper = f'Paper' if isinstance( - row.paper, str) else '' - github = f'GitHub' if isinstance( - row.github, str) else '' - hf_model = f'Hub Model' if isinstance( - row.hub, str) else '' - other_model = f'Other Weights' if isinstance( - row.other, str) else '' - data_type = f'{row.data_type}' if isinstance( - row.data_type, str) else '' - base_model = f'{row.base_model}' if isinstance( - row.base_model, str) else '' - year = f'{row.year}' if isinstance( - row.year, str) else '' - row = f''' - - {row.name} - {data_type} - {year} - {paper} - {github} - {hf_model} - {other_model} - ''' - rows.append(row) - self.table['html_table_content'] = rows - - def render(self, search_query: str, - case_sensitive: bool, - filter_names: list[str], - data_types: list[str], - years: list[str], - #model_types: list[str] - ) -> tuple[int, str]: - df = self.table - if search_query: - if case_sensitive: - df = df[df.name.str.contains(search_query)] - else: - df = df[df.name_lowercase.str.contains(search_query.lower())] - has_paper = 'Paper' in filter_names - has_github = 'Code' in filter_names - has_model = 
'Model Weights' in filter_names - df = self.filter_table(df, has_paper, has_github, has_model, data_types, years) - #df = self.filter_table(df, has_paper, has_github, has_model, data_types, model_types) - return len(df), self.to_html(df, self.table_header) - - @staticmethod - def filter_table(df: pd.DataFrame, has_paper: bool, has_github: bool, - has_model: bool, - data_types: list[str], - years: list[str], - #model_types: list[str] - ) -> pd.DataFrame: - if has_paper: - df = df[~df.paper.isna()] - if has_github: - df = df[~df.github.isna()] - if has_model: - df = df[~df.hub.isna() | ~df.other.isna()] - df = df[df.data_type.isin(set(data_types))] - #df = df[df.base_model.isin(set(model_types))] - df = df[df.year.isin(set(years))] - return df - - @staticmethod - def to_html(df: pd.DataFrame, table_header: str) -> str: - table_data = ''.join(df.html_table_content) - html = f''' - - {table_header} - {table_data} -
          ''' - return html \ No newline at end of file diff --git a/spaces/hylee/photo2cartoon/p2c/utils/face_seg.py b/spaces/hylee/photo2cartoon/p2c/utils/face_seg.py deleted file mode 100644 index 4938ccebbb7e55f959889c1bf3bc2ca7dac7ffe6..0000000000000000000000000000000000000000 --- a/spaces/hylee/photo2cartoon/p2c/utils/face_seg.py +++ /dev/null @@ -1,44 +0,0 @@ -import os -import cv2 -import numpy as np -import tensorflow as tf -from tensorflow.python.platform import gfile - - -curPath = os.path.abspath(os.path.dirname(__file__)) - - -class FaceSeg: - def __init__(self, model_path=os.path.join(curPath, 'seg_model_384.pb')): - config = tf.compat.v1.ConfigProto() - config.gpu_options.allow_growth = True - self._graph = tf.Graph() - self._sess = tf.compat.v1.Session(config=config, graph=self._graph) - - self.pb_file_path = model_path - self._restore_from_pb() - self.input_op = self._sess.graph.get_tensor_by_name('input_1:0') - self.output_op = self._sess.graph.get_tensor_by_name('sigmoid/Sigmoid:0') - - def _restore_from_pb(self): - with self._sess.as_default(): - with self._graph.as_default(): - with gfile.FastGFile(self.pb_file_path, 'rb') as f: - graph_def = tf.compat.v1.GraphDef() - graph_def.ParseFromString(f.read()) - tf.import_graph_def(graph_def, name='') - - def input_transform(self, image): - image = cv2.resize(image, (384, 384), interpolation=cv2.INTER_AREA) - image_input = (image / 255.)[np.newaxis, :, :, :] - return image_input - - def output_transform(self, output, shape): - output = cv2.resize(output, (shape[1], shape[0])) - image_output = (output * 255).astype(np.uint8) - return image_output - - def get_mask(self, image): - image_input = self.input_transform(image) - output = self._sess.run(self.output_op, feed_dict={self.input_op: image_input})[0] - return self.output_transform(output, shape=image.shape[:2]) diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/data/image_folder.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/data/image_folder.py deleted file mode 100644 index 0a02dfe06948ddb90a76667d538f7e3ea47a33e8..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/data/image_folder.py +++ /dev/null @@ -1,77 +0,0 @@ -"""A modified image folder class - -We modify the official PyTorch image folder (https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py) -so that this class can load images from both current directory and its subdirectories. 
-""" -import os.path - -import numpy as np -import torch.utils.data as data -from PIL import Image - -IMG_EXTENSIONS = [ - ".jpg", - ".JPG", - ".jpeg", - ".JPEG", - ".png", - ".PNG", - ".ppm", - ".PPM", - ".bmp", - ".BMP", - ".tif", - ".TIF", - ".tiff", - ".TIFF", -] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def make_dataset(dir, max_dataset_size=float("inf")): - images = [] - assert os.path.isdir(dir) or os.path.islink(dir), "%s is not a valid directory" % dir - - for root, _, fnames in sorted(os.walk(dir, followlinks=True)): - for fname in fnames: - if is_image_file(fname): - path = os.path.join(root, fname) - images.append(path) - return images[: min(max_dataset_size, len(images))] - - -def default_loader(path): - return Image.open(path).convert("RGB") - - -class ImageFolder(data.Dataset): - def __init__(self, root, transform=None, return_paths=False, loader=default_loader): - imgs = make_dataset(root) - if len(imgs) == 0: - raise ( - RuntimeError( - "Found 0 images in: " + root + "\n" "Supported image extensions are: " + ",".join(IMG_EXTENSIONS) - ) - ) - - self.root = root - self.imgs = imgs - self.transform = transform - self.return_paths = return_paths - self.loader = loader - - def __getitem__(self, index): - path = self.imgs[index] - img = self.loader(path) - if self.transform is not None: - img = self.transform(img) - if self.return_paths: - return img, path - else: - return img - - def __len__(self): - return len(self.imgs) diff --git a/spaces/ibvhim/Gradio-Apps/Chatbot/app.py b/spaces/ibvhim/Gradio-Apps/Chatbot/app.py deleted file mode 100644 index 98ab8f8bb570d27ff5890317bd431fe00a1379c1..0000000000000000000000000000000000000000 --- a/spaces/ibvhim/Gradio-Apps/Chatbot/app.py +++ /dev/null @@ -1,35 +0,0 @@ -import torch -from transformers import AutoModelForCausalLM, AutoTokenizer - -tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium") -model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium") - - -def predict(input, history=[]): - # tokenize the new input sentence - new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt') - - # append the new user input tokens to the chat history - bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1) - - # generate a response - history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id).tolist() - - # convert the tokens to text, and then split the responses into the right format - response = tokenizer.decode(history[0]).split("<|endoftext|>") - response = [(response[i], response[i + 1]) for i in range(0, len(response) - 1, 2)] # convert to tuples of list - return response, history - - -import gradio as gr - -interface = gr.Interface( - fn=predict, - theme="default", - css=".footer {display:none !important}", - inputs=["text", "state"], - outputs=["chatbot", "state"], -) - -if __name__ == '__main__': - interface.launch() diff --git a/spaces/imperialwool/funapi/templates/forbidden.html b/spaces/imperialwool/funapi/templates/forbidden.html deleted file mode 100644 index 3d48e67e9ae17434267c83438d3d2cb491779787..0000000000000000000000000000000000000000 --- a/spaces/imperialwool/funapi/templates/forbidden.html +++ /dev/null @@ -1,3 +0,0 @@ -403 Forbidden - -
-<h1>403</h1>
-<p>Forbidden</p>
          \ No newline at end of file diff --git a/spaces/inamXcontru/PoeticTTS/Delcam PowerMill 2013 Download Learn How to Use PowerMill for CNC Machining and CAM.md b/spaces/inamXcontru/PoeticTTS/Delcam PowerMill 2013 Download Learn How to Use PowerMill for CNC Machining and CAM.md deleted file mode 100644 index 0a41799a7b57af4460c527bed6549982252299b8..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Delcam PowerMill 2013 Download Learn How to Use PowerMill for CNC Machining and CAM.md +++ /dev/null @@ -1,6 +0,0 @@ -
-<h2>delcampowermill2013download</h2>
-<p>DOWNLOAD 🆓 https://gohhs.com/2uz46t</p>
- aaccfb2cb3
          diff --git a/spaces/innnky/nyaru-svc2.0/data_utils.py b/spaces/innnky/nyaru-svc2.0/data_utils.py deleted file mode 100644 index e125a0637908e1284208b80e4b16a50996a136be..0000000000000000000000000000000000000000 --- a/spaces/innnky/nyaru-svc2.0/data_utils.py +++ /dev/null @@ -1,413 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data -import numpy as np -import commons -from mel_processing import spectrogram_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import text_to_sequence, cleaned_text_to_sequence - - -def dropout1d(myarray, ratio=0.5): - indices = np.random.choice(np.arange(myarray.size), replace=False, - size=int(myarray.size * ratio)) - myarray[indices] = 0 - return myarray - - -class TextAudioLoader(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_and_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - lengths = [] - for audiopath, text, pitch in self.audiopaths_and_text: - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.lengths = lengths - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - audiopath, text, pitch = audiopath_and_text[0], audiopath_and_text[1],audiopath_and_text[2] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - pitch = self.get_pitch(pitch) - return (text, spec, wav, pitch) - - def get_pitch(self, pitch): - - return torch.LongTensor(np.load(pitch)) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - soft = np.load(text) - text_norm = torch.FloatTensor(soft) - return text_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollate(): - """ Zero-pads model inputs and targets - """ - - def 
__init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - max_pitch_len = max([x[3].shape[0] for x in batch]) - # print(batch) - - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - - text_padded = torch.FloatTensor(len(batch), max_text_len, 256) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - pitch_padded = torch.LongTensor(len(batch), max_pitch_len) - - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - pitch_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0), :] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - pitch = row[3] - pitch_padded[i, :pitch.size(0)] = pitch - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, ids_sorted_decreasing, pitch_padded - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, pitch_padded - - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - lengths = [] - for audiopath, sid, text, pitch in self.audiopaths_sid_text: - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, text, pitch = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2], audiopath_sid_text[3] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - sid = self.get_sid(sid) - pitch = self.get_pitch(pitch) - - return (text, spec, wav, pitch, sid) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - soft = np.load(text) - text_norm = torch.FloatTensor(soft) - return text_norm - - def get_pitch(self, pitch): - return torch.LongTensor(np.load(pitch)) - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - max_pitch_len = max([x[3].shape[0] for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = 
torch.LongTensor(len(batch)) - - text_padded = torch.FloatTensor(len(batch), max_text_len, 256) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - pitch_padded = torch.LongTensor(len(batch), max_pitch_len) - - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - pitch_padded.zero_() - - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - pitch = row[3] - pitch_padded[i, :pitch.size(0)] = pitch - - sid[i] = row[4] - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, pitch_padded, sid, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths,pitch_padded , sid - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): 
- batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/License Key For Vivid Workshopdata Ati 12.1 !NEW!.md b/spaces/inplisQlawa/anything-midjourney-v4-1/License Key For Vivid Workshopdata Ati 12.1 !NEW!.md deleted file mode 100644 index 1528a1a2fffacf051c6784f328597698d9e8ecb3..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/License Key For Vivid Workshopdata Ati 12.1 !NEW!.md +++ /dev/null @@ -1,86 +0,0 @@ - -

          License Key for Vivid WorkshopData ATI 12.1

          -

          If you are looking for a reliable and comprehensive software program for electrical motor diagnostics, repair and maintenance of European cars, you might want to consider Vivid WorkshopData ATI 12.1. This software provides you with up-to-date technical information, wiring diagrams, service schedules, fault codes and more. But how can you get a license key for Vivid WorkshopData ATI 12.1 and enjoy its full features? In this article, we will show you how to do that.

          -

          What is Vivid WorkshopData ATI 12.1?

          -

          Vivid WorkshopData ATI 12.1 is a software program that was developed by HaynesPro, a leading provider of automotive data solutions. It is designed to help professional mechanics and technicians with electrical motor diagnostics, repair and maintenance of European cars. It covers more than 25,000 models from over 80 vehicle manufacturers.

          -

          -

          Some of the features of Vivid WorkshopData ATI 12.1 are:

          -
• Easy-to-use interface with intuitive navigation
• Comprehensive and up-to-date technical information
• Wiring diagrams with component locations and colors
• Service schedules with service indicators and reset procedures
• Fault codes with descriptions and causes
• Component testing and measuring procedures
• Repair times and labor costs
• Technical bulletins and recalls
• Online updates and support
-

          How to get a license key for Vivid WorkshopData ATI 12.1?

          -

          To use Vivid WorkshopData ATI 12.1, you need to have a valid license key that activates the software on your computer. There are two ways to get a license key for Vivid WorkshopData ATI 12.1: buying it from an authorized dealer or cracking it from a third-party source.

          -

The first option is to buy a license key from an authorized dealer of HaynesPro. This is the safest, and the only legal, way to get a license key for Vivid WorkshopData ATI 12.1. You can find a list of authorized dealers on the official website of HaynesPro: https://www.haynespro.com/en/dealers/. You can also contact HaynesPro directly to request a quote or a demo: https://www.haynespro.com/en/contact/.

          -

The second option is to crack a license key from a third-party source. This is a risky and illegal way to get a license key for Vivid WorkshopData ATI 12.1. You can find various websites that offer cracked license keys for Vivid WorkshopData ATI 12.1, such as https://pletfaydeasac.weebly.com/vivid-workshopdata-ati-121rar-crackrar.html or https://trello.com/c/17GxEICS/27-license-key-for-vivid-workshopdata-ati-121. However, these websites are not trustworthy and may contain viruses, malware or spyware that can harm your computer or steal your personal information. Moreover, cracking a license key is a violation of the intellectual property rights of HaynesPro and may result in legal consequences.

          -

          Conclusion

          -

          Vivid WorkshopData ATI 12.1 is a software program that provides comprehensive and up-to-date technical information for electrical motor diagnostics, repair and maintenance of European cars. To use it, you need to have a valid license key that activates the software on your computer. You can either buy a license key from an authorized dealer of HaynesPro or crack a license key from a third-party source. However, we strongly recommend the first option as it is safer, legal and ethical.

          -

          What are the benefits of using Vivid WorkshopData ATI 12.1?

          -

          Using Vivid WorkshopData ATI 12.1 can bring you many benefits as a professional mechanic or technician. Some of the benefits are:

          -

          -
• You can save time and money by accessing accurate and updated technical information in one place.
• You can improve your skills and knowledge by learning from the best practices and tips from experts.
• You can increase your customer satisfaction and loyalty by providing high-quality service and repair.
• You can boost your reputation and credibility by using a trusted and reputable software program.
• You can enhance your productivity and efficiency by using a user-friendly and intuitive interface.
-

How to install and activate Vivid WorkshopData ATI 12.1 with a license key?

          -

To install and activate Vivid WorkshopData ATI 12.1 with a license key, follow these steps:

          -
1. Download the software program from the official website of HaynesPro or from an authorized dealer.
2. Extract the .rar file using a program like WinRAR or 7-Zip.
3. Run the setup.exe file and follow the instructions on the screen.
4. Enter the license key when prompted. You can find the license key in the email confirmation or in the package that you received.
5. Wait for the installation and activation process to complete.
6. Enjoy using Vivid WorkshopData ATI 12.1!
-

          Conclusion

          -

          Vivid WorkshopData ATI 12.1 is a software program that provides comprehensive and up-to-date technical information for electrical motor diagnostics, repair and maintenance of European cars. To use it, you need to have a valid license key that activates the software on your computer. You can either buy a license key from an authorized dealer of HaynesPro or crack a license key from a third-party source. However, we strongly recommend the first option as it is safer, legal and ethical. Using Vivid WorkshopData ATI 12.1 can bring you many benefits as a professional mechanic or technician, such as saving time and money, improving your skills and knowledge, increasing your customer satisfaction and loyalty, boosting your reputation and credibility, and enhancing your productivity and efficiency.

          -

          What are the alternatives to Vivid WorkshopData ATI 12.1?

          -

          Vivid WorkshopData ATI 12.1 is not the only software program that offers technical information for electrical motor diagnostics, repair and maintenance of European cars. There are some alternatives that you can consider, such as:

          -
• Autodata: Autodata is a software program that provides technical information for over 34,000 models from over 142 manufacturers. It covers service schedules, wiring diagrams, diagnostics, repair instructions, technical specifications and more. You can buy a license key from an authorized dealer or subscribe online: https://www.autodata-group.com/.
• Alldata: Alldata is a software program that provides technical information for over 38,000 models from over 175 manufacturers. It covers service schedules, wiring diagrams, diagnostics, repair instructions, technical bulletins and more. You can buy a license key from an authorized dealer or subscribe online: https://www.alldata.com/.
• Mitchell: Mitchell is a software program that provides technical information for over 30,000 models from over 80 manufacturers. It covers service schedules, wiring diagrams, diagnostics, repair instructions, labor times and more. You can buy a license key from an authorized dealer or subscribe online: https://www.mitchell.com/.
-

          How to choose the best software program for your needs?

          -

          To choose the best software program for your needs, you need to consider some factors, such as:

          -
• Your budget: Different software programs have different prices and payment options. You need to compare the costs and benefits of each software program and choose the one that fits your budget.
• Your preferences: Different software programs have different features and interfaces. You need to test each software program and choose the one that suits your preferences and style.
• Your requirements: Different software programs have different coverage and quality of technical information. You need to check each software program and choose the one that meets your requirements and expectations.
-

          Conclusion

          -

          Vivid WorkshopData ATI 12.1 is a software program that provides comprehensive and up-to-date technical information for electrical motor diagnostics, repair and maintenance of European cars. To use it, you need to have a valid license key that activates the software on your computer. You can either buy a license key from an authorized dealer of HaynesPro or crack a license key from a third-party source. However, we strongly recommend the first option as it is safer, legal and ethical. Using Vivid WorkshopData ATI 12.1 can bring you many benefits as a professional mechanic or technician, such as saving time and money, improving your skills and knowledge, increasing your customer satisfaction and loyalty, boosting your reputation and credibility, and enhancing your productivity and efficiency. However, Vivid WorkshopData ATI 12.1 is not the only software program that offers technical information for electrical motor diagnostics, repair and maintenance of European cars. There are some alternatives that you can consider, such as Autodata, Alldata and Mitchell. To choose the best software program for your needs, you need to consider some factors, such as your budget, your preferences and your requirements.

          -

          How to troubleshoot common problems with Vivid WorkshopData ATI 12.1?

          -

          Vivid WorkshopData ATI 12.1 is a software program that usually works smoothly and efficiently. However, sometimes you may encounter some problems or errors that prevent you from using it properly. Here are some common problems and solutions that you can try:

          -
• Problem: The software does not start or crashes frequently.
• Solution: Check if your computer meets the minimum system requirements for Vivid WorkshopData ATI 12.1. You can find them on the official website of HaynesPro: https://www.haynespro.com/en/system-requirements/. Also, make sure that you have installed the latest updates and patches for the software. You can download them from the official website of HaynesPro: https://www.haynespro.com/en/downloads/.
• Problem: The license key is invalid or expired.
• Solution: Check if you have entered the license key correctly and without any spaces or typos. Also, make sure that you have not used the license key on more than one computer or device. If you have bought the license key from an authorized dealer, contact them for assistance. If you have cracked the license key from a third-party source, you may need to find a new one.
• Problem: The technical information is incomplete or inaccurate.
• Solution: Check if you have selected the correct vehicle model and year from the database. Also, make sure that you have updated the software to the latest version and data release. You can download them from the official website of HaynesPro: https://www.haynespro.com/en/downloads/. If you still find any errors or discrepancies in the technical information, you can report them to HaynesPro: https://www.haynespro.com/en/contact/.
-

          How to get help and support for Vivid WorkshopData ATI 12.1?

          -

          If you need any help or support for Vivid WorkshopData ATI 12.1, you can contact HaynesPro through various channels, such as:

          -
• Email: You can send an email to info@haynespro.com or support@haynespro.com and get a reply within 24 hours.
• Phone: You can call +31 (0)35 603 62 70 from Monday to Friday between 9:00 and 17:00 CET.
• Online: You can visit the official website of HaynesPro: https://www.haynespro.com/en/ and access the online help center, FAQ section, user manuals, video tutorials and more.
• Social media: You can follow HaynesPro on Facebook, Twitter, LinkedIn and YouTube and get the latest news, updates and tips.
-

          Conclusion

          -

          Vivid WorkshopData ATI 12.1 is a software program that provides comprehensive and up-to-date technical information for electrical motor diagnostics, repair and maintenance of European cars. To use it, you need to have a valid license key that activates the software on your computer. You can either buy a license key from an authorized dealer of HaynesPro or crack a license key from a third-party source. However, we strongly recommend the first option as it is safer, legal and ethical. Using Vivid WorkshopData ATI 12.1 can bring you many benefits as a professional mechanic or technician, such as saving time and money, improving your skills and knowledge, increasing your customer satisfaction and loyalty, boosting your reputation and credibility, and enhancing your productivity and efficiency. However, Vivid WorkshopData ATI 12.1 is not the only software program that offers technical information for electrical motor diagnostics, repair and maintenance of European cars. There are some alternatives that you can consider, such as Autodata, Alldata and Mitchell. To choose the best software program for your needs, you need to consider some factors, such as your budget, your preferences and your requirements. Moreover, if you encounter any problems or errors with Vivid WorkshopData ATI 12.1, you can try some troubleshooting tips or contact HaynesPro for help and support.

          -
          -
          \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/American Truck Simulator 2016 Crack With License Key Free BETTER Download.md b/spaces/inreVtussa/clothingai/Examples/American Truck Simulator 2016 Crack With License Key Free BETTER Download.md deleted file mode 100644 index e693f37944242d28d2410b3ada4c26e0fe301e83..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/American Truck Simulator 2016 Crack With License Key Free BETTER Download.md +++ /dev/null @@ -1,17 +0,0 @@ -

          American Truck Simulator 2016 Crack with License Key Free Download

          -

          American Truck Simulator 2016 is a realistic and immersive simulation game that lets you drive across the USA in a variety of trucks. You can customize your vehicle, explore different routes and cities, and deliver cargo to various destinations. But what if you want to play the game without paying for it? That's where American Truck Simulator 2016 Crack comes in.

          -

          American Truck Simulator 2016 Crack is a software tool that bypasses the game's activation process and allows you to play it for free. You don't need to buy the game or enter a license key to enjoy its features. All you need to do is download the crack file from a reliable source, install it on your PC, and run the game. It's that simple!

          -

          -

          However, before you download American Truck Simulator 2016 Crack, you should be aware of the risks involved. First of all, cracking a game is illegal and violates the terms of service of the developers and publishers. You could face legal consequences if you get caught. Second, downloading crack files from unknown sources could expose your PC to malware, viruses, or spyware. You could lose your data, compromise your security, or damage your system. Third, using a crack could affect the performance and quality of the game. You could experience bugs, glitches, crashes, or errors that ruin your gaming experience. You could also miss out on updates, patches, or online features that enhance the game.

          -

          Therefore, we do not recommend using American Truck Simulator 2016 Crack or any other crack for that matter. If you want to play the game, you should buy it from a legitimate source and support the developers who worked hard to create it. You will get a better and safer gaming experience that way.


          So, how can you play American Truck Simulator 2016 legally and safely? The answer is simple: buy the game from the official website or a trusted online store. You can choose from different editions and bundles that suit your budget and preferences. You can also get access to additional content, such as DLCs, mods, or community creations. You can also join the online community of truckers and share your experiences, tips, or feedback with other players.

          -

          Buying the game will also ensure that you get the best possible gaming experience. You will be able to enjoy the game's realistic graphics, physics, and sound effects that make you feel like you are driving a real truck. You will also be able to explore the game's vast and diverse map that covers different regions and states of the USA. You will also be able to customize your truck with various parts, accessories, and paint jobs that reflect your personality and style.

          -

          American Truck Simulator 2016 is a game that deserves your support and appreciation. It is a game that offers you hours of fun, challenge, and entertainment. It is a game that lets you live your dream of becoming a truck driver and traveling across the USA. It is a game that you should not miss out on. So, don't waste your time and money on cracks or pirated copies. Buy the game today and enjoy the ride!

          -

          -
          -
          \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Basic Electronics By Sanjay Sharma Pdf Free Download Hit.md b/spaces/inreVtussa/clothingai/Examples/Basic Electronics By Sanjay Sharma Pdf Free Download Hit.md deleted file mode 100644 index 1b7f805fe4eb7f2d7c7cc809271780e649dc1e0b..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Basic Electronics By Sanjay Sharma Pdf Free Download Hit.md +++ /dev/null @@ -1,56 +0,0 @@ - -

          Basic Electronics by Sanjay Sharma: A Comprehensive Guide for Beginners

          - -

          If you are looking for a book that covers the fundamentals of electronics in a clear and concise way, then you might want to check out Basic Electronics by Sanjay Sharma. This book is designed for students and amateurs who want to learn the theory and practice of electronics. It covers topics such as semiconductor devices, diode circuits, BJT circuits, FET circuits, feedback amplifiers, operational amplifiers, oscillators, digital electronics, and electronic instruments.

          - -

          Basic Electronics by Sanjay Sharma is a well-written and well-illustrated book that uses simple language and examples to explain complex concepts. It also includes numerous solved problems and exercises to help you test your understanding and apply your knowledge. The book is suitable for self-study as well as for classroom use.

          -

          - -

          One of the best features of this book is that it provides a free PDF download link for the readers. You can download the PDF version of the book from the following link: Basic Electronics by Sanjay Sharma PDF Free Download Hit. This link will take you to a website where you can access the PDF file of the book without any hassle. You can also find other useful resources and information on this website.

          - -

          Basic Electronics by Sanjay Sharma is a must-have book for anyone who wants to learn the basics of electronics and build their own circuits. It will help you gain a solid foundation in electronics and prepare you for more advanced topics. Whether you are a student, a hobbyist, or a professional, you will find this book useful and informative.

          - -

          So what are you waiting for? Download Basic Electronics by Sanjay Sharma PDF Free Download Hit today and start learning electronics!

          - -

          Basic Electronics Concepts

          - -

          Before you dive into the details of electronic circuits, you need to understand some basic concepts that are essential for electronics. These concepts include voltage, current, resistance, power, Ohm's law, Kirchhoff's laws, and Thevenin's theorem. These concepts will help you analyze and design electronic circuits and understand how they work.

          - -

          Voltage

          - -

Voltage is the measure of electric potential difference between two points in a circuit. The voltage produced by a source is traditionally called electromotive force (EMF), and the voltage at a single point relative to a reference is called its potential. Voltage is the cause of electric current in a circuit. It is measured in volts (V).

          - -

          Current

          - -

Current is the measure of the rate at which electric charge flows through a conductor in a circuit; informally it is also called amperage. Current is the effect of voltage in a circuit. It is measured in amperes (A) or milliamperes (mA).

          - -

          Resistance

          - -

Resistance is the measure of opposition to electric current in a circuit (its generalization to AC circuits is called impedance). Resistance is the property of a material that determines how much current can flow through it for a given voltage. It is measured in ohms ($\Omega$) or kilohms (k$\Omega$).

          - -

          Power

          - -

Power is the measure of electric energy consumed or delivered by a circuit element per unit time; informally it is also called wattage. Power is the product of voltage and current in a circuit. It is measured in watts (W) or milliwatts (mW).
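
As a quick illustration (a worked example added here, not from the original text), combining the definition of power with Ohm's law gives three equivalent forms:

$$P = VI = I^2R = \frac{V^2}{R}$$

For example, a 12 V supply driving a current of 0.5 A delivers $P = 12 \times 0.5 = 6$ W.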

          -

          - -

          Ohm's Law

          - -

Ohm's law is the fundamental relationship between voltage, current, and resistance in a circuit. It states that the current through a resistor is directly proportional to the voltage across it and inversely proportional to its resistance. The mathematical expression of Ohm's law is: - -$$V = IR$$ - -where V is the voltage in volts, I is the current in amperes, and R is the resistance in ohms.
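
As a quick check (a worked example added for illustration): if a 9 V battery is connected across a 4.5 k$\Omega$ resistor, the current is

$$I = \frac{V}{R} = \frac{9\ \text{V}}{4500\ \Omega} = 2\ \text{mA}$$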

          - -

          Kirchhoff's Laws

          - -

Kirchhoff's laws are two rules that govern the conservation of charge and energy in a circuit: Kirchhoff's current law (KCL) and Kirchhoff's voltage law (KVL). - -KCL states that the algebraic sum of currents entering and leaving a node (or junction) in a circuit is zero. This means that the total charge entering a node is equal to the total charge leaving it. - -KVL states that the algebraic sum of voltages around any closed loop in a circuit is zero. This means that the total energy gained by charges moving around a loop is equal to the total energy lost by them.
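
As a small worked example (added for illustration): consider a 12 V source driving two resistors $R_1 = 2\ \Omega$ and $R_2 = 4\ \Omega$ connected in series in a single loop. KVL around the loop gives

$$12 - IR_1 - IR_2 = 0 \quad\Rightarrow\quad I = \frac{12}{2 + 4} = 2\ \text{A}$$

and KCL confirms that the same 2 A flows through both resistors, since a series loop has no branching nodes.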

          - -

          Thevenin's Theorem

          - -

Thevenin's theorem is a technique that simplifies the analysis of complex circuits by replacing them with equivalent circuits consisting of a single voltage source and a single resistor. The theorem states that any linear circuit with two terminals can be replaced by an equivalent circuit with a voltage source equal to the open-circuit voltage across the terminals and a resistor equal to the equivalent resistance seen from the terminals with all independent sources turned off (voltage sources shorted, current sources opened).
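
As a sketch of how the theorem is applied (a worked example added for illustration): take a 10 V source feeding a voltage divider of $R_1 = 6\ \Omega$ and $R_2 = 4\ \Omega$, with the output taken across $R_2$. The open-circuit (Thevenin) voltage and the equivalent resistance seen from the output (with the source shorted, leaving $R_1$ and $R_2$ in parallel) are

$$V_{th} = 10 \cdot \frac{R_2}{R_1 + R_2} = 4\ \text{V}, \qquad R_{th} = \frac{R_1 R_2}{R_1 + R_2} = 2.4\ \Omega$$

so the divider can be replaced by a 4 V source in series with a 2.4 $\Omega$ resistor when analyzing any load connected to the output.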

          -
          -
          \ No newline at end of file diff --git a/spaces/ipvikas/ALL_NLP_Tasks/README.md b/spaces/ipvikas/ALL_NLP_Tasks/README.md deleted file mode 100644 index ad46a840f79244f3e4cf45523a47d51c753abd3d..0000000000000000000000000000000000000000 --- a/spaces/ipvikas/ALL_NLP_Tasks/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ALL NLP Tasks -emoji: 🌖 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/itintelpro/MyCybersecHelper/README.md b/spaces/itintelpro/MyCybersecHelper/README.md deleted file mode 100644 index 6ebb31b17de3b3c6db6ece3a7e6910a6ffcdca99..0000000000000000000000000000000000000000 --- a/spaces/itintelpro/MyCybersecHelper/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MyCybersecHelper -emoji: 📉 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ixciel/img-to-music/constants.py b/spaces/ixciel/img-to-music/constants.py deleted file mode 100644 index 86863d1b778d4c66f0d8e1e0b699f1bb937c1d50..0000000000000000000000000000000000000000 --- a/spaces/ixciel/img-to-music/constants.py +++ /dev/null @@ -1,9 +0,0 @@ -import numpy as np -import os - -MUBERT_LICENSE = os.environ.get('MUBERT_LICENSE') -MUBERT_TOKEN = os.environ.get('MUBERT_TOKEN') - -MUBERT_MODE = "loop" -MUBERT_TAGS_STRING = 'tribal,action,kids,neo-classic,run 130,pumped,jazz / funk,ethnic,dubtechno,reggae,acid jazz,liquidfunk,funk,witch house,tech house,underground,artists,mystical,disco,sensorium,r&b,agender,psychedelic trance / psytrance,peaceful,run 140,piano,run 160,setting,meditation,christmas,ambient,horror,cinematic,electro house,idm,bass,minimal,underscore,drums,glitchy,beautiful,technology,tribal house,country pop,jazz & funk,documentary,space,classical,valentines,chillstep,experimental,trap,new jack swing,drama,post-rock,tense,corporate,neutral,happy,analog,funky,spiritual,sberzvuk special,chill hop,dramatic,catchy,holidays,fitness 90,optimistic,orchestra,acid techno,energizing,romantic,minimal house,breaks,hyper pop,warm up,dreamy,dark,urban,microfunk,dub,nu disco,vogue,keys,hardcore,aggressive,indie,electro funk,beauty,relaxing,trance,pop,hiphop,soft,acoustic,chillrave / ethno-house,deep techno,angry,dance,fun,dubstep,tropical,latin pop,heroic,world music,inspirational,uplifting,atmosphere,art,epic,advertising,chillout,scary,spooky,slow ballad,saxophone,summer,erotic,jazzy,energy 100,kara mar,xmas,atmospheric,indie pop,hip-hop,yoga,reggaeton,lounge,travel,running,folk,chillrave & ethno-house,detective,darkambient,chill,fantasy,minimal techno,special,night,tropical house,downtempo,lullaby,meditative,upbeat,glitch hop,fitness,neurofunk,sexual,indie rock,future pop,jazz,cyberpunk,melancholic,happy hardcore,family / kids,synths,electric guitar,comedy,psychedelic trance & psytrance,edm,psychedelic rock,calm,zen,bells,podcast,melodic house,ethnic percussion,nature,heavy,bassline,indie dance,techno,drumnbass,synth pop,vaporwave,sad,8-bit,chillgressive,deep,orchestral,futuristic,hardtechno,nostalgic,big room,sci-fi,tutorial,joyful,pads,minimal 170,drill,ethnic 108,amusing,sleepy ambient,psychill,italo disco,lofi,house,acoustic guitar,bassline house,rock,k-pop,synthwave,deep house,electronica,gabber,nightlife,sport & fitness,road 
trip,celebration,electro,disco house,electronic' -MUBERT_TAGS = np.array(MUBERT_TAGS_STRING.split(',')) \ No newline at end of file diff --git a/spaces/jackli888/stable-diffusion-webui/javascript/progressbar.js b/spaces/jackli888/stable-diffusion-webui/javascript/progressbar.js deleted file mode 100644 index ff6d757bae88f5f622767376e5315b9acf8271cd..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/javascript/progressbar.js +++ /dev/null @@ -1,243 +0,0 @@ -// code related to showing and updating progressbar shown as the image is being made - - -galleries = {} -storedGallerySelections = {} -galleryObservers = {} - -function rememberGallerySelection(id_gallery){ - storedGallerySelections[id_gallery] = getGallerySelectedIndex(id_gallery) -} - -function getGallerySelectedIndex(id_gallery){ - let galleryButtons = gradioApp().querySelectorAll('#'+id_gallery+' .gallery-item') - let galleryBtnSelected = gradioApp().querySelector('#'+id_gallery+' .gallery-item.\\!ring-2') - - let currentlySelectedIndex = -1 - galleryButtons.forEach(function(v, i){ if(v==galleryBtnSelected) { currentlySelectedIndex = i } }) - - return currentlySelectedIndex -} - -// this is a workaround for https://github.com/gradio-app/gradio/issues/2984 -function check_gallery(id_gallery){ - let gallery = gradioApp().getElementById(id_gallery) - // if gallery has no change, no need to setting up observer again. - if (gallery && galleries[id_gallery] !== gallery){ - galleries[id_gallery] = gallery; - if(galleryObservers[id_gallery]){ - galleryObservers[id_gallery].disconnect(); - } - - storedGallerySelections[id_gallery] = -1 - - galleryObservers[id_gallery] = new MutationObserver(function (){ - let galleryButtons = gradioApp().querySelectorAll('#'+id_gallery+' .gallery-item') - let galleryBtnSelected = gradioApp().querySelector('#'+id_gallery+' .gallery-item.\\!ring-2') - let currentlySelectedIndex = getGallerySelectedIndex(id_gallery) - prevSelectedIndex = storedGallerySelections[id_gallery] - storedGallerySelections[id_gallery] = -1 - - if (prevSelectedIndex !== -1 && galleryButtons.length>prevSelectedIndex && !galleryBtnSelected) { - // automatically re-open previously selected index (if exists) - activeElement = gradioApp().activeElement; - let scrollX = window.scrollX; - let scrollY = window.scrollY; - - galleryButtons[prevSelectedIndex].click(); - showGalleryImage(); - - // When the gallery button is clicked, it gains focus and scrolls itself into view - // We need to scroll back to the previous position - setTimeout(function (){ - window.scrollTo(scrollX, scrollY); - }, 50); - - if(activeElement){ - // i fought this for about an hour; i don't know why the focus is lost or why this helps recover it - // if someone has a better solution please by all means - setTimeout(function (){ - activeElement.focus({ - preventScroll: true // Refocus the element that was focused before the gallery was opened without scrolling to it - }) - }, 1); - } - } - }) - galleryObservers[id_gallery].observe( gallery, { childList:true, subtree:false }) - } -} - -onUiUpdate(function(){ - check_gallery('txt2img_gallery') - check_gallery('img2img_gallery') -}) - -function request(url, data, handler, errorHandler){ - var xhr = new XMLHttpRequest(); - var url = url; - xhr.open("POST", url, true); - xhr.setRequestHeader("Content-Type", "application/json"); - xhr.onreadystatechange = function () { - if (xhr.readyState === 4) { - if (xhr.status === 200) { - try { - var js = JSON.parse(xhr.responseText); - handler(js) - } 
catch (error) { - console.error(error); - errorHandler() - } - } else{ - errorHandler() - } - } - }; - var js = JSON.stringify(data); - xhr.send(js); -} - -function pad2(x){ - return x<10 ? '0'+x : x -} - -function formatTime(secs){ - if(secs > 3600){ - return pad2(Math.floor(secs/60/60)) + ":" + pad2(Math.floor(secs/60)%60) + ":" + pad2(Math.floor(secs)%60) - } else if(secs > 60){ - return pad2(Math.floor(secs/60)) + ":" + pad2(Math.floor(secs)%60) - } else{ - return Math.floor(secs) + "s" - } -} - -function setTitle(progress){ - var title = 'Stable Diffusion' - - if(opts.show_progress_in_title && progress){ - title = '[' + progress.trim() + '] ' + title; - } - - if(document.title != title){ - document.title = title; - } -} - - -function randomId(){ - return "task(" + Math.random().toString(36).slice(2, 7) + Math.random().toString(36).slice(2, 7) + Math.random().toString(36).slice(2, 7)+")" -} - -// starts sending progress requests to "/internal/progress" uri, creating progressbar above progressbarContainer element and -// preview inside gallery element. Cleans up all created stuff when the task is over and calls atEnd. -// calls onProgress every time there is a progress update -function requestProgress(id_task, progressbarContainer, gallery, atEnd, onProgress){ - var dateStart = new Date() - var wasEverActive = false - var parentProgressbar = progressbarContainer.parentNode - var parentGallery = gallery ? gallery.parentNode : null - - var divProgress = document.createElement('div') - divProgress.className='progressDiv' - divProgress.style.display = opts.show_progressbar ? "" : "none" - var divInner = document.createElement('div') - divInner.className='progress' - - divProgress.appendChild(divInner) - parentProgressbar.insertBefore(divProgress, progressbarContainer) - - if(parentGallery){ - var livePreview = document.createElement('div') - livePreview.className='livePreview' - parentGallery.insertBefore(livePreview, gallery) - } - - var removeProgressBar = function(){ - setTitle("") - parentProgressbar.removeChild(divProgress) - if(parentGallery) parentGallery.removeChild(livePreview) - atEnd() - } - - var fun = function(id_task, id_live_preview){ - request("./internal/progress", {"id_task": id_task, "id_live_preview": id_live_preview}, function(res){ - if(res.completed){ - removeProgressBar() - return - } - - var rect = progressbarContainer.getBoundingClientRect() - - if(rect.width){ - divProgress.style.width = rect.width + "px"; - } - - progressText = "" - - divInner.style.width = ((res.progress || 0) * 100.0) + '%' - divInner.style.background = res.progress ? "" : "transparent" - - if(res.progress > 0){ - progressText = ((res.progress || 0) * 100.0).toFixed(0) + '%' - } - - if(res.eta){ - progressText += " ETA: " + formatTime(res.eta) - } - - - setTitle(progressText) - - if(res.textinfo && res.textinfo.indexOf("\n") == -1){ - progressText = res.textinfo + " " + progressText - } - - divInner.textContent = progressText - - var elapsedFromStart = (new Date() - dateStart) / 1000 - - if(res.active) wasEverActive = true; - - if(! 
res.active && wasEverActive){ - removeProgressBar() - return - } - - if(elapsedFromStart > 5 && !res.queued && !res.active){ - removeProgressBar() - return - } - - - if(res.live_preview && gallery){ - var rect = gallery.getBoundingClientRect() - if(rect.width){ - livePreview.style.width = rect.width + "px" - livePreview.style.height = rect.height + "px" - } - - var img = new Image(); - img.onload = function() { - livePreview.appendChild(img) - if(livePreview.childElementCount > 2){ - livePreview.removeChild(livePreview.firstElementChild) - } - } - img.src = res.live_preview; - } - - - if(onProgress){ - onProgress(res) - } - - setTimeout(() => { - fun(id_task, res.id_live_preview); - }, opts.live_preview_refresh_period || 500) - }, function(){ - removeProgressBar() - }) - } - - fun(id_task, 0) -} diff --git a/spaces/jackli888/stable-diffusion-webui/modules/esrgan_model.py b/spaces/jackli888/stable-diffusion-webui/modules/esrgan_model.py deleted file mode 100644 index 80131c62cfeaa7f95455df55c45d6e62591adeee..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/esrgan_model.py +++ /dev/null @@ -1,233 +0,0 @@ -import os - -import numpy as np -import torch -from PIL import Image -from basicsr.utils.download_util import load_file_from_url - -import modules.esrgan_model_arch as arch -from modules import shared, modelloader, images, devices -from modules.upscaler import Upscaler, UpscalerData -from modules.shared import opts - - - -def mod2normal(state_dict): - # this code is copied from https://github.com/victorca25/iNNfer - if 'conv_first.weight' in state_dict: - crt_net = {} - items = [] - for k, v in state_dict.items(): - items.append(k) - - crt_net['model.0.weight'] = state_dict['conv_first.weight'] - crt_net['model.0.bias'] = state_dict['conv_first.bias'] - - for k in items.copy(): - if 'RDB' in k: - ori_k = k.replace('RRDB_trunk.', 'model.1.sub.') - if '.weight' in k: - ori_k = ori_k.replace('.weight', '.0.weight') - elif '.bias' in k: - ori_k = ori_k.replace('.bias', '.0.bias') - crt_net[ori_k] = state_dict[k] - items.remove(k) - - crt_net['model.1.sub.23.weight'] = state_dict['trunk_conv.weight'] - crt_net['model.1.sub.23.bias'] = state_dict['trunk_conv.bias'] - crt_net['model.3.weight'] = state_dict['upconv1.weight'] - crt_net['model.3.bias'] = state_dict['upconv1.bias'] - crt_net['model.6.weight'] = state_dict['upconv2.weight'] - crt_net['model.6.bias'] = state_dict['upconv2.bias'] - crt_net['model.8.weight'] = state_dict['HRconv.weight'] - crt_net['model.8.bias'] = state_dict['HRconv.bias'] - crt_net['model.10.weight'] = state_dict['conv_last.weight'] - crt_net['model.10.bias'] = state_dict['conv_last.bias'] - state_dict = crt_net - return state_dict - - -def resrgan2normal(state_dict, nb=23): - # this code is copied from https://github.com/victorca25/iNNfer - if "conv_first.weight" in state_dict and "body.0.rdb1.conv1.weight" in state_dict: - re8x = 0 - crt_net = {} - items = [] - for k, v in state_dict.items(): - items.append(k) - - crt_net['model.0.weight'] = state_dict['conv_first.weight'] - crt_net['model.0.bias'] = state_dict['conv_first.bias'] - - for k in items.copy(): - if "rdb" in k: - ori_k = k.replace('body.', 'model.1.sub.') - ori_k = ori_k.replace('.rdb', '.RDB') - if '.weight' in k: - ori_k = ori_k.replace('.weight', '.0.weight') - elif '.bias' in k: - ori_k = ori_k.replace('.bias', '.0.bias') - crt_net[ori_k] = state_dict[k] - items.remove(k) - - crt_net[f'model.1.sub.{nb}.weight'] = state_dict['conv_body.weight'] - 
crt_net[f'model.1.sub.{nb}.bias'] = state_dict['conv_body.bias'] - crt_net['model.3.weight'] = state_dict['conv_up1.weight'] - crt_net['model.3.bias'] = state_dict['conv_up1.bias'] - crt_net['model.6.weight'] = state_dict['conv_up2.weight'] - crt_net['model.6.bias'] = state_dict['conv_up2.bias'] - - if 'conv_up3.weight' in state_dict: - # modification supporting: https://github.com/ai-forever/Real-ESRGAN/blob/main/RealESRGAN/rrdbnet_arch.py - re8x = 3 - crt_net['model.9.weight'] = state_dict['conv_up3.weight'] - crt_net['model.9.bias'] = state_dict['conv_up3.bias'] - - crt_net[f'model.{8+re8x}.weight'] = state_dict['conv_hr.weight'] - crt_net[f'model.{8+re8x}.bias'] = state_dict['conv_hr.bias'] - crt_net[f'model.{10+re8x}.weight'] = state_dict['conv_last.weight'] - crt_net[f'model.{10+re8x}.bias'] = state_dict['conv_last.bias'] - - state_dict = crt_net - return state_dict - - -def infer_params(state_dict): - # this code is copied from https://github.com/victorca25/iNNfer - scale2x = 0 - scalemin = 6 - n_uplayer = 0 - plus = False - - for block in list(state_dict): - parts = block.split(".") - n_parts = len(parts) - if n_parts == 5 and parts[2] == "sub": - nb = int(parts[3]) - elif n_parts == 3: - part_num = int(parts[1]) - if (part_num > scalemin - and parts[0] == "model" - and parts[2] == "weight"): - scale2x += 1 - if part_num > n_uplayer: - n_uplayer = part_num - out_nc = state_dict[block].shape[0] - if not plus and "conv1x1" in block: - plus = True - - nf = state_dict["model.0.weight"].shape[0] - in_nc = state_dict["model.0.weight"].shape[1] - out_nc = out_nc - scale = 2 ** scale2x - - return in_nc, out_nc, nf, nb, plus, scale - - -class UpscalerESRGAN(Upscaler): - def __init__(self, dirname): - self.name = "ESRGAN" - self.model_url = "https://github.com/cszn/KAIR/releases/download/v1.0/ESRGAN.pth" - self.model_name = "ESRGAN_4x" - self.scalers = [] - self.user_path = dirname - super().__init__() - model_paths = self.find_models(ext_filter=[".pt", ".pth"]) - scalers = [] - if len(model_paths) == 0: - scaler_data = UpscalerData(self.model_name, self.model_url, self, 4) - scalers.append(scaler_data) - for file in model_paths: - if "http" in file: - name = self.model_name - else: - name = modelloader.friendly_name(file) - - scaler_data = UpscalerData(name, file, self, 4) - self.scalers.append(scaler_data) - - def do_upscale(self, img, selected_model): - model = self.load_model(selected_model) - if model is None: - return img - model.to(devices.device_esrgan) - img = esrgan_upscale(model, img) - return img - - def load_model(self, path: str): - if "http" in path: - filename = load_file_from_url(url=self.model_url, model_dir=self.model_path, - file_name="%s.pth" % self.model_name, - progress=True) - else: - filename = path - if not os.path.exists(filename) or filename is None: - print("Unable to load %s from %s" % (self.model_path, filename)) - return None - - state_dict = torch.load(filename, map_location='cpu' if devices.device_esrgan.type == 'mps' else None) - - if "params_ema" in state_dict: - state_dict = state_dict["params_ema"] - elif "params" in state_dict: - state_dict = state_dict["params"] - num_conv = 16 if "realesr-animevideov3" in filename else 32 - model = arch.SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=num_conv, upscale=4, act_type='prelu') - model.load_state_dict(state_dict) - model.eval() - return model - - if "body.0.rdb1.conv1.weight" in state_dict and "conv_first.weight" in state_dict: - nb = 6 if "RealESRGAN_x4plus_anime_6B" in filename else 23 - 
state_dict = resrgan2normal(state_dict, nb) - elif "conv_first.weight" in state_dict: - state_dict = mod2normal(state_dict) - elif "model.0.weight" not in state_dict: - raise Exception("The file is not a recognized ESRGAN model.") - - in_nc, out_nc, nf, nb, plus, mscale = infer_params(state_dict) - - model = arch.RRDBNet(in_nc=in_nc, out_nc=out_nc, nf=nf, nb=nb, upscale=mscale, plus=plus) - model.load_state_dict(state_dict) - model.eval() - - return model - - -def upscale_without_tiling(model, img): - img = np.array(img) - img = img[:, :, ::-1] - img = np.ascontiguousarray(np.transpose(img, (2, 0, 1))) / 255 - img = torch.from_numpy(img).float() - img = img.unsqueeze(0).to(devices.device_esrgan) - with torch.no_grad(): - output = model(img) - output = output.squeeze().float().cpu().clamp_(0, 1).numpy() - output = 255. * np.moveaxis(output, 0, 2) - output = output.astype(np.uint8) - output = output[:, :, ::-1] - return Image.fromarray(output, 'RGB') - - -def esrgan_upscale(model, img): - if opts.ESRGAN_tile == 0: - return upscale_without_tiling(model, img) - - grid = images.split_grid(img, opts.ESRGAN_tile, opts.ESRGAN_tile, opts.ESRGAN_tile_overlap) - newtiles = [] - scale_factor = 1 - - for y, h, row in grid.tiles: - newrow = [] - for tiledata in row: - x, w, tile = tiledata - - output = upscale_without_tiling(model, tile) - scale_factor = output.width // tile.width - - newrow.append([x * scale_factor, w * scale_factor, output]) - newtiles.append([y * scale_factor, h * scale_factor, newrow]) - - newgrid = images.Grid(newtiles, grid.tile_w * scale_factor, grid.tile_h * scale_factor, grid.image_w * scale_factor, grid.image_h * scale_factor, grid.overlap * scale_factor) - output = images.combine_grid(newgrid) - return output diff --git a/spaces/jhwen/bingo/src/components/learn-more.tsx b/spaces/jhwen/bingo/src/components/learn-more.tsx deleted file mode 100644 index a64459ee7900a612292e117a6bda96ee9260990f..0000000000000000000000000000000000000000 --- a/spaces/jhwen/bingo/src/components/learn-more.tsx +++ /dev/null @@ -1,39 +0,0 @@ -import React from 'react' -import { SourceAttribution } from '@/lib/bots/bing/types' - -export interface LearnMoreProps { - sourceAttributions?: SourceAttribution[] -} - -export function LearnMore({ sourceAttributions }: LearnMoreProps) { - if (!sourceAttributions?.length) { - return null - } - - return ( -
          -
Learn more:
          -
          -
          - {sourceAttributions.map((attribution, index) => { - const { providerDisplayName, seeMoreUrl } = attribution - const { host } = new URL(seeMoreUrl) - return ( - - {index + 1}. {host} - - ) - })} -
          -
          -
          - ) -} diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Signature/DSS.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Signature/DSS.py deleted file mode 100644 index fa848179ef108ed53168b0bf6f2082196696a1e1..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Signature/DSS.py +++ /dev/null @@ -1,403 +0,0 @@ -# -# Signature/DSS.py : DSS.py -# -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -from Crypto.Util.asn1 import DerSequence -from Crypto.Util.number import long_to_bytes -from Crypto.Math.Numbers import Integer - -from Crypto.Hash import HMAC -from Crypto.PublicKey.ECC import EccKey -from Crypto.PublicKey.DSA import DsaKey - -__all__ = ['DssSigScheme', 'new'] - - -class DssSigScheme(object): - """A (EC)DSA signature object. - Do not instantiate directly. - Use :func:`Crypto.Signature.DSS.new`. - """ - - def __init__(self, key, encoding, order): - """Create a new Digital Signature Standard (DSS) object. - - Do not instantiate this object directly, - use `Crypto.Signature.DSS.new` instead. - """ - - self._key = key - self._encoding = encoding - self._order = order - - self._order_bits = self._order.size_in_bits() - self._order_bytes = (self._order_bits - 1) // 8 + 1 - - def can_sign(self): - """Return ``True`` if this signature object can be used - for signing messages.""" - - return self._key.has_private() - - def _compute_nonce(self, msg_hash): - raise NotImplementedError("To be provided by subclasses") - - def _valid_hash(self, msg_hash): - raise NotImplementedError("To be provided by subclasses") - - def sign(self, msg_hash): - """Compute the DSA/ECDSA signature of a message. - - Args: - msg_hash (hash object): - The hash that was carried out over the message. - The object belongs to the :mod:`Crypto.Hash` package. - Under mode ``'fips-186-3'``, the hash must be a FIPS - approved secure hash (SHA-2 or SHA-3). 
- - :return: The signature as ``bytes`` - :raise ValueError: if the hash algorithm is incompatible to the (EC)DSA key - :raise TypeError: if the (EC)DSA key has no private half - """ - - if not self._key.has_private(): - raise TypeError("Private key is needed to sign") - - if not self._valid_hash(msg_hash): - raise ValueError("Hash is not sufficiently strong") - - # Generate the nonce k (critical!) - nonce = self._compute_nonce(msg_hash) - - # Perform signature using the raw API - z = Integer.from_bytes(msg_hash.digest()[:self._order_bytes]) - sig_pair = self._key._sign(z, nonce) - - # Encode the signature into a single byte string - if self._encoding == 'binary': - output = b"".join([long_to_bytes(x, self._order_bytes) - for x in sig_pair]) - else: - # Dss-sig ::= SEQUENCE { - # r INTEGER, - # s INTEGER - # } - # Ecdsa-Sig-Value ::= SEQUENCE { - # r INTEGER, - # s INTEGER - # } - output = DerSequence(sig_pair).encode() - - return output - - def verify(self, msg_hash, signature): - """Check if a certain (EC)DSA signature is authentic. - - Args: - msg_hash (hash object): - The hash that was carried out over the message. - This is an object belonging to the :mod:`Crypto.Hash` module. - Under mode ``'fips-186-3'``, the hash must be a FIPS - approved secure hash (SHA-2 or SHA-3). - - signature (``bytes``): - The signature that needs to be validated. - - :raise ValueError: if the signature is not authentic - """ - - if not self._valid_hash(msg_hash): - raise ValueError("Hash is not sufficiently strong") - - if self._encoding == 'binary': - if len(signature) != (2 * self._order_bytes): - raise ValueError("The signature is not authentic (length)") - r_prime, s_prime = [Integer.from_bytes(x) - for x in (signature[:self._order_bytes], - signature[self._order_bytes:])] - else: - try: - der_seq = DerSequence().decode(signature, strict=True) - except (ValueError, IndexError): - raise ValueError("The signature is not authentic (DER)") - if len(der_seq) != 2 or not der_seq.hasOnlyInts(): - raise ValueError("The signature is not authentic (DER content)") - r_prime, s_prime = Integer(der_seq[0]), Integer(der_seq[1]) - - if not (0 < r_prime < self._order) or not (0 < s_prime < self._order): - raise ValueError("The signature is not authentic (d)") - - z = Integer.from_bytes(msg_hash.digest()[:self._order_bytes]) - result = self._key._verify(z, (r_prime, s_prime)) - if not result: - raise ValueError("The signature is not authentic") - # Make PyCrypto code to fail - return False - - -class DeterministicDsaSigScheme(DssSigScheme): - # Also applicable to ECDSA - - def __init__(self, key, encoding, order, private_key): - super(DeterministicDsaSigScheme, self).__init__(key, encoding, order) - self._private_key = private_key - - def _bits2int(self, bstr): - """See 2.3.2 in RFC6979""" - - result = Integer.from_bytes(bstr) - q_len = self._order.size_in_bits() - b_len = len(bstr) * 8 - if b_len > q_len: - # Only keep leftmost q_len bits - result >>= (b_len - q_len) - return result - - def _int2octets(self, int_mod_q): - """See 2.3.3 in RFC6979""" - - assert 0 < int_mod_q < self._order - return long_to_bytes(int_mod_q, self._order_bytes) - - def _bits2octets(self, bstr): - """See 2.3.4 in RFC6979""" - - z1 = self._bits2int(bstr) - if z1 < self._order: - z2 = z1 - else: - z2 = z1 - self._order - return self._int2octets(z2) - - def _compute_nonce(self, mhash): - """Generate k in a deterministic way""" - - # See section 3.2 in RFC6979.txt - # Step a - h1 = mhash.digest() - # Step b - mask_v = b'\x01' * mhash.digest_size 
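
The `_compute_nonce` method here follows RFC 6979, section 3.2: an HMAC-DRBG-style loop keyed with the private key and the message digest, retried until the candidate nonce falls in `(0, q)`. For reference, a self-contained sketch of the same derivation using only the standard library's `hashlib`/`hmac` in place of the module's `Integer` type (the function name `rfc6979_nonce` is illustrative, not part of this file):

```python
import hashlib
import hmac

def rfc6979_nonce(q, x, h1, hashfunc=hashlib.sha256):
    """Derive a deterministic (EC)DSA nonce k per RFC 6979, section 3.2.

    q: subgroup order, x: private key (int), h1: digest bytes of the message.
    """
    qlen = q.bit_length()
    rlen = (qlen + 7) // 8

    def bits2int(b):                      # 2.3.2: keep the leftmost qlen bits
        i = int.from_bytes(b, 'big')
        blen = len(b) * 8
        return i >> (blen - qlen) if blen > qlen else i

    def int2octets(i):                    # 2.3.3
        return i.to_bytes(rlen, 'big')

    def bits2octets(b):                   # 2.3.4
        z1 = bits2int(b)
        return int2octets(z1 - q if z1 >= q else z1)

    v = b'\x01' * hashfunc().digest_size  # step b
    k = b'\x00' * hashfunc().digest_size  # step c
    for tag in (b'\x00', b'\x01'):        # steps d-g
        k = hmac.new(k, v + tag + int2octets(x) + bits2octets(h1), hashfunc).digest()
        v = hmac.new(k, v, hashfunc).digest()
    while True:                           # step h
        t = b''
        while len(t) < rlen:
            v = hmac.new(k, v, hashfunc).digest()
            t += v
        nonce = bits2int(t)
        if 0 < nonce < q:
            return nonce
        k = hmac.new(k, v + b'\x00', hashfunc).digest()
        v = hmac.new(k, v, hashfunc).digest()
```
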
- # Step c - nonce_k = b'\x00' * mhash.digest_size - - for int_oct in (b'\x00', b'\x01'): - # Step d/f - nonce_k = HMAC.new(nonce_k, - mask_v + int_oct + - self._int2octets(self._private_key) + - self._bits2octets(h1), mhash).digest() - # Step e/g - mask_v = HMAC.new(nonce_k, mask_v, mhash).digest() - - nonce = -1 - while not (0 < nonce < self._order): - # Step h.C (second part) - if nonce != -1: - nonce_k = HMAC.new(nonce_k, mask_v + b'\x00', - mhash).digest() - mask_v = HMAC.new(nonce_k, mask_v, mhash).digest() - - # Step h.A - mask_t = b"" - - # Step h.B - while len(mask_t) < self._order_bytes: - mask_v = HMAC.new(nonce_k, mask_v, mhash).digest() - mask_t += mask_v - - # Step h.C (first part) - nonce = self._bits2int(mask_t) - return nonce - - def _valid_hash(self, msg_hash): - return True - - -class FipsDsaSigScheme(DssSigScheme): - - #: List of L (bit length of p) and N (bit length of q) combinations - #: that are allowed by FIPS 186-3. The security level is provided in - #: Table 2 of FIPS 800-57 (rev3). - _fips_186_3_L_N = ( - (1024, 160), # 80 bits (SHA-1 or stronger) - (2048, 224), # 112 bits (SHA-224 or stronger) - (2048, 256), # 128 bits (SHA-256 or stronger) - (3072, 256) # 256 bits (SHA-512) - ) - - def __init__(self, key, encoding, order, randfunc): - super(FipsDsaSigScheme, self).__init__(key, encoding, order) - self._randfunc = randfunc - - L = Integer(key.p).size_in_bits() - if (L, self._order_bits) not in self._fips_186_3_L_N: - error = ("L/N (%d, %d) is not compliant to FIPS 186-3" - % (L, self._order_bits)) - raise ValueError(error) - - def _compute_nonce(self, msg_hash): - # hash is not used - return Integer.random_range(min_inclusive=1, - max_exclusive=self._order, - randfunc=self._randfunc) - - def _valid_hash(self, msg_hash): - """Verify that SHA-1, SHA-2 or SHA-3 are used""" - return (msg_hash.oid == "1.3.14.3.2.26" or - msg_hash.oid.startswith("2.16.840.1.101.3.4.2.")) - - -class FipsEcDsaSigScheme(DssSigScheme): - - def __init__(self, key, encoding, order, randfunc): - super(FipsEcDsaSigScheme, self).__init__(key, encoding, order) - self._randfunc = randfunc - - def _compute_nonce(self, msg_hash): - return Integer.random_range(min_inclusive=1, - max_exclusive=self._key._curve.order, - randfunc=self._randfunc) - - def _valid_hash(self, msg_hash): - """Verify that the strength of the hash matches or exceeds - the strength of the EC. We fail if the hash is too weak.""" - - modulus_bits = self._key.pointQ.size_in_bits() - - # SHS: SHA-2, SHA-3, truncated SHA-512 - sha224 = ("2.16.840.1.101.3.4.2.4", "2.16.840.1.101.3.4.2.7", "2.16.840.1.101.3.4.2.5") - sha256 = ("2.16.840.1.101.3.4.2.1", "2.16.840.1.101.3.4.2.8", "2.16.840.1.101.3.4.2.6") - sha384 = ("2.16.840.1.101.3.4.2.2", "2.16.840.1.101.3.4.2.9") - sha512 = ("2.16.840.1.101.3.4.2.3", "2.16.840.1.101.3.4.2.10") - shs = sha224 + sha256 + sha384 + sha512 - - try: - result = msg_hash.oid in shs - except AttributeError: - result = False - return result - - -def new(key, mode, encoding='binary', randfunc=None): - """Create a signature object :class:`DssSigScheme` that - can perform (EC)DSA signature or verification. - - .. note:: - Refer to `NIST SP 800 Part 1 Rev 4`_ (or newer release) for an - overview of the recommended key lengths. - - Args: - key (:class:`Crypto.PublicKey.DSA` or :class:`Crypto.PublicKey.ECC`): - The key to use for computing the signature (*private* keys only) - or for verifying one. 
- For DSA keys, let ``L`` and ``N`` be the bit lengths of the modulus ``p`` - and of ``q``: the pair ``(L,N)`` must appear in the following list, - in compliance to section 4.2 of `FIPS 186-4`_: - - - (1024, 160) *legacy only; do not create new signatures with this* - - (2048, 224) *deprecated; do not create new signatures with this* - - (2048, 256) - - (3072, 256) - - For ECC, only keys over P-224, P-256, P-384, and P-521 are accepted. - - mode (string): - The parameter can take these values: - - - ``'fips-186-3'``. The signature generation is randomized and carried out - according to `FIPS 186-3`_: the nonce ``k`` is taken from the RNG. - - ``'deterministic-rfc6979'``. The signature generation is not - randomized. See RFC6979_. - - encoding (string): - How the signature is encoded. This value determines the output of - :meth:`sign` and the input to :meth:`verify`. - - The following values are accepted: - - - ``'binary'`` (default), the signature is the raw concatenation - of ``r`` and ``s``. It is defined in the IEEE P.1363 standard. - For DSA, the size in bytes of the signature is ``N/4`` bytes - (e.g. 64 for ``N=256``). - For ECDSA, the signature is always twice the length of a point - coordinate (e.g. 64 bytes for P-256). - - - ``'der'``, the signature is a ASN.1 DER SEQUENCE - with two INTEGERs (``r`` and ``s``). It is defined in RFC3279_. - The size of the signature is variable. - - randfunc (callable): - A function that returns random ``bytes``, of a given length. - If omitted, the internal RNG is used. - Only applicable for the *'fips-186-3'* mode. - - .. _FIPS 186-3: http://csrc.nist.gov/publications/fips/fips186-3/fips_186-3.pdf - .. _FIPS 186-4: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf - .. _NIST SP 800 Part 1 Rev 4: http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57pt1r4.pdf - .. _RFC6979: http://tools.ietf.org/html/rfc6979 - .. _RFC3279: https://tools.ietf.org/html/rfc3279#section-2.2.2 - """ - - # The goal of the 'mode' parameter is to avoid to - # have the current version of the standard as default. - # - # Over time, such version will be superseded by (for instance) - # FIPS 186-4 and it will be odd to have -3 as default. 
- - if encoding not in ('binary', 'der'): - raise ValueError("Unknown encoding '%s'" % encoding) - - if isinstance(key, EccKey): - order = key._curve.order - private_key_attr = 'd' - if key._curve.name == "ed25519": - raise ValueError("ECC key is not on a NIST P curve") - elif isinstance(key, DsaKey): - order = Integer(key.q) - private_key_attr = 'x' - else: - raise ValueError("Unsupported key type " + str(type(key))) - - if key.has_private(): - private_key = getattr(key, private_key_attr) - else: - private_key = None - - if mode == 'deterministic-rfc6979': - return DeterministicDsaSigScheme(key, encoding, order, private_key) - elif mode == 'fips-186-3': - if isinstance(key, EccKey): - return FipsEcDsaSigScheme(key, encoding, order, randfunc) - else: - return FipsDsaSigScheme(key, encoding, order, randfunc) - else: - raise ValueError("Unknown DSS mode '%s'" % mode) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/SRV.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/SRV.py deleted file mode 100644 index 84c5400728661ca94ff5a5dde4884a9a60771e35..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/SRV.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
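
A minimal usage sketch for the `Crypto.Signature.DSS` module removed above, following pycryptodome's documented API (the hash, mode, curve, and message bytes are illustrative choices, not taken from this repo):

```python
from Crypto.PublicKey import ECC
from Crypto.Hash import SHA256
from Crypto.Signature import DSS

key = ECC.generate(curve='P-256')
h = SHA256.new(b'message to sign')

# 'fips-186-3' draws the nonce from the RNG; 'deterministic-rfc6979'
# derives it from the key and message as sketched earlier.
signer = DSS.new(key, 'fips-186-3')
signature = signer.sign(h)

verifier = DSS.new(key.public_key(), 'fips-186-3')
try:
    verifier.verify(h, signature)
    print("The signature is authentic.")
except ValueError:
    print("The signature is not authentic.")
```
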
- -import struct - -import dns.exception -import dns.immutable -import dns.name -import dns.rdata -import dns.rdtypes.util - - -@dns.immutable.immutable -class SRV(dns.rdata.Rdata): - - """SRV record""" - - # see: RFC 2782 - - __slots__ = ["priority", "weight", "port", "target"] - - def __init__(self, rdclass, rdtype, priority, weight, port, target): - super().__init__(rdclass, rdtype) - self.priority = self._as_uint16(priority) - self.weight = self._as_uint16(weight) - self.port = self._as_uint16(port) - self.target = self._as_name(target) - - def to_text(self, origin=None, relativize=True, **kw): - target = self.target.choose_relativity(origin, relativize) - return "%d %d %d %s" % (self.priority, self.weight, self.port, target) - - @classmethod - def from_text( - cls, rdclass, rdtype, tok, origin=None, relativize=True, relativize_to=None - ): - priority = tok.get_uint16() - weight = tok.get_uint16() - port = tok.get_uint16() - target = tok.get_name(origin, relativize, relativize_to) - return cls(rdclass, rdtype, priority, weight, port, target) - - def _to_wire(self, file, compress=None, origin=None, canonicalize=False): - three_ints = struct.pack("!HHH", self.priority, self.weight, self.port) - file.write(three_ints) - self.target.to_wire(file, compress, origin, canonicalize) - - @classmethod - def from_wire_parser(cls, rdclass, rdtype, parser, origin=None): - (priority, weight, port) = parser.get_struct("!HHH") - target = parser.get_name(origin) - return cls(rdclass, rdtype, priority, weight, port, target) - - def _processing_priority(self): - return self.priority - - def _processing_weight(self): - return self.weight - - @classmethod - def _processing_order(cls, iterable): - return dns.rdtypes.util.weighted_processing_order(iterable) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/faiss.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/faiss.py deleted file mode 100644 index cfa3e0bf58583699f4732023241ee76093a4c21a..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/faiss.py +++ /dev/null @@ -1,75 +0,0 @@ -"""Faiss reader.""" - -from typing import Any, Dict, List - -import numpy as np - -from gpt_index.readers.base import BaseReader -from gpt_index.readers.schema.base import Document - - -class FaissReader(BaseReader): - """Faiss reader. - - Retrieves documents through an existing in-memory Faiss index. - These documents can then be used in a downstream LlamaIndex data structure. - If you wish use Faiss itself as an index to to organize documents, - insert documents, and perform queries on them, please use GPTFaissIndex. - - Args: - faiss_index (faiss.Index): A Faiss Index object (required) - - """ - - def __init__(self, index: Any): - """Initialize with parameters.""" - import_err_msg = """ - `faiss` package not found. For instructions on - how to install `faiss` please visit - https://github.com/facebookresearch/faiss/wiki/Installing-Faiss - """ - try: - import faiss # noqa: F401 - except ImportError: - raise ImportError(import_err_msg) - - self._index = index - - def load_data( - self, - query: np.ndarray, - id_to_text_map: Dict[str, str], - k: int = 4, - separate_documents: bool = True, - ) -> List[Document]: - """Load data from Faiss. - - Args: - query (np.ndarray): A 2D numpy array of query vectors. - id_to_text_map (Dict[str, str]): A map from ID's to text. - k (int): Number of nearest neighbors to retrieve. 
Defaults to 4. - separate_documents (Optional[bool]): Whether to return separate - documents. Defaults to True. - Returns: - List[Document]: A list of documents. - - """ - dists, indices = self._index.search(query, k) - documents = [] - for qidx in range(indices.shape[0]): - for didx in range(indices.shape[1]): - doc_id = indices[qidx, didx] - if doc_id not in id_to_text_map: - raise ValueError( - f"Document ID {doc_id} not found in id_to_text_map." - ) - text = id_to_text_map[doc_id] - documents.append(Document(text=text)) - - if not separate_documents: - # join all documents into one - text_list = [doc.get_text() for doc in documents] - text = "\n\n".join(text_list) - documents = [Document(text=text)] - - return documents diff --git a/spaces/jotarodadada/animeCf/README.md b/spaces/jotarodadada/animeCf/README.md deleted file mode 100644 index 6f4e6157b97caed784727a651a2b99df5c38e8c8..0000000000000000000000000000000000000000 --- a/spaces/jotarodadada/animeCf/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Real CUGAN -emoji: 🐢 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: gpl-3.0 -duplicated_from: siyangyuan/animeCf ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/junkmind/SOTER/Dockerfile b/spaces/junkmind/SOTER/Dockerfile deleted file mode 100644 index 8745c0c37a11c6bd3abf4a579dfae7742e9fd979..0000000000000000000000000000000000000000 --- a/spaces/junkmind/SOTER/Dockerfile +++ /dev/null @@ -1,54 +0,0 @@ -ARG PYTORCH="1.10.0" -ARG CUDA="11.3" -ARG CUDNN="8" - -FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel - -ENV TORCH_NVCC_FLAGS="-Xfatbin -compress-all" -ENV CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" - -# Setting noninteractive build, setting up tzdata and configuring timezones -ENV DEBIAN_FRONTEND=noninteractive -ENV TZ=Europe/Berlin -RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone - -RUN apt-get update && apt-get install -y libglib2.0-0 libsm6 libxrender-dev libxext6 nano mc glances vim git \ - && apt-get clean \ - && rm -rf /var/lib/apt/lists/* - -# Install cython -RUN conda install cython -y && conda clean --all - -# Installing APEX -RUN pip install -U pip -RUN git clone https://github.com/NVIDIA/apex -RUN sed -i 's/check_cuda_torch_binary_vs_bare_metal(torch.utils.cpp_extension.CUDA_HOME)/pass/g' apex/setup.py -RUN pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex -RUN apt-get update -y -RUN apt-get install build-essential cmake -y -RUN apt-get install libopenblas-dev liblapack-dev -y -RUN apt-get install libx11-dev libgtk-3-dev -y -RUN pip install dlib -RUN pip install facenet-pytorch -RUN pip install albumentations==1.0.0 timm==0.4.12 pytorch_toolbelt tensorboardx -RUN pip install cython jupyter jupyterlab ipykernel matplotlib tqdm pandas - -# download pretraned Imagenet models -RUN apt install wget -RUN wget https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b7_ns-1dbc32de.pth -P /root/.cache/torch/hub/checkpoints/ -RUN wget https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b5_ns-6f26d0cf.pth -P /root/.cache/torch/hub/checkpoints/ - -# Setting the working directory -WORKDIR /workspace - -# Copying the required codebase -COPY . /workspace - -RUN chmod 777 preprocess_data.sh -RUN chmod 777 train.sh -RUN chmod 777 predict_submission.sh - -ENV PYTHONPATH=. 
- -CMD ["/bin/bash"] - diff --git a/spaces/justest/gpt4free/CONTRIBUTING.md b/spaces/justest/gpt4free/CONTRIBUTING.md deleted file mode 100644 index 67aa60da1ce8322d31d71d9c8460f845f338bcde..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/CONTRIBUTING.md +++ /dev/null @@ -1,8 +0,0 @@ -gpt4free logo - -### Please, follow these steps to contribute: -1. Reverse a website from this list: [sites-to-reverse](https://github.com/xtekky/gpt4free/issues/40) -2. Add it to [./testing](https://github.com/xtekky/gpt4free/tree/main/testing) -3. Refractor it and add it to [./g4f](https://github.com/xtekky/gpt4free/tree/main/g4f) - -### We will be grateful to see you as a contributor! diff --git a/spaces/kevinwang676/Bark-Voice-Cloning/app.py b/spaces/kevinwang676/Bark-Voice-Cloning/app.py deleted file mode 100644 index dd274892625395e1d264dd7f8e7a600faa0a09b1..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bark-Voice-Cloning/app.py +++ /dev/null @@ -1,401 +0,0 @@ -from cProfile import label -import dataclasses -from distutils.command.check import check -from doctest import Example -import gradio as gr -import os -import sys -import numpy as np -import logging -import torch -import pytorch_seed -import time - -from xml.sax import saxutils -from bark.api import generate_with_settings -from bark.api import save_as_prompt -from util.settings import Settings -#import nltk - - -from bark import SAMPLE_RATE -from cloning.clonevoice import clone_voice -from bark.generation import SAMPLE_RATE, preload_models, _load_history_prompt, codec_decode -from scipy.io.wavfile import write as write_wav -from util.parseinput import split_and_recombine_text, build_ssml, is_ssml, create_clips_from_ssml -from datetime import datetime -from tqdm.auto import tqdm -from util.helper import create_filename, add_id3_tag -from swap_voice import swap_voice_from_audio -from training.training_prepare import prepare_semantics_from_text, prepare_wavs_from_semantics -from training.train import training_prepare_files, train - -settings = Settings('config.yaml') - - -def generate_text_to_speech(text, selected_speaker, text_temp, waveform_temp, eos_prob, quick_generation, complete_settings, seed, batchcount, progress=gr.Progress(track_tqdm=True)): - # Chunk the text into smaller pieces then combine the generated audio - - # generation settings - if selected_speaker == 'None': - selected_speaker = None - - voice_name = selected_speaker - - if text == None or len(text) < 1: - if selected_speaker == None: - raise gr.Error('No text entered!') - - # Extract audio data from speaker if no text and speaker selected - voicedata = _load_history_prompt(voice_name) - audio_arr = codec_decode(voicedata["fine_prompt"]) - result = create_filename(settings.output_folder_path, "None", "extract",".wav") - save_wav(audio_arr, result) - return result - - if batchcount < 1: - batchcount = 1 - - - silenceshort = np.zeros(int((float(settings.silence_sentence) / 1000.0) * SAMPLE_RATE), dtype=np.int16) # quarter second of silence - silencelong = np.zeros(int((float(settings.silence_speakers) / 1000.0) * SAMPLE_RATE), dtype=np.float32) # half a second of silence - use_last_generation_as_history = "Use last generation as history" in complete_settings - save_last_generation = "Save generation as Voice" in complete_settings - for l in range(batchcount): - currentseed = seed - if seed != None and seed > 2**32 - 1: - logger.warning(f"Seed {seed} > 2**32 - 1 (max), setting to random") - currentseed = None - if currentseed == None or 
currentseed <= 0: - currentseed = np.random.default_rng().integers(1, 2**32 - 1) - assert(0 < currentseed and currentseed < 2**32) - - progress(0, desc="Generating") - - full_generation = None - - all_parts = [] - complete_text = "" - text = text.lstrip() - if is_ssml(text): - list_speak = create_clips_from_ssml(text) - prev_speaker = None - for i, clip in tqdm(enumerate(list_speak), total=len(list_speak)): - selected_speaker = clip[0] - # Add pause break between speakers - if i > 0 and selected_speaker != prev_speaker: - all_parts += [silencelong.copy()] - prev_speaker = selected_speaker - text = clip[1] - text = saxutils.unescape(text) - if selected_speaker == "None": - selected_speaker = None - - print(f"\nGenerating Text ({i+1}/{len(list_speak)}) -> {selected_speaker} (Seed {currentseed}):`{text}`") - complete_text += text - with pytorch_seed.SavedRNG(currentseed): - audio_array = generate_with_settings(text_prompt=text, voice_name=selected_speaker, semantic_temp=text_temp, coarse_temp=waveform_temp, eos_p=eos_prob) - currentseed = torch.random.initial_seed() - if len(list_speak) > 1: - filename = create_filename(settings.output_folder_path, currentseed, "audioclip",".wav") - save_wav(audio_array, filename) - add_id3_tag(filename, text, selected_speaker, currentseed) - - all_parts += [audio_array] - else: - texts = split_and_recombine_text(text, settings.input_text_desired_length, settings.input_text_max_length) - for i, text in tqdm(enumerate(texts), total=len(texts)): - print(f"\nGenerating Text ({i+1}/{len(texts)}) -> {selected_speaker} (Seed {currentseed}):`{text}`") - complete_text += text - if quick_generation == True: - with pytorch_seed.SavedRNG(currentseed): - audio_array = generate_with_settings(text_prompt=text, voice_name=selected_speaker, semantic_temp=text_temp, coarse_temp=waveform_temp, eos_p=eos_prob) - currentseed = torch.random.initial_seed() - else: - full_output = use_last_generation_as_history or save_last_generation - if full_output: - full_generation, audio_array = generate_with_settings(text_prompt=text, voice_name=voice_name, semantic_temp=text_temp, coarse_temp=waveform_temp, eos_p=eos_prob, output_full=True) - else: - audio_array = generate_with_settings(text_prompt=text, voice_name=voice_name, semantic_temp=text_temp, coarse_temp=waveform_temp, eos_p=eos_prob) - - # Noticed this in the HF Demo - convert to 16bit int -32767/32767 - most used audio format - # audio_array = (audio_array * 32767).astype(np.int16) - - if len(texts) > 1: - filename = create_filename(settings.output_folder_path, currentseed, "audioclip",".wav") - save_wav(audio_array, filename) - add_id3_tag(filename, text, selected_speaker, currentseed) - - if quick_generation == False and (save_last_generation == True or use_last_generation_as_history == True): - # save to npz - voice_name = create_filename(settings.output_folder_path, seed, "audioclip", ".npz") - save_as_prompt(voice_name, full_generation) - if use_last_generation_as_history: - selected_speaker = voice_name - - all_parts += [audio_array] - # Add short pause between sentences - if text[-1] in "!?.\n" and i > 1: - all_parts += [silenceshort.copy()] - - # save & play audio - result = create_filename(settings.output_folder_path, currentseed, "final",".wav") - save_wav(np.concatenate(all_parts), result) - # write id3 tag with text truncated to 60 chars, as a precaution... 
- add_id3_tag(result, complete_text, selected_speaker, currentseed) - - return result - - - -def save_wav(audio_array, filename): - write_wav(filename, SAMPLE_RATE, audio_array) - -def save_voice(filename, semantic_prompt, coarse_prompt, fine_prompt): - np.savez_compressed( - filename, - semantic_prompt=semantic_prompt, - coarse_prompt=coarse_prompt, - fine_prompt=fine_prompt - ) - - -def on_quick_gen_changed(checkbox): - if checkbox == False: - return gr.CheckboxGroup.update(visible=True) - return gr.CheckboxGroup.update(visible=False) - -def delete_output_files(checkbox_state): - if checkbox_state: - outputs_folder = os.path.join(os.getcwd(), settings.output_folder_path) - if os.path.exists(outputs_folder): - purgedir(outputs_folder) - return False - - -# https://stackoverflow.com/a/54494779 -def purgedir(parent): - for root, dirs, files in os.walk(parent): - for item in files: - # Delete subordinate files - filespec = os.path.join(root, item) - os.unlink(filespec) - for item in dirs: - # Recursively perform this operation for subordinate directories - purgedir(os.path.join(root, item)) - -def convert_text_to_ssml(text, selected_speaker): - return build_ssml(text, selected_speaker) - - -def training_prepare(selected_step, num_text_generations, progress=gr.Progress(track_tqdm=True)): - if selected_step == prepare_training_list[0]: - prepare_semantics_from_text() - else: - prepare_wavs_from_semantics() - return None - - -def start_training(save_model_epoch, max_epochs, progress=gr.Progress(track_tqdm=True)): - training_prepare_files("./training/data/", "./training/data/checkpoint/hubert_base_ls960.pt") - train("./training/data/", save_model_epoch, max_epochs) - return None - - - -def apply_settings(themes, input_server_name, input_server_port, input_server_public, input_desired_len, input_max_len, input_silence_break, input_silence_speaker): - settings.selected_theme = themes - settings.server_name = input_server_name - settings.server_port = input_server_port - settings.server_share = input_server_public - settings.input_text_desired_length = input_desired_len - settings.input_text_max_length = input_max_len - settings.silence_sentence = input_silence_break - settings.silence_speaker = input_silence_speaker - settings.save() - -def restart(): - global restart_server - restart_server = True - - -def create_version_html(): - python_version = ".".join([str(x) for x in sys.version_info[0:3]]) - versions_html = f""" -python: {python_version} - •  -torch: {getattr(torch, '__long_version__',torch.__version__)} - •  -gradio: {gr.__version__} -""" - return versions_html - - - -logger = logging.getLogger(__name__) -APPTITLE = "Bark Voice Cloning UI" - - -autolaunch = False - -if len(sys.argv) > 1: - autolaunch = "-autolaunch" in sys.argv - - -if torch.cuda.is_available() == False: - os.environ['BARK_FORCE_CPU'] = 'True' - logger.warning("No CUDA detected, fallback to CPU!") - -print(f'smallmodels={os.environ.get("SUNO_USE_SMALL_MODELS", False)}') -print(f'enablemps={os.environ.get("SUNO_ENABLE_MPS", False)}') -print(f'offloadcpu={os.environ.get("SUNO_OFFLOAD_CPU", False)}') -print(f'forcecpu={os.environ.get("BARK_FORCE_CPU", False)}') -print(f'autolaunch={autolaunch}\n\n') - -#print("Updating nltk\n") -#nltk.download('punkt') - -print("Preloading Models\n") -preload_models() - -available_themes = ["Default", "gradio/glass", "gradio/monochrome", "gradio/seafoam", "gradio/soft", "gstaff/xkcd", "freddyaboulton/dracula_revamped", "ysharma/steampunk"] -tokenizer_language_list = ["de","en", "pl"] 
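
`save_voice()` above writes a Bark voice prompt as three arrays via `np.savez_compressed`, and the `.npz` speaker files scanned at startup below carry exactly those keys. The inverse is a plain `np.load`; a minimal sketch (the helper name `load_voice` is illustrative, not part of the app):

```python
import numpy as np

def load_voice(filename):
    # Counterpart to save_voice() above: a voice prompt .npz holds exactly
    # these three arrays under these names.
    data = np.load(filename)
    return data["semantic_prompt"], data["coarse_prompt"], data["fine_prompt"]
```
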
-prepare_training_list = ["Step 1: Semantics from Text","Step 2: WAV from Semantics"] - -seed = -1 -server_name = settings.server_name -if len(server_name) < 1: - server_name = None -server_port = settings.server_port -if server_port <= 0: - server_port = None -global run_server -global restart_server - -run_server = True - -while run_server: - # Collect all existing speakers/voices in dir - speakers_list = [] - - for root, dirs, files in os.walk("./bark/assets/prompts"): - for file in files: - if file.endswith(".npz"): - pathpart = root.replace("./bark/assets/prompts", "") - name = os.path.join(pathpart, file[:-4]) - if name.startswith("/") or name.startswith("\\"): - name = name[1:] - speakers_list.append(name) - - speakers_list = sorted(speakers_list, key=lambda x: x.lower()) - speakers_list.insert(0, 'None') - - print(f'Launching {APPTITLE} Server') - - # Create Gradio Blocks - - with gr.Blocks(title=f"{APPTITLE}", mode=f"{APPTITLE}", theme=settings.selected_theme) as barkgui: - gr.Markdown("#
🐶🎶⭐ - Bark Realistic Voice 2.0: One-Click Voice Cloning
          ") - gr.Markdown("###
🤗 - Ushering in a new era of lifelike voice and emotion cloning 🌊
          ") - gr.Markdown("###
          🎡 - Based on [bark-gui](https://github.com/C0untFloyd/bark-gui)
          ") - gr.Markdown(f""" You can duplicate and use it with a GPU: Duplicate Space - or open in [Colab](https://colab.research.google.com/github/KevinWang676/Bark-Voice-Cloning/blob/main/Bark_Voice_Cloning_UI.ipynb) for quick start 🌟 - """) - - with gr.Tab("🎙️ - Clone Voice"): - with gr.Row(): - input_audio_filename = gr.Audio(label="Input audio.wav", source="upload", type="filepath") - #transcription_text = gr.Textbox(label="Transcription Text", lines=1, placeholder="Enter Text of your Audio Sample here...") - with gr.Row(): - with gr.Column(): - initialname = "/home/user/app/bark/assets/prompts/file" - output_voice = gr.Textbox(label="Filename of trained Voice (do not change the initial name)", lines=1, placeholder=initialname, value=initialname, visible=False) - with gr.Column(): - tokenizerlang = gr.Dropdown(tokenizer_language_list, label="Base Language Tokenizer", value=tokenizer_language_list[1], visible=False) - with gr.Row(): - clone_voice_button = gr.Button("Create Voice", variant="primary") - with gr.Row(): - dummy = gr.Text(label="Progress") - npz_file = gr.File(label=".npz file") - speakers_list.insert(0, npz_file) # add prompt - - with gr.Tab("🎵 - TTS"): - with gr.Row(): - with gr.Column(): - placeholder = "Enter text here." - input_text = gr.Textbox(label="Input Text", lines=4, placeholder=placeholder) - convert_to_ssml_button = gr.Button("Convert Input Text to SSML") - with gr.Column(): - seedcomponent = gr.Number(label="Seed (default -1 = Random)", precision=0, value=-1) - batchcount = gr.Number(label="Batch count", precision=0, value=1) - - with gr.Row(): - with gr.Column(): - gr.Markdown("[Voice Prompt Library](https://suno-ai.notion.site/8b8e8749ed514b0cbf3f699013548683?v=bc67cff786b04b50b3ceb756fd05f68c)") - speaker = gr.Dropdown(speakers_list, value=speakers_list[0], label="Voice (Choose “file” if you wanna use the custom voice)") - - with gr.Column(): - text_temp = gr.Slider(0.1, 1.0, value=0.6, label="Generation Temperature", info="1.0 more diverse, 0.1 more conservative") - waveform_temp = gr.Slider(0.1, 1.0, value=0.7, label="Waveform temperature", info="1.0 more diverse, 0.1 more conservative") - - with gr.Row(): - with gr.Column(): - quick_gen_checkbox = gr.Checkbox(label="Quick Generation", value=True) - settings_checkboxes = ["Use last generation as history", "Save generation as Voice"] - complete_settings = gr.CheckboxGroup(choices=settings_checkboxes, value=settings_checkboxes, label="Detailed Generation Settings", type="value", interactive=True, visible=False) - with gr.Column(): - eos_prob = gr.Slider(0.0, 0.5, value=0.05, label="End of sentence probability") - - with gr.Row(): - with gr.Column(): - tts_create_button = gr.Button("Generate", variant="primary") - with gr.Column(): - hidden_checkbox = gr.Checkbox(visible=False) - button_stop_generation = gr.Button("Stop generation") - with gr.Row(): - output_audio = gr.Audio(label="Generated Audio", type="filepath") - - with gr.Tab("🔮 - Voice Conversion"): - with gr.Row(): - swap_audio_filename = gr.Audio(label="Input audio.wav to swap voice", source="upload", type="filepath") - with gr.Row(): - with gr.Column(): - swap_tokenizer_lang = gr.Dropdown(tokenizer_language_list, label="Base Language Tokenizer", value=tokenizer_language_list[1]) - swap_seed = gr.Number(label="Seed (default -1 = Random)", precision=0, value=-1) - with gr.Column(): - speaker_swap = gr.Dropdown(speakers_list, value=speakers_list[0], label="Voice (Choose “file” if you wanna use the custom voice)") - swap_batchcount = 
gr.Number(label="Batch count", precision=0, value=1) - with gr.Row(): - swap_voice_button = gr.Button("Generate", variant="primary") - with gr.Row(): - output_swap = gr.Audio(label="Generated Audio", type="filepath") - - - quick_gen_checkbox.change(fn=on_quick_gen_changed, inputs=quick_gen_checkbox, outputs=complete_settings) - convert_to_ssml_button.click(convert_text_to_ssml, inputs=[input_text, speaker],outputs=input_text) - gen_click = tts_create_button.click(generate_text_to_speech, inputs=[input_text, speaker, text_temp, waveform_temp, eos_prob, quick_gen_checkbox, complete_settings, seedcomponent, batchcount],outputs=output_audio) - button_stop_generation.click(fn=None, inputs=None, outputs=None, cancels=[gen_click]) - - - - swap_voice_button.click(swap_voice_from_audio, inputs=[swap_audio_filename, speaker_swap, swap_tokenizer_lang, swap_seed, swap_batchcount], outputs=output_swap) - clone_voice_button.click(clone_voice, inputs=[input_audio_filename, output_voice], outputs=[dummy, npz_file]) - - - restart_server = False - try: - barkgui.queue().launch(show_error=True) - except: - restart_server = True - run_server = False - try: - while restart_server == False: - time.sleep(1.0) - except (KeyboardInterrupt, OSError): - print("Keyboard interruption in main thread... closing server.") - run_server = False - barkgui.close() - - - - diff --git a/spaces/kevinwang676/Bert-VITS2/monotonic_align/core.c b/spaces/kevinwang676/Bert-VITS2/monotonic_align/core.c deleted file mode 100644 index 5f8af54d32474f821e9d1f4d2679d78128722596..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bert-VITS2/monotonic_align/core.c +++ /dev/null @@ -1,26530 +0,0 @@ -/* Generated by Cython 3.0.0 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "name": "monotonic_align.core", - "sources": [ - "core.pyx" - ] - }, - "module_name": "monotonic_align.core" -} -END: Cython Metadata */ - -#ifndef PY_SSIZE_T_CLEAN -#define PY_SSIZE_T_CLEAN -#endif /* PY_SSIZE_T_CLEAN */ -#if defined(CYTHON_LIMITED_API) && 0 - #ifndef Py_LIMITED_API - #if CYTHON_LIMITED_API+0 > 0x03030000 - #define Py_LIMITED_API CYTHON_LIMITED_API - #else - #define Py_LIMITED_API 0x03030000 - #endif - #endif -#endif - -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02070000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.7+ or Python 3.3+. -#else -#define CYTHON_ABI "3_0_0" -#define __PYX_ABI_MODULE_NAME "_cython_" CYTHON_ABI -#define __PYX_TYPE_MODULE_PREFIX __PYX_ABI_MODULE_NAME "." -#define CYTHON_HEX_VERSION 0x030000F0 -#define CYTHON_FUTURE_DIVISION 1 -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(_WIN32) && !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #define HAVE_LONG_LONG -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#if defined(GRAALVM_PYTHON) - /* For very preliminary testing purposes. Most variables are set the same as PyPy. 
- The existence of this section does not imply that anything works or is even tested */ - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 1 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS (PY_MAJOR_VERSION >= 3) - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PYPY_VERSION) - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS (PY_MAJOR_VERSION >= 3) - #endif - #if PY_VERSION_HEX < 0x03090000 - #undef 
CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT) - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1 && PYPY_VERSION_NUM >= 0x07030C00) - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(CYTHON_LIMITED_API) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 1 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_CLINE_IN_TRACEBACK - #define CYTHON_CLINE_IN_TRACEBACK 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 1 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #endif - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 1 - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PY_NOGIL) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 1 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #ifndef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - 
#define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #ifndef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #endif - #ifndef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #ifndef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 1 - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 || PY_VERSION_HEX >= 0x030B00A2 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #ifndef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL (PY_MAJOR_VERSION < 3 || PY_VERSION_HEX >= 0x03060000 && PY_VERSION_HEX < 0x030C00A6) - #endif - #ifndef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL (PY_VERSION_HEX >= 0x030700A1) - #endif - #ifndef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 1 - #endif - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT) - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #endif - #if PY_VERSION_HEX < 0x030400a1 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #elif !defined(CYTHON_USE_TP_FINALIZE) - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #if PY_VERSION_HEX < 0x030600B1 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #elif !defined(CYTHON_USE_DICT_VERSIONS) - #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX < 0x030C00A5) - #endif - #if PY_VERSION_HEX < 0x030700A3 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #elif !defined(CYTHON_USE_EXC_INFO_STACK) - #define CYTHON_USE_EXC_INFO_STACK 1 - #endif - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 1 - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if !defined(CYTHON_VECTORCALL) -#define CYTHON_VECTORCALL (CYTHON_FAST_PYCCALL && PY_VERSION_HEX >= 0x030800B1) -#endif -#define CYTHON_BACKPORT_VECTORCALL (CYTHON_METH_FASTCALL && 
PY_VERSION_HEX < 0x030800B1) -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_MAJOR_VERSION < 3 - #include "longintrepr.h" - #endif - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED - #if defined(__cplusplus) - /* for clang __has_cpp_attribute(maybe_unused) is true even before C++17 - * but leads to warnings with -pedantic, since it is a C++17 feature */ - #if ((defined(_MSVC_LANG) && _MSVC_LANG >= 201703L) || __cplusplus >= 201703L) - #if __has_cpp_attribute(maybe_unused) - #define CYTHON_UNUSED [[maybe_unused]] - #endif - #endif - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_UNUSED_VAR -# if defined(__cplusplus) - template void CYTHON_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR - #define CYTHON_MAYBE_UNUSED_VAR(x) CYTHON_UNUSED_VAR(x) -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned short uint16_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int16 uint16_t; - typedef unsigned __int32 uint32_t; - #endif - #endif - #if _MSC_VER < 1300 - #ifdef _WIN64 - typedef unsigned long long __pyx_uintptr_t; - #else - typedef unsigned int __pyx_uintptr_t; - #endif - #else - #ifdef _WIN64 - typedef unsigned __int64 __pyx_uintptr_t; - #else - typedef unsigned __int32 __pyx_uintptr_t; - #endif - #endif -#else - #include - typedef uintptr_t __pyx_uintptr_t; -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) - /* for clang __has_cpp_attribute(fallthrough) is true even before C++17 - * but leads to warnings with -pedantic, since it is a C++17 feature */ - #if ((defined(_MSVC_LANG) && _MSVC_LANG >= 201703L) || __cplusplus >= 201703L) - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__) && 
defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif -#ifdef __cplusplus - template - struct __PYX_IS_UNSIGNED_IMPL {static const bool value = T(0) < T(-1);}; - #define __PYX_IS_UNSIGNED(type) (__PYX_IS_UNSIGNED_IMPL::value) -#else - #define __PYX_IS_UNSIGNED(type) (((type)-1) > 0) -#endif -#if CYTHON_COMPILING_IN_PYPY == 1 - #define __PYX_NEED_TP_PRINT_SLOT (PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x030A0000) -#else - #define __PYX_NEED_TP_PRINT_SLOT (PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000) -#endif -#define __PYX_REINTERPRET_FUNCION(func_pointer, other_pointer) ((func_pointer)(void(*)(void))(other_pointer)) - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) - #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_DefaultClassType PyClass_Type - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" - #define __Pyx_DefaultClassType PyType_Type -#if PY_VERSION_HEX >= 0x030B00A1 - static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int p, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyObject *kwds=NULL, *argcount=NULL, *posonlyargcount=NULL, *kwonlyargcount=NULL; - PyObject *nlocals=NULL, *stacksize=NULL, *flags=NULL, *replace=NULL, *empty=NULL; - const char *fn_cstr=NULL; - const char *name_cstr=NULL; - PyCodeObject *co=NULL, *result=NULL; - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); - if (!(kwds=PyDict_New())) goto end; - if (!(argcount=PyLong_FromLong(a))) goto end; - if (PyDict_SetItemString(kwds, "co_argcount", argcount) != 0) goto end; - if (!(posonlyargcount=PyLong_FromLong(p))) goto end; - if (PyDict_SetItemString(kwds, "co_posonlyargcount", posonlyargcount) != 0) goto end; - if (!(kwonlyargcount=PyLong_FromLong(k))) goto end; - if (PyDict_SetItemString(kwds, "co_kwonlyargcount", kwonlyargcount) != 0) goto end; - if (!(nlocals=PyLong_FromLong(l))) goto end; - if (PyDict_SetItemString(kwds, "co_nlocals", nlocals) != 0) goto end; - if (!(stacksize=PyLong_FromLong(s))) goto end; - if (PyDict_SetItemString(kwds, "co_stacksize", stacksize) != 0) goto end; - if (!(flags=PyLong_FromLong(f))) goto end; - if (PyDict_SetItemString(kwds, "co_flags", flags) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_code", code) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_consts", c) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_names", n) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_varnames", v) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_freevars", fv) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_cellvars", cell) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_linetable", lnos) != 0) goto end; - if (!(fn_cstr=PyUnicode_AsUTF8AndSize(fn, NULL))) goto end; - if 
(!(name_cstr=PyUnicode_AsUTF8AndSize(name, NULL))) goto end; - if (!(co = PyCode_NewEmpty(fn_cstr, name_cstr, fline))) goto end; - if (!(replace = PyObject_GetAttrString((PyObject*)co, "replace"))) goto end; - if (!(empty = PyTuple_New(0))) goto end; - result = (PyCodeObject*) PyObject_Call(replace, empty, kwds); - end: - Py_XDECREF((PyObject*) co); - Py_XDECREF(kwds); - Py_XDECREF(argcount); - Py_XDECREF(posonlyargcount); - Py_XDECREF(kwonlyargcount); - Py_XDECREF(nlocals); - Py_XDECREF(stacksize); - Py_XDECREF(replace); - Py_XDECREF(empty); - if (type) { - PyErr_Restore(type, value, traceback); - } - return result; - } -#elif PY_VERSION_HEX >= 0x030800B2 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_NewWithPosOnlyArgs(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif -#endif -#if PY_VERSION_HEX >= 0x030900A4 || defined(Py_IS_TYPE) - #define __Pyx_IS_TYPE(ob, type) Py_IS_TYPE(ob, type) -#else - #define __Pyx_IS_TYPE(ob, type) (((const PyObject*)ob)->ob_type == (type)) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_Is) - #define __Pyx_Py_Is(x, y) Py_Is(x, y) -#else - #define __Pyx_Py_Is(x, y) ((x) == (y)) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsNone) - #define __Pyx_Py_IsNone(ob) Py_IsNone(ob) -#else - #define __Pyx_Py_IsNone(ob) __Pyx_Py_Is((ob), Py_None) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsTrue) - #define __Pyx_Py_IsTrue(ob) Py_IsTrue(ob) -#else - #define __Pyx_Py_IsTrue(ob) __Pyx_Py_Is((ob), Py_True) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsFalse) - #define __Pyx_Py_IsFalse(ob) Py_IsFalse(ob) -#else - #define __Pyx_Py_IsFalse(ob) __Pyx_Py_Is((ob), Py_False) -#endif -#define __Pyx_NoneAsNull(obj) (__Pyx_Py_IsNone(obj) ? 
NULL : (obj)) -#if PY_VERSION_HEX >= 0x030900F0 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyObject_GC_IsFinalized(o) PyObject_GC_IsFinalized(o) -#else - #define __Pyx_PyObject_GC_IsFinalized(o) _PyGC_FINALIZED(o) -#endif -#ifndef CO_COROUTINE - #define CO_COROUTINE 0x80 -#endif -#ifndef CO_ASYNC_GENERATOR - #define CO_ASYNC_GENERATOR 0x200 -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef Py_TPFLAGS_SEQUENCE - #define Py_TPFLAGS_SEQUENCE 0 -#endif -#ifndef Py_TPFLAGS_MAPPING - #define Py_TPFLAGS_MAPPING 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_METH_FASTCALL - #define __Pyx_METH_FASTCALL METH_FASTCALL - #define __Pyx_PyCFunction_FastCall __Pyx_PyCFunctionFast - #define __Pyx_PyCFunction_FastCallWithKeywords __Pyx_PyCFunctionFastWithKeywords -#else - #define __Pyx_METH_FASTCALL METH_VARARGS - #define __Pyx_PyCFunction_FastCall PyCFunction - #define __Pyx_PyCFunction_FastCallWithKeywords PyCFunctionWithKeywords -#endif -#if CYTHON_VECTORCALL - #define __pyx_vectorcallfunc vectorcallfunc - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET PY_VECTORCALL_ARGUMENTS_OFFSET - #define __Pyx_PyVectorcall_NARGS(n) PyVectorcall_NARGS((size_t)(n)) -#elif CYTHON_BACKPORT_VECTORCALL - typedef PyObject *(*__pyx_vectorcallfunc)(PyObject *callable, PyObject *const *args, - size_t nargsf, PyObject *kwnames); - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET ((size_t)1 << (8 * sizeof(size_t) - 1)) - #define __Pyx_PyVectorcall_NARGS(n) ((Py_ssize_t)(((size_t)(n)) & ~__Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET)) -#else - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET 0 - #define __Pyx_PyVectorcall_NARGS(n) ((Py_ssize_t)(n)) -#endif -#if PY_VERSION_HEX < 0x030900B1 - #define __Pyx_PyType_FromModuleAndSpec(m, s, b) ((void)m, PyType_FromSpecWithBases(s, b)) - typedef PyObject *(*__Pyx_PyCMethod)(PyObject *, PyTypeObject *, PyObject *const *, size_t, PyObject *); -#else - #define __Pyx_PyType_FromModuleAndSpec(m, s, b) PyType_FromModuleAndSpec(m, s, b) - #define __Pyx_PyCMethod PyCMethod -#endif -#ifndef METH_METHOD - #define METH_METHOD 0x200 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyThreadState_Current PyThreadState_Get() -#elif !CYTHON_FAST_THREAD_STATE - #define __Pyx_PyThreadState_Current 
PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE void *__Pyx_PyModule_GetState(PyObject *op) -{ - void *result; - result = PyModule_GetState(op); - if (!result) - Py_FatalError("Couldn't find the module state"); - return result; -} -#endif -#define __Pyx_PyObject_GetSlot(obj, name, func_ctype) __Pyx_PyType_GetSlot(Py_TYPE(obj), name, func_ctype) -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyType_GetSlot(type, name, func_ctype) ((func_ctype) PyType_GetSlot((type), Py_##name)) -#else - #define __Pyx_PyType_GetSlot(type, name, func_ctype) ((type)->name) -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} -static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if PY_MAJOR_VERSION < 3 - #if CYTHON_COMPILING_IN_PYPY - #if PYPY_VERSION_NUM < 0x07030600 - #if defined(__cplusplus) && __cplusplus >= 201402L - [[deprecated("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6")]] - #elif defined(__GNUC__) || defined(__clang__) - __attribute__ ((__deprecated__("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6"))) - #elif defined(_MSC_VER) - __declspec(deprecated("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6")) - #endif - static CYTHON_INLINE int PyGILState_Check(void) { - return 0; - } - #else // PYPY_VERSION_NUM < 0x07030600 - #endif // PYPY_VERSION_NUM < 0x07030600 - #else - static CYTHON_INLINE int PyGILState_Check(void) { - PyThreadState * tstate = _PyThreadState_Current; - return tstate && (tstate == PyGILState_GetThisThreadState()); - } - #endif -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? 
PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX > 0x030600B4 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStrWithError(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -static CYTHON_INLINE PyObject * __Pyx_PyDict_GetItemStr(PyObject *dict, PyObject *name) { - PyObject *res = __Pyx_PyDict_GetItemStrWithError(dict, name); - if (res == NULL) PyErr_Clear(); - return res; -} -#elif PY_MAJOR_VERSION >= 3 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07020000) -#define __Pyx_PyDict_GetItemStrWithError PyDict_GetItemWithError -#define __Pyx_PyDict_GetItemStr PyDict_GetItem -#else -static CYTHON_INLINE PyObject * __Pyx_PyDict_GetItemStrWithError(PyObject *dict, PyObject *name) { -#if CYTHON_COMPILING_IN_PYPY - return PyDict_GetItem(dict, name); -#else - PyDictEntry *ep; - PyDictObject *mp = (PyDictObject*) dict; - long hash = ((PyStringObject *) name)->ob_shash; - assert(hash != -1); - ep = (mp->ma_lookup)(mp, name, hash); - if (ep == NULL) { - return NULL; - } - return ep->me_value; -#endif -} -#define __Pyx_PyDict_GetItemStr PyDict_GetItem -#endif -#if CYTHON_USE_TYPE_SLOTS - #define __Pyx_PyType_GetFlags(tp) (((PyTypeObject *)tp)->tp_flags) - #define __Pyx_PyType_HasFeature(type, feature) ((__Pyx_PyType_GetFlags(type) & (feature)) != 0) - #define __Pyx_PyObject_GetIterNextFunc(obj) (Py_TYPE(obj)->tp_iternext) -#else - #define __Pyx_PyType_GetFlags(tp) (PyType_GetFlags((PyTypeObject *)tp)) - #define __Pyx_PyType_HasFeature(type, feature) PyType_HasFeature(type, feature) - #define __Pyx_PyObject_GetIterNextFunc(obj) PyIter_Next -#endif -#if CYTHON_USE_TYPE_SPECS && PY_VERSION_HEX >= 0x03080000 -#define __Pyx_PyHeapTypeObject_GC_Del(obj) {\ - PyTypeObject *type = Py_TYPE(obj);\ - assert(__Pyx_PyType_HasFeature(type, Py_TPFLAGS_HEAPTYPE));\ - PyObject_GC_Del(obj);\ - Py_DECREF(type);\ -} -#else -#define __Pyx_PyHeapTypeObject_GC_Del(obj) PyObject_GC_Del(obj) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define CYTHON_PEP393_ENABLED 1 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GetLength(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_ReadChar(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((void)u, 1114111U) - #define __Pyx_PyUnicode_KIND(u) ((void)u, (0)) - #define __Pyx_PyUnicode_DATA(u) ((void*)u) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)k, PyUnicode_ReadChar((PyObject*)(d), i)) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GetLength(u)) -#elif PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_READY(op) (0) - #else - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #endif - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) ((int)PyUnicode_KIND(u)) - #define __Pyx_PyUnicode_DATA(u) 
PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, (Py_UCS4) ch) - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #else - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length)) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #endif - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535U : 1114111U) - #define __Pyx_PyUnicode_KIND(u) ((int)sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = (Py_UNICODE) ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #if !defined(PyUnicode_DecodeUnicodeEscape) - #define PyUnicode_DecodeUnicodeEscape(s, size, errors) PyUnicode_Decode(s, size, "unicode_escape", errors) - #endif - #if !defined(PyUnicode_Contains) || (PY_MAJOR_VERSION == 2 && PYPY_VERSION_NUM < 0x07030500) - #undef PyUnicode_Contains - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) - #endif - #if !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) - #endif - #if !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) - #endif -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? 
PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#if CYTHON_COMPILING_IN_CPYTHON - #define __Pyx_PySequence_ListKeepNew(obj)\ - (likely(PyList_CheckExact(obj) && Py_REFCNT(obj) == 1) ? __Pyx_NewRef(obj) : PySequence_List(obj)) -#else - #define __Pyx_PySequence_ListKeepNew(obj) PySequence_List(obj) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) __Pyx_IS_TYPE(obj, &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) -#else - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define __Pyx_Py3Int_Check(op) PyLong_Check(op) - #define __Pyx_Py3Int_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define PyNumber_Int PyNumber_Long -#else - #define __Pyx_Py3Int_Check(op) (PyLong_Check(op) || PyInt_Check(op)) - #define __Pyx_Py3Int_CheckExact(op) (PyLong_CheckExact(op) || PyInt_CheckExact(op)) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBoolObject PyLongObject -#endif -#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY - #ifndef PyUnicode_InternFromString - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) - #endif -#endif -#if PY_VERSION_HEX < 0x030200A4 - typedef long Py_hash_t; - #define __Pyx_PyInt_FromHash_t PyInt_FromLong - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsHash_t -#else - #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsSsize_t -#endif -#if CYTHON_USE_ASYNC_SLOTS - #if PY_VERSION_HEX >= 0x030500B1 - #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods - #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) - #else - #define __Pyx_PyType_AsAsync(obj) 
((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) - #endif -#else - #define __Pyx_PyType_AsAsync(obj) NULL -#endif -#ifndef __Pyx_PyAsyncMethodsStruct - typedef struct { - unaryfunc am_await; - unaryfunc am_aiter; - unaryfunc am_anext; - } __Pyx_PyAsyncMethodsStruct; -#endif - -#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS) - #if !defined(_USE_MATH_DEFINES) - #define _USE_MATH_DEFINES - #endif -#endif -#include <math.h> -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { - float value; - memset(&value, 0xFF, sizeof(value)); - return value; -} -#endif -#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#define __PYX_MARK_ERR_POS(f_index, lineno) \ - { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; } -#define __PYX_ERR(f_index, lineno, Ln_error) \ - { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifdef CYTHON_EXTERN_C - #undef __PYX_EXTERN_C - #define __PYX_EXTERN_C CYTHON_EXTERN_C -#elif defined(__PYX_EXTERN_C) - #ifdef _MSC_VER - #pragma message ("Please do not define the '__PYX_EXTERN_C' macro externally. Use 'CYTHON_EXTERN_C' instead.") - #else - #warning Please do not define the '__PYX_EXTERN_C' macro externally. Use 'CYTHON_EXTERN_C' instead. - #endif -#else - #ifdef __cplusplus - #define __PYX_EXTERN_C extern "C" - #else - #define __PYX_EXTERN_C extern - #endif -#endif - -#define __PYX_HAVE__monotonic_align__core -#define __PYX_HAVE_API__monotonic_align__core -/* Early includes */ -#include "pythread.h" -#include <stdlib.h> -#include <stdio.h> -#ifdef _OPENMP -#include <omp.h> -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; - const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8) -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ - (sizeof(type) < sizeof(Py_ssize_t)) ||\ - (sizeof(type) > sizeof(Py_ssize_t) &&\ - likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX) &&\ - (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ - v == (type)PY_SSIZE_T_MIN))) ||\ - (sizeof(type) == sizeof(Py_ssize_t) &&\ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX))) ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L - #include <cstdlib> - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) -
#define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ? -value : value) -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const wchar_t *u) -{ - const wchar_t *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#else -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) -{ - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#endif -#define __Pyx_PyUnicode_FromOrdinal(o) PyUnicode_FromOrdinal((int)o) -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? 
__Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_VERSION_HEX >= 0x030C00A7 - #ifndef _PyLong_SIGN_MASK - #define _PyLong_SIGN_MASK 3 - #endif - #ifndef _PyLong_NON_SIZE_BITS - #define _PyLong_NON_SIZE_BITS 3 - #endif - #define __Pyx_PyLong_Sign(x) (((PyLongObject*)x)->long_value.lv_tag & _PyLong_SIGN_MASK) - #define __Pyx_PyLong_IsNeg(x) ((__Pyx_PyLong_Sign(x) & 2) != 0) - #define __Pyx_PyLong_IsNonNeg(x) (!__Pyx_PyLong_IsNeg(x)) - #define __Pyx_PyLong_IsZero(x) (__Pyx_PyLong_Sign(x) & 1) - #define __Pyx_PyLong_IsPos(x) (__Pyx_PyLong_Sign(x) == 0) - #define __Pyx_PyLong_CompactValueUnsigned(x) (__Pyx_PyLong_Digits(x)[0]) - #define __Pyx_PyLong_DigitCount(x) ((Py_ssize_t) (((PyLongObject*)x)->long_value.lv_tag >> _PyLong_NON_SIZE_BITS)) - #define __Pyx_PyLong_SignedDigitCount(x)\ - ((1 - (Py_ssize_t) __Pyx_PyLong_Sign(x)) * __Pyx_PyLong_DigitCount(x)) - #if defined(PyUnstable_Long_IsCompact) && defined(PyUnstable_Long_CompactValue) - #define __Pyx_PyLong_IsCompact(x) PyUnstable_Long_IsCompact((PyLongObject*) x) - #define __Pyx_PyLong_CompactValue(x) PyUnstable_Long_CompactValue((PyLongObject*) x) - #else - #define __Pyx_PyLong_IsCompact(x) (((PyLongObject*)x)->long_value.lv_tag < (2 << _PyLong_NON_SIZE_BITS)) - #define __Pyx_PyLong_CompactValue(x) ((1 - (Py_ssize_t) __Pyx_PyLong_Sign(x)) * (Py_ssize_t) __Pyx_PyLong_Digits(x)[0]) - #endif - typedef Py_ssize_t __Pyx_compact_pylong; - typedef size_t __Pyx_compact_upylong; - #else // Py < 3.12 - #define __Pyx_PyLong_IsNeg(x) (Py_SIZE(x) < 0) - #define __Pyx_PyLong_IsNonNeg(x) (Py_SIZE(x) >= 0) - #define __Pyx_PyLong_IsZero(x) (Py_SIZE(x) == 0) - #define __Pyx_PyLong_IsPos(x) (Py_SIZE(x) > 0) - #define __Pyx_PyLong_CompactValueUnsigned(x) ((Py_SIZE(x) == 0) ? 0 : __Pyx_PyLong_Digits(x)[0]) - #define __Pyx_PyLong_DigitCount(x) __Pyx_sst_abs(Py_SIZE(x)) - #define __Pyx_PyLong_SignedDigitCount(x) Py_SIZE(x) - #define __Pyx_PyLong_IsCompact(x) (Py_SIZE(x) == 0 || Py_SIZE(x) == 1 || Py_SIZE(x) == -1) - #define __Pyx_PyLong_CompactValue(x)\ - ((Py_SIZE(x) == 0) ? (sdigit) 0 : ((Py_SIZE(x) < 0) ? 
-(sdigit)__Pyx_PyLong_Digits(x)[0] : (sdigit)__Pyx_PyLong_Digits(x)[0])) - typedef sdigit __Pyx_compact_pylong; - typedef digit __Pyx_compact_upylong; - #endif - #if PY_VERSION_HEX >= 0x030C00A5 - #define __Pyx_PyLong_Digits(x) (((PyLongObject*)x)->long_value.ob_digit) - #else - #define __Pyx_PyLong_Digits(x) (((PyLongObject*)x)->ob_digit) - #endif -#endif -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = (char) c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* __PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} -#endif -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ -static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } - -#if !CYTHON_USE_MODULE_STATE -static PyObject 
*__pyx_m = NULL; -#endif -static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * __pyx_cfilenm = __FILE__; -static const char *__pyx_filename; - -/* #### Code section: filename_table ### */ - -static const char *__pyx_f[] = { - "core.pyx", - "", -}; -/* #### Code section: utility_code_proto_before_types ### */ -/* ForceInitThreads.proto */ -#ifndef __PYX_FORCE_INIT_THREADS - #define __PYX_FORCE_INIT_THREADS 0 -#endif - -/* NoFastGil.proto */ -#define __Pyx_PyGILState_Ensure PyGILState_Ensure -#define __Pyx_PyGILState_Release PyGILState_Release -#define __Pyx_FastGIL_Remember() -#define __Pyx_FastGIL_Forget() -#define __Pyx_FastGilFuncInit() - -/* BufferFormatStructs.proto */ -struct __Pyx_StructField_; -#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0) -typedef struct { - const char* name; - struct __Pyx_StructField_* fields; - size_t size; - size_t arraysize[8]; - int ndim; - char typegroup; - char is_unsigned; - int flags; -} __Pyx_TypeInfo; -typedef struct __Pyx_StructField_ { - __Pyx_TypeInfo* type; - const char* name; - size_t offset; -} __Pyx_StructField; -typedef struct { - __Pyx_StructField* field; - size_t parent_offset; -} __Pyx_BufFmt_StackElem; -typedef struct { - __Pyx_StructField root; - __Pyx_BufFmt_StackElem* head; - size_t fmt_offset; - size_t new_count, enc_count; - size_t struct_alignment; - int is_complex; - char enc_type; - char new_packmode; - char enc_packmode; - char is_valid_array; -} __Pyx_BufFmt_Context; - -/* Atomics.proto */ -#include <pythread.h> -#ifndef CYTHON_ATOMICS - #define CYTHON_ATOMICS 1 -#endif -#define __PYX_CYTHON_ATOMICS_ENABLED() CYTHON_ATOMICS -#define __pyx_atomic_int_type int -#define __pyx_nonatomic_int_type int -#if CYTHON_ATOMICS && (defined(__STDC_VERSION__) &&\ - (__STDC_VERSION__ >= 201112L) &&\ - !defined(__STDC_NO_ATOMICS__)) - #include <stdatomic.h> -#elif CYTHON_ATOMICS && (defined(__cplusplus) && (\ - (__cplusplus >= 201103L) ||\ - (defined(_MSC_VER) && _MSC_VER >= 1700))) - #include <atomic> -#endif -#if CYTHON_ATOMICS && (defined(__STDC_VERSION__) &&\ - (__STDC_VERSION__ >= 201112L) &&\ - !defined(__STDC_NO_ATOMICS__) &&\ - ATOMIC_INT_LOCK_FREE == 2) - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type atomic_int - #define __pyx_atomic_incr_aligned(value) atomic_fetch_add_explicit(value, 1, memory_order_relaxed) - #define __pyx_atomic_decr_aligned(value) atomic_fetch_sub_explicit(value, 1, memory_order_acq_rel) - #if defined(__PYX_DEBUG_ATOMICS) && defined(_MSC_VER) - #pragma message ("Using standard C atomics") - #elif defined(__PYX_DEBUG_ATOMICS) - #warning "Using standard C atomics" - #endif -#elif CYTHON_ATOMICS && (defined(__cplusplus) && (\ - (__cplusplus >= 201103L) ||\ -\ - (defined(_MSC_VER) && _MSC_VER >= 1700)) &&\ - ATOMIC_INT_LOCK_FREE == 2) - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type std::atomic_int - #define __pyx_atomic_incr_aligned(value) std::atomic_fetch_add_explicit(value, 1, std::memory_order_relaxed) - #define __pyx_atomic_decr_aligned(value) std::atomic_fetch_sub_explicit(value, 1, std::memory_order_acq_rel) - #if defined(__PYX_DEBUG_ATOMICS) && defined(_MSC_VER) - #pragma message ("Using standard C++ atomics") - #elif defined(__PYX_DEBUG_ATOMICS) - #warning "Using standard C++ atomics" - #endif -#elif CYTHON_ATOMICS && (__GNUC__ >= 5 || (__GNUC__ == 4 &&\ - (__GNUC_MINOR__ > 1 ||\ - (__GNUC_MINOR__ == 1 && __GNUC_PATCHLEVEL__ >= 2)))) - #define __pyx_atomic_incr_aligned(value) __sync_fetch_and_add(value, 1) - #define __pyx_atomic_decr_aligned(value) __sync_fetch_and_sub(value, 1) - #ifdef
__PYX_DEBUG_ATOMICS - #warning "Using GNU atomics" - #endif -#elif CYTHON_ATOMICS && defined(_MSC_VER) - #include <intrin.h> - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type long - #define __pyx_nonatomic_int_type long - #pragma intrinsic (_InterlockedExchangeAdd) - #define __pyx_atomic_incr_aligned(value) _InterlockedExchangeAdd(value, 1) - #define __pyx_atomic_decr_aligned(value) _InterlockedExchangeAdd(value, -1) - #ifdef __PYX_DEBUG_ATOMICS - #pragma message ("Using MSVC atomics") - #endif -#else - #undef CYTHON_ATOMICS - #define CYTHON_ATOMICS 0 - #ifdef __PYX_DEBUG_ATOMICS - #warning "Not using atomics" - #endif -#endif -#if CYTHON_ATOMICS - #define __pyx_add_acquisition_count(memview)\ - __pyx_atomic_incr_aligned(__pyx_get_slice_count_pointer(memview)) - #define __pyx_sub_acquisition_count(memview)\ - __pyx_atomic_decr_aligned(__pyx_get_slice_count_pointer(memview)) -#else - #define __pyx_add_acquisition_count(memview)\ - __pyx_add_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) - #define __pyx_sub_acquisition_count(memview)\ - __pyx_sub_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) -#endif - -/* MemviewSliceStruct.proto */ -struct __pyx_memoryview_obj; -typedef struct { - struct __pyx_memoryview_obj *memview; - char *data; - Py_ssize_t shape[8]; - Py_ssize_t strides[8]; - Py_ssize_t suboffsets[8]; -} __Pyx_memviewslice; -#define __Pyx_MemoryView_Len(m) (m.shape[0]) - -/* #### Code section: numeric_typedefs ### */ -/* #### Code section: complex_type_declarations ### */ -/* #### Code section: type_declarations ### */ - -/*--- Type declarations ---*/ -struct __pyx_array_obj; -struct __pyx_MemviewEnum_obj; -struct __pyx_memoryview_obj; -struct __pyx_memoryviewslice_obj; -struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each; - -/* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ -struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each { - int __pyx_n; - float max_neg_val; -}; - -/* "View.MemoryView":114 - * @cython.collection_type("sequence") - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ -struct __pyx_array_obj { - PyObject_HEAD - struct __pyx_vtabstruct_array *__pyx_vtab; - char *data; - Py_ssize_t len; - char *format; - int ndim; - Py_ssize_t *_shape; - Py_ssize_t *_strides; - Py_ssize_t itemsize; - PyObject *mode; - PyObject *_format; - void (*callback_free_data)(void *); - int free_data; - int dtype_is_object; -}; - - -/* "View.MemoryView":302 - * - * @cname('__pyx_MemviewEnum') - * cdef class Enum(object): # <<<<<<<<<<<<<< - * cdef object name - * def __init__(self, name): - */ -struct __pyx_MemviewEnum_obj { - PyObject_HEAD - PyObject *name; -}; - - -/* "View.MemoryView":337 - * - * @cname('__pyx_memoryview') - * cdef class memoryview: # <<<<<<<<<<<<<< - * - * cdef object obj - */ -struct __pyx_memoryview_obj { - PyObject_HEAD - struct __pyx_vtabstruct_memoryview *__pyx_vtab; - PyObject *obj; - PyObject *_size; - PyObject *_array_interface; - PyThread_type_lock lock; - __pyx_atomic_int_type acquisition_count; - Py_buffer view; - int flags; - int dtype_is_object; - __Pyx_TypeInfo *typeinfo; -}; - - -/* "View.MemoryView":952 - * @cython.collection_type("sequence") - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): #
<<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ -struct __pyx_memoryviewslice_obj { - struct __pyx_memoryview_obj __pyx_base; - __Pyx_memviewslice from_slice; - PyObject *from_object; - PyObject *(*to_object_func)(char *); - int (*to_dtype_func)(char *, PyObject *); -}; - - - -/* "View.MemoryView":114 - * @cython.collection_type("sequence") - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ - -struct __pyx_vtabstruct_array { - PyObject *(*get_memview)(struct __pyx_array_obj *); -}; -static struct __pyx_vtabstruct_array *__pyx_vtabptr_array; - - -/* "View.MemoryView":337 - * - * @cname('__pyx_memoryview') - * cdef class memoryview: # <<<<<<<<<<<<<< - * - * cdef object obj - */ - -struct __pyx_vtabstruct_memoryview { - char *(*get_item_pointer)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*is_slice)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_slice_assignment)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject *(*setitem_slice_assign_scalar)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_indexed)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject *(*convert_item_to_object)(struct __pyx_memoryview_obj *, char *); - PyObject *(*assign_item_from_object)(struct __pyx_memoryview_obj *, char *, PyObject *); - PyObject *(*_get_base)(struct __pyx_memoryview_obj *); -}; -static struct __pyx_vtabstruct_memoryview *__pyx_vtabptr_memoryview; - - -/* "View.MemoryView":952 - * @cython.collection_type("sequence") - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ - -struct __pyx_vtabstruct__memoryviewslice { - struct __pyx_vtabstruct_memoryview __pyx_base; -}; -static struct __pyx_vtabstruct__memoryviewslice *__pyx_vtabptr__memoryviewslice; -/* #### Code section: utility_code_proto ### */ - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, Py_ssize_t); - void (*DECREF)(void*, PyObject*, Py_ssize_t); - void (*GOTREF)(void*, PyObject*, Py_ssize_t); - void (*GIVEREF)(void*, PyObject*, Py_ssize_t); - void* (*SetupContext)(const char*, Py_ssize_t, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__));\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__));\ - } - #define __Pyx_RefNannyFinishContextNogil() {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __Pyx_RefNannyFinishContext();\ - PyGILState_Release(__pyx_gilstate_save);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__)) - #define __Pyx_RefNannyFinishContextNogil() __Pyx_RefNannyFinishContext() -#endif - #define __Pyx_RefNannyFinishContextNogil() {\ - 
PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __Pyx_RefNannyFinishContext();\ - PyGILState_Release(__pyx_gilstate_save);\ - } - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_XINCREF(r) do { if((r) == NULL); else {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) == NULL); else {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) == NULL); else {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) == NULL); else {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContextNogil() - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_Py_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; Py_XDECREF(tmp);\ - } while (0) -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#if PY_VERSION_HEX >= 0x030C00A6 -#define __Pyx_PyErr_Occurred() (__pyx_tstate->current_exception != NULL) -#define __Pyx_PyErr_CurrentExceptionType() (__pyx_tstate->current_exception ? 
(PyObject*) Py_TYPE(__pyx_tstate->current_exception) : (PyObject*) NULL) -#else -#define __Pyx_PyErr_Occurred() (__pyx_tstate->curexc_type != NULL) -#define __Pyx_PyErr_CurrentExceptionType() (__pyx_tstate->curexc_type) -#endif -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() (PyErr_Occurred() != NULL) -#define __Pyx_PyErr_CurrentExceptionType() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A6 -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* PyObjectGetAttrStrNoError.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name); - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* TupleAndListFromArray.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyList_FromArray(PyObject *const *src, Py_ssize_t n); -static CYTHON_INLINE PyObject* __Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n); -#endif - -/* IncludeStringH.proto */ -#include <string.h> - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* fastcall.proto */ -#define __Pyx_Arg_VARARGS(args, i) PyTuple_GET_ITEM(args, i) -#define __Pyx_NumKwargs_VARARGS(kwds) PyDict_Size(kwds) -#define __Pyx_KwValues_VARARGS(args, nargs) NULL -#define __Pyx_GetKwValue_VARARGS(kw, kwvalues, s) __Pyx_PyDict_GetItemStrWithError(kw, s) -#define __Pyx_KwargsAsDict_VARARGS(kw, kwvalues) PyDict_Copy(kw) -#if CYTHON_METH_FASTCALL - #define __Pyx_Arg_FASTCALL(args, i) args[i] - #define __Pyx_NumKwargs_FASTCALL(kwds) PyTuple_GET_SIZE(kwds) - #define
__Pyx_KwValues_FASTCALL(args, nargs) ((args) + (nargs)) - static CYTHON_INLINE PyObject * __Pyx_GetKwValue_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues, PyObject *s); - #define __Pyx_KwargsAsDict_FASTCALL(kw, kwvalues) _PyStack_AsDict(kwvalues, kw) -#else - #define __Pyx_Arg_FASTCALL __Pyx_Arg_VARARGS - #define __Pyx_NumKwargs_FASTCALL __Pyx_NumKwargs_VARARGS - #define __Pyx_KwValues_FASTCALL __Pyx_KwValues_VARARGS - #define __Pyx_GetKwValue_FASTCALL __Pyx_GetKwValue_VARARGS - #define __Pyx_KwargsAsDict_FASTCALL __Pyx_KwargsAsDict_VARARGS -#endif -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_ArgsSlice_VARARGS(args, start, stop) __Pyx_PyTuple_FromArray(&__Pyx_Arg_VARARGS(args, start), stop - start) -#define __Pyx_ArgsSlice_FASTCALL(args, start, stop) __Pyx_PyTuple_FromArray(&__Pyx_Arg_FASTCALL(args, start), stop - start) -#else -#define __Pyx_ArgsSlice_VARARGS(args, start, stop) PyTuple_GetSlice(args, start, stop) -#define __Pyx_ArgsSlice_FASTCALL(args, start, stop) PyTuple_GetSlice(args, start, stop) -#endif - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject *const *kwvalues, - PyObject **argnames[], - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args, - const char* function_name); - -/* ArgTypeTest.proto */ -#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\ - ((likely(__Pyx_IS_TYPE(obj, type) | (none_allowed && (obj == Py_None)))) ? 1 :\ - __Pyx__ArgTypeTest(obj, type, name, exact)) -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact); - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#if !CYTHON_VECTORCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif -#if !CYTHON_VECTORCALL -#if PY_VERSION_HEX >= 0x03080000 - #include "frameobject.h" -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif - #define __Pxy_PyFrame_Initialize_Offsets() - #define __Pyx_PyFrame_GetLocalsplus(frame) ((frame)->f_localsplus) -#else - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" - #define __Pxy_PyFrame_Initialize_Offsets()\ - ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif -#endif -#endif - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, 
PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectFastCall.proto */ -#define __Pyx_PyObject_FastCall(func, args, nargs) __Pyx_PyObject_FastCallDict(func, args, (size_t)(nargs), NULL) -static CYTHON_INLINE PyObject* __Pyx_PyObject_FastCallDict(PyObject *func, PyObject **args, size_t nargs, PyObject *kwargs); - -/* RaiseUnexpectedTypeError.proto */ -static int __Pyx_RaiseUnexpectedTypeError(const char *expected, PyObject *obj); - -/* GCCDiagnostics.proto */ -#if !defined(__INTEL_COMPILER) && defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) -#define __Pyx_HAS_GCC_DIAGNOSTIC -#endif - -/* BuildPyUnicode.proto */ -static PyObject* __Pyx_PyUnicode_BuildFromAscii(Py_ssize_t ulength, char* chars, int clength, - int prepend_sign, char padding_char); - -/* CIntToPyUnicode.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_From_int(int value, Py_ssize_t width, char padding_char, char format_char); - -/* CIntToPyUnicode.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_From_Py_ssize_t(Py_ssize_t value, Py_ssize_t width, char padding_char, char format_char); - -/* JoinPyUnicode.proto */ -static PyObject* __Pyx_PyUnicode_Join(PyObject* value_tuple, Py_ssize_t value_count, Py_ssize_t result_ulength, - Py_UCS4 max_char); - -/* StrEquals.proto */ -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals -#else -#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals -#endif - -/* PyObjectFormatSimple.proto */ -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - PyObject_Format(s, f)) -#elif PY_MAJOR_VERSION < 3 - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - likely(PyString_CheckExact(s)) ? PyUnicode_FromEncodedObject(s, NULL, "strict") :\ - PyObject_Format(s, f)) -#elif CYTHON_USE_TYPE_SLOTS - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - likely(PyLong_CheckExact(s)) ? PyLong_Type.tp_repr(s) :\ - likely(PyFloat_CheckExact(s)) ? PyFloat_Type.tp_repr(s) :\ - PyObject_Format(s, f)) -#else - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - PyObject_Format(s, f)) -#endif - -CYTHON_UNUSED static int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *); /*proto*/ -/* GetAttr.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *); - -/* GetItemInt.proto */ -#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ - (is_list ? 
(PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ - __Pyx_GetItemInt_Generic(o, to_py_func(i)))) -#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound, int boundscheck); - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* ObjectGetItem.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject *key); -#else -#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key) -#endif - -/* KeywordStringCheck.proto */ -static int __Pyx_CheckKeywordStrings(PyObject *kw, const char* function_name, int kw_allowed); - -/* DivInt[Py_ssize_t].proto */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t); - -/* UnaryNegOverflows.proto */ -#define __Pyx_UNARY_NEG_WOULD_OVERFLOW(x)\ - (((x) < 0) & ((unsigned long)(x) == 0-(unsigned long)(x))) - -/* GetAttr3.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *, PyObject *, PyObject *); - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) do {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == 
__PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -#define __Pyx_GetModuleGlobalNameUncached(var, name) do {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* AssertionsEnabled.proto */ -#define __Pyx_init_assertions_enabled() -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) - #define __pyx_assertions_enabled() (1) -#elif PY_VERSION_HEX < 0x03080000 || CYTHON_COMPILING_IN_PYPY || defined(Py_LIMITED_API) - #define __pyx_assertions_enabled() (!Py_OptimizeFlag) -#elif CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030900A6 - static int __pyx_assertions_enabled_flag; - #define __pyx_assertions_enabled() (__pyx_assertions_enabled_flag) - #undef __Pyx_init_assertions_enabled - static void __Pyx_init_assertions_enabled(void) { - __pyx_assertions_enabled_flag = ! _PyInterpreterState_GetConfig(__Pyx_PyThreadState_Current->interp)->optimization_level; - } -#else - #define __pyx_assertions_enabled() (!Py_OptimizeFlag) -#endif - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* RaiseNoneIterError.proto */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void); - -/* ExtTypeTest.proto */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK && CYTHON_FAST_THREAD_STATE -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) -#endif - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* SwapException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void 
__Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* ImportDottedModule.proto */ -static PyObject *__Pyx_ImportDottedModule(PyObject *name, PyObject *parts_tuple); -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx_ImportDottedModule_WalkParts(PyObject *module, PyObject *name, PyObject *parts_tuple); -#endif - -/* ssize_strlen.proto */ -static CYTHON_INLINE Py_ssize_t __Pyx_ssize_strlen(const char *s); - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -#define __Pyx_TypeCheck2(obj, type1, type2) __Pyx_IsAnySubtype2(Py_TYPE(obj), (PyTypeObject *)type1, (PyTypeObject *)type2) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_IsAnySubtype2(PyTypeObject *cls, PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_TypeCheck2(obj, type1, type2) (PyObject_TypeCheck(obj, (PyTypeObject *)type1) || PyObject_TypeCheck(obj, (PyTypeObject *)type2)) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyErr_ExceptionMatches2(err1, err2) __Pyx_PyErr_GivenExceptionMatches2(__Pyx_PyErr_CurrentExceptionType(), err1, err2) -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -CYTHON_UNUSED static int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -/* ListCompAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len)) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x) -#endif - -/* PySequenceMultiply.proto */ -#define __Pyx_PySequence_Multiply_Left(mul, seq) __Pyx_PySequence_Multiply(seq, mul) -static CYTHON_INLINE PyObject* __Pyx_PySequence_Multiply(PyObject *seq, Py_ssize_t mul); - -/* SetItemInt.proto */ -#define __Pyx_SetItemInt(o, i, v, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_SetItemInt_Fast(o, (Py_ssize_t)i, v, is_list, wraparound, boundscheck) :\ - (is_list ? 
(PyErr_SetString(PyExc_IndexError, "list assignment index out of range"), -1) :\ - __Pyx_SetItemInt_Generic(o, to_py_func(i), v))) -static int __Pyx_SetItemInt_Generic(PyObject *o, PyObject *j, PyObject *v); -static CYTHON_INLINE int __Pyx_SetItemInt_Fast(PyObject *o, Py_ssize_t i, PyObject *v, - int is_list, int wraparound, int boundscheck); - -/* RaiseUnboundLocalError.proto */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname); - -/* DivInt[long].proto */ -static CYTHON_INLINE long __Pyx_div_long(long, long); - -/* PySequenceContains.proto */ -static CYTHON_INLINE int __Pyx_PySequence_ContainsTF(PyObject* item, PyObject* seq, int eq) { - int result = PySequence_Contains(seq, item); - return unlikely(result < 0) ? result : (result == (eq == Py_EQ)); -} - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* HasAttr.proto */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *, PyObject *); - -/* ErrOccurredWithGIL.proto */ -static CYTHON_INLINE int __Pyx_ErrOccurredWithGIL(void); - -/* PyObject_GenericGetAttrNoDict.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr -#endif - -/* PyObject_GenericGetAttr.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr -#endif - -/* IncludeStructmemberH.proto */ -#include - -/* FixUpExtensionType.proto */ -#if CYTHON_USE_TYPE_SPECS -static int __Pyx_fix_up_extension_type_from_spec(PyType_Spec *spec, PyTypeObject *type); -#endif - -/* PyObjectCallNoArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func); - -/* PyObjectGetMethod.proto */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method); - -/* PyObjectCallMethod0.proto */ -static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name); - -/* ValidateBasesTuple.proto */ -#if CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API || CYTHON_USE_TYPE_SPECS -static int __Pyx_validate_bases_tuple(const char *type_name, Py_ssize_t dictoffset, PyObject *bases); -#endif - -/* PyType_Ready.proto */ -CYTHON_UNUSED static int __Pyx_PyType_Ready(PyTypeObject *t); - -/* SetVTable.proto */ -static int __Pyx_SetVtable(PyTypeObject* typeptr , void* vtable); - -/* GetVTable.proto */ -static void* __Pyx_GetVtable(PyTypeObject *type); - -/* MergeVTables.proto */ -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __Pyx_MergeVtables(PyTypeObject *type); -#endif - -/* SetupReduce.proto */ -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __Pyx_setup_reduce(PyObject* type_obj); -#endif - -/* FetchSharedCythonModule.proto */ -static PyObject *__Pyx_FetchSharedCythonABIModule(void); - -/* FetchCommonType.proto */ -#if !CYTHON_USE_TYPE_SPECS -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type); -#else -static PyTypeObject* __Pyx_FetchCommonTypeFromSpec(PyObject *module, PyType_Spec *spec, PyObject *bases); -#endif - -/* PyMethodNew.proto */ -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx_PyMethod_New(PyObject *func, PyObject *self, PyObject *typ) { - CYTHON_UNUSED_VAR(typ); - if (!self) - return __Pyx_NewRef(func); - return PyMethod_New(func, 
self); -} -#else - #define __Pyx_PyMethod_New PyMethod_New -#endif - -/* PyVectorcallFastCallDict.proto */ -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw); -#endif - -/* CythonFunctionShared.proto */ -#define __Pyx_CyFunction_USED -#define __Pyx_CYFUNCTION_STATICMETHOD 0x01 -#define __Pyx_CYFUNCTION_CLASSMETHOD 0x02 -#define __Pyx_CYFUNCTION_CCLASS 0x04 -#define __Pyx_CYFUNCTION_COROUTINE 0x08 -#define __Pyx_CyFunction_GetClosure(f)\ - (((__pyx_CyFunctionObject *) (f))->func_closure) -#if PY_VERSION_HEX < 0x030900B1 - #define __Pyx_CyFunction_GetClassObj(f)\ - (((__pyx_CyFunctionObject *) (f))->func_classobj) -#else - #define __Pyx_CyFunction_GetClassObj(f)\ - ((PyObject*) ((PyCMethodObject *) (f))->mm_class) -#endif -#define __Pyx_CyFunction_SetClassObj(f, classobj)\ - __Pyx__CyFunction_SetClassObj((__pyx_CyFunctionObject *) (f), (classobj)) -#define __Pyx_CyFunction_Defaults(type, f)\ - ((type *)(((__pyx_CyFunctionObject *) (f))->defaults)) -#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\ - ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g) -typedef struct { -#if PY_VERSION_HEX < 0x030900B1 - PyCFunctionObject func; -#else - PyCMethodObject func; -#endif -#if CYTHON_BACKPORT_VECTORCALL - __pyx_vectorcallfunc func_vectorcall; -#endif -#if PY_VERSION_HEX < 0x030500A0 - PyObject *func_weakreflist; -#endif - PyObject *func_dict; - PyObject *func_name; - PyObject *func_qualname; - PyObject *func_doc; - PyObject *func_globals; - PyObject *func_code; - PyObject *func_closure; -#if PY_VERSION_HEX < 0x030900B1 - PyObject *func_classobj; -#endif - void *defaults; - int defaults_pyobjects; - size_t defaults_size; // used by FusedFunction for copying defaults - int flags; - PyObject *defaults_tuple; - PyObject *defaults_kwdict; - PyObject *(*defaults_getter)(PyObject *); - PyObject *func_annotations; - PyObject *func_is_coroutine; -} __pyx_CyFunctionObject; -#define __Pyx_CyFunction_Check(obj) __Pyx_TypeCheck(obj, __pyx_CyFunctionType) -#define __Pyx_IsCyOrPyCFunction(obj) __Pyx_TypeCheck2(obj, __pyx_CyFunctionType, &PyCFunction_Type) -#define __Pyx_CyFunction_CheckExact(obj) __Pyx_IS_TYPE(obj, __pyx_CyFunctionType) -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject* op, PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); -static CYTHON_INLINE void __Pyx__CyFunction_SetClassObj(__pyx_CyFunctionObject* f, PyObject* classobj); -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m, - size_t size, - int pyobjects); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m, - PyObject *tuple); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m, - PyObject *dict); -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m, - PyObject *dict); -static int __pyx_CyFunction_init(PyObject *module); -#if CYTHON_METH_FASTCALL -static PyObject * __Pyx_CyFunction_Vectorcall_NOARGS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_O(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD(PyObject *func, PyObject *const *args, 
size_t nargsf, PyObject *kwnames); -#if CYTHON_BACKPORT_VECTORCALL -#define __Pyx_CyFunction_func_vectorcall(f) (((__pyx_CyFunctionObject*)f)->func_vectorcall) -#else -#define __Pyx_CyFunction_func_vectorcall(f) (((PyCFunctionObject*)f)->vectorcall) -#endif -#endif - -/* CythonFunction.proto */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); - -/* CLineInTraceback.proto */ -#ifdef CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -#if !CYTHON_COMPILING_IN_LIMITED_API -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); -#endif - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -#if PY_MAJOR_VERSION < 3 - static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags); - static void __Pyx_ReleaseBuffer(Py_buffer *view); -#else - #define __Pyx_GetBuffer PyObject_GetBuffer - #define __Pyx_ReleaseBuffer PyBuffer_Release -#endif - - -/* BufferStructDeclare.proto */ -typedef struct { - Py_ssize_t shape, strides, suboffsets; -} __Pyx_Buf_DimInfo; -typedef struct { - size_t refcount; - Py_buffer pybuffer; -} __Pyx_Buffer; -typedef struct { - __Pyx_Buffer *rcbuffer; - char *data; - __Pyx_Buf_DimInfo diminfo[8]; -} __Pyx_LocalBuf_ND; - -/* MemviewSliceIsContig.proto */ -static int __pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim); - -/* OverlappingSlices.proto */ -static int __pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize); - -/* IsLittleEndian.proto */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void); - -/* BufferFormatCheck.proto */ -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts); -static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type); - -/* TypeInfoCompare.proto */ -static int __pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b); - -/* MemviewSliceValidateAndInit.proto */ -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *, int writable_flag); - -/* MemviewSliceCopyTemplate.proto */ -static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const 
__Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object); - -/* MemviewSliceInit.proto */ -#define __Pyx_BUF_MAX_NDIMS %(BUF_MAX_NDIMS)d -#define __Pyx_MEMVIEW_DIRECT 1 -#define __Pyx_MEMVIEW_PTR 2 -#define __Pyx_MEMVIEW_FULL 4 -#define __Pyx_MEMVIEW_CONTIG 8 -#define __Pyx_MEMVIEW_STRIDED 16 -#define __Pyx_MEMVIEW_FOLLOW 32 -#define __Pyx_IS_C_CONTIG 1 -#define __Pyx_IS_F_CONTIG 2 -static int __Pyx_init_memviewslice( - struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference); -static CYTHON_INLINE int __pyx_add_acquisition_count_locked( - __pyx_atomic_int_type *acquisition_count, PyThread_type_lock lock); -static CYTHON_INLINE int __pyx_sub_acquisition_count_locked( - __pyx_atomic_int_type *acquisition_count, PyThread_type_lock lock); -#define __pyx_get_slice_count_pointer(memview) (&memview->acquisition_count) -#define __PYX_INC_MEMVIEW(slice, have_gil) __Pyx_INC_MEMVIEW(slice, have_gil, __LINE__) -#define __PYX_XCLEAR_MEMVIEW(slice, have_gil) __Pyx_XCLEAR_MEMVIEW(slice, have_gil, __LINE__) -static CYTHON_INLINE void __Pyx_INC_MEMVIEW(__Pyx_memviewslice *, int, int); -static CYTHON_INLINE void __Pyx_XCLEAR_MEMVIEW(__Pyx_memviewslice *, int, int); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *); - -/* FormatTypeName.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API -typedef PyObject *__Pyx_TypeName; -#define __Pyx_FMT_TYPENAME "%U" -static __Pyx_TypeName __Pyx_PyType_GetName(PyTypeObject* tp); -#define __Pyx_DECREF_TypeName(obj) Py_XDECREF(obj) -#else -typedef const char *__Pyx_TypeName; -#define __Pyx_FMT_TYPENAME "%.200s" -#define __Pyx_PyType_GetName(tp) ((tp)->tp_name) -#define __Pyx_DECREF_TypeName(obj) -#endif - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(void); - -/* InitStrings.proto */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); - -/* #### Code section: module_declarations ### */ -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self); /* proto*/ -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto*/ -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ -static PyObject 
*__pyx_memoryview__get_base(struct __pyx_memoryview_obj *__pyx_v_self); /* proto*/ -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryviewslice__get_base(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto*/ - -/* Module declarations from "cython.view" */ - -/* Module declarations from "cython.dataclasses" */ - -/* Module declarations from "cython" */ - -/* Module declarations from "monotonic_align.core" */ -static PyObject *__pyx_collections_abc_Sequence = 0; -static PyObject *generic = 0; -static PyObject *strided = 0; -static PyObject *indirect = 0; -static PyObject *contiguous = 0; -static PyObject *indirect_contiguous = 0; -static int __pyx_memoryview_thread_locks_used; -static PyThread_type_lock __pyx_memoryview_thread_locks[8]; -static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice, __Pyx_memviewslice, int, int, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args); /*proto*/ -static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, int __pyx_skip_dispatch); /*proto*/ -static int __pyx_array_allocate_buffer(struct __pyx_array_obj *); /*proto*/ -static struct __pyx_array_obj *__pyx_array_new(PyObject *, Py_ssize_t, char *, char *, char *); /*proto*/ -static PyObject *__pyx_memoryview_new(PyObject *, int, int, __Pyx_TypeInfo *); /*proto*/ -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *); /*proto*/ -static PyObject *_unellipsify(PyObject *, int); /*proto*/ -static int assert_direct_dimensions(Py_ssize_t *, int); /*proto*/ -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *, PyObject *); /*proto*/ -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int, int); /*proto*/ -static char *__pyx_pybuffer_index(Py_buffer *, char *, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memslice_transpose(__Pyx_memviewslice *); /*proto*/ -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice, int, PyObject *(*)(char *), int (*)(char *, PyObject *), int); /*proto*/ -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static Py_ssize_t abs_py_ssize_t(Py_ssize_t); /*proto*/ -static char __pyx_get_best_slice_order(__Pyx_memviewslice *, int); /*proto*/ -static void _copy_strided_to_strided(char *, Py_ssize_t *, char *, Py_ssize_t *, Py_ssize_t *, Py_ssize_t *, int, size_t); /*proto*/ -static void copy_strided_to_strided(__Pyx_memviewslice *, __Pyx_memviewslice *, int, size_t); /*proto*/ -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *, int); /*proto*/ -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *, Py_ssize_t *, Py_ssize_t, int, char); /*proto*/ -static void 
*__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *, __Pyx_memviewslice *, char, int); /*proto*/ -static int __pyx_memoryview_err_extents(int, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memoryview_err_dim(PyObject *, PyObject *, int); /*proto*/ -static int __pyx_memoryview_err(PyObject *, PyObject *); /*proto*/ -static int __pyx_memoryview_err_no_memory(void); /*proto*/ -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice, __Pyx_memviewslice, int, int, int); /*proto*/ -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *, int, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *, int, size_t, void *, int); /*proto*/ -static void __pyx_memoryview__slice_assign_scalar(char *, Py_ssize_t *, Py_ssize_t *, int, size_t, void *); /*proto*/ -static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *, PyObject *); /*proto*/ -/* #### Code section: typeinfo ### */ -static __Pyx_TypeInfo __Pyx_TypeInfo_int = { "int", NULL, sizeof(int), { 0 }, 0, __PYX_IS_UNSIGNED(int) ? 'U' : 'I', __PYX_IS_UNSIGNED(int), 0 }; -static __Pyx_TypeInfo __Pyx_TypeInfo_float = { "float", NULL, sizeof(float), { 0 }, 0, 'R', 0, 0 }; -/* #### Code section: before_global_var ### */ -#define __Pyx_MODULE_NAME "monotonic_align.core" -extern int __pyx_module_is_main_monotonic_align__core; -int __pyx_module_is_main_monotonic_align__core = 0; - -/* Implementation of "monotonic_align.core" */ -/* #### Code section: global_var ### */ -static PyObject *__pyx_builtin_range; -static PyObject *__pyx_builtin___import__; -static PyObject *__pyx_builtin_ValueError; -static PyObject *__pyx_builtin_MemoryError; -static PyObject *__pyx_builtin_enumerate; -static PyObject *__pyx_builtin_TypeError; -static PyObject *__pyx_builtin_AssertionError; -static PyObject *__pyx_builtin_Ellipsis; -static PyObject *__pyx_builtin_id; -static PyObject *__pyx_builtin_IndexError; -/* #### Code section: string_decls ### */ -static const char __pyx_k_[] = ": "; -static const char __pyx_k_O[] = "O"; -static const char __pyx_k_c[] = "c"; -static const char __pyx_k__2[] = "."; -static const char __pyx_k__3[] = "*"; -static const char __pyx_k__6[] = "'"; -static const char __pyx_k__7[] = ")"; -static const char __pyx_k_gc[] = "gc"; -static const char __pyx_k_id[] = "id"; -static const char __pyx_k__23[] = "?"; -static const char __pyx_k_abc[] = "abc"; -static const char __pyx_k_and[] = " and "; -static const char __pyx_k_got[] = " (got "; -static const char __pyx_k_new[] = "__new__"; -static const char __pyx_k_obj[] = "obj"; -static const char __pyx_k_sys[] = "sys"; -static const char __pyx_k_base[] = "base"; -static const char __pyx_k_dict[] = "__dict__"; -static const char __pyx_k_main[] = "__main__"; -static const char __pyx_k_mode[] = "mode"; -static const char __pyx_k_name[] = "name"; -static const char __pyx_k_ndim[] = "ndim"; -static const char __pyx_k_pack[] = "pack"; -static const char __pyx_k_size[] = "size"; -static const char __pyx_k_spec[] = "__spec__"; -static const char __pyx_k_step[] = "step"; -static const char __pyx_k_stop[] = "stop"; -static const char __pyx_k_t_xs[] = "t_xs"; -static const char __pyx_k_t_ys[] = "t_ys"; -static const 
char __pyx_k_test[] = "__test__"; -static const char __pyx_k_ASCII[] = "ASCII"; -static const char __pyx_k_class[] = "__class__"; -static const char __pyx_k_count[] = "count"; -static const char __pyx_k_error[] = "error"; -static const char __pyx_k_flags[] = "flags"; -static const char __pyx_k_index[] = "index"; -static const char __pyx_k_paths[] = "paths"; -static const char __pyx_k_range[] = "range"; -static const char __pyx_k_shape[] = "shape"; -static const char __pyx_k_start[] = "start"; -static const char __pyx_k_enable[] = "enable"; -static const char __pyx_k_encode[] = "encode"; -static const char __pyx_k_format[] = "format"; -static const char __pyx_k_import[] = "__import__"; -static const char __pyx_k_name_2[] = "__name__"; -static const char __pyx_k_pickle[] = "pickle"; -static const char __pyx_k_reduce[] = "__reduce__"; -static const char __pyx_k_struct[] = "struct"; -static const char __pyx_k_unpack[] = "unpack"; -static const char __pyx_k_update[] = "update"; -static const char __pyx_k_values[] = "values"; -static const char __pyx_k_disable[] = "disable"; -static const char __pyx_k_fortran[] = "fortran"; -static const char __pyx_k_memview[] = "memview"; -static const char __pyx_k_Ellipsis[] = "Ellipsis"; -static const char __pyx_k_Sequence[] = "Sequence"; -static const char __pyx_k_core_pyx[] = "core.pyx"; -static const char __pyx_k_getstate[] = "__getstate__"; -static const char __pyx_k_itemsize[] = "itemsize"; -static const char __pyx_k_pyx_type[] = "__pyx_type"; -static const char __pyx_k_register[] = "register"; -static const char __pyx_k_setstate[] = "__setstate__"; -static const char __pyx_k_TypeError[] = "TypeError"; -static const char __pyx_k_enumerate[] = "enumerate"; -static const char __pyx_k_isenabled[] = "isenabled"; -static const char __pyx_k_pyx_state[] = "__pyx_state"; -static const char __pyx_k_reduce_ex[] = "__reduce_ex__"; -static const char __pyx_k_IndexError[] = "IndexError"; -static const char __pyx_k_ValueError[] = "ValueError"; -static const char __pyx_k_pyx_result[] = "__pyx_result"; -static const char __pyx_k_pyx_vtable[] = "__pyx_vtable__"; -static const char __pyx_k_MemoryError[] = "MemoryError"; -static const char __pyx_k_PickleError[] = "PickleError"; -static const char __pyx_k_collections[] = "collections"; -static const char __pyx_k_initializing[] = "_initializing"; -static const char __pyx_k_is_coroutine[] = "_is_coroutine"; -static const char __pyx_k_pyx_checksum[] = "__pyx_checksum"; -static const char __pyx_k_stringsource[] = "<stringsource>"; -static const char __pyx_k_version_info[] = "version_info"; -static const char __pyx_k_class_getitem[] = "__class_getitem__"; -static const char __pyx_k_reduce_cython[] = "__reduce_cython__"; -static const char __pyx_k_AssertionError[] = "AssertionError"; -static const char __pyx_k_maximum_path_c[] = "maximum_path_c"; -static const char __pyx_k_View_MemoryView[] = "View.MemoryView"; -static const char __pyx_k_allocate_buffer[] = "allocate_buffer"; -static const char __pyx_k_collections_abc[] = "collections.abc"; -static const char __pyx_k_dtype_is_object[] = "dtype_is_object"; -static const char __pyx_k_pyx_PickleError[] = "__pyx_PickleError"; -static const char __pyx_k_setstate_cython[] = "__setstate_cython__"; -static const char __pyx_k_pyx_unpickle_Enum[] = "__pyx_unpickle_Enum"; -static const char __pyx_k_asyncio_coroutines[] = "asyncio.coroutines"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_strided_and_direct[] = "<strided and direct>"; -static const char 
__pyx_k_monotonic_align_core[] = "monotonic_align.core"; -static const char __pyx_k_strided_and_indirect[] = "<strided and indirect>"; -static const char __pyx_k_Invalid_shape_in_axis[] = "Invalid shape in axis "; -static const char __pyx_k_contiguous_and_direct[] = "<contiguous and direct>"; -static const char __pyx_k_Cannot_index_with_type[] = "Cannot index with type '"; -static const char __pyx_k_MemoryView_of_r_object[] = "<MemoryView of %r object>"; -static const char __pyx_k_MemoryView_of_r_at_0x_x[] = "<MemoryView of %r at 0x%x>"; -static const char __pyx_k_contiguous_and_indirect[] = "<contiguous and indirect>"; -static const char __pyx_k_Dimension_d_is_not_direct[] = "Dimension %d is not direct"; -static const char __pyx_k_Index_out_of_bounds_axis_d[] = "Index out of bounds (axis %d)"; -static const char __pyx_k_Step_may_not_be_zero_axis_d[] = "Step may not be zero (axis %d)"; -static const char __pyx_k_itemsize_0_for_cython_array[] = "itemsize <= 0 for cython.array"; -static const char __pyx_k_unable_to_allocate_array_data[] = "unable to allocate array data."; -static const char __pyx_k_strided_and_direct_or_indirect[] = "<strided and direct or indirect>"; -static const char __pyx_k_All_dimensions_preceding_dimensi[] = "All dimensions preceding dimension %d must be indexed and not sliced"; -static const char __pyx_k_Buffer_view_does_not_expose_stri[] = "Buffer view does not expose strides"; -static const char __pyx_k_Can_only_create_a_buffer_that_is[] = "Can only create a buffer that is contiguous in memory."; -static const char __pyx_k_Cannot_assign_to_read_only_memor[] = "Cannot assign to read-only memoryview"; -static const char __pyx_k_Cannot_create_writable_memory_vi[] = "Cannot create writable memory view from read-only memoryview"; -static const char __pyx_k_Cannot_transpose_memoryview_with[] = "Cannot transpose memoryview with indirect dimensions"; -static const char __pyx_k_Empty_shape_tuple_for_cython_arr[] = "Empty shape tuple for cython.array"; -static const char __pyx_k_Incompatible_checksums_0x_x_vs_0[] = "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))"; -static const char __pyx_k_Indirect_dimensions_not_supporte[] = "Indirect dimensions not supported"; -static const char __pyx_k_Invalid_mode_expected_c_or_fortr[] = "Invalid mode, expected 'c' or 'fortran', got "; -static const char __pyx_k_Out_of_bounds_on_buffer_access_a[] = "Out of bounds on buffer access (axis "; -static const char __pyx_k_Unable_to_convert_item_to_object[] = "Unable to convert item to object"; -static const char __pyx_k_got_differing_extents_in_dimensi[] = "got differing extents in dimension "; -static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__"; -static const char __pyx_k_unable_to_allocate_shape_and_str[] = "unable to allocate shape and strides."; -/* #### Code section: decls ### */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj 
*__pyx_v_self); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /* proto */ -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name); /* proto */ -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object); /* proto */ -static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static Py_ssize_t 
__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs); /* proto */ -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -/* #### Code section: late_includes ### */ -/* #### Code section: module_state ### */ -typedef struct { - PyObject *__pyx_d; - PyObject *__pyx_b; - PyObject *__pyx_cython_runtime; - PyObject *__pyx_empty_tuple; - PyObject *__pyx_empty_bytes; - PyObject *__pyx_empty_unicode; - #ifdef __Pyx_CyFunction_USED - PyTypeObject *__pyx_CyFunctionType; - #endif - #ifdef __Pyx_FusedFunction_USED - PyTypeObject *__pyx_FusedFunctionType; - #endif - #ifdef __Pyx_Generator_USED - PyTypeObject *__pyx_GeneratorType; - #endif - #ifdef __Pyx_IterableCoroutine_USED - PyTypeObject *__pyx_IterableCoroutineType; - #endif - #ifdef __Pyx_Coroutine_USED - PyTypeObject *__pyx_CoroutineAwaitType; - #endif - #ifdef __Pyx_Coroutine_USED - PyTypeObject *__pyx_CoroutineType; - #endif - #if CYTHON_USE_MODULE_STATE - #endif - #if CYTHON_USE_MODULE_STATE - #endif - #if CYTHON_USE_MODULE_STATE - #endif - #if 
CYTHON_USE_MODULE_STATE - PyObject *__pyx_type___pyx_array; - PyObject *__pyx_type___pyx_MemviewEnum; - PyObject *__pyx_type___pyx_memoryview; - PyObject *__pyx_type___pyx_memoryviewslice; - #endif - PyTypeObject *__pyx_array_type; - PyTypeObject *__pyx_MemviewEnum_type; - PyTypeObject *__pyx_memoryview_type; - PyTypeObject *__pyx_memoryviewslice_type; - PyObject *__pyx_kp_u_; - PyObject *__pyx_n_s_ASCII; - PyObject *__pyx_kp_s_All_dimensions_preceding_dimensi; - PyObject *__pyx_n_s_AssertionError; - PyObject *__pyx_kp_s_Buffer_view_does_not_expose_stri; - PyObject *__pyx_kp_s_Can_only_create_a_buffer_that_is; - PyObject *__pyx_kp_s_Cannot_assign_to_read_only_memor; - PyObject *__pyx_kp_s_Cannot_create_writable_memory_vi; - PyObject *__pyx_kp_u_Cannot_index_with_type; - PyObject *__pyx_kp_s_Cannot_transpose_memoryview_with; - PyObject *__pyx_kp_s_Dimension_d_is_not_direct; - PyObject *__pyx_n_s_Ellipsis; - PyObject *__pyx_kp_s_Empty_shape_tuple_for_cython_arr; - PyObject *__pyx_kp_s_Incompatible_checksums_0x_x_vs_0; - PyObject *__pyx_n_s_IndexError; - PyObject *__pyx_kp_s_Index_out_of_bounds_axis_d; - PyObject *__pyx_kp_s_Indirect_dimensions_not_supporte; - PyObject *__pyx_kp_u_Invalid_mode_expected_c_or_fortr; - PyObject *__pyx_kp_u_Invalid_shape_in_axis; - PyObject *__pyx_n_s_MemoryError; - PyObject *__pyx_kp_s_MemoryView_of_r_at_0x_x; - PyObject *__pyx_kp_s_MemoryView_of_r_object; - PyObject *__pyx_n_b_O; - PyObject *__pyx_kp_u_Out_of_bounds_on_buffer_access_a; - PyObject *__pyx_n_s_PickleError; - PyObject *__pyx_n_s_Sequence; - PyObject *__pyx_kp_s_Step_may_not_be_zero_axis_d; - PyObject *__pyx_n_s_TypeError; - PyObject *__pyx_kp_s_Unable_to_convert_item_to_object; - PyObject *__pyx_n_s_ValueError; - PyObject *__pyx_n_s_View_MemoryView; - PyObject *__pyx_kp_u__2; - PyObject *__pyx_n_s__23; - PyObject *__pyx_n_s__3; - PyObject *__pyx_kp_u__6; - PyObject *__pyx_kp_u__7; - PyObject *__pyx_n_s_abc; - PyObject *__pyx_n_s_allocate_buffer; - PyObject *__pyx_kp_u_and; - PyObject *__pyx_n_s_asyncio_coroutines; - PyObject *__pyx_n_s_base; - PyObject *__pyx_n_s_c; - PyObject *__pyx_n_u_c; - PyObject *__pyx_n_s_class; - PyObject *__pyx_n_s_class_getitem; - PyObject *__pyx_n_s_cline_in_traceback; - PyObject *__pyx_n_s_collections; - PyObject *__pyx_kp_s_collections_abc; - PyObject *__pyx_kp_s_contiguous_and_direct; - PyObject *__pyx_kp_s_contiguous_and_indirect; - PyObject *__pyx_kp_s_core_pyx; - PyObject *__pyx_n_s_count; - PyObject *__pyx_n_s_dict; - PyObject *__pyx_kp_u_disable; - PyObject *__pyx_n_s_dtype_is_object; - PyObject *__pyx_kp_u_enable; - PyObject *__pyx_n_s_encode; - PyObject *__pyx_n_s_enumerate; - PyObject *__pyx_n_s_error; - PyObject *__pyx_n_s_flags; - PyObject *__pyx_n_s_format; - PyObject *__pyx_n_s_fortran; - PyObject *__pyx_n_u_fortran; - PyObject *__pyx_kp_u_gc; - PyObject *__pyx_n_s_getstate; - PyObject *__pyx_kp_u_got; - PyObject *__pyx_kp_u_got_differing_extents_in_dimensi; - PyObject *__pyx_n_s_id; - PyObject *__pyx_n_s_import; - PyObject *__pyx_n_s_index; - PyObject *__pyx_n_s_initializing; - PyObject *__pyx_n_s_is_coroutine; - PyObject *__pyx_kp_u_isenabled; - PyObject *__pyx_n_s_itemsize; - PyObject *__pyx_kp_s_itemsize_0_for_cython_array; - PyObject *__pyx_n_s_main; - PyObject *__pyx_n_s_maximum_path_c; - PyObject *__pyx_n_s_memview; - PyObject *__pyx_n_s_mode; - PyObject *__pyx_n_s_monotonic_align_core; - PyObject *__pyx_n_s_name; - PyObject *__pyx_n_s_name_2; - PyObject *__pyx_n_s_ndim; - PyObject *__pyx_n_s_new; - PyObject 
*__pyx_kp_s_no_default___reduce___due_to_non; - PyObject *__pyx_n_s_obj; - PyObject *__pyx_n_s_pack; - PyObject *__pyx_n_s_paths; - PyObject *__pyx_n_s_pickle; - PyObject *__pyx_n_s_pyx_PickleError; - PyObject *__pyx_n_s_pyx_checksum; - PyObject *__pyx_n_s_pyx_result; - PyObject *__pyx_n_s_pyx_state; - PyObject *__pyx_n_s_pyx_type; - PyObject *__pyx_n_s_pyx_unpickle_Enum; - PyObject *__pyx_n_s_pyx_vtable; - PyObject *__pyx_n_s_range; - PyObject *__pyx_n_s_reduce; - PyObject *__pyx_n_s_reduce_cython; - PyObject *__pyx_n_s_reduce_ex; - PyObject *__pyx_n_s_register; - PyObject *__pyx_n_s_setstate; - PyObject *__pyx_n_s_setstate_cython; - PyObject *__pyx_n_s_shape; - PyObject *__pyx_n_s_size; - PyObject *__pyx_n_s_spec; - PyObject *__pyx_n_s_start; - PyObject *__pyx_n_s_step; - PyObject *__pyx_n_s_stop; - PyObject *__pyx_kp_s_strided_and_direct; - PyObject *__pyx_kp_s_strided_and_direct_or_indirect; - PyObject *__pyx_kp_s_strided_and_indirect; - PyObject *__pyx_kp_s_stringsource; - PyObject *__pyx_n_s_struct; - PyObject *__pyx_n_s_sys; - PyObject *__pyx_n_s_t_xs; - PyObject *__pyx_n_s_t_ys; - PyObject *__pyx_n_s_test; - PyObject *__pyx_kp_s_unable_to_allocate_array_data; - PyObject *__pyx_kp_s_unable_to_allocate_shape_and_str; - PyObject *__pyx_n_s_unpack; - PyObject *__pyx_n_s_update; - PyObject *__pyx_n_s_values; - PyObject *__pyx_n_s_version_info; - PyObject *__pyx_int_0; - PyObject *__pyx_int_1; - PyObject *__pyx_int_3; - PyObject *__pyx_int_112105877; - PyObject *__pyx_int_136983863; - PyObject *__pyx_int_184977713; - PyObject *__pyx_int_neg_1; - float __pyx_k__9; - PyObject *__pyx_slice__5; - PyObject *__pyx_tuple__4; - PyObject *__pyx_tuple__8; - PyObject *__pyx_tuple__10; - PyObject *__pyx_tuple__11; - PyObject *__pyx_tuple__12; - PyObject *__pyx_tuple__13; - PyObject *__pyx_tuple__14; - PyObject *__pyx_tuple__15; - PyObject *__pyx_tuple__16; - PyObject *__pyx_tuple__17; - PyObject *__pyx_tuple__18; - PyObject *__pyx_tuple__19; - PyObject *__pyx_tuple__21; - PyObject *__pyx_codeobj__20; - PyObject *__pyx_codeobj__22; -} __pyx_mstate; - -#if CYTHON_USE_MODULE_STATE -#ifdef __cplusplus -namespace { - extern struct PyModuleDef __pyx_moduledef; -} /* anonymous namespace */ -#else -static struct PyModuleDef __pyx_moduledef; -#endif - -#define __pyx_mstate(o) ((__pyx_mstate *)__Pyx_PyModule_GetState(o)) - -#define __pyx_mstate_global (__pyx_mstate(PyState_FindModule(&__pyx_moduledef))) - -#define __pyx_m (PyState_FindModule(&__pyx_moduledef)) -#else -static __pyx_mstate __pyx_mstate_global_static = -#ifdef __cplusplus - {}; -#else - {0}; -#endif -static __pyx_mstate *__pyx_mstate_global = &__pyx_mstate_global_static; -#endif -/* #### Code section: module_state_clear ### */ -#if CYTHON_USE_MODULE_STATE -static int __pyx_m_clear(PyObject *m) { - __pyx_mstate *clear_module_state = __pyx_mstate(m); - if (!clear_module_state) return 0; - Py_CLEAR(clear_module_state->__pyx_d); - Py_CLEAR(clear_module_state->__pyx_b); - Py_CLEAR(clear_module_state->__pyx_cython_runtime); - Py_CLEAR(clear_module_state->__pyx_empty_tuple); - Py_CLEAR(clear_module_state->__pyx_empty_bytes); - Py_CLEAR(clear_module_state->__pyx_empty_unicode); - #ifdef __Pyx_CyFunction_USED - Py_CLEAR(clear_module_state->__pyx_CyFunctionType); - #endif - #ifdef __Pyx_FusedFunction_USED - Py_CLEAR(clear_module_state->__pyx_FusedFunctionType); - #endif - Py_CLEAR(clear_module_state->__pyx_array_type); - Py_CLEAR(clear_module_state->__pyx_type___pyx_array); - Py_CLEAR(clear_module_state->__pyx_MemviewEnum_type); - 
Py_CLEAR(clear_module_state->__pyx_type___pyx_MemviewEnum); - Py_CLEAR(clear_module_state->__pyx_memoryview_type); - Py_CLEAR(clear_module_state->__pyx_type___pyx_memoryview); - Py_CLEAR(clear_module_state->__pyx_memoryviewslice_type); - Py_CLEAR(clear_module_state->__pyx_type___pyx_memoryviewslice); - Py_CLEAR(clear_module_state->__pyx_kp_u_); - Py_CLEAR(clear_module_state->__pyx_n_s_ASCII); - Py_CLEAR(clear_module_state->__pyx_kp_s_All_dimensions_preceding_dimensi); - Py_CLEAR(clear_module_state->__pyx_n_s_AssertionError); - Py_CLEAR(clear_module_state->__pyx_kp_s_Buffer_view_does_not_expose_stri); - Py_CLEAR(clear_module_state->__pyx_kp_s_Can_only_create_a_buffer_that_is); - Py_CLEAR(clear_module_state->__pyx_kp_s_Cannot_assign_to_read_only_memor); - Py_CLEAR(clear_module_state->__pyx_kp_s_Cannot_create_writable_memory_vi); - Py_CLEAR(clear_module_state->__pyx_kp_u_Cannot_index_with_type); - Py_CLEAR(clear_module_state->__pyx_kp_s_Cannot_transpose_memoryview_with); - Py_CLEAR(clear_module_state->__pyx_kp_s_Dimension_d_is_not_direct); - Py_CLEAR(clear_module_state->__pyx_n_s_Ellipsis); - Py_CLEAR(clear_module_state->__pyx_kp_s_Empty_shape_tuple_for_cython_arr); - Py_CLEAR(clear_module_state->__pyx_kp_s_Incompatible_checksums_0x_x_vs_0); - Py_CLEAR(clear_module_state->__pyx_n_s_IndexError); - Py_CLEAR(clear_module_state->__pyx_kp_s_Index_out_of_bounds_axis_d); - Py_CLEAR(clear_module_state->__pyx_kp_s_Indirect_dimensions_not_supporte); - Py_CLEAR(clear_module_state->__pyx_kp_u_Invalid_mode_expected_c_or_fortr); - Py_CLEAR(clear_module_state->__pyx_kp_u_Invalid_shape_in_axis); - Py_CLEAR(clear_module_state->__pyx_n_s_MemoryError); - Py_CLEAR(clear_module_state->__pyx_kp_s_MemoryView_of_r_at_0x_x); - Py_CLEAR(clear_module_state->__pyx_kp_s_MemoryView_of_r_object); - Py_CLEAR(clear_module_state->__pyx_n_b_O); - Py_CLEAR(clear_module_state->__pyx_kp_u_Out_of_bounds_on_buffer_access_a); - Py_CLEAR(clear_module_state->__pyx_n_s_PickleError); - Py_CLEAR(clear_module_state->__pyx_n_s_Sequence); - Py_CLEAR(clear_module_state->__pyx_kp_s_Step_may_not_be_zero_axis_d); - Py_CLEAR(clear_module_state->__pyx_n_s_TypeError); - Py_CLEAR(clear_module_state->__pyx_kp_s_Unable_to_convert_item_to_object); - Py_CLEAR(clear_module_state->__pyx_n_s_ValueError); - Py_CLEAR(clear_module_state->__pyx_n_s_View_MemoryView); - Py_CLEAR(clear_module_state->__pyx_kp_u__2); - Py_CLEAR(clear_module_state->__pyx_n_s__23); - Py_CLEAR(clear_module_state->__pyx_n_s__3); - Py_CLEAR(clear_module_state->__pyx_kp_u__6); - Py_CLEAR(clear_module_state->__pyx_kp_u__7); - Py_CLEAR(clear_module_state->__pyx_n_s_abc); - Py_CLEAR(clear_module_state->__pyx_n_s_allocate_buffer); - Py_CLEAR(clear_module_state->__pyx_kp_u_and); - Py_CLEAR(clear_module_state->__pyx_n_s_asyncio_coroutines); - Py_CLEAR(clear_module_state->__pyx_n_s_base); - Py_CLEAR(clear_module_state->__pyx_n_s_c); - Py_CLEAR(clear_module_state->__pyx_n_u_c); - Py_CLEAR(clear_module_state->__pyx_n_s_class); - Py_CLEAR(clear_module_state->__pyx_n_s_class_getitem); - Py_CLEAR(clear_module_state->__pyx_n_s_cline_in_traceback); - Py_CLEAR(clear_module_state->__pyx_n_s_collections); - Py_CLEAR(clear_module_state->__pyx_kp_s_collections_abc); - Py_CLEAR(clear_module_state->__pyx_kp_s_contiguous_and_direct); - Py_CLEAR(clear_module_state->__pyx_kp_s_contiguous_and_indirect); - Py_CLEAR(clear_module_state->__pyx_kp_s_core_pyx); - Py_CLEAR(clear_module_state->__pyx_n_s_count); - Py_CLEAR(clear_module_state->__pyx_n_s_dict); - Py_CLEAR(clear_module_state->__pyx_kp_u_disable); - 
Py_CLEAR(clear_module_state->__pyx_n_s_dtype_is_object); - Py_CLEAR(clear_module_state->__pyx_kp_u_enable); - Py_CLEAR(clear_module_state->__pyx_n_s_encode); - Py_CLEAR(clear_module_state->__pyx_n_s_enumerate); - Py_CLEAR(clear_module_state->__pyx_n_s_error); - Py_CLEAR(clear_module_state->__pyx_n_s_flags); - Py_CLEAR(clear_module_state->__pyx_n_s_format); - Py_CLEAR(clear_module_state->__pyx_n_s_fortran); - Py_CLEAR(clear_module_state->__pyx_n_u_fortran); - Py_CLEAR(clear_module_state->__pyx_kp_u_gc); - Py_CLEAR(clear_module_state->__pyx_n_s_getstate); - Py_CLEAR(clear_module_state->__pyx_kp_u_got); - Py_CLEAR(clear_module_state->__pyx_kp_u_got_differing_extents_in_dimensi); - Py_CLEAR(clear_module_state->__pyx_n_s_id); - Py_CLEAR(clear_module_state->__pyx_n_s_import); - Py_CLEAR(clear_module_state->__pyx_n_s_index); - Py_CLEAR(clear_module_state->__pyx_n_s_initializing); - Py_CLEAR(clear_module_state->__pyx_n_s_is_coroutine); - Py_CLEAR(clear_module_state->__pyx_kp_u_isenabled); - Py_CLEAR(clear_module_state->__pyx_n_s_itemsize); - Py_CLEAR(clear_module_state->__pyx_kp_s_itemsize_0_for_cython_array); - Py_CLEAR(clear_module_state->__pyx_n_s_main); - Py_CLEAR(clear_module_state->__pyx_n_s_maximum_path_c); - Py_CLEAR(clear_module_state->__pyx_n_s_memview); - Py_CLEAR(clear_module_state->__pyx_n_s_mode); - Py_CLEAR(clear_module_state->__pyx_n_s_monotonic_align_core); - Py_CLEAR(clear_module_state->__pyx_n_s_name); - Py_CLEAR(clear_module_state->__pyx_n_s_name_2); - Py_CLEAR(clear_module_state->__pyx_n_s_ndim); - Py_CLEAR(clear_module_state->__pyx_n_s_new); - Py_CLEAR(clear_module_state->__pyx_kp_s_no_default___reduce___due_to_non); - Py_CLEAR(clear_module_state->__pyx_n_s_obj); - Py_CLEAR(clear_module_state->__pyx_n_s_pack); - Py_CLEAR(clear_module_state->__pyx_n_s_paths); - Py_CLEAR(clear_module_state->__pyx_n_s_pickle); - Py_CLEAR(clear_module_state->__pyx_n_s_pyx_PickleError); - Py_CLEAR(clear_module_state->__pyx_n_s_pyx_checksum); - Py_CLEAR(clear_module_state->__pyx_n_s_pyx_result); - Py_CLEAR(clear_module_state->__pyx_n_s_pyx_state); - Py_CLEAR(clear_module_state->__pyx_n_s_pyx_type); - Py_CLEAR(clear_module_state->__pyx_n_s_pyx_unpickle_Enum); - Py_CLEAR(clear_module_state->__pyx_n_s_pyx_vtable); - Py_CLEAR(clear_module_state->__pyx_n_s_range); - Py_CLEAR(clear_module_state->__pyx_n_s_reduce); - Py_CLEAR(clear_module_state->__pyx_n_s_reduce_cython); - Py_CLEAR(clear_module_state->__pyx_n_s_reduce_ex); - Py_CLEAR(clear_module_state->__pyx_n_s_register); - Py_CLEAR(clear_module_state->__pyx_n_s_setstate); - Py_CLEAR(clear_module_state->__pyx_n_s_setstate_cython); - Py_CLEAR(clear_module_state->__pyx_n_s_shape); - Py_CLEAR(clear_module_state->__pyx_n_s_size); - Py_CLEAR(clear_module_state->__pyx_n_s_spec); - Py_CLEAR(clear_module_state->__pyx_n_s_start); - Py_CLEAR(clear_module_state->__pyx_n_s_step); - Py_CLEAR(clear_module_state->__pyx_n_s_stop); - Py_CLEAR(clear_module_state->__pyx_kp_s_strided_and_direct); - Py_CLEAR(clear_module_state->__pyx_kp_s_strided_and_direct_or_indirect); - Py_CLEAR(clear_module_state->__pyx_kp_s_strided_and_indirect); - Py_CLEAR(clear_module_state->__pyx_kp_s_stringsource); - Py_CLEAR(clear_module_state->__pyx_n_s_struct); - Py_CLEAR(clear_module_state->__pyx_n_s_sys); - Py_CLEAR(clear_module_state->__pyx_n_s_t_xs); - Py_CLEAR(clear_module_state->__pyx_n_s_t_ys); - Py_CLEAR(clear_module_state->__pyx_n_s_test); - Py_CLEAR(clear_module_state->__pyx_kp_s_unable_to_allocate_array_data); - 
Py_CLEAR(clear_module_state->__pyx_kp_s_unable_to_allocate_shape_and_str); - Py_CLEAR(clear_module_state->__pyx_n_s_unpack); - Py_CLEAR(clear_module_state->__pyx_n_s_update); - Py_CLEAR(clear_module_state->__pyx_n_s_values); - Py_CLEAR(clear_module_state->__pyx_n_s_version_info); - Py_CLEAR(clear_module_state->__pyx_int_0); - Py_CLEAR(clear_module_state->__pyx_int_1); - Py_CLEAR(clear_module_state->__pyx_int_3); - Py_CLEAR(clear_module_state->__pyx_int_112105877); - Py_CLEAR(clear_module_state->__pyx_int_136983863); - Py_CLEAR(clear_module_state->__pyx_int_184977713); - Py_CLEAR(clear_module_state->__pyx_int_neg_1); - Py_CLEAR(clear_module_state->__pyx_slice__5); - Py_CLEAR(clear_module_state->__pyx_tuple__4); - Py_CLEAR(clear_module_state->__pyx_tuple__8); - Py_CLEAR(clear_module_state->__pyx_tuple__10); - Py_CLEAR(clear_module_state->__pyx_tuple__11); - Py_CLEAR(clear_module_state->__pyx_tuple__12); - Py_CLEAR(clear_module_state->__pyx_tuple__13); - Py_CLEAR(clear_module_state->__pyx_tuple__14); - Py_CLEAR(clear_module_state->__pyx_tuple__15); - Py_CLEAR(clear_module_state->__pyx_tuple__16); - Py_CLEAR(clear_module_state->__pyx_tuple__17); - Py_CLEAR(clear_module_state->__pyx_tuple__18); - Py_CLEAR(clear_module_state->__pyx_tuple__19); - Py_CLEAR(clear_module_state->__pyx_tuple__21); - Py_CLEAR(clear_module_state->__pyx_codeobj__20); - Py_CLEAR(clear_module_state->__pyx_codeobj__22); - return 0; -} -#endif -/* #### Code section: module_state_traverse ### */ -#if CYTHON_USE_MODULE_STATE -static int __pyx_m_traverse(PyObject *m, visitproc visit, void *arg) { - __pyx_mstate *traverse_module_state = __pyx_mstate(m); - if (!traverse_module_state) return 0; - Py_VISIT(traverse_module_state->__pyx_d); - Py_VISIT(traverse_module_state->__pyx_b); - Py_VISIT(traverse_module_state->__pyx_cython_runtime); - Py_VISIT(traverse_module_state->__pyx_empty_tuple); - Py_VISIT(traverse_module_state->__pyx_empty_bytes); - Py_VISIT(traverse_module_state->__pyx_empty_unicode); - #ifdef __Pyx_CyFunction_USED - Py_VISIT(traverse_module_state->__pyx_CyFunctionType); - #endif - #ifdef __Pyx_FusedFunction_USED - Py_VISIT(traverse_module_state->__pyx_FusedFunctionType); - #endif - Py_VISIT(traverse_module_state->__pyx_array_type); - Py_VISIT(traverse_module_state->__pyx_type___pyx_array); - Py_VISIT(traverse_module_state->__pyx_MemviewEnum_type); - Py_VISIT(traverse_module_state->__pyx_type___pyx_MemviewEnum); - Py_VISIT(traverse_module_state->__pyx_memoryview_type); - Py_VISIT(traverse_module_state->__pyx_type___pyx_memoryview); - Py_VISIT(traverse_module_state->__pyx_memoryviewslice_type); - Py_VISIT(traverse_module_state->__pyx_type___pyx_memoryviewslice); - Py_VISIT(traverse_module_state->__pyx_kp_u_); - Py_VISIT(traverse_module_state->__pyx_n_s_ASCII); - Py_VISIT(traverse_module_state->__pyx_kp_s_All_dimensions_preceding_dimensi); - Py_VISIT(traverse_module_state->__pyx_n_s_AssertionError); - Py_VISIT(traverse_module_state->__pyx_kp_s_Buffer_view_does_not_expose_stri); - Py_VISIT(traverse_module_state->__pyx_kp_s_Can_only_create_a_buffer_that_is); - Py_VISIT(traverse_module_state->__pyx_kp_s_Cannot_assign_to_read_only_memor); - Py_VISIT(traverse_module_state->__pyx_kp_s_Cannot_create_writable_memory_vi); - Py_VISIT(traverse_module_state->__pyx_kp_u_Cannot_index_with_type); - Py_VISIT(traverse_module_state->__pyx_kp_s_Cannot_transpose_memoryview_with); - Py_VISIT(traverse_module_state->__pyx_kp_s_Dimension_d_is_not_direct); - Py_VISIT(traverse_module_state->__pyx_n_s_Ellipsis); - 
Py_VISIT(traverse_module_state->__pyx_kp_s_Empty_shape_tuple_for_cython_arr); - Py_VISIT(traverse_module_state->__pyx_kp_s_Incompatible_checksums_0x_x_vs_0); - Py_VISIT(traverse_module_state->__pyx_n_s_IndexError); - Py_VISIT(traverse_module_state->__pyx_kp_s_Index_out_of_bounds_axis_d); - Py_VISIT(traverse_module_state->__pyx_kp_s_Indirect_dimensions_not_supporte); - Py_VISIT(traverse_module_state->__pyx_kp_u_Invalid_mode_expected_c_or_fortr); - Py_VISIT(traverse_module_state->__pyx_kp_u_Invalid_shape_in_axis); - Py_VISIT(traverse_module_state->__pyx_n_s_MemoryError); - Py_VISIT(traverse_module_state->__pyx_kp_s_MemoryView_of_r_at_0x_x); - Py_VISIT(traverse_module_state->__pyx_kp_s_MemoryView_of_r_object); - Py_VISIT(traverse_module_state->__pyx_n_b_O); - Py_VISIT(traverse_module_state->__pyx_kp_u_Out_of_bounds_on_buffer_access_a); - Py_VISIT(traverse_module_state->__pyx_n_s_PickleError); - Py_VISIT(traverse_module_state->__pyx_n_s_Sequence); - Py_VISIT(traverse_module_state->__pyx_kp_s_Step_may_not_be_zero_axis_d); - Py_VISIT(traverse_module_state->__pyx_n_s_TypeError); - Py_VISIT(traverse_module_state->__pyx_kp_s_Unable_to_convert_item_to_object); - Py_VISIT(traverse_module_state->__pyx_n_s_ValueError); - Py_VISIT(traverse_module_state->__pyx_n_s_View_MemoryView); - Py_VISIT(traverse_module_state->__pyx_kp_u__2); - Py_VISIT(traverse_module_state->__pyx_n_s__23); - Py_VISIT(traverse_module_state->__pyx_n_s__3); - Py_VISIT(traverse_module_state->__pyx_kp_u__6); - Py_VISIT(traverse_module_state->__pyx_kp_u__7); - Py_VISIT(traverse_module_state->__pyx_n_s_abc); - Py_VISIT(traverse_module_state->__pyx_n_s_allocate_buffer); - Py_VISIT(traverse_module_state->__pyx_kp_u_and); - Py_VISIT(traverse_module_state->__pyx_n_s_asyncio_coroutines); - Py_VISIT(traverse_module_state->__pyx_n_s_base); - Py_VISIT(traverse_module_state->__pyx_n_s_c); - Py_VISIT(traverse_module_state->__pyx_n_u_c); - Py_VISIT(traverse_module_state->__pyx_n_s_class); - Py_VISIT(traverse_module_state->__pyx_n_s_class_getitem); - Py_VISIT(traverse_module_state->__pyx_n_s_cline_in_traceback); - Py_VISIT(traverse_module_state->__pyx_n_s_collections); - Py_VISIT(traverse_module_state->__pyx_kp_s_collections_abc); - Py_VISIT(traverse_module_state->__pyx_kp_s_contiguous_and_direct); - Py_VISIT(traverse_module_state->__pyx_kp_s_contiguous_and_indirect); - Py_VISIT(traverse_module_state->__pyx_kp_s_core_pyx); - Py_VISIT(traverse_module_state->__pyx_n_s_count); - Py_VISIT(traverse_module_state->__pyx_n_s_dict); - Py_VISIT(traverse_module_state->__pyx_kp_u_disable); - Py_VISIT(traverse_module_state->__pyx_n_s_dtype_is_object); - Py_VISIT(traverse_module_state->__pyx_kp_u_enable); - Py_VISIT(traverse_module_state->__pyx_n_s_encode); - Py_VISIT(traverse_module_state->__pyx_n_s_enumerate); - Py_VISIT(traverse_module_state->__pyx_n_s_error); - Py_VISIT(traverse_module_state->__pyx_n_s_flags); - Py_VISIT(traverse_module_state->__pyx_n_s_format); - Py_VISIT(traverse_module_state->__pyx_n_s_fortran); - Py_VISIT(traverse_module_state->__pyx_n_u_fortran); - Py_VISIT(traverse_module_state->__pyx_kp_u_gc); - Py_VISIT(traverse_module_state->__pyx_n_s_getstate); - Py_VISIT(traverse_module_state->__pyx_kp_u_got); - Py_VISIT(traverse_module_state->__pyx_kp_u_got_differing_extents_in_dimensi); - Py_VISIT(traverse_module_state->__pyx_n_s_id); - Py_VISIT(traverse_module_state->__pyx_n_s_import); - Py_VISIT(traverse_module_state->__pyx_n_s_index); - Py_VISIT(traverse_module_state->__pyx_n_s_initializing); - 
Py_VISIT(traverse_module_state->__pyx_n_s_is_coroutine); - Py_VISIT(traverse_module_state->__pyx_kp_u_isenabled); - Py_VISIT(traverse_module_state->__pyx_n_s_itemsize); - Py_VISIT(traverse_module_state->__pyx_kp_s_itemsize_0_for_cython_array); - Py_VISIT(traverse_module_state->__pyx_n_s_main); - Py_VISIT(traverse_module_state->__pyx_n_s_maximum_path_c); - Py_VISIT(traverse_module_state->__pyx_n_s_memview); - Py_VISIT(traverse_module_state->__pyx_n_s_mode); - Py_VISIT(traverse_module_state->__pyx_n_s_monotonic_align_core); - Py_VISIT(traverse_module_state->__pyx_n_s_name); - Py_VISIT(traverse_module_state->__pyx_n_s_name_2); - Py_VISIT(traverse_module_state->__pyx_n_s_ndim); - Py_VISIT(traverse_module_state->__pyx_n_s_new); - Py_VISIT(traverse_module_state->__pyx_kp_s_no_default___reduce___due_to_non); - Py_VISIT(traverse_module_state->__pyx_n_s_obj); - Py_VISIT(traverse_module_state->__pyx_n_s_pack); - Py_VISIT(traverse_module_state->__pyx_n_s_paths); - Py_VISIT(traverse_module_state->__pyx_n_s_pickle); - Py_VISIT(traverse_module_state->__pyx_n_s_pyx_PickleError); - Py_VISIT(traverse_module_state->__pyx_n_s_pyx_checksum); - Py_VISIT(traverse_module_state->__pyx_n_s_pyx_result); - Py_VISIT(traverse_module_state->__pyx_n_s_pyx_state); - Py_VISIT(traverse_module_state->__pyx_n_s_pyx_type); - Py_VISIT(traverse_module_state->__pyx_n_s_pyx_unpickle_Enum); - Py_VISIT(traverse_module_state->__pyx_n_s_pyx_vtable); - Py_VISIT(traverse_module_state->__pyx_n_s_range); - Py_VISIT(traverse_module_state->__pyx_n_s_reduce); - Py_VISIT(traverse_module_state->__pyx_n_s_reduce_cython); - Py_VISIT(traverse_module_state->__pyx_n_s_reduce_ex); - Py_VISIT(traverse_module_state->__pyx_n_s_register); - Py_VISIT(traverse_module_state->__pyx_n_s_setstate); - Py_VISIT(traverse_module_state->__pyx_n_s_setstate_cython); - Py_VISIT(traverse_module_state->__pyx_n_s_shape); - Py_VISIT(traverse_module_state->__pyx_n_s_size); - Py_VISIT(traverse_module_state->__pyx_n_s_spec); - Py_VISIT(traverse_module_state->__pyx_n_s_start); - Py_VISIT(traverse_module_state->__pyx_n_s_step); - Py_VISIT(traverse_module_state->__pyx_n_s_stop); - Py_VISIT(traverse_module_state->__pyx_kp_s_strided_and_direct); - Py_VISIT(traverse_module_state->__pyx_kp_s_strided_and_direct_or_indirect); - Py_VISIT(traverse_module_state->__pyx_kp_s_strided_and_indirect); - Py_VISIT(traverse_module_state->__pyx_kp_s_stringsource); - Py_VISIT(traverse_module_state->__pyx_n_s_struct); - Py_VISIT(traverse_module_state->__pyx_n_s_sys); - Py_VISIT(traverse_module_state->__pyx_n_s_t_xs); - Py_VISIT(traverse_module_state->__pyx_n_s_t_ys); - Py_VISIT(traverse_module_state->__pyx_n_s_test); - Py_VISIT(traverse_module_state->__pyx_kp_s_unable_to_allocate_array_data); - Py_VISIT(traverse_module_state->__pyx_kp_s_unable_to_allocate_shape_and_str); - Py_VISIT(traverse_module_state->__pyx_n_s_unpack); - Py_VISIT(traverse_module_state->__pyx_n_s_update); - Py_VISIT(traverse_module_state->__pyx_n_s_values); - Py_VISIT(traverse_module_state->__pyx_n_s_version_info); - Py_VISIT(traverse_module_state->__pyx_int_0); - Py_VISIT(traverse_module_state->__pyx_int_1); - Py_VISIT(traverse_module_state->__pyx_int_3); - Py_VISIT(traverse_module_state->__pyx_int_112105877); - Py_VISIT(traverse_module_state->__pyx_int_136983863); - Py_VISIT(traverse_module_state->__pyx_int_184977713); - Py_VISIT(traverse_module_state->__pyx_int_neg_1); - Py_VISIT(traverse_module_state->__pyx_slice__5); - Py_VISIT(traverse_module_state->__pyx_tuple__4); - Py_VISIT(traverse_module_state->__pyx_tuple__8); - 
Py_VISIT(traverse_module_state->__pyx_tuple__10); - Py_VISIT(traverse_module_state->__pyx_tuple__11); - Py_VISIT(traverse_module_state->__pyx_tuple__12); - Py_VISIT(traverse_module_state->__pyx_tuple__13); - Py_VISIT(traverse_module_state->__pyx_tuple__14); - Py_VISIT(traverse_module_state->__pyx_tuple__15); - Py_VISIT(traverse_module_state->__pyx_tuple__16); - Py_VISIT(traverse_module_state->__pyx_tuple__17); - Py_VISIT(traverse_module_state->__pyx_tuple__18); - Py_VISIT(traverse_module_state->__pyx_tuple__19); - Py_VISIT(traverse_module_state->__pyx_tuple__21); - Py_VISIT(traverse_module_state->__pyx_codeobj__20); - Py_VISIT(traverse_module_state->__pyx_codeobj__22); - return 0; -} -#endif -/* #### Code section: module_state_defines ### */ -#define __pyx_d __pyx_mstate_global->__pyx_d -#define __pyx_b __pyx_mstate_global->__pyx_b -#define __pyx_cython_runtime __pyx_mstate_global->__pyx_cython_runtime -#define __pyx_empty_tuple __pyx_mstate_global->__pyx_empty_tuple -#define __pyx_empty_bytes __pyx_mstate_global->__pyx_empty_bytes -#define __pyx_empty_unicode __pyx_mstate_global->__pyx_empty_unicode -#ifdef __Pyx_CyFunction_USED -#define __pyx_CyFunctionType __pyx_mstate_global->__pyx_CyFunctionType -#endif -#ifdef __Pyx_FusedFunction_USED -#define __pyx_FusedFunctionType __pyx_mstate_global->__pyx_FusedFunctionType -#endif -#ifdef __Pyx_Generator_USED -#define __pyx_GeneratorType __pyx_mstate_global->__pyx_GeneratorType -#endif -#ifdef __Pyx_IterableCoroutine_USED -#define __pyx_IterableCoroutineType __pyx_mstate_global->__pyx_IterableCoroutineType -#endif -#ifdef __Pyx_Coroutine_USED -#define __pyx_CoroutineAwaitType __pyx_mstate_global->__pyx_CoroutineAwaitType -#endif -#ifdef __Pyx_Coroutine_USED -#define __pyx_CoroutineType __pyx_mstate_global->__pyx_CoroutineType -#endif -#if CYTHON_USE_MODULE_STATE -#endif -#if CYTHON_USE_MODULE_STATE -#endif -#if CYTHON_USE_MODULE_STATE -#endif -#if CYTHON_USE_MODULE_STATE -#define __pyx_type___pyx_array __pyx_mstate_global->__pyx_type___pyx_array -#define __pyx_type___pyx_MemviewEnum __pyx_mstate_global->__pyx_type___pyx_MemviewEnum -#define __pyx_type___pyx_memoryview __pyx_mstate_global->__pyx_type___pyx_memoryview -#define __pyx_type___pyx_memoryviewslice __pyx_mstate_global->__pyx_type___pyx_memoryviewslice -#endif -#define __pyx_array_type __pyx_mstate_global->__pyx_array_type -#define __pyx_MemviewEnum_type __pyx_mstate_global->__pyx_MemviewEnum_type -#define __pyx_memoryview_type __pyx_mstate_global->__pyx_memoryview_type -#define __pyx_memoryviewslice_type __pyx_mstate_global->__pyx_memoryviewslice_type -#define __pyx_kp_u_ __pyx_mstate_global->__pyx_kp_u_ -#define __pyx_n_s_ASCII __pyx_mstate_global->__pyx_n_s_ASCII -#define __pyx_kp_s_All_dimensions_preceding_dimensi __pyx_mstate_global->__pyx_kp_s_All_dimensions_preceding_dimensi -#define __pyx_n_s_AssertionError __pyx_mstate_global->__pyx_n_s_AssertionError -#define __pyx_kp_s_Buffer_view_does_not_expose_stri __pyx_mstate_global->__pyx_kp_s_Buffer_view_does_not_expose_stri -#define __pyx_kp_s_Can_only_create_a_buffer_that_is __pyx_mstate_global->__pyx_kp_s_Can_only_create_a_buffer_that_is -#define __pyx_kp_s_Cannot_assign_to_read_only_memor __pyx_mstate_global->__pyx_kp_s_Cannot_assign_to_read_only_memor -#define __pyx_kp_s_Cannot_create_writable_memory_vi __pyx_mstate_global->__pyx_kp_s_Cannot_create_writable_memory_vi -#define __pyx_kp_u_Cannot_index_with_type __pyx_mstate_global->__pyx_kp_u_Cannot_index_with_type -#define __pyx_kp_s_Cannot_transpose_memoryview_with 
__pyx_mstate_global->__pyx_kp_s_Cannot_transpose_memoryview_with -#define __pyx_kp_s_Dimension_d_is_not_direct __pyx_mstate_global->__pyx_kp_s_Dimension_d_is_not_direct -#define __pyx_n_s_Ellipsis __pyx_mstate_global->__pyx_n_s_Ellipsis -#define __pyx_kp_s_Empty_shape_tuple_for_cython_arr __pyx_mstate_global->__pyx_kp_s_Empty_shape_tuple_for_cython_arr -#define __pyx_kp_s_Incompatible_checksums_0x_x_vs_0 __pyx_mstate_global->__pyx_kp_s_Incompatible_checksums_0x_x_vs_0 -#define __pyx_n_s_IndexError __pyx_mstate_global->__pyx_n_s_IndexError -#define __pyx_kp_s_Index_out_of_bounds_axis_d __pyx_mstate_global->__pyx_kp_s_Index_out_of_bounds_axis_d -#define __pyx_kp_s_Indirect_dimensions_not_supporte __pyx_mstate_global->__pyx_kp_s_Indirect_dimensions_not_supporte -#define __pyx_kp_u_Invalid_mode_expected_c_or_fortr __pyx_mstate_global->__pyx_kp_u_Invalid_mode_expected_c_or_fortr -#define __pyx_kp_u_Invalid_shape_in_axis __pyx_mstate_global->__pyx_kp_u_Invalid_shape_in_axis -#define __pyx_n_s_MemoryError __pyx_mstate_global->__pyx_n_s_MemoryError -#define __pyx_kp_s_MemoryView_of_r_at_0x_x __pyx_mstate_global->__pyx_kp_s_MemoryView_of_r_at_0x_x -#define __pyx_kp_s_MemoryView_of_r_object __pyx_mstate_global->__pyx_kp_s_MemoryView_of_r_object -#define __pyx_n_b_O __pyx_mstate_global->__pyx_n_b_O -#define __pyx_kp_u_Out_of_bounds_on_buffer_access_a __pyx_mstate_global->__pyx_kp_u_Out_of_bounds_on_buffer_access_a -#define __pyx_n_s_PickleError __pyx_mstate_global->__pyx_n_s_PickleError -#define __pyx_n_s_Sequence __pyx_mstate_global->__pyx_n_s_Sequence -#define __pyx_kp_s_Step_may_not_be_zero_axis_d __pyx_mstate_global->__pyx_kp_s_Step_may_not_be_zero_axis_d -#define __pyx_n_s_TypeError __pyx_mstate_global->__pyx_n_s_TypeError -#define __pyx_kp_s_Unable_to_convert_item_to_object __pyx_mstate_global->__pyx_kp_s_Unable_to_convert_item_to_object -#define __pyx_n_s_ValueError __pyx_mstate_global->__pyx_n_s_ValueError -#define __pyx_n_s_View_MemoryView __pyx_mstate_global->__pyx_n_s_View_MemoryView -#define __pyx_kp_u__2 __pyx_mstate_global->__pyx_kp_u__2 -#define __pyx_n_s__23 __pyx_mstate_global->__pyx_n_s__23 -#define __pyx_n_s__3 __pyx_mstate_global->__pyx_n_s__3 -#define __pyx_kp_u__6 __pyx_mstate_global->__pyx_kp_u__6 -#define __pyx_kp_u__7 __pyx_mstate_global->__pyx_kp_u__7 -#define __pyx_n_s_abc __pyx_mstate_global->__pyx_n_s_abc -#define __pyx_n_s_allocate_buffer __pyx_mstate_global->__pyx_n_s_allocate_buffer -#define __pyx_kp_u_and __pyx_mstate_global->__pyx_kp_u_and -#define __pyx_n_s_asyncio_coroutines __pyx_mstate_global->__pyx_n_s_asyncio_coroutines -#define __pyx_n_s_base __pyx_mstate_global->__pyx_n_s_base -#define __pyx_n_s_c __pyx_mstate_global->__pyx_n_s_c -#define __pyx_n_u_c __pyx_mstate_global->__pyx_n_u_c -#define __pyx_n_s_class __pyx_mstate_global->__pyx_n_s_class -#define __pyx_n_s_class_getitem __pyx_mstate_global->__pyx_n_s_class_getitem -#define __pyx_n_s_cline_in_traceback __pyx_mstate_global->__pyx_n_s_cline_in_traceback -#define __pyx_n_s_collections __pyx_mstate_global->__pyx_n_s_collections -#define __pyx_kp_s_collections_abc __pyx_mstate_global->__pyx_kp_s_collections_abc -#define __pyx_kp_s_contiguous_and_direct __pyx_mstate_global->__pyx_kp_s_contiguous_and_direct -#define __pyx_kp_s_contiguous_and_indirect __pyx_mstate_global->__pyx_kp_s_contiguous_and_indirect -#define __pyx_kp_s_core_pyx __pyx_mstate_global->__pyx_kp_s_core_pyx -#define __pyx_n_s_count __pyx_mstate_global->__pyx_n_s_count -#define __pyx_n_s_dict __pyx_mstate_global->__pyx_n_s_dict -#define 
__pyx_kp_u_disable __pyx_mstate_global->__pyx_kp_u_disable -#define __pyx_n_s_dtype_is_object __pyx_mstate_global->__pyx_n_s_dtype_is_object -#define __pyx_kp_u_enable __pyx_mstate_global->__pyx_kp_u_enable -#define __pyx_n_s_encode __pyx_mstate_global->__pyx_n_s_encode -#define __pyx_n_s_enumerate __pyx_mstate_global->__pyx_n_s_enumerate -#define __pyx_n_s_error __pyx_mstate_global->__pyx_n_s_error -#define __pyx_n_s_flags __pyx_mstate_global->__pyx_n_s_flags -#define __pyx_n_s_format __pyx_mstate_global->__pyx_n_s_format -#define __pyx_n_s_fortran __pyx_mstate_global->__pyx_n_s_fortran -#define __pyx_n_u_fortran __pyx_mstate_global->__pyx_n_u_fortran -#define __pyx_kp_u_gc __pyx_mstate_global->__pyx_kp_u_gc -#define __pyx_n_s_getstate __pyx_mstate_global->__pyx_n_s_getstate -#define __pyx_kp_u_got __pyx_mstate_global->__pyx_kp_u_got -#define __pyx_kp_u_got_differing_extents_in_dimensi __pyx_mstate_global->__pyx_kp_u_got_differing_extents_in_dimensi -#define __pyx_n_s_id __pyx_mstate_global->__pyx_n_s_id -#define __pyx_n_s_import __pyx_mstate_global->__pyx_n_s_import -#define __pyx_n_s_index __pyx_mstate_global->__pyx_n_s_index -#define __pyx_n_s_initializing __pyx_mstate_global->__pyx_n_s_initializing -#define __pyx_n_s_is_coroutine __pyx_mstate_global->__pyx_n_s_is_coroutine -#define __pyx_kp_u_isenabled __pyx_mstate_global->__pyx_kp_u_isenabled -#define __pyx_n_s_itemsize __pyx_mstate_global->__pyx_n_s_itemsize -#define __pyx_kp_s_itemsize_0_for_cython_array __pyx_mstate_global->__pyx_kp_s_itemsize_0_for_cython_array -#define __pyx_n_s_main __pyx_mstate_global->__pyx_n_s_main -#define __pyx_n_s_maximum_path_c __pyx_mstate_global->__pyx_n_s_maximum_path_c -#define __pyx_n_s_memview __pyx_mstate_global->__pyx_n_s_memview -#define __pyx_n_s_mode __pyx_mstate_global->__pyx_n_s_mode -#define __pyx_n_s_monotonic_align_core __pyx_mstate_global->__pyx_n_s_monotonic_align_core -#define __pyx_n_s_name __pyx_mstate_global->__pyx_n_s_name -#define __pyx_n_s_name_2 __pyx_mstate_global->__pyx_n_s_name_2 -#define __pyx_n_s_ndim __pyx_mstate_global->__pyx_n_s_ndim -#define __pyx_n_s_new __pyx_mstate_global->__pyx_n_s_new -#define __pyx_kp_s_no_default___reduce___due_to_non __pyx_mstate_global->__pyx_kp_s_no_default___reduce___due_to_non -#define __pyx_n_s_obj __pyx_mstate_global->__pyx_n_s_obj -#define __pyx_n_s_pack __pyx_mstate_global->__pyx_n_s_pack -#define __pyx_n_s_paths __pyx_mstate_global->__pyx_n_s_paths -#define __pyx_n_s_pickle __pyx_mstate_global->__pyx_n_s_pickle -#define __pyx_n_s_pyx_PickleError __pyx_mstate_global->__pyx_n_s_pyx_PickleError -#define __pyx_n_s_pyx_checksum __pyx_mstate_global->__pyx_n_s_pyx_checksum -#define __pyx_n_s_pyx_result __pyx_mstate_global->__pyx_n_s_pyx_result -#define __pyx_n_s_pyx_state __pyx_mstate_global->__pyx_n_s_pyx_state -#define __pyx_n_s_pyx_type __pyx_mstate_global->__pyx_n_s_pyx_type -#define __pyx_n_s_pyx_unpickle_Enum __pyx_mstate_global->__pyx_n_s_pyx_unpickle_Enum -#define __pyx_n_s_pyx_vtable __pyx_mstate_global->__pyx_n_s_pyx_vtable -#define __pyx_n_s_range __pyx_mstate_global->__pyx_n_s_range -#define __pyx_n_s_reduce __pyx_mstate_global->__pyx_n_s_reduce -#define __pyx_n_s_reduce_cython __pyx_mstate_global->__pyx_n_s_reduce_cython -#define __pyx_n_s_reduce_ex __pyx_mstate_global->__pyx_n_s_reduce_ex -#define __pyx_n_s_register __pyx_mstate_global->__pyx_n_s_register -#define __pyx_n_s_setstate __pyx_mstate_global->__pyx_n_s_setstate -#define __pyx_n_s_setstate_cython __pyx_mstate_global->__pyx_n_s_setstate_cython -#define __pyx_n_s_shape 
__pyx_mstate_global->__pyx_n_s_shape -#define __pyx_n_s_size __pyx_mstate_global->__pyx_n_s_size -#define __pyx_n_s_spec __pyx_mstate_global->__pyx_n_s_spec -#define __pyx_n_s_start __pyx_mstate_global->__pyx_n_s_start -#define __pyx_n_s_step __pyx_mstate_global->__pyx_n_s_step -#define __pyx_n_s_stop __pyx_mstate_global->__pyx_n_s_stop -#define __pyx_kp_s_strided_and_direct __pyx_mstate_global->__pyx_kp_s_strided_and_direct -#define __pyx_kp_s_strided_and_direct_or_indirect __pyx_mstate_global->__pyx_kp_s_strided_and_direct_or_indirect -#define __pyx_kp_s_strided_and_indirect __pyx_mstate_global->__pyx_kp_s_strided_and_indirect -#define __pyx_kp_s_stringsource __pyx_mstate_global->__pyx_kp_s_stringsource -#define __pyx_n_s_struct __pyx_mstate_global->__pyx_n_s_struct -#define __pyx_n_s_sys __pyx_mstate_global->__pyx_n_s_sys -#define __pyx_n_s_t_xs __pyx_mstate_global->__pyx_n_s_t_xs -#define __pyx_n_s_t_ys __pyx_mstate_global->__pyx_n_s_t_ys -#define __pyx_n_s_test __pyx_mstate_global->__pyx_n_s_test -#define __pyx_kp_s_unable_to_allocate_array_data __pyx_mstate_global->__pyx_kp_s_unable_to_allocate_array_data -#define __pyx_kp_s_unable_to_allocate_shape_and_str __pyx_mstate_global->__pyx_kp_s_unable_to_allocate_shape_and_str -#define __pyx_n_s_unpack __pyx_mstate_global->__pyx_n_s_unpack -#define __pyx_n_s_update __pyx_mstate_global->__pyx_n_s_update -#define __pyx_n_s_values __pyx_mstate_global->__pyx_n_s_values -#define __pyx_n_s_version_info __pyx_mstate_global->__pyx_n_s_version_info -#define __pyx_int_0 __pyx_mstate_global->__pyx_int_0 -#define __pyx_int_1 __pyx_mstate_global->__pyx_int_1 -#define __pyx_int_3 __pyx_mstate_global->__pyx_int_3 -#define __pyx_int_112105877 __pyx_mstate_global->__pyx_int_112105877 -#define __pyx_int_136983863 __pyx_mstate_global->__pyx_int_136983863 -#define __pyx_int_184977713 __pyx_mstate_global->__pyx_int_184977713 -#define __pyx_int_neg_1 __pyx_mstate_global->__pyx_int_neg_1 -#define __pyx_k__9 __pyx_mstate_global->__pyx_k__9 -#define __pyx_slice__5 __pyx_mstate_global->__pyx_slice__5 -#define __pyx_tuple__4 __pyx_mstate_global->__pyx_tuple__4 -#define __pyx_tuple__8 __pyx_mstate_global->__pyx_tuple__8 -#define __pyx_tuple__10 __pyx_mstate_global->__pyx_tuple__10 -#define __pyx_tuple__11 __pyx_mstate_global->__pyx_tuple__11 -#define __pyx_tuple__12 __pyx_mstate_global->__pyx_tuple__12 -#define __pyx_tuple__13 __pyx_mstate_global->__pyx_tuple__13 -#define __pyx_tuple__14 __pyx_mstate_global->__pyx_tuple__14 -#define __pyx_tuple__15 __pyx_mstate_global->__pyx_tuple__15 -#define __pyx_tuple__16 __pyx_mstate_global->__pyx_tuple__16 -#define __pyx_tuple__17 __pyx_mstate_global->__pyx_tuple__17 -#define __pyx_tuple__18 __pyx_mstate_global->__pyx_tuple__18 -#define __pyx_tuple__19 __pyx_mstate_global->__pyx_tuple__19 -#define __pyx_tuple__21 __pyx_mstate_global->__pyx_tuple__21 -#define __pyx_codeobj__20 __pyx_mstate_global->__pyx_codeobj__20 -#define __pyx_codeobj__22 __pyx_mstate_global->__pyx_codeobj__22 -/* #### Code section: module_code ### */ - -/* "View.MemoryView":131 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - -/* Python wrapper */ -static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_shape = 0; - Py_ssize_t __pyx_v_itemsize; - 
PyObject *__pyx_v_format = 0; - PyObject *__pyx_v_mode = 0; - int __pyx_v_allocate_buffer; - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_shape,&__pyx_n_s_itemsize,&__pyx_n_s_format,&__pyx_n_s_mode,&__pyx_n_s_allocate_buffer,0}; - PyObject* values[5] = {0,0,0,0,0}; - values[3] = ((PyObject *)__pyx_n_s_c); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_VARARGS(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_VARARGS(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_VARARGS(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_VARARGS(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_VARARGS(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_VARARGS(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_shape)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 131, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_itemsize)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 131, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 1); __PYX_ERR(1, 131, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_format)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 131, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 2); __PYX_ERR(1, 131, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_mode); - if (value) { values[3] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 131, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_allocate_buffer); - if (value) { values[4] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 131, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__cinit__") < 0)) __PYX_ERR(1, 131, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_VARARGS(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_VARARGS(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_VARARGS(__pyx_args, 2); - values[1] = __Pyx_Arg_VARARGS(__pyx_args, 1); - values[0] = __Pyx_Arg_VARARGS(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_shape = ((PyObject*)values[0]); - __pyx_v_itemsize = __Pyx_PyIndex_AsSsize_t(values[1]); if (unlikely((__pyx_v_itemsize == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 131, 
__pyx_L3_error) - __pyx_v_format = values[2]; - __pyx_v_mode = values[3]; - if (values[4]) { - __pyx_v_allocate_buffer = __Pyx_PyObject_IsTrue(values[4]); if (unlikely((__pyx_v_allocate_buffer == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 132, __pyx_L3_error) - } else { - - /* "View.MemoryView":132 - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, - * mode="c", bint allocate_buffer=True): # <<<<<<<<<<<<<< - * - * cdef int idx - */ - __pyx_v_allocate_buffer = ((int)1); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, __pyx_nargs); __PYX_ERR(1, 131, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_shape), (&PyTuple_Type), 1, "shape", 1))) __PYX_ERR(1, 131, __pyx_L1_error) - if (unlikely(((PyObject *)__pyx_v_format) == Py_None)) { - PyErr_Format(PyExc_TypeError, "Argument '%.200s' must not be None", "format"); __PYX_ERR(1, 131, __pyx_L1_error) - } - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(((struct __pyx_array_obj *)__pyx_v_self), __pyx_v_shape, __pyx_v_itemsize, __pyx_v_format, __pyx_v_mode, __pyx_v_allocate_buffer); - - /* "View.MemoryView":131 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - goto __pyx_L0; - __pyx_L1_error:; - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer) { - int __pyx_v_idx; - Py_ssize_t __pyx_v_dim; - char __pyx_v_order; - int __pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - char *__pyx_t_8; - Py_ssize_t __pyx_t_9; - Py_UCS4 __pyx_t_10; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - __Pyx_INCREF(__pyx_v_format); - - /* "View.MemoryView":137 - * cdef Py_ssize_t dim - * - * self.ndim = len(shape) # <<<<<<<<<<<<<< - * self.itemsize = itemsize - * - */ - if (unlikely(__pyx_v_shape == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 137, __pyx_L1_error) - } - __pyx_t_1 = PyTuple_GET_SIZE(__pyx_v_shape); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(1, 137, __pyx_L1_error) - __pyx_v_self->ndim = ((int)__pyx_t_1); - - /* "View.MemoryView":138 - * - * self.ndim = len(shape) - * self.itemsize = itemsize # <<<<<<<<<<<<<< - * - * if not self.ndim: - */ - __pyx_v_self->itemsize = __pyx_v_itemsize; - - /* "View.MemoryView":140 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError, "Empty shape tuple for cython.array" - * - */ - __pyx_t_2 = (!(__pyx_v_self->ndim != 0)); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":141 - * - * if not self.ndim: - * raise ValueError, "Empty shape tuple for cython.array" # <<<<<<<<<<<<<< - * - * if 
itemsize <= 0: - */ - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_kp_s_Empty_shape_tuple_for_cython_arr, 0, 0); - __PYX_ERR(1, 141, __pyx_L1_error) - - /* "View.MemoryView":140 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError, "Empty shape tuple for cython.array" - * - */ - } - - /* "View.MemoryView":143 - * raise ValueError, "Empty shape tuple for cython.array" - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError, "itemsize <= 0 for cython.array" - * - */ - __pyx_t_2 = (__pyx_v_itemsize <= 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":144 - * - * if itemsize <= 0: - * raise ValueError, "itemsize <= 0 for cython.array" # <<<<<<<<<<<<<< - * - * if not isinstance(format, bytes): - */ - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_kp_s_itemsize_0_for_cython_array, 0, 0); - __PYX_ERR(1, 144, __pyx_L1_error) - - /* "View.MemoryView":143 - * raise ValueError, "Empty shape tuple for cython.array" - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError, "itemsize <= 0 for cython.array" - * - */ - } - - /* "View.MemoryView":146 - * raise ValueError, "itemsize <= 0 for cython.array" - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - __pyx_t_2 = PyBytes_Check(__pyx_v_format); - __pyx_t_3 = (!__pyx_t_2); - if (__pyx_t_3) { - - /* "View.MemoryView":147 - * - * if not isinstance(format, bytes): - * format = format.encode('ASCII') # <<<<<<<<<<<<<< - * self._format = format # keep a reference to the byte string - * self.format = self._format - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_format, __pyx_n_s_encode); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_6, __pyx_n_s_ASCII}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_7, 1+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __Pyx_DECREF_SET(__pyx_v_format, __pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":146 - * raise ValueError, "itemsize <= 0 for cython.array" - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - } - - /* "View.MemoryView":148 - * if not isinstance(format, bytes): - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string # <<<<<<<<<<<<<< - * self.format = self._format - * - */ - if (!(likely(PyBytes_CheckExact(__pyx_v_format))||((__pyx_v_format) == Py_None) || __Pyx_RaiseUnexpectedTypeError("bytes", __pyx_v_format))) __PYX_ERR(1, 148, __pyx_L1_error) - __pyx_t_4 = __pyx_v_format; - __Pyx_INCREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __Pyx_GOTREF(__pyx_v_self->_format); - __Pyx_DECREF(__pyx_v_self->_format); - __pyx_v_self->_format = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":149 - * format = format.encode('ASCII') - * self._format = format # keep a reference to 
the byte string - * self.format = self._format # <<<<<<<<<<<<<< - * - * - */ - if (unlikely(__pyx_v_self->_format == Py_None)) { - PyErr_SetString(PyExc_TypeError, "expected bytes, NoneType found"); - __PYX_ERR(1, 149, __pyx_L1_error) - } - __pyx_t_8 = __Pyx_PyBytes_AsWritableString(__pyx_v_self->_format); if (unlikely((!__pyx_t_8) && PyErr_Occurred())) __PYX_ERR(1, 149, __pyx_L1_error) - __pyx_v_self->format = __pyx_t_8; - - /* "View.MemoryView":152 - * - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) # <<<<<<<<<<<<<< - * self._strides = self._shape + self.ndim - * - */ - __pyx_v_self->_shape = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * __pyx_v_self->ndim) * 2))); - - /* "View.MemoryView":153 - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) - * self._strides = self._shape + self.ndim # <<<<<<<<<<<<<< - * - * if not self._shape: - */ - __pyx_v_self->_strides = (__pyx_v_self->_shape + __pyx_v_self->ndim); - - /* "View.MemoryView":155 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError, "unable to allocate shape and strides." - * - */ - __pyx_t_3 = (!(__pyx_v_self->_shape != 0)); - if (unlikely(__pyx_t_3)) { - - /* "View.MemoryView":156 - * - * if not self._shape: - * raise MemoryError, "unable to allocate shape and strides." # <<<<<<<<<<<<<< - * - * - */ - __Pyx_Raise(__pyx_builtin_MemoryError, __pyx_kp_s_unable_to_allocate_shape_and_str, 0, 0); - __PYX_ERR(1, 156, __pyx_L1_error) - - /* "View.MemoryView":155 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError, "unable to allocate shape and strides." - * - */ - } - - /* "View.MemoryView":159 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError, f"Invalid shape in axis {idx}: {dim}." - */ - __pyx_t_7 = 0; - __pyx_t_4 = __pyx_v_shape; __Pyx_INCREF(__pyx_t_4); __pyx_t_1 = 0; - for (;;) { - if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_4, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely((0 < 0))) __PYX_ERR(1, 159, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_4, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_5); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 159, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_9; - __pyx_v_idx = __pyx_t_7; - __pyx_t_7 = (__pyx_t_7 + 1); - - /* "View.MemoryView":160 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError, f"Invalid shape in axis {idx}: {dim}." - * self._shape[idx] = dim - */ - __pyx_t_3 = (__pyx_v_dim <= 0); - if (unlikely(__pyx_t_3)) { - - /* "View.MemoryView":161 - * for idx, dim in enumerate(shape): - * if dim <= 0: - * raise ValueError, f"Invalid shape in axis {idx}: {dim}." 
# <<<<<<<<<<<<<< - * self._shape[idx] = dim - * - */ - __pyx_t_5 = PyTuple_New(5); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_9 = 0; - __pyx_t_10 = 127; - __Pyx_INCREF(__pyx_kp_u_Invalid_shape_in_axis); - __pyx_t_9 += 22; - __Pyx_GIVEREF(__pyx_kp_u_Invalid_shape_in_axis); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_kp_u_Invalid_shape_in_axis); - __pyx_t_6 = __Pyx_PyUnicode_From_int(__pyx_v_idx, 0, ' ', 'd'); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_9 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_6); - __pyx_t_6 = 0; - __Pyx_INCREF(__pyx_kp_u_); - __pyx_t_9 += 2; - __Pyx_GIVEREF(__pyx_kp_u_); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_kp_u_); - __pyx_t_6 = __Pyx_PyUnicode_From_Py_ssize_t(__pyx_v_dim, 0, ' ', 'd'); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_9 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_6); - __pyx_t_6 = 0; - __Pyx_INCREF(__pyx_kp_u__2); - __pyx_t_9 += 1; - __Pyx_GIVEREF(__pyx_kp_u__2); - PyTuple_SET_ITEM(__pyx_t_5, 4, __pyx_kp_u__2); - __pyx_t_6 = __Pyx_PyUnicode_Join(__pyx_t_5, 5, __pyx_t_9, __pyx_t_10); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_t_6, 0, 0); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __PYX_ERR(1, 161, __pyx_L1_error) - - /* "View.MemoryView":160 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError, f"Invalid shape in axis {idx}: {dim}." - * self._shape[idx] = dim - */ - } - - /* "View.MemoryView":162 - * if dim <= 0: - * raise ValueError, f"Invalid shape in axis {idx}: {dim}." - * self._shape[idx] = dim # <<<<<<<<<<<<<< - * - * cdef char order - */ - (__pyx_v_self->_shape[__pyx_v_idx]) = __pyx_v_dim; - - /* "View.MemoryView":159 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError, f"Invalid shape in axis {idx}: {dim}." 
- */ - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "View.MemoryView":165 - * - * cdef char order - * if mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - __pyx_t_3 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_c, Py_EQ)); if (unlikely((__pyx_t_3 < 0))) __PYX_ERR(1, 165, __pyx_L1_error) - if (__pyx_t_3) { - - /* "View.MemoryView":166 - * cdef char order - * if mode == 'c': - * order = b'C' # <<<<<<<<<<<<<< - * self.mode = u'c' - * elif mode == 'fortran': - */ - __pyx_v_order = 'C'; - - /* "View.MemoryView":167 - * if mode == 'c': - * order = b'C' - * self.mode = u'c' # <<<<<<<<<<<<<< - * elif mode == 'fortran': - * order = b'F' - */ - __Pyx_INCREF(__pyx_n_u_c); - __Pyx_GIVEREF(__pyx_n_u_c); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_c; - - /* "View.MemoryView":165 - * - * cdef char order - * if mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - goto __pyx_L11; - } - - /* "View.MemoryView":168 - * order = b'C' - * self.mode = u'c' - * elif mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - __pyx_t_3 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_fortran, Py_EQ)); if (unlikely((__pyx_t_3 < 0))) __PYX_ERR(1, 168, __pyx_L1_error) - if (likely(__pyx_t_3)) { - - /* "View.MemoryView":169 - * self.mode = u'c' - * elif mode == 'fortran': - * order = b'F' # <<<<<<<<<<<<<< - * self.mode = u'fortran' - * else: - */ - __pyx_v_order = 'F'; - - /* "View.MemoryView":170 - * elif mode == 'fortran': - * order = b'F' - * self.mode = u'fortran' # <<<<<<<<<<<<<< - * else: - * raise ValueError, f"Invalid mode, expected 'c' or 'fortran', got {mode}" - */ - __Pyx_INCREF(__pyx_n_u_fortran); - __Pyx_GIVEREF(__pyx_n_u_fortran); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_fortran; - - /* "View.MemoryView":168 - * order = b'C' - * self.mode = u'c' - * elif mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - goto __pyx_L11; - } - - /* "View.MemoryView":172 - * self.mode = u'fortran' - * else: - * raise ValueError, f"Invalid mode, expected 'c' or 'fortran', got {mode}" # <<<<<<<<<<<<<< - * - * self.len = fill_contig_strides_array(self._shape, self._strides, itemsize, self.ndim, order) - */ - /*else*/ { - __pyx_t_4 = __Pyx_PyObject_FormatSimple(__pyx_v_mode, __pyx_empty_unicode); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = __Pyx_PyUnicode_Concat(__pyx_kp_u_Invalid_mode_expected_c_or_fortr, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_t_6, 0, 0); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __PYX_ERR(1, 172, __pyx_L1_error) - } - __pyx_L11:; - - /* "View.MemoryView":174 - * raise ValueError, f"Invalid mode, expected 'c' or 'fortran', got {mode}" - * - * self.len = fill_contig_strides_array(self._shape, self._strides, itemsize, self.ndim, order) # <<<<<<<<<<<<<< - * - * self.free_data = allocate_buffer - */ - __pyx_v_self->len = __pyx_fill_contig_strides_array(__pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_itemsize, __pyx_v_self->ndim, __pyx_v_order); - - /* "View.MemoryView":176 - * self.len = fill_contig_strides_array(self._shape, self._strides, itemsize, self.ndim, order) - * - * self.free_data = allocate_buffer # <<<<<<<<<<<<<< - * self.dtype_is_object = format == b'O' 
- * - */ - __pyx_v_self->free_data = __pyx_v_allocate_buffer; - - /* "View.MemoryView":177 - * - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' # <<<<<<<<<<<<<< - * - * if allocate_buffer: - */ - __pyx_t_6 = PyObject_RichCompare(__pyx_v_format, __pyx_n_b_O, Py_EQ); __Pyx_XGOTREF(__pyx_t_6); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 177, __pyx_L1_error) - __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 177, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_v_self->dtype_is_object = __pyx_t_3; - - /* "View.MemoryView":179 - * self.dtype_is_object = format == b'O' - * - * if allocate_buffer: # <<<<<<<<<<<<<< - * _allocate_buffer(self) - * - */ - if (__pyx_v_allocate_buffer) { - - /* "View.MemoryView":180 - * - * if allocate_buffer: - * _allocate_buffer(self) # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - __pyx_t_7 = __pyx_array_allocate_buffer(__pyx_v_self); if (unlikely(__pyx_t_7 == ((int)-1))) __PYX_ERR(1, 180, __pyx_L1_error) - - /* "View.MemoryView":179 - * self.dtype_is_object = format == b'O' - * - * if allocate_buffer: # <<<<<<<<<<<<<< - * _allocate_buffer(self) - * - */ - } - - /* "View.MemoryView":131 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_format); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":182 - * _allocate_buffer(self) - * - * @cname('getbuffer') # <<<<<<<<<<<<<< - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - */ - -/* Python wrapper */ -CYTHON_UNUSED static int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -CYTHON_UNUSED static int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(((struct __pyx_array_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_v_bufmode; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - char *__pyx_t_2; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - Py_ssize_t *__pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (unlikely(__pyx_v_info == NULL)) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":184 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int 
bufmode = -1 # <<<<<<<<<<<<<< - * if flags & (PyBUF_C_CONTIGUOUS | PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS): - * if self.mode == u"c": - */ - __pyx_v_bufmode = -1; - - /* "View.MemoryView":185 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if flags & (PyBUF_C_CONTIGUOUS | PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS): # <<<<<<<<<<<<<< - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_t_1 = ((__pyx_v_flags & ((PyBUF_C_CONTIGUOUS | PyBUF_F_CONTIGUOUS) | PyBUF_ANY_CONTIGUOUS)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":186 - * cdef int bufmode = -1 - * if flags & (PyBUF_C_CONTIGUOUS | PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS): - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - __pyx_t_1 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_c, Py_EQ)); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(1, 186, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":187 - * if flags & (PyBUF_C_CONTIGUOUS | PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS): - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_v_bufmode = (PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":186 - * cdef int bufmode = -1 - * if flags & (PyBUF_C_CONTIGUOUS | PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS): - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - goto __pyx_L4; - } - - /* "View.MemoryView":188 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - __pyx_t_1 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_fortran, Py_EQ)); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(1, 188, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":189 - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * if not (flags & bufmode): - * raise ValueError, "Can only create a buffer that is contiguous in memory." - */ - __pyx_v_bufmode = (PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":188 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - } - __pyx_L4:; - - /* "View.MemoryView":190 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError, "Can only create a buffer that is contiguous in memory." - * info.buf = self.data - */ - __pyx_t_1 = (!((__pyx_v_flags & __pyx_v_bufmode) != 0)); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":191 - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - * raise ValueError, "Can only create a buffer that is contiguous in memory." 
# <<<<<<<<<<<<<< - * info.buf = self.data - * info.len = self.len - */ - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_kp_s_Can_only_create_a_buffer_that_is, 0, 0); - __PYX_ERR(1, 191, __pyx_L1_error) - - /* "View.MemoryView":190 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError, "Can only create a buffer that is contiguous in memory." - * info.buf = self.data - */ - } - - /* "View.MemoryView":185 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if flags & (PyBUF_C_CONTIGUOUS | PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS): # <<<<<<<<<<<<<< - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - } - - /* "View.MemoryView":192 - * if not (flags & bufmode): - * raise ValueError, "Can only create a buffer that is contiguous in memory." - * info.buf = self.data # <<<<<<<<<<<<<< - * info.len = self.len - * - */ - __pyx_t_2 = __pyx_v_self->data; - __pyx_v_info->buf = __pyx_t_2; - - /* "View.MemoryView":193 - * raise ValueError, "Can only create a buffer that is contiguous in memory." - * info.buf = self.data - * info.len = self.len # <<<<<<<<<<<<<< - * - * if flags & PyBUF_STRIDES: - */ - __pyx_t_3 = __pyx_v_self->len; - __pyx_v_info->len = __pyx_t_3; - - /* "View.MemoryView":195 - * info.len = self.len - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.ndim = self.ndim - * info.shape = self._shape - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_STRIDES) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":196 - * - * if flags & PyBUF_STRIDES: - * info.ndim = self.ndim # <<<<<<<<<<<<<< - * info.shape = self._shape - * info.strides = self._strides - */ - __pyx_t_4 = __pyx_v_self->ndim; - __pyx_v_info->ndim = __pyx_t_4; - - /* "View.MemoryView":197 - * if flags & PyBUF_STRIDES: - * info.ndim = self.ndim - * info.shape = self._shape # <<<<<<<<<<<<<< - * info.strides = self._strides - * else: - */ - __pyx_t_5 = __pyx_v_self->_shape; - __pyx_v_info->shape = __pyx_t_5; - - /* "View.MemoryView":198 - * info.ndim = self.ndim - * info.shape = self._shape - * info.strides = self._strides # <<<<<<<<<<<<<< - * else: - * info.ndim = 1 - */ - __pyx_t_5 = __pyx_v_self->_strides; - __pyx_v_info->strides = __pyx_t_5; - - /* "View.MemoryView":195 - * info.len = self.len - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.ndim = self.ndim - * info.shape = self._shape - */ - goto __pyx_L6; - } - - /* "View.MemoryView":200 - * info.strides = self._strides - * else: - * info.ndim = 1 # <<<<<<<<<<<<<< - * info.shape = &self.len if flags & PyBUF_ND else NULL - * info.strides = NULL - */ - /*else*/ { - __pyx_v_info->ndim = 1; - - /* "View.MemoryView":201 - * else: - * info.ndim = 1 - * info.shape = &self.len if flags & PyBUF_ND else NULL # <<<<<<<<<<<<<< - * info.strides = NULL - * - */ - if (((__pyx_v_flags & PyBUF_ND) != 0)) { - __pyx_t_5 = (&__pyx_v_self->len); - } else { - __pyx_t_5 = NULL; - } - __pyx_v_info->shape = __pyx_t_5; - - /* "View.MemoryView":202 - * info.ndim = 1 - * info.shape = &self.len if flags & PyBUF_ND else NULL - * info.strides = NULL # <<<<<<<<<<<<<< - * - * info.suboffsets = NULL - */ - __pyx_v_info->strides = NULL; - } - __pyx_L6:; - - /* "View.MemoryView":204 - * info.strides = NULL - * - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * info.itemsize = self.itemsize - * info.readonly = 0 - */ - __pyx_v_info->suboffsets = NULL; - - /* "View.MemoryView":205 - * - * info.suboffsets = NULL - * info.itemsize = 
self.itemsize # <<<<<<<<<<<<<< - * info.readonly = 0 - * info.format = self.format if flags & PyBUF_FORMAT else NULL - */ - __pyx_t_3 = __pyx_v_self->itemsize; - __pyx_v_info->itemsize = __pyx_t_3; - - /* "View.MemoryView":206 - * info.suboffsets = NULL - * info.itemsize = self.itemsize - * info.readonly = 0 # <<<<<<<<<<<<<< - * info.format = self.format if flags & PyBUF_FORMAT else NULL - * info.obj = self - */ - __pyx_v_info->readonly = 0; - - /* "View.MemoryView":207 - * info.itemsize = self.itemsize - * info.readonly = 0 - * info.format = self.format if flags & PyBUF_FORMAT else NULL # <<<<<<<<<<<<<< - * info.obj = self - * - */ - if (((__pyx_v_flags & PyBUF_FORMAT) != 0)) { - __pyx_t_2 = __pyx_v_self->format; - } else { - __pyx_t_2 = NULL; - } - __pyx_v_info->format = __pyx_t_2; - - /* "View.MemoryView":208 - * info.readonly = 0 - * info.format = self.format if flags & PyBUF_FORMAT else NULL - * info.obj = self # <<<<<<<<<<<<<< - * - * def __dealloc__(array self): - */ - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_GIVEREF((PyObject *)__pyx_v_self); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":182 - * _allocate_buffer(self) - * - * @cname('getbuffer') # <<<<<<<<<<<<<< - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.array.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":210 - * info.obj = self - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - -/* Python wrapper */ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":211 - * - * def __dealloc__(array self): - * if self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data and self.data is not NULL: - */ - __pyx_t_1 = (__pyx_v_self->callback_free_data != NULL); - if (__pyx_t_1) { - - /* "View.MemoryView":212 - * def __dealloc__(array self): - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) # <<<<<<<<<<<<<< - * elif self.free_data and self.data is not NULL: - * if self.dtype_is_object: - */ - __pyx_v_self->callback_free_data(__pyx_v_self->data); - - /* "View.MemoryView":211 - * - * def __dealloc__(array self): - * if 
self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data and self.data is not NULL: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":213 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data and self.data is not NULL: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, inc=False) - */ - if (__pyx_v_self->free_data) { - } else { - __pyx_t_1 = __pyx_v_self->free_data; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_2 = (__pyx_v_self->data != NULL); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (__pyx_t_1) { - - /* "View.MemoryView":214 - * self.callback_free_data(self.data) - * elif self.free_data and self.data is not NULL: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, inc=False) - * free(self.data) - */ - if (__pyx_v_self->dtype_is_object) { - - /* "View.MemoryView":215 - * elif self.free_data and self.data is not NULL: - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, inc=False) # <<<<<<<<<<<<<< - * free(self.data) - * PyObject_Free(self._shape) - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_self->data, __pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_self->ndim, 0); - - /* "View.MemoryView":214 - * self.callback_free_data(self.data) - * elif self.free_data and self.data is not NULL: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, inc=False) - * free(self.data) - */ - } - - /* "View.MemoryView":216 - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, inc=False) - * free(self.data) # <<<<<<<<<<<<<< - * PyObject_Free(self._shape) - * - */ - free(__pyx_v_self->data); - - /* "View.MemoryView":213 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data and self.data is not NULL: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, inc=False) - */ - } - __pyx_L3:; - - /* "View.MemoryView":217 - * refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, inc=False) - * free(self.data) - * PyObject_Free(self._shape) # <<<<<<<<<<<<<< - * - * @property - */ - PyObject_Free(__pyx_v_self->_shape); - - /* "View.MemoryView":210 - * info.obj = self - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":219 - * PyObject_Free(self._shape) - * - * @property # <<<<<<<<<<<<<< - * def memview(self): - * return self.get_memview() - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_5array_7memview___get__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); 
- return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":221 - * @property - * def memview(self): - * return self.get_memview() # <<<<<<<<<<<<<< - * - * @cname('get_memview') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = ((struct __pyx_vtabstruct_array *)__pyx_v_self->__pyx_vtab)->get_memview(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 221, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":219 - * PyObject_Free(self._shape) - * - * @property # <<<<<<<<<<<<<< - * def memview(self): - * return self.get_memview() - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.memview.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":224 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self) { - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_memview", 0); - - /* "View.MemoryView":225 - * @cname('get_memview') - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE # <<<<<<<<<<<<<< - * return memoryview(self, flags, self.dtype_is_object) - * - */ - __pyx_v_flags = ((PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT) | PyBUF_WRITABLE); - - /* "View.MemoryView":226 - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_GIVEREF((PyObject *)__pyx_v_self); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":224 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = 
PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.array.get_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":228 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - -/* Python wrapper */ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":229 - * - * def __len__(self): - * return self._shape[0] # <<<<<<<<<<<<<< - * - * def __getattr__(self, attr): - */ - __pyx_r = (__pyx_v_self->_shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":228 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":231 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr); /*proto*/ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getattr__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_attr)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getattr__", 0); - - /* "View.MemoryView":232 - * - * def __getattr__(self, attr): - * return getattr(self.memview, attr) # <<<<<<<<<<<<<< - * - * def __getitem__(self, item): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetAttr(__pyx_t_1, __pyx_v_attr); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 232, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":231 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getattr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":234 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item); /*proto*/ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":235 - * - * def __getitem__(self, item): - * return self.memview[item] # <<<<<<<<<<<<<< - * - * def __setitem__(self, item, value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_item); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":234 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":237 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - -/* Python wrapper */ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - __pyx_r = 
__pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - - /* "View.MemoryView":238 - * - * def __setitem__(self, item, value): - * self.memview[item] = value # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (unlikely((PyObject_SetItem(__pyx_t_1, __pyx_v_item, __pyx_v_value) < 0))) __PYX_ERR(1, 238, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":237 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("__reduce_cython__", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "__reduce_cython__", 0))) return NULL; - __pyx_r = __pyx_pf___pyx_array___reduce_cython__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" # <<<<<<<<<<<<<< - * def __setstate_cython__(self, 
__pyx_state): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_kp_s_no_default___reduce___due_to_non, 0, 0); - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.array.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - CYTHON_UNUSED PyObject *__pyx_v___pyx_state = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_state,0}; - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 3, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__setstate_cython__") < 0)) __PYX_ERR(1, 3, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v___pyx_state = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__setstate_cython__", 1, 1, 1, __pyx_nargs); __PYX_ERR(1, 3, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.array.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf___pyx_array_2__setstate_cython__(((struct __pyx_array_obj *)__pyx_v_self), __pyx_v___pyx_state); - - /* function exit code */ 
- __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" # <<<<<<<<<<<<<< - */ - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_kp_s_no_default___reduce___due_to_non, 0, 0); - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.array.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":248 - * - * @cname("__pyx_array_allocate_buffer") - * cdef int _allocate_buffer(array self) except -1: # <<<<<<<<<<<<<< - * - * - */ - -static int __pyx_array_allocate_buffer(struct __pyx_array_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_i; - PyObject **__pyx_v_p; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - Py_ssize_t __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_allocate_buffer", 0); - - /* "View.MemoryView":254 - * cdef PyObject **p - * - * self.free_data = True # <<<<<<<<<<<<<< - * self.data = malloc(self.len) - * if not self.data: - */ - __pyx_v_self->free_data = 1; - - /* "View.MemoryView":255 - * - * self.free_data = True - * self.data = malloc(self.len) # <<<<<<<<<<<<<< - * if not self.data: - * raise MemoryError, "unable to allocate array data." - */ - __pyx_v_self->data = ((char *)malloc(__pyx_v_self->len)); - - /* "View.MemoryView":256 - * self.free_data = True - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError, "unable to allocate array data." - * - */ - __pyx_t_1 = (!(__pyx_v_self->data != 0)); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":257 - * self.data = malloc(self.len) - * if not self.data: - * raise MemoryError, "unable to allocate array data." # <<<<<<<<<<<<<< - * - * if self.dtype_is_object: - */ - __Pyx_Raise(__pyx_builtin_MemoryError, __pyx_kp_s_unable_to_allocate_array_data, 0, 0); - __PYX_ERR(1, 257, __pyx_L1_error) - - /* "View.MemoryView":256 - * self.free_data = True - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError, "unable to allocate array data." - * - */ - } - - /* "View.MemoryView":259 - * raise MemoryError, "unable to allocate array data." 
- * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len // self.itemsize): - */ - if (__pyx_v_self->dtype_is_object) { - - /* "View.MemoryView":260 - * - * if self.dtype_is_object: - * p = self.data # <<<<<<<<<<<<<< - * for i in range(self.len // self.itemsize): - * p[i] = Py_None - */ - __pyx_v_p = ((PyObject **)__pyx_v_self->data); - - /* "View.MemoryView":261 - * if self.dtype_is_object: - * p = self.data - * for i in range(self.len // self.itemsize): # <<<<<<<<<<<<<< - * p[i] = Py_None - * Py_INCREF(Py_None) - */ - if (unlikely(__pyx_v_self->itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 261, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_self->itemsize == (Py_ssize_t)-1) && unlikely(__Pyx_UNARY_NEG_WOULD_OVERFLOW(__pyx_v_self->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 261, __pyx_L1_error) - } - __pyx_t_2 = __Pyx_div_Py_ssize_t(__pyx_v_self->len, __pyx_v_self->itemsize); - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":262 - * p = self.data - * for i in range(self.len // self.itemsize): - * p[i] = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * return 0 - */ - (__pyx_v_p[__pyx_v_i]) = Py_None; - - /* "View.MemoryView":263 - * for i in range(self.len // self.itemsize): - * p[i] = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * return 0 - * - */ - Py_INCREF(Py_None); - } - - /* "View.MemoryView":259 - * raise MemoryError, "unable to allocate array data." - * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len // self.itemsize): - */ - } - - /* "View.MemoryView":264 - * p[i] = Py_None - * Py_INCREF(Py_None) - * return 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":248 - * - * @cname("__pyx_array_allocate_buffer") - * cdef int _allocate_buffer(array self) except -1: # <<<<<<<<<<<<<< - * - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView._allocate_buffer", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":268 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, char *c_mode, char *buf): # <<<<<<<<<<<<<< - * cdef array result - * cdef str mode = "fortran" if c_mode[0] == b'f' else "c" # this often comes from a constant C string. - */ - -static struct __pyx_array_obj *__pyx_array_new(PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, char *__pyx_v_format, char *__pyx_v_c_mode, char *__pyx_v_buf) { - struct __pyx_array_obj *__pyx_v_result = 0; - PyObject *__pyx_v_mode = 0; - struct __pyx_array_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("array_cwrapper", 0); - - /* "View.MemoryView":270 - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, char *c_mode, char *buf): - * cdef array result - * cdef str mode = "fortran" if c_mode[0] == b'f' else "c" # this often comes from a constant C string. 
# <<<<<<<<<<<<<< - * - * if buf is NULL: - */ - if (((__pyx_v_c_mode[0]) == 'f')) { - __Pyx_INCREF(__pyx_n_s_fortran); - __pyx_t_1 = __pyx_n_s_fortran; - } else { - __Pyx_INCREF(__pyx_n_s_c); - __pyx_t_1 = __pyx_n_s_c; - } - __pyx_v_mode = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":272 - * cdef str mode = "fortran" if c_mode[0] == b'f' else "c" # this often comes from a constant C string. - * - * if buf is NULL: # <<<<<<<<<<<<<< - * result = array.__new__(array, shape, itemsize, format, mode) - * else: - */ - __pyx_t_2 = (__pyx_v_buf == NULL); - if (__pyx_t_2) { - - /* "View.MemoryView":273 - * - * if buf is NULL: - * result = array.__new__(array, shape, itemsize, format, mode) # <<<<<<<<<<<<<< - * else: - * result = array.__new__(array, shape, itemsize, format, mode, allocate_buffer=False) - */ - __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_3); - __Pyx_INCREF(__pyx_v_mode); - __Pyx_GIVEREF(__pyx_v_mode); - PyTuple_SET_ITEM(__pyx_t_4, 3, __pyx_v_mode); - __pyx_t_1 = 0; - __pyx_t_3 = 0; - __pyx_t_3 = ((PyObject *)__pyx_tp_new_array(((PyTypeObject *)__pyx_array_type), __pyx_t_4, NULL)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 273, __pyx_L1_error) - __Pyx_GOTREF((PyObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":272 - * cdef str mode = "fortran" if c_mode[0] == b'f' else "c" # this often comes from a constant C string. 
- * - * if buf is NULL: # <<<<<<<<<<<<<< - * result = array.__new__(array, shape, itemsize, format, mode) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":275 - * result = array.__new__(array, shape, itemsize, format, mode) - * else: - * result = array.__new__(array, shape, itemsize, format, mode, allocate_buffer=False) # <<<<<<<<<<<<<< - * result.data = buf - * - */ - /*else*/ { - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyTuple_New(4); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_t_4); - __Pyx_INCREF(__pyx_v_mode); - __Pyx_GIVEREF(__pyx_v_mode); - PyTuple_SET_ITEM(__pyx_t_1, 3, __pyx_v_mode); - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_allocate_buffer, Py_False) < 0) __PYX_ERR(1, 275, __pyx_L1_error) - __pyx_t_3 = ((PyObject *)__pyx_tp_new_array(((PyTypeObject *)__pyx_array_type), __pyx_t_1, __pyx_t_4)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 275, __pyx_L1_error) - __Pyx_GOTREF((PyObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":276 - * else: - * result = array.__new__(array, shape, itemsize, format, mode, allocate_buffer=False) - * result.data = buf # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->data = __pyx_v_buf; - } - __pyx_L3:; - - /* "View.MemoryView":278 - * result.data = buf - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF((PyObject *)__pyx_r); - __Pyx_INCREF((PyObject *)__pyx_v_result); - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":268 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, char *c_mode, char *buf): # <<<<<<<<<<<<<< - * cdef array result - * cdef str mode = "fortran" if c_mode[0] == b'f' else "c" # this often comes from a constant C string. 
- */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.array_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XDECREF(__pyx_v_mode); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":304 - * cdef class Enum(object): - * cdef object name - * def __init__(self, name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - -/* Python wrapper */ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_name = 0; - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_name,0}; - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_VARARGS(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_VARARGS(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_name)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 304, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__init__") < 0)) __PYX_ERR(1, 304, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_VARARGS(__pyx_args, 0); - } - __pyx_v_name = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__init__", 1, 1, 1, __pyx_nargs); __PYX_ERR(1, 304, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.Enum.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), __pyx_v_name); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__", 0); - - /* "View.MemoryView":305 - * cdef object name - * def __init__(self, name): - * self.name = name # <<<<<<<<<<<<<< - * def __repr__(self): - * return self.name - */ - __Pyx_INCREF(__pyx_v_name); - __Pyx_GIVEREF(__pyx_v_name); - __Pyx_GOTREF(__pyx_v_self->name); - __Pyx_DECREF(__pyx_v_self->name); - __pyx_v_self->name = __pyx_v_name; - - /* "View.MemoryView":304 - * cdef class Enum(object): - * cdef object name - * def __init__(self, 
name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - - /* function exit code */ - __pyx_r = 0; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":306 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - -/* Python wrapper */ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":307 - * self.name = name - * def __repr__(self): - * return self.name # <<<<<<<<<<<<<< - * - * cdef generic = Enum("") - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->name); - __pyx_r = __pyx_v_self->name; - goto __pyx_L0; - - /* "View.MemoryView":306 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("__reduce_cython__", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "__reduce_cython__", 0))) return NULL; - __pyx_r = __pyx_pf___pyx_MemviewEnum___reduce_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_v_state = 0; - PyObject *__pyx_v__dict = 0; - int __pyx_v_use_setstate; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char 
*__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":5 - * cdef object _dict - * cdef bint use_setstate - * state = (self.name,) # <<<<<<<<<<<<<< - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_self->name); - __Pyx_GIVEREF(__pyx_v_self->name); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_self->name); - __pyx_v_state = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "(tree fragment)":6 - * cdef bint use_setstate - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<< - * if _dict is not None: - * state += (_dict,) - */ - __pyx_t_1 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v__dict = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - __pyx_t_2 = (__pyx_v__dict != Py_None); - if (__pyx_t_2) { - - /* "(tree fragment)":8 - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - * state += (_dict,) # <<<<<<<<<<<<<< - * use_setstate = True - * else: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v__dict); - __Pyx_GIVEREF(__pyx_v__dict); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v__dict); - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_3)); - __pyx_t_3 = 0; - - /* "(tree fragment)":9 - * if _dict is not None: - * state += (_dict,) - * use_setstate = True # <<<<<<<<<<<<<< - * else: - * use_setstate = self.name is not None - */ - __pyx_v_use_setstate = 1; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - goto __pyx_L3; - } - - /* "(tree fragment)":11 - * use_setstate = True - * else: - * use_setstate = self.name is not None # <<<<<<<<<<<<<< - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, None), state - */ - /*else*/ { - __pyx_t_2 = (__pyx_v_self->name != Py_None); - __pyx_v_use_setstate = __pyx_t_2; - } - __pyx_L3:; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, None), state - * else: - */ - if (__pyx_v_use_setstate) { - - /* "(tree fragment)":13 - * use_setstate = self.name is not None - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, None), state # <<<<<<<<<<<<<< - * else: - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, state) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject 
*)Py_TYPE(((PyObject *)__pyx_v_self)))); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_136983863); - __Pyx_GIVEREF(__pyx_int_136983863); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_136983863); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_1, 2, Py_None); - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_v_state); - __pyx_t_3 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, None), state - * else: - */ - } - - /* "(tree fragment)":15 - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, None), state - * else: - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, state) # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_136983863); - __Pyx_GIVEREF(__pyx_int_136983863); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_136983863); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __pyx_t_4 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.Enum.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_state); - __Pyx_XDECREF(__pyx_v__dict); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject 
*__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v___pyx_state = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_state,0}; - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 16, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__setstate_cython__") < 0)) __PYX_ERR(1, 16, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v___pyx_state = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__setstate_cython__", 1, 1, 1, __pyx_nargs); __PYX_ERR(1, 16, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.Enum.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf___pyx_MemviewEnum_2__setstate_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), __pyx_v___pyx_state); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":17 - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, state) - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) # <<<<<<<<<<<<<< - */ - if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None) || __Pyx_RaiseUnexpectedTypeError("tuple", __pyx_v___pyx_state))) __PYX_ERR(1, 17, __pyx_L1_error) - __pyx_t_1 = __pyx_unpickle_Enum__set_state(__pyx_v_self, ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * 
__pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.Enum.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":349 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - -/* Python wrapper */ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_obj = 0; - int __pyx_v_flags; - int __pyx_v_dtype_is_object; - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_obj,&__pyx_n_s_flags,&__pyx_n_s_dtype_is_object,0}; - PyObject* values[3] = {0,0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_VARARGS(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_VARARGS(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_VARARGS(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_VARARGS(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_obj)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 349, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_flags)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 349, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, 1); __PYX_ERR(1, 349, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_dtype_is_object); - if (value) { values[2] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 349, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__cinit__") < 0)) __PYX_ERR(1, 349, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_VARARGS(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_VARARGS(__pyx_args, 1); - values[0] = __Pyx_Arg_VARARGS(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_obj = values[0]; - __pyx_v_flags = __Pyx_PyInt_As_int(values[1]); if (unlikely((__pyx_v_flags == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 349, __pyx_L3_error) - if (values[2]) { - __pyx_v_dtype_is_object = __Pyx_PyObject_IsTrue(values[2]); if (unlikely((__pyx_v_dtype_is_object 
== (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 349, __pyx_L3_error) - } else { - __pyx_v_dtype_is_object = ((int)0); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, __pyx_nargs); __PYX_ERR(1, 349, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_obj, __pyx_v_flags, __pyx_v_dtype_is_object); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - Py_intptr_t __pyx_t_4; - size_t __pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - - /* "View.MemoryView":350 - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj # <<<<<<<<<<<<<< - * self.flags = flags - * if type(self) is memoryview or obj is not None: - */ - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - __Pyx_GOTREF(__pyx_v_self->obj); - __Pyx_DECREF(__pyx_v_self->obj); - __pyx_v_self->obj = __pyx_v_obj; - - /* "View.MemoryView":351 - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj - * self.flags = flags # <<<<<<<<<<<<<< - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - */ - __pyx_v_self->flags = __pyx_v_flags; - - /* "View.MemoryView":352 - * self.obj = obj - * self.flags = flags - * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - __pyx_t_2 = (((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))) == ((PyObject *)__pyx_memoryview_type)); - if (!__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_2 = (__pyx_v_obj != Py_None); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (__pyx_t_1) { - - /* "View.MemoryView":353 - * self.flags = flags - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) # <<<<<<<<<<<<<< - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None - */ - __pyx_t_3 = __Pyx_GetBuffer(__pyx_v_obj, (&__pyx_v_self->view), __pyx_v_flags); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 353, __pyx_L1_error) - - /* "View.MemoryView":354 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_t_1 = (((PyObject *)__pyx_v_self->view.obj) == NULL); - if (__pyx_t_1) { - - /* "View.MemoryView":355 - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = Py_None; - - /* "View.MemoryView":356 - * if self.view.obj == NULL: - * (<__pyx_buffer *> 
&self.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * if not __PYX_CYTHON_ATOMICS_ENABLED(): - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":354 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - } - - /* "View.MemoryView":352 - * self.obj = obj - * self.flags = flags - * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - } - - /* "View.MemoryView":358 - * Py_INCREF(Py_None) - * - * if not __PYX_CYTHON_ATOMICS_ENABLED(): # <<<<<<<<<<<<<< - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < 8: - */ - __pyx_t_1 = (!__PYX_CYTHON_ATOMICS_ENABLED()); - if (__pyx_t_1) { - - /* "View.MemoryView":360 - * if not __PYX_CYTHON_ATOMICS_ENABLED(): - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < 8: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - __pyx_t_1 = (__pyx_memoryview_thread_locks_used < 8); - if (__pyx_t_1) { - - /* "View.MemoryView":361 - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < 8: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - */ - __pyx_v_self->lock = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - - /* "View.MemoryView":362 - * if __pyx_memoryview_thread_locks_used < 8: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 # <<<<<<<<<<<<<< - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used + 1); - - /* "View.MemoryView":360 - * if not __PYX_CYTHON_ATOMICS_ENABLED(): - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < 8: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - } - - /* "View.MemoryView":363 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - __pyx_t_1 = (__pyx_v_self->lock == NULL); - if (__pyx_t_1) { - - /* "View.MemoryView":364 - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() # <<<<<<<<<<<<<< - * if self.lock is NULL: - * raise MemoryError - */ - __pyx_v_self->lock = PyThread_allocate_lock(); - - /* "View.MemoryView":365 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - __pyx_t_1 = (__pyx_v_self->lock == NULL); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":366 - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - PyErr_NoMemory(); __PYX_ERR(1, 366, __pyx_L1_error) - - /* "View.MemoryView":365 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # 
<<<<<<<<<<<<<< - * raise MemoryError - * - */ - } - - /* "View.MemoryView":363 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - } - - /* "View.MemoryView":358 - * Py_INCREF(Py_None) - * - * if not __PYX_CYTHON_ATOMICS_ENABLED(): # <<<<<<<<<<<<<< - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < 8: - */ - } - - /* "View.MemoryView":368 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":369 - * - * if flags & PyBUF_FORMAT: - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') # <<<<<<<<<<<<<< - * else: - * self.dtype_is_object = dtype_is_object - */ - __pyx_t_2 = ((__pyx_v_self->view.format[0]) == 'O'); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L12_bool_binop_done; - } - __pyx_t_2 = ((__pyx_v_self->view.format[1]) == '\x00'); - __pyx_t_1 = __pyx_t_2; - __pyx_L12_bool_binop_done:; - __pyx_v_self->dtype_is_object = __pyx_t_1; - - /* "View.MemoryView":368 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - goto __pyx_L11; - } - - /* "View.MemoryView":371 - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - * self.dtype_is_object = dtype_is_object # <<<<<<<<<<<<<< - * - * assert (&self.acquisition_count) % sizeof(__pyx_atomic_int_type) == 0 - */ - /*else*/ { - __pyx_v_self->dtype_is_object = __pyx_v_dtype_is_object; - } - __pyx_L11:; - - /* "View.MemoryView":373 - * self.dtype_is_object = dtype_is_object - * - * assert (&self.acquisition_count) % sizeof(__pyx_atomic_int_type) == 0 # <<<<<<<<<<<<<< - * self.typeinfo = NULL - * - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(__pyx_assertions_enabled())) { - __pyx_t_4 = ((Py_intptr_t)((void *)(&__pyx_v_self->acquisition_count))); - __pyx_t_5 = (sizeof(__pyx_atomic_int_type)); - if (unlikely(__pyx_t_5 == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 373, __pyx_L1_error) - } - __pyx_t_1 = ((__pyx_t_4 % __pyx_t_5) == 0); - if (unlikely(!__pyx_t_1)) { - __Pyx_Raise(__pyx_builtin_AssertionError, 0, 0, 0); - __PYX_ERR(1, 373, __pyx_L1_error) - } - } - #else - if ((1)); else __PYX_ERR(1, 373, __pyx_L1_error) - #endif - - /* "View.MemoryView":374 - * - * assert (&self.acquisition_count) % sizeof(__pyx_atomic_int_type) == 0 - * self.typeinfo = NULL # <<<<<<<<<<<<<< - * - * def __dealloc__(memoryview self): - */ - __pyx_v_self->typeinfo = NULL; - - /* "View.MemoryView":349 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":376 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview 
self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - -/* Python wrapper */ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self) { - int __pyx_v_i; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - PyThread_type_lock __pyx_t_5; - PyThread_type_lock __pyx_t_6; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":377 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - __pyx_t_1 = (__pyx_v_self->obj != Py_None); - if (__pyx_t_1) { - - /* "View.MemoryView":378 - * def __dealloc__(memoryview self): - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) # <<<<<<<<<<<<<< - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - */ - __Pyx_ReleaseBuffer((&__pyx_v_self->view)); - - /* "View.MemoryView":377 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":379 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - __pyx_t_1 = (((Py_buffer *)(&__pyx_v_self->view))->obj == Py_None); - if (__pyx_t_1) { - - /* "View.MemoryView":381 - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - * (<__pyx_buffer *> &self.view).obj = NULL # <<<<<<<<<<<<<< - * Py_DECREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = NULL; - - /* "View.MemoryView":382 - * - * (<__pyx_buffer *> &self.view).obj = NULL - * Py_DECREF(Py_None) # <<<<<<<<<<<<<< - * - * cdef int i - */ - Py_DECREF(Py_None); - - /* "View.MemoryView":379 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - } - __pyx_L3:; - - /* "View.MemoryView":386 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - __pyx_t_1 = (__pyx_v_self->lock != NULL); - if (__pyx_t_1) { - - /* "View.MemoryView":387 - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): # <<<<<<<<<<<<<< - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - */ - __pyx_t_2 = __pyx_memoryview_thread_locks_used; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":388 - * if self.lock != NULL: - * for i in 
range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - __pyx_t_1 = ((__pyx_memoryview_thread_locks[__pyx_v_i]) == __pyx_v_self->lock); - if (__pyx_t_1) { - - /* "View.MemoryView":389 - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 # <<<<<<<<<<<<<< - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used - 1); - - /* "View.MemoryView":390 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - __pyx_t_1 = (__pyx_v_i != __pyx_memoryview_thread_locks_used); - if (__pyx_t_1) { - - /* "View.MemoryView":392 - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) # <<<<<<<<<<<<<< - * break - * else: - */ - __pyx_t_5 = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - __pyx_t_6 = (__pyx_memoryview_thread_locks[__pyx_v_i]); - - /* "View.MemoryView":391 - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break - */ - (__pyx_memoryview_thread_locks[__pyx_v_i]) = __pyx_t_5; - (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]) = __pyx_t_6; - - /* "View.MemoryView":390 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - } - - /* "View.MemoryView":393 - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break # <<<<<<<<<<<<<< - * else: - * PyThread_free_lock(self.lock) - */ - goto __pyx_L6_break; - - /* "View.MemoryView":388 - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - } - } - /*else*/ { - - /* "View.MemoryView":395 - * break - * else: - * PyThread_free_lock(self.lock) # <<<<<<<<<<<<<< - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - */ - PyThread_free_lock(__pyx_v_self->lock); - } - __pyx_L6_break:; - - /* 
"View.MemoryView":386 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - } - - /* "View.MemoryView":376 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":397 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - Py_ssize_t __pyx_v_dim; - char *__pyx_v_itemp; - PyObject *__pyx_v_idx = NULL; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - Py_ssize_t __pyx_t_6; - char *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_item_pointer", 0); - - /* "View.MemoryView":399 - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf # <<<<<<<<<<<<<< - * - * for dim, idx in enumerate(index): - */ - __pyx_v_itemp = ((char *)__pyx_v_self->view.buf); - - /* "View.MemoryView":401 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - __pyx_t_1 = 0; - if (likely(PyList_CheckExact(__pyx_v_index)) || PyTuple_CheckExact(__pyx_v_index)) { - __pyx_t_2 = __pyx_v_index; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 401, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 401, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(1, 401, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 401, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(1, 401, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 401, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } - } else { - __pyx_t_5 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_5)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 401, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_5); - } - 
__Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_5); - __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_1; - __pyx_t_1 = (__pyx_t_1 + 1); - - /* "View.MemoryView":402 - * - * for dim, idx in enumerate(index): - * itemp = pybuffer_index(&self.view, itemp, idx, dim) # <<<<<<<<<<<<<< - * - * return itemp - */ - __pyx_t_6 = __Pyx_PyIndex_AsSsize_t(__pyx_v_idx); if (unlikely((__pyx_t_6 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 402, __pyx_L1_error) - __pyx_t_7 = __pyx_pybuffer_index((&__pyx_v_self->view), __pyx_v_itemp, __pyx_t_6, __pyx_v_dim); if (unlikely(__pyx_t_7 == ((char *)NULL))) __PYX_ERR(1, 402, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_7; - - /* "View.MemoryView":401 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":404 - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - * return itemp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_itemp; - goto __pyx_L0; - - /* "View.MemoryView":397 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.get_item_pointer", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_idx); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":407 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index); /*proto*/ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_indices = NULL; - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - char *__pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":408 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - __pyx_t_1 = (__pyx_v_index == __pyx_builtin_Ellipsis); - if (__pyx_t_1) { - - /* "View.MemoryView":409 - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: - * return self # <<<<<<<<<<<<<< - * - * have_slices, indices = _unellipsify(index, self.view.ndim) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF((PyObject *)__pyx_v_self); - __pyx_r 
= ((PyObject *)__pyx_v_self); - goto __pyx_L0; - - /* "View.MemoryView":408 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - } - - /* "View.MemoryView":411 - * return self - * - * have_slices, indices = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * cdef char *itemp - */ - __pyx_t_2 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 411, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (likely(__pyx_t_2 != Py_None)) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 411, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 411, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 411, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 411, __pyx_L1_error) - } - __pyx_v_have_slices = __pyx_t_3; - __pyx_t_3 = 0; - __pyx_v_indices = __pyx_t_4; - __pyx_t_4 = 0; - - /* "View.MemoryView":414 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(1, 414, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":415 - * cdef char *itemp - * if have_slices: - * return memview_slice(self, indices) # <<<<<<<<<<<<<< - * else: - * itemp = self.get_item_pointer(indices) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = ((PyObject *)__pyx_memview_slice(__pyx_v_self, __pyx_v_indices)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 415, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":414 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - } - - /* "View.MemoryView":417 - * return memview_slice(self, indices) - * else: - * itemp = self.get_item_pointer(indices) # <<<<<<<<<<<<<< - * return self.convert_item_to_object(itemp) - * - */ - /*else*/ { - __pyx_t_5 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_indices); if (unlikely(__pyx_t_5 == ((char *)NULL))) __PYX_ERR(1, 417, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_5; - - /* "View.MemoryView":418 - * else: - * itemp = self.get_item_pointer(indices) - * return self.convert_item_to_object(itemp) # <<<<<<<<<<<<<< - * - * def __setitem__(memoryview self, object index, object value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->convert_item_to_object(__pyx_v_self, __pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 418, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":407 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - - /* function exit 
code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.memoryview.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_indices); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":420 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError, "Cannot assign to read-only memoryview" - */ - -/* Python wrapper */ -static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_obj = NULL; - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - __Pyx_INCREF(__pyx_v_index); - - /* "View.MemoryView":421 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError, "Cannot assign to read-only memoryview" - * - */ - if (unlikely(__pyx_v_self->view.readonly)) { - - /* "View.MemoryView":422 - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: - * raise TypeError, "Cannot assign to read-only memoryview" # <<<<<<<<<<<<<< - * - * have_slices, index = _unellipsify(index, self.view.ndim) - */ - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_kp_s_Cannot_assign_to_read_only_memor, 0, 0); - __PYX_ERR(1, 422, __pyx_L1_error) - - /* "View.MemoryView":421 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError, "Cannot assign to read-only memoryview" - * - */ - } - - /* "View.MemoryView":424 - * raise TypeError, "Cannot assign to read-only memoryview" - * - * have_slices, index = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * if have_slices: - */ - __pyx_t_1 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 424, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (likely(__pyx_t_1 != Py_None)) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 424, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && 
!CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 424, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 424, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 424, __pyx_L1_error) - } - __pyx_v_have_slices = __pyx_t_2; - __pyx_t_2 = 0; - __Pyx_DECREF_SET(__pyx_v_index, __pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":426 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(1, 426, __pyx_L1_error) - if (__pyx_t_4) { - - /* "View.MemoryView":427 - * - * if have_slices: - * obj = self.is_slice(value) # <<<<<<<<<<<<<< - * if obj: - * self.setitem_slice_assignment(self[index], obj) - */ - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->is_slice(__pyx_v_self, __pyx_v_value); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_obj = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":428 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_v_obj); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(1, 428, __pyx_L1_error) - if (__pyx_t_4) { - - /* "View.MemoryView":429 - * obj = self.is_slice(value) - * if obj: - * self.setitem_slice_assignment(self[index], obj) # <<<<<<<<<<<<<< - * else: - * self.setitem_slice_assign_scalar(self[index], value) - */ - __pyx_t_1 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assignment(__pyx_v_self, __pyx_t_1, __pyx_v_obj); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":428 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - goto __pyx_L5; - } - - /* "View.MemoryView":431 - * self.setitem_slice_assignment(self[index], obj) - * else: - * self.setitem_slice_assign_scalar(self[index], value) # <<<<<<<<<<<<<< - * else: - * self.setitem_indexed(index, value) - */ - /*else*/ { - __pyx_t_3 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 431, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 431, __pyx_L1_error) - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assign_scalar(__pyx_v_self, ((struct __pyx_memoryview_obj *)__pyx_t_3), __pyx_v_value); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 431, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __pyx_L5:; - - /* 
"View.MemoryView":426 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":433 - * self.setitem_slice_assign_scalar(self[index], value) - * else: - * self.setitem_indexed(index, value) # <<<<<<<<<<<<<< - * - * cdef is_slice(self, obj): - */ - /*else*/ { - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_indexed(__pyx_v_self, __pyx_v_index, __pyx_v_value); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 433, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __pyx_L4:; - - /* "View.MemoryView":420 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError, "Cannot assign to read-only memoryview" - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_obj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":435 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_slice", 0); - __Pyx_INCREF(__pyx_v_obj); - - /* "View.MemoryView":436 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_obj, __pyx_memoryview_type); - __pyx_t_2 = (!__pyx_t_1); - if (__pyx_t_2) { - - /* "View.MemoryView":437 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_4, &__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_5); - /*try:*/ { - - /* "View.MemoryView":438 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_6 = __Pyx_PyInt_From_int(((__pyx_v_self->flags & (~PyBUF_WRITABLE)) | PyBUF_ANY_CONTIGUOUS)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 438, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_6); - - /* "View.MemoryView":439 - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) # <<<<<<<<<<<<<< - * except TypeError: - * return None - */ - __pyx_t_7 = 
__Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 439, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - - /* "View.MemoryView":438 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_8 = PyTuple_New(3); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 438, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_obj); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_t_6); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_t_7); - __pyx_t_6 = 0; - __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_8, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 438, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF_SET(__pyx_v_obj, __pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":437 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - goto __pyx_L9_try_end; - __pyx_L4_error:; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "View.MemoryView":440 - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - * except TypeError: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_9 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_TypeError); - if (__pyx_t_9) { - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_6) < 0) __PYX_ERR(1, 440, __pyx_L6_except_error) - __Pyx_XGOTREF(__pyx_t_7); - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_6); - - /* "View.MemoryView":441 - * self.dtype_is_object) - * except TypeError: - * return None # <<<<<<<<<<<<<< - * - * return obj - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L7_except_return; - } - goto __pyx_L6_except_error; - - /* "View.MemoryView":437 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - __pyx_L6_except_error:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L1_error; - __pyx_L7_except_return:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L0; - __pyx_L9_try_end:; - } - - /* "View.MemoryView":436 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - } - - /* "View.MemoryView":443 - * return None - * - * return obj # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assignment(self, 
dst, src): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_obj); - __pyx_r = __pyx_v_obj; - goto __pyx_L0; - - /* "View.MemoryView":435 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_obj); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":445 - * return obj - * - * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src) { - __Pyx_memviewslice __pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_src_slice; - __Pyx_memviewslice __pyx_v_msrc; - __Pyx_memviewslice __pyx_v_mdst; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assignment", 0); - - /* "View.MemoryView":448 - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - * cdef __Pyx_memviewslice msrc = get_slice_from_memview(src, &src_slice)[0] # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mdst = get_slice_from_memview(dst, &dst_slice)[0] - * - */ - if (!(likely(((__pyx_v_src) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_src, __pyx_memoryview_type))))) __PYX_ERR(1, 448, __pyx_L1_error) - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_src), (&__pyx_v_src_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 448, __pyx_L1_error) - __pyx_v_msrc = (__pyx_t_1[0]); - - /* "View.MemoryView":449 - * cdef __Pyx_memviewslice src_slice - * cdef __Pyx_memviewslice msrc = get_slice_from_memview(src, &src_slice)[0] - * cdef __Pyx_memviewslice mdst = get_slice_from_memview(dst, &dst_slice)[0] # <<<<<<<<<<<<<< - * - * memoryview_copy_contents(msrc, mdst, src.ndim, dst.ndim, self.dtype_is_object) - */ - if (!(likely(((__pyx_v_dst) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_dst, __pyx_memoryview_type))))) __PYX_ERR(1, 449, __pyx_L1_error) - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_dst), (&__pyx_v_dst_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 449, __pyx_L1_error) - __pyx_v_mdst = (__pyx_t_1[0]); - - /* "View.MemoryView":451 - * cdef __Pyx_memviewslice mdst = get_slice_from_memview(dst, &dst_slice)[0] - * - * memoryview_copy_contents(msrc, mdst, src.ndim, dst.ndim, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_src, __pyx_n_s_ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 451, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_t_2); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 451, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = 
__Pyx_PyObject_GetAttrStr(__pyx_v_dst, __pyx_n_s_ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 451, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyInt_As_int(__pyx_t_2); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 451, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = __pyx_memoryview_copy_contents(__pyx_v_msrc, __pyx_v_mdst, __pyx_t_3, __pyx_t_4, __pyx_v_self->dtype_is_object); if (unlikely(__pyx_t_5 == ((int)-1))) __PYX_ERR(1, 451, __pyx_L1_error) - - /* "View.MemoryView":445 - * return obj - * - * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assignment", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":453 - * memoryview_copy_contents(msrc, mdst, src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value) { - int __pyx_v_array[0x80]; - void *__pyx_v_tmp; - void *__pyx_v_item; - __Pyx_memviewslice *__pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_tmp_slice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - char const *__pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assign_scalar", 0); - - /* "View.MemoryView":455 - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - * cdef int array[128] - * cdef void *tmp = NULL # <<<<<<<<<<<<<< - * cdef void *item - * - */ - __pyx_v_tmp = NULL; - - /* "View.MemoryView":460 - * cdef __Pyx_memviewslice *dst_slice - * cdef __Pyx_memviewslice tmp_slice - * dst_slice = get_slice_from_memview(dst, &tmp_slice) # <<<<<<<<<<<<<< - * - * if self.view.itemsize > sizeof(array): - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_dst, (&__pyx_v_tmp_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 460, __pyx_L1_error) - __pyx_v_dst_slice = __pyx_t_1; - - /* "View.MemoryView":462 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - __pyx_t_2 = (((size_t)__pyx_v_self->view.itemsize) > (sizeof(__pyx_v_array))); - if (__pyx_t_2) { - - /* "View.MemoryView":463 - * - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) # <<<<<<<<<<<<<< - * if tmp == NULL: - * raise MemoryError - */ - __pyx_v_tmp = PyMem_Malloc(__pyx_v_self->view.itemsize); - - /* "View.MemoryView":464 - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # 
<<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - __pyx_t_2 = (__pyx_v_tmp == NULL); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":465 - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * item = tmp - * else: - */ - PyErr_NoMemory(); __PYX_ERR(1, 465, __pyx_L1_error) - - /* "View.MemoryView":464 - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - } - - /* "View.MemoryView":466 - * if tmp == NULL: - * raise MemoryError - * item = tmp # <<<<<<<<<<<<<< - * else: - * item = array - */ - __pyx_v_item = __pyx_v_tmp; - - /* "View.MemoryView":462 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":468 - * item = tmp - * else: - * item = array # <<<<<<<<<<<<<< - * - * try: - */ - /*else*/ { - __pyx_v_item = ((void *)__pyx_v_array); - } - __pyx_L3:; - - /* "View.MemoryView":470 - * item = array - * - * try: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * (<PyObject **> item)[0] = value - */ - /*try:*/ { - - /* "View.MemoryView":471 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * (<PyObject **> item)[0] = value - * else: - */ - if (__pyx_v_self->dtype_is_object) { - - /* "View.MemoryView":472 - * try: - * if self.dtype_is_object: - * (<PyObject **> item)[0] = value # <<<<<<<<<<<<<< - * else: - * self.assign_item_from_object(<char *> item, value) - */ - (((PyObject **)__pyx_v_item)[0]) = ((PyObject *)__pyx_v_value); - - /* "View.MemoryView":471 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * (<PyObject **> item)[0] = value - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":474 - * (<PyObject **> item)[0] = value - * else: - * self.assign_item_from_object(<char *> item, value) # <<<<<<<<<<<<<< - * - * - */ - /*else*/ { - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, ((char *)__pyx_v_item), __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 474, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L8:; - - /* "View.MemoryView":478 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - __pyx_t_2 = (__pyx_v_self->view.suboffsets != NULL); - if (__pyx_t_2) { - - /* "View.MemoryView":479 - * - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) # <<<<<<<<<<<<<< - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - * item, self.dtype_is_object) - */ - __pyx_t_4 = assert_direct_dimensions(__pyx_v_self->view.suboffsets, __pyx_v_self->view.ndim); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 479, __pyx_L6_error) - - /* "View.MemoryView":478 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - } - - /* "View.MemoryView":480 - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, # <<<<<<<<<<<<<< - * item, self.dtype_is_object) - * finally: - */ - 
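- /* At this point item holds one packed element: the stack buffer array when self.view.itemsize fits in it, otherwise the PyMem_Malloc'd tmp. The call below broadcasts that single element over every position of dst_slice, and the finally clause frees tmp in all cases. */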
__pyx_memoryview_slice_assign_scalar(__pyx_v_dst_slice, __pyx_v_dst->view.ndim, __pyx_v_self->view.itemsize, __pyx_v_item, __pyx_v_self->dtype_is_object); - } - - /* "View.MemoryView":483 - * item, self.dtype_is_object) - * finally: - * PyMem_Free(tmp) # <<<<<<<<<<<<<< - * - * cdef setitem_indexed(self, index, value): - */ - /*finally:*/ { - /*normal exit:*/{ - PyMem_Free(__pyx_v_tmp); - goto __pyx_L7; - } - __pyx_L6_error:; - /*exception exit:*/{ - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12); - if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0)) __Pyx_ErrFetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_7); - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_10); - __Pyx_XGOTREF(__pyx_t_11); - __Pyx_XGOTREF(__pyx_t_12); - __pyx_t_4 = __pyx_lineno; __pyx_t_5 = __pyx_clineno; __pyx_t_6 = __pyx_filename; - { - PyMem_Free(__pyx_v_tmp); - } - if (PY_MAJOR_VERSION >= 3) { - __Pyx_XGIVEREF(__pyx_t_10); - __Pyx_XGIVEREF(__pyx_t_11); - __Pyx_XGIVEREF(__pyx_t_12); - __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12); - } - __Pyx_XGIVEREF(__pyx_t_7); - __Pyx_XGIVEREF(__pyx_t_8); - __Pyx_XGIVEREF(__pyx_t_9); - __Pyx_ErrRestore(__pyx_t_7, __pyx_t_8, __pyx_t_9); - __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __pyx_lineno = __pyx_t_4; __pyx_clineno = __pyx_t_5; __pyx_filename = __pyx_t_6; - goto __pyx_L1_error; - } - __pyx_L7:; - } - - /* "View.MemoryView":453 - * memoryview_copy_contents(msrc, mdst, src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assign_scalar", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":485 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) - */ - -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - char *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_indexed", 0); - - /* "View.MemoryView":486 - * - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) # <<<<<<<<<<<<<< - * self.assign_item_from_object(itemp, value) - * - */ - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_index); if (unlikely(__pyx_t_1 == ((char *)NULL))) __PYX_ERR(1, 486, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_1; - - /* "View.MemoryView":487 - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) - 
* self.assign_item_from_object(itemp, value) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 487, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":485 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_indexed", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":489 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_v_struct = NULL; - PyObject *__pyx_v_bytesitem = 0; - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - int __pyx_t_10; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":492 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef bytes bytesitem - * - */ - __pyx_t_1 = __Pyx_ImportDottedModule(__pyx_n_s_struct, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 492, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":495 - * cdef bytes bytesitem - * - * bytesitem = itemp[:self.view.itemsize] # <<<<<<<<<<<<<< - * try: - * result = struct.unpack(self.view.format, bytesitem) - */ - __pyx_t_1 = __Pyx_PyBytes_FromStringAndSize(__pyx_v_itemp + 0, __pyx_v_self->view.itemsize - 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 495, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_bytesitem = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":496 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_2, &__pyx_t_3, &__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - /*try:*/ { - - /* "View.MemoryView":497 - * bytesitem = itemp[:self.view.itemsize] - * try: - * result = struct.unpack(self.view.format, bytesitem) # <<<<<<<<<<<<<< - * except struct.error: - * raise ValueError, "Unable to convert item to object" - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_unpack); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 
497, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 497, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_8, 2+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 497, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_v_result = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":496 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - } - - /* "View.MemoryView":501 - * raise ValueError, "Unable to convert item to object" - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - /*else:*/ { - __pyx_t_9 = __Pyx_ssize_strlen(__pyx_v_self->view.format); if (unlikely(__pyx_t_9 == ((Py_ssize_t)-1))) __PYX_ERR(1, 501, __pyx_L5_except_error) - __pyx_t_10 = (__pyx_t_9 == 1); - if (__pyx_t_10) { - - /* "View.MemoryView":502 - * else: - * if len(self.view.format) == 1: - * return result[0] # <<<<<<<<<<<<<< - * return result - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_result, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 502, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L6_except_return; - - /* "View.MemoryView":501 - * raise ValueError, "Unable to convert item to object" - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - } - - /* "View.MemoryView":503 - * if len(self.view.format) == 1: - * return result[0] - * return result # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_result); - __pyx_r = __pyx_v_result; - goto __pyx_L6_except_return; - } - __pyx_L3_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "View.MemoryView":498 - * try: - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: # <<<<<<<<<<<<<< - * raise ValueError, "Unable to convert item to object" - * else: - */ - __Pyx_ErrFetch(&__pyx_t_1, &__pyx_t_5, &__pyx_t_6); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_error); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 498, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = __Pyx_PyErr_GivenExceptionMatches(__pyx_t_1, __pyx_t_7); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_ErrRestore(__pyx_t_1, __pyx_t_5, __pyx_t_6); - __pyx_t_1 = 0; __pyx_t_5 = 0; __pyx_t_6 = 0; - if (__pyx_t_8) { - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_6, &__pyx_t_5, 
&__pyx_t_1) < 0) __PYX_ERR(1, 498, __pyx_L5_except_error) - __Pyx_XGOTREF(__pyx_t_6); - __Pyx_XGOTREF(__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_1); - - /* "View.MemoryView":499 - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - * raise ValueError, "Unable to convert item to object" # <<<<<<<<<<<<<< - * else: - * if len(self.view.format) == 1: - */ - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_kp_s_Unable_to_convert_item_to_object, 0, 0); - __PYX_ERR(1, 499, __pyx_L5_except_error) - } - goto __pyx_L5_except_error; - - /* "View.MemoryView":496 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - __pyx_L5_except_error:; - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L1_error; - __pyx_L6_except_return:; - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L0; - } - - /* "View.MemoryView":489 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesitem); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":505 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_v_struct = NULL; - char __pyx_v_c; - PyObject *__pyx_v_bytesvalue = 0; - Py_ssize_t __pyx_v_i; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - Py_ssize_t __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - char *__pyx_t_9; - char *__pyx_t_10; - char *__pyx_t_11; - char *__pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":508 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef char c - * cdef bytes bytesvalue - */ - __pyx_t_1 = __Pyx_ImportDottedModule(__pyx_n_s_struct, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 508, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":513 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - __pyx_t_2 = PyTuple_Check(__pyx_v_value); - if (__pyx_t_2) { - - /* 
"View.MemoryView":514 - * - * if isinstance(value, tuple): - * bytesvalue = struct.pack(self.view.format, *value) # <<<<<<<<<<<<<< - * else: - * bytesvalue = struct.pack(self.view.format, value) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyNumber_Add(__pyx_t_4, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (!(likely(PyBytes_CheckExact(__pyx_t_3))||((__pyx_t_3) == Py_None) || __Pyx_RaiseUnexpectedTypeError("bytes", __pyx_t_3))) __PYX_ERR(1, 514, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":513 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":516 - * bytesvalue = struct.pack(self.view.format, *value) - * else: - * bytesvalue = struct.pack(self.view.format, value) # <<<<<<<<<<<<<< - * - * for i, c in enumerate(bytesvalue): - */ - /*else*/ { - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 516, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 516, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_4, __pyx_t_1, __pyx_v_value}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 516, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - if (!(likely(PyBytes_CheckExact(__pyx_t_3))||((__pyx_t_3) == Py_None) || __Pyx_RaiseUnexpectedTypeError("bytes", __pyx_t_3))) __PYX_ERR(1, 516, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":518 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_7 = 0; - if (unlikely(__pyx_v_bytesvalue == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' is not 
iterable"); - __PYX_ERR(1, 518, __pyx_L1_error) - } - __Pyx_INCREF(__pyx_v_bytesvalue); - __pyx_t_8 = __pyx_v_bytesvalue; - __pyx_t_10 = PyBytes_AS_STRING(__pyx_t_8); - __pyx_t_11 = (__pyx_t_10 + PyBytes_GET_SIZE(__pyx_t_8)); - for (__pyx_t_12 = __pyx_t_10; __pyx_t_12 < __pyx_t_11; __pyx_t_12++) { - __pyx_t_9 = __pyx_t_12; - __pyx_v_c = (__pyx_t_9[0]); - - /* "View.MemoryView":519 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - __pyx_v_i = __pyx_t_7; - - /* "View.MemoryView":518 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_7 = (__pyx_t_7 + 1); - - /* "View.MemoryView":519 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - (__pyx_v_itemp[__pyx_v_i]) = __pyx_v_c; - } - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "View.MemoryView":505 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.memoryview.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesvalue); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":521 - * itemp[i] = c - * - * @cname('getbuffer') # <<<<<<<<<<<<<< - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - */ - -/* Python wrapper */ -CYTHON_UNUSED static int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -CYTHON_UNUSED static int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - Py_ssize_t *__pyx_t_3; - char *__pyx_t_4; - void *__pyx_t_5; - int __pyx_t_6; - Py_ssize_t __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (unlikely(__pyx_v_info == NULL)) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":523 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - 
* if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError, "Cannot create writable memory view from read-only memoryview" - * - */ - __pyx_t_2 = ((__pyx_v_flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_1 = __pyx_v_self->view.readonly; - __pyx_L4_bool_binop_done:; - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":524 - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError, "Cannot create writable memory view from read-only memoryview" # <<<<<<<<<<<<<< - * - * if flags & PyBUF_ND: - */ - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_kp_s_Cannot_create_writable_memory_vi, 0, 0); - __PYX_ERR(1, 524, __pyx_L1_error) - - /* "View.MemoryView":523 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError, "Cannot create writable memory view from read-only memoryview" - * - */ - } - - /* "View.MemoryView":526 - * raise ValueError, "Cannot create writable memory view from read-only memoryview" - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = self.view.shape - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_ND) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":527 - * - * if flags & PyBUF_ND: - * info.shape = self.view.shape # <<<<<<<<<<<<<< - * else: - * info.shape = NULL - */ - __pyx_t_3 = __pyx_v_self->view.shape; - __pyx_v_info->shape = __pyx_t_3; - - /* "View.MemoryView":526 - * raise ValueError, "Cannot create writable memory view from read-only memoryview" - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = self.view.shape - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":529 - * info.shape = self.view.shape - * else: - * info.shape = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_STRIDES: - */ - /*else*/ { - __pyx_v_info->shape = NULL; - } - __pyx_L6:; - - /* "View.MemoryView":531 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_STRIDES) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":532 - * - * if flags & PyBUF_STRIDES: - * info.strides = self.view.strides # <<<<<<<<<<<<<< - * else: - * info.strides = NULL - */ - __pyx_t_3 = __pyx_v_self->view.strides; - __pyx_v_info->strides = __pyx_t_3; - - /* "View.MemoryView":531 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - goto __pyx_L7; - } - - /* "View.MemoryView":534 - * info.strides = self.view.strides - * else: - * info.strides = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_INDIRECT: - */ - /*else*/ { - __pyx_v_info->strides = NULL; - } - __pyx_L7:; - - /* "View.MemoryView":536 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_INDIRECT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":537 - * - * if flags & PyBUF_INDIRECT: - * info.suboffsets = self.view.suboffsets # <<<<<<<<<<<<<< - * else: - * info.suboffsets = NULL - */ - __pyx_t_3 = __pyx_v_self->view.suboffsets; - __pyx_v_info->suboffsets = __pyx_t_3; - - /* "View.MemoryView":536 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - goto __pyx_L8; - 
} - - /* "View.MemoryView":539 - * info.suboffsets = self.view.suboffsets - * else: - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - /*else*/ { - __pyx_v_info->suboffsets = NULL; - } - __pyx_L8:; - - /* "View.MemoryView":541 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":542 - * - * if flags & PyBUF_FORMAT: - * info.format = self.view.format # <<<<<<<<<<<<<< - * else: - * info.format = NULL - */ - __pyx_t_4 = __pyx_v_self->view.format; - __pyx_v_info->format = __pyx_t_4; - - /* "View.MemoryView":541 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":544 - * info.format = self.view.format - * else: - * info.format = NULL # <<<<<<<<<<<<<< - * - * info.buf = self.view.buf - */ - /*else*/ { - __pyx_v_info->format = NULL; - } - __pyx_L9:; - - /* "View.MemoryView":546 - * info.format = NULL - * - * info.buf = self.view.buf # <<<<<<<<<<<<<< - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - */ - __pyx_t_5 = __pyx_v_self->view.buf; - __pyx_v_info->buf = __pyx_t_5; - - /* "View.MemoryView":547 - * - * info.buf = self.view.buf - * info.ndim = self.view.ndim # <<<<<<<<<<<<<< - * info.itemsize = self.view.itemsize - * info.len = self.view.len - */ - __pyx_t_6 = __pyx_v_self->view.ndim; - __pyx_v_info->ndim = __pyx_t_6; - - /* "View.MemoryView":548 - * info.buf = self.view.buf - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize # <<<<<<<<<<<<<< - * info.len = self.view.len - * info.readonly = self.view.readonly - */ - __pyx_t_7 = __pyx_v_self->view.itemsize; - __pyx_v_info->itemsize = __pyx_t_7; - - /* "View.MemoryView":549 - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - * info.len = self.view.len # <<<<<<<<<<<<<< - * info.readonly = self.view.readonly - * info.obj = self - */ - __pyx_t_7 = __pyx_v_self->view.len; - __pyx_v_info->len = __pyx_t_7; - - /* "View.MemoryView":550 - * info.itemsize = self.view.itemsize - * info.len = self.view.len - * info.readonly = self.view.readonly # <<<<<<<<<<<<<< - * info.obj = self - * - */ - __pyx_t_1 = __pyx_v_self->view.readonly; - __pyx_v_info->readonly = __pyx_t_1; - - /* "View.MemoryView":551 - * info.len = self.view.len - * info.readonly = self.view.readonly - * info.obj = self # <<<<<<<<<<<<<< - * - * - */ - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_GIVEREF((PyObject *)__pyx_v_self); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":521 - * itemp[i] = c - * - * @cname('getbuffer') # <<<<<<<<<<<<<< - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} 
- -/* "View.MemoryView":554 - * - * - * @property # <<<<<<<<<<<<<< - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":556 - * @property - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) # <<<<<<<<<<<<<< - * transpose_memslice(&result.from_slice) - * return result - */ - __pyx_t_1 = __pyx_memoryview_copy_object(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 556, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_memoryviewslice_type))))) __PYX_ERR(1, 556, __pyx_L1_error) - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":557 - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_t_2 = __pyx_memslice_transpose((&__pyx_v_result->from_slice)); if (unlikely(__pyx_t_2 == ((int)-1))) __PYX_ERR(1, 557, __pyx_L1_error) - - /* "View.MemoryView":558 - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - * return result # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF((PyObject *)__pyx_v_result); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":554 - * - * - * @property # <<<<<<<<<<<<<< - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.T.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":560 - * return result - * - * @property # <<<<<<<<<<<<<< - * def base(self): - * return self._get_base() - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = 
__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":562 - * @property - * def base(self): - * return self._get_base() # <<<<<<<<<<<<<< - * - * cdef _get_base(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->_get_base(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 562, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":560 - * return result - * - * @property # <<<<<<<<<<<<<< - * def base(self): - * return self._get_base() - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.base.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":564 - * return self._get_base() - * - * cdef _get_base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - -static PyObject *__pyx_memoryview__get_base(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_get_base", 0); - - /* "View.MemoryView":565 - * - * cdef _get_base(self): - * return self.obj # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->obj); - __pyx_r = __pyx_v_self->obj; - goto __pyx_L0; - - /* "View.MemoryView":564 - * return self._get_base() - * - * cdef _get_base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":567 - * return self.obj - * - * @property # <<<<<<<<<<<<<< - * def shape(self): - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_7genexpr__pyx_v_length; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":569 - * 
@property - * def shape(self): - * return tuple([length for length in self.view.shape[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - { /* enter inner scope */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 569, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_4 = __pyx_v_self->view.shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_7genexpr__pyx_v_length = (__pyx_t_2[0]); - __pyx_t_5 = PyInt_FromSsize_t(__pyx_7genexpr__pyx_v_length); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 569, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(1, 569, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - } /* exit inner scope */ - __pyx_t_5 = PyList_AsTuple(((PyObject*)__pyx_t_1)); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 569, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "View.MemoryView":567 - * return self.obj - * - * @property # <<<<<<<<<<<<<< - * def shape(self): - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.shape.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":571 - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - * @property # <<<<<<<<<<<<<< - * def strides(self): - * if self.view.strides == NULL: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_8genexpr1__pyx_v_stride; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":573 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError, "Buffer view does not expose strides" - */ - __pyx_t_1 = (__pyx_v_self->view.strides == NULL); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":575 - * if self.view.strides == NULL: - * - * raise ValueError, "Buffer view does not expose strides" # <<<<<<<<<<<<<< - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - */ - __Pyx_Raise(__pyx_builtin_ValueError, 
__pyx_kp_s_Buffer_view_does_not_expose_stri, 0, 0); - __PYX_ERR(1, 575, __pyx_L1_error) - - /* "View.MemoryView":573 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError, "Buffer view does not expose strides" - */ - } - - /* "View.MemoryView":577 - * raise ValueError, "Buffer view does not expose strides" - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - { /* enter inner scope */ - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = (__pyx_v_self->view.strides + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.strides; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_8genexpr1__pyx_v_stride = (__pyx_t_3[0]); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_8genexpr1__pyx_v_stride); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - } /* exit inner scope */ - __pyx_t_6 = PyList_AsTuple(((PyObject*)__pyx_t_2)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "View.MemoryView":571 - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - * @property # <<<<<<<<<<<<<< - * def strides(self): - * if self.view.strides == NULL: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.strides.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":579 - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - * - * @property # <<<<<<<<<<<<<< - * def suboffsets(self): - * if self.view.suboffsets == NULL: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_8genexpr2__pyx_v_suboffset; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":581 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - __pyx_t_1 
= (__pyx_v_self->view.suboffsets == NULL); - if (__pyx_t_1) { - - /* "View.MemoryView":582 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PySequence_Multiply(__pyx_tuple__4, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 582, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":581 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - } - - /* "View.MemoryView":584 - * return (-1,) * self.view.ndim - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - { /* enter inner scope */ - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 584, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = (__pyx_v_self->view.suboffsets + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.suboffsets; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_8genexpr2__pyx_v_suboffset = (__pyx_t_3[0]); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_8genexpr2__pyx_v_suboffset); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 584, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(1, 584, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - } /* exit inner scope */ - __pyx_t_6 = PyList_AsTuple(((PyObject*)__pyx_t_2)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 584, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "View.MemoryView":579 - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - * - * @property # <<<<<<<<<<<<<< - * def suboffsets(self): - * if self.view.suboffsets == NULL: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.suboffsets.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":586 - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - * - * @property # <<<<<<<<<<<<<< - * def ndim(self): - * return self.view.ndim - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - 
int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":588 - * @property - * def ndim(self): - * return self.view.ndim # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 588, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":586 - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - * - * @property # <<<<<<<<<<<<<< - * def ndim(self): - * return self.view.ndim - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.ndim.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":590 - * return self.view.ndim - * - * @property # <<<<<<<<<<<<<< - * def itemsize(self): - * return self.view.itemsize - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":592 - * @property - * def itemsize(self): - * return self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 592, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":590 - * return self.view.ndim - * - * @property # <<<<<<<<<<<<<< - * def itemsize(self): - * return self.view.itemsize - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.itemsize.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":594 - * return self.view.itemsize - * - * @property # <<<<<<<<<<<<<< - * def nbytes(self): - * return self.size * self.view.itemsize - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = 
__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":596 - * @property - * def nbytes(self): - * return self.size * self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 596, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 596, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 596, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":594 - * return self.view.itemsize - * - * @property # <<<<<<<<<<<<<< - * def nbytes(self): - * return self.size * self.view.itemsize - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.nbytes.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":598 - * return self.size * self.view.itemsize - * - * @property # <<<<<<<<<<<<<< - * def size(self): - * if self._size is None: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":600 - * @property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - __pyx_t_1 = (__pyx_v_self->_size == Py_None); - if (__pyx_t_1) { - - /* "View.MemoryView":601 - * def size(self): - * if self._size is None: - * result = 1 # <<<<<<<<<<<<<< - * - * for length in 
self.view.shape[:self.view.ndim]: - */ - __Pyx_INCREF(__pyx_int_1); - __pyx_v_result = __pyx_int_1; - - /* "View.MemoryView":603 - * result = 1 - * - * for length in self.view.shape[:self.view.ndim]: # <<<<<<<<<<<<<< - * result *= length - * - */ - __pyx_t_3 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_4 = __pyx_v_self->view.shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_t_5 = PyInt_FromSsize_t((__pyx_t_2[0])); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 603, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_5); - __pyx_t_5 = 0; - - /* "View.MemoryView":604 - * - * for length in self.view.shape[:self.view.ndim]: - * result *= length # <<<<<<<<<<<<<< - * - * self._size = result - */ - __pyx_t_5 = PyNumber_InPlaceMultiply(__pyx_v_result, __pyx_v_length); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 604, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF_SET(__pyx_v_result, __pyx_t_5); - __pyx_t_5 = 0; - } - - /* "View.MemoryView":606 - * result *= length - * - * self._size = result # <<<<<<<<<<<<<< - * - * return self._size - */ - __Pyx_INCREF(__pyx_v_result); - __Pyx_GIVEREF(__pyx_v_result); - __Pyx_GOTREF(__pyx_v_self->_size); - __Pyx_DECREF(__pyx_v_self->_size); - __pyx_v_self->_size = __pyx_v_result; - - /* "View.MemoryView":600 - * @property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - } - - /* "View.MemoryView":608 - * self._size = result - * - * return self._size # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->_size); - __pyx_r = __pyx_v_self->_size; - goto __pyx_L0; - - /* "View.MemoryView":598 - * return self.size * self.view.itemsize - * - * @property # <<<<<<<<<<<<<< - * def size(self): - * if self._size is None: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.size.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":610 - * return self._size - * - * def __len__(self): # <<<<<<<<<<<<<< - * if self.view.ndim >= 1: - * return self.view.shape[0] - */ - -/* Python wrapper */ -static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":611 - * - * def __len__(self): - * if self.view.ndim >= 1: # <<<<<<<<<<<<<< - * return self.view.shape[0] - * - */ - __pyx_t_1 = (__pyx_v_self->view.ndim >= 1); - if (__pyx_t_1) { - - /* "View.MemoryView":612 - * def __len__(self): - * if self.view.ndim >= 1: - * return self.view.shape[0] # <<<<<<<<<<<<<< - * - * return 
0 - */ - __pyx_r = (__pyx_v_self->view.shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":611 - * - * def __len__(self): - * if self.view.ndim >= 1: # <<<<<<<<<<<<<< - * return self.view.shape[0] - * - */ - } - - /* "View.MemoryView":614 - * return self.view.shape[0] - * - * return 0 # <<<<<<<<<<<<<< - * - * def __repr__(self): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":610 - * return self._size - * - * def __len__(self): # <<<<<<<<<<<<<< - * if self.view.ndim >= 1: - * return self.view.shape[0] - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":616 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "<MemoryView of %r at 0x%x>" % (self.base.__class__.__name__, - * id(self)) - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":617 - * - * def __repr__(self): - * return "<MemoryView of %r at 0x%x>" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 617, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 617, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 617, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":618 - * def __repr__(self): - * return "<MemoryView of %r at 0x%x>" % (self.base.__class__.__name__, - * id(self)) # <<<<<<<<<<<<<< - * - * def __str__(self): - */ - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_id, ((PyObject *)__pyx_v_self)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 618, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "View.MemoryView":617 - * - * def __repr__(self): - * return "<MemoryView of %r at 0x%x>" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 617, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 617, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 =
0; - goto __pyx_L0; - - /* "View.MemoryView":616 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "<MemoryView of %r at 0x%x>" % (self.base.__class__.__name__, - * id(self)) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__repr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":620 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "<MemoryView of %r object>" % (self.base.__class__.__name__,) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__str__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__str__", 0); - - /* "View.MemoryView":621 - * - * def __str__(self): - * return "<MemoryView of %r object>" % (self.base.__class__.__name__,) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 621, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 621, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 621, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 621, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_object, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 621, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":620 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "<MemoryView of %r object>" % (self.base.__class__.__name__,) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.__str__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":624 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_c_contig(PyObject
*__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_c_contig (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("is_c_contig", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "is_c_contig", 0))) return NULL; - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice *__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_c_contig", 0); - - /* "View.MemoryView":627 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 627, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":628 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'C', self.view.ndim) # <<<<<<<<<<<<<< - * - * def is_f_contig(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'C', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 628, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":624 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_c_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":630 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject 
*__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_f_contig (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("is_f_contig", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "is_f_contig", 0))) return NULL; - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice *__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_f_contig", 0); - - /* "View.MemoryView":633 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 633, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":634 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'F', self.view.ndim) # <<<<<<<<<<<<<< - * - * def copy(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'F', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 634, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":630 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_f_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":636 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, -#if 
CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("copy", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "copy", 0))) return NULL; - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_mslice; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy", 0); - - /* "View.MemoryView":638 - * def copy(self): - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &mslice) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_F_CONTIGUOUS)); - - /* "View.MemoryView":640 - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - * - * slice_copy(self, &mslice) # <<<<<<<<<<<<<< - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_mslice)); - - /* "View.MemoryView":641 - * - * slice_copy(self, &mslice) - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_C_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_mslice), ((char *)"c"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_C_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 641, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":646 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &mslice) # <<<<<<<<<<<<<< - * - * def copy_fortran(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_mslice)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 646, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":636 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":648 - * return memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * cdef int flags = 
self.flags & ~PyBUF_C_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy_fortran (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("copy_fortran", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "copy_fortran", 0))) return NULL; - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy_fortran", 0); - - /* "View.MemoryView":650 - * def copy_fortran(self): - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &src) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_C_CONTIGUOUS)); - - /* "View.MemoryView":652 - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - * - * slice_copy(self, &src) # <<<<<<<<<<<<<< - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_src)); - - /* "View.MemoryView":653 - * - * slice_copy(self, &src) - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_F_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_src), ((char *)"fortran"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_F_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 653, __pyx_L1_error) - __pyx_v_dst = __pyx_t_1; - - /* "View.MemoryView":658 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &dst) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_dst)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":648 - * return memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - */ - - /* function exit code */ - 
__pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy_fortran", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("__reduce_cython__", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "__reduce_cython__", 0))) return NULL; - __pyx_r = __pyx_pf___pyx_memoryview___reduce_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_kp_s_no_default___reduce___due_to_non, 0, 0); - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); 
/*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - CYTHON_UNUSED PyObject *__pyx_v___pyx_state = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_state,0}; - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 3, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__setstate_cython__") < 0)) __PYX_ERR(1, 3, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v___pyx_state = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__setstate_cython__", 1, 1, 1, __pyx_nargs); __PYX_ERR(1, 3, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf___pyx_memoryview_2__setstate_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v___pyx_state); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" # <<<<<<<<<<<<<< - */ - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_kp_s_no_default___reduce___due_to_non, 0, 0); - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__setstate_cython__", __pyx_clineno, 
__pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":662 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - -static PyObject *__pyx_memoryview_new(PyObject *__pyx_v_o, int __pyx_v_flags, int __pyx_v_dtype_is_object, __Pyx_TypeInfo *__pyx_v_typeinfo) { - struct __pyx_memoryview_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_cwrapper", 0); - - /* "View.MemoryView":663 - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) # <<<<<<<<<<<<<< - * result.typeinfo = typeinfo - * return result - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 663, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 663, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 663, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_o); - __Pyx_GIVEREF(__pyx_v_o); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_o); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 663, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryview_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":664 - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_v_result->typeinfo = __pyx_v_typeinfo; - - /* "View.MemoryView":665 - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_check') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF((PyObject *)__pyx_v_result); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":662 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":668 - * - * 
@cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o) noexcept: # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *__pyx_v_o) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("memoryview_check", 0); - - /* "View.MemoryView":669 - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o) noexcept: - * return isinstance(o, memoryview) # <<<<<<<<<<<<<< - * - * cdef tuple _unellipsify(object index, int ndim): - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_o, __pyx_memoryview_type); - __pyx_r = __pyx_t_1; - goto __pyx_L0; - - /* "View.MemoryView":668 - * - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o) noexcept: # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":671 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - -static PyObject *_unellipsify(PyObject *__pyx_v_index, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_idx; - PyObject *__pyx_v_tup = NULL; - PyObject *__pyx_v_result = NULL; - int __pyx_v_have_slices; - int __pyx_v_seen_ellipsis; - PyObject *__pyx_v_item = NULL; - Py_ssize_t __pyx_v_nslices; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - Py_ssize_t __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_UCS4 __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_unellipsify", 0); - - /* "View.MemoryView":677 - * """ - * cdef Py_ssize_t idx - * tup = index if isinstance(index, tuple) else (index,) # <<<<<<<<<<<<<< - * - * result = [slice(None)] * ndim - */ - __pyx_t_2 = PyTuple_Check(__pyx_v_index); - if (__pyx_t_2) { - __Pyx_INCREF(((PyObject*)__pyx_v_index)); - __pyx_t_1 = __pyx_v_index; - } else { - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 677, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_index); - __Pyx_GIVEREF(__pyx_v_index); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_index); - __pyx_t_1 = __pyx_t_3; - __pyx_t_3 = 0; - } - __pyx_v_tup = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":679 - * tup = index if isinstance(index, tuple) else (index,) - * - * result = [slice(None)] * ndim # <<<<<<<<<<<<<< - * have_slices = False - * seen_ellipsis = False - */ - __pyx_t_1 = PyList_New(1 * ((__pyx_v_ndim<0) ? 
0:__pyx_v_ndim)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < __pyx_v_ndim; __pyx_temp++) { - __Pyx_INCREF(__pyx_slice__5); - __Pyx_GIVEREF(__pyx_slice__5); - PyList_SET_ITEM(__pyx_t_1, __pyx_temp, __pyx_slice__5); - } - } - __pyx_v_result = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":680 - * - * result = [slice(None)] * ndim - * have_slices = False # <<<<<<<<<<<<<< - * seen_ellipsis = False - * idx = 0 - */ - __pyx_v_have_slices = 0; - - /* "View.MemoryView":681 - * result = [slice(None)] * ndim - * have_slices = False - * seen_ellipsis = False # <<<<<<<<<<<<<< - * idx = 0 - * for item in tup: - */ - __pyx_v_seen_ellipsis = 0; - - /* "View.MemoryView":682 - * have_slices = False - * seen_ellipsis = False - * idx = 0 # <<<<<<<<<<<<<< - * for item in tup: - * if item is Ellipsis: - */ - __pyx_v_idx = 0; - - /* "View.MemoryView":683 - * seen_ellipsis = False - * idx = 0 - * for item in tup: # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - if (unlikely(__pyx_v_tup == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); - __PYX_ERR(1, 683, __pyx_L1_error) - } - __pyx_t_1 = __pyx_v_tup; __Pyx_INCREF(__pyx_t_1); __pyx_t_4 = 0; - for (;;) { - if (__pyx_t_4 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_4); __Pyx_INCREF(__pyx_t_3); __pyx_t_4++; if (unlikely((0 < 0))) __PYX_ERR(1, 683, __pyx_L1_error) - #else - __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 683, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_XDECREF_SET(__pyx_v_item, __pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":684 - * idx = 0 - * for item in tup: - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * idx += ndim - len(tup) - */ - __pyx_t_2 = (__pyx_v_item == __pyx_builtin_Ellipsis); - if (__pyx_t_2) { - - /* "View.MemoryView":685 - * for item in tup: - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * idx += ndim - len(tup) - * seen_ellipsis = True - */ - __pyx_t_2 = (!__pyx_v_seen_ellipsis); - if (__pyx_t_2) { - - /* "View.MemoryView":686 - * if item is Ellipsis: - * if not seen_ellipsis: - * idx += ndim - len(tup) # <<<<<<<<<<<<<< - * seen_ellipsis = True - * have_slices = True - */ - if (unlikely(__pyx_v_tup == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 686, __pyx_L1_error) - } - __pyx_t_5 = PyTuple_GET_SIZE(__pyx_v_tup); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(1, 686, __pyx_L1_error) - __pyx_v_idx = (__pyx_v_idx + (__pyx_v_ndim - __pyx_t_5)); - - /* "View.MemoryView":687 - * if not seen_ellipsis: - * idx += ndim - len(tup) - * seen_ellipsis = True # <<<<<<<<<<<<<< - * have_slices = True - * else: - */ - __pyx_v_seen_ellipsis = 1; - - /* "View.MemoryView":685 - * for item in tup: - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * idx += ndim - len(tup) - * seen_ellipsis = True - */ - } - - /* "View.MemoryView":688 - * idx += ndim - len(tup) - * seen_ellipsis = True - * have_slices = True # <<<<<<<<<<<<<< - * else: - * if isinstance(item, slice): - */ - __pyx_v_have_slices = 1; - - /* "View.MemoryView":684 - * idx = 0 - * for item in tup: - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * idx += ndim - len(tup) - */ 
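- /* Worked example (annotation, not generated code): for ndim == 4,
-  * _unellipsify maps  view[..., 0]  to  (slice(None), slice(None),
-  * slice(None), 0).  result is pre-filled with ndim full slices, so the
-  * first Ellipsis only advances idx by ndim - len(tup) here (plus the
-  * shared idx += 1 below), leaving the slice(None) defaults in place;
-  * any later Ellipsis skips this jump and behaves like a single ':'.
-  */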
- goto __pyx_L5; - } - - /* "View.MemoryView":690 - * have_slices = True - * else: - * if isinstance(item, slice): # <<<<<<<<<<<<<< - * have_slices = True - * elif not PyIndex_Check(item): - */ - /*else*/ { - __pyx_t_2 = PySlice_Check(__pyx_v_item); - if (__pyx_t_2) { - - /* "View.MemoryView":691 - * else: - * if isinstance(item, slice): - * have_slices = True # <<<<<<<<<<<<<< - * elif not PyIndex_Check(item): - * raise TypeError, f"Cannot index with type '{type(item)}'" - */ - __pyx_v_have_slices = 1; - - /* "View.MemoryView":690 - * have_slices = True - * else: - * if isinstance(item, slice): # <<<<<<<<<<<<<< - * have_slices = True - * elif not PyIndex_Check(item): - */ - goto __pyx_L7; - } - - /* "View.MemoryView":692 - * if isinstance(item, slice): - * have_slices = True - * elif not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError, f"Cannot index with type '{type(item)}'" - * result[idx] = item - */ - __pyx_t_2 = (!(PyIndex_Check(__pyx_v_item) != 0)); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":693 - * have_slices = True - * elif not PyIndex_Check(item): - * raise TypeError, f"Cannot index with type '{type(item)}'" # <<<<<<<<<<<<<< - * result[idx] = item - * idx += 1 - */ - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 693, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = 0; - __pyx_t_6 = 127; - __Pyx_INCREF(__pyx_kp_u_Cannot_index_with_type); - __pyx_t_5 += 24; - __Pyx_GIVEREF(__pyx_kp_u_Cannot_index_with_type); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_kp_u_Cannot_index_with_type); - __pyx_t_7 = __Pyx_PyObject_FormatSimple(((PyObject *)Py_TYPE(__pyx_v_item)), __pyx_empty_unicode); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 693, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_6 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_7) > __pyx_t_6) ? 
__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_7) : __pyx_t_6; - __pyx_t_5 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_7); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_INCREF(__pyx_kp_u__6); - __pyx_t_5 += 1; - __Pyx_GIVEREF(__pyx_kp_u__6); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_kp_u__6); - __pyx_t_7 = __Pyx_PyUnicode_Join(__pyx_t_3, 3, __pyx_t_5, __pyx_t_6); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 693, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_t_7, 0, 0); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __PYX_ERR(1, 693, __pyx_L1_error) - - /* "View.MemoryView":692 - * if isinstance(item, slice): - * have_slices = True - * elif not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError, f"Cannot index with type '{type(item)}'" - * result[idx] = item - */ - } - __pyx_L7:; - - /* "View.MemoryView":694 - * elif not PyIndex_Check(item): - * raise TypeError, f"Cannot index with type '{type(item)}'" - * result[idx] = item # <<<<<<<<<<<<<< - * idx += 1 - * - */ - if (unlikely((__Pyx_SetItemInt(__pyx_v_result, __pyx_v_idx, __pyx_v_item, Py_ssize_t, 1, PyInt_FromSsize_t, 1, 1, 1) < 0))) __PYX_ERR(1, 694, __pyx_L1_error) - } - __pyx_L5:; - - /* "View.MemoryView":695 - * raise TypeError, f"Cannot index with type '{type(item)}'" - * result[idx] = item - * idx += 1 # <<<<<<<<<<<<<< - * - * nslices = ndim - idx - */ - __pyx_v_idx = (__pyx_v_idx + 1); - - /* "View.MemoryView":683 - * seen_ellipsis = False - * idx = 0 - * for item in tup: # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":697 - * idx += 1 - * - * nslices = ndim - idx # <<<<<<<<<<<<<< - * return have_slices or nslices, tuple(result) - * - */ - __pyx_v_nslices = (__pyx_v_ndim - __pyx_v_idx); - - /* "View.MemoryView":698 - * - * nslices = ndim - idx - * return have_slices or nslices, tuple(result) # <<<<<<<<<<<<<< - * - * cdef int assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim) except -1: - */ - __Pyx_XDECREF(__pyx_r); - if (!__pyx_v_have_slices) { - } else { - __pyx_t_7 = __Pyx_PyBool_FromLong(__pyx_v_have_slices); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_1 = __pyx_t_7; - __pyx_t_7 = 0; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_7 = PyInt_FromSsize_t(__pyx_v_nslices); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_1 = __pyx_t_7; - __pyx_t_7 = 0; - __pyx_L9_bool_binop_done:; - __pyx_t_7 = PyList_AsTuple(__pyx_v_result); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_7); - __pyx_t_1 = 0; - __pyx_t_7 = 0; - __pyx_r = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":671 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("View.MemoryView._unellipsify", __pyx_clineno, __pyx_lineno, 
__pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_tup); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_item); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":700 - * return have_slices or nslices, tuple(result) - * - * cdef int assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim) except -1: # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - -static int assert_direct_dimensions(Py_ssize_t *__pyx_v_suboffsets, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_suboffset; - int __pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assert_direct_dimensions", 0); - - /* "View.MemoryView":701 - * - * cdef int assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim) except -1: - * for suboffset in suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * raise ValueError, "Indirect dimensions not supported" - */ - __pyx_t_2 = (__pyx_v_suboffsets + __pyx_v_ndim); - for (__pyx_t_3 = __pyx_v_suboffsets; __pyx_t_3 < __pyx_t_2; __pyx_t_3++) { - __pyx_t_1 = __pyx_t_3; - __pyx_v_suboffset = (__pyx_t_1[0]); - - /* "View.MemoryView":702 - * cdef int assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim) except -1: - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError, "Indirect dimensions not supported" - * return 0 # return type just used as an error flag - */ - __pyx_t_4 = (__pyx_v_suboffset >= 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":703 - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - * raise ValueError, "Indirect dimensions not supported" # <<<<<<<<<<<<<< - * return 0 # return type just used as an error flag - * - */ - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_kp_s_Indirect_dimensions_not_supporte, 0, 0); - __PYX_ERR(1, 703, __pyx_L1_error) - - /* "View.MemoryView":702 - * cdef int assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim) except -1: - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError, "Indirect dimensions not supported" - * return 0 # return type just used as an error flag - */ - } - } - - /* "View.MemoryView":704 - * if suboffset >= 0: - * raise ValueError, "Indirect dimensions not supported" - * return 0 # return type just used as an error flag # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":700 - * return have_slices or nslices, tuple(result) - * - * cdef int assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim) except -1: # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.assert_direct_dimensions", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":711 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *__pyx_v_memview, PyObject *__pyx_v_indices) { - int __pyx_v_new_ndim; - int __pyx_v_suboffset_dim; - int __pyx_v_dim; - 
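- /* Overview (annotation): memview_slice applies the already-unellipsified
-  * index tuple one dimension at a time.  Per the source quoted below, an
-  * integral index collapses its axis via slice_memviewslice(...,
-  * is_slice=False), None injects a new length-1 axis (shape 1, stride 0,
-  * suboffset -1), and a slice object has its start/stop/step forwarded
-  * with is_slice=True.  The resulting __Pyx_memviewslice is rewrapped by
-  * memoryview_fromslice, keeping the to_object/to_dtype converters when
-  * the input was a _memoryviewslice.
-  */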
__Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - __Pyx_memviewslice *__pyx_v_p_src; - struct __pyx_memoryviewslice_obj *__pyx_v_memviewsliceobj = 0; - __Pyx_memviewslice *__pyx_v_p_dst; - int *__pyx_v_p_suboffset_dim; - Py_ssize_t __pyx_v_start; - Py_ssize_t __pyx_v_stop; - Py_ssize_t __pyx_v_step; - Py_ssize_t __pyx_v_cindex; - int __pyx_v_have_start; - int __pyx_v_have_stop; - int __pyx_v_have_step; - PyObject *__pyx_v_index = NULL; - struct __pyx_memoryview_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - struct __pyx_memoryview_obj *__pyx_t_3; - char *__pyx_t_4; - int __pyx_t_5; - Py_ssize_t __pyx_t_6; - PyObject *(*__pyx_t_7)(PyObject *); - PyObject *__pyx_t_8 = NULL; - Py_ssize_t __pyx_t_9; - int __pyx_t_10; - Py_ssize_t __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memview_slice", 0); - - /* "View.MemoryView":712 - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): - * cdef int new_ndim = 0, suboffset_dim = -1, dim # <<<<<<<<<<<<<< - * cdef bint negative_step - * cdef __Pyx_memviewslice src, dst - */ - __pyx_v_new_ndim = 0; - __pyx_v_suboffset_dim = -1; - - /* "View.MemoryView":719 - * - * - * memset(&dst, 0, sizeof(dst)) # <<<<<<<<<<<<<< - * - * cdef _memoryviewslice memviewsliceobj - */ - (void)(memset((&__pyx_v_dst), 0, (sizeof(__pyx_v_dst)))); - - /* "View.MemoryView":723 - * cdef _memoryviewslice memviewsliceobj - * - * assert memview.view.ndim > 0 # <<<<<<<<<<<<<< - * - * if isinstance(memview, _memoryviewslice): - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(__pyx_assertions_enabled())) { - __pyx_t_1 = (__pyx_v_memview->view.ndim > 0); - if (unlikely(!__pyx_t_1)) { - __Pyx_Raise(__pyx_builtin_AssertionError, 0, 0, 0); - __PYX_ERR(1, 723, __pyx_L1_error) - } - } - #else - if ((1)); else __PYX_ERR(1, 723, __pyx_L1_error) - #endif - - /* "View.MemoryView":725 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - if (__pyx_t_1) { - - /* "View.MemoryView":726 - * - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview # <<<<<<<<<<<<<< - * p_src = &memviewsliceobj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 726, __pyx_L1_error) - __pyx_t_2 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_2); - __pyx_v_memviewsliceobj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":727 - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, &src) - */ - __pyx_v_p_src = (&__pyx_v_memviewsliceobj->from_slice); - - /* "View.MemoryView":725 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - goto __pyx_L3; - } - - /* "View.MemoryView":729 - * p_src = &memviewsliceobj.from_slice - * else: - * slice_copy(memview, &src) # <<<<<<<<<<<<<< - * p_src = &src - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_src)); - - /* 
"View.MemoryView":730 - * else: - * slice_copy(memview, &src) - * p_src = &src # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_p_src = (&__pyx_v_src); - } - __pyx_L3:; - - /* "View.MemoryView":736 - * - * - * dst.memview = p_src.memview # <<<<<<<<<<<<<< - * dst.data = p_src.data - * - */ - __pyx_t_3 = __pyx_v_p_src->memview; - __pyx_v_dst.memview = __pyx_t_3; - - /* "View.MemoryView":737 - * - * dst.memview = p_src.memview - * dst.data = p_src.data # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_4 = __pyx_v_p_src->data; - __pyx_v_dst.data = __pyx_t_4; - - /* "View.MemoryView":742 - * - * - * cdef __Pyx_memviewslice *p_dst = &dst # <<<<<<<<<<<<<< - * cdef int *p_suboffset_dim = &suboffset_dim - * cdef Py_ssize_t start, stop, step, cindex - */ - __pyx_v_p_dst = (&__pyx_v_dst); - - /* "View.MemoryView":743 - * - * cdef __Pyx_memviewslice *p_dst = &dst - * cdef int *p_suboffset_dim = &suboffset_dim # <<<<<<<<<<<<<< - * cdef Py_ssize_t start, stop, step, cindex - * cdef bint have_start, have_stop, have_step - */ - __pyx_v_p_suboffset_dim = (&__pyx_v_suboffset_dim); - - /* "View.MemoryView":747 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * cindex = index - */ - __pyx_t_5 = 0; - if (likely(PyList_CheckExact(__pyx_v_indices)) || PyTuple_CheckExact(__pyx_v_indices)) { - __pyx_t_2 = __pyx_v_indices; __Pyx_INCREF(__pyx_t_2); __pyx_t_6 = 0; - __pyx_t_7 = NULL; - } else { - __pyx_t_6 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_indices); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 747, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 747, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_7)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_6 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_8 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_6); __Pyx_INCREF(__pyx_t_8); __pyx_t_6++; if (unlikely((0 < 0))) __PYX_ERR(1, 747, __pyx_L1_error) - #else - __pyx_t_8 = PySequence_ITEM(__pyx_t_2, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 747, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - #endif - } else { - if (__pyx_t_6 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_8 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_6); __Pyx_INCREF(__pyx_t_8); __pyx_t_6++; if (unlikely((0 < 0))) __PYX_ERR(1, 747, __pyx_L1_error) - #else - __pyx_t_8 = PySequence_ITEM(__pyx_t_2, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 747, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - #endif - } - } else { - __pyx_t_8 = __pyx_t_7(__pyx_t_2); - if (unlikely(!__pyx_t_8)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 747, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_8); - } - __Pyx_XDECREF_SET(__pyx_v_index, __pyx_t_8); - __pyx_t_8 = 0; - __pyx_v_dim = __pyx_t_5; - __pyx_t_5 = (__pyx_t_5 + 1); - - /* "View.MemoryView":748 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * cindex = index - * slice_memviewslice( - */ - __pyx_t_1 = (PyIndex_Check(__pyx_v_index) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":749 - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): - * cindex = index # <<<<<<<<<<<<<< - * slice_memviewslice( - * 
p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - */ - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_v_index); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 749, __pyx_L1_error) - __pyx_v_cindex = __pyx_t_9; - - /* "View.MemoryView":750 - * if PyIndex_Check(index): - * cindex = index - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_10 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_v_cindex, 0, 0, 0, 0, 0, 0); if (unlikely(__pyx_t_10 == ((int)-1))) __PYX_ERR(1, 750, __pyx_L1_error) - - /* "View.MemoryView":748 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * cindex = index - * slice_memviewslice( - */ - goto __pyx_L6; - } - - /* "View.MemoryView":756 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - __pyx_t_1 = (__pyx_v_index == Py_None); - if (__pyx_t_1) { - - /* "View.MemoryView":757 - * False) - * elif index is None: - * p_dst.shape[new_ndim] = 1 # <<<<<<<<<<<<<< - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - */ - (__pyx_v_p_dst->shape[__pyx_v_new_ndim]) = 1; - - /* "View.MemoryView":758 - * elif index is None: - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 # <<<<<<<<<<<<<< - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 - */ - (__pyx_v_p_dst->strides[__pyx_v_new_ndim]) = 0; - - /* "View.MemoryView":759 - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 # <<<<<<<<<<<<<< - * new_ndim += 1 - * else: - */ - (__pyx_v_p_dst->suboffsets[__pyx_v_new_ndim]) = -1L; - - /* "View.MemoryView":760 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 # <<<<<<<<<<<<<< - * else: - * start = index.start or 0 - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - - /* "View.MemoryView":756 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - goto __pyx_L6; - } - - /* "View.MemoryView":762 - * new_ndim += 1 - * else: - * start = index.start or 0 # <<<<<<<<<<<<<< - * stop = index.stop or 0 - * step = index.step or 0 - */ - /*else*/ { - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 762, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_8); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(1, 762, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } else { - __pyx_t_11 = __Pyx_PyIndex_AsSsize_t(__pyx_t_8); if (unlikely((__pyx_t_11 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 762, __pyx_L1_error) - __pyx_t_9 = __pyx_t_11; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L7_bool_binop_done; - } - __pyx_t_9 = 0; - __pyx_L7_bool_binop_done:; - __pyx_v_start = __pyx_t_9; - - /* "View.MemoryView":763 - * else: - * start = index.start or 0 - * stop = index.stop or 0 # <<<<<<<<<<<<<< - * step = index.step or 0 - * - */ - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 763, __pyx_L1_error) - 
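- /* Annotation: the surrounding blocks expand the Cython idiom
-  *     start = index.start or 0
-  *     stop  = index.stop  or 0
-  *     step  = index.step  or 0
-  * so each bound degrades to 0 when the slice attribute is None; the
-  * have_start/have_stop/have_step flags computed next preserve the
-  * None-vs-0 distinction that slice_memviewslice actually relies on.
-  */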
__Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_8); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(1, 763, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } else { - __pyx_t_11 = __Pyx_PyIndex_AsSsize_t(__pyx_t_8); if (unlikely((__pyx_t_11 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 763, __pyx_L1_error) - __pyx_t_9 = __pyx_t_11; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_9 = 0; - __pyx_L9_bool_binop_done:; - __pyx_v_stop = __pyx_t_9; - - /* "View.MemoryView":764 - * start = index.start or 0 - * stop = index.stop or 0 - * step = index.step or 0 # <<<<<<<<<<<<<< - * - * have_start = index.start is not None - */ - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 764, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_8); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(1, 764, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } else { - __pyx_t_11 = __Pyx_PyIndex_AsSsize_t(__pyx_t_8); if (unlikely((__pyx_t_11 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 764, __pyx_L1_error) - __pyx_t_9 = __pyx_t_11; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_9 = 0; - __pyx_L11_bool_binop_done:; - __pyx_v_step = __pyx_t_9; - - /* "View.MemoryView":766 - * step = index.step or 0 - * - * have_start = index.start is not None # <<<<<<<<<<<<<< - * have_stop = index.stop is not None - * have_step = index.step is not None - */ - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 766, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = (__pyx_t_8 != Py_None); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_v_have_start = __pyx_t_1; - - /* "View.MemoryView":767 - * - * have_start = index.start is not None - * have_stop = index.stop is not None # <<<<<<<<<<<<<< - * have_step = index.step is not None - * - */ - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 767, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = (__pyx_t_8 != Py_None); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_v_have_stop = __pyx_t_1; - - /* "View.MemoryView":768 - * have_start = index.start is not None - * have_stop = index.stop is not None - * have_step = index.step is not None # <<<<<<<<<<<<<< - * - * slice_memviewslice( - */ - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 768, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = (__pyx_t_8 != Py_None); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_v_have_step = __pyx_t_1; - - /* "View.MemoryView":770 - * have_step = index.step is not None - * - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_10 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_v_start, __pyx_v_stop, __pyx_v_step, __pyx_v_have_start, __pyx_v_have_stop, __pyx_v_have_step, 1); if (unlikely(__pyx_t_10 == ((int)-1))) __PYX_ERR(1, 770, __pyx_L1_error) - - /* "View.MemoryView":776 - * have_start, have_stop, have_step, - * True) - * new_ndim += 1 # <<<<<<<<<<<<<< - * - 
* if isinstance(memview, _memoryviewslice): - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - } - __pyx_L6:; - - /* "View.MemoryView":747 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * cindex = index - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":778 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - if (__pyx_t_1) { - - /* "View.MemoryView":779 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __Pyx_XDECREF((PyObject *)__pyx_r); - - /* "View.MemoryView":780 - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, # <<<<<<<<<<<<<< - * memviewsliceobj.to_dtype_func, - * memview.dtype_is_object) - */ - if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 780, __pyx_L1_error) } - - /* "View.MemoryView":781 - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * else: - */ - if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 781, __pyx_L1_error) } - - /* "View.MemoryView":779 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __pyx_t_2 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, __pyx_v_memviewsliceobj->to_object_func, __pyx_v_memviewsliceobj->to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 779, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_memoryview_type))))) __PYX_ERR(1, 779, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_2); - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":778 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - } - - /* "View.MemoryView":784 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - /*else*/ { - __Pyx_XDECREF((PyObject *)__pyx_r); - - /* "View.MemoryView":785 - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, NULL, NULL, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 784, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "View.MemoryView":784 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_memoryview_type))))) __PYX_ERR(1, 784, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_2); - __pyx_t_2 = 0; - goto 
__pyx_L0; - } - - /* "View.MemoryView":711 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.memview_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_memviewsliceobj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":793 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *__pyx_v_dst, Py_ssize_t __pyx_v_shape, Py_ssize_t __pyx_v_stride, Py_ssize_t __pyx_v_suboffset, int __pyx_v_dim, int __pyx_v_new_ndim, int *__pyx_v_suboffset_dim, Py_ssize_t __pyx_v_start, Py_ssize_t __pyx_v_stop, Py_ssize_t __pyx_v_step, int __pyx_v_have_start, int __pyx_v_have_stop, int __pyx_v_have_step, int __pyx_v_is_slice) { - Py_ssize_t __pyx_v_new_shape; - int __pyx_v_negative_step; - int __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save; - #endif - - /* "View.MemoryView":813 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - __pyx_t_1 = (!__pyx_v_is_slice); - if (__pyx_t_1) { - - /* "View.MemoryView":815 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - __pyx_t_1 = (__pyx_v_start < 0); - if (__pyx_t_1) { - - /* "View.MemoryView":816 - * - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if not 0 <= start < shape: - * _err_dim(PyExc_IndexError, "Index out of bounds (axis %d)", dim) - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":815 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - } - - /* "View.MemoryView":817 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(PyExc_IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - __pyx_t_1 = (0 <= __pyx_v_start); - if (__pyx_t_1) { - __pyx_t_1 = (__pyx_v_start < __pyx_v_shape); - } - __pyx_t_2 = (!__pyx_t_1); - if (__pyx_t_2) { - - /* "View.MemoryView":818 - * start += shape - * if not 0 <= start < shape: - * _err_dim(PyExc_IndexError, "Index out of bounds (axis %d)", dim) # <<<<<<<<<<<<<< - * else: - * - */ - __pyx_t_3 = __pyx_memoryview_err_dim(PyExc_IndexError, __pyx_kp_s_Index_out_of_bounds_axis_d, __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 818, __pyx_L1_error) - - /* "View.MemoryView":817 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(PyExc_IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - } - - /* "View.MemoryView":813 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":821 - * else: - * - * if have_step: # <<<<<<<<<<<<<< - * negative_step = step < 0 - * if step == 0: - */ - /*else*/ 
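- /* Annotation: the else-branch entered here normalizes a real slice much
-  * as CPython's PySlice_AdjustIndices does: step defaults to 1 (0 is an
-  * error), negative start/stop are first wrapped by +shape, and both ends
-  * are clamped into range (for a negative step the defaults become
-  * shape - 1 and -1).  The new extent is then a ceiling division,
-  *     new_shape = (stop - start) // step   (+1 on a remainder, floor 0),
-  * computed further below under cython.cdivision(True).
-  */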
{ - __pyx_t_2 = (__pyx_v_have_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":822 - * - * if have_step: - * negative_step = step < 0 # <<<<<<<<<<<<<< - * if step == 0: - * _err_dim(PyExc_ValueError, "Step may not be zero (axis %d)", dim) - */ - __pyx_v_negative_step = (__pyx_v_step < 0); - - /* "View.MemoryView":823 - * if have_step: - * negative_step = step < 0 - * if step == 0: # <<<<<<<<<<<<<< - * _err_dim(PyExc_ValueError, "Step may not be zero (axis %d)", dim) - * else: - */ - __pyx_t_2 = (__pyx_v_step == 0); - if (__pyx_t_2) { - - /* "View.MemoryView":824 - * negative_step = step < 0 - * if step == 0: - * _err_dim(PyExc_ValueError, "Step may not be zero (axis %d)", dim) # <<<<<<<<<<<<<< - * else: - * negative_step = False - */ - __pyx_t_3 = __pyx_memoryview_err_dim(PyExc_ValueError, __pyx_kp_s_Step_may_not_be_zero_axis_d, __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 824, __pyx_L1_error) - - /* "View.MemoryView":823 - * if have_step: - * negative_step = step < 0 - * if step == 0: # <<<<<<<<<<<<<< - * _err_dim(PyExc_ValueError, "Step may not be zero (axis %d)", dim) - * else: - */ - } - - /* "View.MemoryView":821 - * else: - * - * if have_step: # <<<<<<<<<<<<<< - * negative_step = step < 0 - * if step == 0: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":826 - * _err_dim(PyExc_ValueError, "Step may not be zero (axis %d)", dim) - * else: - * negative_step = False # <<<<<<<<<<<<<< - * step = 1 - * - */ - /*else*/ { - __pyx_v_negative_step = 0; - - /* "View.MemoryView":827 - * else: - * negative_step = False - * step = 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_step = 1; - } - __pyx_L6:; - - /* "View.MemoryView":830 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - __pyx_t_2 = (__pyx_v_have_start != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":831 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - __pyx_t_2 = (__pyx_v_start < 0); - if (__pyx_t_2) { - - /* "View.MemoryView":832 - * if have_start: - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if start < 0: - * start = 0 - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":833 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - __pyx_t_2 = (__pyx_v_start < 0); - if (__pyx_t_2) { - - /* "View.MemoryView":834 - * start += shape - * if start < 0: - * start = 0 # <<<<<<<<<<<<<< - * elif start >= shape: - * if negative_step: - */ - __pyx_v_start = 0; - - /* "View.MemoryView":833 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - } - - /* "View.MemoryView":831 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":835 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - __pyx_t_2 = (__pyx_v_start >= __pyx_v_shape); - if (__pyx_t_2) { - - /* "View.MemoryView":836 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - if (__pyx_v_negative_step) { - - /* "View.MemoryView":837 - * elif start >= shape: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = shape - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":836 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * 
start = shape - 1 - * else: - */ - goto __pyx_L11; - } - - /* "View.MemoryView":839 - * start = shape - 1 - * else: - * start = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - /*else*/ { - __pyx_v_start = __pyx_v_shape; - } - __pyx_L11:; - - /* "View.MemoryView":835 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - } - __pyx_L9:; - - /* "View.MemoryView":830 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - goto __pyx_L8; - } - - /* "View.MemoryView":841 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - /*else*/ { - if (__pyx_v_negative_step) { - - /* "View.MemoryView":842 - * else: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = 0 - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":841 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - goto __pyx_L12; - } - - /* "View.MemoryView":844 - * start = shape - 1 - * else: - * start = 0 # <<<<<<<<<<<<<< - * - * if have_stop: - */ - /*else*/ { - __pyx_v_start = 0; - } - __pyx_L12:; - } - __pyx_L8:; - - /* "View.MemoryView":846 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - __pyx_t_2 = (__pyx_v_have_stop != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":847 - * - * if have_stop: - * if stop < 0: # <<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - __pyx_t_2 = (__pyx_v_stop < 0); - if (__pyx_t_2) { - - /* "View.MemoryView":848 - * if have_stop: - * if stop < 0: - * stop += shape # <<<<<<<<<<<<<< - * if stop < 0: - * stop = 0 - */ - __pyx_v_stop = (__pyx_v_stop + __pyx_v_shape); - - /* "View.MemoryView":849 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - __pyx_t_2 = (__pyx_v_stop < 0); - if (__pyx_t_2) { - - /* "View.MemoryView":850 - * stop += shape - * if stop < 0: - * stop = 0 # <<<<<<<<<<<<<< - * elif stop > shape: - * stop = shape - */ - __pyx_v_stop = 0; - - /* "View.MemoryView":849 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - } - - /* "View.MemoryView":847 - * - * if have_stop: - * if stop < 0: # <<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - goto __pyx_L14; - } - - /* "View.MemoryView":851 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - __pyx_t_2 = (__pyx_v_stop > __pyx_v_shape); - if (__pyx_t_2) { - - /* "View.MemoryView":852 - * stop = 0 - * elif stop > shape: - * stop = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - __pyx_v_stop = __pyx_v_shape; - - /* "View.MemoryView":851 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - } - __pyx_L14:; - - /* "View.MemoryView":846 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - goto __pyx_L13; - } - - /* "View.MemoryView":854 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - /*else*/ { - if (__pyx_v_negative_step) { - - /* "View.MemoryView":855 - * else: - * if negative_step: - * stop = -1 # <<<<<<<<<<<<<< - * else: - * stop = shape - */ - __pyx_v_stop = -1L; - - /* "View.MemoryView":854 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - goto __pyx_L16; - } - - /* 
"View.MemoryView":857 - * stop = -1 - * else: - * stop = shape # <<<<<<<<<<<<<< - * - * - */ - /*else*/ { - __pyx_v_stop = __pyx_v_shape; - } - __pyx_L16:; - } - __pyx_L13:; - - /* "View.MemoryView":861 - * - * with cython.cdivision(True): - * new_shape = (stop - start) // step # <<<<<<<<<<<<<< - * - * if (stop - start) - step * new_shape: - */ - __pyx_v_new_shape = ((__pyx_v_stop - __pyx_v_start) / __pyx_v_step); - - /* "View.MemoryView":863 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * new_shape += 1 - * - */ - __pyx_t_2 = (((__pyx_v_stop - __pyx_v_start) - (__pyx_v_step * __pyx_v_new_shape)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":864 - * - * if (stop - start) - step * new_shape: - * new_shape += 1 # <<<<<<<<<<<<<< - * - * if new_shape < 0: - */ - __pyx_v_new_shape = (__pyx_v_new_shape + 1); - - /* "View.MemoryView":863 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * new_shape += 1 - * - */ - } - - /* "View.MemoryView":866 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - __pyx_t_2 = (__pyx_v_new_shape < 0); - if (__pyx_t_2) { - - /* "View.MemoryView":867 - * - * if new_shape < 0: - * new_shape = 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_new_shape = 0; - - /* "View.MemoryView":866 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - } - - /* "View.MemoryView":870 - * - * - * dst.strides[new_ndim] = stride * step # <<<<<<<<<<<<<< - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset - */ - (__pyx_v_dst->strides[__pyx_v_new_ndim]) = (__pyx_v_stride * __pyx_v_step); - - /* "View.MemoryView":871 - * - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape # <<<<<<<<<<<<<< - * dst.suboffsets[new_ndim] = suboffset - * - */ - (__pyx_v_dst->shape[__pyx_v_new_ndim]) = __pyx_v_new_shape; - - /* "View.MemoryView":872 - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_dst->suboffsets[__pyx_v_new_ndim]) = __pyx_v_suboffset; - } - __pyx_L3:; - - /* "View.MemoryView":875 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - __pyx_t_2 = ((__pyx_v_suboffset_dim[0]) < 0); - if (__pyx_t_2) { - - /* "View.MemoryView":876 - * - * if suboffset_dim[0] < 0: - * dst.data += start * stride # <<<<<<<<<<<<<< - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride - */ - __pyx_v_dst->data = (__pyx_v_dst->data + (__pyx_v_start * __pyx_v_stride)); - - /* "View.MemoryView":875 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - goto __pyx_L19; - } - - /* "View.MemoryView":878 - * dst.data += start * stride - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride # <<<<<<<<<<<<<< - * - * if suboffset >= 0: - */ - /*else*/ { - __pyx_t_3 = (__pyx_v_suboffset_dim[0]); - (__pyx_v_dst->suboffsets[__pyx_t_3]) = ((__pyx_v_dst->suboffsets[__pyx_t_3]) + (__pyx_v_start * __pyx_v_stride)); - } - __pyx_L19:; - - /* "View.MemoryView":880 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - __pyx_t_2 = (__pyx_v_suboffset >= 0); - if (__pyx_t_2) { - - /* "View.MemoryView":881 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * 
dst.data = (<char **> dst.data)[0] + suboffset - */ - __pyx_t_2 = (!__pyx_v_is_slice); - if (__pyx_t_2) { - - /* "View.MemoryView":882 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = (<char **> dst.data)[0] + suboffset - * else: - */ - __pyx_t_2 = (__pyx_v_new_ndim == 0); - if (__pyx_t_2) { - - /* "View.MemoryView":883 - * if not is_slice: - * if new_ndim == 0: - * dst.data = (<char **> dst.data)[0] + suboffset # <<<<<<<<<<<<<< - * else: - * _err_dim(PyExc_IndexError, "All dimensions preceding dimension %d " - */ - __pyx_v_dst->data = ((((char **)__pyx_v_dst->data)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":882 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = (<char **> dst.data)[0] + suboffset - * else: - */ - goto __pyx_L22; - } - - /* "View.MemoryView":885 - * dst.data = (<char **> dst.data)[0] + suboffset - * else: - * _err_dim(PyExc_IndexError, "All dimensions preceding dimension %d " # <<<<<<<<<<<<<< - * "must be indexed and not sliced", dim) - * else: - */ - /*else*/ { - - /* "View.MemoryView":886 - * else: - * _err_dim(PyExc_IndexError, "All dimensions preceding dimension %d " - * "must be indexed and not sliced", dim) # <<<<<<<<<<<<<< - * else: - * suboffset_dim[0] = new_ndim - */ - __pyx_t_3 = __pyx_memoryview_err_dim(PyExc_IndexError, __pyx_kp_s_All_dimensions_preceding_dimensi, __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 885, __pyx_L1_error) - } - __pyx_L22:; - - /* "View.MemoryView":881 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * dst.data = (<char **> dst.data)[0] + suboffset - */ - goto __pyx_L21; - } - - /* "View.MemoryView":888 - * "must be indexed and not sliced", dim) - * else: - * suboffset_dim[0] = new_ndim # <<<<<<<<<<<<<< - * - * return 0 - */ - /*else*/ { - (__pyx_v_suboffset_dim[0]) = __pyx_v_new_ndim; - } - __pyx_L21:; - - /* "View.MemoryView":880 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - } - - /* "View.MemoryView":890 - * suboffset_dim[0] = new_ndim - * - * return 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":793 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - - /* function exit code */ - __pyx_L1_error:; - #ifdef WITH_THREAD - __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.slice_memviewslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":896 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - -static char *__pyx_pybuffer_index(Py_buffer *__pyx_v_view, char *__pyx_v_bufp, Py_ssize_t __pyx_v_index, Py_ssize_t __pyx_v_dim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_suboffset; - Py_ssize_t __pyx_v_itemsize; - char *__pyx_v_resultp; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - Py_UCS4 __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char
*__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("pybuffer_index", 0); - - /* "View.MemoryView":898 - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 # <<<<<<<<<<<<<< - * cdef Py_ssize_t itemsize = view.itemsize - * cdef char *resultp - */ - __pyx_v_suboffset = -1L; - - /* "View.MemoryView":899 - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - * cdef Py_ssize_t itemsize = view.itemsize # <<<<<<<<<<<<<< - * cdef char *resultp - * - */ - __pyx_t_1 = __pyx_v_view->itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":902 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len // itemsize - * stride = itemsize - */ - __pyx_t_2 = (__pyx_v_view->ndim == 0); - if (__pyx_t_2) { - - /* "View.MemoryView":903 - * - * if view.ndim == 0: - * shape = view.len // itemsize # <<<<<<<<<<<<<< - * stride = itemsize - * else: - */ - if (unlikely(__pyx_v_itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 903, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(__Pyx_UNARY_NEG_WOULD_OVERFLOW(__pyx_v_view->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 903, __pyx_L1_error) - } - __pyx_v_shape = __Pyx_div_Py_ssize_t(__pyx_v_view->len, __pyx_v_itemsize); - - /* "View.MemoryView":904 - * if view.ndim == 0: - * shape = view.len // itemsize - * stride = itemsize # <<<<<<<<<<<<<< - * else: - * shape = view.shape[dim] - */ - __pyx_v_stride = __pyx_v_itemsize; - - /* "View.MemoryView":902 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len // itemsize - * stride = itemsize - */ - goto __pyx_L3; - } - - /* "View.MemoryView":906 - * stride = itemsize - * else: - * shape = view.shape[dim] # <<<<<<<<<<<<<< - * stride = view.strides[dim] - * if view.suboffsets != NULL: - */ - /*else*/ { - __pyx_v_shape = (__pyx_v_view->shape[__pyx_v_dim]); - - /* "View.MemoryView":907 - * else: - * shape = view.shape[dim] - * stride = view.strides[dim] # <<<<<<<<<<<<<< - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] - */ - __pyx_v_stride = (__pyx_v_view->strides[__pyx_v_dim]); - - /* "View.MemoryView":908 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - __pyx_t_2 = (__pyx_v_view->suboffsets != NULL); - if (__pyx_t_2) { - - /* "View.MemoryView":909 - * stride = view.strides[dim] - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] # <<<<<<<<<<<<<< - * - * if index < 0: - */ - __pyx_v_suboffset = (__pyx_v_view->suboffsets[__pyx_v_dim]); - - /* "View.MemoryView":908 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - } - } - __pyx_L3:; - - /* "View.MemoryView":911 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - __pyx_t_2 = (__pyx_v_index < 0); - if (__pyx_t_2) { - - /* "View.MemoryView":912 - * - * if index < 0: - * index += view.shape[dim] # <<<<<<<<<<<<<< - * if index < 0: - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - */ - 
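/* Negative indices follow Python semantics: one wrap-around by adding the
- * extent of this dimension, then a bounds check; an index that is still
- * negative after the single wrap below is out of range. */
-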
__pyx_v_index = (__pyx_v_index + (__pyx_v_view->shape[__pyx_v_dim])); - - /* "View.MemoryView":913 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - * - */ - __pyx_t_2 = (__pyx_v_index < 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":914 - * index += view.shape[dim] - * if index < 0: - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" # <<<<<<<<<<<<<< - * - * if index >= shape: - */ - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 914, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = 0; - __pyx_t_4 = 127; - __Pyx_INCREF(__pyx_kp_u_Out_of_bounds_on_buffer_access_a); - __pyx_t_1 += 37; - __Pyx_GIVEREF(__pyx_kp_u_Out_of_bounds_on_buffer_access_a); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_kp_u_Out_of_bounds_on_buffer_access_a); - __pyx_t_5 = __Pyx_PyUnicode_From_Py_ssize_t(__pyx_v_dim, 0, ' ', 'd'); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 914, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_5); - __pyx_t_5 = 0; - __Pyx_INCREF(__pyx_kp_u__7); - __pyx_t_1 += 1; - __Pyx_GIVEREF(__pyx_kp_u__7); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_kp_u__7); - __pyx_t_5 = __Pyx_PyUnicode_Join(__pyx_t_3, 3, __pyx_t_1, __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 914, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_builtin_IndexError, __pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __PYX_ERR(1, 914, __pyx_L1_error) - - /* "View.MemoryView":913 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - * - */ - } - - /* "View.MemoryView":911 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - } - - /* "View.MemoryView":916 - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - * - */ - __pyx_t_2 = (__pyx_v_index >= __pyx_v_shape); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":917 - * - * if index >= shape: - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" # <<<<<<<<<<<<<< - * - * resultp = bufp + index * stride - */ - __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 917, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = 0; - __pyx_t_4 = 127; - __Pyx_INCREF(__pyx_kp_u_Out_of_bounds_on_buffer_access_a); - __pyx_t_1 += 37; - __Pyx_GIVEREF(__pyx_kp_u_Out_of_bounds_on_buffer_access_a); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_kp_u_Out_of_bounds_on_buffer_access_a); - __pyx_t_3 = __Pyx_PyUnicode_From_Py_ssize_t(__pyx_v_dim, 0, ' ', 'd'); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 917, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_kp_u__7); - __pyx_t_1 += 1; - __Pyx_GIVEREF(__pyx_kp_u__7); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_kp_u__7); - __pyx_t_3 = __Pyx_PyUnicode_Join(__pyx_t_5, 3, __pyx_t_1, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 917, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_builtin_IndexError, 
__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 917, __pyx_L1_error) - - /* "View.MemoryView":916 - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - * - */ - } - - /* "View.MemoryView":919 - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - * - * resultp = bufp + index * stride # <<<<<<<<<<<<<< - * if suboffset >= 0: - * resultp = (<char **> resultp)[0] + suboffset - */ - __pyx_v_resultp = (__pyx_v_bufp + (__pyx_v_index * __pyx_v_stride)); - - /* "View.MemoryView":920 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = (<char **> resultp)[0] + suboffset - * - */ - __pyx_t_2 = (__pyx_v_suboffset >= 0); - if (__pyx_t_2) { - - /* "View.MemoryView":921 - * resultp = bufp + index * stride - * if suboffset >= 0: - * resultp = (<char **> resultp)[0] + suboffset # <<<<<<<<<<<<<< - * - * return resultp - */ - __pyx_v_resultp = ((((char **)__pyx_v_resultp)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":920 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = (<char **> resultp)[0] + suboffset - * - */ - } - - /* "View.MemoryView":923 - * resultp = (<char **> resultp)[0] + suboffset - * - * return resultp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_resultp; - goto __pyx_L0; - - /* "View.MemoryView":896 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.pybuffer_index", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":929 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) except -1 nogil: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - -static int __pyx_memslice_transpose(__Pyx_memviewslice *__pyx_v_memslice) { - int __pyx_v_ndim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - int __pyx_v_i; - int __pyx_v_j; - int __pyx_r; - int __pyx_t_1; - Py_ssize_t *__pyx_t_2; - long __pyx_t_3; - long __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_ssize_t __pyx_t_6; - int __pyx_t_7; - int __pyx_t_8; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save; - #endif - - /* "View.MemoryView":930 - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) except -1 nogil: - * cdef int ndim = memslice.memview.view.ndim # <<<<<<<<<<<<<< - * - * cdef Py_ssize_t *shape = memslice.shape - */ - __pyx_t_1 = __pyx_v_memslice->memview->view.ndim; - __pyx_v_ndim = __pyx_t_1; - - /* "View.MemoryView":932 - * cdef int ndim = memslice.memview.view.ndim - * - * cdef Py_ssize_t *shape = memslice.shape # <<<<<<<<<<<<<< - * cdef Py_ssize_t *strides = memslice.strides - * - */ - __pyx_t_2 = __pyx_v_memslice->shape; - __pyx_v_shape = __pyx_t_2; - - /* "View.MemoryView":933 - * - * cdef Py_ssize_t *shape = memslice.shape - * cdef Py_ssize_t *strides = memslice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = __pyx_v_memslice->strides; - __pyx_v_strides = __pyx_t_2; - -
/* "View.MemoryView":937 - * - * cdef int i, j - * for i in range(ndim // 2): # <<<<<<<<<<<<<< - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - */ - __pyx_t_3 = __Pyx_div_long(__pyx_v_ndim, 2); - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_4; __pyx_t_1+=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":938 - * cdef int i, j - * for i in range(ndim // 2): - * j = ndim - 1 - i # <<<<<<<<<<<<<< - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] - */ - __pyx_v_j = ((__pyx_v_ndim - 1) - __pyx_v_i); - - /* "View.MemoryView":939 - * for i in range(ndim // 2): - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] # <<<<<<<<<<<<<< - * shape[i], shape[j] = shape[j], shape[i] - * - */ - __pyx_t_5 = (__pyx_v_strides[__pyx_v_j]); - __pyx_t_6 = (__pyx_v_strides[__pyx_v_i]); - (__pyx_v_strides[__pyx_v_i]) = __pyx_t_5; - (__pyx_v_strides[__pyx_v_j]) = __pyx_t_6; - - /* "View.MemoryView":940 - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] # <<<<<<<<<<<<<< - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - */ - __pyx_t_6 = (__pyx_v_shape[__pyx_v_j]); - __pyx_t_5 = (__pyx_v_shape[__pyx_v_i]); - (__pyx_v_shape[__pyx_v_i]) = __pyx_t_6; - (__pyx_v_shape[__pyx_v_j]) = __pyx_t_5; - - /* "View.MemoryView":942 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(PyExc_ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - __pyx_t_8 = ((__pyx_v_memslice->suboffsets[__pyx_v_i]) >= 0); - if (!__pyx_t_8) { - } else { - __pyx_t_7 = __pyx_t_8; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_8 = ((__pyx_v_memslice->suboffsets[__pyx_v_j]) >= 0); - __pyx_t_7 = __pyx_t_8; - __pyx_L6_bool_binop_done:; - if (__pyx_t_7) { - - /* "View.MemoryView":943 - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - * _err(PyExc_ValueError, "Cannot transpose memoryview with indirect dimensions") # <<<<<<<<<<<<<< - * - * return 0 - */ - __pyx_t_9 = __pyx_memoryview_err(PyExc_ValueError, __pyx_kp_s_Cannot_transpose_memoryview_with); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 943, __pyx_L1_error) - - /* "View.MemoryView":942 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(PyExc_ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - } - } - - /* "View.MemoryView":945 - * _err(PyExc_ValueError, "Cannot transpose memoryview with indirect dimensions") - * - * return 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":929 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) except -1 nogil: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - - /* function exit code */ - __pyx_L1_error:; - #ifdef WITH_THREAD - __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.transpose_memslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":963 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XCLEAR_MEMVIEW(&self.from_slice, 1) - * - */ - -/* 
Python wrapper */ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":964 - * - * def __dealloc__(self): - * __PYX_XCLEAR_MEMVIEW(&self.from_slice, 1) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __PYX_XCLEAR_MEMVIEW((&__pyx_v_self->from_slice), 1); - - /* "View.MemoryView":963 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XCLEAR_MEMVIEW(&self.from_slice, 1) - * - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":966 - * __PYX_XCLEAR_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) - */ - -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":967 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - __pyx_t_1 = (__pyx_v_self->to_object_func != NULL); - if (__pyx_t_1) { - - /* "View.MemoryView":968 - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) # <<<<<<<<<<<<<< - * else: - * return memoryview.convert_item_to_object(self, itemp) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_v_self->to_object_func(__pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 968, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":967 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - } - - /* "View.MemoryView":970 - * return self.to_object_func(itemp) - * else: - * return memoryview.convert_item_to_object(self, itemp) # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_convert_item_to_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 970, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":966 - * __PYX_XCLEAR_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return 
self.to_object_func(itemp) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":972 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":973 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - __pyx_t_1 = (__pyx_v_self->to_dtype_func != NULL); - if (__pyx_t_1) { - - /* "View.MemoryView":974 - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) # <<<<<<<<<<<<<< - * else: - * memoryview.assign_item_from_object(self, itemp, value) - */ - __pyx_t_2 = __pyx_v_self->to_dtype_func(__pyx_v_itemp, __pyx_v_value); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 974, __pyx_L1_error) - - /* "View.MemoryView":973 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":976 - * self.to_dtype_func(itemp, value) - * else: - * memoryview.assign_item_from_object(self, itemp, value) # <<<<<<<<<<<<<< - * - * cdef _get_base(self): - */ - /*else*/ { - __pyx_t_3 = __pyx_memoryview_assign_item_from_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 976, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":972 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":978 - * memoryview.assign_item_from_object(self, itemp, value) - * - * cdef _get_base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - -static PyObject *__pyx_memoryviewslice__get_base(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_get_base", 0); - - /* "View.MemoryView":979 - * - * cdef _get_base(self): - * return self.from_object # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - 
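/* Return an owned reference: clear any previous value of the result slot,
- * then take a new reference on from_object (the object this memoryview
- * was originally constructed from) before handing it back. */
-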
__Pyx_INCREF(__pyx_v_self->from_object); - __pyx_r = __pyx_v_self->from_object; - goto __pyx_L0; - - /* "View.MemoryView":978 - * memoryview.assign_item_from_object(self, itemp, value) - * - * cdef _get_base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("__reduce_cython__", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "__reduce_cython__", 0))) return NULL; - __pyx_r = __pyx_pf___pyx_memoryviewslice___reduce_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_kp_s_no_default___reduce___due_to_non, 0, 0); - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, -#if 
CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - CYTHON_UNUSED PyObject *__pyx_v___pyx_state = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_state,0}; - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 3, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__setstate_cython__") < 0)) __PYX_ERR(1, 3, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v___pyx_state = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__setstate_cython__", 1, 1, 1, __pyx_nargs); __PYX_ERR(1, 3, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf___pyx_memoryviewslice_2__setstate_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self), __pyx_v___pyx_state); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" # <<<<<<<<<<<<<< - */ - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_kp_s_no_default___reduce___due_to_non, 0, 0); - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError, "no 
default __reduce__ due to non-trivial __cinit__" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":999 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice __pyx_v_memviewslice, int __pyx_v_ndim, PyObject *(*__pyx_v_to_object_func)(char *), int (*__pyx_v_to_dtype_func)(char *, PyObject *), int __pyx_v_dtype_is_object) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - __Pyx_TypeInfo *__pyx_t_4; - Py_buffer __pyx_t_5; - Py_ssize_t *__pyx_t_6; - Py_ssize_t *__pyx_t_7; - Py_ssize_t *__pyx_t_8; - Py_ssize_t __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_fromslice", 0); - - /* "View.MemoryView":1007 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_1 = (((PyObject *)__pyx_v_memviewslice.memview) == Py_None); - if (__pyx_t_1) { - - /* "View.MemoryView":1008 - * - * if memviewslice.memview == Py_None: - * return None # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - - /* "View.MemoryView":1007 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - } - - /* "View.MemoryView":1013 - * - * - * result = _memoryviewslice.__new__(_memoryviewslice, None, 0, dtype_is_object) # <<<<<<<<<<<<<< - * - * result.from_slice = memviewslice - */ - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_3, 0, Py_None); - __Pyx_INCREF(__pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = ((PyObject *)__pyx_tp_new__memoryviewslice(((PyTypeObject *)__pyx_memoryviewslice_type), __pyx_t_3, NULL)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF((PyObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1015 - * result = _memoryviewslice.__new__(_memoryviewslice, None, 0, dtype_is_object) - * - * result.from_slice = memviewslice # <<<<<<<<<<<<<< - * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - */ - __pyx_v_result->from_slice = __pyx_v_memviewslice; - - /* "View.MemoryView":1016 - * - * result.from_slice = memviewslice - * __PYX_INC_MEMVIEW(&memviewslice, 1) # <<<<<<<<<<<<<< - * - * result.from_object = ( memviewslice.memview)._get_base() - */ - __PYX_INC_MEMVIEW((&__pyx_v_memviewslice), 1); - - /* "View.MemoryView":1018 
- * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - * result.from_object = (<memoryview> memviewslice.memview)._get_base() # <<<<<<<<<<<<<< - * result.typeinfo = memviewslice.memview.typeinfo - * - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)((struct __pyx_memoryview_obj *)__pyx_v_memviewslice.memview)->__pyx_vtab)->_get_base(((struct __pyx_memoryview_obj *)__pyx_v_memviewslice.memview)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1018, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __Pyx_GOTREF(__pyx_v_result->from_object); - __Pyx_DECREF(__pyx_v_result->from_object); - __pyx_v_result->from_object = __pyx_t_2; - __pyx_t_2 = 0; - - /* "View.MemoryView":1019 - * - * result.from_object = (<memoryview> memviewslice.memview)._get_base() - * result.typeinfo = memviewslice.memview.typeinfo # <<<<<<<<<<<<<< - * - * result.view = memviewslice.memview.view - */ - __pyx_t_4 = __pyx_v_memviewslice.memview->typeinfo; - __pyx_v_result->__pyx_base.typeinfo = __pyx_t_4; - - /* "View.MemoryView":1021 - * result.typeinfo = memviewslice.memview.typeinfo - * - * result.view = memviewslice.memview.view # <<<<<<<<<<<<<< - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - */ - __pyx_t_5 = __pyx_v_memviewslice.memview->view; - __pyx_v_result->__pyx_base.view = __pyx_t_5; - - /* "View.MemoryView":1022 - * - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data # <<<<<<<<<<<<<< - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - */ - __pyx_v_result->__pyx_base.view.buf = ((void *)__pyx_v_memviewslice.data); - - /* "View.MemoryView":1023 - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data - * result.view.ndim = ndim # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_v_result->__pyx_base.view.ndim = __pyx_v_ndim; - - /* "View.MemoryView":1024 - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_result->__pyx_base.view))->obj = Py_None; - - /* "View.MemoryView":1025 - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * if (<memoryview>memviewslice.memview).flags & PyBUF_WRITABLE: - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":1027 - * Py_INCREF(Py_None) - * - * if (<memoryview>memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - __pyx_t_1 = ((((struct __pyx_memoryview_obj *)__pyx_v_memviewslice.memview)->flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1028 - * - * if (<memoryview>memviewslice.memview).flags & PyBUF_WRITABLE: - * result.flags = PyBUF_RECORDS # <<<<<<<<<<<<<< - * else: - * result.flags = PyBUF_RECORDS_RO - */ - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS; - - /* "View.MemoryView":1027 - * Py_INCREF(Py_None) - * - * if (<memoryview>memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1030 - * result.flags = PyBUF_RECORDS - * else: - * result.flags = PyBUF_RECORDS_RO # <<<<<<<<<<<<<< - * - * result.view.shape = result.from_slice.shape - */ - /*else*/ { - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS_RO; - } - __pyx_L4:; - - /* "View.MemoryView":1032 - * result.flags = PyBUF_RECORDS_RO - * - * result.view.shape = result.from_slice.shape # <<<<<<<<<<<<<< - * result.view.strides = 
result.from_slice.strides - * - */ - __pyx_v_result->__pyx_base.view.shape = ((Py_ssize_t *)__pyx_v_result->from_slice.shape); - - /* "View.MemoryView":1033 - * - * result.view.shape = result.from_slice.shape - * result.view.strides = result.from_slice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_result->__pyx_base.view.strides = ((Py_ssize_t *)__pyx_v_result->from_slice.strides); - - /* "View.MemoryView":1036 - * - * - * result.view.suboffsets = NULL # <<<<<<<<<<<<<< - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - */ - __pyx_v_result->__pyx_base.view.suboffsets = NULL; - - /* "View.MemoryView":1037 - * - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - */ - __pyx_t_7 = (__pyx_v_result->from_slice.suboffsets + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->from_slice.suboffsets; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_v_suboffset = (__pyx_t_6[0]); - - /* "View.MemoryView":1038 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - __pyx_t_1 = (__pyx_v_suboffset >= 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1039 - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_result->__pyx_base.view.suboffsets = ((Py_ssize_t *)__pyx_v_result->from_slice.suboffsets); - - /* "View.MemoryView":1040 - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - * break # <<<<<<<<<<<<<< - * - * result.view.len = result.view.itemsize - */ - goto __pyx_L6_break; - - /* "View.MemoryView":1038 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - } - } - __pyx_L6_break:; - - /* "View.MemoryView":1042 - * break - * - * result.view.len = result.view.itemsize # <<<<<<<<<<<<<< - * for length in result.view.shape[:ndim]: - * result.view.len *= length - */ - __pyx_t_9 = __pyx_v_result->__pyx_base.view.itemsize; - __pyx_v_result->__pyx_base.view.len = __pyx_t_9; - - /* "View.MemoryView":1043 - * - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: # <<<<<<<<<<<<<< - * result.view.len *= length - * - */ - __pyx_t_7 = (__pyx_v_result->__pyx_base.view.shape + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->__pyx_base.view.shape; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_t_2 = PyInt_FromSsize_t((__pyx_t_6[0])); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1044 - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: - * result.view.len *= length # <<<<<<<<<<<<<< - * - * result.to_object_func = to_object_func - */ - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_result->__pyx_base.view.len); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_InPlaceMultiply(__pyx_t_2, __pyx_v_length); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - 
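/* view.len accumulates itemsize * shape[0] * ... * shape[ndim-1], the total
- * buffer size in bytes; the surrounding statements route the multiply
- * through temporary Python integers because the Cython source applies `*=`
- * to an object value. */
-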
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_3); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result->__pyx_base.view.len = __pyx_t_9; - } - - /* "View.MemoryView":1046 - * result.view.len *= length - * - * result.to_object_func = to_object_func # <<<<<<<<<<<<<< - * result.to_dtype_func = to_dtype_func - * - */ - __pyx_v_result->to_object_func = __pyx_v_to_object_func; - - /* "View.MemoryView":1047 - * - * result.to_object_func = to_object_func - * result.to_dtype_func = to_dtype_func # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->to_dtype_func = __pyx_v_to_dtype_func; - - /* "View.MemoryView":1049 - * result.to_dtype_func = to_dtype_func - * - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF((PyObject *)__pyx_v_result); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":999 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_fromslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1052 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_mslice) { - struct __pyx_memoryviewslice_obj *__pyx_v_obj = 0; - __Pyx_memviewslice *__pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_slice_from_memview", 0); - - /* "View.MemoryView":1055 - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - if (__pyx_t_1) { - - /* "View.MemoryView":1056 - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): - * obj = memview # <<<<<<<<<<<<<< - * return &obj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 1056, __pyx_L1_error) - __pyx_t_2 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_2); - __pyx_v_obj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1057 - * if isinstance(memview, _memoryviewslice): - * obj = memview - * return &obj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, mslice) - */ - __pyx_r = (&__pyx_v_obj->from_slice); - goto __pyx_L0; - - /* "View.MemoryView":1055 - * __Pyx_memviewslice *mslice) except NULL: - * cdef 
_memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - } - - /* "View.MemoryView":1059 - * return &obj.from_slice - * else: - * slice_copy(memview, mslice) # <<<<<<<<<<<<<< - * return mslice - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, __pyx_v_mslice); - - /* "View.MemoryView":1060 - * else: - * slice_copy(memview, mslice) - * return mslice # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_slice_copy') - */ - __pyx_r = __pyx_v_mslice; - goto __pyx_L0; - } - - /* "View.MemoryView":1052 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.get_slice_from_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_obj); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1063 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst) noexcept: # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_dst) { - int __pyx_v_dim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - Py_ssize_t *__pyx_v_suboffsets; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - Py_ssize_t __pyx_t_5; - __Pyx_RefNannySetupContext("slice_copy", 0); - - /* "View.MemoryView":1067 - * cdef (Py_ssize_t*) shape, strides, suboffsets - * - * shape = memview.view.shape # <<<<<<<<<<<<<< - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets - */ - __pyx_t_1 = __pyx_v_memview->view.shape; - __pyx_v_shape = __pyx_t_1; - - /* "View.MemoryView":1068 - * - * shape = memview.view.shape - * strides = memview.view.strides # <<<<<<<<<<<<<< - * suboffsets = memview.view.suboffsets - * - */ - __pyx_t_1 = __pyx_v_memview->view.strides; - __pyx_v_strides = __pyx_t_1; - - /* "View.MemoryView":1069 - * shape = memview.view.shape - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets # <<<<<<<<<<<<<< - * - * dst.memview = <__pyx_memoryview *> memview - */ - __pyx_t_1 = __pyx_v_memview->view.suboffsets; - __pyx_v_suboffsets = __pyx_t_1; - - /* "View.MemoryView":1071 - * suboffsets = memview.view.suboffsets - * - * dst.memview = <__pyx_memoryview *> memview # <<<<<<<<<<<<<< - * dst.data = memview.view.buf - * - */ - __pyx_v_dst->memview = ((struct __pyx_memoryview_obj *)__pyx_v_memview); - - /* "View.MemoryView":1072 - * - * dst.memview = <__pyx_memoryview *> memview - * dst.data = memview.view.buf # <<<<<<<<<<<<<< - * - * for dim in range(memview.view.ndim): - */ - __pyx_v_dst->data = ((char *)__pyx_v_memview->view.buf); - - /* "View.MemoryView":1074 - * dst.data = memview.view.buf - * - * for dim in range(memview.view.ndim): # <<<<<<<<<<<<<< - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - */ - __pyx_t_2 = __pyx_v_memview->view.ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_dim = __pyx_t_4; - - /* "View.MemoryView":1075 - * - * for dim in range(memview.view.ndim): - * dst.shape[dim] 
= shape[dim] # <<<<<<<<<<<<<< - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - */ - (__pyx_v_dst->shape[__pyx_v_dim]) = (__pyx_v_shape[__pyx_v_dim]); - - /* "View.MemoryView":1076 - * for dim in range(memview.view.ndim): - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] # <<<<<<<<<<<<<< - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - * - */ - (__pyx_v_dst->strides[__pyx_v_dim]) = (__pyx_v_strides[__pyx_v_dim]); - - /* "View.MemoryView":1077 - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object') - */ - if ((__pyx_v_suboffsets != 0)) { - __pyx_t_5 = (__pyx_v_suboffsets[__pyx_v_dim]); - } else { - __pyx_t_5 = -1L; - } - (__pyx_v_dst->suboffsets[__pyx_v_dim]) = __pyx_t_5; - } - - /* "View.MemoryView":1063 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst) noexcept: # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1080 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *__pyx_v_memview) { - __Pyx_memviewslice __pyx_v_memviewslice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy", 0); - - /* "View.MemoryView":1083 - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) # <<<<<<<<<<<<<< - * return memoryview_copy_from_slice(memview, &memviewslice) - * - */ - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_memviewslice)); - - /* "View.MemoryView":1084 - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) - * return memoryview_copy_from_slice(memview, &memviewslice) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object_from_slice') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_memoryview_copy_object_from_slice(__pyx_v_memview, (&__pyx_v_memviewslice)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1084, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1080 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1087 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview object and slice. 
- */ - -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_memviewslice) { - PyObject *(*__pyx_v_to_object_func)(char *); - int (*__pyx_v_to_dtype_func)(char *, PyObject *); - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *(*__pyx_t_2)(char *); - int (*__pyx_t_3)(char *, PyObject *); - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy_from_slice", 0); - - /* "View.MemoryView":1094 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - if (__pyx_t_1) { - - /* "View.MemoryView":1095 - * - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func # <<<<<<<<<<<<<< - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - */ - __pyx_t_2 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_object_func; - __pyx_v_to_object_func = __pyx_t_2; - - /* "View.MemoryView":1096 - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func # <<<<<<<<<<<<<< - * else: - * to_object_func = NULL - */ - __pyx_t_3 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_dtype_func; - __pyx_v_to_dtype_func = __pyx_t_3; - - /* "View.MemoryView":1094 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1098 - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - * to_object_func = NULL # <<<<<<<<<<<<<< - * to_dtype_func = NULL - * - */ - /*else*/ { - __pyx_v_to_object_func = NULL; - - /* "View.MemoryView":1099 - * else: - * to_object_func = NULL - * to_dtype_func = NULL # <<<<<<<<<<<<<< - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - */ - __pyx_v_to_dtype_func = NULL; - } - __pyx_L3:; - - /* "View.MemoryView":1101 - * to_dtype_func = NULL - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, # <<<<<<<<<<<<<< - * to_object_func, to_dtype_func, - * memview.dtype_is_object) - */ - __Pyx_XDECREF(__pyx_r); - - /* "View.MemoryView":1103 - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - * to_object_func, to_dtype_func, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_4 = __pyx_memoryview_fromslice((__pyx_v_memviewslice[0]), __pyx_v_memview->view.ndim, __pyx_v_to_object_func, __pyx_v_to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1101, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1087 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview object and slice. 
- */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_from_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1109 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) noexcept nogil: # <<<<<<<<<<<<<< - * return -arg if arg < 0 else arg - * - */ - -static Py_ssize_t abs_py_ssize_t(Py_ssize_t __pyx_v_arg) { - Py_ssize_t __pyx_r; - Py_ssize_t __pyx_t_1; - - /* "View.MemoryView":1110 - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) noexcept nogil: - * return -arg if arg < 0 else arg # <<<<<<<<<<<<<< - * - * @cname('__pyx_get_best_slice_order') - */ - if ((__pyx_v_arg < 0)) { - __pyx_t_1 = (-__pyx_v_arg); - } else { - __pyx_t_1 = __pyx_v_arg; - } - __pyx_r = __pyx_t_1; - goto __pyx_L0; - - /* "View.MemoryView":1109 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) noexcept nogil: # <<<<<<<<<<<<<< - * return -arg if arg < 0 else arg - * - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1113 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) noexcept nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. - */ - -static char __pyx_get_best_slice_order(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim) { - int __pyx_v_i; - Py_ssize_t __pyx_v_c_stride; - Py_ssize_t __pyx_v_f_stride; - char __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1118 - * """ - * cdef int i - * cdef Py_ssize_t c_stride = 0 # <<<<<<<<<<<<<< - * cdef Py_ssize_t f_stride = 0 - * - */ - __pyx_v_c_stride = 0; - - /* "View.MemoryView":1119 - * cdef int i - * cdef Py_ssize_t c_stride = 0 - * cdef Py_ssize_t f_stride = 0 # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_f_stride = 0; - - /* "View.MemoryView":1121 - * cdef Py_ssize_t f_stride = 0 - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1122 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - __pyx_t_2 = ((__pyx_v_mslice->shape[__pyx_v_i]) > 1); - if (__pyx_t_2) { - - /* "View.MemoryView":1123 - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_c_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1124 - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - goto __pyx_L4_break; - - /* "View.MemoryView":1122 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L4_break:; - - /* "View.MemoryView":1126 - * break - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - */ - __pyx_t_1 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_1; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1127 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride 
= mslice.strides[i] - * break - */ - __pyx_t_2 = ((__pyx_v_mslice->shape[__pyx_v_i]) > 1); - if (__pyx_t_2) { - - /* "View.MemoryView":1128 - * for i in range(ndim): - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_f_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1129 - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - */ - goto __pyx_L7_break; - - /* "View.MemoryView":1127 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L7_break:; - - /* "View.MemoryView":1131 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - __pyx_t_2 = (abs_py_ssize_t(__pyx_v_c_stride) <= abs_py_ssize_t(__pyx_v_f_stride)); - if (__pyx_t_2) { - - /* "View.MemoryView":1132 - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - * return 'C' # <<<<<<<<<<<<<< - * else: - * return 'F' - */ - __pyx_r = 'C'; - goto __pyx_L0; - - /* "View.MemoryView":1131 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - } - - /* "View.MemoryView":1134 - * return 'C' - * else: - * return 'F' # <<<<<<<<<<<<<< - * - * @cython.cdivision(True) - */ - /*else*/ { - __pyx_r = 'F'; - goto __pyx_L0; - } - - /* "View.MemoryView":1113 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) noexcept nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1137 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - -static void _copy_strided_to_strided(char *__pyx_v_src_data, Py_ssize_t *__pyx_v_src_strides, char *__pyx_v_dst_data, Py_ssize_t *__pyx_v_dst_strides, Py_ssize_t *__pyx_v_src_shape, Py_ssize_t *__pyx_v_dst_shape, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - CYTHON_UNUSED Py_ssize_t __pyx_v_src_extent; - Py_ssize_t __pyx_v_dst_extent; - Py_ssize_t __pyx_v_src_stride; - Py_ssize_t __pyx_v_dst_stride; - int __pyx_t_1; - int __pyx_t_2; - Py_ssize_t __pyx_t_3; - Py_ssize_t __pyx_t_4; - Py_ssize_t __pyx_t_5; - - /* "View.MemoryView":1144 - * - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - */ - __pyx_v_src_extent = (__pyx_v_src_shape[0]); - - /* "View.MemoryView":1145 - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] - */ - __pyx_v_dst_extent = (__pyx_v_dst_shape[0]); - - /* "View.MemoryView":1146 - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - */ - __pyx_v_src_stride = (__pyx_v_src_strides[0]); - - /* "View.MemoryView":1147 - * cdef Py_ssize_t dst_extent = 
dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_dst_stride = (__pyx_v_dst_strides[0]); - - /* "View.MemoryView":1149 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ - __pyx_t_1 = (__pyx_v_ndim == 1); - if (__pyx_t_1) { - - /* "View.MemoryView":1150 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - __pyx_t_2 = (__pyx_v_src_stride > 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - __pyx_t_2 = (__pyx_v_dst_stride > 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - - /* "View.MemoryView":1151 - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - */ - __pyx_t_2 = (((size_t)__pyx_v_src_stride) == __pyx_v_itemsize); - if (__pyx_t_2) { - __pyx_t_2 = (__pyx_v_itemsize == ((size_t)__pyx_v_dst_stride)); - } - __pyx_t_1 = __pyx_t_2; - __pyx_L5_bool_binop_done:; - - /* "View.MemoryView":1150 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - if (__pyx_t_1) { - - /* "View.MemoryView":1152 - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, (__pyx_v_itemsize * __pyx_v_dst_extent))); - - /* "View.MemoryView":1150 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1154 - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - */ - /*else*/ { - __pyx_t_3 = __pyx_v_dst_extent; - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { - __pyx_v_i = __pyx_t_5; - - /* "View.MemoryView":1155 - * else: - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) # <<<<<<<<<<<<<< - * src_data += src_stride - * dst_data += dst_stride - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, __pyx_v_itemsize)); - - /* "View.MemoryView":1156 - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * else: - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1157 - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L4:; - - /* "View.MemoryView":1149 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ 
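- /* The ndim == 1 branch above is the base case of the copy: when both - * strides equal the itemsize (and are positive), src and dst are contiguous - * and a single memcpy moves the whole extent; otherwise one memcpy per - * element is issued, with each pointer advancing by its own stride. A - * minimal Python sketch of the same logic over bytearrays (illustrative - * only, not part of the generated module; the names are made up): - * - *     def copy_1d(src, dst, src_stride, dst_stride, extent, itemsize): - *         if src_stride == itemsize == dst_stride and src_stride > 0: - *             dst[:itemsize * extent] = src[:itemsize * extent]  # bulk memcpy - *         else: - *             s = d = 0 - *             for _ in range(extent): - *                 dst[d:d + itemsize] = src[s:s + itemsize]  # per-item copy - *                 s += src_stride - *                 d += dst_stride - */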
- goto __pyx_L3; - } - - /* "View.MemoryView":1159 - * dst_data += dst_stride - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * _copy_strided_to_strided(src_data, src_strides + 1, - * dst_data, dst_strides + 1, - */ - /*else*/ { - __pyx_t_3 = __pyx_v_dst_extent; - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { - __pyx_v_i = __pyx_t_5; - - /* "View.MemoryView":1160 - * else: - * for i in range(dst_extent): - * _copy_strided_to_strided(src_data, src_strides + 1, # <<<<<<<<<<<<<< - * dst_data, dst_strides + 1, - * src_shape + 1, dst_shape + 1, - */ - _copy_strided_to_strided(__pyx_v_src_data, (__pyx_v_src_strides + 1), __pyx_v_dst_data, (__pyx_v_dst_strides + 1), (__pyx_v_src_shape + 1), (__pyx_v_dst_shape + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize); - - /* "View.MemoryView":1164 - * src_shape + 1, dst_shape + 1, - * ndim - 1, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1165 - * ndim - 1, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L3:; - - /* "View.MemoryView":1137 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - - /* function exit code */ -} - -/* "View.MemoryView":1167 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) noexcept nogil: - */ - -static void copy_strided_to_strided(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - - /* "View.MemoryView":1170 - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) noexcept nogil: - * _copy_strided_to_strided(src.data, src.strides, dst.data, dst.strides, # <<<<<<<<<<<<<< - * src.shape, dst.shape, ndim, itemsize) - * - */ - _copy_strided_to_strided(__pyx_v_src->data, __pyx_v_src->strides, __pyx_v_dst->data, __pyx_v_dst->strides, __pyx_v_src->shape, __pyx_v_dst->shape, __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1167 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) noexcept nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1174 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) noexcept nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *__pyx_v_src, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_size; - Py_ssize_t __pyx_r; - Py_ssize_t __pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - - /* "View.MemoryView":1176 - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) noexcept nogil: - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize # <<<<<<<<<<<<<< - * - * for 
shape in src.shape[:ndim]: - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_size = __pyx_t_1; - - /* "View.MemoryView":1178 - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - * - * for shape in src.shape[:ndim]: # <<<<<<<<<<<<<< - * size *= shape - * - */ - __pyx_t_3 = (__pyx_v_src->shape + __pyx_v_ndim); - for (__pyx_t_4 = __pyx_v_src->shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_v_shape = (__pyx_t_2[0]); - - /* "View.MemoryView":1179 - * - * for shape in src.shape[:ndim]: - * size *= shape # <<<<<<<<<<<<<< - * - * return size - */ - __pyx_v_size = (__pyx_v_size * __pyx_v_shape); - } - - /* "View.MemoryView":1181 - * size *= shape - * - * return size # <<<<<<<<<<<<<< - * - * @cname('__pyx_fill_contig_strides_array') - */ - __pyx_r = __pyx_v_size; - goto __pyx_L0; - - /* "View.MemoryView":1174 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) noexcept nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1184 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) noexcept nogil: - */ - -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, Py_ssize_t __pyx_v_stride, int __pyx_v_ndim, char __pyx_v_order) { - int __pyx_v_idx; - Py_ssize_t __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1193 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - __pyx_t_1 = (__pyx_v_order == 'F'); - if (__pyx_t_1) { - - /* "View.MemoryView":1194 - * - * if order == 'F': - * for idx in range(ndim): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - __pyx_t_2 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_idx = __pyx_t_4; - - /* "View.MemoryView":1195 - * if order == 'F': - * for idx in range(ndim): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * else: - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1196 - * for idx in range(ndim): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * else: - * for idx in range(ndim - 1, -1, -1): - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - - /* "View.MemoryView":1193 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1198 - * stride *= shape[idx] - * else: - * for idx in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - /*else*/ { - for (__pyx_t_2 = (__pyx_v_ndim - 1); __pyx_t_2 > -1; __pyx_t_2-=1) { - __pyx_v_idx = __pyx_t_2; - - /* "View.MemoryView":1199 - * else: - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1200 - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * - * return 
stride - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - } - __pyx_L3:; - - /* "View.MemoryView":1202 - * stride *= shape[idx] - * - * return stride # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_data_to_temp') - */ - __pyx_r = __pyx_v_stride; - goto __pyx_L0; - - /* "View.MemoryView":1184 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) noexcept nogil: - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1205 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - -static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_tmpslice, char __pyx_v_order, int __pyx_v_ndim) { - int __pyx_v_i; - void *__pyx_v_result; - size_t __pyx_v_itemsize; - size_t __pyx_v_size; - void *__pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - struct __pyx_memoryview_obj *__pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save; - #endif - - /* "View.MemoryView":1216 - * cdef void *result - * - * cdef size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef size_t size = slice_get_size(src, ndim) - * - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1217 - * - * cdef size_t itemsize = src.memview.view.itemsize - * cdef size_t size = slice_get_size(src, ndim) # <<<<<<<<<<<<<< - * - * result = malloc(size) - */ - __pyx_v_size = __pyx_memoryview_slice_get_size(__pyx_v_src, __pyx_v_ndim); - - /* "View.MemoryView":1219 - * cdef size_t size = slice_get_size(src, ndim) - * - * result = malloc(size) # <<<<<<<<<<<<<< - * if not result: - * _err_no_memory() - */ - __pyx_v_result = malloc(__pyx_v_size); - - /* "View.MemoryView":1220 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err_no_memory() - * - */ - __pyx_t_2 = (!(__pyx_v_result != 0)); - if (__pyx_t_2) { - - /* "View.MemoryView":1221 - * result = malloc(size) - * if not result: - * _err_no_memory() # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_err_no_memory(); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 1221, __pyx_L1_error) - - /* "View.MemoryView":1220 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err_no_memory() - * - */ - } - - /* "View.MemoryView":1224 - * - * - * tmpslice.data = result # <<<<<<<<<<<<<< - * tmpslice.memview = src.memview - * for i in range(ndim): - */ - __pyx_v_tmpslice->data = ((char *)__pyx_v_result); - - /* "View.MemoryView":1225 - * - * tmpslice.data = result - * tmpslice.memview = src.memview # <<<<<<<<<<<<<< - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - */ - __pyx_t_4 = __pyx_v_src->memview; - __pyx_v_tmpslice->memview = __pyx_t_4; - - /* "View.MemoryView":1226 - * tmpslice.data = result - * tmpslice.memview = src.memview - * for i in range(ndim): # <<<<<<<<<<<<<< - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1227 - * tmpslice.memview = 
src.memview - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] # <<<<<<<<<<<<<< - * tmpslice.suboffsets[i] = -1 - * - */ - (__pyx_v_tmpslice->shape[__pyx_v_i]) = (__pyx_v_src->shape[__pyx_v_i]); - - /* "View.MemoryView":1228 - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, ndim, order) - */ - (__pyx_v_tmpslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1230 - * tmpslice.suboffsets[i] = -1 - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, ndim, order) # <<<<<<<<<<<<<< - * - * - */ - (void)(__pyx_fill_contig_strides_array((&(__pyx_v_tmpslice->shape[0])), (&(__pyx_v_tmpslice->strides[0])), __pyx_v_itemsize, __pyx_v_ndim, __pyx_v_order)); - - /* "View.MemoryView":1233 - * - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1234 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - __pyx_t_2 = ((__pyx_v_tmpslice->shape[__pyx_v_i]) == 1); - if (__pyx_t_2) { - - /* "View.MemoryView":1235 - * for i in range(ndim): - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 # <<<<<<<<<<<<<< - * - * if slice_is_contig(src[0], order, ndim): - */ - (__pyx_v_tmpslice->strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1234 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - } - } - - /* "View.MemoryView":1237 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - __pyx_t_2 = __pyx_memviewslice_is_contig((__pyx_v_src[0]), __pyx_v_order, __pyx_v_ndim); - if (__pyx_t_2) { - - /* "View.MemoryView":1238 - * - * if slice_is_contig(src[0], order, ndim): - * memcpy(result, src.data, size) # <<<<<<<<<<<<<< - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - */ - (void)(memcpy(__pyx_v_result, __pyx_v_src->data, __pyx_v_size)); - - /* "View.MemoryView":1237 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":1240 - * memcpy(result, src.data, size) - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) # <<<<<<<<<<<<<< - * - * return result - */ - /*else*/ { - copy_strided_to_strided(__pyx_v_src, __pyx_v_tmpslice, __pyx_v_ndim, __pyx_v_itemsize); - } - __pyx_L9:; - - /* "View.MemoryView":1242 - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":1205 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - - /* function exit code */ - __pyx_L1_error:; - #ifdef WITH_THREAD - __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.copy_data_to_temp", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - __pyx_L0:; - return 
__pyx_r; -} - -/* "View.MemoryView":1247 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError, f"got differing extents in dimension {i} (got {extent1} and {extent2})" - */ - -static int __pyx_memoryview_err_extents(int __pyx_v_i, Py_ssize_t __pyx_v_extent1, Py_ssize_t __pyx_v_extent2) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - Py_ssize_t __pyx_t_2; - Py_UCS4 __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_extents", 0); - - /* "View.MemoryView":1249 - * cdef int _err_extents(int i, Py_ssize_t extent1, - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError, f"got differing extents in dimension {i} (got {extent1} and {extent2})" # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err_dim') - */ - __pyx_t_1 = PyTuple_New(7); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = 0; - __pyx_t_3 = 127; - __Pyx_INCREF(__pyx_kp_u_got_differing_extents_in_dimensi); - __pyx_t_2 += 35; - __Pyx_GIVEREF(__pyx_kp_u_got_differing_extents_in_dimensi); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_kp_u_got_differing_extents_in_dimensi); - __pyx_t_4 = __Pyx_PyUnicode_From_int(__pyx_v_i, 0, ' ', 'd'); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_INCREF(__pyx_kp_u_got); - __pyx_t_2 += 6; - __Pyx_GIVEREF(__pyx_kp_u_got); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_kp_u_got); - __pyx_t_4 = __Pyx_PyUnicode_From_Py_ssize_t(__pyx_v_extent1, 0, ' ', 'd'); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_1, 3, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_INCREF(__pyx_kp_u_and); - __pyx_t_2 += 5; - __Pyx_GIVEREF(__pyx_kp_u_and); - PyTuple_SET_ITEM(__pyx_t_1, 4, __pyx_kp_u_and); - __pyx_t_4 = __Pyx_PyUnicode_From_Py_ssize_t(__pyx_v_extent2, 0, ' ', 'd'); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_1, 5, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_INCREF(__pyx_kp_u__7); - __pyx_t_2 += 1; - __Pyx_GIVEREF(__pyx_kp_u__7); - PyTuple_SET_ITEM(__pyx_t_1, 6, __pyx_kp_u__7); - __pyx_t_4 = __Pyx_PyUnicode_Join(__pyx_t_1, 7, __pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_t_4, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __PYX_ERR(1, 1249, __pyx_L1_error) - - /* "View.MemoryView":1247 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError, f"got differing extents in dimension {i} (got {extent1} and {extent2})" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - 
__Pyx_AddTraceback("View.MemoryView._err_extents", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1252 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(PyObject *error, str msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error, msg % dim - * - */ - -static int __pyx_memoryview_err_dim(PyObject *__pyx_v_error, PyObject *__pyx_v_msg, int __pyx_v_dim) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_dim", 0); - __Pyx_INCREF(__pyx_v_msg); - - /* "View.MemoryView":1253 - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(PyObject *error, str msg, int dim) except -1 with gil: - * raise error, msg % dim # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err') - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_dim); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyString_FormatSafe(__pyx_v_msg, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_Raise(((PyObject *)__pyx_v_error), __pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 1253, __pyx_L1_error) - - /* "View.MemoryView":1252 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(PyObject *error, str msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error, msg % dim - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView._err_dim", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_msg); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1256 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(PyObject *error, str msg) except -1 with gil: # <<<<<<<<<<<<<< - * raise error, msg - * - */ - -static int __pyx_memoryview_err(PyObject *__pyx_v_error, PyObject *__pyx_v_msg) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err", 0); - __Pyx_INCREF(__pyx_v_msg); - - /* "View.MemoryView":1257 - * @cname('__pyx_memoryview_err') - * cdef int _err(PyObject *error, str msg) except -1 with gil: - * raise error, msg # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err_no_memory') - */ - __Pyx_Raise(((PyObject *)__pyx_v_error), __pyx_v_msg, 0, 0); - __PYX_ERR(1, 1257, __pyx_L1_error) - - /* "View.MemoryView":1256 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(PyObject *error, str msg) except -1 with gil: # <<<<<<<<<<<<<< - * raise error, msg - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView._err", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_msg); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); 
- #endif - return __pyx_r; -} - -/* "View.MemoryView":1260 - * - * @cname('__pyx_memoryview_err_no_memory') - * cdef int _err_no_memory() except -1 with gil: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - -static int __pyx_memoryview_err_no_memory(void) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_no_memory", 0); - - /* "View.MemoryView":1261 - * @cname('__pyx_memoryview_err_no_memory') - * cdef int _err_no_memory() except -1 with gil: - * raise MemoryError # <<<<<<<<<<<<<< - * - * - */ - PyErr_NoMemory(); __PYX_ERR(1, 1261, __pyx_L1_error) - - /* "View.MemoryView":1260 - * - * @cname('__pyx_memoryview_err_no_memory') - * cdef int _err_no_memory() except -1 with gil: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView._err_no_memory", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1265 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice __pyx_v_src, __Pyx_memviewslice __pyx_v_dst, int __pyx_v_src_ndim, int __pyx_v_dst_ndim, int __pyx_v_dtype_is_object) { - void *__pyx_v_tmpdata; - size_t __pyx_v_itemsize; - int __pyx_v_i; - char __pyx_v_order; - int __pyx_v_broadcasting; - int __pyx_v_direct_copy; - __Pyx_memviewslice __pyx_v_tmp; - int __pyx_v_ndim; - int __pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - void *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save; - #endif - - /* "View.MemoryView":1273 - * Check for overlapping memory and verify the shapes. 
- * """ - * cdef void *tmpdata = NULL # <<<<<<<<<<<<<< - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - */ - __pyx_v_tmpdata = NULL; - - /* "View.MemoryView":1274 - * """ - * cdef void *tmpdata = NULL - * cdef size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - */ - __pyx_t_1 = __pyx_v_src.memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1276 - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) # <<<<<<<<<<<<<< - * cdef bint broadcasting = False - * cdef bint direct_copy = False - */ - __pyx_v_order = __pyx_get_best_slice_order((&__pyx_v_src), __pyx_v_src_ndim); - - /* "View.MemoryView":1277 - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False # <<<<<<<<<<<<<< - * cdef bint direct_copy = False - * cdef __Pyx_memviewslice tmp - */ - __pyx_v_broadcasting = 0; - - /* "View.MemoryView":1278 - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False - * cdef bint direct_copy = False # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice tmp - * - */ - __pyx_v_direct_copy = 0; - - /* "View.MemoryView":1281 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - __pyx_t_2 = (__pyx_v_src_ndim < __pyx_v_dst_ndim); - if (__pyx_t_2) { - - /* "View.MemoryView":1282 - * - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_src), __pyx_v_src_ndim, __pyx_v_dst_ndim); - - /* "View.MemoryView":1281 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1283 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - __pyx_t_2 = (__pyx_v_dst_ndim < __pyx_v_src_ndim); - if (__pyx_t_2) { - - /* "View.MemoryView":1284 - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) # <<<<<<<<<<<<<< - * - * cdef int ndim = max(src_ndim, dst_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_dst), __pyx_v_dst_ndim, __pyx_v_src_ndim); - - /* "View.MemoryView":1283 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - } - __pyx_L3:; - - /* "View.MemoryView":1286 - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - * cdef int ndim = max(src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - __pyx_t_3 = __pyx_v_dst_ndim; - __pyx_t_4 = __pyx_v_src_ndim; - if ((__pyx_t_3 > __pyx_t_4)) { - __pyx_t_5 = __pyx_t_3; - } else { - __pyx_t_5 = __pyx_t_4; - } - __pyx_v_ndim = __pyx_t_5; - - /* "View.MemoryView":1288 - * cdef int ndim = max(src_ndim, dst_ndim) - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - */ - __pyx_t_5 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_5; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = 
__pyx_t_4; - - /* "View.MemoryView":1289 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - __pyx_t_2 = ((__pyx_v_src.shape[__pyx_v_i]) != (__pyx_v_dst.shape[__pyx_v_i])); - if (__pyx_t_2) { - - /* "View.MemoryView":1290 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - __pyx_t_2 = ((__pyx_v_src.shape[__pyx_v_i]) == 1); - if (__pyx_t_2) { - - /* "View.MemoryView":1291 - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - * broadcasting = True # <<<<<<<<<<<<<< - * src.strides[i] = 0 - * else: - */ - __pyx_v_broadcasting = 1; - - /* "View.MemoryView":1292 - * if src.shape[i] == 1: - * broadcasting = True - * src.strides[i] = 0 # <<<<<<<<<<<<<< - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) - */ - (__pyx_v_src.strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1290 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - goto __pyx_L7; - } - - /* "View.MemoryView":1294 - * src.strides[i] = 0 - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) # <<<<<<<<<<<<<< - * - * if src.suboffsets[i] >= 0: - */ - /*else*/ { - __pyx_t_6 = __pyx_memoryview_err_extents(__pyx_v_i, (__pyx_v_dst.shape[__pyx_v_i]), (__pyx_v_src.shape[__pyx_v_i])); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1294, __pyx_L1_error) - } - __pyx_L7:; - - /* "View.MemoryView":1289 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - } - - /* "View.MemoryView":1296 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(PyExc_ValueError, "Dimension %d is not direct", i) - * - */ - __pyx_t_2 = ((__pyx_v_src.suboffsets[__pyx_v_i]) >= 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1297 - * - * if src.suboffsets[i] >= 0: - * _err_dim(PyExc_ValueError, "Dimension %d is not direct", i) # <<<<<<<<<<<<<< - * - * if slices_overlap(&src, &dst, ndim, itemsize): - */ - __pyx_t_6 = __pyx_memoryview_err_dim(PyExc_ValueError, __pyx_kp_s_Dimension_d_is_not_direct, __pyx_v_i); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1297, __pyx_L1_error) - - /* "View.MemoryView":1296 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(PyExc_ValueError, "Dimension %d is not direct", i) - * - */ - } - } - - /* "View.MemoryView":1299 - * _err_dim(PyExc_ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - __pyx_t_2 = __pyx_slices_overlap((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize); - if (__pyx_t_2) { - - /* "View.MemoryView":1301 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - __pyx_t_2 = (!__pyx_memviewslice_is_contig(__pyx_v_src, __pyx_v_order, __pyx_v_ndim)); - if (__pyx_t_2) { - - /* "View.MemoryView":1302 - * - * if not slice_is_contig(src, order, ndim): - * order = get_best_order(&dst, ndim) # <<<<<<<<<<<<<< - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - */ - __pyx_v_order = __pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim); - - /* 
"View.MemoryView":1301 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - } - - /* "View.MemoryView":1304 - * order = get_best_order(&dst, ndim) - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) # <<<<<<<<<<<<<< - * src = tmp - * - */ - __pyx_t_7 = __pyx_memoryview_copy_data_to_temp((&__pyx_v_src), (&__pyx_v_tmp), __pyx_v_order, __pyx_v_ndim); if (unlikely(__pyx_t_7 == ((void *)NULL))) __PYX_ERR(1, 1304, __pyx_L1_error) - __pyx_v_tmpdata = __pyx_t_7; - - /* "View.MemoryView":1305 - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - * src = tmp # <<<<<<<<<<<<<< - * - * if not broadcasting: - */ - __pyx_v_src = __pyx_v_tmp; - - /* "View.MemoryView":1299 - * _err_dim(PyExc_ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - } - - /* "View.MemoryView":1307 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = (!__pyx_v_broadcasting); - if (__pyx_t_2) { - - /* "View.MemoryView":1310 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - __pyx_t_2 = __pyx_memviewslice_is_contig(__pyx_v_src, 'C', __pyx_v_ndim); - if (__pyx_t_2) { - - /* "View.MemoryView":1311 - * - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) # <<<<<<<<<<<<<< - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'C', __pyx_v_ndim); - - /* "View.MemoryView":1310 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - goto __pyx_L12; - } - - /* "View.MemoryView":1312 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - __pyx_t_2 = __pyx_memviewslice_is_contig(__pyx_v_src, 'F', __pyx_v_ndim); - if (__pyx_t_2) { - - /* "View.MemoryView":1313 - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) # <<<<<<<<<<<<<< - * - * if direct_copy: - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'F', __pyx_v_ndim); - - /* "View.MemoryView":1312 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - } - __pyx_L12:; - - /* "View.MemoryView":1315 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) - */ - if (__pyx_v_direct_copy) { - - /* "View.MemoryView":1317 - * if direct_copy: - * - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) # <<<<<<<<<<<<<< - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1318 - * - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) - * 
memcpy(dst.data, src.data, slice_get_size(&src, ndim)) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) - * free(tmpdata) - */ - (void)(memcpy(__pyx_v_dst.data, __pyx_v_src.data, __pyx_memoryview_slice_get_size((&__pyx_v_src), __pyx_v_ndim))); - - /* "View.MemoryView":1319 - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) # <<<<<<<<<<<<<< - * free(tmpdata) - * return 0 - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1320 - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ - free(__pyx_v_tmpdata); - - /* "View.MemoryView":1321 - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * if order == 'F' == get_best_order(&dst, ndim): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1315 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) - */ - } - - /* "View.MemoryView":1307 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1323 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = (__pyx_v_order == 'F'); - if (__pyx_t_2) { - __pyx_t_2 = ('F' == __pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim)); - } - if (__pyx_t_2) { - - /* "View.MemoryView":1326 - * - * - * transpose_memslice(&src) # <<<<<<<<<<<<<< - * transpose_memslice(&dst) - * - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_src)); if (unlikely(__pyx_t_5 == ((int)-1))) __PYX_ERR(1, 1326, __pyx_L1_error) - - /* "View.MemoryView":1327 - * - * transpose_memslice(&src) - * transpose_memslice(&dst) # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_dst)); if (unlikely(__pyx_t_5 == ((int)-1))) __PYX_ERR(1, 1327, __pyx_L1_error) - - /* "View.MemoryView":1323 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1329 - * transpose_memslice(&dst) - * - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) # <<<<<<<<<<<<<< - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1330 - * - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) - * - */ - copy_strided_to_strided((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1331 - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) # <<<<<<<<<<<<<< - * - * free(tmpdata) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1333 - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) - * - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ 
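- /* Overall flow of memoryview_copy_contents, as Python-like pseudocode - * (an illustrative summary only, not part of the generated module): - * - *     def copy_contents(src, dst): - *         broadcast_leading(...)            # pad the lower-ndim side with 1s - *         # verify extents; where src.shape[i] == 1, broadcast with stride 0 - *         if slices_overlap(src, dst): - *             src = copy_data_to_temp(src)  # detour through a fresh buffer - *         if not broadcasting and both contiguous in the same order: - *             memcpy(dst.data, src.data, nbytes); return 0 - *         if both are best traversed in F order: - *             transpose(src); transpose(dst) # so the recursion runs C-style - *         copy_strided_to_strided(src, dst)  # element-wise fallback - * - * Object-dtype refcounts are adjusted around each raw copy via - * refcount_copying(). - */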
- free(__pyx_v_tmpdata); - - /* "View.MemoryView":1334 - * - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_broadcast_leading') - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1265 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - - /* function exit code */ - __pyx_L1_error:; - #ifdef WITH_THREAD - __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_contents", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1337 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) noexcept nogil: - */ - -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim, int __pyx_v_ndim_other) { - int __pyx_v_i; - int __pyx_v_offset; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - - /* "View.MemoryView":1341 - * int ndim_other) noexcept nogil: - * cdef int i - * cdef int offset = ndim_other - ndim # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_offset = (__pyx_v_ndim_other - __pyx_v_ndim); - - /* "View.MemoryView":1343 - * cdef int offset = ndim_other - ndim - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1344 - * - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] # <<<<<<<<<<<<<< - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - */ - (__pyx_v_mslice->shape[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->shape[__pyx_v_i]); - - /* "View.MemoryView":1345 - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] # <<<<<<<<<<<<<< - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - */ - (__pyx_v_mslice->strides[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1346 - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] # <<<<<<<<<<<<<< - * - * for i in range(offset): - */ - (__pyx_v_mslice->suboffsets[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->suboffsets[__pyx_v_i]); - } - - /* "View.MemoryView":1348 - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - * for i in range(offset): # <<<<<<<<<<<<<< - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - */ - __pyx_t_1 = __pyx_v_offset; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* "View.MemoryView":1349 - * - * for i in range(offset): - * mslice.shape[i] = 1 # <<<<<<<<<<<<<< - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 - */ - (__pyx_v_mslice->shape[__pyx_v_i]) = 1; - - /* "View.MemoryView":1350 - * for i in range(offset): - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] # <<<<<<<<<<<<<< - * 
mslice.suboffsets[i] = -1 - * - */ - (__pyx_v_mslice->strides[__pyx_v_i]) = (__pyx_v_mslice->strides[0]); - - /* "View.MemoryView":1351 - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_mslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1337 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) noexcept nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1359 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, int ndim, bint inc) noexcept nogil: # <<<<<<<<<<<<<< - * - * if dtype_is_object: - */ - -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_dtype_is_object, int __pyx_v_ndim, int __pyx_v_inc) { - - /* "View.MemoryView":1361 - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, int ndim, bint inc) noexcept nogil: - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, dst.strides, ndim, inc) - * - */ - if (__pyx_v_dtype_is_object) { - - /* "View.MemoryView":1362 - * - * if dtype_is_object: - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, dst.strides, ndim, inc) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - */ - __pyx_memoryview_refcount_objects_in_slice_with_gil(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1361 - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, int ndim, bint inc) noexcept nogil: - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, dst.strides, ndim, inc) - * - */ - } - - /* "View.MemoryView":1359 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, int ndim, bint inc) noexcept nogil: # <<<<<<<<<<<<<< - * - * if dtype_is_object: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1365 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) noexcept with gil: - */ - -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - __Pyx_RefNannyDeclarations - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("refcount_objects_in_slice_with_gil", 0); - - /* "View.MemoryView":1368 - * Py_ssize_t *strides, int ndim, - * bint inc) noexcept with gil: - * refcount_objects_in_slice(data, shape, strides, ndim, inc) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, __pyx_v_shape, __pyx_v_strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1365 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) noexcept with gil: - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - 
#ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif -} - -/* "View.MemoryView":1371 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, bint inc) noexcept: - * cdef Py_ssize_t i - */ - -static void __pyx_memoryview_refcount_objects_in_slice(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_stride; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - __Pyx_RefNannySetupContext("refcount_objects_in_slice", 0); - - /* "View.MemoryView":1374 - * Py_ssize_t *strides, int ndim, bint inc) noexcept: - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] # <<<<<<<<<<<<<< - * - * for i in range(shape[0]): - */ - __pyx_v_stride = (__pyx_v_strides[0]); - - /* "View.MemoryView":1376 - * cdef Py_ssize_t stride = strides[0] - * - * for i in range(shape[0]): # <<<<<<<<<<<<<< - * if ndim == 1: - * if inc: - */ - __pyx_t_1 = (__pyx_v_shape[0]); - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* "View.MemoryView":1377 - * - * for i in range(shape[0]): - * if ndim == 1: # <<<<<<<<<<<<<< - * if inc: - * Py_INCREF((<PyObject **> data)[0]) - */ - __pyx_t_4 = (__pyx_v_ndim == 1); - if (__pyx_t_4) { - - /* "View.MemoryView":1378 - * for i in range(shape[0]): - * if ndim == 1: - * if inc: # <<<<<<<<<<<<<< - * Py_INCREF((<PyObject **> data)[0]) - * else: - */ - if (__pyx_v_inc) { - - /* "View.MemoryView":1379 - * if ndim == 1: - * if inc: - * Py_INCREF((<PyObject **> data)[0]) # <<<<<<<<<<<<<< - * else: - * Py_DECREF((<PyObject **> data)[0]) - */ - Py_INCREF((((PyObject **)__pyx_v_data)[0])); - - /* "View.MemoryView":1378 - * for i in range(shape[0]): - * if ndim == 1: - * if inc: # <<<<<<<<<<<<<< - * Py_INCREF((<PyObject **> data)[0]) - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":1381 - * Py_INCREF((<PyObject **> data)[0]) - * else: - * Py_DECREF((<PyObject **> data)[0]) # <<<<<<<<<<<<<< - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, ndim - 1, inc) - */ - /*else*/ { - Py_DECREF((((PyObject **)__pyx_v_data)[0])); - } - __pyx_L6:; - - /* "View.MemoryView":1377 - * - * for i in range(shape[0]): - * if ndim == 1: # <<<<<<<<<<<<<< - * if inc: - * Py_INCREF((<PyObject **> data)[0]) - */ - goto __pyx_L5; - } - - /* "View.MemoryView":1383 - * Py_DECREF((<PyObject **> data)[0]) - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, ndim - 1, inc) # <<<<<<<<<<<<<< - * - * data += stride - */ - /*else*/ { - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_inc); - } - __pyx_L5:; - - /* "View.MemoryView":1385 - * refcount_objects_in_slice(data, shape + 1, strides + 1, ndim - 1, inc) - * - * data += stride # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - - /* "View.MemoryView":1371 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, bint inc) noexcept: - * cdef Py_ssize_t i - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1391 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void 
*item, - * bint dtype_is_object) noexcept nogil: - */ - -static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item, int __pyx_v_dtype_is_object) { - - /* "View.MemoryView":1394 - * size_t itemsize, void *item, - * bint dtype_is_object) noexcept nogil: - * refcount_copying(dst, dtype_is_object, ndim, inc=False) # <<<<<<<<<<<<<< - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, inc=True) - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1395 - * bint dtype_is_object) noexcept nogil: - * refcount_copying(dst, dtype_is_object, ndim, inc=False) - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, itemsize, item) # <<<<<<<<<<<<<< - * refcount_copying(dst, dtype_is_object, ndim, inc=True) - * - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1396 - * refcount_copying(dst, dtype_is_object, ndim, inc=False) - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, inc=True) # <<<<<<<<<<<<<< - * - * - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1391 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void *item, - * bint dtype_is_object) noexcept nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1400 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) noexcept nogil: - */ - -static void __pyx_memoryview__slice_assign_scalar(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_extent; - int __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - Py_ssize_t __pyx_t_4; - - /* "View.MemoryView":1404 - * size_t itemsize, void *item) noexcept nogil: - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t extent = shape[0] - * - */ - __pyx_v_stride = (__pyx_v_strides[0]); - - /* "View.MemoryView":1405 - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] - * cdef Py_ssize_t extent = shape[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_extent = (__pyx_v_shape[0]); - - /* "View.MemoryView":1407 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - __pyx_t_1 = (__pyx_v_ndim == 1); - if (__pyx_t_1) { - - /* "View.MemoryView":1408 - * - * if ndim == 1: - * for i in range(extent): # <<<<<<<<<<<<<< - * memcpy(data, item, itemsize) - * data += stride - */ - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1409 - * if ndim == 1: - * for i in range(extent): - * memcpy(data, item, itemsize) # <<<<<<<<<<<<<< - * data += stride - * else: - */ - (void)(memcpy(__pyx_v_data, __pyx_v_item, 
__pyx_v_itemsize)); - - /* "View.MemoryView":1410 - * for i in range(extent): - * memcpy(data, item, itemsize) - * data += stride # <<<<<<<<<<<<<< - * else: - * for i in range(extent): - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - - /* "View.MemoryView":1407 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1412 - * data += stride - * else: - * for i in range(extent): # <<<<<<<<<<<<<< - * _slice_assign_scalar(data, shape + 1, strides + 1, ndim - 1, itemsize, item) - * data += stride - */ - /*else*/ { - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1413 - * else: - * for i in range(extent): - * _slice_assign_scalar(data, shape + 1, strides + 1, ndim - 1, itemsize, item) # <<<<<<<<<<<<<< - * data += stride - * - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1414 - * for i in range(extent): - * _slice_assign_scalar(data, shape + 1, strides + 1, ndim - 1, itemsize, item) - * data += stride # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - } - __pyx_L3:; - - /* "View.MemoryView":1400 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) noexcept nogil: - */ - - /* function exit code */ -} - -/* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum = {"__pyx_unpickle_Enum", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v___pyx_type = 0; - long __pyx_v___pyx_checksum; - PyObject *__pyx_v___pyx_state = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_type,&__pyx_n_s_pyx_checksum,&__pyx_n_s_pyx_state,0}; - PyObject* values[3] = {0,0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - 
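/* The switch walks from the highest arity down, falling through so that each positional argument actually supplied is stored exactly once; any slots still empty are filled from keywords below. */ -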
CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pyx_type)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pyx_checksum)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 1); __PYX_ERR(1, 1, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 2); __PYX_ERR(1, 1, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__pyx_unpickle_Enum") < 0)) __PYX_ERR(1, 1, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 3)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v___pyx_type = values[0]; - __pyx_v___pyx_checksum = __Pyx_PyInt_As_long(values[1]); if (unlikely((__pyx_v___pyx_checksum == (long)-1) && PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_v___pyx_state = values[2]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, __pyx_nargs); __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(__pyx_self, __pyx_v___pyx_type, __pyx_v___pyx_checksum, __pyx_v___pyx_state); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_v___pyx_PickleError = 0; - PyObject *__pyx_v___pyx_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum", 0); - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum not in (0x82a3537, 0x6ae9995, 0xb068931): # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum - */ - __pyx_t_1 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = (__Pyx_PySequence_ContainsTF(__pyx_t_1, __pyx_tuple__8, Py_NE)); if (unlikely((__pyx_t_2 < 0))) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_2) { - - /* "(tree fragment)":5 - * cdef object __pyx_result - * if __pyx_checksum not in (0x82a3537, 0x6ae9995, 0xb068931): - * from pickle import PickleError as __pyx_PickleError # <<<<<<<<<<<<<< - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum - * __pyx_result = Enum.__new__(__pyx_type) - */ - __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_n_s_PickleError); - __Pyx_GIVEREF(__pyx_n_s_PickleError); - PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_PickleError); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_pickle, __pyx_t_1, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_PickleError); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_1); - __pyx_v___pyx_PickleError = __pyx_t_1; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "(tree fragment)":6 - * if __pyx_checksum not in (0x82a3537, 0x6ae9995, 0xb068931): - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum # <<<<<<<<<<<<<< - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: - */ - __pyx_t_3 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_Incompatible_checksums_0x_x_vs_0, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_v___pyx_PickleError, __pyx_t_1, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 6, __pyx_L1_error) - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum not in (0x82a3537, 0x6ae9995, 0xb068931): # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum - */ - } - - /* "(tree fragment)":7 - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum - * __pyx_result = Enum.__new__(__pyx_type) # <<<<<<<<<<<<<< - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_MemviewEnum_type), __pyx_n_s_new); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_4, __pyx_v___pyx_type}; - 
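/* If Enum.__new__ resolved to a bound method it was unpacked above: self sits in __pyx_callargs[0] and __pyx_t_5 == 1, so the vectorcall below passes (self, type); otherwise the argument window starts at __pyx_callargs[1] and only the type is passed. */ -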
__pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_v___pyx_result = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":8 - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - */ - __pyx_t_2 = (__pyx_v___pyx_state != Py_None); - if (__pyx_t_2) { - - /* "(tree fragment)":9 - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) # <<<<<<<<<<<<<< - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - */ - if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None) || __Pyx_RaiseUnexpectedTypeError("tuple", __pyx_v___pyx_state))) __PYX_ERR(1, 9, __pyx_L1_error) - __pyx_t_1 = __pyx_unpickle_Enum__set_state(((struct __pyx_MemviewEnum_obj *)__pyx_v___pyx_result), ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 9, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":8 - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - */ - } - - /* "(tree fragment)":10 - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result # <<<<<<<<<<<<<< - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v___pyx_result); - __pyx_r = __pyx_v___pyx_result; - goto __pyx_L0; - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v___pyx_PickleError); - __Pyx_XDECREF(__pyx_v___pyx_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - -static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - int __pyx_lineno 
= 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum__set_state", 0); - - /* "(tree fragment)":12 - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] # <<<<<<<<<<<<<< - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 12, __pyx_L1_error) - } - __pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __Pyx_GOTREF(__pyx_v___pyx_result->name); - __Pyx_DECREF(__pyx_v___pyx_result->name); - __pyx_v___pyx_result->name = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 13, __pyx_L1_error) - } - __pyx_t_3 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_4 = (__pyx_t_3 > 1); - if (__pyx_t_4) { - } else { - __pyx_t_2 = __pyx_t_4; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_4 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_2 = __pyx_t_4; - __pyx_L4_bool_binop_done:; - if (__pyx_t_2) { - - /* "(tree fragment)":14 - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) # <<<<<<<<<<<<<< - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_update); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 14, __pyx_L1_error) - } - __pyx_t_5 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_7 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_7, __pyx_t_5}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_8, 1+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 
= 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - } - - /* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - -static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice __pyx_v_path, __Pyx_memviewslice __pyx_v_value, int __pyx_v_t_y, int __pyx_v_t_x, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args) { - float __pyx_v_max_neg_val = __pyx_k__9; - int __pyx_v_x; - int __pyx_v_y; - float __pyx_v_v_prev; - float __pyx_v_v_cur; - int __pyx_v_index; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - long __pyx_t_4; - int __pyx_t_5; - long __pyx_t_6; - long __pyx_t_7; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - Py_ssize_t __pyx_t_10; - float __pyx_t_11; - float __pyx_t_12; - float __pyx_t_13; - int __pyx_t_14; - Py_ssize_t __pyx_t_15; - Py_ssize_t __pyx_t_16; - if (__pyx_optional_args) { - if (__pyx_optional_args->__pyx_n > 0) { - __pyx_v_max_neg_val = __pyx_optional_args->max_neg_val; - } - } - - /* "monotonic_align/core.pyx":13 - * cdef float v_cur - * cdef float tmp - * cdef int index = t_x - 1 # <<<<<<<<<<<<<< - * - * for y in range(t_y): - */ - __pyx_v_index = (__pyx_v_t_x - 1); - - /* "monotonic_align/core.pyx":15 - * cdef int index = t_x - 1 - * - * for y in range(t_y): # <<<<<<<<<<<<<< - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - */ - __pyx_t_1 = __pyx_v_t_y; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_y = __pyx_t_3; - - /* "monotonic_align/core.pyx":16 - * - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): # <<<<<<<<<<<<<< - * if x == y: - * v_cur = max_neg_val - */ - __pyx_t_4 = (__pyx_v_y + 1); - __pyx_t_5 = __pyx_v_t_x; - if ((__pyx_t_4 < __pyx_t_5)) { - __pyx_t_6 = __pyx_t_4; - } else { - __pyx_t_6 = __pyx_t_5; - } - __pyx_t_4 = __pyx_t_6; - __pyx_t_5 = ((__pyx_v_t_x + __pyx_v_y) - __pyx_v_t_y); - __pyx_t_6 = 0; - if ((__pyx_t_5 > __pyx_t_6)) { - __pyx_t_7 = __pyx_t_5; - } else { - __pyx_t_7 = __pyx_t_6; - } - __pyx_t_6 = __pyx_t_4; - for (__pyx_t_5 = __pyx_t_7; __pyx_t_5 < __pyx_t_6; __pyx_t_5+=1) { - __pyx_v_x = __pyx_t_5; - - /* "monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # 
<<<<<<<<<<<<<< - * v_cur = max_neg_val - * else: - */ - __pyx_t_8 = (__pyx_v_x == __pyx_v_y); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":18 - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - * v_cur = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_cur = value[y-1, x] - */ - __pyx_v_v_cur = __pyx_v_max_neg_val; - - /* "monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # <<<<<<<<<<<<<< - * v_cur = max_neg_val - * else: - */ - goto __pyx_L7; - } - - /* "monotonic_align/core.pyx":20 - * v_cur = max_neg_val - * else: - * v_cur = value[y-1, x] # <<<<<<<<<<<<<< - * if x == 0: - * if y == 0: - */ - /*else*/ { - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_x; - __pyx_v_v_cur = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))); - } - __pyx_L7:; - - /* "monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. - */ - __pyx_t_8 = (__pyx_v_x == 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - __pyx_t_8 = (__pyx_v_y == 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":23 - * if x == 0: - * if y == 0: - * v_prev = 0. # <<<<<<<<<<<<<< - * else: - * v_prev = max_neg_val - */ - __pyx_v_v_prev = 0.; - - /* "monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - goto __pyx_L9; - } - - /* "monotonic_align/core.pyx":25 - * v_prev = 0. - * else: - * v_prev = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_prev = value[y-1, x-1] - */ - /*else*/ { - __pyx_v_v_prev = __pyx_v_max_neg_val; - } - __pyx_L9:; - - /* "monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. 
- */ - goto __pyx_L8; - } - - /* "monotonic_align/core.pyx":27 - * v_prev = max_neg_val - * else: - * v_prev = value[y-1, x-1] # <<<<<<<<<<<<<< - * value[y, x] += max(v_prev, v_cur) - * - */ - /*else*/ { - __pyx_t_10 = (__pyx_v_y - 1); - __pyx_t_9 = (__pyx_v_x - 1); - __pyx_v_v_prev = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_10 * __pyx_v_value.strides[0]) )) + __pyx_t_9)) ))); - } - __pyx_L8:; - - /* "monotonic_align/core.pyx":28 - * else: - * v_prev = value[y-1, x-1] - * value[y, x] += max(v_prev, v_cur) # <<<<<<<<<<<<<< - * - * for y in range(t_y - 1, -1, -1): - */ - __pyx_t_11 = __pyx_v_v_cur; - __pyx_t_12 = __pyx_v_v_prev; - if ((__pyx_t_11 > __pyx_t_12)) { - __pyx_t_13 = __pyx_t_11; - } else { - __pyx_t_13 = __pyx_t_12; - } - __pyx_t_9 = __pyx_v_y; - __pyx_t_10 = __pyx_v_x; - *((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) )) += __pyx_t_13; - } - } - - /* "monotonic_align/core.pyx":30 - * value[y, x] += max(v_prev, v_cur) - * - * for y in range(t_y - 1, -1, -1): # <<<<<<<<<<<<<< - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - */ - for (__pyx_t_1 = (__pyx_v_t_y - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_y = __pyx_t_1; - - /* "monotonic_align/core.pyx":31 - * - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 # <<<<<<<<<<<<<< - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 - */ - __pyx_t_10 = __pyx_v_y; - __pyx_t_9 = __pyx_v_index; - *((int *) ( /* dim=1 */ ((char *) (((int *) ( /* dim=0 */ (__pyx_v_path.data + __pyx_t_10 * __pyx_v_path.strides[0]) )) + __pyx_t_9)) )) = 1; - - /* "monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - __pyx_t_14 = (__pyx_v_index != 0); - if (__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_14 = (__pyx_v_index == __pyx_v_y); - if (!__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_index; - __pyx_t_15 = (__pyx_v_y - 1); - __pyx_t_16 = (__pyx_v_index - 1); - __pyx_t_14 = ((*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))) < (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_15 * __pyx_v_value.strides[0]) )) + __pyx_t_16)) )))); - __pyx_t_8 = __pyx_t_14; - __pyx_L13_bool_binop_done:; - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":33 - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_index = (__pyx_v_index - 1); - - /* "monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - } - } - - /* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - - /* function exit code */ -} - -/* 
"monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs, CYTHON_UNUSED int __pyx_skip_dispatch) { - CYTHON_UNUSED int __pyx_v_b; - int __pyx_v_i; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - __Pyx_memviewslice __pyx_t_4 = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_t_5 = { 0, 0, { 0 }, { 0 }, { 0 } }; - Py_ssize_t __pyx_t_6; - Py_ssize_t __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save; - #endif - - /* "monotonic_align/core.pyx":39 - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: - * cdef int b = paths.shape[0] # <<<<<<<<<<<<<< - * cdef int i - * for i in prange(b, nogil=True): - */ - __pyx_v_b = (__pyx_v_paths.shape[0]); - - /* "monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - { - #ifdef WITH_THREAD - PyThreadState *_save; - _save = NULL; - if (PyGILState_Check()) { - Py_UNBLOCK_THREADS - } - __Pyx_FastGIL_Remember(); - #endif - /*try:*/ { - __pyx_t_1 = __pyx_v_b; - { - int __pyx_parallel_temp0 = ((int)0xbad0bad0); - const char *__pyx_parallel_filename = NULL; int __pyx_parallel_lineno = 0, __pyx_parallel_clineno = 0; - PyObject *__pyx_parallel_exc_type = NULL, *__pyx_parallel_exc_value = NULL, *__pyx_parallel_exc_tb = NULL; - int __pyx_parallel_why; - __pyx_parallel_why = 0; - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) (x) - #define unlikely(x) (x) - #endif - __pyx_t_3 = (__pyx_t_1 - 0 + 1 - 1/abs(1)) / 1; - if (__pyx_t_3 > 0) - { - #ifdef _OPENMP - #pragma omp parallel private(__pyx_t_6, __pyx_t_7) firstprivate(__pyx_t_4, __pyx_t_5) private(__pyx_filename, __pyx_lineno, __pyx_clineno) shared(__pyx_parallel_why, __pyx_parallel_exc_type, __pyx_parallel_exc_value, __pyx_parallel_exc_tb) - #endif /* _OPENMP */ - { - #ifdef _OPENMP - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - Py_BEGIN_ALLOW_THREADS - #endif /* _OPENMP */ - #ifdef _OPENMP - #pragma omp for firstprivate(__pyx_v_i) lastprivate(__pyx_v_i) - #endif /* _OPENMP */ - for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_3; __pyx_t_2++){ - if (__pyx_parallel_why < 2) - { - __pyx_v_i = (int)(0 + 1 * __pyx_t_2); - - /* "monotonic_align/core.pyx":42 - * cdef int i - * for i in prange(b, nogil=True): - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) # <<<<<<<<<<<<<< - */ - __pyx_t_4.data = __pyx_v_paths.data; - __pyx_t_4.memview = __pyx_v_paths.memview; - __PYX_INC_MEMVIEW(&__pyx_t_4, 0); - { - Py_ssize_t __pyx_tmp_idx = 
__pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_paths.strides[0]; - __pyx_t_4.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_4.shape[0] = __pyx_v_paths.shape[1]; -__pyx_t_4.strides[0] = __pyx_v_paths.strides[1]; - __pyx_t_4.suboffsets[0] = -1; - -__pyx_t_4.shape[1] = __pyx_v_paths.shape[2]; -__pyx_t_4.strides[1] = __pyx_v_paths.strides[2]; - __pyx_t_4.suboffsets[1] = -1; - -__pyx_t_5.data = __pyx_v_values.data; - __pyx_t_5.memview = __pyx_v_values.memview; - __PYX_INC_MEMVIEW(&__pyx_t_5, 0); - { - Py_ssize_t __pyx_tmp_idx = __pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_values.strides[0]; - __pyx_t_5.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_5.shape[0] = __pyx_v_values.shape[1]; -__pyx_t_5.strides[0] = __pyx_v_values.strides[1]; - __pyx_t_5.suboffsets[0] = -1; - -__pyx_t_5.shape[1] = __pyx_v_values.shape[2]; -__pyx_t_5.strides[1] = __pyx_v_values.strides[2]; - __pyx_t_5.suboffsets[1] = -1; - -__pyx_t_6 = __pyx_v_i; - __pyx_t_7 = __pyx_v_i; - __pyx_f_15monotonic_align_4core_maximum_path_each(__pyx_t_4, __pyx_t_5, (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_ys.data) + __pyx_t_6)) ))), (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_xs.data) + __pyx_t_7)) ))), NULL); if (unlikely(__Pyx_ErrOccurredWithGIL())) __PYX_ERR(0, 42, __pyx_L8_error) - __PYX_XCLEAR_MEMVIEW(&__pyx_t_4, 0); - __pyx_t_4.memview = NULL; __pyx_t_4.data = NULL; - __PYX_XCLEAR_MEMVIEW(&__pyx_t_5, 0); - __pyx_t_5.memview = NULL; __pyx_t_5.data = NULL; - goto __pyx_L11; - __pyx_L8_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - #ifdef _OPENMP - #pragma omp flush(__pyx_parallel_exc_type) - #endif /* _OPENMP */ - if (!__pyx_parallel_exc_type) { - __Pyx_ErrFetchWithState(&__pyx_parallel_exc_type, &__pyx_parallel_exc_value, &__pyx_parallel_exc_tb); - __pyx_parallel_filename = __pyx_filename; __pyx_parallel_lineno = __pyx_lineno; __pyx_parallel_clineno = __pyx_clineno; - __Pyx_GOTREF(__pyx_parallel_exc_type); - } - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_parallel_why = 4; - goto __pyx_L10; - __pyx_L10:; - #ifdef _OPENMP - #pragma omp critical(__pyx_parallel_lastprivates0) - #endif /* _OPENMP */ - { - __pyx_parallel_temp0 = __pyx_v_i; - } - __pyx_L11:; - #ifdef _OPENMP - #pragma omp flush(__pyx_parallel_why) - #endif /* _OPENMP */ - } - } - #ifdef _OPENMP - Py_END_ALLOW_THREADS - #else -{ -#ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - #endif /* _OPENMP */ - /* Clean up any temporaries */ - __PYX_XCLEAR_MEMVIEW(&__pyx_t_4, 0); - __pyx_t_4.memview = NULL; __pyx_t_4.data = NULL; - __PYX_XCLEAR_MEMVIEW(&__pyx_t_5, 0); - __pyx_t_5.memview = NULL; __pyx_t_5.data = NULL; - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - #ifndef _OPENMP -} -#endif /* _OPENMP */ - } - } - if (__pyx_parallel_exc_type) { - /* This may have been overridden by a continue, break or return in another thread. Prefer the error. 
*/ - __pyx_parallel_why = 4; - } - if (__pyx_parallel_why) { - __pyx_v_i = __pyx_parallel_temp0; - switch (__pyx_parallel_why) { - case 4: - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_GIVEREF(__pyx_parallel_exc_type); - __Pyx_ErrRestoreWithState(__pyx_parallel_exc_type, __pyx_parallel_exc_value, __pyx_parallel_exc_tb); - __pyx_filename = __pyx_parallel_filename; __pyx_lineno = __pyx_parallel_lineno; __pyx_clineno = __pyx_parallel_clineno; - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - goto __pyx_L4_error; - } - } - } - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) - #endif - } - - /* "monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - /*finally:*/ { - /*normal exit:*/{ - #ifdef WITH_THREAD - __Pyx_FastGIL_Forget(); - if (_save) { - Py_BLOCK_THREADS - } - #endif - goto __pyx_L5; - } - __pyx_L4_error: { - #ifdef WITH_THREAD - __Pyx_FastGIL_Forget(); - if (_save) { - Py_BLOCK_THREADS - } - #endif - goto __pyx_L1_error; - } - __pyx_L5:; - } - } - - /* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - - /* function exit code */ - goto __pyx_L0; - __pyx_L1_error:; - #ifdef WITH_THREAD - __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __PYX_XCLEAR_MEMVIEW(&__pyx_t_4, 1); - __PYX_XCLEAR_MEMVIEW(&__pyx_t_5, 1); - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - __pyx_L0:; -} - -/* Python wrapper */ -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_15monotonic_align_4core_1maximum_path_c = {"maximum_path_c", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_15monotonic_align_4core_1maximum_path_c, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - __Pyx_memviewslice __pyx_v_paths = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_values = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_ys = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_xs = { 0, 0, { 0 }, { 0 }, { 0 } }; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - 
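/* maximum_path_c is the batch driver for the monotonic-alignment dynamic program above: for each batch element i (run in parallel via prange/OpenMP), maximum_path_each first accumulates forward scores in place, value[y, x] += max(value[y-1, x], value[y-1, x-1]), restricting x to the feasible band max(0, t_x + y - t_y) <= x < min(t_x, y + 1), and then backtracks from index = t_x - 1, writing path[y, index] = 1 and stepping index down whenever it is forced (index == y) or the diagonal predecessor value[y-1, index-1] scores strictly higher than value[y-1, index]. */ -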
__Pyx_RefNannySetupContext("maximum_path_c (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_paths,&__pyx_n_s_values,&__pyx_n_s_t_ys,&__pyx_n_s_t_xs,0}; - PyObject* values[4] = {0,0,0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_paths)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 38, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_values)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 38, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 1); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_t_ys)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 38, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 2); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_t_xs)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 38, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 3); __PYX_ERR(0, 38, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "maximum_path_c") < 0)) __PYX_ERR(0, 38, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_paths = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(values[0], PyBUF_WRITABLE); if (unlikely(!__pyx_v_paths.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_values = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(values[1], PyBUF_WRITABLE); if (unlikely(!__pyx_v_values.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_ys = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[2], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_ys.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_xs = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[3], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_xs.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, __pyx_nargs); __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_L3_error:; - __PYX_XCLEAR_MEMVIEW(&__pyx_v_paths, 1); - __PYX_XCLEAR_MEMVIEW(&__pyx_v_values, 1); - __PYX_XCLEAR_MEMVIEW(&__pyx_v_t_ys, 1); - __PYX_XCLEAR_MEMVIEW(&__pyx_v_t_xs, 1); - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", 
__pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15monotonic_align_4core_maximum_path_c(__pyx_self, __pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs); - - /* function exit code */ - __PYX_XCLEAR_MEMVIEW(&__pyx_v_paths, 1); - __PYX_XCLEAR_MEMVIEW(&__pyx_v_values, 1); - __PYX_XCLEAR_MEMVIEW(&__pyx_v_t_ys, 1); - __PYX_XCLEAR_MEMVIEW(&__pyx_v_t_xs, 1); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("maximum_path_c", 0); - __Pyx_XDECREF(__pyx_r); - if (unlikely(!__pyx_v_paths.memview)) { __Pyx_RaiseUnboundLocalError("paths"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_values.memview)) { __Pyx_RaiseUnboundLocalError("values"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_ys.memview)) { __Pyx_RaiseUnboundLocalError("t_ys"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_xs.memview)) { __Pyx_RaiseUnboundLocalError("t_xs"); __PYX_ERR(0, 38, __pyx_L1_error) } - __pyx_f_15monotonic_align_4core_maximum_path_c(__pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs, 0); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 38, __pyx_L1_error) - __pyx_t_1 = __Pyx_void_to_None(NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static struct __pyx_vtabstruct_array __pyx_vtable_array; - -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_array_obj *p; - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (likely(!__Pyx_PyType_HasFeature(t, Py_TPFLAGS_IS_ABSTRACT))) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - #endif - p = ((struct __pyx_array_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_array; - p->mode = ((PyObject*)Py_None); Py_INCREF(Py_None); - p->_format = ((PyObject*)Py_None); Py_INCREF(Py_None); - if (unlikely(__pyx_array___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_array(PyObject *o) { - struct __pyx_array_obj *p = (struct __pyx_array_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && (!PyType_IS_GC(Py_TYPE(o)) || !__Pyx_PyObject_GC_IsFinalized(o))) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc_array) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - { - PyObject *etype, *eval, *etb; - 
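/* Temporarily bump the refcount and stash any pending exception so that __pyx_array___dealloc__ can run Python code without re-entering deallocation or clobbering an in-flight error. */ -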
PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_array___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->mode); - Py_CLEAR(p->_format); - (*Py_TYPE(o)->tp_free)(o); -} -static PyObject *__pyx_sq_item_array(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_array(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_array___setitem__(o, i, v); - } - else { - __Pyx_TypeName o_type_name; - o_type_name = __Pyx_PyType_GetName(Py_TYPE(o)); - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by " __Pyx_FMT_TYPENAME, o_type_name); - __Pyx_DECREF_TypeName(o_type_name); - return -1; - } -} - -static PyObject *__pyx_tp_getattro_array(PyObject *o, PyObject *n) { - PyObject *v = __Pyx_PyObject_GenericGetAttr(o, n); - if (!v && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - v = __pyx_array___getattr__(o, n); - } - return v; -} - -static PyObject *__pyx_getprop___pyx_array_memview(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(o); -} - -static PyMethodDef __pyx_methods_array[] = { - {"__getattr__", (PyCFunction)__pyx_array___getattr__, METH_O|METH_COEXIST, 0}, - {"__reduce_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_array_1__reduce_cython__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"__setstate_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_array_3__setstate_cython__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_array[] = { - {(char *)"memview", __pyx_getprop___pyx_array_memview, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; -#if CYTHON_USE_TYPE_SPECS -#if !CYTHON_COMPILING_IN_LIMITED_API - -static PyBufferProcs __pyx_tp_as_buffer_array = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_array_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; -#endif -static PyType_Slot __pyx_type___pyx_array_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_array}, - {Py_sq_length, (void *)__pyx_array___len__}, - {Py_sq_item, (void *)__pyx_sq_item_array}, - {Py_mp_length, (void *)__pyx_array___len__}, - {Py_mp_subscript, (void *)__pyx_array___getitem__}, - {Py_mp_ass_subscript, (void *)__pyx_mp_ass_subscript_array}, - {Py_tp_getattro, (void *)__pyx_tp_getattro_array}, - #if defined(Py_bf_getbuffer) - {Py_bf_getbuffer, (void *)__pyx_array_getbuffer}, - #endif - {Py_tp_methods, (void *)__pyx_methods_array}, - {Py_tp_getset, (void *)__pyx_getsets_array}, - {Py_tp_new, (void *)__pyx_tp_new_array}, - {0, 0}, -}; -static PyType_Spec __pyx_type___pyx_array_spec = { - "monotonic_align.core.array", - sizeof(struct __pyx_array_obj), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_SEQUENCE, - __pyx_type___pyx_array_slots, -}; -#else - -static PySequenceMethods __pyx_tp_as_sequence_array = { - __pyx_array___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_array, /*sq_item*/ - 0, /*sq_slice*/ - 0, 
/*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_array = { - __pyx_array___len__, /*mp_length*/ - __pyx_array___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_array, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_array = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_array_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_array = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.""array", /*tp_name*/ - sizeof(struct __pyx_array_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_array, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_array, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_array, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - __pyx_tp_getattro_array, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_array, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_SEQUENCE, /*tp_flags*/ - 0, /*tp_doc*/ - 0, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_array, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_array, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_array, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - struct __pyx_MemviewEnum_obj *p; - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (likely(!__Pyx_PyType_HasFeature(t, Py_TPFLAGS_IS_ABSTRACT))) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - #endif - p = ((struct __pyx_MemviewEnum_obj *)o); - p->name = Py_None; Py_INCREF(Py_None); - return o; -} - -static void __pyx_tp_dealloc_Enum(PyObject *o) { - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - 
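/* Give tp_finalize one chance to run (it may resurrect the object), then untrack from the GC before the name slot is cleared and the memory is freed. */ -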
#if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && !__Pyx_PyObject_GC_IsFinalized(o)) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc_Enum) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - PyObject_GC_UnTrack(o); - Py_CLEAR(p->name); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_Enum(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - if (p->name) { - e = (*v)(p->name, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_Enum(PyObject *o) { - PyObject* tmp; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - tmp = ((PyObject*)p->name); - p->name = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} - -static PyObject *__pyx_specialmethod___pyx_MemviewEnum___repr__(PyObject *self, CYTHON_UNUSED PyObject *arg) { - return __pyx_MemviewEnum___repr__(self); -} - -static PyMethodDef __pyx_methods_Enum[] = { - {"__repr__", (PyCFunction)__pyx_specialmethod___pyx_MemviewEnum___repr__, METH_NOARGS|METH_COEXIST, 0}, - {"__reduce_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_MemviewEnum_1__reduce_cython__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"__setstate_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_MemviewEnum_3__setstate_cython__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {0, 0, 0, 0} -}; -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type___pyx_MemviewEnum_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_Enum}, - {Py_tp_repr, (void *)__pyx_MemviewEnum___repr__}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_Enum}, - {Py_tp_clear, (void *)__pyx_tp_clear_Enum}, - {Py_tp_methods, (void *)__pyx_methods_Enum}, - {Py_tp_init, (void *)__pyx_MemviewEnum___init__}, - {Py_tp_new, (void *)__pyx_tp_new_Enum}, - {0, 0}, -}; -static PyType_Spec __pyx_type___pyx_MemviewEnum_spec = { - "monotonic_align.core.Enum", - sizeof(struct __pyx_MemviewEnum_obj), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, - __pyx_type___pyx_MemviewEnum_slots, -}; -#else - -static PyTypeObject __pyx_type___pyx_MemviewEnum = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.""Enum", /*tp_name*/ - sizeof(struct __pyx_MemviewEnum_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_Enum, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_MemviewEnum___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_Enum, /*tp_traverse*/ - __pyx_tp_clear_Enum, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_Enum, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, 
/*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - __pyx_MemviewEnum___init__, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_Enum, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif -static struct __pyx_vtabstruct_memoryview __pyx_vtable_memoryview; - -static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryview_obj *p; - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (likely(!__Pyx_PyType_HasFeature(t, Py_TPFLAGS_IS_ABSTRACT))) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - #endif - p = ((struct __pyx_memoryview_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_memoryview; - p->obj = Py_None; Py_INCREF(Py_None); - p->_size = Py_None; Py_INCREF(Py_None); - p->_array_interface = Py_None; Py_INCREF(Py_None); - p->view.obj = NULL; - if (unlikely(__pyx_memoryview___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_memoryview(PyObject *o) { - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && !__Pyx_PyObject_GC_IsFinalized(o)) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc_memoryview) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryview___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->obj); - Py_CLEAR(p->_size); - Py_CLEAR(p->_array_interface); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_memoryview(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - if (p->obj) { - e = (*v)(p->obj, a); if (e) return e; - } - if (p->_size) { - e = (*v)(p->_size, a); if (e) return e; - } - if (p->_array_interface) { - e = (*v)(p->_array_interface, a); if (e) return e; - } - if (p->view.obj) { - e = (*v)(p->view.obj, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_memoryview(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - tmp = ((PyObject*)p->obj); - p->obj = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->_size); - p->_size = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = 
((PyObject*)p->_array_interface); - p->_array_interface = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - Py_CLEAR(p->view.obj); - return 0; -} -static PyObject *__pyx_sq_item_memoryview(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_memoryview(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_memoryview___setitem__(o, i, v); - } - else { - __Pyx_TypeName o_type_name; - o_type_name = __Pyx_PyType_GetName(Py_TYPE(o)); - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by " __Pyx_FMT_TYPENAME, o_type_name); - __Pyx_DECREF_TypeName(o_type_name); - return -1; - } -} - -static PyObject *__pyx_getprop___pyx_memoryview_T(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_base(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_shape(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_strides(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_suboffsets(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_ndim(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_itemsize(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_nbytes(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_size(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(o); -} - -static PyObject *__pyx_specialmethod___pyx_memoryview___repr__(PyObject *self, CYTHON_UNUSED PyObject *arg) { - return __pyx_memoryview___repr__(self); -} - -static PyMethodDef __pyx_methods_memoryview[] = { - {"__repr__", (PyCFunction)__pyx_specialmethod___pyx_memoryview___repr__, METH_NOARGS|METH_COEXIST, 0}, - {"is_c_contig", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_memoryview_is_c_contig, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"is_f_contig", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_memoryview_is_f_contig, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"copy", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_memoryview_copy, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"copy_fortran", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_memoryview_copy_fortran, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"__reduce_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_memoryview_1__reduce_cython__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"__setstate_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_memoryview_3__setstate_cython__, 
__Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_memoryview[] = { - {(char *)"T", __pyx_getprop___pyx_memoryview_T, 0, (char *)0, 0}, - {(char *)"base", __pyx_getprop___pyx_memoryview_base, 0, (char *)0, 0}, - {(char *)"shape", __pyx_getprop___pyx_memoryview_shape, 0, (char *)0, 0}, - {(char *)"strides", __pyx_getprop___pyx_memoryview_strides, 0, (char *)0, 0}, - {(char *)"suboffsets", __pyx_getprop___pyx_memoryview_suboffsets, 0, (char *)0, 0}, - {(char *)"ndim", __pyx_getprop___pyx_memoryview_ndim, 0, (char *)0, 0}, - {(char *)"itemsize", __pyx_getprop___pyx_memoryview_itemsize, 0, (char *)0, 0}, - {(char *)"nbytes", __pyx_getprop___pyx_memoryview_nbytes, 0, (char *)0, 0}, - {(char *)"size", __pyx_getprop___pyx_memoryview_size, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; -#if CYTHON_USE_TYPE_SPECS -#if !CYTHON_COMPILING_IN_LIMITED_API - -static PyBufferProcs __pyx_tp_as_buffer_memoryview = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_memoryview_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; -#endif -static PyType_Slot __pyx_type___pyx_memoryview_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_memoryview}, - {Py_tp_repr, (void *)__pyx_memoryview___repr__}, - {Py_sq_length, (void *)__pyx_memoryview___len__}, - {Py_sq_item, (void *)__pyx_sq_item_memoryview}, - {Py_mp_length, (void *)__pyx_memoryview___len__}, - {Py_mp_subscript, (void *)__pyx_memoryview___getitem__}, - {Py_mp_ass_subscript, (void *)__pyx_mp_ass_subscript_memoryview}, - {Py_tp_str, (void *)__pyx_memoryview___str__}, - #if defined(Py_bf_getbuffer) - {Py_bf_getbuffer, (void *)__pyx_memoryview_getbuffer}, - #endif - {Py_tp_traverse, (void *)__pyx_tp_traverse_memoryview}, - {Py_tp_clear, (void *)__pyx_tp_clear_memoryview}, - {Py_tp_methods, (void *)__pyx_methods_memoryview}, - {Py_tp_getset, (void *)__pyx_getsets_memoryview}, - {Py_tp_new, (void *)__pyx_tp_new_memoryview}, - {0, 0}, -}; -static PyType_Spec __pyx_type___pyx_memoryview_spec = { - "monotonic_align.core.memoryview", - sizeof(struct __pyx_memoryview_obj), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, - __pyx_type___pyx_memoryview_slots, -}; -#else - -static PySequenceMethods __pyx_tp_as_sequence_memoryview = { - __pyx_memoryview___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_memoryview, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_memoryview = { - __pyx_memoryview___len__, /*mp_length*/ - __pyx_memoryview___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_memoryview, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_memoryview = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_memoryview_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_memoryview = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.""memoryview", /*tp_name*/ - sizeof(struct 
__pyx_memoryview_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_memoryview, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_memoryview___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_memoryview, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_memoryview, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - __pyx_memoryview___str__, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_memoryview, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_memoryview, /*tp_traverse*/ - __pyx_tp_clear_memoryview, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_memoryview, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_memoryview, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_memoryview, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif -static struct __pyx_vtabstruct__memoryviewslice __pyx_vtable__memoryviewslice; - -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryviewslice_obj *p; - PyObject *o = __pyx_tp_new_memoryview(t, a, k); - if (unlikely(!o)) return 0; - p = ((struct __pyx_memoryviewslice_obj *)o); - p->__pyx_base.__pyx_vtab = (struct __pyx_vtabstruct_memoryview*)__pyx_vtabptr__memoryviewslice; - p->from_object = Py_None; Py_INCREF(Py_None); - p->from_slice.memview = NULL; - return o; -} - -static void __pyx_tp_dealloc__memoryviewslice(PyObject *o) { - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && !__Pyx_PyObject_GC_IsFinalized(o)) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc__memoryviewslice) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryviewslice___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->from_object); - PyObject_GC_Track(o); - __pyx_tp_dealloc_memoryview(o); -} - -static int 
__pyx_tp_traverse__memoryviewslice(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - e = __pyx_tp_traverse_memoryview(o, v, a); if (e) return e; - if (p->from_object) { - e = (*v)(p->from_object, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear__memoryviewslice(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - __pyx_tp_clear_memoryview(o); - tmp = ((PyObject*)p->from_object); - p->from_object = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - __PYX_XCLEAR_MEMVIEW(&p->from_slice, 1); - return 0; -} - -static PyMethodDef __pyx_methods__memoryviewslice[] = { - {"__reduce_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_memoryviewslice_1__reduce_cython__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"__setstate_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_memoryviewslice_3__setstate_cython__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {0, 0, 0, 0} -}; -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type___pyx_memoryviewslice_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc__memoryviewslice}, - {Py_tp_doc, (void *)PyDoc_STR("Internal class for passing memoryview slices to Python")}, - {Py_tp_traverse, (void *)__pyx_tp_traverse__memoryviewslice}, - {Py_tp_clear, (void *)__pyx_tp_clear__memoryviewslice}, - {Py_tp_methods, (void *)__pyx_methods__memoryviewslice}, - {Py_tp_new, (void *)__pyx_tp_new__memoryviewslice}, - {0, 0}, -}; -static PyType_Spec __pyx_type___pyx_memoryviewslice_spec = { - "monotonic_align.core._memoryviewslice", - sizeof(struct __pyx_memoryviewslice_obj), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_SEQUENCE, - __pyx_type___pyx_memoryviewslice_slots, -}; -#else - -static PyTypeObject __pyx_type___pyx_memoryviewslice = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.""_memoryviewslice", /*tp_name*/ - sizeof(struct __pyx_memoryviewslice_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc__memoryviewslice, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - #if CYTHON_COMPILING_IN_PYPY || 0 - __pyx_memoryview___repr__, /*tp_repr*/ - #else - 0, /*tp_repr*/ - #endif - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - #if CYTHON_COMPILING_IN_PYPY || 0 - __pyx_memoryview___str__, /*tp_str*/ - #else - 0, /*tp_str*/ - #endif - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_SEQUENCE, /*tp_flags*/ - PyDoc_STR("Internal class for passing memoryview slices to Python"), /*tp_doc*/ - __pyx_tp_traverse__memoryviewslice, /*tp_traverse*/ - __pyx_tp_clear__memoryviewslice, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods__memoryviewslice, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, 
/*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new__memoryviewslice, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static PyMethodDef __pyx_methods[] = { - {0, 0, 0, 0} -}; -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif -/* #### Code section: pystring_table ### */ - -static int __Pyx_CreateStringTabAndInitStrings(void) { - __Pyx_StringTabEntry __pyx_string_tab[] = { - {&__pyx_kp_u_, __pyx_k_, sizeof(__pyx_k_), 0, 1, 0, 0}, - {&__pyx_n_s_ASCII, __pyx_k_ASCII, sizeof(__pyx_k_ASCII), 0, 0, 1, 1}, - {&__pyx_kp_s_All_dimensions_preceding_dimensi, __pyx_k_All_dimensions_preceding_dimensi, sizeof(__pyx_k_All_dimensions_preceding_dimensi), 0, 0, 1, 0}, - {&__pyx_n_s_AssertionError, __pyx_k_AssertionError, sizeof(__pyx_k_AssertionError), 0, 0, 1, 1}, - {&__pyx_kp_s_Buffer_view_does_not_expose_stri, __pyx_k_Buffer_view_does_not_expose_stri, sizeof(__pyx_k_Buffer_view_does_not_expose_stri), 0, 0, 1, 0}, - {&__pyx_kp_s_Can_only_create_a_buffer_that_is, __pyx_k_Can_only_create_a_buffer_that_is, sizeof(__pyx_k_Can_only_create_a_buffer_that_is), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_assign_to_read_only_memor, __pyx_k_Cannot_assign_to_read_only_memor, sizeof(__pyx_k_Cannot_assign_to_read_only_memor), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_create_writable_memory_vi, __pyx_k_Cannot_create_writable_memory_vi, sizeof(__pyx_k_Cannot_create_writable_memory_vi), 0, 0, 1, 0}, - {&__pyx_kp_u_Cannot_index_with_type, __pyx_k_Cannot_index_with_type, sizeof(__pyx_k_Cannot_index_with_type), 0, 1, 0, 0}, - {&__pyx_kp_s_Cannot_transpose_memoryview_with, __pyx_k_Cannot_transpose_memoryview_with, sizeof(__pyx_k_Cannot_transpose_memoryview_with), 0, 0, 1, 0}, - {&__pyx_kp_s_Dimension_d_is_not_direct, __pyx_k_Dimension_d_is_not_direct, sizeof(__pyx_k_Dimension_d_is_not_direct), 0, 0, 1, 0}, - {&__pyx_n_s_Ellipsis, __pyx_k_Ellipsis, sizeof(__pyx_k_Ellipsis), 0, 0, 1, 1}, - {&__pyx_kp_s_Empty_shape_tuple_for_cython_arr, __pyx_k_Empty_shape_tuple_for_cython_arr, sizeof(__pyx_k_Empty_shape_tuple_for_cython_arr), 0, 0, 1, 0}, - {&__pyx_kp_s_Incompatible_checksums_0x_x_vs_0, __pyx_k_Incompatible_checksums_0x_x_vs_0, sizeof(__pyx_k_Incompatible_checksums_0x_x_vs_0), 0, 0, 1, 0}, - {&__pyx_n_s_IndexError, __pyx_k_IndexError, sizeof(__pyx_k_IndexError), 0, 0, 1, 1}, - {&__pyx_kp_s_Index_out_of_bounds_axis_d, __pyx_k_Index_out_of_bounds_axis_d, sizeof(__pyx_k_Index_out_of_bounds_axis_d), 0, 0, 1, 0}, - {&__pyx_kp_s_Indirect_dimensions_not_supporte, __pyx_k_Indirect_dimensions_not_supporte, sizeof(__pyx_k_Indirect_dimensions_not_supporte), 0, 0, 1, 0}, - {&__pyx_kp_u_Invalid_mode_expected_c_or_fortr, __pyx_k_Invalid_mode_expected_c_or_fortr, 
sizeof(__pyx_k_Invalid_mode_expected_c_or_fortr), 0, 1, 0, 0}, - {&__pyx_kp_u_Invalid_shape_in_axis, __pyx_k_Invalid_shape_in_axis, sizeof(__pyx_k_Invalid_shape_in_axis), 0, 1, 0, 0}, - {&__pyx_n_s_MemoryError, __pyx_k_MemoryError, sizeof(__pyx_k_MemoryError), 0, 0, 1, 1}, - {&__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_k_MemoryView_of_r_at_0x_x, sizeof(__pyx_k_MemoryView_of_r_at_0x_x), 0, 0, 1, 0}, - {&__pyx_kp_s_MemoryView_of_r_object, __pyx_k_MemoryView_of_r_object, sizeof(__pyx_k_MemoryView_of_r_object), 0, 0, 1, 0}, - {&__pyx_n_b_O, __pyx_k_O, sizeof(__pyx_k_O), 0, 0, 0, 1}, - {&__pyx_kp_u_Out_of_bounds_on_buffer_access_a, __pyx_k_Out_of_bounds_on_buffer_access_a, sizeof(__pyx_k_Out_of_bounds_on_buffer_access_a), 0, 1, 0, 0}, - {&__pyx_n_s_PickleError, __pyx_k_PickleError, sizeof(__pyx_k_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_Sequence, __pyx_k_Sequence, sizeof(__pyx_k_Sequence), 0, 0, 1, 1}, - {&__pyx_kp_s_Step_may_not_be_zero_axis_d, __pyx_k_Step_may_not_be_zero_axis_d, sizeof(__pyx_k_Step_may_not_be_zero_axis_d), 0, 0, 1, 0}, - {&__pyx_n_s_TypeError, __pyx_k_TypeError, sizeof(__pyx_k_TypeError), 0, 0, 1, 1}, - {&__pyx_kp_s_Unable_to_convert_item_to_object, __pyx_k_Unable_to_convert_item_to_object, sizeof(__pyx_k_Unable_to_convert_item_to_object), 0, 0, 1, 0}, - {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1}, - {&__pyx_n_s_View_MemoryView, __pyx_k_View_MemoryView, sizeof(__pyx_k_View_MemoryView), 0, 0, 1, 1}, - {&__pyx_kp_u__2, __pyx_k__2, sizeof(__pyx_k__2), 0, 1, 0, 0}, - {&__pyx_n_s__23, __pyx_k__23, sizeof(__pyx_k__23), 0, 0, 1, 1}, - {&__pyx_n_s__3, __pyx_k__3, sizeof(__pyx_k__3), 0, 0, 1, 1}, - {&__pyx_kp_u__6, __pyx_k__6, sizeof(__pyx_k__6), 0, 1, 0, 0}, - {&__pyx_kp_u__7, __pyx_k__7, sizeof(__pyx_k__7), 0, 1, 0, 0}, - {&__pyx_n_s_abc, __pyx_k_abc, sizeof(__pyx_k_abc), 0, 0, 1, 1}, - {&__pyx_n_s_allocate_buffer, __pyx_k_allocate_buffer, sizeof(__pyx_k_allocate_buffer), 0, 0, 1, 1}, - {&__pyx_kp_u_and, __pyx_k_and, sizeof(__pyx_k_and), 0, 1, 0, 0}, - {&__pyx_n_s_asyncio_coroutines, __pyx_k_asyncio_coroutines, sizeof(__pyx_k_asyncio_coroutines), 0, 0, 1, 1}, - {&__pyx_n_s_base, __pyx_k_base, sizeof(__pyx_k_base), 0, 0, 1, 1}, - {&__pyx_n_s_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 0, 1, 1}, - {&__pyx_n_u_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 1, 0, 1}, - {&__pyx_n_s_class, __pyx_k_class, sizeof(__pyx_k_class), 0, 0, 1, 1}, - {&__pyx_n_s_class_getitem, __pyx_k_class_getitem, sizeof(__pyx_k_class_getitem), 0, 0, 1, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_n_s_collections, __pyx_k_collections, sizeof(__pyx_k_collections), 0, 0, 1, 1}, - {&__pyx_kp_s_collections_abc, __pyx_k_collections_abc, sizeof(__pyx_k_collections_abc), 0, 0, 1, 0}, - {&__pyx_kp_s_contiguous_and_direct, __pyx_k_contiguous_and_direct, sizeof(__pyx_k_contiguous_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_contiguous_and_indirect, __pyx_k_contiguous_and_indirect, sizeof(__pyx_k_contiguous_and_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_core_pyx, __pyx_k_core_pyx, sizeof(__pyx_k_core_pyx), 0, 0, 1, 0}, - {&__pyx_n_s_count, __pyx_k_count, sizeof(__pyx_k_count), 0, 0, 1, 1}, - {&__pyx_n_s_dict, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 1, 1}, - {&__pyx_kp_u_disable, __pyx_k_disable, sizeof(__pyx_k_disable), 0, 1, 0, 0}, - {&__pyx_n_s_dtype_is_object, __pyx_k_dtype_is_object, sizeof(__pyx_k_dtype_is_object), 0, 0, 1, 1}, - {&__pyx_kp_u_enable, __pyx_k_enable, sizeof(__pyx_k_enable), 0, 1, 0, 0}, - {&__pyx_n_s_encode, 
__pyx_k_encode, sizeof(__pyx_k_encode), 0, 0, 1, 1}, - {&__pyx_n_s_enumerate, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1}, - {&__pyx_n_s_error, __pyx_k_error, sizeof(__pyx_k_error), 0, 0, 1, 1}, - {&__pyx_n_s_flags, __pyx_k_flags, sizeof(__pyx_k_flags), 0, 0, 1, 1}, - {&__pyx_n_s_format, __pyx_k_format, sizeof(__pyx_k_format), 0, 0, 1, 1}, - {&__pyx_n_s_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 0, 1, 1}, - {&__pyx_n_u_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 1, 0, 1}, - {&__pyx_kp_u_gc, __pyx_k_gc, sizeof(__pyx_k_gc), 0, 1, 0, 0}, - {&__pyx_n_s_getstate, __pyx_k_getstate, sizeof(__pyx_k_getstate), 0, 0, 1, 1}, - {&__pyx_kp_u_got, __pyx_k_got, sizeof(__pyx_k_got), 0, 1, 0, 0}, - {&__pyx_kp_u_got_differing_extents_in_dimensi, __pyx_k_got_differing_extents_in_dimensi, sizeof(__pyx_k_got_differing_extents_in_dimensi), 0, 1, 0, 0}, - {&__pyx_n_s_id, __pyx_k_id, sizeof(__pyx_k_id), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_s_index, __pyx_k_index, sizeof(__pyx_k_index), 0, 0, 1, 1}, - {&__pyx_n_s_initializing, __pyx_k_initializing, sizeof(__pyx_k_initializing), 0, 0, 1, 1}, - {&__pyx_n_s_is_coroutine, __pyx_k_is_coroutine, sizeof(__pyx_k_is_coroutine), 0, 0, 1, 1}, - {&__pyx_kp_u_isenabled, __pyx_k_isenabled, sizeof(__pyx_k_isenabled), 0, 1, 0, 0}, - {&__pyx_n_s_itemsize, __pyx_k_itemsize, sizeof(__pyx_k_itemsize), 0, 0, 1, 1}, - {&__pyx_kp_s_itemsize_0_for_cython_array, __pyx_k_itemsize_0_for_cython_array, sizeof(__pyx_k_itemsize_0_for_cython_array), 0, 0, 1, 0}, - {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {&__pyx_n_s_maximum_path_c, __pyx_k_maximum_path_c, sizeof(__pyx_k_maximum_path_c), 0, 0, 1, 1}, - {&__pyx_n_s_memview, __pyx_k_memview, sizeof(__pyx_k_memview), 0, 0, 1, 1}, - {&__pyx_n_s_mode, __pyx_k_mode, sizeof(__pyx_k_mode), 0, 0, 1, 1}, - {&__pyx_n_s_monotonic_align_core, __pyx_k_monotonic_align_core, sizeof(__pyx_k_monotonic_align_core), 0, 0, 1, 1}, - {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {&__pyx_n_s_name_2, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1}, - {&__pyx_n_s_ndim, __pyx_k_ndim, sizeof(__pyx_k_ndim), 0, 0, 1, 1}, - {&__pyx_n_s_new, __pyx_k_new, sizeof(__pyx_k_new), 0, 0, 1, 1}, - {&__pyx_kp_s_no_default___reduce___due_to_non, __pyx_k_no_default___reduce___due_to_non, sizeof(__pyx_k_no_default___reduce___due_to_non), 0, 0, 1, 0}, - {&__pyx_n_s_obj, __pyx_k_obj, sizeof(__pyx_k_obj), 0, 0, 1, 1}, - {&__pyx_n_s_pack, __pyx_k_pack, sizeof(__pyx_k_pack), 0, 0, 1, 1}, - {&__pyx_n_s_paths, __pyx_k_paths, sizeof(__pyx_k_paths), 0, 0, 1, 1}, - {&__pyx_n_s_pickle, __pyx_k_pickle, sizeof(__pyx_k_pickle), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_PickleError, __pyx_k_pyx_PickleError, sizeof(__pyx_k_pyx_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_checksum, __pyx_k_pyx_checksum, sizeof(__pyx_k_pyx_checksum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_result, __pyx_k_pyx_result, sizeof(__pyx_k_pyx_result), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_state, __pyx_k_pyx_state, sizeof(__pyx_k_pyx_state), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_type, __pyx_k_pyx_type, sizeof(__pyx_k_pyx_type), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_unpickle_Enum, __pyx_k_pyx_unpickle_Enum, sizeof(__pyx_k_pyx_unpickle_Enum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_vtable, __pyx_k_pyx_vtable, sizeof(__pyx_k_pyx_vtable), 0, 0, 1, 1}, - {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, - {&__pyx_n_s_reduce, __pyx_k_reduce, sizeof(__pyx_k_reduce), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_cython, 
__pyx_k_reduce_cython, sizeof(__pyx_k_reduce_cython), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_ex, __pyx_k_reduce_ex, sizeof(__pyx_k_reduce_ex), 0, 0, 1, 1}, - {&__pyx_n_s_register, __pyx_k_register, sizeof(__pyx_k_register), 0, 0, 1, 1}, - {&__pyx_n_s_setstate, __pyx_k_setstate, sizeof(__pyx_k_setstate), 0, 0, 1, 1}, - {&__pyx_n_s_setstate_cython, __pyx_k_setstate_cython, sizeof(__pyx_k_setstate_cython), 0, 0, 1, 1}, - {&__pyx_n_s_shape, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1}, - {&__pyx_n_s_size, __pyx_k_size, sizeof(__pyx_k_size), 0, 0, 1, 1}, - {&__pyx_n_s_spec, __pyx_k_spec, sizeof(__pyx_k_spec), 0, 0, 1, 1}, - {&__pyx_n_s_start, __pyx_k_start, sizeof(__pyx_k_start), 0, 0, 1, 1}, - {&__pyx_n_s_step, __pyx_k_step, sizeof(__pyx_k_step), 0, 0, 1, 1}, - {&__pyx_n_s_stop, __pyx_k_stop, sizeof(__pyx_k_stop), 0, 0, 1, 1}, - {&__pyx_kp_s_strided_and_direct, __pyx_k_strided_and_direct, sizeof(__pyx_k_strided_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_direct_or_indirect, __pyx_k_strided_and_direct_or_indirect, sizeof(__pyx_k_strided_and_direct_or_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_indirect, __pyx_k_strided_and_indirect, sizeof(__pyx_k_strided_and_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_stringsource, __pyx_k_stringsource, sizeof(__pyx_k_stringsource), 0, 0, 1, 0}, - {&__pyx_n_s_struct, __pyx_k_struct, sizeof(__pyx_k_struct), 0, 0, 1, 1}, - {&__pyx_n_s_sys, __pyx_k_sys, sizeof(__pyx_k_sys), 0, 0, 1, 1}, - {&__pyx_n_s_t_xs, __pyx_k_t_xs, sizeof(__pyx_k_t_xs), 0, 0, 1, 1}, - {&__pyx_n_s_t_ys, __pyx_k_t_ys, sizeof(__pyx_k_t_ys), 0, 0, 1, 1}, - {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {&__pyx_kp_s_unable_to_allocate_array_data, __pyx_k_unable_to_allocate_array_data, sizeof(__pyx_k_unable_to_allocate_array_data), 0, 0, 1, 0}, - {&__pyx_kp_s_unable_to_allocate_shape_and_str, __pyx_k_unable_to_allocate_shape_and_str, sizeof(__pyx_k_unable_to_allocate_shape_and_str), 0, 0, 1, 0}, - {&__pyx_n_s_unpack, __pyx_k_unpack, sizeof(__pyx_k_unpack), 0, 0, 1, 1}, - {&__pyx_n_s_update, __pyx_k_update, sizeof(__pyx_k_update), 0, 0, 1, 1}, - {&__pyx_n_s_values, __pyx_k_values, sizeof(__pyx_k_values), 0, 0, 1, 1}, - {&__pyx_n_s_version_info, __pyx_k_version_info, sizeof(__pyx_k_version_info), 0, 0, 1, 1}, - {0, 0, 0, 0, 0, 0, 0} - }; - return __Pyx_InitStrings(__pyx_string_tab); -} -/* #### Code section: cached_builtins ### */ -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 15, __pyx_L1_error) - __pyx_builtin___import__ = __Pyx_GetBuiltinName(__pyx_n_s_import); if (!__pyx_builtin___import__) __PYX_ERR(1, 100, __pyx_L1_error) - __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 141, __pyx_L1_error) - __pyx_builtin_MemoryError = __Pyx_GetBuiltinName(__pyx_n_s_MemoryError); if (!__pyx_builtin_MemoryError) __PYX_ERR(1, 156, __pyx_L1_error) - __pyx_builtin_enumerate = __Pyx_GetBuiltinName(__pyx_n_s_enumerate); if (!__pyx_builtin_enumerate) __PYX_ERR(1, 159, __pyx_L1_error) - __pyx_builtin_TypeError = __Pyx_GetBuiltinName(__pyx_n_s_TypeError); if (!__pyx_builtin_TypeError) __PYX_ERR(1, 2, __pyx_L1_error) - __pyx_builtin_AssertionError = __Pyx_GetBuiltinName(__pyx_n_s_AssertionError); if (!__pyx_builtin_AssertionError) __PYX_ERR(1, 373, __pyx_L1_error) - __pyx_builtin_Ellipsis = __Pyx_GetBuiltinName(__pyx_n_s_Ellipsis); if (!__pyx_builtin_Ellipsis) __PYX_ERR(1, 408, __pyx_L1_error) - 
__pyx_builtin_id = __Pyx_GetBuiltinName(__pyx_n_s_id); if (!__pyx_builtin_id) __PYX_ERR(1, 618, __pyx_L1_error) - __pyx_builtin_IndexError = __Pyx_GetBuiltinName(__pyx_n_s_IndexError); if (!__pyx_builtin_IndexError) __PYX_ERR(1, 914, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: cached_constants ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* "View.MemoryView":582 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - */ - __pyx_tuple__4 = PyTuple_New(1); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 582, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__4); - __Pyx_INCREF(__pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_int_neg_1); - PyTuple_SET_ITEM(__pyx_tuple__4, 0, __pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_tuple__4); - - /* "View.MemoryView":679 - * tup = index if isinstance(index, tuple) else (index,) - * - * result = [slice(None)] * ndim # <<<<<<<<<<<<<< - * have_slices = False - * seen_ellipsis = False - */ - __pyx_slice__5 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__5)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__5); - __Pyx_GIVEREF(__pyx_slice__5); - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum not in (0x82a3537, 0x6ae9995, 0xb068931): # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum - */ - __pyx_tuple__8 = PyTuple_Pack(3, __pyx_int_136983863, __pyx_int_112105877, __pyx_int_184977713); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__8); - __Pyx_GIVEREF(__pyx_tuple__8); - - /* "View.MemoryView":100 - * cdef object __pyx_collections_abc_Sequence "__pyx_collections_abc_Sequence" - * try: - * if __import__("sys").version_info >= (3, 3): # <<<<<<<<<<<<<< - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - * else: - */ - __pyx_tuple__10 = PyTuple_Pack(1, __pyx_n_s_sys); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(1, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__10); - __Pyx_GIVEREF(__pyx_tuple__10); - __pyx_tuple__11 = PyTuple_Pack(2, __pyx_int_3, __pyx_int_3); if (unlikely(!__pyx_tuple__11)) __PYX_ERR(1, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__11); - __Pyx_GIVEREF(__pyx_tuple__11); - - /* "View.MemoryView":101 - * try: - * if __import__("sys").version_info >= (3, 3): - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence # <<<<<<<<<<<<<< - * else: - * __pyx_collections_abc_Sequence = __import__("collections").Sequence - */ - __pyx_tuple__12 = PyTuple_Pack(1, __pyx_kp_s_collections_abc); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(1, 101, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__12); - __Pyx_GIVEREF(__pyx_tuple__12); - - /* "View.MemoryView":103 - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - * else: - * __pyx_collections_abc_Sequence = __import__("collections").Sequence # <<<<<<<<<<<<<< - * except: - * - */ - __pyx_tuple__13 = PyTuple_Pack(1, __pyx_n_s_collections); if (unlikely(!__pyx_tuple__13)) __PYX_ERR(1, 103, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__13); - __Pyx_GIVEREF(__pyx_tuple__13); - - /* 
"View.MemoryView":309 - * return self.name - * - * cdef generic = Enum("<strided and direct or indirect>") # <<<<<<<<<<<<<< - * cdef strided = Enum("<strided and direct>") # default - * cdef indirect = Enum("<strided and indirect>") - */ - __pyx_tuple__14 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct_or_indirect); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(1, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__14); - __Pyx_GIVEREF(__pyx_tuple__14); - - /* "View.MemoryView":310 - * - * cdef generic = Enum("<strided and direct or indirect>") - * cdef strided = Enum("<strided and direct>") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("<strided and indirect>") - * - */ - __pyx_tuple__15 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(1, 310, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__15); - __Pyx_GIVEREF(__pyx_tuple__15); - - /* "View.MemoryView":311 - * cdef generic = Enum("<strided and direct or indirect>") - * cdef strided = Enum("<strided and direct>") # default - * cdef indirect = Enum("<strided and indirect>") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__16 = PyTuple_Pack(1, __pyx_kp_s_strided_and_indirect); if (unlikely(!__pyx_tuple__16)) __PYX_ERR(1, 311, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__16); - __Pyx_GIVEREF(__pyx_tuple__16); - - /* "View.MemoryView":314 - * - * - * cdef contiguous = Enum("<contiguous and direct>") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("<contiguous and indirect>") - * - */ - __pyx_tuple__17 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_direct); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(1, 314, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__17); - __Pyx_GIVEREF(__pyx_tuple__17); - - /* "View.MemoryView":315 - * - * cdef contiguous = Enum("<contiguous and direct>") - * cdef indirect_contiguous = Enum("<contiguous and indirect>") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__18 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_indirect); if (unlikely(!__pyx_tuple__18)) __PYX_ERR(1, 315, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__18); - __Pyx_GIVEREF(__pyx_tuple__18); - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_tuple__19 = PyTuple_Pack(5, __pyx_n_s_pyx_type, __pyx_n_s_pyx_checksum, __pyx_n_s_pyx_state, __pyx_n_s_pyx_PickleError, __pyx_n_s_pyx_result); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__19); - __Pyx_GIVEREF(__pyx_tuple__19); - __pyx_codeobj__20 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__19, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_pyx_unpickle_Enum, 1, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__20)) __PYX_ERR(1, 1, __pyx_L1_error) - - /* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - __pyx_tuple__21 = PyTuple_Pack(4, __pyx_n_s_paths, __pyx_n_s_values, __pyx_n_s_t_ys, __pyx_n_s_t_xs); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__21); - __Pyx_GIVEREF(__pyx_tuple__21); - __pyx_codeobj__22 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__21, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_core_pyx, __pyx_n_s_maximum_path_c, 38, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__22)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} -/* 
#### Code section: init_constants ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitConstants(void) { - if (__Pyx_CreateStringTabAndInitStrings() < 0) __PYX_ERR(0, 1, __pyx_L1_error); - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_3 = PyInt_FromLong(3); if (unlikely(!__pyx_int_3)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_112105877 = PyInt_FromLong(112105877L); if (unlikely(!__pyx_int_112105877)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_136983863 = PyInt_FromLong(136983863L); if (unlikely(!__pyx_int_136983863)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_184977713 = PyInt_FromLong(184977713L); if (unlikely(!__pyx_int_184977713)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_neg_1 = PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: init_globals ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - /* AssertionsEnabled.init */ - __Pyx_init_assertions_enabled(); - -if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error) - - /* InitThreads.init */ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 -PyEval_InitThreads(); -#endif - -if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error) - - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: init_module ### */ - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int __Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - __pyx_collections_abc_Sequence = Py_None; Py_INCREF(Py_None); - generic = Py_None; Py_INCREF(Py_None); - strided = Py_None; Py_INCREF(Py_None); - indirect = Py_None; Py_INCREF(Py_None); - contiguous = Py_None; Py_INCREF(Py_None); - indirect_contiguous = Py_None; Py_INCREF(Py_None); - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - __pyx_vtabptr_array = &__pyx_vtable_array; - __pyx_vtable_array.get_memview = (PyObject *(*)(struct __pyx_array_obj *))__pyx_array_get_memview; - #if CYTHON_USE_TYPE_SPECS - __pyx_array_type = (PyTypeObject *) 
__Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type___pyx_array_spec, NULL); if (unlikely(!__pyx_array_type)) __PYX_ERR(1, 114, __pyx_L1_error) - #if !CYTHON_COMPILING_IN_LIMITED_API - __pyx_array_type->tp_as_buffer = &__pyx_tp_as_buffer_array; - if (!__pyx_array_type->tp_as_buffer->bf_releasebuffer && __pyx_array_type->tp_base->tp_as_buffer && __pyx_array_type->tp_base->tp_as_buffer->bf_releasebuffer) { - __pyx_array_type->tp_as_buffer->bf_releasebuffer = __pyx_array_type->tp_base->tp_as_buffer->bf_releasebuffer; - } - #elif defined(Py_bf_getbuffer) && defined(Py_bf_releasebuffer) - /* PY_VERSION_HEX >= 0x03090000 || Py_LIMITED_API >= 0x030B0000 */ - #elif defined(_MSC_VER) - #pragma message ("The buffer protocol is not supported in the Limited C-API < 3.11.") - #else - #warning "The buffer protocol is not supported in the Limited C-API < 3.11." - #endif - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type___pyx_array_spec, __pyx_array_type) < 0) __PYX_ERR(1, 114, __pyx_L1_error) - #else - __pyx_array_type = &__pyx_type___pyx_array; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_array_type) < 0) __PYX_ERR(1, 114, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_array_type->tp_print = 0; - #endif - if (__Pyx_SetVtable(__pyx_array_type, __pyx_vtabptr_array) < 0) __PYX_ERR(1, 114, __pyx_L1_error) - #if !CYTHON_COMPILING_IN_LIMITED_API - if (__Pyx_MergeVtables(__pyx_array_type) < 0) __PYX_ERR(1, 114, __pyx_L1_error) - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if (__Pyx_setup_reduce((PyObject *) __pyx_array_type) < 0) __PYX_ERR(1, 114, __pyx_L1_error) - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_MemviewEnum_type = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type___pyx_MemviewEnum_spec, NULL); if (unlikely(!__pyx_MemviewEnum_type)) __PYX_ERR(1, 302, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type___pyx_MemviewEnum_spec, __pyx_MemviewEnum_type) < 0) __PYX_ERR(1, 302, __pyx_L1_error) - #else - __pyx_MemviewEnum_type = &__pyx_type___pyx_MemviewEnum; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_MemviewEnum_type) < 0) __PYX_ERR(1, 302, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_MemviewEnum_type->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_MemviewEnum_type->tp_dictoffset && __pyx_MemviewEnum_type->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_MemviewEnum_type->tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if (__Pyx_setup_reduce((PyObject *) __pyx_MemviewEnum_type) < 0) __PYX_ERR(1, 302, __pyx_L1_error) - #endif - __pyx_vtabptr_memoryview = &__pyx_vtable_memoryview; - __pyx_vtable_memoryview.get_item_pointer = (char *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_get_item_pointer; - __pyx_vtable_memoryview.is_slice = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_is_slice; - __pyx_vtable_memoryview.setitem_slice_assignment = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_slice_assignment; - __pyx_vtable_memoryview.setitem_slice_assign_scalar = (PyObject *(*)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_setitem_slice_assign_scalar; - __pyx_vtable_memoryview.setitem_indexed = (PyObject *(*)(struct 
__pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_indexed; - __pyx_vtable_memoryview.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryview_convert_item_to_object; - __pyx_vtable_memoryview.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryview_assign_item_from_object; - __pyx_vtable_memoryview._get_base = (PyObject *(*)(struct __pyx_memoryview_obj *))__pyx_memoryview__get_base; - #if CYTHON_USE_TYPE_SPECS - __pyx_memoryview_type = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type___pyx_memoryview_spec, NULL); if (unlikely(!__pyx_memoryview_type)) __PYX_ERR(1, 337, __pyx_L1_error) - #if !CYTHON_COMPILING_IN_LIMITED_API - __pyx_memoryview_type->tp_as_buffer = &__pyx_tp_as_buffer_memoryview; - if (!__pyx_memoryview_type->tp_as_buffer->bf_releasebuffer && __pyx_memoryview_type->tp_base->tp_as_buffer && __pyx_memoryview_type->tp_base->tp_as_buffer->bf_releasebuffer) { - __pyx_memoryview_type->tp_as_buffer->bf_releasebuffer = __pyx_memoryview_type->tp_base->tp_as_buffer->bf_releasebuffer; - } - #elif defined(Py_bf_getbuffer) && defined(Py_bf_releasebuffer) - /* PY_VERSION_HEX >= 0x03090000 || Py_LIMITED_API >= 0x030B0000 */ - #elif defined(_MSC_VER) - #pragma message ("The buffer protocol is not supported in the Limited C-API < 3.11.") - #else - #warning "The buffer protocol is not supported in the Limited C-API < 3.11." - #endif - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type___pyx_memoryview_spec, __pyx_memoryview_type) < 0) __PYX_ERR(1, 337, __pyx_L1_error) - #else - __pyx_memoryview_type = &__pyx_type___pyx_memoryview; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_memoryview_type) < 0) __PYX_ERR(1, 337, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_memoryview_type->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_memoryview_type->tp_dictoffset && __pyx_memoryview_type->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_memoryview_type->tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - #endif - if (__Pyx_SetVtable(__pyx_memoryview_type, __pyx_vtabptr_memoryview) < 0) __PYX_ERR(1, 337, __pyx_L1_error) - #if !CYTHON_COMPILING_IN_LIMITED_API - if (__Pyx_MergeVtables(__pyx_memoryview_type) < 0) __PYX_ERR(1, 337, __pyx_L1_error) - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if (__Pyx_setup_reduce((PyObject *) __pyx_memoryview_type) < 0) __PYX_ERR(1, 337, __pyx_L1_error) - #endif - __pyx_vtabptr__memoryviewslice = &__pyx_vtable__memoryviewslice; - __pyx_vtable__memoryviewslice.__pyx_base = *__pyx_vtabptr_memoryview; - __pyx_vtable__memoryviewslice.__pyx_base.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryviewslice_convert_item_to_object; - __pyx_vtable__memoryviewslice.__pyx_base.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryviewslice_assign_item_from_object; - __pyx_vtable__memoryviewslice.__pyx_base._get_base = (PyObject *(*)(struct __pyx_memoryview_obj *))__pyx_memoryviewslice__get_base; - #if CYTHON_USE_TYPE_SPECS - __pyx_t_1 = PyTuple_Pack(1, (PyObject *)__pyx_memoryview_type); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 952, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_memoryviewslice_type = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, 
&__pyx_type___pyx_memoryviewslice_spec, __pyx_t_1); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_memoryviewslice_type)) __PYX_ERR(1, 952, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type___pyx_memoryviewslice_spec, __pyx_memoryviewslice_type) < 0) __PYX_ERR(1, 952, __pyx_L1_error) - #else - __pyx_memoryviewslice_type = &__pyx_type___pyx_memoryviewslice; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - __pyx_memoryviewslice_type->tp_base = __pyx_memoryview_type; - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_memoryviewslice_type) < 0) __PYX_ERR(1, 952, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_memoryviewslice_type->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_memoryviewslice_type->tp_dictoffset && __pyx_memoryviewslice_type->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_memoryviewslice_type->tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - #endif - if (__Pyx_SetVtable(__pyx_memoryviewslice_type, __pyx_vtabptr__memoryviewslice) < 0) __PYX_ERR(1, 952, __pyx_L1_error) - #if !CYTHON_COMPILING_IN_LIMITED_API - if (__Pyx_MergeVtables(__pyx_memoryviewslice_type) < 0) __PYX_ERR(1, 952, __pyx_L1_error) - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if (__Pyx_setup_reduce((PyObject *) __pyx_memoryviewslice_type) < 0) __PYX_ERR(1, 952, __pyx_L1_error) - #endif - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_RefNannyFinishContext(); - return -1; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_core(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_core}, - {0, NULL} -}; -#endif - -#ifdef __cplusplus -namespace { - struct PyModuleDef __pyx_moduledef = - #else - static struct PyModuleDef __pyx_moduledef = - #endif - { - PyModuleDef_HEAD_INIT, - "core", - 0, /* m_doc */ - #if CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ - #elif CYTHON_USE_MODULE_STATE - sizeof(__pyx_mstate), /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - #if CYTHON_USE_MODULE_STATE - __pyx_m_traverse, /* m_traverse */ - __pyx_m_clear, /* m_clear */ - NULL /* m_free */ - #else - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ - #endif - }; - #ifdef __cplusplus -} /* anonymous namespace */ -#endif -#endif - -#ifndef CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif 
PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC initcore(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC initcore(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_core(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_core(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? -1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *module, const char* from_name, const char* to_name, int allow_none) -#else -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) -#endif -{ - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { -#if CYTHON_COMPILING_IN_LIMITED_API - result = PyModule_AddObject(module, to_name, value); -#else - result = PyDict_SetItemString(moddict, to_name, value); -#endif - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - CYTHON_UNUSED_VAR(def); - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; -#if CYTHON_COMPILING_IN_LIMITED_API - moddict = module; -#else - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; -#endif - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_core(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - int stringtab_initialized = 0; - #if CYTHON_USE_MODULE_STATE - int 
pystate_addmodule_run = 0; - #endif - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - static PyThread_type_lock __pyx_t_8[8]; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'core' has already been imported. Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("core", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #elif CYTHON_USE_MODULE_STATE - __pyx_t_1 = PyModule_Create(&__pyx_moduledef); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - { - int add_module_result = PyState_AddModule(__pyx_t_1, &__pyx_moduledef); - __pyx_t_1 = 0; /* transfer ownership from __pyx_t_1 to core pseudovariable */ - if (unlikely((add_module_result < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - pystate_addmodule_run = 1; - } - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #endif - CYTHON_UNUSED_VAR(__pyx_t_1); - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_core(void)", 0); - if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - 
#endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 && defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - PyEval_InitThreads(); - #endif - /*--- Initialize various global constants etc. ---*/ - if (__Pyx_InitConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - stringtab_initialized = 1; - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_monotonic_align__core) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name_2, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "monotonic_align.core")) { - if (unlikely((PyDict_SetItemString(modules, "monotonic_align.core", __pyx_m) < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - if (unlikely((__Pyx_modinit_type_init_code() < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "View.MemoryView":99 - * - * cdef object __pyx_collections_abc_Sequence "__pyx_collections_abc_Sequence" - * try: # <<<<<<<<<<<<<< - * if __import__("sys").version_info >= (3, 3): - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_1); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - /*try:*/ { - - /* "View.MemoryView":100 - * cdef object __pyx_collections_abc_Sequence "__pyx_collections_abc_Sequence" - * try: - * if __import__("sys").version_info >= (3, 3): # <<<<<<<<<<<<<< - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - * else: - */ - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin___import__, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 100, __pyx_L2_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_version_info); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 100, __pyx_L2_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyObject_RichCompare(__pyx_t_5, __pyx_tuple__11, Py_GE); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 100, __pyx_L2_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(1, 100, __pyx_L2_error) - __Pyx_DECREF(__pyx_t_4); 
__pyx_t_4 = 0; - if (__pyx_t_6) { - - /* "View.MemoryView":101 - * try: - * if __import__("sys").version_info >= (3, 3): - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence # <<<<<<<<<<<<<< - * else: - * __pyx_collections_abc_Sequence = __import__("collections").Sequence - */ - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin___import__, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 101, __pyx_L2_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_abc); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 101, __pyx_L2_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_Sequence); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 101, __pyx_L2_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XGOTREF(__pyx_collections_abc_Sequence); - __Pyx_DECREF_SET(__pyx_collections_abc_Sequence, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":100 - * cdef object __pyx_collections_abc_Sequence "__pyx_collections_abc_Sequence" - * try: - * if __import__("sys").version_info >= (3, 3): # <<<<<<<<<<<<<< - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":103 - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - * else: - * __pyx_collections_abc_Sequence = __import__("collections").Sequence # <<<<<<<<<<<<<< - * except: - * - */ - /*else*/ { - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin___import__, __pyx_tuple__13, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 103, __pyx_L2_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_Sequence); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 103, __pyx_L2_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XGOTREF(__pyx_collections_abc_Sequence); - __Pyx_DECREF_SET(__pyx_collections_abc_Sequence, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - } - __pyx_L8:; - - /* "View.MemoryView":99 - * - * cdef object __pyx_collections_abc_Sequence "__pyx_collections_abc_Sequence" - * try: # <<<<<<<<<<<<<< - * if __import__("sys").version_info >= (3, 3): - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - */ - } - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L7_try_end; - __pyx_L2_error:; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "View.MemoryView":104 - * else: - * __pyx_collections_abc_Sequence = __import__("collections").Sequence - * except: # <<<<<<<<<<<<<< - * - * __pyx_collections_abc_Sequence = None - */ - /*except:*/ { - __Pyx_AddTraceback("View.MemoryView", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_4, &__pyx_t_7) < 0) __PYX_ERR(1, 104, __pyx_L4_except_error) - __Pyx_XGOTREF(__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_7); - - /* "View.MemoryView":106 - * except: - * - * __pyx_collections_abc_Sequence = None # <<<<<<<<<<<<<< - * - * - */ - __Pyx_INCREF(Py_None); - __Pyx_XGOTREF(__pyx_collections_abc_Sequence); - __Pyx_DECREF_SET(__pyx_collections_abc_Sequence, Py_None); - __Pyx_GIVEREF(Py_None); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - goto 
__pyx_L3_exception_handled; - } - - /* "View.MemoryView":99 - * - * cdef object __pyx_collections_abc_Sequence "__pyx_collections_abc_Sequence" - * try: # <<<<<<<<<<<<<< - * if __import__("sys").version_info >= (3, 3): - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - */ - __pyx_L4_except_error:; - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); - goto __pyx_L1_error; - __pyx_L3_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); - __pyx_L7_try_end:; - } - - /* "View.MemoryView":241 - * - * - * try: # <<<<<<<<<<<<<< - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_2, &__pyx_t_1); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_1); - /*try:*/ { - - /* "View.MemoryView":242 - * - * try: - * count = __pyx_collections_abc_Sequence.count # <<<<<<<<<<<<<< - * index = __pyx_collections_abc_Sequence.index - * except: - */ - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_collections_abc_Sequence, __pyx_n_s_count); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 242, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_array_type->tp_dict, __pyx_n_s_count, __pyx_t_7) < 0) __PYX_ERR(1, 242, __pyx_L11_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - PyType_Modified(__pyx_array_type); - - /* "View.MemoryView":243 - * try: - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index # <<<<<<<<<<<<<< - * except: - * pass - */ - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_collections_abc_Sequence, __pyx_n_s_index); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 243, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_array_type->tp_dict, __pyx_n_s_index, __pyx_t_7) < 0) __PYX_ERR(1, 243, __pyx_L11_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - PyType_Modified(__pyx_array_type); - - /* "View.MemoryView":241 - * - * - * try: # <<<<<<<<<<<<<< - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index - */ - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - goto __pyx_L16_try_end; - __pyx_L11_error:; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "View.MemoryView":244 - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index - * except: # <<<<<<<<<<<<<< - * pass - * - */ - /*except:*/ { - __Pyx_ErrRestore(0,0,0); - goto __pyx_L12_exception_handled; - } - __pyx_L12_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_2, __pyx_t_1); - __pyx_L16_try_end:; - } - - /* "View.MemoryView":309 - * return self.name - * - * cdef generic = Enum("") # <<<<<<<<<<<<<< - * cdef strided = Enum("") # default - * cdef indirect = Enum("") - */ - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_XGOTREF(generic); - __Pyx_DECREF_SET(generic, __pyx_t_7); - 
__Pyx_GIVEREF(__pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":310 - * - * cdef generic = Enum("") - * cdef strided = Enum("") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("") - * - */ - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 310, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_XGOTREF(strided); - __Pyx_DECREF_SET(strided, __pyx_t_7); - __Pyx_GIVEREF(__pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":311 - * cdef generic = Enum("") - * cdef strided = Enum("") # default - * cdef indirect = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__16, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 311, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_XGOTREF(indirect); - __Pyx_DECREF_SET(indirect, __pyx_t_7); - __Pyx_GIVEREF(__pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":314 - * - * - * cdef contiguous = Enum("") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("") - * - */ - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__17, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 314, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_XGOTREF(contiguous); - __Pyx_DECREF_SET(contiguous, __pyx_t_7); - __Pyx_GIVEREF(__pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":315 - * - * cdef contiguous = Enum("") - * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 315, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_XGOTREF(indirect_contiguous); - __Pyx_DECREF_SET(indirect_contiguous, __pyx_t_7); - __Pyx_GIVEREF(__pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":323 - * - * - * cdef int __pyx_memoryview_thread_locks_used = 0 # <<<<<<<<<<<<<< - * cdef PyThread_type_lock[8] __pyx_memoryview_thread_locks = [ - * PyThread_allocate_lock(), - */ - __pyx_memoryview_thread_locks_used = 0; - - /* "View.MemoryView":324 - * - * cdef int __pyx_memoryview_thread_locks_used = 0 - * cdef PyThread_type_lock[8] __pyx_memoryview_thread_locks = [ # <<<<<<<<<<<<<< - * PyThread_allocate_lock(), - * PyThread_allocate_lock(), - */ - __pyx_t_8[0] = PyThread_allocate_lock(); - __pyx_t_8[1] = PyThread_allocate_lock(); - __pyx_t_8[2] = PyThread_allocate_lock(); - __pyx_t_8[3] = PyThread_allocate_lock(); - __pyx_t_8[4] = PyThread_allocate_lock(); - __pyx_t_8[5] = PyThread_allocate_lock(); - __pyx_t_8[6] = PyThread_allocate_lock(); - __pyx_t_8[7] = PyThread_allocate_lock(); - memcpy(&(__pyx_memoryview_thread_locks[0]), __pyx_t_8, sizeof(__pyx_memoryview_thread_locks[0]) * (8)); - - /* "View.MemoryView":982 - * - * - * try: # <<<<<<<<<<<<<< - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_1); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - /*try:*/ { - - /* "View.MemoryView":983 - * - * try: - * count = __pyx_collections_abc_Sequence.count # <<<<<<<<<<<<<< - * index = __pyx_collections_abc_Sequence.index - * except: - */ - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_collections_abc_Sequence, __pyx_n_s_count); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 983, __pyx_L17_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_memoryviewslice_type->tp_dict, 
__pyx_n_s_count, __pyx_t_7) < 0) __PYX_ERR(1, 983, __pyx_L17_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - PyType_Modified(__pyx_memoryviewslice_type); - - /* "View.MemoryView":984 - * try: - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index # <<<<<<<<<<<<<< - * except: - * pass - */ - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_collections_abc_Sequence, __pyx_n_s_index); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 984, __pyx_L17_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_memoryviewslice_type->tp_dict, __pyx_n_s_index, __pyx_t_7) < 0) __PYX_ERR(1, 984, __pyx_L17_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - PyType_Modified(__pyx_memoryviewslice_type); - - /* "View.MemoryView":982 - * - * - * try: # <<<<<<<<<<<<<< - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index - */ - } - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L22_try_end; - __pyx_L17_error:; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "View.MemoryView":985 - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index - * except: # <<<<<<<<<<<<<< - * pass - * - */ - /*except:*/ { - __Pyx_ErrRestore(0,0,0); - goto __pyx_L18_exception_handled; - } - __pyx_L18_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); - __pyx_L22_try_end:; - } - - /* "View.MemoryView":988 - * pass - * - * try: # <<<<<<<<<<<<<< - * if __pyx_collections_abc_Sequence: - * - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_2, &__pyx_t_1); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_1); - /*try:*/ { - - /* "View.MemoryView":989 - * - * try: - * if __pyx_collections_abc_Sequence: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_collections_abc_Sequence); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(1, 989, __pyx_L23_error) - if (__pyx_t_6) { - - /* "View.MemoryView":993 - * - * - * __pyx_collections_abc_Sequence.register(_memoryviewslice) # <<<<<<<<<<<<<< - * __pyx_collections_abc_Sequence.register(array) - * except: - */ - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_collections_abc_Sequence, __pyx_n_s_register); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 993, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_t_7, ((PyObject *)__pyx_memoryviewslice_type)); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 993, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "View.MemoryView":994 - * - * __pyx_collections_abc_Sequence.register(_memoryviewslice) - * __pyx_collections_abc_Sequence.register(array) # <<<<<<<<<<<<<< - * except: - * pass # ignore failure, it's a minor issue - */ - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_collections_abc_Sequence, __pyx_n_s_register); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 994, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_7 = __Pyx_PyObject_CallOneArg(__pyx_t_4, ((PyObject *)__pyx_array_type)); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 994, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 
0; - - /* "View.MemoryView":989 - * - * try: - * if __pyx_collections_abc_Sequence: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":988 - * pass - * - * try: # <<<<<<<<<<<<<< - * if __pyx_collections_abc_Sequence: - * - */ - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - goto __pyx_L28_try_end; - __pyx_L23_error:; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "View.MemoryView":995 - * __pyx_collections_abc_Sequence.register(_memoryviewslice) - * __pyx_collections_abc_Sequence.register(array) - * except: # <<<<<<<<<<<<<< - * pass # ignore failure, it's a minor issue - * - */ - /*except:*/ { - __Pyx_ErrRestore(0,0,0); - goto __pyx_L24_exception_handled; - } - __pyx_L24_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_2, __pyx_t_1); - __pyx_L28_try_end:; - } - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_t_7 = PyCFunction_NewEx(&__pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum, NULL, __pyx_n_s_View_MemoryView); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_Enum, __pyx_t_7) < 0) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - __pyx_k__9 = (-1e9); - - /* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - __pyx_t_7 = __Pyx_CyFunction_New(&__pyx_mdef_15monotonic_align_4core_1maximum_path_c, 0, __pyx_n_s_maximum_path_c, NULL, __pyx_n_s_monotonic_align_core, __pyx_d, ((PyObject *)__pyx_codeobj__22)); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_maximum_path_c, __pyx_t_7) < 0) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "monotonic_align/core.pyx":1 - * cimport cython # <<<<<<<<<<<<<< - * from cython.parallel import prange - * - */ - __pyx_t_7 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_7) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_7); - if (__pyx_m) { - if (__pyx_d && stringtab_initialized) { - __Pyx_AddTraceback("init monotonic_align.core", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - #if !CYTHON_USE_MODULE_STATE - Py_CLEAR(__pyx_m); - #else - Py_DECREF(__pyx_m); - if (pystate_addmodule_run) { - PyObject *tp, *value, *tb; - PyErr_Fetch(&tp, &value, &tb); - PyState_RemoveModule(&__pyx_moduledef); - PyErr_Restore(tp, value, tb); - } - #endif 
- } else if (!PyErr_Occurred()) {
- PyErr_SetString(PyExc_ImportError, "init monotonic_align.core");
- }
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- #if CYTHON_PEP489_MULTI_PHASE_INIT
- return (__pyx_m != NULL) ? 0 : -1;
- #elif PY_MAJOR_VERSION >= 3
- return __pyx_m;
- #else
- return;
- #endif
-}
-/* #### Code section: cleanup_globals ### */
-/* #### Code section: cleanup_module ### */
-/* #### Code section: main_method ### */
-/* #### Code section: utility_code_pragmas ### */
-#ifdef _MSC_VER
-#pragma warning( push )
-/* Warning 4127: conditional expression is constant
- * Cython uses constant conditional expressions to allow in inline functions to be optimized at
- * compile-time, so this warning is not useful
- */
-#pragma warning( disable : 4127 )
-#endif
-
-
-
-/* #### Code section: utility_code_def ### */
-
-/* --- Runtime support code --- */
-/* Refnanny */
-#if CYTHON_REFNANNY
-static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) {
- PyObject *m = NULL, *p = NULL;
- void *r = NULL;
- m = PyImport_ImportModule(modname);
- if (!m) goto end;
- p = PyObject_GetAttrString(m, "RefNannyAPI");
- if (!p) goto end;
- r = PyLong_AsVoidPtr(p);
-end:
- Py_XDECREF(p);
- Py_XDECREF(m);
- return (__Pyx_RefNannyAPIStruct *)r;
-}
-#endif
-
-/* PyErrExceptionMatches */
-#if CYTHON_FAST_THREAD_STATE
-static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {
- Py_ssize_t i, n;
- n = PyTuple_GET_SIZE(tuple);
-#if PY_MAJOR_VERSION >= 3
- for (i=0; i<n; i++) {
- if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;
- }
-#endif
- for (i=0; i<n; i++) {
- if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1;
- }
- return 0;
-}
-static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) {
- int result;
- PyObject *exc_type;
-#if PY_VERSION_HEX >= 0x030C00A6
- PyObject *current_exception = tstate->current_exception;
- if (unlikely(!current_exception)) return 0;
- exc_type = (PyObject*) Py_TYPE(current_exception);
- if (exc_type == err) return 1;
-#else
- exc_type = tstate->curexc_type;
- if (exc_type == err) return 1;
- if (unlikely(!exc_type)) return 0;
-#endif
- #if CYTHON_AVOID_BORROWED_REFS
- Py_INCREF(exc_type);
- #endif
- if (unlikely(PyTuple_Check(err))) {
- result = __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err);
- } else {
- result = __Pyx_PyErr_GivenExceptionMatches(exc_type, err);
- }
- #if CYTHON_AVOID_BORROWED_REFS
- Py_DECREF(exc_type);
- #endif
- return result;
-}
-#endif
-
-/* PyErrFetchRestore */
-#if CYTHON_FAST_THREAD_STATE
-static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {
-#if PY_VERSION_HEX >= 0x030C00A6
- PyObject *tmp_value;
- assert(type == NULL || (value != NULL && type == (PyObject*) Py_TYPE(value)));
- if (value) {
- #if CYTHON_COMPILING_IN_CPYTHON
- if (unlikely(((PyBaseExceptionObject*) value)->traceback != tb))
- #endif
- PyException_SetTraceback(value, tb);
- }
- tmp_value = tstate->current_exception;
- tstate->current_exception = value;
- Py_XDECREF(tmp_value);
-#else
- PyObject *tmp_type, *tmp_value, *tmp_tb;
- tmp_type = tstate->curexc_type;
- tmp_value = tstate->curexc_value;
- tmp_tb = tstate->curexc_traceback;
- tstate->curexc_type = type;
- tstate->curexc_value = value;
- tstate->curexc_traceback = tb;
- Py_XDECREF(tmp_type);
- Py_XDECREF(tmp_value);
- Py_XDECREF(tmp_tb);
-#endif
-}
-static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {
-#if PY_VERSION_HEX >= 0x030C00A6
- PyObject* exc_value;
- exc_value = tstate->current_exception;
- tstate->current_exception = 0;
- *value = exc_value;
- *type = NULL;
- *tb = NULL;
- if (exc_value) {
- *type = (PyObject*) Py_TYPE(exc_value);
- Py_INCREF(*type);
- #if CYTHON_COMPILING_IN_CPYTHON
- *tb = ((PyBaseExceptionObject*)
exc_value)->traceback; - Py_XINCREF(*tb); - #else - *tb = PyException_GetTraceback(exc_value); - #endif - } -#else - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#endif -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* PyObjectGetAttrStrNoError */ -static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (likely(__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - __Pyx_PyErr_Clear(); -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name) { - PyObject *result; -#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_TYPE_SLOTS && PY_VERSION_HEX >= 0x030700B1 - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro == PyObject_GenericGetAttr)) { - return _PyObject_GenericGetAttrWithDict(obj, attr_name, NULL, 1); - } -#endif - result = __Pyx_PyObject_GetAttrStr(obj, attr_name); - if (unlikely(!result)) { - __Pyx_PyObject_GetAttrStr_ClearAttributeError(); - } - return result; -} - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStrNoError(__pyx_b, name); - if (unlikely(!result) && !PyErr_Occurred()) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", PyString_AS_STRING(name)); -#endif - } - return result; -} - -/* TupleAndListFromArray */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE void __Pyx_copy_object_array(PyObject *const *CYTHON_RESTRICT src, PyObject** CYTHON_RESTRICT dest, Py_ssize_t length) { - PyObject *v; - Py_ssize_t i; - for (i = 0; i < length; i++) { - v = dest[i] = src[i]; - Py_INCREF(v); - } -} -static CYTHON_INLINE PyObject * -__Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - if (n <= 0) { - Py_INCREF(__pyx_empty_tuple); - return __pyx_empty_tuple; - } - res = PyTuple_New(n); - if (unlikely(res == NULL)) return NULL; - __Pyx_copy_object_array(src, ((PyTupleObject*)res)->ob_item, n); - return res; -} -static CYTHON_INLINE PyObject * -__Pyx_PyList_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - if (n <= 0) { - return PyList_New(0); - } - res = PyList_New(n); - if (unlikely(res == NULL)) return NULL; - __Pyx_copy_object_array(src, ((PyListObject*)res)->ob_item, n); - return res; -} -#endif - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return 
(equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS && (PY_VERSION_HEX < 0x030B0000) - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* fastcall */ -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject * __Pyx_GetKwValue_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues, PyObject *s) -{ - Py_ssize_t i, n = PyTuple_GET_SIZE(kwnames); - for (i = 0; i < n; i++) - { - if (s == PyTuple_GET_ITEM(kwnames, i)) return kwvalues[i]; - } - for (i = 0; i < n; i++) - { - int eq = __Pyx_PyUnicode_Equals(s, PyTuple_GET_ITEM(kwnames, i), Py_EQ); - if (unlikely(eq != 0)) { - if (unlikely(eq < 0)) return NULL; // error - return kwvalues[i]; - } - } - return NULL; // not found (no exception set) -} -#endif - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? 
"" : "s", num_found); -} - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject *const *kwvalues, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - int kwds_is_tuple = CYTHON_METH_FASTCALL && likely(PyTuple_Check(kwds)); - while (1) { - if (kwds_is_tuple) { - if (pos >= PyTuple_GET_SIZE(kwds)) break; - key = PyTuple_GET_ITEM(kwds, pos); - value = kwvalues[pos]; - pos++; - } - else - { - if (!PyDict_Next(kwds, &pos, &key, &value)) break; - } - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; - continue; - } - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = ( - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**name, key) - ); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 
1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - #if PY_MAJOR_VERSION < 3 - PyErr_Format(PyExc_TypeError, - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - PyErr_Format(PyExc_TypeError, - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - return -1; -} - -/* ArgTypeTest */ -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact) -{ - __Pyx_TypeName type_name; - __Pyx_TypeName obj_type_name; - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - else if (exact) { - #if PY_MAJOR_VERSION == 2 - if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1; - #endif - } - else { - if (likely(__Pyx_TypeCheck(obj, type))) return 1; - } - type_name = __Pyx_PyType_GetName(type); - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, - "Argument '%.200s' has incorrect type (expected " __Pyx_FMT_TYPENAME - ", got " __Pyx_FMT_TYPENAME ")", name, type_name, obj_type_name); - __Pyx_DECREF_TypeName(type_name); - __Pyx_DECREF_TypeName(obj_type_name); - return 0; -} - -/* RaiseException */ -#if PY_MAJOR_VERSION < 3 -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - __Pyx_PyThreadState_declare - CYTHON_UNUSED_VAR(cause); - Py_XINCREF(type); - if (!value || value == Py_None) - value = NULL; - else - Py_INCREF(value); - if (!tb || tb == Py_None) - tb = NULL; - else { - Py_INCREF(tb); - if (!PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto raise_error; - } - } - if (PyType_Check(type)) { -#if CYTHON_COMPILING_IN_PYPY - if (!value) { - Py_INCREF(Py_None); - value = Py_None; - } -#endif - PyErr_NormalizeException(&type, &value, &tb); - } else { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto raise_error; - } - value = type; - type = (PyObject*) Py_TYPE(type); - Py_INCREF(type); - if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto raise_error; - } - } - __Pyx_PyThreadState_assign - __Pyx_ErrRestore(type, value, tb); - return; -raise_error: - Py_XDECREF(value); - Py_XDECREF(type); - Py_XDECREF(tb); - return; -} -#else -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - PyObject* owned_instance = NULL; - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - if (PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } 
else if (PyExceptionClass_Check(type)) { - PyObject *instance_class = NULL; - if (value && PyExceptionInstance_Check(value)) { - instance_class = (PyObject*) Py_TYPE(value); - if (instance_class != type) { - int is_subclass = PyObject_IsSubclass(instance_class, type); - if (!is_subclass) { - instance_class = NULL; - } else if (unlikely(is_subclass == -1)) { - goto bad; - } else { - type = instance_class; - } - } - } - if (!instance_class) { - PyObject *args; - if (!value) - args = PyTuple_New(0); - else if (PyTuple_Check(value)) { - Py_INCREF(value); - args = value; - } else - args = PyTuple_Pack(1, value); - if (!args) - goto bad; - owned_instance = PyObject_Call(type, args, NULL); - Py_DECREF(args); - if (!owned_instance) - goto bad; - value = owned_instance; - if (!PyExceptionInstance_Check(value)) { - PyErr_Format(PyExc_TypeError, - "calling %R should have returned an instance of " - "BaseException, not %R", - type, Py_TYPE(value)); - goto bad; - } - } - } else { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - if (cause) { - PyObject *fixed_cause; - if (cause == Py_None) { - fixed_cause = NULL; - } else if (PyExceptionClass_Check(cause)) { - fixed_cause = PyObject_CallObject(cause, NULL); - if (fixed_cause == NULL) - goto bad; - } else if (PyExceptionInstance_Check(cause)) { - fixed_cause = cause; - Py_INCREF(fixed_cause); - } else { - PyErr_SetString(PyExc_TypeError, - "exception causes must derive from " - "BaseException"); - goto bad; - } - PyException_SetCause(value, fixed_cause); - } - PyErr_SetObject(type, value); - if (tb) { - #if PY_VERSION_HEX >= 0x030C00A6 - PyException_SetTraceback(value, tb); - #elif CYTHON_FAST_THREAD_STATE - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#else - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL && !CYTHON_VECTORCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. 
- */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? PyDict_Size(kwargs) : 0; - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = Py_TYPE(func)->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = PyCFunction_GET_FUNCTION(func); - self = 
PyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectFastCall */ -static PyObject* __Pyx_PyObject_FastCall_fallback(PyObject *func, PyObject **args, size_t nargs, PyObject *kwargs) { - PyObject *argstuple; - PyObject *result; - size_t i; - argstuple = PyTuple_New((Py_ssize_t)nargs); - if (unlikely(!argstuple)) return NULL; - for (i = 0; i < nargs; i++) { - Py_INCREF(args[i]); - PyTuple_SET_ITEM(argstuple, (Py_ssize_t)i, args[i]); - } - result = __Pyx_PyObject_Call(func, argstuple, kwargs); - Py_DECREF(argstuple); - return result; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_FastCallDict(PyObject *func, PyObject **args, size_t _nargs, PyObject *kwargs) { - Py_ssize_t nargs = __Pyx_PyVectorcall_NARGS(_nargs); -#if CYTHON_COMPILING_IN_CPYTHON - if (nargs == 0 && kwargs == NULL) { -#if defined(__Pyx_CyFunction_USED) && defined(NDEBUG) - if (__Pyx_IsCyOrPyCFunction(func)) -#else - if (PyCFunction_Check(func)) -#endif - { - if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) { - return __Pyx_PyObject_CallMethO(func, NULL); - } - } - } - else if (nargs == 1 && kwargs == NULL) { - if (PyCFunction_Check(func)) - { - if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { - return __Pyx_PyObject_CallMethO(func, args[0]); - } - } - } -#endif - #if PY_VERSION_HEX < 0x030800B1 - #if CYTHON_FAST_PYCCALL - if (PyCFunction_Check(func)) { - if (kwargs) { - return _PyCFunction_FastCallDict(func, args, nargs, kwargs); - } else { - return _PyCFunction_FastCallKeywords(func, args, nargs, NULL); - } - } - #if PY_VERSION_HEX >= 0x030700A1 - if (!kwargs && __Pyx_IS_TYPE(func, &PyMethodDescr_Type)) { - return _PyMethodDescr_FastCallKeywords(func, args, nargs, NULL); - } - #endif - #endif - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs); - } - #endif - #endif - #if CYTHON_VECTORCALL - vectorcallfunc f = _PyVectorcall_Function(func); - if (f) { - return f(func, args, (size_t)nargs, kwargs); - } - #elif defined(__Pyx_CyFunction_USED) && CYTHON_BACKPORT_VECTORCALL - if (__Pyx_CyFunction_CheckExact(func)) { - __pyx_vectorcallfunc f = __Pyx_CyFunction_func_vectorcall(func); - if (f) return f(func, args, (size_t)nargs, kwargs); - } - #endif - if (nargs == 0) { - return __Pyx_PyObject_Call(func, __pyx_empty_tuple, kwargs); - } - return __Pyx_PyObject_FastCall_fallback(func, args, (size_t)nargs, kwargs); -} - -/* RaiseUnexpectedTypeError */ -static int -__Pyx_RaiseUnexpectedTypeError(const char *expected, PyObject *obj) -{ - __Pyx_TypeName obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, "Expected %s, got " __Pyx_FMT_TYPENAME, - expected, obj_type_name); - __Pyx_DECREF_TypeName(obj_type_name); - return 0; -} - -/* CIntToDigits */ -static const char DIGIT_PAIRS_10[2*10*10+1] = { - "00010203040506070809" - "10111213141516171819" - "20212223242526272829" - "30313233343536373839" - "40414243444546474849" - "50515253545556575859" - "60616263646566676869" - "70717273747576777879" - "80818283848586878889" - "90919293949596979899" -}; -static const char DIGIT_PAIRS_8[2*8*8+1] = { - "0001020304050607" - "1011121314151617" - "2021222324252627" - "3031323334353637" - "4041424344454647" - "5051525354555657" - 
"6061626364656667" - "7071727374757677" -}; -static const char DIGITS_HEX[2*16+1] = { - "0123456789abcdef" - "0123456789ABCDEF" -}; - -/* BuildPyUnicode */ -static PyObject* __Pyx_PyUnicode_BuildFromAscii(Py_ssize_t ulength, char* chars, int clength, - int prepend_sign, char padding_char) { - PyObject *uval; - Py_ssize_t uoffset = ulength - clength; -#if CYTHON_USE_UNICODE_INTERNALS - Py_ssize_t i; -#if CYTHON_PEP393_ENABLED - void *udata; - uval = PyUnicode_New(ulength, 127); - if (unlikely(!uval)) return NULL; - udata = PyUnicode_DATA(uval); -#else - Py_UNICODE *udata; - uval = PyUnicode_FromUnicode(NULL, ulength); - if (unlikely(!uval)) return NULL; - udata = PyUnicode_AS_UNICODE(uval); -#endif - if (uoffset > 0) { - i = 0; - if (prepend_sign) { - __Pyx_PyUnicode_WRITE(PyUnicode_1BYTE_KIND, udata, 0, '-'); - i++; - } - for (; i < uoffset; i++) { - __Pyx_PyUnicode_WRITE(PyUnicode_1BYTE_KIND, udata, i, padding_char); - } - } - for (i=0; i < clength; i++) { - __Pyx_PyUnicode_WRITE(PyUnicode_1BYTE_KIND, udata, uoffset+i, chars[i]); - } -#else - { - PyObject *sign = NULL, *padding = NULL; - uval = NULL; - if (uoffset > 0) { - prepend_sign = !!prepend_sign; - if (uoffset > prepend_sign) { - padding = PyUnicode_FromOrdinal(padding_char); - if (likely(padding) && uoffset > prepend_sign + 1) { - PyObject *tmp; - PyObject *repeat = PyInt_FromSsize_t(uoffset - prepend_sign); - if (unlikely(!repeat)) goto done_or_error; - tmp = PyNumber_Multiply(padding, repeat); - Py_DECREF(repeat); - Py_DECREF(padding); - padding = tmp; - } - if (unlikely(!padding)) goto done_or_error; - } - if (prepend_sign) { - sign = PyUnicode_FromOrdinal('-'); - if (unlikely(!sign)) goto done_or_error; - } - } - uval = PyUnicode_DecodeASCII(chars, clength, NULL); - if (likely(uval) && padding) { - PyObject *tmp = PyNumber_Add(padding, uval); - Py_DECREF(uval); - uval = tmp; - } - if (likely(uval) && sign) { - PyObject *tmp = PyNumber_Add(sign, uval); - Py_DECREF(uval); - uval = tmp; - } -done_or_error: - Py_XDECREF(padding); - Py_XDECREF(sign); - } -#endif - return uval; -} - -/* CIntToPyUnicode */ -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_From_int(int value, Py_ssize_t width, char padding_char, char format_char) { - char digits[sizeof(int)*3+2]; - char *dpos, *end = digits + sizeof(int)*3+2; - const char *hex_digits = DIGITS_HEX; - Py_ssize_t length, ulength; - int prepend_sign, last_one_off; - int remaining; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (format_char == 'X') { - hex_digits += 16; - format_char = 'x'; - } - remaining = value; - last_one_off = 0; - dpos = end; - do { - int digit_pos; - switch (format_char) { - case 'o': - digit_pos = abs((int)(remaining % (8*8))); - remaining = (int) (remaining / (8*8)); - dpos -= 2; - memcpy(dpos, DIGIT_PAIRS_8 + digit_pos * 2, 2); - last_one_off = (digit_pos < 8); - break; - case 'd': - digit_pos = abs((int)(remaining % (10*10))); - remaining = (int) (remaining / (10*10)); - dpos -= 2; - memcpy(dpos, DIGIT_PAIRS_10 + digit_pos * 2, 2); - last_one_off = (digit_pos < 10); - break; - case 'x': - *(--dpos) = hex_digits[abs((int)(remaining % 16))]; - remaining = (int) (remaining / 16); - break; - default: - assert(0); - break; - } - } while (unlikely(remaining != 0)); - assert(!last_one_off || *dpos == '0'); - dpos += last_one_off; 
- length = end - dpos; - ulength = length; - prepend_sign = 0; - if (!is_unsigned && value <= neg_one) { - if (padding_char == ' ' || width <= length + 1) { - *(--dpos) = '-'; - ++length; - } else { - prepend_sign = 1; - } - ++ulength; - } - if (width > ulength) { - ulength = width; - } - if (ulength == 1) { - return PyUnicode_FromOrdinal(*dpos); - } - return __Pyx_PyUnicode_BuildFromAscii(ulength, dpos, (int) length, prepend_sign, padding_char); -} - -/* CIntToPyUnicode */ -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_From_Py_ssize_t(Py_ssize_t value, Py_ssize_t width, char padding_char, char format_char) { - char digits[sizeof(Py_ssize_t)*3+2]; - char *dpos, *end = digits + sizeof(Py_ssize_t)*3+2; - const char *hex_digits = DIGITS_HEX; - Py_ssize_t length, ulength; - int prepend_sign, last_one_off; - Py_ssize_t remaining; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const Py_ssize_t neg_one = (Py_ssize_t) -1, const_zero = (Py_ssize_t) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (format_char == 'X') { - hex_digits += 16; - format_char = 'x'; - } - remaining = value; - last_one_off = 0; - dpos = end; - do { - int digit_pos; - switch (format_char) { - case 'o': - digit_pos = abs((int)(remaining % (8*8))); - remaining = (Py_ssize_t) (remaining / (8*8)); - dpos -= 2; - memcpy(dpos, DIGIT_PAIRS_8 + digit_pos * 2, 2); - last_one_off = (digit_pos < 8); - break; - case 'd': - digit_pos = abs((int)(remaining % (10*10))); - remaining = (Py_ssize_t) (remaining / (10*10)); - dpos -= 2; - memcpy(dpos, DIGIT_PAIRS_10 + digit_pos * 2, 2); - last_one_off = (digit_pos < 10); - break; - case 'x': - *(--dpos) = hex_digits[abs((int)(remaining % 16))]; - remaining = (Py_ssize_t) (remaining / 16); - break; - default: - assert(0); - break; - } - } while (unlikely(remaining != 0)); - assert(!last_one_off || *dpos == '0'); - dpos += last_one_off; - length = end - dpos; - ulength = length; - prepend_sign = 0; - if (!is_unsigned && value <= neg_one) { - if (padding_char == ' ' || width <= length + 1) { - *(--dpos) = '-'; - ++length; - } else { - prepend_sign = 1; - } - ++ulength; - } - if (width > ulength) { - ulength = width; - } - if (ulength == 1) { - return PyUnicode_FromOrdinal(*dpos); - } - return __Pyx_PyUnicode_BuildFromAscii(ulength, dpos, (int) length, prepend_sign, padding_char); -} - -/* JoinPyUnicode */ -static PyObject* __Pyx_PyUnicode_Join(PyObject* value_tuple, Py_ssize_t value_count, Py_ssize_t result_ulength, - Py_UCS4 max_char) { -#if CYTHON_USE_UNICODE_INTERNALS && CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - PyObject *result_uval; - int result_ukind, kind_shift; - Py_ssize_t i, char_pos; - void *result_udata; - CYTHON_MAYBE_UNUSED_VAR(max_char); -#if CYTHON_PEP393_ENABLED - result_uval = PyUnicode_New(result_ulength, max_char); - if (unlikely(!result_uval)) return NULL; - result_ukind = (max_char <= 255) ? PyUnicode_1BYTE_KIND : (max_char <= 65535) ? PyUnicode_2BYTE_KIND : PyUnicode_4BYTE_KIND; - kind_shift = (result_ukind == PyUnicode_4BYTE_KIND) ? 2 : result_ukind - 1; - result_udata = PyUnicode_DATA(result_uval); -#else - result_uval = PyUnicode_FromUnicode(NULL, result_ulength); - if (unlikely(!result_uval)) return NULL; - result_ukind = sizeof(Py_UNICODE); - kind_shift = (result_ukind == 4) ? 
2 : result_ukind - 1; - result_udata = PyUnicode_AS_UNICODE(result_uval); -#endif - assert(kind_shift == 2 || kind_shift == 1 || kind_shift == 0); - char_pos = 0; - for (i=0; i < value_count; i++) { - int ukind; - Py_ssize_t ulength; - void *udata; - PyObject *uval = PyTuple_GET_ITEM(value_tuple, i); - if (unlikely(__Pyx_PyUnicode_READY(uval))) - goto bad; - ulength = __Pyx_PyUnicode_GET_LENGTH(uval); - if (unlikely(!ulength)) - continue; - if (unlikely((PY_SSIZE_T_MAX >> kind_shift) - ulength < char_pos)) - goto overflow; - ukind = __Pyx_PyUnicode_KIND(uval); - udata = __Pyx_PyUnicode_DATA(uval); - if (!CYTHON_PEP393_ENABLED || ukind == result_ukind) { - memcpy((char *)result_udata + (char_pos << kind_shift), udata, (size_t) (ulength << kind_shift)); - } else { - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030300F0 || defined(_PyUnicode_FastCopyCharacters) - _PyUnicode_FastCopyCharacters(result_uval, char_pos, uval, 0, ulength); - #else - Py_ssize_t j; - for (j=0; j < ulength; j++) { - Py_UCS4 uchar = __Pyx_PyUnicode_READ(ukind, udata, j); - __Pyx_PyUnicode_WRITE(result_ukind, result_udata, char_pos+j, uchar); - } - #endif - } - char_pos += ulength; - } - return result_uval; -overflow: - PyErr_SetString(PyExc_OverflowError, "join() result is too long for a Python string"); -bad: - Py_DECREF(result_uval); - return NULL; -#else - CYTHON_UNUSED_VAR(max_char); - CYTHON_UNUSED_VAR(result_ulength); - CYTHON_UNUSED_VAR(value_count); - return PyUnicode_Join(__pyx_empty_unicode, value_tuple); -#endif -} - -/* GetAttr */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) { -#if CYTHON_USE_TYPE_SLOTS -#if PY_MAJOR_VERSION >= 3 - if (likely(PyUnicode_Check(n))) -#else - if (likely(PyString_Check(n))) -#endif - return __Pyx_PyObject_GetAttrStr(o, n); -#endif - return PyObject_GetAttr(o, n); -} - -/* GetItemInt */ -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (unlikely(!j)) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyList_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyTuple_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || 
PyList_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); - if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { - PyObject *r = PyList_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } - else if (PyTuple_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyTuple_GET_SIZE(o); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } else { - PyMappingMethods *mm = Py_TYPE(o)->tp_as_mapping; - PySequenceMethods *sm = Py_TYPE(o)->tp_as_sequence; - if (mm && mm->mp_subscript) { - PyObject *r, *key = PyInt_FromSsize_t(i); - if (unlikely(!key)) return NULL; - r = mm->mp_subscript(o, key); - Py_DECREF(key); - return r; - } - if (likely(sm && sm->sq_item)) { - if (wraparound && unlikely(i < 0) && likely(sm->sq_length)) { - Py_ssize_t l = sm->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return NULL; - PyErr_Clear(); - } - } - return sm->sq_item(o, i); - } - } -#else - if (is_list || PySequence_Check(o)) { - return PySequence_GetItem(o, i); - } -#endif - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -} - -/* PyObjectCallOneArg */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *args[2] = {NULL, arg}; - return __Pyx_PyObject_FastCall(func, args+1, 1 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* ObjectGetItem */ -#if CYTHON_USE_TYPE_SLOTS -static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject *index) { - PyObject *runerr = NULL; - Py_ssize_t key_value; - key_value = __Pyx_PyIndex_AsSsize_t(index); - if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) { - return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1); - } - if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) { - __Pyx_TypeName index_type_name = __Pyx_PyType_GetName(Py_TYPE(index)); - PyErr_Clear(); - PyErr_Format(PyExc_IndexError, - "cannot fit '" __Pyx_FMT_TYPENAME "' into an index-sized integer", index_type_name); - __Pyx_DECREF_TypeName(index_type_name); - } - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem_Slow(PyObject *obj, PyObject *key) { - __Pyx_TypeName obj_type_name; - if (likely(PyType_Check(obj))) { - PyObject *meth = __Pyx_PyObject_GetAttrStrNoError(obj, __pyx_n_s_class_getitem); - if (meth) { - PyObject *result = __Pyx_PyObject_CallOneArg(meth, key); - Py_DECREF(meth); - return result; - } - } - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, - "'" __Pyx_FMT_TYPENAME "' object is not subscriptable", obj_type_name); - __Pyx_DECREF_TypeName(obj_type_name); - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject *key) { - PyTypeObject *tp = Py_TYPE(obj); - PyMappingMethods *mm = tp->tp_as_mapping; - PySequenceMethods *sm = tp->tp_as_sequence; - if (likely(mm && mm->mp_subscript)) { - return mm->mp_subscript(obj, key); - } - if (likely(sm && sm->sq_item)) { - return __Pyx_PyObject_GetIndex(obj, key); - } - return __Pyx_PyObject_GetItem_Slow(obj, key); -} -#endif - -/* KeywordStringCheck */ -static int __Pyx_CheckKeywordStrings( - PyObject *kw, - const char* function_name, - int kw_allowed) -{ - PyObject* key = 0; - Py_ssize_t pos = 0; -#if CYTHON_COMPILING_IN_PYPY - if (!kw_allowed && PyDict_Next(kw, &pos, &key, 0)) - goto invalid_keyword; - return 1; -#else - if (CYTHON_METH_FASTCALL && 
likely(PyTuple_Check(kw))) { - if (unlikely(PyTuple_GET_SIZE(kw) == 0)) - return 1; - if (!kw_allowed) { - key = PyTuple_GET_ITEM(kw, 0); - goto invalid_keyword; - } -#if PY_VERSION_HEX < 0x03090000 - for (pos = 0; pos < PyTuple_GET_SIZE(kw); pos++) { - key = PyTuple_GET_ITEM(kw, pos); - if (unlikely(!PyUnicode_Check(key))) - goto invalid_keyword_type; - } -#endif - return 1; - } - while (PyDict_Next(kw, &pos, &key, 0)) { - #if PY_MAJOR_VERSION < 3 - if (unlikely(!PyString_Check(key))) - #endif - if (unlikely(!PyUnicode_Check(key))) - goto invalid_keyword_type; - } - if (!kw_allowed && unlikely(key)) - goto invalid_keyword; - return 1; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - return 0; -#endif -invalid_keyword: - #if PY_MAJOR_VERSION < 3 - PyErr_Format(PyExc_TypeError, - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - PyErr_Format(PyExc_TypeError, - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif - return 0; -} - -/* DivInt[Py_ssize_t] */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t a, Py_ssize_t b) { - Py_ssize_t q = a / b; - Py_ssize_t r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* GetAttr3 */ -static PyObject *__Pyx_GetAttr3Default(PyObject *d) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (unlikely(!__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - return NULL; - __Pyx_PyErr_Clear(); - Py_INCREF(d); - return d; -} -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, PyObject *d) { - PyObject *r; -#if CYTHON_USE_TYPE_SLOTS - if (likely(PyString_Check(n))) { - r = __Pyx_PyObject_GetAttrStrNoError(o, n); - if (unlikely(!r) && likely(!PyErr_Occurred())) { - r = __Pyx_NewRef(d); - } - return r; - } -#endif - r = PyObject_GetAttr(o, n); - return (likely(r)) ? r : __Pyx_GetAttr3Default(d); -} - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? 
__PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#elif CYTHON_COMPILING_IN_LIMITED_API - if (unlikely(!__pyx_m)) { - return NULL; - } - result = PyObject_GetAttr(__pyx_m, name); - if (likely(result)) { - return result; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? 
"" : "s"); -} - -/* RaiseNoneIterError */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); -} - -/* ExtTypeTest */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { - __Pyx_TypeName obj_type_name; - __Pyx_TypeName type_name; - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - if (likely(__Pyx_TypeCheck(obj, type))) - return 1; - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - type_name = __Pyx_PyType_GetName(type); - PyErr_Format(PyExc_TypeError, - "Cannot convert " __Pyx_FMT_TYPENAME " to " __Pyx_FMT_TYPENAME, - obj_type_name, type_name); - __Pyx_DECREF_TypeName(obj_type_name); - __Pyx_DECREF_TypeName(type_name); - return 0; -} - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK && CYTHON_FAST_THREAD_STATE -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_value == NULL || exc_info->exc_value == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK && PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - PyObject *exc_value = exc_info->exc_value; - if (exc_value == NULL || exc_value == Py_None) { - *value = NULL; - *type = NULL; - *tb = NULL; - } else { - *value = exc_value; - Py_INCREF(*value); - *type = (PyObject*) Py_TYPE(exc_value); - Py_INCREF(*type); - *tb = PyException_GetTraceback(exc_value); - } - #elif CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); - #endif -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - #if CYTHON_USE_EXC_INFO_STACK && PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = tstate->exc_info; - PyObject *tmp_value = exc_info->exc_value; - exc_info->exc_value = value; - Py_XDECREF(tmp_value); - Py_XDECREF(type); - Py_XDECREF(tb); - #else - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); - #endif -} -#endif - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - PyObject *local_type 
= NULL, *local_value, *local_tb = NULL; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if PY_VERSION_HEX >= 0x030C00A6 - local_value = tstate->current_exception; - tstate->current_exception = 0; - if (likely(local_value)) { - local_type = (PyObject*) Py_TYPE(local_value); - Py_INCREF(local_type); - local_tb = PyException_GetTraceback(local_value); - } - #else - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; - #endif -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE && PY_VERSION_HEX >= 0x030C00A6 - if (unlikely(tstate->current_exception)) -#elif CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - #if PY_VERSION_HEX >= 0x030B00a4 - tmp_value = exc_info->exc_value; - exc_info->exc_value = local_value; - tmp_type = NULL; - tmp_tb = NULL; - Py_XDECREF(local_type); - Py_XDECREF(local_tb); - #else - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - #endif - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* SwapException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK && PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_value = exc_info->exc_value; - exc_info->exc_value = *value; - if (tmp_value == NULL || tmp_value == Py_None) { - Py_XDECREF(tmp_value); - tmp_value = NULL; - tmp_type = NULL; - tmp_tb = NULL; - } else { - tmp_type = (PyObject*) Py_TYPE(tmp_value); - Py_INCREF(tmp_type); - #if CYTHON_COMPILING_IN_CPYTHON - tmp_tb = ((PyBaseExceptionObject*) tmp_value)->traceback; - Py_XINCREF(tmp_tb); - #else - tmp_tb = PyException_GetTraceback(tmp_value); - #endif - } - #elif CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = *type; - exc_info->exc_value = *value; - exc_info->exc_traceback = *tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = *type; - tstate->exc_value = *value; - 
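/* Editor's note: a minimal sketch of what the exception-state helpers in this
   region (__Pyx__ExceptionSave / __Pyx__ExceptionReset / __Pyx__GetException /
   __Pyx__ExceptionSwap) implement, expressed with only the public CPython 3
   API instead of direct thread-state field access. The helper name below is
   illustrative, not part of this file:

       static void swap_handled_exception(PyObject **type, PyObject **value, PyObject **tb) {
           PyObject *t, *v, *b;
           PyErr_GetExcInfo(&t, &v, &b);           // take ownership of the currently handled exception
           PyErr_SetExcInfo(*type, *value, *tb);   // install the caller's triple (references are stolen)
           *type = t; *value = v; *tb = b;         // hand the previous triple back to the caller
       }

   The fast paths around this comment perform the same swap by touching
   tstate->exc_info / tstate->exc_* directly, which avoids the extra calls but
   has to track CPython's internal layout per version (hence the #if ladder). */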
tstate->exc_traceback = *tb; - #endif - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); - PyErr_SetExcInfo(*type, *value, *tb); - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#endif - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *module = 0; - PyObject *empty_dict = 0; - PyObject *empty_list = 0; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); - if (unlikely(!py_import)) - goto bad; - if (!from_list) { - empty_list = PyList_New(0); - if (unlikely(!empty_list)) - goto bad; - from_list = empty_list; - } - #endif - empty_dict = PyDict_New(); - if (unlikely(!empty_dict)) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) { - #if CYTHON_COMPILING_IN_LIMITED_API - module = PyImport_ImportModuleLevelObject( - name, empty_dict, empty_dict, from_list, 1); - #else - module = PyImport_ImportModuleLevelObject( - name, __pyx_d, empty_dict, from_list, 1); - #endif - if (unlikely(!module)) { - if (unlikely(!PyErr_ExceptionMatches(PyExc_ImportError))) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (unlikely(!py_level)) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, __pyx_d, empty_dict, from_list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - #if CYTHON_COMPILING_IN_LIMITED_API - module = PyImport_ImportModuleLevelObject( - name, empty_dict, empty_dict, from_list, level); - #else - module = PyImport_ImportModuleLevelObject( - name, __pyx_d, empty_dict, from_list, level); - #endif - #endif - } - } -bad: - Py_XDECREF(empty_dict); - Py_XDECREF(empty_list); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - return module; -} - -/* ImportDottedModule */ -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx__ImportDottedModule_Error(PyObject *name, PyObject *parts_tuple, Py_ssize_t count) { - PyObject *partial_name = NULL, *slice = NULL, *sep = NULL; - if (unlikely(PyErr_Occurred())) { - PyErr_Clear(); - } - if (likely(PyTuple_GET_SIZE(parts_tuple) == count)) { - partial_name = name; - } else { - slice = PySequence_GetSlice(parts_tuple, 0, count); - if (unlikely(!slice)) - goto bad; - sep = PyUnicode_FromStringAndSize(".", 1); - if (unlikely(!sep)) - goto bad; - partial_name = PyUnicode_Join(sep, slice); - } - PyErr_Format( -#if PY_MAJOR_VERSION < 3 - PyExc_ImportError, - "No module named '%s'", PyString_AS_STRING(partial_name)); -#else -#if PY_VERSION_HEX >= 0x030600B1 - PyExc_ModuleNotFoundError, -#else - PyExc_ImportError, -#endif - "No module named '%U'", partial_name); -#endif -bad: - Py_XDECREF(sep); - Py_XDECREF(slice); - Py_XDECREF(partial_name); - return NULL; -} -#endif -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx__ImportDottedModule_Lookup(PyObject *name) { - PyObject *imported_module; -#if PY_VERSION_HEX < 0x030700A1 || (CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM < 0x07030400) - PyObject *modules = PyImport_GetModuleDict(); - if (unlikely(!modules)) - return NULL; - imported_module = __Pyx_PyDict_GetItemStr(modules, name); - Py_XINCREF(imported_module); -#else - imported_module = PyImport_GetModule(name); -#endif - return 
imported_module; -} -#endif -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx_ImportDottedModule_WalkParts(PyObject *module, PyObject *name, PyObject *parts_tuple) { - Py_ssize_t i, nparts; - nparts = PyTuple_GET_SIZE(parts_tuple); - for (i=1; i < nparts && module; i++) { - PyObject *part, *submodule; -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - part = PyTuple_GET_ITEM(parts_tuple, i); -#else - part = PySequence_ITEM(parts_tuple, i); -#endif - submodule = __Pyx_PyObject_GetAttrStrNoError(module, part); -#if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(part); -#endif - Py_DECREF(module); - module = submodule; - } - if (unlikely(!module)) { - return __Pyx__ImportDottedModule_Error(name, parts_tuple, i); - } - return module; -} -#endif -static PyObject *__Pyx__ImportDottedModule(PyObject *name, PyObject *parts_tuple) { -#if PY_MAJOR_VERSION < 3 - PyObject *module, *from_list, *star = __pyx_n_s__3; - CYTHON_UNUSED_VAR(parts_tuple); - from_list = PyList_New(1); - if (unlikely(!from_list)) - return NULL; - Py_INCREF(star); - PyList_SET_ITEM(from_list, 0, star); - module = __Pyx_Import(name, from_list, 0); - Py_DECREF(from_list); - return module; -#else - PyObject *imported_module; - PyObject *module = __Pyx_Import(name, NULL, 0); - if (!parts_tuple || unlikely(!module)) - return module; - imported_module = __Pyx__ImportDottedModule_Lookup(name); - if (likely(imported_module)) { - Py_DECREF(module); - return imported_module; - } - PyErr_Clear(); - return __Pyx_ImportDottedModule_WalkParts(module, name, parts_tuple); -#endif -} -static PyObject *__Pyx_ImportDottedModule(PyObject *name, PyObject *parts_tuple) { -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030400B1 - PyObject *module = __Pyx__ImportDottedModule_Lookup(name); - if (likely(module)) { - PyObject *spec = __Pyx_PyObject_GetAttrStrNoError(module, __pyx_n_s_spec); - if (likely(spec)) { - PyObject *unsafe = __Pyx_PyObject_GetAttrStrNoError(spec, __pyx_n_s_initializing); - if (likely(!unsafe || !__Pyx_PyObject_IsTrue(unsafe))) { - Py_DECREF(spec); - spec = NULL; - } - Py_XDECREF(unsafe); - } - if (likely(!spec)) { - PyErr_Clear(); - return module; - } - Py_DECREF(spec); - Py_DECREF(module); - } else if (PyErr_Occurred()) { - PyErr_Clear(); - } -#endif - return __Pyx__ImportDottedModule(name, parts_tuple); -} - -/* ssize_strlen */ -static CYTHON_INLINE Py_ssize_t __Pyx_ssize_strlen(const char *s) { - size_t len = strlen(s); - if (unlikely(len > PY_SSIZE_T_MAX)) { - PyErr_SetString(PyExc_OverflowError, "byte string is too long"); - return -1; - } - return (Py_ssize_t) len; -} - -/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = __Pyx_PyType_GetSlot(a, tp_base, PyTypeObject*); - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -static CYTHON_INLINE int __Pyx_IsAnySubtype2(PyTypeObject *cls, PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (cls == a || cls == b) return 1; - mro = cls->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - PyObject *base = PyTuple_GET_ITEM(mro, i); - if 
(base == (PyObject *)a || base == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(cls, a) || __Pyx_InBases(cls, b); -} -#if PY_MAJOR_VERSION == 2 -static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { - PyObject *exception, *value, *tb; - int res; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&exception, &value, &tb); - res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0; - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - if (!res) { - res = PyObject_IsSubclass(err, exc_type2); - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - } - __Pyx_ErrRestore(exception, value, tb); - return res; -} -#else -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - if (exc_type1) { - return __Pyx_IsAnySubtype2((PyTypeObject*)err, (PyTypeObject*)exc_type1, (PyTypeObject*)exc_type2); - } else { - return __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } -} -#endif -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } -#endif - for (i=0; i<n; i++) { - PyObject *t = PyTuple_GET_ITEM(tuple, i); - #if PY_MAJOR_VERSION < 3 - if (likely(exc_type == t)) return 1; - #endif - if (likely(PyExceptionClass_Check(t))) { - if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1; - } else { - } - } - return 0; -} -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) { - if (likely(err == exc_type)) return 1; - if (likely(PyExceptionClass_Check(err))) { - if (likely(PyExceptionClass_Check(exc_type))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type); - } else if (likely(PyTuple_Check(exc_type))) { - return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type); - } else { - } - } - return PyErr_GivenExceptionMatches(err, exc_type); -} -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) { - assert(PyExceptionClass_Check(exc_type1)); - assert(PyExceptionClass_Check(exc_type2)); - if (likely(err == exc_type1 || err == exc_type2)) return 1; - if (likely(PyExceptionClass_Check(err))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2); - } - return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2)); -} -#endif - -/* PySequenceMultiply */ -static PyObject* __Pyx_PySequence_Multiply_Generic(PyObject *seq, Py_ssize_t mul) { - PyObject *result, *pymul = PyInt_FromSsize_t(mul); - if (unlikely(!pymul)) - return NULL; - result = PyNumber_Multiply(seq, pymul); - Py_DECREF(pymul); - return result; -} -static CYTHON_INLINE PyObject* __Pyx_PySequence_Multiply(PyObject *seq, Py_ssize_t mul) { -#if CYTHON_USE_TYPE_SLOTS - PyTypeObject *type = Py_TYPE(seq); - if (likely(type->tp_as_sequence && type->tp_as_sequence->sq_repeat)) { - return type->tp_as_sequence->sq_repeat(seq, mul); - } else -#endif - { - return __Pyx_PySequence_Multiply_Generic(seq, mul); - } -} - -/* SetItemInt */ -static int __Pyx_SetItemInt_Generic(PyObject *o, PyObject *j, PyObject *v) { - int r; - if (unlikely(!j)) return -1; - r = PyObject_SetItem(o, j, v); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE int __Pyx_SetItemInt_Fast(PyObject *o, Py_ssize_t i, PyObject *v, int is_list, - CYTHON_NCP_UNUSED int wraparound, CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = (!wraparound) ? i : ((likely(i >= 0)) ? 
i : i + PyList_GET_SIZE(o)); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o)))) { - PyObject* old = PyList_GET_ITEM(o, n); - Py_INCREF(v); - PyList_SET_ITEM(o, n, v); - Py_DECREF(old); - return 1; - } - } else { - PyMappingMethods *mm = Py_TYPE(o)->tp_as_mapping; - PySequenceMethods *sm = Py_TYPE(o)->tp_as_sequence; - if (mm && mm->mp_ass_subscript) { - int r; - PyObject *key = PyInt_FromSsize_t(i); - if (unlikely(!key)) return -1; - r = mm->mp_ass_subscript(o, key, v); - Py_DECREF(key); - return r; - } - if (likely(sm && sm->sq_ass_item)) { - if (wraparound && unlikely(i < 0) && likely(sm->sq_length)) { - Py_ssize_t l = sm->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return -1; - PyErr_Clear(); - } - } - return sm->sq_ass_item(o, i, v); - } - } -#else -#if CYTHON_COMPILING_IN_PYPY - if (is_list || (PySequence_Check(o) && !PyDict_Check(o))) -#else - if (is_list || PySequence_Check(o)) -#endif - { - return PySequence_SetItem(o, i, v); - } -#endif - return __Pyx_SetItemInt_Generic(o, PyInt_FromSsize_t(i), v); -} - -/* RaiseUnboundLocalError */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname) { - PyErr_Format(PyExc_UnboundLocalError, "local variable '%s' referenced before assignment", varname); -} - -/* DivInt[long] */ -static CYTHON_INLINE long __Pyx_div_long(long a, long b) { - long q = a / b; - long r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* ImportFrom */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - const char* module_name_str = 0; - PyObject* module_name = 0; - PyObject* module_dot = 0; - PyObject* full_name = 0; - PyErr_Clear(); - module_name_str = PyModule_GetName(module); - if (unlikely(!module_name_str)) { goto modbad; } - module_name = PyUnicode_FromString(module_name_str); - if (unlikely(!module_name)) { goto modbad; } - module_dot = PyUnicode_Concat(module_name, __pyx_kp_u__2); - if (unlikely(!module_dot)) { goto modbad; } - full_name = PyUnicode_Concat(module_dot, name); - if (unlikely(!full_name)) { goto modbad; } - #if PY_VERSION_HEX < 0x030700A1 || (CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM < 0x07030400) - { - PyObject *modules = PyImport_GetModuleDict(); - if (unlikely(!modules)) - goto modbad; - value = PyObject_GetItem(modules, full_name); - } - #else - value = PyImport_GetModule(full_name); - #endif - modbad: - Py_XDECREF(full_name); - Py_XDECREF(module_dot); - Py_XDECREF(module_name); - } - if (unlikely(!value)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* HasAttr */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *o, PyObject *n) { - PyObject *r; - if (unlikely(!__Pyx_PyBaseString_Check(n))) { - PyErr_SetString(PyExc_TypeError, - "hasattr(): attribute name must be string"); - return -1; - } - r = __Pyx_GetAttr(o, n); - if (!r) { - PyErr_Clear(); - return 0; - } else { - Py_DECREF(r); - return 1; - } -} - -/* ErrOccurredWithGIL */ -static CYTHON_INLINE int __Pyx_ErrOccurredWithGIL(void) { - int err; - #ifdef WITH_THREAD - PyGILState_STATE _save = PyGILState_Ensure(); - #endif - err = !!PyErr_Occurred(); - #ifdef WITH_THREAD - PyGILState_Release(_save); - #endif - return err; -} - -/* 
PyObject_GenericGetAttrNoDict */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) { - __Pyx_TypeName type_name = __Pyx_PyType_GetName(tp); - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%U'", - type_name, attr_name); -#else - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%.400s'", - type_name, PyString_AS_STRING(attr_name)); -#endif - __Pyx_DECREF_TypeName(type_name); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) { - PyObject *descr; - PyTypeObject *tp = Py_TYPE(obj); - if (unlikely(!PyString_Check(attr_name))) { - return PyObject_GenericGetAttr(obj, attr_name); - } - assert(!tp->tp_dictoffset); - descr = _PyType_Lookup(tp, attr_name); - if (unlikely(!descr)) { - return __Pyx_RaiseGenericGetAttributeError(tp, attr_name); - } - Py_INCREF(descr); - #if PY_MAJOR_VERSION < 3 - if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS))) - #endif - { - descrgetfunc f = Py_TYPE(descr)->tp_descr_get; - if (unlikely(f)) { - PyObject *res = f(descr, obj, (PyObject *)tp); - Py_DECREF(descr); - return res; - } - } - return descr; -} -#endif - -/* PyObject_GenericGetAttr */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name) { - if (unlikely(Py_TYPE(obj)->tp_dictoffset)) { - return PyObject_GenericGetAttr(obj, attr_name); - } - return __Pyx_PyObject_GenericGetAttrNoDict(obj, attr_name); -} -#endif - -/* FixUpExtensionType */ -#if CYTHON_USE_TYPE_SPECS -static int __Pyx_fix_up_extension_type_from_spec(PyType_Spec *spec, PyTypeObject *type) { -#if PY_VERSION_HEX > 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - CYTHON_UNUSED_VAR(spec); - CYTHON_UNUSED_VAR(type); -#else - const PyType_Slot *slot = spec->slots; - while (slot && slot->slot && slot->slot != Py_tp_members) - slot++; - if (slot && slot->slot == Py_tp_members) { - int changed = 0; -#if !(PY_VERSION_HEX <= 0x030900b1 && CYTHON_COMPILING_IN_CPYTHON) - const -#endif - PyMemberDef *memb = (PyMemberDef*) slot->pfunc; - while (memb && memb->name) { - if (memb->name[0] == '_' && memb->name[1] == '_') { -#if PY_VERSION_HEX < 0x030900b1 - if (strcmp(memb->name, "__weaklistoffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); - type->tp_weaklistoffset = memb->offset; - changed = 1; - } - else if (strcmp(memb->name, "__dictoffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); - type->tp_dictoffset = memb->offset; - changed = 1; - } -#if CYTHON_METH_FASTCALL - else if (strcmp(memb->name, "__vectorcalloffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); -#if PY_VERSION_HEX >= 0x030800b4 - type->tp_vectorcall_offset = memb->offset; -#else - type->tp_print = (printfunc) memb->offset; -#endif - changed = 1; - } -#endif -#else - if ((0)); -#endif -#if PY_VERSION_HEX <= 0x030900b1 && CYTHON_COMPILING_IN_CPYTHON - else if (strcmp(memb->name, "__module__") == 0) { - PyObject *descr; - assert(memb->type == T_OBJECT); - assert(memb->flags == 0 || memb->flags == READONLY); - descr = PyDescr_NewMember(type, memb); - if (unlikely(!descr)) - return -1; - if (unlikely(PyDict_SetItem(type->tp_dict, PyDescr_NAME(descr), descr) < 0)) { - Py_DECREF(descr); - return 
-1; - } - Py_DECREF(descr); - changed = 1; - } -#endif - } - memb++; - } - if (changed) - PyType_Modified(type); - } -#endif - return 0; -} -#endif - -/* PyObjectCallNoArg */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) { - PyObject *arg = NULL; - return __Pyx_PyObject_FastCall(func, (&arg)+1, 0 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* PyObjectGetMethod */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method) { - PyObject *attr; -#if CYTHON_UNPACK_METHODS && CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_PYTYPE_LOOKUP - __Pyx_TypeName type_name; - PyTypeObject *tp = Py_TYPE(obj); - PyObject *descr; - descrgetfunc f = NULL; - PyObject **dictptr, *dict; - int meth_found = 0; - assert (*method == NULL); - if (unlikely(tp->tp_getattro != PyObject_GenericGetAttr)) { - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; - } - if (unlikely(tp->tp_dict == NULL) && unlikely(PyType_Ready(tp) < 0)) { - return 0; - } - descr = _PyType_Lookup(tp, name); - if (likely(descr != NULL)) { - Py_INCREF(descr); -#if defined(Py_TPFLAGS_METHOD_DESCRIPTOR) && Py_TPFLAGS_METHOD_DESCRIPTOR - if (__Pyx_PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_METHOD_DESCRIPTOR)) -#elif PY_MAJOR_VERSION >= 3 - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || __Pyx_IS_TYPE(descr, &PyMethodDescr_Type) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr) || __Pyx_IS_TYPE(descr, &PyMethodDescr_Type))) - #endif -#else - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr))) - #endif -#endif - { - meth_found = 1; - } else { - f = Py_TYPE(descr)->tp_descr_get; - if (f != NULL && PyDescr_IsData(descr)) { - attr = f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - } - } - dictptr = _PyObject_GetDictPtr(obj); - if (dictptr != NULL && (dict = *dictptr) != NULL) { - Py_INCREF(dict); - attr = __Pyx_PyDict_GetItemStr(dict, name); - if (attr != NULL) { - Py_INCREF(attr); - Py_DECREF(dict); - Py_XDECREF(descr); - goto try_unpack; - } - Py_DECREF(dict); - } - if (meth_found) { - *method = descr; - return 1; - } - if (f != NULL) { - attr = f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - if (likely(descr != NULL)) { - *method = descr; - return 0; - } - type_name = __Pyx_PyType_GetName(tp); - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%U'", - type_name, name); -#else - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%.400s'", - type_name, PyString_AS_STRING(name)); -#endif - __Pyx_DECREF_TypeName(type_name); - return 0; -#else - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; -#endif -try_unpack: -#if CYTHON_UNPACK_METHODS - if (likely(attr) && PyMethod_Check(attr) && likely(PyMethod_GET_SELF(attr) == obj)) { - PyObject *function = PyMethod_GET_FUNCTION(attr); - Py_INCREF(function); - Py_DECREF(attr); - *method = function; - return 1; - } -#endif - *method = attr; - return 0; -} - -/* PyObjectCallMethod0 */ -static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name) { - PyObject *method = NULL, *result = NULL; - int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method); - if (likely(is_method)) { - result = __Pyx_PyObject_CallOneArg(method, obj); - Py_DECREF(method); - return result; - } - if (unlikely(!method)) goto bad; - result = 
__Pyx_PyObject_CallNoArg(method); - Py_DECREF(method); -bad: - return result; -} - -/* ValidateBasesTuple */ -#if CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API || CYTHON_USE_TYPE_SPECS -static int __Pyx_validate_bases_tuple(const char *type_name, Py_ssize_t dictoffset, PyObject *bases) { - Py_ssize_t i, n = PyTuple_GET_SIZE(bases); - for (i = 1; i < n; i++) - { - PyObject *b0 = PyTuple_GET_ITEM(bases, i); - PyTypeObject *b; -#if PY_MAJOR_VERSION < 3 - if (PyClass_Check(b0)) - { - PyErr_Format(PyExc_TypeError, "base class '%.200s' is an old-style class", - PyString_AS_STRING(((PyClassObject*)b0)->cl_name)); - return -1; - } -#endif - b = (PyTypeObject*) b0; - if (!__Pyx_PyType_HasFeature(b, Py_TPFLAGS_HEAPTYPE)) - { - __Pyx_TypeName b_name = __Pyx_PyType_GetName(b); - PyErr_Format(PyExc_TypeError, - "base class '" __Pyx_FMT_TYPENAME "' is not a heap type", b_name); - __Pyx_DECREF_TypeName(b_name); - return -1; - } - if (dictoffset == 0 && b->tp_dictoffset) - { - __Pyx_TypeName b_name = __Pyx_PyType_GetName(b); - PyErr_Format(PyExc_TypeError, - "extension type '%.200s' has no __dict__ slot, " - "but base type '" __Pyx_FMT_TYPENAME "' has: " - "either add 'cdef dict __dict__' to the extension type " - "or add '__slots__ = [...]' to the base type", - type_name, b_name); - __Pyx_DECREF_TypeName(b_name); - return -1; - } - } - return 0; -} -#endif - -/* PyType_Ready */ -static int __Pyx_PyType_Ready(PyTypeObject *t) { -#if CYTHON_USE_TYPE_SPECS || !(CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API) || defined(PYSTON_MAJOR_VERSION) - (void)__Pyx_PyObject_CallMethod0; -#if CYTHON_USE_TYPE_SPECS - (void)__Pyx_validate_bases_tuple; -#endif - return PyType_Ready(t); -#else - int r; - PyObject *bases = __Pyx_PyType_GetSlot(t, tp_bases, PyObject*); - if (bases && unlikely(__Pyx_validate_bases_tuple(t->tp_name, t->tp_dictoffset, bases) == -1)) - return -1; -#if PY_VERSION_HEX >= 0x03050000 && !defined(PYSTON_MAJOR_VERSION) - { - int gc_was_enabled; - #if PY_VERSION_HEX >= 0x030A00b1 - gc_was_enabled = PyGC_Disable(); - (void)__Pyx_PyObject_CallMethod0; - #else - PyObject *ret, *py_status; - PyObject *gc = NULL; - #if PY_VERSION_HEX >= 0x030700a1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM+0 >= 0x07030400) - gc = PyImport_GetModule(__pyx_kp_u_gc); - #endif - if (unlikely(!gc)) gc = PyImport_Import(__pyx_kp_u_gc); - if (unlikely(!gc)) return -1; - py_status = __Pyx_PyObject_CallMethod0(gc, __pyx_kp_u_isenabled); - if (unlikely(!py_status)) { - Py_DECREF(gc); - return -1; - } - gc_was_enabled = __Pyx_PyObject_IsTrue(py_status); - Py_DECREF(py_status); - if (gc_was_enabled > 0) { - ret = __Pyx_PyObject_CallMethod0(gc, __pyx_kp_u_disable); - if (unlikely(!ret)) { - Py_DECREF(gc); - return -1; - } - Py_DECREF(ret); - } else if (unlikely(gc_was_enabled == -1)) { - Py_DECREF(gc); - return -1; - } - #endif - t->tp_flags |= Py_TPFLAGS_HEAPTYPE; -#if PY_VERSION_HEX >= 0x030A0000 - t->tp_flags |= Py_TPFLAGS_IMMUTABLETYPE; -#endif -#else - (void)__Pyx_PyObject_CallMethod0; -#endif - r = PyType_Ready(t); -#if PY_VERSION_HEX >= 0x03050000 && !defined(PYSTON_MAJOR_VERSION) - t->tp_flags &= ~Py_TPFLAGS_HEAPTYPE; - #if PY_VERSION_HEX >= 0x030A00b1 - if (gc_was_enabled) - PyGC_Enable(); - #else - if (gc_was_enabled) { - PyObject *tp, *v, *tb; - PyErr_Fetch(&tp, &v, &tb); - ret = __Pyx_PyObject_CallMethod0(gc, __pyx_kp_u_enable); - if (likely(ret || r == -1)) { - Py_XDECREF(ret); - PyErr_Restore(tp, v, tb); - } else { - Py_XDECREF(tp); - Py_XDECREF(v); - Py_XDECREF(tb); - r = -1; - } - 
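/* Editor's note: the surrounding __Pyx_PyType_Ready wrapper temporarily sets
   Py_TPFLAGS_HEAPTYPE on a statically allocated extension type, calls
   PyType_Ready(), then clears the flag again. While the flag is set, the
   cycle collector is paused (PyGC_Disable()/PyGC_Enable() on 3.10+, the gc
   module's isenabled/disable/enable before that) so the collector never
   observes a static type masquerading as a heap type, and any exception
   already pending is preserved across the gc re-enable call via
   PyErr_Fetch/PyErr_Restore. A hedged sketch of the intended call site in
   generated module-init code (the type name is illustrative, not from this
   file):

       if (__Pyx_PyType_Ready(&__pyx_type_SomeExtType) < 0) goto module_init_error;
*/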
} - Py_DECREF(gc); - #endif - } -#endif - return r; -#endif -} - -/* SetVTable */ -static int __Pyx_SetVtable(PyTypeObject *type, void *vtable) { - PyObject *ob = PyCapsule_New(vtable, 0, 0); - if (unlikely(!ob)) - goto bad; -#if CYTHON_COMPILING_IN_LIMITED_API - if (unlikely(PyObject_SetAttr((PyObject *) type, __pyx_n_s_pyx_vtable, ob) < 0)) -#else - if (unlikely(PyDict_SetItem(type->tp_dict, __pyx_n_s_pyx_vtable, ob) < 0)) -#endif - goto bad; - Py_DECREF(ob); - return 0; -bad: - Py_XDECREF(ob); - return -1; -} - -/* GetVTable */ -static void* __Pyx_GetVtable(PyTypeObject *type) { - void* ptr; -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *ob = PyObject_GetAttr((PyObject *)type, __pyx_n_s_pyx_vtable); -#else - PyObject *ob = PyObject_GetItem(type->tp_dict, __pyx_n_s_pyx_vtable); -#endif - if (!ob) - goto bad; - ptr = PyCapsule_GetPointer(ob, 0); - if (!ptr && !PyErr_Occurred()) - PyErr_SetString(PyExc_RuntimeError, "invalid vtable found for imported type"); - Py_DECREF(ob); - return ptr; -bad: - Py_XDECREF(ob); - return NULL; -} - -/* MergeVTables */ -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __Pyx_MergeVtables(PyTypeObject *type) { - int i; - void** base_vtables; - __Pyx_TypeName tp_base_name; - __Pyx_TypeName base_name; - void* unknown = (void*)-1; - PyObject* bases = type->tp_bases; - int base_depth = 0; - { - PyTypeObject* base = type->tp_base; - while (base) { - base_depth += 1; - base = base->tp_base; - } - } - base_vtables = (void**) malloc(sizeof(void*) * (size_t)(base_depth + 1)); - base_vtables[0] = unknown; - for (i = 1; i < PyTuple_GET_SIZE(bases); i++) { - void* base_vtable = __Pyx_GetVtable(((PyTypeObject*)PyTuple_GET_ITEM(bases, i))); - if (base_vtable != NULL) { - int j; - PyTypeObject* base = type->tp_base; - for (j = 0; j < base_depth; j++) { - if (base_vtables[j] == unknown) { - base_vtables[j] = __Pyx_GetVtable(base); - base_vtables[j + 1] = unknown; - } - if (base_vtables[j] == base_vtable) { - break; - } else if (base_vtables[j] == NULL) { - goto bad; - } - base = base->tp_base; - } - } - } - PyErr_Clear(); - free(base_vtables); - return 0; -bad: - tp_base_name = __Pyx_PyType_GetName(type->tp_base); - base_name = __Pyx_PyType_GetName((PyTypeObject*)PyTuple_GET_ITEM(bases, i)); - PyErr_Format(PyExc_TypeError, - "multiple bases have vtable conflict: '" __Pyx_FMT_TYPENAME "' and '" __Pyx_FMT_TYPENAME "'", tp_base_name, base_name); - __Pyx_DECREF_TypeName(tp_base_name); - __Pyx_DECREF_TypeName(base_name); - free(base_vtables); - return -1; -} -#endif - -/* SetupReduce */ -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) { - int ret; - PyObject *name_attr; - name_attr = __Pyx_PyObject_GetAttrStrNoError(meth, __pyx_n_s_name_2); - if (likely(name_attr)) { - ret = PyObject_RichCompareBool(name_attr, name, Py_EQ); - } else { - ret = -1; - } - if (unlikely(ret < 0)) { - PyErr_Clear(); - ret = 0; - } - Py_XDECREF(name_attr); - return ret; -} -static int __Pyx_setup_reduce(PyObject* type_obj) { - int ret = 0; - PyObject *object_reduce = NULL; - PyObject *object_getstate = NULL; - PyObject *object_reduce_ex = NULL; - PyObject *reduce = NULL; - PyObject *reduce_ex = NULL; - PyObject *reduce_cython = NULL; - PyObject *setstate = NULL; - PyObject *setstate_cython = NULL; - PyObject *getstate = NULL; -#if CYTHON_USE_PYTYPE_LOOKUP - getstate = _PyType_Lookup((PyTypeObject*)type_obj, __pyx_n_s_getstate); -#else - getstate = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_getstate); - if (!getstate && 
PyErr_Occurred()) { - goto __PYX_BAD; - } -#endif - if (getstate) { -#if CYTHON_USE_PYTYPE_LOOKUP - object_getstate = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_getstate); -#else - object_getstate = __Pyx_PyObject_GetAttrStrNoError((PyObject*)&PyBaseObject_Type, __pyx_n_s_getstate); - if (!object_getstate && PyErr_Occurred()) { - goto __PYX_BAD; - } -#endif - if (object_getstate != getstate) { - goto __PYX_GOOD; - } - } -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#else - object_reduce_ex = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#endif - reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_ex); if (unlikely(!reduce_ex)) goto __PYX_BAD; - if (reduce_ex == object_reduce_ex) { -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#else - object_reduce = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#endif - reduce = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce); if (unlikely(!reduce)) goto __PYX_BAD; - if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, __pyx_n_s_reduce_cython)) { - reduce_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_reduce_cython); - if (likely(reduce_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce, reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (reduce == object_reduce || PyErr_Occurred()) { - goto __PYX_BAD; - } - setstate = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_setstate); - if (!setstate) PyErr_Clear(); - if (!setstate || __Pyx_setup_reduce_is_named(setstate, __pyx_n_s_setstate_cython)) { - setstate_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_setstate_cython); - if (likely(setstate_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate, setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (!setstate || PyErr_Occurred()) { - goto __PYX_BAD; - } - } - PyType_Modified((PyTypeObject*)type_obj); - } - } - goto __PYX_GOOD; -__PYX_BAD: - if (!PyErr_Occurred()) { - __Pyx_TypeName type_obj_name = - __Pyx_PyType_GetName((PyTypeObject*)type_obj); - PyErr_Format(PyExc_RuntimeError, - "Unable to initialize pickling for " __Pyx_FMT_TYPENAME, type_obj_name); - __Pyx_DECREF_TypeName(type_obj_name); - } - ret = -1; -__PYX_GOOD: -#if !CYTHON_USE_PYTYPE_LOOKUP - Py_XDECREF(object_reduce); - Py_XDECREF(object_reduce_ex); - Py_XDECREF(object_getstate); - Py_XDECREF(getstate); -#endif - Py_XDECREF(reduce); - Py_XDECREF(reduce_ex); - Py_XDECREF(reduce_cython); - Py_XDECREF(setstate); - Py_XDECREF(setstate_cython); - return ret; -} -#endif - -/* FetchSharedCythonModule */ -static PyObject *__Pyx_FetchSharedCythonABIModule(void) { - PyObject *abi_module = PyImport_AddModule((char*) __PYX_ABI_MODULE_NAME); - if (unlikely(!abi_module)) return NULL; - Py_INCREF(abi_module); - return abi_module; -} - -/* FetchCommonType */ -static int __Pyx_VerifyCachedType(PyObject *cached_type, - const char *name, - Py_ssize_t basicsize, - 
Py_ssize_t expected_basicsize) { - if (!PyType_Check(cached_type)) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s is not a type object", name); - return -1; - } - if (basicsize != expected_basicsize) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s has the wrong size, try recompiling", - name); - return -1; - } - return 0; -} -#if !CYTHON_USE_TYPE_SPECS -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type) { - PyObject* abi_module; - const char* object_name; - PyTypeObject *cached_type = NULL; - abi_module = __Pyx_FetchSharedCythonABIModule(); - if (!abi_module) return NULL; - object_name = strrchr(type->tp_name, '.'); - object_name = object_name ? object_name+1 : type->tp_name; - cached_type = (PyTypeObject*) PyObject_GetAttrString(abi_module, object_name); - if (cached_type) { - if (__Pyx_VerifyCachedType( - (PyObject *)cached_type, - object_name, - cached_type->tp_basicsize, - type->tp_basicsize) < 0) { - goto bad; - } - goto done; - } - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - if (PyType_Ready(type) < 0) goto bad; - if (PyObject_SetAttrString(abi_module, object_name, (PyObject *)type) < 0) - goto bad; - Py_INCREF(type); - cached_type = type; -done: - Py_DECREF(abi_module); - return cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} -#else -static PyTypeObject *__Pyx_FetchCommonTypeFromSpec(PyObject *module, PyType_Spec *spec, PyObject *bases) { - PyObject *abi_module, *cached_type = NULL; - const char* object_name = strrchr(spec->name, '.'); - object_name = object_name ? object_name+1 : spec->name; - abi_module = __Pyx_FetchSharedCythonABIModule(); - if (!abi_module) return NULL; - cached_type = PyObject_GetAttrString(abi_module, object_name); - if (cached_type) { - Py_ssize_t basicsize; -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *py_basicsize; - py_basicsize = PyObject_GetAttrString(cached_type, "__basicsize__"); - if (unlikely(!py_basicsize)) goto bad; - basicsize = PyLong_AsSsize_t(py_basicsize); - Py_DECREF(py_basicsize); - py_basicsize = 0; - if (unlikely(basicsize == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; -#else - basicsize = likely(PyType_Check(cached_type)) ? 
((PyTypeObject*) cached_type)->tp_basicsize : -1; -#endif - if (__Pyx_VerifyCachedType( - cached_type, - object_name, - basicsize, - spec->basicsize) < 0) { - goto bad; - } - goto done; - } - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - CYTHON_UNUSED_VAR(module); - cached_type = __Pyx_PyType_FromModuleAndSpec(abi_module, spec, bases); - if (unlikely(!cached_type)) goto bad; - if (unlikely(__Pyx_fix_up_extension_type_from_spec(spec, (PyTypeObject *) cached_type) < 0)) goto bad; - if (PyObject_SetAttrString(abi_module, object_name, cached_type) < 0) goto bad; -done: - Py_DECREF(abi_module); - assert(cached_type == NULL || PyType_Check(cached_type)); - return (PyTypeObject *) cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} -#endif - -/* PyVectorcallFastCallDict */ -#if CYTHON_METH_FASTCALL -static PyObject *__Pyx_PyVectorcall_FastCallDict_kw(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) -{ - PyObject *res = NULL; - PyObject *kwnames; - PyObject **newargs; - PyObject **kwvalues; - Py_ssize_t i, pos; - size_t j; - PyObject *key, *value; - unsigned long keys_are_strings; - Py_ssize_t nkw = PyDict_GET_SIZE(kw); - newargs = (PyObject **)PyMem_Malloc((nargs + (size_t)nkw) * sizeof(args[0])); - if (unlikely(newargs == NULL)) { - PyErr_NoMemory(); - return NULL; - } - for (j = 0; j < nargs; j++) newargs[j] = args[j]; - kwnames = PyTuple_New(nkw); - if (unlikely(kwnames == NULL)) { - PyMem_Free(newargs); - return NULL; - } - kwvalues = newargs + nargs; - pos = i = 0; - keys_are_strings = Py_TPFLAGS_UNICODE_SUBCLASS; - while (PyDict_Next(kw, &pos, &key, &value)) { - keys_are_strings &= Py_TYPE(key)->tp_flags; - Py_INCREF(key); - Py_INCREF(value); - PyTuple_SET_ITEM(kwnames, i, key); - kwvalues[i] = value; - i++; - } - if (unlikely(!keys_are_strings)) { - PyErr_SetString(PyExc_TypeError, "keywords must be strings"); - goto cleanup; - } - res = vc(func, newargs, nargs, kwnames); -cleanup: - Py_DECREF(kwnames); - for (i = 0; i < nkw; i++) - Py_DECREF(kwvalues[i]); - PyMem_Free(newargs); - return res; -} -static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) -{ - if (likely(kw == NULL) || PyDict_GET_SIZE(kw) == 0) { - return vc(func, args, nargs, NULL); - } - return __Pyx_PyVectorcall_FastCallDict_kw(func, vc, args, nargs, kw); -} -#endif - -/* CythonFunctionShared */ -static CYTHON_INLINE void __Pyx__CyFunction_SetClassObj(__pyx_CyFunctionObject* f, PyObject* classobj) { -#if PY_VERSION_HEX < 0x030900B1 - __Pyx_Py_XDECREF_SET( - __Pyx_CyFunction_GetClassObj(f), - ((classobj) ? __Pyx_NewRef(classobj) : NULL)); -#else - __Pyx_Py_XDECREF_SET( - ((PyCMethodObject *) (f))->mm_class, - (PyTypeObject*)((classobj) ? 
__Pyx_NewRef(classobj) : NULL)); -#endif -} -static PyObject * -__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, void *closure) -{ - CYTHON_UNUSED_VAR(closure); - if (unlikely(op->func_doc == NULL)) { - if (((PyCFunctionObject*)op)->m_ml->ml_doc) { -#if PY_MAJOR_VERSION >= 3 - op->func_doc = PyUnicode_FromString(((PyCFunctionObject*)op)->m_ml->ml_doc); -#else - op->func_doc = PyString_FromString(((PyCFunctionObject*)op)->m_ml->ml_doc); -#endif - if (unlikely(op->func_doc == NULL)) - return NULL; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - Py_INCREF(op->func_doc); - return op->func_doc; -} -static int -__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (value == NULL) { - value = Py_None; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_doc, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(op->func_name == NULL)) { -#if PY_MAJOR_VERSION >= 3 - op->func_name = PyUnicode_InternFromString(((PyCFunctionObject*)op)->m_ml->ml_name); -#else - op->func_name = PyString_InternFromString(((PyCFunctionObject*)op)->m_ml->ml_name); -#endif - if (unlikely(op->func_name == NULL)) - return NULL; - } - Py_INCREF(op->func_name); - return op->func_name; -} -static int -__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_name, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - Py_INCREF(op->func_qualname); - return op->func_qualname; -} -static int -__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_qualname, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(op->func_dict == NULL)) { - op->func_dict = PyDict_New(); - if (unlikely(op->func_dict == NULL)) - return NULL; - } - Py_INCREF(op->func_dict); - return op->func_dict; -} -static int -__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(value == NULL)) { - PyErr_SetString(PyExc_TypeError, - "function's dictionary may not be deleted"); - return -1; - } - if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "setting function's dictionary to a non-dict"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_dict, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - Py_INCREF(op->func_globals); - return op->func_globals; -} -static PyObject * 
-__Pyx_CyFunction_get_closure(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(op); - CYTHON_UNUSED_VAR(context); - Py_INCREF(Py_None); - return Py_None; -} -static PyObject * -__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op, void *context) -{ - PyObject* result = (op->func_code) ? op->func_code : Py_None; - CYTHON_UNUSED_VAR(context); - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_init_defaults(__pyx_CyFunctionObject *op) { - int result = 0; - PyObject *res = op->defaults_getter((PyObject *) op); - if (unlikely(!res)) - return -1; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - op->defaults_tuple = PyTuple_GET_ITEM(res, 0); - Py_INCREF(op->defaults_tuple); - op->defaults_kwdict = PyTuple_GET_ITEM(res, 1); - Py_INCREF(op->defaults_kwdict); - #else - op->defaults_tuple = PySequence_ITEM(res, 0); - if (unlikely(!op->defaults_tuple)) result = -1; - else { - op->defaults_kwdict = PySequence_ITEM(res, 1); - if (unlikely(!op->defaults_kwdict)) result = -1; - } - #endif - Py_DECREF(res); - return result; -} -static int -__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value) { - value = Py_None; - } else if (unlikely(value != Py_None && !PyTuple_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__defaults__ must be set to a tuple object"); - return -1; - } - PyErr_WarnEx(PyExc_RuntimeWarning, "changes to cyfunction.__defaults__ will not " - "currently affect the values used in function calls", 1); - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->defaults_tuple, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->defaults_tuple; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - if (op->defaults_getter) { - if (unlikely(__Pyx_CyFunction_init_defaults(op) < 0)) return NULL; - result = op->defaults_tuple; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value) { - value = Py_None; - } else if (unlikely(value != Py_None && !PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__kwdefaults__ must be set to a dict object"); - return -1; - } - PyErr_WarnEx(PyExc_RuntimeWarning, "changes to cyfunction.__kwdefaults__ will not " - "currently affect the values used in function calls", 1); - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->defaults_kwdict, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->defaults_kwdict; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - if (op->defaults_getter) { - if (unlikely(__Pyx_CyFunction_init_defaults(op) < 0)) return NULL; - result = op->defaults_kwdict; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value || value == Py_None) { - value = NULL; - } else if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__annotations__ must be set to a dict object"); - return -1; - } - Py_XINCREF(value); - __Pyx_Py_XDECREF_SET(op->func_annotations, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op, void *context) 
{ - PyObject* result = op->func_annotations; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - result = PyDict_New(); - if (unlikely(!result)) return NULL; - op->func_annotations = result; - } - Py_INCREF(result); - return result; -} -static PyObject * -__Pyx_CyFunction_get_is_coroutine(__pyx_CyFunctionObject *op, void *context) { - int is_coroutine; - CYTHON_UNUSED_VAR(context); - if (op->func_is_coroutine) { - return __Pyx_NewRef(op->func_is_coroutine); - } - is_coroutine = op->flags & __Pyx_CYFUNCTION_COROUTINE; -#if PY_VERSION_HEX >= 0x03050000 - if (is_coroutine) { - PyObject *module, *fromlist, *marker = __pyx_n_s_is_coroutine; - fromlist = PyList_New(1); - if (unlikely(!fromlist)) return NULL; - Py_INCREF(marker); - PyList_SET_ITEM(fromlist, 0, marker); - module = PyImport_ImportModuleLevelObject(__pyx_n_s_asyncio_coroutines, NULL, NULL, fromlist, 0); - Py_DECREF(fromlist); - if (unlikely(!module)) goto ignore; - op->func_is_coroutine = __Pyx_PyObject_GetAttrStr(module, marker); - Py_DECREF(module); - if (likely(op->func_is_coroutine)) { - return __Pyx_NewRef(op->func_is_coroutine); - } -ignore: - PyErr_Clear(); - } -#endif - op->func_is_coroutine = __Pyx_PyBool_FromLong(is_coroutine); - return __Pyx_NewRef(op->func_is_coroutine); -} -static PyGetSetDef __pyx_CyFunction_getsets[] = { - {(char *) "func_doc", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "__doc__", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "func_name", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__name__", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__qualname__", (getter)__Pyx_CyFunction_get_qualname, (setter)__Pyx_CyFunction_set_qualname, 0, 0}, - {(char *) "func_dict", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "__dict__", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "func_globals", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "__globals__", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "func_closure", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "__closure__", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "func_code", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "__code__", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "func_defaults", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__defaults__", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__kwdefaults__", (getter)__Pyx_CyFunction_get_kwdefaults, (setter)__Pyx_CyFunction_set_kwdefaults, 0, 0}, - {(char *) "__annotations__", (getter)__Pyx_CyFunction_get_annotations, (setter)__Pyx_CyFunction_set_annotations, 0, 0}, - {(char *) "_is_coroutine", (getter)__Pyx_CyFunction_get_is_coroutine, 0, 0, 0}, - {0, 0, 0, 0, 0} -}; -static PyMemberDef __pyx_CyFunction_members[] = { - {(char *) "__module__", T_OBJECT, offsetof(PyCFunctionObject, m_module), 0, 0}, -#if CYTHON_USE_TYPE_SPECS - {(char *) "__dictoffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_dict), READONLY, 0}, -#if CYTHON_METH_FASTCALL -#if CYTHON_BACKPORT_VECTORCALL - {(char *) "__vectorcalloffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_vectorcall), READONLY, 0}, -#else - {(char *) "__vectorcalloffset__", T_PYSSIZET, 
offsetof(PyCFunctionObject, vectorcall), READONLY, 0}, -#endif -#endif -#if PY_VERSION_HEX < 0x030500A0 - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_weakreflist), READONLY, 0}, -#else - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(PyCFunctionObject, m_weakreflist), READONLY, 0}, -#endif -#endif - {0, 0, 0, 0, 0} -}; -static PyObject * -__Pyx_CyFunction_reduce(__pyx_CyFunctionObject *m, PyObject *args) -{ - CYTHON_UNUSED_VAR(args); -#if PY_MAJOR_VERSION >= 3 - Py_INCREF(m->func_qualname); - return m->func_qualname; -#else - return PyString_FromString(((PyCFunctionObject*)m)->m_ml->ml_name); -#endif -} -static PyMethodDef __pyx_CyFunction_methods[] = { - {"__reduce__", (PyCFunction)__Pyx_CyFunction_reduce, METH_VARARGS, 0}, - {0, 0, 0, 0} -}; -#if PY_VERSION_HEX < 0x030500A0 -#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func_weakreflist) -#else -#define __Pyx_CyFunction_weakreflist(cyfunc) (((PyCFunctionObject*)cyfunc)->m_weakreflist) -#endif -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject *op, PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - PyCFunctionObject *cf = (PyCFunctionObject*) op; - if (unlikely(op == NULL)) - return NULL; - op->flags = flags; - __Pyx_CyFunction_weakreflist(op) = NULL; - cf->m_ml = ml; - cf->m_self = (PyObject *) op; - Py_XINCREF(closure); - op->func_closure = closure; - Py_XINCREF(module); - cf->m_module = module; - op->func_dict = NULL; - op->func_name = NULL; - Py_INCREF(qualname); - op->func_qualname = qualname; - op->func_doc = NULL; -#if PY_VERSION_HEX < 0x030900B1 - op->func_classobj = NULL; -#else - ((PyCMethodObject*)op)->mm_class = NULL; -#endif - op->func_globals = globals; - Py_INCREF(op->func_globals); - Py_XINCREF(code); - op->func_code = code; - op->defaults_pyobjects = 0; - op->defaults_size = 0; - op->defaults = NULL; - op->defaults_tuple = NULL; - op->defaults_kwdict = NULL; - op->defaults_getter = NULL; - op->func_annotations = NULL; - op->func_is_coroutine = NULL; -#if CYTHON_METH_FASTCALL - switch (ml->ml_flags & (METH_VARARGS | METH_FASTCALL | METH_NOARGS | METH_O | METH_KEYWORDS | METH_METHOD)) { - case METH_NOARGS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_NOARGS; - break; - case METH_O: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_O; - break; - case METH_METHOD | METH_FASTCALL | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD; - break; - case METH_FASTCALL | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS; - break; - case METH_VARARGS | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = NULL; - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags for CyFunction"); - Py_DECREF(op); - return NULL; - } -#endif - return (PyObject *) op; -} -static int -__Pyx_CyFunction_clear(__pyx_CyFunctionObject *m) -{ - Py_CLEAR(m->func_closure); - Py_CLEAR(((PyCFunctionObject*)m)->m_module); - Py_CLEAR(m->func_dict); - Py_CLEAR(m->func_name); - Py_CLEAR(m->func_qualname); - Py_CLEAR(m->func_doc); - Py_CLEAR(m->func_globals); - Py_CLEAR(m->func_code); -#if PY_VERSION_HEX < 0x030900B1 - Py_CLEAR(__Pyx_CyFunction_GetClassObj(m)); -#else - { - PyObject *cls = (PyObject*) ((PyCMethodObject *) (m))->mm_class; - ((PyCMethodObject *) (m))->mm_class = NULL; - Py_XDECREF(cls); - } -#endif - Py_CLEAR(m->defaults_tuple); - 
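-/* tp_clear continues below: every owned reference is dropped, and the
-   C-level default-argument storage is freed once its PyObject members
-   have been released. */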
Py_CLEAR(m->defaults_kwdict);
-    Py_CLEAR(m->func_annotations);
-    Py_CLEAR(m->func_is_coroutine);
-    if (m->defaults) {
-        PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m);
-        int i;
-        for (i = 0; i < m->defaults_pyobjects; i++)
-            Py_XDECREF(pydefaults[i]);
-        PyObject_Free(m->defaults);
-        m->defaults = NULL;
-    }
-    return 0;
-}
-static void __Pyx__CyFunction_dealloc(__pyx_CyFunctionObject *m)
-{
-    if (__Pyx_CyFunction_weakreflist(m) != NULL)
-        PyObject_ClearWeakRefs((PyObject *) m);
-    __Pyx_CyFunction_clear(m);
-    __Pyx_PyHeapTypeObject_GC_Del(m);
-}
-static void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m)
-{
-    PyObject_GC_UnTrack(m);
-    __Pyx__CyFunction_dealloc(m);
-}
-static int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg)
-{
-    Py_VISIT(m->func_closure);
-    Py_VISIT(((PyCFunctionObject*)m)->m_module);
-    Py_VISIT(m->func_dict);
-    Py_VISIT(m->func_name);
-    Py_VISIT(m->func_qualname);
-    Py_VISIT(m->func_doc);
-    Py_VISIT(m->func_globals);
-    Py_VISIT(m->func_code);
-    Py_VISIT(__Pyx_CyFunction_GetClassObj(m));
-    Py_VISIT(m->defaults_tuple);
-    Py_VISIT(m->defaults_kwdict);
-    Py_VISIT(m->func_is_coroutine);
-    if (m->defaults) {
-        PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m);
-        int i;
-        for (i = 0; i < m->defaults_pyobjects; i++)
-            Py_VISIT(pydefaults[i]);
-    }
-    return 0;
-}
-static PyObject*
-__Pyx_CyFunction_repr(__pyx_CyFunctionObject *op)
-{
-#if PY_MAJOR_VERSION >= 3
-    return PyUnicode_FromFormat("<cyfunction %U at %p>",
-                                op->func_qualname, (void *)op);
-#else
-    return PyString_FromFormat("<cyfunction %s at %p>",
-                               PyString_AsString(op->func_qualname), (void *)op);
-#endif
-}
-static PyObject * __Pyx_CyFunction_CallMethod(PyObject *func, PyObject *self, PyObject *arg, PyObject *kw) {
-    PyCFunctionObject* f = (PyCFunctionObject*)func;
-    PyCFunction meth = f->m_ml->ml_meth;
-    Py_ssize_t size;
-    switch (f->m_ml->ml_flags & (METH_VARARGS | METH_KEYWORDS | METH_NOARGS | METH_O)) {
-    case METH_VARARGS:
-        if (likely(kw == NULL || PyDict_Size(kw) == 0))
-            return (*meth)(self, arg);
-        break;
-    case METH_VARARGS | METH_KEYWORDS:
-        return (*(PyCFunctionWithKeywords)(void*)meth)(self, arg, kw);
-    case METH_NOARGS:
-        if (likely(kw == NULL || PyDict_Size(kw) == 0)) {
-            size = PyTuple_GET_SIZE(arg);
-            if (likely(size == 0))
-                return (*meth)(self, NULL);
-            PyErr_Format(PyExc_TypeError,
-                "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)",
-                f->m_ml->ml_name, size);
-            return NULL;
-        }
-        break;
-    case METH_O:
-        if (likely(kw == NULL || PyDict_Size(kw) == 0)) {
-            size = PyTuple_GET_SIZE(arg);
-            if (likely(size == 1)) {
-                PyObject *result, *arg0;
-                #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
-                arg0 = PyTuple_GET_ITEM(arg, 0);
-                #else
-                arg0 = PySequence_ITEM(arg, 0); if (unlikely(!arg0)) return NULL;
-                #endif
-                result = (*meth)(self, arg0);
-                #if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS)
-                Py_DECREF(arg0);
-                #endif
-                return result;
-            }
-            PyErr_Format(PyExc_TypeError,
-                "%.200s() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)",
-                f->m_ml->ml_name, size);
-            return NULL;
-        }
-        break;
-    default:
-        PyErr_SetString(PyExc_SystemError, "Bad call flags for CyFunction");
-        return NULL;
-    }
-    PyErr_Format(PyExc_TypeError, "%.200s() takes no keyword arguments",
-                 f->m_ml->ml_name);
-    return NULL;
-}
-static CYTHON_INLINE PyObject *__Pyx_CyFunction_Call(PyObject *func, PyObject *arg, PyObject *kw) {
-    return __Pyx_CyFunction_CallMethod(func, ((PyCFunctionObject*)func)->m_self, arg, kw);
-}
-static PyObject
*__Pyx_CyFunction_CallAsMethod(PyObject *func, PyObject *args, PyObject *kw) { - PyObject *result; - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *) func; -#if CYTHON_METH_FASTCALL - __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc); - if (vc) { -#if CYTHON_ASSUME_SAFE_MACROS - return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw); -#else - (void) &__Pyx_PyVectorcall_FastCallDict; - return PyVectorcall_Call(func, args, kw); -#endif - } -#endif - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - Py_ssize_t argc; - PyObject *new_args; - PyObject *self; - argc = PyTuple_GET_SIZE(args); - new_args = PyTuple_GetSlice(args, 1, argc); - if (unlikely(!new_args)) - return NULL; - self = PyTuple_GetItem(args, 0); - if (unlikely(!self)) { - Py_DECREF(new_args); -#if PY_MAJOR_VERSION > 2 - PyErr_Format(PyExc_TypeError, - "unbound method %.200S() needs an argument", - cyfunc->func_qualname); -#else - PyErr_SetString(PyExc_TypeError, - "unbound method needs an argument"); -#endif - return NULL; - } - result = __Pyx_CyFunction_CallMethod(func, self, new_args, kw); - Py_DECREF(new_args); - } else { - result = __Pyx_CyFunction_Call(func, args, kw); - } - return result; -} -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE int __Pyx_CyFunction_Vectorcall_CheckArgs(__pyx_CyFunctionObject *cyfunc, Py_ssize_t nargs, PyObject *kwnames) -{ - int ret = 0; - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - if (unlikely(nargs < 1)) { - PyErr_Format(PyExc_TypeError, "%.200s() needs an argument", - ((PyCFunctionObject*)cyfunc)->m_ml->ml_name); - return -1; - } - ret = 1; - } - if (unlikely(kwnames) && unlikely(PyTuple_GET_SIZE(kwnames))) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes no keyword arguments", ((PyCFunctionObject*)cyfunc)->m_ml->ml_name); - return -1; - } - return ret; -} -static PyObject * __Pyx_CyFunction_Vectorcall_NOARGS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, kwnames)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - if (unlikely(nargs != 0)) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - def->ml_name, nargs); - return NULL; - } - return def->ml_meth(self, NULL); -} -static PyObject * __Pyx_CyFunction_Vectorcall_O(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, kwnames)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - if (unlikely(nargs != 1)) { - PyErr_Format(PyExc_TypeError, - "%.200s() 
takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - def->ml_name, nargs); - return NULL; - } - return def->ml_meth(self, args[0]); -} -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, NULL)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - return ((_PyCFunctionFastWithKeywords)(void(*)(void))def->ml_meth)(self, args, nargs, kwnames); -} -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; - PyTypeObject *cls = (PyTypeObject *) __Pyx_CyFunction_GetClassObj(cyfunc); -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, NULL)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - return ((__Pyx_PyCMethod)(void(*)(void))def->ml_meth)(self, cls, args, (size_t)nargs, kwnames); -} -#endif -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_CyFunctionType_slots[] = { - {Py_tp_dealloc, (void *)__Pyx_CyFunction_dealloc}, - {Py_tp_repr, (void *)__Pyx_CyFunction_repr}, - {Py_tp_call, (void *)__Pyx_CyFunction_CallAsMethod}, - {Py_tp_traverse, (void *)__Pyx_CyFunction_traverse}, - {Py_tp_clear, (void *)__Pyx_CyFunction_clear}, - {Py_tp_methods, (void *)__pyx_CyFunction_methods}, - {Py_tp_members, (void *)__pyx_CyFunction_members}, - {Py_tp_getset, (void *)__pyx_CyFunction_getsets}, - {Py_tp_descr_get, (void *)__Pyx_PyMethod_New}, - {0, 0}, -}; -static PyType_Spec __pyx_CyFunctionType_spec = { - __PYX_TYPE_MODULE_PREFIX "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, -#ifdef Py_TPFLAGS_METHOD_DESCRIPTOR - Py_TPFLAGS_METHOD_DESCRIPTOR | -#endif -#if (defined(_Py_TPFLAGS_HAVE_VECTORCALL) && CYTHON_METH_FASTCALL) - _Py_TPFLAGS_HAVE_VECTORCALL | -#endif - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, - __pyx_CyFunctionType_slots -}; -#else -static PyTypeObject __pyx_CyFunctionType_type = { - PyVarObject_HEAD_INIT(0, 0) - __PYX_TYPE_MODULE_PREFIX "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, - (destructor) __Pyx_CyFunction_dealloc, -#if !CYTHON_METH_FASTCALL - 0, -#elif CYTHON_BACKPORT_VECTORCALL - (printfunc)offsetof(__pyx_CyFunctionObject, func_vectorcall), -#else - offsetof(PyCFunctionObject, vectorcall), -#endif - 0, - 0, -#if PY_MAJOR_VERSION < 3 - 0, -#else - 0, -#endif - (reprfunc) __Pyx_CyFunction_repr, - 0, - 0, - 0, - 0, - __Pyx_CyFunction_CallAsMethod, - 0, - 0, - 0, - 0, -#ifdef Py_TPFLAGS_METHOD_DESCRIPTOR - Py_TPFLAGS_METHOD_DESCRIPTOR | -#endif -#ifdef _Py_TPFLAGS_HAVE_VECTORCALL - _Py_TPFLAGS_HAVE_VECTORCALL | -#endif - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, - 0, - 
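-/* tp_traverse / tp_clear (next two slots) plug CyFunction objects into
-   CPython's cycle collector; Py_TPFLAGS_HAVE_GC above makes instances
-   GC-tracked. */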
(traverseproc) __Pyx_CyFunction_traverse, - (inquiry) __Pyx_CyFunction_clear, - 0, -#if PY_VERSION_HEX < 0x030500A0 - offsetof(__pyx_CyFunctionObject, func_weakreflist), -#else - offsetof(PyCFunctionObject, m_weakreflist), -#endif - 0, - 0, - __pyx_CyFunction_methods, - __pyx_CyFunction_members, - __pyx_CyFunction_getsets, - 0, - 0, - __Pyx_PyMethod_New, - 0, - offsetof(__pyx_CyFunctionObject, func_dict), - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, -#if PY_VERSION_HEX >= 0x030400a1 - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, -#endif -#if __PYX_NEED_TP_PRINT_SLOT - 0, -#endif -#if PY_VERSION_HEX >= 0x030C0000 - 0, -#endif -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, -#endif -}; -#endif -static int __pyx_CyFunction_init(PyObject *module) { -#if CYTHON_USE_TYPE_SPECS - __pyx_CyFunctionType = __Pyx_FetchCommonTypeFromSpec(module, &__pyx_CyFunctionType_spec, NULL); -#else - CYTHON_UNUSED_VAR(module); - __pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type); -#endif - if (unlikely(__pyx_CyFunctionType == NULL)) { - return -1; - } - return 0; -} -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *func, size_t size, int pyobjects) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults = PyObject_Malloc(size); - if (unlikely(!m->defaults)) - return PyErr_NoMemory(); - memset(m->defaults, 0, size); - m->defaults_pyobjects = pyobjects; - m->defaults_size = size; - return m->defaults; -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *func, PyObject *tuple) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_tuple = tuple; - Py_INCREF(tuple); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_kwdict = dict; - Py_INCREF(dict); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->func_annotations = dict; - Py_INCREF(dict); -} - -/* CythonFunction */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - PyObject *op = __Pyx_CyFunction_Init( - PyObject_GC_New(__pyx_CyFunctionObject, __pyx_CyFunctionType), - ml, flags, qualname, closure, module, globals, code - ); - if (likely(op)) { - PyObject_GC_Track(op); - } - return op; -} - -/* CLineInTraceback */ -#ifndef CYTHON_CLINE_IN_TRACEBACK -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - CYTHON_MAYBE_UNUSED_VAR(tstate); - if (unlikely(!__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStrNoError(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); - if (use_cline_obj) { - use_cline = 
PyObject_Not(use_cline_obj) ? Py_False : Py_True; - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - (void) PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - } - if (unlikely(!entries)) { - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - __pyx_code_cache.count = 1; - entries[0].code_line = code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} -#endif - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif -#if CYTHON_COMPILING_IN_LIMITED_API -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - if (c_line) { - (void) __pyx_cfilenm; - (void) 
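-/* result deliberately discarded: under the Limited API only the Python
-   line number is reported, via _PyTraceback_Add below */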
__Pyx_CLineForTraceback(__Pyx_PyThreadState_Current, c_line); - } - _PyTraceback_Add(funcname, filename, py_line); -} -#else -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = NULL; - PyObject *py_funcname = NULL; - #if PY_MAJOR_VERSION < 3 - PyObject *py_srcfile = NULL; - py_srcfile = PyString_FromString(filename); - if (!py_srcfile) goto bad; - #endif - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - funcname = PyUnicode_AsUTF8(py_funcname); - if (!funcname) goto bad; - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - if (!py_funcname) goto bad; - #endif - } - #if PY_MAJOR_VERSION < 3 - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); - #else - py_code = PyCode_NewEmpty(filename, funcname, py_line); - #endif - Py_XDECREF(py_funcname); // XDECREF since it's only set on Py3 if cline - return py_code; -bad: - Py_XDECREF(py_funcname); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_srcfile); - #endif - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject *ptype, *pvalue, *ptraceback; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? -c_line : py_line); - if (!py_code) { - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) { - /* If the code object creation fails, then we should clear the - fetched exception references and propagate the new exception */ - Py_XDECREF(ptype); - Py_XDECREF(pvalue); - Py_XDECREF(ptraceback); - goto bad; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - __pyx_insert_code_object(c_line ? 
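-/* C lines are cached under negated keys so they can share the lookup
-   table with (positive) Python line numbers */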
-c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} -#endif - -#if PY_MAJOR_VERSION < 3 -static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) { - __Pyx_TypeName obj_type_name; - if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_array_type)) return __pyx_array_getbuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_memoryview_type)) return __pyx_memoryview_getbuffer(obj, view, flags); - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, - "'" __Pyx_FMT_TYPENAME "' does not have the buffer interface", - obj_type_name); - __Pyx_DECREF_TypeName(obj_type_name); - return -1; -} -static void __Pyx_ReleaseBuffer(Py_buffer *view) { - PyObject *obj = view->obj; - if (!obj) return; - if (PyObject_CheckBuffer(obj)) { - PyBuffer_Release(view); - return; - } - if ((0)) {} - view->obj = NULL; - Py_DECREF(obj); -} -#endif - - -/* MemviewSliceIsContig */ -static int -__pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim) -{ - int i, index, step, start; - Py_ssize_t itemsize = mvs.memview->view.itemsize; - if (order == 'F') { - step = 1; - start = 0; - } else { - step = -1; - start = ndim - 1; - } - for (i = 0; i < ndim; i++) { - index = start + step * i; - if (mvs.suboffsets[index] >= 0 || mvs.strides[index] != itemsize) - return 0; - itemsize *= mvs.shape[index]; - } - return 1; -} - -/* OverlappingSlices */ -static void -__pyx_get_array_memory_extents(__Pyx_memviewslice *slice, - void **out_start, void **out_end, - int ndim, size_t itemsize) -{ - char *start, *end; - int i; - start = end = slice->data; - for (i = 0; i < ndim; i++) { - Py_ssize_t stride = slice->strides[i]; - Py_ssize_t extent = slice->shape[i]; - if (extent == 0) { - *out_start = *out_end = start; - return; - } else { - if (stride > 0) - end += stride * (extent - 1); - else - start += stride * (extent - 1); - } - } - *out_start = start; - *out_end = end + itemsize; -} -static int -__pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize) -{ - void *start1, *end1, *start2, *end2; - __pyx_get_array_memory_extents(slice1, &start1, &end1, ndim, itemsize); - __pyx_get_array_memory_extents(slice2, &start2, &end2, ndim, itemsize); - return (start1 < end2) && (start2 < end1); -} - -/* IsLittleEndian */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void) -{ - union { - uint32_t u32; - uint8_t u8[4]; - } S; - S.u32 = 0x01020304; - return S.u8[0] == 4; -} - -/* BufferFormatCheck */ -static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type) { - stack[0].field = &ctx->root; - stack[0].parent_offset = 0; - ctx->root.type = type; - ctx->root.name = "buffer dtype"; - ctx->root.offset = 0; - ctx->head = stack; - ctx->head->field = &ctx->root; - ctx->fmt_offset = 0; - ctx->head->parent_offset = 0; - ctx->new_packmode = '@'; - ctx->enc_packmode = '@'; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->is_complex = 0; - ctx->is_valid_array = 0; - ctx->struct_alignment = 0; - while (type->typegroup == 'S') { - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = 0; - 
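-/* descend through nested struct typeinfo so format parsing starts at the
-   innermost scalar field */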
type = type->fields->type; - } -} -static int __Pyx_BufFmt_ParseNumber(const char** ts) { - int count; - const char* t = *ts; - if (*t < '0' || *t > '9') { - return -1; - } else { - count = *t++ - '0'; - while (*t >= '0' && *t <= '9') { - count *= 10; - count += *t++ - '0'; - } - } - *ts = t; - return count; -} -static int __Pyx_BufFmt_ExpectNumber(const char **ts) { - int number = __Pyx_BufFmt_ParseNumber(ts); - if (number == -1) - PyErr_Format(PyExc_ValueError,\ - "Does not understand character buffer dtype format string ('%c')", **ts); - return number; -} -static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) { - PyErr_Format(PyExc_ValueError, - "Unexpected format string character: '%c'", ch); -} -static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) { - switch (ch) { - case '?': return "'bool'"; - case 'c': return "'char'"; - case 'b': return "'signed char'"; - case 'B': return "'unsigned char'"; - case 'h': return "'short'"; - case 'H': return "'unsigned short'"; - case 'i': return "'int'"; - case 'I': return "'unsigned int'"; - case 'l': return "'long'"; - case 'L': return "'unsigned long'"; - case 'q': return "'long long'"; - case 'Q': return "'unsigned long long'"; - case 'f': return (is_complex ? "'complex float'" : "'float'"); - case 'd': return (is_complex ? "'complex double'" : "'double'"); - case 'g': return (is_complex ? "'complex long double'" : "'long double'"); - case 'T': return "a struct"; - case 'O': return "Python object"; - case 'P': return "a pointer"; - case 's': case 'p': return "a string"; - case 0: return "end"; - default: return "unparsable format string"; - } -} -static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return 2; - case 'i': case 'I': case 'l': case 'L': return 4; - case 'q': case 'Q': return 8; - case 'f': return (is_complex ? 8 : 4); - case 'd': return (is_complex ? 16 : 8); - case 'g': { - PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g').."); - return 0; - } - case 'O': case 'P': return sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(short); - case 'i': case 'I': return sizeof(int); - case 'l': case 'L': return sizeof(long); - #ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(PY_LONG_LONG); - #endif - case 'f': return sizeof(float) * (is_complex ? 2 : 1); - case 'd': return sizeof(double) * (is_complex ? 2 : 1); - case 'g': return sizeof(long double) * (is_complex ? 
2 : 1); - case 'O': case 'P': return sizeof(void*); - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -typedef struct { char c; short x; } __Pyx_st_short; -typedef struct { char c; int x; } __Pyx_st_int; -typedef struct { char c; long x; } __Pyx_st_long; -typedef struct { char c; float x; } __Pyx_st_float; -typedef struct { char c; double x; } __Pyx_st_double; -typedef struct { char c; long double x; } __Pyx_st_longdouble; -typedef struct { char c; void *x; } __Pyx_st_void_p; -#ifdef HAVE_LONG_LONG -typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong; -#endif -static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, int is_complex) { - CYTHON_UNUSED_VAR(is_complex); - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short); - case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int); - case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long); -#ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG); -#endif - case 'f': return sizeof(__Pyx_st_float) - sizeof(float); - case 'd': return sizeof(__Pyx_st_double) - sizeof(double); - case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double); - case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -/* These are for computing the padding at the end of the struct to align - on the first member of the struct. This will probably the same as above, - but we don't have any guarantees. - */ -typedef struct { short x; char c; } __Pyx_pad_short; -typedef struct { int x; char c; } __Pyx_pad_int; -typedef struct { long x; char c; } __Pyx_pad_long; -typedef struct { float x; char c; } __Pyx_pad_float; -typedef struct { double x; char c; } __Pyx_pad_double; -typedef struct { long double x; char c; } __Pyx_pad_longdouble; -typedef struct { void *x; char c; } __Pyx_pad_void_p; -#ifdef HAVE_LONG_LONG -typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong; -#endif -static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, int is_complex) { - CYTHON_UNUSED_VAR(is_complex); - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short); - case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int); - case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long); -#ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG); -#endif - case 'f': return sizeof(__Pyx_pad_float) - sizeof(float); - case 'd': return sizeof(__Pyx_pad_double) - sizeof(double); - case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double); - case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) { - switch (ch) { - case 'c': - return 'H'; - case 'b': case 'h': case 'i': - case 'l': case 'q': case 's': case 'p': - return 'I'; - case '?': case 'B': case 'H': case 'I': case 'L': case 'Q': - return 'U'; - case 'f': case 'd': case 'g': - return (is_complex ? 
'C' : 'R'); - case 'O': - return 'O'; - case 'P': - return 'P'; - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) { - if (ctx->head == NULL || ctx->head->field == &ctx->root) { - const char* expected; - const char* quote; - if (ctx->head == NULL) { - expected = "end"; - quote = ""; - } else { - expected = ctx->head->field->type->name; - quote = "'"; - } - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected %s%s%s but got %s", - quote, expected, quote, - __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex)); - } else { - __Pyx_StructField* field = ctx->head->field; - __Pyx_StructField* parent = (ctx->head - 1)->field; - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'", - field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex), - parent->type->name, field->name); - } -} -static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) { - char group; - size_t size, offset, arraysize = 1; - if (ctx->enc_type == 0) return 0; - if (ctx->head->field->type->arraysize[0]) { - int i, ndim = 0; - if (ctx->enc_type == 's' || ctx->enc_type == 'p') { - ctx->is_valid_array = ctx->head->field->type->ndim == 1; - ndim = 1; - if (ctx->enc_count != ctx->head->field->type->arraysize[0]) { - PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %zu", - ctx->head->field->type->arraysize[0], ctx->enc_count); - return -1; - } - } - if (!ctx->is_valid_array) { - PyErr_Format(PyExc_ValueError, "Expected %d dimensions, got %d", - ctx->head->field->type->ndim, ndim); - return -1; - } - for (i = 0; i < ctx->head->field->type->ndim; i++) { - arraysize *= ctx->head->field->type->arraysize[i]; - } - ctx->is_valid_array = 0; - ctx->enc_count = 1; - } - group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, ctx->is_complex); - do { - __Pyx_StructField* field = ctx->head->field; - __Pyx_TypeInfo* type = field->type; - if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') { - size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex); - } else { - size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex); - } - if (ctx->enc_packmode == '@') { - size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex); - size_t align_mod_offset; - if (align_at == 0) return -1; - align_mod_offset = ctx->fmt_offset % align_at; - if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset; - if (ctx->struct_alignment == 0) - ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type, - ctx->is_complex); - } - if (type->size != size || type->typegroup != group) { - if (type->typegroup == 'C' && type->fields != NULL) { - size_t parent_offset = ctx->head->parent_offset + field->offset; - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = parent_offset; - continue; - } - if ((type->typegroup == 'H' || group == 'H') && type->size == size) { - } else { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - } - offset = ctx->head->parent_offset + field->offset; - if (ctx->fmt_offset != offset) { - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch; next field is at offset %" CYTHON_FORMAT_SSIZE_T "d but %" CYTHON_FORMAT_SSIZE_T "d expected", - (Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset); - return -1; - } - ctx->fmt_offset += size; - if (arraysize) - ctx->fmt_offset += (arraysize - 1) * size; - --ctx->enc_count; - while (1) { - if 
(field == &ctx->root) { - ctx->head = NULL; - if (ctx->enc_count != 0) { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - break; - } - ctx->head->field = ++field; - if (field->type == NULL) { - --ctx->head; - field = ctx->head->field; - continue; - } else if (field->type->typegroup == 'S') { - size_t parent_offset = ctx->head->parent_offset + field->offset; - if (field->type->fields->type == NULL) continue; - field = field->type->fields; - ++ctx->head; - ctx->head->field = field; - ctx->head->parent_offset = parent_offset; - break; - } else { - break; - } - } - } while (ctx->enc_count); - ctx->enc_type = 0; - ctx->is_complex = 0; - return 0; -} -static PyObject * -__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp) -{ - const char *ts = *tsp; - int i = 0, number, ndim; - ++ts; - if (ctx->new_count != 1) { - PyErr_SetString(PyExc_ValueError, - "Cannot handle repeated arrays in format string"); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ndim = ctx->head->field->type->ndim; - while (*ts && *ts != ')') { - switch (*ts) { - case ' ': case '\f': case '\r': case '\n': case '\t': case '\v': continue; - default: break; - } - number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i]) - return PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %d", - ctx->head->field->type->arraysize[i], number); - if (*ts != ',' && *ts != ')') - return PyErr_Format(PyExc_ValueError, - "Expected a comma in format string, got '%c'", *ts); - if (*ts == ',') ts++; - i++; - } - if (i != ndim) - return PyErr_Format(PyExc_ValueError, "Expected %d dimension(s), got %d", - ctx->head->field->type->ndim, i); - if (!*ts) { - PyErr_SetString(PyExc_ValueError, - "Unexpected end of format string, expected ')'"); - return NULL; - } - ctx->is_valid_array = 1; - ctx->new_count = 1; - *tsp = ++ts; - return Py_None; -} -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) { - int got_Z = 0; - while (1) { - switch(*ts) { - case 0: - if (ctx->enc_type != 0 && ctx->head == NULL) { - __Pyx_BufFmt_RaiseExpected(ctx); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - if (ctx->head != NULL) { - __Pyx_BufFmt_RaiseExpected(ctx); - return NULL; - } - return ts; - case ' ': - case '\r': - case '\n': - ++ts; - break; - case '<': - if (!__Pyx_Is_Little_Endian()) { - PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler"); - return NULL; - } - ctx->new_packmode = '='; - ++ts; - break; - case '>': - case '!': - if (__Pyx_Is_Little_Endian()) { - PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler"); - return NULL; - } - ctx->new_packmode = '='; - ++ts; - break; - case '=': - case '@': - case '^': - ctx->new_packmode = *ts++; - break; - case 'T': - { - const char* ts_after_sub; - size_t i, struct_count = ctx->new_count; - size_t struct_alignment = ctx->struct_alignment; - ctx->new_count = 1; - ++ts; - if (*ts != '{') { - PyErr_SetString(PyExc_ValueError, "Buffer acquisition: Expected '{' after 'T'"); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - ctx->enc_count = 0; - ctx->struct_alignment = 0; - ++ts; - ts_after_sub = ts; - for (i = 0; i != struct_count; ++i) { - ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts); - if (!ts_after_sub) return NULL; - } - ts = ts_after_sub; - if 
(struct_alignment) ctx->struct_alignment = struct_alignment; - } - break; - case '}': - { - size_t alignment = ctx->struct_alignment; - ++ts; - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - if (alignment && ctx->fmt_offset % alignment) { - ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment); - } - } - return ts; - case 'x': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->fmt_offset += ctx->new_count; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->enc_packmode = ctx->new_packmode; - ++ts; - break; - case 'Z': - got_Z = 1; - ++ts; - if (*ts != 'f' && *ts != 'd' && *ts != 'g') { - __Pyx_BufFmt_RaiseUnexpectedChar('Z'); - return NULL; - } - CYTHON_FALLTHROUGH; - case '?': case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I': - case 'l': case 'L': case 'q': case 'Q': - case 'f': case 'd': case 'g': - case 'O': case 'p': - if ((ctx->enc_type == *ts) && (got_Z == ctx->is_complex) && - (ctx->enc_packmode == ctx->new_packmode) && (!ctx->is_valid_array)) { - ctx->enc_count += ctx->new_count; - ctx->new_count = 1; - got_Z = 0; - ++ts; - break; - } - CYTHON_FALLTHROUGH; - case 's': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_count = ctx->new_count; - ctx->enc_packmode = ctx->new_packmode; - ctx->enc_type = *ts; - ctx->is_complex = got_Z; - ++ts; - ctx->new_count = 1; - got_Z = 0; - break; - case ':': - ++ts; - while(*ts != ':') ++ts; - ++ts; - break; - case '(': - if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL; - break; - default: - { - int number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - ctx->new_count = (size_t)number; - } - } - } -} - -/* TypeInfoCompare */ - static int -__pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b) -{ - int i; - if (!a || !b) - return 0; - if (a == b) - return 1; - if (a->size != b->size || a->typegroup != b->typegroup || - a->is_unsigned != b->is_unsigned || a->ndim != b->ndim) { - if (a->typegroup == 'H' || b->typegroup == 'H') { - return a->size == b->size; - } else { - return 0; - } - } - if (a->ndim) { - for (i = 0; i < a->ndim; i++) - if (a->arraysize[i] != b->arraysize[i]) - return 0; - } - if (a->typegroup == 'S') { - if (a->flags != b->flags) - return 0; - if (a->fields || b->fields) { - if (!(a->fields && b->fields)) - return 0; - for (i = 0; a->fields[i].type && b->fields[i].type; i++) { - __Pyx_StructField *field_a = a->fields + i; - __Pyx_StructField *field_b = b->fields + i; - if (field_a->offset != field_b->offset || - !__pyx_typeinfo_cmp(field_a->type, field_b->type)) - return 0; - } - return !a->fields[i].type && !b->fields[i].type; - } - } - return 1; -} - -/* MemviewSliceValidateAndInit */ - static int -__pyx_check_strides(Py_buffer *buf, int dim, int ndim, int spec) -{ - if (buf->shape[dim] <= 1) - return 1; - if (buf->strides) { - if (spec & __Pyx_MEMVIEW_CONTIG) { - if (spec & (__Pyx_MEMVIEW_PTR|__Pyx_MEMVIEW_FULL)) { - if (unlikely(buf->strides[dim] != sizeof(void *))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly contiguous " - "in dimension %d.", dim); - goto fail; - } - } else if (unlikely(buf->strides[dim] != buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_FOLLOW) { - Py_ssize_t stride = buf->strides[dim]; - if (stride < 0) - stride = -stride; - if (unlikely(stride < buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and 
memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - } else { - if (unlikely(spec & __Pyx_MEMVIEW_CONTIG && dim != ndim - 1)) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not contiguous in " - "dimension %d", dim); - goto fail; - } else if (unlikely(spec & (__Pyx_MEMVIEW_PTR))) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not indirect in " - "dimension %d", dim); - goto fail; - } else if (unlikely(buf->suboffsets)) { - PyErr_SetString(PyExc_ValueError, - "Buffer exposes suboffsets but no strides"); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_check_suboffsets(Py_buffer *buf, int dim, int ndim, int spec) -{ - CYTHON_UNUSED_VAR(ndim); - if (spec & __Pyx_MEMVIEW_DIRECT) { - if (unlikely(buf->suboffsets && buf->suboffsets[dim] >= 0)) { - PyErr_Format(PyExc_ValueError, - "Buffer not compatible with direct access " - "in dimension %d.", dim); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_PTR) { - if (unlikely(!buf->suboffsets || (buf->suboffsets[dim] < 0))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly accessible " - "in dimension %d.", dim); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_verify_contig(Py_buffer *buf, int ndim, int c_or_f_flag) -{ - int i; - if (c_or_f_flag & __Pyx_IS_F_CONTIG) { - Py_ssize_t stride = 1; - for (i = 0; i < ndim; i++) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not fortran contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } else if (c_or_f_flag & __Pyx_IS_C_CONTIG) { - Py_ssize_t stride = 1; - for (i = ndim - 1; i >- 1; i--) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not C contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } - return 1; -fail: - return 0; -} -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj) -{ - struct __pyx_memoryview_obj *memview, *new_memview; - __Pyx_RefNannyDeclarations - Py_buffer *buf; - int i, spec = 0, retval = -1; - __Pyx_BufFmt_Context ctx; - int from_memoryview = __pyx_memoryview_check(original_obj); - __Pyx_RefNannySetupContext("ValidateAndInit_memviewslice", 0); - if (from_memoryview && __pyx_typeinfo_cmp(dtype, ((struct __pyx_memoryview_obj *) - original_obj)->typeinfo)) { - memview = (struct __pyx_memoryview_obj *) original_obj; - new_memview = NULL; - } else { - memview = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - original_obj, buf_flags, 0, dtype); - new_memview = memview; - if (unlikely(!memview)) - goto fail; - } - buf = &memview->view; - if (unlikely(buf->ndim != ndim)) { - PyErr_Format(PyExc_ValueError, - "Buffer has wrong number of dimensions (expected %d, got %d)", - ndim, buf->ndim); - goto fail; - } - if (new_memview) { - __Pyx_BufFmt_Init(&ctx, stack, dtype); - if (unlikely(!__Pyx_BufFmt_CheckString(&ctx, buf->format))) goto fail; - } - if (unlikely((unsigned) buf->itemsize != dtype->size)) { - PyErr_Format(PyExc_ValueError, - "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "u byte%s) " - "does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "u byte%s)", - buf->itemsize, - (buf->itemsize > 1) ? "s" : "", - dtype->name, - dtype->size, - (dtype->size > 1) ? 
"s" : ""); - goto fail; - } - if (buf->len > 0) { - for (i = 0; i < ndim; i++) { - spec = axes_specs[i]; - if (unlikely(!__pyx_check_strides(buf, i, ndim, spec))) - goto fail; - if (unlikely(!__pyx_check_suboffsets(buf, i, ndim, spec))) - goto fail; - } - if (unlikely(buf->strides && !__pyx_verify_contig(buf, ndim, c_or_f_flag))) - goto fail; - } - if (unlikely(__Pyx_init_memviewslice(memview, ndim, memviewslice, - new_memview != NULL) == -1)) { - goto fail; - } - retval = 0; - goto no_fail; -fail: - Py_XDECREF(new_memview); - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_float, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 1, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* CIntFromPyVerify */ - #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < 
sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* MemviewSliceCopyTemplate */ - static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object) -{ - __Pyx_RefNannyDeclarations - int i; - __Pyx_memviewslice new_mvs = { 0, 0, { 0 }, { 0 }, { 0 } }; - struct __pyx_memoryview_obj *from_memview = from_mvs->memview; - Py_buffer *buf = &from_memview->view; - PyObject *shape_tuple = NULL; - PyObject *temp_int = NULL; - struct __pyx_array_obj *array_obj = NULL; - struct __pyx_memoryview_obj *memview_obj = NULL; - __Pyx_RefNannySetupContext("__pyx_memoryview_copy_new_contig", 0); - for (i = 0; i < ndim; i++) { - if (unlikely(from_mvs->suboffsets[i] >= 0)) { - PyErr_Format(PyExc_ValueError, "Cannot copy memoryview slice with " - "indirect dimensions (axis %d)", i); - goto fail; - } - } - shape_tuple = PyTuple_New(ndim); - if (unlikely(!shape_tuple)) { - goto fail; - } - __Pyx_GOTREF(shape_tuple); - for(i = 0; i < ndim; i++) { - temp_int = PyInt_FromSsize_t(from_mvs->shape[i]); - if(unlikely(!temp_int)) { - goto fail; - } else { - PyTuple_SET_ITEM(shape_tuple, i, temp_int); - temp_int = NULL; - } - } - array_obj = __pyx_array_new(shape_tuple, sizeof_dtype, buf->format, (char *) mode, NULL); - if (unlikely(!array_obj)) { - goto fail; - } - __Pyx_GOTREF(array_obj); - memview_obj = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - (PyObject *) array_obj, contig_flag, - dtype_is_object, - from_mvs->memview->typeinfo); - if (unlikely(!memview_obj)) - goto fail; - if (unlikely(__Pyx_init_memviewslice(memview_obj, ndim, &new_mvs, 1) < 0)) - goto fail; - if (unlikely(__pyx_memoryview_copy_contents(*from_mvs, new_mvs, ndim, ndim, - dtype_is_object) < 0)) - goto fail; - goto no_fail; -fail: - __Pyx_XDECREF(new_mvs.memview); - new_mvs.memview = NULL; - new_mvs.data = NULL; -no_fail: - __Pyx_XDECREF(shape_tuple); - __Pyx_XDECREF(temp_int); - __Pyx_XDECREF(array_obj); - __Pyx_RefNannyFinishContext(); - return new_mvs; -} - -/* MemviewSliceInit */ - static int -__Pyx_init_memviewslice(struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference) -{ - __Pyx_RefNannyDeclarations - int i, retval=-1; - Py_buffer *buf = &memview->view; - __Pyx_RefNannySetupContext("init_memviewslice", 0); - if (unlikely(memviewslice->memview || memviewslice->data)) { - PyErr_SetString(PyExc_ValueError, - "memviewslice is already initialized!"); - goto fail; - } - if (buf->strides) { - for (i = 0; i < ndim; i++) { - memviewslice->strides[i] = buf->strides[i]; - } - } else { - Py_ssize_t stride = buf->itemsize; - for (i = ndim - 1; i >= 0; i--) { - memviewslice->strides[i] = stride; - stride *= buf->shape[i]; - } - } - for (i = 0; i < ndim; i++) { - memviewslice->shape[i] = buf->shape[i]; - if (buf->suboffsets) { - memviewslice->suboffsets[i] = buf->suboffsets[i]; - } else { - memviewslice->suboffsets[i] = -1; - } - } - memviewslice->memview = memview; - memviewslice->data = (char *)buf->buf; - if (__pyx_add_acquisition_count(memview) == 0 && !memview_is_new_reference) { - Py_INCREF(memview); - } - retval = 0; - goto no_fail; -fail: - 
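-/* on failure the slice is left zeroed so a later __Pyx_XCLEAR_MEMVIEW on
-   it is a safe no-op */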
memviewslice->memview = 0; - memviewslice->data = 0; - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} -#ifndef Py_NO_RETURN -#define Py_NO_RETURN -#endif -static void __pyx_fatalerror(const char *fmt, ...) Py_NO_RETURN { - va_list vargs; - char msg[200]; -#if PY_VERSION_HEX >= 0x030A0000 || defined(HAVE_STDARG_PROTOTYPES) - va_start(vargs, fmt); -#else - va_start(vargs); -#endif - vsnprintf(msg, 200, fmt, vargs); - va_end(vargs); - Py_FatalError(msg); -} -static CYTHON_INLINE int -__pyx_add_acquisition_count_locked(__pyx_atomic_int_type *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)++; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE int -__pyx_sub_acquisition_count_locked(__pyx_atomic_int_type *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)--; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE void -__Pyx_INC_MEMVIEW(__Pyx_memviewslice *memslice, int have_gil, int lineno) -{ - __pyx_nonatomic_int_type old_acquisition_count; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) { - return; - } - old_acquisition_count = __pyx_add_acquisition_count(memview); - if (unlikely(old_acquisition_count <= 0)) { - if (likely(old_acquisition_count == 0)) { - if (have_gil) { - Py_INCREF((PyObject *) memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_INCREF((PyObject *) memview); - PyGILState_Release(_gilstate); - } - } else { - __pyx_fatalerror("Acquisition count is %d (line %d)", - old_acquisition_count+1, lineno); - } - } -} -static CYTHON_INLINE void __Pyx_XCLEAR_MEMVIEW(__Pyx_memviewslice *memslice, - int have_gil, int lineno) { - __pyx_nonatomic_int_type old_acquisition_count; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) { - memslice->memview = NULL; - return; - } - old_acquisition_count = __pyx_sub_acquisition_count(memview); - memslice->data = NULL; - if (likely(old_acquisition_count > 1)) { - memslice->memview = NULL; - } else if (likely(old_acquisition_count == 1)) { - if (have_gil) { - Py_CLEAR(memslice->memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_CLEAR(memslice->memview); - PyGILState_Release(_gilstate); - } - } else { - __pyx_fatalerror("Acquisition count is %d (line %d)", - old_acquisition_count-1, lineno); - } -} - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(int) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(int) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(int) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); 
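-/* final fallback below: serialize the value to raw bytes and let
-   _PyLong_FromByteArray rebuild it, detecting endianness at run time */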
-#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(int), - little, !is_unsigned); - } -} - -/* CIntFromPy */ - static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if ((sizeof(int) < sizeof(long))) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - if (unlikely(__Pyx_PyLong_IsNeg(x))) { - goto raise_neg_overflow; - } else if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(int, __Pyx_compact_upylong, __Pyx_PyLong_CompactValueUnsigned(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_DigitCount(x)) { - case 2: - if ((8 * sizeof(int) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 2 * PyLong_SHIFT)) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(int) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 3 * PyLong_SHIFT)) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(int) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 4 * PyLong_SHIFT)) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(int) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(int) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(int, __Pyx_compact_pylong, __Pyx_PyLong_CompactValue(x)) - } else { - const digit* digits = 
__Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_SignedDigitCount(x)) { - case -2: - if ((8 * sizeof(int) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(int) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(int) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 4 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(int) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 4 * PyLong_SHIFT)) { - return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } - } -#endif - if ((sizeof(int) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(int) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { - int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); -#if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } -#endif - if (likely(v)) { - int ret = -1; -#if !(CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) || 
defined(_PyLong_AsByteArray) - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); -#else - PyObject *stepval = NULL, *mask = NULL, *shift = NULL; - int bits, remaining_bits, is_negative = 0; - long idigit; - int chunk_size = (sizeof(long) < 8) ? 30 : 62; - if (unlikely(!PyLong_CheckExact(v))) { - PyObject *tmp = v; - v = PyNumber_Long(v); - assert(PyLong_CheckExact(v)); - Py_DECREF(tmp); - if (unlikely(!v)) return (int) -1; - } -#if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(x) == 0) - return (int) 0; - is_negative = Py_SIZE(x) < 0; -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - is_negative = result == 1; - } -#endif - if (is_unsigned && unlikely(is_negative)) { - goto raise_neg_overflow; - } else if (is_negative) { - stepval = PyNumber_Invert(v); - if (unlikely(!stepval)) - return (int) -1; - } else { - stepval = __Pyx_NewRef(v); - } - val = (int) 0; - mask = PyLong_FromLong((1L << chunk_size) - 1); if (unlikely(!mask)) goto done; - shift = PyLong_FromLong(chunk_size); if (unlikely(!shift)) goto done; - for (bits = 0; bits < (int) sizeof(int) * 8 - chunk_size; bits += chunk_size) { - PyObject *tmp, *digit; - digit = PyNumber_And(stepval, mask); - if (unlikely(!digit)) goto done; - idigit = PyLong_AsLong(digit); - Py_DECREF(digit); - if (unlikely(idigit < 0)) goto done; - tmp = PyNumber_Rshift(stepval, shift); - if (unlikely(!tmp)) goto done; - Py_DECREF(stepval); stepval = tmp; - val |= ((int) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(stepval) == 0) - goto unpacking_done; - #endif - } - idigit = PyLong_AsLong(stepval); - if (unlikely(idigit < 0)) goto done; - remaining_bits = ((int) sizeof(int) * 8) - bits - (is_unsigned ? 
0 : 1); - if (unlikely(idigit >= (1L << remaining_bits))) - goto raise_overflow; - val |= ((int) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - unpacking_done: - #endif - if (!is_unsigned) { - if (unlikely(val & (((int) 1) << (sizeof(int) * 8 - 1)))) - goto raise_overflow; - if (is_negative) - val = ~val; - } - ret = 0; - done: - Py_XDECREF(shift); - Py_XDECREF(mask); - Py_XDECREF(stepval); -#endif - Py_DECREF(v); - if (likely(!ret)) - return val; - } - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); - } -} - -/* CIntFromPy */ - static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if ((sizeof(long) < sizeof(long))) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - if (unlikely(__Pyx_PyLong_IsNeg(x))) { - goto raise_neg_overflow; - } else if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(long, __Pyx_compact_upylong, __Pyx_PyLong_CompactValueUnsigned(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_DigitCount(x)) { - case 2: - if ((8 * sizeof(long) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 2 * PyLong_SHIFT)) { - return (long) (((((long)digits[1]) << 
PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(long) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 3 * PyLong_SHIFT)) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(long) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 4 * PyLong_SHIFT)) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(long) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(long) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(long, __Pyx_compact_pylong, __Pyx_PyLong_CompactValue(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_SignedDigitCount(x)) { - case -2: - if ((8 * sizeof(long) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(long) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(long) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << 
PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 4 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(long) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 4 * PyLong_SHIFT)) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } - } -#endif - if ((sizeof(long) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(long) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); -#if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } -#endif - if (likely(v)) { - int ret = -1; -#if !(CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) || defined(_PyLong_AsByteArray) - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); -#else - PyObject *stepval = NULL, *mask = NULL, *shift = NULL; - int bits, remaining_bits, is_negative = 0; - long idigit; - int chunk_size = (sizeof(long) < 8) ? 
30 : 62; - if (unlikely(!PyLong_CheckExact(v))) { - PyObject *tmp = v; - v = PyNumber_Long(v); - assert(PyLong_CheckExact(v)); - Py_DECREF(tmp); - if (unlikely(!v)) return (long) -1; - } -#if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(x) == 0) - return (long) 0; - is_negative = Py_SIZE(x) < 0; -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - is_negative = result == 1; - } -#endif - if (is_unsigned && unlikely(is_negative)) { - goto raise_neg_overflow; - } else if (is_negative) { - stepval = PyNumber_Invert(v); - if (unlikely(!stepval)) - return (long) -1; - } else { - stepval = __Pyx_NewRef(v); - } - val = (long) 0; - mask = PyLong_FromLong((1L << chunk_size) - 1); if (unlikely(!mask)) goto done; - shift = PyLong_FromLong(chunk_size); if (unlikely(!shift)) goto done; - for (bits = 0; bits < (int) sizeof(long) * 8 - chunk_size; bits += chunk_size) { - PyObject *tmp, *digit; - digit = PyNumber_And(stepval, mask); - if (unlikely(!digit)) goto done; - idigit = PyLong_AsLong(digit); - Py_DECREF(digit); - if (unlikely(idigit < 0)) goto done; - tmp = PyNumber_Rshift(stepval, shift); - if (unlikely(!tmp)) goto done; - Py_DECREF(stepval); stepval = tmp; - val |= ((long) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(stepval) == 0) - goto unpacking_done; - #endif - } - idigit = PyLong_AsLong(stepval); - if (unlikely(idigit < 0)) goto done; - remaining_bits = ((int) sizeof(long) * 8) - bits - (is_unsigned ? 0 : 1); - if (unlikely(idigit >= (1L << remaining_bits))) - goto raise_overflow; - val |= ((long) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - unpacking_done: - #endif - if (!is_unsigned) { - if (unlikely(val & (((long) 1) << (sizeof(long) * 8 - 1)))) - goto raise_overflow; - if (is_negative) - val = ~val; - } - ret = 0; - done: - Py_XDECREF(shift); - Py_XDECREF(mask); - Py_XDECREF(stepval); -#endif - Py_DECREF(v); - if (likely(!ret)) - return val; - } - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* CIntFromPy */ - static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const char neg_one = (char) -1, const_zero = (char) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if ((sizeof(char) < sizeof(long))) { - __PYX_VERIFY_RETURN_INT(char, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (char) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - if (unlikely(__Pyx_PyLong_IsNeg(x))) { - goto raise_neg_overflow; - } else if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(char, __Pyx_compact_upylong, __Pyx_PyLong_CompactValueUnsigned(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - 
assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_DigitCount(x)) { - case 2: - if ((8 * sizeof(char) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) >= 2 * PyLong_SHIFT)) { - return (char) (((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(char) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) >= 3 * PyLong_SHIFT)) { - return (char) (((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(char) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) >= 4 * PyLong_SHIFT)) { - return (char) (((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - } - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (char) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(char) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(char) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(char, __Pyx_compact_pylong, __Pyx_PyLong_CompactValue(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_SignedDigitCount(x)) { - case -2: - if ((8 * sizeof(char) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) - 1 > 2 * PyLong_SHIFT)) { - return (char) (((char)-1)*(((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(char) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) - 1 > 2 * PyLong_SHIFT)) { - return (char) ((((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(char) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * 
sizeof(char) - 1 > 3 * PyLong_SHIFT)) { - return (char) (((char)-1)*(((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(char) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) - 1 > 3 * PyLong_SHIFT)) { - return (char) ((((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(char) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) - 1 > 4 * PyLong_SHIFT)) { - return (char) (((char)-1)*(((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(char) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) - 1 > 4 * PyLong_SHIFT)) { - return (char) ((((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - } - } -#endif - if ((sizeof(char) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(char, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(char) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(char, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { - char val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); -#if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } -#endif - if (likely(v)) { - int ret = -1; -#if !(CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) || defined(_PyLong_AsByteArray) - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); -#else - PyObject *stepval = NULL, *mask = NULL, *shift = NULL; - int bits, remaining_bits, is_negative = 0; - long idigit; - int chunk_size = (sizeof(long) < 8) ? 
30 : 62; - if (unlikely(!PyLong_CheckExact(v))) { - PyObject *tmp = v; - v = PyNumber_Long(v); - assert(PyLong_CheckExact(v)); - Py_DECREF(tmp); - if (unlikely(!v)) return (char) -1; - } -#if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(x) == 0) - return (char) 0; - is_negative = Py_SIZE(x) < 0; -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (char) -1; - is_negative = result == 1; - } -#endif - if (is_unsigned && unlikely(is_negative)) { - goto raise_neg_overflow; - } else if (is_negative) { - stepval = PyNumber_Invert(v); - if (unlikely(!stepval)) - return (char) -1; - } else { - stepval = __Pyx_NewRef(v); - } - val = (char) 0; - mask = PyLong_FromLong((1L << chunk_size) - 1); if (unlikely(!mask)) goto done; - shift = PyLong_FromLong(chunk_size); if (unlikely(!shift)) goto done; - for (bits = 0; bits < (int) sizeof(char) * 8 - chunk_size; bits += chunk_size) { - PyObject *tmp, *digit; - digit = PyNumber_And(stepval, mask); - if (unlikely(!digit)) goto done; - idigit = PyLong_AsLong(digit); - Py_DECREF(digit); - if (unlikely(idigit < 0)) goto done; - tmp = PyNumber_Rshift(stepval, shift); - if (unlikely(!tmp)) goto done; - Py_DECREF(stepval); stepval = tmp; - val |= ((char) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(stepval) == 0) - goto unpacking_done; - #endif - } - idigit = PyLong_AsLong(stepval); - if (unlikely(idigit < 0)) goto done; - remaining_bits = ((int) sizeof(char) * 8) - bits - (is_unsigned ? 0 : 1); - if (unlikely(idigit >= (1L << remaining_bits))) - goto raise_overflow; - val |= ((char) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - unpacking_done: - #endif - if (!is_unsigned) { - if (unlikely(val & (((char) 1) << (sizeof(char) * 8 - 1)))) - goto raise_overflow; - if (is_negative) - val = ~val; - } - ret = 0; - done: - Py_XDECREF(shift); - Py_XDECREF(mask); - Py_XDECREF(stepval); -#endif - Py_DECREF(v); - if (likely(!ret)) - return val; - } - return (char) -1; - } - } else { - char val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (char) -1; - val = __Pyx_PyInt_As_char(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to char"); - return (char) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to char"); - return (char) -1; -} - -/* FormatTypeName */ - #if CYTHON_COMPILING_IN_LIMITED_API -static __Pyx_TypeName -__Pyx_PyType_GetName(PyTypeObject* tp) -{ - PyObject *name = __Pyx_PyObject_GetAttrStr((PyObject *)tp, - __pyx_n_s_name_2); - if (unlikely(name == NULL) || unlikely(!PyUnicode_Check(name))) { - PyErr_Clear(); - Py_XSETREF(name, __Pyx_NewRef(__pyx_n_s__23)); - } - return name; -} -#endif - -/* CheckBinaryVersion */ - static int __Pyx_check_binary_version(void) { - char ctversion[5]; - int same=1, i, found_dot; - const char* rt_from_call = Py_GetVersion(); - PyOS_snprintf(ctversion, 5, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); - found_dot = 0; - for (i = 0; i < 4; i++) { - if (!ctversion[i]) { - same = (rt_from_call[i] < '0' || rt_from_call[i] > '9'); - break; - } - if (rt_from_call[i] != ctversion[i]) { - same = 0; - break; - } - } - if (!same) { - char rtversion[5] = {'\0'}; - char message[200]; - for (i=0; i<4; ++i) { - if (rt_from_call[i] == '.') { - if (found_dot) break; - found_dot = 1; - } else if (rt_from_call[i] < '0' || 
rt_from_call[i] > '9') { - break; - } - rtversion[i] = rt_from_call[i]; - } - PyOS_snprintf(message, sizeof(message), - "compile time version %s of module '%.100s' " - "does not match runtime version %s", - ctversion, __Pyx_MODULE_NAME, rtversion); - return PyErr_WarnEx(NULL, message, 1); - } - return 0; -} - -/* InitStrings */ - #if PY_MAJOR_VERSION >= 3 -static int __Pyx_InitString(__Pyx_StringTabEntry t, PyObject **str) { - if (t.is_unicode | t.is_str) { - if (t.intern) { - *str = PyUnicode_InternFromString(t.s); - } else if (t.encoding) { - *str = PyUnicode_Decode(t.s, t.n - 1, t.encoding, NULL); - } else { - *str = PyUnicode_FromStringAndSize(t.s, t.n - 1); - } - } else { - *str = PyBytes_FromStringAndSize(t.s, t.n - 1); - } - if (!*str) - return -1; - if (PyObject_Hash(*str) == -1) - return -1; - return 0; -} -#endif -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION >= 3 - __Pyx_InitString(*t, t->p); - #else - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - #endif - ++t; - } - return 0; -} - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED -static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY && !CYTHON_COMPILING_IN_LIMITED_API) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} 
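The generated `__Pyx_PyInt_As_*` converters above all share one narrowing scheme: try the fast digit-by-digit paths first, then fall back to pulling the value out in fixed-size chunks with a mask/shift loop and an explicit overflow check. A rough Python model of that fallback path, offered only as a reading aid (the helper name is hypothetical, and two's-complement widths are assumed):

```python
def narrow_signed(value: int, n_bits: int, chunk: int = 30) -> int:
    # Model of the chunked fallback in the __Pyx_PyInt_As_* helpers:
    # negative inputs are bit-inverted so only a non-negative magnitude
    # is unpacked chunk by chunk, then inverted back at the end.
    negative = value < 0
    step = ~value if negative else value        # fold the sign out
    out, bits, mask = 0, 0, (1 << chunk) - 1
    while bits < n_bits - chunk:                # pull `chunk` bits at a time
        out |= (step & mask) << bits
        step >>= chunk
        bits += chunk
    if step >= 1 << (n_bits - bits - 1):        # last chunk plus sign bit must fit
        raise OverflowError("value too large to convert to int")
    out |= step << bits
    return ~out if negative else out

assert narrow_signed(-2**31, 32) == -2**31      # fits exactly
assert narrow_signed(127, 8) == 127
```

The C version uses a chunk size of 30 or 62 bits depending on `sizeof(long)`; the constant here mirrors the 32-bit case only.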
-static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { - __Pyx_TypeName result_type_name = __Pyx_PyType_GetName(Py_TYPE(result)); -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type " __Pyx_FMT_TYPENAME "). " - "The ability to return an instance of a strict subclass of int is deprecated, " - "and may be removed in a future version of Python.", - result_type_name)) { - __Pyx_DECREF_TypeName(result_type_name); - Py_DECREF(result); - return NULL; - } - __Pyx_DECREF_TypeName(result_type_name); - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type " __Pyx_FMT_TYPENAME ")", - type_name, type_name, result_type_name); - __Pyx_DECREF_TypeName(result_type_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(__Pyx_PyLong_IsCompact(b))) { - return __Pyx_PyLong_CompactValue(b); - } else { - const digit* digits = __Pyx_PyLong_Digits(b); - const Py_ssize_t size = __Pyx_PyLong_SignedDigitCount(b); - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) 
(((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) { - if (sizeof(Py_hash_t) == sizeof(Py_ssize_t)) { - return (Py_hash_t) __Pyx_PyIndex_AsSsize_t(o); -#if PY_MAJOR_VERSION < 3 - } else if (likely(PyInt_CheckExact(o))) { - return PyInt_AS_LONG(o); -#endif - } else { - Py_ssize_t ival; - PyObject *x; - x = PyNumber_Index(o); - if (!x) return -1; - ival = PyInt_AsLong(x); - Py_DECREF(x); - return ival; - } -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -/* #### Code section: utility_code_pragmas_end ### */ -#ifdef _MSC_VER -#pragma warning( pop ) -#endif - - - -/* #### Code section: end ### */ -#endif /* Py_PYTHON_H */ diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/util/nvdiffrast.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/util/nvdiffrast.py deleted file mode 100644 index f3245859c650afbfe841a66b74cddefaf28820d9..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/util/nvdiffrast.py +++ /dev/null @@ -1,126 +0,0 @@ -"""This script is the differentiable renderer for Deep3DFaceRecon_pytorch - Note: the antialiasing step is missing in the current version.
-""" -import pytorch3d.ops -import torch -import torch.nn.functional as F -import kornia -from kornia.geometry.camera import pixel2cam -import numpy as np -from typing import List -from scipy.io import loadmat -from torch import nn - -from pytorch3d.structures import Meshes -from pytorch3d.renderer import ( - look_at_view_transform, - FoVPerspectiveCameras, - DirectionalLights, - RasterizationSettings, - MeshRenderer, - MeshRasterizer, - SoftPhongShader, - TexturesUV, -) - -# def ndc_projection(x=0.1, n=1.0, f=50.0): -# return np.array([[n/x, 0, 0, 0], -# [ 0, n/-x, 0, 0], -# [ 0, 0, -(f+n)/(f-n), -(2*f*n)/(f-n)], -# [ 0, 0, -1, 0]]).astype(np.float32) - -class MeshRenderer(nn.Module): - def __init__(self, - rasterize_fov, - znear=0.1, - zfar=10, - rasterize_size=224): - super(MeshRenderer, self).__init__() - - # x = np.tan(np.deg2rad(rasterize_fov * 0.5)) * znear - # self.ndc_proj = torch.tensor(ndc_projection(x=x, n=znear, f=zfar)).matmul( - # torch.diag(torch.tensor([1., -1, -1, 1]))) - self.rasterize_size = rasterize_size - self.fov = rasterize_fov - self.znear = znear - self.zfar = zfar - - self.rasterizer = None - - def forward(self, vertex, tri, feat=None): - """ - Return: - mask -- torch.tensor, size (B, 1, H, W) - depth -- torch.tensor, size (B, 1, H, W) - features(optional) -- torch.tensor, size (B, C, H, W) if feat is not None - - Parameters: - vertex -- torch.tensor, size (B, N, 3) - tri -- torch.tensor, size (B, M, 3) or (M, 3), triangles - feat(optional) -- torch.tensor, size (B, N ,C), features - """ - device = vertex.device - rsize = int(self.rasterize_size) - # ndc_proj = self.ndc_proj.to(device) - # transform to homogeneous coordinates of 3d vertices; the direction of y is the same as v - if vertex.shape[-1] == 3: - vertex = torch.cat([vertex, torch.ones([*vertex.shape[:2], 1]).to(device)], dim=-1) - vertex[..., 0] = -vertex[..., 0] - - - # vertex_ndc = vertex @ ndc_proj.t() - if self.rasterizer is None: - self.rasterizer = MeshRasterizer() - print("create rasterizer on device cuda:%d"%device.index) - - # ranges = None - # if isinstance(tri, List) or len(tri.shape) == 3: - # vum = vertex_ndc.shape[1] - # fnum = torch.tensor([f.shape[0] for f in tri]).unsqueeze(1).to(device) - # fstartidx = torch.cumsum(fnum, dim=0) - fnum - # ranges = torch.cat([fstartidx, fnum], axis=1).type(torch.int32).cpu() - # for i in range(tri.shape[0]): - # tri[i] = tri[i] + i*vum - # vertex_ndc = torch.cat(vertex_ndc, dim=0) - # tri = torch.cat(tri, dim=0) - - # for range_mode vertex: [B*N, 4], tri: [B*M, 3]; for instance_mode vertex: [B, N, 4], tri: [M, 3] - tri = tri.type(torch.int32).contiguous() - - # rasterize - cameras = FoVPerspectiveCameras( - device=device, - fov=self.fov, - znear=self.znear, - zfar=self.zfar, - ) - - raster_settings = RasterizationSettings( - image_size=rsize - ) - - # print(vertex.shape, tri.shape) - mesh = Meshes(vertex.contiguous()[...,:3], tri.unsqueeze(0).repeat((vertex.shape[0],1,1))) - - fragments = self.rasterizer(mesh, cameras = cameras, raster_settings = raster_settings) - rast_out = fragments.pix_to_face.squeeze(-1) - depth = fragments.zbuf - - # render depth - depth = depth.permute(0, 3, 1, 2) - mask = (rast_out > 0).float().unsqueeze(1) - depth = mask * depth - - - image = None - if feat is not None: - attributes = feat.reshape(-1,3)[mesh.faces_packed()] - image = pytorch3d.ops.interpolate_face_attributes(fragments.pix_to_face, - fragments.bary_coords, - attributes) - # print(image.shape) - image = image.squeeze(-2).permute(0, 3, 1, 2) - image = mask * image - - 
return mask, depth, image - diff --git a/spaces/kevinwang676/SadTalker/src/facerender/sync_batchnorm/comm.py b/spaces/kevinwang676/SadTalker/src/facerender/sync_batchnorm/comm.py deleted file mode 100644 index 922f8c4a3adaa9b32fdcaef09583be03b0d7eb2b..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/facerender/sync_batchnorm/comm.py +++ /dev/null @@ -1,137 +0,0 @@ -# -*- coding: utf-8 -*- -# File : comm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import queue -import collections -import threading - -__all__ = ['FutureResult', 'SlavePipe', 'SyncMaster'] - - -class FutureResult(object): - """A thread-safe future implementation. Used only as one-to-one pipe.""" - - def __init__(self): - self._result = None - self._lock = threading.Lock() - self._cond = threading.Condition(self._lock) - - def put(self, result): - with self._lock: - assert self._result is None, 'Previous result hasn\'t been fetched.' - self._result = result - self._cond.notify() - - def get(self): - with self._lock: - if self._result is None: - self._cond.wait() - - res = self._result - self._result = None - return res - - -_MasterRegistry = collections.namedtuple('MasterRegistry', ['result']) -_SlavePipeBase = collections.namedtuple('_SlavePipeBase', ['identifier', 'queue', 'result']) - - -class SlavePipe(_SlavePipeBase): - """Pipe for master-slave communication.""" - - def run_slave(self, msg): - self.queue.put((self.identifier, msg)) - ret = self.result.get() - self.queue.put(True) - return ret - - -class SyncMaster(object): - """An abstract `SyncMaster` object. - - - During the replication, as data parallel will trigger a callback on each module, all slave devices should - call `register(id)` and obtain a `SlavePipe` to communicate with the master. - - During the forward pass, the master device invokes `run_master`; all messages from slave devices are collected, - and passed to a registered callback. - - After receiving the messages, the master device should gather the information and determine the message to be passed - back to each slave device. - """ - - def __init__(self, master_callback): - """ - - Args: - master_callback: a callback to be invoked after having collected messages from slave devices. - """ - self._master_callback = master_callback - self._queue = queue.Queue() - self._registry = collections.OrderedDict() - self._activated = False - - def __getstate__(self): - return {'master_callback': self._master_callback} - - def __setstate__(self, state): - self.__init__(state['master_callback']) - - def register_slave(self, identifier): - """ - Register a slave device. - - Args: - identifier: an identifier, usually the device id. - - Returns: a `SlavePipe` object which can be used to communicate with the master device. - - """ - if self._activated: - assert self._queue.empty(), 'Queue is not clean before next initialization.' - self._activated = False - self._registry.clear() - future = FutureResult() - self._registry[identifier] = _MasterRegistry(future) - return SlavePipe(identifier, self._queue, future) - - def run_master(self, master_msg): - """ - Main entry for the master device in each forward pass. 
- The messages are first collected from each device (including the master device), and then - a callback is invoked to compute the message to be sent back to each device - (including the master device). - - Args: - master_msg: the message that the master wants to send to itself. This will be placed as the first - message when calling `master_callback`. For detailed usage, see `_SynchronizedBatchNorm` for an example. - - Returns: the message to be sent back to the master device. - - """ - self._activated = True - - intermediates = [(0, master_msg)] - for i in range(self.nr_slaves): - intermediates.append(self._queue.get()) - - results = self._master_callback(intermediates) - assert results[0][0] == 0, 'The first result should belong to the master.' - - for i, res in results: - if i == 0: - continue - self._registry[i].result.put(res) - - for i in range(self.nr_slaves): - assert self._queue.get() is True - - return results[0][1] - - @property - def nr_slaves(self): - return len(self._registry) diff --git a/spaces/kevinwang676/VALLE/data/__init__.py b/spaces/kevinwang676/VALLE/data/__init__.py deleted file mode 100644 index 68f9defe677e03da5224c42cb28932f2e7f75ada..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VALLE/data/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .collation import * diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/datasets/pascal_context.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/datasets/pascal_context.py deleted file mode 100644 index ff65bad1b86d7e3a5980bb5b9fc55798dc8df5f4..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/datasets/pascal_context.py +++ /dev/null @@ -1,60 +0,0 @@ -# dataset settings -dataset_type = 'PascalContextDataset' -data_root = 'data/VOCdevkit/VOC2010/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -img_scale = (520, 520) -crop_size = (480, 480) - -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale, - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/train.txt', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/val.txt', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/val.txt', - pipeline=test_pipeline)) 
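The pipeline entries in the config above are plain `dict(type=..., **kwargs)` records that the framework resolves against a transform registry at build time. A minimal sketch of that mechanism, with a toy registry and a single transform class standing in for the real framework's implementation (only the `Normalize` parameters are taken from `img_norm_cfg` above; everything else here is illustrative):

```python
import numpy as np

TRANSFORMS = {}  # toy stand-in for the framework's transform registry

def register(cls):
    TRANSFORMS[cls.__name__] = cls
    return cls

@register
class Normalize:
    def __init__(self, mean, std, to_rgb=True):
        self.mean = np.asarray(mean, dtype=np.float32)
        self.std = np.asarray(std, dtype=np.float32)
        self.to_rgb = to_rgb

    def __call__(self, results):
        img = results["img"].astype(np.float32)
        if self.to_rgb:
            img = img[..., ::-1]  # BGR -> RGB, matching to_rgb=True
        results["img"] = (img - self.mean) / self.std
        return results

def build_pipeline(cfgs):
    # Each dict(type=..., **kwargs) becomes an instance of the named class.
    steps = []
    for cfg in cfgs:
        kwargs = dict(cfg)
        cls = TRANSFORMS[kwargs.pop("type")]
        steps.append(cls(**kwargs))
    return steps

pipeline = build_pipeline([
    dict(type="Normalize", mean=[123.675, 116.28, 103.53],
         std=[58.395, 57.12, 57.375], to_rgb=True),
])
```

The remaining entries (`Resize`, `RandomCrop`, `Pad`, and so on) follow the same pattern: the `type` key names a registered class, and the rest of the dict becomes its constructor arguments.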
diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/pointer_generator/pointer_generator_src/__init__.py b/spaces/koajoel/PolyFormer/fairseq/examples/pointer_generator/pointer_generator_src/__init__.py deleted file mode 100644 index c361ff6bd616512fe2521387665de1ad1aff66d0..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/pointer_generator/pointer_generator_src/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import transformer_pg # noqa diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/shuffled_word_order/README.finetuning.md b/spaces/koajoel/PolyFormer/fairseq/examples/shuffled_word_order/README.finetuning.md deleted file mode 100644 index ecbcb65884640c3327a2cbaef8aad4f3cfe812f7..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/shuffled_word_order/README.finetuning.md +++ /dev/null @@ -1,135 +0,0 @@ -# Fine-tuning details - -For each task (GLUE and PAWS), we perform a hyperparameter search for each model and report the mean and standard deviation across 5 seeds of the best model. First, get the datasets following the instructions in the [RoBERTa fine-tuning README](../roberta/README.glue.md). Alternatively, you can use [huggingface datasets](https://huggingface.co/docs/datasets/) to get the task data: - -```python -from datasets import load_dataset -import pandas as pd -from pathlib import Path - -key2file = { -"paws": { - "loc": "paws_data", - "columns": ["id", "sentence1", "sentence2", "label"], - "train": "train.tsv", - "validation": "dev.tsv", - "test": "test.tsv" - } -} - -task_data = load_dataset("paws", "labeled_final") -task_config = key2file["paws"] -save_path = Path(task_config["loc"]) -save_path.mkdir(exist_ok=True, parents=True) -for key, fl in task_config.items(): - if key in ["loc", "columns"]: - continue - print(f"Reading {key}") - columns = task_config["columns"] - df = pd.DataFrame(task_data[key]) - print(df.columns) - df = df[columns] - print(f"Got {len(df)} records") - save_loc = save_path / fl - print(f"Saving to : {save_loc}") - df.to_csv(save_loc, sep="\t", header=None, index=None) - -``` - -- Preprocess using the RoBERTa GLUE preprocessing script, keeping in mind the column numbers for `sentence1`, `sentence2`, and `label` (which are 0, 1, 2 if you save the data according to the above example). -- Then, fine-tuning is performed similarly to RoBERTa (for example, in the case of RTE): - -```bash -TOTAL_NUM_UPDATES=30875 # 10 epochs through RTE for bsz 16 -WARMUP_UPDATES=1852 # 6 percent of the number of updates -LR=2e-05 # Peak LR for polynomial LR scheduler. -NUM_CLASSES=2 -MAX_SENTENCES=16 # Batch size. 
-SHUFFLED_ROBERTA_PATH=/path/to/shuffled_roberta/model.pt - -CUDA_VISIBLE_DEVICES=0 fairseq-train RTE-bin/ \ - --restore-file $SHUFFLED_ROBERTA_PATH \ - --max-positions 512 \ - --batch-size $MAX_SENTENCES \ - --max-tokens 4400 \ - --task sentence_prediction \ - --reset-optimizer --reset-dataloader --reset-meters \ - --required-batch-size-multiple 1 \ - --init-token 0 --separator-token 2 \ - --arch roberta_large \ - --criterion sentence_prediction \ - --num-classes $NUM_CLASSES \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \ - --clip-norm 0.0 \ - --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \ - --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \ - --max-epoch 10 \ - --find-unused-parameters \ - --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric; -``` - -- `TOTAL_NUM_UPDATES` is computed based on the `--batch_size` value and the dataset size. -- `WARMUP_UPDATES` is computed as 6% of `TOTAL_NUM_UPDATES`. -- The best hyperparameters for `--lr` and `--batch_size` are reported below: - -## `--lr` - -| | name | RTE | MRPC | SST-2 | CoLA | QQP | QNLI | MNLI | PAWS | -| --: | :----------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ----: | -| 0 | original | 2e-05 | 2e-05 | 1e-05 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 2e-05 | -| 1 | n_1 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 3e-05 | 1e-05 | 2e-05 | 2e-05 | -| 2 | n_2 | 2e-05 | 2e-05 | 1e-05 | 1e-05 | 2e-05 | 1e-05 | 1e-05 | 3e-05 | -| 3 | n_3 | 3e-05 | 1e-05 | 2e-05 | 2e-05 | 3e-05 | 1e-05 | 1e-05 | 2e-05 | -| 4 | n_4 | 3e-05 | 1e-05 | 2e-05 | 2e-05 | 2e-05 | 1e-05 | 1e-05 | 2e-05 | -| 5 | r512 | 1e-05 | 3e-05 | 2e-05 | 2e-05 | 3e-05 | 2e-05 | 3e-05 | 2e-05 | -| 6 | rand_corpus | 2e-05 | 1e-05 | 3e-05 | 1e-05 | 3e-05 | 3e-05 | 3e-05 | 2e-05 | -| 7 | rand_uniform | 2e-05 | 1e-05 | 3e-05 | 2e-05 | 3e-05 | 3e-05 | 3e-05 | 1e-05 | -| 8 | rand_init | 1e-05 | 1e-05 | 3e-05 | 1e-05 | 1e-05 | 1e-05 | 2e-05 | 1e-05 | -| 9 | no_pos | 1e-05 | 3e-05 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 1e-05 | 1e-05 | - -## `--batch_size` - -| | name | RTE | MRPC | SST-2 | CoLA | QQP | QNLI | MNLI | PAWS | -| --: | :----------- | --: | ---: | ----: | ---: | --: | ---: | ---: | ---: | -| 0 | orig | 16 | 16 | 32 | 16 | 16 | 32 | 32 | 16 | -| 1 | n_1 | 32 | 32 | 16 | 32 | 32 | 16 | 32 | 16 | -| 2 | n_2 | 32 | 16 | 32 | 16 | 32 | 32 | 16 | 32 | -| 3 | n_3 | 32 | 32 | 16 | 32 | 32 | 16 | 32 | 32 | -| 4 | n_4 | 32 | 16 | 32 | 16 | 32 | 32 | 32 | 32 | -| 5 | r512 | 32 | 16 | 16 | 32 | 32 | 16 | 16 | 16 | -| 6 | rand_corpus | 16 | 16 | 16 | 16 | 32 | 16 | 16 | 32 | -| 7 | rand_uniform | 16 | 32 | 16 | 16 | 32 | 16 | 16 | 16 | -| 8 | rand_init | 16 | 16 | 32 | 16 | 16 | 16 | 32 | 16 | -| 9 | no_pos | 16 | 32 | 16 | 16 | 32 | 16 | 16 | 16 | - -- Perform inference similarly to RoBERTa as well: - -```python -from fairseq.models.roberta import RobertaModel - -roberta = RobertaModel.from_pretrained( - 'checkpoints/', - checkpoint_file='checkpoint_best.pt', - data_name_or_path='PAWS-bin' -) - -label_fn = lambda label: roberta.task.label_dictionary.string( - [label + roberta.task.label_dictionary.nspecial] -) -ncorrect, nsamples = 0, 0 -roberta.cuda() -roberta.eval() -with open('paws_data/dev.tsv') as fin: - fin.readline() - for index, line in enumerate(fin): - tokens = line.strip().split('\t') - sent1, sent2, target = tokens[0], tokens[1], tokens[2] - tokens = roberta.encode(sent1, sent2) - prediction = 
roberta.predict('sentence_classification_head', tokens).argmax().item() - prediction_label = label_fn(prediction) - ncorrect += int(prediction_label == target) - nsamples += 1 -print('| Accuracy: ', float(ncorrect)/float(nsamples)) - -``` diff --git a/spaces/kornia/homography-warping/README.md b/spaces/kornia/homography-warping/README.md deleted file mode 100644 index 61b292db93e41c0397a735b91552b1d9722ba119..0000000000000000000000000000000000000000 --- a/spaces/kornia/homography-warping/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Homography Warping -emoji: 🌐 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-de9ed39e.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-de9ed39e.css deleted file mode 100644 index 463d37a8a75c97e2c4ecd3aaf5081dd8a2f90164..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-de9ed39e.css +++ /dev/null @@ -1 +0,0 @@ -.rangeSlider{--pip:var(--range-pip, lightslategray);--pip-text:var(--range-pip-text, var(--pip));--pip-active:var(--range-pip-active, darkslategrey);--pip-active-text:var(--range-pip-active-text, var(--pip-active));--pip-hover:var(--range-pip-hover, darkslategrey);--pip-hover-text:var(--range-pip-hover-text, var(--pip-hover));--pip-in-range:var(--range-pip-in-range, var(--pip-active));--pip-in-range-text:var(--range-pip-in-range-text, var(--pip-active-text))}.rangePips{position:absolute;height:1em;left:0;right:0;bottom:-1em}.rangePips.vertical{height:auto;width:1em;inset:0 auto 0 100%}.rangePips .pip{height:.4em;position:absolute;top:.25em;width:1px;white-space:nowrap}.rangePips.vertical .pip{height:1px;width:.4em;left:.25em;top:auto;bottom:auto}.rangePips .pipVal{position:absolute;top:.4em;transform:translate(-50%,25%)}.rangePips.vertical .pipVal{position:absolute;top:0;left:.4em;transform:translate(25%,-50%)}.rangePips .pip{transition:all .15s ease}.rangePips .pipVal{transition:all .15s ease,font-weight 0s linear}.rangePips .pip{color:#789;color:var(--pip-text);background-color:#789;background-color:var(--pip)}.rangePips .pip.selected{color:#2f4f4f;color:var(--pip-active-text);background-color:#2f4f4f;background-color:var(--pip-active)}.rangePips.hoverable:not(.disabled) .pip:hover{color:#2f4f4f;color:var(--pip-hover-text);background-color:#2f4f4f;background-color:var(--pip-hover)}.rangePips .pip.in-range{color:#2f4f4f;color:var(--pip-in-range-text);background-color:#2f4f4f;background-color:var(--pip-in-range)}.rangePips .pip.selected{height:.75em}.rangePips.vertical .pip.selected{height:1px;width:.75em}.rangePips .pip.selected .pipVal{font-weight:700;top:.75em}.rangePips.vertical .pip.selected .pipVal{top:0;left:.75em}.rangePips.hoverable:not(.disabled) .pip:not(.selected):hover{transition:none}.rangePips.hoverable:not(.disabled) .pip:not(.selected):hover .pipVal{transition:none;font-weight:700}.rangeSlider{--slider:var(--range-slider, #d7dada);--handle-inactive:var(--range-handle-inactive, #99a2a2);--handle:var(--range-handle, #838de7);--handle-focus:var(--range-handle-focus, #4a40d4);--handle-border:var(--range-handle-border, 
var(--handle));--range-inactive:var(--range-range-inactive, var(--handle-inactive));--range:var(--range-range, var(--handle-focus));--float-inactive:var(--range-float-inactive, var(--handle-inactive));--float:var(--range-float, var(--handle-focus));--float-text:var(--range-float-text, white)}.rangeSlider{position:relative;border-radius:100px;height:.5em;margin:1em;transition:opacity .2s ease;user-select:none}.rangeSlider *{user-select:none}.rangeSlider.pips{margin-bottom:1.8em}.rangeSlider.pip-labels{margin-bottom:2.8em}.rangeSlider.vertical{display:inline-block;border-radius:100px;width:.5em;min-height:200px}.rangeSlider.vertical.pips{margin-right:1.8em;margin-bottom:1em}.rangeSlider.vertical.pip-labels{margin-right:2.8em;margin-bottom:1em}.rangeSlider .rangeHandle{position:absolute;display:block;height:1.4em;width:1.4em;top:.25em;bottom:auto;transform:translateY(-50%) translate(-50%);z-index:2}.rangeSlider.reversed .rangeHandle{transform:translateY(-50%) translate(50%)}.rangeSlider.vertical .rangeHandle{left:.25em;top:auto;transform:translateY(50%) translate(-50%)}.rangeSlider.vertical.reversed .rangeHandle{transform:translateY(-50%) translate(-50%)}.rangeSlider .rangeNub,.rangeSlider .rangeHandle:before{position:absolute;left:0;top:0;display:block;border-radius:10em;height:100%;width:100%;transition:box-shadow .2s ease}.rangeSlider .rangeHandle:before{content:"";inset:1px;height:auto;width:auto;box-shadow:0 0 0 0 var(--handle-border);opacity:0}.rangeSlider.hoverable:not(.disabled) .rangeHandle:hover:before{box-shadow:0 0 0 8px var(--handle-border);opacity:.2}.rangeSlider.hoverable:not(.disabled) .rangeHandle.press:before,.rangeSlider.hoverable:not(.disabled) .rangeHandle.press:hover:before{box-shadow:0 0 0 12px var(--handle-border);opacity:.4}.rangeSlider.range:not(.min):not(.max) .rangeNub{border-radius:10em 10em 10em 1.6em}.rangeSlider.range .rangeHandle:nth-of-type(1) .rangeNub{transform:rotate(-135deg)}.rangeSlider.range .rangeHandle:nth-of-type(2) .rangeNub{transform:rotate(45deg)}.rangeSlider.range.reversed .rangeHandle:nth-of-type(1) .rangeNub{transform:rotate(45deg)}.rangeSlider.range.reversed .rangeHandle:nth-of-type(2) .rangeNub{transform:rotate(-135deg)}.rangeSlider.range.vertical .rangeHandle:nth-of-type(1) .rangeNub{transform:rotate(135deg)}.rangeSlider.range.vertical .rangeHandle:nth-of-type(2) .rangeNub{transform:rotate(-45deg)}.rangeSlider.range.vertical.reversed .rangeHandle:nth-of-type(1) .rangeNub{transform:rotate(-45deg)}.rangeSlider.range.vertical.reversed .rangeHandle:nth-of-type(2) .rangeNub{transform:rotate(135deg)}.rangeSlider .rangeFloat{display:block;position:absolute;left:50%;top:-.5em;transform:translate(-50%,-100%);font-size:1em;text-align:center;opacity:0;pointer-events:none;white-space:nowrap;transition:all .2s ease;font-size:.9em;padding:.2em .4em;border-radius:.2em}.rangeSlider .rangeHandle.active .rangeFloat,.rangeSlider.hoverable .rangeHandle:hover .rangeFloat{opacity:1;top:-.2em;transform:translate(-50%,-100%)}.rangeSlider .rangeBar{position:absolute;display:block;transition:background .2s ease;border-radius:1em;height:.5em;top:0;user-select:none;z-index:1}.rangeSlider.vertical .rangeBar{width:.5em;height:auto}.rangeSlider{background-color:#d7dada;background-color:var(--slider)}.rangeSlider .rangeBar{background-color:#99a2a2;background-color:var(--range-inactive)}.rangeSlider.focus .rangeBar{background-color:#838de7;background-color:var(--range)}.rangeSlider .rangeNub{background-color:#99a2a2;background-color:var(--handle-inactive)}.rangeSlider.focus 
.rangeNub{background-color:#838de7;background-color:var(--handle)}.rangeSlider .rangeHandle.active .rangeNub{background-color:#4a40d4;background-color:var(--handle-focus)}.rangeSlider .rangeFloat{color:#fff;color:var(--float-text);background-color:#99a2a2;background-color:var(--float-inactive)}.rangeSlider.focus .rangeFloat{background-color:#4a40d4;background-color:var(--float)}.rangeSlider.disabled{opacity:.5}.rangeSlider.disabled .rangeNub{background-color:#d7dada;background-color:var(--slider)}.mic-wrap.svelte-1thnwz{padding:var(--size-2)}.record-icon.svelte-1thnwz{display:flex;position:relative;margin-right:var(--size-2);width:6px;height:6px}.dot.svelte-1thnwz{display:inline-flex;position:relative;border-radius:var(--radius-full);background:var(--color-red-500);width:6px;height:6px}.pinger.svelte-1thnwz{display:inline-flex;position:absolute;opacity:.9;animation:svelte-1thnwz-ping 1s cubic-bezier(0,0,.2,1) infinite;border-radius:var(--radius-full);background:var(--color-red-500);width:var(--size-full);height:var(--size-full)}@keyframes svelte-1thnwz-ping{75%,to{transform:scale(2);opacity:0}}audio.svelte-1thnwz{padding:var(--size-2);width:var(--size-full);height:var(--size-14)}audio.svelte-eemfgq{padding:var(--size-2);width:var(--size-full);height:var(--size-14)} diff --git a/spaces/leogabraneth/text-generation-webui-main/modules/presets.py b/spaces/leogabraneth/text-generation-webui-main/modules/presets.py deleted file mode 100644 index 84e4492c181d8c99f82fb1ec1157585c1f9b9280..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/modules/presets.py +++ /dev/null @@ -1,74 +0,0 @@ -import functools -from pathlib import Path - -import yaml - - -def default_preset(): - return { - 'do_sample': True, - 'temperature': 1, - 'top_p': 1, - 'top_k': 0, - 'typical_p': 1, - 'epsilon_cutoff': 0, - 'eta_cutoff': 0, - 'tfs': 1, - 'top_a': 0, - 'repetition_penalty': 1, - 'presence_penalty': 0, - 'frequency_penalty': 0, - 'repetition_penalty_range': 0, - 'encoder_repetition_penalty': 1, - 'no_repeat_ngram_size': 0, - 'min_length': 0, - 'guidance_scale': 1, - 'mirostat_mode': 0, - 'mirostat_tau': 5.0, - 'mirostat_eta': 0.1, - 'penalty_alpha': 0, - 'num_beams': 1, - 'length_penalty': 1, - 'early_stopping': False, - 'custom_token_bans': '', - } - - -def presets_params(): - return [k for k in default_preset()] - - -def load_preset(name): - generate_params = default_preset() - if name not in ['None', None, '']: - with open(Path(f'presets/{name}.yaml'), 'r') as infile: - preset = yaml.safe_load(infile) - - for k in preset: - generate_params[k] = preset[k] - - generate_params['temperature'] = min(1.99, generate_params['temperature']) - return generate_params - - -@functools.cache -def load_preset_memoized(name): - return load_preset(name) - - -def load_preset_for_ui(name, state): - generate_params = load_preset(name) - state.update(generate_params) - return state, *[generate_params[k] for k in presets_params()] - - -def generate_preset_yaml(state): - defaults = default_preset() - data = {k: state[k] for k in presets_params()} - - # Remove entries that are identical to the defaults - for k in list(data.keys()): - if data[k] == defaults[k]: - del data[k] - - return yaml.dump(data, sort_keys=False) diff --git a/spaces/lewiswu1209/MockingBird/web/config/__init__.py b/spaces/lewiswu1209/MockingBird/web/config/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git 
a/spaces/librarian-bots/MetaRefine/README.md b/spaces/librarian-bots/MetaRefine/README.md deleted file mode 100644 index 51c75c49f8506d24c8310b9b198e180287a5a316..0000000000000000000000000000000000000000 --- a/spaces/librarian-bots/MetaRefine/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MetaRefine -emoji: 🔎 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/liimefruit/RVCollection/vc_infer_pipeline.py b/spaces/liimefruit/RVCollection/vc_infer_pipeline.py deleted file mode 100644 index ff2bab96cf56ac81173a4b06f725b9c2eb86dbc2..0000000000000000000000000000000000000000 --- a/spaces/liimefruit/RVCollection/vc_infer_pipeline.py +++ /dev/null @@ -1,431 +0,0 @@ -import numpy as np, parselmouth, torch, pdb -from time import time as ttime -import torch.nn.functional as F -import scipy.signal as signal -import pyworld, os, traceback, faiss, librosa, torchcrepe -from scipy import signal -from functools import lru_cache - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == 
"harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - model = "full" - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch!=None and pitchf!=None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch!=None and pitchf!=None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch!=None and 
pitchf!=None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : 
-self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt \ No newline at end of file diff --git a/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/docs/eval.md b/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/docs/eval.md deleted file mode 100644 index dd1d9e257367b6422680966198646c45e5a2671d..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/docs/eval.md +++ /dev/null @@ -1,31 +0,0 @@ -## Eval on ICCV2021-MFR - -coming soon. - - -## Eval IJBC -You can eval ijbc with pytorch or onnx. - - -1. Eval IJBC With Onnx -```shell -CUDA_VISIBLE_DEVICES=0 python onnx_ijbc.py --model-root ms1mv3_arcface_r50 --image-path IJB_release/IJBC --result-dir ms1mv3_arcface_r50 -``` - -2. Eval IJBC With Pytorch -```shell -CUDA_VISIBLE_DEVICES=0,1 python eval_ijbc.py \ ---model-prefix ms1mv3_arcface_r50/backbone.pth \ ---image-path IJB_release/IJBC \ ---result-dir ms1mv3_arcface_r50 \ ---batch-size 128 \ ---job ms1mv3_arcface_r50 \ ---target IJBC \ ---network iresnet50 -``` - -## Inference - -```shell -python inference.py --weight ms1mv3_arcface_r50/backbone.pth --network r50 -``` diff --git a/spaces/lizhen30/LangChainGo/chatgpt-next-web/utils.py b/spaces/lizhen30/LangChainGo/chatgpt-next-web/utils.py deleted file mode 100644 index f332036e6bdc5552eb9308f0e2d16d4304bbe2dc..0000000000000000000000000000000000000000 --- a/spaces/lizhen30/LangChainGo/chatgpt-next-web/utils.py +++ /dev/null @@ -1,6 +0,0 @@ -# coding=utf-8 -import datetime - - -def nowtime(): - return datetime.datetime.utcnow() + datetime.timedelta(hours=8) diff --git a/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/nnf.cpp b/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/nnf.cpp deleted file mode 100644 index efa2751e8ad07a65c41a589010bcd79eb54cdfff..0000000000000000000000000000000000000000 --- a/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/nnf.cpp +++ /dev/null @@ -1,268 +0,0 @@ -#include -#include -#include - -#include "masked_image.h" -#include "nnf.h" - -/** -* Nearest-Neighbor Field (see PatchMatch algorithm). -* This algorithme uses a version proposed by Xavier Philippeau. 
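-* Each field entry stores (target_y, target_x, distance): the best matching
-* target patch found so far for the corresponding source pixel.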
-* -*/ - -template -T clamp(T value, T min_value, T max_value) { - return std::min(std::max(value, min_value), max_value); -} - -void NearestNeighborField::_randomize_field(int max_retry, bool reset) { - auto this_size = source_size(); - for (int i = 0; i < this_size.height; ++i) { - for (int j = 0; j < this_size.width; ++j) { - if (m_source.is_globally_masked(i, j)) continue; - - auto this_ptr = mutable_ptr(i, j); - int distance = reset ? PatchDistanceMetric::kDistanceScale : this_ptr[2]; - if (distance < PatchDistanceMetric::kDistanceScale) { - continue; - } - - int i_target = 0, j_target = 0; - for (int t = 0; t < max_retry; ++t) { - i_target = rand() % this_size.height; - j_target = rand() % this_size.width; - if (m_target.is_globally_masked(i_target, j_target)) continue; - - distance = _distance(i, j, i_target, j_target); - if (distance < PatchDistanceMetric::kDistanceScale) - break; - } - - this_ptr[0] = i_target, this_ptr[1] = j_target, this_ptr[2] = distance; - } - } -} - -void NearestNeighborField::_initialize_field_from(const NearestNeighborField &other, int max_retry) { - const auto &this_size = source_size(); - const auto &other_size = other.source_size(); - double fi = static_cast(this_size.height) / other_size.height; - double fj = static_cast(this_size.width) / other_size.width; - - for (int i = 0; i < this_size.height; ++i) { - for (int j = 0; j < this_size.width; ++j) { - if (m_source.is_globally_masked(i, j)) continue; - - int ilow = static_cast(std::min(i / fi, static_cast(other_size.height - 1))); - int jlow = static_cast(std::min(j / fj, static_cast(other_size.width - 1))); - auto this_value = mutable_ptr(i, j); - auto other_value = other.ptr(ilow, jlow); - - this_value[0] = static_cast(other_value[0] * fi); - this_value[1] = static_cast(other_value[1] * fj); - this_value[2] = _distance(i, j, this_value[0], this_value[1]); - } - } - - _randomize_field(max_retry, false); -} - -void NearestNeighborField::minimize(int nr_pass) { - const auto &this_size = source_size(); - while (nr_pass--) { - for (int i = 0; i < this_size.height; ++i) - for (int j = 0; j < this_size.width; ++j) { - if (m_source.is_globally_masked(i, j)) continue; - if (at(i, j, 2) > 0) _minimize_link(i, j, +1); - } - for (int i = this_size.height - 1; i >= 0; --i) - for (int j = this_size.width - 1; j >= 0; --j) { - if (m_source.is_globally_masked(i, j)) continue; - if (at(i, j, 2) > 0) _minimize_link(i, j, -1); - } - } -} - -void NearestNeighborField::_minimize_link(int y, int x, int direction) { - const auto &this_size = source_size(); - const auto &this_target_size = target_size(); - auto this_ptr = mutable_ptr(y, x); - - // propagation along the y direction. - if (y - direction >= 0 && y - direction < this_size.height && !m_source.is_globally_masked(y - direction, x)) { - int yp = at(y - direction, x, 0) + direction; - int xp = at(y - direction, x, 1); - int dp = _distance(y, x, yp, xp); - if (dp < at(y, x, 2)) { - this_ptr[0] = yp, this_ptr[1] = xp, this_ptr[2] = dp; - } - } - - // propagation along the x direction. - if (x - direction >= 0 && x - direction < this_size.width && !m_source.is_globally_masked(y, x - direction)) { - int yp = at(y, x - direction, 0); - int xp = at(y, x - direction, 1) + direction; - int dp = _distance(y, x, yp, xp); - if (dp < at(y, x, 2)) { - this_ptr[0] = yp, this_ptr[1] = xp, this_ptr[2] = dp; - } - } - - // random search with a progressive step size. 
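-    // The window radius starts at half the smaller target dimension and is
-    // halved each iteration; any candidate that improves the patch distance
-    // replaces the current best match.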
- int random_scale = (std::min(this_target_size.height, this_target_size.width) - 1) / 2; - while (random_scale > 0) { - int yp = this_ptr[0] + (rand() % (2 * random_scale + 1) - random_scale); - int xp = this_ptr[1] + (rand() % (2 * random_scale + 1) - random_scale); - yp = clamp(yp, 0, target_size().height - 1); - xp = clamp(xp, 0, target_size().width - 1); - - if (m_target.is_globally_masked(yp, xp)) { - random_scale /= 2; - } - - int dp = _distance(y, x, yp, xp); - if (dp < at(y, x, 2)) { - this_ptr[0] = yp, this_ptr[1] = xp, this_ptr[2] = dp; - } - random_scale /= 2; - } -} - -const int PatchDistanceMetric::kDistanceScale = 65535; -const int PatchSSDDistanceMetric::kSSDScale = 9 * 255 * 255; - -namespace { - -inline int pow2(int i) { - return i * i; -} - -int distance_masked_images( - const MaskedImage &source, int ys, int xs, - const MaskedImage &target, int yt, int xt, - int patch_size -) { - long double distance = 0; - long double wsum = 0; - - source.compute_image_gradients(); - target.compute_image_gradients(); - - auto source_size = source.size(); - auto target_size = target.size(); - - for (int dy = -patch_size; dy <= patch_size; ++dy) { - const int yys = ys + dy, yyt = yt + dy; - - if (yys <= 0 || yys >= source_size.height - 1 || yyt <= 0 || yyt >= target_size.height - 1) { - distance += (long double)(PatchSSDDistanceMetric::kSSDScale) * (2 * patch_size + 1); - wsum += 2 * patch_size + 1; - continue; - } - - const auto *p_si = source.image().ptr(yys, 0); - const auto *p_ti = target.image().ptr(yyt, 0); - const auto *p_sm = source.mask().ptr(yys, 0); - const auto *p_tm = target.mask().ptr(yyt, 0); - - const unsigned char *p_sgm = nullptr; - const unsigned char *p_tgm = nullptr; - if (!source.global_mask().empty()) { - p_sgm = source.global_mask().ptr(yys, 0); - p_tgm = target.global_mask().ptr(yyt, 0); - } - - const auto *p_sgy = source.grady().ptr(yys, 0); - const auto *p_tgy = target.grady().ptr(yyt, 0); - const auto *p_sgx = source.gradx().ptr(yys, 0); - const auto *p_tgx = target.gradx().ptr(yyt, 0); - - for (int dx = -patch_size; dx <= patch_size; ++dx) { - int xxs = xs + dx, xxt = xt + dx; - wsum += 1; - - if (xxs <= 0 || xxs >= source_size.width - 1 || xxt <= 0 || xxt >= source_size.width - 1) { - distance += PatchSSDDistanceMetric::kSSDScale; - continue; - } - - if (p_sm[xxs] || p_tm[xxt] || (p_sgm && p_sgm[xxs]) || (p_tgm && p_tgm[xxt]) ) { - distance += PatchSSDDistanceMetric::kSSDScale; - continue; - } - - int ssd = 0; - for (int c = 0; c < 3; ++c) { - int s_value = p_si[xxs * 3 + c]; - int t_value = p_ti[xxt * 3 + c]; - int s_gy = p_sgy[xxs * 3 + c]; - int t_gy = p_tgy[xxt * 3 + c]; - int s_gx = p_sgx[xxs * 3 + c]; - int t_gx = p_tgx[xxt * 3 + c]; - - ssd += pow2(static_cast(s_value) - t_value); - ssd += pow2(static_cast(s_gx) - t_gx); - ssd += pow2(static_cast(s_gy) - t_gy); - } - distance += ssd; - } - } - - distance /= (long double)(PatchSSDDistanceMetric::kSSDScale); - - int res = int(PatchDistanceMetric::kDistanceScale * distance / wsum); - if (res < 0 || res > PatchDistanceMetric::kDistanceScale) return PatchDistanceMetric::kDistanceScale; - return res; -} - -} - -int PatchSSDDistanceMetric::operator ()(const MaskedImage &source, int source_y, int source_x, const MaskedImage &target, int target_y, int target_x) const { - return distance_masked_images(source, source_y, source_x, target, target_y, target_x, m_patch_size); -} - -int DebugPatchSSDDistanceMetric::operator ()(const MaskedImage &source, int source_y, int source_x, const MaskedImage &target, int 
target_y, int target_x) const { - fprintf(stderr, "DebugPatchSSDDistanceMetric: %d %d %d %d\n", source.size().width, source.size().height, m_width, m_height); - return distance_masked_images(source, source_y, source_x, target, target_y, target_x, m_patch_size); -} - -int RegularityGuidedPatchDistanceMetricV1::operator ()(const MaskedImage &source, int source_y, int source_x, const MaskedImage &target, int target_y, int target_x) const { - double dx = remainder(double(source_x - target_x) / source.size().width, m_dx1); - double dy = remainder(double(source_y - target_y) / source.size().height, m_dy2); - - double score1 = sqrt(dx * dx + dy *dy) / m_scale; - if (score1 < 0 || score1 > 1) score1 = 1; - score1 *= PatchDistanceMetric::kDistanceScale; - - double score2 = distance_masked_images(source, source_y, source_x, target, target_y, target_x, m_patch_size); - double score = score1 * m_weight + score2 / (1 + m_weight); - return static_cast(score / (1 + m_weight)); -} - -int RegularityGuidedPatchDistanceMetricV2::operator ()(const MaskedImage &source, int source_y, int source_x, const MaskedImage &target, int target_y, int target_x) const { - if (target_y < 0 || target_y >= target.size().height || target_x < 0 || target_x >= target.size().width) - return PatchDistanceMetric::kDistanceScale; - - int source_scale = m_ijmap.size().height / source.size().height; - int target_scale = m_ijmap.size().height / target.size().height; - - // fprintf(stderr, "RegularityGuidedPatchDistanceMetricV2 %d %d %d %d\n", source_y * source_scale, m_ijmap.size().height, source_x * source_scale, m_ijmap.size().width); - - double score1 = PatchDistanceMetric::kDistanceScale; - if (!source.is_globally_masked(source_y, source_x) && !target.is_globally_masked(target_y, target_x)) { - auto source_ij = m_ijmap.ptr(source_y * source_scale, source_x * source_scale); - auto target_ij = m_ijmap.ptr(target_y * target_scale, target_x * target_scale); - - float di = fabs(source_ij[0] - target_ij[0]); if (di > 0.5) di = 1 - di; - float dj = fabs(source_ij[1] - target_ij[1]); if (dj > 0.5) dj = 1 - dj; - score1 = sqrt(di * di + dj *dj) / 0.707; - if (score1 < 0 || score1 > 1) score1 = 1; - score1 *= PatchDistanceMetric::kDistanceScale; - } - - double score2 = distance_masked_images(source, source_y, source_x, target, target_y, target_x, m_patch_size); - double score = score1 * m_weight + score2; - return int(score / (1 + m_weight)); -} - diff --git a/spaces/lordvader31/almithal/classifier.py b/spaces/lordvader31/almithal/classifier.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/lordvader31/text-matching/README.md b/spaces/lordvader31/text-matching/README.md deleted file mode 100644 index bc3bc5cba09782cac60c8afcbf4f37dd48256515..0000000000000000000000000000000000000000 --- a/spaces/lordvader31/text-matching/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Text Matching -emoji: 📉 -colorFrom: pink -colorTo: green -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lsmyrtaj/cse6242-dataminers/betas.py b/spaces/lsmyrtaj/cse6242-dataminers/betas.py deleted file mode 100644 index 89499d4d777eef77388d00f2d229b49cb8d12071..0000000000000000000000000000000000000000 --- a/spaces/lsmyrtaj/cse6242-dataminers/betas.py +++ /dev/null @@ -1,74 +0,0 @@ -import pandas as pd -import numpy as np 
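-# Note: the beta of a stock against a benchmark is Cov(stock log returns,
-# benchmark log returns) / Var(benchmark log returns); the commented-out block
-# below computes this against the SP500, IXIC and AOK series.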
-import datetime as dt -import pandas_datareader as pdr -from datetime import datetime - - - -def convert_simFin2(path): - df = pd.read_csv(path, sep=';') - stocks = df.pivot(index="Date", columns="Ticker", values="Adj. Close") - return stocks - -def log_of_returns2(stocks): - log_returns = np.log(stocks/stocks.shift()) - return log_returns - - - - - -# Code to Calculate and output Betas -# Read in Stock csv data and convert to have each Ticker as a column. -#df = pd.read_csv('D:/SimFinData/us-shareprices-daily.csv', sep=';') -#stocks = df.pivot(index="Date", columns="Ticker", values="Adj. Close") -#stocks -#start = min(df['Date']) -#end = max(df['Date']) -#logRet = np.log(stocks/stocks.shift()) - - -#SP500 = pdr.get_data_yahoo("^GSPC", start) -#IXIC = pdr.get_data_yahoo("^IXIC", start) -#AOK = pdr.get_data_yahoo("AOK", start) - -#SP500['SP500'] = SP500['Adj Close'] -#IXIC['IXIC'] = IXIC['Adj Close'] -#AOK['AOK'] = AOK['Adj Close'] - -#spAC = np.log(SP500['SP500']/SP500['SP500'].shift()) -#spAC = spAC.loc[spAC.index <= end] - -#ixicAC = np.log(IXIC['IXIC']/IXIC['IXIC'].shift()) -#ixicAC = ixicAC.loc[ixicAC.index <= end] - -#aokAC = np.log(AOK['AOK']/AOK['AOK'].shift()) -#aokAC = aokAC.loc[aokAC.index <= end] - -#sp500B = logRet.join(spAC) -#ixicB = logRet.join(ixicAC) -#aokB = logRet.join(aokAC) - -#sp5Cov = sp500B.cov() -#ixicCov = ixicB.cov() -#aokCov = aokB.cov() - -#sp500Var = sp500B['SP500'].var() -#ixicVar = ixicB['IXIC'].var() -#aokVar = aokB['AOK'].var() - -#sp500Beta = sp5Cov.loc['SP500']/sp500Var -#ixicBeta = ixicCov.loc['IXIC']/ixicVar -#aokBeta = aokCov.loc['AOK']/aokVar - -#betas = pd.concat([sp500Beta,ixicBeta,aokBeta], axis=1) - -#betas['Ticker'] = betas.index - -#betas = betas[['Ticker','SP500','IXIC','AOK']] - -#betas.to_csv (r'betas.csv', index = None, header=True) - - - diff --git a/spaces/luxuedong/lxd/src/lib/bots/bing/index.ts b/spaces/luxuedong/lxd/src/lib/bots/bing/index.ts deleted file mode 100644 index c75c69f94af8c3db92d4c90d465c219a2af72a4d..0000000000000000000000000000000000000000 --- a/spaces/luxuedong/lxd/src/lib/bots/bing/index.ts +++ /dev/null @@ -1,432 +0,0 @@ -import { fetch, WebSocket, debug } from '@/lib/isomorphic' -import WebSocketAsPromised from 'websocket-as-promised' -import { - SendMessageParams, - BingConversationStyle, - ConversationResponse, - ChatResponseMessage, - ConversationInfo, - InvocationEventType, - ChatError, - ErrorCode, - ChatUpdateCompleteResponse, - ImageInfo, - KBlobResponse -} from './types' - -import { convertMessageToMarkdown, websocketUtils, streamAsyncIterable } from './utils' -import { WatchDog, createChunkDecoder } from '@/lib/utils' - -type Params = SendMessageParams<{ bingConversationStyle: BingConversationStyle }> - -const OPTIONS_SETS = [ - 'nlu_direct_response_filter', - 'deepleo', - 'disable_emoji_spoken_text', - 'responsible_ai_policy_235', - 'enablemm', - 'iycapbing', - 'iyxapbing', - 'objopinion', - 'rweasgv2', - 'dagslnv1', - 'dv3sugg', - 'autosave', - 'iyoloxap', - 'iyoloneutral', - 'clgalileo', - 'gencontentv3', -] - -export class BingWebBot { - protected conversationContext?: ConversationInfo - protected cookie: string - protected ua: string - protected endpoint = '' - private lastText = '' - private asyncTasks: Array> = [] - - constructor(opts: { - cookie: string - ua: string - bingConversationStyle?: BingConversationStyle - conversationContext?: ConversationInfo - }) { - const { cookie, ua, conversationContext } = opts - this.cookie = cookie?.includes(';') ? 
cookie : `_EDGE_V=1; _U=${cookie}` - this.ua = ua - this.conversationContext = conversationContext - } - - static buildChatRequest(conversation: ConversationInfo) { - const optionsSets = OPTIONS_SETS - if (conversation.conversationStyle === BingConversationStyle.Precise) { - optionsSets.push('h3precise') - } else if (conversation.conversationStyle === BingConversationStyle.Creative) { - optionsSets.push('h3imaginative') - } - return { - arguments: [ - { - source: 'cib', - optionsSets, - allowedMessageTypes: [ - 'ActionRequest', - 'Chat', - 'Context', - 'InternalSearchQuery', - 'InternalSearchResult', - 'Disengaged', - 'InternalLoaderMessage', - 'Progress', - 'RenderCardRequest', - 'SemanticSerp', - 'GenerateContentQuery', - 'SearchQuery', - ], - sliceIds: [ - 'winmuid1tf', - 'anssupfor_c', - 'imgchatgptv2', - 'tts2cf', - 'contansperf', - 'mlchatpc8500w', - 'mlchatpc2', - 'ctrlworkpay', - 'winshortmsgtf', - 'cibctrl', - 'sydtransctrl', - 'sydconfigoptc', - '0705trt4', - '517opinion', - '628ajcopus0', - '330uaugs0', - '529rwea', - '0626snptrcs0', - '424dagslnv1', - ], - isStartOfSession: conversation.invocationId === 0, - message: { - author: 'user', - inputMethod: 'Keyboard', - text: conversation.prompt, - imageUrl: conversation.imageUrl, - messageType: 'Chat', - }, - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - participant: { id: conversation.clientId }, - }, - ], - invocationId: conversation.invocationId.toString(), - target: 'chat', - type: InvocationEventType.StreamInvocation, - } - } - - async createConversation(): Promise { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - - let resp: ConversationResponse | undefined - try { - const response = await fetch(this.endpoint + '/api/create', { method: 'POST', headers, redirect: 'error', mode: 'cors', credentials: 'include' }) - if (response.status === 404) { - throw new ChatError('Not Found', ErrorCode.NOTFOUND_ERROR) - } - resp = await response.json() as ConversationResponse - } catch (err) { - console.error('create conversation error', err) - } - - if (!resp?.result) { - throw new ChatError('你的 VPS 或代理可能被封禁,如有疑问,请前往 https://github.com/weaigc/bingo 咨询', ErrorCode.BING_IP_FORBIDDEN) - } - - const { value, message } = resp.result || {} - if (value !== 'Success') { - const errorMsg = `${value}: ${message}` - if (value === 'UnauthorizedRequest') { - if (/fetch failed/i.test(message || '')) { - throw new ChatError(errorMsg, ErrorCode.BING_IP_FORBIDDEN) - } - throw new ChatError(errorMsg, ErrorCode.BING_UNAUTHORIZED) - } - if (value === 'TryLater') { - throw new ChatError(errorMsg, ErrorCode.BING_TRY_LATER) - } - if (value === 'Forbidden') { - throw new ChatError(errorMsg, ErrorCode.BING_FORBIDDEN) - } - throw new ChatError(errorMsg, ErrorCode.UNKOWN_ERROR) - } - return resp - } - - private async createContext(conversationStyle: BingConversationStyle) { - if (!this.conversationContext) { - const conversation = await this.createConversation() - this.conversationContext = { - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - clientId: conversation.clientId, - invocationId: 0, - conversationStyle, - prompt: '', - } - } - return this.conversationContext - } - - async sendMessage(params: Params) { - try { - await 
this.createContext(params.options.bingConversationStyle) - Object.assign(this.conversationContext!, { prompt: params.prompt, imageUrl: params.imageUrl }) - return this.sydneyProxy(params) - } catch (error) { - params.onEvent({ - type: 'ERROR', - error: error instanceof ChatError ? error : new ChatError('Catch Error', ErrorCode.UNKOWN_ERROR), - }) - } - } - - private async sydneyProxy(params: Params) { - const abortController = new AbortController() - const response = await fetch(this.endpoint + '/api/sydney', { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - }, - signal: abortController.signal, - body: JSON.stringify(this.conversationContext!) - }) - if (response.status !== 200) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Unknown error', - ErrorCode.UNKOWN_ERROR, - ), - }) - } - params.signal?.addEventListener('abort', () => { - abortController.abort() - }) - - const textDecoder = createChunkDecoder() - for await (const chunk of streamAsyncIterable(response.body!)) { - this.parseEvents(params, websocketUtils.unpackMessage(textDecoder(chunk))) - } - } - - async sendWs() { - const wsConfig: ConstructorParameters[1] = { - packMessage: websocketUtils.packMessage, - unpackMessage: websocketUtils.unpackMessage, - createWebSocket: (url) => new WebSocket(url, { - headers: { - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'User-Agent': this.ua, - pragma: 'no-cache', - cookie: this.cookie, - } - }) - } - const wsp = new WebSocketAsPromised('wss://sydney.bing.com/sydney/ChatHub', wsConfig) - - wsp.open().then(() => { - wsp.sendPacked({ protocol: 'json', version: 1 }) - wsp.sendPacked({ type: 6 }) - wsp.sendPacked(BingWebBot.buildChatRequest(this.conversationContext!)) - }) - - return wsp - } - - private async useWs(params: Params) { - const wsp = await this.sendWs() - const watchDog = new WatchDog() - wsp.onUnpackedMessage.addListener((events) => { - watchDog.watch(() => { - wsp.sendPacked({ type: 6 }) - }) - this.parseEvents(params, events) - }) - - wsp.onClose.addListener(() => { - watchDog.reset() - params.onEvent({ type: 'DONE' }) - wsp.removeAllListeners() - }) - - params.signal?.addEventListener('abort', () => { - wsp.removeAllListeners() - wsp.close() - }) - } - - private async createImage(prompt: string, id: string) { - try { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - const query = new URLSearchParams({ - prompt, - id - }) - const response = await fetch(this.endpoint + '/api/image?' 
+ query.toString(), - { - method: 'POST', - headers, - mode: 'cors', - credentials: 'include' - }) - .then(res => res.text()) - if (response) { - this.lastText += '\n' + response - } - } catch (err) { - console.error('Create Image Error', err) - } - } - - private buildKnowledgeApiPayload(imageUrl: string, conversationStyle: BingConversationStyle) { - const imageInfo: ImageInfo = {} - let imageBase64: string | undefined = undefined - const knowledgeRequest = { - imageInfo, - knowledgeRequest: { - invokedSkills: [ - 'ImageById' - ], - subscriptionId: 'Bing.Chat.Multimodal', - invokedSkillsRequestData: { - enableFaceBlur: true - }, - convoData: { - convoid: this.conversationContext?.conversationId, - convotone: conversationStyle, - } - }, - } - - if (imageUrl.startsWith('data:image/')) { - imageBase64 = imageUrl.replace('data:image/', ''); - const partIndex = imageBase64.indexOf(',') - if (partIndex) { - imageBase64 = imageBase64.substring(partIndex + 1) - } - } else { - imageInfo.url = imageUrl - } - return { knowledgeRequest, imageBase64 } - } - - async uploadImage(imageUrl: string, conversationStyle: BingConversationStyle = BingConversationStyle.Creative): Promise { - if (!imageUrl) { - return - } - await this.createContext(conversationStyle) - const payload = this.buildKnowledgeApiPayload(imageUrl, conversationStyle) - - const response = await fetch(this.endpoint + '/api/kblob', - { - headers: { - 'Content-Type': 'application/json', - }, - method: 'POST', - mode: 'cors', - credentials: 'include', - body: JSON.stringify(payload), - }) - .then(res => res.json()) - .catch(e => { - console.log('Error', e) - }) - return response - } - - private async generateContent(message: ChatResponseMessage) { - if (message.contentType === 'IMAGE') { - this.asyncTasks.push(this.createImage(message.text, message.messageId)) - } - } - - private async parseEvents(params: Params, events: any) { - const conversation = this.conversationContext! - - events?.forEach(async (event: ChatUpdateCompleteResponse) => { - debug('bing event', event) - if (event.type === 3) { - await Promise.all(this.asyncTasks) - this.asyncTasks = [] - params.onEvent({ type: 'UPDATE_ANSWER', data: { text: this.lastText } }) - params.onEvent({ type: 'DONE' }) - conversation.invocationId = parseInt(event.invocationId, 10) + 1 - } else if (event.type === 1) { - const messages = event.arguments[0].messages - if (messages) { - const text = convertMessageToMarkdown(messages[0]) - this.lastText = text - params.onEvent({ type: 'UPDATE_ANSWER', data: { text, spokenText: messages[0].text, throttling: event.arguments[0].throttling } }) - } - } else if (event.type === 2) { - const messages = event.item.messages as ChatResponseMessage[] | undefined - if (!messages) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - event.item.result.error || 'Unknown error', - event.item.result.value === 'Throttled' ? ErrorCode.THROTTLE_LIMIT - : event.item.result.value === 'CaptchaChallenge' ? (this.conversationContext?.conversationId?.includes('BingProdUnAuthenticatedUsers') ? 
ErrorCode.BING_UNAUTHORIZED : ErrorCode.BING_CAPTCHA) - : ErrorCode.UNKOWN_ERROR - ), - }) - return - } - const limited = messages.some((message) => - message.contentOrigin === 'TurnLimiter' - || message.messageType === 'Disengaged' - ) - if (limited) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Sorry, you have reached chat limit in this conversation.', - ErrorCode.CONVERSATION_LIMIT, - ), - }) - return - } - - const lastMessage = event.item.messages.at(-1) as ChatResponseMessage - const specialMessage = event.item.messages.find(message => message.author === 'bot' && message.contentType === 'IMAGE') - if (specialMessage) { - this.generateContent(specialMessage) - } - - if (lastMessage) { - const text = convertMessageToMarkdown(lastMessage) - this.lastText = text - params.onEvent({ - type: 'UPDATE_ANSWER', - data: { text, throttling: event.item.throttling, suggestedResponses: lastMessage.suggestedResponses, sourceAttributions: lastMessage.sourceAttributions }, - }) - } - } - }) - } - - resetConversation() { - this.conversationContext = undefined - } -} diff --git a/spaces/ma-xu/LIVE/thrust/thrust/detail/event_error.h b/spaces/ma-xu/LIVE/thrust/thrust/detail/event_error.h deleted file mode 100644 index 114d4763f116ef20966572a86ca52076b837f1cc..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/detail/event_error.h +++ /dev/null @@ -1,166 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/// \file thrust/detail/event_error.h -/// \brief \c thrust::future and thrust::future error handling types and codes. - -#pragma once - -#include -#include -#include - -#if THRUST_CPP_DIALECT >= 2011 && !defined(THRUST_LEGACY_GCC) - -#include -#include - -#include - -namespace thrust -{ - -enum class event_errc -{ - unknown_event_error -, no_state -, no_content -, last_event_error -}; - -/// \return error_code(static_cast(e), event_category()) -inline error_code make_error_code(event_errc e); - -/// \return error_condition(static_cast(e), event_category()). -inline error_condition make_error_condition(event_errc e); - -struct event_error_category : error_category -{ - event_error_category() = default; - - virtual char const* name() const - { - return "event"; - } - - virtual std::string message(int ev) const - { - switch (static_cast(ev)) - { - case event_errc::no_state: - { - return "no_state: an operation that requires an event or future to have " - "a stream or content has been performed on a event or future " - "without either, e.g. a moved-from or default constructed event " - "or future (an event or future may have been consumed more than " - "once)"; - } - case event_errc::no_content: - { - return "no_content: an operation that requires a future to have content " - "has been performed on future without any, e.g. 
a moved-from, " - "default constructed, or `thrust::new_stream` constructed future " - "(a future may have been consumed more than once)"; - } - default: - { - return "unknown_event_error: an unknown error with a future " - "object has occurred"; - } - }; - } - - virtual error_condition default_error_condition(int ev) const - { - if ( - event_errc::last_event_error - > - static_cast(ev) - ) - return make_error_condition(static_cast(ev)); - - return system_category().default_error_condition(ev); - } -}; - -/// Obtains a reference to the static error category object for the errors -/// related to futures and promises. The object is required to override the -/// virtual function error_category::name() to return a pointer to the string -/// "event". It is used to identify error codes provided in the -/// exceptions of type event_error. -inline error_category const& event_category() -{ - static const event_error_category result; - return result; -} - -namespace system -{ -/// Specialization of \p is_error_code_enum for \p event_errc. -template<> struct is_error_code_enum : true_type {}; -} // end system - -/// \return error_code(static_cast(e), event_category()) -inline error_code make_error_code(event_errc e) -{ - return error_code(static_cast(e), event_category()); -} - -/// \return error_condition(static_cast(e), event_category()). -inline error_condition make_error_condition(event_errc e) -{ - return error_condition(static_cast(e), event_category()); -} - -struct event_error : std::logic_error -{ - __host__ - explicit event_error(error_code ec) - : std::logic_error(ec.message()), ec_(ec) - {} - - __host__ - explicit event_error(event_errc e) - : event_error(make_error_code(e)) - {} - - __host__ - error_code const& code() const noexcept - { - return ec_; - } - - __host__ - virtual ~event_error() noexcept {} - -private: - error_code ec_; -}; - -inline bool operator==(event_error const& lhs, event_error const& rhs) noexcept -{ - return lhs.code() == rhs.code(); -} - -inline bool operator<(event_error const& lhs, event_error const& rhs) noexcept -{ - return lhs.code() < rhs.code(); -} - -} // end namespace thrust - -#endif - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/mr/disjoint_tls_pool.h b/spaces/ma-xu/LIVE/thrust/thrust/mr/disjoint_tls_pool.h deleted file mode 100644 index e50eba76255421812bb1b0c4a355e879eef37492..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/mr/disjoint_tls_pool.h +++ /dev/null @@ -1,69 +0,0 @@ -/* - * Copyright 2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file disjoint_tls_pool.h - * \brief A function wrapping a thread local instance of a \p disjoint_unsynchronized_pool_resource. - */ - -#pragma once - -#include - -#if THRUST_CPP_DIALECT >= 2011 - -#include - -namespace thrust -{ -namespace mr -{ - -/*! \addtogroup memory_management Memory Management - * \addtogroup memory_resources Memory Resources - * \ingroup memory_resources - * \{ - */ - -/*! 
Potentially constructs, if not yet created, and then returns the address of a thread-local - * \p disjoint_unsynchronized_pool_resource, - * - * \tparam Upstream the first template argument to the pool template - * \tparam Bookkeeper the second template argument to the pool template - * \param upstream the first argument to the constructor, if invoked - * \param bookkeeper the second argument to the constructor, if invoked - */ -template -__host__ -thrust::mr::disjoint_unsynchronized_pool_resource & tls_disjoint_pool( - Upstream * upstream = NULL, - Bookkeeper * bookkeeper = NULL) -{ - static thread_local auto adaptor = [&]{ - assert(upstream && bookkeeper); - return thrust::mr::disjoint_unsynchronized_pool_resource(upstream, bookkeeper); - }(); - - return adaptor; -} - -/*! \} - */ - -} // end mr -} // end thrust - -#endif // THRUST_CPP_DIALECT >= 2011 - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/execution_policy.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/execution_policy.h deleted file mode 100644 index 3bf521be348f834fe71f0a754425a9c2438a1526..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/execution_policy.h +++ /dev/null @@ -1,157 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -/*! \file thrust/system/cpp/execution_policy.h - * \brief Execution policies for Thrust's standard C++ system. - */ - -#include - -// get the execution policies definitions first -#include - -// get the definition of par -#include - -// now get all the algorithm definitions - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - - -// define these entities here for the purpose of Doxygenating them -// they are actually defined elsewhere -#if 0 -namespace thrust -{ -namespace system -{ -namespace cpp -{ - - -/*! \addtogroup execution_policies - * \{ - */ - - -/*! \p thrust::system::cpp::execution_policy is the base class for all Thrust parallel execution - * policies which are derived from Thrust's standard C++ backend system. - */ -template -struct execution_policy : thrust::execution_policy -{}; - - -/*! \p thrust::system::cpp::tag is a type representing Thrust's standard C++ backend system in C++'s type system. - * Iterators "tagged" with a type which is convertible to \p cpp::tag assert that they may be - * "dispatched" to algorithm implementations in the \p cpp system. - */ -struct tag : thrust::system::cpp::execution_policy { unspecified }; - - -/*! - * \p thrust::system::cpp::par is the parallel execution policy associated with Thrust's standard - * C++ backend system. 
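- * (The standard C++ system executes algorithms serially on the host CPU.)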
- * - * Instead of relying on implicit algorithm dispatch through iterator system tags, users may - * directly target Thrust's C++ backend system by providing \p thrust::cpp::par as an algorithm - * parameter. - * - * Explicit dispatch can be useful in avoiding the introduction of data copies into containers such - * as \p thrust::cpp::vector. - * - * The type of \p thrust::cpp::par is implementation-defined. - * - * The following code snippet demonstrates how to use \p thrust::cpp::par to explicitly dispatch an - * invocation of \p thrust::for_each to the standard C++ backend system: - * - * \code - * #include - * #include - * #include - * - * struct printf_functor - * { - * __host__ __device__ - * void operator()(int x) - * { - * printf("%d\n", x); - * } - * }; - * ... - * int vec[3]; - * vec[0] = 0; vec[1] = 1; vec[2] = 2; - * - * thrust::for_each(thrust::cpp::par, vec.begin(), vec.end(), printf_functor()); - * - * // 0 1 2 is printed to standard output in some unspecified order - * \endcode - */ -static const unspecified par; - - -/*! \} - */ - - -} // end cpp -} // end system -} // end thrust -#endif - - diff --git a/spaces/macaodha/batdetect2/bat_detect/__init__.py b/spaces/macaodha/batdetect2/bat_detect/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/manivannan7gp/Words2Image/style.css b/spaces/manivannan7gp/Words2Image/style.css deleted file mode 100644 index 07f8d9fc7f44dc2b3e44d622ef522a614ac7ce03..0000000000000000000000000000000000000000 --- a/spaces/manivannan7gp/Words2Image/style.css +++ /dev/null @@ -1,3 +0,0 @@ -.gradio-container { - background-image: linear-gradient(#660099, #000000) !important; - } \ No newline at end of file diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/modules/diffusion_schedule.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/modules/diffusion_schedule.py deleted file mode 100644 index 74ca6e3f2e7c4ff904d96dade315b0b46856778d..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/modules/diffusion_schedule.py +++ /dev/null @@ -1,272 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Functions for Noise Schedule, defines diffusion process, reverse process and data processor. -""" - -from collections import namedtuple -import random -import typing as tp -import julius -import torch - -TrainingItem = namedtuple("TrainingItem", "noisy noise step") - - -def betas_from_alpha_bar(alpha_bar): - alphas = torch.cat([torch.Tensor([alpha_bar[0]]), alpha_bar[1:]/alpha_bar[:-1]]) - return 1 - alphas - - -class SampleProcessor(torch.nn.Module): - def project_sample(self, x: torch.Tensor): - """Project the original sample to the 'space' where the diffusion will happen.""" - return x - - def return_sample(self, z: torch.Tensor): - """Project back from diffusion space to the actual sample space.""" - return z - - -class MultiBandProcessor(SampleProcessor): - """ - MultiBand sample processor. The input audio is splitted across - frequency bands evenly distributed in mel-scale. - - Each band will be rescaled to match the power distribution - of Gaussian noise in that band, using online metrics - computed on the first few samples. - - Args: - n_bands (int): Number of mel-bands to split the signal over. - sample_rate (int): Sample rate of the audio. 
- num_samples (int): Number of samples to use to fit the rescaling - for each band. The processor won't be stable - until it has seen that many samples. - power_std (float or list/tensor): The rescaling factor computed to match the - power of Gaussian noise in each band is taken to - that power, i.e. `1.` means full correction of the energy - in each band, and values less than `1` means only partial - correction. Can be used to balance the relative importance - of low vs. high freq in typical audio signals. - """ - def __init__(self, n_bands: int = 8, sample_rate: float = 24_000, - num_samples: int = 10_000, power_std: tp.Union[float, tp.List[float], torch.Tensor] = 1.): - super().__init__() - self.n_bands = n_bands - self.split_bands = julius.SplitBands(sample_rate, n_bands=n_bands) - self.num_samples = num_samples - self.power_std = power_std - if isinstance(power_std, list): - assert len(power_std) == n_bands - power_std = torch.tensor(power_std) - self.register_buffer('counts', torch.zeros(1)) - self.register_buffer('sum_x', torch.zeros(n_bands)) - self.register_buffer('sum_x2', torch.zeros(n_bands)) - self.register_buffer('sum_target_x2', torch.zeros(n_bands)) - self.counts: torch.Tensor - self.sum_x: torch.Tensor - self.sum_x2: torch.Tensor - self.sum_target_x2: torch.Tensor - - @property - def mean(self): - mean = self.sum_x / self.counts - return mean - - @property - def std(self): - std = (self.sum_x2 / self.counts - self.mean**2).clamp(min=0).sqrt() - return std - - @property - def target_std(self): - target_std = self.sum_target_x2 / self.counts - return target_std - - def project_sample(self, x: torch.Tensor): - assert x.dim() == 3 - bands = self.split_bands(x) - if self.counts.item() < self.num_samples: - ref_bands = self.split_bands(torch.randn_like(x)) - self.counts += len(x) - self.sum_x += bands.mean(dim=(2, 3)).sum(dim=1) - self.sum_x2 += bands.pow(2).mean(dim=(2, 3)).sum(dim=1) - self.sum_target_x2 += ref_bands.pow(2).mean(dim=(2, 3)).sum(dim=1) - rescale = (self.target_std / self.std.clamp(min=1e-12)) ** self.power_std # same output size - bands = (bands - self.mean.view(-1, 1, 1, 1)) * rescale.view(-1, 1, 1, 1) - return bands.sum(dim=0) - - def return_sample(self, x: torch.Tensor): - assert x.dim() == 3 - bands = self.split_bands(x) - rescale = (self.std / self.target_std) ** self.power_std - bands = bands * rescale.view(-1, 1, 1, 1) + self.mean.view(-1, 1, 1, 1) - return bands.sum(dim=0) - - -class NoiseSchedule: - """Noise schedule for diffusion. - - Args: - beta_t0 (float): Variance of the first diffusion step. - beta_t1 (float): Variance of the last diffusion step. - beta_exp (float): Power schedule exponent - num_steps (int): Number of diffusion step. - variance (str): choice of the sigma value for the denoising eq. 
Choices: "beta" or "beta_tilde" - clip (float): clipping value for the denoising steps - rescale (float): rescaling value to avoid vanishing signals unused by default (i.e 1) - repartition (str): shape of the schedule only power schedule is supported - sample_processor (SampleProcessor): Module that normalize data to match better the gaussian distribution - noise_scale (float): Scaling factor for the noise - """ - def __init__(self, beta_t0: float = 1e-4, beta_t1: float = 0.02, num_steps: int = 1000, variance: str = 'beta', - clip: float = 5., rescale: float = 1., device='cuda', beta_exp: float = 1, - repartition: str = "power", alpha_sigmoid: dict = {}, n_bands: tp.Optional[int] = None, - sample_processor: SampleProcessor = SampleProcessor(), noise_scale: float = 1.0, **kwargs): - - self.beta_t0 = beta_t0 - self.beta_t1 = beta_t1 - self.variance = variance - self.num_steps = num_steps - self.clip = clip - self.sample_processor = sample_processor - self.rescale = rescale - self.n_bands = n_bands - self.noise_scale = noise_scale - assert n_bands is None - if repartition == "power": - self.betas = torch.linspace(beta_t0 ** (1 / beta_exp), beta_t1 ** (1 / beta_exp), num_steps, - device=device, dtype=torch.float) ** beta_exp - else: - raise RuntimeError('Not implemented') - self.rng = random.Random(1234) - - def get_beta(self, step: tp.Union[int, torch.Tensor]): - if self.n_bands is None: - return self.betas[step] - else: - return self.betas[:, step] # [n_bands, len(step)] - - def get_initial_noise(self, x: torch.Tensor): - if self.n_bands is None: - return torch.randn_like(x) - return torch.randn((x.size(0), self.n_bands, x.size(2))) - - def get_alpha_bar(self, step: tp.Optional[tp.Union[int, torch.Tensor]] = None) -> torch.Tensor: - """Return 'alpha_bar', either for a given step, or as a tensor with its value for each step.""" - if step is None: - return (1 - self.betas).cumprod(dim=-1) # works for simgle and multi bands - if type(step) is int: - return (1 - self.betas[:step + 1]).prod() - else: - return (1 - self.betas).cumprod(dim=0)[step].view(-1, 1, 1) - - def get_training_item(self, x: torch.Tensor, tensor_step: bool = False) -> TrainingItem: - """Create a noisy data item for diffusion model training: - - Args: - x (torch.Tensor): clean audio data torch.tensor(bs, 1, T) - tensor_step (bool): If tensor_step = false, only one step t is sample, - the whole batch is diffused to the same step and t is int. - If tensor_step = true, t is a tensor of size (x.size(0),) - every element of the batch is diffused to a independently sampled. - """ - step: tp.Union[int, torch.Tensor] - if tensor_step: - bs = x.size(0) - step = torch.randint(0, self.num_steps, size=(bs,), device=x.device) - else: - step = self.rng.randrange(self.num_steps) - alpha_bar = self.get_alpha_bar(step) # [batch_size, n_bands, 1] - - x = self.sample_processor.project_sample(x) - noise = torch.randn_like(x) - noisy = (alpha_bar.sqrt() / self.rescale) * x + (1 - alpha_bar).sqrt() * noise * self.noise_scale - return TrainingItem(noisy, noise, step) - - def generate(self, model: torch.nn.Module, initial: tp.Optional[torch.Tensor] = None, - condition: tp.Optional[torch.Tensor] = None, return_list: bool = False): - """Full ddpm reverse process. - - Args: - model (nn.Module): Diffusion model. - initial (tensor): Initial Noise. - condition (tensor): Input conditionning Tensor (e.g. encodec compressed representation). - return_list (bool): Whether to return the whole process or only the sampled point. 
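-
-        Each reverse step below applies the standard DDPM update
-            x_{t-1} = (x_t - (1 - alpha_t) / sqrt(1 - alpha_bar_t) * eps_theta(x_t, t)) / sqrt(alpha_t) + sigma_t * z
-        where z ~ N(0, I) and sigma_t is set by the `variance` option.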
- """ - alpha_bar = self.get_alpha_bar(step=self.num_steps - 1) - current = initial - iterates = [initial] - for step in range(self.num_steps)[::-1]: - with torch.no_grad(): - estimate = model(current, step, condition=condition).sample - alpha = 1 - self.betas[step] - previous = (current - (1 - alpha) / (1 - alpha_bar).sqrt() * estimate) / alpha.sqrt() - previous_alpha_bar = self.get_alpha_bar(step=step - 1) - if step == 0: - sigma2 = 0 - elif self.variance == 'beta': - sigma2 = 1 - alpha - elif self.variance == 'beta_tilde': - sigma2 = (1 - previous_alpha_bar) / (1 - alpha_bar) * (1 - alpha) - elif self.variance == 'none': - sigma2 = 0 - else: - raise ValueError(f'Invalid variance type {self.variance}') - - if sigma2 > 0: - previous += sigma2**0.5 * torch.randn_like(previous) * self.noise_scale - if self.clip: - previous = previous.clamp(-self.clip, self.clip) - current = previous - alpha_bar = previous_alpha_bar - if step == 0: - previous *= self.rescale - if return_list: - iterates.append(previous.cpu()) - - if return_list: - return iterates - else: - return self.sample_processor.return_sample(previous) - - def generate_subsampled(self, model: torch.nn.Module, initial: torch.Tensor, step_list: tp.Optional[list] = None, - condition: tp.Optional[torch.Tensor] = None, return_list: bool = False): - """Reverse process that only goes through Markov chain states in step_list.""" - if step_list is None: - step_list = list(range(1000))[::-50] + [0] - alpha_bar = self.get_alpha_bar(step=self.num_steps - 1) - alpha_bars_subsampled = (1 - self.betas).cumprod(dim=0)[list(reversed(step_list))].cpu() - betas_subsampled = betas_from_alpha_bar(alpha_bars_subsampled) - current = initial * self.noise_scale - iterates = [current] - for idx, step in enumerate(step_list[:-1]): - with torch.no_grad(): - estimate = model(current, step, condition=condition).sample * self.noise_scale - alpha = 1 - betas_subsampled[-1 - idx] - previous = (current - (1 - alpha) / (1 - alpha_bar).sqrt() * estimate) / alpha.sqrt() - previous_alpha_bar = self.get_alpha_bar(step_list[idx + 1]) - if step == step_list[-2]: - sigma2 = 0 - previous_alpha_bar = torch.tensor(1.0) - else: - sigma2 = (1 - previous_alpha_bar) / (1 - alpha_bar) * (1 - alpha) - if sigma2 > 0: - previous += sigma2**0.5 * torch.randn_like(previous) * self.noise_scale - if self.clip: - previous = previous.clamp(-self.clip, self.clip) - current = previous - alpha_bar = previous_alpha_bar - if step == 0: - previous *= self.rescale - if return_list: - iterates.append(previous.cpu()) - if return_list: - return iterates - else: - return self.sample_processor.return_sample(previous) diff --git a/spaces/matthoffner/chatbot/components/Chat/ChatLoader.tsx b/spaces/matthoffner/chatbot/components/Chat/ChatLoader.tsx deleted file mode 100644 index e666d5759f502ebb041c2ebc5548a045df4c796a..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/components/Chat/ChatLoader.tsx +++ /dev/null @@ -1,20 +0,0 @@ -import { IconRobot } from '@tabler/icons-react'; -import { FC } from 'react'; - -interface Props { } - -export const ChatLoader: FC = () => { - return ( -
-    <div
-      className="group border-b border-black/10 bg-gray-50 text-gray-800 dark:border-gray-900/50 dark:bg-[#444654] dark:text-gray-100"
-      style={{ overflowWrap: 'anywhere' }}
-    >
-      <div className="m-auto flex gap-4 p-4 text-base md:max-w-2xl md:gap-6 md:py-6 lg:max-w-2xl lg:px-0 xl:max-w-3xl">
-        <div className="min-w-[40px] items-end">
-          <IconRobot size={30} />
-        </div>
-        <span className="mt-1 animate-pulse cursor-default">▍</span>
-      </div>
-    </div>
-  );
-};
diff --git a/spaces/maxspad/nlp-qual-space/overview.py b/spaces/maxspad/nlp-qual-space/overview.py
deleted file mode 100644
index 57a4076e686acf8437c28298eaf945624fc7beae..0000000000000000000000000000000000000000
--- a/spaces/maxspad/nlp-qual-space/overview.py
+++ /dev/null
@@ -1,150 +0,0 @@
-from matplotlib.cm import get_cmap
-import plotly.graph_objects as go
-import hydralit_components as hc
-
-about_blurb = '''
-### About the QuAL Score
-
-The Quality of Assessment for Learning score (QuAL score)
-was created to evaluate short qualitative comments that are related to specific
-scores entered into a workplace-based assessment,
-common within the competency-based medical education (CBME) context.
-
-It is rated on a scale of 0-5, with 0 signifying very low quality and 5 very high quality.
-It consists of three subscores which are summed to calculate the overall QuAL score:
-
-1. Evidence - Does the rater provide sufficient evidence about resident performance? (0-no comment at all, 1-no, but comment present, 2-somewhat, 3-yes/full description)
-2. Suggestion - Does the rater provide a suggestion for improvement? (0-no/1-yes)
-3. Connection - Is the rater's suggestion linked to the behavior described? (0-no/1-yes)
-
-The QuAL score has validity evidence for accurately measuring the quality of evaluation comments in CBME.
-
-For more information, see the paper [here](https://doi.org/10.1080/10401334.2019.1708365).
-
-### About this Tool
-
-The QuAL score accurately rates the quality of narrative comments in CBME, but
-it still requires time-consuming manual rating. With large volumes of text generated in a
-typical CBME program, large-scale assessment of comment quality is impractical.
-This tool uses machine learning (ML) and natural language processing (NLP) to automate
-the rating of the QuAL score on narrative comments.
-
-We trained a machine learning model to predict each of the three subscores described above.
-The resulting models are accurate:
-1. Evidence - Balanced accuracy of 61.5% for a 0-3 result, within-one accuracy of 96.4%
-2. Suggestion - Accuracy of 85%, sensitivity for lack of suggestion 86.2%
-3. Connection - Accuracy of 82%, sensitivity for lack of connection 90%
-
-The models are highly accurate, but not perfect! You may experience times where
-the results are not consistent with your interpretation of the text. If you do, please
-leave us [feedback](https://forms.gle/PfXxcGmvLYvd9jWz5). This tool is intended as a demonstration only
-and should not be used for high-stakes assessment (yet!).
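-
-For example, a comment that describes the resident's performance in some detail (Evidence = 2)
-and offers a suggestion for improvement (Suggestion = 1) that is not linked back to the described
-behavior (Connection = 0) would receive an overall QuAL score of 2 + 1 + 0 = 3 out of 5.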
-''' -class NQDOverview(object): - def __init__(self, parent, results, - dial_cmap='RdYlGn'): - self.p = parent - self.results = results - self.cmap = get_cmap(dial_cmap) - - def _get_color(self): - lab = self.results['qual']['label'] - if lab == 0: - color = '#ffffff' - elif lab == 1: - color = '#dc3545' - elif lab == 2: - color = '#f60' - elif lab == 3: - color = '#ffc107' - elif lab == 4: - color = '#6ea728' - elif lab == 5: - color = '#28a745' - # color = self.cmap(self.results['qual']['label'] / 6.0) - # color = f'rgba({int(color[0]*256)}, {int(color[1]*256)}, {int(color[2]*256)}, {int(color[3]*256)})' - return color - - def _build_figure(self): - fig = go.Figure(go.Indicator( - mode = "number+gauge", value = self.results['qual']['label'], - domain = {'x': [0.1, 1], 'y': [0, 1]}, - title = {'text' :"QuAL:"}, - gauge = { - 'shape': "bullet", - 'axis': {'range': [-0.5, 5.5]}, - 'steps': [ - {'range': [-0.5, 0.5], 'color': "maroon"}, - {'range': [0.5, 1.5], 'color': 'indianred'}, - {'range': [1.5, 2.5], 'color': "orange"}, - {"range": [2.5, 3.5], 'color': 'gold'}, - {'range': [3.5,4.5], 'color': 'lightgreen'}, - {'range': [4.5,5.5], 'color': 'green'} - ], - 'bar': { - 'color': 'rgba(123, 123, 123, 0.85)', - 'thickness': 0.7 - }})) - fig.update_layout(margin=go.Margin(t=25, b=20), height=125) - return fig - - def draw(self): - st = self.p - - with st.expander('About the QuAL Score and this Tool', expanded=False): - st.markdown(about_blurb) - - fig = self._build_figure() - st.plotly_chart(fig, use_container_width=True) - - cols = st.columns(3) - with cols[0]: - q1lab = self.results['q1']['label'] - if q1lab == 0: - md_str = '😥 None' - elif q1lab == 1: - md_str = '😐 Low' - elif q1lab == 2: - md_str = '😊 Medium' - elif q1lab == 3: - md_str = '😁 High' - # prog_score, prog_theme = self.get_prog_setup('q1') - # hc.info_card(title='Level of Detail', content=md_str, sentiment='good', bar_value=prog_score) - st.metric('Level of Detail', md_str, - help='Q1 - Evidence - Does the rater provide sufficient evidence about resident performance? (0-no comment at all, 1-no, but comment present, 2-somewhat, 3-yes/full description)') - prog_score, prog_theme = self.get_prog_setup('q1') - # hc.progress_bar(prog_score, f'{prog_score:.2f}% confident', override_theme=prog_theme) - - with cols[1]: - q2lab = self.results['q2i']['label'] - if q2lab == 0: - md_str = '✅ Yes' - else: - md_str = '❌ No' - st.metric('Suggestion Given', (md_str), - help='Q2 - Suggestion - Does the rater provide a suggestion for improvement? (0-no/1-yes)') - prog_score, prog_theme = self.get_prog_setup('q2i') - # hc.progress_bar(prog_score, f'{prog_score:.2f}% confident', override_theme=prog_theme) - - with cols[2]: - q3lab = self.results['q3i']['label'] - if q3lab == 0: - md_str = '✅ Yes' - else: - md_str = '❌ No' - st.metric('Suggestion Linked', md_str, - help='Q3 - Connection - Is the rater’s suggestion linked to the behavior described? 
(0-no/1-yes)') - prog_score, prog_theme = self.get_prog_setup('q3i') - # hc.progress_bar(prog_score, f'{prog_score:.2f}% confident', override_theme=prog_theme) - - - def get_prog_setup(self, q): - prog_score = self.results[q]['scores'][self.results[q]['label']] * 100 - if prog_score > 75: - prog_sent = '#28a745' - elif (prog_score > 25) and (prog_score <= 75): - prog_sent = '#ffc107' - else: - prog_sent = '#dc3545' - prog_theme = {'content_color': 'white', 'progress_color': '#aaa'} - return prog_score, prog_theme \ No newline at end of file diff --git a/spaces/mdnestor/URL-to-Whisper/app.py b/spaces/mdnestor/URL-to-Whisper/app.py deleted file mode 100644 index efd4c9f141981a8198d50a8d99f8134c63df9c6f..0000000000000000000000000000000000000000 --- a/spaces/mdnestor/URL-to-Whisper/app.py +++ /dev/null @@ -1,26 +0,0 @@ -import gradio as gr -import os -import whisper - -model = whisper.load_model("base") - -def transcribe(url): - os.system(f"yt-dlp -x {url} -o audio.m4a") - result = model.transcribe("audio.m4a") - return result['text'] - -with gr.Blocks() as demo: - gr.HTML( - """ -

          - Transcribes web videos to text using OpenAI's Whisper (base model). - Works on many sites such as YouTube, Twitter, Reddit, Instagram, etc. -

          - """ - ) - url = gr.Textbox(placeholder="Enter video link here...", label="") - button = gr.Button("Transcribe!") - output = gr.Textbox(label="") - button.click(transcribe, inputs=[url], outputs=[output]) - -demo.launch() \ No newline at end of file diff --git a/spaces/merve/hidden-bias/server-side/fill-in-the-blank/scatter-plot-colab/spearman-distribution/test.html b/spaces/merve/hidden-bias/server-side/fill-in-the-blank/scatter-plot-colab/spearman-distribution/test.html deleted file mode 100644 index bd51a96a0e44f236d2fef909e99ce49251683407..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/server-side/fill-in-the-blank/scatter-plot-colab/spearman-distribution/test.html +++ /dev/null @@ -1,12 +0,0 @@ - - - - - - -
          - - - - - diff --git a/spaces/merve/uncertainty-calibration/public/base-rate/script.js b/spaces/merve/uncertainty-calibration/public/base-rate/script.js deleted file mode 100644 index efc40861466afc2bb19cee8d3ef6cd5a98d80ddc..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/public/base-rate/script.js +++ /dev/null @@ -1,317 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - - - - -console.clear() -var ttSel = d3.select('body').selectAppend('div.tooltip.tooltip-hidden') - -window.renderFns = [] - -window.m = (function(){ - var rv = {b: .7, tpr: .8, fnr: .5, update, str: 'kids', titleStr: 'Children',} - - function update(obj={}){ - Object.assign(rv, obj) - window.renderFns.forEach(d => d()) - } - - return rv -})() - -window.f = (function(){ - var rv = {b: .3, tpr: .8, fnr: .5, update, str: 'adults', titleStr: 'Adults'} - - function update(obj={}){ - window.renderFns.forEach(d => d()) - } - - return rv -})() - - -var wLarge = d3.clamp(0, innerWidth/2 - 30, 300) - -d3.select('#big-matrix').html('') - .appendMany('div.big-container', [{w: wLarge, s: f, isText: 1}, {w: wLarge, s: m, isText: 1}]) - .each(drawMatrix) - - -addPattern(10, `pattern-${wLarge}-`) -addPattern(5, 'pattern-50-') - -function addPattern(s, str){ - var cColors = [colors.sick, colors.sick, colors.well, colors.well, lcolors.sick, lcolors.sick, lcolors.well, lcolors.well] - var rColors = [lcolors.sick, lcolors.well, lcolors.sick, lcolors.well, llcolors.sick, llcolors.well, llcolors.sick, llcolors.well] - - d3.select('#big-matrix') - .append('svg') - .st({height: 0, position: 'absolute'}) - .append('defs').appendMany('pattern', d3.range(8)) - .at({ id: i => str + i, width: s, height: s}) - .attr('patternUnits', 'userSpaceOnUse') - .append('rect') - .at({width: s, height: s, fill: i => rColors[i]}) - .parent().append('circle') - .at({r: s == 10 ? 2.5 : 1.5, cx: s/2, cy: s/2, fill: i => cColors[i]}) -} - - -var scale = d3.clamp(0, ((innerWidth - 50) / 3)/280, 1) -var isScaled = scale != 1 - -d3.select('#metrics').html('').st({height: 350*scale + 30}) - .appendMany('div', [0, 1, 2]) - .st({width: 280*scale, display: 'inline-block'}) - .append('div') - .st({transform: `scale(${scale})`, transformOrigin: '0% 0%'}) - .append('div.metrics-container').st({width: 280}) - .each(drawMetric) - -d3.selectAll('rect.drag') - .on('mouseover.style', d => d3.selectAll('rect.' + d).st({strokeWidth: 3, stroke: '#000'})) - .on('mouseout.style', d => d3.selectAll('rect.' + d).st({strokeWidth: 0})) - -function drawMetric(i){ - var sel = d3.select(this) - - var text = [ - // 'Percentage of sick people
who test positive',
-    'Percentage of sick people <br> who test positive',
-    'Percentage of positive tests <br> who are actually sick',
-    'Percentage of well people <br> who test negative',
-  ][i]
-
-  // sensitivity, positive predictive value (via Bayes' rule), and specificity;
-  // note that `fnr` in this file stores the false positive rate of the well group
-  var percentFn = [
-    s => s.tpr,
-    s => s.b*s.tpr/(s.b*s.tpr + (1 - s.b)*(s.fnr)),
-    s => 1 - s.fnr,
-  ][i]
-
-  var colors = [
-    ['#f0f', '#fcf', '#fff', '#fff'],
-    ['#f0f', '#fff', '#fcf', '#fff'],
-    ['#fff', '#fff', '#fcf', '#f0f'],
-  ][i]
-
-  sel.append('h3').st({marginBottom: 20, fontSize: isScaled ? 30 : 20}).html(isScaled ? text.replace('<br>
          ', '') : text) - - var h = 200 - var width = 100 - - var fDiv = sel.append('div').st({position: 'relative', top: -h + 7}) - .datum({w: 50, s: f, isText: 0, colors}).each(drawMatrix) - - var svg = sel.append('svg') - .at({width, height: h}) - .st({fontSize: 14, fontFamily: 'monospace'}) - - svg.append('path').at({stroke: '#ccc', d: `M ${width/2 + .5} 0 V ${h}`}) - - var errorSel = svg.append('path') - .translate(width/2 + .5, 0) - .at({stroke: 'orange', strokeWidth: 3}) - - var fSel = svg.append('g') - var mSel = svg.append('g') - - mSel.append('circle').at({r: 4, cx: width/2 + .5, fill: 'none', stroke: '#000'}) - fSel.append('circle').at({r: 4, cx: width/2 + .5, fill: 'none', stroke: '#000'}) - - var fTextSel = fSel.append('text').text('23%') - .at({dy: '.33em', textAnchor: 'middle', x: width/4 - 3, fontSize: isScaled ? 20 : 16}) - var mTextSel = mSel.append('text').text('23%') - .at({dy: '.33em', textAnchor: 'middle', x: width/4*3 + 5, fontSize: isScaled ? 20 : 16}) - - fSel.append('text').text('Adults').st({fontSize: isScaled ? 18 : 12}) - .at({textAnchor: 'middle', x: -23, y: -30}) - mSel.append('text').text('Children').st({fontSize: isScaled ? 18 : 12}) - .at({textAnchor: 'middle', x: 124, y: -30}) - - var mDiv = sel.append('div').st({position: 'relative', top: -h + 7}) - .datum({w: 50, s: m, isText: 0, colors}).each(drawMatrix) - - - renderFns.push(() => { - var fPercent = percentFn(f) - fSel.translate(h - h*fPercent, 1) - fTextSel.text(d3.format('.0%')(fPercent)) - - var mPercent = percentFn(m) - mSel.translate(h - h*mPercent, 1) - mTextSel.text(d3.format('.0%')(mPercent)) - - fDiv.translate(h - h*fPercent, 1) - mDiv.translate(h - h*mPercent, 1) - - errorSel.at({d: 'M 0 ' + (h - h*fPercent) + ' V ' + (h - h*mPercent) }) - }) -} - -function drawMatrix({s, w, isText, colors}){ - var svg = d3.select(this).append('svg') - .at({width: w, height: w}) - - - svg.append('rect').at({width: w + 1, height: w + 1}) - - if (!colors) colors = ['#000', '#000', '#000', '#000'] - - var rects = [ - {n: 'tp', x: 0, y: 0, width: _ => s.b*w, height: _ => s.tpr*w}, - {n: 'fn', x: 0, y: _ => 1 + s.tpr*w, width: _ => s.b*w, height: _ => w - s.tpr*w}, - {n: 'fp', x: _ => 1 + s.b*w, y: 0, width: _ => w - s.b*w, height: _ => s.fnr*w}, - {n: 'tn', x: _ => 1 + s.b*w, y: _ => 1 + s.fnr*w, width: _ => w - s.b*w, height: _ => w - s.fnr*w}, - ] - rects.forEach((d, i) => d.i = i) - - var rectSel = svg.appendMany('rect', rects) - .at({fill: d => `url(#pattern-${w}-${d.i}`}) - // .at({opacity: d => colors[d.i] == '#fff' ? .5 : 1}) - // .at({fill: d => `url(#pattern-${w}-${d.i + (colors[d.i] == '#ccc' ? 4 : 0)})`}) - // .at({fill: d => colors[d.i] == '#ccc' ? '#000' : `url(#pattern-${w}-${d.i + (colors[d.i] == '#ccc' ? 
4 : 0)})`}) - .each(function(d){ d.sel = d3.select(this) }) - rectSel.filter(d => colors[d.i] == '#fff').at({fill: '#eee'}) - - var bh = .5 - svg.append('rect.tpr').at({height: bh}).translate(-bh/2, 1) - .datum('tpr') - - svg.append('rect.fnr').at({height: bh}).translate(-bh/2, 1) - .datum('fnr') - - svg.append('rect.b').at({width: bh, height: w}).translate(-bh/2, 0) - .datum('b') - - var bh = 20 - svg.append('rect.drag.tpr').at({height: bh}).translate(-bh/2, 1) - .call(makeDrag('tpr', 1)).datum('tpr').call(d3.attachTooltip).on('mouseover', ttFormat) - - svg.append('rect.drag.fnr').at({height: bh}).translate(-bh/2, 1) - .call(makeDrag('fnr', 1)).datum('fnr').call(d3.attachTooltip).on('mouseover', ttFormat) - - svg.append('rect.drag.b').at({width: bh, height: w}).translate(-bh/2, 0) - .call(makeDrag('b', 0)).datum('b').call(d3.attachTooltip).on('mouseover', ttFormat) - - - var tprRect = svg.selectAll('rect.tpr') - var fnrRect = svg.selectAll('rect.fnr') - var bRect = svg.selectAll('rect.b') - - function ttFormat(str){ - var html = '' - if (str == 'tpr') html = `${d3.format('.0%')(s.tpr)} of sick ${s.titleStr.toLowerCase()} test positive` - if (str == 'fnr') html = `${d3.format('.0%')(s.fnr)} of well ${s.titleStr.toLowerCase()} test negative` - if (str == 'b') html = `${d3.format('.0%')(s.b)} of ${s.titleStr.toLowerCase()} are sick` - ttSel.html(html) - } - - function makeDrag(str, index){ - - return d3.drag() - .on('drag', function(){ - var percent = d3.mouse(this)[index]/w - s[str] = d3.clamp(.15, percent, .85) - - window.basetimer.stop() - s.update() - - ttMove() - ttFormat(str) - }) - .on('start', _ => svg.classed('dragging', 1)) - .on('end', _ => svg.classed('dragging', 0)) - } - - renderFns.push(() => { - rectSel.each(d => d.sel.at(d)) - - tprRect.at({width: w*s.b, y: w*s.tpr}) - fnrRect.at({x: w*s.b, width: w - w*s.b, y: w*s.fnr}) - bRect.at({x: w*s.b}) - - // s => s.tpr, - // s => s.b*s.tpr/(s.b*s.tpr + (1 - s.b)*(s.fnr)), - // s => 1 - s.fnr, - if (!isText) return - }) - - - if (!isText) return - - svg.append('text').text(s.titleStr).at({textAnchor: 'middle', x: w/2, y: -8, fontSize: 20}) - - if (innerWidth < 800) return - // if (true) - - svg.appendMany('text', d3.range(4)).each(function(i){ - var isSick = i < 2 - var isPos = i % 2 - - var pad = 5 - d3.select(this) - .translate([isSick ? pad : w - pad, isPos ? 13 : w - 23]) - .at({ - textAnchor: isSick ? 'start' : 'end', - fill: '#000', - fontSize: 12, - fontFamily: 'monospace', - pointerEvents: 'none', - }) - .tspans([ - ' test : ' + (isPos ? 'sick' : 'well'), - 'truth: ' + (isSick ? 'sick' : 'well')]) - }) -} - - -if (window.basetimer) window.basetimer.stop() -window.basetimer = d3.timer(t => { - - var val = t/1000 % (Math.PI*4) - - if (val < Math.PI*2){ - m.b = (Math.sin(val + Math.PI/2))/4 + .4 - } else if (Math.PI*3 < val && val < Math.PI*5 || true){ - f.tpr = (Math.sin(val + Math.PI/2))/4 + .4 - } - m.update() -}) - - - - - -m.update() - - - -function ttMove(d){ - if (!ttSel.size()) return; - - var e = d3.event.sourceEvent, - x = e.clientX, - y = e.clientY, - bb = ttSel.node().getBoundingClientRect(), - left = d3.clamp(20, (x-bb.width/2), window.innerWidth - bb.width - 20), - top = innerHeight > y + 20 + bb.height ? 
y + 20 : y - bb.height - 20; - - ttSel - .style('left', left +'px') - .style('top', top + 'px'); -} - diff --git a/spaces/mfrashad/CharacterGAN/models/stylegan/stylegan_tf/metrics/frechet_inception_distance.py b/spaces/mfrashad/CharacterGAN/models/stylegan/stylegan_tf/metrics/frechet_inception_distance.py deleted file mode 100644 index 41f71fe4bfb85218cc283b3f7bc3a34fea5f790d..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/CharacterGAN/models/stylegan/stylegan_tf/metrics/frechet_inception_distance.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Frechet Inception Distance (FID).""" - -import os -import numpy as np -import scipy -import tensorflow as tf -import dnnlib.tflib as tflib - -from metrics import metric_base -from training import misc - -#---------------------------------------------------------------------------- - -class FID(metric_base.MetricBase): - def __init__(self, num_images, minibatch_per_gpu, **kwargs): - super().__init__(**kwargs) - self.num_images = num_images - self.minibatch_per_gpu = minibatch_per_gpu - - def _evaluate(self, Gs, num_gpus): - minibatch_size = num_gpus * self.minibatch_per_gpu - inception = misc.load_pkl('https://drive.google.com/uc?id=1MzTY44rLToO5APn8TZmfR7_ENSe5aZUn') # inception_v3_features.pkl - activations = np.empty([self.num_images, inception.output_shape[1]], dtype=np.float32) - - # Calculate statistics for reals. - cache_file = self._get_cache_file_for_reals(num_images=self.num_images) - os.makedirs(os.path.dirname(cache_file), exist_ok=True) - if os.path.isfile(cache_file): - mu_real, sigma_real = misc.load_pkl(cache_file) - else: - for idx, images in enumerate(self._iterate_reals(minibatch_size=minibatch_size)): - begin = idx * minibatch_size - end = min(begin + minibatch_size, self.num_images) - activations[begin:end] = inception.run(images[:end-begin], num_gpus=num_gpus, assume_frozen=True) - if end == self.num_images: - break - mu_real = np.mean(activations, axis=0) - sigma_real = np.cov(activations, rowvar=False) - misc.save_pkl((mu_real, sigma_real), cache_file) - - # Construct TensorFlow graph. - result_expr = [] - for gpu_idx in range(num_gpus): - with tf.device('/gpu:%d' % gpu_idx): - Gs_clone = Gs.clone() - inception_clone = inception.clone() - latents = tf.random_normal([self.minibatch_per_gpu] + Gs_clone.input_shape[1:]) - images = Gs_clone.get_output_for(latents, None, is_validation=True, randomize_noise=True) - images = tflib.convert_images_to_uint8(images) - result_expr.append(inception_clone.get_output_for(images)) - - # Calculate statistics for fakes. - for begin in range(0, self.num_images, minibatch_size): - end = min(begin + minibatch_size, self.num_images) - activations[begin:end] = np.concatenate(tflib.run(result_expr), axis=0)[:end-begin] - mu_fake = np.mean(activations, axis=0) - sigma_fake = np.cov(activations, rowvar=False) - - # Calculate FID. 
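-        # The Fréchet distance between the Gaussians N(mu_real, sigma_real) and
-        # N(mu_fake, sigma_fake) fitted to the activations is
-        #     d^2 = ||mu_real - mu_fake||^2 + Tr(sigma_real + sigma_fake - 2 * sqrtm(sigma_real @ sigma_fake))
-        # and is computed term by term below.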
- m = np.square(mu_fake - mu_real).sum() - s, _ = scipy.linalg.sqrtm(np.dot(sigma_fake, sigma_real), disp=False) # pylint: disable=no-member - dist = m + np.trace(sigma_fake + sigma_real - 2*s) - self._report_result(np.real(dist)) - -#---------------------------------------------------------------------------- diff --git a/spaces/mikebars/huggingface/assets/index-fcdbd030.js b/spaces/mikebars/huggingface/assets/index-fcdbd030.js deleted file mode 100644 index 898f5a012f8e2d920a01209575d5037d7c3b0ee7..0000000000000000000000000000000000000000 --- a/spaces/mikebars/huggingface/assets/index-fcdbd030.js +++ /dev/null @@ -1,41 +0,0 @@ -var $c=Object.defineProperty;var Uc=(e,t,n)=>t in e?$c(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n;var yn=(e,t,n)=>(Uc(e,typeof t!="symbol"?t+"":t,n),n);(function(){const t=document.createElement("link").relList;if(t&&t.supports&&t.supports("modulepreload"))return;for(const l of document.querySelectorAll('link[rel="modulepreload"]'))r(l);new MutationObserver(l=>{for(const o of l)if(o.type==="childList")for(const i of o.addedNodes)i.tagName==="LINK"&&i.rel==="modulepreload"&&r(i)}).observe(document,{childList:!0,subtree:!0});function n(l){const o={};return l.integrity&&(o.integrity=l.integrity),l.referrerPolicy&&(o.referrerPolicy=l.referrerPolicy),l.crossOrigin==="use-credentials"?o.credentials="include":l.crossOrigin==="anonymous"?o.credentials="omit":o.credentials="same-origin",o}function r(l){if(l.ep)return;l.ep=!0;const o=n(l);fetch(l.href,o)}})();var bu={exports:{}},ul={},es={exports:{}},I={};/** - * @license React - * react.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */var tr=Symbol.for("react.element"),Vc=Symbol.for("react.portal"),Bc=Symbol.for("react.fragment"),Qc=Symbol.for("react.strict_mode"),Hc=Symbol.for("react.profiler"),Wc=Symbol.for("react.provider"),Kc=Symbol.for("react.context"),Yc=Symbol.for("react.forward_ref"),Xc=Symbol.for("react.suspense"),Gc=Symbol.for("react.memo"),Zc=Symbol.for("react.lazy"),Qi=Symbol.iterator;function qc(e){return e===null||typeof e!="object"?null:(e=Qi&&e[Qi]||e["@@iterator"],typeof e=="function"?e:null)}var ts={isMounted:function(){return!1},enqueueForceUpdate:function(){},enqueueReplaceState:function(){},enqueueSetState:function(){}},ns=Object.assign,rs={};function cn(e,t,n){this.props=e,this.context=t,this.refs=rs,this.updater=n||ts}cn.prototype.isReactComponent={};cn.prototype.setState=function(e,t){if(typeof e!="object"&&typeof e!="function"&&e!=null)throw Error("setState(...): takes an object of state variables to update or a function which returns an object of state variables.");this.updater.enqueueSetState(this,e,t,"setState")};cn.prototype.forceUpdate=function(e){this.updater.enqueueForceUpdate(this,e,"forceUpdate")};function ls(){}ls.prototype=cn.prototype;function Ko(e,t,n){this.props=e,this.context=t,this.refs=rs,this.updater=n||ts}var Yo=Ko.prototype=new ls;Yo.constructor=Ko;ns(Yo,cn.prototype);Yo.isPureReactComponent=!0;var Hi=Array.isArray,os=Object.prototype.hasOwnProperty,Xo={current:null},is={key:!0,ref:!0,__self:!0,__source:!0};function us(e,t,n){var r,l={},o=null,i=null;if(t!=null)for(r in t.ref!==void 0&&(i=t.ref),t.key!==void 0&&(o=""+t.key),t)os.call(t,r)&&!is.hasOwnProperty(r)&&(l[r]=t[r]);var u=arguments.length-2;if(u===1)l.children=n;else if(1>>1,te=j[G];if(0>>1;Gl(jl,z))ktl(ur,jl)?(j[G]=ur,j[kt]=z,G=kt):(j[G]=jl,j[xt]=z,G=xt);else if(ktl(ur,z))j[G]=ur,j[kt]=z,G=kt;else break e}}return L}function l(j,L){var z=j.sortIndex-L.sortIndex;return z!==0?z:j.id-L.id}if(typeof performance=="object"&&typeof performance.now=="function"){var o=performance;e.unstable_now=function(){return o.now()}}else{var i=Date,u=i.now();e.unstable_now=function(){return i.now()-u}}var s=[],f=[],h=1,c=null,v=3,g=!1,w=!1,k=!1,M=typeof setTimeout=="function"?setTimeout:null,m=typeof clearTimeout=="function"?clearTimeout:null,d=typeof setImmediate<"u"?setImmediate:null;typeof navigator<"u"&&navigator.scheduling!==void 0&&navigator.scheduling.isInputPending!==void 0&&navigator.scheduling.isInputPending.bind(navigator.scheduling);function y(j){for(var L=n(f);L!==null;){if(L.callback===null)r(f);else if(L.startTime<=j)r(f),L.sortIndex=L.expirationTime,t(s,L);else break;L=n(f)}}function S(j){if(k=!1,y(j),!w)if(n(s)!==null)w=!0,El(C);else{var L=n(f);L!==null&&Cl(S,L.startTime-j)}}function C(j,L){w=!1,k&&(k=!1,m(T),T=-1),g=!0;var z=v;try{for(y(L),c=n(s);c!==null&&(!(c.expirationTime>L)||j&&!ze());){var G=c.callback;if(typeof G=="function"){c.callback=null,v=c.priorityLevel;var te=G(c.expirationTime<=L);L=e.unstable_now(),typeof te=="function"?c.callback=te:c===n(s)&&r(s),y(L)}else r(s);c=n(s)}if(c!==null)var ir=!0;else{var xt=n(f);xt!==null&&Cl(S,xt.startTime-L),ir=!1}return ir}finally{c=null,v=z,g=!1}}var _=!1,N=null,T=-1,X=5,F=-1;function ze(){return!(e.unstable_now()-Fj||125G?(j.sortIndex=z,t(f,j),n(s)===null&&j===n(f)&&(k?(m(T),T=-1):k=!0,Cl(S,z-G))):(j.sortIndex=te,t(s,j),w||g||(w=!0,El(C))),j},e.unstable_shouldYield=ze,e.unstable_wrapCallback=function(j){var L=v;return function(){var z=v;v=L;try{return j.apply(this,arguments)}finally{v=z}}}})(fs);cs.exports=fs;var af=cs.exports;/** - * @license React - * 
react-dom.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var ds=p,Ee=af;function x(e){for(var t="https://reactjs.org/docs/error-decoder.html?invariant="+e,n=1;n"u"||typeof window.document>"u"||typeof window.document.createElement>"u"),bl=Object.prototype.hasOwnProperty,cf=/^[:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD][:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD\-.0-9\u00B7\u0300-\u036F\u203F-\u2040]*$/,Ki={},Yi={};function ff(e){return bl.call(Yi,e)?!0:bl.call(Ki,e)?!1:cf.test(e)?Yi[e]=!0:(Ki[e]=!0,!1)}function df(e,t,n,r){if(n!==null&&n.type===0)return!1;switch(typeof t){case"function":case"symbol":return!0;case"boolean":return r?!1:n!==null?!n.acceptsBooleans:(e=e.toLowerCase().slice(0,5),e!=="data-"&&e!=="aria-");default:return!1}}function pf(e,t,n,r){if(t===null||typeof t>"u"||df(e,t,n,r))return!0;if(r)return!1;if(n!==null)switch(n.type){case 3:return!t;case 4:return t===!1;case 5:return isNaN(t);case 6:return isNaN(t)||1>t}return!1}function me(e,t,n,r,l,o,i){this.acceptsBooleans=t===2||t===3||t===4,this.attributeName=r,this.attributeNamespace=l,this.mustUseProperty=n,this.propertyName=e,this.type=t,this.sanitizeURL=o,this.removeEmptyString=i}var ie={};"children dangerouslySetInnerHTML defaultValue defaultChecked innerHTML suppressContentEditableWarning suppressHydrationWarning style".split(" ").forEach(function(e){ie[e]=new me(e,0,!1,e,null,!1,!1)});[["acceptCharset","accept-charset"],["className","class"],["htmlFor","for"],["httpEquiv","http-equiv"]].forEach(function(e){var t=e[0];ie[t]=new me(t,1,!1,e[1],null,!1,!1)});["contentEditable","draggable","spellCheck","value"].forEach(function(e){ie[e]=new me(e,2,!1,e.toLowerCase(),null,!1,!1)});["autoReverse","externalResourcesRequired","focusable","preserveAlpha"].forEach(function(e){ie[e]=new me(e,2,!1,e,null,!1,!1)});"allowFullScreen async autoFocus autoPlay controls default defer disabled disablePictureInPicture disableRemotePlayback formNoValidate hidden loop noModule noValidate open playsInline readOnly required reversed scoped seamless itemScope".split(" ").forEach(function(e){ie[e]=new me(e,3,!1,e.toLowerCase(),null,!1,!1)});["checked","multiple","muted","selected"].forEach(function(e){ie[e]=new me(e,3,!0,e,null,!1,!1)});["capture","download"].forEach(function(e){ie[e]=new me(e,4,!1,e,null,!1,!1)});["cols","rows","size","span"].forEach(function(e){ie[e]=new me(e,6,!1,e,null,!1,!1)});["rowSpan","start"].forEach(function(e){ie[e]=new me(e,5,!1,e.toLowerCase(),null,!1,!1)});var Zo=/[\-:]([a-z])/g;function qo(e){return e[1].toUpperCase()}"accent-height alignment-baseline arabic-form baseline-shift cap-height clip-path clip-rule color-interpolation color-interpolation-filters color-profile color-rendering dominant-baseline enable-background fill-opacity fill-rule flood-color flood-opacity font-family font-size font-size-adjust font-stretch font-style font-variant font-weight glyph-name glyph-orientation-horizontal glyph-orientation-vertical horiz-adv-x horiz-origin-x image-rendering letter-spacing lighting-color marker-end marker-mid marker-start overline-position overline-thickness paint-order panose-1 pointer-events rendering-intent shape-rendering stop-color 
stop-opacity strikethrough-position strikethrough-thickness stroke-dasharray stroke-dashoffset stroke-linecap stroke-linejoin stroke-miterlimit stroke-opacity stroke-width text-anchor text-decoration text-rendering underline-position underline-thickness unicode-bidi unicode-range units-per-em v-alphabetic v-hanging v-ideographic v-mathematical vector-effect vert-adv-y vert-origin-x vert-origin-y word-spacing writing-mode xmlns:xlink x-height".split(" ").forEach(function(e){var t=e.replace(Zo,qo);ie[t]=new me(t,1,!1,e,null,!1,!1)});"xlink:actuate xlink:arcrole xlink:role xlink:show xlink:title xlink:type".split(" ").forEach(function(e){var t=e.replace(Zo,qo);ie[t]=new me(t,1,!1,e,"http://www.w3.org/1999/xlink",!1,!1)});["xml:base","xml:lang","xml:space"].forEach(function(e){var t=e.replace(Zo,qo);ie[t]=new me(t,1,!1,e,"http://www.w3.org/XML/1998/namespace",!1,!1)});["tabIndex","crossOrigin"].forEach(function(e){ie[e]=new me(e,1,!1,e.toLowerCase(),null,!1,!1)});ie.xlinkHref=new me("xlinkHref",1,!1,"xlink:href","http://www.w3.org/1999/xlink",!0,!1);["src","href","action","formAction"].forEach(function(e){ie[e]=new me(e,1,!1,e.toLowerCase(),null,!0,!0)});function Jo(e,t,n,r){var l=ie.hasOwnProperty(t)?ie[t]:null;(l!==null?l.type!==0:r||!(2u||l[i]!==o[u]){var s=` -`+l[i].replace(" at new "," at ");return e.displayName&&s.includes("")&&(s=s.replace("",e.displayName)),s}while(1<=i&&0<=u);break}}}finally{Tl=!1,Error.prepareStackTrace=n}return(e=e?e.displayName||e.name:"")?jn(e):""}function mf(e){switch(e.tag){case 5:return jn(e.type);case 16:return jn("Lazy");case 13:return jn("Suspense");case 19:return jn("SuspenseList");case 0:case 2:case 15:return e=Ol(e.type,!1),e;case 11:return e=Ol(e.type.render,!1),e;case 1:return e=Ol(e.type,!0),e;default:return""}}function ro(e){if(e==null)return null;if(typeof e=="function")return e.displayName||e.name||null;if(typeof e=="string")return e;switch(e){case $t:return"Fragment";case Dt:return"Portal";case eo:return"Profiler";case bo:return"StrictMode";case to:return"Suspense";case no:return"SuspenseList"}if(typeof e=="object")switch(e.$$typeof){case hs:return(e.displayName||"Context")+".Consumer";case ms:return(e._context.displayName||"Context")+".Provider";case ei:var t=e.render;return e=e.displayName,e||(e=t.displayName||t.name||"",e=e!==""?"ForwardRef("+e+")":"ForwardRef"),e;case ti:return t=e.displayName||null,t!==null?t:ro(e.type)||"Memo";case nt:t=e._payload,e=e._init;try{return ro(e(t))}catch{}}return null}function hf(e){var t=e.type;switch(e.tag){case 24:return"Cache";case 9:return(t.displayName||"Context")+".Consumer";case 10:return(t._context.displayName||"Context")+".Provider";case 18:return"DehydratedFragment";case 11:return e=t.render,e=e.displayName||e.name||"",t.displayName||(e!==""?"ForwardRef("+e+")":"ForwardRef");case 7:return"Fragment";case 5:return t;case 4:return"Portal";case 3:return"Root";case 6:return"Text";case 16:return ro(t);case 8:return t===bo?"StrictMode":"Mode";case 22:return"Offscreen";case 12:return"Profiler";case 21:return"Scope";case 13:return"Suspense";case 19:return"SuspenseList";case 25:return"TracingMarker";case 1:case 0:case 17:case 2:case 14:case 15:if(typeof t=="function")return t.displayName||t.name||null;if(typeof t=="string")return t}return null}function yt(e){switch(typeof e){case"boolean":case"number":case"string":case"undefined":return e;case"object":return e;default:return""}}function vs(e){var t=e.type;return(e=e.nodeName)&&e.toLowerCase()==="input"&&(t==="checkbox"||t==="radio")}function yf(e){var 
t=vs(e)?"checked":"value",n=Object.getOwnPropertyDescriptor(e.constructor.prototype,t),r=""+e[t];if(!e.hasOwnProperty(t)&&typeof n<"u"&&typeof n.get=="function"&&typeof n.set=="function"){var l=n.get,o=n.set;return Object.defineProperty(e,t,{configurable:!0,get:function(){return l.call(this)},set:function(i){r=""+i,o.call(this,i)}}),Object.defineProperty(e,t,{enumerable:n.enumerable}),{getValue:function(){return r},setValue:function(i){r=""+i},stopTracking:function(){e._valueTracker=null,delete e[t]}}}}function cr(e){e._valueTracker||(e._valueTracker=yf(e))}function gs(e){if(!e)return!1;var t=e._valueTracker;if(!t)return!0;var n=t.getValue(),r="";return e&&(r=vs(e)?e.checked?"true":"false":e.value),e=r,e!==n?(t.setValue(e),!0):!1}function Mr(e){if(e=e||(typeof document<"u"?document:void 0),typeof e>"u")return null;try{return e.activeElement||e.body}catch{return e.body}}function lo(e,t){var n=t.checked;return W({},t,{defaultChecked:void 0,defaultValue:void 0,value:void 0,checked:n??e._wrapperState.initialChecked})}function Gi(e,t){var n=t.defaultValue==null?"":t.defaultValue,r=t.checked!=null?t.checked:t.defaultChecked;n=yt(t.value!=null?t.value:n),e._wrapperState={initialChecked:r,initialValue:n,controlled:t.type==="checkbox"||t.type==="radio"?t.checked!=null:t.value!=null}}function ws(e,t){t=t.checked,t!=null&&Jo(e,"checked",t,!1)}function oo(e,t){ws(e,t);var n=yt(t.value),r=t.type;if(n!=null)r==="number"?(n===0&&e.value===""||e.value!=n)&&(e.value=""+n):e.value!==""+n&&(e.value=""+n);else if(r==="submit"||r==="reset"){e.removeAttribute("value");return}t.hasOwnProperty("value")?io(e,t.type,n):t.hasOwnProperty("defaultValue")&&io(e,t.type,yt(t.defaultValue)),t.checked==null&&t.defaultChecked!=null&&(e.defaultChecked=!!t.defaultChecked)}function Zi(e,t,n){if(t.hasOwnProperty("value")||t.hasOwnProperty("defaultValue")){var r=t.type;if(!(r!=="submit"&&r!=="reset"||t.value!==void 0&&t.value!==null))return;t=""+e._wrapperState.initialValue,n||t===e.value||(e.value=t),e.defaultValue=t}n=e.name,n!==""&&(e.name=""),e.defaultChecked=!!e._wrapperState.initialChecked,n!==""&&(e.name=n)}function io(e,t,n){(t!=="number"||Mr(e.ownerDocument)!==e)&&(n==null?e.defaultValue=""+e._wrapperState.initialValue:e.defaultValue!==""+n&&(e.defaultValue=""+n))}var _n=Array.isArray;function Zt(e,t,n,r){if(e=e.options,t){t={};for(var l=0;l"+t.valueOf().toString()+"",t=fr.firstChild;e.firstChild;)e.removeChild(e.firstChild);for(;t.firstChild;)e.appendChild(t.firstChild)}});function $n(e,t){if(t){var n=e.firstChild;if(n&&n===e.lastChild&&n.nodeType===3){n.nodeValue=t;return}}e.textContent=t}var On={animationIterationCount:!0,aspectRatio:!0,borderImageOutset:!0,borderImageSlice:!0,borderImageWidth:!0,boxFlex:!0,boxFlexGroup:!0,boxOrdinalGroup:!0,columnCount:!0,columns:!0,flex:!0,flexGrow:!0,flexPositive:!0,flexShrink:!0,flexNegative:!0,flexOrder:!0,gridArea:!0,gridRow:!0,gridRowEnd:!0,gridRowSpan:!0,gridRowStart:!0,gridColumn:!0,gridColumnEnd:!0,gridColumnSpan:!0,gridColumnStart:!0,fontWeight:!0,lineClamp:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,tabSize:!0,widows:!0,zIndex:!0,zoom:!0,fillOpacity:!0,floodOpacity:!0,stopOpacity:!0,strokeDasharray:!0,strokeDashoffset:!0,strokeMiterlimit:!0,strokeOpacity:!0,strokeWidth:!0},vf=["Webkit","ms","Moz","O"];Object.keys(On).forEach(function(e){vf.forEach(function(t){t=t+e.charAt(0).toUpperCase()+e.substring(1),On[t]=On[e]})});function Es(e,t,n){return t==null||typeof t=="boolean"||t===""?"":n||typeof 
t!="number"||t===0||On.hasOwnProperty(e)&&On[e]?(""+t).trim():t+"px"}function Cs(e,t){e=e.style;for(var n in t)if(t.hasOwnProperty(n)){var r=n.indexOf("--")===0,l=Es(n,t[n],r);n==="float"&&(n="cssFloat"),r?e.setProperty(n,l):e[n]=l}}var gf=W({menuitem:!0},{area:!0,base:!0,br:!0,col:!0,embed:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0});function ao(e,t){if(t){if(gf[e]&&(t.children!=null||t.dangerouslySetInnerHTML!=null))throw Error(x(137,e));if(t.dangerouslySetInnerHTML!=null){if(t.children!=null)throw Error(x(60));if(typeof t.dangerouslySetInnerHTML!="object"||!("__html"in t.dangerouslySetInnerHTML))throw Error(x(61))}if(t.style!=null&&typeof t.style!="object")throw Error(x(62))}}function co(e,t){if(e.indexOf("-")===-1)return typeof t.is=="string";switch(e){case"annotation-xml":case"color-profile":case"font-face":case"font-face-src":case"font-face-uri":case"font-face-format":case"font-face-name":case"missing-glyph":return!1;default:return!0}}var fo=null;function ni(e){return e=e.target||e.srcElement||window,e.correspondingUseElement&&(e=e.correspondingUseElement),e.nodeType===3?e.parentNode:e}var po=null,qt=null,Jt=null;function bi(e){if(e=lr(e)){if(typeof po!="function")throw Error(x(280));var t=e.stateNode;t&&(t=dl(t),po(e.stateNode,e.type,t))}}function js(e){qt?Jt?Jt.push(e):Jt=[e]:qt=e}function _s(){if(qt){var e=qt,t=Jt;if(Jt=qt=null,bi(e),t)for(e=0;e>>=0,e===0?32:31-(Of(e)/Pf|0)|0}var dr=64,pr=4194304;function Nn(e){switch(e&-e){case 1:return 1;case 2:return 2;case 4:return 4;case 8:return 8;case 16:return 16;case 32:return 32;case 64:case 128:case 256:case 512:case 1024:case 2048:case 4096:case 8192:case 16384:case 32768:case 65536:case 131072:case 262144:case 524288:case 1048576:case 2097152:return e&4194240;case 4194304:case 8388608:case 16777216:case 33554432:case 67108864:return e&130023424;case 134217728:return 134217728;case 268435456:return 268435456;case 536870912:return 536870912;case 1073741824:return 1073741824;default:return e}}function Vr(e,t){var n=e.pendingLanes;if(n===0)return 0;var r=0,l=e.suspendedLanes,o=e.pingedLanes,i=n&268435455;if(i!==0){var u=i&~l;u!==0?r=Nn(u):(o&=i,o!==0&&(r=Nn(o)))}else i=n&~l,i!==0?r=Nn(i):o!==0&&(r=Nn(o));if(r===0)return 0;if(t!==0&&t!==r&&!(t&l)&&(l=r&-r,o=t&-t,l>=o||l===16&&(o&4194240)!==0))return t;if(r&4&&(r|=n&16),t=e.entangledLanes,t!==0)for(e=e.entanglements,t&=r;0n;n++)t.push(e);return t}function nr(e,t,n){e.pendingLanes|=t,t!==536870912&&(e.suspendedLanes=0,e.pingedLanes=0),e=e.eventTimes,t=31-Me(t),e[t]=n}function Ff(e,t){var n=e.pendingLanes&~t;e.pendingLanes=t,e.suspendedLanes=0,e.pingedLanes=0,e.expiredLanes&=t,e.mutableReadLanes&=t,e.entangledLanes&=t,t=e.entanglements;var r=e.eventTimes;for(e=e.expirationTimes;0=Ln),su=String.fromCharCode(32),au=!1;function Ks(e,t){switch(e){case"keyup":return sd.indexOf(t.keyCode)!==-1;case"keydown":return t.keyCode!==229;case"keypress":case"mousedown":case"focusout":return!0;default:return!1}}function Ys(e){return e=e.detail,typeof e=="object"&&"data"in e?e.data:null}var Ut=!1;function cd(e,t){switch(e){case"compositionend":return Ys(t);case"keypress":return t.which!==32?null:(au=!0,su);case"textInput":return e=t.data,e===su&&au?null:e;default:return null}}function fd(e,t){if(Ut)return e==="compositionend"||!ci&&Ks(e,t)?(e=Hs(),Tr=ui=it=null,Ut=!1,e):null;switch(e){case"paste":return 
null;case"keypress":if(!(t.ctrlKey||t.altKey||t.metaKey)||t.ctrlKey&&t.altKey){if(t.char&&1=t)return{node:n,offset:t-e};e=r}e:{for(;n;){if(n.nextSibling){n=n.nextSibling;break e}n=n.parentNode}n=void 0}n=pu(n)}}function qs(e,t){return e&&t?e===t?!0:e&&e.nodeType===3?!1:t&&t.nodeType===3?qs(e,t.parentNode):"contains"in e?e.contains(t):e.compareDocumentPosition?!!(e.compareDocumentPosition(t)&16):!1:!1}function Js(){for(var e=window,t=Mr();t instanceof e.HTMLIFrameElement;){try{var n=typeof t.contentWindow.location.href=="string"}catch{n=!1}if(n)e=t.contentWindow;else break;t=Mr(e.document)}return t}function fi(e){var t=e&&e.nodeName&&e.nodeName.toLowerCase();return t&&(t==="input"&&(e.type==="text"||e.type==="search"||e.type==="tel"||e.type==="url"||e.type==="password")||t==="textarea"||e.contentEditable==="true")}function Sd(e){var t=Js(),n=e.focusedElem,r=e.selectionRange;if(t!==n&&n&&n.ownerDocument&&qs(n.ownerDocument.documentElement,n)){if(r!==null&&fi(n)){if(t=r.start,e=r.end,e===void 0&&(e=t),"selectionStart"in n)n.selectionStart=t,n.selectionEnd=Math.min(e,n.value.length);else if(e=(t=n.ownerDocument||document)&&t.defaultView||window,e.getSelection){e=e.getSelection();var l=n.textContent.length,o=Math.min(r.start,l);r=r.end===void 0?o:Math.min(r.end,l),!e.extend&&o>r&&(l=r,r=o,o=l),l=mu(n,o);var i=mu(n,r);l&&i&&(e.rangeCount!==1||e.anchorNode!==l.node||e.anchorOffset!==l.offset||e.focusNode!==i.node||e.focusOffset!==i.offset)&&(t=t.createRange(),t.setStart(l.node,l.offset),e.removeAllRanges(),o>r?(e.addRange(t),e.extend(i.node,i.offset)):(t.setEnd(i.node,i.offset),e.addRange(t)))}}for(t=[],e=n;e=e.parentNode;)e.nodeType===1&&t.push({element:e,left:e.scrollLeft,top:e.scrollTop});for(typeof n.focus=="function"&&n.focus(),n=0;n=document.documentMode,Vt=null,wo=null,In=null,So=!1;function hu(e,t,n){var r=n.window===n?n.document:n.nodeType===9?n:n.ownerDocument;So||Vt==null||Vt!==Mr(r)||(r=Vt,"selectionStart"in r&&fi(r)?r={start:r.selectionStart,end:r.selectionEnd}:(r=(r.ownerDocument&&r.ownerDocument.defaultView||window).getSelection(),r={anchorNode:r.anchorNode,anchorOffset:r.anchorOffset,focusNode:r.focusNode,focusOffset:r.focusOffset}),In&&Wn(In,r)||(In=r,r=Hr(wo,"onSelect"),0Ht||(e.current=_o[Ht],_o[Ht]=null,Ht--)}function D(e,t){Ht++,_o[Ht]=e.current,e.current=t}var vt={},ce=wt(vt),ve=wt(!1),Pt=vt;function rn(e,t){var n=e.type.contextTypes;if(!n)return vt;var r=e.stateNode;if(r&&r.__reactInternalMemoizedUnmaskedChildContext===t)return r.__reactInternalMemoizedMaskedChildContext;var l={},o;for(o in n)l[o]=t[o];return r&&(e=e.stateNode,e.__reactInternalMemoizedUnmaskedChildContext=t,e.__reactInternalMemoizedMaskedChildContext=l),l}function ge(e){return e=e.childContextTypes,e!=null}function Kr(){U(ve),U(ce)}function ku(e,t,n){if(ce.current!==vt)throw Error(x(168));D(ce,t),D(ve,n)}function ua(e,t,n){var r=e.stateNode;if(t=t.childContextTypes,typeof r.getChildContext!="function")return n;r=r.getChildContext();for(var l in r)if(!(l in t))throw Error(x(108,hf(e)||"Unknown",l));return W({},n,r)}function Yr(e){return e=(e=e.stateNode)&&e.__reactInternalMemoizedMergedChildContext||vt,Pt=ce.current,D(ce,e),D(ve,ve.current),!0}function Eu(e,t,n){var r=e.stateNode;if(!r)throw Error(x(169));n?(e=ua(e,t,Pt),r.__reactInternalMemoizedMergedChildContext=e,U(ve),U(ce),D(ce,e)):U(ve),D(ve,n)}var Ke=null,pl=!1,Ql=!1;function sa(e){Ke===null?Ke=[e]:Ke.push(e)}function zd(e){pl=!0,sa(e)}function St(){if(!Ql&&Ke!==null){Ql=!0;var e=0,t=A;try{var 
n=Ke;for(A=1;e>=i,l-=i,Ye=1<<32-Me(t)+l|n<T?(X=N,N=null):X=N.sibling;var F=v(m,N,y[T],S);if(F===null){N===null&&(N=X);break}e&&N&&F.alternate===null&&t(m,N),d=o(F,d,T),_===null?C=F:_.sibling=F,_=F,N=X}if(T===y.length)return n(m,N),V&&Et(m,T),C;if(N===null){for(;TT?(X=N,N=null):X=N.sibling;var ze=v(m,N,F.value,S);if(ze===null){N===null&&(N=X);break}e&&N&&ze.alternate===null&&t(m,N),d=o(ze,d,T),_===null?C=ze:_.sibling=ze,_=ze,N=X}if(F.done)return n(m,N),V&&Et(m,T),C;if(N===null){for(;!F.done;T++,F=y.next())F=c(m,F.value,S),F!==null&&(d=o(F,d,T),_===null?C=F:_.sibling=F,_=F);return V&&Et(m,T),C}for(N=r(m,N);!F.done;T++,F=y.next())F=g(N,m,T,F.value,S),F!==null&&(e&&F.alternate!==null&&N.delete(F.key===null?T:F.key),d=o(F,d,T),_===null?C=F:_.sibling=F,_=F);return e&&N.forEach(function(mn){return t(m,mn)}),V&&Et(m,T),C}function M(m,d,y,S){if(typeof y=="object"&&y!==null&&y.type===$t&&y.key===null&&(y=y.props.children),typeof y=="object"&&y!==null){switch(y.$$typeof){case ar:e:{for(var C=y.key,_=d;_!==null;){if(_.key===C){if(C=y.type,C===$t){if(_.tag===7){n(m,_.sibling),d=l(_,y.props.children),d.return=m,m=d;break e}}else if(_.elementType===C||typeof C=="object"&&C!==null&&C.$$typeof===nt&&Pu(C)===_.type){n(m,_.sibling),d=l(_,y.props),d.ref=kn(m,_,y),d.return=m,m=d;break e}n(m,_);break}else t(m,_);_=_.sibling}y.type===$t?(d=Ot(y.props.children,m.mode,S,y.key),d.return=m,m=d):(S=Ar(y.type,y.key,y.props,null,m.mode,S),S.ref=kn(m,d,y),S.return=m,m=S)}return i(m);case Dt:e:{for(_=y.key;d!==null;){if(d.key===_)if(d.tag===4&&d.stateNode.containerInfo===y.containerInfo&&d.stateNode.implementation===y.implementation){n(m,d.sibling),d=l(d,y.children||[]),d.return=m,m=d;break e}else{n(m,d);break}else t(m,d);d=d.sibling}d=ql(y,m.mode,S),d.return=m,m=d}return i(m);case nt:return _=y._init,M(m,d,_(y._payload),S)}if(_n(y))return w(m,d,y,S);if(vn(y))return k(m,d,y,S);Sr(m,y)}return typeof y=="string"&&y!==""||typeof y=="number"?(y=""+y,d!==null&&d.tag===6?(n(m,d.sibling),d=l(d,y),d.return=m,m=d):(n(m,d),d=Zl(y,m.mode,S),d.return=m,m=d),i(m)):n(m,d)}return M}var on=ya(!0),va=ya(!1),or={},He=wt(or),Gn=wt(or),Zn=wt(or);function Nt(e){if(e===or)throw Error(x(174));return e}function Si(e,t){switch(D(Zn,t),D(Gn,e),D(He,or),e=t.nodeType,e){case 9:case 11:t=(t=t.documentElement)?t.namespaceURI:so(null,"");break;default:e=e===8?t.parentNode:t,t=e.namespaceURI||null,e=e.tagName,t=so(t,e)}U(He),D(He,t)}function un(){U(He),U(Gn),U(Zn)}function ga(e){Nt(Zn.current);var t=Nt(He.current),n=so(t,e.type);t!==n&&(D(Gn,e),D(He,n))}function xi(e){Gn.current===e&&(U(He),U(Gn))}var Q=wt(0);function br(e){for(var t=e;t!==null;){if(t.tag===13){var n=t.memoizedState;if(n!==null&&(n=n.dehydrated,n===null||n.data==="$?"||n.data==="$!"))return t}else if(t.tag===19&&t.memoizedProps.revealOrder!==void 0){if(t.flags&128)return t}else if(t.child!==null){t.child.return=t,t=t.child;continue}if(t===e)break;for(;t.sibling===null;){if(t.return===null||t.return===e)return null;t=t.return}t.sibling.return=t.return,t=t.sibling}return null}var Hl=[];function ki(){for(var e=0;en?n:4,e(!0);var r=Wl.transition;Wl.transition={};try{e(!1),t()}finally{A=n,Wl.transition=r}}function Fa(){return Le().memoizedState}function Ad(e,t,n){var r=mt(e);if(n={lane:r,action:n,hasEagerState:!1,eagerState:null,next:null},Ra(e))Aa(t,n);else if(n=da(e,t,n,r),n!==null){var l=de();De(n,e,r,l),Ma(n,t,r)}}function Md(e,t,n){var r=mt(e),l={lane:r,action:n,hasEagerState:!1,eagerState:null,next:null};if(Ra(e))Aa(t,l);else{var 
o=e.alternate;if(e.lanes===0&&(o===null||o.lanes===0)&&(o=t.lastRenderedReducer,o!==null))try{var i=t.lastRenderedState,u=o(i,n);if(l.hasEagerState=!0,l.eagerState=u,$e(u,i)){var s=t.interleaved;s===null?(l.next=l,gi(t)):(l.next=s.next,s.next=l),t.interleaved=l;return}}catch{}finally{}n=da(e,t,l,r),n!==null&&(l=de(),De(n,e,r,l),Ma(n,t,r))}}function Ra(e){var t=e.alternate;return e===H||t!==null&&t===H}function Aa(e,t){Fn=el=!0;var n=e.pending;n===null?t.next=t:(t.next=n.next,n.next=t),e.pending=t}function Ma(e,t,n){if(n&4194240){var r=t.lanes;r&=e.pendingLanes,n|=r,t.lanes=n,li(e,n)}}var tl={readContext:Pe,useCallback:ue,useContext:ue,useEffect:ue,useImperativeHandle:ue,useInsertionEffect:ue,useLayoutEffect:ue,useMemo:ue,useReducer:ue,useRef:ue,useState:ue,useDebugValue:ue,useDeferredValue:ue,useTransition:ue,useMutableSource:ue,useSyncExternalStore:ue,useId:ue,unstable_isNewReconciler:!1},Dd={readContext:Pe,useCallback:function(e,t){return Ve().memoizedState=[e,t===void 0?null:t],e},useContext:Pe,useEffect:zu,useImperativeHandle:function(e,t,n){return n=n!=null?n.concat([e]):null,zr(4194308,4,Oa.bind(null,t,e),n)},useLayoutEffect:function(e,t){return zr(4194308,4,e,t)},useInsertionEffect:function(e,t){return zr(4,2,e,t)},useMemo:function(e,t){var n=Ve();return t=t===void 0?null:t,e=e(),n.memoizedState=[e,t],e},useReducer:function(e,t,n){var r=Ve();return t=n!==void 0?n(t):t,r.memoizedState=r.baseState=t,e={pending:null,interleaved:null,lanes:0,dispatch:null,lastRenderedReducer:e,lastRenderedState:t},r.queue=e,e=e.dispatch=Ad.bind(null,H,e),[r.memoizedState,e]},useRef:function(e){var t=Ve();return e={current:e},t.memoizedState=e},useState:Lu,useDebugValue:Ni,useDeferredValue:function(e){return Ve().memoizedState=e},useTransition:function(){var e=Lu(!1),t=e[0];return e=Rd.bind(null,e[1]),Ve().memoizedState=e,[t,e]},useMutableSource:function(){},useSyncExternalStore:function(e,t,n){var r=H,l=Ve();if(V){if(n===void 0)throw Error(x(407));n=n()}else{if(n=t(),re===null)throw Error(x(349));zt&30||xa(r,t,n)}l.memoizedState=n;var o={value:n,getSnapshot:t};return l.queue=o,zu(Ea.bind(null,r,o,e),[e]),r.flags|=2048,bn(9,ka.bind(null,r,o,n,t),void 0,null),n},useId:function(){var e=Ve(),t=re.identifierPrefix;if(V){var n=Xe,r=Ye;n=(r&~(1<<32-Me(r)-1)).toString(32)+n,t=":"+t+"R"+n,n=qn++,0<\/script>",e=e.removeChild(e.firstChild)):typeof r.is=="string"?e=i.createElement(n,{is:r.is}):(e=i.createElement(n),n==="select"&&(i=e,r.multiple?i.multiple=!0:r.size&&(i.size=r.size))):e=i.createElementNS(e,n),e[Be]=t,e[Xn]=r,Ka(e,t,!1,!1),t.stateNode=e;e:{switch(i=co(n,r),n){case"dialog":$("cancel",e),$("close",e),l=r;break;case"iframe":case"object":case"embed":$("load",e),l=r;break;case"video":case"audio":for(l=0;lan&&(t.flags|=128,r=!0,En(o,!1),t.lanes=4194304)}else{if(!r)if(e=br(i),e!==null){if(t.flags|=128,r=!0,n=e.updateQueue,n!==null&&(t.updateQueue=n,t.flags|=4),En(o,!0),o.tail===null&&o.tailMode==="hidden"&&!i.alternate&&!V)return se(t),null}else 2*Z()-o.renderingStartTime>an&&n!==1073741824&&(t.flags|=128,r=!0,En(o,!1),t.lanes=4194304);o.isBackwards?(i.sibling=t.child,t.child=i):(n=o.last,n!==null?n.sibling=i:t.child=i,o.last=i)}return o.tail!==null?(t=o.tail,o.rendering=t,o.tail=t.sibling,o.renderingStartTime=Z(),t.sibling=null,n=Q.current,D(Q,r?n&1|2:n&1),t):(se(t),null);case 22:case 23:return Ii(),r=t.memoizedState!==null,e!==null&&e.memoizedState!==null!==r&&(t.flags|=8192),r&&t.mode&1?Se&1073741824&&(se(t),t.subtreeFlags&6&&(t.flags|=8192)):se(t),null;case 24:return null;case 25:return null}throw 
Error(x(156,t.tag))}function Kd(e,t){switch(pi(t),t.tag){case 1:return ge(t.type)&&Kr(),e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 3:return un(),U(ve),U(ce),ki(),e=t.flags,e&65536&&!(e&128)?(t.flags=e&-65537|128,t):null;case 5:return xi(t),null;case 13:if(U(Q),e=t.memoizedState,e!==null&&e.dehydrated!==null){if(t.alternate===null)throw Error(x(340));ln()}return e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 19:return U(Q),null;case 4:return un(),null;case 10:return vi(t.type._context),null;case 22:case 23:return Ii(),null;case 24:return null;default:return null}}var kr=!1,ae=!1,Yd=typeof WeakSet=="function"?WeakSet:Set,E=null;function Xt(e,t){var n=e.ref;if(n!==null)if(typeof n=="function")try{n(null)}catch(r){K(e,t,r)}else n.current=null}function Do(e,t,n){try{n()}catch(r){K(e,t,r)}}var Vu=!1;function Xd(e,t){if(xo=Br,e=Js(),fi(e)){if("selectionStart"in e)var n={start:e.selectionStart,end:e.selectionEnd};else e:{n=(n=e.ownerDocument)&&n.defaultView||window;var r=n.getSelection&&n.getSelection();if(r&&r.rangeCount!==0){n=r.anchorNode;var l=r.anchorOffset,o=r.focusNode;r=r.focusOffset;try{n.nodeType,o.nodeType}catch{n=null;break e}var i=0,u=-1,s=-1,f=0,h=0,c=e,v=null;t:for(;;){for(var g;c!==n||l!==0&&c.nodeType!==3||(u=i+l),c!==o||r!==0&&c.nodeType!==3||(s=i+r),c.nodeType===3&&(i+=c.nodeValue.length),(g=c.firstChild)!==null;)v=c,c=g;for(;;){if(c===e)break t;if(v===n&&++f===l&&(u=i),v===o&&++h===r&&(s=i),(g=c.nextSibling)!==null)break;c=v,v=c.parentNode}c=g}n=u===-1||s===-1?null:{start:u,end:s}}else n=null}n=n||{start:0,end:0}}else n=null;for(ko={focusedElem:e,selectionRange:n},Br=!1,E=t;E!==null;)if(t=E,e=t.child,(t.subtreeFlags&1028)!==0&&e!==null)e.return=t,E=e;else for(;E!==null;){t=E;try{var w=t.alternate;if(t.flags&1024)switch(t.tag){case 0:case 11:case 15:break;case 1:if(w!==null){var k=w.memoizedProps,M=w.memoizedState,m=t.stateNode,d=m.getSnapshotBeforeUpdate(t.elementType===t.type?k:Fe(t.type,k),M);m.__reactInternalSnapshotBeforeUpdate=d}break;case 3:var y=t.stateNode.containerInfo;y.nodeType===1?y.textContent="":y.nodeType===9&&y.documentElement&&y.removeChild(y.documentElement);break;case 5:case 6:case 4:case 17:break;default:throw Error(x(163))}}catch(S){K(t,t.return,S)}if(e=t.sibling,e!==null){e.return=t.return,E=e;break}E=t.return}return w=Vu,Vu=!1,w}function Rn(e,t,n){var r=t.updateQueue;if(r=r!==null?r.lastEffect:null,r!==null){var l=r=r.next;do{if((l.tag&e)===e){var o=l.destroy;l.destroy=void 0,o!==void 0&&Do(t,n,o)}l=l.next}while(l!==r)}}function yl(e,t){if(t=t.updateQueue,t=t!==null?t.lastEffect:null,t!==null){var n=t=t.next;do{if((n.tag&e)===e){var r=n.create;n.destroy=r()}n=n.next}while(n!==t)}}function $o(e){var t=e.ref;if(t!==null){var n=e.stateNode;switch(e.tag){case 5:e=n;break;default:e=n}typeof t=="function"?t(e):t.current=e}}function Ga(e){var t=e.alternate;t!==null&&(e.alternate=null,Ga(t)),e.child=null,e.deletions=null,e.sibling=null,e.tag===5&&(t=e.stateNode,t!==null&&(delete t[Be],delete t[Xn],delete t[jo],delete t[Pd],delete t[Ld])),e.stateNode=null,e.return=null,e.dependencies=null,e.memoizedProps=null,e.memoizedState=null,e.pendingProps=null,e.stateNode=null,e.updateQueue=null}function Za(e){return e.tag===5||e.tag===3||e.tag===4}function Bu(e){e:for(;;){for(;e.sibling===null;){if(e.return===null||Za(e.return))return null;e=e.return}for(e.sibling.return=e.return,e=e.sibling;e.tag!==5&&e.tag!==6&&e.tag!==18;){if(e.flags&2||e.child===null||e.tag===4)continue e;e.child.return=e,e=e.child}if(!(e.flags&2))return e.stateNode}}function 
Uo(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.nodeType===8?n.parentNode.insertBefore(e,t):n.insertBefore(e,t):(n.nodeType===8?(t=n.parentNode,t.insertBefore(e,n)):(t=n,t.appendChild(e)),n=n._reactRootContainer,n!=null||t.onclick!==null||(t.onclick=Wr));else if(r!==4&&(e=e.child,e!==null))for(Uo(e,t,n),e=e.sibling;e!==null;)Uo(e,t,n),e=e.sibling}function Vo(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.insertBefore(e,t):n.appendChild(e);else if(r!==4&&(e=e.child,e!==null))for(Vo(e,t,n),e=e.sibling;e!==null;)Vo(e,t,n),e=e.sibling}var le=null,Re=!1;function tt(e,t,n){for(n=n.child;n!==null;)qa(e,t,n),n=n.sibling}function qa(e,t,n){if(Qe&&typeof Qe.onCommitFiberUnmount=="function")try{Qe.onCommitFiberUnmount(sl,n)}catch{}switch(n.tag){case 5:ae||Xt(n,t);case 6:var r=le,l=Re;le=null,tt(e,t,n),le=r,Re=l,le!==null&&(Re?(e=le,n=n.stateNode,e.nodeType===8?e.parentNode.removeChild(n):e.removeChild(n)):le.removeChild(n.stateNode));break;case 18:le!==null&&(Re?(e=le,n=n.stateNode,e.nodeType===8?Bl(e.parentNode,n):e.nodeType===1&&Bl(e,n),Qn(e)):Bl(le,n.stateNode));break;case 4:r=le,l=Re,le=n.stateNode.containerInfo,Re=!0,tt(e,t,n),le=r,Re=l;break;case 0:case 11:case 14:case 15:if(!ae&&(r=n.updateQueue,r!==null&&(r=r.lastEffect,r!==null))){l=r=r.next;do{var o=l,i=o.destroy;o=o.tag,i!==void 0&&(o&2||o&4)&&Do(n,t,i),l=l.next}while(l!==r)}tt(e,t,n);break;case 1:if(!ae&&(Xt(n,t),r=n.stateNode,typeof r.componentWillUnmount=="function"))try{r.props=n.memoizedProps,r.state=n.memoizedState,r.componentWillUnmount()}catch(u){K(n,t,u)}tt(e,t,n);break;case 21:tt(e,t,n);break;case 22:n.mode&1?(ae=(r=ae)||n.memoizedState!==null,tt(e,t,n),ae=r):tt(e,t,n);break;default:tt(e,t,n)}}function Qu(e){var t=e.updateQueue;if(t!==null){e.updateQueue=null;var n=e.stateNode;n===null&&(n=e.stateNode=new Yd),t.forEach(function(r){var l=rp.bind(null,e,r);n.has(r)||(n.add(r),r.then(l,l))})}}function Ie(e,t){var n=t.deletions;if(n!==null)for(var r=0;rl&&(l=i),r&=~o}if(r=l,r=Z()-r,r=(120>r?120:480>r?480:1080>r?1080:1920>r?1920:3e3>r?3e3:4320>r?4320:1960*Zd(r/1960))-r,10e?16:e,ut===null)var r=!1;else{if(e=ut,ut=null,ll=0,R&6)throw Error(x(331));var l=R;for(R|=4,E=e.current;E!==null;){var o=E,i=o.child;if(E.flags&16){var u=o.deletions;if(u!==null){for(var s=0;sZ()-Li?Tt(e,0):Pi|=n),we(e,t)}function oc(e,t){t===0&&(e.mode&1?(t=pr,pr<<=1,!(pr&130023424)&&(pr=4194304)):t=1);var n=de();e=Je(e,t),e!==null&&(nr(e,t,n),we(e,n))}function np(e){var t=e.memoizedState,n=0;t!==null&&(n=t.retryLane),oc(e,n)}function rp(e,t){var n=0;switch(e.tag){case 13:var r=e.stateNode,l=e.memoizedState;l!==null&&(n=l.retryLane);break;case 19:r=e.stateNode;break;default:throw Error(x(314))}r!==null&&r.delete(t),oc(e,n)}var ic;ic=function(e,t,n){if(e!==null)if(e.memoizedProps!==t.pendingProps||ve.current)ye=!0;else{if(!(e.lanes&n)&&!(t.flags&128))return ye=!1,Hd(e,t,n);ye=!!(e.flags&131072)}else ye=!1,V&&t.flags&1048576&&aa(t,Gr,t.index);switch(t.lanes=0,t.tag){case 2:var r=t.type;Ir(e,t),e=t.pendingProps;var l=rn(t,ce.current);en(t,n),l=Ci(null,t,r,e,l,n);var o=ji();return t.flags|=1,typeof l=="object"&&l!==null&&typeof l.render=="function"&&l.$$typeof===void 0?(t.tag=1,t.memoizedState=null,t.updateQueue=null,ge(r)?(o=!0,Yr(t)):o=!1,t.memoizedState=l.state!==null&&l.state!==void 0?l.state:null,wi(t),l.updater=ml,t.stateNode=l,l._reactInternals=t,Lo(t,r,e,n),t=Fo(null,t,r,!0,o,n)):(t.tag=0,V&&o&&di(t),fe(null,t,l,n),t=t.child),t;case 
16:r=t.elementType;e:{switch(Ir(e,t),e=t.pendingProps,l=r._init,r=l(r._payload),t.type=r,l=t.tag=op(r),e=Fe(r,e),l){case 0:t=Io(null,t,r,e,n);break e;case 1:t=Du(null,t,r,e,n);break e;case 11:t=Au(null,t,r,e,n);break e;case 14:t=Mu(null,t,r,Fe(r.type,e),n);break e}throw Error(x(306,r,""))}return t;case 0:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Fe(r,l),Io(e,t,r,l,n);case 1:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Fe(r,l),Du(e,t,r,l,n);case 3:e:{if(Qa(t),e===null)throw Error(x(387));r=t.pendingProps,o=t.memoizedState,l=o.element,pa(e,t),Jr(t,r,null,n);var i=t.memoizedState;if(r=i.element,o.isDehydrated)if(o={element:r,isDehydrated:!1,cache:i.cache,pendingSuspenseBoundaries:i.pendingSuspenseBoundaries,transitions:i.transitions},t.updateQueue.baseState=o,t.memoizedState=o,t.flags&256){l=sn(Error(x(423)),t),t=$u(e,t,r,n,l);break e}else if(r!==l){l=sn(Error(x(424)),t),t=$u(e,t,r,n,l);break e}else for(xe=ft(t.stateNode.containerInfo.firstChild),ke=t,V=!0,Ae=null,n=va(t,null,r,n),t.child=n;n;)n.flags=n.flags&-3|4096,n=n.sibling;else{if(ln(),r===l){t=be(e,t,n);break e}fe(e,t,r,n)}t=t.child}return t;case 5:return ga(t),e===null&&To(t),r=t.type,l=t.pendingProps,o=e!==null?e.memoizedProps:null,i=l.children,Eo(r,l)?i=null:o!==null&&Eo(r,o)&&(t.flags|=32),Ba(e,t),fe(e,t,i,n),t.child;case 6:return e===null&&To(t),null;case 13:return Ha(e,t,n);case 4:return Si(t,t.stateNode.containerInfo),r=t.pendingProps,e===null?t.child=on(t,null,r,n):fe(e,t,r,n),t.child;case 11:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Fe(r,l),Au(e,t,r,l,n);case 7:return fe(e,t,t.pendingProps,n),t.child;case 8:return fe(e,t,t.pendingProps.children,n),t.child;case 12:return fe(e,t,t.pendingProps.children,n),t.child;case 10:e:{if(r=t.type._context,l=t.pendingProps,o=t.memoizedProps,i=l.value,D(Zr,r._currentValue),r._currentValue=i,o!==null)if($e(o.value,i)){if(o.children===l.children&&!ve.current){t=be(e,t,n);break e}}else for(o=t.child,o!==null&&(o.return=t);o!==null;){var u=o.dependencies;if(u!==null){i=o.child;for(var s=u.firstContext;s!==null;){if(s.context===r){if(o.tag===1){s=Ge(-1,n&-n),s.tag=2;var f=o.updateQueue;if(f!==null){f=f.shared;var h=f.pending;h===null?s.next=s:(s.next=h.next,h.next=s),f.pending=s}}o.lanes|=n,s=o.alternate,s!==null&&(s.lanes|=n),Oo(o.return,n,t),u.lanes|=n;break}s=s.next}}else if(o.tag===10)i=o.type===t.type?null:o.child;else if(o.tag===18){if(i=o.return,i===null)throw Error(x(341));i.lanes|=n,u=i.alternate,u!==null&&(u.lanes|=n),Oo(i,n,t),i=o.sibling}else i=o.child;if(i!==null)i.return=o;else for(i=o;i!==null;){if(i===t){i=null;break}if(o=i.sibling,o!==null){o.return=i.return,i=o;break}i=i.return}o=i}fe(e,t,l.children,n),t=t.child}return t;case 9:return l=t.type,r=t.pendingProps.children,en(t,n),l=Pe(l),r=r(l),t.flags|=1,fe(e,t,r,n),t.child;case 14:return r=t.type,l=Fe(r,t.pendingProps),l=Fe(r.type,l),Mu(e,t,r,l,n);case 15:return Ua(e,t,t.type,t.pendingProps,n);case 17:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Fe(r,l),Ir(e,t),t.tag=1,ge(r)?(e=!0,Yr(t)):e=!1,en(t,n),ha(t,r,l),Lo(t,r,l,n),Fo(null,t,r,!0,e,n);case 19:return Wa(e,t,n);case 22:return Va(e,t,n)}throw Error(x(156,t.tag))};function uc(e,t){return Is(e,t)}function 
lp(e,t,n,r){this.tag=e,this.key=n,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=t,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=r,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function Te(e,t,n,r){return new lp(e,t,n,r)}function Ri(e){return e=e.prototype,!(!e||!e.isReactComponent)}function op(e){if(typeof e=="function")return Ri(e)?1:0;if(e!=null){if(e=e.$$typeof,e===ei)return 11;if(e===ti)return 14}return 2}function ht(e,t){var n=e.alternate;return n===null?(n=Te(e.tag,t,e.key,e.mode),n.elementType=e.elementType,n.type=e.type,n.stateNode=e.stateNode,n.alternate=e,e.alternate=n):(n.pendingProps=t,n.type=e.type,n.flags=0,n.subtreeFlags=0,n.deletions=null),n.flags=e.flags&14680064,n.childLanes=e.childLanes,n.lanes=e.lanes,n.child=e.child,n.memoizedProps=e.memoizedProps,n.memoizedState=e.memoizedState,n.updateQueue=e.updateQueue,t=e.dependencies,n.dependencies=t===null?null:{lanes:t.lanes,firstContext:t.firstContext},n.sibling=e.sibling,n.index=e.index,n.ref=e.ref,n}function Ar(e,t,n,r,l,o){var i=2;if(r=e,typeof e=="function")Ri(e)&&(i=1);else if(typeof e=="string")i=5;else e:switch(e){case $t:return Ot(n.children,l,o,t);case bo:i=8,l|=8;break;case eo:return e=Te(12,n,t,l|2),e.elementType=eo,e.lanes=o,e;case to:return e=Te(13,n,t,l),e.elementType=to,e.lanes=o,e;case no:return e=Te(19,n,t,l),e.elementType=no,e.lanes=o,e;case ys:return gl(n,l,o,t);default:if(typeof e=="object"&&e!==null)switch(e.$$typeof){case ms:i=10;break e;case hs:i=9;break e;case ei:i=11;break e;case ti:i=14;break e;case nt:i=16,r=null;break e}throw Error(x(130,e==null?e:typeof e,""))}return t=Te(i,n,t,l),t.elementType=e,t.type=r,t.lanes=o,t}function Ot(e,t,n,r){return e=Te(7,e,r,t),e.lanes=n,e}function gl(e,t,n,r){return e=Te(22,e,r,t),e.elementType=ys,e.lanes=n,e.stateNode={isHidden:!1},e}function Zl(e,t,n){return e=Te(6,e,null,t),e.lanes=n,e}function ql(e,t,n){return t=Te(4,e.children!==null?e.children:[],e.key,t),t.lanes=n,t.stateNode={containerInfo:e.containerInfo,pendingChildren:null,implementation:e.implementation},t}function ip(e,t,n,r,l){this.tag=t,this.containerInfo=e,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=Ll(0),this.expirationTimes=Ll(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=Ll(0),this.identifierPrefix=r,this.onRecoverableError=l,this.mutableSourceEagerHydrationData=null}function Ai(e,t,n,r,l,o,i,u,s){return e=new ip(e,t,n,u,s),t===1?(t=1,o===!0&&(t|=8)):t=0,o=Te(3,null,null,t),e.current=o,o.stateNode=e,o.memoizedState={element:r,isDehydrated:n,cache:null,transitions:null,pendingSuspenseBoundaries:null},wi(o),e}function up(e,t,n){var r=3"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(fc)}catch(e){console.error(e)}}fc(),as.exports=Ce;var dp=as.exports,dc,qu=dp;dc=qu.createRoot,qu.hydrateRoot;var pp=(typeof process<"u","https://huggingface.co");async function mp(e,t){var r;const n=new hp(e.url,e.status,e.headers.get("X-Request-Id")??(t==null?void 0:t.requestId));if(n.message=`Api error with status ${n.statusCode}.${t!=null&&t.message?` ${t.message}.`:""} Request ID: ${n.requestId}, url: 
${n.url}`,(r=e.headers.get("Content-Type"))!=null&&r.startsWith("application/json")){const l=await e.json();n.message=l.error||l.message||n.message,n.data=l}else n.data={message:await e.text()};throw n}var hp=class extends Error{constructor(t,n,r,l){super(l);yn(this,"statusCode");yn(this,"url");yn(this,"requestId");yn(this,"data");this.statusCode=n,this.requestId=r,this.url=t}};function yp(e){if(!(!e||e.accessToken===void 0||e.accessToken===null)&&!e.accessToken.startsWith("hf_"))throw new TypeError("Your access token must start with 'hf_'")}function vp(e){const t=/<(https?:[/][/][^>]+)>;\s+rel="([^"]+)"/g;return Object.fromEntries([...e.matchAll(t)].map(([,n,r])=>[r,n]))}var gp=["pipeline_tag","private","gated","downloads","likes"];async function*wp(e){var r,l;yp(e==null?void 0:e.credentials);const t=new URLSearchParams([...Object.entries({limit:"500",...(r=e==null?void 0:e.search)!=null&&r.owner?{author:e.search.owner}:void 0,...(l=e==null?void 0:e.search)!=null&&l.task?{pipeline_tag:e.search.task}:void 0}),...gp.map(o=>["expand",o])]).toString();let n=`${(e==null?void 0:e.hubUrl)||pp}/api/models?${t}`;for(;n;){const o=await fetch(n,{headers:{accept:"application/json",...e!=null&&e.credentials?{Authorization:`Bearer ${e.credentials.accessToken}`}:void 0}});if(!o.ok)throw mp(o);const i=await o.json();for(const s of i)yield{id:s._id,name:s.id,private:s.private,task:s.pipeline_tag,downloads:s.downloads,gated:s.gated,likes:s.likes,updatedAt:new Date(s.lastModified)};const u=o.headers.get("Link");n=u?vp(u).next:void 0}}var Sp=Object.defineProperty,xp=(e,t)=>{for(var n in t)Sp(e,n,{get:t[n],enumerable:!0})},kp={};xp(kp,{audioClassification:()=>mc,automaticSpeechRecognition:()=>hc,conversational:()=>kc,documentQuestionAnswering:()=>Rc,featureExtraction:()=>Ec,fillMask:()=>Cc,imageClassification:()=>vc,imageSegmentation:()=>gc,imageToText:()=>wc,objectDetection:()=>Sc,questionAnswering:()=>jc,request:()=>B,sentenceSimilarity:()=>_c,streamingRequest:()=>Ui,summarization:()=>Nc,tableQuestionAnswering:()=>Tc,textClassification:()=>Oc,textGeneration:()=>Pc,textGenerationStream:()=>Np,textToImage:()=>xc,textToSpeech:()=>yc,tokenClassification:()=>Lc,translation:()=>zc,visualQuestionAnswering:()=>Ac,zeroShotClassification:()=>Ic});var Ep="https://api-inference.huggingface.co/models/";function pc(e,t){const{model:n,accessToken:r,...l}=e,o={};r&&(o.Authorization=`Bearer ${r}`);const i="data"in e&&!!e.data;i?(t!=null&&t.wait_for_model&&(o["X-Wait-For-Model"]="true"),(t==null?void 0:t.use_cache)===!1&&(o["X-Use-Cache"]="false"),t!=null&&t.dont_load_model&&(o["X-Load-Model"]="0")):o["Content-Type"]="application/json";const u=/^http(s?):/.test(n)||n.startsWith("/")?n:`${Ep}${n}`,s={headers:o,method:"POST",body:i?e.data:JSON.stringify({...l,options:t}),credentials:t!=null&&t.includeCredentials?"include":"same-origin"};return{url:u,info:s}}async function B(e,t){var o,i;const{url:n,info:r}=pc(e,t),l=await((t==null?void 0:t.fetch)??fetch)(n,r);if((t==null?void 0:t.retry_on_error)!==!1&&l.status===503&&!(t!=null&&t.wait_for_model))return B(e,{...t,wait_for_model:!0});if(!l.ok){if((o=l.headers.get("Content-Type"))!=null&&o.startsWith("application/json")){const u=await l.json();if(u.error)throw new Error(u.error)}throw new Error("An error occurred while fetching the blob")}return(i=l.headers.get("Content-Type"))!=null&&i.startsWith("application/json")?await l.json():await l.blob()}function Cp(e){let t,n,r,l=!1;return function(i){t===void 0?(t=i,n=0,r=-1):t=_p(t,i);const u=t.length;let s=0;for(;n0){const 
s=l.decode(i.subarray(0,u)),f=u+(i[u+1]===32?2:1),h=l.decode(i.subarray(f));switch(s){case"data":r.data=r.data?r.data+` -`+h:h;break;case"event":r.event=h;break;case"id":e(r.id=h);break;case"retry":const c=parseInt(h,10);isNaN(c)||t(r.retry=c);break}}}}function _p(e,t){const n=new Uint8Array(e.length+t.length);return n.set(e),n.set(t,e.length),n}function Ju(){return{data:"",event:"",id:"",retry:void 0}}async function*Ui(e,t){var f;const{url:n,info:r}=pc({...e,stream:!0},t),l=await((t==null?void 0:t.fetch)??fetch)(n,r);if((t==null?void 0:t.retry_on_error)!==!1&&l.status===503&&!(t!=null&&t.wait_for_model))return Ui(e,{...t,wait_for_model:!0});if(!l.ok){if((f=l.headers.get("Content-Type"))!=null&&f.startsWith("application/json")){const h=await l.json();if(h.error)throw new Error(h.error)}throw new Error(`Server response contains error: ${l.status}`)}if(l.headers.get("content-type")!=="text/event-stream")throw new Error("Server does not support event stream content type, it returned "+l.headers.get("content-type"));if(!l.body)return;const o=l.body.getReader();let i=[];const s=Cp(jp(()=>{},()=>{},h=>{i.push(h)}));try{for(;;){const{done:h,value:c}=await o.read();if(h)return;s(c);for(const v of i)if(v.data.length>0){const g=JSON.parse(v.data);if(typeof g=="object"&&g!==null&&"error"in g)throw new Error(g.error);yield g}i=[]}}finally{o.releaseLock()}}var Y=class extends TypeError{constructor(e){super(`Invalid inference output: ${e}. Use the 'request' method with the same parameters to do a custom call with no type checking.`),this.name="InferenceOutputError"}};async function mc(e,t){const n=await B(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.score=="number")))throw new Y("Expected Array<{label: string, score: number}>");return n}async function hc(e,t){const n=await B(e,t);if(!(typeof(n==null?void 0:n.text)=="string"))throw new Y("Expected {text: string}");return n}async function yc(e,t){const n=await B(e,t);if(!(n&&n instanceof Blob))throw new Y("Expected Blob");return n}async function vc(e,t){const n=await B(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.score=="number")))throw new Y("Expected Array<{label: string, score: number}>");return n}async function gc(e,t){const n=await B(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.mask=="string"&&typeof l.score=="number")))throw new Y("Expected Array<{label: string, mask: string, score: number}>");return n}async function wc(e,t){var r;const n=(r=await B(e,t))==null?void 0:r[0];if(typeof(n==null?void 0:n.generated_text)!="string")throw new Y("Expected {generated_text: string}");return n}async function Sc(e,t){const n=await B(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.score=="number"&&typeof l.box.xmin=="number"&&typeof l.box.ymin=="number"&&typeof l.box.xmax=="number"&&typeof l.box.ymax=="number")))throw new Y("Expected Array<{label:string; score:number; box:{xmin:number; ymin:number; xmax:number; ymax:number}}>");return n}async function xc(e,t){const n=await B(e,t);if(!(n&&n instanceof Blob))throw new Y("Expected Blob");return n}async function kc(e,t){const n=await B(e,t);if(!(Array.isArray(n.conversation.generated_responses)&&n.conversation.generated_responses.every(l=>typeof l=="string")&&Array.isArray(n.conversation.past_user_inputs)&&n.conversation.past_user_inputs.every(l=>typeof l=="string")&&typeof n.generated_text=="string"&&Array.isArray(n.warnings)&&n.warnings.every(l=>typeof l=="string")))throw new Y("Expected {conversation: 
{generated_responses: string[], past_user_inputs: string[]}, generated_text: string, warnings: string[]}");return n}async function Ec(e,t){const n=await B(e,t);let r=!0;if(Array.isArray(n)){for(const l of n)if(Array.isArray(l)){if(r=l.every(o=>typeof o=="number"),!r)break}else if(typeof l!="number"){r=!1;break}}else r=!1;if(!r)throw new Y("Expected Array");return n}async function Cc(e,t){const n=await B(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.score=="number"&&typeof l.sequence=="string"&&typeof l.token=="number"&&typeof l.token_str=="string")))throw new Y("Expected Array<{score: number, sequence: string, token: number, token_str: string}>");return n}async function jc(e,t){const n=await B(e,t);if(!(typeof n=="object"&&!!n&&typeof n.answer=="string"&&typeof n.end=="number"&&typeof n.score=="number"&&typeof n.start=="number"))throw new Y("Expected {answer: string, end: number, score: number, start: number}");return n}async function _c(e,t){const n=await B(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l=="number")))throw new Y("Expected number[]");return n}async function Nc(e,t){const n=await B(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof(l==null?void 0:l.summary_text)=="string")))throw new Y("Expected Array<{summary_text: string}>");return n==null?void 0:n[0]}async function Tc(e,t){const n=await B(e,t);if(!(typeof(n==null?void 0:n.aggregator)=="string"&&typeof n.answer=="string"&&Array.isArray(n.cells)&&n.cells.every(l=>typeof l=="string")&&Array.isArray(n.coordinates)&&n.coordinates.every(l=>Array.isArray(l)&&l.every(o=>typeof o=="number"))))throw new Y("Expected {aggregator: string, answer: string, cells: string[], coordinates: number[][]}");return n}async function Oc(e,t){var l;const n=(l=await B(e,t))==null?void 0:l[0];if(!(Array.isArray(n)&&n.every(o=>typeof(o==null?void 0:o.label)=="string"&&typeof o.score=="number")))throw new Y("Expected Array<{label: string, score: number}>");return n}async function Pc(e,t){const n=await B(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof(l==null?void 0:l.generated_text)=="string")))throw new Y("Expected Array<{generated_text: string}>");return n==null?void 0:n[0]}async function*Np(e,t){yield*Ui(e,t)}function Vi(e){return Array.isArray(e)?e:[e]}async function Lc(e,t){const n=Vi(await B(e,t));if(!(Array.isArray(n)&&n.every(l=>typeof l.end=="number"&&typeof l.entity_group=="string"&&typeof l.score=="number"&&typeof l.start=="number"&&typeof l.word=="string")))throw new Y("Expected Array<{end: number, entity_group: string, score: number, start: number, word: string}>");return n}async function zc(e,t){const n=await B(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof(l==null?void 0:l.translation_text)=="string")))throw new Y("Expected type Array<{translation_text: string}>");return n==null?void 0:n[0]}async function Ic(e,t){const n=Vi(await B(e,t));if(!(Array.isArray(n)&&n.every(l=>Array.isArray(l.labels)&&l.labels.every(o=>typeof o=="string")&&Array.isArray(l.scores)&&l.scores.every(o=>typeof o=="number")&&typeof l.sequence=="string")))throw new Y("Expected Array<{labels: string[], scores: number[], sequence: string}>");return n}function Fc(e){if(globalThis.Buffer)return globalThis.Buffer.from(e).toString("base64");{const t=[];return e.forEach(n=>{t.push(String.fromCharCode(n))}),globalThis.btoa(t.join(""))}}async function Rc(e,t){var o;const n={...e,inputs:{question:e.inputs.question,image:Fc(new Uint8Array(await e.inputs.image.arrayBuffer()))}},r=(o=Vi(await B(n,t)))==null?void 0:o[0];if(!(typeof(r==null?void 0:r.answer)=="string"&&(typeof 
r.end=="number"||typeof r.end>"u")&&(typeof r.score=="number"||typeof r.score>"u")&&(typeof r.start=="number"||typeof r.start>"u")))throw new Y("Expected Array<{answer: string, end?: number, score?: number, start?: number}>");return r}async function Ac(e,t){var o;const n={...e,inputs:{question:e.inputs.question,image:Fc(new Uint8Array(await e.inputs.image.arrayBuffer()))}},r=(o=await B(n,t))==null?void 0:o[0];if(!(typeof(r==null?void 0:r.answer)=="string"&&typeof r.score=="number"))throw new Y("Expected Array<{answer: string, score: number}>");return r}const O=e=>a.jsx("button",{className:`${e.variant==="secondary"?"border-4 border-yellow-200":"bg-yellow-200"} py-6 text-center w-full ${e.disabled?"cursor-not-allowed opacity-50":""}`,disabled:e.disabled??!1,onClick:e.onClick,children:e.label??"Submit"}),Mc=e=>a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:e.label??"Input"}),e.input?a.jsx("audio",{className:"w-full",controls:!0,src:URL.createObjectURL(e.input)}):a.jsxs("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file chosen",a.jsx("input",{accept:"audio/*",className:"hidden",onChange:t=>{t.target.files&&t.target.files[0]&&e.setInput(t.target.files[0])},type:"file"})]})]}),P=e=>{const t=(()=>{try{return JSON.stringify(e.output,void 0,2)}catch(n){if(n instanceof Error)return`Error during JSON.stringify: ${n.message}`}})();return a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:e.label??"Output"}),a.jsx("pre",{className:`bg-yellow-200 break-words p-6 select-text w-full whitespace-pre-wrap ${e.disabled?"cursor-wait opacity-50":""}`,children:t})]})},Tp="audio-classification",Op=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const c=await mc({data:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(Mc,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?u.map(c=>a.jsx(P,{disabled:r,output:c},c.label)):a.jsx(p.Fragment,{})]})},Pp="automatic-speech-recognition",Lp=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const c=await hc({data:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(Mc,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?a.jsx(P,{disabled:r,output:u}):a.jsx(p.Fragment,{})]})},J=e=>{const t=p.useRef(null);return p.useLayoutEffect(()=>{t.current&&(t.current.style.height="inherit",t.current.style.height=`${t.current.scrollHeight}px`)},[e.input]),a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:e.label??"Input"}),a.jsx("textarea",{className:"bg-yellow-200 py-6 resize-none text-center 
w-full",disabled:e.disabled??!1,onChange:n=>{!e.disabled&&e.setInput&&(n.target.value?e.setInput(n.target.value):e.setInput(""))},ref:t,rows:1,style:{height:t.current?`${t.current.scrollHeight}px`:"inherit"},value:e.input??""})]})},zp="conversational",Ip=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=()=>{t&&(l(!0),s(c=>c?{...c,conversation:{...c.conversation,past_user_inputs:[...c.conversation.past_user_inputs,t]}}:{conversation:{generated_responses:[],past_user_inputs:[t]},generated_text:"",warnings:[]}),n(void 0),kc({inputs:{generated_responses:u==null?void 0:u.conversation.generated_responses,past_user_inputs:u==null?void 0:u.conversation.past_user_inputs,text:t},model:e.model}).then(s).catch(i).finally(()=>l(!1)))};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t&&!u,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?Array.from({length:Math.max(u.conversation.generated_responses.length,u.conversation.past_user_inputs.length)}).map((c,v,g)=>a.jsxs(p.Fragment,{children:[u.conversation.generated_responses[g.length-v-1]?a.jsx(P,{disabled:r,label:`Output - Generated Response #${g.length-v}`,output:u.conversation.generated_responses[g.length-v-1]}):a.jsx(p.Fragment,{}),u.conversation.past_user_inputs[g.length-v-1]?a.jsx(J,{disabled:!0,label:`Output - Past User Input #${g.length-v}`,input:u.conversation.past_user_inputs[g.length-v-1]}):a.jsx(p.Fragment,{})]},v)):a.jsx(p.Fragment,{})]})},pn=e=>a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:e.label??"Input"}),e.input?a.jsx("img",{className:"w-full",src:URL.createObjectURL(e.input)}):a.jsxs("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file chosen",a.jsx("input",{accept:"image/*",className:"hidden",onChange:t=>{t.target.files&&t.target.files[0]&&e.setInput(t.target.files[0])},type:"file"})]})]}),Fp="document-question-answering",Rp=e=>{const[t,n]=p.useState(),[r,l]=p.useState(),[o,i]=p.useState(!1),[u,s]=p.useState(),[f,h]=p.useState(),c=()=>{n(void 0),l(void 0),s(void 0),h(void 0)},v=async()=>{if(t&&r){i(!0);try{const g=await Rc({inputs:{question:t,image:r},model:e.model});h(g)}catch(g){g instanceof Error&&s(g)}finally{i(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,label:"Input - Question",setInput:n}),a.jsx(pn,{input:r,label:"Input - Image",setInput:l}),a.jsx(O,{label:"Clear",disabled:o||!r,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:o||!r,onClick:v}),!o&&u?a.jsx(P,{disabled:o,label:"Error",output:u}):a.jsx(p.Fragment,{}),!u&&f?a.jsx(P,{disabled:o,output:f}):a.jsx(p.Fragment,{})]})},Ap="feature-extraction",Mp=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const c=await Ec({inputs:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?a.jsx(P,{disabled:r,output:u}):a.jsx(p.Fragment,{})]})},Dp="fill-mask",$p=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 
0)},h=async()=>{if(t){l(!0);try{const c=await Cc({inputs:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?u.map(c=>a.jsx(P,{disabled:r,output:c},c.token_str)):a.jsx(p.Fragment,{})]})},Up="image-classification",Vp=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const c=await vc({data:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(pn,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?u.map(c=>a.jsx(P,{disabled:r,output:c},c.label)):a.jsx(p.Fragment,{})]})},Bp="image-segmentation",Qp=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const c=await gc({data:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(pn,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?u.map(c=>a.jsx(P,{disabled:r,output:c},c.label)):a.jsx(p.Fragment,{})]})},Hp="image-to-text",Wp=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const c=await wc({data:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(pn,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?a.jsx(P,{disabled:r,output:u}):a.jsx(p.Fragment,{})]})},Kp="object-detection",Yp=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const c=await Sc({data:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(pn,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?u.map(c=>a.jsx(P,{disabled:r,output:c},c.label)):a.jsx(p.Fragment,{})]})},Xp="question-answering",Gp=e=>{const[t,n]=p.useState(),[r,l]=p.useState(),[o,i]=p.useState(!1),[u,s]=p.useState(),[f,h]=p.useState(),c=()=>{n(void 0),l(void 0),s(void 0),h(void 0)},v=async()=>{if(t&&r){i(!0);try{const g=await jc({inputs:{question:t,context:r},model:e.model});h(g)}catch(g){g instanceof Error&&s(g)}finally{i(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,label:"Input - Question",setInput:n}),a.jsx(J,{input:r,label:"Input - 
Context",setInput:l}),a.jsx(O,{label:"Clear",disabled:o||!t||!r,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:o||!t||!r,onClick:v}),!o&&u?a.jsx(P,{disabled:o,label:"Error",output:u}):a.jsx(p.Fragment,{}),!u&&f?a.jsx(P,{disabled:o,output:f}):a.jsx(p.Fragment,{})]})},Zp="sentence-similarity",qp=e=>{const[t,n]=p.useState(),r=Array.from({length:2}).map(()=>{}),[l,o]=p.useState(r),[i,u]=p.useState(!1),[s,f]=p.useState(),[h,c]=p.useState(),v=()=>{n(void 0),o(r),f(void 0),c(void 0)},g=async()=>{if(t&&l.every(Boolean)){u(!0);try{const w=await _c({inputs:{source_sentence:t,sentences:l},model:e.model});c(w)}catch(w){w instanceof Error&&f(w)}finally{u(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,label:"Input - Source Sentence",setInput:n}),l.map((w,k)=>a.jsx(J,{input:w,label:`Input - Sentence #${k+1}`,setInput:M=>o(m=>[...m.slice(0,k),M,...m.slice(k+1,m.length)])})),a.jsx(O,{disabled:i||!t||!l.every(Boolean),label:"Add Sentence",onClick:()=>o(w=>[...w,void 0])}),a.jsx(O,{disabled:i||!t||!l.every(Boolean),label:"Clear",onClick:v,variant:"secondary"}),a.jsx(O,{disabled:i||!t||!l.every(Boolean),onClick:g}),!i&&s?a.jsx(P,{disabled:i,label:"Error",output:s}):a.jsx(p.Fragment,{}),!s&&h?h.map((w,k)=>a.jsx(P,{disabled:i,label:`Output - Sentence #${k+1}`,output:w})):a.jsx(p.Fragment,{})]})},Jp="summarization",bp=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const c=await Nc({inputs:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?a.jsx(P,{disabled:r,output:u}):a.jsx(p.Fragment,{})]})},em=async e=>{const t=await e.text();try{const n=JSON.parse(t);try{return JSON.stringify(n,void 0,2)}catch(r){if(r instanceof Error)return`Error during JSON.stringify: ${r.message}`}}catch(n){if(n instanceof Error)return`Error during JSON.parse: ${n.message}`}},tm=e=>{const[t,n]=p.useState();return p.useEffect(()=>{e.input&&em(e.input).then(n)},[e.input]),a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:e.label??"Input"}),e.input?a.jsx("pre",{className:"bg-yellow-200 break-words p-6 select-text w-full whitespace-pre-wrap",children:t}):a.jsxs("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file chosen",a.jsx("input",{accept:".json",className:"hidden",onChange:r=>{r.target.files&&r.target.files[0]&&e.setInput(r.target.files[0])},type:"file"})]})]})},nm="table-question-answering",rm=e=>{const[t,n]=p.useState(),[r,l]=p.useState(),[o,i]=p.useState(!1),[u,s]=p.useState(),[f,h]=p.useState(),c=()=>{n(void 0),l(void 0),s(void 0),h(void 0)},v=async()=>{if(t&&r){i(!0);try{const g=await Tc({inputs:{query:t,table:JSON.parse(await r.text()??"{}")},model:e.model});h(g)}catch(g){g instanceof Error&&s(g)}finally{i(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,label:"Input - Query",setInput:n}),a.jsx(tm,{input:r,label:"Input - 
Table",setInput:l}),a.jsx(O,{label:"Clear",disabled:o||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:o||!t,onClick:v}),!o&&u?a.jsx(P,{disabled:o,label:"Error",output:u}):a.jsx(p.Fragment,{}),!u&&f?a.jsx(P,{disabled:o,output:f}):a.jsx(p.Fragment,{})]})},lm="text-classification",om=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const c=await Oc({inputs:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?u.map(c=>a.jsx(P,{disabled:r,output:c},c.label)):a.jsx(p.Fragment,{})]})},im="text-generation",um=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const c=await Pc({inputs:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?a.jsx(P,{disabled:r,output:u}):a.jsx(p.Fragment,{})]})},sm=e=>a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:e.label??"Output"}),a.jsx("img",{className:`w-full ${e.disabled?"cursor-wait opacity-50":""}`,src:URL.createObjectURL(e.output)})]}),am="text-to-image",cm=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const c=await xc({inputs:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?a.jsx(sm,{disabled:r,output:u}):a.jsx(p.Fragment,{})]})},fm=e=>a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:e.label??"Output"}),a.jsx("audio",{className:`w-full ${e.disabled?"cursor-wait opacity-50":""}`,controls:!0,src:URL.createObjectURL(e.output)})]}),dm="text-to-speech",pm=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const c=await yc({inputs:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?a.jsx(fm,{disabled:r,output:u}):a.jsx(p.Fragment,{})]})},mm="token-classification",hm=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const c=await Lc({inputs:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return 
a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?u.map(c=>a.jsx(P,{disabled:r,output:c},c.word)):a.jsx(p.Fragment,{})]})},ym="translation",vm=e=>{const[t,n]=p.useState(),[r,l]=p.useState(!1),[o,i]=p.useState(),[u,s]=p.useState(),f=()=>{n(void 0),i(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const c=await zc({inputs:t,model:e.model});s(c)}catch(c){c instanceof Error&&i(c)}finally{l(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&o?a.jsx(P,{disabled:r,label:"Error",output:o}):a.jsx(p.Fragment,{}),!o&&u?a.jsx(P,{disabled:r,output:u}):a.jsx(p.Fragment,{})]})},gm="visual-question-answering",wm=e=>{const[t,n]=p.useState(),[r,l]=p.useState(),[o,i]=p.useState(!1),[u,s]=p.useState(),[f,h]=p.useState(),c=()=>{n(void 0),l(void 0),s(void 0),h(void 0)},v=async()=>{if(t&&r){i(!0);try{const g=await Ac({inputs:{question:t,image:r},model:e.model});h(g)}catch(g){g instanceof Error&&s(g)}finally{i(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,label:"Input - Question",setInput:n}),a.jsx(pn,{input:r,label:"Input - Image",setInput:l}),a.jsx(O,{label:"Clear",disabled:o||!r,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:o||!r,onClick:v}),!o&&u?a.jsx(P,{disabled:o,label:"Error",output:u}):a.jsx(p.Fragment,{}),!u&&f?a.jsx(P,{disabled:o,output:f}):a.jsx(p.Fragment,{})]})},Sm="zero-shot-classification",xm=e=>{const[t,n]=p.useState(),r=Array.from({length:2}).map(()=>{}),[l,o]=p.useState(r),[i,u]=p.useState(!1),[s,f]=p.useState(),[h,c]=p.useState(),v=()=>{n(void 0),o(r),f(void 0),c(void 0)},g=async()=>{if(t&&l.every(Boolean)){u(!0);try{const w=await Ic({inputs:t,model:e.model,parameters:{candidate_labels:l}});c(w)}catch(w){w instanceof Error&&f(w)}finally{u(!1)}}};return a.jsxs(p.Fragment,{children:[a.jsx(J,{input:t,setInput:n}),l.map((w,k)=>a.jsx(J,{input:w,label:`Parameter - Candidate Label #${k+1}`,setInput:M=>o(m=>[...m.slice(0,k),M,...m.slice(k+1,m.length)])})),a.jsx(O,{disabled:i||!t||!l.every(Boolean),label:"Add Candidate Label",onClick:()=>o(w=>[...w,void 0])}),a.jsx(O,{disabled:i||!t||!l.every(Boolean),label:"Clear",onClick:v,variant:"secondary"}),a.jsx(O,{disabled:i||!t||!l.every(Boolean),onClick:g}),!i&&s?a.jsx(P,{disabled:i,label:"Error",output:s}):a.jsx(p.Fragment,{}),!s&&h?h.map((w,k)=>a.jsx(P,{disabled:i,output:w})):a.jsx(p.Fragment,{})]})},km=[Tp,Pp,zp,Fp,Ap,Dp,Up,Bp,Hp,Kp,Xp,Zp,Jp,nm,lm,im,am,dm,mm,ym,gm,Sm],Em=e=>{if(!e.model||!e.task)return a.jsx(p.Fragment,{});switch(e.task){case"audio-classification":return a.jsx(Op,{model:e.model});case"automatic-speech-recognition":return a.jsx(Lp,{model:e.model});case"conversational":return a.jsx(Ip,{model:e.model});case"document-question-answering":return a.jsx(Rp,{model:e.model});case"feature-extraction":return a.jsx(Mp,{model:e.model});case"fill-mask":return a.jsx($p,{model:e.model});case"image-classification":return a.jsx(Vp,{model:e.model});case"image-segmentation":return a.jsx(Qp,{model:e.model});case"image-to-text":return a.jsx(Wp,{model:e.model});case"object-detection":return a.jsx(Yp,{model:e.model});case"question-answering":return a.jsx(Gp,{model:e.model});case"sentence-similarity":return a.jsx(qp,{model:e.model});case"summarization":return a.jsx(bp,{model:e.model});case"table-question-answering":return 
a.jsx(rm,{model:e.model});case"text-classification":return a.jsx(om,{model:e.model});case"text-generation":return a.jsx(um,{model:e.model});case"text-to-image":return a.jsx(cm,{model:e.model});case"text-to-speech":return a.jsx(pm,{model:e.model});case"token-classification":return a.jsx(hm,{model:e.model});case"translation":return a.jsx(vm,{model:e.model});case"visual-question-answering":return a.jsx(wm,{model:e.model});case"zero-shot-classification":return a.jsx(xm,{model:e.model});default:return a.jsx(p.Fragment,{})}},Cm=e=>a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:"Task"}),a.jsxs("select",{className:"bg-yellow-200 cursor-pointer py-6 text-center w-full",onChange:t=>e.onTaskSelect(t.target.value),placeholder:"Select a task",value:e.task,children:[a.jsx("option",{children:"Select a task"}),km.map(t=>a.jsx("option",{value:t,children:t},t))]})]}),Jl={},jm=async e=>{if(Jl[e])return Jl[e];const t=[];for await(const n of wp({search:{task:e}}))t.push(n);return t.sort((n,r)=>n.downloads>r.downloads?-1:n.downloadsr.likes?-1:n.likesr.name?-1:n.name{const[t,n]=p.useState(!1),[r,l]=p.useState([]);return p.useEffect(()=>{l([]),e.task&&(n(!0),jm(e.task).then(o=>l(o)).finally(()=>n(!1)))},[e.task]),r.length>0?a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:"Model"}),a.jsxs("select",{className:"bg-yellow-200 cursor-pointer py-6 text-center w-full",onChange:o=>e.onModelSelect(o.target.value),placeholder:"Select a model",value:e.model,children:[a.jsx("option",{children:"Select a model"}),r.map(o=>a.jsx("option",{value:o.name,children:o.name},o.name))]}),e.model?a.jsx("div",{className:"font-bold py-6 text-center text-yellow-200",children:a.jsx("a",{href:`https://huggingface.co/${e.model}`,rel:"noopener noferrer",target:"_blank",children:"View model on 🤗"})}):a.jsx(p.Fragment,{})]}):a.jsx("p",{className:"text-center w-full",children:e.task?t?"Loading models for this task":"No models available for this task":"Select a task to view available models"})},Nm=()=>{const[e,t]=p.useState(),[n,r]=p.useState(),l=o=>{r(void 0),t(o)};return a.jsx("div",{className:"bg-yellow-500 flex flex-col h-full items-center min-h-screen min-w-screen overflow-auto w-full",children:a.jsxs("div",{className:"flex flex-col items-center justify-center py-24 space-y-12 w-2/3 lg:w-1/3",children:[a.jsx("header",{className:"text-center text-6xl",children:"🤗"}),a.jsx(Cm,{onTaskSelect:l,task:e}),a.jsx(_m,{model:n,onModelSelect:r,task:e}),a.jsx(Em,{model:n,task:e})]})})};const Tm=()=>{const e="root",t=document.getElementById(e);if(t){const n=dc(t),r=a.jsx(p.StrictMode,{children:a.jsx(Nm,{})});n.render(r)}};Tm(); diff --git a/spaces/mikeee/ultimatumbee/ubee/__main__.py b/spaces/mikeee/ultimatumbee/ubee/__main__.py deleted file mode 100644 index d8d160b6a918a990e0bc09bbff3e92d70044b2b9..0000000000000000000000000000000000000000 --- a/spaces/mikeee/ultimatumbee/ubee/__main__.py +++ /dev/null @@ -1,287 +0,0 @@ -"""Gen ubee main. 
- -private -url = 'https://hf.space/embed/mikeee/zero-shot/+/api/predict' -resp = httpx.post( - url, - json={"data": ["love", ",".join(["liebe", "this is test", "hate you"]), False]}, - timeout=httpx.Timeout(None, connect=3), -) -resp.json() -{'data': [{'label': 'liebe', - 'confidences': [{'label': 'liebe', 'confidence': 0.8688847422599792}, - {'label': 'this is test', 'confidence': 0.12558135390281677}, - {'label': 'hate you', 'confidence': 0.005533925257623196}]}], - 'duration': 0.265749454498291, - 'average_duration': 4.639325571060181} - -""" -# pylint: disable=unused-import, wrong-import-position, wrong-import-order, too-many-locals, broad-except, line-too-long - -import sys -from itertools import zip_longest -from pathlib import Path -from random import shuffle -from textwrap import dedent -from typing import Optional, Tuple - -import gradio as gr -import logzero -import pandas as pd -from icecream import ic -from icecream import install as ic_install -from logzero import logger -from set_loglevel import set_loglevel - -from ubee import __version__ -from ubee.ubee import ubee - -# for embeddable python -# if "." not in sys.path: sys.path.insert(0, ".") - -logzero.loglevel(set_loglevel()) -logger.debug(" debug on ") - -ic_install() -ic.configureOutput( - includeContext=True, - outputFunction=logger.info, -) -ic.enable() -# ic.disable() # to turn off - -ic(" ic.enabled ") - -_ = """ -ic("Testing...") -import model_pool -from model_pool import fetch_check_aux -print("model-pool version", model_pool.__version__) -print("gradio version", gr.__version__) - -try: - fetch_check_aux.fetch_check_aux() -except Exception as _: - ic(["fetch_check_aux.fetch_check_aux", _]) - -from model_pool.load_model import load_model -try: - clas = load_model("clas-l-user") -except Exception as _: - ic(["load_model(\"clas-l-user\")", _]) -# """ - -# _ = clas("love", ["liebe", "hate you", "test"]) -# print(_) -# raise SystemExit("Exit by intention") -# {'sequence': 'love', 'labels': ['liebe', 'test', 'hate you'], -# 'scores': [0.8885253667831421, 0.10581762343645096, 0.005657028406858444]} -# Runs OK - - -# segment: str -def ifn(text1, text2, thresh): - """Take inputs, return outputs. 
- - Args: - text1: text - text2: text - Returns: - pd.DataFrame - """ - res1 = [elm.strip() for elm in text1.splitlines() if elm.strip()] - res2 = [elm.strip() for elm in text2.splitlines() if elm.strip()] - - ic(res1) - ic(res2) - - # _ = pd.DataFrame(zip_longest(res1, res2), columns=["text1", "text2"]) - # return _ - - res1_, res2_ = ubee(res1, res2, thresh) - # res1_, res2_ = res1, res2 - - out_df = pd.DataFrame( - zip_longest(res1, res2), - columns=["text1", "text2"], - ) - - if res2_: - _ = pd.DataFrame(res2_, columns=["text1", "text2"]) - else: - _ = None - - # return out_df, pd.DataFrame(res1_, columns=["text1", "text2", "likelihood"]), _ - - df = pd.DataFrame(res1_, columns=["text1", "text2", "likelihood"]) - html1 = df.to_html() if df is not None else df - - html2 = _.to_html() if _ is not None else _ - - return html1, html2 - - -def main(): - """Create main entry.""" - # global text1, text2, thresh - - text_zh = Path("data/test_zh.txt").read_text(encoding="utf8") - text_zh = [elm.strip() for elm in text_zh.splitlines() if elm.strip()][:10] - text_zh = "\n\n".join(text_zh) - - text_en = [ - elm.strip() - for elm in Path("data/test_en.txt").read_text(encoding="utf8").splitlines() - if elm.strip() - ] - _ = text_en[:9] - shuffle(_) - text_en = "\n\n".join(_) - - title = "Ultimatumbee" - theme = "dark-grass" - theme = "grass" - description = """WIP showcasing a novel aligner""" - article = dedent( - """ - ## NB - - * The ultimatumbee aligner (``ubee`` for short) is intended for aligning text blocks (be it paragraphs, sentences or words). Since it is rather slow (30 para pairs (Wuthering Heights ch1. for example) can take 10 to 20 minutes), anything more than 50 blocks should probably be avoided. Nevertheless, you are welcome to try. No big brother is watching. - - * ``thresh``: longer text blocks justify a larger value; `.5` appears to be just right for paragraphs of Wuthering Heights ch1. - - Stay tuned for more details coming soon... 
- """ - ).strip() - - ex1_zh = [ - "雪开始下大了。", - "我握住门柄又试一回。", - "这时一个没穿外衣的年轻人,扛着一根草耙,在后面院子里出现了。", - "他招呼我跟着他走,穿过了一个洗衣房和一片铺平的地,那儿有煤棚、抽水机和鸽笼,我们终于到了我上次被接待过的那间温暖的、热闹的大屋子。", - "煤、炭和木材混合在一起燃起的熊熊炉火,使这屋子放着光彩。", - "在准备摆上丰盛晚餐的桌旁,我很高兴地看到了那位“太太”,以前我从未料想到会有这么一个人存在的。", - "我鞠躬等候,以为她会叫我坐下。", - "她望望我,往她的椅背一靠,不动,也不出声。", - ] - ex1_en = [ - "The snow began to drive thickly.", - "I seized the handle to essay another trial; when a young man without coat, and shouldering a pitchfork, appeared in the yard behind.", - "He hailed me to follow him, and, after marching through a wash-house, and a paved area containing a coal shed, pump, and pigeon cot, we at length arrived in the huge, warm, cheerful apartment, where I was formerly received.", - "It glowed delightfully in the radiance of an immense fire, compounded of coal, peat, and wood; and near the table, laid for a plentiful evening meal, I was pleased to observe the `missis', an individual whose existence I had never previously suspected.", - "I bowed and waited, thinking she would bid me take a seat.", - "She looked at me, leaning back in her chair, and remained motionless and mute.", - ] - shuffle(ex1_en) - ex1_zh = "\n".join(ex1_zh) - ex1_en = "\n".join(ex1_en) - - ex2_zh = "她\n望望\n我\n往\n她的\n椅背\n一靠\n不\n动\n也\n不\n出声" - ex2_en = "She looked at me leaning back in her chair and remained motionless and mute".split() - shuffle(ex2_en) - ex2_en = "\n".join(ex2_en) - - examples = [ - [ex2_zh, ex2_en, 0.3], - [text_zh, text_en, 0.5], - ] - lines = 15 - placeholder = "Type or paste text here" - - # blocks = gr.Blocks() - - with gr.Blocks() as blocks: - gr.Markdown( - dedent( - f""" - ## Ultimatumbee {__version__} - - Align non-sequential dualtexts. - - 可对词、句、段,每个词(或句或段)一行。可对任意语言对(英中、英德、德法、中日……等等)。建议 threshold 门槛值 -- 词: 0.3,句:0.5, 段: 0.7。如果太多 leftover,可适当调小 threshold。 如果太多误对则可以适当调大 threshold。 - - """ - ).strip() - ) - with gr.Column(): - with gr.Row(): - text1 = gr.inputs.Textbox( - lines=lines, placeholder=placeholder, default=ex1_zh, label="text1" - ) - text2 = gr.inputs.Textbox( - lines=lines, placeholder=placeholder, default=ex1_en, label="text2" - ) - with gr.Row(): - thresh = gr.Slider( - minimum=0.1, - maximum=0.9, - step=0.1, - value=0.4, - label="threshold", - ) - btn = gr.Button("Run") - - _ = """ - out_df = gr.outputs.Dataframe( - headers=None, - max_rows=lines, # 20 - max_cols=None, - overflow_row_behaviour="paginate", - type="auto", - label="To be aligned", - ) - # """ - - with gr.Row(): - _ = """ - aligned = gr.Dataframe( - headers=None, - max_rows=lines, # 20 - max_cols=None, - overflow_row_behaviour="paginate", - type="auto", - label="Aligned", - ) - leftover = gr.Dataframe( - headers=None, - max_rows=lines, # 20 - max_cols=None, - overflow_row_behaviour="paginate", - type="auto", - label="Leftover", - ) - # """ - - aligned = gr.HTML(label="Aligned") - leftover = gr.HTML(label="Leftover") - - btn.click( - fn=ifn, - inputs=[ - text1, - text2, - thresh, - ], - outputs=[ - # out_df, - aligned, - leftover, - ], - ) - - # blocks.launch() - blocks.launch(debug=True, enable_queue=True) - - -if __name__ == "__main__": - # logger.info(" Start main()") - main() - -_ = """ - - gr.inputs.Radio( - ["para", "sent", "word"], - default="para", - label="segment" - ) -# """ diff --git a/spaces/miyaaa666/bingo/src/components/ui/button.tsx b/spaces/miyaaa666/bingo/src/components/ui/button.tsx deleted file mode 100644 index 281da005124fa94c89a9a9db7605748a92b60865..0000000000000000000000000000000000000000 --- a/spaces/miyaaa666/bingo/src/components/ui/button.tsx +++ 
/dev/null @@ -1,57 +0,0 @@ -import * as React from 'react' -import { Slot } from '@radix-ui/react-slot' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const buttonVariants = cva( - 'inline-flex items-center justify-center rounded-md text-sm font-medium shadow ring-offset-background transition-colors outline-none disabled:pointer-events-none disabled:opacity-50', - { - variants: { - variant: { - default: - 'bg-primary text-primary-foreground shadow-md hover:bg-primary/90', - destructive: - 'bg-destructive text-destructive-foreground hover:bg-destructive/90', - outline: - 'border border-input hover:bg-accent hover:text-accent-foreground', - secondary: - 'bg-secondary text-secondary-foreground hover:bg-secondary/80', - ghost: 'shadow-none hover:bg-accent hover:text-accent-foreground', - link: 'text-primary underline-offset-4 shadow-none hover:underline' - }, - size: { - default: 'h-8 px-4 py-2', - sm: 'h-8 rounded-md px-3', - lg: 'h-11 rounded-md px-8', - icon: 'h-8 w-8 p-0' - } - }, - defaultVariants: { - variant: 'default', - size: 'default' - } - } -) - -export interface ButtonProps - extends React.ButtonHTMLAttributes<HTMLButtonElement>, - VariantProps<typeof buttonVariants> { - asChild?: boolean -} - -const Button = React.forwardRef<HTMLButtonElement, ButtonProps>( - ({ className, variant, size, asChild = false, ...props }, ref) => { - const Comp = asChild ? Slot : 'button' - return ( - <Comp - className={cn(buttonVariants({ variant, size, className }))} - ref={ref} - {...props} - /> - ) - } -) -Button.displayName = 'Button' - -export { Button, buttonVariants } diff --git a/spaces/mowang/mowang/app.py b/spaces/mowang/mowang/app.py deleted file mode 100644 index 2439c5cec6b61e8a517f957daf710cbb6b5c3cf6..0000000000000000000000000000000000000000 --- a/spaces/mowang/mowang/app.py +++ /dev/null @@ -1,62 +0,0 @@ -from upcunet_v3 import RealWaifuUpScaler -import gradio as gr -import time -import logging -import os -from PIL import ImageOps -import numpy as np -import math - - -def greet(input_img, input_model_name, input_tile_mode): - # if input_img.size[0] * input_img.size[1] > 256 * 256: - # y = int(math.sqrt(256*256/input_img.size[0]*input_img.size[1])) - # x = int(input_img.size[0]/input_img.size[1]*y) - # input_img = ImageOps.fit(input_img, (x, y)) - input_img = np.array(input_img) - if input_model_name not in model_cache: - t1 = time.time() - upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu") - t2 = time.time() - logger.info(f'load model time, {t2 - t1}') - model_cache[input_model_name] = upscaler - else: - upscaler = model_cache[input_model_name] - logger.info(f'load model from cache') - - start = time.time() - result = upscaler(input_img, tile_mode=input_tile_mode) - end = time.time() - logger.info(f'input_model_name, {input_model_name}') - logger.info(f'input_tile_mode, {input_tile_mode}') - logger.info(f'input shape, {input_img.shape}') - logger.info(f'output shape, {result.shape}') - logger.info(f'speed time, {end - start}') - return result - - -if __name__ == '__main__': - logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s") - logger = logging.getLogger() - - ModelPath = "weights_v3/" - model_cache = {} - - input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='select model') - input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, label='select tile_mode') - input_img = gr.inputs.Image(label='image', type='pil') - - inputs = [input_img, input_model_name, input_tile_mode] - outputs = "image" - iface = gr.Interface(fn=greet, - 
inputs=inputs, - outputs=outputs, - allow_screenshot=False, - allow_flagging='never', - examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]], - article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN)
' - '感谢b站开源的项目,图片过大会导致内存不足,所以我将图片裁剪小,想体验大图片的效果请自行前往上面的链接。
' - 'Large images will exceed the memory limit, so I crop and resize the image. ' - 'If you want to experience the large image, please go to the link above.') - iface.launch() diff --git a/spaces/mserras/somos-alpaca-es/Dockerfile b/spaces/mserras/somos-alpaca-es/Dockerfile deleted file mode 100644 index a98814b1a6a7949eb8cb8fbf2c90c65a8c0c1005..0000000000000000000000000000000000000000 --- a/spaces/mserras/somos-alpaca-es/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM argilla/argilla-quickstart:latest - -COPY load_data.py / - -RUN pip install argilla[listeners] - -CMD whoami && /start_quickstart_argilla.sh \ No newline at end of file diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py b/spaces/mshukor/UnIVAL/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py deleted file mode 100644 index 9e7b655feee0042d42ac2b13cec5f1d2a88e201e..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.models import register_model, register_model_architecture -from fairseq.models.multilingual_transformer import MultilingualTransformerModel -from fairseq.models.transformer import ( - TransformerDecoder, - TransformerEncoder, - base_architecture, -) -from fairseq.utils import safe_hasattr - -from .latent_transformer import LatentTransformerDecoder, LatentTransformerEncoder - - -@register_model("latent_multilingual_transformer") -class LatentMultilingualTransformerModel(MultilingualTransformerModel): - """A variant of the standard multilingual Transformer model whose encoder and/or - decoder supports latent depth, as in "Deep Transformer with Latent Depth" - (https://arxiv.org/abs/2009.13102). 
- """ - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - MultilingualTransformerModel.add_args(parser) - parser.add_argument( - '--soft-select', - action='store_true', - help='use soft samples in training an inference', - ) - parser.add_argument( - '--sampling-tau', - type=float, - default=5., - help='sampling temperature', - ) - - @classmethod - def _get_module_class(cls, is_encoder, args, lang_dict, embed_tokens, langs): - if is_encoder: - if safe_hasattr(args, "encoder_latent_layer") and args.encoder_latent_layer: - return LatentTransformerEncoder( - args, lang_dict, embed_tokens, num_logits=len(langs) - ) - else: - return TransformerEncoder(args, lang_dict, embed_tokens) - else: - if safe_hasattr(args, "decoder_latent_layer") and args.decoder_latent_layer: - return LatentTransformerDecoder( - args, lang_dict, embed_tokens, num_logits=len(langs) - ) - else: - return TransformerDecoder(args, lang_dict, embed_tokens) - - -@register_model_architecture( - "latent_multilingual_transformer", "latent_multilingual_transformer" -) -def latent_multilingual_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.encoder_layers = getattr(args, "encoder_layers", 12) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.decoder_layers = getattr(args, "decoder_layers", 24) - args.share_encoders = getattr(args, "share_encoders", True) - args.share_decoders = getattr(args, "share_decoders", True) - args.share_encoder_embeddings = getattr(args, "share_encoder_embeddings", True) - args.share_decoder_embeddings = getattr(args, "share_decoder_embeddings", True) - - base_architecture(args) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/amp_optimizer.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/amp_optimizer.py deleted file mode 100644 index 3b7958e50ce444474c48d1f5aeff05d66c19e5b6..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/amp_optimizer.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import torch -from fairseq import optim -from omegaconf import DictConfig - -logger = logging.getLogger(__name__) - - -class AMPOptimizer(optim.FairseqOptimizer): - """ - Wrap an *optimizer* to support AMP (automatic mixed precision) training. 
- """ - - def __init__(self, cfg: DictConfig, params, fp32_optimizer, **kwargs): - super().__init__(cfg.optimizer) - self.fp32_optimizer = fp32_optimizer - amp_kwargs = {"init_scale": cfg.common.fp16_init_scale} - if getattr(cfg.common, "amp_scale_window", None) is not None: - amp_kwargs["growth_interval"] = cfg.common.amp_init_scale - self._grad_scaler = torch.cuda.amp.GradScaler(**amp_kwargs) - self.min_loss_scale = cfg.common.min_loss_scale - - @classmethod - def build_optimizer(cls, cfg: DictConfig, params, **kwargs): - """ - Args: - cfg (omegaconf.DictConfig): fairseq args - params (iterable): iterable of parameters to optimize - """ - fp32_optimizer = optim.build_optimizer(cfg.optimizer, params) - return cls(cfg, params, fp32_optimizer, **kwargs) - - def backward(self, loss): - """Computes the sum of gradients of the given tensor w.r.t. graph leaves. - - Compared to :func:`fairseq.optim.FairseqOptimizer.backward`, this - function additionally dynamically scales the loss to avoid gradient - underflow. - """ - self._grad_scaler.scale(loss).backward() - - def step(self): - self.scaler.step(self.fp32_optimizer) - self.scaler.update() - - def clip_grad_norm(self, max_norm, aggregate_norm_fn=None): - """Clips gradient norm.""" - self.scaler.unscale_(self.optimizer) - grad_norm = self.fp32_optimizer.clip_grad_norm(max_norm, aggregate_norm_fn) - if not torch.isfinite(grad_norm).all(): - new_loss_scale = self.next_loss_scale - if new_loss_scale <= self.min_loss_scale: - raise FloatingPointError( - ( - "AMP: Minimum loss scale reached ({}). Your loss is probably exploding. " - "Try restarting training or use fp32. {}" - ).format(self.min_loss_scale, new_loss_scale) - ) - else: - logger.info("AMP: overflow detected, setting scale to " - f"to {new_loss_scale}") - return grad_norm - - @property - def scaler(self): - return self._grad_scaler - - @property - def next_loss_scale(self): - return self.scaler.get_scale() * self.scaler.get_backoff_factor() - - @property - def optimizer(self): - return self.fp32_optimizer.optimizer - - @optimizer.setter - def optimizer(self, optimizer): - self.fp32_optimizer.optimizer = optimizer - - @property - def lr_scheduler(self): - return getattr(self.fp32_optimizer, "lr_scheduler", None) - - @property - def optimizer_config(self): - return self.fp32_optimizer.optimizer_config - - def get_lr(self): - return self.fp32_optimizer.get_lr() - - def set_lr(self, lr): - self.fp32_optimizer.set_lr(lr) - - def all_reduce_grads(self, module): - self.fp32_optimizer.all_reduce_grads(module) - - @property - def supports_flat_params(self): - return self.fp32_optimizer.supports_flat_params diff --git a/spaces/mthsk/sovits-models-misc/modules/commons.py b/spaces/mthsk/sovits-models-misc/modules/commons.py deleted file mode 100644 index 074888006392e956ce204d8368362dbb2cd4e304..0000000000000000000000000000000000000000 --- a/spaces/mthsk/sovits-models-misc/modules/commons.py +++ /dev/null @@ -1,188 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -def slice_pitch_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - -def rand_slice_segments_with_pitch(x, pitch, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * 
ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - ret_pitch = slice_pitch_segments(pitch, ids_str, segment_size) - return ret, ret_pitch, ids_str - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def rand_spec_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, 
:n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/nahue-passano/librispeech-corpus-generator/utils/files.py b/spaces/nahue-passano/librispeech-corpus-generator/utils/files.py deleted file mode 100644 index f46e3dc270c4b2e52232c8d0053b515a3512b4c6..0000000000000000000000000000000000000000 --- a/spaces/nahue-passano/librispeech-corpus-generator/utils/files.py +++ /dev/null @@ -1,71 +0,0 @@ -from pathlib import Path -import zipfile -import shutil -import io -import streamlit as st - - -def save_temp_file(file: st.runtime.uploaded_file_manager.UploadedFile) -> Path: - """Saves a Streamlit uploaded file in a temporary directory - - Parameters - ---------- - file : st.runtime.uploaded_file_manager.UploadedFile - File returned by st.file_uploader - - Returns - ------- - Path - Path where the file is saved temporarily - """ - temp_dir = Path(".temp") - temp_file_path = temp_dir.joinpath(file.name) - with open(str(temp_file_path), "wb") as temp_file: - temp_file.write(file.getvalue()) - return temp_file_path - - -def create_temp_directory(dir_name: str = ".temp") -> Path: - """Create a temporary directory. - - Parameters - ---------- - dir_name : str, optional - Name of the temporary directory, by default ".temp" - - Returns - ------- - Path - Path object representing the created temporary directory. - """ - temp_dir = Path(dir_name) - temp_dir.mkdir(exist_ok=True) - return temp_dir - - -def clean_temp_directory() -> None: - """Cleans .temp directory""" - shutil.rmtree(Path(".temp")) - - -def compress_utterances_folder(utterances_folder: Path) -> io.BytesIO: - """Compresses the contents of utterances_folder into a zip file. - - Parameters - ---------- - utterances_folder : Path - Path to the folder containing utterances. - - Returns - ------- - io.BytesIO - A BytesIO object representing the compressed zip file. 
- """ - memory_file = io.BytesIO() - with zipfile.ZipFile(memory_file, "w") as zip_file: - for file_i in utterances_folder.iterdir(): - zip_file.write(str(file_i), arcname=file_i.name) - - memory_file.seek(0) - clean_temp_directory() - return memory_file diff --git a/spaces/najimino/aicv/README.md b/spaces/najimino/aicv/README.md deleted file mode 100644 index 11bdb596c9e5c6cab87688b74b94f3157c628daf..0000000000000000000000000000000000000000 --- a/spaces/najimino/aicv/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: najimino AI職務経歴書生成(β) -emoji: ⚡ -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/activations.py b/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/activations.py deleted file mode 100644 index a4ff789cf336b4564e99198e0995bf39b8c79c15..0000000000000000000000000000000000000000 --- a/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/activations.py +++ /dev/null @@ -1,101 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Activation functions -""" - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -# SiLU https://arxiv.org/pdf/1606.08415.pdf ---------------------------------------------------------------------------- -class SiLU(nn.Module): # export-friendly version of nn.SiLU() - @staticmethod - def forward(x): - return x * torch.sigmoid(x) - - -class Hardswish(nn.Module): # export-friendly version of nn.Hardswish() - @staticmethod - def forward(x): - # return x * F.hardsigmoid(x) # for TorchScript and CoreML - return x * F.hardtanh(x + 3, 0.0, 6.0) / 6.0 # for TorchScript, CoreML and ONNX - - -# Mish https://github.com/digantamisra98/Mish -------------------------------------------------------------------------- -class Mish(nn.Module): - @staticmethod - def forward(x): - return x * F.softplus(x).tanh() - - -class MemoryEfficientMish(nn.Module): - class F(torch.autograd.Function): - @staticmethod - def forward(ctx, x): - ctx.save_for_backward(x) - return x.mul(torch.tanh(F.softplus(x))) # x * tanh(ln(1 + exp(x))) - - @staticmethod - def backward(ctx, grad_output): - x = ctx.saved_tensors[0] - sx = torch.sigmoid(x) - fx = F.softplus(x).tanh() - return grad_output * (fx + x * sx * (1 - fx * fx)) - - def forward(self, x): - return self.F.apply(x) - - -# FReLU https://arxiv.org/abs/2007.11824 ------------------------------------------------------------------------------- -class FReLU(nn.Module): - def __init__(self, c1, k=3): # ch_in, kernel - super().__init__() - self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1, bias=False) - self.bn = nn.BatchNorm2d(c1) - - def forward(self, x): - return torch.max(x, self.bn(self.conv(x))) - - -# ACON https://arxiv.org/pdf/2009.04759.pdf ---------------------------------------------------------------------------- -class AconC(nn.Module): - r""" ACON activation (activate or not). - AconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is a learnable parameter - according to "Activate or Not: Learning Customized Activation" . 
- """ - - def __init__(self, c1): - super().__init__() - self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1)) - self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1)) - self.beta = nn.Parameter(torch.ones(1, c1, 1, 1)) - - def forward(self, x): - dpx = (self.p1 - self.p2) * x - return dpx * torch.sigmoid(self.beta * dpx) + self.p2 * x - - -class MetaAconC(nn.Module): - r""" ACON activation (activate or not). - MetaAconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is generated by a small network - according to "Activate or Not: Learning Customized Activation" . - """ - - def __init__(self, c1, k=1, s=1, r=16): # ch_in, kernel, stride, r - super().__init__() - c2 = max(r, c1 // r) - self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1)) - self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1)) - self.fc1 = nn.Conv2d(c1, c2, k, s, bias=True) - self.fc2 = nn.Conv2d(c2, c1, k, s, bias=True) - # self.bn1 = nn.BatchNorm2d(c2) - # self.bn2 = nn.BatchNorm2d(c1) - - def forward(self, x): - y = x.mean(dim=2, keepdims=True).mean(dim=3, keepdims=True) - # batch-size 1 bug/instabilities https://github.com/ultralytics/yolov5/issues/2891 - # beta = torch.sigmoid(self.bn2(self.fc2(self.bn1(self.fc1(y))))) # bug/unstable - beta = torch.sigmoid(self.fc2(self.fc1(y))) # bug patch BN layers removed - dpx = (self.p1 - self.p2) * x - return dpx * torch.sigmoid(beta * dpx) + self.p2 * x diff --git a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/framework.py b/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/framework.py deleted file mode 100644 index a4375b659a91267d3db9278f72bd1f0b030a4655..0000000000000000000000000000000000000000 --- a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/framework.py +++ /dev/null @@ -1,90 +0,0 @@ -# Mario Rosasco, 2016 -# adapted from framework.cpp, Copyright (C) 2010-2012 by Jason L. McKesson -# This file is licensed under the MIT License. -# -# NB: Unlike in the framework.cpp organization, the main loop is contained -# in the tutorial files, not in this framework file. Additionally, a copy of -# this module file must exist in the same directory as the tutorial files -# to be imported properly. - -import os -from OpenGL.GL import * - -# Function that creates and compiles shaders according to the given type (a GL enum value) and -# shader program (a file containing a GLSL program). -def loadShader(shaderType, shaderFile): - # check if file exists, get full path name - strFilename = findFileOrThrow(shaderFile) - shaderData = None - with open(strFilename, 'r') as f: - shaderData = f.read() - - shader = glCreateShader(shaderType) - glShaderSource(shader, shaderData) # note that this is a simpler function call than in C - - # This shader compilation is more explicit than the one used in - # framework.cpp, which relies on a glutil wrapper function. - # This is made explicit here mainly to decrease dependence on pyOpenGL - # utilities and wrappers, which docs caution may change in future versions. 
- glCompileShader(shader) - - status = glGetShaderiv(shader, GL_COMPILE_STATUS) - if status == GL_FALSE: - # Note that getting the error log is much simpler in Python than in C/C++ - # and does not require explicit handling of the string buffer - strInfoLog = glGetShaderInfoLog(shader) - strShaderType = "" - if shaderType is GL_VERTEX_SHADER: - strShaderType = "vertex" - elif shaderType is GL_GEOMETRY_SHADER: - strShaderType = "geometry" - elif shaderType is GL_FRAGMENT_SHADER: - strShaderType = "fragment" - - print("Compilation failure for " + strShaderType + " shader:\n" + str(strInfoLog)) - - return shader - - -# Function that accepts a list of shaders, compiles them, and returns a handle to the compiled program -def createProgram(shaderList): - program = glCreateProgram() - - for shader in shaderList: - glAttachShader(program, shader) - - glLinkProgram(program) - - status = glGetProgramiv(program, GL_LINK_STATUS) - if status == GL_FALSE: - # Note that getting the error log is much simpler in Python than in C/C++ - # and does not require explicit handling of the string buffer - strInfoLog = glGetProgramInfoLog(program) - print("Linker failure: \n" + str(strInfoLog)) - - for shader in shaderList: - glDetachShader(program, shader) - - return program - - -# Helper function to locate and open the target file (passed in as a string). -# Returns the full path to the file as a string. -def findFileOrThrow(strBasename): - # Keep constant names in C-style convention, for readability - # when comparing to C(/C++) code. - if os.path.isfile(strBasename): - return strBasename - - LOCAL_FILE_DIR = "data" + os.sep - GLOBAL_FILE_DIR = os.path.dirname(os.path.abspath(__file__)) + os.sep + "data" + os.sep - - strFilename = LOCAL_FILE_DIR + strBasename - if os.path.isfile(strFilename): - return strFilename - - strFilename = GLOBAL_FILE_DIR + strBasename - if os.path.isfile(strFilename): - return strFilename - - raise IOError('Could not find target file ' + strBasename) \ No newline at end of file diff --git a/spaces/nateraw/lavila/lavila/data/datasets.py b/spaces/nateraw/lavila/lavila/data/datasets.py deleted file mode 100644 index 22e296951c1ee958f8676f990ae1e3fd342b28e0..0000000000000000000000000000000000000000 --- a/spaces/nateraw/lavila/lavila/data/datasets.py +++ /dev/null @@ -1,517 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
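-# This module provides decord-based video loaders (for whole files or fixed-length chunks), -# frame-id sampling helpers, and dataset classes for video-caption (CLIP-style) training, -# multiple-choice (MCQ) evaluation, and video classification.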
- -import csv -import glob -import json -import numpy as np -import os.path as osp -import pickle -import random - -import decord -import pandas as pd -import torch - - -def datetime2sec(str): - hh, mm, ss = str.split(':') - return int(hh) * 3600 + int(mm) * 60 + float(ss) - - -def video_loader(root, vid, second, end_second=None, chunk_len=300, fps=30, clip_length=32, jitter=False): - if chunk_len == -1: - vr = decord.VideoReader(osp.join(root, '{}.mp4'.format(vid))) - second_offset = second - if end_second is not None: - end_second = min(end_second, len(vr) / vr.get_avg_fps()) - else: - end_second = len(vr) / vr.get_avg_fps() - else: - chunk_start = int(second) // chunk_len * chunk_len - second_offset = second - chunk_start - vr = decord.VideoReader(osp.join(root, '{}.mp4'.format(vid), '{}.mp4'.format(chunk_start))) - if fps == -1: - fps = vr.get_avg_fps() - - # calculate frame_ids - frame_offset = int(np.round(second_offset * fps)) - total_duration = max(int((end_second - second) * fps), clip_length) - if chunk_len == -1: - if end_second <= second: - raise ValueError("end_second should be greater than second") - else: - frame_ids = get_frame_ids(frame_offset, min(frame_offset + total_duration, len(vr)), num_segments=clip_length, jitter=jitter) - else: - frame_ids = get_frame_ids(frame_offset, frame_offset + total_duration, num_segments=clip_length, jitter=jitter) - - # load frames - if max(frame_ids) < len(vr): - try: - frames = vr.get_batch(frame_ids).asnumpy() - except decord.DECORDError as error: - print(error) - frames = vr.get_batch([0] * len(frame_ids)).asnumpy() - else: - # find the remaining frames in the next chunk - try: - frame_ids_part1 = list(filter(lambda frame_id: frame_id < len(vr), frame_ids)) - frames_part1 = vr.get_batch(frame_ids_part1).asnumpy() - vr2 = decord.VideoReader(osp.join(root, '{}.mp4'.format(vid), '{}.mp4'.format(chunk_start + chunk_len))) - frame_ids_part2 = list(filter(lambda frame_id: frame_id >= len(vr), frame_ids)) - frame_ids_part2 = [min(frame_id % len(vr), len(vr2) - 1) for frame_id in frame_ids_part2] - frames_part2 = vr2.get_batch(frame_ids_part2).asnumpy() - frames = np.concatenate([frames_part1, frames_part2], axis=0) - # the next chunk does not exist; the current chunk is the last one - except (RuntimeError, decord.DECORDError) as error: - print(error) - frame_ids = get_frame_ids(min(frame_offset, len(vr) - 1), len(vr), num_segments=clip_length, jitter=jitter) - frames = vr.get_batch(frame_ids).asnumpy() - - frames = [torch.tensor(frame, dtype=torch.float32) for frame in frames] - return torch.stack(frames, dim=0) - - -def get_frame_ids(start_frame, end_frame, num_segments=32, jitter=True): - seg_size = float(end_frame - start_frame - 1) / num_segments - seq = [] - for i in range(num_segments): - start = int(np.round(seg_size * i) + start_frame) - end = int(np.round(seg_size * (i + 1)) + start_frame) - end = min(end, end_frame) - if jitter: - frame_id = np.random.randint(low=start, high=(end + 1)) - else: - frame_id = (start + end) // 2 - seq.append(frame_id) - return seq - - -def video_loader_by_frames(root, vid, frame_ids): - vr = decord.VideoReader(osp.join(root, vid)) - try: - frames = vr.get_batch(frame_ids).asnumpy() - frames = [torch.tensor(frame, dtype=torch.float32) for frame in frames] - except (IndexError, decord.DECORDError) as error: - print(error) - print("Erroneous video: ", vid) - frames = [torch.zeros((240, 320, 3)) for _ in range(len(frame_ids))] - return torch.stack(frames, dim=0) - - -class 
VideoCaptionDatasetBase(torch.utils.data.Dataset): - def __init__(self, dataset, root, metadata, is_trimmed=True): - self.dataset = dataset - self.root = root - self.is_trimmed = is_trimmed - - if self.dataset == 'ego4d': - with open(metadata, 'rb') as f: - self.samples = pickle.load(f) - elif self.dataset == 'ego4d_mcq': - with open(metadata, 'r') as f: - self.samples = json.load(f) - elif self.dataset in ['ek100_cls', 'ek100_mir']: - video_list = glob.glob(osp.join(self.root, '*/*.MP4')) - fps_dict = {video: decord.VideoReader(video).get_avg_fps() for video in video_list} - self.samples = [] - with open(metadata) as f: - csv_reader = csv.reader(f) - _ = next(csv_reader) # skip the header - for row in csv_reader: - pid, vid = row[1:3] - # start_frame, end_frame = int(row[6]), int(row[7]) - # Deprecated: some videos might have fps mismatch issue - start_timestamp, end_timestamp = datetime2sec(row[4]), datetime2sec(row[5]) - narration = row[8] - verb, noun = int(row[10]), int(row[12]) - vid_path = '{}/{}.MP4'.format(pid, vid) - fps = fps_dict[osp.join(self.root, vid_path)] - start_frame = int(np.round(fps * start_timestamp)) - end_frame = int(np.ceil(fps * end_timestamp)) - self.samples.append((vid_path, start_frame, end_frame, narration, verb, noun)) - if self.dataset == 'ek100_mir': - self.metadata_sentence = pd.read_csv(metadata[:metadata.index('.csv')] + '_sentence.csv') - if 'train' in metadata: - self.relevancy_mat = pickle.load(open(osp.join(osp.dirname(metadata), 'relevancy', 'caption_relevancy_EPIC_100_retrieval_train.pkl'), 'rb')) - elif 'test' in metadata: - self.relevancy_mat = pickle.load(open(osp.join(osp.dirname(metadata), 'relevancy', 'caption_relevancy_EPIC_100_retrieval_test.pkl'), 'rb')) - else: - raise ValueError('{} should contain either "train" or "test"!'.format(metadata)) - self.relevancy = .1 - elif self.dataset == 'egtea': - video_list = glob.glob(osp.join(self.root, '*/*')) - len_dict = {video: len(decord.VideoReader(video)) for video in video_list} - - vn_list, labels = [], [] - for row in open(osp.join(osp.dirname(metadata), 'action_idx.txt')): - row = row.strip() - vn = int(row.split(' ')[-1]) - vn_list.append(vn) - narration = ' '.join(row.split(' ')[:-1]) - labels.append(narration.replace('_', ' ').lower()) - # labels.append(narration) - mapping_act2narration = {vn: narration for vn, narration in zip(vn_list, labels)} - - self.samples = [] - with open(metadata) as f: - for row in f: - clip_id, action_idx = row.strip().split(' ')[:2] - video_id = '-'.join(clip_id.split('-')[:3]) - vid_relpath = osp.join(video_id, '{}.mp4'.format(clip_id)) - vid_fullpath = osp.join(self.root, video_id, '{}.mp4'.format(clip_id)) - self.samples.append((vid_relpath, 0, len_dict[vid_fullpath], mapping_act2narration[int(action_idx)])) - elif self.dataset == 'charades_ego': - video_list = glob.glob(osp.join(self.root, '*.mp4')) - fps_dict = {video: decord.VideoReader(video).get_avg_fps() for video in video_list} - self.samples = [] - with open(metadata) as f: - csv_reader = csv.reader(f) - _ = next(csv_reader) # skip the header - for row in csv_reader: - video_id = row[0] - if self.is_trimmed: - for action_tuple in row[9].split(';'): - if not action_tuple: - continue - action, start_timestamp, end_timestamp = action_tuple.split(' ') - start_timestamp, end_timestamp = float(start_timestamp), float(end_timestamp) - vid_path = '{}.mp4'.format(video_id) - fps = fps_dict[osp.join(self.root, vid_path)] - start_frame = int(np.round(fps * start_timestamp)) - end_frame = int(np.ceil(fps * 
end_timestamp)) - self.samples.append((vid_path, start_frame, end_frame, action)) - else: - if not row[9]: - action_list = [] - else: - action_list = [action_tuple.split(' ')[0] for action_tuple in row[9].split(';')] - vid_path = '{}.mp4'.format(video_id) - fps = fps_dict[osp.join(self.root, vid_path)] - duration = fps * float(row[10]) - self.samples.append((vid_path, 0, duration, action_list)) - elif self.dataset == 'charades_ego_trimmed': - with open(metadata, 'rb') as f: - self.samples = pickle.load(f) - else: - raise NotImplementedError - - def get_raw_item(self, i, is_training=True, num_clips=1, clip_length=32, clip_stride=2, sparse_sample=False, - narration_selection='random'): - if self.dataset == 'ego4d': - if len(self.samples[i]) == 4: - vid, start_second, end_second, narration = self.samples[i] - frames = video_loader(self.root, vid, start_second, - end_second=end_second, - clip_length=clip_length, - jitter=is_training) - if isinstance(narration, list): - if narration_selection == 'random': - narration = random.choice(narration) - elif narration_selection == 'concat': - narration = '. '.join(narration) - elif narration_selection == 'list': - narration = narration - else: - raise ValueError - return frames, narration - elif len(self.samples[i]) == 5: - # TODO: need better filtering strategy based on nll - vid, start_second, end_second, narration, _ = self.samples[i] - frames = video_loader(self.root, vid, start_second, - end_second=end_second, - clip_length=clip_length, - jitter=is_training) - if isinstance(narration, list): - if narration_selection == 'random': - narration = random.choice(narration) - elif narration_selection == 'concat': - narration = '. '.join(narration) - elif narration_selection == 'list': - narration = narration - else: - raise ValueError - return frames, narration - elif self.dataset == 'ego4d_mcq': - itemMCQ = self.samples[str(i)] - answerIndex = itemMCQ['answer'] - textQuery = itemMCQ['query']['clip_text'] - sampleOptions = itemMCQ['choices'] - frames_options = [] - narration_options = [] - for option_id in range(len(sampleOptions)): - option = sampleOptions[str(option_id)] - frames = video_loader(self.root, option['video_uid'], - float(option['clip_start']), end_second=float(option['clip_end']), - clip_length=clip_length, - jitter=is_training) - frames_options.append(frames) - narration_options.append(option['clip_text']) - return textQuery, frames_options, narration_options, answerIndex, itemMCQ['types'] - elif self.dataset == 'ek100_mir': - vid_path, start_frame, end_frame, narration, verb, noun = self.samples[i] - # from third_party.EgoVLP.base.base_dataset import sample_frames_start_end - # frame_ids = sample_frames_start_end(clip_length, start_frame, end_frame, sample='uniform', fix_start=None) - frame_ids = get_frame_ids(start_frame, end_frame, num_segments=clip_length, jitter=is_training) - frames = video_loader_by_frames(self.root, vid_path, frame_ids) - if is_training: - positive_list = np.where(self.relevancy_mat[i] > self.relevancy)[0].tolist() - if positive_list != []: - pos = random.sample(positive_list, min(len(positive_list), 1))[0] - if pos < len(self.metadata_sentence) and pos < self.relevancy_mat.shape[1]: - return frames, (self.metadata_sentence.iloc[pos][1], self.relevancy_mat[i][pos]) - else: - return frames, (narration, 1) - elif self.dataset == 'ek100_cls': - vid_path, start_frame, end_frame, narration, verb, noun = self.samples[i] - frame_ids = get_frame_ids(start_frame, end_frame, num_segments=clip_length, jitter=is_training) - 
frames = video_loader_by_frames(self.root, vid_path, frame_ids) - return frames, '{}:{}'.format(verb, noun) - elif self.dataset == 'egtea': - vid_path, start_frame, end_frame, sentence = self.samples[i] - if is_training: - assert num_clips == 1 - if end_frame < clip_length * clip_stride: - frames = video_loader_by_frames(self.root, vid_path, list(np.arange(0, end_frame))) - zeros = torch.zeros((clip_length * clip_stride - end_frame, *frames.shape[1:])) - frames = torch.cat((frames, zeros), dim=0) - frames = frames[::clip_stride] - else: - start_id = np.random.randint(0, end_frame - clip_length * clip_stride + 1) - frame_ids = np.arange(start_id, start_id + clip_length * clip_stride, clip_stride) - frames = video_loader_by_frames(self.root, vid_path, frame_ids) - else: - if end_frame < clip_length * clip_stride: - frames = video_loader_by_frames(self.root, vid_path, list(np.arange(0, end_frame))) - zeros = torch.zeros((clip_length * clip_stride - end_frame, *frames.shape[1:])) - frames = torch.cat((frames, zeros), dim=0) - frames = frames[::clip_stride] - frames = frames.repeat(num_clips, 1, 1, 1) - else: - frame_ids = [] - for start_id in np.linspace(0, end_frame - clip_length * clip_stride, num_clips, dtype=int): - frame_ids.extend(np.arange(start_id, start_id + clip_length * clip_stride, clip_stride)) - frames = video_loader_by_frames(self.root, vid_path, frame_ids) - return frames, sentence - elif self.dataset == 'charades_ego': - vid_path, start_frame, end_frame, action_list = self.samples[i] - if sparse_sample: - frame_ids = get_frame_ids(start_frame, end_frame, num_segments=num_clips * clip_length, jitter=is_training) - frames = video_loader_by_frames(self.root, vid_path, frame_ids) - else: - if end_frame < clip_length * clip_stride: - frames = video_loader_by_frames(self.root, vid_path, list(np.arange(0, end_frame))) - zeros = torch.zeros((clip_length * clip_stride - end_frame, *frames.shape[1:])) - frames = torch.cat((frames, zeros), dim=0) - frames = frames[::clip_stride] - frames = frames.repeat(num_clips, 1, 1, 1) - else: - frame_ids = [] - for start_id in np.linspace(0, end_frame - clip_length * clip_stride, num_clips, dtype=int): - frame_ids.extend(np.arange(start_id, start_id + clip_length * clip_stride, clip_stride)) - print('frame_ids:', frame_ids) - frames = video_loader_by_frames(self.root, vid_path, frame_ids) - return frames, action_list - elif self.dataset == 'charades_ego_trimmed': - vid, start_second, end_second, narration = self.samples[i] - frames = video_loader(self.root, vid, start_second, - end_second=end_second, - chunk_len=-1, # no chunk for CharadesEgo - fps=-1, # could be variable fps - clip_length=clip_length, - jitter=is_training) - return frames, narration - else: - raise NotImplementedError - - def __getitem__(self, i): - raise NotImplementedError - - def __len__(self): - return len(self.samples) - - -class VideoCaptionDatasetCLIP(VideoCaptionDatasetBase): - def __init__(self, dataset, root, metadata, transform=None, - is_training=True, tokenizer=None, - clip_length=32, clip_stride=2, sparse_sample=False, - narration_selection='random', - num_hard_negatives=0, - subsample_stride=None): - super().__init__(dataset, root, metadata) - - self.full_samples = self.samples.copy() - if isinstance(subsample_stride, int): - self.samples = self.samples[::subsample_stride] - self.transform = transform - self.is_training = is_training - self.tokenizer = tokenizer - self.clip_length = clip_length - self.clip_stride = clip_stride - self.sparse_sample = sparse_sample - 
self.narration_selection = narration_selection - self.num_hard_negatives = num_hard_negatives - if num_hard_negatives > 0: - assert self.dataset == 'htm_aa' - - def __getitem__(self, i): - frames, caption = self.get_raw_item( - i, is_training=self.is_training, - clip_length=self.clip_length, - clip_stride=self.clip_stride, - sparse_sample=self.sparse_sample, - narration_selection=self.narration_selection, - ) - - # ek100_mir will also output relevancy value - if isinstance(caption, tuple): - caption, relevancy = caption - else: - relevancy = 0. - - # apply transformation - if self.transform is not None: - frames = self.transform(frames) - - # tokenize caption - if self.tokenizer is not None: - caption = self.tokenizer(caption) - - if isinstance(caption, tuple): - caption, mask = caption - return frames, caption, mask, relevancy - else: - return frames, caption, relevancy - - -class VideoCaptionDatasetMCQ(VideoCaptionDatasetBase): - def __init__(self, dataset, root, metadata, transform=None, - is_training=True, tokenizer=None, - clip_length=32, clip_stride=2, sparse_sample=False, - narration_selection='random'): - super().__init__(dataset, root, metadata) - - self.full_samples = self.samples.copy() - self.transform = transform - self.is_training = is_training - self.tokenizer = tokenizer - self.clip_length = clip_length - self.clip_stride = clip_stride - self.sparse_sample = sparse_sample - self.narration_selection = narration_selection - - def __getitem__(self, i): - - textQuery, frames_options, narration_options, answerIndex, q_type = self.get_raw_item( - i, is_training=self.is_training, - clip_length=self.clip_length, - clip_stride=self.clip_stride, - sparse_sample=self.sparse_sample, - narration_selection=self.narration_selection, - ) - - # apply transformation - if self.transform is not None: - frames_options = [self.transform(frames) for frames in frames_options] - - # tokenize caption - if self.tokenizer is not None: - textQuery = self.tokenizer(textQuery) - narration_options = self.tokenizer(narration_options) - if isinstance(textQuery, tuple): - textQuery, mask_query = textQuery - narration_options, mask_options = narration_options - return ( - textQuery, torch.stack(frames_options, dim=0), - narration_options, answerIndex, q_type, - mask_query, mask_options - ) - else: - return textQuery, torch.stack(frames_options, dim=0), narration_options, answerIndex, q_type - - -class VideoClassyDataset(VideoCaptionDatasetBase): - def __init__( - self, dataset, root, metadata, transform=None, - is_training=True, label_mapping=None, - num_clips=1, - clip_length=32, clip_stride=2, - sparse_sample=False, - is_trimmed=True, - ): - super().__init__(dataset, root, metadata, is_trimmed=is_trimmed) - - self.transform = transform - self.is_training = is_training - self.label_mapping = label_mapping - self.num_clips = num_clips - self.clip_length = clip_length - self.clip_stride = clip_stride - self.sparse_sample = sparse_sample - - def __getitem__(self, i): - frames, label = self.get_raw_item( - i, is_training=self.is_training, - num_clips=self.num_clips, - clip_length=self.clip_length, - clip_stride=self.clip_stride, - sparse_sample=self.sparse_sample, - ) - - # apply transformation - if self.transform is not None: - frames = self.transform(frames) - - if self.label_mapping is not None: - if isinstance(label, list): - # multi-label case - res_array = np.zeros(len(self.label_mapping)) - for lbl in label: - res_array[self.label_mapping[lbl]] = 1. 
- label = res_array - else: - label = self.label_mapping[label] - - return frames, label - - -def get_dataset(train_transform, tokenizer, args, is_training=True): - if 'narration_selection' not in args: - args.narration_selection = 'random' - if args.model.startswith('CLIP') or args.model.startswith('VCLM'): - return VideoCaptionDatasetCLIP( - args.dataset, args.root, args.metadata, train_transform, - is_training=is_training, - tokenizer=tokenizer, - clip_length=args.clip_length, clip_stride=args.clip_stride, - sparse_sample=args.sparse_sample, - narration_selection=args.narration_selection, - num_hard_negatives=args.num_hard_neg if 'num_hard_neg' in args else 0, - ) - else: - raise NotImplementedError - - -def get_downstream_dataset(transform, tokenizer, args, subset='train', label_mapping=None): - if subset == 'train': - return VideoClassyDataset( - args.dataset, args.root, args.metadata_train, transform, - is_training=True, label_mapping=label_mapping, - num_clips=args.num_clips, - clip_length=args.clip_length, clip_stride=args.clip_stride, - sparse_sample=args.sparse_sample, - ) - elif subset == 'val': - return VideoClassyDataset( - args.dataset, args.root, args.metadata_val, transform, - is_training=False, label_mapping=label_mapping, - num_clips=args.num_clips, - clip_length=args.clip_length, clip_stride=args.clip_stride, - sparse_sample=args.sparse_sample, - is_trimmed=not args.dataset == 'charades_ego' - ) - else: - raise ValueError("subset should be either 'train' or 'val'") diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/CATS Crash Arena Turbo Stars 2.19.1 ?.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/CATS Crash Arena Turbo Stars 2.19.1 ?.md deleted file mode 100644 index 05df6c854aa21413cecabaa022b72c7a1779f97d..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/CATS Crash Arena Turbo Stars 2.19.1 ?.md +++ /dev/null @@ -1,25 +0,0 @@ - 

          CATS: Crash Arena Turbo Stars 2.19.1 - The Ultimate Battle Bot Game

          -

If you love designing, crafting, and upgrading your own battle bots, then you will love CATS: Crash Arena Turbo Stars 2.19.1, the latest version of the popular PvP game from the creators of Cut the Rope and King of Thieves. In this game, you can join more than 200 million players from all over the world and become the star of the arena by building the most ingenious and stylish war machines.

          -

          CATS: Crash Arena Turbo Stars 2.19.1 –


Download Zip: https://urlcod.com/2uIbuK



          -

          In CATS: Crash Arena Turbo Stars 2.19.1, you can:

          -
            -
          • Be a master engineer: design, craft, upgrade, and improve the ultimate battle bot from collected parts and unleash its power against other players in automatic PvP fights.
          • -
          • Take the role of a mean street cat and fight against other players in fast and hilarious PvP action.
          • -
          • Discover dozens of crazy weapons, gadgets and body shapes, including ultimate machines. Outsmart your opponents with your unique battle bot design.
          • -
          • Create a powerful gang and rule the streets. Participate in gang battles to win unique parts, make new friends and share your secrets in your gang's chat.
          • -
          • City Kings: Fight against real gangs from around the globe to conquer the city in cooperative mode.
          • -
          • Battle against real players and fight your way to the top of the world championship.
          • -
          • Bet on other bots and share replays of your best fights.
          • -
          -

          CATS: Crash Arena Turbo Stars 2.19.1 is free to download and play for iOS or Android devices. You can also play it on your web browser (desktop and mobile). Get ready for some explosive action and fun with CATS: Crash Arena Turbo Stars 2.19.1, the most addictive and entertaining battle bot game ever!

          - -

          But building your ultimate battle bot is not enough. You also need to know some tips and tricks to win more fights and climb up the ranks. Here are some of the best strategies that can help you become a champion in CATS: Crash Arena Turbo Stars 2.19.1.

          -

          Use Duels and Bets to Prepare for the Championship

          -

Duels are a great way to test your battle bot against other players and see how it performs in different situations. You can also earn coins and supply boxes by winning duels. Duels will give you a good idea of the general behavior of your robot and of its strengths and weaknesses. Fight several duels in a row until your design proves itself, then start the championship rounds with confidence.

          -

Bets are another way to prepare for the championship and earn some extra rewards. You can bet on a fight between two other players and choose the vehicle that you think has the better chance of winning. You can base your decision on the bet meter that shows the total bets in favor of one of the cars, or on their attack and health stats, size, shape, and weapons. Backing the crowd favorite grants fewer rewards at lower risk; backing the underdog grants more rewards at higher risk. Bets can also help you learn from other players' designs and strategies.

          -

          Upgrade Your Parts and Use Bonuses Wisely

          -

As you play the game, you will collect various parts from supply boxes that you can use to build and improve your battle bot. To upgrade a part, drag the part you want to sacrifice onto the stats page of the part you want to keep; fusing them increases its attack or health rating, depending on the type of part. Don't hoard parts you never use: fuse them into your best parts instead.

          -

Some parts also have bonuses that boost their performance when paired with certain body types or weapons. For example, a wheel may have a bonus for the sneaky body type, or a blade may have a bonus for the rocket launcher. You can see these bonuses on the stats page of each part. Try to use parts whose bonuses match your current setup to get an edge over your opponents.

          -
          -
          \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Gigantic Skyrim Fps Performance Patch.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Gigantic Skyrim Fps Performance Patch.md deleted file mode 100644 index 16d3748747a4c3d932bf07d2d5fb76bb94c9fe1d..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Gigantic Skyrim Fps Performance Patch.md +++ /dev/null @@ -1,36 +0,0 @@ - -

          Gigantic Skyrim Fps Performance Patch: How to Boost Your Frame Rate in the Legendary RPG

          -

          If you love playing Skyrim but hate the low frame rate and stuttering issues, you might want to check out the Gigantic Skyrim Fps Performance Patch. This mod is designed to optimize the game's performance and improve its stability, especially on lower-end systems. Here's what you need to know about this amazing patch and how to install it.

          -

          Gigantic Skyrim Fps Performance Patch


Download File: https://urlcod.com/2uIbjU



          -

          What is the Gigantic Skyrim Fps Performance Patch?

          -

          The Gigantic Skyrim Fps Performance Patch is a mod that aims to increase the frame rate and reduce the lag in Skyrim. It does this by tweaking various settings and features of the game, such as shadows, grass, water, reflections, particles, animations, scripts, and more. The mod also fixes some bugs and errors that can cause crashes and freezes.

          -

          The mod claims to boost the fps by up to 40%, depending on your system and settings. It also promises to make the game smoother and more responsive, without sacrificing the visual quality or gameplay experience. The mod is compatible with most other mods and DLCs, as long as they don't conflict with the same files or settings.

          -

          How to Install the Gigantic Skyrim Fps Performance Patch?

          -

          To install the Gigantic Skyrim Fps Performance Patch, you will need a mod manager such as Nexus Mod Manager or Mod Organizer. You can download the mod from its Nexus Mods page here. Once you have downloaded the mod, follow these steps:

          -
            -
          1. Open your mod manager and activate the Gigantic Skyrim Fps Performance Patch.
          2. -
          3. Launch Skyrim and go to the Options menu.
          4. -
          5. Under the Display tab, set the Antialiasing and Anisotropic Filtering to Off.
          6. -
          7. Under the Advanced tab, set the Shadow Quality to Low and Shadow Distance to Medium.
          8. -
          9. Under the View Distance tab, set the Object Fade, Actor Fade, Item Fade, and Grass Fade sliders to around 50%.
          10. -
          11. Save your settings and exit the game.
          12. -
          13. Go back to your mod manager and run LOOT (Load Order Optimization Tool) to sort your load order.
          14. -
          15. Launch Skyrim again and enjoy your improved performance!
          16. -
          -

          Note: You can tweak these settings according to your preference and system specifications. You can also use other tools such as ENBoost or SKSE (Skyrim Script Extender) to further enhance your performance.
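
          -

          If you would rather make these changes outside the game, the menu options above map onto a handful of standard SkyrimPrefs.ini settings (the file lives under Documents\My Games\Skyrim). The keys below are real Skyrim settings, but the values are illustrative guesses rather than numbers taken from the patch itself, so treat this as a starting point:

          [Display]
          ; antialiasing and anisotropic filtering off
          iMultiSample=0
          iMaxAnisotropy=0
          ; low shadow quality, medium shadow distance
          iShadowMapResolution=1024
          fShadowDistance=4000.0
          [Grass]
          ; pull the grass fade distance in to roughly half
          fGrassStartFadeDistance=3500.0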

          -

          Conclusion

          -

          The Gigantic Skyrim Fps Performance Patch is a great mod for anyone who wants to play Skyrim with a higher frame rate and less lag. It is easy to install and use, and it works well with most other mods and DLCs. If you are looking for a simple and effective way to boost your Skyrim performance, you should definitely give this mod a try!

          -

          - -

          Other Tips to Improve Skyrim Performance

          -

          Besides using the Gigantic Skyrim Fps Performance Patch, there are some other tips and tricks that can help you improve your Skyrim performance. Here are some of them:

          -
            -
          • Update your drivers and DirectX. Make sure you have the latest versions of your graphics card and sound card drivers, as well as DirectX. This improves compatibility and stability with the game.
          • Clean your save files. Over time, save files accumulate junk data and orphaned scripts that can slow down your game. You can use a tool such as Save Game Script Cleaner or Save Game Cleaner to remove these unwanted elements.
          • Disable unnecessary background programs. Close any programs or processes that are not essential for running the game, such as antivirus scans, web browsers, or music players. This frees up memory and CPU resources for the game.
          • Lower your resolution. If you are playing on a high-resolution monitor, try a lower resolution such as 1280x720 or 1024x768. This reduces the strain on your graphics card and increases your fps.
          • Use performance-friendly mods. Some mods improve performance without compromising the quality or immersion of the game. For example, Skyrim Project Optimization, Optimized Vanilla Textures, and Skyrim Performance Plus optimize the game's meshes, textures, and effects.
          -

          With these tips and tricks, you can make your Skyrim run faster and smoother than ever before. Have fun exploring the vast and beautiful world of Skyrim!

          -
          -
          \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Principles Of Helicopter Aerodynamics By Gordon P. Leishman.pdf UPDATED.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Principles Of Helicopter Aerodynamics By Gordon P. Leishman.pdf UPDATED.md deleted file mode 100644 index 0fe9c3e391ccb98e4dcba67cd8da651f19770f28..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Principles Of Helicopter Aerodynamics By Gordon P. Leishman.pdf UPDATED.md +++ /dev/null @@ -1,10 +0,0 @@ - -

          Principles of Helicopter Aerodynamics by Gordon P. Leishman

          -

          This book, written by an internationally recognized expert, provides a thorough, modern treatment of the aerodynamic principles of helicopters and other rotating-wing vertical lift aircraft. Every chapter is extensively illustrated and concludes with a bibliography and homework problems. Advanced undergraduate and graduate students, practising engineers, and researchers will welcome this text on rotating-wing aerodynamics.

          -

          The book covers topics such as the history of helicopter flight, fundamentals of rotor aerodynamics, blade element analysis, rotating blade motion, helicopter performance, aerodynamic design of helicopters, aerodynamics of rotor airfoils, unsteady airfoil behavior, dynamic stall, rotor wakes and blade tip vortices, rotor-airframe interaction aerodynamics, autogiros and gyroplanes, and advanced methods for helicopter aerodynamic analysis. The book also includes appendices on notation and definitions, basic helicopter mathematics, airfoil data, and helicopter design projects.
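          To give a flavor of the treatment, the blade element analysis covered in the book resolves the sectional lift and drag on each blade element into a thrust contribution; in standard blade element theory notation (ours, not necessarily the book's):

          $$dT = N_b \, \frac{1}{2} \rho U^2 c \,\left(C_l \cos\phi - C_d \sin\phi\right) dy$$

          where $N_b$ is the number of blades, $\rho$ the air density, $U$ the resultant velocity at the blade section, $c$ the chord, $C_l$ and $C_d$ the sectional lift and drag coefficients, $\phi$ the inflow angle, and $dy$ the spanwise width of the element.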

          -

          -

          The book is based on the author's extensive teaching and research experience at the University of Maryland and other institutions. The author has also written a companion volume on Basic Helicopter Aerodynamics, which provides an introduction to the subject for students and engineers new to the field.

          One of the unique features of this book is that it provides a comprehensive historical overview of the development of helicopter flight, from the early pioneers to the modern era. The book traces the evolution of helicopter design and performance, as well as the challenges and achievements of helicopter aerodynamics research. The book also highlights some of the key contributions of famous helicopter engineers and scientists, such as Igor Sikorsky, Juan de la Cierva, Anton Flettner, Alfred Gessow, and Wayne Johnson.

          -

          Another distinctive aspect of this book is that it covers not only conventional helicopters, but also other types of rotating-wing aircraft that have similar aerodynamic characteristics. These include tilt-rotors, which can transition from vertical to horizontal flight by tilting their rotors; autogiros, which use a free-spinning rotor for lift and a propeller for propulsion; and gyroplanes, which are similar to autogiros but have a powered rotor for takeoff and landing. The book also discusses the aerodynamics of wind turbines, which are essentially inverted rotors that extract energy from the wind.

          -

          The book is intended for advanced undergraduate and graduate students who have a basic knowledge of fluid mechanics and aerodynamics, as well as for practising engineers and researchers who work on helicopter and rotating-wing aerodynamics. The book provides both theoretical and empirical methods of analysis, as well as practical examples and case studies. The book also includes several appendices that cover notation and definitions, basic helicopter mathematics, airfoil data, and helicopter design projects.

          -
          -
          \ No newline at end of file diff --git a/spaces/neural-ti/NeTI/utils/__init__.py b/spaces/neural-ti/NeTI/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/neurotech/cat_dog_audio_classifier/app.py b/spaces/neurotech/cat_dog_audio_classifier/app.py deleted file mode 100644 index 56bdf673cd8da985564f5aa9e91820d3a1ac9056..0000000000000000000000000000000000000000 --- a/spaces/neurotech/cat_dog_audio_classifier/app.py +++ /dev/null @@ -1,57 +0,0 @@ -# import library -import gradio as gr -import librosa -import pandas as pd -import numpy as np -import pickle -import os - -import tensorflow as tf -from tensorflow.keras.layers.experimental import preprocessing -from tensorflow.keras.preprocessing.image import load_img, img_to_array -from tensorflow.keras.models import Sequential -from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, BatchNormalization, Input - -def get_waveform_label(file): - #lab = tf.strings.split(file, os.path.sep)[-2] - print(file) - print(file.name) - audio_binary = tf.io.read_file(file.name) - audio, _ = tf.audio.decode_wav(audio_binary) - waveform=tf.squeeze(audio, axis=-1) - return waveform - -def get_spectrogram_label(audio): - padding = tf.zeros([300000]-tf.shape(audio), dtype=tf.float32) - wave = tf.cast(audio, tf.float32) - eq_length = tf.concat([wave, padding], 0) - spectrogram = tf.signal.stft(eq_length, frame_length=210, frame_step=110) - spectrogram = tf.abs(spectrogram) - spectrogram = tf.expand_dims(spectrogram, -1) - return spectrogram - - # %load saved model -model = pickle.load(open('audio_classifier_model.pkl', 'rb')) - -def get_audio(audio): - audio_waveform = get_waveform_label(audio) - audio_spect = get_spectrogram_label(audio_waveform) - final_feat = np.array([audio_spect]) - res = np.argmax(model.predict(final_feat),axis=1) - if res == 1: - res ="Dog Audio"; - else: - res = "Cat Audio" - return res - - -# %gradio interface - -inputs = gr.inputs.Audio(label="Input Audio", type="file") -outputs = "text" -title = "Cat/Dog Audio Classification" -description = "Gradio demo App for Cat and Dog Audio Classification with Tensorflow. 
To use it, simply upload your audio .wav format, or use sample audio by click the button below Example" -examples = [ - ['dog_barking_102.wav'] -] -gr.Interface(get_audio, inputs, outputs, title=title, description=description, examples=examples).launch() \ No newline at end of file diff --git a/spaces/neveu/img-to-music/utils.py b/spaces/neveu/img-to-music/utils.py deleted file mode 100644 index e4d5448735f516afa03c8a99be64fa5a2915706c..0000000000000000000000000000000000000000 --- a/spaces/neveu/img-to-music/utils.py +++ /dev/null @@ -1,36 +0,0 @@ -import json -import numpy as np -import httpx -import os - -from constants import MUBERT_TAGS, MUBERT_MODE, MUBERT_LICENSE - -def get_mubert_tags_embeddings(w2v_model): - return w2v_model.encode(MUBERT_TAGS) - - - - - -def find_similar(em, embeddings, method='cosine'): - scores = [] - for ref in embeddings: - if method == 'cosine': - scores.append(1 - np.dot(ref, em) / (np.linalg.norm(ref) * np.linalg.norm(em))) - if method == 'norm': - scores.append(np.linalg.norm(ref - em)) - return np.array(scores), np.argsort(scores) - - -def get_tags_for_prompts(w2v_model, mubert_tags_embeddings, prompts, top_n=3, debug=False): - prompts_embeddings = w2v_model.encode(prompts) - ret = [] - for i, pe in enumerate(prompts_embeddings): - scores, idxs = find_similar(pe, mubert_tags_embeddings) - top_tags = MUBERT_TAGS[idxs[:top_n]] - top_prob = 1 - scores[idxs[:top_n]] - if debug: - print(f"Prompt: {prompts[i]}\nTags: {', '.join(top_tags)}\nScores: {top_prob}\n\n\n") - ret.append((prompts[i], list(top_tags))) - print("ret: " + ret) - return ret \ No newline at end of file diff --git a/spaces/nguyennghia0902/SentimentAnalysis_usingBERT/streamlit_app.py/pages/Homepage.py b/spaces/nguyennghia0902/SentimentAnalysis_usingBERT/streamlit_app.py/pages/Homepage.py deleted file mode 100644 index 9484966a625f75b9823a886346aa45042b1fffb5..0000000000000000000000000000000000000000 --- a/spaces/nguyennghia0902/SentimentAnalysis_usingBERT/streamlit_app.py/pages/Homepage.py +++ /dev/null @@ -1,43 +0,0 @@ -import streamlit as st -from st_pages import Page, show_pages - -st.set_page_config(page_title="Sentiment Analysis", page_icon="🏠") - -show_pages( - [ - Page("streamlit_app.py/Homepage.py", "Home", "🏠"), - Page( - "streamlit_app.py/pages/Sentiment_Analysis.py", "Sentiment Analysis", "📝" - ), - ] -) - -st.title("Seminar Công nghệ Tri thức - Transformer trong NLP") -st.markdown( - """ - **Team members:** - | Student ID | Full Name | - | ---------- | ------------------------ | - | 19120600 | Bùi Nguyên Nghĩa | - | 19120607 | Phạm Thị Nguyệt | - """ -) - -st.header("The Need for Sentiment Analysis") -st.markdown( - """ - Sentiment analysis algorithms are used to analyze sentiment in a comment or a review. - It is said that around 90% of consumers read online reviews before visiting a business or buying a product. - These reviews can be positive or negative or neutral, and it is important to know what the customers are saying about your business. - """ -) - -st.header("Technology used") -st.markdown( - """ - In this demo, we used BERT as the model for sentiment analysis. BERT is a transformer-based model that was proposed in 2018 by Google. - It is a pre-trained model that can be used for various NLP tasks such as sentiment analysis, question answering, etc. 
- """ -) - - diff --git a/spaces/nickmuchi/FaceId-Corise-Project/app.py b/spaces/nickmuchi/FaceId-Corise-Project/app.py deleted file mode 100644 index 2c7ec13f9ca7d8c7b2e68d553df6be0819ea95ce..0000000000000000000000000000000000000000 --- a/spaces/nickmuchi/FaceId-Corise-Project/app.py +++ /dev/null @@ -1,131 +0,0 @@ -import gradio as gr -from sklearn.metrics.pairwise import cosine_similarity -from sentence_transformers import SentenceTransformer -from PIL import Image -import cv2 -import os -import numpy as np - -def extract_face(im): - - prototxt_path = 'deploy.prototxt' - caffemodel_path = 'weights.caffemodel' - - # Read the model - cv2_model = cv2.dnn.readNetFromCaffe(prototxt_path, caffemodel_path) - - #pil_image = PIL.Image.open('image.jpg') - image = cv2.cvtColor(np.array(im), cv2.COLOR_RGB2BGR) - #image = cv2.imread(im) - - (h, w) = image.shape[:2] - blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0, (300, 300), (104.0, 177.0, 123.0)) - - cv2_model.setInput(blob) - detections = cv2_model.forward() - - # Identify each face - for i in range(0, detections.shape[2]): - box = detections[0, 0, i, 3:7] * np.array([w, h, w, h]) - (startX, startY, endX, endY) = box.astype("int") - - confidence = detections[0, 0, i, 2] - - # If confidence > 0.5, save it as a separate file - if (confidence > 0.5): - frame = image[startY:endY, startX:endX] - #PIL_image = Image.fromarray(frame) - file_name = 'faces/' + str(np.random.randint(1,10)) + '_' + 'face.png' - cv2.imwrite(file_name, frame) - - return file_name - -def predict(im1, im2,thresh,model_name): - - if not isinstance(im1,str): - im1_face = im1 - im2_face = im2 - - else: - - im1_face = Image.open(im1) - im2_face = Image.open(im2) - - model = load_model(model_name) - - sim=cosine_similarity(model.encode([im1_face,im2_face]))[0][1] - - if sim > thresh: - return round(sim,2), "SAME PERSON, UNLOCK PHONE" - else: - return round(sim,2), "DIFFERENT PEOPLE, DON'T UNLOCK" - -def load_model(model_name): - - model = SentenceTransformer(model_name) - - return model - -title = """

          FaceID for Facial Recognition with Face Detector

          """ - -models = ['clip-ViT-B-16','clip-ViT-B-32','clip-ViT-L-14'] - -twitter_link = """ -[![](https://img.shields.io/twitter/follow/nickmuchi?label=@nickmuchi&style=social)](https://twitter.com/nickmuchi) -""" - -css = ''' -h1#title { - text-align: center; -} -''' -demo = gr.Blocks(css=css) - -with demo: - gr.Markdown(title) - gr.Markdown(twitter_link) - model_options = gr.Dropdown(choices=models,label='Embedding Models',value=models[-1],show_label=True) - thresh = gr.Slider(minimum=0.5,maximum=1,value=0.85,step=0.1,label='Confidence') - - with gr.Tabs(): - with gr.TabItem("Face ID with No Face Detection"): - - with gr.Row(): - with gr.Column(): - nd_image_input_1 = gr.Image(label='Image 1',type='pil',source='webcam') - nd_image_input_2 = gr.Image(label='Image 2',type='pil',source='webcam') - - with gr.Column(): - sim = gr.Number(label="Similarity") - msg = gr.Textbox(label="Message") - - nd_but = gr.Button('Verify') - - with gr.TabItem("Face ID with Face Detector"): - - with gr.Row(): - with gr.Column(): - fd_image_1 = gr.Image(label='Image 1',type='pil',source='webcam') - fd_image_2 = gr.Image(label='Image 2',type='pil',source='webcam') - - with gr.Column(): - face_1 = gr.Image(label='Face Detected 1',type='filepath') - face_2 = gr.Image(label='Face Detected 2',type='filepath') - fd_image_1.change(extract_face,fd_image_1,face_1) - fd_image_2.change(extract_face,fd_image_2,face_2) - - - with gr.Row(): - with gr.Column(): - sim_1 = gr.Number(label="Similarity") - msg_1 = gr.Textbox(label="Message") - - fd_but = gr.Button('Verify') - - - nd_but.click(predict,inputs=[nd_image_input_1,nd_image_input_2,thresh,model_options],outputs=[sim,msg],queue=True) - fd_but.click(predict,inputs=[face_1,face_2,thresh,model_options],outputs=[sim_1,msg_1],queue=True) - - gr.Markdown("![visitor badge](https://visitor-badge.glitch.me/badge?page_id=nickmuchi-faceId-corise-project)") - -demo.launch(debug=True,enable_queue=True) \ No newline at end of file diff --git a/spaces/nightfury/Magic_Text_to_prompt_to_art_Diffusion/app.py b/spaces/nightfury/Magic_Text_to_prompt_to_art_Diffusion/app.py deleted file mode 100644 index c7556ba0dd105ae7d0e08c06da74c1620859e280..0000000000000000000000000000000000000000 --- a/spaces/nightfury/Magic_Text_to_prompt_to_art_Diffusion/app.py +++ /dev/null @@ -1,104 +0,0 @@ -import gradio as gr -import os -from share_btn import community_icon_html, loading_icon_html, share_js - -text_gen = gr.Interface.load(name="spaces/Gustavosta/MagicPrompt-Stable-Diffusion") -stable_diffusion = gr.Blocks.load(name="spaces/runwayml/stable-diffusion-v1-5") - -def get_images(prompt): - gallery_dir = stable_diffusion(prompt, fn_index=2) - sd_output = [os.path.join(gallery_dir, image) for image in os.listdir(gallery_dir)] - return sd_output, gr.update(visible=True), gr.update(visible=True), gr.update(visible=True) - -def get_prompts(prompt_text): - return text_gen(prompt_text) - -css = ''' -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; 
-} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} -a {text-decoration-line: underline;} -''' - -with gr.Blocks(css=css) as demo: - gr.HTML("""
          -
          -

          - Magic text to prompt & prompt to art Diffusion Generator🪄 -

          -
          -

          - This Space prettifies your prompt using MagicPrompt - and then runs it through Stable Diffusion 1.5 to create aesthetically pleasing images. Simply enter a few concepts and let it improve your prompt. You can then diffuse the prompt. -

          -
          """) - - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Short text prompt", - lines=4, elem_id="input-text") - with gr.Row(): - see_prompts = gr.Button("Feed in your text!") - - with gr.Column(): - text_output = gr.Textbox( - label="Prettified text prompt", - lines=4, - elem_id="translated" - ) - with gr.Row(): - diffuse_btn = gr.Button(value="Diffuse the Prompt!") - with gr.Column(elem_id="generated-gallery"): - sd_output = gr.Gallery().style(grid=2, height="auto") - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html, visible=False) - loading_icon = gr.HTML(loading_icon_html, visible=False) - share_button = gr.Button("Share to community", elem_id="share-btn", visible=False) - - see_prompts.click(get_prompts, - inputs = [input_text], - outputs = [ - text_output - ]) - diffuse_btn.click(get_images, - inputs = [ - text_output - ], - outputs = [sd_output, community_icon, loading_icon, share_button] - ) - share_button.click(None, [], [], _js=share_js) - - - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/evaluation/evaluator.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/evaluation/evaluator.py deleted file mode 100644 index baf996002b2fddc8c1952408d450b5bf69394f0a..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/evaluation/evaluator.py +++ /dev/null @@ -1,224 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import datetime -import logging -import time -from collections import OrderedDict, abc -from contextlib import ExitStack, contextmanager -from typing import List, Union -import torch -from torch import nn - -from detectron2.utils.comm import get_world_size, is_main_process -from detectron2.utils.logger import log_every_n_seconds - - -class DatasetEvaluator: - """ - Base class for a dataset evaluator. - - The function :func:`inference_on_dataset` runs the model over - all samples in the dataset, and have a DatasetEvaluator to process the inputs/outputs. - - This class will accumulate information of the inputs/outputs (by :meth:`process`), - and produce evaluation results in the end (by :meth:`evaluate`). - """ - - def reset(self): - """ - Preparation for a new round of evaluation. - Should be called before starting a round of evaluation. - """ - pass - - def process(self, inputs, outputs): - """ - Process the pair of inputs and outputs. - If they contain batches, the pairs can be consumed one-by-one using `zip`: - - .. code-block:: python - - for input_, output in zip(inputs, outputs): - # do evaluation on single input/output pair - ... - - Args: - inputs (list): the inputs that's used to call the model. - outputs (list): the return value of `model(inputs)` - """ - pass - - def evaluate(self): - """ - Evaluate/summarize the performance, after processing all input/output pairs. - - Returns: - dict: - A new evaluator class can return a dict of arbitrary format - as long as the user can process the results. - In our train_net.py, we expect the following format: - - * key: the name of the task (e.g., bbox) - * value: a dict of {metric name: score}, e.g.: {"AP50": 80} - """ - pass - - -class DatasetEvaluators(DatasetEvaluator): - """ - Wrapper class to combine multiple :class:`DatasetEvaluator` instances. - - This class dispatches every evaluation call to - all of its :class:`DatasetEvaluator`. 
- """ - - def __init__(self, evaluators): - """ - Args: - evaluators (list): the evaluators to combine. - """ - super().__init__() - self._evaluators = evaluators - - def reset(self): - for evaluator in self._evaluators: - evaluator.reset() - - def process(self, inputs, outputs): - for evaluator in self._evaluators: - evaluator.process(inputs, outputs) - - def evaluate(self): - results = OrderedDict() - for evaluator in self._evaluators: - result = evaluator.evaluate() - if is_main_process() and result is not None: - for k, v in result.items(): - assert ( - k not in results - ), "Different evaluators produce results with the same key {}".format(k) - results[k] = v - return results - - -def inference_on_dataset( - model, data_loader, evaluator: Union[DatasetEvaluator, List[DatasetEvaluator], None] -): - """ - Run model on the data_loader and evaluate the metrics with evaluator. - Also benchmark the inference speed of `model.__call__` accurately. - The model will be used in eval mode. - - Args: - model (callable): a callable which takes an object from - `data_loader` and returns some outputs. - - If it's an nn.Module, it will be temporarily set to `eval` mode. - If you wish to evaluate a model in `training` mode instead, you can - wrap the given model and override its behavior of `.eval()` and `.train()`. - data_loader: an iterable object with a length. - The elements it generates will be the inputs to the model. - evaluator: the evaluator(s) to run. Use `None` if you only want to benchmark, - but don't want to do any evaluation. - - Returns: - The return value of `evaluator.evaluate()` - """ - num_devices = get_world_size() - logger = logging.getLogger(__name__) - logger.info("Start inference on {} batches".format(len(data_loader))) - - total = len(data_loader) # inference data loader must have a fixed length - if evaluator is None: - # create a no-op evaluator - evaluator = DatasetEvaluators([]) - if isinstance(evaluator, abc.MutableSequence): - evaluator = DatasetEvaluators(evaluator) - evaluator.reset() - - num_warmup = min(5, total - 1) - start_time = time.perf_counter() - total_data_time = 0 - total_compute_time = 0 - total_eval_time = 0 - with ExitStack() as stack: - if isinstance(model, nn.Module): - stack.enter_context(inference_context(model)) - stack.enter_context(torch.no_grad()) - - start_data_time = time.perf_counter() - for idx, inputs in enumerate(data_loader): - total_data_time += time.perf_counter() - start_data_time - if idx == num_warmup: - start_time = time.perf_counter() - total_data_time = 0 - total_compute_time = 0 - total_eval_time = 0 - - start_compute_time = time.perf_counter() - outputs = model(inputs) - if torch.cuda.is_available(): - torch.cuda.synchronize() - total_compute_time += time.perf_counter() - start_compute_time - - start_eval_time = time.perf_counter() - evaluator.process(inputs, outputs) - total_eval_time += time.perf_counter() - start_eval_time - - iters_after_start = idx + 1 - num_warmup * int(idx >= num_warmup) - data_seconds_per_iter = total_data_time / iters_after_start - compute_seconds_per_iter = total_compute_time / iters_after_start - eval_seconds_per_iter = total_eval_time / iters_after_start - total_seconds_per_iter = (time.perf_counter() - start_time) / iters_after_start - if idx >= num_warmup * 2 or compute_seconds_per_iter > 5: - eta = datetime.timedelta(seconds=int(total_seconds_per_iter * (total - idx - 1))) - log_every_n_seconds( - logging.INFO, - ( - f"Inference done {idx + 1}/{total}. 
" - f"Dataloading: {data_seconds_per_iter:.4f} s/iter. " - f"Inference: {compute_seconds_per_iter:.4f} s/iter. " - f"Eval: {eval_seconds_per_iter:.4f} s/iter. " - f"Total: {total_seconds_per_iter:.4f} s/iter. " - f"ETA={eta}" - ), - n=5, - ) - start_data_time = time.perf_counter() - - # Measure the time only for this worker (before the synchronization barrier) - total_time = time.perf_counter() - start_time - total_time_str = str(datetime.timedelta(seconds=total_time)) - # NOTE this format is parsed by grep - logger.info( - "Total inference time: {} ({:.6f} s / iter per device, on {} devices)".format( - total_time_str, total_time / (total - num_warmup), num_devices - ) - ) - total_compute_time_str = str(datetime.timedelta(seconds=int(total_compute_time))) - logger.info( - "Total inference pure compute time: {} ({:.6f} s / iter per device, on {} devices)".format( - total_compute_time_str, total_compute_time / (total - num_warmup), num_devices - ) - ) - - results = evaluator.evaluate() - # An evaluator may return None when not in main process. - # Replace it by an empty dict instead to make it easier for downstream code to handle - if results is None: - results = {} - return results - - -@contextmanager -def inference_context(model): - """ - A context where the model is temporarily changed to eval mode, - and restored to previous mode afterwards. - - Args: - model: a torch Module - """ - training_mode = model.training - model.eval() - yield - model.train(training_mode) diff --git a/spaces/nmfasano5/content_based_movie_recommendation_system/README.md b/spaces/nmfasano5/content_based_movie_recommendation_system/README.md deleted file mode 100644 index 06ee553ed152dedb195cd55cf9169539ec0d89b5..0000000000000000000000000000000000000000 --- a/spaces/nmfasano5/content_based_movie_recommendation_system/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Content Based Movie Recommendation System -emoji: 📉 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ntt123/vietnam-male-voice-wavegru-tts/build_ext.sh b/spaces/ntt123/vietnam-male-voice-wavegru-tts/build_ext.sh deleted file mode 100644 index da0a17dd2bc5a00aab7550d52d104eb52dc630a3..0000000000000000000000000000000000000000 --- a/spaces/ntt123/vietnam-male-voice-wavegru-tts/build_ext.sh +++ /dev/null @@ -1,3 +0,0 @@ -chmod +x ./bazelisk-linux-amd64 -USE_BAZEL_VERSION=5.0.0 ./bazelisk-linux-amd64 build wavegru_mod -c opt --copt=-march=native -cp -f bazel-bin/wavegru_mod.so . \ No newline at end of file diff --git a/spaces/olivierdehaene/chat-ui-example/entrypoint.sh b/spaces/olivierdehaene/chat-ui-example/entrypoint.sh deleted file mode 100644 index 126cadc8713083b650c15044bae9c6f12c66f01e..0000000000000000000000000000000000000000 --- a/spaces/olivierdehaene/chat-ui-example/entrypoint.sh +++ /dev/null @@ -1,19 +0,0 @@ -#!/bin/bash - -# Start the local Mongo database -mongod & - -# Start the text-generation-inference process -text-generation-launcher --model-id OpenAssistant/falcon-7b-sft-top1-696 --num-shard 1 --port 8080 & - -# Wait for text-generation-inference to start -curl --retry 60 --retry-delay 10 --retry-connrefused http://127.0.0.1:8080/health - -# Start the chat-ui process -pm2 start /app/build/index.js -i $CPU_CORES --no-daemon & - -# Wait for any process to exit -wait -n - -# Exit with status of process that exited first -exit $? 
diff --git a/spaces/ori1026/OriChatGPT/custom.css b/spaces/ori1026/OriChatGPT/custom.css deleted file mode 100644 index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000 --- a/spaces/ori1026/OriChatGPT/custom.css +++ /dev/null @@ -1,162 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色 */ -#chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: #FFFFFF !important; -} -[data-testid = "user"] { - background-color: #95EC69 !important; -} -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* 
Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git "a/spaces/oskarvanderwal/MT-bias-demo/results/simple_f\303\251rfi_fr_aggregate.html" "b/spaces/oskarvanderwal/MT-bias-demo/results/simple_f\303\251rfi_fr_aggregate.html" deleted file mode 100644 index 980bf0ca75e20ddf1873cc64f55dbf3fdca3cb01..0000000000000000000000000000000000000000 --- "a/spaces/oskarvanderwal/MT-bias-demo/results/simple_f\303\251rfi_fr_aggregate.html" +++ /dev/null @@ -1,46 +0,0 @@ -
          0th instance:

          Source Saliency Heatmap
          x: Generated tokens, y: Attributed tokens

          |         | ▁C'est | ▁un    | ▁homme. | </s>   |
          |---------|--------|--------|---------|--------|
          | ▁Ő      | 0.705  | -0.026 | 0.071   | -0.255 |
          | ▁férfi. | 0.709  | 0.249  | 0.97    | 0.3    |
          | </s>    | 0.0    | 0.0    | 0.0     | 0.0    |

          0th instance:

          Target Saliency Heatmap
          x: Generated tokens, y: Attributed tokens

          |         | ▁C'est | ▁un   | ▁homme. | </s>  |
          |---------|--------|-------|---------|-------|
          | ▁C'est  |        | 0.968 | 0.055   | 0.439 |
          | ▁un     |        |       | 0.226   | 0.16  |
          | ▁homme. |        |       |         | 0.791 |
          | </s>    |        |       |         |       |
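          The "0th instance" labels and the source/target heatmap layout above match the HTML reports produced by the inseq attribution library. Below is a minimal sketch of how such a report is typically generated; the checkpoint name and attribution method are assumptions, not read from this Space:

```python
import inseq

# Load a Hungarian->French MT model with a gradient-based attribution method
model = inseq.load_model("Helsinki-NLP/opus-mt-hu-fr", "input_x_gradient")

# Attribute a gendered-pronoun sentence ("Ő férfi." ~ "He/She is a man.")
out = model.attribute("Ő férfi.")

# Render the source and target saliency heatmaps (HTML in notebooks)
out.show()
```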
          - diff --git a/spaces/owaiskha9654/Custom_Yolov7/utils/aws/__init__.py b/spaces/owaiskha9654/Custom_Yolov7/utils/aws/__init__.py deleted file mode 100644 index e9691f241edc06ad981b36ca27f7eff9e46686ed..0000000000000000000000000000000000000000 --- a/spaces/owaiskha9654/Custom_Yolov7/utils/aws/__init__.py +++ /dev/null @@ -1 +0,0 @@ -#init \ No newline at end of file diff --git a/spaces/ozgur34/qb-Engine2/dialogue.py b/spaces/ozgur34/qb-Engine2/dialogue.py deleted file mode 100644 index 6ca26d2d887c6b683c5bd6240a09f9a55a046a3a..0000000000000000000000000000000000000000 --- a/spaces/ozgur34/qb-Engine2/dialogue.py +++ /dev/null @@ -1,239 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import json -import os -from dataclasses import asdict, dataclass -from pathlib import Path -from typing import Any, Dict, List, Optional, Type, TypeVar, Union - -from huggingface_hub import ModelHubMixin, hf_hub_download - -# Generic variable that is either ModelHubMixin or a subclass thereof -T = TypeVar("T", bound="ModelHubMixin") - -TEMPLATE_FILENAME = "dialogue_template.json" -IGNORE_INDEX = -100 - - -@dataclass -class DialogueTemplate(ModelHubMixin): - """Converts all turns of a dialogue between a user and assistant to a standardized format. 
- Adapted from OpenAI's ChatML (https://github.com/openai/openai-python/blob/main/chatml.md) and Vicuna (https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py) - """ - - system: str - messages: List[Dict[str, str]] = None - system_token: str = "<|system|>" - user_token: str = "<|user|>" - assistant_token: str = "<|assistant|>" - end_token: str = "<|end|>" - - def get_training_prompt(self) -> str: - prompt = self.system_token + "\n" + self.system + self.end_token + "\n" - if self.messages is None: - raise ValueError("Dialogue template must have at least one message.") - for message in self.messages: - if message["role"] == "user": - prompt += self.user_token + "\n" + message["content"] + self.end_token + "\n" - else: - prompt += self.assistant_token + "\n" + message["content"] + self.end_token + "\n" - return prompt - - def get_inference_prompt(self) -> str: - prompt = self.system_token + "\n" + self.system + self.end_token + "\n" - if self.messages is None: - raise ValueError("Dialogue template must have at least one message.") - for message in self.messages: - if message["role"] == "user": - prompt += self.user_token + "\n" + message["content"] + self.end_token + "\n" - else: - prompt += self.assistant_token + "\n" + message["content"] + self.end_token + "\n" - prompt += self.assistant_token - return prompt - - def get_dialogue(self): - """Helper function to format the messages as an easy-to-read dialogue.""" - prompt = "" - if self.messages is None: - raise ValueError("Dialogue template must have at least one message.") - for message in self.messages: - if message["role"] == "user": - prompt += "\n\nHuman: " + message["content"] - else: - prompt += "\n\nAssistant: " + message["content"] - return prompt - - def get_special_tokens(self) -> List[str]: - return [self.system_token, self.user_token, self.assistant_token, self.end_token] - - def copy(self): - return DialogueTemplate( - system=self.system, - messages=self.messages, - system_token=self.system_token, - user_token=self.user_token, - assistant_token=self.assistant_token, - end_token=self.end_token, - ) - - def to_dict(self) -> Dict[str, Any]: - return {k: v for k, v in asdict(self).items()} - - @classmethod - def from_dict(cls, data): - return DialogueTemplate( - system=data["system"] if "system" in data else "", - messages=data["messages"] if "messages" in data else None, - system_token=data["system_token"] if "system_token" in data else "<|system|>", - user_token=data["user_token"] if "user_token" in data else "<|user|>", - assistant_token=data["assistant_token"] if "assistant_token" in data else "<|assistant|>", - end_token=data["end_token"] if "end_token" in data else "<|end|>", - ) - - def _save_pretrained(self, save_directory: Union[str, Path]) -> None: - save_directory = Path(save_directory) - save_directory.mkdir(exist_ok=True) - with open(save_directory / "dialogue_template.json", "w") as f: - json.dump(self.to_dict(), f, indent=2) - - @classmethod - def _from_pretrained( - cls: Type[T], - *, - model_id: str, - revision: Optional[str], - cache_dir: Optional[Union[str, Path]], - force_download: bool, - proxies: Optional[Dict], - resume_download: bool, - local_files_only: bool, - token: Optional[Union[str, bool]], - **model_kwargs, - ) -> T: - """Loads the dialogue template from a local directory or the Huggingface Hub. - Args: - model_id (`str`): - ID of the model to load from the Huggingface Hub (e.g. `bigscience/bloom`). - revision (`str`, *optional*): - Revision of the model on the Hub. 
Can be a branch name, a git tag or any commit id. Defaults to the - latest commit on `main` branch. - force_download (`bool`, *optional*, defaults to `False`): - Whether to force (re-)downloading the model weights and configuration files from the Hub, overriding - the existing cache. - resume_download (`bool`, *optional*, defaults to `False`): - Whether to delete incompletely received files. Will attempt to resume the download if such a file exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint (e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`). - token (`str` or `bool`, *optional*): - The token to use as HTTP bearer authorization for remote files. By default, it will use the token - cached when running `huggingface-cli login`. - cache_dir (`str`, `Path`, *optional*): - Path to the folder where cached files are stored. - local_files_only (`bool`, *optional*, defaults to `False`): - If `True`, avoid downloading the file and return the path to the local cached file if it exists. - model_kwargs: - Additional keyword arguments passed along to the [`~ModelHubMixin._from_pretrained`] method. - """ - if os.path.isdir(model_id): # Can either be a local directory - print("Loading dialogue template from local directory") - template_file = os.path.join(model_id, TEMPLATE_FILENAME) - else: # Or a template on the Hub - template_file = hf_hub_download( # Download from the hub, passing same input args - repo_id=model_id, - filename=TEMPLATE_FILENAME, - revision=revision, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - token=token, - local_files_only=local_files_only, - ) - - # Load template - with open(template_file, "r") as f: - data = json.load(f) - return cls.from_dict(data=data) - - -# A shortened version of the system message in Anthropic's HHH prompt: https://gist.github.com/jareddk/2509330f8ef3d787fc5aaac67aab5f11#file-hhh_prompt-txt -default_template = DialogueTemplate( - system="A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.", -) - -# OpenAI and OpenAssistant train on few to no system messages. -# TODO: consider defining this as the `default` template -no_system_template = DialogueTemplate( - system="", -) - -alpaca_template = DialogueTemplate( - system="A chat between a curious human and an artificial intelligence assistant. 
The assistant gives helpful, detailed, and polite answers to the user's questions.", - user_token="### Instruction:", - assistant_token="### Response:", -) - -SUPPORTED_DIALOGUE_TEMPLATES = { - "default": default_template, - "no_system": no_system_template, - "alpaca": alpaca_template, -} - - -def get_dialogue_template(template: str) -> DialogueTemplate: - if template not in SUPPORTED_DIALOGUE_TEMPLATES.keys(): - raise ValueError(f"Template {template} is not supported!") - return SUPPORTED_DIALOGUE_TEMPLATES[template].copy() - - -def prepare_dialogue(example, dialogue_template, is_train=True): - """Format example to single- or multi-turn dialogue.""" - # TODO: make this simpler by just ensuring every dataset has a messages column - if "messages" in example.keys() and example["messages"] is not None: - dialogue_template.messages = example["messages"] - elif all(k in example.keys() for k in ("prompt", "completion")): - # Construct single-turn dialogue from prompt and completion - dialogue_template.messages = [ - {"role": "user", "content": example["prompt"]}, - {"role": "assistant", "content": example["completion"]}, - ] - elif "prompt" in example.keys(): - # Construct single-turn dialogue from prompt (inference only) - dialogue_template.messages = [ - {"role": "user", "content": example["prompt"]}, - ] - else: - raise ValueError( - f"Could not format example as dialogue! Require either `messages` or `[prompt, completion]` or `[prompt]` keys but found {list(example.keys())}" - ) - if is_train: - example["text"] = dialogue_template.get_training_prompt() - else: - example["text"] = dialogue_template.get_inference_prompt() - return example - - -def mask_user_labels(tokenizer, dialogue_template, labels): - """Masks the user turns of a dialogue from the loss""" - user_token_id = tokenizer.convert_tokens_to_ids(dialogue_template.user_token) - assistant_token_id = tokenizer.convert_tokens_to_ids(dialogue_template.assistant_token) - for idx, label_id in enumerate(labels): - if label_id == user_token_id: - current_idx = idx - while labels[current_idx] != assistant_token_id and current_idx < len(labels): - labels[current_idx] = IGNORE_INDEX - current_idx += 1 \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/models/unet.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/models/unet.md deleted file mode 100644 index 9a488a3231a658ddc81b5c31636f208d768038a8..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/models/unet.md +++ /dev/null @@ -1,13 +0,0 @@ -# UNet1DModel - -The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it's number of dimensions and whether it is a conditional model or not. This is a 1D UNet model. - -The abstract from the paper is: - -*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. 
The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.* - -## UNet1DModel -[[autodoc]] UNet1DModel - -## UNet1DOutput -[[autodoc]] models.unet_1d.UNet1DOutput \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/dance_diffusion.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/dance_diffusion.md deleted file mode 100644 index 1510454d178f0c97b5b3e63d2f4f576c547e6a82..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/dance_diffusion.md +++ /dev/null @@ -1,33 +0,0 @@ - - -# Dance Diffusion - -[Dance Diffusion](https://github.com/Harmonai-org/sample-generator) is by Zach Evans. - -Dance Diffusion is the first in a suite of generative audio tools for producers and musicians released by [Harmonai](https://github.com/Harmonai-org). - -The original codebase of this implementation can be found at [Harmonai-org](https://github.com/Harmonai-org/sample-generator). - - - -Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. 
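For example, here is a minimal generation sketch (the `harmonai/maestro-150k` checkpoint is one of the Harmonai releases on the Hub; the clip length and device are illustrative):

```python
import torch
import scipy.io.wavfile
from diffusers import DanceDiffusionPipeline

# Load a pretrained Dance Diffusion checkpoint and move it to the GPU
pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Generate roughly 4 seconds of audio; `audios` has shape (batch, channels, samples)
audios = pipe(audio_length_in_s=4.0).audios

# Save the first clip at the model's native sample rate
scipy.io.wavfile.write("dance_diffusion.wav", pipe.unet.config.sample_rate, audios[0].T)
```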
- - - -## DanceDiffusionPipeline -[[autodoc]] DanceDiffusionPipeline - - all - - __call__ - -## AudioPipelineOutput -[[autodoc]] pipelines.AudioPipelineOutput \ No newline at end of file diff --git a/spaces/phamson02/tho_ai/README.md b/spaces/phamson02/tho_ai/README.md deleted file mode 100644 index 40a7faa6a0c9004dfe8d72d1d883083d303fb697..0000000000000000000000000000000000000000 --- a/spaces/phamson02/tho_ai/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Tho Ai -emoji: 🔥 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: cc-by-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/install.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/install.py deleted file mode 100644 index f6a300804f4a99eb79b4f3a1ee676251c30e629f..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/install.py +++ /dev/null @@ -1,778 +0,0 @@ -import errno -import json -import operator -import os -import shutil -import site -from optparse import SUPPRESS_HELP, Values -from typing import List, Optional - -from pip._vendor.rich import print_json - -from pip._internal.cache import WheelCache -from pip._internal.cli import cmdoptions -from pip._internal.cli.cmdoptions import make_target_python -from pip._internal.cli.req_command import ( - RequirementCommand, - warn_if_run_as_root, - with_cleanup, -) -from pip._internal.cli.status_codes import ERROR, SUCCESS -from pip._internal.exceptions import CommandError, InstallationError -from pip._internal.locations import get_scheme -from pip._internal.metadata import get_environment -from pip._internal.models.installation_report import InstallationReport -from pip._internal.operations.build.build_tracker import get_build_tracker -from pip._internal.operations.check import ConflictDetails, check_install_conflicts -from pip._internal.req import install_given_reqs -from pip._internal.req.req_install import ( - InstallRequirement, - check_legacy_setup_py_options, -) -from pip._internal.utils.compat import WINDOWS -from pip._internal.utils.filesystem import test_writable_dir -from pip._internal.utils.logging import getLogger -from pip._internal.utils.misc import ( - check_externally_managed, - ensure_dir, - get_pip_version, - protect_pip_from_modification_on_windows, - write_output, -) -from pip._internal.utils.temp_dir import TempDirectory -from pip._internal.utils.virtualenv import ( - running_under_virtualenv, - virtualenv_no_global, -) -from pip._internal.wheel_builder import build, should_build_for_install_command - -logger = getLogger(__name__) - - -class InstallCommand(RequirementCommand): - """ - Install packages from: - - - PyPI (and other indexes) using requirement specifiers. - - VCS project urls. - - Local project directories. - - Local or remote source archives. - - pip also supports installing from "requirements files", which provide - an easy way to specify a whole environment to be installed. - """ - - usage = """ - %prog [options] [package-index-options] ... - %prog [options] -r [package-index-options] ... - %prog [options] [-e] ... - %prog [options] [-e] ... 
- %prog [options] ...""" - - def add_options(self) -> None: - self.cmd_opts.add_option(cmdoptions.requirements()) - self.cmd_opts.add_option(cmdoptions.constraints()) - self.cmd_opts.add_option(cmdoptions.no_deps()) - self.cmd_opts.add_option(cmdoptions.pre()) - - self.cmd_opts.add_option(cmdoptions.editable()) - self.cmd_opts.add_option( - "--dry-run", - action="store_true", - dest="dry_run", - default=False, - help=( - "Don't actually install anything, just print what would be. " - "Can be used in combination with --ignore-installed " - "to 'resolve' the requirements." - ), - ) - self.cmd_opts.add_option( - "-t", - "--target", - dest="target_dir", - metavar="dir", - default=None, - help=( - "Install packages into . " - "By default this will not replace existing files/folders in " - ". Use --upgrade to replace existing packages in " - "with new versions." - ), - ) - cmdoptions.add_target_python_options(self.cmd_opts) - - self.cmd_opts.add_option( - "--user", - dest="use_user_site", - action="store_true", - help=( - "Install to the Python user install directory for your " - "platform. Typically ~/.local/, or %APPDATA%\\Python on " - "Windows. (See the Python documentation for site.USER_BASE " - "for full details.)" - ), - ) - self.cmd_opts.add_option( - "--no-user", - dest="use_user_site", - action="store_false", - help=SUPPRESS_HELP, - ) - self.cmd_opts.add_option( - "--root", - dest="root_path", - metavar="dir", - default=None, - help="Install everything relative to this alternate root directory.", - ) - self.cmd_opts.add_option( - "--prefix", - dest="prefix_path", - metavar="dir", - default=None, - help=( - "Installation prefix where lib, bin and other top-level " - "folders are placed. Note that the resulting installation may " - "contain scripts and other resources which reference the " - "Python interpreter of pip, and not that of ``--prefix``. " - "See also the ``--python`` option if the intention is to " - "install packages into another (possibly pip-free) " - "environment." - ), - ) - - self.cmd_opts.add_option(cmdoptions.src()) - - self.cmd_opts.add_option( - "-U", - "--upgrade", - dest="upgrade", - action="store_true", - help=( - "Upgrade all specified packages to the newest available " - "version. The handling of dependencies depends on the " - "upgrade-strategy used." - ), - ) - - self.cmd_opts.add_option( - "--upgrade-strategy", - dest="upgrade_strategy", - default="only-if-needed", - choices=["only-if-needed", "eager"], - help=( - "Determines how dependency upgrading should be handled " - "[default: %default]. " - '"eager" - dependencies are upgraded regardless of ' - "whether the currently installed version satisfies the " - "requirements of the upgraded package(s). " - '"only-if-needed" - are upgraded only when they do not ' - "satisfy the requirements of the upgraded package(s)." - ), - ) - - self.cmd_opts.add_option( - "--force-reinstall", - dest="force_reinstall", - action="store_true", - help="Reinstall all packages even if they are already up-to-date.", - ) - - self.cmd_opts.add_option( - "-I", - "--ignore-installed", - dest="ignore_installed", - action="store_true", - help=( - "Ignore the installed packages, overwriting them. " - "This can break your system if the existing package " - "is of a different version or was installed " - "with a different package manager!" 
- ), - ) - - self.cmd_opts.add_option(cmdoptions.ignore_requires_python()) - self.cmd_opts.add_option(cmdoptions.no_build_isolation()) - self.cmd_opts.add_option(cmdoptions.use_pep517()) - self.cmd_opts.add_option(cmdoptions.no_use_pep517()) - self.cmd_opts.add_option(cmdoptions.check_build_deps()) - self.cmd_opts.add_option(cmdoptions.override_externally_managed()) - - self.cmd_opts.add_option(cmdoptions.config_settings()) - self.cmd_opts.add_option(cmdoptions.global_options()) - - self.cmd_opts.add_option( - "--compile", - action="store_true", - dest="compile", - default=True, - help="Compile Python source files to bytecode", - ) - - self.cmd_opts.add_option( - "--no-compile", - action="store_false", - dest="compile", - help="Do not compile Python source files to bytecode", - ) - - self.cmd_opts.add_option( - "--no-warn-script-location", - action="store_false", - dest="warn_script_location", - default=True, - help="Do not warn when installing scripts outside PATH", - ) - self.cmd_opts.add_option( - "--no-warn-conflicts", - action="store_false", - dest="warn_about_conflicts", - default=True, - help="Do not warn about broken dependencies", - ) - self.cmd_opts.add_option(cmdoptions.no_binary()) - self.cmd_opts.add_option(cmdoptions.only_binary()) - self.cmd_opts.add_option(cmdoptions.prefer_binary()) - self.cmd_opts.add_option(cmdoptions.require_hashes()) - self.cmd_opts.add_option(cmdoptions.progress_bar()) - self.cmd_opts.add_option(cmdoptions.root_user_action()) - - index_opts = cmdoptions.make_option_group( - cmdoptions.index_group, - self.parser, - ) - - self.parser.insert_option_group(0, index_opts) - self.parser.insert_option_group(0, self.cmd_opts) - - self.cmd_opts.add_option( - "--report", - dest="json_report_file", - metavar="file", - default=None, - help=( - "Generate a JSON file describing what pip did to install " - "the provided requirements. " - "Can be used in combination with --dry-run and --ignore-installed " - "to 'resolve' the requirements. " - "When - is used as file name it writes to stdout. " - "When writing to stdout, please combine with the --quiet option " - "to avoid mixing pip logging output with JSON output." - ), - ) - - @with_cleanup - def run(self, options: Values, args: List[str]) -> int: - if options.use_user_site and options.target_dir is not None: - raise CommandError("Can not combine '--user' and '--target'") - - # Check whether the environment we're installing into is externally - # managed, as specified in PEP 668. Specifying --root, --target, or - # --prefix disables the check, since there's no reliable way to locate - # the EXTERNALLY-MANAGED file for those cases. An exception is also - # made specifically for "--dry-run --report" for convenience. 
- installing_into_current_environment = ( - not (options.dry_run and options.json_report_file) - and options.root_path is None - and options.target_dir is None - and options.prefix_path is None - ) - if ( - installing_into_current_environment - and not options.override_externally_managed - ): - check_externally_managed() - - upgrade_strategy = "to-satisfy-only" - if options.upgrade: - upgrade_strategy = options.upgrade_strategy - - cmdoptions.check_dist_restriction(options, check_target=True) - - logger.verbose("Using %s", get_pip_version()) - options.use_user_site = decide_user_install( - options.use_user_site, - prefix_path=options.prefix_path, - target_dir=options.target_dir, - root_path=options.root_path, - isolated_mode=options.isolated_mode, - ) - - target_temp_dir: Optional[TempDirectory] = None - target_temp_dir_path: Optional[str] = None - if options.target_dir: - options.ignore_installed = True - options.target_dir = os.path.abspath(options.target_dir) - if ( - # fmt: off - os.path.exists(options.target_dir) and - not os.path.isdir(options.target_dir) - # fmt: on - ): - raise CommandError( - "Target path exists but is not a directory, will not continue." - ) - - # Create a target directory for using with the target option - target_temp_dir = TempDirectory(kind="target") - target_temp_dir_path = target_temp_dir.path - self.enter_context(target_temp_dir) - - global_options = options.global_options or [] - - session = self.get_default_session(options) - - target_python = make_target_python(options) - finder = self._build_package_finder( - options=options, - session=session, - target_python=target_python, - ignore_requires_python=options.ignore_requires_python, - ) - build_tracker = self.enter_context(get_build_tracker()) - - directory = TempDirectory( - delete=not options.no_clean, - kind="install", - globally_managed=True, - ) - - try: - reqs = self.get_requirements(args, options, finder, session) - check_legacy_setup_py_options(options, reqs) - - wheel_cache = WheelCache(options.cache_dir) - - # Only when installing is it permitted to use PEP 660. - # In other circumstances (pip wheel, pip download) we generate - # regular (i.e. non editable) metadata and wheels. - for req in reqs: - req.permit_editable_wheels = True - - preparer = self.make_requirement_preparer( - temp_build_dir=directory, - options=options, - build_tracker=build_tracker, - session=session, - finder=finder, - use_user_site=options.use_user_site, - verbosity=self.verbosity, - ) - resolver = self.make_resolver( - preparer=preparer, - finder=finder, - options=options, - wheel_cache=wheel_cache, - use_user_site=options.use_user_site, - ignore_installed=options.ignore_installed, - ignore_requires_python=options.ignore_requires_python, - force_reinstall=options.force_reinstall, - upgrade_strategy=upgrade_strategy, - use_pep517=options.use_pep517, - ) - - self.trace_basic_info(finder) - - requirement_set = resolver.resolve( - reqs, check_supported_wheels=not options.target_dir - ) - - if options.json_report_file: - report = InstallationReport(requirement_set.requirements_to_install) - if options.json_report_file == "-": - print_json(data=report.to_dict()) - else: - with open(options.json_report_file, "w", encoding="utf-8") as f: - json.dump(report.to_dict(), f, indent=2, ensure_ascii=False) - - if options.dry_run: - # In non dry-run mode, the legacy versions and specifiers check - # will be done as part of conflict detection. 
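-                # Example (hypothetical package name) of an invocation that
-                # exercises this dry-run branch:
-                #   pip install --dry-run --report - --quiet somepackage
-                # resolves "somepackage", writes the JSON report to stdout,
-                # and installs nothing.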
- requirement_set.warn_legacy_versions_and_specifiers() - would_install_items = sorted( - (r.metadata["name"], r.metadata["version"]) - for r in requirement_set.requirements_to_install - ) - if would_install_items: - write_output( - "Would install %s", - " ".join("-".join(item) for item in would_install_items), - ) - return SUCCESS - - try: - pip_req = requirement_set.get_requirement("pip") - except KeyError: - modifying_pip = False - else: - # If we're not replacing an already installed pip, - # we're not modifying it. - modifying_pip = pip_req.satisfied_by is None - protect_pip_from_modification_on_windows(modifying_pip=modifying_pip) - - reqs_to_build = [ - r - for r in requirement_set.requirements.values() - if should_build_for_install_command(r) - ] - - _, build_failures = build( - reqs_to_build, - wheel_cache=wheel_cache, - verify=True, - build_options=[], - global_options=global_options, - ) - - if build_failures: - raise InstallationError( - "Could not build wheels for {}, which is required to " - "install pyproject.toml-based projects".format( - ", ".join(r.name for r in build_failures) # type: ignore - ) - ) - - to_install = resolver.get_installation_order(requirement_set) - - # Check for conflicts in the package set we're installing. - conflicts: Optional[ConflictDetails] = None - should_warn_about_conflicts = ( - not options.ignore_dependencies and options.warn_about_conflicts - ) - if should_warn_about_conflicts: - conflicts = self._determine_conflicts(to_install) - - # Don't warn about script install locations if - # --target or --prefix has been specified - warn_script_location = options.warn_script_location - if options.target_dir or options.prefix_path: - warn_script_location = False - - installed = install_given_reqs( - to_install, - global_options, - root=options.root_path, - home=target_temp_dir_path, - prefix=options.prefix_path, - warn_script_location=warn_script_location, - use_user_site=options.use_user_site, - pycompile=options.compile, - ) - - lib_locations = get_lib_location_guesses( - user=options.use_user_site, - home=target_temp_dir_path, - root=options.root_path, - prefix=options.prefix_path, - isolated=options.isolated_mode, - ) - env = get_environment(lib_locations) - - installed.sort(key=operator.attrgetter("name")) - items = [] - for result in installed: - item = result.name - try: - installed_dist = env.get_distribution(item) - if installed_dist is not None: - item = f"{item}-{installed_dist.version}" - except Exception: - pass - items.append(item) - - if conflicts is not None: - self._warn_about_conflicts( - conflicts, - resolver_variant=self.determine_resolver_variant(options), - ) - - installed_desc = " ".join(items) - if installed_desc: - write_output( - "Successfully installed %s", - installed_desc, - ) - except OSError as error: - show_traceback = self.verbosity >= 1 - - message = create_os_error_message( - error, - show_traceback, - options.use_user_site, - ) - logger.error(message, exc_info=show_traceback) # noqa - - return ERROR - - if options.target_dir: - assert target_temp_dir - self._handle_target_dir( - options.target_dir, target_temp_dir, options.upgrade - ) - if options.root_user_action == "warn": - warn_if_run_as_root() - return SUCCESS - - def _handle_target_dir( - self, target_dir: str, target_temp_dir: TempDirectory, upgrade: bool - ) -> None: - ensure_dir(target_dir) - - # Checking both purelib and platlib directories for installed - # packages to be moved to target directory - lib_dir_list = [] - - # Checking both purelib and 
platlib directories for installed - # packages to be moved to target directory - scheme = get_scheme("", home=target_temp_dir.path) - purelib_dir = scheme.purelib - platlib_dir = scheme.platlib - data_dir = scheme.data - - if os.path.exists(purelib_dir): - lib_dir_list.append(purelib_dir) - if os.path.exists(platlib_dir) and platlib_dir != purelib_dir: - lib_dir_list.append(platlib_dir) - if os.path.exists(data_dir): - lib_dir_list.append(data_dir) - - for lib_dir in lib_dir_list: - for item in os.listdir(lib_dir): - if lib_dir == data_dir: - ddir = os.path.join(data_dir, item) - if any(s.startswith(ddir) for s in lib_dir_list[:-1]): - continue - target_item_dir = os.path.join(target_dir, item) - if os.path.exists(target_item_dir): - if not upgrade: - logger.warning( - "Target directory %s already exists. Specify " - "--upgrade to force replacement.", - target_item_dir, - ) - continue - if os.path.islink(target_item_dir): - logger.warning( - "Target directory %s already exists and is " - "a link. pip will not automatically replace " - "links, please remove if replacement is " - "desired.", - target_item_dir, - ) - continue - if os.path.isdir(target_item_dir): - shutil.rmtree(target_item_dir) - else: - os.remove(target_item_dir) - - shutil.move(os.path.join(lib_dir, item), target_item_dir) - - def _determine_conflicts( - self, to_install: List[InstallRequirement] - ) -> Optional[ConflictDetails]: - try: - return check_install_conflicts(to_install) - except Exception: - logger.exception( - "Error while checking for conflicts. Please file an issue on " - "pip's issue tracker: https://github.com/pypa/pip/issues/new" - ) - return None - - def _warn_about_conflicts( - self, conflict_details: ConflictDetails, resolver_variant: str - ) -> None: - package_set, (missing, conflicting) = conflict_details - if not missing and not conflicting: - return - - parts: List[str] = [] - if resolver_variant == "legacy": - parts.append( - "pip's legacy dependency resolver does not consider dependency " - "conflicts when selecting packages. This behaviour is the " - "source of the following dependency conflicts." - ) - else: - assert resolver_variant == "2020-resolver" - parts.append( - "pip's dependency resolver does not currently take into account " - "all the packages that are installed. This behaviour is the " - "source of the following dependency conflicts." - ) - - # NOTE: There is some duplication here, with commands/check.py - for project_name in missing: - version = package_set[project_name][0] - for dependency in missing[project_name]: - message = ( - "{name} {version} requires {requirement}, " - "which is not installed." - ).format( - name=project_name, - version=version, - requirement=dependency[1], - ) - parts.append(message) - - for project_name in conflicting: - version = package_set[project_name][0] - for dep_name, dep_version, req in conflicting[project_name]: - message = ( - "{name} {version} requires {requirement}, but {you} have " - "{dep_name} {dep_version} which is incompatible." 
- ).format( - name=project_name, - version=version, - requirement=req, - dep_name=dep_name, - dep_version=dep_version, - you=("you" if resolver_variant == "2020-resolver" else "you'll"), - ) - parts.append(message) - - logger.critical("\n".join(parts)) - - -def get_lib_location_guesses( - user: bool = False, - home: Optional[str] = None, - root: Optional[str] = None, - isolated: bool = False, - prefix: Optional[str] = None, -) -> List[str]: - scheme = get_scheme( - "", - user=user, - home=home, - root=root, - isolated=isolated, - prefix=prefix, - ) - return [scheme.purelib, scheme.platlib] - - -def site_packages_writable(root: Optional[str], isolated: bool) -> bool: - return all( - test_writable_dir(d) - for d in set(get_lib_location_guesses(root=root, isolated=isolated)) - ) - - -def decide_user_install( - use_user_site: Optional[bool], - prefix_path: Optional[str] = None, - target_dir: Optional[str] = None, - root_path: Optional[str] = None, - isolated_mode: bool = False, -) -> bool: - """Determine whether to do a user install based on the input options. - - If use_user_site is False, no additional checks are done. - If use_user_site is True, it is checked for compatibility with other - options. - If use_user_site is None, the default behaviour depends on the environment, - which is provided by the other arguments. - """ - # In some cases (config from tox), use_user_site can be set to an integer - # rather than a bool, which 'use_user_site is False' wouldn't catch. - if (use_user_site is not None) and (not use_user_site): - logger.debug("Non-user install by explicit request") - return False - - if use_user_site: - if prefix_path: - raise CommandError( - "Can not combine '--user' and '--prefix' as they imply " - "different installation locations" - ) - if virtualenv_no_global(): - raise InstallationError( - "Can not perform a '--user' install. User site-packages " - "are not visible in this virtualenv." - ) - logger.debug("User install by explicit request") - return True - - # If we are here, user installs have not been explicitly requested/avoided - assert use_user_site is None - - # user install incompatible with --prefix/--target - if prefix_path or target_dir: - logger.debug("Non-user install due to --prefix or --target option") - return False - - # If user installs are not enabled, choose a non-user install - if not site.ENABLE_USER_SITE: - logger.debug("Non-user install because user site-packages disabled") - return False - - # If we have permission for a non-user install, do that, - # otherwise do a user install. - if site_packages_writable(root=root_path, isolated=isolated_mode): - logger.debug("Non-user install because site-packages writeable") - return False - - logger.info( - "Defaulting to user installation because normal site-packages " - "is not writeable" - ) - return True - - -def create_os_error_message( - error: OSError, show_traceback: bool, using_user_site: bool -) -> str: - """Format an error message for an OSError - - It may occur anytime during the execution of the install command. 
- """ - parts = [] - - # Mention the error if we are not going to show a traceback - parts.append("Could not install packages due to an OSError") - if not show_traceback: - parts.append(": ") - parts.append(str(error)) - else: - parts.append(".") - - # Spilt the error indication from a helper message (if any) - parts[-1] += "\n" - - # Suggest useful actions to the user: - # (1) using user site-packages or (2) verifying the permissions - if error.errno == errno.EACCES: - user_option_part = "Consider using the `--user` option" - permissions_part = "Check the permissions" - - if not running_under_virtualenv() and not using_user_site: - parts.extend( - [ - user_option_part, - " or ", - permissions_part.lower(), - ] - ) - else: - parts.append(permissions_part) - parts.append(".\n") - - # Suggest the user to enable Long Paths if path length is - # more than 260 - if ( - WINDOWS - and error.errno == errno.ENOENT - and error.filename - and len(error.filename) > 260 - ): - parts.append( - "HINT: This error might have occurred since " - "this system does not have Windows Long Path " - "support enabled. You can find information on " - "how to enable this at " - "https://pip.pypa.io/warnings/enable-long-paths\n" - ) - - return "".join(parts).strip() + "\n" diff --git a/spaces/plzdontcry/dakubettergpt/src/constants/color.ts b/spaces/plzdontcry/dakubettergpt/src/constants/color.ts deleted file mode 100644 index c9a823c4931a312e175e9ff5473662c846401c61..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/constants/color.ts +++ /dev/null @@ -1,7 +0,0 @@ -export const folderColorOptions = [ - '#be123c', // rose-700 - '#6d28d9', // violet-700 - '#0369a1', // sky-700 - '#047857', // emerald-700 - '#b45309', // amber-700 -]; diff --git a/spaces/portal/Top-20/style.css b/spaces/portal/Top-20/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/portal/Top-20/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/power2/JoJoGan-powerhow2/e4e/training/coach.py b/spaces/power2/JoJoGan-powerhow2/e4e/training/coach.py deleted file mode 100644 index 4c99da79e699c9362e02c289cd1425848d331d0b..0000000000000000000000000000000000000000 --- a/spaces/power2/JoJoGan-powerhow2/e4e/training/coach.py +++ /dev/null @@ -1,437 +0,0 @@ -import os -import random -import matplotlib -import matplotlib.pyplot as plt - -matplotlib.use('Agg') - -import torch -from torch import nn, autograd -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.nn.functional as F - -from utils import common, train_utils -from criteria import id_loss, moco_loss -from configs import data_configs -from datasets.images_dataset import ImagesDataset -from criteria.lpips.lpips import LPIPS -from models.psp import pSp -from models.latent_codes_pool import LatentCodesPool -from models.discriminator import LatentCodesDiscriminator -from models.encoders.psp_encoders import ProgressiveStage -from training.ranger import Ranger - -random.seed(0) -torch.manual_seed(0) - - -class Coach: - def 
__init__(self, opts, prev_train_checkpoint=None): - self.opts = opts - - self.global_step = 0 - - self.device = 'cuda:0' - self.opts.device = self.device - # Initialize network - self.net = pSp(self.opts).to(self.device) - - # Initialize loss - if self.opts.lpips_lambda > 0: - self.lpips_loss = LPIPS(net_type=self.opts.lpips_type).to(self.device).eval() - if self.opts.id_lambda > 0: - if 'ffhq' in self.opts.dataset_type or 'celeb' in self.opts.dataset_type: - self.id_loss = id_loss.IDLoss().to(self.device).eval() - else: - self.id_loss = moco_loss.MocoLoss(opts).to(self.device).eval() - self.mse_loss = nn.MSELoss().to(self.device).eval() - - # Initialize optimizer - self.optimizer = self.configure_optimizers() - - # Initialize discriminator - if self.opts.w_discriminator_lambda > 0: - self.discriminator = LatentCodesDiscriminator(512, 4).to(self.device) - self.discriminator_optimizer = torch.optim.Adam(list(self.discriminator.parameters()), - lr=opts.w_discriminator_lr) - self.real_w_pool = LatentCodesPool(self.opts.w_pool_size) - self.fake_w_pool = LatentCodesPool(self.opts.w_pool_size) - - # Initialize dataset - self.train_dataset, self.test_dataset = self.configure_datasets() - self.train_dataloader = DataLoader(self.train_dataset, - batch_size=self.opts.batch_size, - shuffle=True, - num_workers=int(self.opts.workers), - drop_last=True) - self.test_dataloader = DataLoader(self.test_dataset, - batch_size=self.opts.test_batch_size, - shuffle=False, - num_workers=int(self.opts.test_workers), - drop_last=True) - - # Initialize logger - log_dir = os.path.join(opts.exp_dir, 'logs') - os.makedirs(log_dir, exist_ok=True) - self.logger = SummaryWriter(log_dir=log_dir) - - # Initialize checkpoint dir - self.checkpoint_dir = os.path.join(opts.exp_dir, 'checkpoints') - os.makedirs(self.checkpoint_dir, exist_ok=True) - self.best_val_loss = None - if self.opts.save_interval is None: - self.opts.save_interval = self.opts.max_steps - - if prev_train_checkpoint is not None: - self.load_from_train_checkpoint(prev_train_checkpoint) - prev_train_checkpoint = None - - def load_from_train_checkpoint(self, ckpt): - print('Loading previous training data...') - self.global_step = ckpt['global_step'] + 1 - self.best_val_loss = ckpt['best_val_loss'] - self.net.load_state_dict(ckpt['state_dict']) - - if self.opts.keep_optimizer: - self.optimizer.load_state_dict(ckpt['optimizer']) - if self.opts.w_discriminator_lambda > 0: - self.discriminator.load_state_dict(ckpt['discriminator_state_dict']) - self.discriminator_optimizer.load_state_dict(ckpt['discriminator_optimizer_state_dict']) - if self.opts.progressive_steps: - self.check_for_progressive_training_update(is_resume_from_ckpt=True) - print(f'Resuming training from step {self.global_step}') - - def train(self): - self.net.train() - if self.opts.progressive_steps: - self.check_for_progressive_training_update() - while self.global_step < self.opts.max_steps: - for batch_idx, batch in enumerate(self.train_dataloader): - loss_dict = {} - if self.is_training_discriminator(): - loss_dict = self.train_discriminator(batch) - x, y, y_hat, latent = self.forward(batch) - loss, encoder_loss_dict, id_logs = self.calc_loss(x, y, y_hat, latent) - loss_dict = {**loss_dict, **encoder_loss_dict} - self.optimizer.zero_grad() - loss.backward() - self.optimizer.step() - - # Logging related - if self.global_step % self.opts.image_interval == 0 or ( - self.global_step < 1000 and self.global_step % 25 == 0): - self.parse_and_log_images(id_logs, x, y, y_hat, title='images/train/faces') - 
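-                # Logging cadence: image_interval controls how often the
-                # input/target/output face grid above is written (every 25
-                # steps during the first 1000); board_interval below controls
-                # how often scalar losses are printed and sent to TensorBoard.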
                if self.global_step % self.opts.board_interval == 0:
-                    self.print_metrics(loss_dict, prefix='train')
-                    self.log_metrics(loss_dict, prefix='train')
-
-                # Validation related
-                val_loss_dict = None
-                if self.global_step % self.opts.val_interval == 0 or self.global_step == self.opts.max_steps:
-                    val_loss_dict = self.validate()
-                    if val_loss_dict and (self.best_val_loss is None or val_loss_dict['loss'] < self.best_val_loss):
-                        self.best_val_loss = val_loss_dict['loss']
-                        self.checkpoint_me(val_loss_dict, is_best=True)
-
-                if self.global_step % self.opts.save_interval == 0 or self.global_step == self.opts.max_steps:
-                    if val_loss_dict is not None:
-                        self.checkpoint_me(val_loss_dict, is_best=False)
-                    else:
-                        self.checkpoint_me(loss_dict, is_best=False)
-
-                if self.global_step == self.opts.max_steps:
-                    print('OMG, finished training!')
-                    break
-
-                self.global_step += 1
-                if self.opts.progressive_steps:
-                    self.check_for_progressive_training_update()
-
-    def check_for_progressive_training_update(self, is_resume_from_ckpt=False):
-        for i in range(len(self.opts.progressive_steps)):
-            if is_resume_from_ckpt and self.global_step >= self.opts.progressive_steps[i]:  # Case checkpoint
-                self.net.encoder.set_progressive_stage(ProgressiveStage(i))
-            if self.global_step == self.opts.progressive_steps[i]:  # Case training reached progressive step
-                self.net.encoder.set_progressive_stage(ProgressiveStage(i))
-
-    def validate(self):
-        self.net.eval()
-        agg_loss_dict = []
-        for batch_idx, batch in enumerate(self.test_dataloader):
-            cur_loss_dict = {}
-            if self.is_training_discriminator():
-                cur_loss_dict = self.validate_discriminator(batch)
-            with torch.no_grad():
-                x, y, y_hat, latent = self.forward(batch)
-                loss, cur_encoder_loss_dict, id_logs = self.calc_loss(x, y, y_hat, latent)
-                cur_loss_dict = {**cur_loss_dict, **cur_encoder_loss_dict}
-            agg_loss_dict.append(cur_loss_dict)
-
-            # Logging related
-            self.parse_and_log_images(id_logs, x, y, y_hat,
-                                      title='images/test/faces',
-                                      subscript='{:04d}'.format(batch_idx))
-
-            # For first step just do sanity test on small amount of data
-            if self.global_step == 0 and batch_idx >= 4:
-                self.net.train()
-                return None  # Do not log, inaccurate in first batch
-
-        loss_dict = train_utils.aggregate_loss_dict(agg_loss_dict)
-        self.log_metrics(loss_dict, prefix='test')
-        self.print_metrics(loss_dict, prefix='test')
-
-        self.net.train()
-        return loss_dict
-
-    def checkpoint_me(self, loss_dict, is_best):
-        save_name = 'best_model.pt' if is_best else 'iteration_{}.pt'.format(self.global_step)
-        save_dict = self.__get_save_dict()
-        checkpoint_path = os.path.join(self.checkpoint_dir, save_name)
-        torch.save(save_dict, checkpoint_path)
-        with open(os.path.join(self.checkpoint_dir, 'timestamp.txt'), 'a') as f:
-            if is_best:
-                f.write(
-                    '**Best**: Step - {}, Loss - {:.3f} \n{}\n'.format(self.global_step, self.best_val_loss, loss_dict))
-            else:
-                f.write('Step - {}, \n{}\n'.format(self.global_step, loss_dict))
-
-    def configure_optimizers(self):
-        params = list(self.net.encoder.parameters())
-        if self.opts.train_decoder:
-            params += list(self.net.decoder.parameters())
-        else:
-            self.requires_grad(self.net.decoder, False)
-        if self.opts.optim_name == 'adam':
-            optimizer = torch.optim.Adam(params, lr=self.opts.learning_rate)
-        else:
-            optimizer = Ranger(params, lr=self.opts.learning_rate)
-        return optimizer
-
-    def configure_datasets(self):
-        if self.opts.dataset_type not in data_configs.DATASETS.keys():
-            raise Exception('{} is not a valid dataset_type'.format(self.opts.dataset_type))
-
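-        # data_configs.DATASETS maps each dataset_type to its transforms and
-        # train/test source and target roots; the keys read below
-        # ('transforms', 'train_source_root', ...) follow that layout.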
print('Loading dataset for {}'.format(self.opts.dataset_type)) - dataset_args = data_configs.DATASETS[self.opts.dataset_type] - transforms_dict = dataset_args['transforms'](self.opts).get_transforms() - train_dataset = ImagesDataset(source_root=dataset_args['train_source_root'], - target_root=dataset_args['train_target_root'], - source_transform=transforms_dict['transform_source'], - target_transform=transforms_dict['transform_gt_train'], - opts=self.opts) - test_dataset = ImagesDataset(source_root=dataset_args['test_source_root'], - target_root=dataset_args['test_target_root'], - source_transform=transforms_dict['transform_source'], - target_transform=transforms_dict['transform_test'], - opts=self.opts) - print("Number of training samples: {}".format(len(train_dataset))) - print("Number of test samples: {}".format(len(test_dataset))) - return train_dataset, test_dataset - - def calc_loss(self, x, y, y_hat, latent): - loss_dict = {} - loss = 0.0 - id_logs = None - if self.is_training_discriminator(): # Adversarial loss - loss_disc = 0. - dims_to_discriminate = self.get_dims_to_discriminate() if self.is_progressive_training() else \ - list(range(self.net.decoder.n_latent)) - - for i in dims_to_discriminate: - w = latent[:, i, :] - fake_pred = self.discriminator(w) - loss_disc += F.softplus(-fake_pred).mean() - loss_disc /= len(dims_to_discriminate) - loss_dict['encoder_discriminator_loss'] = float(loss_disc) - loss += self.opts.w_discriminator_lambda * loss_disc - - if self.opts.progressive_steps and self.net.encoder.progressive_stage.value != 18: # delta regularization loss - total_delta_loss = 0 - deltas_latent_dims = self.net.encoder.get_deltas_starting_dimensions() - - first_w = latent[:, 0, :] - for i in range(1, self.net.encoder.progressive_stage.value + 1): - curr_dim = deltas_latent_dims[i] - delta = latent[:, curr_dim, :] - first_w - delta_loss = torch.norm(delta, self.opts.delta_norm, dim=1).mean() - loss_dict[f"delta{i}_loss"] = float(delta_loss) - total_delta_loss += delta_loss - loss_dict['total_delta_loss'] = float(total_delta_loss) - loss += self.opts.delta_norm_lambda * total_delta_loss - - if self.opts.id_lambda > 0: # Similarity loss - loss_id, sim_improvement, id_logs = self.id_loss(y_hat, y, x) - loss_dict['loss_id'] = float(loss_id) - loss_dict['id_improve'] = float(sim_improvement) - loss += loss_id * self.opts.id_lambda - if self.opts.l2_lambda > 0: - loss_l2 = F.mse_loss(y_hat, y) - loss_dict['loss_l2'] = float(loss_l2) - loss += loss_l2 * self.opts.l2_lambda - if self.opts.lpips_lambda > 0: - loss_lpips = self.lpips_loss(y_hat, y) - loss_dict['loss_lpips'] = float(loss_lpips) - loss += loss_lpips * self.opts.lpips_lambda - loss_dict['loss'] = float(loss) - return loss, loss_dict, id_logs - - def forward(self, batch): - x, y = batch - x, y = x.to(self.device).float(), y.to(self.device).float() - y_hat, latent = self.net.forward(x, return_latents=True) - if self.opts.dataset_type == "cars_encode": - y_hat = y_hat[:, :, 32:224, :] - return x, y, y_hat, latent - - def log_metrics(self, metrics_dict, prefix): - for key, value in metrics_dict.items(): - self.logger.add_scalar('{}/{}'.format(prefix, key), value, self.global_step) - - def print_metrics(self, metrics_dict, prefix): - print('Metrics for {}, step {}'.format(prefix, self.global_step)) - for key, value in metrics_dict.items(): - print('\t{} = '.format(key), value) - - def parse_and_log_images(self, id_logs, x, y, y_hat, title, subscript=None, display_count=2): - im_data = [] - for i in range(display_count): - 
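-            # One entry per displayed sample: source input, ground-truth
-            # target, and reconstruction, plus any per-image identity
-            # similarity stats passed in through id_logs.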
cur_im_data = { - 'input_face': common.log_input_image(x[i], self.opts), - 'target_face': common.tensor2im(y[i]), - 'output_face': common.tensor2im(y_hat[i]), - } - if id_logs is not None: - for key in id_logs[i]: - cur_im_data[key] = id_logs[i][key] - im_data.append(cur_im_data) - self.log_images(title, im_data=im_data, subscript=subscript) - - def log_images(self, name, im_data, subscript=None, log_latest=False): - fig = common.vis_faces(im_data) - step = self.global_step - if log_latest: - step = 0 - if subscript: - path = os.path.join(self.logger.log_dir, name, '{}_{:04d}.jpg'.format(subscript, step)) - else: - path = os.path.join(self.logger.log_dir, name, '{:04d}.jpg'.format(step)) - os.makedirs(os.path.dirname(path), exist_ok=True) - fig.savefig(path) - plt.close(fig) - - def __get_save_dict(self): - save_dict = { - 'state_dict': self.net.state_dict(), - 'opts': vars(self.opts) - } - # save the latent avg in state_dict for inference if truncation of w was used during training - if self.opts.start_from_latent_avg: - save_dict['latent_avg'] = self.net.latent_avg - - if self.opts.save_training_data: # Save necessary information to enable training continuation from checkpoint - save_dict['global_step'] = self.global_step - save_dict['optimizer'] = self.optimizer.state_dict() - save_dict['best_val_loss'] = self.best_val_loss - if self.opts.w_discriminator_lambda > 0: - save_dict['discriminator_state_dict'] = self.discriminator.state_dict() - save_dict['discriminator_optimizer_state_dict'] = self.discriminator_optimizer.state_dict() - return save_dict - - def get_dims_to_discriminate(self): - deltas_starting_dimensions = self.net.encoder.get_deltas_starting_dimensions() - return deltas_starting_dimensions[:self.net.encoder.progressive_stage.value + 1] - - def is_progressive_training(self): - return self.opts.progressive_steps is not None - -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Discriminator ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # - - def is_training_discriminator(self): - return self.opts.w_discriminator_lambda > 0 - - @staticmethod - def discriminator_loss(real_pred, fake_pred, loss_dict): - real_loss = F.softplus(-real_pred).mean() - fake_loss = F.softplus(fake_pred).mean() - - loss_dict['d_real_loss'] = float(real_loss) - loss_dict['d_fake_loss'] = float(fake_loss) - - return real_loss + fake_loss - - @staticmethod - def discriminator_r1_loss(real_pred, real_w): - grad_real, = autograd.grad( - outputs=real_pred.sum(), inputs=real_w, create_graph=True - ) - grad_penalty = grad_real.pow(2).reshape(grad_real.shape[0], -1).sum(1).mean() - - return grad_penalty - - @staticmethod - def requires_grad(model, flag=True): - for p in model.parameters(): - p.requires_grad = flag - - def train_discriminator(self, batch): - loss_dict = {} - x, _ = batch - x = x.to(self.device).float() - self.requires_grad(self.discriminator, True) - - with torch.no_grad(): - real_w, fake_w = self.sample_real_and_fake_latents(x) - real_pred = self.discriminator(real_w) - fake_pred = self.discriminator(fake_w) - loss = self.discriminator_loss(real_pred, fake_pred, loss_dict) - loss_dict['discriminator_loss'] = float(loss) - - self.discriminator_optimizer.zero_grad() - loss.backward() - self.discriminator_optimizer.step() - - # r1 regularization - d_regularize = self.global_step % self.opts.d_reg_every == 0 - if d_regularize: - real_w = real_w.detach() - real_w.requires_grad = True - real_pred = self.discriminator(real_w) - r1_loss = self.discriminator_r1_loss(real_pred, real_w) - - 
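-            # Lazy R1 regularization (StyleGAN2-style): the penalty runs only
-            # every d_reg_every steps, so it is scaled by d_reg_every below to
-            # keep its effective weight constant; the "+ 0 * real_pred[0]"
-            # term keeps real_pred in the graph so backward() has a path.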
self.discriminator.zero_grad() - r1_final_loss = self.opts.r1 / 2 * r1_loss * self.opts.d_reg_every + 0 * real_pred[0] - r1_final_loss.backward() - self.discriminator_optimizer.step() - loss_dict['discriminator_r1_loss'] = float(r1_final_loss) - - # Reset to previous state - self.requires_grad(self.discriminator, False) - - return loss_dict - - def validate_discriminator(self, test_batch): - with torch.no_grad(): - loss_dict = {} - x, _ = test_batch - x = x.to(self.device).float() - real_w, fake_w = self.sample_real_and_fake_latents(x) - real_pred = self.discriminator(real_w) - fake_pred = self.discriminator(fake_w) - loss = self.discriminator_loss(real_pred, fake_pred, loss_dict) - loss_dict['discriminator_loss'] = float(loss) - return loss_dict - - def sample_real_and_fake_latents(self, x): - sample_z = torch.randn(self.opts.batch_size, 512, device=self.device) - real_w = self.net.decoder.get_latent(sample_z) - fake_w = self.net.encoder(x) - if self.is_progressive_training(): # When progressive training, feed only unique w's - dims_to_discriminate = self.get_dims_to_discriminate() - fake_w = fake_w[:, dims_to_discriminate, :] - if self.opts.use_w_pool: - real_w = self.real_w_pool.query(real_w) - fake_w = self.fake_w_pool.query(fake_w) - if fake_w.ndim == 3: - fake_w = fake_w[:, 0, :] - return real_w, fake_w diff --git a/spaces/ppsingh/annotation_dev/README.md b/spaces/ppsingh/annotation_dev/README.md deleted file mode 100644 index d29b7e5fb234fdf23f92e903171097dc373ac817..0000000000000000000000000000000000000000 --- a/spaces/ppsingh/annotation_dev/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Annotation Dev -emoji: 😻 -colorFrom: purple -colorTo: purple -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/param_functions.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/param_functions.py deleted file mode 100644 index 3f6dbc959d895b46053560b2f04ab8009a080018..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/param_functions.py +++ /dev/null @@ -1,2360 +0,0 @@ -from typing import Any, Callable, Dict, List, Optional, Sequence, Union - -from fastapi import params -from fastapi._compat import Undefined -from fastapi.openapi.models import Example -from typing_extensions import Annotated, Doc, deprecated # type: ignore [attr-defined] - -_Unset: Any = Undefined - - -def Path( # noqa: N802 - default: Annotated[ - Any, - Doc( - """ - Default value if the parameter field is not set. - - This doesn't affect `Path` parameters as the value is always required. - The parameter is available only for compatibility. - """ - ), - ] = ..., - *, - default_factory: Annotated[ - Union[Callable[[], Any], None], - Doc( - """ - A callable to generate the default value. - - This doesn't affect `Path` parameters as the value is always required. - The parameter is available only for compatibility. - """ - ), - ] = _Unset, - alias: Annotated[ - Optional[str], - Doc( - """ - An alternative name for the parameter field. - - This will be used to extract the data and for the generated OpenAPI. - It is particularly useful when you can't use the name you want because it - is a Python reserved keyword or similar. - """ - ), - ] = None, - alias_priority: Annotated[ - Union[int, None], - Doc( - """ - Priority of the alias. 
This affects whether an alias generator is used. - """ - ), - ] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Annotated[ - Union[str, None], - Doc( - """ - 'Whitelist' validation step. The parameter field will be the single one - allowed by the alias or set of aliases defined. - """ - ), - ] = None, - serialization_alias: Annotated[ - Union[str, None], - Doc( - """ - 'Blacklist' validation step. The vanilla parameter field will be the - single one of the alias' or set of aliases' fields and all the other - fields will be ignored at serialization time. - """ - ), - ] = None, - title: Annotated[ - Optional[str], - Doc( - """ - Human-readable title. - """ - ), - ] = None, - description: Annotated[ - Optional[str], - Doc( - """ - Human-readable description. - """ - ), - ] = None, - gt: Annotated[ - Optional[float], - Doc( - """ - Greater than. If set, value must be greater than this. Only applicable to - numbers. - """ - ), - ] = None, - ge: Annotated[ - Optional[float], - Doc( - """ - Greater than or equal. If set, value must be greater than or equal to - this. Only applicable to numbers. - """ - ), - ] = None, - lt: Annotated[ - Optional[float], - Doc( - """ - Less than. If set, value must be less than this. Only applicable to numbers. - """ - ), - ] = None, - le: Annotated[ - Optional[float], - Doc( - """ - Less than or equal. If set, value must be less than or equal to this. - Only applicable to numbers. - """ - ), - ] = None, - min_length: Annotated[ - Optional[int], - Doc( - """ - Minimum length for strings. - """ - ), - ] = None, - max_length: Annotated[ - Optional[int], - Doc( - """ - Maximum length for strings. - """ - ), - ] = None, - pattern: Annotated[ - Optional[str], - Doc( - """ - RegEx pattern for strings. - """ - ), - ] = None, - regex: Annotated[ - Optional[str], - Doc( - """ - RegEx pattern for strings. - """ - ), - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Annotated[ - Union[str, None], - Doc( - """ - Parameter field name for discriminating the type in a tagged union. - """ - ), - ] = None, - strict: Annotated[ - Union[bool, None], - Doc( - """ - If `True`, strict validation is applied to the field. - """ - ), - ] = _Unset, - multiple_of: Annotated[ - Union[float, None], - Doc( - """ - Value must be a multiple of this. Only applicable to numbers. - """ - ), - ] = _Unset, - allow_inf_nan: Annotated[ - Union[bool, None], - Doc( - """ - Allow `inf`, `-inf`, `nan`. Only applicable to numbers. - """ - ), - ] = _Unset, - max_digits: Annotated[ - Union[int, None], - Doc( - """ - Maximum number of allow digits for strings. - """ - ), - ] = _Unset, - decimal_places: Annotated[ - Union[int, None], - Doc( - """ - Maximum number of decimal places allowed for numbers. - """ - ), - ] = _Unset, - examples: Annotated[ - Optional[List[Any]], - Doc( - """ - Example values for this field. - """ - ), - ] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." - ), - ] = _Unset, - openapi_examples: Annotated[ - Optional[Dict[str, Example]], - Doc( - """ - OpenAPI-specific examples. - - It will be added to the generated OpenAPI (e.g. visible at `/docs`). 
- - Swagger UI (that provides the `/docs` interface) has better support for the - OpenAPI-specific examples than the JSON Schema `examples`, that's the main - use case for this. - - Read more about it in the - [FastAPI docs for Declare Request Example Data](https://fastapi.tiangolo.com/tutorial/schema-extra-example/#using-the-openapi_examples-parameter). - """ - ), - ] = None, - deprecated: Annotated[ - Optional[bool], - Doc( - """ - Mark this parameter field as deprecated. - - It will affect the generated OpenAPI (e.g. visible at `/docs`). - """ - ), - ] = None, - include_in_schema: Annotated[ - bool, - Doc( - """ - To include (or not) this parameter field in the generated OpenAPI. - You probably don't need it, but it's available. - - This affects the generated OpenAPI (e.g. visible at `/docs`). - """ - ), - ] = True, - json_schema_extra: Annotated[ - Union[Dict[str, Any], None], - Doc( - """ - Any additional JSON schema data. - """ - ), - ] = None, - **extra: Annotated[ - Any, - Doc( - """ - Include extra fields used by the JSON Schema. - """ - ), - deprecated( - """ - The `extra` kwargs is deprecated. Use `json_schema_extra` instead. - """ - ), - ], -) -> Any: - """ - Declare a path parameter for a *path operation*. - - Read more about it in the - [FastAPI docs for Path Parameters and Numeric Validations](https://fastapi.tiangolo.com/tutorial/path-params-numeric-validations/). - - ```python - from typing import Annotated - - from fastapi import FastAPI, Path - - app = FastAPI() - - - @app.get("/items/{item_id}") - async def read_items( - item_id: Annotated[int, Path(title="The ID of the item to get")], - ): - return {"item_id": item_id} - ``` - """ - return params.Path( - default=default, - default_factory=default_factory, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - openapi_examples=openapi_examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Query( # noqa: N802 - default: Annotated[ - Any, - Doc( - """ - Default value if the parameter field is not set. - """ - ), - ] = Undefined, - *, - default_factory: Annotated[ - Union[Callable[[], Any], None], - Doc( - """ - A callable to generate the default value. - - This doesn't affect `Path` parameters as the value is always required. - The parameter is available only for compatibility. - """ - ), - ] = _Unset, - alias: Annotated[ - Optional[str], - Doc( - """ - An alternative name for the parameter field. - - This will be used to extract the data and for the generated OpenAPI. - It is particularly useful when you can't use the name you want because it - is a Python reserved keyword or similar. - """ - ), - ] = None, - alias_priority: Annotated[ - Union[int, None], - Doc( - """ - Priority of the alias. This affects whether an alias generator is used. - """ - ), - ] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Annotated[ - Union[str, None], - Doc( - """ - 'Whitelist' validation step. 
The parameter field will be the single one - allowed by the alias or set of aliases defined. - """ - ), - ] = None, - serialization_alias: Annotated[ - Union[str, None], - Doc( - """ - 'Blacklist' validation step. The vanilla parameter field will be the - single one of the alias' or set of aliases' fields and all the other - fields will be ignored at serialization time. - """ - ), - ] = None, - title: Annotated[ - Optional[str], - Doc( - """ - Human-readable title. - """ - ), - ] = None, - description: Annotated[ - Optional[str], - Doc( - """ - Human-readable description. - """ - ), - ] = None, - gt: Annotated[ - Optional[float], - Doc( - """ - Greater than. If set, value must be greater than this. Only applicable to - numbers. - """ - ), - ] = None, - ge: Annotated[ - Optional[float], - Doc( - """ - Greater than or equal. If set, value must be greater than or equal to - this. Only applicable to numbers. - """ - ), - ] = None, - lt: Annotated[ - Optional[float], - Doc( - """ - Less than. If set, value must be less than this. Only applicable to numbers. - """ - ), - ] = None, - le: Annotated[ - Optional[float], - Doc( - """ - Less than or equal. If set, value must be less than or equal to this. - Only applicable to numbers. - """ - ), - ] = None, - min_length: Annotated[ - Optional[int], - Doc( - """ - Minimum length for strings. - """ - ), - ] = None, - max_length: Annotated[ - Optional[int], - Doc( - """ - Maximum length for strings. - """ - ), - ] = None, - pattern: Annotated[ - Optional[str], - Doc( - """ - RegEx pattern for strings. - """ - ), - ] = None, - regex: Annotated[ - Optional[str], - Doc( - """ - RegEx pattern for strings. - """ - ), - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Annotated[ - Union[str, None], - Doc( - """ - Parameter field name for discriminating the type in a tagged union. - """ - ), - ] = None, - strict: Annotated[ - Union[bool, None], - Doc( - """ - If `True`, strict validation is applied to the field. - """ - ), - ] = _Unset, - multiple_of: Annotated[ - Union[float, None], - Doc( - """ - Value must be a multiple of this. Only applicable to numbers. - """ - ), - ] = _Unset, - allow_inf_nan: Annotated[ - Union[bool, None], - Doc( - """ - Allow `inf`, `-inf`, `nan`. Only applicable to numbers. - """ - ), - ] = _Unset, - max_digits: Annotated[ - Union[int, None], - Doc( - """ - Maximum number of allow digits for strings. - """ - ), - ] = _Unset, - decimal_places: Annotated[ - Union[int, None], - Doc( - """ - Maximum number of decimal places allowed for numbers. - """ - ), - ] = _Unset, - examples: Annotated[ - Optional[List[Any]], - Doc( - """ - Example values for this field. - """ - ), - ] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." - ), - ] = _Unset, - openapi_examples: Annotated[ - Optional[Dict[str, Example]], - Doc( - """ - OpenAPI-specific examples. - - It will be added to the generated OpenAPI (e.g. visible at `/docs`). - - Swagger UI (that provides the `/docs` interface) has better support for the - OpenAPI-specific examples than the JSON Schema `examples`, that's the main - use case for this. - - Read more about it in the - [FastAPI docs for Declare Request Example Data](https://fastapi.tiangolo.com/tutorial/schema-extra-example/#using-the-openapi_examples-parameter). 
- """ - ), - ] = None, - deprecated: Annotated[ - Optional[bool], - Doc( - """ - Mark this parameter field as deprecated. - - It will affect the generated OpenAPI (e.g. visible at `/docs`). - """ - ), - ] = None, - include_in_schema: Annotated[ - bool, - Doc( - """ - To include (or not) this parameter field in the generated OpenAPI. - You probably don't need it, but it's available. - - This affects the generated OpenAPI (e.g. visible at `/docs`). - """ - ), - ] = True, - json_schema_extra: Annotated[ - Union[Dict[str, Any], None], - Doc( - """ - Any additional JSON schema data. - """ - ), - ] = None, - **extra: Annotated[ - Any, - Doc( - """ - Include extra fields used by the JSON Schema. - """ - ), - deprecated( - """ - The `extra` kwargs is deprecated. Use `json_schema_extra` instead. - """ - ), - ], -) -> Any: - return params.Query( - default=default, - default_factory=default_factory, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - openapi_examples=openapi_examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Header( # noqa: N802 - default: Annotated[ - Any, - Doc( - """ - Default value if the parameter field is not set. - """ - ), - ] = Undefined, - *, - default_factory: Annotated[ - Union[Callable[[], Any], None], - Doc( - """ - A callable to generate the default value. - - This doesn't affect `Path` parameters as the value is always required. - The parameter is available only for compatibility. - """ - ), - ] = _Unset, - alias: Annotated[ - Optional[str], - Doc( - """ - An alternative name for the parameter field. - - This will be used to extract the data and for the generated OpenAPI. - It is particularly useful when you can't use the name you want because it - is a Python reserved keyword or similar. - """ - ), - ] = None, - alias_priority: Annotated[ - Union[int, None], - Doc( - """ - Priority of the alias. This affects whether an alias generator is used. - """ - ), - ] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Annotated[ - Union[str, None], - Doc( - """ - 'Whitelist' validation step. The parameter field will be the single one - allowed by the alias or set of aliases defined. - """ - ), - ] = None, - serialization_alias: Annotated[ - Union[str, None], - Doc( - """ - 'Blacklist' validation step. The vanilla parameter field will be the - single one of the alias' or set of aliases' fields and all the other - fields will be ignored at serialization time. - """ - ), - ] = None, - convert_underscores: Annotated[ - bool, - Doc( - """ - Automatically convert underscores to hyphens in the parameter field name. - - Read more about it in the - [FastAPI docs for Header Parameters](https://fastapi.tiangolo.com/tutorial/header-params/#automatic-conversion) - """ - ), - ] = True, - title: Annotated[ - Optional[str], - Doc( - """ - Human-readable title. - """ - ), - ] = None, - description: Annotated[ - Optional[str], - Doc( - """ - Human-readable description. 
- """ - ), - ] = None, - gt: Annotated[ - Optional[float], - Doc( - """ - Greater than. If set, value must be greater than this. Only applicable to - numbers. - """ - ), - ] = None, - ge: Annotated[ - Optional[float], - Doc( - """ - Greater than or equal. If set, value must be greater than or equal to - this. Only applicable to numbers. - """ - ), - ] = None, - lt: Annotated[ - Optional[float], - Doc( - """ - Less than. If set, value must be less than this. Only applicable to numbers. - """ - ), - ] = None, - le: Annotated[ - Optional[float], - Doc( - """ - Less than or equal. If set, value must be less than or equal to this. - Only applicable to numbers. - """ - ), - ] = None, - min_length: Annotated[ - Optional[int], - Doc( - """ - Minimum length for strings. - """ - ), - ] = None, - max_length: Annotated[ - Optional[int], - Doc( - """ - Maximum length for strings. - """ - ), - ] = None, - pattern: Annotated[ - Optional[str], - Doc( - """ - RegEx pattern for strings. - """ - ), - ] = None, - regex: Annotated[ - Optional[str], - Doc( - """ - RegEx pattern for strings. - """ - ), - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Annotated[ - Union[str, None], - Doc( - """ - Parameter field name for discriminating the type in a tagged union. - """ - ), - ] = None, - strict: Annotated[ - Union[bool, None], - Doc( - """ - If `True`, strict validation is applied to the field. - """ - ), - ] = _Unset, - multiple_of: Annotated[ - Union[float, None], - Doc( - """ - Value must be a multiple of this. Only applicable to numbers. - """ - ), - ] = _Unset, - allow_inf_nan: Annotated[ - Union[bool, None], - Doc( - """ - Allow `inf`, `-inf`, `nan`. Only applicable to numbers. - """ - ), - ] = _Unset, - max_digits: Annotated[ - Union[int, None], - Doc( - """ - Maximum number of allow digits for strings. - """ - ), - ] = _Unset, - decimal_places: Annotated[ - Union[int, None], - Doc( - """ - Maximum number of decimal places allowed for numbers. - """ - ), - ] = _Unset, - examples: Annotated[ - Optional[List[Any]], - Doc( - """ - Example values for this field. - """ - ), - ] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." - ), - ] = _Unset, - openapi_examples: Annotated[ - Optional[Dict[str, Example]], - Doc( - """ - OpenAPI-specific examples. - - It will be added to the generated OpenAPI (e.g. visible at `/docs`). - - Swagger UI (that provides the `/docs` interface) has better support for the - OpenAPI-specific examples than the JSON Schema `examples`, that's the main - use case for this. - - Read more about it in the - [FastAPI docs for Declare Request Example Data](https://fastapi.tiangolo.com/tutorial/schema-extra-example/#using-the-openapi_examples-parameter). - """ - ), - ] = None, - deprecated: Annotated[ - Optional[bool], - Doc( - """ - Mark this parameter field as deprecated. - - It will affect the generated OpenAPI (e.g. visible at `/docs`). - """ - ), - ] = None, - include_in_schema: Annotated[ - bool, - Doc( - """ - To include (or not) this parameter field in the generated OpenAPI. - You probably don't need it, but it's available. - - This affects the generated OpenAPI (e.g. visible at `/docs`). - """ - ), - ] = True, - json_schema_extra: Annotated[ - Union[Dict[str, Any], None], - Doc( - """ - Any additional JSON schema data. 
- """ - ), - ] = None, - **extra: Annotated[ - Any, - Doc( - """ - Include extra fields used by the JSON Schema. - """ - ), - deprecated( - """ - The `extra` kwargs is deprecated. Use `json_schema_extra` instead. - """ - ), - ], -) -> Any: - return params.Header( - default=default, - default_factory=default_factory, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - convert_underscores=convert_underscores, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - openapi_examples=openapi_examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Cookie( # noqa: N802 - default: Annotated[ - Any, - Doc( - """ - Default value if the parameter field is not set. - """ - ), - ] = Undefined, - *, - default_factory: Annotated[ - Union[Callable[[], Any], None], - Doc( - """ - A callable to generate the default value. - - This doesn't affect `Path` parameters as the value is always required. - The parameter is available only for compatibility. - """ - ), - ] = _Unset, - alias: Annotated[ - Optional[str], - Doc( - """ - An alternative name for the parameter field. - - This will be used to extract the data and for the generated OpenAPI. - It is particularly useful when you can't use the name you want because it - is a Python reserved keyword or similar. - """ - ), - ] = None, - alias_priority: Annotated[ - Union[int, None], - Doc( - """ - Priority of the alias. This affects whether an alias generator is used. - """ - ), - ] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Annotated[ - Union[str, None], - Doc( - """ - 'Whitelist' validation step. The parameter field will be the single one - allowed by the alias or set of aliases defined. - """ - ), - ] = None, - serialization_alias: Annotated[ - Union[str, None], - Doc( - """ - 'Blacklist' validation step. The vanilla parameter field will be the - single one of the alias' or set of aliases' fields and all the other - fields will be ignored at serialization time. - """ - ), - ] = None, - title: Annotated[ - Optional[str], - Doc( - """ - Human-readable title. - """ - ), - ] = None, - description: Annotated[ - Optional[str], - Doc( - """ - Human-readable description. - """ - ), - ] = None, - gt: Annotated[ - Optional[float], - Doc( - """ - Greater than. If set, value must be greater than this. Only applicable to - numbers. - """ - ), - ] = None, - ge: Annotated[ - Optional[float], - Doc( - """ - Greater than or equal. If set, value must be greater than or equal to - this. Only applicable to numbers. - """ - ), - ] = None, - lt: Annotated[ - Optional[float], - Doc( - """ - Less than. If set, value must be less than this. Only applicable to numbers. - """ - ), - ] = None, - le: Annotated[ - Optional[float], - Doc( - """ - Less than or equal. If set, value must be less than or equal to this. - Only applicable to numbers. - """ - ), - ] = None, - min_length: Annotated[ - Optional[int], - Doc( - """ - Minimum length for strings. 
- """ - ), - ] = None, - max_length: Annotated[ - Optional[int], - Doc( - """ - Maximum length for strings. - """ - ), - ] = None, - pattern: Annotated[ - Optional[str], - Doc( - """ - RegEx pattern for strings. - """ - ), - ] = None, - regex: Annotated[ - Optional[str], - Doc( - """ - RegEx pattern for strings. - """ - ), - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Annotated[ - Union[str, None], - Doc( - """ - Parameter field name for discriminating the type in a tagged union. - """ - ), - ] = None, - strict: Annotated[ - Union[bool, None], - Doc( - """ - If `True`, strict validation is applied to the field. - """ - ), - ] = _Unset, - multiple_of: Annotated[ - Union[float, None], - Doc( - """ - Value must be a multiple of this. Only applicable to numbers. - """ - ), - ] = _Unset, - allow_inf_nan: Annotated[ - Union[bool, None], - Doc( - """ - Allow `inf`, `-inf`, `nan`. Only applicable to numbers. - """ - ), - ] = _Unset, - max_digits: Annotated[ - Union[int, None], - Doc( - """ - Maximum number of allow digits for strings. - """ - ), - ] = _Unset, - decimal_places: Annotated[ - Union[int, None], - Doc( - """ - Maximum number of decimal places allowed for numbers. - """ - ), - ] = _Unset, - examples: Annotated[ - Optional[List[Any]], - Doc( - """ - Example values for this field. - """ - ), - ] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." - ), - ] = _Unset, - openapi_examples: Annotated[ - Optional[Dict[str, Example]], - Doc( - """ - OpenAPI-specific examples. - - It will be added to the generated OpenAPI (e.g. visible at `/docs`). - - Swagger UI (that provides the `/docs` interface) has better support for the - OpenAPI-specific examples than the JSON Schema `examples`, that's the main - use case for this. - - Read more about it in the - [FastAPI docs for Declare Request Example Data](https://fastapi.tiangolo.com/tutorial/schema-extra-example/#using-the-openapi_examples-parameter). - """ - ), - ] = None, - deprecated: Annotated[ - Optional[bool], - Doc( - """ - Mark this parameter field as deprecated. - - It will affect the generated OpenAPI (e.g. visible at `/docs`). - """ - ), - ] = None, - include_in_schema: Annotated[ - bool, - Doc( - """ - To include (or not) this parameter field in the generated OpenAPI. - You probably don't need it, but it's available. - - This affects the generated OpenAPI (e.g. visible at `/docs`). - """ - ), - ] = True, - json_schema_extra: Annotated[ - Union[Dict[str, Any], None], - Doc( - """ - Any additional JSON schema data. - """ - ), - ] = None, - **extra: Annotated[ - Any, - Doc( - """ - Include extra fields used by the JSON Schema. - """ - ), - deprecated( - """ - The `extra` kwargs is deprecated. Use `json_schema_extra` instead. 
- """ - ), - ], -) -> Any: - return params.Cookie( - default=default, - default_factory=default_factory, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - openapi_examples=openapi_examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Body( # noqa: N802 - default: Annotated[ - Any, - Doc( - """ - Default value if the parameter field is not set. - """ - ), - ] = Undefined, - *, - default_factory: Annotated[ - Union[Callable[[], Any], None], - Doc( - """ - A callable to generate the default value. - - This doesn't affect `Path` parameters as the value is always required. - The parameter is available only for compatibility. - """ - ), - ] = _Unset, - embed: Annotated[ - bool, - Doc( - """ - When `embed` is `True`, the parameter will be expected in a JSON body as a - key instead of being the JSON body itself. - - This happens automatically when more than one `Body` parameter is declared. - - Read more about it in the - [FastAPI docs for Body - Multiple Parameters](https://fastapi.tiangolo.com/tutorial/body-multiple-params/#embed-a-single-body-parameter). - """ - ), - ] = False, - media_type: Annotated[ - str, - Doc( - """ - The media type of this parameter field. Changing it would affect the - generated OpenAPI, but currently it doesn't affect the parsing of the data. - """ - ), - ] = "application/json", - alias: Annotated[ - Optional[str], - Doc( - """ - An alternative name for the parameter field. - - This will be used to extract the data and for the generated OpenAPI. - It is particularly useful when you can't use the name you want because it - is a Python reserved keyword or similar. - """ - ), - ] = None, - alias_priority: Annotated[ - Union[int, None], - Doc( - """ - Priority of the alias. This affects whether an alias generator is used. - """ - ), - ] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Annotated[ - Union[str, None], - Doc( - """ - 'Whitelist' validation step. The parameter field will be the single one - allowed by the alias or set of aliases defined. - """ - ), - ] = None, - serialization_alias: Annotated[ - Union[str, None], - Doc( - """ - 'Blacklist' validation step. The vanilla parameter field will be the - single one of the alias' or set of aliases' fields and all the other - fields will be ignored at serialization time. - """ - ), - ] = None, - title: Annotated[ - Optional[str], - Doc( - """ - Human-readable title. - """ - ), - ] = None, - description: Annotated[ - Optional[str], - Doc( - """ - Human-readable description. - """ - ), - ] = None, - gt: Annotated[ - Optional[float], - Doc( - """ - Greater than. If set, value must be greater than this. Only applicable to - numbers. - """ - ), - ] = None, - ge: Annotated[ - Optional[float], - Doc( - """ - Greater than or equal. If set, value must be greater than or equal to - this. Only applicable to numbers. - """ - ), - ] = None, - lt: Annotated[ - Optional[float], - Doc( - """ - Less than. 
If set, value must be less than this. Only applicable to numbers.
-            """
-        ),
-    ] = None,
-    le: Annotated[
-        Optional[float],
-        Doc(
-            """
-            Less than or equal. If set, value must be less than or equal to this.
-            Only applicable to numbers.
-            """
-        ),
-    ] = None,
-    min_length: Annotated[
-        Optional[int],
-        Doc(
-            """
-            Minimum length for strings.
-            """
-        ),
-    ] = None,
-    max_length: Annotated[
-        Optional[int],
-        Doc(
-            """
-            Maximum length for strings.
-            """
-        ),
-    ] = None,
-    pattern: Annotated[
-        Optional[str],
-        Doc(
-            """
-            RegEx pattern for strings.
-            """
-        ),
-    ] = None,
-    regex: Annotated[
-        Optional[str],
-        Doc(
-            """
-            RegEx pattern for strings.
-            """
-        ),
-        deprecated(
-            "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead."
-        ),
-    ] = None,
-    discriminator: Annotated[
-        Union[str, None],
-        Doc(
-            """
-            Parameter field name for discriminating the type in a tagged union.
-            """
-        ),
-    ] = None,
-    strict: Annotated[
-        Union[bool, None],
-        Doc(
-            """
-            If `True`, strict validation is applied to the field.
-            """
-        ),
-    ] = _Unset,
-    multiple_of: Annotated[
-        Union[float, None],
-        Doc(
-            """
-            Value must be a multiple of this. Only applicable to numbers.
-            """
-        ),
-    ] = _Unset,
-    allow_inf_nan: Annotated[
-        Union[bool, None],
-        Doc(
-            """
-            Allow `inf`, `-inf`, `nan`. Only applicable to numbers.
-            """
-        ),
-    ] = _Unset,
-    max_digits: Annotated[
-        Union[int, None],
-        Doc(
-            """
-            Maximum number of digits allowed for numbers.
-            """
-        ),
-    ] = _Unset,
-    decimal_places: Annotated[
-        Union[int, None],
-        Doc(
-            """
-            Maximum number of decimal places allowed for numbers.
-            """
-        ),
-    ] = _Unset,
-    examples: Annotated[
-        Optional[List[Any]],
-        Doc(
-            """
-            Example values for this field.
-            """
-        ),
-    ] = None,
-    example: Annotated[
-        Optional[Any],
-        deprecated(
-            "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, "
-            "although still supported. Use examples instead."
-        ),
-    ] = _Unset,
-    openapi_examples: Annotated[
-        Optional[Dict[str, Example]],
-        Doc(
-            """
-            OpenAPI-specific examples.
-
-            It will be added to the generated OpenAPI (e.g. visible at `/docs`).
-
-            Swagger UI (that provides the `/docs` interface) has better support for the
-            OpenAPI-specific examples than the JSON Schema `examples`, that's the main
-            use case for this.
-
-            Read more about it in the
-            [FastAPI docs for Declare Request Example Data](https://fastapi.tiangolo.com/tutorial/schema-extra-example/#using-the-openapi_examples-parameter).
-            """
-        ),
-    ] = None,
-    deprecated: Annotated[
-        Optional[bool],
-        Doc(
-            """
-            Mark this parameter field as deprecated.
-
-            It will affect the generated OpenAPI (e.g. visible at `/docs`).
-            """
-        ),
-    ] = None,
-    include_in_schema: Annotated[
-        bool,
-        Doc(
-            """
-            To include (or not) this parameter field in the generated OpenAPI.
-            You probably don't need it, but it's available.
-
-            This affects the generated OpenAPI (e.g. visible at `/docs`).
-            """
-        ),
-    ] = True,
-    json_schema_extra: Annotated[
-        Union[Dict[str, Any], None],
-        Doc(
-            """
-            Any additional JSON schema data.
-            """
-        ),
-    ] = None,
-    **extra: Annotated[
-        Any,
-        Doc(
-            """
-            Include extra fields used by the JSON Schema.
-            """
-        ),
-        deprecated(
-            """
-            The `extra` kwargs is deprecated. Use `json_schema_extra` instead.
- """ - ), - ], -) -> Any: - return params.Body( - default=default, - default_factory=default_factory, - embed=embed, - media_type=media_type, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - openapi_examples=openapi_examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Form( # noqa: N802 - default: Annotated[ - Any, - Doc( - """ - Default value if the parameter field is not set. - """ - ), - ] = Undefined, - *, - default_factory: Annotated[ - Union[Callable[[], Any], None], - Doc( - """ - A callable to generate the default value. - - This doesn't affect `Path` parameters as the value is always required. - The parameter is available only for compatibility. - """ - ), - ] = _Unset, - media_type: Annotated[ - str, - Doc( - """ - The media type of this parameter field. Changing it would affect the - generated OpenAPI, but currently it doesn't affect the parsing of the data. - """ - ), - ] = "application/x-www-form-urlencoded", - alias: Annotated[ - Optional[str], - Doc( - """ - An alternative name for the parameter field. - - This will be used to extract the data and for the generated OpenAPI. - It is particularly useful when you can't use the name you want because it - is a Python reserved keyword or similar. - """ - ), - ] = None, - alias_priority: Annotated[ - Union[int, None], - Doc( - """ - Priority of the alias. This affects whether an alias generator is used. - """ - ), - ] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Annotated[ - Union[str, None], - Doc( - """ - 'Whitelist' validation step. The parameter field will be the single one - allowed by the alias or set of aliases defined. - """ - ), - ] = None, - serialization_alias: Annotated[ - Union[str, None], - Doc( - """ - 'Blacklist' validation step. The vanilla parameter field will be the - single one of the alias' or set of aliases' fields and all the other - fields will be ignored at serialization time. - """ - ), - ] = None, - title: Annotated[ - Optional[str], - Doc( - """ - Human-readable title. - """ - ), - ] = None, - description: Annotated[ - Optional[str], - Doc( - """ - Human-readable description. - """ - ), - ] = None, - gt: Annotated[ - Optional[float], - Doc( - """ - Greater than. If set, value must be greater than this. Only applicable to - numbers. - """ - ), - ] = None, - ge: Annotated[ - Optional[float], - Doc( - """ - Greater than or equal. If set, value must be greater than or equal to - this. Only applicable to numbers. - """ - ), - ] = None, - lt: Annotated[ - Optional[float], - Doc( - """ - Less than. If set, value must be less than this. Only applicable to numbers. - """ - ), - ] = None, - le: Annotated[ - Optional[float], - Doc( - """ - Less than or equal. If set, value must be less than or equal to this. - Only applicable to numbers. - """ - ), - ] = None, - min_length: Annotated[ - Optional[int], - Doc( - """ - Minimum length for strings. 
- """ - ), - ] = None, - max_length: Annotated[ - Optional[int], - Doc( - """ - Maximum length for strings. - """ - ), - ] = None, - pattern: Annotated[ - Optional[str], - Doc( - """ - RegEx pattern for strings. - """ - ), - ] = None, - regex: Annotated[ - Optional[str], - Doc( - """ - RegEx pattern for strings. - """ - ), - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Annotated[ - Union[str, None], - Doc( - """ - Parameter field name for discriminating the type in a tagged union. - """ - ), - ] = None, - strict: Annotated[ - Union[bool, None], - Doc( - """ - If `True`, strict validation is applied to the field. - """ - ), - ] = _Unset, - multiple_of: Annotated[ - Union[float, None], - Doc( - """ - Value must be a multiple of this. Only applicable to numbers. - """ - ), - ] = _Unset, - allow_inf_nan: Annotated[ - Union[bool, None], - Doc( - """ - Allow `inf`, `-inf`, `nan`. Only applicable to numbers. - """ - ), - ] = _Unset, - max_digits: Annotated[ - Union[int, None], - Doc( - """ - Maximum number of allow digits for strings. - """ - ), - ] = _Unset, - decimal_places: Annotated[ - Union[int, None], - Doc( - """ - Maximum number of decimal places allowed for numbers. - """ - ), - ] = _Unset, - examples: Annotated[ - Optional[List[Any]], - Doc( - """ - Example values for this field. - """ - ), - ] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." - ), - ] = _Unset, - openapi_examples: Annotated[ - Optional[Dict[str, Example]], - Doc( - """ - OpenAPI-specific examples. - - It will be added to the generated OpenAPI (e.g. visible at `/docs`). - - Swagger UI (that provides the `/docs` interface) has better support for the - OpenAPI-specific examples than the JSON Schema `examples`, that's the main - use case for this. - - Read more about it in the - [FastAPI docs for Declare Request Example Data](https://fastapi.tiangolo.com/tutorial/schema-extra-example/#using-the-openapi_examples-parameter). - """ - ), - ] = None, - deprecated: Annotated[ - Optional[bool], - Doc( - """ - Mark this parameter field as deprecated. - - It will affect the generated OpenAPI (e.g. visible at `/docs`). - """ - ), - ] = None, - include_in_schema: Annotated[ - bool, - Doc( - """ - To include (or not) this parameter field in the generated OpenAPI. - You probably don't need it, but it's available. - - This affects the generated OpenAPI (e.g. visible at `/docs`). - """ - ), - ] = True, - json_schema_extra: Annotated[ - Union[Dict[str, Any], None], - Doc( - """ - Any additional JSON schema data. - """ - ), - ] = None, - **extra: Annotated[ - Any, - Doc( - """ - Include extra fields used by the JSON Schema. - """ - ), - deprecated( - """ - The `extra` kwargs is deprecated. Use `json_schema_extra` instead. 
- """ - ), - ], -) -> Any: - return params.Form( - default=default, - default_factory=default_factory, - media_type=media_type, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - openapi_examples=openapi_examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def File( # noqa: N802 - default: Annotated[ - Any, - Doc( - """ - Default value if the parameter field is not set. - """ - ), - ] = Undefined, - *, - default_factory: Annotated[ - Union[Callable[[], Any], None], - Doc( - """ - A callable to generate the default value. - - This doesn't affect `Path` parameters as the value is always required. - The parameter is available only for compatibility. - """ - ), - ] = _Unset, - media_type: Annotated[ - str, - Doc( - """ - The media type of this parameter field. Changing it would affect the - generated OpenAPI, but currently it doesn't affect the parsing of the data. - """ - ), - ] = "multipart/form-data", - alias: Annotated[ - Optional[str], - Doc( - """ - An alternative name for the parameter field. - - This will be used to extract the data and for the generated OpenAPI. - It is particularly useful when you can't use the name you want because it - is a Python reserved keyword or similar. - """ - ), - ] = None, - alias_priority: Annotated[ - Union[int, None], - Doc( - """ - Priority of the alias. This affects whether an alias generator is used. - """ - ), - ] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Annotated[ - Union[str, None], - Doc( - """ - 'Whitelist' validation step. The parameter field will be the single one - allowed by the alias or set of aliases defined. - """ - ), - ] = None, - serialization_alias: Annotated[ - Union[str, None], - Doc( - """ - 'Blacklist' validation step. The vanilla parameter field will be the - single one of the alias' or set of aliases' fields and all the other - fields will be ignored at serialization time. - """ - ), - ] = None, - title: Annotated[ - Optional[str], - Doc( - """ - Human-readable title. - """ - ), - ] = None, - description: Annotated[ - Optional[str], - Doc( - """ - Human-readable description. - """ - ), - ] = None, - gt: Annotated[ - Optional[float], - Doc( - """ - Greater than. If set, value must be greater than this. Only applicable to - numbers. - """ - ), - ] = None, - ge: Annotated[ - Optional[float], - Doc( - """ - Greater than or equal. If set, value must be greater than or equal to - this. Only applicable to numbers. - """ - ), - ] = None, - lt: Annotated[ - Optional[float], - Doc( - """ - Less than. If set, value must be less than this. Only applicable to numbers. - """ - ), - ] = None, - le: Annotated[ - Optional[float], - Doc( - """ - Less than or equal. If set, value must be less than or equal to this. - Only applicable to numbers. - """ - ), - ] = None, - min_length: Annotated[ - Optional[int], - Doc( - """ - Minimum length for strings. 
- """ - ), - ] = None, - max_length: Annotated[ - Optional[int], - Doc( - """ - Maximum length for strings. - """ - ), - ] = None, - pattern: Annotated[ - Optional[str], - Doc( - """ - RegEx pattern for strings. - """ - ), - ] = None, - regex: Annotated[ - Optional[str], - Doc( - """ - RegEx pattern for strings. - """ - ), - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Annotated[ - Union[str, None], - Doc( - """ - Parameter field name for discriminating the type in a tagged union. - """ - ), - ] = None, - strict: Annotated[ - Union[bool, None], - Doc( - """ - If `True`, strict validation is applied to the field. - """ - ), - ] = _Unset, - multiple_of: Annotated[ - Union[float, None], - Doc( - """ - Value must be a multiple of this. Only applicable to numbers. - """ - ), - ] = _Unset, - allow_inf_nan: Annotated[ - Union[bool, None], - Doc( - """ - Allow `inf`, `-inf`, `nan`. Only applicable to numbers. - """ - ), - ] = _Unset, - max_digits: Annotated[ - Union[int, None], - Doc( - """ - Maximum number of allow digits for strings. - """ - ), - ] = _Unset, - decimal_places: Annotated[ - Union[int, None], - Doc( - """ - Maximum number of decimal places allowed for numbers. - """ - ), - ] = _Unset, - examples: Annotated[ - Optional[List[Any]], - Doc( - """ - Example values for this field. - """ - ), - ] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." - ), - ] = _Unset, - openapi_examples: Annotated[ - Optional[Dict[str, Example]], - Doc( - """ - OpenAPI-specific examples. - - It will be added to the generated OpenAPI (e.g. visible at `/docs`). - - Swagger UI (that provides the `/docs` interface) has better support for the - OpenAPI-specific examples than the JSON Schema `examples`, that's the main - use case for this. - - Read more about it in the - [FastAPI docs for Declare Request Example Data](https://fastapi.tiangolo.com/tutorial/schema-extra-example/#using-the-openapi_examples-parameter). - """ - ), - ] = None, - deprecated: Annotated[ - Optional[bool], - Doc( - """ - Mark this parameter field as deprecated. - - It will affect the generated OpenAPI (e.g. visible at `/docs`). - """ - ), - ] = None, - include_in_schema: Annotated[ - bool, - Doc( - """ - To include (or not) this parameter field in the generated OpenAPI. - You probably don't need it, but it's available. - - This affects the generated OpenAPI (e.g. visible at `/docs`). - """ - ), - ] = True, - json_schema_extra: Annotated[ - Union[Dict[str, Any], None], - Doc( - """ - Any additional JSON schema data. - """ - ), - ] = None, - **extra: Annotated[ - Any, - Doc( - """ - Include extra fields used by the JSON Schema. - """ - ), - deprecated( - """ - The `extra` kwargs is deprecated. Use `json_schema_extra` instead. 
- """ - ), - ], -) -> Any: - return params.File( - default=default, - default_factory=default_factory, - media_type=media_type, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - openapi_examples=openapi_examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Depends( # noqa: N802 - dependency: Annotated[ - Optional[Callable[..., Any]], - Doc( - """ - A "dependable" callable (like a function). - - Don't call it directly, FastAPI will call it for you, just pass the object - directly. - """ - ), - ] = None, - *, - use_cache: Annotated[ - bool, - Doc( - """ - By default, after a dependency is called the first time in a request, if - the dependency is declared again for the rest of the request (for example - if the dependency is needed by several dependencies), the value will be - re-used for the rest of the request. - - Set `use_cache` to `False` to disable this behavior and ensure the - dependency is called again (if declared more than once) in the same request. - """ - ), - ] = True, -) -> Any: - """ - Declare a FastAPI dependency. - - It takes a single "dependable" callable (like a function). - - Don't call it directly, FastAPI will call it for you. - - Read more about it in the - [FastAPI docs for Dependencies](https://fastapi.tiangolo.com/tutorial/dependencies/). - - **Example** - - ```python - from typing import Annotated - - from fastapi import Depends, FastAPI - - app = FastAPI() - - - async def common_parameters(q: str | None = None, skip: int = 0, limit: int = 100): - return {"q": q, "skip": skip, "limit": limit} - - - @app.get("/items/") - async def read_items(commons: Annotated[dict, Depends(common_parameters)]): - return commons - ``` - """ - return params.Depends(dependency=dependency, use_cache=use_cache) - - -def Security( # noqa: N802 - dependency: Annotated[ - Optional[Callable[..., Any]], - Doc( - """ - A "dependable" callable (like a function). - - Don't call it directly, FastAPI will call it for you, just pass the object - directly. - """ - ), - ] = None, - *, - scopes: Annotated[ - Optional[Sequence[str]], - Doc( - """ - OAuth2 scopes required for the *path operation* that uses this Security - dependency. - - The term "scope" comes from the OAuth2 specification, it seems to be - intentionaly vague and interpretable. It normally refers to permissions, - in cases to roles. - - These scopes are integrated with OpenAPI (and the API docs at `/docs`). - So they are visible in the OpenAPI specification. - ) - """ - ), - ] = None, - use_cache: Annotated[ - bool, - Doc( - """ - By default, after a dependency is called the first time in a request, if - the dependency is declared again for the rest of the request (for example - if the dependency is needed by several dependencies), the value will be - re-used for the rest of the request. - - Set `use_cache` to `False` to disable this behavior and ensure the - dependency is called again (if declared more than once) in the same request. - """ - ), - ] = True, -) -> Any: - """ - Declare a FastAPI Security dependency. 
- - The only difference with a regular dependency is that it can declare OAuth2 - scopes that will be integrated with OpenAPI and the automatic UI docs (by default - at `/docs`). - - It takes a single "dependable" callable (like a function). - - Don't call it directly, FastAPI will call it for you. - - Read more about it in the - [FastAPI docs for Security](https://fastapi.tiangolo.com/tutorial/security/) and - in the - [FastAPI docs for OAuth2 scopes](https://fastapi.tiangolo.com/advanced/security/oauth2-scopes/). - - **Example** - - ```python - from typing import Annotated - - from fastapi import Depends, FastAPI - - from .db import User - from .security import get_current_active_user - - app = FastAPI() - - @app.get("/users/me/items/") - async def read_own_items( - current_user: Annotated[User, Security(get_current_active_user, scopes=["items"])] - ): - return [{"item_id": "Foo", "owner": current_user.username}] - ``` - """ - return params.Security(dependency=dependency, scopes=scopes, use_cache=use_cache) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/intTools.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/intTools.py deleted file mode 100644 index 0ca29854aae85750bdd7d25efc25ffd59392dc8e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/intTools.py +++ /dev/null @@ -1,25 +0,0 @@ -__all__ = ["popCount", "bit_count", "bit_indices"] - - -try: - bit_count = int.bit_count -except AttributeError: - - def bit_count(v): - return bin(v).count("1") - - -"""Return number of 1 bits (population count) of the absolute value of an integer. - -See https://docs.python.org/3.10/library/stdtypes.html#int.bit_count -""" -popCount = bit_count # alias - - -def bit_indices(v): - """Return list of indices where bits are set, 0 being the index of the least significant bit. - - >>> bit_indices(0b101) - [0, 2] - """ - return [i for i, b in enumerate(bin(v)[::-1]) if b == "1"] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/chatbot/shared/utils.ts b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/chatbot/shared/utils.ts deleted file mode 100644 index 2c85e2484f28004793516653087e3f14ab0b298a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/chatbot/shared/utils.ts +++ /dev/null @@ -1,57 +0,0 @@ -import type { FileData } from "@gradio/client"; -import { uploadToHuggingFace } from "@gradio/utils"; - -export const format_chat_for_sharing = async ( - chat: [string | FileData | null, string | FileData | null][] -): Promise => { - let messages = await Promise.all( - chat.map(async (message_pair) => { - return await Promise.all( - message_pair.map(async (message, i) => { - if (message === null) return ""; - let speaker_emoji = i === 0 ? 
"😃" : "🤖"; - let html_content = ""; - - if (typeof message === "string") { - const regexPatterns = { - audio: /|!\[.*?\]\((\/file=.*?)\)/g - }; - - html_content = message; - - for (let [_, regex] of Object.entries(regexPatterns)) { - let match; - - while ((match = regex.exec(message)) !== null) { - const fileUrl = match[1] || match[2]; - const newUrl = await uploadToHuggingFace(fileUrl, "url"); - html_content = html_content.replace(fileUrl, newUrl); - } - } - } else { - if (!message?.url) return ""; - const file_url = await uploadToHuggingFace(message.url, "url"); - if (message.mime_type?.includes("audio")) { - html_content = ``; - } else if (message.mime_type?.includes("video")) { - html_content = file_url; - } else if (message.mime_type?.includes("image")) { - html_content = ``; - } - } - - return `${speaker_emoji}: ${html_content}`; - }) - ); - }) - ); - return messages - .map((message_pair) => - message_pair.join( - message_pair[0] !== "" && message_pair[1] !== "" ? "\n" : "" - ) - ) - .join("\n"); -}; diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/state_holder.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/state_holder.py deleted file mode 100644 index a0c4a95dfce2ca22336cd457b1f0d35418a60673..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/state_holder.py +++ /dev/null @@ -1,57 +0,0 @@ -from __future__ import annotations - -import threading -from collections import OrderedDict -from copy import deepcopy -from typing import TYPE_CHECKING, Any - -if TYPE_CHECKING: - from gradio.blocks import Blocks - - -class StateHolder: - def __init__(self): - self.capacity = 10000 - self.session_data = OrderedDict() - self.lock = threading.Lock() - - def set_blocks(self, blocks: Blocks): - self.blocks = blocks - self.capacity = blocks.state_session_capacity - - def __getitem__(self, session_id: str) -> SessionState: - if session_id not in self.session_data: - self.session_data[session_id] = SessionState(self.blocks) - self.update(session_id) - return self.session_data[session_id] - - def __contains__(self, session_id: str): - return session_id in self.session_data - - def update(self, session_id: str): - with self.lock: - if session_id in self.session_data: - self.session_data.move_to_end(session_id) - if len(self.session_data) > self.capacity: - self.session_data.popitem(last=False) - - -class SessionState: - def __init__(self, blocks: Blocks): - self.blocks = blocks - self._data = {} - - def __getitem__(self, key: int) -> Any: - if key not in self._data: - block = self.blocks.blocks[key] - if getattr(block, "stateful", False): - self._data[key] = deepcopy(getattr(block, "value", None)) - else: - self._data[key] = None - return self._data[key] - - def __setitem__(self, key: int, value: Any): - self._data[key] = value - - def __contains__(self, key: int): - return key in self._data diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/gh23879.f90 b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/gh23879.f90 deleted file mode 100644 index fac262d53c9d3f0f3a5ba1138594f5b694b95717..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/gh23879.f90 +++ /dev/null @@ -1,20 +0,0 @@ -module gh23879 - implicit none - private - public :: foo - - contains - - subroutine foo(a, b) - 
integer, intent(in) :: a - integer, intent(out) :: b - b = a - call bar(b) - end subroutine - - subroutine bar(x) - integer, intent(inout) :: x - x = 2*x - end subroutine - - end module gh23879 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/tests/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/copy_view/test_core_functionalities.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/copy_view/test_core_functionalities.py deleted file mode 100644 index 5c177465d2fa400ca71ab3abf34b6ab8e98578cb..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/copy_view/test_core_functionalities.py +++ /dev/null @@ -1,100 +0,0 @@ -import numpy as np -import pytest - -from pandas import DataFrame -import pandas._testing as tm -from pandas.tests.copy_view.util import get_array - - -def test_assigning_to_same_variable_removes_references(using_copy_on_write): - df = DataFrame({"a": [1, 2, 3]}) - df = df.reset_index() - if using_copy_on_write: - assert df._mgr._has_no_reference(1) - arr = get_array(df, "a") - df.iloc[0, 1] = 100 # Write into a - - assert np.shares_memory(arr, get_array(df, "a")) - - -def test_setitem_dont_track_unnecessary_references(using_copy_on_write): - df = DataFrame({"a": [1, 2, 3], "b": 1, "c": 1}) - - df["b"] = 100 - arr = get_array(df, "a") - # We split the block in setitem, if we are not careful the new blocks will - # reference each other triggering a copy - df.iloc[0, 0] = 100 - assert np.shares_memory(arr, get_array(df, "a")) - - -def test_setitem_with_view_copies(using_copy_on_write): - df = DataFrame({"a": [1, 2, 3], "b": 1, "c": 1}) - view = df[:] - expected = df.copy() - - df["b"] = 100 - arr = get_array(df, "a") - df.iloc[0, 0] = 100 # Check that we correctly track reference - if using_copy_on_write: - assert not np.shares_memory(arr, get_array(df, "a")) - tm.assert_frame_equal(view, expected) - - -def test_setitem_with_view_invalidated_does_not_copy(using_copy_on_write, request): - df = DataFrame({"a": [1, 2, 3], "b": 1, "c": 1}) - view = df[:] - - df["b"] = 100 - arr = get_array(df, "a") - view = None # noqa: F841 - df.iloc[0, 0] = 100 - if using_copy_on_write: - # Setitem split the block. Since the old block shared data with view - # all the new blocks are referencing view and each other. 
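# NOTE: standalone illustration of the Copy-on-Write behaviour these tests
# exercise; it assumes a pandas version (>= 2.0) that exposes the
# "mode.copy_on_write" option, and is not part of the original test module.
import pandas as pd

pd.set_option("mode.copy_on_write", True)

cow_df = pd.DataFrame({"a": [1, 2, 3]})
cow_view = cow_df[:]  # a view: no data is copied at this point

cow_df.iloc[0, 0] = 100  # the write triggers the copy, so the view is untouched

assert cow_view.loc[0, "a"] == 1
assert cow_df.loc[0, "a"] == 100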
When view - # goes out of scope, they don't share data with any other block, - # so we should not trigger a copy - mark = pytest.mark.xfail( - reason="blk.delete does not track references correctly" - ) - request.node.add_marker(mark) - assert np.shares_memory(arr, get_array(df, "a")) - - -def test_out_of_scope(using_copy_on_write): - def func(): - df = DataFrame({"a": [1, 2], "b": 1.5, "c": 1}) - # create some subset - result = df[["a", "b"]] - return result - - result = func() - if using_copy_on_write: - assert not result._mgr.blocks[0].refs.has_reference() - assert not result._mgr.blocks[1].refs.has_reference() - - -def test_delete(using_copy_on_write): - df = DataFrame( - np.random.default_rng(2).standard_normal((4, 3)), columns=["a", "b", "c"] - ) - del df["b"] - if using_copy_on_write: - assert not df._mgr.blocks[0].refs.has_reference() - assert not df._mgr.blocks[1].refs.has_reference() - - df = df[["a"]] - if using_copy_on_write: - assert not df._mgr.blocks[0].refs.has_reference() - - -def test_delete_reference(using_copy_on_write): - df = DataFrame( - np.random.default_rng(2).standard_normal((4, 3)), columns=["a", "b", "c"] - ) - x = df[:] - del df["b"] - if using_copy_on_write: - assert df._mgr.blocks[0].refs.has_reference() - assert df._mgr.blocks[1].refs.has_reference() - assert x._mgr.blocks[0].refs.has_reference() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/reshape/concat/test_append_common.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/reshape/concat/test_append_common.py deleted file mode 100644 index df5ca2f27c15dbb6d1b23db2793fac800927e5c7..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/reshape/concat/test_append_common.py +++ /dev/null @@ -1,751 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -from pandas import ( - Categorical, - DataFrame, - Index, - Series, -) -import pandas._testing as tm - - -@pytest.fixture( - params=list( - { - "bool": [True, False, True], - "int64": [1, 2, 3], - "float64": [1.1, np.nan, 3.3], - "category": Categorical(["X", "Y", "Z"]), - "object": ["a", "b", "c"], - "datetime64[ns]": [ - pd.Timestamp("2011-01-01"), - pd.Timestamp("2011-01-02"), - pd.Timestamp("2011-01-03"), - ], - "datetime64[ns, US/Eastern]": [ - pd.Timestamp("2011-01-01", tz="US/Eastern"), - pd.Timestamp("2011-01-02", tz="US/Eastern"), - pd.Timestamp("2011-01-03", tz="US/Eastern"), - ], - "timedelta64[ns]": [ - pd.Timedelta("1 days"), - pd.Timedelta("2 days"), - pd.Timedelta("3 days"), - ], - "period[M]": [ - pd.Period("2011-01", freq="M"), - pd.Period("2011-02", freq="M"), - pd.Period("2011-03", freq="M"), - ], - }.items() - ) -) -def item(request): - key, data = request.param - return key, data - - -@pytest.fixture -def item2(item): - return item - - -class TestConcatAppendCommon: - """ - Test common dtype coercion rules between concat and append. 
- """ - - def test_dtypes(self, item, index_or_series): - # to confirm test case covers intended dtypes - typ, vals = item - obj = index_or_series(vals) - if isinstance(obj, Index): - assert obj.dtype == typ - elif isinstance(obj, Series): - if typ.startswith("period"): - assert obj.dtype == "Period[M]" - else: - assert obj.dtype == typ - - def test_concatlike_same_dtypes(self, item): - # GH 13660 - typ1, vals1 = item - - vals2 = vals1 - vals3 = vals1 - - if typ1 == "category": - exp_data = Categorical(list(vals1) + list(vals2)) - exp_data3 = Categorical(list(vals1) + list(vals2) + list(vals3)) - else: - exp_data = vals1 + vals2 - exp_data3 = vals1 + vals2 + vals3 - - # ----- Index ----- # - - # index.append - res = Index(vals1).append(Index(vals2)) - exp = Index(exp_data) - tm.assert_index_equal(res, exp) - - # 3 elements - res = Index(vals1).append([Index(vals2), Index(vals3)]) - exp = Index(exp_data3) - tm.assert_index_equal(res, exp) - - # index.append name mismatch - i1 = Index(vals1, name="x") - i2 = Index(vals2, name="y") - res = i1.append(i2) - exp = Index(exp_data) - tm.assert_index_equal(res, exp) - - # index.append name match - i1 = Index(vals1, name="x") - i2 = Index(vals2, name="x") - res = i1.append(i2) - exp = Index(exp_data, name="x") - tm.assert_index_equal(res, exp) - - # cannot append non-index - with pytest.raises(TypeError, match="all inputs must be Index"): - Index(vals1).append(vals2) - - with pytest.raises(TypeError, match="all inputs must be Index"): - Index(vals1).append([Index(vals2), vals3]) - - # ----- Series ----- # - - # series.append - res = Series(vals1)._append(Series(vals2), ignore_index=True) - exp = Series(exp_data) - tm.assert_series_equal(res, exp, check_index_type=True) - - # concat - res = pd.concat([Series(vals1), Series(vals2)], ignore_index=True) - tm.assert_series_equal(res, exp, check_index_type=True) - - # 3 elements - res = Series(vals1)._append([Series(vals2), Series(vals3)], ignore_index=True) - exp = Series(exp_data3) - tm.assert_series_equal(res, exp) - - res = pd.concat( - [Series(vals1), Series(vals2), Series(vals3)], - ignore_index=True, - ) - tm.assert_series_equal(res, exp) - - # name mismatch - s1 = Series(vals1, name="x") - s2 = Series(vals2, name="y") - res = s1._append(s2, ignore_index=True) - exp = Series(exp_data) - tm.assert_series_equal(res, exp, check_index_type=True) - - res = pd.concat([s1, s2], ignore_index=True) - tm.assert_series_equal(res, exp, check_index_type=True) - - # name match - s1 = Series(vals1, name="x") - s2 = Series(vals2, name="x") - res = s1._append(s2, ignore_index=True) - exp = Series(exp_data, name="x") - tm.assert_series_equal(res, exp, check_index_type=True) - - res = pd.concat([s1, s2], ignore_index=True) - tm.assert_series_equal(res, exp, check_index_type=True) - - # cannot append non-index - msg = ( - r"cannot concatenate object of type '.+'; " - "only Series and DataFrame objs are valid" - ) - with pytest.raises(TypeError, match=msg): - Series(vals1)._append(vals2) - - with pytest.raises(TypeError, match=msg): - Series(vals1)._append([Series(vals2), vals3]) - - with pytest.raises(TypeError, match=msg): - pd.concat([Series(vals1), vals2]) - - with pytest.raises(TypeError, match=msg): - pd.concat([Series(vals1), Series(vals2), vals3]) - - def test_concatlike_dtypes_coercion(self, item, item2, request): - # GH 13660 - typ1, vals1 = item - typ2, vals2 = item2 - - vals3 = vals2 - - # basically infer - exp_index_dtype = None - exp_series_dtype = None - - if typ1 == typ2: - pytest.skip("same dtype is 
tested in test_concatlike_same_dtypes") - elif typ1 == "category" or typ2 == "category": - pytest.skip("categorical type tested elsewhere") - - # specify expected dtype - if typ1 == "bool" and typ2 in ("int64", "float64"): - # series coerces to numeric based on numpy rule - # index doesn't because bool is object dtype - exp_series_dtype = typ2 - mark = pytest.mark.xfail(reason="GH#39187 casting to object") - request.node.add_marker(mark) - elif typ2 == "bool" and typ1 in ("int64", "float64"): - exp_series_dtype = typ1 - mark = pytest.mark.xfail(reason="GH#39187 casting to object") - request.node.add_marker(mark) - elif typ1 in {"datetime64[ns, US/Eastern]", "timedelta64[ns]"} or typ2 in { - "datetime64[ns, US/Eastern]", - "timedelta64[ns]", - }: - exp_index_dtype = object - exp_series_dtype = object - - exp_data = vals1 + vals2 - exp_data3 = vals1 + vals2 + vals3 - - # ----- Index ----- # - - # index.append - # GH#39817 - res = Index(vals1).append(Index(vals2)) - exp = Index(exp_data, dtype=exp_index_dtype) - tm.assert_index_equal(res, exp) - - # 3 elements - res = Index(vals1).append([Index(vals2), Index(vals3)]) - exp = Index(exp_data3, dtype=exp_index_dtype) - tm.assert_index_equal(res, exp) - - # ----- Series ----- # - - # series._append - # GH#39817 - res = Series(vals1)._append(Series(vals2), ignore_index=True) - exp = Series(exp_data, dtype=exp_series_dtype) - tm.assert_series_equal(res, exp, check_index_type=True) - - # concat - # GH#39817 - res = pd.concat([Series(vals1), Series(vals2)], ignore_index=True) - tm.assert_series_equal(res, exp, check_index_type=True) - - # 3 elements - # GH#39817 - res = Series(vals1)._append([Series(vals2), Series(vals3)], ignore_index=True) - exp = Series(exp_data3, dtype=exp_series_dtype) - tm.assert_series_equal(res, exp) - - # GH#39817 - res = pd.concat( - [Series(vals1), Series(vals2), Series(vals3)], - ignore_index=True, - ) - tm.assert_series_equal(res, exp) - - def test_concatlike_common_coerce_to_pandas_object(self): - # GH 13626 - # result must be Timestamp/Timedelta, not datetime.datetime/timedelta - dti = pd.DatetimeIndex(["2011-01-01", "2011-01-02"]) - tdi = pd.TimedeltaIndex(["1 days", "2 days"]) - - exp = Index( - [ - pd.Timestamp("2011-01-01"), - pd.Timestamp("2011-01-02"), - pd.Timedelta("1 days"), - pd.Timedelta("2 days"), - ] - ) - - res = dti.append(tdi) - tm.assert_index_equal(res, exp) - assert isinstance(res[0], pd.Timestamp) - assert isinstance(res[-1], pd.Timedelta) - - dts = Series(dti) - tds = Series(tdi) - res = dts._append(tds) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - assert isinstance(res.iloc[0], pd.Timestamp) - assert isinstance(res.iloc[-1], pd.Timedelta) - - res = pd.concat([dts, tds]) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - assert isinstance(res.iloc[0], pd.Timestamp) - assert isinstance(res.iloc[-1], pd.Timedelta) - - def test_concatlike_datetimetz(self, tz_aware_fixture): - tz = tz_aware_fixture - # GH 7795 - dti1 = pd.DatetimeIndex(["2011-01-01", "2011-01-02"], tz=tz) - dti2 = pd.DatetimeIndex(["2012-01-01", "2012-01-02"], tz=tz) - - exp = pd.DatetimeIndex( - ["2011-01-01", "2011-01-02", "2012-01-01", "2012-01-02"], tz=tz - ) - - res = dti1.append(dti2) - tm.assert_index_equal(res, exp) - - dts1 = Series(dti1) - dts2 = Series(dti2) - res = dts1._append(dts2) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - - res = pd.concat([dts1, dts2]) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - - @pytest.mark.parametrize("tz", ["UTC", 
"US/Eastern", "Asia/Tokyo", "EST5EDT"]) - def test_concatlike_datetimetz_short(self, tz): - # GH#7795 - ix1 = pd.date_range(start="2014-07-15", end="2014-07-17", freq="D", tz=tz) - ix2 = pd.DatetimeIndex(["2014-07-11", "2014-07-21"], tz=tz) - df1 = DataFrame(0, index=ix1, columns=["A", "B"]) - df2 = DataFrame(0, index=ix2, columns=["A", "B"]) - - exp_idx = pd.DatetimeIndex( - ["2014-07-15", "2014-07-16", "2014-07-17", "2014-07-11", "2014-07-21"], - tz=tz, - ) - exp = DataFrame(0, index=exp_idx, columns=["A", "B"]) - - tm.assert_frame_equal(df1._append(df2), exp) - tm.assert_frame_equal(pd.concat([df1, df2]), exp) - - def test_concatlike_datetimetz_to_object(self, tz_aware_fixture): - tz = tz_aware_fixture - # GH 13660 - - # different tz coerces to object - dti1 = pd.DatetimeIndex(["2011-01-01", "2011-01-02"], tz=tz) - dti2 = pd.DatetimeIndex(["2012-01-01", "2012-01-02"]) - - exp = Index( - [ - pd.Timestamp("2011-01-01", tz=tz), - pd.Timestamp("2011-01-02", tz=tz), - pd.Timestamp("2012-01-01"), - pd.Timestamp("2012-01-02"), - ], - dtype=object, - ) - - res = dti1.append(dti2) - tm.assert_index_equal(res, exp) - - dts1 = Series(dti1) - dts2 = Series(dti2) - res = dts1._append(dts2) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - - res = pd.concat([dts1, dts2]) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - - # different tz - dti3 = pd.DatetimeIndex(["2012-01-01", "2012-01-02"], tz="US/Pacific") - - exp = Index( - [ - pd.Timestamp("2011-01-01", tz=tz), - pd.Timestamp("2011-01-02", tz=tz), - pd.Timestamp("2012-01-01", tz="US/Pacific"), - pd.Timestamp("2012-01-02", tz="US/Pacific"), - ], - dtype=object, - ) - - res = dti1.append(dti3) - tm.assert_index_equal(res, exp) - - dts1 = Series(dti1) - dts3 = Series(dti3) - res = dts1._append(dts3) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - - res = pd.concat([dts1, dts3]) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - - def test_concatlike_common_period(self): - # GH 13660 - pi1 = pd.PeriodIndex(["2011-01", "2011-02"], freq="M") - pi2 = pd.PeriodIndex(["2012-01", "2012-02"], freq="M") - - exp = pd.PeriodIndex(["2011-01", "2011-02", "2012-01", "2012-02"], freq="M") - - res = pi1.append(pi2) - tm.assert_index_equal(res, exp) - - ps1 = Series(pi1) - ps2 = Series(pi2) - res = ps1._append(ps2) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - - res = pd.concat([ps1, ps2]) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - - def test_concatlike_common_period_diff_freq_to_object(self): - # GH 13221 - pi1 = pd.PeriodIndex(["2011-01", "2011-02"], freq="M") - pi2 = pd.PeriodIndex(["2012-01-01", "2012-02-01"], freq="D") - - exp = Index( - [ - pd.Period("2011-01", freq="M"), - pd.Period("2011-02", freq="M"), - pd.Period("2012-01-01", freq="D"), - pd.Period("2012-02-01", freq="D"), - ], - dtype=object, - ) - - res = pi1.append(pi2) - tm.assert_index_equal(res, exp) - - ps1 = Series(pi1) - ps2 = Series(pi2) - res = ps1._append(ps2) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - - res = pd.concat([ps1, ps2]) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - - def test_concatlike_common_period_mixed_dt_to_object(self): - # GH 13221 - # different datetimelike - pi1 = pd.PeriodIndex(["2011-01", "2011-02"], freq="M") - tdi = pd.TimedeltaIndex(["1 days", "2 days"]) - exp = Index( - [ - pd.Period("2011-01", freq="M"), - pd.Period("2011-02", freq="M"), - pd.Timedelta("1 days"), - pd.Timedelta("2 days"), - ], - dtype=object, - ) - - res = 
pi1.append(tdi) - tm.assert_index_equal(res, exp) - - ps1 = Series(pi1) - tds = Series(tdi) - res = ps1._append(tds) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - - res = pd.concat([ps1, tds]) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - - # inverse - exp = Index( - [ - pd.Timedelta("1 days"), - pd.Timedelta("2 days"), - pd.Period("2011-01", freq="M"), - pd.Period("2011-02", freq="M"), - ], - dtype=object, - ) - - res = tdi.append(pi1) - tm.assert_index_equal(res, exp) - - ps1 = Series(pi1) - tds = Series(tdi) - res = tds._append(ps1) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - - res = pd.concat([tds, ps1]) - tm.assert_series_equal(res, Series(exp, index=[0, 1, 0, 1])) - - def test_concat_categorical(self): - # GH 13524 - - # same categories -> category - s1 = Series([1, 2, np.nan], dtype="category") - s2 = Series([2, 1, 2], dtype="category") - - exp = Series([1, 2, np.nan, 2, 1, 2], dtype="category") - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s1._append(s2, ignore_index=True), exp) - - # partially different categories => not-category - s1 = Series([3, 2], dtype="category") - s2 = Series([2, 1], dtype="category") - - exp = Series([3, 2, 2, 1]) - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s1._append(s2, ignore_index=True), exp) - - # completely different categories (same dtype) => not-category - s1 = Series([10, 11, np.nan], dtype="category") - s2 = Series([np.nan, 1, 3, 2], dtype="category") - - exp = Series([10, 11, np.nan, np.nan, 1, 3, 2], dtype=np.float64) - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s1._append(s2, ignore_index=True), exp) - - def test_union_categorical_same_categories_different_order(self): - # https://github.com/pandas-dev/pandas/issues/19096 - a = Series(Categorical(["a", "b", "c"], categories=["a", "b", "c"])) - b = Series(Categorical(["a", "b", "c"], categories=["b", "a", "c"])) - result = pd.concat([a, b], ignore_index=True) - expected = Series( - Categorical(["a", "b", "c", "a", "b", "c"], categories=["a", "b", "c"]) - ) - tm.assert_series_equal(result, expected) - - def test_concat_categorical_coercion(self): - # GH 13524 - - # category + not-category => not-category - s1 = Series([1, 2, np.nan], dtype="category") - s2 = Series([2, 1, 2]) - - exp = Series([1, 2, np.nan, 2, 1, 2], dtype=np.float64) - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s1._append(s2, ignore_index=True), exp) - - # result shouldn't be affected by 1st elem dtype - exp = Series([2, 1, 2, 1, 2, np.nan], dtype=np.float64) - tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp) - tm.assert_series_equal(s2._append(s1, ignore_index=True), exp) - - # all values are not in category => not-category - s1 = Series([3, 2], dtype="category") - s2 = Series([2, 1]) - - exp = Series([3, 2, 2, 1]) - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s1._append(s2, ignore_index=True), exp) - - exp = Series([2, 1, 3, 2]) - tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp) - tm.assert_series_equal(s2._append(s1, ignore_index=True), exp) - - # completely different categories => not-category - s1 = Series([10, 11, np.nan], dtype="category") - s2 = Series([1, 3, 2]) - - exp = Series([10, 11, np.nan, 1, 3, 2], dtype=np.float64) - tm.assert_series_equal(pd.concat([s1, s2], 
ignore_index=True), exp) - tm.assert_series_equal(s1._append(s2, ignore_index=True), exp) - - exp = Series([1, 3, 2, 10, 11, np.nan], dtype=np.float64) - tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp) - tm.assert_series_equal(s2._append(s1, ignore_index=True), exp) - - # different dtype => not-category - s1 = Series([10, 11, np.nan], dtype="category") - s2 = Series(["a", "b", "c"]) - - exp = Series([10, 11, np.nan, "a", "b", "c"]) - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s1._append(s2, ignore_index=True), exp) - - exp = Series(["a", "b", "c", 10, 11, np.nan]) - tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp) - tm.assert_series_equal(s2._append(s1, ignore_index=True), exp) - - # if normal series only contains NaN-likes => not-category - s1 = Series([10, 11], dtype="category") - s2 = Series([np.nan, np.nan, np.nan]) - - exp = Series([10, 11, np.nan, np.nan, np.nan]) - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s1._append(s2, ignore_index=True), exp) - - exp = Series([np.nan, np.nan, np.nan, 10, 11]) - tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp) - tm.assert_series_equal(s2._append(s1, ignore_index=True), exp) - - def test_concat_categorical_3elem_coercion(self): - # GH 13524 - - # mixed dtypes => not-category - s1 = Series([1, 2, np.nan], dtype="category") - s2 = Series([2, 1, 2], dtype="category") - s3 = Series([1, 2, 1, 2, np.nan]) - - exp = Series([1, 2, np.nan, 2, 1, 2, 1, 2, 1, 2, np.nan], dtype="float") - tm.assert_series_equal(pd.concat([s1, s2, s3], ignore_index=True), exp) - tm.assert_series_equal(s1._append([s2, s3], ignore_index=True), exp) - - exp = Series([1, 2, 1, 2, np.nan, 1, 2, np.nan, 2, 1, 2], dtype="float") - tm.assert_series_equal(pd.concat([s3, s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s3._append([s1, s2], ignore_index=True), exp) - - # values are all in either category => not-category - s1 = Series([4, 5, 6], dtype="category") - s2 = Series([1, 2, 3], dtype="category") - s3 = Series([1, 3, 4]) - - exp = Series([4, 5, 6, 1, 2, 3, 1, 3, 4]) - tm.assert_series_equal(pd.concat([s1, s2, s3], ignore_index=True), exp) - tm.assert_series_equal(s1._append([s2, s3], ignore_index=True), exp) - - exp = Series([1, 3, 4, 4, 5, 6, 1, 2, 3]) - tm.assert_series_equal(pd.concat([s3, s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s3._append([s1, s2], ignore_index=True), exp) - - # values are all in either category => not-category - s1 = Series([4, 5, 6], dtype="category") - s2 = Series([1, 2, 3], dtype="category") - s3 = Series([10, 11, 12]) - - exp = Series([4, 5, 6, 1, 2, 3, 10, 11, 12]) - tm.assert_series_equal(pd.concat([s1, s2, s3], ignore_index=True), exp) - tm.assert_series_equal(s1._append([s2, s3], ignore_index=True), exp) - - exp = Series([10, 11, 12, 4, 5, 6, 1, 2, 3]) - tm.assert_series_equal(pd.concat([s3, s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s3._append([s1, s2], ignore_index=True), exp) - - def test_concat_categorical_multi_coercion(self): - # GH 13524 - - s1 = Series([1, 3], dtype="category") - s2 = Series([3, 4], dtype="category") - s3 = Series([2, 3]) - s4 = Series([2, 2], dtype="category") - s5 = Series([1, np.nan]) - s6 = Series([1, 3, 2], dtype="category") - - # mixed dtype, values are all in categories => not-category - exp = Series([1, 3, 3, 4, 2, 3, 2, 2, 1, np.nan, 1, 3, 2]) - res = pd.concat([s1, s2, s3, s4, s5, s6], ignore_index=True) - 
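# NOTE: standalone illustration of the coercion rule exercised in these tests
# ("category + not-category => not-category"); it is not part of the original
# test module.
import pandas as pd

cat_s = pd.Series([1, 3], dtype="category")
plain_s = pd.Series([2, 3])

coerced = pd.concat([cat_s, plain_s], ignore_index=True)
assert coerced.dtype == "int64"  # falls back to the common concrete dtype
assert coerced.tolist() == [1, 3, 2, 3]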
tm.assert_series_equal(res, exp) - res = s1._append([s2, s3, s4, s5, s6], ignore_index=True) - tm.assert_series_equal(res, exp) - - exp = Series([1, 3, 2, 1, np.nan, 2, 2, 2, 3, 3, 4, 1, 3]) - res = pd.concat([s6, s5, s4, s3, s2, s1], ignore_index=True) - tm.assert_series_equal(res, exp) - res = s6._append([s5, s4, s3, s2, s1], ignore_index=True) - tm.assert_series_equal(res, exp) - - def test_concat_categorical_ordered(self): - # GH 13524 - - s1 = Series(Categorical([1, 2, np.nan], ordered=True)) - s2 = Series(Categorical([2, 1, 2], ordered=True)) - - exp = Series(Categorical([1, 2, np.nan, 2, 1, 2], ordered=True)) - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s1._append(s2, ignore_index=True), exp) - - exp = Series(Categorical([1, 2, np.nan, 2, 1, 2, 1, 2, np.nan], ordered=True)) - tm.assert_series_equal(pd.concat([s1, s2, s1], ignore_index=True), exp) - tm.assert_series_equal(s1._append([s2, s1], ignore_index=True), exp) - - def test_concat_categorical_coercion_nan(self): - # GH 13524 - - # some edge cases - # category + not-category => not category - s1 = Series(np.array([np.nan, np.nan], dtype=np.float64), dtype="category") - s2 = Series([np.nan, 1]) - - exp = Series([np.nan, np.nan, np.nan, 1]) - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s1._append(s2, ignore_index=True), exp) - - s1 = Series([1, np.nan], dtype="category") - s2 = Series([np.nan, np.nan]) - - exp = Series([1, np.nan, np.nan, np.nan], dtype="float") - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s1._append(s2, ignore_index=True), exp) - - # mixed dtype, all nan-likes => not-category - s1 = Series([np.nan, np.nan], dtype="category") - s2 = Series([np.nan, np.nan]) - - exp = Series([np.nan, np.nan, np.nan, np.nan]) - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s1._append(s2, ignore_index=True), exp) - tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp) - tm.assert_series_equal(s2._append(s1, ignore_index=True), exp) - - # all category nan-likes => category - s1 = Series([np.nan, np.nan], dtype="category") - s2 = Series([np.nan, np.nan], dtype="category") - - exp = Series([np.nan, np.nan, np.nan, np.nan], dtype="category") - - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s1._append(s2, ignore_index=True), exp) - - def test_concat_categorical_empty(self): - # GH 13524 - - s1 = Series([], dtype="category") - s2 = Series([1, 2], dtype="category") - - msg = "The behavior of array concatenation with empty entries is deprecated" - with tm.assert_produces_warning(FutureWarning, match=msg): - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), s2) - tm.assert_series_equal(s1._append(s2, ignore_index=True), s2) - - with tm.assert_produces_warning(FutureWarning, match=msg): - tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), s2) - tm.assert_series_equal(s2._append(s1, ignore_index=True), s2) - - s1 = Series([], dtype="category") - s2 = Series([], dtype="category") - - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), s2) - tm.assert_series_equal(s1._append(s2, ignore_index=True), s2) - - s1 = Series([], dtype="category") - s2 = Series([], dtype="object") - - # different dtype => not-category - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), s2) - tm.assert_series_equal(s1._append(s2, ignore_index=True), s2) - 
tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), s2) - tm.assert_series_equal(s2._append(s1, ignore_index=True), s2) - - s1 = Series([], dtype="category") - s2 = Series([np.nan, np.nan]) - - # empty Series is ignored - exp = Series([np.nan, np.nan]) - with tm.assert_produces_warning(FutureWarning, match=msg): - tm.assert_series_equal(pd.concat([s1, s2], ignore_index=True), exp) - tm.assert_series_equal(s1._append(s2, ignore_index=True), exp) - - with tm.assert_produces_warning(FutureWarning, match=msg): - tm.assert_series_equal(pd.concat([s2, s1], ignore_index=True), exp) - tm.assert_series_equal(s2._append(s1, ignore_index=True), exp) - - def test_categorical_concat_append(self): - cat = Categorical(["a", "b"], categories=["a", "b"]) - vals = [1, 2] - df = DataFrame({"cats": cat, "vals": vals}) - cat2 = Categorical(["a", "b", "a", "b"], categories=["a", "b"]) - vals2 = [1, 2, 1, 2] - exp = DataFrame({"cats": cat2, "vals": vals2}, index=Index([0, 1, 0, 1])) - - tm.assert_frame_equal(pd.concat([df, df]), exp) - tm.assert_frame_equal(df._append(df), exp) - - # GH 13524 can concat different categories - cat3 = Categorical(["a", "b"], categories=["a", "b", "c"]) - vals3 = [1, 2] - df_different_categories = DataFrame({"cats": cat3, "vals": vals3}) - - res = pd.concat([df, df_different_categories], ignore_index=True) - exp = DataFrame({"cats": list("abab"), "vals": [1, 2, 1, 2]}) - tm.assert_frame_equal(res, exp) - - res = df._append(df_different_categories, ignore_index=True) - tm.assert_frame_equal(res, exp) diff --git a/spaces/pyodide-demo/self-hosted/yt.js b/spaces/pyodide-demo/self-hosted/yt.js deleted file mode 100644 index 4a942a2e92803e01ca8ec8ff26f82fed83c5c377..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/yt.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="yt.data";var REMOTE_PACKAGE_BASE="yt.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var 
size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... ("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","yt",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt","analysis_modules",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules","absorption_spectrum",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules/absorption_spectrum","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules","cosmological_observation",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation","light_cone",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/light_cone","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation","light_ray",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/light_ray","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules","halo_analysis",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules","halo_finding",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules/halo_finding","fof",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules/halo_finding","hop",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules/halo_finding","rockstar",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules/halo_finding","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules","halo_mass_function",true,true);Module["FS_createPath"]("/lib/python3.9/site-p
ackages/yt/analysis_modules","level_sets",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules","particle_trajectories",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules","photon_simulator",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules/photon_simulator","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules","ppv_cube",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules/ppv_cube","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules","radmc3d_export",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules/radmc3d_export","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules","spectral_integrator",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules","star_analysis",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules","sunrise_export",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules","sunyaev_zeldovich",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules/sunyaev_zeldovich","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/analysis_modules","two_point_functions",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt","data_objects",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/data_objects","level_sets",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/data_objects/level_sets","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/data_objects","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt","extensions",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt","extern",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/extern","tqdm",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt","fields",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/fields","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt","frontends",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","adaptahop",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/adaptahop","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","ahf",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/ahf","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","amrvac",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/amrvac","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/amrvac/tests","sample_parfiles",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","art",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/art","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","artio",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/artio","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/artio","artio_headers",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","athena",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/a
thena","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","athena_pp",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/athena_pp","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","boxlib",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/boxlib","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","chombo",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/chombo","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","eagle",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/eagle","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","enzo",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/enzo","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","enzo_p",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/enzo_p","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","exodus_ii",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/exodus_ii","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","fits",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/fits","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","flash",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/flash","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","gadget",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/gadget","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","gadget_fof",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/gadget_fof","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","gamer",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/gamer","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","gdf",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/gdf","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","gizmo",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/gizmo","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","halo_catalog",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/halo_catalog","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","http_stream",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","moab",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/moab","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","open_pmd",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/open_pmd","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","owls",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/owls","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","owls_subfind",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/fro
ntends/owls_subfind","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","ramses",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/ramses","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","rockstar",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/rockstar","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","sdf",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/sdf","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","sph",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","stream",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/stream","sample_data",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/stream","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","tipsy",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/tipsy","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends","ytdata",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/frontends/ytdata","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt","geometry",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/geometry","coordinates",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/geometry/coordinates","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/geometry","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt","units",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/units","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt","utilities",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/utilities","amr_kdtree",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/utilities","answer_testing",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/utilities","grid_data_format",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/utilities/grid_data_format","conversion",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/utilities/grid_data_format","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/utilities/grid_data_format","docs",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/utilities/grid_data_format","scripts",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/utilities","lib",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/utilities/lib","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/utilities","parallel_tools",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/utilities","poster",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/utilities","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt","visualization",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/visualization","mapserver",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/visualization/mapserver","html",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/visualization","tests",true,true);Module["FS_createPath"]("/lib/python3.9/si
te-packages/yt/visualization","volume_rendering",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/visualization/volume_rendering","shaders",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/yt/visualization/volume_rendering","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","yt-3.6.1-py3.9.egg-info",true,true);Module["FS_createPath"]("/","bin",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:26474107,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1292,2489,3496,4751,5962,7387,8588,9608,10876,12105,13358,14607,15977,17165,18623,19759,20932,22342,23435,24529,25704,27136,28425,29747,31179,32559,33931,35140,36527,37516,38880,40460,42171,43670,45200,46589,47939,49074,50390,51597,52770,54012,55351,56462,57638,58748,59862,60571,61994,63163,64169,65343,66543,67547,68646,69892,71069,72288,73701,74950,76295,77373,78839,80099,81243,82554,83430,84649,85857,86860,87871,89126,90146,91377,92579,93828,94981,96044,97082,98067,99432,100870,102022,103211,104529,105691,107096,108337,109634,110940,112219,113490,114790,116042,117276,118766,119917,121184,122323,123472,124708,125766,127009,128276,129685,130887,132039,133167,134179,135081,136066,137236,138377,139329,140510,141538,142562,143358,144373,145364,146377,147191,148295,149428,150290,151214,152140,152810,153629,154903,156064,157373,158496,159605,160568,161541,162444,163347,164166,165311,166138,167043,168027,169065,170175,171393,172511,173618,174828,175987,177274,178007,179311,180629,181962,182944,184045,185249,186477,187570,188596,189709,190916,191913,193160,194488,195737,197020,198293,199561,200745,201883,203173,204242,205412,206501,207702,208774,209967,211085,212355,213480,214626,215805,216990,218282,219218,220413,221497,222772,224040,225282,226533,227678,228645,229936,231025,232284,233481,234550,235812,237027,237935,238941,240049,241119,242249,243564,244676,245730,246866,248096,249230,250354,251573,252591,253635,254774,255688,256839,257833,258977,260263,261437,262489,263909,265247,266462,267898,269183,270521,271743,272775,274124,275571,276864,277939,279184,280352,281559,282550,283798,285460,286961,288555,289822,291090,291959,293198,294545,295765,296642,297529,298699,300098,301553,302949,304258,305696,306839,308096,309520,310911,312181,313476,314822,316273,317394,318590,320021,321227,322710,323754,325019,326177,327465,328747,330039,331422,332706,333978,335192,336382,337614,338931,340228,341508,342714,343758,344800,346223,347626,348744,350193,351322,352545,353841,354860,356498,357909,359299,360508,362112,363530,364919,366417,367925,369364,370691,372289,373843,375401,376748,378276,379795,381289,382572,383767,384903,386057,387438,388882,390118,391301,392353,393600,394808,395836,397146,398361,399444,400547,401666,402812,404018,405149,406290,407566,408793,410002,411263,412380,413547,414611,415567,416636,417821,419148,420417,421515,422774,423886,425084,426413,427426,428569,429679,430844,431771,433092,434319,435453,436536,437421,438798,439940,440957,442120,443343,444540,445455,446587,447006,447799,449055,450209,451260,452408,453307,454457,455643,456719,457909,458878,459885,460822,461697,462869,463694,464496,465498,466692,467830,468746,469999,471296,472515,473646,474959,476306,477379,478583,479810,480522,481210,482191,483223,484084,484971,485921,486867,487736,488613,489443,4905
64,491605,492283,493215,494292,495361,496479,497066,497732,498460,499385,500106,501195,502060,503034,503906,504577,505695,506600,507183,507913,508604,509266,510098,510879,511763,512452,513020,513979,515034,516047,516926,517667,518555,519433,520244,521115,522018,522967,523918,524838,525768,526661,527623,528453,529388,530288,531323,532045,532741,533934,534581,535574,536317,537187,537623,538632,539701,540697,541744,542616,543934,545063,545784,546929,547977,548832,549702,550694,551650,552765,553752,554658,555804,556915,557999,559236,560325,561248,562354,563224,564189,565283,566298,567264,568145,569148,570250,571071,571999,573127,573923,574635,575434,576120,576819,577612,578407,579447,580011,580706,581161,582065,583141,583773,584403,584835,585772,586850,587739,588782,589865,591003,591610,592944,594306,595614,596782,598338,599881,601130,602572,604032,605575,607113,608486,609654,610815,612089,613315,614546,615691,616795,618052,619357,620762,621844,622999,624122,625307,626503,627286,628011,628975,629960,630946,631758,632712,633676,634525,635422,636235,637371,638433,639161,640012,641074,642102,643247,643842,644499,645223,646141,646808,647919,648849,649839,650675,651390,652403,653438,654070,654678,655387,656240,657041,658014,658780,659736,660655,661659,662530,663631,664462,665339,666182,667034,667885,668772,669620,670679,671540,672533,673398,674303,675211,676146,676922,677591,678593,679323,680320,680926,681628,682220,683277,684406,685393,686351,687234,688101,689422,690543,691300,692448,693503,694359,695243,696223,697155,698239,699249,700125,701255,702382,703478,704713,705792,706688,707780,708651,709604,710693,711683,712631,713544,714555,715718,716610,717598,718709,719632,720247,721029,721805,722465,723272,724317,724812,725550,725976,726812,727746,728752,729447,729988,730392,731391,732258,732785,733626,734006,734815,735959,736855,737906,738913,740075,741048,742087,743498,744806,746253,747417,749006,750490,751928,753440,755039,756595,758145,759497,760503,761523,762744,764136,765514,766584,767673,768829,769946,770977,772053,773447,774526,775841,776793,777985,779144,780374,781599,782771,783594,784380,785516,786764,787978,789420,790866,792032,793424,794793,795959,797229,797986,799294,800515,801622,802712,803759,804866,806258,807427,808640,809791,811193,812005,813371,814721,815741,816971,818345,819449,820846,822200,823262,824434,825603,826635,827845,829086,830225,831264,832430,833680,834707,835912,836949,838241,839403,840584,841722,843008,844189,845191,846586,847864,849269,850612,851738,852865,854028,855238,856320,857607,858770,860079,861180,862329,863505,864402,865641,866806,868124,869199,870351,871415,872484,873930,875017,876246,877534,878611,879830,880672,881605,882635,883793,884902,886089,886957,887844,889106,890230,891132,892453,893736,894947,896154,897473,898520,899611,900703,901777,902858,904006,905228,906456,907619,908680,909970,911112,912352,913434,914640,915851,916812,917891,919070,920260,921334,922609,923714,924652,925692,926925,928136,929222,930377,931333,932343,933097,933893,935160,936398,937606,938849,939760,940591,941514,942423,943172,944043,945107,946387,947406,948669,949929,951203,952682,954073,955268,956551,957685,958625,959751,960800,961887,963097,964045,965257,966369,967593,968613,969837,971038,972164,973349,974448,975532,976479,977603,978978,980161,981246,982407,983551,984857,985913,987132,988303,989697,990977,992324,993524,994922,996162,997121,998296,999388,1000674,1001973,1003273,1004561,1005802,1006972,1008119,1009231,1010265,1011451,1012684,1013889,1014922,1016055,1016958,1018007,1
019065,1020310,1021350,1022447,1023600,1024809,1026011,1026988,1027851,1028811,1029835,1031006,1031818,1032773,1033619,1034561,1035811,1037034,1038080,1039053,1040061,1040949,1042227,1043367,1044510,1045452,1046758,1048148,1049412,1050862,1052011,1053138,1054386,1055749,1056643,1057761,1059112,1060467,1061439,1062615,1063785,1064933,1066151,1067274,1068366,1069407,1070489,1071697,1072936,1074258,1075531,1076690,1077931,1079081,1080223,1081293,1082447,1083449,1084617,1085741,1086765,1087932,1089062,1090186,1091276,1092206,1093172,1094482,1095653,1096966,1098142,1099262,1100377,1101539,1102742,1104009,1105219,1106470,1107837,1108956,1110318,1111405,1112653,1113923,1115082,1116240,1117265,1118426,1119429,1120586,1121653,1122686,1123775,1124954,1125981,1127181,1128380,1129579,1130601,1131448,1132569,1133725,1134640,1135886,1137198,1138020,1139369,1140728,1141963,1143090,1144255,1145342,1146356,1147213,1148098,1149094,1150072,1151152,1152067,1152945,1153911,1154818,1155702,1156770,1158e3,1158981,1159872,1160988,1161911,1162944,1163875,1164971,1166073,1167249,1168234,1169473,1170545,1171647,1172906,1174124,1175215,1176293,1176938,1177799,1178757,1180054,1180950,1181648,1182610,1183921,1185034,1186189,1187350,1188329,1189267,1190040,1190783,1191923,1193030,1193800,1194985,1196202,1197585,1198694,1199915,1200594,1201221,1201947,1202742,1203544,1204333,1205192,1206015,1207287,1208103,1209154,1210192,1211448,1212727,1213578,1214650,1215976,1217586,1218876,1220131,1221208,1222398,1223651,1224891,1226110,1227001,1228228,1229295,1230574,1231691,1232857,1233859,1234885,1235856,1236924,1237893,1238922,1240042,1240966,1242127,1242949,1244041,1245107,1246223,1247180,1248463,1249747,1250985,1252231,1253524,1254477,1255627,1256594,1257727,1258920,1259962,1260973,1262122,1262964,1263975,1265050,1266210,1267182,1268192,1269086,1269999,1271010,1272231,1273421,1274671,1275960,1277134,1278140,1279655,1280628,1281718,1282605,1283890,1284776,1285755,1286996,1287984,1289060,1289845,1291040,1291870,1292826,1294200,1295266,1296201,1297829,1299051,1300132,1301175,1301977,1302726,1303403,1304044,1304956,1305874,1307055,1308109,1309505,1310902,1312172,1313446,1314825,1316061,1317257,1318416,1319403,1320490,1321442,1322768,1323938,1325028,1325923,1326962,1328081,1329371,1330584,1332218,1333159,1333610,1334574,1335797,1336785,1337845,1338902,1339995,1340732,1341838,1343158,1344559,1345571,1346647,1347777,1348934,1350086,1351007,1352150,1353087,1353816,1354581,1355178,1355923,1357075,1358277,1359173,1360202,1361095,1362052,1363373,1364740,1365825,1366933,1368057,1369079,1370423,1371486,1372544,1373742,1374901,1375967,1377124,1378070,1378875,1379627,1380740,1381819,1382689,1383503,1384209,1384843,1385508,1386158,1387086,1388181,1389111,1390289,1391173,1392220,1392923,1393686,1394706,1395899,1397124,1398027,1398693,1399688,1400862,1401926,1402999,1403916,1404951,1406120,1407111,1408065,1409030,1409986,1410832,1411627,1412454,1413355,1414187,1414882,1415659,1416699,1417826,1418933,1419997,1420618,1421512,1422568,1423850,1424821,1425889,1427015,1427783,1428659,1429527,1430251,1431075,1431829,1432535,1433670,1434930,1436202,1437282,1438550,1439638,1440653,1441958,1443046,1444142,1445123,1446392,1447415,1448407,1449195,1450248,1451095,1452075,1453158,1454191,1455033,1456247,1457170,1458360,1459636,1460688,1462053,1463245,1464243,1465139,1466428,1467565,1468613,1469765,1470907,1472064,1472934,1474115,1475302,1476375,1477362,1478493,1479730,1481009,1482013,1483236,1484430,1485541,1486715,1487856,1489216,1490166,1491275,1492544,14934
80,1494570,1495993,1497245,1498203,1499356,1500333,1501557,1502892,1504167,1505288,1505897,1506661,1507460,1508740,1509849,1510650,1511846,1513099,1514203,1515421,1516560,1517582,1518519,1519738,1520827,1521974,1522564,1523556,1524759,1525388,1526426,1527532,1528624,1529774,1530832,1532028,1532927,1533597,1534648,1536090,1537334,1538292,1539415,1540684,1541871,1543199,1544504,1545826,1547195,1548460,1549786,1551098,1552156,1552937,1554081,1555511,1556844,1557974,1558901,1559996,1561113,1562011,1562998,1564169,1565426,1566222,1567363,1568486,1569323,1569768,1570808,1571471,1572143,1573268,1574232,1574912,1575887,1576872,1577815,1578699,1579592,1580479,1581683,1582602,1583221,1584260,1585275,1586426,1587589,1588490,1589079,1589738,1590459,1591041,1591874,1592482,1593355,1594265,1595108,1596233,1597176,1597687,1598265,1598876,1599599,1600471,1601598,1602472,1603462,1604392,1604876,1605332,1605855,1606407,1606902,1607553,1608240,1608914,1609691,1610510,1611303,1612145,1613110,1614097,1614919,1615639,1616600,1617679,1618610,1619529,1620571,1621696,1622654,1623604,1624714,1625736,1626395,1627041,1627703,1628268,1628918,1629461,1629989,1630623,1631252,1632013,1632661,1633371,1634035,1634718,1635399,1636157,1636933,1638061,1638963,1639830,1640657,1641342,1642010,1642677,1643341,1643707,1644115,1644482,1644801,1645324,1645617,1645983,1646484,1646897,1647402,1647745,1648344,1648913,1649262,1649797,1650242,1651026,1652003,1652875,1653805,1654685,1655849,1656865,1657661,1658592,1659498,1660212,1660911,1661619,1662499,1663296,1664080,1664861,1665563,1666462,1667329,1668182,1669187,1670162,1670997,1671781,1672636,1673523,1674393,1675274,1676175,1677055,1677894,1678515,1679545,1680257,1680899,1681796,1682740,1683492,1684313,1685144,1686128,1686873,1687756,1688437,1689291,1690113,1690863,1691733,1692725,1693691,1694655,1695473,1696478,1697330,1698321,1699239,1700219,1701275,1702189,1703159,1704064,1705011,1705988,1706909,1707983,1708818,1709878,1710822,1711734,1712732,1713561,1714536,1715425,1716438,1717479,1718404,1719380,1720250,1721174,1722173,1723199,1724195,1725190,1726061,1727095,1728224,1728985,1729985,1730957,1731774,1732393,1733010,1734014,1734900,1735863,1736694,1737559,1738413,1739198,1739989,1740768,1741613,1742433,1743311,1744110,1744950,1745713,1746485,1747229,1748025,1748914,1749703,1750679,1751609,1752525,1753354,1754195,1755146,1755850,1756584,1757412,1758286,1759066,1759751,1760498,1761327,1762166,1762913,1763516,1764363,1765050,1765779,1766580,1767423,1768140,1768987,1769799,1770814,1771832,1772904,1773870,1774817,1775911,1776693,1777609,1778427,1779372,1780255,1781157,1782041,1782805,1783566,1784389,1785232,1786120,1786933,1787722,1788615,1789383,1790195,1791021,1792071,1793075,1793876,1794723,1795744,1796718,1797749,1798607,1799250,1800033,1800749,1801652,1802411,1802967,1803874,1804778,1805567,1806489,1807365,1808146,1809065,1809859,1810770,1811632,1812539,1813542,1814430,1815240,1816093,1817127,1817928,1818958,1819614,1820538,1821514,1822220,1822830,1823527,1824347,1825209,1825954,1826961,1827637,1828271,1829189,1830059,1830940,1831714,1832893,1833859,1834818,1835730,1836501,1837198,1838050,1838744,1839420,1840157,1840965,1842042,1842819,1843657,1844458,1845371,1846227,1847032,1847881,1848576,1849208,1849854,1850481,1851147,1851762,1852407,1853207,1854186,1855043,1855904,1857091,1858126,1859025,1860135,1861108,1861901,1862738,1863568,1864390,1865150,1865942,1867013,1867943,1868731,1869545,1870469,1871181,1871953,1872665,1873446,1874327,1875404,1876281,1876936,1877809,1878570,1879332,
1880127,1880888,1881722,1882668,1883515,1884506,1885212,1885954,1886661,1887312,1888033,1888640,1889715,1890736,1891573,1892355,1893119,1893997,1894790,1895896,1896478,1897335,1898141,1898979,1899892,1900760,1901556,1902177,1903161,1904015,1904797,1905409,1906154,1907006,1908098,1908943,1909830,1910550,1911465,1912330,1913207,1913992,1914951,1915802,1916566,1917480,1918485,1919427,1920251,1921271,1922317,1923133,1924096,1924932,1925959,1926903,1927755,1928588,1929562,1930543,1931527,1932231,1932969,1933715,1934450,1935157,1935826,1936468,1937116,1938001,1939053,1940060,1940712,1941386,1942249,1943052,1943961,1944896,1945811,1946875,1947674,1948841,1949646,1950679,1951696,1952766,1953629,1954634,1955536,1956416,1957363,1958349,1959373,1960308,1961371,1962367,1963314,1964158,1965122,1966182,1967285,1968305,1969205,1970290,1971266,1972199,1973143,1974114,1975094,1976114,1977157,1978228,1979170,1980300,1981335,1982199,1983068,1984101,1985043,1985809,1986903,1987983,1988770,1989665,1990703,1991468,1992259,1993297,1994401,1995430,1996381,1997195,1998104,1998996,1999863,2000782,2001871,2002738,2003832,2004865,2005805,2006743,2007774,2008708,2009744,2010582,2011466,2012556,2013260,2013971,2014810,2015468,2016315,2017179,2018072,2019127,2019830,2020738,2021540,2022448,2023452,2024378,2025346,2026301,2027076,2028115,2028916,2029609,2030418,2031308,2032141,2032975,2033844,2034877,2035442,2036300,2037063,2037616,2038407,2039262,2040084,2040937,2041993,2043115,2043984,2044694,2045736,2046639,2047503,2048057,2049004,2049763,2050618,2051515,2052388,2053441,2054348,2055270,2056117,2056836,2057738,2058598,2059356,2060224,2061084,2062093,2062954,2064061,2064921,2065866,2066940,2067919,2068857,2069924,2070687,2071356,2072092,2072945,2073940,2074918,2076013,2076844,2077631,2078454,2079362,2080177,2080961,2081769,2082650,2083615,2084489,2085404,2086320,2087227,2087950,2088699,2089645,2090404,2091356,2092152,2093208,2093974,2094861,2095554,2096225,2096944,2097583,2098342,2099022,2099881,2100842,2101571,2102393,2103080,2103909,2104657,2105476,2106222,2106963,2107785,2108638,2109628,2110245,2110880,2111467,2112083,2112997,2113960,2114883,2115809,2116694,2117528,2118404,2119298,2120065,2121158,2122128,2123129,2124112,2125006,2126025,2126860,2127633,2128733,2129626,2130607,2131744,2132619,2133601,2134624,2135597,2136488,2137313,2138127,2139031,2139803,2140720,2141805,2142507,2143228,2143936,2144776,2145716,2146333,2147205,2148145,2149047,2149811,2150808,2151813,2152857,2153752,2154610,2155576,2156406,2157301,2158422,2159431,2160260,2161134,2162026,2162804,2163807,2164926,2165828,2166781,2167839,2168691,2169375,2169962,2170778,2171681,2172446,2173193,2173711,2174205,2175181,2176047,2177165,2178051,2179044,2179929,2180808,2181765,2182651,2183521,2184584,2185336,2186217,2187047,2188005,2188808,2189776,2190501,2191305,2192011,2192894,2193899,2194923,2195877,2196619,2197481,2198396,2199326,2200276,2201337,2202331,2203375,2204434,2205236,2205874,2206614,2207537,2208445,2209344,2210287,2211253,2212135,2213070,2213953,2214898,2215812,2216720,2217672,2218637,2219563,2220424,2221213,2222107,2223019,2224005,2224956,2225974,2226932,2227856,2228852,2229762,2230600,2231590,2232530,2233484,2234378,2235203,2236112,2236961,2237817,2238789,2239835,2240759,2241767,2242843,2243782,2244649,2245649,2246386,2247066,2247796,2248786,2249426,2250206,2251195,2252067,2252922,2253513,2254182,2254754,2255445,2256130,2256900,2257626,2258694,2259672,2260406,2261183,2262019,2262999,2263796,2264598,2265513,2266294,2267234,2268214,2269098,2269966,227
0938,2271925,2272736,2273451,2274256,2275020,2275965,2276944,2277808,2278621,2279485,2280358,2281113,2282003,2282844,2283573,2284314,2285168,2285858,2286596,2287428,2288208,2289072,2289884,2290865,2291681,2292394,2293257,2294182,2295115,2296200,2297166,2298157,2299113,2299745,2300574,2301429,2302290,2303053,2303782,2304659,2305529,2306415,2307218,2307912,2308738,2309400,2310275,2311092,2311873,2312878,2313764,2314831,2315714,2316644,2317488,2318403,2319391,2320375,2321415,2322008,2322419,2322991,2323534,2324087,2325021,2325972,2326631,2327083,2327653,2328158,2329063,2329891,2330600,2331437,2332301,2333269,2333899,2334483,2335151,2336049,2336887,2337565,2338502,2339663,2340870,2341898,2343041,2344094,2345096,2345778,2346578,2347605,2348693,2349849,2350966,2351668,2352328,2352955,2353560,2354182,2354834,2355504,2356111,2356712,2357315,2357903,2358566,2359521,2360179,2360942,2361703,2362602,2363182,2364020,2364853,2365688,2366546,2367500,2368226,2369167,2370041,2370827,2371523,2372324,2373079,2373769,2374438,2375123,2375725,2376493,2377208,2378270,2378804,2379416,2379868,2380337,2380830,2381319,2381784,2382342,2382944,2383501,2384200,2385102,2386265,2387428,2388368,2389305,2390057,2391014,2391996,2392579,2393357,2394577,2395667,2396808,2397910,2398754,2399926,2400953,2401907,2402887,2403741,2404853,2405868,2406839,2407978,2408730,2409854,2411036,2411420,2411896,2412917,2413865,2415024,2415965,2416921,2418085,2419324,2420370,2421227,2422102,2423093,2424044,2425150,2426132,2427027,2428179,2429381,2430468,2431500,2432697,2433092,2433460,2434670,2435720,2436771,2437878,2438852,2439684,2440724,2441789,2442476,2443586,2444769,2445875,2446737,2447569,2448828,2449891,2450939,2451668,2452639,2453620,2454334,2454994,2455799,2456603,2457295,2458122,2458946,2459973,2460824,2461449,2462261,2462704,2463181,2464218,2465195,2465630,2466447,2466826,2467515,2468466,2469551,2470188,2470811,2471263,2472230,2473212,2473902,2474428,2474850,2475796,2476801,2477568,2478045,2478442,2479426,2480313,2480859,2481706,2482118,2482814,2483781,2484518,2485234,2486315,2486966,2487617,2488068,2488984,2490197,2491188,2492193,2493299,2494442,2495052,2496108,2496937,2497666,2498705,2499677,2500569,2501530,2502250,2502934,2503956,2504703,2505350,2506471,2507322,2508306,2509318,2510307,2511333,2512420,2513700,2514791,2516047,2516988,2517997,2519083,2519950,2520859,2522008,2522851,2523824,2524982,2526175,2527094,2528172,2528935,2530299,2531625,2532883,2534149,2535185,2536263,2537570,2538608,2539642,2540684,2541856,2543016,2544140,2545421,2546081,2546827,2547740,2548951,2549992,2550429,2550971,2552063,2552907,2554420,2555895,2557402,2558856,2560296,2561743,2563154,2564659,2566180,2567843,2569030,2570476,2571963,2573355,2574851,2576427,2577831,2578730,2580153,2581531,2582887,2584079,2585526,2586587,2588029,2589344,2590661,2592202,2593511,2594929,2595979,2597280,2598615,2600008,2601231,2602490,2603662,2605073,2606543,2607948,2609334,2610661,2612123,2613689,2615176,2616719,2618295,2619436,2620848,2622418,2623954,2625281,2626648,2628070,2629244,2630745,2631682,2633118,2634486,2635996,2637207,2638250,2639585,2640838,2642277,2643461,2645044,2646448,2647929,2649581,2651082,2652659,2654210,2655739,2656913,2658482,2659970,2661476,2662840,2664449,2665983,2667593,2669151,2670599,2671922,2673368,2674950,2676351,2677924,2679429,2680809,2682143,2683515,2684914,2686279,2687587,2689011,2690464,2691924,2693375,2694783,2696230,2697501,2699159,2700678,2702031,2703536,2705229,2706700,2708147,2709370,2710823,2712337,2713838,2715299,2716513,2718022,271959
2,2720971,2722454,2723951,2725041,2726242,2727785,2729286,2730538,2731999,2733566,2735097,2736591,2737863,2739006,2740399,2741769,2742494,2743460,2744906,2746270,2747432,2748607,2749725,2750379,2751208,2752167,2753153,2754311,2755606,2756692,2757735,2758505,2759431,2759941,2760180,2761371,2762682,2764117,2765402,2766638,2768063,2769668,2771239,2772698,2774192,2775667,2776947,2778465,2779920,2781333,2782746,2784213,2785642,2787117,2788502,2790129,2791615,2792939,2794080,2795216,2796241,2797460,2798294,2799351,2799835,2800913,2801952,2802958,2803839,2804701,2805807,2806895,2807963,2808570,2809778,2810778,2811927,2812917,2813959,2814988,2815972,2817127,2818382,2819435,2820355,2820962,2822022,2823098,2824211,2825302,2826335,2827417,2828632,2829778,2830885,2831872,2833199,2834280,2835313,2836345,2837425,2838700,2839628,2840734,2841604,2842471,2842985,2843834,2845069,2846040,2846993,2847965,2848856,2849684,2850775,2851725,2852840,2853811,2854712,2855397,2856206,2857137,2857828,2858827,2859719,2861012,2862121,2863151,2864131,2865395,2866372,2867317,2868249,2869243,2870524,2871748,2872851,2873687,2874833,2875787,2877194,2878238,2879134,2880380,2881575,2882771,2883902,2885048,2886084,2887128,2888148,2888935,2889878,2890773,2891775,2892992,2894046,2895128,2896218,2897344,2898033,2899170,2900111,2901383,2902579,2903574,2904671,2905813,2907108,2908106,2909214,2910414,2911567,2912677,2913492,2914869,2915927,2917177,2918015,2919144,2920388,2921298,2922070,2922940,2923793,2924962,2926036,2927134,2928492,2929783,2930823,2931969,2933140,2934341,2935673,2936870,2938044,2939035,2940289,2941372,2942418,2943407,2944445,2945454,2946462,2947518,2948740,2949816,2950626,2951742,2952731,2953733,2954713,2955914,2956877,2957698,2958327,2959247,2960128,2961108,2962222,2963256,2964217,2965585,2966719,2967822,2968652,2969714,2970929,2971840,2972721,2973606,2974474,2975530,2976798,2977982,2979064,2980063,2981185,2982204,2983117,2984305,2985419,2986258,2987394,2988547,2989538,2990513,2991493,2992310,2993163,2993841,2994856,2995944,2996590,2997307,2998545,2999573,3000584,3001781,3002837,3003927,3004848,3006040,3007243,3008545,3009603,3010875,3012018,3013036,3014147,3015343,3016436,3017626,3018689,3019789,3020937,3022073,3023157,3024315,3025524,3026643,3027606,3028595,3029647,3030755,3031563,3032553,3033466,3034556,3035824,3036971,3037803,3038652,3039893,3040859,3041533,3042643,3043968,3045057,3046187,3047362,3048230,3049437,3050752,3052019,3053220,3054316,3055089,3056202,3057038,3058053,3059175,3060007,3060888,3061925,3062853,3063795,3064821,3065960,3067139,3068176,3069211,3070593,3071576,3072714,3073844,3074974,3075981,3077170,3078463,3079531,3080631,3081444,3082466,3083697,3084870,3085924,3087252,3088449,3089783,3090937,3092039,3093075,3094080,3095159,3096402,3097272,3098453,3099555,3100290,3101390,3102714,3103988,3104903,3106111,3107277,3108493,3109105,3110130,3111413,3112648,3113817,3114800,3115936,3117183,3118176,3119297,3120459,3121574,3122656,3123753,3124866,3126067,3126982,3128052,3129182,3130247,3131328,3132706,3134001,3135362,3136657,3137641,3138858,3139812,3140891,3141985,3143197,3144282,3145483,3146566,3147400,3148337,3149276,3150523,3151850,3152716,3153672,3154432,3155667,3156675,3157794,3158924,3159906,3161083,3162204,3163558,3164635,3165876,3167096,3168237,3169523,3170679,3171655,3172715,3173817,3175018,3175951,3176986,3178037,3179218,3180316,3181345,3182381,3183431,3184355,3185467,3186590,3187671,3188979,3189971,3191002,3192084,3193047,3194077,3195253,3196407,3197377,3198382,3199612,3200686,3201893,3203144,3
203948,3205060,3206192,3207338,3208446,3209488,3210626,3211712,3212875,3214096,3215361,3216438,3217415,3218529,3219166,3219904,3221129,3222188,3223215,3224313,3225344,3226270,3227296,3228290,3229454,3230494,3231705,3232789,3234003,3234954,3236012,3237124,3238369,3239310,3240289,3241222,3242421,3243431,3244409,3245472,3246570,3247693,3248837,3249862,3250985,3252063,3253119,3254106,3255338,3256308,3257141,3258421,3259301,3260153,3261060,3262330,3263233,3264216,3265407,3266562,3267827,3268961,3270211,3271057,3272244,3273390,3274668,3275623,3276431,3277584,3278791,3280035,3280902,3281851,3283118,3283879,3285159,3286320,3287174,3288205,3289467,3290486,3291424,3292494,3293668,3294635,3295774,3296862,3298071,3299119,3300360,3301710,3302756,3303955,3305089,3306192,3307342,3308530,3309799,3311002,3311653,3312089,3312860,3313547,3314237,3315263,3316285,3317392,3318344,3319181,3320122,3321201,3322403,3323407,3324493,3325501,3326747,3327912,3328723,3329879,3331061,3332296,3333239,3334112,3335303,3336416,3337422,3338438,3339385,3340254,3341318,3342401,3343607,3344794,3345939,3347049,3348294,3349510,3350747,3351946,3353048,3354253,3355284,3356651,3357853,3359020,3360069,3361324,3362241,3363406,3364569,3365801,3366576,3367234,3368199,3369283,3370302,3371358,3372541,3373595,3374721,3376008,3377175,3378584,3379841,3380806,3381886,3382878,3384197,3385321,3386399,3387516,3388587,3389612,3390810,3391555,3392274,3393257,3394239,3395171,3396030,3396968,3397878,3398738,3399622,3400445,3401559,3402643,3403358,3404224,3405281,3406327,3407554,3408391,3409060,3409727,3410302,3411103,3411787,3412590,3413412,3414160,3415018,3415853,3416824,3417569,3418092,3418636,3419423,3420130,3421084,3422128,3422840,3423452,3423905,3424403,3424892,3425544,3426156,3426783,3427555,3428252,3428989,3429858,3430762,3431760,3432840,3433835,3434587,3435366,3436472,3437495,3438406,3439284,3440427,3441388,3442493,3443449,3444038,3444721,3445627,3446619,3447237,3447807,3448371,3449154,3449915,3450666,3451296,3451978,3452684,3453679,3454807,3455544,3456163,3456844,3457550,3458117,3458464,3459005,3459743,3460759,3461609,3462710,3463641,3464772,3465639,3466792,3467350,3468169,3469171,3470166,3471153,3472021,3472909,3473810,3474749,3475542,3476434,3477239,3478179,3479022,3479795,3480700,3481572,3482386,3483171,3484219,3485042,3485791,3486490,3487249,3488086,3489110,3490132,3490946,3491913,3492764,3493527,3494341,3495179,3496063,3497030,3497972,3498640,3499594,3500618,3501542,3502093,3502686,3503596,3504411,3504988,3505702,3506409,3507371,3508225,3509239,3510092,3511164,3511965,3512922,3513888,3514874,3515662,3516546,3517353,3518232,3519074,3519947,3520816,3521726,3522554,3523555,3524432,3525413,3526293,3527196,3528098,3529063,3529857,3530519,3531253,3531948,3532592,3533299,3533939,3534687,3535510,3536302,3537227,3538079,3539108,3539935,3540826,3541945,3542693,3543826,3544943,3545600,3546482,3547519,3548551,3549330,3550131,3551014,3551871,3552825,3553794,3554794,3555476,3556230,3556942,3557914,3558712,3559304,3560256,3561187,3562148,3563086,3564022,3564946,3565969,3566821,3567826,3568726,3569570,3570611,3571706,3572557,3573559,3574393,3575203,3576005,3576934,3578066,3579140,3579974,3580973,3581772,3582464,3583096,3583875,3584851,3585549,3586234,3586727,3587352,3588381,3589305,3590309,3591280,3592137,3593155,3594077,3595116,3595849,3596846,3597772,3598570,3599373,3600321,3601098,3602023,3602954,3603666,3604522,3605421,3606437,3607403,3608369,3609340,3610233,3611182,3612121,3613148,3614024,3614879,3615843,3616963,3618031,3618682,3619364,3620046,3621
080,3622032,3622793,3623770,3624706,3625636,3626536,3627472,3628309,3629182,3630066,3630997,3631913,3632778,3633676,3634565,3635415,3636421,3637346,3638309,3639271,3640292,3641242,3642285,3643154,3644171,3645041,3645910,3646879,3647858,3648803,3649654,3650657,3651592,3652516,3653596,3654464,3655399,3656317,3657279,3658200,3659053,3659873,3660531,3661192,3662092,3662813,3663820,3664599,3665468,3666124,3666712,3667436,3668050,3668786,3669498,3670290,3671115,3672060,3672943,3673619,3674409,3675227,3676126,3676960,3678065,3678931,3679769,3680562,3681507,3682425,3683232,3684320,3685233,3686014,3686786,3687457,3688344,3689308,3690124,3691115,3691933,3692870,3693885,3694630,3695614,3696561,3697273,3698193,3698932,3699582,3700394,3701188,3701997,3702995,3703765,3704744,3705608,3706444,3707515,3708410,3709437,3710477,3711566,3712528,3713330,3713987,3714719,3715585,3716448,3717004,3717702,3718548,3719520,3720235,3721090,3721927,3722873,3723606,3724361,3725197,3725968,3727046,3727962,3728922,3729869,3730807,3731806,3732770,3733793,3734765,3735406,3736329,3737116,3737996,3739127,3740364,3741399,3742519,3743581,3744632,3745365,3746174,3747194,3748267,3749438,3750574,3751368,3752023,3752641,3753273,3753881,3754474,3755081,3755944,3756881,3757767,3758618,3759542,3760284,3761237,3761977,3762904,3763639,3764269,3764976,3765788,3766593,3767107,3767696,3768189,3768665,3769129,3769624,3770118,3770711,3771262,3771833,3772497,3773393,3774558,3775712,3776649,3777627,3778547,3779547,3780137,3780972,3782171,3783327,3784338,3785349,3786568,3787072,3788173,3789044,3790092,3790954,3791834,3792809,3793757,3794823,3795836,3796720,3797825,3799013,3800013,3801047,3801700,3802776,3803899,3805094,3806243,3807274,3808187,3809326,3810383,3811389,3812364,3813351,3814424,3815501,3816452,3817383,3818509,3819555,3820655,3821541,3822667,3823036,3823700,3824874,3826e3,3826816,3827801,3828785,3829965,3830782,3831701,3832843,3833980,3834929,3835954,3836797,3837803,3838941,3839676,3840469,3841263,3841891,3842686,3843505,3844534,3845582,3846207,3846825,3847265,3848149,3849054,3849960,3850524,3851359,3851733,3852565,3853675,3854209,3854965,3855371,3856259,3857189,3858090,3858664,3859509,3859913,3860645,3861685,3862775,3863871,3864872,3865730,3866886,3868045,3868944,3869903,3870865,3871947,3872974,3874039,3875093,3875908,3876882,3877841,3878947,3879898,3881064,3882130,3883080,3884277,3885402,3886007,3887307,3888518,3889644,3891032,3892340,3893654,3894732,3895770,3896847,3898038,3899432,3900749,3902316,3903713,3905319,3906982,3908532,3910122,3911698,3913186,3914081,3915173,3916338,3917335,3917771,3918289,3919489,3920356,3921814,3923274,3924682,3926151,3927575,3928986,3930302,3931717,3933216,3934562,3936019,3937595,3939073,3940396,3941799,3943218,3944719,3946144,3947546,3948901,3950496,3951979,3953239,3954740,3956375,3957841,3959296,3960591,3961928,3963254,3964800,3966198,3967760,3969098,3970529,3971840,3972898,3974319,3975827,3977119,3978573,3980066,3981544,3983030,3984200,3985134,3986382,3987595,3988737,3989494,3989792,3990871,3992070,3992944,3994085,3995006,3995835,3996820,3997483,3998679,3999510,4000740,4001960,4002746,4003731,4004486,4005701,4006742,4007859,4008724,4009787,4010703,4011778,4012903,4013995,4015130,4016022,4016955,4017914,4019059,4020007,4021212,4022244,4023342,4024505,4025540,4026644,4027764,4028724,4029868,4031055,4032048,4033079,4034226,4035607,4036919,4038113,4039294,4040594,4041788,4042936,4044146,4045514,4046587,4047846,4049185,4050332,4051608,4053018,4054335,4055456,4056544,4057804,4059130,4060227,4061382,4062770,
4064010,4065086,4066260,4067431,4068273,4068835,4070013,4071016,4072203,4073303,4074397,4075378,4076458,4077115,4077776,4078425,4079047,4079794,4080358,4080936,4081496,4082147,4082809,4083411,4084025,4084658,4085338,4085967,4086589,4087221,4087868,4088424,4089002,4089593,4090243,4090870,4091491,4092163,4092955,4093830,4094747,4095532,4096296,4097225,4098040,4098994,4099913,4100768,4101541,4102454,4103356,4104279,4105161,4106066,4106871,4107640,4108586,4109393,4110359,4111259,4112119,4112862,4113782,4114686,4115624,4116514,4117434,4118231,4119039,4119995,4120807,4121757,4122646,4123479,4124200,4125108,4125995,4126934,4127838,4128754,4129552,4130373,4131333,4132128,4133071,4133961,4134763,4135498,4136405,4137303,4138236,4139158,4140077,4140875,4141683,4142648,4143442,4144384,4145257,4146076,4146798,4147716,4148600,4149545,4150449,4151377,4152252,4153184,4154104,4155018,4155923,4156809,4157643,4158557,4159456,4160323,4161327,4162374,4163430,4164479,4165533,4166584,4167650,4168686,4169756,4170831,4171900,4172984,4174076,4175179,4176272,4177400,4178541,4179671,4180782,4181932,4183058,4184215,4185343,4186454,4187534,4188682,4189771,4190884,4191982,4193097,4194200,4195304,4196423,4197532,4198632,4199709,4200817,4201925,4203050,4204135,4205239,4206323,4207462,4208587,4209695,4210833,4211927,4213065,4214171,4215312,4216431,4217567,4218678,4219799,4220908,4222030,4223136,4224220,4225335,4226431,4227539,4228644,4229750,4230861,4231984,4233075,4234187,4235306,4236392,4237524,4238650,4239772,4240887,4241992,4243110,4244222,4245321,4246454,4247724,4249082,4250470,4251813,4253206,4254536,4255881,4257230,4258654,4260127,4261571,4262925,4264279,4265628,4266981,4268334,4269686,4271066,4272088,4273071,4274037,4274954,4275849,4276792,4277762,4278632,4279489,4280315,4281166,4282055,4282935,4283806,4284696,4285559,4286434,4287258,4288143,4288999,4289833,4290638,4291495,4292357,4293242,4294091,4294942,4295825,4296670,4297507,4298369,4299184,4300020,4300832,4301973,4302998,4304208,4305238,4306362,4307226,4307849,4308677,4309389,4310135,4311062,4312103,4313365,4314610,4315763,4316847,4317882,4319236,4320226,4321383,4322403,4323356,4324305,4325423,4326370,4327366,4328477,4329338,4330553,4331535,4332476,4333582,4334637,4335698,4336876,4338103,4339248,4340245,4341379,4342478,4343494,4344716,4345794,4346983,4348223,4349117,4350174,4351220,4352448,4353650,4354815,4355864,4356909,4357924,4359075,4360090,4361121,4362376,4363575,4364694,4366003,4367016,4367954,4368852,4370034,4371270,4372199,4373124,4374118,4375258,4376348,4377581,4378716,4379509,4380633,4381235,4382423,4383751,4384850,4385877,4387031,4388254,4389355,4390401,4391651,4392908,4394039,4395187,4396416,4397580,4398310,4399433,4400621,4401795,4402960,4404206,4405422,4406743,4408022,4409282,4409995,4410689,4411678,4412705,4413553,4414449,4415409,4416357,4417213,4418091,4418938,4420047,4421099,4421780,4422712,4423791,4424864,4426113,4427019,4427631,4428329,4428979,4429633,4430430,4431072,4431940,4432843,4433658,4434529,4435041,4435597,4436265,4437016,4437925,4439005,4439536,4439996,4440488,4440998,4441622,4442250,4442888,4443639,4444363,4445130,4445872,4446893,4447962,4449012,4449983,4450749,4451487,4452396,4453207,4454258,4455279,4456447,4457319,4457901,4458720,4459577,4460143,4460586,4461255,4462135,4462797,4463609,4464294,4465033,4466114,4467083,4467750,4468505,4469081,4469472,4469896,4470567,4471501,4472034,4472940,4473889,4474510,4475474,4476459,4477319,4478353,4479173,4480273,4481204,4482129,4482872,4483728,4484586,4485327,4486462,4487274,4488152,4488989,448
9858,4490726,4491634,4492469,4493481,4494351,4495329,4496207,4497122,4498016,4498982,4499786,4500444,4501239,4502084,4502771,4503477,4504166,4504853,4505547,4506294,4507127,4507969,4509001,4509822,4510714,4511833,4512576,4513716,4514839,4515520,4516423,4517445,4518484,4519259,4520038,4520900,4521786,4522721,4523711,4524756,4525442,4526199,4526919,4527879,4528642,4529246,4530186,4531127,4532095,4533036,4533973,4534909,4535932,4536792,4537782,4538667,4539463,4540521,4541632,4542467,4543483,4544306,4545114,4545944,4546905,4548017,4549133,4549937,4550929,4551720,4552406,4553021,4553805,4554780,4555459,4556124,4556595,4557228,4558247,4559165,4560141,4561142,4561984,4563015,4563946,4564984,4565705,4566709,4567629,4568423,4569227,4570171,4570941,4571850,4572807,4573511,4574349,4575257,4576278,4577233,4578159,4579128,4580041,4580968,4581925,4582961,4583823,4584667,4585627,4586729,4587766,4588430,4589113,4589801,4590819,4591755,4592495,4593488,4594430,4595391,4596302,4597226,4598091,4598983,4599892,4600806,4601760,4602603,4603493,4604384,4605249,4606244,4607171,4608118,4609064,4610095,4611053,4612089,4612980,4613983,4614854,4615715,4616673,4617661,4618616,4619480,4620485,4621408,4622340,4623422,4624260,4625207,4626087,4627076,4628016,4628859,4629688,4630342,4631008,4631890,4632634,4633634,4634417,4635276,4635912,4636543,4637270,4637894,4638630,4639366,4640146,4640984,4641933,4642815,4643500,4644267,4645112,4646021,4646846,4647934,4648794,4649635,4650430,4651379,4652273,4653103,4654195,4655115,4655917,4656683,4657363,4658250,4659180,4659982,4660975,4661773,4662714,4663744,4664474,4665486,4666424,4667145,4668065,4668763,4669408,4670209,4670995,4671796,4672815,4673613,4674581,4675488,4676336,4677419,4678331,4679345,4680399,4681482,4682443,4683234,4683908,4684624,4685506,4686399,4686957,4687669,4688484,4689442,4690166,4691005,4691833,4692778,4693522,4694267,4695128,4695917,4696993,4697910,4698850,4699794,4700745,4701744,4702702,4703725,4704811,4706118,4707173,4708296,4709404,4710567,4711404,4712027,4713033,4714151,4715236,4716342,4717227,4717882,4718519,4719172,4719798,4720499,4721344,4722225,4723094,4724026,4724838,4725869,4726583,4727402,4728301,4729023,4730096,4730623,4731232,4731681,4732146,4732649,4733138,4733605,4734165,4734645,4735262,4735711,4736483,4737585,4738732,4739665,4740572,4741548,4742249,4743048,4743989,4745215,4746316,4747212,4748437,4749435,4750274,4751189,4752162,4753164,4754283,4755275,4756128,4757330,4758590,4759774,4760834,4761845,4762926,4764059,4764866,4765928,4766899,4767858,4768952,4769941,4770883,4772018,4773001,4773967,4775182,4776229,4777229,4778265,4779214,4779827,4780223,4781379,4782438,4783553,4784237,4785347,4786588,4787615,4788558,4789464,4790564,4791679,4792504,4793125,4793934,4794718,4795376,4796184,4797209,4798391,4798974,4799641,4800060,4800961,4801856,4802564,4803686,4804706,4805760,4806940,4807490,4808223,4808619,4809518,4810591,4811212,4811857,4812296,4813214,4814417,4815402,4816409,4817520,4818669,4819281,4820559,4821699,4822672,4823328,4824097,4825120,4826138,4826873,4827844,4828794,4829722,4830624,4831457,4832392,4833558,4834465,4835137,4836162,4837123,4838292,4839319,4840086,4840730,4841329,4842069,4842773,4843571,4844366,4845152,4845950,4846453,4847012,4847778,4848494,4849431,4850386,4850852,4851334,4851857,4852374,4852901,4853559,4854271,4855011,4855938,4856809,4857774,4858740,4859569,4860287,4861434,4862462,4863279,4864315,4865184,4866398,4867289,4868426,4869369,4869973,4870662,4871431,4872290,4873239,4873872,4874413,4874933,4875490,4876348,4877041,487775
6430,11997031,11997678,11998129,11999021,11999967,12000820,12001643,12002075,12002566,12003633,12004359,12005333,12006112,12006611,12007002,12007975,12008979,12009890,12010821,12012e3,12013119,12013741,12015127,12016284,12017185,12018418,12019534,12020292,12021465,12022705,12024013,12025306,12026596,12027819,12028970,12030300,12031732,12033230,12034766,12036121,12037702,12039281,12040776,12042330,12043952,12045558,12046972,12048372,12049855,12050949,12052349,12053598,12054689,12055787,12056569,12057826,12058948,12059913,12061066,12062172,12063231,12064353,12065525,12066638,12067529,12068328,12069415,12070737,12072015,12072916,12074075,12075407,12076725,12077871,12078839,12079925,12081165,12082017,12083184,12084305,12085487,12086385,12087486,12088370,12089426,12090613,12091699,12092762,12093640,12094850,12095668,12096849,12098053,12099323,12100496,12101718,12102658,12103645,12104975,12105935,12107236,12108446,12109591,12110754,12111840,12112863,12113976,12114916,12115957,12117018,12118096,12118976,12119955,12120813,12121921,12123011,12124405,12125990,12127486,12128976,12130378,12131843,12133167,12134132,12135337,12136082,12136802,12137783,12138759,12139694,12140553,12141493,12142405,12143272,12144152,12144973,12146087,12147174,12147889,12148765,12149808,12150860,12151929,12152940,12153559,12154258,12154896,12155701,12156601,12157672,12158459,12159407,12160136,12161193,12162021,12162943,12163850,12164826,12165633,12166649,12167582,12168771,12169646,12170788,12171794,12172412,12173066,12173796,12174357,12174953,12175744,12176599,12177267,12177959,12178980,12179975,12180633,12181202,12181628,12182236,12182690,12183094,12183646,12184608,12185500,12186273,12187236,12188302,12189395,12190298,12191241,12192275,12193343,12194202,12194964,12195640,12196644,12197529,12198456,12199442,12200501,12201503,12202292,12203249,12204128,12205089,12206095,12207050,12208093,12209110,12210134,12211201,12212089,12212937,12213878,12214693,12215550,12216428,12217371,12218320,12219203,12220122,12221023,12222014,12222897,12223850,12224744,12225745,12226463,12227175,12228327,12229070,12230234,12231351,12232054,12233008,12234007,12235047,12235839,12236604,12237484,12238342,12239280,12240268,12241352,12242049,12242842,12243550,12244506,12245267,12245898,12246835,12247741,12248730,12249660,12250617,12251551,12252566,12253445,12254429,12255277,12256073,12257152,12258259,12259118,12260104,12260917,12261717,12262564,12263550,12264674,12265785,12266617,12267578,12268399,12269075,12269660,12270434,12271449,12272087,12272746,12273211,12273903,12274900,12275829,12276804,12277844,12278687,12279715,12280640,12281691,12282418,12283434,12284338,12285128,12285957,12286954,12287750,12288692,12289637,12290346,12291183,12292114,12293123,12294062,12295015,12295947,12296874,12297831,12298783,12299812,12300691,12301511,12302480,12303550,12304552,12305224,12305870,12306591,12307587,12308506,12309200,12310182,12311141,12312106,12313037,12313997,12314836,12315719,12316641,12317540,12318518,12319323,12320212,12321082,12321970,12322930,12323896,12324853,12325800,12326806,12327786,12328823,12329703,12330729,12331590,12332453,12333387,12334355,12335356,12336198,12337197,12338155,12339068,12340161,12341009,12341937,12342819,12343832,12344755,12345625,12346453,12347121,12347775,12348667,12349431,12350447,12351237,12352069,12352703,12353328,12354046,12354687,12355420,12356168,12356952,12357813,12358757,12359642,12360365,12361134,12361957,12362873,12363740,12364845,12365683,12366560,12367370,12368282,12369141,12370004,12371073,12371982,12372772,12373
518,12374228,12375117,12376031,12376830,12377817,12378609,12379557,12380614,12381348,12382349,12383283,12384011,12384939,12385678,12386338,12387148,12387920,12388756,12389779,12390560,12391535,12392431,12393302,12394379,12395306,12396309,12397345,12398439,12399432,12400245,12400906,12401582,12402431,12403352,12403929,12404622,12405451,12406436,12407144,12407972,12408775,12409748,12410506,12411235,12412118,12412940,12413998,12414917,12415815,12416753,12417713,12418738,12419678,12420728,12421744,12422620,12423556,12424349,12425327,12426175,12427240,12428298,12429044,12429993,12430858,12431826,12433098,12434187,12435288,12436453,12437630,12438594,12439159,12440089,12441144,12442279,12443268,12444252,12444953,12445597,12446262,12446902,12447584,12448305,12449168,12450064,12451022,12451823,12452681,12453334,12454139,12454816,12455592,12456263,12457079,12457811,12458789,12459247,12460097,12461221,12462369,12463344,12464291,12465122,12465811,12466622,12467919,12469043,12470285,12471461,12472521,12473542,12474653,12475539,12476281,12477330,12478364,12479304,12480393,12481457,12482500,12483385,12484333,12485295,12486507,12487472,12488412,12489596,12490725,12491800,12492893,12493716,12494762,12495149,12495892,12497104,12498236,12499106,12499914,12500785,12501931,12502759,12503658,12504792,12505950,12507037,12508016,12508795,12509739,12510714,12511733,12512808,12513757,12514642,12515782,12516525,12517527,12518547,12519570,12520286,12521144,12521907,12522516,12523333,12524374,12525400,12526477,12527062,12527716,12528135,12529058,12530099,12530700,12531345,12531757,12532653,12533566,12534454,12535031,12535846,12536284,12536767,12537791,12538773,12539205,12540018,12540392,12541100,12542150,12542650,12543477,12543856,12544643,12545586,12546582,12547279,12547815,12548223,12549211,12550250,12551261,12552394,12553197,12554386,12555629,12556821,12558175,12558969,12559696,12560650,12561633,12562618,12563417,12564367,12565332,12566191,12567100,12567923,12569046,12570110,12570844,12571693,12572752,12573780,12574918,12575513,12576171,12576892,12577808,12578476,12579613,12580446,12581338,12582123,12583141,12584113,12584925,12586096,12586833,12587428,12588207,12588900,12589532,12590232,12590846,12591349,12592302,12593278,12594152,12595063,12595809,12596717,12597785,12598599,12599503,12600192,12600778,12601633,12602572,12603578,12604561,12605603,12606560,12607575,12608438,12609210,12610139,12611077,12611764,12612714,12613518,12614417,12614961,12615375,12616029,12616491,12616965,12617555,12618528,12619451,12620540,12621538,12622549,12623527,12624517,12625481,12626457,12627361,12627943,12628852,12629715,12630695,12631678,12632617,12633502,12634459,12635396,12636356,12637332,12637932,12638886,12639829,12640664,12641581,12642448,12643317,12644105,12645027,12645759,12646651,12647465,12648246,12649178,12650123,12651039,12651886,12652925,12653893,12654847,12655875,12656877,12657907,12658914,12659904,12660947,12661699,12662594,12663596,12664635,12665469,12666241,12666963,12667799,12668560,12669414,12670401,12671426,12672248,12673055,12673821,12674708,12675265,12675789,12676348,12677087,12677964,12678930,12679540,12680128,12680655,12681347,12682161,12683114,12683730,12684856,12685668,12686428,12687321,12688123,12689175,12689939,12690848,12691836,12692731,12693759,12694781,12695656,12696667,12697597,12698579,12699493,12700514,12701503,12702299,12703366,12704400,12705390,12706406,12707425,12708397,12709076,12709873,12710744,12711596,12712429,12713406,12714135,12714540,12714803,12715512,12716394,12717223,12718041,12718880,12719
690,12720499,12721316,12722298,12723217,12724187,12725023,12725963,12726847,12727738,12728569,12729346,12730281,12730879,12731495,12732120,12733079,12734090,12734929,12735712,12736515,12736971,12737758,12738897,12740036,12740984,12741939,12742692,12743989,12745234,12746269,12747104,12747986,12748987,12749968,12751119,12752082,12752997,12754196,12755311,12756507,12757577,12758383,12759456,12760276,12761484,12762593,12763737,12764635,12765760,12766916,12767885,12768852,12769840,12770811,12771659,12772812,12773857,12774935,12775901,12776876,12777857,12778870,12779529,12780402,12781169,12781784,12782577,12783469,12784259,12784959,12785785,12786227,12786834,12787839,12788915,12789511,12790168,12790551,12791433,12792347,12793263,12794062,12794483,12794908,12795918,12796905,12797848,12798789,12799996,12801127,12801726,12802895,12804022,12805064,12805822,12807119,12807875,12808603,12809573,12810557,12811502,12812372,12813327,12814244,12815115,12815994,12816812,12817932,12819012,12819727,12820603,12821647,12822700,12823765,12824767,12825376,12826065,12826712,12827504,12828390,12828985,12829979,12830579,12831502,12832164,12833266,12834177,12835151,12836179,12836892,12837633,12838746,12839591,12840446,12841390,12842276,12843397,12844378,12845262,12845907,12846575,12847554,12848489,12848948,12849496,12850084,12850779,12851545,12852354,12852989,12853689,12854387,12855349,12856448,12857146,12857784,12858400,12858827,12859248,12859553,12859890,12860473,12860957,12861345,12861821,12862416,12863322,12863910,12864789,12865624,12866392,12867261,12868042,12868758,12869531,12870470,12871434,12872303,12873394,12874431,12875455,12876501,12877505,12878395,12879310,12880269,12881233,12882189,12883110,12884113,12885169,12886068,12887163,12888197,12889082,12890207,12890950,12891886,12892747,12893644,12894614,12895604,12896735,12897388,12898337,12899260,12900094,12901110,12902052,12902992,12903868,12904834,12905741,12906629,12907351,12908391,12909302,12910090,12910963,12911772,12912416,12913294,12914181,12914938,12915853,12916718,12917585,12918427,12919269,12920222,12920940,12921849,12922654,12923420,12924235,12925152,12925912,12926831,12927747,12928345,12929259,12930280,12931134,12931976,12932786,12933721,12934627,12935555,12936509,12937419,12938236,12939119,12940001,12940725,12941802,12942896,12943759,12944631,12945483,12946332,12947228,12948083,12948944,12949832,12950654,12951710,12952626,12953569,12954717,12955627,12956568,12957332,12958206,12959022,12959858,12960648,12961460,12962369,12963197,12964059,12964870,12965713,12966558,12967528,12968303,12969239,12970092,12970936,12971658,12972404,12973073,12973954,12974646,12975657,12976485,12977534,12978466,12979332,12980139,12980963,12981699,12982513,12983480,12984321,12985196,12985998,12986876,12987756,12988653,12989586,12990622,12991465,12992512,12993429,12994464,12995379,12996459,12997192,12998240,12999086,12999882,13000677,13001832,13002904,13003858,13004891,13005725,13006539,13007583,13008470,13009316,13010106,13011243,13012093,13013126,13014001,13014754,13015642,13016768,13017611,13018598,13019521,13020361,13021199,13022261,13023165,13024190,13024972,13026108,13027165,13027894,13028702,13029431,13030261,13030981,13031805,13032663,13033562,13034338,13035258,13036136,13037006,13037923,13038569,13039354,13040216,13041247,13041903,13042808,13043611,13044592,13045506,13046306,13047019,13047764,13048474,13049283,13050266,13051182,13052079,13052858,13053672,13054532,13055291,13056059,13056957,13057700,13058558,13059277,13060276,13061218,13062082,13062938,13063827,13064
512,13065247,13065950,13066691,13067535,13068647,13069706,13070565,13071331,13072199,13073312,13074178,13074984,13075779,13076622,13077362,13078132,13078934,13079871,13080830,13081751,13082640,13083583,13084434,13085285,13086093,13086978,13087893,13088884,13090021,13090916,13091898,13092933,13093894,13094794,13095624,13096421,13097323,13098094,13099010,13100116,13100843,13101578,13102279,13103119,13104079,13104696,13105568,13106491,13107387,13108142,13109136,13110137,13111152,13112049,13112891,13113865,13114690,13115567,13116693,13117699,13118523,13119405,13120297,13121077,13122065,13123173,13124068,13125033,13126089,13126959,13127642,13128220,13129022,13129932,13130719,13131460,13131975,13132480,13133453,13134316,13135433,13136329,13137340,13138214,13139105,13140057,13140943,13141803,13142867,13143634,13144510,13145349,13146327,13147136,13148119,13148854,13149669,13150377,13151268,13152260,13153276,13154231,13154964,13155828,13156724,13157662,13158640,13159688,13160657,13161686,13162735,13163546,13164167,13164901,13165813,13166691,13167599,13168536,13169501,13170380,13171316,13172200,13173133,13174053,13174965,13175913,13176885,13177806,13178677,13179452,13180372,13181297,13182294,13183242,13184247,13185208,13186125,13187118,13188007,13188855,13189833,13190755,13191712,13192612,13193421,13194345,13195183,13196038,13197016,13198057,13198970,13199994,13201063,13202037,13202911,13203910,13204653,13205344,13206067,13207046,13207691,13208450,13209443,13210294,13211161,13211755,13212427,13212993,13213685,13214379,13215136,13215861,13216910,13217898,13218612,13219380,13220181,13221175,13221990,13222779,13223692,13224472,13225427,13226402,13227288,13228156,13229120,13230104,13230934,13231648,13232457,13233164,13234104,13235079,13235952,13236764,13237617,13238497,13239264,13240135,13240974,13241699,13242426,13243297,13243983,13244728,13245562,13246329,13247203,13248007,13248977,13249813,13250532,13251380,13252306,13253245,13254321,13255290,13256273,13257226,13257852,13258679,13259523,13260391,13261176,13261891,13262760,13263632,13264529,13265353,13266024,13266844,13267504,13268382,13269194,13269980,13270981,13271870,13272941,13273792,13274720,13275576,13276506,13277474,13278461,13279481,13280316,13281176,13282179,13283018,13283888,13284894,13285823,13286285,13286714,13287377,13287990,13288789,13289631,13290313,13291199,13291835,13292729,13293528,13294222,13295105,13295744,13296639,13297402,13298166,13299256,13300558,13301611,13302739,13303844,13305002,13305841,13306487,13307511,13308635,13309724,13310834,13311728,13312400,13313055,13313675,13314320,13314959,13315578,13316267,13317228,13318218,13319100,13319944,13320879,13321686,13322591,13323197,13324091,13324756,13325146,13325829,13326324,13326862,13327626,13328338,13329270,13329723,13330720,13331774,13332881,13333948,13334878,13335846,13336432,13337297,13338524,13339680,13340691,13341704,13342875,13343801,13344880,13345976,13347228,13348384,13349412,13350562,13351651,13352287,13353336,13354458,13355489,13356013,13356993,13357357,13358168,13359160,13360022,13361252,13362232,13362994,13363930,13364897,13365900,13366966,13367913,13368786,13370027,13371012,13372003,13373044,13373998,13375039,13376251,13377339,13378403,13379328,13380393,13381431,13382538,13383240,13384352,13385503,13386469,13387442,13388423,13389615,13390777,13391522,13392458,13393476,13394551,13395237,13396334,13397156,13397895,13398589,13399383,13400194,13400863,13401663,13402449,13403481,13404458,13405208,13405680,13406092,13407128,13407949,13408760,13409202,13409611,13410560,13411
278,13412354,13412945,13413591,13414041,13414930,13415971,13417015,13417922,13418473,13419312,13419681,13420431,13421470,13422559,13423683,13424666,13425485,13426630,13427798,13428704,13429672,13430639,13431689,13432698,13433772,13434825,13435646,13436651,13437627,13438718,13439675,13440695,13441697,13442810,13443677,13444776,13445271,13446199,13447173,13448126,13448955,13449957,13450940,13451904,13452750,13453659,13454290,13455270,13456001,13457059,13458273,13459029,13459756,13460726,13461713,13462658,13463528,13464490,13465406,13466270,13467152,13467970,13469095,13470123,13470790,13471708,13472771,13473917,13474631,13475513,13476243,13477284,13478265,13478808,13479485,13480080,13480749,13481364,13481928,13482568,13483205,13483802,13484444,13485050,13485706,13486315,13486905,13487629,13488290,13488860,13489460,13490146,13490767,13491429,13492031,13492536,13493159,13493797,13494257,13494872,13495479,13496013,13496648,13497209,13498035,13498815,13499549,13500285,13501008,13501858,13502514,13503120,13503752,13504431,13505185,13505823,13506722,13507382,13508178,13508778,13509477,13510157,13510680,13511421,13511904,13512489,13513067,13513561,13514580,13515410,13515978,13516519,13517617,13518772,13519761,13520683,13522049,13523092,13524207,13525030,13525939,13527087,13528060,13528742,13529568,13529998,13530560,13531601,13532208,13533056,13533459,13534209,13535347,13536230,13537414,13538365,13539561,13540687,13541292,13541743,13542443,13543086,13543740,13544405,13545082,13545716,13546441,13547069,13547727,13548372,13549491,13550204,13550884,13551864,13552900,13553756,13554649,13555604,13556553,13557426,13558307,13559145,13560265,13561319,13562006,13562941,13564021,13565092,13566202,13566789,13567452,13568216,13568952,13569750,13570254,13570962,13571983,13572936,13573937,13574824,13575846,13576677,13577706,13578835,13579447,13580042,13580827,13581231,13581765,13582321,13583122,13583830,13584723,13585414,13585968,13586813,13587690,13588642,13589362,13590224,13591082,13591726,13592435,13593208,13594169,13594927,13595983,13597105,13598137,13598900,13599579,13600505,13601398,13602133,13602964,13603782,13604894,13605852,13606509,13607187,13608240,13608926,13610110,13610984,13611589,13612426,13613300,13613901,13614665,13615394,13616478,13617446,13618105,13619029,13619854,13620423,13621356,13622228,13623052,13623833,13624764,13625628,13626609,13627470,13628368,13629378,13630026,13630840,13631728,13632627,13633455,13634404,13635248,13635980,13636723,13637546,13638441,13639080,13639978,13640699,13641446,13642269,13643322,13644065,13644839,13645771,13646480,13647478,13648557,13649447,13650304,13651007,13651724,13652438,13653142,13654249,13655180,13655928,13656737,13657563,13658298,13659150,13660018,13661073,13662143,13662979,13664067,13664939,13666113,13666974,13667745,13668687,13669610,13670455,13671315,13672041,13672914,13673667,13674472,13675274,13676250,13677017,13677920,13678808,13679644,13680587,13681407,13682261,13682974,13683963,13684687,13685546,13686427,13687306,13688143,13688961,13689840,13690713,13691672,13692577,13693468,13694435,13695343,13696302,13697201,13698134,13698976,13700013,13700785,13701434,13702284,13703012,13703693,13704440,13705101,13705785,13706478,13707470,13708407,13709555,13710237,13710880,13711735,13712572,13713292,13713661,13714197,13715160,13715620,13716405,13717572,13718692,13719616,13720612,13721705,13722869,13723885,13724846,13726095,13726989,13727886,13728779,13729754,13730758,13731828,13732715,13733618,13734823,13735903,13736876,13738112,13739404,13740493,13741634,13742
714,13743715,13744712,13745689,13746694,13747747,13748905,13749654,13750721,13751660,13752728,13753770,13754764,13755713,13756811,13757944,13758776,13759412,13760256,13761034,13761677,13762453,13763517,13764092,13764792,13765183,13766083,13766983,13768032,13768914,13769544,13770384,13770781,13771468,13772554,13773461,13774480,13775461,13776637,13777637,13778555,13779725,13780753,13781815,13782716,13783679,13784860,13786004,13787052,13788134,13789365,13790238,13791289,13792232,13792901,13793683,13794671,13795710,13796456,13797424,13798359,13799292,13800210,13801049,13801999,13803159,13804023,13804717,13805765,13806760,13807938,13808594,13809290,13809872,13810684,13811675,13812390,13813326,13814192,13815031,13816087,13817159,13817853,13818586,13819744,13820560,13821146,13821868,13822494,13823330,13824178,13825228,13825917,13826406,13827491,13828414,13829426,13830262,13831145,13831958,13832949,13833867,13834885,13835906,13837036,13837966,13839038,13840051,13840975,13842050,13842979,13843785,13844817,13845575,13846752,13847651,13848730,13849787,13850685,13851664,13852502,13853109,13854016,13854902,13855775,13856556,13857523,13858589,13859558,13860589,13861423,13862361,13863398,13864375,13865337,13865977,13866908,13867835,13868649,13869479,13870365,13871240,13872192,13873102,13873981,13874939,13875830,13876780,13877679,13878609,13879439,13880495,13881320,13881995,13882918,13883726,13884610,13885440,13886460,13887368,13887996,13888756,13889744,13890371,13891167,13892088,13892532,13893499,13894580,13895698,13896752,13897664,13898827,13899991,13900730,13901849,13902486,13903544,13904511,13905363,13906418,13907202,13907846,13908241,13909269,13909626,13910051,13911156,13912303,13913383,13914582,13915796,13916686,13917604,13918580,13919575,13920559,13921626,13922448,13923424,13924617,13925630,13926813,13927858,13928903,13929867,13930895,13931819,13932856,13933990,13934716,13935782,13936781,13937819,13938800,13939755,13940895,13941602,13942446,13943164,13943870,13944713,13945569,13946619,13947161,13947880,13948307,13949112,13950190,13950619,13951451,13951809,13952458,13953491,13954049,13954895,13955272,13956019,13956950,13957702,13958411,13959238,13959652,13960226,13961301,13962232,13963251,13964205,13965387,13966461,13967278,13968352,13969468,13970449,13971511,13972182,13972874,13973941,13974950,13975609,13976629,13977580,13978545,13979393,13980279,13981158,13982336,13983319,13983916,13984918,13985940,13987057,13988075,13989162,13989735,13990414,13991147,13991809,13992516,13993175,13993962,13994814,13995619,13996089,13996613,13997494,13998039,13999017,13999979,14000369,14000787,14001232,14001910,14002463,14003059,14003913,14004636,14005615,14006591,14007389,14008383,14009213,14010090,14010922,14011760,14012742,14013809,14014780,14015868,14016793,14017425,14018068,14018604,14019196,14019810,14020298,14020900,14021554,14022287,14023106,14023789,14024506,14025588,14026638,14027340,14028017,14028612,14029193,14029555,14030108,14030756,14031695,14032649,14033389,14034247,14035123,14035976,14036597,14037377,14038208,14038822,14039528,14040246,14040964,14041796,14042538,14043210,14043869,14044813,14045509,14046213,14046927,14047759,14048505,14049211,14049893,14050752,14051580,14052304,14052917,14053596,14054474,14055073,14055930,14056673,14057530,14058302,14058999,14059613,14060478,14061223,14062084,14062957,14063737,14064465,14065259,14066002,14066794,14067579,14068294,14068893,14069685,14070641,14071520,14072452,14073541,14074360,14075070,14075795,14076760,14077563,14078444,14079334,14080196,14081233,14082
306,14083298,14084153,14085241,14085811,14086669,14087371,14087933,14088716,14089385,14090247,14090885,14091614,14092468,14093601,14094479,14095311,14096318,14097208,14098021,14098830,14099675,14100427,14101197,14102017,14102967,14103914,14104890,14105756,14106660,14107528,14108406,14109240,14110049,14110624,14111244,14112024,14112760,14113417,14113997,14114598,14115369,14116405,14117279,14118272,14119408,14120279,14121273,14122311,14123293,14124157,14124974,14125782,14126710,14127465,14128434,14129491,14130162,14130864,14131578,14132439,14133379,14133988,14134859,14135821,14136741,14137496,14138491,14139469,14140548,14141436,14142333,14143269,14144115,14145029,14146173,14147161,14148030,14148913,14149776,14150548,14151535,14152674,14153597,14154561,14155602,14156500,14157185,14157784,14158568,14159450,14160208,14160954,14161473,14161975,14162972,14163864,14164943,14165807,14166823,14167740,14168594,14169573,14170419,14171281,14172339,14173062,14173946,14174814,14175763,14176573,14177513,14178236,14179057,14179785,14180675,14181700,14182723,14183683,14184428,14185288,14186215,14187171,14188076,14189156,14190184,14191247,14192319,14193132,14193758,14194490,14195457,14196375,14197240,14198194,14199165,14200044,14201e3,14201863,14202810,14203715,14204618,14205570,14206530,14207454,14208337,14209119,14209990,14210907,14211874,14212797,14213823,14214769,14215692,14216737,14217643,14218464,14219441,14220396,14221381,14222265,14223092,14223982,14224866,14225738,14226687,14227720,14228682,14229707,14230785,14231709,14232562,14233514,14234190,14234838,14235545,14236565,14237209,14238047,14239008,14239890,14240767,14241358,14242053,14242647,14243347,14244019,14244788,14245509,14246589,14247561,14248309,14249064,14249888,14250865,14251649,14252504,14253407,14254166,14255116,14256098,14257002,14257893,14258864,14259825,14260643,14261377,14262133,14262945,14263904,14264867,14265755,14266582,14267489,14268382,14269116,14270065,14270876,14271609,14272391,14273203,14273888,14274621,14275476,14276225,14277101,14277916,14278891,14279736,14280491,14281399,14282356,14283333,14284401,14285368,14286353,14287323,14287968,14288784,14289625,14290486,14291229,14291973,14292836,14293732,14294600,14295420,14296162,14297018,14297679,14298551,14299399,14300161,14301157,14302024,14303058,14303965,14304906,14305731,14306659,14307670,14308673,14309729,14310680,14312052,14312982,14313732,14314690,14315983,14317051,14318177,14319315,14320503,14321396,14321986,14322946,14324018,14325137,14326177,14327153,14327853,14328486,14329148,14329759,14330466,14331240,14331936,14332803,14333663,14334629,14335430,14336447,14337147,14338013,14338622,14339011,14339688,14340428,14341218,14341925,14342726,14343554,14344014,14344513,14345006,14345468,14345913,14346465,14347037,14347685,14348354,14349077,14350241,14351411,14352325,14353299,14353964,14354542,14355487,14356370,14357074,14357853,14359161,14360241,14361093,14362229,14363451,14364458,14365311,14366375,14367396,14368292,14369435,14370321,14371594,14372612,14373458,14374363,14375350,14376351,14377478,14378452,14379321,14380525,14381487,14382484,14383527,14384512,14385478,14386480,14387437,14388640,14389711,14390756,14391810,14392885,14393779,14394900,14395280,14395857,14397064,14398119,14399268,14400020,14401100,14402128,14403241,14404121,14405096,14406149,14407348,14408518,14409250,14410247,14411278,14411922,14412728,14413400,14414191,14414905,14415609,14416391,14417504,14418644,14419212,14419929,14420325,14421232,14422177,14423024,14423824,14424286,14424683,14425626,14426606,144273
85,14427880,14428265,14429260,14429966,14431049,14431687,14432316,14432709,14433651,14434869,14435990,14437046,14438170,14438858,14440045,14440864,14442163,14443220,14444016,14444758,14445706,14446695,14447706,14448853,14449948,14451079,14451781,14452475,14453488,14454566,14455364,14456294,14457222,14458200,14459023,14459858,14460744,14461885,14462902,14463552,14464474,14465496,14466566,14467812,14468742,14469307,14470017,14470681,14471370,14472086,14472740,14473611,14474507,14475350,14476291,14476932,14477665,14478163,14478708,14479350,14480094,14480995,14481920,14482723,14483711,14484226,14484693,14485192,14485683,14486299,14486991,14487703,14488439,14489125,14489723,14490446,14491340,14492018,14493135,14493899,14494881,14495971,14496954,14497642,14498477,14499348,14500391,14501343,14502344,14503217,14504256,14505217,14506259,14507398,14508312,14508944,14509562,14510402,14510997,14511593,14512175,14512794,14513333,14514009,14514764,14515546,14516171,14516835,14517527,14518246,14519126,14520245,14521273,14521956,14522532,14523138,14523720,14524155,14524719,14525230,14525852,14526322,14526718,14527250,14527878,14528631,14529528,14530252,14531067,14531856,14532507,14533118,14533757,14534641,14535415,14536187,14536882,14537768,14538651,14539464,14540458,14541480,14542539,14543563,14544524,14545399,14546257,14546949,14547747,14548385,14549362,14550390,14551283,14552386,14553278,14554184,14555072,14555917,14556778,14557633,14558517,14559208,14560206,14561211,14562129,14562902,14563636,14564219,14565049,14565793,14566818,14567763,14568691,14569538,14570559,14571447,14572418,14573172,14574126,14575272,14576157,14576951,14578085,14579042,14579925,14580840,14581841,14582616,14583312,14584135,14584862,14585694,14586473,14587190,14587839,14588437,14588949,14589528,14590163,14590913,14591811,14592805,14593733,14594613,14595389,14596252,14597108,14598252,14599129,14600060,14601129,14602205,14603197,14604204,14605237,14606289,14607255,14608059,14608896,14610006,14611104,14612284,14613218,14614140,14615239,14616104,14616944,14617829,14618895,14619667,14620246,14620880,14621460,14621988,14622770,14623653,14624303,14624946,14625609,14626243,14626964,14627864,14628868,14629848,14630742,14631650,14632421,14633179,14633979,14634735,14635428,14636098,14637119,14638265,14639131,14640086,14640949,14641695,14642628,14643536,14644654,14645627,14646537,14647398,14648486,14649305,14650254,14651192,14652064,14652813,14653532,14654227,14655271,14656180,14656968,14657878,14658586,14659480,14660318,14661015,14661672,14662422,14663110,14664135,14665181,14666251,14667211,14668274,14669097,14669862,14670773,14671620,14672659,14673521,14674249,14675032,14676011,14676845,14677668,14678630,14679472,14680210,14680837,14681684,14682486,14683315,14684241,14685343,14686341,14687074,14687924,14688589,14689549,14690153,14691068,14691910,14692638,14693305,14694219,14695085,14696118,14697167,14698214,14699280,14700327,14701016,14701821,14702539,14703523,14704598,14705399,14706198,14707045,14707930,14708784,14709514,14710452,14711353,14712124,14712782,14713571,14714394,14715444,14716381,14717482,14718237,14719012,14719970,14720753,14721608,14722307,14723039,14723792,14724673,14725447,14726191,14726886,14727764,14728465,14729253,14729968,14730862,14731576,14732365,14733066,14734015,14734855,14735795,14736641,14737318,14737947,14738964,14739808,14740803,14741715,14742682,14743551,14744441,14745299,14746259,14747204,14748178,14749160,14750033,14750965,14751934,14752964,14753981,14754971,14755979,14756975,14757770,14758604,14759477,147604
02,14761356,14762178,14763106,14763965,14764963,14765848,14766754,14767655,14768576,14769192,14769846,14770736,14771587,14772330,14773260,14774152,14775108,14775981,14776814,14777555,14778243,14778936,14779593,14780331,14781062,14781901,14782784,14783635,14784615,14785452,14786558,14787562,14788416,14789426,14790319,14791274,14792132,14792839,14793810,14794627,14795527,14796569,14797454,14798197,14798982,14799727,14800681,14801343,14802176,14803169,14804045,14804871,14805853,14806808,14807666,14808491,14809420,14810474,14811319,14812141,14813280,14814384,14815187,14816142,14816990,14817781,14818705,14819707,14820563,14821632,14822667,14823564,14824235,14824770,14825461,14826297,14827233,14827873,14828459,14828929,14829846,14830829,14831917,14832815,14833849,14834672,14835569,14836506,14837405,14838223,14839307,14840144,14840928,14841778,14842821,14843624,14844648,14845526,14846404,14847191,14848123,14849214,14850263,14851270,14852006,14852868,14853760,14854673,14855666,14856654,14857468,14858479,14859491,14860360,14861063,14861811,14862690,14863617,14864550,14865407,14866412,14867310,14868318,14869225,14870233,14871111,14872006,14873004,14874046,14875050,14875923,14876749,14877665,14878553,14879508,14880442,14881455,14882422,14883438,14884375,14885298,14886139,14887129,14888014,14888902,14889807,14890663,14891632,14892451,14893299,14894272,14895259,14896240,14897274,14898336,14899226,14900209,14901209,14902044,14902828,14903611,14904495,14905191,14905883,14906894,14907681,14908454,14909034,14909681,14910294,14910969,14911637,14912345,14913033,14914160,14915117,14915979,14916783,14917540,14918466,14919357,14920132,14921203,14922040,14923043,14923950,14924832,14925720,14926638,14927584,14928388,14929082,14929827,14930512,14931429,14932408,14933259,14934130,14935003,14935842,14936730,14937575,14938436,14939206,14939950,14940867,14941609,14942297,14943140,14943881,14944726,14945660,14946574,14947485,14948246,14949026,14950020,14950839,14951881,14952901,14953869,14954942,14955623,14956430,14957240,14958113,14958981,14959644,14960478,14961258,14962189,14962857,14963528,14964242,14965041,14965950,14966687,14967432,14968423,14969345,14970381,14971188,14972139,14973128,14974026,14975045,14976057,14977118,14977870,14978747,14979686,14980595,14981442,14982617,14983392,14984371,14985440,14986509,14987486,14988283,14989330,14990634,14991667,14992811,14993933,14995116,14995942,14996551,14997569,14998670,14999776,15000853,15001783,15002429,15003120,15003733,15004352,15004950,15005575,15006153,15006760,15007477,15008309,15009332,15010089,15010967,15011818,15012746,15013432,15014302,15015264,15016032,15016948,15017597,15018204,15019004,15019615,15020293,15021111,15021831,15022811,15023274,15023881,15024338,15024798,15025306,15025809,15026295,15026851,15027453,15028005,15028574,15029447,15030535,15031601,15032725,15033780,15034671,15035443,15036055,15036885,15037674,15038348,15039200,15040459,15041558,15042806,15043714,15044599,15045494,15046464,15047471,15048534,15049398,15050278,15051474,15052557,15053649,15054628,15055528,15056732,15057849,15058884,15059993,15061117,15062006,15063082,15064093,15065019,15066110,15066851,15067687,15068524,15069393,15070257,15071149,15072200,15073166,15074170,15075081,15076158,15077238,15078319,15079395,15080521,15081736,15082683,15083813,15084911,15085952,15087039,15087457,15087832,15088955,15090081,15090949,15092053,15093005,15093912,15095062,15096167,15097041,15097912,15098281,15099243,15100298,15101328,15102045,15103238,15104324,15105172,15106230,15107189,15108213,151092
06,15110269,15111314,15112292,15113241,15114328,15115471,15116512,15117323,15118269,15119426,15120430,15121366,15122402,15123089,15123945,15124709,15125329,15126139,15127026,15128076,15128681,15129315,15129759,15130647,15131774,15132920,15133424,15134179,15134595,15135488,15136588,15137182,15137835,15138285,15139180,15140086,15140810,15141817,15143002,15143738,15144561,15145002,15145531,15146575,15147684,15148682,15149686,15150794,15151939,15152549,15153649,15154822,15155930,15157222,15158200,15159230,15160227,15161357,15162526,15163366,15164342,15165499,15166607,15167658,15168764,15169837,15170487,15171183,15172300,15173346,15173986,15174998,15175965,15176932,15177782,15178682,15179581,15180749,15181669,15182266,15183314,15184333,15185509,15186333,15186983,15187576,15188307,15189212,1519e4,15190891,15191702,15192698,15193703,15194545,15195681,15196492,15197078,15197879,15198547,15199381,15200101,15201027,15201935,15202707,15203474,15204440,15205256,15206290,15207274,15208234,15209154,15210022,15210890,15211709,15212644,15213558,15214229,15215227,15216077,15216922,15217694,15218433,15219373,15220331,15221288,15222186,15222981,15223818,15224692,15225642,15226603,15227457,15228388,15229258,15230248,15231140,15232071,15232960,15233910,15234576,15235249,15236402,15237337,15238166,15239020,15239464,15240413,15241503,15242638,15243667,15244561,15245765,15246869,15247858,15249094,15250274,15251345,15252352,15253605,15254581,15255340,15256284,15257252,15258258,15259312,15260273,15261136,15262312,15263412,15264504,15265314,15266324,15267279,15268378,15269358,15270297,15271220,15272232,15273373,15274205,15275112,15276340,15277026,15277859,15278590,15279288,15280129,15280967,15281768,15282802,15283348,15284083,15284494,15285384,15286470,15287090,15287726,15288168,15289091,15290154,15290994,15292027,15293125,15294244,15294836,15295872,15296931,15297880,15298578,15299375,15300338,15301382,15302121,15303073,15304003,15304939,15305844,15306691,15307640,15308807,15309664,15310351,15311401,15312414,15313603,15314227,15314917,15315549,15316345,15317358,15318017,15318933,15319909,15320694,15321692,15322703,15323647,15324558,15325671,15326285,15326885,15327672,15328389,15329035,15329774,15330381,15330864,15331145,15331530,15332005,15332821,15333544,15334361,15335240,15336176,15337097,15337946,15339030,15339642,15340454,15341240,15341998,15342605,15343406,15344100,15344994,15345619,15346172,15346817,15347440,15347983,15348604,15349170,15349737,15350496,15351277,15352139,15353074,15353931,15354742,15355887,15356979,15357929,15358934,15359571,15360286,15361158,15362101,15363058,15363636,15364077,15364751,15365627,15366310,15366992,15367982,15368834,15369865,15370643,15371443,15372276,15372976,15373758,15374615,15375533,15376186,15377020,15377892,15378486,15379367,15380246,15381043,15381517,15382119,15382905,15383671,15384356,15384986,15385599,15386485,15387356,15388215,15389280,15390109,15390845,15391523,15392307,15393052,15393969,15394946,15395774,15396470,15397189,15398051,15398896,15399525,15400040,15401012,15401746,15402579,15403462,15404306,15405156,15406012,15406907,15407760,15408810,15409673,15410660,15411537,15412467,15413377,15414303,15415079,15415749,15416621,15417584,15418551,15419318,15419701,15420170,15420749,15421648,15422545,15423349,15424258,15425328,15425870,15426486,15427350,15427794,15428737,15429616,15430282,15430799,15431739,15432274,15432974,15434084,15435219,15436141,15437095,15437824,15439149,15440273,15441396,15442363,15443207,15444355,15445172,15446375,15447390,15448240,15449125,15450107
,15451112,15452239,15453202,15454102,15455297,15456034,15457118,15458310,15459428,15460634,15461793,15462725,15463730,15464826,15465846,15466576,15467649,15468605,15469671,15470699,15471673,15472631,15473683,15474392,15475247,15475989,15476626,15477497,15478498,15478992,15479811,15480243,15481129,15482067,15482785,15483826,15484316,15485061,15485492,15486299,15487214,15488275,15488971,15489540,15489964,15490902,15491956,15492869,15493907,15495008,15496135,15496764,15497897,15498997,15499925,15501073,15501935,15502636,15503508,15504504,15505583,15506322,15507308,15508258,15509139,15510050,15510866,15511940,15513045,15513924,15514685,15515711,15516746,15517889,15518911,15519672,15520351,15520977,15521783,15522641,15523731,15524591,15525517,15526186,15527091,15527900,15528836,15529689,15530628,15531716,15532775,15533667,15534736,15535692,15536827,15537778,15538393,15539090,15539806,15540337,15541081,15541773,15542593,15543274,15544058,15545179,15545976,15546603,15547064,15547624,15548044,15548467,15548983,15549766,15550723,15551537,15552225,15553328,15554408,15555317,15556212,15556970,15557690,15558357,15559188,15560026,15560849,15562080,15563144,15563922,15565010,15565810,15566811,15567886,15568995,15569783,15570580,15571431,15572296,15573287,15574254,15575113,15576085,15576985,15578059,15578961,15579866,15580684,15581552,15582430,15583315,15584206,15585126,15586001,15586970,15587869,15588816,15589697,15590608,15591488,15592524,15593329,15594008,15595120,15595875,15596997,15598066,15598846,15599889,15600868,15601860,15602683,15603444,15604370,15605235,15606149,15607237,15608291,15609055,15609846,15610538,15611554,15612312,15613065,15614008,15614896,15615740,15616633,15617554,15618456,15619430,15620273,15621332,15622145,15622976,15624057,15625168,15625992,15626954,15627787,15628567,15629467,15630387,15631413,15632526,15633464,15634412,15635115,15635728,15636354,15637185,15638199,15638877,15639482,15639965,15640796,15641869,15642902,15643860,15644819,15645669,15646626,15647513,15648528,15649316,15650403,15651305,15652132,15652937,15653939,15654700,15655722,15656623,15657460,15658314,15659225,15660280,15661332,15662294,15663117,15663985,15664947,15665857,15666893,15667799,15668525,15669562,15670609,15671515,15672263,15672957,15673701,15674682,15675629,15676358,15677352,15678300,15679320,15680261,15681257,15682151,15683051,15684042,15685055,15686067,15686903,15687734,15688652,15689559,15690537,15691494,15692463,15693395,15694450,15695421,15696380,15697289,15698279,15699105,15700056,15700968,15701844,15702806,15703642,15704601,15705560,15706485,15707558,15708479,15709477,15710296,15711335,15712297,15713142,15714021,15714703,15715434,15716284,15717079,15718113,15718837,15719628,15720185,15720858,15721551,15722146,15722872,15723630,15724362,15725398,15726344,15727283,15728158,15728964,15729835,15730685,15731473,15732599,15733503,15734462,15735328,15736214,15737122,15737947,15739035,15739892,15740625,15741322,15741966,15742828,15743782,15744572,15745499,15746282,15747241,15748249,15748958,15749927,15750848,15751582,15752527,15753264,15754014,15754854,15755660,15756548,15757527,15758407,15759345,15760194,15760954,15761977,15762797,15763837,15764881,15765944,15766964,15767737,15768490,15769214,15770034,15770920,15771552,15772261,15773048,15774033,15774710,15775503,15776235,15777086,15777942,15778637,15779493,15780409,15781383,15782409,15783222,15784145,15785213,15786183,15787169,15788200,15789241,15790440,15791538,15792515,15793589,15794886,15795964,15797087,15798219,15799398,15800288,15800868,15801817
,15802888,15804003,15805054,15806020,15806704,15807346,15807997,15808577,15809252,15809786,15810629,15811459,15812423,15813285,15814053,15814813,15815578,15816335,15816972,15817691,15818399,15819207,15820010,15820523,15821471,15822569,15823584,15824609,15825527,15826247,15827031,15827954,15829176,15830185,15831222,15832346,15833336,15834217,15835377,15836248,15837385,15838275,15839359,15840433,15841407,15842367,15843528,15844784,15845906,15846940,15847953,15848881,15849788,15850990,15852099,15853158,15854229,15855115,15856229,15856604,15857294,15858498,15859618,15860496,15861299,15862167,15863315,15864146,15865049,15866185,15867343,15868422,15869398,15870178,15871115,15872093,15873114,15874192,15875146,15876020,15877160,15877902,15878907,15879924,15880580,15881235,15882048,15882809,15883512,15884332,15885178,15886214,15887063,15887886,15888332,15888784,15889854,15890593,15891570,15892337,15892809,15893205,15894191,15895080,15895656,15896501,15896902,15897632,15898768,15899836,15900905,15902022,15902781,15903986,15905110,15906508,15907774,15909013,15909809,15910533,15911489,15912472,15913459,15914260,15915214,15916185,15917044,15917954,15918774,15919895,15920960,15921706,15922546,15923607,15924624,15925781,15926359,15927008,15927729,15928335,15929171,15929639,15930180,15930704,15931120,15931538,15932404,15933111,15933979,15934833,15935802,15936842,15937901,15938691,15939622,15940732,15941339,15941941,15942730,15943192,15943640,15944085,15944626,15945089,15945521,15946207,15946839,15947670,15948346,15948852,15949209,15949547,15949873,15950265,15950617,15950957,15951472,15952282,15953306,15954132,15954773,15955629,15956590,15957412,15958211,15959024,15959841,15961010,15961943,15963027,15963837,15964597,15965538,15966423,15967317,15968074,15969075,15970123,15971055,15972038,15972837,15973549,15974385,15975323,15976127,15977061,15977951,15978822,15979594,15980470,15981558,15982486,15983553,15984289,15984950,15985789,15986719,15987483,15988303,15988900,15989432,15990039,15990762,15991701,15992587,15993465,15994234,15995120,15996241,15997150,15998062,15998866,15999778,16000397,16001150,16001873,16002606,16003409,16004253,16005070,16006063,16007015,16007884,16008825,16009472,16010362,16011011,16011813,16012446,16013016,16013911,16014787,16015641,16016649,16017511,16018403,16019138,16019856,16020716,16021319,16022092,16022743,16023284,16023944,16024787,16025610,16026621,16027362,16028039,16028804,16029549,16030209,16030925,16031543,16032190,16033122,16034048,16034849,16035742,16036557,16037086,16037982,16038723,16039448,16040020,16040949,16041823,16042850,16043769,16044709,16045560,16046492,16047232,16048057,16048779,16049483,16050433,16051277,16052233,16053154,16054066,16055053,16055857,16056529,16057515,16058412,16059278,16060202,16061086,16062011,16062694,16063251,16063920,16064828,16065671,16066515,16067321,16068407,16069354,16070160,16070996,16071663,16072647,16073289,16074291,16075147,16076088,16077012,16077932,16078906,16079640,16080194,16080881,16081786,16082440,16083253,16083840,16084370,16084910,16085875,16086762,16087644,16088657,16089576,16090436,16091145,16092038,16092946,16093811,16094729,16095431,16096330,16097236,16098219,16099195,16100082,16100911,16101912,16102945,16103862,16104717,16105408,16106294,16107219,16108121,16109031,16109766,16110707,16111635,16112615,16113589,16114467,16115296,16116299,16117334,16118251,16119105,16119803,16120717,16121629,16122502,16123431,16124103,16125011,16125902,16126869,16127795,16128656,16129500,16130494,16131528,16132442,16133314,16134035,16134967
,16135917,16136772,16137783,16138693,16139666,16140595,16141449,16142391,16143377,16144353,16145364,16146277,16147246,16148245,16149197,16150144,16151061,16151908,16152975,16153949,16154874,16155897,16156686,16157615,16158720,16159677,16160692,16161699,16162737,16163774,16164605,16165582,16166518,16167500,16168376,16169306,16170218,16171171,16172150,16172967,16173848,16174706,16175577,16176451,16177385,16178234,16179214,16180098,16181068,16181949,16182868,16183791,16184802,16185610,16186301,16187364,16188467,16189329,16190289,16191336,16192213,16193235,16194055,16195175,16196195,16197067,16198077,16198965,16199935,16201091,16201972,16203015,16203911,16204887,16205917,16206807,16207849,16208839,16209441,16210010,16210643,16211345,16211800,16212268,16212732,16213382,16214322,16214998,16215706,16216415,16217116,16217813,16218564,16219146,16219785,16220449,16221082,16221561,16222234,16222874,16223536,16224170,16224767,16225439,16225983,16226618,16227297,16227964,16228627,16229547,16229981,16230773,16231156,16232236,16233393,16234404,16235310,16236163,16236764,16237373,16237936,16238633,16239763,16240997,16242126,16243121,16244083,16245309,16246439,16247483,16248560,16249517,16250400,16251564,16252275,16253464,16254505,16255359,16256241,16257251,16258190,16259322,16260252,16261146,16262323,16263475,16264536,16265238,16266416,16267347,16268435,16269589,16270428,16271320,16272242,16273304,16274149,16275076,16276101,16276942,16277661,16278469,16279130,16279905,16280690,16281578,16282206,16283049,16283438,16284171,16285252,16285720,16286519,16286931,16287811,16288735,16289771,16290682,16291833,16292800,16294006,16295167,16295806,16296518,16297337,16298001,16298411,16299448,16300334,16301203,16302193,16303127,16304364,16305448,16306444,16307418,16308217,16308992,16309576,16310443,16311545,16312850,16313992,16314998,16316014,16316811,16317326,16317736,16318364,16319553,16320490,16321657,16322558,16323758,16324908,16325634,16326443,16327351,16328148,16329082,16330005,16330799,16331655,16332720,16333522,16334243,16335184,16336156,16337144,16337923,16338878,16339853,16340710,16341605,16342405,16343513,16344565,16345331,16346153,16347206,16348212,16349336,16350359,16351028,16351689,16352282,16353070,16354044,16354967,16355629,16356634,16357369,16358230,16359267,16360185,16361018,16361851,16362785,16363748,16364715,16365853,16366802,16367395,16368190,16368724,16369492,16370239,16371102,16371785,16372737,16373795,16374510,16375183,16375751,16376112,16376603,16377400,16378368,16379408,16380363,16381166,16382e3,16382876,16383831,16384778,16385634,16386563,16387432,16388420,16389312,16390245,16391121,16392073,16392735,16393406,16394528,16395263,16396392,16397510,16398171,16399059,16400098,16401133,16401908,16402708,16403601,16404455,16405420,16406384,16407402,16408090,16408855,16409573,16410550,16411346,16411954,16412914,16413874,16414842,16415788,16416731,16417655,16418680,16419537,16420539,16421440,16422292,16423331,16424425,16425235,16426235,16427072,16427877,16428664,16429586,16430725,16431787,16432621,16433627,16434437,16435134,16435767,16436548,16437521,16438221,16438913,16439407,16440032,16441061,16442003,16443016,16443986,16444842,16445852,16446784,16447823,16448549,16449551,16450483,16451283,16452093,16453046,16453819,16454732,16455655,16456371,16457228,16458129,16459147,16460120,16461089,16462062,16462955,16463905,16464850,16465880,16466762,16467625,16468599,16469715,16470784,16471438,16472119,16472804,16473847,16474808,16475570,16476554,16477497,16478426,16479324,16480257,16481096,16481965,16482853,
16483789,16484704,16485567,16486458,16487344,16488193,16489199,16490126,16491080,16492036,16493056,16493996,16495040,16495904,16496913,16497782,16498650,16499633,16500623,16501570,16502415,16503415,16504361,16505287,16506373,16507231,16508185,16509102,16510058,16510968,16511829,16512634,16513295,16513978,16514882,16515613,16516631,16517406,16518282,16518936,16519515,16520215,16520835,16521577,16522286,16523106,16523918,16524864,16525747,16526412,16527200,16528011,16528919,16529760,16530870,16531732,16532571,16533361,16534308,16535235,16536034,16537122,16538035,16538815,16539574,16540238,16541125,16542087,16542906,16543893,16544707,16545643,16546661,16547417,16548397,16549341,16550064,16550972,16551715,16552373,16553196,16553988,16554790,16555792,16556548,16557525,16558409,16559247,16560281,16561172,16562186,16563213,16564305,16565267,16566072,16566713,16567436,16568317,16569174,16569712,16570412,16571261,16572240,16572952,16573804,16574639,16575583,16576313,16577069,16577912,16578682,16579759,16580675,16581639,16582593,16583536,16584528,16585498,16586514,16587579,16588903,16589961,16591081,16592204,16593366,16594209,16594815,16595835,16596938,16598040,16599129,16600041,16600704,16601353,16601997,16602695,16603525,16604395,16605246,16606222,16607018,16607957,16608758,16609501,16610458,16611188,16612251,16612789,16613488,16614596,16615749,16616636,16617590,16618510,16619092,16619935,16621038,16622250,16623286,16624189,16625133,16626099,16627152,16628114,16628960,16630080,16630890,16631983,16633209,16634310,16635313,16636357,16637315,16638243,16639185,16640166,16641382,16642421,16643478,16644517,16645616,16646467,16647530,16647910,16648623,16649772,16650927,16651772,16652622,16653513,16654593,16655405,16656337,16657499,16658627,16659785,16660747,16661522,16662461,16663424,16664414,16665461,16666426,16667295,16668427,16669172,16670150,16671242,16672153,16672841,16673648,16674345,16675059,16675861,16676976,16678095,16678690,16679358,16679811,16680717,16681661,16682452,16683258,16683690,16684189,16685236,16685995,16686932,16687677,16688156,16688560,16689581,16690757,16691684,16692875,16693959,16694692,16695765,16696829,16697486,16698166,16699292,16700270,16700939,16701915,16702899,16703842,16704722,16705617,16706506,16707702,16708625,16709263,16710290,16711285,16712484,16713249,16713925,16714552,16715288,16716186,16717129,16718297,16719275,16720146,16721179,16721968,16722819,16723885,16725016,16725620,16726207,16727015,16727717,16728327,16728983,16729799,16730286,16730922,16731556,16732562,16733222,16734195,16734996,16736013,16736761,16737550,16738492,16739201,16739939,16740728,16741532,16742574,16743500,16744056,16744903,16745833,16746425,16747223,16747914,16748557,16749237,16750073,16750770,16751453,16752253,16752991,16753881,16754813,16755649,16756634,16757611,16758649,16759241,16759670,16760282,16761336,16762068,16762970,16764039,16765119,16765974,16766830,16767677,16768608,16769661,16770500,16770994,16771857,16772711,16773357,16773999,16774650,16775529,16776559,16777587,16778434,16779259,16780023,16780739,16781384,16782227,16782977,16783632,16784314,16785258,16786146,16786883,16787719,16788431,16789016,16789919,16790687,16791456,16792228,16792985,16793905,16794848,16795665,16796452,16797296,16798222,16799047,16799839,16800701,16801583,16802545,16803417,16804350,16805250,16806176,16806887,16807635,16808626,16809462,16810051,16810606,16811204,16811944,16812929,16813773,16814700,16815606,16816304,16817174,16817625,16818599,16819676,16820792,16821862,16822846,16823545,16824379,16825631,16826733,
920669,22921746,22922697,22923380,22924222,22924940,22925934,22926624,22927457,22928447,22929330,22930163,22931114,22932001,22932866,22933782,22934658,22935687,22936539,22937379,22938502,22939599,22940427,22941369,22942250,22943003,22943922,22944874,22945800,22946932,22947933,22948907,22949569,22950100,22950787,22951591,22952575,22953255,22953847,22954258,22955150,22956186,22957251,22958149,22959184,22959972,22960891,22961823,22962771,22963547,22964651,22965552,22966342,22967194,22968211,22968959,22970002,22970896,22971709,22972541,22973437,22974491,22975546,22976556,22977309,22978168,22979133,22980050,22981053,22982026,22982827,22983807,22984887,22985711,22986454,22987182,22988009,22988906,22989877,22990662,22991706,22992620,22993640,22994583,22995615,22996497,22997379,22998362,22999396,23000408,23001289,23002151,23003085,23003992,23004931,23005908,23006898,23007886,23008893,23009852,23010817,23011661,23012631,23013466,23014382,23015314,23016139,23017129,23017972,23018847,23019805,23020762,23021787,23022789,23023835,23024639,23025660,23026673,23027489,23028324,23029066,23029870,23030680,23031382,23032435,23033170,23033945,23034554,23035199,23035846,23036481,23037127,23037859,23038591,23039662,23040551,23041457,23042295,23043059,23043997,23044895,23045717,23046761,23047671,23048661,23049540,23050382,23051265,23052114,23053059,23053871,23054570,23055343,23056014,23056879,23057827,23058678,23059611,23060414,23061256,23062204,23062994,23063854,23064729,23065441,23066388,23067117,23067849,23068659,23069443,23070331,23071293,23072172,23073083,23073867,23074631,23075615,23076417,23077404,23078433,23079416,23080496,23081194,23081970,23082687,23083537,23084426,23085064,23085855,23086592,23087562,23088235,23088972,23089668,23090495,23091393,23092090,23092933,23093916,23094892,23095886,23096707,23097683,23098745,23099690,23100702,23101740,23102945,23104160,23105196,23106331,23107379,23108387,23109078,23109885,23110913,23111999,23113159,23114290,23115032,23115717,23116363,23116973,23117604,23118197,23118860,23119828,23120754,23121614,23122515,23123316,23124257,23125065,23125789,23126713,23127421,23128464,23128936,23129672,23130794,23131958,23132844,23133802,23134494,23135335,23136047,23136783,23137621,23138852,23139942,23140755,23141668,23142682,23143813,23145099,23146186,23147163,23148182,23149170,23150228,23151382,23152548,23153630,23154666,23155714,23156784,23157841,23158770,23160026,23160967,23161799,23162758,23163730,23164725,23165784,23166673,23167544,23168747,23169788,23170810,23171718,23172756,23173725,23174715,23175902,23176962,23178003,23178977,23179955,23180471,23180857,23182046,23183055,23184220,23184929,23186061,23187048,23187967,23188775,23189717,23190516,23191209,23192175,23193172,23194112,23195203,23196057,23196886,23197927,23198923,23199980,23200923,23201987,23203007,23203950,23204859,23206132,23207273,23208105,23208904,23209916,23210869,23211493,23212428,23213098,23213957,23214719,23215335,23216144,23217197,23218254,23219339,23220305,23221383,23221844,23222686,23223087,23223709,23224693,23225766,23226342,23227041,23227433,23228292,23229393,23229928,23230676,23231091,23231978,23233053,23233726,23234553,23234964,23235684,23236653,23237737,23238377,23239006,23239393,23240344,23241424,23242416,23243535,23244436,23245495,23246516,23247517,23248739,23249549,23250545,23251862,23252707,23253421,23254297,23255296,23256372,23257109,23258103,23259049,23259920,23260829,23261646,23262728,23263815,23264650,23265429,23266496,23267515,23268716,23269288,23269971,23270745,23271604,23272334,23273377,23
274392,23275346,23276323,23277029,23278019,23279083,23279702,23280388,23281072,23281877,23282572,23283586,23284618,23285534,23286452,23287400,23288330,23289206,23290205,23290920,23291923,23292870,23293817,23294591,23295485,23296322,23296899,23297793,23298791,23299702,23300634,23301536,23302437,23303445,23304141,23304874,23305739,23306735,23307554,23308434,23309278,23310147,23311009,23311923,23312750,23313750,23314631,23315623,23316497,23317411,23318308,23319282,23320084,23320748,23321851,23322446,23323442,23324322,23325100,23325624,23326340,23327454,23328606,23329506,23330471,23331311,23332597,23333685,23334505,23335716,23336738,23337583,23338477,23339465,23340460,23341604,23342570,23343469,23344670,23345790,23346891,23348106,23349137,23350092,23351069,23351970,23352960,23354007,23355012,23356062,23356957,23358018,23359129,23359948,23360807,23361933,23362648,23363386,23364184,23364831,23365632,23366426,23367306,23367898,23368725,23369195,23369661,23370717,23371694,23372158,23372961,23373376,23374252,23375339,23375900,23376600,23377041,23377943,23379063,23379910,23380966,23382035,23383156,23383858,23385048,23386358,23387697,23389098,23390134,23391245,23392458,23393579,23394410,23395659,23396741,23397213,23398100,23398949,23399771,23401248,23402687,23404139,23405644,23407044,23408491,23409852,23411334,23412841,23414463,23415966,23417507,23418917,23420489,23421804,23423374,23424865,23426447,23427925,23429596,23431045,23432461,23433897,23435341,23436840,23438191,23439625,23441160,23442600,23443769,23445296,23446897,23448352,23449796,23451151,23452455,23453746,23455296,23456806,23458262,23459723,23460953,23462093,23463031,23464205,23465346,23466259,23466761,23467744,23469037,23470447,23471683,23472720,23473753,23474792,23475951,23477102,23478350,23479054,23480169,23481110,23482149,23482800,23483295,23484052,23485049,23485889,23487409,23488632,23490188,23491634,23493030,23494487,23495908,23497305,23498868,23500127,23501220,23502675,23504204,23505601,23507141,23508594,23510240,23511912,23513364,23514946,23516392,23517885,23519375,23520938,23522518,23523895,23525348,23526779,23528189,23529633,23531024,23532472,23533899,23535341,23536715,23538286,23539617,23541058,23542324,23543658,23545008,23546540,23548022,23549298,23550726,23552093,23553566,23555057,23556457,23557894,23559163,23560840,23562283,23563724,23565280,23566971,23568428,23569936,23571484,23572673,23574020,23575563,23577099,23578521,23580136,23581533,23582856,23584387,23586009,23587331,23588792,23589893,23591353,23592816,23594269,23595690,23597139,23598623,23599592,23600866,23601755,23602797,23603879,23605038,23606084,23607155,23608142,23608786,23609290,23609852,23611151,23612538,23613576,23614698,23616050,23616790,23617905,23618911,23619672,23621163,23622590,23624023,23625475,23626896,23628389,23629895,23631505,23633015,23634588,23636038,23637579,23639008,23640463,23642020,23643544,23644928,23646420,23647782,23649458,23650595,23652167,23653726,23655381,23656892,23658374,23659708,23661039,23662672,23664224,23665779,23667322,23668706,23669992,23670960,23672185,23673353,23674268,23675035,23676435,23677467,23678875,23680315,23681735,23682833,23684409,23685512,23686956,23688279,23689834,23691263,23692783,23694242,23695771,23697271,23698365,23699370,23700762,23701993,23703406,23704958,23705838,23706764,23707697,23708813,23709742,23710848,23711941,23712869,23713787,23714804,23716119,23717486,23718520,23719551,23720583,23721623,23722803,23723914,23724915,23725872,23727165,23728455,23729786,23731013,23732013,23732599,23733490,23734683,23736111,23
737589,23738974,23740420,23741761,23743200,23744780,23746316,23747697,23749225,23750691,23752270,23753683,23755221,23756615,23758082,23759438,23760898,23762403,23763822,23764972,23766528,23768147,23769642,23771083,23772609,23773814,23775109,23776199,23777440,23778925,23780518,23782072,23783386,23784460,23785655,23786825,23787919,23788815,23790195,23791709,23792904,23794127,23795538,23797028,23798155,23799530,23800940,23802323,23803579,23804923,23806346,23807899,23809401,23810540,23811991,23813375,23814792,23816213,23817523,23818780,23820188,23821512,23822988,23824514,23826033,23827321,23828274,23829501,23830629,23831623,23832593,23833485,23834050,23834574,23836091,23837533,23838689,23839648,23840522,23841393,23842385,23843443,23844698,23846045,23847253,23848479,23849832,23851033,23852582,23853952,23855490,23856986,23858553,23860058,23861582,23863107,23864533,23865880,23867457,23868966,23870442,23871686,23872967,23874280,23875491,23876358,23877419,23878730,23879863,23881170,23882386,23883409,23884902,23886529,23888131,23889732,23891247,23892783,23894337,23895700,23897210,23898737,23900026,23901480,23902765,23904179,23905617,23906815,23908075,23909577,23911090,23912623,23914149,23915664,23916996,23918149,23918839,23920135,23921470,23922713,23923749,23924894,23926196,23927070,23927921,23928966,2393e4,23930870,23932125,23933560,23935077,23936466,23937928,23939263,23940678,23942268,23943801,23945168,23946697,23948167,23949722,23951320,23952956,23954442,23955992,23957507,23959100,23960561,23962012,23963448,23965041,23966560,23967926,23969351,23970731,23972409,23973525,23975106,23976706,23978170,23979604,23980805,23982252,23983779,23985284,23986781,23988232,23989647,23990985,23992389,23993463,23994715,23996179,23997690,23999207,24000773,24002178,24003410,24004518,24005567,24006792,24007808,24008588,24009273,24010217,24011637,24012860,24014105,24015675,24017041,24018315,24019799,24021373,24022890,24024208,24025260,24026457,24027754,24029157,24030575,24031842,24033147,24034581,24036213,24037762,24039347,24040850,24042215,24043582,24044925,24045990,24047412,24048687,24049880,24051442,24052979,24054192,24055648,24057125,24058639,24060222,24061280,24062609,24063864,24064913,24066195,24067460,24068821,24069856,24070917,24072041,24073525,24074745,24075599,24076500,24077675,24078532,24079281,24080232,24081844,24083275,24084756,24086151,24087605,24088931,24090369,24091952,24093477,24094873,24096381,24097918,24099393,24100845,24102426,24103930,24105282,24106707,24108083,24109680,24111165,24112429,24113934,24115571,24117035,24118576,24120012,24121370,24122796,24124412,24125969,24127384,24128696,24130189,24131598,24132999,24134324,24135596,24137189,24138724,24140170,24141573,24142856,24143829,24145035,24146185,24147188,24148054,24148577,24149880,24151155,24152648,24153686,24154726,24155868,24156959,24158331,24159839,24160730,24161550,24162373,24163388,24164540,24165426,24166937,24168262,24169700,24171193,24172578,24174023,24175373,24176902,24178493,24179854,24181289,24182742,24184145,24185697,24187155,24188582,24189634,24191283,24192552,24193597,24195122,24196543,24197871,24199282,24200533,24201847,24203011,24204453,24206041,24207252,24208570,24210072,24211670,24212988,24213968,24215249,24216529,24217644,24218863,24220129,24221466,24222823,24224188,24225223,24226558,24227613,24228825,24230213,24231488,24232940,24234427,24235883,24237290,24238731,24240006,24241678,24243124,24244556,24246109,24247791,24249250,24250714,24251932,24253321,24254777,24256206,24257632,24258943,24259868,24260914,24261941,24263003,2426
4163,24265033,24265706,24266209,24267058,24268420,24269674,24271125,24272521,24273633,24275028,24276493,24278072,24279403,24280707,24282112,24283702,24284563,24285448,24286383,24287472,24288633,24290132,24291632,24293073,24294281,24295476,24296499,24297957,24299057,24300192,24301307,24302423,24303116,24303983,24304789,24305592,24306614,24307374,24308686,24310085,24311411,24312895,24314456,24315888,24317370,24318785,24320221,24321634,24323027,24324407,24325794,24327185,24328163,24329648,24331138,24332152,24333176,24334204,24335434,24337005,24338555,24340093,24341405,24342630,24343878,24345113,24346344,24347590,24348869,24350153,24351393,24352839,24354290,24355679,24357072,24358577,24359615,24360440,24361196,24362387,24363244,24364033,24364423,24364681,24365986,24367204,24368382,24369710,24371028,24372276,24373873,24375448,24376990,24378469,24379576,24381013,24382228,24383539,24384981,24386411,24387852,24388959,24390340,24391609,24392719,24393804,24395147,24396403,24397697,24398902,24400210,24401369,24402590,24403648,24404749,24405846,24407035,24408066,24408877,24410095,24411288,24412272,24413381,24414427,24415698,24416665,24417645,24418582,24419735,24420546,24421597,24422801,24424114,24425333,24426400,24427787,24428628,24429863,24431126,24432278,24433374,24434594,24435593,24436719,24437854,24438847,24440011,24441095,24442141,24443372,24444595,24445665,24446873,24447927,24449022,24450091,24451261,24452391,24453605,24454877,24456464,24457707,24458935,24459990,24460526,24461814,24463125,24464410,24465441,24466472,24467510,24468543,24469573,24470709,24471759,24473029,24473811,24474822,24475948,24477081,24478198,24479484,24480686,24481630,24482066,24482890,24483754,24484524,24485818,24487277,24488733,24490167,24491624,24493017,24494464,24496052,24497567,24498954,24500399,24501873,24503418,24504877,24506292,24507409,24508883,24510351,24511844,24513251,24514779,24516228,24517564,24518988,24520281,24521987,24523224,24524778,24526371,24528030,24529585,24531084,24532541,24533900,24535052,24536118,24537471,24539040,24540534,24541431,24542582,24543836,24544657,24545746,24546993,24548393,24550002,24551438,24552314,24553825,24555343,24556743,24557814,24559020,24560452,24561939,24563350,24564853,24566238,24567455,24568633,24569928,24571417,24572778,24574283,24575698,24576626,24577874,24579178,24580312,24581629,24582750,24583888,24585141,24586655,24588104,24589530,24590873,24592068,24593359,24594611,24595869,24597304,24598632,24599885,24601125,24602375,24603654,24605028,24606194,24607592,24609077,24610090,24611515,24613036,24614339,24615298,24616576,24618022,24619315,24620653,24622148,24623632,24625212,24626813,24628206,24629255,24630331,24631639,24632686,24633868,24634869,24635838,24636800,24637552,24637965,24638479,24639746,24641047,24642432,24643472,24644514,24645689,24646975,24647709,24648740,24649761,24650587,24651405,24652898,24654324,24655750,24657199,24658606,24660111,24661630,24663011,24663999,24665437,24666980,24668302,24669896,24671399,24672982,24674365,24675861,24677272,24678777,24680158,24681619,24683092,24684507,24685673,24687244,24688843,24690406,24691832,24693302,24694515,24695943,24697194,24698607,24700018,24701540,24703125,24704407,24705590,24707160,24708570,24709866,24711308,24712744,24714041,24715223,24716714,24718246,24719572,24720398,24721800,24723349,24724912,24726353,24727762,24729024,24730034,24731203,24732382,24733436,24734356,24734761,24735867,24737139,24738493,24739524,24740559,24741693,24742845,24743987,24744969,24746140,24747147,24748251,24749165,24750184,24751589,24753107,2475
4492,24755952,24757300,24758706,24760302,24761841,24763211,24764736,24766231,24767714,24769201,24770776,24772226,24773642,24775019,24776437,24777977,24779426,24780597,24782152,24783773,24785222,24786669,24788235,24789436,24790711,24791552,24792770,24794385,24795929,24797349,24798624,24800196,24801470,24802459,24803884,24805174,24806094,24807491,24808797,24809812,24811358,24812599,24813397,24814780,24816105,24817324,24818618,24820071,24821439,24822841,24824326,24825861,24827216,24828422,24829473,24830574,24831757,24832749,24833564,24833980,24835287,24836542,24837673,24839017,24840333,24841384,24842758,24844332,24845697,24846845,24848159,24849378,24850925,24852406,24853821,24855039,24856515,24857990,24859414,24860829,24861828,24863144,24864703,24866215,24867608,24869041,24870174,24871314,24872296,24873137,24874481,24875757,24876887,24878156,24879519,24880815,24881893,24883416,24885020,24886523,24887978,24889282,24890548,24892008,24893583,24895059,24896526,24897541,24899039,24900571,24901821,24903296,24904905,24906210,24907457,24908393,24909314,24910602,24911837,24913181,24914397,24915795,24917244,24918457,2492e4,24921583,24923084,24923898,24925253,24926673,24927178,24928523,24929683,24930861,24932385,24933702,24934898,24935853,24937232,24938422,24939739,24941148,24942166,24943277,24944684,24945915,24947149,24948728,24950309,24951910,24952933,24954429,24955521,24957100,24958380,24959909,24961522,24962699,24963823,24965298,24966845,24968264,24969876,24971157,24972388,24973363,24974430,24975765,24977081,24978192,24979711,24980796,24982140,24983768,24985304,24986755,24988259,24989438,24990944,24992200,24993213,24994498,24995765,24996894,24998236,24999524,25000559,25002137,25003210,25004708,25006027,25007646,25009191,25010699,25011533,25012740,25014200,25015360,25016960,25018152,25019673,25021028,25022288,25023529,25024928,25026344,25027617,25028590,25029496,25030789,25032058,25033335,25034627,25035739,25037041,25038368,25039428,25041049,25042560,25044139,25045646,25047004,25048618,25050155,25051495,25052938,25054383,25055735,25057087,25058128,25059615,25060992,25062510,25063979,25065384,25066737,25068042,25069454,25071094,25072653,25074187,25075516,25076759,25077759,25078669,25080005,25081325,25082361,25083453,25084706,25085664,25086584,25087680,25088726,25089449,25090867,25092328,25093788,25095191,25096585,25098085,25099594,25101227,25102770,25104216,25105646,25107208,25108620,25110045,25111646,25113165,25114519,25115993,25117282,25118987,25120222,25121780,25123372,25125030,25126580,25128067,25129523,25130882,25132332,25133847,25135379,25136851,25138298,25139792,25141284,25142926,25144404,25145893,25147187,25148309,25149368,25150674,25151684,25152403,25153236,25154561,25155889,25157172,25158204,25159245,25160451,25161598,25162938,25163725,25164696,25165731,25166771,25167727,25169080,25170431,25171577,25172871,25174033,25175309,25176671,25177737,25178958,25180425,25181486,25182760,25184166,25185320,25186708,25188063,25189335,25190744,25192073,25193242,25194677,25196105,25197626,25199013,25200469,25201808,25203249,25204829,25206197,25207603,25209034,25210320,25211678,25213314,25214914,25216368,25217864,25219434,25220944,25222355,25223823,25225441,25227014,25228606,25230124,25231489,25232854,25234179,25235332,25236930,25238373,25239855,25241314,25242585,25243995,25245348,25246841,25248295,25249785,25251238,25252701,25253950,25254882,25255652,25257073,25258480,25259972,25261473,25262827,25264273,25265810,25267245,25268420,25269970,25271582,25272879,25274202,25275588,25276812,25277900,25279297,252808
25,25282245,25283639,25284793,25285519,25286556,25287538,25288755,25289771,25290314,25291294,25291618,25292777,25294048,25295364,25296411,25297673,25298791,25299664,25300695,25301523,25302368,25303852,25305462,25306913,25308349,25309813,25311233,25312740,25314232,25315796,25317344,25318728,25320187,25321772,25323138,25324615,25326218,25327718,25329060,25330507,25331796,25333491,25334863,25336370,25337956,25339658,25341130,25342641,25343977,25345367,25346851,25348453,25349597,25350837,25351878,25353064,25353940,25354671,25356015,25357366,25358397,25359534,25360857,25361687,25362575,25363571,25364536,25365623,25367145,25368596,25370046,25371471,25372876,25374372,25375890,25377524,25379053,25380516,25381937,25383559,25385088,25386614,25388210,25389638,25391189,25392722,25394240,25395613,25397093,25398612,25399969,25401403,25402663,25404340,25405799,25407225,25408774,25410465,25411917,25413381,25414609,25416020,25417564,25418949,25420438,25421611,25422712,25423722,25424846,25425671,25426289,25427547,25428869,25430200,25431490,25433047,25434355,25435908,25437384,25438543,25439996,25441565,25443079,25444378,25445561,25446711,25448011,25449368,25450405,25451551,25452876,25453699,25454584,25455614,25456584,25457776,25459226,25460718,25462106,25463555,25464898,25466307,25467888,25469421,25470784,25472324,25473808,25475423,25476970,25478627,25480113,25481571,25483113,25484700,25486271,25487797,25489221,25490644,25492133,25493616,25494999,25496365,25497917,25499390,25500587,25502114,25503731,25505069,25506444,25507759,25509239,25510694,25511845,25513091,25514148,25515167,25516322,25517208,25517744,25519039,25520275,25521678,25522822,25524327,25525958,25527179,25528633,25530164,25531705,25533254,25534681,25536069,25537332,25538824,25540312,25541648,25542446,25543443,25544599,25545954,25546864,25547621,25548479,25549454,25550543,25551543,25552415,25553631,25554859,25556257,25557617,25558931,25560286,25561542,25562788,25563860,25565010,25566391,25567729,25569131,25570341,25571348,25572640,25573584,25574659,25575710,25576667,25577748,25578797,25579977,25581039,25582261,25583438,25584380,25585558,25586741,25587862,25589443,25590786,25591930,25593030,25594020,25595181,25596220,25597475,25598817,25599959,25601246,25602161,25603593,25604761,25605961,25606970,25607719,25608955,25610003,25610707,25612033,25613376,25614529,25615779,25616948,25617830,25618879,25619593,25620722,25621929,25622948,25623861,25624949,25625841,25626886,25627939,25629220,25630454,25631770,25633078,25634372,25635657,25636178,25636968,25637386,25638410,25639529,25640645,25641696,25642175,25643372,25644346,25645311,25646306,25647546,25648090,25649156,25649253,25649882,25650257,25650309,25650996,25651373,25652244,25652975,25653511,25654312,25655168,25655250,25656459,25657815,25659137,25660490,25661776,25662442,25663452,25664559,25665671,25666766,25667811,25667960,25668181,25668705,25669799,25670860,25670941,25671761,25672714,25673e3,25673628,25674271,25674867,25675696,25675880,25676755,25676899,25677335,25677750,25678577,25679554,25680393,25680473,25680786,25681294,25681592,25682148,25682960,25683329,25684299,25684367,25685157,25685313,25686122,25686819,25687391,25687702,25688644,25689701,25690080,25690625,25691701,25692804,25693014,25693619,25694190,25695193,25696142,25697090,25697914,25698262,25698682,25699393,25700330,25701285,25701918,25702071,25702643,25703479,25704003,25704401,25704666,25704776,25705022,25705170,25705381,25705602,25706177,25706600,25707015,25707752,25708147,25708316,25709220,25709580,25710142,25710719,25711090,2571201
8,25713139,25714249,25715287,25716287,25716705,25717717,25718791,25719811,25720915,25721485,25721885,25722786,25723815,25724884,25725530,25725686,25726143,25726459,25726980,25727752,25727868,25728790,25729816,25730650,25731440,25732182,25732639,25733658,25734535,25735307,25736055,25736492,25737281,25738161,25738781,25739338,25740186,25740493,25741500,25742390,25742906,25743485,25743904,25744645,25745160,25745749,25746660,25747289,25747440,25748034,25748954,25749326,25749901,25750265,25750751,25751427,25751852,25752476,25752612,25753103,25753835,25754580,25755169,25755243,25755420,25755836,25756646,25757344,25757829,25757911,25758579,25759589,25760575,25761629,25762559,25762915,25763781,25764526,25765273,25766182,25766573,25767270,25767995,25768920,25769661,25770455,25770716,25770990,25771347,25771671,25771783,25771897,25772752,25773717,25774414,25775370,25776381,25776570,25776995,25777670,25778156,25778659,25778736,25779148,25779675,25780153,25780734,25781159,25781564,25782437,25783352,25784379,25785275,25785768,25786668,25787659,25788625,25789620,25790554,25791036,25791985,25792600,25793082,25794048,25794508,25794817,25795035,25795256,25795516,25795715,25796237,25796630,25797051,25797683,25798140,25798230,25798780,25799211,25799703,25800404,25800670,25801290,25802331,25803674,25805174,25806649,25808108,25809555,25811100,25812565,25814094,25815584,25817103,25818658,25820137,25821656,25823171,25824528,25826038,25827546,25828999,25830479,25831898,25833071,25833937,25834875,25835860,25837040,25838104,25839329,25840570,25841485,25842519,25843607,25844905,25846141,25847227,25848100,25849491,25850871,25852225,25853609,25855046,25856389,25857761,25858795,25859968,25861320,25862489,25863459,25863931,25864315,25864916,25866101,25866962,25867476,25868418,25869606,25870790,25871751,25872308,25873456,25874487,25875458,25876493,25877299,25878287,25879418,25880739,25881630,25882567,25883275,25884198,25885355,25886685,25888027,25889404,25890431,25891448,25892536,25893704,25894909,25896037,25897239,25898381,25899530,25900771,25902065,25903277,25904619,25905937,25907133,25908493,25909839,25911206,25912365,25913452,25914389,25915582,25916924,25918099,25918879,25920054,25921087,25922149,25923067,25924348,25925717,25926776,25928093,25928904,25930304,25931609,25932957,25934245,25935590,25936921,25938262,25939472,25940442,25941716,25942978,25944041,25944941,25945853,25947136,25948407,25949600,25950785,25952082,25953290,25954496,25955717,25957077,25958244,25959259,25960410,25961474,25962497,25963505,25964398,25965654,25966861,25968141,25969369,25970545,25971650,25972808,25973636,25974692,25975798,25976862,25977925,25979150,25980490,25981672,25982763,25983731,25984812,25986113,25987159,25988240,25989524,25990268,25991589,25992722,25994083,25995368,25996486,25997389,25998616,25999343,26000540,26001594,26002821,26003821,26005071,26006216,26007344,26008463,26009626,26010880,26012066,26013208,26014304,26015558,26016602,26017608,26018713,26020087,26021067,26022027,26023376,26024612,26025926,26027076,26028343,26029140,26030126,26031484,26032786,26034103,26035101,26036234,26037567,26038592,26039659,26040958,26041959,26043352,26044382,26045615,26046815,26048302,26049367,26050487,26051631,26052770,26053922,26055009,26055792,26056925,26058018,26059152,26060445,26061424,26062290,26063508,26064329,26065309,26066251,26067170,26068148,26069483,26070634,26071659,26072948,26074235,26075496,26076504,26077928,26079090,26080351,26081798,26082906,26084238,26085547,26086829,26088102,26089374,26090595,26091858,26092998,26094187,2609543
9,26096583,26097824,26099004,26100111,26101383,26102630,26103722,26104913,26106034,26107187,26108309,26109170,26110273,26111394,26112459,26113722,26114917,26116188,26117118,26118147,26119083,26119969,26121138,26122324,26123518,26124803,26125771,26126694,26127767,26128798,26130005,26131306,26132391,26133419,26134686,26135757,26136727,26138060,26139374,26140774,26141491,26142173,26142840,26143564,26144325,26144967,26145804,26146620,26147286,26148161,26148996,26149847,26150487,26151268,26152418,26153684,26154685,26155715,26156737,26158027,26158956,26159846,26161054,26162420,26163382,26164687,26165768,26166970,26167959,26169082,26170210,26171264,26172074,26173266,26174195,26175479,26176326,26177325,26178147,26178967,26179771,26180956,26181982,26182922,26184110,26185314,26186328,26187322,26188316,26189325,26190433,26191587,26192731,26193786,26194712,26195932,26197206,26198379,26199442,26200423,26201557,26202801,26203763,26204702,26205732,26206431,26207627,26208804,26210110,26211207,26212388,26213495,26214731,26215794,26216834,26218157,26219222,26220601,26222006,26223295,26224154,26225121,26225889,26226825,26227623,26228695,26229679,26230824,26232004,26232933,26234090,26234996,26236008,26237240,26238411,26239475,26240619,26241653,26242892,26243798,26244841,26246104,26247201,26248290,26249488,26250643,26251641,26252938,26254179,26255375,26256625,26257810,26258745,26259818,26261067,26261999,26263001,26264090,26265388,26266575,26267864,26268968,26270178,26271166,26272398,26273889,26275241,26276509,26277859,26279097,26280461,26281912,26283022,26284179,26285033,26286387,26287520,26288620,26289973,26290902,26291954,26293086,26294339,26295487,26296759,26298017,26299171,26300269,26300873,26302074,26303460,26304891,26305791,26306994,26308112,26309163,26310346,26311600,26312567,26313758,26314842,26315932,26316991,26318360,26319477,26320563,26321730,26322764,26323842,26324815,26326181,26327638,26328936,26330017,26331357,26332325,26333364,26334450,26335634,26336801,26338130,26339233,26340188,26341467,26342546,26343879,26345063,26346428,26347649,26348840,26350148,26351013,26352259,26353536,26354611,26355930,26357058,26358187,26359482,26360548,26361875,26363143,26364225,26365465,26366818,26367907,26369015,26370127,26371313,26372489,26373745,26374818,26375889,26377086,26378182,26379174,26380499,26381663,26382822,26384069,26385373,26386507,26387692,26388795,26390088,26391313,26392586,26393852,26394927,26395872,26396966,26398089,26399404,26400520,26401728,26402933,26404279,26405652,26406705,26407379,26408628,26409978,26411369,26412671,26413915,26414988,26416024,26416773,26417602,26418756,26419870,26421056,26422423,26423736,26424936,26425807,26427050,26428211,26429309,26430085,26431125,26431928,26432901,26434122,26435399,26436577,26437983,26439020,26440157,26440886,26442124,26442828,26443804,26444946,26446178,26447453,26448609,26449920,26451297,26452803,26454265,26455100,26455650,26456214,26457055,26457810,26458600,26459439,26460016,26460552,26461141,26461866,26462694,26463301,26463876,26464436,26465020,26465620,26466305,26467199,26467932,26468591,26469322,26470080,26470644,26471809,26473127],sizes:[1292,1197,1007,1255,1211,1425,1201,1020,1268,1229,1253,1249,1370,1188,1458,1136,1173,1410,1093,1094,1175,1432,1289,1322,1432,1380,1372,1209,1387,989,1364,1580,1711,1499,1530,1389,1350,1135,1316,1207,1173,1242,1339,1111,1176,1110,1114,709,1423,1169,1006,1174,1200,1004,1099,1246,1177,1219,1413,1249,1345,1078,1466,1260,1144,1311,876,1219,1208,1003,1011,1255,1020,1231,1202,1249,1153,1063,1038,985,1365,1438,1152,1189,1318,1
162,1405,1241,1297,1306,1279,1271,1300,1252,1234,1490,1151,1267,1139,1149,1236,1058,1243,1267,1409,1202,1152,1128,1012,902,985,1170,1141,952,1181,1028,1024,796,1015,991,1013,814,1104,1133,862,924,926,670,819,1274,1161,1309,1123,1109,963,973,903,903,819,1145,827,905,984,1038,1110,1218,1118,1107,1210,1159,1287,733,1304,1318,1333,982,1101,1204,1228,1093,1026,1113,1207,997,1247,1328,1249,1283,1273,1268,1184,1138,1290,1069,1170,1089,1201,1072,1193,1118,1270,1125,1146,1179,1185,1292,936,1195,1084,1275,1268,1242,1251,1145,967,1291,1089,1259,1197,1069,1262,1215,908,1006,1108,1070,1130,1315,1112,1054,1136,1230,1134,1124,1219,1018,1044,1139,914,1151,994,1144,1286,1174,1052,1420,1338,1215,1436,1285,1338,1222,1032,1349,1447,1293,1075,1245,1168,1207,991,1248,1662,1501,1594,1267,1268,869,1239,1347,1220,877,887,1170,1399,1455,1396,1309,1438,1143,1257,1424,1391,1270,1295,1346,1451,1121,1196,1431,1206,1483,1044,1265,1158,1288,1282,1292,1383,1284,1272,1214,1190,1232,1317,1297,1280,1206,1044,1042,1423,1403,1118,1449,1129,1223,1296,1019,1638,1411,1390,1209,1604,1418,1389,1498,1508,1439,1327,1598,1554,1558,1347,1528,1519,1494,1283,1195,1136,1154,1381,1444,1236,1183,1052,1247,1208,1028,1310,1215,1083,1103,1119,1146,1206,1131,1141,1276,1227,1209,1261,1117,1167,1064,956,1069,1185,1327,1269,1098,1259,1112,1198,1329,1013,1143,1110,1165,927,1321,1227,1134,1083,885,1377,1142,1017,1163,1223,1197,915,1132,419,793,1256,1154,1051,1148,899,1150,1186,1076,1190,969,1007,937,875,1172,825,802,1002,1194,1138,916,1253,1297,1219,1131,1313,1347,1073,1204,1227,712,688,981,1032,861,887,950,946,869,877,830,1121,1041,678,932,1077,1069,1118,587,666,728,925,721,1089,865,974,872,671,1118,905,583,730,691,662,832,781,884,689,568,959,1055,1013,879,741,888,878,811,871,903,949,951,920,930,893,962,830,935,900,1035,722,696,1193,647,993,743,870,436,1009,1069,996,1047,872,1318,1129,721,1145,1048,855,870,992,956,1115,987,906,1146,1111,1084,1237,1089,923,1106,870,965,1094,1015,966,881,1003,1102,821,928,1128,796,712,799,686,699,793,795,1040,564,695,455,904,1076,632,630,432,937,1078,889,1043,1083,1138,607,1334,1362,1308,1168,1556,1543,1249,1442,1460,1543,1538,1373,1168,1161,1274,1226,1231,1145,1104,1257,1305,1405,1082,1155,1123,1185,1196,783,725,964,985,986,812,954,964,849,897,813,1136,1062,728,851,1062,1028,1145,595,657,724,918,667,1111,930,990,836,715,1013,1035,632,608,709,853,801,973,766,956,919,1004,871,1101,831,877,843,852,851,887,848,1059,861,993,865,905,908,935,776,669,1002,730,997,606,702,592,1057,1129,987,958,883,867,1321,1121,757,1148,1055,856,884,980,932,1084,1010,876,1130,1127,1096,1235,1079,896,1092,871,953,1089,990,948,913,1011,1163,892,988,1111,923,615,782,776,660,807,1045,495,738,426,836,934,1006,695,541,404,999,867,527,841,380,809,1144,896,1051,1007,1162,973,1039,1411,1308,1447,1164,1589,1484,1438,1512,1599,1556,1550,1352,1006,1020,1221,1392,1378,1070,1089,1156,1117,1031,1076,1394,1079,1315,952,1192,1159,1230,1225,1172,823,786,1136,1248,1214,1442,1446,1166,1392,1369,1166,1270,757,1308,1221,1107,1090,1047,1107,1392,1169,1213,1151,1402,812,1366,1350,1020,1230,1374,1104,1397,1354,1062,1172,1169,1032,1210,1241,1139,1039,1166,1250,1027,1205,1037,1292,1162,1181,1138,1286,1181,1002,1395,1278,1405,1343,1126,1127,1163,1210,1082,1287,1163,1309,1101,1149,1176,897,1239,1165,1318,1075,1152,1064,1069,1446,1087,1229,1288,1077,1219,842,933,1030,1158,1109,1187,868,887,1262,1124,902,1321,1283,1211,1207,1319,1047,1091,1092,1074,1081,1148,1222,1228,1163,1061,1290,1142,1240,1082,1206,1211,961,1079,1179,1190,1074,1275,1105,938,1040,1233,1211,1086,1155,956
,1010,754,796,1267,1238,1208,1243,911,831,923,909,749,871,1064,1280,1019,1263,1260,1274,1479,1391,1195,1283,1134,940,1126,1049,1087,1210,948,1212,1112,1224,1020,1224,1201,1126,1185,1099,1084,947,1124,1375,1183,1085,1161,1144,1306,1056,1219,1171,1394,1280,1347,1200,1398,1240,959,1175,1092,1286,1299,1300,1288,1241,1170,1147,1112,1034,1186,1233,1205,1033,1133,903,1049,1058,1245,1040,1097,1153,1209,1202,977,863,960,1024,1171,812,955,846,942,1250,1223,1046,973,1008,888,1278,1140,1143,942,1306,1390,1264,1450,1149,1127,1248,1363,894,1118,1351,1355,972,1176,1170,1148,1218,1123,1092,1041,1082,1208,1239,1322,1273,1159,1241,1150,1142,1070,1154,1002,1168,1124,1024,1167,1130,1124,1090,930,966,1310,1171,1313,1176,1120,1115,1162,1203,1267,1210,1251,1367,1119,1362,1087,1248,1270,1159,1158,1025,1161,1003,1157,1067,1033,1089,1179,1027,1200,1199,1199,1022,847,1121,1156,915,1246,1312,822,1349,1359,1235,1127,1165,1087,1014,857,885,996,978,1080,915,878,966,907,884,1068,1230,981,891,1116,923,1033,931,1096,1102,1176,985,1239,1072,1102,1259,1218,1091,1078,645,861,958,1297,896,698,962,1311,1113,1155,1161,979,938,773,743,1140,1107,770,1185,1217,1383,1109,1221,679,627,726,795,802,789,859,823,1272,816,1051,1038,1256,1279,851,1072,1326,1610,1290,1255,1077,1190,1253,1240,1219,891,1227,1067,1279,1117,1166,1002,1026,971,1068,969,1029,1120,924,1161,822,1092,1066,1116,957,1283,1284,1238,1246,1293,953,1150,967,1133,1193,1042,1011,1149,842,1011,1075,1160,972,1010,894,913,1011,1221,1190,1250,1289,1174,1006,1515,973,1090,887,1285,886,979,1241,988,1076,785,1195,830,956,1374,1066,935,1628,1222,1081,1043,802,749,677,641,912,918,1181,1054,1396,1397,1270,1274,1379,1236,1196,1159,987,1087,952,1326,1170,1090,895,1039,1119,1290,1213,1634,941,451,964,1223,988,1060,1057,1093,737,1106,1320,1401,1012,1076,1130,1157,1152,921,1143,937,729,765,597,745,1152,1202,896,1029,893,957,1321,1367,1085,1108,1124,1022,1344,1063,1058,1198,1159,1066,1157,946,805,752,1113,1079,870,814,706,634,665,650,928,1095,930,1178,884,1047,703,763,1020,1193,1225,903,666,995,1174,1064,1073,917,1035,1169,991,954,965,956,846,795,827,901,832,695,777,1040,1127,1107,1064,621,894,1056,1282,971,1068,1126,768,876,868,724,824,754,706,1135,1260,1272,1080,1268,1088,1015,1305,1088,1096,981,1269,1023,992,788,1053,847,980,1083,1033,842,1214,923,1190,1276,1052,1365,1192,998,896,1289,1137,1048,1152,1142,1157,870,1181,1187,1073,987,1131,1237,1279,1004,1223,1194,1111,1174,1141,1360,950,1109,1269,936,1090,1423,1252,958,1153,977,1224,1335,1275,1121,609,764,799,1280,1109,801,1196,1253,1104,1218,1139,1022,937,1219,1089,1147,590,992,1203,629,1038,1106,1092,1150,1058,1196,899,670,1051,1442,1244,958,1123,1269,1187,1328,1305,1322,1369,1265,1326,1312,1058,781,1144,1430,1333,1130,927,1095,1117,898,987,1171,1257,796,1141,1123,837,445,1040,663,672,1125,964,680,975,985,943,884,893,887,1204,919,619,1039,1015,1151,1163,901,589,659,721,582,833,608,873,910,843,1125,943,511,578,611,723,872,1127,874,990,930,484,456,523,552,495,651,687,674,777,819,793,842,965,987,822,720,961,1079,931,919,1042,1125,958,950,1110,1022,659,646,662,565,650,543,528,634,629,761,648,710,664,683,681,758,776,1128,902,867,827,685,668,667,664,366,408,367,319,523,293,366,501,413,505,343,599,569,349,535,445,784,977,872,930,880,1164,1016,796,931,906,714,699,708,880,797,784,781,702,899,867,853,1005,975,835,784,855,887,870,881,901,880,839,621,1030,712,642,897,944,752,821,831,984,745,883,681,854,822,750,870,992,966,964,818,1005,852,991,918,980,1056,914,970,905,947,977,921,1074,835,1060,944,912,998,829,975,889,1013,1041,925,976,870,924,999,102
6,996,995,871,1034,1129,761,1e3,972,817,619,617,1004,886,963,831,865,854,785,791,779,845,820,878,799,840,763,772,744,796,889,789,976,930,916,829,841,951,704,734,828,874,780,685,747,829,839,747,603,847,687,729,801,843,717,847,812,1015,1018,1072,966,947,1094,782,916,818,945,883,902,884,764,761,823,843,888,813,789,893,768,812,826,1050,1004,801,847,1021,974,1031,858,643,783,716,903,759,556,907,904,789,922,876,781,919,794,911,862,907,1003,888,810,853,1034,801,1030,656,924,976,706,610,697,820,862,745,1007,676,634,918,870,881,774,1179,966,959,912,771,697,852,694,676,737,808,1077,777,838,801,913,856,805,849,695,632,646,627,666,615,645,800,979,857,861,1187,1035,899,1110,973,793,837,830,822,760,792,1071,930,788,814,924,712,772,712,781,881,1077,877,655,873,761,762,795,761,834,946,847,991,706,742,707,651,721,607,1075,1021,837,782,764,878,793,1106,582,857,806,838,913,868,796,621,984,854,782,612,745,852,1092,845,887,720,915,865,877,785,959,851,764,914,1005,942,824,1020,1046,816,963,836,1027,944,852,833,974,981,984,704,738,746,735,707,669,642,648,885,1052,1007,652,674,863,803,909,935,915,1064,799,1167,805,1033,1017,1070,863,1005,902,880,947,986,1024,935,1063,996,947,844,964,1060,1103,1020,900,1085,976,933,944,971,980,1020,1043,1071,942,1130,1035,864,869,1033,942,766,1094,1080,787,895,1038,765,791,1038,1104,1029,951,814,909,892,867,919,1089,867,1094,1033,940,938,1031,934,1036,838,884,1090,704,711,839,658,847,864,893,1055,703,908,802,908,1004,926,968,955,775,1039,801,693,809,890,833,834,869,1033,565,858,763,553,791,855,822,853,1056,1122,869,710,1042,903,864,554,947,759,855,897,873,1053,907,922,847,719,902,860,758,868,860,1009,861,1107,860,945,1074,979,938,1067,763,669,736,853,995,978,1095,831,787,823,908,815,784,808,881,965,874,915,916,907,723,749,946,759,952,796,1056,766,887,693,671,719,639,759,680,859,961,729,822,687,829,748,819,746,741,822,853,990,617,635,587,616,914,963,923,926,885,834,876,894,767,1093,970,1001,983,894,1019,835,773,1100,893,981,1137,875,982,1023,973,891,825,814,904,772,917,1085,702,721,708,840,940,617,872,940,902,764,997,1005,1044,895,858,966,830,895,1121,1009,829,874,892,778,1003,1119,902,953,1058,852,684,587,816,903,765,747,518,494,976,866,1118,886,993,885,879,957,886,870,1063,752,881,830,958,803,968,725,804,706,883,1005,1024,954,742,862,915,930,950,1061,994,1044,1059,802,638,740,923,908,899,943,966,882,935,883,945,914,908,952,965,926,861,789,894,912,986,951,1018,958,924,996,910,838,990,940,954,894,825,909,849,856,972,1046,924,1008,1076,939,867,1e3,737,680,730,990,640,780,989,872,855,591,669,572,691,685,770,726,1068,978,734,777,836,980,797,802,915,781,940,980,884,868,972,987,811,715,805,764,945,979,864,813,864,873,755,890,841,729,741,854,690,738,832,780,864,812,981,816,713,863,925,933,1085,966,991,956,632,829,855,861,763,729,877,870,886,803,694,826,662,875,817,781,1005,886,1067,883,930,844,915,988,984,1040,593,411,572,543,553,934,951,659,452,570,505,905,828,709,837,864,968,630,584,668,898,838,678,937,1161,1207,1028,1143,1053,1002,682,800,1027,1088,1156,1117,702,660,627,605,622,652,670,607,601,603,588,663,955,658,763,761,899,580,838,833,835,858,954,726,941,874,786,696,801,755,690,669,685,602,768,715,1062,534,612,452,469,493,489,465,558,602,557,699,902,1163,1163,940,937,752,957,982,583,778,1220,1090,1141,1102,844,1172,1027,954,980,854,1112,1015,971,1139,752,1124,1182,384,476,1021,948,1159,941,956,1164,1239,1046,857,875,991,951,1106,982,895,1152,1202,1087,1032,1197,395,368,1210,1050,1051,1107,974,832,1040,1065,687,1110,1183,1106,862,832,1259,1063,1048,729,971,981,714,660,805,804,692,827,
824,1027,851,625,812,443,477,1037,977,435,817,379,689,951,1085,637,623,452,967,982,690,526,422,946,1005,767,477,397,984,887,546,847,412,696,967,737,716,1081,651,651,451,916,1213,991,1005,1106,1143,610,1056,829,729,1039,972,892,961,720,684,1022,747,647,1121,851,984,1012,989,1026,1087,1280,1091,1256,941,1009,1086,867,909,1149,843,973,1158,1193,919,1078,763,1364,1326,1258,1266,1036,1078,1307,1038,1034,1042,1172,1160,1124,1281,660,746,913,1211,1041,437,542,1092,844,1513,1475,1507,1454,1440,1447,1411,1505,1521,1663,1187,1446,1487,1392,1496,1576,1404,899,1423,1378,1356,1192,1447,1061,1442,1315,1317,1541,1309,1418,1050,1301,1335,1393,1223,1259,1172,1411,1470,1405,1386,1327,1462,1566,1487,1543,1576,1141,1412,1570,1536,1327,1367,1422,1174,1501,937,1436,1368,1510,1211,1043,1335,1253,1439,1184,1583,1404,1481,1652,1501,1577,1551,1529,1174,1569,1488,1506,1364,1609,1534,1610,1558,1448,1323,1446,1582,1401,1573,1505,1380,1334,1372,1399,1365,1308,1424,1453,1460,1451,1408,1447,1271,1658,1519,1353,1505,1693,1471,1447,1223,1453,1514,1501,1461,1214,1509,1570,1379,1483,1497,1090,1201,1543,1501,1252,1461,1567,1531,1494,1272,1143,1393,1370,725,966,1446,1364,1162,1175,1118,654,829,959,986,1158,1295,1086,1043,770,926,510,239,1191,1311,1435,1285,1236,1425,1605,1571,1459,1494,1475,1280,1518,1455,1413,1413,1467,1429,1475,1385,1627,1486,1324,1141,1136,1025,1219,834,1057,484,1078,1039,1006,881,862,1106,1088,1068,607,1208,1e3,1149,990,1042,1029,984,1155,1255,1053,920,607,1060,1076,1113,1091,1033,1082,1215,1146,1107,987,1327,1081,1033,1032,1080,1275,928,1106,870,867,514,849,1235,971,953,972,891,828,1091,950,1115,971,901,685,809,931,691,999,892,1293,1109,1030,980,1264,977,945,932,994,1281,1224,1103,836,1146,954,1407,1044,896,1246,1195,1196,1131,1146,1036,1044,1020,787,943,895,1002,1217,1054,1082,1090,1126,689,1137,941,1272,1196,995,1097,1142,1295,998,1108,1200,1153,1110,815,1377,1058,1250,838,1129,1244,910,772,870,853,1169,1074,1098,1358,1291,1040,1146,1171,1201,1332,1197,1174,991,1254,1083,1046,989,1038,1009,1008,1056,1222,1076,810,1116,989,1002,980,1201,963,821,629,920,881,980,1114,1034,961,1368,1134,1103,830,1062,1215,911,881,885,868,1056,1268,1184,1082,999,1122,1019,913,1188,1114,839,1136,1153,991,975,980,817,853,678,1015,1088,646,717,1238,1028,1011,1197,1056,1090,921,1192,1203,1302,1058,1272,1143,1018,1111,1196,1093,1190,1063,1100,1148,1136,1084,1158,1209,1119,963,989,1052,1108,808,990,913,1090,1268,1147,832,849,1241,966,674,1110,1325,1089,1130,1175,868,1207,1315,1267,1201,1096,773,1113,836,1015,1122,832,881,1037,928,942,1026,1139,1179,1037,1035,1382,983,1138,1130,1130,1007,1189,1293,1068,1100,813,1022,1231,1173,1054,1328,1197,1334,1154,1102,1036,1005,1079,1243,870,1181,1102,735,1100,1324,1274,915,1208,1166,1216,612,1025,1283,1235,1169,983,1136,1247,993,1121,1162,1115,1082,1097,1113,1201,915,1070,1130,1065,1081,1378,1295,1361,1295,984,1217,954,1079,1094,1212,1085,1201,1083,834,937,939,1247,1327,866,956,760,1235,1008,1119,1130,982,1177,1121,1354,1077,1241,1220,1141,1286,1156,976,1060,1102,1201,933,1035,1051,1181,1098,1029,1036,1050,924,1112,1123,1081,1308,992,1031,1082,963,1030,1176,1154,970,1005,1230,1074,1207,1251,804,1112,1132,1146,1108,1042,1138,1086,1163,1221,1265,1077,977,1114,637,738,1225,1059,1027,1098,1031,926,1026,994,1164,1040,1211,1084,1214,951,1058,1112,1245,941,979,933,1199,1010,978,1063,1098,1123,1144,1025,1123,1078,1056,987,1232,970,833,1280,880,852,907,1270,903,983,1191,1155,1265,1134,1250,846,1187,1146,1278,955,808,1153,1207,1244,867,949,1267,761,1280,1161,854,1031,1262,1019,938,1070,1174,967,1139,1088
,1209,1048,1241,1350,1046,1199,1134,1103,1150,1188,1269,1203,651,436,771,687,690,1026,1022,1107,952,837,941,1079,1202,1004,1086,1008,1246,1165,811,1156,1182,1235,943,873,1191,1113,1006,1016,947,869,1064,1083,1206,1187,1145,1110,1245,1216,1237,1199,1102,1205,1031,1367,1202,1167,1049,1255,917,1165,1163,1232,775,658,965,1084,1019,1056,1183,1054,1126,1287,1167,1409,1257,965,1080,992,1319,1124,1078,1117,1071,1025,1198,745,719,983,982,932,859,938,910,860,884,823,1114,1084,715,866,1057,1046,1227,837,669,667,575,801,684,803,822,748,858,835,971,745,523,544,787,707,954,1044,712,612,453,498,489,652,612,627,772,697,737,869,904,998,1080,995,752,779,1106,1023,911,878,1143,961,1105,956,589,683,906,992,618,570,564,783,761,751,630,682,706,995,1128,737,619,681,706,567,347,541,738,1016,850,1101,931,1131,867,1153,558,819,1002,995,987,868,888,901,939,793,892,805,940,843,773,905,872,814,785,1048,823,749,699,759,837,1024,1022,814,967,851,763,814,838,884,967,942,668,954,1024,924,551,593,910,815,577,714,707,962,854,1014,853,1072,801,957,966,986,788,884,807,879,842,873,869,910,828,1001,877,981,880,903,902,965,794,662,734,695,644,707,640,748,823,792,925,852,1029,827,891,1119,748,1133,1117,657,882,1037,1032,779,801,883,857,954,969,1e3,682,754,712,972,798,592,952,931,961,938,936,924,1023,852,1005,900,844,1041,1095,851,1002,834,810,802,929,1132,1074,834,999,799,692,632,779,976,698,685,493,625,1029,924,1004,971,857,1018,922,1039,733,997,926,798,803,948,777,925,931,712,856,899,1016,966,966,971,893,949,939,1027,876,855,964,1120,1068,651,682,682,1034,952,761,977,936,930,900,936,837,873,884,931,916,865,898,889,850,1006,925,963,962,1021,950,1043,869,1017,870,869,969,979,945,851,1003,935,924,1080,868,935,918,962,921,853,820,658,661,900,721,1007,779,869,656,588,724,614,736,712,792,825,945,883,676,790,818,899,834,1105,866,838,793,945,918,807,1088,913,781,772,671,887,964,816,991,818,937,1015,745,984,947,712,920,739,650,812,794,809,998,770,979,864,836,1071,895,1027,1040,1089,962,802,657,732,866,863,556,698,846,972,715,855,837,946,733,755,836,771,1078,916,960,947,938,999,964,1023,972,641,923,787,880,1131,1237,1035,1120,1062,1051,733,809,1020,1073,1171,1136,794,655,618,632,608,593,607,863,937,886,851,924,742,953,740,927,735,630,707,812,805,514,589,493,476,464,495,494,593,551,571,664,896,1165,1154,937,978,920,1e3,590,835,1199,1156,1011,1011,1219,504,1101,871,1048,862,880,975,948,1066,1013,884,1105,1188,1e3,1034,653,1076,1123,1195,1149,1031,913,1139,1057,1006,975,987,1073,1077,951,931,1126,1046,1100,886,1126,369,664,1174,1126,816,985,984,1180,817,919,1142,1137,949,1025,843,1006,1138,735,793,794,628,795,819,1029,1048,625,618,440,884,905,906,564,835,374,832,1110,534,756,406,888,930,901,574,845,404,732,1040,1090,1096,1001,858,1156,1159,899,959,962,1082,1027,1065,1054,815,974,959,1106,951,1166,1066,950,1197,1125,605,1300,1211,1126,1388,1308,1314,1078,1038,1077,1191,1394,1317,1567,1397,1606,1663,1550,1590,1576,1488,895,1092,1165,997,436,518,1200,867,1458,1460,1408,1469,1424,1411,1316,1415,1499,1346,1457,1576,1478,1323,1403,1419,1501,1425,1402,1355,1595,1483,1260,1501,1635,1466,1455,1295,1337,1326,1546,1398,1562,1338,1431,1311,1058,1421,1508,1292,1454,1493,1478,1486,1170,934,1248,1213,1142,757,298,1079,1199,874,1141,921,829,985,663,1196,831,1230,1220,786,985,755,1215,1041,1117,865,1063,916,1075,1125,1092,1135,892,933,959,1145,948,1205,1032,1098,1163,1035,1104,1120,960,1144,1187,993,1031,1147,1381,1312,1194,1181,1300,1194,1148,1210,1368,1073,1259,1339,1147,1276,1410,1317,1121,1088,1260,1326,1097,1155,1388,1240,1076,1174,1171,842,562,1178,100
3,1187,1100,1094,981,1080,657,661,649,622,747,564,578,560,651,662,602,614,633,680,629,622,632,647,556,578,591,650,627,621,672,792,875,917,785,764,929,815,954,919,855,773,913,902,923,882,905,805,769,946,807,966,900,860,743,920,904,938,890,920,797,808,956,812,950,889,833,721,908,887,939,904,916,798,821,960,795,943,890,802,735,907,898,933,922,919,798,808,965,794,942,873,819,722,918,884,945,904,928,875,932,920,914,905,886,834,914,899,867,1004,1047,1056,1049,1054,1051,1066,1036,1070,1075,1069,1084,1092,1103,1093,1128,1141,1130,1111,1150,1126,1157,1128,1111,1080,1148,1089,1113,1098,1115,1103,1104,1119,1109,1100,1077,1108,1108,1125,1085,1104,1084,1139,1125,1108,1138,1094,1138,1106,1141,1119,1136,1111,1121,1109,1122,1106,1084,1115,1096,1108,1105,1106,1111,1123,1091,1112,1119,1086,1132,1126,1122,1115,1105,1118,1112,1099,1133,1270,1358,1388,1343,1393,1330,1345,1349,1424,1473,1444,1354,1354,1349,1353,1353,1352,1380,1022,983,966,917,895,943,970,870,857,826,851,889,880,871,890,863,875,824,885,856,834,805,857,862,885,849,851,883,845,837,862,815,836,812,1141,1025,1210,1030,1124,864,623,828,712,746,927,1041,1262,1245,1153,1084,1035,1354,990,1157,1020,953,949,1118,947,996,1111,861,1215,982,941,1106,1055,1061,1178,1227,1145,997,1134,1099,1016,1222,1078,1189,1240,894,1057,1046,1228,1202,1165,1049,1045,1015,1151,1015,1031,1255,1199,1119,1309,1013,938,898,1182,1236,929,925,994,1140,1090,1233,1135,793,1124,602,1188,1328,1099,1027,1154,1223,1101,1046,1250,1257,1131,1148,1229,1164,730,1123,1188,1174,1165,1246,1216,1321,1279,1260,713,694,989,1027,848,896,960,948,856,878,847,1109,1052,681,932,1079,1073,1249,906,612,698,650,654,797,642,868,903,815,871,512,556,668,751,909,1080,531,460,492,510,624,628,638,751,724,767,742,1021,1069,1050,971,766,738,909,811,1051,1021,1168,872,582,819,857,566,443,669,880,662,812,685,739,1081,969,667,755,576,391,424,671,934,533,906,949,621,964,985,860,1034,820,1100,931,925,743,856,858,741,1135,812,878,837,869,868,908,835,1012,870,978,878,915,894,966,804,658,795,845,687,706,689,687,694,747,833,842,1032,821,892,1119,743,1140,1123,681,903,1022,1039,775,779,862,886,935,990,1045,686,757,720,960,763,604,940,941,968,941,937,936,1023,860,990,885,796,1058,1111,835,1016,823,808,830,961,1112,1116,804,992,791,686,615,784,975,679,665,471,633,1019,918,976,1001,842,1031,931,1038,721,1004,920,794,804,944,770,909,957,704,838,908,1021,955,926,969,913,927,957,1036,862,844,960,1102,1037,664,683,688,1018,936,740,993,942,961,911,924,865,892,909,914,954,843,890,891,865,995,927,947,946,1031,958,1036,891,1003,871,861,958,988,955,864,1005,923,932,1082,838,947,880,989,940,843,829,654,666,882,744,1e3,783,859,636,631,727,624,736,736,780,838,949,882,685,767,845,909,825,1088,860,841,795,949,894,830,1092,920,802,766,680,887,930,802,993,798,941,1030,730,1012,938,721,920,698,645,801,786,801,1019,798,968,907,848,1083,912,1014,1054,1083,961,791,674,716,882,893,558,712,815,958,724,839,828,945,744,745,861,789,1076,917,940,944,951,999,958,1023,1086,1307,1055,1123,1108,1163,837,623,1006,1118,1085,1106,885,655,637,653,626,701,845,881,869,932,812,1031,714,819,899,722,1073,527,609,449,465,503,489,467,560,480,617,449,772,1102,1147,933,907,976,701,799,941,1226,1101,896,1225,998,839,915,973,1002,1119,992,853,1202,1260,1184,1060,1011,1081,1133,807,1062,971,959,1094,989,942,1135,983,966,1215,1047,1e3,1036,949,613,396,1156,1059,1115,684,1110,1241,1027,943,906,1100,1115,825,621,809,784,658,808,1025,1182,583,667,419,901,895,708,1122,1020,1054,1180,550,733,396,899,1073,621,645,439,918,1203,985,1007,1111,1149,612,1278,1140,973,656,769,1023,1
811,935,864,997,883,908,900,924,616,654,845,657,709,698,663,710,674,1120,605,1017,737,708,866,444,976,1080,993,1052,904,879,1219,1073,883,1264,975,756,945,973,1006,1066,958,887,1215,845,941,1120,1018,913,1122,948,993,1126,829,910,1185,798,703,793,698,712,805,884,684,811,434,620,988,1075,651,648,457,913,1064,842,1030,1096,1121,602,1112,1174,902,707,825,961,1086,717,978,928,877,893,792,1042,1121,870,747,1040,1024,1222,569,681,684,887,701,1054,956,747,1063,1219,793,589,821,492,732,693,548,448,930,661,833,761,551,828,784,826,899,830,465,860,752,628,590,514,837,600,683,604,723,812,761,446,672,688,785,632,656,524,514,911,600,643,637,696,558,737,503,632,958,624,882,701,754,603,727,710,796,763,791,770,566,605,439,639,1015,875,837,811,841,834,814,822,1014,913,967,851,932,884,889,810,742,738,696,711,679,740,733,933,764,589,820,902,737,935,791,484,732,1116,1162,890,972,581,918,1225,1070,889,1271,971,756,945,972,1004,1069,963,888,1233,851,964,1113,978,882,1120,987,1031,1068,834,920,1126,750,720,804,652,764,782,956,750,472,412,1040,721,1074,633,628,431,943,1088,893,1050,1074,1135,604,1184,689,917,911,1050,661,673,1126,960,688,977,981,944,890,892,892,1208,923,638,1027,995,1135,1086,871,595,658,783,791,816,871,751,1005,565,774,830,882,747,945,1042,870,826,829,943,1022,1007,921,589,893,932,536,631,623,778,758,675,881,1061,769,661,570,362,498,808,750,597,564,742,603,918,711,544,482,486,857,650,917,790,817,721,720,765,851,957,999,884,731,707,946,1174,1178,864,796,793,842,725,752,812,917,971,903,895,940,868,826,793,807,950,933,789,618,577,746,838,707,689,685,680,717,631,758,675,1064,742,1140,1123,676,907,1029,1035,781,799,865,868,952,978,1017,693,759,723,958,775,597,952,930,968,933,933,929,1021,858,1008,888,815,1064,1102,848,1013,838,806,818,938,1109,1086,811,999,795,684,624,784,976,693,668,487,622,1020,924,984,988,843,1027,928,1045,727,1e3,908,764,813,945,774,898,940,710,852,901,1017,955,930,968,915,939,948,1032,874,850,954,1106,1047,660,683,686,1035,949,753,984,931,959,901,930,863,893,899,927,936,854,899,889,856,1e3,924,947,958,1037,937,1040,885,1011,869,859,968,981,944,857,1001,917,930,1072,848,941,894,974,936,843,832,650,664,890,732,1e3,782,860,648,597,724,623,731,725,789,833,941,886,678,781,834,895,825,1087,873,830,785,948,907,807,1106,916,775,764,698,888,942,805,987,823,946,1021,740,1008,948,723,924,696,667,802,790,803,1016,786,979,906,848,1084,905,1017,1051,1093,959,787,655,716,865,888,552,705,835,964,729,842,829,947,742,754,850,786,1079,914,961,950,941,996,957,1022,1078,1305,1064,1126,1108,1165,837,617,1007,1116,1084,1111,889,670,682,639,678,941,957,903,872,782,661,894,743,863,686,906,694,584,552,494,883,1107,1152,1012,925,837,677,814,1277,1085,990,955,1138,953,838,1121,812,1095,1238,1101,1003,1034,952,918,945,976,1212,1036,1058,1044,1088,849,1063,395,714,1152,1155,847,851,893,1223,857,955,1028,739,721,808,663,777,781,1209,1212,596,676,458,906,1145,995,1075,1180,514,761,408,884,925,899,575,845,402,731,1134,1067,1065,1116,754,1286,1041,892,1066,1075,1005,653,697,1095,981,679,975,982,952,888,860,867,1202,889,637,1012,994,1185,759,659,631,752,850,990,1070,978,785,1029,743,805,1145,726,600,807,770,690,888,704,731,745,592,930,817,869,817,778,806,891,1123,914,740,839,749,821,574,795,643,787,778,808,848,956,881,808,802,831,741,778,811,938,948,971,897,911,858,881,833,869,905,715,1004,830,848,465,903,1100,1144,1005,972,1236,1114,998,1087,1204,879,925,1003,974,956,1067,829,984,1192,1037,993,975,1268,1155,1026,1160,1081,959,972,1001,912,1125,943,1060,1033,977,947,1100,712,843,721,699,849,868,1079,631,636,393,955
,892,665,824,416,614,1071,949,1019,973,1171,1046,875,1168,1083,766,720,975,989,973,833,962,957,834,896,811,1130,1067,746,841,1061,1011,1148,579,662,718,817,939,836,958,967,994,1e3,879,1110,925,584,709,571,633,753,763,520,731,702,708,854,995,955,1098,825,1040,809,809,1083,762,639,675,642,657,597,635,604,671,703,867,1009,620,996,910,707,700,613,960,828,860,736,773,737,697,590,565,613,628,664,666,643,740,892,997,753,773,966,799,741,1024,967,807,852,441,882,779,679,653,584,580,736,532,711,797,765,695,631,592,589,735,757,656,825,755,806,526,491,605,692,809,792,822,875,844,875,863,917,825,989,857,989,872,918,889,971,809,675,731,703,650,712,630,746,845,896,589,595,1028,885,779,851,434,1015,1067,1044,1065,970,740,1277,1205,870,920,988,994,959,1067,826,979,1207,1108,1054,919,1122,1006,1212,1096,1185,949,973,1052,931,1130,947,1086,1113,864,833,1137,700,786,796,633,795,837,857,814,429,537,1044,766,950,777,492,386,980,998,916,930,1185,1113,622,1252,726,769,785,887,721,852,832,755,1166,943,871,1003,847,886,817,939,1195,1274,976,958,1223,911,970,974,1039,1e3,1127,1107,1057,1071,1272,1244,1173,748,713,982,982,931,861,939,910,861,886,824,1115,1073,717,865,1058,1059,1063,1035,610,687,645,841,909,996,879,676,1030,947,839,1005,1021,822,704,856,1079,952,1126,714,780,627,707,719,543,800,703,750,659,690,839,1323,958,657,640,448,628,467,399,554,747,716,881,972,783,771,783,870,927,880,867,735,944,641,736,729,820,795,806,717,670,901,850,1008,1135,833,772,875,778,1121,912,845,1060,968,857,1057,1020,902,949,883,929,932,950,933,1024,818,1003,932,883,683,977,817,817,839,919,723,767,1027,766,922,1075,984,828,885,1028,1075,819,790,848,929,824,783,859,884,965,871,928,897,925,716,751,941,848,1080,1137,812,977,982,979,820,779,909,937,857,1030,1013,671,692,716,924,881,579,907,973,960,831,943,985,1054,894,954,882,866,929,1122,927,945,875,865,817,978,1125,960,936,1074,899,665,582,756,971,755,761,485,587,983,830,1029,871,1021,949,885,1002,766,895,988,728,892,854,859,813,951,725,886,682,955,892,1031,1010,828,938,893,908,921,972,1011,1104,1056,866,681,781,1044,900,837,949,962,855,911,870,929,878,918,906,908,948,889,800,898,975,980,932,984,938,938,1098,874,902,888,885,1033,939,887,863,898,907,918,1042,895,1045,1068,845,870,833,653,646,714,1021,716,877,896,904,822,634,672,586,736,676,744,739,1084,892,742,761,858,983,785,917,853,774,912,978,937,874,1011,915,827,692,756,802,950,914,926,851,908,908,709,948,879,720,895,708,672,780,871,736,909,768,1011,863,785,915,919,971,1052,1002,1018,874,629,782,824,843,672,707,786,862,745,840,821,887,653,835,831,738,1022,830,980,977,931,908,982,984,1015,1022,1126,1077,1033,1308,1034,1137,1126,1183,826,609,1022,1103,1110,1078,958,712,628,644,637,576,666,629,826,848,967,888,844,829,889,790,747,820,669,721,744,862,447,938,1091,1010,1072,901,819,879,732,772,883,1229,1112,987,1135,950,1062,1021,885,1142,893,1246,1021,1160,957,1096,951,947,1119,943,955,940,986,1211,1133,1043,1054,905,725,387,1096,1095,1165,731,1072,1024,1030,916,898,1294,1228,1014,853,891,985,1002,1123,966,894,1089,731,929,1020,776,926,963,897,888,695,862,723,654,850,935,1117,701,826,418,555,1021,1042,497,737,426,840,925,723,750,799,427,355,985,872,565,826,396,552,1039,630,843,395,735,934,756,706,829,421,559,1057,1008,1111,1146,614,1279,1413,1100,1024,1005,1135,1067,662,699,1062,1008,704,992,930,949,835,874,881,1163,996,594,1011,1025,1126,1157,918,590,656,708,922,857,910,873,1144,868,837,933,908,1088,748,902,1031,941,1127,964,583,700,540,485,765,755,705,685,823,1129,817,627,662,565,392,431,592,982,1075,815,773,889,966,788,990,964,850,951,
749,717,902,976,817,831,911,984,1050,838,796,857,774,1076,1039,784,1012,723,952,826,840,872,936,953,863,915,888,986,899,959,891,988,712,714,841,682,700,732,656,694,682,1051,806,1119,1076,827,1018,873,930,867,735,966,821,888,1075,940,680,841,716,975,686,843,997,882,825,964,904,862,916,882,1012,844,828,1122,1105,830,946,876,756,911,937,922,1124,1005,953,658,520,692,807,976,682,590,413,893,1027,1070,908,1037,778,911,931,939,792,1102,913,771,845,1011,746,1027,894,812,820,901,1048,1056,1012,751,876,952,921,999,995,791,983,1081,834,738,733,833,909,973,804,1031,911,1016,941,1033,878,874,983,1031,1018,871,855,929,896,940,975,958,973,1010,958,980,834,979,835,905,932,830,973,849,870,949,954,1026,1011,1040,841,1017,1017,823,826,749,803,797,699,1064,731,765,589,653,637,632,638,730,729,1076,894,902,837,762,943,885,808,1030,906,991,876,848,886,858,938,811,692,790,684,864,952,850,932,813,836,950,791,869,860,745,944,711,728,811,789,889,973,889,916,782,763,990,806,1001,1032,976,1082,705,785,720,861,888,645,794,733,957,674,731,694,814,909,688,836,992,978,996,805,974,1066,947,1007,1036,1229,1205,1020,1140,1049,1003,677,808,1022,1092,1152,1121,708,651,628,631,592,706,925,922,888,884,809,759,892,690,874,804,681,869,746,530,1008,1087,998,986,894,604,1023,589,867,1133,1196,1048,920,1270,961,785,924,962,989,1057,951,871,1217,989,1122,980,894,1135,1075,1038,970,1024,1256,1124,1030,1021,956,897,844,1e3,1220,1050,1007,1035,951,622,396,1160,1052,1162,748,1066,1021,1013,950,922,1273,1072,1050,773,937,1145,797,716,807,663,743,788,1064,1118,659,647,401,943,817,985,433,830,370,633,1013,596,860,423,715,919,1053,687,605,445,954,1069,994,1116,868,1155,743,1255,878,696,878,994,1085,724,978,957,896,919,815,1063,1099,867,730,1039,1026,1192,951,783,654,601,714,719,645,860,920,497,559,523,769,1046,842,513,475,475,483,656,689,927,669,1101,834,1016,912,1002,963,833,919,940,842,1054,1094,902,848,619,602,674,544,568,524,869,655,704,540,583,660,675,697,682,740,880,1138,961,600,600,502,562,542,526,430,528,736,577,501,378,552,538,853,529,872,698,618,626,597,826,759,751,674,952,707,797,505,763,684,665,641,584,601,766,763,754,731,653,928,780,742,467,693,687,734,550,540,587,596,706,802,691,730,683,660,836,807,890,613,707,888,784,788,797,812,711,697,722,644,674,604,654,654,565,712,719,736,835,992,576,640,858,777,728,619,542,1014,915,755,735,743,732,725,616,687,833,776,723,773,756,627,767,599,842,692,665,709,582,878,825,806,500,794,859,727,633,559,545,1005,962,935,999,801,1046,1037,968,958,843,666,803,710,661,646,773,832,1033,897,1001,913,753,822,855,539,552,606,826,967,738,470,734,781,811,692,708,922,597,637,657,932,793,954,976,784,655,845,708,867,700,757,723,750,1007,1042,889,989,947,1029,755,836,786,762,986,567,827,874,674,600,613,911,1e3,778,756,1005,769,835,527,911,755,769,560,841,962,803,708,711,827,850,686,811,918,804,985,999,984,786,786,947,851,865,895,897,964,809,788,769,893,828,647,834,586,968,811,895,715,809,845,882,993,1017,1024,995,800,947,717,745,946,870,717,925,790,1030,948,998,907,847,823,904,967,853,917,857,873,843,853,668,724,741,903,636,857,575,838,976,944,1020,831,696,687,721,773,1011,1055,888,455,898,840,679,629,885,888,640,950,933,1003,801,833,543,731,718,484,709,681,491,590,904,824,934,489,874,838,598,610,559,699,926,1010,1003,765,700,851,691,689,567,719,745,706,743,864,605,750,570,653,583,599,729,940,914,631,807,973,968,951,881,696,798,617,629,654,632,806,691,575,728,1071,1057,974,903,990,730,983,844,782,714,931,878,875,746,739,699,954,904,996,949,839,763,857,813,693,901,895,889,907,911,834,859,836,873,769,839,1081,83
0,795,800,848,812,736,784,885,985,911,894,953,903,792,800,821,710,675,693,632,736,688,933,778,1034,880,1080,695,1045,751,1095,1103,662,883,985,1077,755,826,858,889,961,995,994,720,728,675,965,814,582,983,975,977,910,944,917,1032,895,996,894,873,1017,1096,765,1002,860,829,786,878,1164,973,858,1030,828,660,664,778,970,713,719,502,617,1039,953,1063,965,885,1006,955,995,695,993,976,727,833,908,787,895,904,704,871,865,970,913,996,989,908,947,950,1003,899,869,991,1171,1134,773,695,742,1054,929,823,976,958,899,874,873,863,849,874,921,904,867,937,806,881,996,958,969,958,977,925,1104,886,988,868,899,1020,981,953,795,930,953,917,1064,863,1016,983,929,891,856,738,669,653,934,710,983,831,877,733,579,659,593,776,743,769,830,1012,913,711,794,813,943,816,1086,824,851,809,970,955,851,1028,882,797,764,685,872,947,853,973,811,939,1016,721,989,926,668,917,763,650,822,858,788,1010,761,974,857,842,1044,894,1033,977,1069,987,833,592,745,874,856,501,676,875,909,696,855,843,929,716,810,877,757,1107,883,959,955,943,978,990,994,1033,942,933,1151,1221,1023,1122,1053,1015,698,807,1023,1077,1149,1132,729,634,568,640,574,611,581,555,610,580,556,568,578,544,641,1099,779,830,841,974,900,864,758,886,793,836,856,718,921,959,876,859,886,735,848,712,797,756,472,623,492,481,462,494,495,593,540,1048,1095,1152,1034,924,792,756,647,678,663,852,721,930,652,760,1268,1112,999,1088,1205,879,929,1002,974,956,1066,828,984,1133,1181,935,1058,1058,1278,1073,911,1019,1015,901,1078,635,1183,1128,1122,858,962,1134,378,672,864,978,540,381,1096,1017,952,1093,1050,1008,969,371,847,1161,1047,728,1162,1054,1072,836,920,1134,1132,1053,806,937,1105,641,956,875,655,812,780,699,832,822,1037,859,828,441,475,1047,726,796,434,363,1006,705,1059,602,642,419,890,1067,595,693,398,881,897,937,511,831,378,784,1144,1023,985,1173,996,932,1156,828,1023,940,1095,874,1052,936,1025,723,1021,919,1006,1189,812,965,991,1047,1291,953,1047,950,659,780,1e3,1039,745,973,937,941,916,847,943,1160,881,690,1026,987,1174,657,680,595,808,730,1007,1064,900,1041,677,979,1139,620,618,781,796,788,853,721,959,747,893,919,862,1015,802,982,933,911,781,822,845,926,852,814,860,892,958,853,931,882,928,677,725,1016,811,918,987,662,685,608,1085,1135,998,925,893,1348,1017,938,1248,915,874,899,959,1005,1065,859,877,1198,1115,1145,1223,1021,966,1061,975,1019,1069,967,1090,936,1070,1078,902,875,1280,879,1042,703,838,730,697,851,894,1055,690,539,402,997,699,1068,619,636,432,933,1060,851,1041,1088,1139,604,1298,1384,887,632,1291,748,731,969,981,941,864,956,915,867,880,815,1124,1073,723,856,1058,1052,1216,842,609,689,651,839,965,949,693,1086,780,1020,887,1040,793,812,868,1046,1127,1092,1072,737,595,766,538,700,695,790,606,670,672,868,1099,787,521,609,572,588,568,361,475,566,914,559,744,849,692,763,559,799,736,742,731,646,635,647,602,605,714,808,883,506,738,904,766,782,740,869,699,737,915,958,569,793,917,740,735,820,807,805,878,793,802,769,656,587,732,948,1014,771,730,749,606,960,843,868,414,927,711,621,637,569,800,663,724,575,718,652,579,624,443,857,808,750,838,723,672,616,768,969,953,1025,790,705,676,767,793,859,547,821,749,687,631,552,983,905,761,755,772,665,925,776,855,486,840,812,694,548,935,1094,889,986,948,968,945,735,703,661,931,828,606,902,733,816,876,848,856,875,932,849,983,878,985,878,916,924,988,808,688,770,697,667,730,631,703,837,931,975,1135,929,975,1030,949,901,834,787,903,758,924,1097,748,738,703,815,971,623,873,913,912,738,993,985,1020,894,834,977,829,882,1129,1021,826,895,882,779,991,1106,900,968,1061,868,684,586,770,908,797,749,520,508,979,858,1119,906,1018,868,893,945,885,862,
1064,778,874,846,978,793,978,746,810,733,906,1010,1022,957,717,862,887,938,976,1045,961,1026,1048,816,630,747,919,912,916,934,973,870,942,877,947,917,909,944,970,921,851,787,936,916,994,944,1009,958,921,1003,873,865,966,923,952,894,822,924,835,852,987,1035,913,1031,1083,967,881,998,759,684,721,991,636,749,1006,860,827,575,670,585,697,696,748,737,1066,988,705,765,797,995,813,791,918,793,967,956,899,870,960,984,834,712,805,689,936,984,881,801,873,887,775,871,840,725,717,884,696,738,811,773,870,798,982,852,723,833,937,944,1072,987,960,966,629,818,845,874,803,719,860,880,895,828,667,794,665,871,807,774,1010,889,1068,855,934,865,930,964,990,1249,1120,1078,1167,1114,1018,556,882,1e3,1100,1106,1125,695,650,575,598,620,579,601,659,927,885,956,860,871,821,893,909,785,850,833,658,917,734,977,455,831,1110,1129,971,999,588,668,761,831,689,814,1278,1134,755,1156,1048,858,879,970,944,1075,1015,887,1114,1185,1037,849,1e3,1289,1248,1013,957,1220,373,465,900,1002,1116,942,1056,986,869,1057,992,967,1132,1049,1093,892,1133,368,631,1201,1128,858,817,869,1137,828,915,1137,1153,956,776,873,1065,878,773,696,800,645,821,775,1047,1087,638,631,441,882,920,900,833,441,389,1065,1009,461,763,431,842,929,914,811,432,364,1023,1048,1051,1126,669,1148,978,973,956,995,865,920,1111,982,659,767,1034,1010,736,971,959,936,900,828,933,1174,905,677,1030,955,1157,717,673,576,768,882,838,990,992,786,1011,865,1014,831,1054,1062,636,659,750,855,708,571,283,750,712,971,850,733,837,778,676,776,916,997,922,916,806,995,1011,911,934,784,780,915,781,850,765,787,835,741,914,762,764,798,720,964,917,1054,968,787,878,824,965,918,607,705,560,735,529,731,540,879,929,847,786,1024,776,835,846,922,880,801,865,892,930,877,944,874,936,623,673,867,926,1018,813,378,697,852,1153,662,816,805,803,919,455,839,1111,1116,1001,912,1298,1128,738,1155,1039,856,876,983,937,1122,978,876,1154,1225,791,1072,1100,1091,1230,992,974,1065,1085,970,1049,942,961,1021,1167,716,1130,1031,1055,990,976,969,1004,700,859,765,621,814,1044,511,761,422,893,1091,594,653,449,895,895,1066,877,1027,1011,1165,985,971,1074,950,915,1331,791,1176,824,708,902,992,1040,735,986,960,843,907,833,1118,1100,850,779,1058,1034,1155,946,823,672,635,676,772,689,481,754,929,394,409,445,885,701,965,922,932,944,862,1050,1115,777,881,1034,992,1157,889,700,608,709,544,547,666,729,772,556,601,647,678,676,782,1142,1007,612,541,562,531,554,574,599,470,412,529,801,807,471,763,890,739,827,686,659,650,609,972,950,610,628,580,592,737,750,829,774,602,658,625,525,709,771,767,1035,833,728,460,637,518,1015,550,966,717,753,688,529,841,935,565,672,546,618,717,706,682,835,711,649,587,1023,679,676,373,549,475,863,562,844,738,708,553,880,911,923,949,581,648,683,706,695,707,578,732,569,954,864,929,835,521,736,747,988,1080,925,776,600,712,686,787,726,696,659,643,677,990,784,955,856,497,742,996,736,900,966,943,1142,951,757,727,738,742,773,570,737,701,808,640,1044,984,741,965,667,682,728,925,949,680,967,812,794,817,938,908,836,680,724,735,760,681,734,967,761,857,909,793,882,912,843,750,709,745,618,758,795,658,699,883,874,907,608,872,914,688,685,557,818,882,1005,895,878,886,946,1029,929,770,767,675,724,713,714,798,1004,891,1075,1005,1036,888,875,735,775,874,861,894,868,1001,743,854,994,800,853,817,787,844,926,825,792,862,882,959,872,931,900,926,711,748,865,684,717,729,633,728,630,783,773,662,543,647,796,797,955,747,1100,1125,715,895,985,1031,773,814,864,878,924,1010,987,750,747,739,933,868,537,963,981,977,868,935,948,1037,883,992,890,850,983,1105,804,961,829,858,777,929,1123,957,892,1037,821,690,664,721,1007,771,731,477,639
,1020,916,1077,947,933,1021,951,1031,740,972,979,726,850,877,794,841,945,657,847,855,954,878,1029,1013,938,926,980,979,892,860,1033,1140,1136,811,685,737,1055,938,818,960,945,911,888,863,881,844,908,915,893,908,877,801,914,968,929,953,935,965,910,1098,892,925,886,911,1059,950,907,801,901,968,915,1049,870,1032,988,918,845,824,757,692,668,977,670,896,834,876,778,573,678,586,753,731,772,795,1008,878,701,802,803,963,803,1044,800,823,819,974,931,896,1004,896,803,738,717,824,980,883,987,805,920,981,653,991,892,670,914,755,655,772,855,762,983,809,990,840,834,992,872,1060,962,1059,999,846,600,763,823,854,545,646,863,901,757,870,874,925,699,815,871,774,1114,853,957,966,919,981,1015,972,1037,1296,1077,1123,1131,1176,891,579,961,1065,1121,1035,963,698,642,588,614,591,607,579,569,571,577,658,875,980,910,854,915,765,867,864,860,914,874,750,939,790,707,809,803,508,479,492,475,457,631,1089,1149,982,912,892,662,624,754,756,688,826,1243,1122,721,987,979,1021,1229,1192,1062,1016,1101,977,948,1176,773,1229,963,785,926,955,992,1052,951,870,1216,1017,1147,934,971,570,385,1214,967,955,1074,967,922,1131,1050,1101,850,1067,381,707,1154,1148,848,844,884,1219,884,974,1056,1185,1178,732,988,1041,702,1058,735,827,786,632,778,810,1021,1029,685,546,407,1005,706,779,842,795,431,397,929,753,1035,476,784,359,660,1017,549,846,371,745,1079,491,765,432,887,1093,930,1181,1109,737,1099,1371,1152,1108,854,1169,1354,1082,944,1036,1121,1106,1036,1303,1139,1012,1050,894,994,1375,1056,654,691,1126,1027,645,1006,980,944,861,884,897,1202,919,619,1035,1015,1181,797,653,628,747,911,899,1107,888,761,695,1036,935,1037,1029,632,582,666,716,641,674,691,553,728,885,603,857,816,652,594,1035,981,820,771,767,923,989,817,952,883,965,744,610,787,794,740,594,685,745,946,815,1018,933,844,845,836,640,674,784,823,972,874,994,783,799,684,1001,750,852,708,614,742,1036,967,905,770,753,809,696,829,956,927,833,414,819,766,642,638,550,814,684,817,834,927,922,827,918,886,901,834,873,832,853,864,873,843,1052,864,985,873,916,891,919,789,670,1013,595,563,629,899,844,955,829,864,464,884,1100,1134,1031,1011,724,1288,1118,1005,1082,1205,875,924,992,992,957,1066,827,987,1198,1023,1073,751,1133,1250,1106,996,1203,698,1089,941,965,1059,933,1124,944,1088,1015,955,922,1150,683,832,768,703,838,874,645,834,403,717,963,757,940,483,843,386,633,1004,1076,606,647,453,900,1086,826,1043,1053,1142,635,1194,979,1101,1175,975,1214,714,682,989,1038,828,899,955,947,808,845,871,1111,1065,666,921,1053,1070,731,664,588,767,638,897,810,723,783,914,1044,666,629,811,561,402,945,872,643,888,792,797,770,897,771,740,857,840,806,891,790,811,862,854,821,804,656,808,719,817,868,813,769,694,755,754,726,886,861,766,849,783,868,658,944,884,831,771,767,848,836,730,730,670,778,752,845,842,827,788,655,876,646,737,869,870,751,911,744,842,657,945,807,871,902,947,951,915,933,890,960,830,938,899,1034,726,693,815,778,682,547,615,758,786,1102,943,818,564,498,461,871,547,683,1100,1131,937,945,969,1246,982,1002,1039,999,913,1123,944,991,1025,1008,948,922,955,630,837,785,655,769,1038,482,773,420,788,1068,471,796,459,904,1082,597,698,385,879,919,1081,912,1151,1004,1105,1144,610,1009,534,1252,1153,1089,718,1151,1214,922,901,1175,1022,900,723,837,960,1091,719,971,940,881,885,796,1047,1110,869,742,1040,1022,1229,562,674,670,860,850,553,879,908,1055,806,834,862,1027,812,1081,1061,627,630,686,635,558,752,679,757,575,492,940,878,937,917,752,827,937,916,952,1111,1058,704,765,975,741,936,869,832,1053,817,998,747,922,845,966,905,816,793,718,1002,964,791,857,623,805,808,973,915,836,955,980,924,966,1002,856,983,898,101
2,576,709,850,934,656,1066,681,958,845,694,853,824,841,850,898,812,725,683,840,868,757,677,943,981,880,864,737,803,763,744,954,1032,823,738,685,842,928,943,966,894,968,1006,831,933,946,671,558,746,820,791,851,943,818,838,667,801,739,776,719,728,1047,710,994,996,820,752,858,855,807,1037,797,706,817,641,661,761,854,920,883,885,948,748,720,1070,1031,946,994,739,858,927,808,854,883,881,892,924,877,971,896,956,895,906,873,1041,808,676,834,726,674,732,629,698,721,1014,920,1156,637,611,865,973,702,573,685,903,427,1015,1068,1018,1089,920,1046,1198,1043,921,1262,960,791,922,957,990,1054,925,876,1222,1012,990,1002,994,927,728,1029,1137,520,998,1289,1229,1138,957,979,1066,1100,826,1035,956,1045,1069,843,923,1121,916,644,781,757,678,779,1050,593,638,414,927,843,977,445,819,361,684,960,787,840,812,442,480,1032,949,985,953,1191,1150,668,1144,1005,888,989,1160,716,799,828,1101,1029,1095,1014,667,692,1115,960,671,974,973,938,906,873,884,1193,907,633,1007,995,1209,1008,824,618,609,765,935,783,810,1016,832,962,736,1045,1057,1101,889,906,862,1080,917,795,1044,899,772,593,915,731,584,501,736,735,728,670,696,1009,1126,743,581,517,598,455,404,551,622,648,706,611,773,528,633,533,987,847,1048,977,873,720,745,671,804,583,1043,909,790,999,750,896,779,788,873,925,1077,782,724,947,889,959,870,908,958,937,858,615,902,1003,916,758,689,936,1002,888,855,998,929,747,856,798,1019,855,991,935,704,801,978,778,745,1012,844,787,912,747,781,922,981,1092,787,705,942,905,983,846,942,819,879,842,869,860,912,826,1e3,881,980,876,913,897,971,800,661,1087,808,1115,1074,835,1012,872,940,868,724,971,825,894,1077,951,683,842,718,994,690,833,990,883,833,951,887,865,916,876,1029,852,840,1123,1097,828,942,881,753,919,952,926,1132,1001,974,662,531,687,804,984,680,592,411,892,1036,1065,898,1035,788,919,932,948,776,1104,901,790,852,1017,748,1043,894,813,832,896,1054,1055,1010,753,859,965,917,1003,973,801,980,1080,824,743,728,827,897,971,785,1044,914,1020,943,1032,882,882,983,1034,1012,881,862,934,907,939,977,990,988,1007,959,965,844,970,835,916,932,825,990,843,875,958,957,1025,1002,1046,804,1021,1013,816,835,742,804,810,702,1053,735,775,609,645,647,635,646,732,732,1071,889,906,838,764,938,898,822,1044,910,990,879,842,883,849,945,812,699,773,671,865,948,851,933,803,842,948,790,860,875,712,947,729,732,810,784,888,962,879,911,784,764,984,802,987,1029,983,1080,698,776,717,850,889,638,791,737,970,673,737,696,827,898,697,843,983,976,994,821,976,1062,945,1012,1038,1205,1215,1036,1135,1048,1008,691,807,1028,1086,1160,1131,742,685,646,610,631,593,663,968,926,860,901,801,941,808,724,924,708,1043,472,736,1122,1164,886,958,692,841,712,736,838,1231,1090,813,913,1014,1131,1286,1087,977,1019,988,1058,1154,1166,1082,1036,1048,1070,1057,929,1256,941,832,959,972,995,1059,889,871,1203,1041,1022,908,1038,969,990,1187,1060,1041,974,978,516,386,1189,1009,1165,709,1132,987,919,808,942,799,693,966,997,940,1091,854,829,1041,996,1057,943,1064,1020,943,909,1273,1141,832,799,1012,953,624,935,670,859,762,616,809,1053,1057,1085,966,1078,461,842,401,622,984,1073,576,699,392,859,1101,535,748,415,887,1075,673,827,411,720,969,1084,640,629,387,951,1080,992,1119,901,1059,1021,1001,1222,810,996,1317,845,714,876,999,1076,737,994,946,871,909,817,1082,1087,835,779,1067,1019,1201,572,683,774,859,730,1043,1015,954,977,706,990,1064,619,686,684,805,695,1014,1032,916,918,948,930,876,999,715,1003,947,947,774,894,837,577,894,998,911,932,902,901,1008,696,733,865,996,819,880,844,869,862,914,827,1e3,881,992,874,914,897,974,802,664,1103,595,996,880,778,524,716,1114,1152,900,965,840,1286,1088,82
0,1211,1022,845,894,988,995,1144,966,899,1201,1120,1101,1215,1031,955,977,901,990,1047,1005,1050,895,1061,1111,819,859,1126,715,738,798,647,801,794,880,592,827,470,466,1056,977,464,803,415,876,1087,561,700,441,902,1120,847,1056,1069,1121,702,1190,1310,1339,1401,1036,1111,1213,1121,831,1249,1082,472,887,849,822,1477,1439,1452,1505,1400,1447,1361,1482,1507,1622,1503,1541,1410,1572,1315,1570,1491,1582,1478,1671,1449,1416,1436,1444,1499,1351,1434,1535,1440,1169,1527,1601,1455,1444,1355,1304,1291,1550,1510,1456,1461,1230,1140,938,1174,1141,913,502,983,1293,1410,1236,1037,1033,1039,1159,1151,1248,704,1115,941,1039,651,495,757,997,840,1520,1223,1556,1446,1396,1457,1421,1397,1563,1259,1093,1455,1529,1397,1540,1453,1646,1672,1452,1582,1446,1493,1490,1563,1580,1377,1453,1431,1410,1444,1391,1448,1427,1442,1374,1571,1331,1441,1266,1334,1350,1532,1482,1276,1428,1367,1473,1491,1400,1437,1269,1677,1443,1441,1556,1691,1457,1508,1548,1189,1347,1543,1536,1422,1615,1397,1323,1531,1622,1322,1461,1101,1460,1463,1453,1421,1449,1484,969,1274,889,1042,1082,1159,1046,1071,987,644,504,562,1299,1387,1038,1122,1352,740,1115,1006,761,1491,1427,1433,1452,1421,1493,1506,1610,1510,1573,1450,1541,1429,1455,1557,1524,1384,1492,1362,1676,1137,1572,1559,1655,1511,1482,1334,1331,1633,1552,1555,1543,1384,1286,968,1225,1168,915,767,1400,1032,1408,1440,1420,1098,1576,1103,1444,1323,1555,1429,1520,1459,1529,1500,1094,1005,1392,1231,1413,1552,880,926,933,1116,929,1106,1093,928,918,1017,1315,1367,1034,1031,1032,1040,1180,1111,1001,957,1293,1290,1331,1227,1e3,586,891,1193,1428,1478,1385,1446,1341,1439,1580,1536,1381,1528,1466,1579,1413,1538,1394,1467,1356,1460,1505,1419,1150,1556,1619,1495,1441,1526,1205,1295,1090,1241,1485,1593,1554,1314,1074,1195,1170,1094,896,1380,1514,1195,1223,1411,1490,1127,1375,1410,1383,1256,1344,1423,1553,1502,1139,1451,1384,1417,1421,1310,1257,1408,1324,1476,1526,1519,1288,953,1227,1128,994,970,892,565,524,1517,1442,1156,959,874,871,992,1058,1255,1347,1208,1226,1353,1201,1549,1370,1538,1496,1567,1505,1524,1525,1426,1347,1577,1509,1476,1244,1281,1313,1211,867,1061,1311,1133,1307,1216,1023,1493,1627,1602,1601,1515,1536,1554,1363,1510,1527,1289,1454,1285,1414,1438,1198,1260,1502,1513,1533,1526,1515,1332,1153,690,1296,1335,1243,1036,1145,1302,874,851,1045,1034,870,1255,1435,1517,1389,1462,1335,1415,1590,1533,1367,1529,1470,1555,1598,1636,1486,1550,1515,1593,1461,1451,1436,1593,1519,1366,1425,1380,1678,1116,1581,1600,1464,1434,1201,1447,1527,1505,1497,1451,1415,1338,1404,1074,1252,1464,1511,1517,1566,1405,1232,1108,1049,1225,1016,780,685,944,1420,1223,1245,1570,1366,1274,1484,1574,1517,1318,1052,1197,1297,1403,1418,1267,1305,1434,1632,1549,1585,1503,1365,1367,1343,1065,1422,1275,1193,1562,1537,1213,1456,1477,1514,1583,1058,1329,1255,1049,1282,1265,1361,1035,1061,1124,1484,1220,854,901,1175,857,749,951,1612,1431,1481,1395,1454,1326,1438,1583,1525,1396,1508,1537,1475,1452,1581,1504,1352,1425,1376,1597,1485,1264,1505,1637,1464,1541,1436,1358,1426,1616,1557,1415,1312,1493,1409,1401,1325,1272,1593,1535,1446,1403,1283,973,1206,1150,1003,866,523,1303,1275,1493,1038,1040,1142,1091,1372,1508,891,820,823,1015,1152,886,1511,1325,1438,1493,1385,1445,1350,1529,1591,1361,1435,1453,1403,1552,1458,1427,1052,1649,1269,1045,1525,1421,1328,1411,1251,1314,1164,1442,1588,1211,1318,1502,1598,1318,980,1281,1280,1115,1219,1266,1337,1357,1365,1035,1335,1055,1212,1388,1275,1452,1487,1456,1407,1441,1275,1672,1446,1432,1553,1682,1459,1464,1218,1389,1456,1429,1426,1311,925,1046,1027,1062,1160,870,673,503,849,1362,1254,1451,1396,1112,1395,14
65,1579,1331,1304,1405,1590,861,885,935,1089,1161,1499,1500,1441,1208,1195,1023,1458,1100,1135,1115,1116,693,867,806,803,1022,760,1312,1399,1326,1484,1561,1432,1482,1415,1436,1413,1393,1380,1387,1391,978,1485,1490,1014,1024,1028,1230,1571,1550,1538,1312,1225,1248,1235,1231,1246,1279,1284,1240,1446,1451,1389,1393,1505,1038,825,756,1191,857,789,390,258,1305,1218,1178,1328,1318,1248,1597,1575,1542,1479,1107,1437,1215,1311,1442,1430,1441,1107,1381,1269,1110,1085,1343,1256,1294,1205,1308,1159,1221,1058,1101,1097,1189,1031,811,1218,1193,984,1109,1046,1271,967,980,937,1153,811,1051,1204,1313,1219,1067,1387,841,1235,1263,1152,1096,1220,999,1126,1135,993,1164,1084,1046,1231,1223,1070,1208,1054,1095,1069,1170,1130,1214,1272,1587,1243,1228,1055,536,1288,1311,1285,1031,1031,1038,1033,1030,1136,1050,1270,782,1011,1126,1133,1117,1286,1202,944,436,824,864,770,1294,1459,1456,1434,1457,1393,1447,1588,1515,1387,1445,1474,1545,1459,1415,1117,1474,1468,1493,1407,1528,1449,1336,1424,1293,1706,1237,1554,1593,1659,1555,1499,1457,1359,1152,1066,1353,1569,1494,897,1151,1254,821,1089,1247,1400,1609,1436,876,1511,1518,1400,1071,1206,1432,1487,1411,1503,1385,1217,1178,1295,1489,1361,1505,1415,928,1248,1304,1134,1317,1121,1138,1253,1514,1449,1426,1343,1195,1291,1252,1258,1435,1328,1253,1240,1250,1279,1374,1166,1398,1485,1013,1425,1521,1303,959,1278,1446,1293,1338,1495,1484,1580,1601,1393,1049,1076,1308,1047,1182,1001,969,962,752,413,514,1267,1301,1385,1040,1042,1175,1286,734,1031,1021,826,818,1493,1426,1426,1449,1407,1505,1519,1381,988,1438,1543,1322,1594,1503,1583,1383,1496,1411,1505,1381,1461,1473,1415,1166,1571,1599,1563,1426,1470,1213,1428,1251,1413,1411,1522,1585,1282,1183,1570,1410,1296,1442,1436,1297,1182,1491,1532,1326,826,1402,1549,1563,1441,1409,1262,1010,1169,1179,1054,920,405,1106,1272,1354,1031,1035,1134,1152,1142,982,1171,1007,1104,914,1019,1405,1518,1385,1460,1348,1406,1596,1539,1370,1525,1495,1483,1487,1575,1450,1416,1377,1418,1540,1449,1171,1555,1621,1449,1447,1566,1201,1275,841,1218,1615,1544,1420,1275,1572,1274,989,1425,1290,920,1397,1306,1015,1546,1241,798,1383,1325,1219,1294,1453,1368,1402,1485,1535,1355,1206,1051,1101,1183,992,815,416,1307,1255,1131,1344,1316,1051,1374,1574,1365,1148,1314,1219,1547,1481,1415,1218,1476,1475,1424,1415,999,1316,1559,1512,1393,1433,1133,1140,982,841,1344,1276,1130,1269,1363,1296,1078,1523,1604,1503,1455,1304,1266,1460,1575,1476,1467,1015,1498,1532,1250,1475,1609,1305,1247,936,921,1288,1235,1344,1216,1398,1449,1213,1543,1583,1501,814,1355,1420,505,1345,1160,1178,1524,1317,1196,955,1379,1190,1317,1409,1018,1111,1407,1231,1234,1579,1581,1601,1023,1496,1092,1579,1280,1529,1613,1177,1124,1475,1547,1419,1612,1281,1231,975,1067,1335,1316,1111,1519,1085,1344,1628,1536,1451,1504,1179,1506,1256,1013,1285,1267,1129,1342,1288,1035,1578,1073,1498,1319,1619,1545,1508,834,1207,1460,1160,1600,1192,1521,1355,1260,1241,1399,1416,1273,973,906,1293,1269,1277,1292,1112,1302,1327,1060,1621,1511,1579,1507,1358,1614,1537,1340,1443,1445,1352,1352,1041,1487,1377,1518,1469,1405,1353,1305,1412,1640,1559,1534,1329,1243,1e3,910,1336,1320,1036,1092,1253,958,920,1096,1046,723,1418,1461,1460,1403,1394,1500,1509,1633,1543,1446,1430,1562,1412,1425,1601,1519,1354,1474,1289,1705,1235,1558,1592,1658,1550,1487,1456,1359,1450,1515,1532,1472,1447,1494,1492,1642,1478,1489,1294,1122,1059,1306,1010,719,833,1325,1328,1283,1032,1041,1206,1147,1340,787,971,1035,1040,956,1353,1351,1146,1294,1162,1276,1362,1066,1221,1467,1061,1274,1406,1154,1388,1355,1272,1409,1329,1169,1435,1428,1521,1387,1456,1339,1441,1580,1368,1
406,1431,1286,1358,1636,1600,1454,1496,1570,1510,1411,1468,1618,1573,1592,1518,1365,1365,1325,1153,1598,1443,1482,1459,1271,1410,1353,1493,1454,1490,1453,1463,1249,932,770,1421,1407,1492,1501,1354,1446,1537,1435,1175,1550,1612,1297,1323,1386,1224,1088,1397,1528,1420,1394,1154,726,1037,982,1217,1016,543,980,324,1159,1271,1316,1047,1262,1118,873,1031,828,845,1484,1610,1451,1436,1464,1420,1507,1492,1564,1548,1384,1459,1585,1366,1477,1603,1500,1342,1447,1289,1695,1372,1507,1586,1702,1472,1511,1336,1390,1484,1602,1144,1240,1041,1186,876,731,1344,1351,1031,1137,1323,830,888,996,965,1087,1522,1451,1450,1425,1405,1496,1518,1634,1529,1463,1421,1622,1529,1526,1596,1428,1551,1533,1518,1373,1480,1519,1357,1434,1260,1677,1459,1426,1549,1691,1452,1464,1228,1411,1544,1385,1489,1173,1101,1010,1124,825,618,1258,1322,1331,1290,1557,1308,1553,1476,1159,1453,1569,1514,1299,1183,1150,1300,1357,1037,1146,1325,823,885,1030,970,1192,1450,1492,1388,1449,1343,1409,1581,1533,1363,1540,1484,1615,1547,1657,1486,1458,1542,1587,1571,1526,1424,1423,1489,1483,1383,1366,1552,1473,1197,1527,1617,1338,1375,1315,1480,1455,1151,1246,1057,1019,1155,886,536,1295,1236,1403,1144,1505,1631,1221,1454,1531,1541,1549,1427,1388,1263,1492,1488,1336,798,997,1156,1355,910,757,858,975,1089,1e3,872,1216,1228,1398,1360,1314,1355,1256,1246,1072,1150,1381,1338,1402,1210,1007,1292,944,1075,1051,957,1081,1049,1180,1062,1222,1177,942,1178,1183,1121,1581,1343,1144,1100,990,1161,1039,1255,1342,1142,1287,915,1432,1168,1200,1009,749,1236,1048,704,1326,1343,1153,1250,1169,882,1049,714,1129,1207,1019,913,1088,892,1045,1053,1281,1234,1316,1308,1294,1285,521,790,418,1024,1119,1116,1051,479,1197,974,965,995,1240,544,1066,97,629,375,52,687,377,871,731,536,801,856,82,1209,1356,1322,1353,1286,666,1010,1107,1112,1095,1045,149,221,524,1094,1061,81,820,953,286,628,643,596,829,184,875,144,436,415,827,977,839,80,313,508,298,556,812,369,970,68,790,156,809,697,572,311,942,1057,379,545,1076,1103,210,605,571,1003,949,948,824,348,420,711,937,955,633,153,572,836,524,398,265,110,246,148,211,221,575,423,415,737,395,169,904,360,562,577,371,928,1121,1110,1038,1e3,418,1012,1074,1020,1104,570,400,901,1029,1069,646,156,457,316,521,772,116,922,1026,834,790,742,457,1019,877,772,748,437,789,880,620,557,848,307,1007,890,516,579,419,741,515,589,911,629,151,594,920,372,575,364,486,676,425,624,136,491,732,745,589,74,177,416,810,698,485,82,668,1010,986,1054,930,356,866,745,747,909,391,697,725,925,741,794,261,274,357,324,112,114,855,965,697,956,1011,189,425,675,486,503,77,412,527,478,581,425,405,873,915,1027,896,493,900,991,966,995,934,482,949,615,482,966,460,309,218,221,260,199,522,393,421,632,457,90,550,431,492,701,266,620,1041,1343,1500,1475,1459,1447,1545,1465,1529,1490,1519,1555,1479,1519,1515,1357,1510,1508,1453,1480,1419,1173,866,938,985,1180,1064,1225,1241,915,1034,1088,1298,1236,1086,873,1391,1380,1354,1384,1437,1343,1372,1034,1173,1352,1169,970,472,384,601,1185,861,514,942,1188,1184,961,557,1148,1031,971,1035,806,988,1131,1321,891,937,708,923,1157,1330,1342,1377,1027,1017,1088,1168,1205,1128,1202,1142,1149,1241,1294,1212,1342,1318,1196,1360,1346,1367,1159,1087,937,1193,1342,1175,780,1175,1033,1062,918,1281,1369,1059,1317,811,1400,1305,1348,1288,1345,1331,1341,1210,970,1274,1262,1063,900,912,1283,1271,1193,1185,1297,1208,1206,1221,1360,1167,1015,1151,1064,1023,1008,893,1256,1207,1280,1228,1176,1105,1158,828,1056,1106,1064,1063,1225,1340,1182,1091,968,1081,1301,1046,1081,1284,744,1321,1133,1361,1285,1118,903,1227,727,1197,1054,1227,1e3,1250,1145,1128,1119,1163,1254,1186,1142,10
96,1254,1044,1006,1105,1374,980,960,1349,1236,1314,1150,1267,797,986,1358,1302,1317,998,1133,1333,1025,1067,1299,1001,1393,1030,1233,1200,1487,1065,1120,1144,1139,1152,1087,783,1133,1093,1134,1293,979,866,1218,821,980,942,919,978,1335,1151,1025,1289,1287,1261,1008,1424,1162,1261,1447,1108,1332,1309,1282,1273,1272,1221,1263,1140,1189,1252,1144,1241,1180,1107,1272,1247,1092,1191,1121,1153,1122,861,1103,1121,1065,1263,1195,1271,930,1029,936,886,1169,1186,1194,1285,968,923,1073,1031,1207,1301,1085,1028,1267,1071,970,1333,1314,1400,717,682,667,724,761,642,837,816,666,875,835,851,640,781,1150,1266,1001,1030,1022,1290,929,890,1208,1366,962,1305,1081,1202,989,1123,1128,1054,810,1192,929,1284,847,999,822,820,804,1185,1026,940,1188,1204,1014,994,994,1009,1108,1154,1144,1055,926,1220,1274,1173,1063,981,1134,1244,962,939,1030,699,1196,1177,1306,1097,1181,1107,1236,1063,1040,1323,1065,1379,1405,1289,859,967,768,936,798,1072,984,1145,1180,929,1157,906,1012,1232,1171,1064,1144,1034,1239,906,1043,1263,1097,1089,1198,1155,998,1297,1241,1196,1250,1185,935,1073,1249,932,1002,1089,1298,1187,1289,1104,1210,988,1232,1491,1352,1268,1350,1238,1364,1451,1110,1157,854,1354,1133,1100,1353,929,1052,1132,1253,1148,1272,1258,1154,1098,604,1201,1386,1431,900,1203,1118,1051,1183,1254,967,1191,1084,1090,1059,1369,1117,1086,1167,1034,1078,973,1366,1457,1298,1081,1340,968,1039,1086,1184,1167,1329,1103,955,1279,1079,1333,1184,1365,1221,1191,1308,865,1246,1277,1075,1319,1128,1129,1295,1066,1327,1268,1082,1240,1353,1089,1108,1112,1186,1176,1256,1073,1071,1197,1096,992,1325,1164,1159,1247,1304,1134,1185,1103,1293,1225,1273,1266,1075,945,1094,1123,1315,1116,1208,1205,1346,1373,1053,674,1249,1350,1391,1302,1244,1073,1036,749,829,1154,1114,1186,1367,1313,1200,871,1243,1161,1098,776,1040,803,973,1221,1277,1178,1406,1037,1137,729,1238,704,976,1142,1232,1275,1156,1311,1377,1506,1462,835,550,564,841,755,790,839,577,536,589,725,828,607,575,560,584,600,685,894,733,659,731,758,564,1165,1318,980],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
/* …LZ4-compressed byte-array payload elided: tens of thousands of repeated numeric values… */
1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 
?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_yt.data")}Module["addRunDependency"]("datafile_yt.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/yt/__init__.py",start:0,end:4089,audio:0},{filename:"/lib/python3.9/site-packages/yt/api.py",start:4089,end:4455,audio:0},{filename:"/lib/python3.9/site-packages/yt/arraytypes.py",start:4455,end:5450,audio:0},{filename:"/lib/python3.9/site-packages/yt/config.py",start:5450,end:11243,audio:0},{filename:"/lib/python3.9/site-packages/yt/convenience.py",start:11243,end:16362,audio:0},{filename:"/lib/python3.9/site-packages/yt/exthook.py",start:16362,end:21491,audio:0},{filename:"/lib/python3.9/site-packages/yt/funcs.py",start:21491,end:62486,audio:0},{filename:"/lib/python3.9/site-packages/yt/mods.py",start:62486,end:64278,audio:0},{filename:"/lib/python3.9/site-packages/yt/pmods.py",start:64278,end:78206,audio:0},{filename:"/lib/python3.9/site-packages/yt/startup_tasks.py",start:78206,end:84114,audio:0},{filename:"/lib/python3.9/site-packages/yt/testing.py",start:84114,end:124175,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/__init__.py",start:124175,end:124175,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/list_modules.py",start:124175,end:125719,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/absorption_spectrum/__init__.py",start:125719,end:126113,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/absorption_spectrum/absorption_line.py",start:126113,end:133524,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py",start:133524,end:163577,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/absorption_spectrum/absorption_spectrum_fit.py",start:163577,end:197541,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/absorption_spectrum/api.py",start:197541,end:198382,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/absorption_spectrum/tests/__init__.py",start:198382,end:198382,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/absorption_spectrum/tests/test_absorption_spectrum.py",start:198382,end:215943,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/__init__.py",start:215943,end:215943,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/api.py",start:215943,end:216822,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/cosmology_splice.py",start:216822,end:231202,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/light_cone/__init__.py",start:231202,end:231596,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/light_cone/api.py",start:231596,end:232360,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/light_cone/light_cone.py",start:232360,end:252881,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/light_cone/light_cone_projection.py",start
:252881,end:264677,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/light_cone/tests/__init__.py",start:264677,end:264677,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/light_cone/tests/test_light_cone.py",start:264677,end:267028,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/light_ray/__init__.py",start:267028,end:267028,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/light_ray/api.py",start:267028,end:267766,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py",start:267766,end:307518,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/light_ray/tests/__init__.py",start:307518,end:307518,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/cosmological_observation/light_ray/tests/test_light_ray.py",start:307518,end:312144,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis/__init__.py",start:312144,end:312144,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis/api.py",start:312144,end:313167,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis/enzofof_merger_tree.py",start:313167,end:346138,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis/halo_callbacks.py",start:346138,end:368128,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis/halo_catalog.py",start:368128,end:387687,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis/halo_filters.py",start:387687,end:391551,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis/halo_finding_methods.py",start:391551,end:396620,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis/halo_object.py",start:396620,end:397138,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis/halo_quantities.py",start:397138,end:399239,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis/halo_recipes.py",start:399239,end:403445,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis/tests/__init__.py",start:403445,end:403445,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis/tests/run_halo_finder.py",start:403445,end:404350,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis/tests/test_halo_catalog.py",start:404350,end:406537,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_analysis/tests/test_halo_finders.py",start:406537,end:407859,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/__init__.py",start:407859,end:407859,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/api.py",start:407859,end:408895,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/halo_objects.py",start:408895,end:474769,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/fof/__init__.py",start:474769,end:474769,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/fof/EnzoFOF.c",start:474769,end:480680,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/fof/README",start:480680,end:481203,audio:0},{filename:"/
lib/python3.9/site-packages/yt/analysis_modules/halo_finding/fof/kd.c",start:481203,end:490830,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/fof/kd.h",start:490830,end:494155,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/fof/tipsydefs.h",start:494155,end:494817,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/fof/EnzoFOF.so",start:494817,end:505266,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/hop/__init__.py",start:505266,end:505266,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/hop/EnzoHop.c",start:505266,end:519917,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/hop/README",start:519917,end:520964,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/hop/hop.h",start:520964,end:522396,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/hop/hop_hop.c",start:522396,end:550664,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/hop/hop_kd.c",start:550664,end:555672,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/hop/hop_numpy.h",start:555672,end:555972,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/hop/hop_regroup.c",start:555972,end:582180,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/hop/hop_slice.c",start:582180,end:595215,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/hop/hop_smooth.c",start:595215,end:606823,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/hop/kd.h",start:606823,end:610444,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/hop/slice.h",start:610444,end:614057,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/hop/smooth.h",start:614057,end:617690,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/hop/EnzoHop.so",start:617690,end:651061,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/rockstar/__init__.py",start:651061,end:651061,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/rockstar/api.py",start:651061,end:651488,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/rockstar/rockstar.py",start:651488,end:666626,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/rockstar/rockstar_groupies.pyx",start:666626,end:681388,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/rockstar/rockstar_interface.pyx",start:681388,end:691920,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/tests/__init__.py",start:691920,end:691920,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/tests/run_rockstar.py",start:691920,end:692621,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/tests/test_halo_finders.py",start:692621,end:694385,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_finding/tests/test_rockstar.py",start:694385,end:695389,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_mass_function/__init__.py",start:695389,end:695389,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_mass_function/api.py",start:695389
,end:696177,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/halo_mass_function/halo_mass_function.py",start:696177,end:733233,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/level_sets/__init__.py",start:733233,end:733233,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/level_sets/api.py",start:733233,end:734525,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/particle_trajectories/__init__.py",start:734525,end:734525,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/particle_trajectories/api.py",start:734525,end:734882,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/photon_simulator/__init__.py",start:734882,end:734882,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/photon_simulator/api.py",start:734882,end:735842,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/photon_simulator/photon_models.py",start:735842,end:745829,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/photon_simulator/photon_simulator.py",start:745829,end:810680,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/photon_simulator/spectral_models.py",start:810680,end:823692,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/photon_simulator/utils.c",start:823692,end:1082083,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/photon_simulator/utils.pyx",start:1082083,end:1082962,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/photon_simulator/utils.so",start:1082962,end:1108255,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/photon_simulator/tests/__init__.py",start:1108255,end:1108255,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/photon_simulator/tests/test_beta_model.py",start:1108255,end:1113199,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/photon_simulator/tests/test_sloshing.py",start:1113199,end:1117859,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/photon_simulator/tests/test_spectra.py",start:1117859,end:1119102,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/ppv_cube/__init__.py",start:1119102,end:1119102,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/ppv_cube/api.py",start:1119102,end:1119887,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/ppv_cube/ppv_cube.py",start:1119887,end:1135343,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/ppv_cube/ppv_utils.c",start:1135343,end:1400286,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/ppv_cube/ppv_utils.pyx",start:1400286,end:1401031,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/ppv_cube/ppv_utils.so",start:1401031,end:1426701,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/ppv_cube/tests/__init__.py",start:1426701,end:1426701,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/ppv_cube/tests/test_ppv.py",start:1426701,end:1429400,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/radmc3d_export/RadMC3DImageUtilities.py",start:1429400,end:1432190,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/radmc3d_export/RadMC3DInterface.py",start:1432190,end:1446285,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/radmc3d_export/__init__.py",start:1446285,end:1446285,audio:0},{filename:"/lib/python3.9/site-packages/yt/analy
sis_modules/radmc3d_export/api.py",start:1446285,end:1447155,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/radmc3d_export/tests/__init__.py",start:1447155,end:1447155,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/radmc3d_export/tests/test_radmc3d_exporter.py",start:1447155,end:1449909,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/spectral_integrator/__init__.py",start:1449909,end:1449909,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/spectral_integrator/api.py",start:1449909,end:1450233,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/star_analysis/__init__.py",start:1450233,end:1450233,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/star_analysis/api.py",start:1450233,end:1450962,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/star_analysis/sfr_spectrum.py",start:1450962,end:1475593,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/sunrise_export/__init__.py",start:1475593,end:1475593,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/sunrise_export/api.py",start:1475593,end:1476300,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/sunrise_export/sunrise_exporter.py",start:1476300,end:1502144,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/sunyaev_zeldovich/__init__.py",start:1502144,end:1502144,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/sunyaev_zeldovich/api.py",start:1502144,end:1502911,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/sunyaev_zeldovich/projection.py",start:1502911,end:1524872,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/sunyaev_zeldovich/tests/__init__.py",start:1524872,end:1524872,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/sunyaev_zeldovich/tests/test_projection.py",start:1524872,end:1529329,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/two_point_functions/__init__.py",start:1529329,end:1529329,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/two_point_functions/api.py",start:1529329,end:1530057,audio:0},{filename:"/lib/python3.9/site-packages/yt/analysis_modules/two_point_functions/two_point_functions.py",start:1530057,end:1569500,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/__init__.py",start:1569500,end:1569500,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/analyzer_objects.py",start:1569500,end:1573016,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/api.py",start:1573016,end:1574067,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/construction_data_containers.py",start:1574067,end:1662250,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/data_containers.py",start:1662250,end:1770238,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/derived_quantities.py",start:1770238,end:1797626,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/field_data.py",start:1797626,end:1798132,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/grid_patch.py",start:1798132,end:1814750,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/image_array.py",start:1814750,end:1828178,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/octree_subset.py",start:1828178,end:1851532,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/particle_filters.py",start:1851532,end:185
8653,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/particle_trajectories.py",start:1858653,end:1873243,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/particle_unions.py",start:1873243,end:1873895,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/profiles.py",start:1873895,end:1928824,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/region_expression.py",start:1928824,end:1937220,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/selection_data_containers.py",start:1937220,end:1981639,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/static_output.py",start:1981639,end:2040478,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/time_series.py",start:2040478,end:2065068,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/unions.py",start:2065068,end:2065951,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/unstructured_mesh.py",start:2065951,end:2074024,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/level_sets/__init__.py",start:2074024,end:2074024,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/level_sets/api.py",start:2074024,end:2074803,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/level_sets/clump_handling.py",start:2074803,end:2091936,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/level_sets/clump_info_items.py",start:2091936,end:2095545,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/level_sets/clump_tools.py",start:2095545,end:2098210,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/level_sets/clump_validators.py",start:2098210,end:2101869,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/level_sets/contour_finder.py",start:2101869,end:2105052,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/level_sets/tests/__init__.py",start:2105052,end:2105052,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/level_sets/tests/test_clump_finding.py",start:2105052,end:2111409,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/__init__.py",start:2111409,end:2111409,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_boolean_regions.py",start:2111409,end:2136389,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_chunking.py",start:2136389,end:2138649,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_clone.py",start:2138649,end:2139571,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_compose.py",start:2139571,end:2145280,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_connected_sets.py",start:2145280,end:2145892,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_covering_grid.py",start:2145892,end:2157299,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_cutting_plane.py",start:2157299,end:2159226,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_data_collection.py",start:2159226,end:2160419,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_data_containers.py",start:2160419,end:2167296,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_dataset_access.py",start:2167296,end:2173571,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_derived_quantities.py",start:2173571,end:2179673,audio:0},{filename:"/lib/python3.9/site-packag
es/yt/data_objects/tests/test_disks.py",start:2179673,end:2181680,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_ellipsoid.py",start:2181680,end:2183760,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_exclude_functions.py",start:2183760,end:2187187,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_extract_regions.py",start:2187187,end:2190497,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_firefly.py",start:2190497,end:2190912,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_fluxes.py",start:2190912,end:2195384,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_glue.py",start:2195384,end:2195720,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_image_array.py",start:2195720,end:2200148,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_numpy_ops.py",start:2200148,end:2205319,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_ortho_rays.py",start:2205319,end:2206209,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_particle_filter.py",start:2206209,end:2212642,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_particle_trajectories.py",start:2212642,end:2216424,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_pickle.py",start:2216424,end:2218368,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_points.py",start:2218368,end:2220784,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_profiles.py",start:2220784,end:2238466,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_projection.py",start:2238466,end:2243806,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_rays.py",start:2243806,end:2246367,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_refinement.py",start:2246367,end:2247977,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_regions.py",start:2247977,end:2248813,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_slice.py",start:2248813,end:2252820,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_spheres.py",start:2252820,end:2255522,audio:0},{filename:"/lib/python3.9/site-packages/yt/data_objects/tests/test_streamlines.py",start:2255522,end:2256474,audio:0},{filename:"/lib/python3.9/site-packages/yt/extensions/__init__.py",start:2256474,end:2257394,audio:0},{filename:"/lib/python3.9/site-packages/yt/extern/__init__.py",start:2257394,end:2257507,audio:0},{filename:"/lib/python3.9/site-packages/yt/extern/_dummy_thread32.py",start:2257507,end:2262395,audio:0},{filename:"/lib/python3.9/site-packages/yt/extern/parameterized.py",start:2262395,end:2270472,audio:0},{filename:"/lib/python3.9/site-packages/yt/extern/peewee.py",start:2270472,end:2315429,audio:0},{filename:"/lib/python3.9/site-packages/yt/extern/pydot.py",start:2315429,end:2368563,audio:0},{filename:"/lib/python3.9/site-packages/yt/extern/pykdtree.py",start:2368563,end:2401544,audio:0},{filename:"/lib/python3.9/site-packages/yt/extern/reprlib32.py",start:2401544,end:2406711,audio:0},{filename:"/lib/python3.9/site-packages/yt/extern/six.py",start:2406711,end:2433311,audio:0},{filename:"/lib/python3.9/site-packages/yt/extern/tqdm/__init__.py",start:2433311,end:2433700,audio:0},{filename:"/lib/python3.9/site-packages/yt/
extern/tqdm/_tqdm.py",start:2433700,end:2454653,audio:0},{filename:"/lib/python3.9/site-packages/yt/extern/tqdm/_tqdm_gui.py",start:2454653,end:2465741,audio:0},{filename:"/lib/python3.9/site-packages/yt/extern/tqdm/_tqdm_pandas.py",start:2465741,end:2467441,audio:0},{filename:"/lib/python3.9/site-packages/yt/extern/tqdm/_utils.py",start:2467441,end:2470516,audio:0},{filename:"/lib/python3.9/site-packages/yt/extern/tqdm/_version.py",start:2470516,end:2470714,audio:0},{filename:"/lib/python3.9/site-packages/yt/extern/tqdm/LICENSE",start:2470714,end:2471819,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/__init__.py",start:2471819,end:2471819,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/angular_momentum.py",start:2471819,end:2476766,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/api.py",start:2476766,end:2477978,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/astro_fields.py",start:2477978,end:2484013,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/astro_simulations.py",start:2484013,end:2486744,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/cosmology_fields.py",start:2486744,end:2492700,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/derived_field.py",start:2492700,end:2510076,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/domain_context.py",start:2510076,end:2510959,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/field_aliases.py",start:2510959,end:2520653,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/field_detector.py",start:2520653,end:2530978,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/field_exceptions.py",start:2530978,end:2532659,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/field_functions.py",start:2532659,end:2535271,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/field_info_container.py",start:2535271,end:2553122,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/field_plugin_registry.py",start:2553122,end:2553818,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/field_type_container.py",start:2553818,end:2558905,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/fluid_fields.py",start:2558905,end:2568052,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/fluid_vector_fields.py",start:2568052,end:2588717,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/geometric_fields.py",start:2588717,end:2599830,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/interpolated_fields.py",start:2599830,end:2601592,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/local_fields.py",start:2601592,end:2604147,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/magnetic_field.py",start:2604147,end:2614368,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/my_plugin_fields.py",start:2614368,end:2615284,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/particle_fields.py",start:2615284,end:2652148,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/species_fields.py",start:2652148,end:2661759,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/vector_operations.py",start:2661759,end:2684200,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/xray_emission_fields.py",start:2684200,end:2697775,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/tests/__init__.py",start:2697775,end:2697775,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/tests/test_angular_momentum.py",start:2697775,end:2698786,audio:0},{filename:"/lib/python3.9/site-
packages/yt/fields/tests/test_field_access.py",start:2698786,end:2700222,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/tests/test_field_name_container.py",start:2700222,end:2701001,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/tests/test_fields.py",start:2701001,end:2717416,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/tests/test_fields_plugins.py",start:2717416,end:2720323,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/tests/test_magnetic_fields.py",start:2720323,end:2722909,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/tests/test_vector_fields.py",start:2722909,end:2726014,audio:0},{filename:"/lib/python3.9/site-packages/yt/fields/tests/test_xray_fields.py",start:2726014,end:2727627,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/__init__.py",start:2727627,end:2727627,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/api.py",start:2727627,end:2728908,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/adaptahop/__init__.py",start:2728908,end:2729292,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/adaptahop/api.py",start:2729292,end:2729846,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/adaptahop/data_structures.py",start:2729846,end:2740437,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/adaptahop/definitions.py",start:2740437,end:2741742,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/adaptahop/fields.py",start:2741742,end:2744616,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/adaptahop/io.py",start:2744616,end:2754445,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/adaptahop/tests/__init__.py",start:2754445,end:2754445,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/adaptahop/tests/test_outputs.py",start:2754445,end:2756524,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ahf/__init__.py",start:2756524,end:2756904,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ahf/api.py",start:2756904,end:2757428,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ahf/data_structures.py",start:2757428,end:2763219,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ahf/fields.py",start:2763219,end:2765733,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ahf/io.py",start:2765733,end:2769946,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ahf/tests/__init__.py",start:2769946,end:2769946,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ahf/tests/test_outputs.py",start:2769946,end:2771168,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/amrvac/__init__.py",start:2771168,end:2772918,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/amrvac/api.py",start:2772918,end:2773484,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/amrvac/data_structures.py",start:2773484,end:2792271,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/amrvac/datfile_utils.py",start:2792271,end:2797620,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/amrvac/definitions.py",start:2797620,end:2797696,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/amrvac/fields.py",start:2797696,end:2808784,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/amrvac/io.py",start:2808784,end:2814101,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/amrvac/tests/__init__.py",start:2814101,end:2814101,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/amrvac/tests/test_ou
tputs.py",start:2814101,end:2823213,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/amrvac/tests/test_read_amrvac_namelist.py",start:2823213,end:2824791,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/amrvac/tests/sample_parfiles/bw_3d.par",start:2824791,end:2825896,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/amrvac/tests/sample_parfiles/tvdlf_scheme.par",start:2825896,end:2826143,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/art/__init__.py",start:2826143,end:2826523,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/art/api.py",start:2826523,end:2827146,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/art/data_structures.py",start:2827146,end:2864841,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/art/definitions.py",start:2864841,end:2868243,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/art/fields.py",start:2868243,end:2876417,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/art/io.py",start:2876417,end:2900034,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/art/tests/__init__.py",start:2900034,end:2900034,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/art/tests/test_outputs.py",start:2900034,end:2904035,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/__init__.py",start:2904035,end:2904418,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/api.py",start:2904418,end:2904951,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/data_structures.py",start:2904951,end:2925248,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/definitions.py",start:2925248,end:2927598,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/fields.py",start:2927598,end:2934063,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/io.py",start:2934063,end:2936840,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/_artio_caller.c",start:2936840,end:5165576,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/_artio_caller.pyx",start:5165576,end:5237233,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/_artio_caller.so",start:5237233,end:5596871,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/tests/__init__.py",start:5596871,end:5596871,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/tests/test_outputs.py",start:5596871,end:5599389,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/LICENSE",start:5599389,end:5642620,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/artio.c",start:5642620,end:5650474,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/artio.h",start:5650474,end:5668728,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/artio_endian.c",start:5668728,end:5671107,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/artio_endian.h",start:5671107,end:5672528,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/artio_file.c",start:5672528,end:5677697,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/artio_grid.c",start:5677697,end:5715192,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/artio_internal.h",start:5715192,end:5721814,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/artio_mpi.c",sta
rt:5721814,end:5731902,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/artio_mpi.h",start:5731902,end:5732130,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/artio_parameter.c",start:5732130,end:5747867,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/artio_particle.c",start:5747867,end:5781329,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/artio_posix.c",start:5781329,end:5790623,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/artio_selector.c",start:5790623,end:5799613,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/artio_sfc.c",start:5799613,end:5807977,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/cosmology.c",start:5807977,end:5820858,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/artio/artio_headers/cosmology.h",start:5820858,end:5824021,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena/__init__.py",start:5824021,end:5824021,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena/api.py",start:5824021,end:5824607,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena/data_structures.py",start:5824607,end:5850306,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena/definitions.py",start:5850306,end:5850720,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena/fields.py",start:5850720,end:5856159,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena/io.py",start:5856159,end:5860943,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena/misc.py",start:5860943,end:5860943,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena/tests/__init__.py",start:5860943,end:5860943,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena/tests/test_outputs.py",start:5860943,end:5864656,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena_pp/__init__.py",start:5864656,end:5864656,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena_pp/api.py",start:5864656,end:5865254,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena_pp/data_structures.py",start:5865254,end:5879340,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena_pp/definitions.py",start:5879340,end:5879754,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena_pp/fields.py",start:5879754,end:5883840,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena_pp/io.py",start:5883840,end:5887562,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena_pp/misc.py",start:5887562,end:5887562,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena_pp/tests/__init__.py",start:5887562,end:5887562,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/athena_pp/tests/test_outputs.py",start:5887562,end:5889884,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/boxlib/__init__.py",start:5889884,end:5890267,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/boxlib/api.py",start:5890267,end:5891131,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/boxlib/data_structures.py",start:5891131,end:5960432,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/boxlib/definitions.py",start:5960432,end:5962852,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/boxlib/fields.py",start:5962852,end:5983007,audio:0},{file
name:"/lib/python3.9/site-packages/yt/frontends/boxlib/io.py",start:5983007,end:5993063,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/boxlib/misc.py",start:5993063,end:5993063,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/boxlib/tests/__init__.py",start:5993063,end:5993063,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/boxlib/tests/test_outputs.py",start:5993063,end:6003816,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/chombo/__init__.py",start:6003816,end:6003816,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/chombo/api.py",start:6003816,end:6004662,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/chombo/data_structures.py",start:6004662,end:6034369,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/chombo/definitions.py",start:6034369,end:6034785,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/chombo/fields.py",start:6034785,end:6049624,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/chombo/io.py",start:6049624,end:6060332,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/chombo/misc.py",start:6060332,end:6060332,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/chombo/tests/__init__.py",start:6060332,end:6060332,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/chombo/tests/test_outputs.py",start:6060332,end:6063040,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/eagle/__init__.py",start:6063040,end:6063040,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/eagle/api.py",start:6063040,end:6063610,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/eagle/data_structures.py",start:6063610,end:6066648,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/eagle/definitions.py",start:6066648,end:6068105,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/eagle/fields.py",start:6068105,end:6070772,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/eagle/io.py",start:6070772,end:6071335,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/eagle/tests/__init__.py",start:6071335,end:6071335,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/eagle/tests/test_outputs.py",start:6071335,end:6072252,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo/__init__.py",start:6072252,end:6072252,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo/answer_testing_support.py",start:6072252,end:6076233,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo/api.py",start:6076233,end:6077127,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo/data_structures.py",start:6077127,end:6122325,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo/definitions.py",start:6122325,end:6122710,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo/fields.py",start:6122710,end:6135007,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo/io.py",start:6135007,end:6149004,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo/misc.py",start:6149004,end:6150665,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo/simulation_handling.py",start:6150665,end:6180079,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo/tests/__init__.py",start:6180079,end:6180079,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo/tests/test_outputs.py",start:6180079,end:6189021,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo_p/__init__.py"
,start:6189021,end:6189021,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo_p/api.py",start:6189021,end:6189637,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo_p/data_structures.py",start:6189637,end:6207492,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo_p/definitions.py",start:6207492,end:6207879,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo_p/fields.py",start:6207879,end:6211414,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo_p/io.py",start:6211414,end:6218412,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo_p/misc.py",start:6218412,end:6222381,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo_p/tests/__init__.py",start:6222381,end:6222381,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo_p/tests/test_misc.py",start:6222381,end:6227723,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/enzo_p/tests/test_outputs.py",start:6227723,end:6231213,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/exodus_ii/__init__.py",start:6231213,end:6231599,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/exodus_ii/api.py",start:6231599,end:6232268,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/exodus_ii/data_structures.py",start:6232268,end:6246900,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/exodus_ii/definitions.py",start:6246900,end:6246976,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/exodus_ii/fields.py",start:6246976,end:6248530,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/exodus_ii/io.py",start:6248530,end:6252516,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/exodus_ii/misc.py",start:6252516,end:6252516,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/exodus_ii/simulation_handling.py",start:6252516,end:6256375,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/exodus_ii/util.py",start:6256375,end:6258390,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/exodus_ii/tests/__init__.py",start:6258390,end:6258390,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/exodus_ii/tests/test_outputs.py",start:6258390,end:6261503,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/fits/__init__.py",start:6261503,end:6261503,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/fits/api.py",start:6261503,end:6262245,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/fits/data_structures.py",start:6262245,end:6294690,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/fits/definitions.py",start:6294690,end:6294690,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/fits/fields.py",start:6294690,end:6297002,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/fits/io.py",start:6297002,end:6301088,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/fits/misc.py",start:6301088,end:6312464,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/fits/tests/__init__.py",start:6312464,end:6312464,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/fits/tests/test_outputs.py",start:6312464,end:6315447,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/flash/__init__.py",start:6315447,end:6315829,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/flash/api.py",start:6315829,end:6316472,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/flash/data_structures.py",start:6316472,end:6337153,audio:0},{filenam
e:"/lib/python3.9/site-packages/yt/frontends/flash/definitions.py",start:6337153,end:6337153,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/flash/fields.py",start:6337153,end:6344836,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/flash/io.py",start:6344836,end:6355349,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/flash/misc.py",start:6355349,end:6355349,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/flash/tests/__init__.py",start:6355349,end:6355349,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/flash/tests/test_outputs.py",start:6355349,end:6358323,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget/__init__.py",start:6358323,end:6358323,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget/api.py",start:6358323,end:6358956,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget/data_structures.py",start:6358956,end:6381749,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget/definitions.py",start:6381749,end:6384715,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget/fields.py",start:6384715,end:6389010,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget/io.py",start:6389010,end:6405970,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget/simulation_handling.py",start:6405970,end:6427816,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget/testing.py",start:6427816,end:6431209,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget/tests/__init__.py",start:6431209,end:6431209,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget/tests/test_outputs.py",start:6431209,end:6434903,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget_fof/__init__.py",start:6434903,end:6435289,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget_fof/api.py",start:6435289,end:6436049,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget_fof/data_structures.py",start:6436049,end:6462976,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget_fof/fields.py",start:6462976,end:6467756,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget_fof/io.py",start:6467756,end:6483505,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget_fof/tests/__init__.py",start:6483505,end:6483505,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gadget_fof/tests/test_outputs.py",start:6483505,end:6487040,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gamer/__init__.py",start:6487040,end:6487422,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gamer/api.py",start:6487422,end:6488026,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gamer/data_structures.py",start:6488026,end:6501751,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gamer/definitions.py",start:6501751,end:6501918,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gamer/fields.py",start:6501918,end:6508071,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gamer/io.py",start:6508071,end:6516128,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gamer/misc.py",start:6516128,end:6516128,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gamer/tests/__init__.py",start:6516128,end:6516128,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gamer/tests/test_outputs.py",start:6516128,end:6518573,audio:0},{filename:"/lib/python3.9/site-packages/yt/fr
ontends/gdf/__init__.py",start:6518573,end:6518573,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gdf/api.py",start:6518573,end:6519184,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gdf/data_structures.py",start:6519184,end:6531388,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gdf/definitions.py",start:6531388,end:6531802,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gdf/fields.py",start:6531802,end:6533278,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gdf/io.py",start:6533278,end:6537114,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gdf/misc.py",start:6537114,end:6537114,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gdf/tests/__init__.py",start:6537114,end:6537114,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gdf/tests/test_outputs.py",start:6537114,end:6538233,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gizmo/__init__.py",start:6538233,end:6538233,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gizmo/api.py",start:6538233,end:6538703,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gizmo/data_structures.py",start:6538703,end:6539956,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gizmo/fields.py",start:6539956,end:6547400,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gizmo/tests/__init__.py",start:6547400,end:6547400,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/gizmo/tests/test_outputs.py",start:6547400,end:6551078,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/halo_catalog/__init__.py",start:6551078,end:6551464,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/halo_catalog/api.py",start:6551464,end:6552003,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/halo_catalog/data_structures.py",start:6552003,end:6556734,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/halo_catalog/fields.py",start:6556734,end:6557855,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/halo_catalog/io.py",start:6557855,end:6562382,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/halo_catalog/tests/__init__.py",start:6562382,end:6562382,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/halo_catalog/tests/test_outputs.py",start:6562382,end:6565584,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/http_stream/__init__.py",start:6565584,end:6565584,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/http_stream/api.py",start:6565584,end:6566065,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/http_stream/data_structures.py",start:6566065,end:6570052,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/http_stream/io.py",start:6570052,end:6574409,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/moab/__init__.py",start:6574409,end:6574788,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/moab/api.py",start:6574788,end:6575434,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/moab/data_structures.py",start:6575434,end:6582770,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/moab/definitions.py",start:6582770,end:6583186,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/moab/fields.py",start:6583186,end:6583731,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/moab/io.py",start:6583731,end:6586458,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/moab/misc.py",start:6586458,end:6586458,audio:
0},{filename:"/lib/python3.9/site-packages/yt/frontends/moab/tests/__init__.py",start:6586458,end:6586458,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/moab/tests/test_c5.py",start:6586458,end:6588573,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/open_pmd/__init__.py",start:6588573,end:6589007,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/open_pmd/api.py",start:6589007,end:6589691,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/open_pmd/data_structures.py",start:6589691,end:6616776,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/open_pmd/definitions.py",start:6616776,end:6616776,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/open_pmd/fields.py",start:6616776,end:6626685,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/open_pmd/io.py",start:6626685,end:6634821,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/open_pmd/misc.py",start:6634821,end:6638607,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/open_pmd/tests/__init__.py",start:6638607,end:6638607,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/open_pmd/tests/test_outputs.py",start:6638607,end:6650195,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls/__init__.py",start:6650195,end:6650195,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls/api.py",start:6650195,end:6650779,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls/data_structures.py",start:6650779,end:6652823,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls/definitions.py",start:6652823,end:6653196,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls/fields.py",start:6653196,end:6664591,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls/io.py",start:6664591,end:6665150,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls/owls_ion_tables.py",start:6665150,end:6671702,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls/simulation_handling.py",start:6671702,end:6674336,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls/tests/__init__.py",start:6674336,end:6674336,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls/tests/test_outputs.py",start:6674336,end:6676549,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls_subfind/__init__.py",start:6676549,end:6676935,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls_subfind/api.py",start:6676935,end:6677495,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls_subfind/data_structures.py",start:6677495,end:6686950,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls_subfind/fields.py",start:6686950,end:6689032,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls_subfind/io.py",start:6689032,end:6699096,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls_subfind/tests/__init__.py",start:6699096,end:6699096,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/owls_subfind/tests/test_outputs.py",start:6699096,end:6700517,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/__init__.py",start:6700517,end:6700900,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/api.py",start:6700900,end:6701490,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/data_structures.py",start:6701490,end:6725639,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/definitions.py",start:6725639,end:6728241,aud
io:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/field_handlers.py",start:6728241,end:6744946,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/fields.py",start:6744946,end:6761913,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/hilbert.py",start:6761913,end:6767894,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/io.py",start:6767894,end:6779145,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/particle_handlers.py",start:6779145,end:6791706,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/io_utils.c",start:6791706,end:7958890,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/io_utils.pyx",start:7958890,end:7964953,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/io_utils.so",start:7964953,end:8120600,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/tests/__init__.py",start:8120600,end:8120600,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/tests/test_hilbert.py",start:8120600,end:8122177,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ramses/tests/test_outputs.py",start:8122177,end:8138782,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/rockstar/__init__.py",start:8138782,end:8139165,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/rockstar/api.py",start:8139165,end:8139715,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/rockstar/data_structures.py",start:8139715,end:8144469,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/rockstar/definitions.py",start:8144469,end:8148392,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/rockstar/fields.py",start:8148392,end:8151578,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/rockstar/io.py",start:8151578,end:8156177,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/rockstar/tests/__init__.py",start:8156177,end:8156177,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/rockstar/tests/test_outputs.py",start:8156177,end:8157284,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/sdf/__init__.py",start:8157284,end:8157670,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/sdf/api.py",start:8157670,end:8158180,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/sdf/data_structures.py",start:8158180,end:8165885,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/sdf/definitions.py",start:8165885,end:8165885,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/sdf/fields.py",start:8165885,end:8167953,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/sdf/io.py",start:8167953,end:8177706,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/sdf/misc.py",start:8177706,end:8177706,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/sdf/tests/test_outputs.py",start:8177706,end:8179384,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/sph/__init__.py",start:8179384,end:8179768,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/sph/api.py",start:8179768,end:8180163,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/sph/data_structures.py",start:8180163,end:8182910,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/sph/fields.py",start:8182910,end:8184864,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/__init__.py",start:8184864,end:8184864,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/api.p
y",start:8184864,end:8185712,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/data_structures.py",start:8185712,end:8263351,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/definitions.py",start:8263351,end:8263749,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/fields.py",start:8263749,end:8268650,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/io.py",start:8268650,end:8280626,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/misc.py",start:8280626,end:8281034,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/sample_data/__init__.py",start:8281034,end:8281034,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/sample_data/hexahedral_mesh.py",start:8281034,end:8654350,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/sample_data/tetrahedral_mesh.py",start:8654350,end:8759671,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/tests/__init__.py",start:8759671,end:8759671,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/tests/test_outputs.py",start:8759671,end:8762535,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/tests/test_stream_amrgrids.py",start:8762535,end:8764503,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/tests/test_stream_hexahedral.py",start:8764503,end:8766658,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/tests/test_stream_octree.py",start:8766658,end:8767354,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/tests/test_stream_particles.py",start:8767354,end:8780206,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/tests/test_stream_unstructured.py",start:8780206,end:8781879,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/stream/tests/test_update_data.py",start:8781879,end:8782414,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/tipsy/__init__.py",start:8782414,end:8782414,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/tipsy/api.py",start:8782414,end:8782958,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/tipsy/data_structures.py",start:8782958,end:8796588,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/tipsy/fields.py",start:8796588,end:8799969,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/tipsy/io.py",start:8799969,end:8816937,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/tipsy/tests/__init__.py",start:8816937,end:8816937,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/tipsy/tests/test_outputs.py",start:8816937,end:8821415,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ytdata/__init__.py",start:8821415,end:8821796,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ytdata/api.py",start:8821796,end:8822763,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ytdata/data_structures.py",start:8822763,end:8858216,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ytdata/fields.py",start:8858216,end:8859866,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ytdata/io.py",start:8859866,end:8875826,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ytdata/utilities.py",start:8875826,end:8883755,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ytdata/tests/__init__.py",start:8883755,end:8883755,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ytdata/tests/test_old_outputs.py",start:888375
5,end:8890893,audio:0},{filename:"/lib/python3.9/site-packages/yt/frontends/ytdata/tests/test_outputs.py",start:8890893,end:8900843,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/__init__.py",start:8900843,end:8900843,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/api.py",start:8900843,end:8901359,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/geometry_handler.py",start:8901359,end:8917711,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/grid_geometry_handler.py",start:8917711,end:8935099,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/object_finding_mixin.py",start:8935099,end:8946900,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/oct_geometry_handler.py",start:8946900,end:8950988,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/particle_geometry_handler.py",start:8950988,end:8958866,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/unstructured_mesh_handler.py",start:8958866,end:8962422,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/fake_octree.c",start:8962422,end:9931289,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/fake_octree.pyx",start:9931289,end:9934201,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/grid_container.c",start:9934201,end:11321071,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/grid_container.pxd",start:11321071,end:11323247,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/grid_container.pyx",start:11323247,end:11336504,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/grid_visitors.c",start:11336504,end:11580250,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/grid_visitors.pxd",start:11580250,end:11583090,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/grid_visitors.pyx",start:11583090,end:11588371,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/oct_container.c",start:11588371,end:13316671,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/oct_container.pxd",start:13316671,end:13320368,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/oct_container.pyx",start:13320368,end:13360411,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/oct_visitors.c",start:13360411,end:15426130,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/oct_visitors.pxd",start:15426130,end:15429478,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/oct_visitors.pyx",start:15429478,end:15440124,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/particle_deposit.c",start:15440124,end:16993820,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/particle_deposit.pxd",start:16993820,end:16998568,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/particle_deposit.pyx",start:16998568,end:17019930,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/particle_oct_container.c",start:17019930,end:18367371,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/particle_oct_container.pyx",start:18367371,end:18380937,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/particle_smooth.c",start:18380937,end:19861299,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/particle_smooth.pxd",start:19861299,end:19864793,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/particle_smooth.pyx",start:19864793,end:19899605,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/selection_routines.c",start:19899605,end:23024802,audio:0},{filename:"/lib/python3.9/site-packages/yt/ge
ometry/selection_routines.pxd",start:23024802,end:23028104,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/selection_routines.pyx",start:23028104,end:23121288,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/grid_visitors.so",start:23121288,end:23132607,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/grid_container.so",start:23132607,end:23317877,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/oct_container.so",start:23317877,end:23552627,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/oct_visitors.so",start:23552627,end:23836622,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/particle_oct_container.so",start:23836622,end:23993766,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/selection_routines.so",start:23993766,end:24418758,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/particle_deposit.so",start:24418758,end:24626211,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/particle_smooth.so",start:24626211,end:24813248,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/fake_octree.so",start:24813248,end:24925830,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/__init__.py",start:24925830,end:24925830,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/api.py",start:24925830,end:24926724,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/cartesian_coordinates.py",start:24926724,end:24938739,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/coordinate_handler.py",start:24938739,end:24949099,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/cylindrical_coordinates.py",start:24949099,end:24958418,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/geographic_coordinates.py",start:24958418,end:24978017,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/polar_coordinates.py",start:24978017,end:24978727,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/spec_cube_coordinates.py",start:24978727,end:24982730,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/spherical_coordinates.py",start:24982730,end:24993800,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/tests/test_axial_pixelization.py",start:24993800,end:24994091,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/tests/test_cartesian_coordinates.py",start:24994091,end:24995230,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/tests/test_cylindrical_coordinates.py",start:24995230,end:24997157,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/tests/test_geographic_coordinates.py",start:24997157,end:25001422,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/tests/test_polar_coordinates.py",start:25001422,end:25002778,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/coordinates/tests/test_spherical_coordinates.py",start:25002778,end:25004467,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/tests/__init__.py",start:25004467,end:25004467,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/tests/fake_octree.py",start:25004467,end:25005868,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/tests/test_grid_container.py",start:25005868,end:25010726,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/tests/test_neighbor_search.py",start:25010726,end:25013773,audio:0},{fil
ename:"/lib/python3.9/site-packages/yt/geometry/tests/test_particle_deposit.py",start:25013773,end:25017319,audio:0},{filename:"/lib/python3.9/site-packages/yt/geometry/tests/test_particle_octree.py",start:25017319,end:25026469,audio:0},{filename:"/lib/python3.9/site-packages/yt/tests/__init__.py",start:25026469,end:25026469,audio:0},{filename:"/lib/python3.9/site-packages/yt/tests/test_funcs.py",start:25026469,end:25028395,audio:0},{filename:"/lib/python3.9/site-packages/yt/tests/test_testing.py",start:25028395,end:25029207,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/__init__.py",start:25029207,end:25029857,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/dimensions.py",start:25029857,end:25033132,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/equivalencies.py",start:25033132,end:25040220,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/pint_conversions.py",start:25040220,end:25042094,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/unit_lookup_table.py",start:25042094,end:25051243,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/unit_object.py",start:25051243,end:25078771,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/unit_registry.py",start:25078771,end:25084647,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/unit_symbols.py",start:25084647,end:25088330,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/unit_systems.py",start:25088330,end:25096626,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/yt_array.py",start:25096626,end:25165093,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/tests/__init__.py",start:25165093,end:25165093,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/tests/test_define_unit.py",start:25165093,end:25165871,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/tests/test_unit_systems.py",start:25165871,end:25171816,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/tests/test_units.py",start:25171816,end:25186567,audio:0},{filename:"/lib/python3.9/site-packages/yt/units/tests/test_ytarray.py",start:25186567,end:25236483,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/__init__.py",start:25236483,end:25236483,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/api.py",start:25236483,end:25236859,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/chemical_formulas.py",start:25236859,end:25238365,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/command_line.py",start:25238365,end:25293147,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/configure.py",start:25293147,end:25296944,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/cosmology.py",start:25296944,end:25318931,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/decompose.py",start:25318931,end:25323562,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/definitions.py",start:25323562,end:25325066,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/exceptions.py",start:25325066,end:25350803,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/file_handler.py",start:25350803,end:25354428,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/fits_image.py",start:25354428,end:25354833,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/flagging_methods.py",start:25354833,end:25361231,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/fortran_utils.py",start:25361231,end:25371676,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/hierarchy_
inspection.py",start:25371676,end:25372788,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/initial_conditions.py",start:25372788,end:25376693,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/io_handler.py",start:25376693,end:25385939,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/linear_interpolators.py",start:25385939,end:25396675,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lodgeit.py",start:25396675,end:25407077,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/logger.py",start:25407077,end:25409923,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lru_cache.py",start:25409923,end:25417657,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/math_utils.py",start:25417657,end:25461842,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/mesh_code_generation.py",start:25461842,end:25468428,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/metadata.py",start:25468428,end:25469338,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/minimal_representation.py",start:25469338,end:25482443,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/nodal_data_utils.py",start:25482443,end:25484012,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/on_demand_imports.py",start:25484012,end:25497787,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/operator_registry.py",start:25497787,end:25498506,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/orientation.py",start:25498506,end:25502728,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/parameter_file_storage.py",start:25502728,end:25509958,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/particle_generator.py",start:25509958,end:25527280,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/performance_counters.py",start:25527280,end:25531333,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/periodic_table.py",start:25531333,end:25538175,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/physical_constants.py",start:25538175,end:25542395,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/physical_ratios.py",start:25542395,end:25547059,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/png_writer.py",start:25547059,end:25548347,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/rpdb.py",start:25548347,end:25552147,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/sdf.py",start:25552147,end:25597925,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tree_container.py",start:25597925,end:25598788,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/cython_fortran_utils.c",start:25598788,end:26040736,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/cython_fortran_utils.pxd",start:26040736,end:26041267,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/cython_fortran_utils.pyx",start:26041267,end:26050718,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/mesh_types.yaml",start:26050718,end:26053169,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/voropp.pyx",start:26053169,end:26055912,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/cython_fortran_utils.so",start:26055912,end:26107104,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/amr_kdtree/__init__.py",start:26107104,end:26107479,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/amr_kdtree/amr_kdtools.py",start:26107479,end:26109140,au
dio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/amr_kdtree/amr_kdtree.py",start:26109140,end:26130983,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/amr_kdtree/api.py",start:26130983,end:26131404,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/answer_testing/__init__.py",start:26131404,end:26131804,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/answer_testing/answer_tests.py",start:26131804,end:26143078,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/answer_testing/api.py",start:26143078,end:26143517,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/answer_testing/framework.py",start:26143517,end:26183338,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/answer_testing/level_sets_tests.py",start:26183338,end:26184979,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/answer_testing/utils.py",start:26184979,end:26199847,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/grid_data_format/__init__.py",start:26199847,end:26199913,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/grid_data_format/writer.py",start:26199913,end:26213279,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/grid_data_format/conversion/__init__.py",start:26213279,end:26213432,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/grid_data_format/conversion/conversion_abc.py",start:26213432,end:26213606,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/grid_data_format/conversion/conversion_athena.py",start:26213606,end:26232632,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/grid_data_format/tests/__init__.py",start:26232632,end:26232632,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/grid_data_format/tests/test_writer.py",start:26232632,end:26234220,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/grid_data_format/docs/IRATE_notes.txt",start:26234220,end:26236082,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/grid_data_format/docs/gdf_specification.txt",start:26236082,end:26247457,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/grid_data_format/scripts/convert_distributed_athena.py",start:26247457,end:26248228,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/grid_data_format/scripts/convert_single_athena.py",start:26248228,end:26248978,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/__init__.py",start:26248978,end:26248999,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/api.py",start:26248999,end:26249854,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/allocation_container.c",start:26249854,end:27221111,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/allocation_container.pxd",start:27221111,end:27222191,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/allocation_container.pyx",start:27222191,end:27226900,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/alt_ray_tracers.c",start:27226900,end:27799865,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/alt_ray_tracers.pyx",start:27799865,end:27808429,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/amr_kdtools.c",start:27808429,end:29304034,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/amr_kdtools.pxd",start:29304034,end:29306693,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/amr_kdtools.pyx",start:29306693,end:29334114,audio:0},{filename:"/lib/python3.9/site-p
ackages/yt/utilities/lib/autogenerated_element_samplers.c",start:29334114,end:29557097,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/autogenerated_element_samplers.pxd",start:29557097,end:29559354,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/autogenerated_element_samplers.pyx",start:29559354,end:29579629,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/basic_octree.c",start:29579629,end:30121612,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/basic_octree.pyx",start:30121612,end:30145994,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/bitarray.c",start:30145994,end:30551873,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/bitarray.pxd",start:30551873,end:30553090,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/bitarray.pyx",start:30553090,end:30557989,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/bounding_volume_hierarchy.c",start:30557989,end:31687642,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/bounding_volume_hierarchy.pxd",start:31687642,end:31690609,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/bounding_volume_hierarchy.pyx",start:31690609,end:31709276,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/contour_finding.c",start:31709276,end:33378818,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/contour_finding.pxd",start:33378818,end:33380471,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/contour_finding.pyx",start:33380471,end:33408978,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/cosmology_time.c",start:33408978,end:33696561,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/cosmology_time.pyx",start:33696561,end:33699201,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/depth_first_octree.c",start:33699201,end:34191960,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/depth_first_octree.pyx",start:34191960,end:34198618,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/distance_queue.c",start:34198618,end:35126326,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/distance_queue.pxd",start:35126326,end:35127912,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/distance_queue.pyx",start:35127912,end:35134104,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/element_mappings.c",start:35134104,end:36077755,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/element_mappings.pxd",start:36077755,end:36086404,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/element_mappings.pyx",start:36086404,end:36130554,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/endian_swap.h",start:36130554,end:36131131,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/field_interpolation_tables.pxd",start:36131131,end:36136028,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/fixed_interpolator.c",start:36136028,end:36157639,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/fixed_interpolator.h",start:36157639,end:36158944,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/fixed_interpolator.pxd",start:36158944,end:36160272,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/fnv_hash.c",start:36160272,end:36996996,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/fnv_hash.pxd",start:36996996,end:36997486,aud
io:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/fnv_hash.pyx",start:36997486,end:36998708,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/fortran_reader.c",start:36998708,end:37458084,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/fortran_reader.pyx",start:37458084,end:37471478,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/fp_utils.pxd",start:37471478,end:37473094,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/geometry_utils.c",start:37473094,end:38008446,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/geometry_utils.pyx",start:38008446,end:38024341,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/grid_traversal.c",start:38024341,end:39092696,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/grid_traversal.pxd",start:39092696,end:39095554,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/grid_traversal.pyx",start:39095554,end:39108744,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/healpix_interface.pxd",start:39108744,end:39109541,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/image_samplers.c",start:39109541,end:40533713,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/image_samplers.pxd",start:40533713,end:40536596,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/image_samplers.pyx",start:40536596,end:40555973,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/image_utilities.c",start:40555973,end:40852269,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/image_utilities.pyx",start:40852269,end:40854745,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/interpolators.c",start:40854745,end:41222870,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/interpolators.pyx",start:41222870,end:41229891,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/lenses.c",start:41229891,end:42138671,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/lenses.pxd",start:42138671,end:42139943,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/lenses.pyx",start:42139943,end:42147481,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/line_integral_convolution.c",start:42147481,end:42444542,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/line_integral_convolution.pyx",start:42444542,end:42447302,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/marching_cubes.c",start:42447302,end:42883471,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/marching_cubes.h",start:42883471,end:42901306,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/marching_cubes.pyx",start:42901306,end:42917643,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/mesh_construction.pxd",start:42917643,end:42918473,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/mesh_construction.pyx",start:42918473,end:42932495,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/mesh_intersection.pxd",start:42932495,end:42933269,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/mesh_intersection.pyx",start:42933269,end:42937942,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/mesh_samplers.pxd",start:42937942,end:42938450,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/mesh_samplers.pyx",start:42938450,end:42947935,audio:0},{filename:"/lib/python3.9/site-packa
ges/yt/utilities/lib/mesh_traversal.pxd",start:42947935,end:42948084,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/mesh_traversal.pyx",start:42948084,end:42951208,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/mesh_triangulation.c",start:42951208,end:43997727,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/mesh_triangulation.h",start:43997727,end:43999454,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/mesh_triangulation.pyx",start:43999454,end:44009415,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/mesh_utilities.c",start:44009415,end:44940561,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/mesh_utilities.pyx",start:44940561,end:44944380,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/misc_utilities.c",start:44944380,end:46775969,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/misc_utilities.pyx",start:46775969,end:46818624,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/origami.c",start:46818624,end:47084290,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/origami.pyx",start:47084290,end:47086091,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/origami_tags.c",start:47086091,end:47091950,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/origami_tags.h",start:47091950,end:47092353,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/particle_mesh_operations.c",start:47092353,end:48237510,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/particle_mesh_operations.pyx",start:48237510,end:48252179,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/partitioned_grid.c",start:48252179,end:48635399,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/partitioned_grid.pxd",start:48635399,end:48636357,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/partitioned_grid.pyx",start:48636357,end:48642007,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/perftools_wrap.pyx",start:48642007,end:48642707,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/pixelization_constants.c",start:48642707,end:48645967,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/pixelization_constants.h",start:48645967,end:48646879,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/pixelization_routines.c",start:48646879,end:50046878,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/pixelization_routines.pyx",start:50046878,end:50086238,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/platform_dep.h",start:50086238,end:50087778,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/points_in_volume.c",start:50087778,end:50523034,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/points_in_volume.pyx",start:50523034,end:50532661,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/primitives.c",start:50532661,end:50897418,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/primitives.pxd",start:50897418,end:50901521,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/primitives.pyx",start:50901521,end:50920521,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/quad_tree.c",start:50920521,end:51517558,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/quad_tree.pyx",start:51517558,end:51539683,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/ra
gged_arrays.c",start:51539683,end:52630894,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/ragged_arrays.pyx",start:52630894,end:52633546,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/tsearch.c",start:52633546,end:52636835,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/tsearch.h",start:52636835,end:52637490,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/vec3_ops.pxd",start:52637490,end:52639298,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/volume_container.pxd",start:52639298,end:52642941,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/write_array.c",start:52642941,end:52947946,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/write_array.pyx",start:52947946,end:52949524,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/bounding_volume_hierarchy.so",start:52949524,end:53076501,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/contour_finding.so",start:53076501,end:53298415,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/fnv_hash.so",start:53298415,end:53396378,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/marching_cubes.so",start:53396378,end:53461459,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/pixelization_routines.so",start:53461459,end:53652789,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/autogenerated_element_samplers.so",start:53652789,end:53670089,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/bitarray.so",start:53670089,end:53719471,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/geometry_utils.so",start:53719471,end:53782097,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/mesh_triangulation.so",start:53782097,end:53920385,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/primitives.so",start:53920385,end:53945552,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/cosmology_time.so",start:53945552,end:53974170,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/origami.so",start:53974170,end:54003476,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/grid_traversal.so",start:54003476,end:54132280,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/image_samplers.so",start:54132280,end:54319865,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/partitioned_grid.so",start:54319865,end:54366931,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/element_mappings.so",start:54366931,end:54484898,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/alt_ray_tracers.so",start:54484898,end:54650152,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/misc_utilities.so",start:54650152,end:54936937,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/ragged_arrays.so",start:54936937,end:55087455,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/particle_mesh_operations.so",start:55087455,end:55244386,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/depth_first_octree.so",start:55244386,end:55305574,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/fortran_reader.so",start:55305574,end:55360155,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/interpolators.so",start:55360155,end:55403522,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/basic_octree.so",start:55403522,end:55462332,au
dio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/image_utilities.so",start:55462332,end:55492634,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/points_in_volume.so",start:55492634,end:55548826,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/quad_tree.so",start:55548826,end:55622459,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/mesh_utilities.so",start:55622459,end:55735097,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/amr_kdtools.so",start:55735097,end:55953667,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/lenses.so",start:55953667,end:56048610,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/distance_queue.so",start:56048610,end:56157810,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/line_integral_convolution.so",start:56157810,end:56187145,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/allocation_container.so",start:56187145,end:56297553,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/write_array.so",start:56297553,end:56333622,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/tests/__init__.py",start:56333622,end:56333622,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/tests/test_allocation_container.py",start:56333622,end:56334277,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/tests/test_alt_ray_tracers.py",start:56334277,end:56337545,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/tests/test_bitarray.py",start:56337545,end:56338957,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/tests/test_bounding_volume_hierarchy.py",start:56338957,end:56340378,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/tests/test_element_mappings.py",start:56340378,end:56347404,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/tests/test_fill_region.py",start:56347404,end:56348667,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/tests/test_geometry_utils.py",start:56348667,end:56349683,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/tests/test_ragged_arrays.py",start:56349683,end:56351438,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/lib/tests/test_sample.py",start:56351438,end:56352549,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/parallel_tools/__init__.py",start:56352549,end:56352927,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/parallel_tools/controller_system.py",start:56352927,end:56354352,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/parallel_tools/io_runner.py",start:56354352,end:56360919,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/parallel_tools/parallel_analysis_interface.py",start:56360919,end:56409946,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/parallel_tools/task_queue.py",start:56409946,end:56416109,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/poster/__init__.py",start:56416109,end:56417627,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/poster/encode.py",start:56417627,end:56432485,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/poster/streaminghttp.py",start:56432485,end:56441072,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/poster/README",start:56441072,end:56441236,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tests/__init__.py",start:56441236,end:56441236,audio:0},{filename:"/lib/py
thon3.9/site-packages/yt/utilities/tests/test_amr_kdtree.py",start:56441236,end:56444042,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tests/test_chemical_formulas.py",start:56444042,end:56444848,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tests/test_config.py",start:56444848,end:56451800,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tests/test_coordinate_conversions.py",start:56451800,end:56457364,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tests/test_cosmology.py",start:56457364,end:56466215,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tests/test_decompose.py",start:56466215,end:56469422,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tests/test_flagging_methods.py",start:56469422,end:56469793,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tests/test_hierarchy_inspection.py",start:56469793,end:56470855,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tests/test_interpolators.py",start:56470855,end:56475419,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tests/test_minimal_representation.py",start:56475419,end:56476784,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tests/test_particle_generator.py",start:56476784,end:56482353,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tests/test_periodic_table.py",start:56482353,end:56482996,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tests/test_periodicity.py",start:56482996,end:56485617,audio:0},{filename:"/lib/python3.9/site-packages/yt/utilities/tests/test_selectors.py",start:56485617,end:56491238,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/__init__.py",start:56491238,end:56491736,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/_colormap_data.py",start:56491736,end:57077375,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/_mpl_imports.py",start:57077375,end:57077712,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/api.py",start:57077712,end:57079195,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/base_plot_types.py",start:57079195,end:57098901,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/color_maps.py",start:57098901,end:57123592,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/eps_writer.py",start:57123592,end:57179701,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/fits_image.py",start:57179701,end:57217606,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/fixed_resolution.py",start:57217606,end:57243522,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/fixed_resolution_filters.py",start:57243522,end:57246263,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/geo_plot_utils.py",start:57246263,end:57249517,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/image_writer.py",start:57249517,end:57267426,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/line_plot.py",start:57267426,end:57283191,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/particle_plots.py",start:57283191,end:57307207,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/plot_container.py",start:57307207,end:57339990,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/plot_modifications.py",start:57339990,end:57458312,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/plot_window.py",start:57458312,end:57553746,audio
:0},{filename:"/lib/python3.9/site-packages/yt/visualization/profile_plotter.py",start:57553746,end:57610182,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/streamlines.py",start:57610182,end:57619297,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/mapserver/__init__.py",start:57619297,end:57619297,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/mapserver/pannable_map.py",start:57619297,end:57624835,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/mapserver/html/__init__.py",start:57624835,end:57624835,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/mapserver/html/map.js",start:57624835,end:57628173,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/mapserver/html/map_index.html",start:57628173,end:57629041,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/__init__.py",start:57629041,end:57629041,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_callbacks.py",start:57629041,end:57660151,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_color_maps.py",start:57660151,end:57662282,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_export_frb.py",start:57662282,end:57663619,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_filters.py",start:57663619,end:57664455,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_fits_image.py",start:57664455,end:57669591,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_geo_projections.py",start:57669591,end:57675651,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_image_writer.py",start:57675651,end:57678572,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_line_plots.py",start:57678572,end:57682200,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_mesh_slices.py",start:57682200,end:57686060,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_offaxisprojection.py",start:57686060,end:57689996,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_particle_plot.py",start:57689996,end:57702747,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_plotwindow.py",start:57702747,end:57722189,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_profile_plots.py",start:57722189,end:57733558,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_raw_field_slices.py",start:57733558,end:57735139,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/tests/test_splat.py",start:57735139,end:57736615,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/UBVRI.py",start:57736615,end:57741871,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/__init__.py",start:57741871,end:57742268,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/api.py",start:57742268,end:57743484,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/blenders.py",start:57743484,end:57744183,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/camera.py",start:57744183,end:57771082,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/camera_path.py",start:57771082,end:57783671,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/
volume_rendering/create_spline.py",start:57783671,end:57785848,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/glfw_inputhook.py",start:57785848,end:57789419,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/image_handling.py",start:57789419,end:57793766,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/input_events.py",start:57793766,end:57808614,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/interactive_loop.py",start:57808614,end:57816114,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/interactive_vr.py",start:57816114,end:57849857,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/interactive_vr_helpers.py",start:57849857,end:57853765,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/lens.py",start:57853765,end:57885595,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/off_axis_projection.py",start:57885595,end:57894353,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/old_camera.py",start:57894353,end:57983708,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/render_source.py",start:57983708,end:58029685,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/scene.py",start:58029685,end:58062251,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/shader_objects.py",start:58062251,end:58072636,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/transfer_function_helper.py",start:58072636,end:58080396,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/transfer_functions.py",start:58080396,end:58118054,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/utils.py",start:58118054,end:58123337,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/volume_rendering.py",start:58123337,end:58128568,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/zbuffer_array.py",start:58128568,end:58131395,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/_cuda_caster.cu",start:58131395,end:58141160,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/shaders/__init__.py",start:58141160,end:58141160,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/shaders/apply_colormap.fragmentshader",start:58141160,end:58141630,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/shaders/default.vertexshader",start:58141630,end:58142323,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/shaders/drawlines.fragmentshader",start:58142323,end:58144109,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/shaders/max_intensity.fragmentshader",start:58144109,end:58145940,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/shaders/mesh.fragmentshader",start:58145940,end:58146222,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/shaders/mesh.vertexshader",start:58146222,end:58146508,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/shaders/noop.fragmentshader",start:58146508,end:58146779,audio:0},{filename:"/lib
/python3.9/site-packages/yt/visualization/volume_rendering/shaders/passthrough.fragmentshader",start:58146779,end:58146909,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/shaders/passthrough.vertexshader",start:58146909,end:58147248,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/shaders/projection.fragmentshader",start:58147248,end:58149188,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/shaders/transfer_function.fragmentshader",start:58149188,end:58151507,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/tests/__init__.py",start:58151507,end:58151507,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/tests/test_camera_attributes.py",start:58151507,end:58155615,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/tests/test_composite.py",start:58155615,end:58158467,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/tests/test_lenses.py",start:58158467,end:58163e3,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/tests/test_mesh_render.py",start:58163e3,end:58170455,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/tests/test_points.py",start:58170455,end:58172720,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/tests/test_scene.py",start:58172720,end:58176389,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/tests/test_sigma_clip.py",start:58176389,end:58177829,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/tests/test_simple_vr.py",start:58177829,end:58179165,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/tests/test_varia.py",start:58179165,end:58183438,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/tests/test_vr_cameras.py",start:58183438,end:58189413,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/tests/test_vr_orientation.py",start:58189413,end:58193014,audio:0},{filename:"/lib/python3.9/site-packages/yt/visualization/volume_rendering/tests/test_zbuff.py",start:58193014,end:58197009,audio:0},{filename:"/lib/python3.9/site-packages/yt-3.6.1-py3.9.egg-info/PKG-INFO",start:58197009,end:58205616,audio:0},{filename:"/lib/python3.9/site-packages/yt-3.6.1-py3.9.egg-info/SOURCES.txt",start:58205616,end:58256198,audio:0},{filename:"/lib/python3.9/site-packages/yt-3.6.1-py3.9.egg-info/dependency_links.txt",start:58256198,end:58256199,audio:0},{filename:"/lib/python3.9/site-packages/yt-3.6.1-py3.9.egg-info/entry_points.txt",start:58256199,end:58256348,audio:0},{filename:"/lib/python3.9/site-packages/yt-3.6.1-py3.9.egg-info/not-zip-safe",start:58256348,end:58256349,audio:0},{filename:"/lib/python3.9/site-packages/yt-3.6.1-py3.9.egg-info/requires.txt",start:58256349,end:58256463,audio:0},{filename:"/lib/python3.9/site-packages/yt-3.6.1-py3.9.egg-info/top_level.txt",start:58256463,end:58256466,audio:0},{filename:"/bin/iyt",start:58256466,end:58259990,audio:0},{filename:"/bin/yt",start:58259990,end:58260928,audio:0}],remote_package_size:26478203,package_uuid:"d27e3adc-e888-4c0b-ade0-1f9f51e0fb36"})})(); \ No newline at end of file diff --git a/spaces/pzc163/Personal-TTS/app.py b/spaces/pzc163/Personal-TTS/app.py deleted file mode 100644 index 
6f971f287c906d55021668330767b531b95a74ad..0000000000000000000000000000000000000000 --- a/spaces/pzc163/Personal-TTS/app.py +++ /dev/null @@ -1,161 +0,0 @@ -import os -import gradio as gr -import random - -os.system("pip install --upgrade Cython==0.29.35") -os.system("pip install pysptk --no-build-isolation") -os.system("pip install kantts -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html") -os.system("pip install librosa==0.9.2") -os.system("pip install numpy==1.22.0") - -from modelscope.models.audio.tts import SambertHifigan -from modelscope.pipelines import pipeline -from modelscope.utils.constant import Tasks - -from voicefixer import VoiceFixer -voicefixer = VoiceFixer() - -# model_0 - -model_dir = os.path.abspath("./pretrain_work_dir") - -custom_infer_abs = { - 'voice_name': - 'F7', - 'am_ckpt': - os.path.join(model_dir, 'tmp_am', 'ckpt'), - 'am_config': - os.path.join(model_dir, 'tmp_am', 'config.yaml'), - 'voc_ckpt': - os.path.join(model_dir, 'orig_model', 'basemodel_16k', 'hifigan', 'ckpt'), - 'voc_config': - os.path.join(model_dir, 'orig_model', 'basemodel_16k', 'hifigan', - 'config.yaml'), - 'audio_config': - os.path.join(model_dir, 'data', 'audio_config.yaml'), - 'se_file': - os.path.join(model_dir, 'data', 'se', 'se.npy') -} -kwargs = {'custom_ckpt': custom_infer_abs} - -model_id = SambertHifigan(os.path.join(model_dir, "orig_model"), **kwargs) - -inference = pipeline(task=Tasks.text_to_speech, model=model_id) - -# model_1 - -model_dir1 = os.path.abspath("./jay/pretrain_work_dir") - -custom_infer_abs1 = { - 'voice_name': - 'F7', - 'am_ckpt': - os.path.join(model_dir1, 'tmp_am', 'ckpt'), - 'am_config': - os.path.join(model_dir1, 'tmp_am', 'config.yaml'), - 'voc_ckpt': - os.path.join(model_dir1, 'orig_model', 'basemodel_16k', 'hifigan', 'ckpt'), - 'voc_config': - os.path.join(model_dir1, 'orig_model', 'basemodel_16k', 'hifigan', - 'config.yaml'), - 'audio_config': - os.path.join(model_dir1, 'data', 'audio_config.yaml'), - 'se_file': - os.path.join(model_dir1, 'data', 'se', 'se.npy') -} -kwargs1 = {'custom_ckpt': custom_infer_abs1} - -model_id1 = SambertHifigan(os.path.join(model_dir1, "orig_model"), **kwargs1) - -inference1 = pipeline(task=Tasks.text_to_speech, model=model_id1) - - -# functions - -def infer(text): - output = inference(input=text) - filename = str(random.randint(1, 1000000000000)) - - with open(filename + "myfile.wav", mode='bx') as f: - f.write(output["output_wav"]) - return filename + "myfile.wav" - -def infer1(text): - output = inference1(input=text) - filename = str(random.randint(1, 1000000000000)) - - with open(filename + "file.wav", mode='bx') as f: - f.write(output["output_wav"]) - return filename + "file.wav" - -# upsample - -import numpy as np -import torch -from hifi_gan_bwe import BandwidthExtender -from scipy.io.wavfile import write - -MAX_LENGTH = 600.0 - -model = BandwidthExtender.from_pretrained("hifi-gan-bwe-10-42890e3-vctk-48kHz") - -def extend(audio): - fs, x = audio - x = x[:int(MAX_LENGTH * fs)] - x = x.astype(np.float32) / 32767.0 - if len(x.shape) == 1: - x = x[:, np.newaxis] - - with torch.no_grad(): - y = np.stack([model(torch.from_numpy(x), fs) for x in x.T]).T - y = (y * 32767.0).astype(np.int16) - fs = int(model.sample_rate) - write("upsample.wav", fs, y) - - return "upsample.wav" - -# denoise - -def inference_denoise(audio): - voicefixer.restore(input=audio, # input wav file path - output="output.wav", # output wav file path - cuda=False, # whether to use gpu acceleration - mode = int(0)) # You can try out mode 0, 1 
to find out the best result - return 'output.wav' - - -app = gr.Blocks() - -with app: - gr.Markdown("#
🥳🎶🎡 - KanTTS Chinese Voice Cloning
          ") - gr.Markdown("##
🌊 - For more great apps, follow [xm火种堂](http://xmaigc.top); from Teacher Yang, a self-described social-anxiety sufferer 💕
          ") - - with gr.Row(): - with gr.Column(): - inp = gr.Textbox(lines=5, label="请填写您想要转换的中文文本") - with gr.Row(): - btn = gr.Button("使用定制AI钟老师的声音", variant="primary") - btn1 = gr.Button("使用AI周杰伦的声音", variant="primary") - with gr.Column(): - with gr.Row(): - out = gr.Audio(label="为您生成的专属音频") - out1 = gr.Audio(label="更高采样率的专属音频", type="filepath") - out2 = gr.Audio(label="降噪后的高采样率音频", type="filepath") - with gr.Row(): - btn2 = gr.Button("一键提高采样率") - btn3 = gr.Button("一键降噪") - - btn.click(fn=infer, inputs=[inp], outputs=[out]) - btn1.click(fn=infer1, inputs=[inp], outputs=[out]) - btn2.click(fn=extend, inputs=[out], outputs=[out1]) - btn3.click(fn=inference_denoise, inputs=[out1], outputs=[out2]) - - gr.Markdown("###
Note❗: Please do not generate content that could harm individuals or organizations; this program is for scientific research, study, and personal entertainment only.
          ") - gr.HTML(''' - - ''') -app.launch(show_error=True) \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/2012 End Of The World Movie Free NEW Download In Hindi Mp4 Free NEW.md b/spaces/quidiaMuxgu/Expedit-SAM/2012 End Of The World Movie Free NEW Download In Hindi Mp4 Free NEW.md deleted file mode 100644 index 2e11a07114af0ee573b03b20739c8ad0e5877fc8..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/2012 End Of The World Movie Free NEW Download In Hindi Mp4 Free NEW.md +++ /dev/null @@ -1,8 +0,0 @@ -
          -

Movies in HD is a free movie download site that offers its users many types of movies. There are more than 600,000 high-definition videos to download, spanning genres such as action, comedy, thriller, animation, sports, romance, family, fantasy, war, crime, and horror. Movies in HD updates its catalog very often, so you can access the latest movies almost instantly. Its interface is not the best, however: the site tends to be slow and shows a lot of ads, and the trial version has an expiration date.

          -

The site is a place where you can watch your favorite movies. You can filter videos by genre, duration, or star rating. After you choose the movies you want to watch, click the download button to start downloading them to your computer or another device. The movies come in MP4 format and are easy to convert to other video formats. With its intuitive design, you can enjoy your downloaded movies without delay.
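Since the downloads arrive as plain MP4 files, changing the container is straightforward with a general-purpose tool. Below is a minimal sketch that drives ffmpeg from Python's standard subprocess module; it assumes ffmpeg is installed and on your PATH, and the file names are placeholders for illustration, not files from any particular site:

```python
# Minimal sketch: remux a downloaded MP4 into an MKV container with ffmpeg.
# Assumes ffmpeg is installed and on PATH; "movie.mp4"/"movie.mkv" are
# placeholder names used only for illustration.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "movie.mp4", "-c", "copy", "movie.mkv"],  # "-c copy" remuxes without re-encoding
    check=True,  # raise CalledProcessError if ffmpeg exits non-zero
)
```

Because `-c copy` only rewraps the existing audio and video streams, the conversion is fast and lossless; drop it only if you actually need to transcode to a different codec.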

          -

2012 End of the World Movie Free Download in Hindi MP4 Free


Download https://geags.com/2uCryu



          -

Love movies but have limited download bandwidth? Let Me Love You is a great site that allows you to download movies for free. The site provides over a million movies in a wide range of genres and languages, and it has a clean, intuitive interface where you can browse the contents easily. After you download the movie you want, the site lets you stream the file on your computer, smartphone, or tablet. Users can also stream videos on a webpage by adding the video URL to their favorite sites.

          -

Hoopla is a free app for downloading movies. With a simple interface and easy navigation, users can find the movies they want quickly. To download movies, you have to log in to the Hoopla app; after that, it shows the list of movies you can watch, and clicking a movie starts the download. Hoopla also allows you to stream the movies you want to watch on your device.

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Free Cubase 5.1 Full Download With Crack UPDATED.md b/spaces/quidiaMuxgu/Expedit-SAM/Free Cubase 5.1 Full Download With Crack UPDATED.md deleted file mode 100644 index 0d505956e20c16034b2c5b329cb49d22ab42c33c..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Free Cubase 5.1 Full Download With Crack UPDATED.md +++ /dev/null @@ -1,6 +0,0 @@ -

Free Cubase 5.1 Full Download with Crack


          Download File 🗸 https://geags.com/2uCsHN



          -
          -Plus, free with your Mitchell UltraMate 7.1.371 subscription, access Toyota ... You can also download: PTC Creo Illustrate 7.1.0 (x64) + Crack ... 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Gta Vice City Highly Compressed 5mb Full Rar.md b/spaces/quidiaMuxgu/Expedit-SAM/Gta Vice City Highly Compressed 5mb Full Rar.md deleted file mode 100644 index aa4cc6481159c7f4b6b18ed18a3bf931703e7126..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Gta Vice City Highly Compressed 5mb Full Rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

GTA Vice City Highly Compressed 5MB Full RAR


          Download >>> https://geags.com/2uCr9p



          -
          -Resident Evil Download For Ppsspp Gta Vice City Iso For Ppsspp 2k15 ... 6 Iso Ppsspp Download Cso Android Emulator Rar Bngsgguxo3uhim Resident Evil 4 ... P Resident Evil 6 Highly Compressed Full Version Free Download is an to 5mb ... 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/models_onnx.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/models_onnx.py deleted file mode 100644 index e370d3736219568247a20a1ddf2f450b087bd329..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/models_onnx.py +++ /dev/null @@ -1,817 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer.infer_pack import modules -from lib.infer.infer_pack import attentions -from lib.infer.infer_pack.commons import get_padding -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer.infer_pack.commons import init_weights -import numpy as np -from lib.infer.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - 
def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = 
F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - 
SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
-                 add_noise_std=0.003, voiced_threshold=0)
-    sampling_rate: sampling rate in Hz
-    harmonic_num: number of harmonics above F0 (default: 0)
-    sine_amp: amplitude of sine source signal (default: 0.1)
-    add_noise_std: std of additive Gaussian noise (default: 0.003)
-        note that amplitude of noise in unvoiced is decided
-        by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
-    Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
-    F0_sampled (batchsize, length, 1)
-    Sine_source (batchsize, length, 1)
-    noise_source (batchsize, length, 1)
-    uv (batchsize, length, 1)
-    """
-
-    def __init__(
-        self,
-        sampling_rate,
-        harmonic_num=0,
-        sine_amp=0.1,
-        add_noise_std=0.003,
-        voiced_threshold=0,
-        is_half=True,
-    ):
-        super(SourceModuleHnNSF, self).__init__()
-
-        self.sine_amp = sine_amp
-        self.noise_std = add_noise_std
-        self.is_half = is_half
-        # to produce sine waveforms
-        self.l_sin_gen = SineGen(
-            sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshold
-        )
-
-        # to merge source harmonics into a single excitation
-        self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
-        self.l_tanh = torch.nn.Tanh()
-
-    def forward(self, x, upp=None):
-        sine_wavs, uv, _ = self.l_sin_gen(x, upp)
-        if self.is_half:
-            sine_wavs = sine_wavs.half()
-        sine_merge = self.l_tanh(self.l_linear(sine_wavs))
-        return sine_merge, None, None  # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
-    def __init__(
-        self,
-        initial_channel,
-        resblock,
-        resblock_kernel_sizes,
-        resblock_dilation_sizes,
-        upsample_rates,
-        upsample_initial_channel,
-        upsample_kernel_sizes,
-        gin_channels,
-        sr,
-        is_half=False,
-    ):
-        super(GeneratorNSF, self).__init__()
-        self.num_kernels = len(resblock_kernel_sizes)
-        self.num_upsamples = len(upsample_rates)
-
-        self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
-        self.m_source = SourceModuleHnNSF(
-            sampling_rate=sr, harmonic_num=0, is_half=is_half
-        )
-        self.noise_convs = nn.ModuleList()
-        self.conv_pre = Conv1d(
-            initial_channel, upsample_initial_channel, 7, 1, padding=3
-        )
-        resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
-        self.ups = nn.ModuleList()
-        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
-            c_cur = upsample_initial_channel // (2 ** (i + 1))
-            self.ups.append(
-                weight_norm(
-                    ConvTranspose1d(
-                        upsample_initial_channel // (2**i),
-                        upsample_initial_channel // (2 ** (i + 1)),
-                        k,
-                        u,
-                        padding=(k - u) // 2,
-                    )
-                )
-            )
-            if i + 1 < len(upsample_rates):
-                stride_f0 = np.prod(upsample_rates[i + 1 :])
-                self.noise_convs.append(
-                    Conv1d(
-                        1,
-                        c_cur,
-                        kernel_size=stride_f0 * 2,
-                        stride=stride_f0,
-                        padding=stride_f0 // 2,
-                    )
-                )
-            else:
-                self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
-        self.resblocks = nn.ModuleList()
-        for i in range(len(self.ups)):
-            ch = upsample_initial_channel // (2 ** (i + 1))
-            for j, (k, d) in enumerate(
-                zip(resblock_kernel_sizes, resblock_dilation_sizes)
-            ):
-                self.resblocks.append(resblock(ch, k, d))
-
-        self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
-        self.ups.apply(init_weights)
-
-        if gin_channels != 0:
-            self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
-        self.upp = np.prod(upsample_rates)
-
-    def forward(self, x, f0, g=None):
-        har_source, noi_source, uv = self.m_source(f0, self.upp)
-        har_source = har_source.transpose(1, 2)
-        x = self.conv_pre(x)
-        if g is not None:
-            x = x + self.cond(g)
-
-        for i in range(self.num_upsamples):
-            x = 
F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - version, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if version == "v1": - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = 
self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - 
norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/radames/Real-Time-Latent-Consistency-Model/canny_gpu.py b/spaces/radames/Real-Time-Latent-Consistency-Model/canny_gpu.py deleted file mode 100644 index be6c2f75ef6554a0122f4ebd96301080a8e24303..0000000000000000000000000000000000000000 --- a/spaces/radames/Real-Time-Latent-Consistency-Model/canny_gpu.py +++ /dev/null @@ -1,44 +0,0 @@ -import torch -import torch.nn as nn -from torchvision.transforms import ToTensor, ToPILImage -from PIL import Image - -class SobelOperator(nn.Module): - def __init__(self, device="cuda"): - super(SobelOperator, self).__init__() - self.device = device - self.edge_conv_x = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False).to( - self.device - ) - self.edge_conv_y = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False).to( - self.device - ) - - sobel_kernel_x = torch.tensor( - [[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]], device=self.device - ) - sobel_kernel_y = torch.tensor( - [[-1.0, -2.0, -1.0], [0.0, 0.0, 0.0], [1.0, 2.0, 1.0]], device=self.device - ) - - self.edge_conv_x.weight = nn.Parameter(sobel_kernel_x.view((1, 1, 3, 3))) - self.edge_conv_y.weight = nn.Parameter(sobel_kernel_y.view((1, 1, 3, 3))) - - @torch.no_grad() - def forward(self, image: Image.Image, low_threshold: float, high_threshold: float): - # Convert PIL image to PyTorch tensor - image_gray = image.convert("L") - image_tensor = ToTensor()(image_gray).unsqueeze(0).to(self.device) - - # Compute gradients - edge_x = self.edge_conv_x(image_tensor) - edge_y = self.edge_conv_y(image_tensor) - edge = torch.sqrt(edge_x**2 + edge_y**2) - - # Apply thresholding - edge = edge / edge.max() # Normalize to 0-1 - edge[edge >= high_threshold] = 1.0 - edge[edge <= low_threshold] = 0.0 - - # Convert the result back to a PIL image - return ToPILImage()(edge.squeeze(0).cpu()) diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Gta 3 Ultimate Trainer Free Download For Pcl A Must-Have for Fans of the Classic Game.md b/spaces/raedeXanto/academic-chatgpt-beta/Gta 3 Ultimate Trainer Free Download For Pcl A Must-Have for Fans of the Classic Game.md deleted file mode 100644 index 4fb18831bd6c8920f800a9fe51e6184ed7d94815..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Gta 3 Ultimate Trainer Free Download For Pcl A Must-Have for Fans of the Classic Game.md +++ /dev/null @@ -1,150 +0,0 @@ - -

          GTA 3 Ultimate Trainer Free Download for PC

          -

          If you are a fan of Grand Theft Auto III, one of the most popular and influential games of all time, you might be interested in downloading GTA 3 Ultimate Trainer for PC. This is a mod that adds many features and options to the game, allowing you to customize your gameplay and have more fun. In this article, we will explain what GTA 3 Ultimate Trainer is, why you should download it, and where you can find it.

          -

          Gta 3 Ultimate Trainer Free Download For Pcl


          DOWNLOAD ····· https://tinourl.com/2uL2vk



          -

          What is GTA 3 Ultimate Trainer?

          -

          GTA 3 Ultimate Trainer is a modification for Grand Theft Auto III that adds a menu with various cheats, hacks, and options to the game. You can access this menu by pressing F1 during the game. With GTA 3 Ultimate Trainer, you can do things like:

          -
            -
          • Change your character model and clothes
          • -
          • Spawn any vehicle or weapon
          • -
          • Teleport to any location on the map
          • -
          • Adjust the weather and time of day
          • -
          • Give yourself unlimited health, armor, ammo, and money
          • -
          • Make yourself invincible, invisible, or super fast
          • -
          • Enable or disable police, gangs, traffic, pedestrians, or radio
          • -
          • Alter the gravity, speed, or handling of vehicles
          • -
          • Create explosions, fires, or riots
          • -
          • And much more!
          • -
          -

          Features of GTA 3 Ultimate Trainer

          -

          GTA 3 Ultimate Trainer has many features that make it one of the best mods for GTA 3. Some of these features are:

          -
            -
          • It is compatible with any version of GTA 3, including Steam and Rockstar Launcher versions
          • -
          • It does not require any installation or modification of game files
          • -
          • It has a user-friendly interface with easy-to-use buttons and sliders
          • -
          • It has a save and load option that lets you save your settings and load them later
          • -
          • It has a hotkey option that lets you assign keyboard shortcuts to your favorite cheats and options
          • -
          • It has a help option that shows you the description and usage of each cheat and option
          • -
          • It has a backup option that lets you restore your original game settings if something goes wrong
          • -
          -

          How to install GTA 3 Ultimate Trainer

          -

          Installing GTA 3 Ultimate Trainer is very simple and does not require any technical skills. All you have to do is:

          -
            -
1. Download the GTA 3 Ultimate Trainer file from a reliable source (see below)
2. Extract the file using a program like WinRAR or 7-Zip
3. Copy the extracted file (gta3.exe) to your GTA 3 folder (usually C:\Program Files\Rockstar Games\GTA III)
4. Run the file (gta3.exe) as administrator
5. Enjoy!
          -
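If you prefer to script the copy in step 3, here is a minimal Python sketch. It is only an illustration: the download path is a placeholder, and the install folder is the default one mentioned above, so adjust both to match your system.

```python
import shutil
from pathlib import Path

# Placeholder paths -- point these at your extracted download and your install folder.
extracted_exe = Path(r"C:\Downloads\gta3-ultimate-trainer\gta3.exe")
gta3_folder = Path(r"C:\Program Files\Rockstar Games\GTA III")
target = gta3_folder / "gta3.exe"

# Keep a backup of the original executable before overwriting it,
# in line with the backup tip later in this article.
if target.exists():
    shutil.copy2(target, gta3_folder / "gta3.exe.bak")

shutil.copy2(extracted_exe, target)
print(f"Trainer copied to {target}")
```

You still need to run the copied file as administrator yourself (step 4).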

          Why download GTA 3 Ultimate Trainer?

          -

          GTA 3 Ultimate Trainer is a great mod for anyone who loves GTA 3 and wants to experience it in a new way. With GTA 3 Ultimate Trainer, you can:

          -
            -
          • Enhance your gameplay by adding more variety and options to the game
          • -
          • Cheat your way through difficult missions or challenges
          • -
          • Experiment with different scenarios and outcomes
          • -
          • Create your own fun and chaos in the game world
          • -
          • Redefine the rules and limits of the game
          • -
          -

          Benefits of using GTA 3 Ultimate Trainer

          -

          GTA 3 Ultimate Trainer has many benefits that make it worth downloading. Some of these benefits are:

          -
            -
          • It is free and safe to use
          • -
          • It does not affect your game performance or stability
          • -
          • It does not interfere with your game progress or achievements
          • -
          • It does not require any internet connection or registration
          • -
          • It can be easily enabled or disabled at any time
          • -
          -

          Tips and tricks for using GTA 3 Ultimate Trainer

          -

          To get the most out of GTA 3 Ultimate Trainer, here are some tips and tricks that you should know:

          - - - - - - - - -
Tips and tricks | Description
--- | ---
Use the backup option before changing any game settings. | This will allow you to restore your original game settings if something goes wrong or if you want to play normally.
Use the save and load option to save your favorite settings. | This will allow you to quickly load your preferred cheats and options without having to set them up every time.
Use the hotkey option to assign keyboard shortcuts to your favorite cheats and options. | This will allow you to activate or deactivate them faster and easier without opening the menu.
Use the help option to learn more about each cheat and option. | This will show you the description and usage of each cheat and option so you can use them correctly.
Be careful when using some cheats and options. | Some cheats and options may have unintended consequences or side effects that may affect your gameplay or cause glitches. For example, spawning too many vehicles or pedestrians may cause lag or crashes; changing your character model may prevent you from entering some buildings or vehicles; enabling riots may make some missions impossible; etc.
Have fun! | The most important tip is to have fun with GTA 3 Ultimate Trainer. There is no right or wrong way to use it. You can use it however you want and create your own adventures in the game.
          -

          Where to download GTA 3 Ultimate Trainer?

          -

          GTA 3 Ultimate Trainer is available on many websites that offer mods for GTA games. However, not all websites are trustworthy or reliable. Some websites may have fake or outdated versions of GTA 3 Ultimate Trainer that may not work properly or may contain viruses or malware. Therefore, you should be careful when choosing where to download GTA 3 Ultimate Trainer from.

          -


          -

          Sources of GTA 3 Ultimate Trainer

          -

          To help you find a safe and reliable source of GTA 3 Ultimate Trainer, here are some websites that we recommend:

          - -

          How to verify the authenticity of GTA 3 Ultimate Trainer?

          -

          To verify that you have downloaded a genuine version of GTA 3 Ultimate Trainer that works properly and does not contain any viruses or malware, here are some steps that you should follow:

          -
1. Check the file name: The original file name of GTA 3 Ultimate Trainer is **gta3.exe**. If the file name is different from that, it may be a fake or modified file.
2. Check the file extension: The original file extension of GTA 3 Ultimate Trainer is **.exe**. If the file extension is different from that, it may be a fake or malicious file.
3. Check the file properties: You can right-click on the file and select "Properties" to see more information about the file. You should look for the following details:
   - File version: The original file version of GTA 3 Ultimate Trainer is **1.0.0.0**. If the file version is different from that, it may be a fake or outdated file.
   - Product name: The original product name of GTA 3 Ultimate Trainer is **GTA III Ultimate Trainer**. If the product name is different from that, it may be a fake or unrelated file.
   - Product version: The original product version of GTA 3 Ultimate Trainer is **1.0**. If the product version is different from that, it may be a fake or outdated file.
   - Description: The original description of GTA 3 Ultimate Trainer is **GTA III Ultimate Trainer v1 by LithJoe**. If the description is different from that, it may be a fake or unrelated file.
4. Scan the file with an antivirus program: You can use an antivirus program like Windows Defender, Avast, or Malwarebytes to scan the file for any viruses or malware. If the scan detects any threats, you should delete the file immediately and download it from another source.
          -
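The name and extension checks above are easy to automate. The following minimal Python sketch covers steps 1 and 2 and also prints a SHA-256 fingerprint you can compare against a copy you already trust; the file path is a placeholder.

```python
import hashlib
from pathlib import Path

trainer = Path(r"C:\Downloads\gta3.exe")  # placeholder path to the downloaded file

# Steps 1 and 2: the genuine file is named gta3.exe and has an .exe extension.
print("Name OK:", trainer.name == "gta3.exe")
print("Extension OK:", trainer.suffix == ".exe")

# A SHA-256 fingerprint lets you compare this download against a known-good copy.
sha256 = hashlib.sha256(trainer.read_bytes()).hexdigest()
print("SHA-256:", sha256)
```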

          Conclusion

          -

GTA 3 Ultimate Trainer is a mod that adds many features and options to Grand Theft Auto III, allowing you to customize your gameplay and have more fun. You can download it for PC from various websites that offer mods for GTA games, but you should be careful and verify the authenticity of the file before using it. The trainer is easy to install and use, and it has many benefits that make it worth downloading: you can enhance your gameplay, cheat your way through difficult missions or challenges, experiment with different scenarios and outcomes, create your own fun and chaos in the game world, and redefine the rules and limits of the game.

          -

          FAQs

          -

          Here are some frequently asked questions about GTA 3 Ultimate Trainer:

          -
            -
1. Is GTA 3 Ultimate Trainer legal?

            GTA 3 Ultimate Trainer is not illegal, but it may violate the terms of service or end-user license agreement of GTA 3 or Rockstar Games. Therefore, you should use GTA 3 Ultimate Trainer at your own risk and discretion.

            -
2. Does GTA 3 Ultimate Trainer work online?

            GTA 3 Ultimate Trainer does not work online and is not intended for online use. It may cause problems or conflicts with other players or servers if you try to use it online. Therefore, you should only use GTA 3 Ultimate Trainer in single-player mode.

            -
3. Does GTA 3 Ultimate Trainer affect my game progress or achievements?

            GTA 3 Ultimate Trainer does not affect your game progress or achievements as long as you do not save your game while using it. If you save your game while using GTA 3 Ultimate Trainer, your game progress or achievements may be corrupted or disabled. Therefore, you should only save your game before or after using GTA 3 Ultimate Trainer.

            -
4. Can I use GTA 3 Ultimate Trainer with other mods?

GTA 3 Ultimate Trainer may or may not work with other mods, depending on the compatibility of the mods involved. Some mods may work well with GTA 3 Ultimate Trainer, while others may cause problems or conflicts. Therefore, you should test each mod individually with GTA 3 Ultimate Trainer before using them together.

            -
5. How do I uninstall GTA 3 Ultimate Trainer?

To uninstall GTA 3 Ultimate Trainer, you can simply delete the file (gta3.exe) from your GTA 3 folder (usually C:\Program Files\Rockstar Games\GTA III). You can also use the backup option in GTA 3 Ultimate Trainer to restore your original game settings if you want to play normally. A short script version of this is sketched after the FAQ list.

            -
          -
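As a companion to the install sketch earlier in this article, here is a minimal Python snippet for the uninstall answer above. It removes the trainer and, if you made one, restores the backed-up executable; both paths are the defaults assumed earlier, so adjust them to your system.

```python
from pathlib import Path

gta3_folder = Path(r"C:\Program Files\Rockstar Games\GTA III")
trainer = gta3_folder / "gta3.exe"
backup = gta3_folder / "gta3.exe.bak"

# Delete the trainer executable, then restore the original if a backup exists.
trainer.unlink(missing_ok=True)
if backup.exists():
    backup.rename(trainer)
    print("Original executable restored.")
```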

          -
          -
          \ No newline at end of file diff --git a/spaces/ramkamal2000/voice-conversion-ddp/wavlm/__init__.py b/spaces/ramkamal2000/voice-conversion-ddp/wavlm/__init__.py deleted file mode 100644 index 03f8908bb595d7c79020ffd9dfcd4ddebe8e8a1e..0000000000000000000000000000000000000000 --- a/spaces/ramkamal2000/voice-conversion-ddp/wavlm/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from wavlm.WavLM import WavLM, WavLMConfig \ No newline at end of file diff --git a/spaces/realambuj/Image_Classifier_using_RESNET50/main.py b/spaces/realambuj/Image_Classifier_using_RESNET50/main.py deleted file mode 100644 index ac61c9fe65e88b650fae93b084eaebc3624396fa..0000000000000000000000000000000000000000 --- a/spaces/realambuj/Image_Classifier_using_RESNET50/main.py +++ /dev/null @@ -1,20 +0,0 @@ -from transformers import AutoImageProcessor, ResNetForImageClassification -import torch -from datasets import load_dataset -import joblib - -dataset = load_dataset("huggingface/cats-image") -image = dataset["test"]["image"][0] -print(image) - -processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50") - -loaded_model = joblib.load("model.sav") -inputs = processor(image, return_tensors="pt") - -with torch.no_grad(): - logits = loaded_model(**inputs).logits - -# model predicts one of the 1000 ImageNet classes -predicted_label = logits.argmax(-1).item() -print(loaded_model.config.id2label[predicted_label]) diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cadence-Orcad-10.5-Portable.rar Draiver Anale Heart.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cadence-Orcad-10.5-Portable.rar Draiver Anale Heart.md deleted file mode 100644 index b1d11c488656a6d5e77084715472436a701a08e7..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cadence-Orcad-10.5-Portable.rar Draiver Anale Heart.md +++ /dev/null @@ -1,13 +0,0 @@ -

          Cadence-Orcad-10.5-Portable.rar draiver anale heart


          Download Zip ===> https://urlgoal.com/2uCMyK



          -
          -
          -
          -

diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Dr Llaila Afrika Melanin Pdf Free [BETTER].md deleted file mode 100644 index a292974fc3a237d3347cc3d817d93979c4e4fdb1..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Dr Llaila Afrika Melanin Pdf Free [BETTER].md +++ /dev/null @@ -1,91 +0,0 @@

          -

          Summary and Analysis of Dr Llaila Afrika's PDF on Melanin

          - -

          Dr Llaila Afrika is a renowned holistic health practitioner and author who has written several books on African health and wellness. One of his most popular books is Melanin: What Makes Black People Black, which is available as a PDF for free download on the internet.

          -

          dr llaila afrika melanin pdf free


          Download Zip >>>>> https://urlgoal.com/2uCMIY



          - -

          In this book, Dr Afrika explains the nature, function, and importance of melanin, the biochemical substance that gives black people their distinctive physical, mental, emotional, and spiritual characteristics. He also reveals how melanin is being destroyed by various factors, such as poor nutrition, environmental toxins, stress, drugs, and white supremacy. He provides practical tips and advice on how to protect and nourish melanin, and how to use it to achieve optimal health and liberation.

          - -

          Dr Afrika's book is a concise and informative guide that covers various aspects of melanin, such as its history, chemistry, physiology, psychology, spirituality, politics, and economics. He uses simple language and illustrations to make complex concepts easy to understand. He also cites scientific studies and historical facts to support his claims and arguments. He challenges the myths and lies that have been spread about melanin and black people by the dominant white culture. He empowers black people to reclaim their identity and heritage as melanin-dominant beings.

          - -

          Dr Afrika's book has received positive reviews and testimonials from readers and experts who have found it useful and enlightening. It has also sparked discussions and debates among scholars and activists who have different perspectives and opinions on melanin and its role in human evolution and development. Some of the questions and issues that have been raised by Dr Afrika's book are:

          - -
            -
          • What is the difference between sulfur-based melanin and selenium-based melanin?
          • -
          • How does melanin affect the brain, nervous system, endocrine system, immune system, etc.?
          • -
          • How does melanin influence personality, behavior, emotions, intelligence, creativity, etc.?
          • -
          • How does melanin relate to spirituality, consciousness, soul, etc.?
          • -
          • How does melanin affect social relations, culture, politics, economics, etc.?
          • -
          • How can black people enhance their melanin production and utilization?
          • -
          • How can black people resist the attacks and threats to their melanin by the white power structure?
          • -
          • How can black people use their melanin to achieve freedom and justice?
          • -
          - -

Dr Afrika's book is a valuable resource for anyone who wants to learn more about melanin and its significance for black people. It is also a call to action for black people to take charge of their health and destiny by understanding and activating their black potential.

          -

          What are the Benefits of Dr Llaila Afrika's PDF on Melanin?

          - -

          Dr Llaila Afrika's PDF on melanin is a beneficial resource for anyone who wants to learn more about the biochemical substance that makes black people black. Some of the benefits of reading this PDF are:

          - -
            -
          • It provides a comprehensive and concise description of melanin, its history, chemistry, physiology, psychology, spirituality, politics, and economics.
          • -
          • It reveals how melanin drives physical, mental, emotional, and spiritual life, and how it influences personality, behavior, emotions, intelligence, creativity, etc.
          • -
          • It exposes how melanin is being destroyed by various factors, such as poor nutrition, environmental toxins, stress, drugs, and white supremacy.
          • -
          • It offers practical tips and advice on how to protect and nourish melanin, and how to use it to achieve optimal health and liberation.
          • -
          • It empowers black people to reclaim their identity and heritage as melanin-dominant beings.
          • -
          • It challenges the myths and lies that have been spread about melanin and black people by the dominant white culture.
          • -
          - -

          What are the Tips for Reading Dr Llaila Afrika's PDF on Melanin?

          - -

          Dr Llaila Afrika's PDF on melanin is a valuable resource that can help you understand and activate your black potential. However, to get the most out of this PDF, you need to follow some tips while reading it. Here are some of them:

          -

          - -
            -
          • Read the PDF with an open mind and a critical eye. Do not accept everything that Dr Afrika says as the absolute truth. Do your own research and verification.
          • -
          • Read the PDF with a positive attitude and a receptive heart. Do not let your emotions or prejudices cloud your judgment or understanding.
          • -
          • Read the PDF with a practical purpose and a clear goal. Do not just read it for information or entertainment. Apply what you learn to your life and situation.
          • -
          • Read the PDF with a supportive network and a constructive feedback. Do not read it alone or in isolation. Share it with others who are interested or involved in melanin studies. Discuss it with them and learn from their insights and experiences.
          • -
          - -

          Conclusion

          - -

In this article, we have summarized and analyzed Dr Llaila Afrika's PDF on melanin, and discussed the benefits of reading it along with tips for getting the most out of it. We have seen that Dr Afrika's PDF on melanin is a useful and enlightening guide that covers various aspects of melanin, such as its nature, function, importance, destruction, protection, nourishment, and activation. It is also a call to action for black people to take charge of their health and destiny by understanding and activating their black potential.

          - -

          We hope that this article has been helpful for you. If you want to learn more about Dr Llaila Afrika's PDF on melanin or other resources on melanin studies, you can visit our website or subscribe to our newsletter. Thank you for reading.

          -

          How to Download Dr Llaila Afrika's PDF on Melanin for Free?

          - -

          If you want to download Dr Llaila Afrika's PDF on melanin for free, you can do so by visiting the Internet Archive website. This is a non-profit digital library that offers free access to millions of books, movies, music, and other media. Here are the steps to download Dr Afrika's PDF on melanin for free:

          - -
            -
1. Go to the Internet Archive website at https://archive.org/.
2. Type "Dr Llaila Afrika melanin" in the search box and click on the magnifying glass icon.
3. Select the result that says "Melanin What Makes Black People Black By Llaila Afrika PDF".
4. On the right side of the page, you will see various options to download the PDF. You can choose to download it as a single PDF file, as a ZIP file, or as a torrent file.
5. Click on the option that suits your preference and save the file to your device.
          - -
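If you prefer to script step 5, the short Python sketch below downloads the file with the requests library. The URL is a placeholder: paste in the direct link of whichever download option you chose.

```python
import requests

# Placeholder URL -- replace with the direct link copied from the download options.
url = "https://archive.org/download/some-item-id/melanin.pdf"

response = requests.get(url, timeout=60)
response.raise_for_status()  # fail loudly on a bad link instead of saving an error page

with open("melanin.pdf", "wb") as f:
    f.write(response.content)
print(f"Saved {len(response.content)} bytes to melanin.pdf")
```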

          How to Share Dr Llaila Afrika's PDF on Melanin with Others?

          - -

          If you want to share Dr Llaila Afrika's PDF on melanin with others, you can do so by using various methods. Here are some of them:

          - -
            -
          • Email: You can attach the PDF file to an email and send it to your friends, family, colleagues, or anyone who might be interested in reading it.
          • -
          • Social media: You can post the link to the PDF file on your social media platforms, such as Facebook, Twitter, Instagram, etc. You can also use hashtags or tags to reach a wider audience.
          • -
          • Messaging apps: You can send the PDF file or the link to it via messaging apps, such as WhatsApp, Telegram, Signal, etc. You can also create groups or channels to discuss the PDF with others.
          • -
          • Cloud storage: You can upload the PDF file to a cloud storage service, such as Google Drive, Dropbox, OneDrive, etc. You can then share the link to the file with others who can access it online or download it.
          • -
          • Print: You can print the PDF file and distribute it physically to others who might not have access to digital devices or internet.
          • -
          - -

          Conclusion

          - -

In this article, we have discussed how to download Dr Afrika's PDF on melanin and how to share it with others. We have seen that it is a free and accessible resource that can help you learn more about the biochemical substance that makes black people black, and a call to action for black people to take charge of their health and destiny by understanding and activating their black potential.

          - -

          We hope that this article has been helpful for you. If you want to learn more about Dr Llaila Afrika's PDF on melanin or other resources on melanin studies, you can visit our website or subscribe to our newsletter. Thank you for reading.

          -

          -
          -
          \ No newline at end of file diff --git a/spaces/riccorl/relik-entity-linking/relik/reader/data/relik_reader_data.py b/spaces/riccorl/relik-entity-linking/relik/reader/data/relik_reader_data.py deleted file mode 100644 index 3c65646f99d37cdcf03ab7005c83eb0069da168c..0000000000000000000000000000000000000000 --- a/spaces/riccorl/relik-entity-linking/relik/reader/data/relik_reader_data.py +++ /dev/null @@ -1,965 +0,0 @@ -import logging -from typing import ( - Any, - Callable, - Dict, - Generator, - Iterable, - Iterator, - List, - NamedTuple, - Optional, - Tuple, - Union, -) - -import numpy as np -import torch -from torch.utils.data import IterableDataset -from tqdm import tqdm -from transformers import AutoTokenizer, PreTrainedTokenizer - -from relik.reader.data.relik_reader_data_utils import ( - add_noise_to_value, - batchify, - chunks, - flatten, -) -from relik.reader.data.relik_reader_sample import ( - RelikReaderSample, - load_relik_reader_samples, -) -from relik.reader.utils.special_symbols import NME_SYMBOL - -logger = logging.getLogger(__name__) - - -def preprocess_dataset( - input_dataset: Iterable[dict], - transformer_model: str, - add_topic: bool, -) -> Iterable[dict]: - tokenizer = AutoTokenizer.from_pretrained(transformer_model) - for dataset_elem in tqdm(input_dataset, desc="Preprocessing input dataset"): - if len(dataset_elem["tokens"]) == 0: - print( - f"Dataset element with doc id: {dataset_elem['doc_id']}", - f"and offset {dataset_elem['offset']} does not contain any token", - "Skipping it", - ) - continue - - new_dataset_elem = dict( - doc_id=dataset_elem["doc_id"], - offset=dataset_elem["offset"], - ) - - tokenization_out = tokenizer( - dataset_elem["tokens"], - return_offsets_mapping=True, - add_special_tokens=False, - ) - - window_tokens = tokenization_out.input_ids - window_tokens = flatten(window_tokens) - - offsets_mapping = [ - [ - ( - ss + dataset_elem["token2char_start"][str(i)], - se + dataset_elem["token2char_start"][str(i)], - ) - for ss, se in tokenization_out.offset_mapping[i] - ] - for i in range(len(dataset_elem["tokens"])) - ] - - offsets_mapping = flatten(offsets_mapping) - - assert len(offsets_mapping) == len(window_tokens) - - window_tokens = ( - [tokenizer.cls_token_id] + window_tokens + [tokenizer.sep_token_id] - ) - - topic_offset = 0 - if add_topic: - topic_tokens = tokenizer( - dataset_elem["doc_topic"], add_special_tokens=False - ).input_ids - topic_offset = len(topic_tokens) - new_dataset_elem["topic_tokens"] = topic_offset - window_tokens = window_tokens[:1] + topic_tokens + window_tokens[1:] - - new_dataset_elem.update( - dict( - tokens=window_tokens, - token2char_start={ - str(i): s - for i, (s, _) in enumerate(offsets_mapping, start=topic_offset) - }, - token2char_end={ - str(i): e - for i, (_, e) in enumerate(offsets_mapping, start=topic_offset) - }, - window_candidates=dataset_elem["window_candidates"], - window_candidates_scores=dataset_elem.get("window_candidates_scores"), - ) - ) - - if "window_labels" in dataset_elem: - window_labels = [ - (s, e, l.replace("_", " ")) for s, e, l in dataset_elem["window_labels"] - ] - - new_dataset_elem["window_labels"] = window_labels - - if not all( - [ - s in new_dataset_elem["token2char_start"].values() - for s, _, _ in new_dataset_elem["window_labels"] - ] - ): - print( - "Mismatching token start char mapping with labels", - new_dataset_elem["token2char_start"], - new_dataset_elem["window_labels"], - dataset_elem["tokens"], - ) - continue - - if not all( - [ - e in 
new_dataset_elem["token2char_end"].values() - for _, e, _ in new_dataset_elem["window_labels"] - ] - ): - print( - "Mismatching token end char mapping with labels", - new_dataset_elem["token2char_end"], - new_dataset_elem["window_labels"], - dataset_elem["tokens"], - ) - continue - - yield new_dataset_elem - - -def preprocess_sample( - relik_sample: RelikReaderSample, - tokenizer, - lowercase_policy: float, - add_topic: bool = False, -) -> None: - if len(relik_sample.tokens) == 0: - return - - if lowercase_policy > 0: - lc_tokens = np.random.uniform(0, 1, len(relik_sample.tokens)) < lowercase_policy - relik_sample.tokens = [ - t.lower() if lc else t for t, lc in zip(relik_sample.tokens, lc_tokens) - ] - - tokenization_out = tokenizer( - relik_sample.tokens, - return_offsets_mapping=True, - add_special_tokens=False, - ) - - window_tokens = tokenization_out.input_ids - window_tokens = flatten(window_tokens) - - offsets_mapping = [ - [ - ( - ss + relik_sample.token2char_start[str(i)], - se + relik_sample.token2char_start[str(i)], - ) - for ss, se in tokenization_out.offset_mapping[i] - ] - for i in range(len(relik_sample.tokens)) - ] - - offsets_mapping = flatten(offsets_mapping) - - assert len(offsets_mapping) == len(window_tokens) - - window_tokens = [tokenizer.cls_token_id] + window_tokens + [tokenizer.sep_token_id] - - topic_offset = 0 - if add_topic: - topic_tokens = tokenizer( - relik_sample.doc_topic, add_special_tokens=False - ).input_ids - topic_offset = len(topic_tokens) - relik_sample.topic_tokens = topic_offset - window_tokens = window_tokens[:1] + topic_tokens + window_tokens[1:] - - relik_sample._d.update( - dict( - tokens=window_tokens, - token2char_start={ - str(i): s - for i, (s, _) in enumerate(offsets_mapping, start=topic_offset) - }, - token2char_end={ - str(i): e - for i, (_, e) in enumerate(offsets_mapping, start=topic_offset) - }, - ) - ) - - if "window_labels" in relik_sample._d: - relik_sample.window_labels = [ - (s, e, l.replace("_", " ")) for s, e, l in relik_sample.window_labels - ] - - -class TokenizationOutput(NamedTuple): - input_ids: torch.Tensor - attention_mask: torch.Tensor - token_type_ids: torch.Tensor - prediction_mask: torch.Tensor - special_symbols_mask: torch.Tensor - - -class RelikDataset(IterableDataset): - def __init__( - self, - dataset_path: Optional[str], - materialize_samples: bool, - transformer_model: Union[str, PreTrainedTokenizer], - special_symbols: List[str], - shuffle_candidates: Optional[Union[bool, float]] = False, - for_inference: bool = False, - noise_param: float = 0.1, - sorting_fields: Optional[str] = None, - tokens_per_batch: int = 2048, - batch_size: int = None, - max_batch_size: int = 128, - section_size: int = 50_000, - prebatch: bool = True, - random_drop_gold_candidates: float = 0.0, - use_nme: bool = True, - max_subwords_per_candidate: bool = 22, - mask_by_instances: bool = False, - min_length: int = 5, - max_length: int = 2048, - model_max_length: int = 1000, - split_on_cand_overload: bool = True, - skip_empty_training_samples: bool = False, - drop_last: bool = False, - samples: Optional[Iterator[RelikReaderSample]] = None, - lowercase_policy: float = 0.0, - **kwargs, - ): - super().__init__(**kwargs) - self.dataset_path = dataset_path - self.materialize_samples = materialize_samples - self.samples: Optional[List[RelikReaderSample]] = None - if self.materialize_samples: - self.samples = list() - - if isinstance(transformer_model, str): - self.tokenizer = self._build_tokenizer(transformer_model, special_symbols) - else: - 
self.tokenizer = transformer_model - self.special_symbols = special_symbols - self.shuffle_candidates = shuffle_candidates - self.for_inference = for_inference - self.noise_param = noise_param - self.batching_fields = ["input_ids"] - self.sorting_fields = ( - sorting_fields if sorting_fields is not None else self.batching_fields - ) - - self.tokens_per_batch = tokens_per_batch - self.batch_size = batch_size - self.max_batch_size = max_batch_size - self.section_size = section_size - self.prebatch = prebatch - - self.random_drop_gold_candidates = random_drop_gold_candidates - self.use_nme = use_nme - self.max_subwords_per_candidate = max_subwords_per_candidate - self.mask_by_instances = mask_by_instances - self.min_length = min_length - self.max_length = max_length - self.model_max_length = ( - model_max_length - if model_max_length < self.tokenizer.model_max_length - else self.tokenizer.model_max_length - ) - - # retrocompatibility workaround - self.transformer_model = ( - transformer_model - if isinstance(transformer_model, str) - else transformer_model.name_or_path - ) - self.split_on_cand_overload = split_on_cand_overload - self.skip_empty_training_samples = skip_empty_training_samples - self.drop_last = drop_last - self.lowercase_policy = lowercase_policy - self.samples = samples - - def _build_tokenizer(self, transformer_model: str, special_symbols: List[str]): - return AutoTokenizer.from_pretrained( - transformer_model, - additional_special_tokens=[ss for ss in special_symbols], - add_prefix_space=True, - ) - - @property - def fields_batcher(self) -> Dict[str, Union[None, Callable[[list], Any]]]: - fields_batchers = { - "input_ids": lambda x: batchify( - x, padding_value=self.tokenizer.pad_token_id - ), - "attention_mask": lambda x: batchify(x, padding_value=0), - "token_type_ids": lambda x: batchify(x, padding_value=0), - "prediction_mask": lambda x: batchify(x, padding_value=1), - "global_attention": lambda x: batchify(x, padding_value=0), - "token2word": None, - "sample": None, - "special_symbols_mask": lambda x: batchify(x, padding_value=False), - "start_labels": lambda x: batchify(x, padding_value=-100), - "end_labels": lambda x: batchify(x, padding_value=-100), - "predictable_candidates_symbols": None, - "predictable_candidates": None, - "patch_offset": None, - "optimus_labels": None, - } - - if "roberta" in self.transformer_model: - del fields_batchers["token_type_ids"] - - return fields_batchers - - def _build_input_ids( - self, sentence_input_ids: List[int], candidates_input_ids: List[List[int]] - ) -> List[int]: - return ( - [self.tokenizer.cls_token_id] - + sentence_input_ids - + [self.tokenizer.sep_token_id] - + flatten(candidates_input_ids) - + [self.tokenizer.sep_token_id] - ) - - def _get_special_symbols_mask(self, input_ids: torch.Tensor) -> torch.Tensor: - special_symbols_mask = input_ids >= ( - len(self.tokenizer) - len(self.special_symbols) - ) - special_symbols_mask[0] = True - return special_symbols_mask - - def _build_tokenizer_essentials( - self, input_ids, original_sequence, sample - ) -> TokenizationOutput: - input_ids = torch.tensor(input_ids, dtype=torch.long) - attention_mask = torch.ones_like(input_ids) - - total_sequence_len = len(input_ids) - predictable_sentence_len = len(original_sequence) - - # token type ids - token_type_ids = torch.cat( - [ - input_ids.new_zeros( - predictable_sentence_len + 2 - ), # original sentence bpes + CLS and SEP - input_ids.new_ones(total_sequence_len - predictable_sentence_len - 2), - ] - ) - - # prediction mask -> boolean 
on tokens that are predictable - - prediction_mask = torch.tensor( - [1] - + ([0] * predictable_sentence_len) - + ([1] * (total_sequence_len - predictable_sentence_len - 1)) - ) - - # add topic tokens to the prediction mask so that they cannot be predicted - # or optimized during training - topic_tokens = getattr(sample, "topic_tokens", None) - if topic_tokens is not None: - prediction_mask[1 : 1 + topic_tokens] = 1 - - # If mask by instances is active the prediction mask is applied to everything - # that is not indicated as an instance in the training set. - if self.mask_by_instances: - char_start2token = { - cs: int(tok) for tok, cs in sample.token2char_start.items() - } - char_end2token = {ce: int(tok) for tok, ce in sample.token2char_end.items()} - instances_mask = torch.ones_like(prediction_mask) - for _, span_info in sample.instance_id2span_data.items(): - span_info = span_info[0] - token_start = char_start2token[span_info[0]] + 1 # +1 for the CLS - token_end = char_end2token[span_info[1]] + 1 # +1 for the CLS - instances_mask[token_start : token_end + 1] = 0 - - prediction_mask += instances_mask - prediction_mask[prediction_mask > 1] = 1 - - assert len(prediction_mask) == len(input_ids) - - # special symbols mask - special_symbols_mask = self._get_special_symbols_mask(input_ids) - - return TokenizationOutput( - input_ids, - attention_mask, - token_type_ids, - prediction_mask, - special_symbols_mask, - ) - - def _build_labels( - self, - sample, - tokenization_output: TokenizationOutput, - predictable_candidates: List[str], - ) -> Tuple[torch.Tensor, torch.Tensor]: - start_labels = [0] * len(tokenization_output.input_ids) - end_labels = [0] * len(tokenization_output.input_ids) - - char_start2token = {v: int(k) for k, v in sample.token2char_start.items()} - char_end2token = {v: int(k) for k, v in sample.token2char_end.items()} - for cs, ce, gold_candidate_title in sample.window_labels: - if gold_candidate_title not in predictable_candidates: - if self.use_nme: - gold_candidate_title = NME_SYMBOL - else: - continue - # +1 is to account for the CLS token - start_bpe = char_start2token[cs] + 1 - end_bpe = char_end2token[ce] + 1 - class_index = predictable_candidates.index(gold_candidate_title) - if ( - start_labels[start_bpe] == 0 and end_labels[end_bpe] == 0 - ): # prevent from having entities that ends with the same label - start_labels[start_bpe] = class_index + 1 # +1 for the NONE class - end_labels[end_bpe] = class_index + 1 # +1 for the NONE class - else: - print( - "Found entity with the same last subword, it will not be included." 
- ) - print( - cs, - ce, - gold_candidate_title, - start_labels, - end_labels, - sample.doc_id, - ) - - ignored_labels_indices = tokenization_output.prediction_mask == 1 - - start_labels = torch.tensor(start_labels, dtype=torch.long) - start_labels[ignored_labels_indices] = -100 - - end_labels = torch.tensor(end_labels, dtype=torch.long) - end_labels[ignored_labels_indices] = -100 - - return start_labels, end_labels - - def produce_sample_bag( - self, sample, predictable_candidates: List[str], candidates_starting_offset: int - ) -> Optional[Tuple[dict, list, int]]: - # input sentence tokenization - input_subwords = sample.tokens[1:-1] # removing special tokens - candidates_symbols = self.special_symbols[candidates_starting_offset:] - - predictable_candidates = list(predictable_candidates) - original_predictable_candidates = list(predictable_candidates) - - # add NME as a possible candidate - if self.use_nme: - predictable_candidates.insert(0, NME_SYMBOL) - - # candidates encoding - candidates_symbols = candidates_symbols[: len(predictable_candidates)] - candidates_encoding_result = self.tokenizer.batch_encode_plus( - [ - "{} {}".format(cs, ct) if ct != NME_SYMBOL else NME_SYMBOL - for cs, ct in zip(candidates_symbols, predictable_candidates) - ], - add_special_tokens=False, - ).input_ids - - if ( - self.max_subwords_per_candidate is not None - and self.max_subwords_per_candidate > 0 - ): - candidates_encoding_result = [ - cer[: self.max_subwords_per_candidate] - for cer in candidates_encoding_result - ] - - # drop candidates if the number of input tokens is too long for the model - if ( - sum(map(len, candidates_encoding_result)) - + len(input_subwords) - + 20 # + 20 special tokens - > self.model_max_length - ): - acceptable_tokens_from_candidates = ( - self.model_max_length - 20 - len(input_subwords) - ) - i = 0 - cum_len = 0 - while ( - cum_len + len(candidates_encoding_result[i]) - < acceptable_tokens_from_candidates - ): - cum_len += len(candidates_encoding_result[i]) - i += 1 - - candidates_encoding_result = candidates_encoding_result[:i] - candidates_symbols = candidates_symbols[:i] - predictable_candidates = predictable_candidates[:i] - - # final input_ids build - input_ids = self._build_input_ids( - sentence_input_ids=input_subwords, - candidates_input_ids=candidates_encoding_result, - ) - - # complete input building (e.g. 
attention / prediction mask) - tokenization_output = self._build_tokenizer_essentials( - input_ids, input_subwords, sample - ) - - output_dict = { - "input_ids": tokenization_output.input_ids, - "attention_mask": tokenization_output.attention_mask, - "token_type_ids": tokenization_output.token_type_ids, - "prediction_mask": tokenization_output.prediction_mask, - "special_symbols_mask": tokenization_output.special_symbols_mask, - "sample": sample, - "predictable_candidates_symbols": candidates_symbols, - "predictable_candidates": predictable_candidates, - } - - # labels creation - if sample.window_labels is not None: - start_labels, end_labels = self._build_labels( - sample, - tokenization_output, - predictable_candidates, - ) - output_dict.update(start_labels=start_labels, end_labels=end_labels) - - if ( - "roberta" in self.transformer_model - or "longformer" in self.transformer_model - ): - del output_dict["token_type_ids"] - - predictable_candidates_set = set(predictable_candidates) - remaining_candidates = [ - candidate - for candidate in original_predictable_candidates - if candidate not in predictable_candidates_set - ] - total_used_candidates = ( - candidates_starting_offset - + len(predictable_candidates) - - (1 if self.use_nme else 0) - ) - - if self.use_nme: - assert predictable_candidates[0] == NME_SYMBOL - - return output_dict, remaining_candidates, total_used_candidates - - def __iter__(self): - dataset_iterator = self.dataset_iterator_func() - - current_dataset_elements = [] - - i = None - for i, dataset_elem in enumerate(dataset_iterator, start=1): - if ( - self.section_size is not None - and len(current_dataset_elements) == self.section_size - ): - for batch in self.materialize_batches(current_dataset_elements): - yield batch - current_dataset_elements = [] - - current_dataset_elements.append(dataset_elem) - - if i % 50_000 == 0: - logger.info(f"Processed: {i} number of elements") - - if len(current_dataset_elements) != 0: - for batch in self.materialize_batches(current_dataset_elements): - yield batch - - if i is not None: - logger.info(f"Dataset finished: {i} number of elements processed") - else: - logger.warning("Dataset empty") - - def dataset_iterator_func(self): - skipped_instances = 0 - data_samples = ( - load_relik_reader_samples(self.dataset_path) - if self.samples is None - else self.samples - ) - for sample in data_samples: - preprocess_sample( - sample, self.tokenizer, lowercase_policy=self.lowercase_policy - ) - current_patch = 0 - sample_bag, used_candidates = None, None - remaining_candidates = list(sample.window_candidates) - - if not self.for_inference: - # randomly drop gold candidates at training time - if ( - self.random_drop_gold_candidates > 0.0 - and np.random.uniform() < self.random_drop_gold_candidates - and len(set(ct for _, _, ct in sample.window_labels)) > 1 - ): - # selecting candidates to drop - np.random.shuffle(sample.window_labels) - n_dropped_candidates = np.random.randint( - 0, len(sample.window_labels) - 1 - ) - dropped_candidates = [ - label_elem[-1] - for label_elem in sample.window_labels[:n_dropped_candidates] - ] - dropped_candidates = set(dropped_candidates) - - # saving NMEs because they should not be dropped - if NME_SYMBOL in dropped_candidates: - dropped_candidates.remove(NME_SYMBOL) - - # sample update - sample.window_labels = [ - (s, e, _l) - if _l not in dropped_candidates - else (s, e, NME_SYMBOL) - for s, e, _l in sample.window_labels - ] - remaining_candidates = [ - wc - for wc in remaining_candidates - if wc not in 
dropped_candidates - ] - - # shuffle candidates - if ( - isinstance(self.shuffle_candidates, bool) - and self.shuffle_candidates - ) or ( - isinstance(self.shuffle_candidates, float) - and np.random.uniform() < self.shuffle_candidates - ): - np.random.shuffle(remaining_candidates) - - while len(remaining_candidates) != 0: - sample_bag = self.produce_sample_bag( - sample, - predictable_candidates=remaining_candidates, - candidates_starting_offset=used_candidates - if used_candidates is not None - else 0, - ) - if sample_bag is not None: - sample_bag, remaining_candidates, used_candidates = sample_bag - if ( - self.for_inference - or not self.skip_empty_training_samples - or ( - ( - sample_bag.get("start_labels") is not None - and torch.any(sample_bag["start_labels"] > 1).item() - ) - or ( - sample_bag.get("optimus_labels") is not None - and len(sample_bag["optimus_labels"]) > 0 - ) - ) - ): - sample_bag["patch_offset"] = current_patch - current_patch += 1 - yield sample_bag - else: - skipped_instances += 1 - if skipped_instances % 1000 == 0 and skipped_instances != 0: - logger.info( - f"Skipped {skipped_instances} instances since they did not have any gold labels..." - ) - - # Just use the first fitting candidates if split on - # cand is not True - if not self.split_on_cand_overload: - break - - def preshuffle_elements(self, dataset_elements: List): - # This shuffling is done so that when using the sorting function, - # if it is deterministic given a collection and its order, we will - # make the whole operation not deterministic anymore. - # Basically, the aim is not to build every time the same batches. - if not self.for_inference: - dataset_elements = np.random.permutation(dataset_elements) - - sorting_fn = ( - lambda elem: add_noise_to_value( - sum(len(elem[k]) for k in self.sorting_fields), - noise_param=self.noise_param, - ) - if not self.for_inference - else sum(len(elem[k]) for k in self.sorting_fields) - ) - - dataset_elements = sorted(dataset_elements, key=sorting_fn) - - if self.for_inference: - return dataset_elements - - ds = list(chunks(dataset_elements, 64)) - np.random.shuffle(ds) - return flatten(ds) - - def materialize_batches( - self, dataset_elements: List[Dict[str, Any]] - ) -> Generator[Dict[str, Any], None, None]: - if self.prebatch: - dataset_elements = self.preshuffle_elements(dataset_elements) - - current_batch = [] - - # function that creates a batch from the 'current_batch' list - def output_batch() -> Dict[str, Any]: - assert ( - len( - set([len(elem["predictable_candidates"]) for elem in current_batch]) - ) - == 1 - ), " ".join( - map( - str, [len(elem["predictable_candidates"]) for elem in current_batch] - ) - ) - - batch_dict = dict() - - de_values_by_field = { - fn: [de[fn] for de in current_batch if fn in de] - for fn in self.fields_batcher - } - - # in case you provide fields batchers but in the batch - # there are no elements for that field - de_values_by_field = { - fn: fvs for fn, fvs in de_values_by_field.items() if len(fvs) > 0 - } - - assert len(set([len(v) for v in de_values_by_field.values()])) - - # todo: maybe we should report the user about possible - # fields filtering due to "None" instances - de_values_by_field = { - fn: fvs - for fn, fvs in de_values_by_field.items() - if all([fv is not None for fv in fvs]) - } - - for field_name, field_values in de_values_by_field.items(): - field_batch = ( - self.fields_batcher[field_name](field_values) - if self.fields_batcher[field_name] is not None - else field_values - ) - - batch_dict[field_name] = 
field_batch - - return batch_dict - - max_len_discards, min_len_discards = 0, 0 - - should_token_batch = self.batch_size is None - - curr_pred_elements = -1 - for de in dataset_elements: - if ( - should_token_batch - and self.max_batch_size != -1 - and len(current_batch) == self.max_batch_size - ) or (not should_token_batch and len(current_batch) == self.batch_size): - yield output_batch() - current_batch = [] - curr_pred_elements = -1 - - too_long_fields = [ - k - for k in de - if self.max_length != -1 - and torch.is_tensor(de[k]) - and len(de[k]) > self.max_length - ] - if len(too_long_fields) > 0: - max_len_discards += 1 - continue - - too_short_fields = [ - k - for k in de - if self.min_length != -1 - and torch.is_tensor(de[k]) - and len(de[k]) < self.min_length - ] - if len(too_short_fields) > 0: - min_len_discards += 1 - continue - - if should_token_batch: - de_len = sum(len(de[k]) for k in self.batching_fields) - - future_max_len = max( - de_len, - max( - [ - sum(len(bde[k]) for k in self.batching_fields) - for bde in current_batch - ], - default=0, - ), - ) - - future_tokens_per_batch = future_max_len * (len(current_batch) + 1) - - num_predictable_candidates = len(de["predictable_candidates"]) - - if len(current_batch) > 0 and ( - future_tokens_per_batch >= self.tokens_per_batch - or ( - num_predictable_candidates != curr_pred_elements - and curr_pred_elements != -1 - ) - ): - yield output_batch() - current_batch = [] - - current_batch.append(de) - curr_pred_elements = len(de["predictable_candidates"]) - - if len(current_batch) != 0 and not self.drop_last: - yield output_batch() - - if max_len_discards > 0: - if self.for_inference: - logger.warning( - f"WARNING: Inference mode is True but {max_len_discards} samples longer than max length were " - f"found. The {max_len_discards} samples will be DISCARDED. If you are doing some kind of evaluation" - f", this can INVALIDATE results. This might happen if the max length was not set to -1 or if the " - f"sample length exceeds the maximum length supported by the current model." - ) - else: - logger.warning( - f"During iteration, {max_len_discards} elements were " - f"discarded since longer than max length {self.max_length}" - ) - - if min_len_discards > 0: - if self.for_inference: - logger.warning( - f"WARNING: Inference mode is True but {min_len_discards} samples shorter than min length were " - f"found. The {min_len_discards} samples will be DISCARDED. If you are doing some kind of evaluation" - f", this can INVALIDATE results. This might happen if the min length was not set to -1 or if the " - f"sample length is shorter than the minimum length supported by the current model." - ) - else: - logger.warning( - f"During iteration, {min_len_discards} elements were " - f"discarded since shorter than min length {self.min_length}" - ) - - @staticmethod - def convert_tokens_to_char_annotations( - sample: RelikReaderSample, - remove_nmes: bool = True, - ) -> RelikReaderSample: - """ - Converts the token annotations to char annotations. - - Args: - sample (:obj:`RelikReaderSample`): - The sample to convert. - remove_nmes (:obj:`bool`, `optional`, defaults to :obj:`True`): - Whether to remove the NMEs from the annotations. - Returns: - :obj:`RelikReaderSample`: The converted sample. 
- """ - char_annotations = set() - for ( - predicted_entity, - predicted_spans, - ) in sample.predicted_window_labels.items(): - if predicted_entity == NME_SYMBOL and remove_nmes: - continue - - for span_start, span_end in predicted_spans: - span_start = sample.token2char_start[str(span_start)] - span_end = sample.token2char_end[str(span_end)] - - char_annotations.add((span_start, span_end, predicted_entity)) - - char_probs_annotations = dict() - for ( - span_start, - span_end, - ), candidates_probs in sample.span_title_probabilities.items(): - span_start = sample.token2char_start[str(span_start)] - span_end = sample.token2char_end[str(span_end)] - char_probs_annotations[(span_start, span_end)] = { - title for title, _ in candidates_probs - } - - sample.predicted_window_labels_chars = char_annotations - sample.probs_window_labels_chars = char_probs_annotations - - return sample - - @staticmethod - def merge_patches_predictions(sample) -> None: - sample._d["predicted_window_labels"] = dict() - predicted_window_labels = sample._d["predicted_window_labels"] - - sample._d["span_title_probabilities"] = dict() - span_title_probabilities = sample._d["span_title_probabilities"] - - span2title = dict() - for _, patch_info in sorted(sample.patches.items(), key=lambda x: x[0]): - # selecting span predictions - for predicted_title, predicted_spans in patch_info[ - "predicted_window_labels" - ].items(): - for pred_span in predicted_spans: - pred_span = tuple(pred_span) - curr_title = span2title.get(pred_span) - if curr_title is None or curr_title == NME_SYMBOL: - span2title[pred_span] = predicted_title - # else: - # print("Merging at patch level") - - # selecting span predictions probability - for predicted_span, titles_probabilities in patch_info[ - "span_title_probabilities" - ].items(): - if predicted_span not in span_title_probabilities: - span_title_probabilities[predicted_span] = titles_probabilities - - for span, title in span2title.items(): - if title not in predicted_window_labels: - predicted_window_labels[title] = list() - predicted_window_labels[title].append(span) diff --git a/spaces/rkrstacic/Chatbot-integration-built-on-processes/README.md b/spaces/rkrstacic/Chatbot-integration-built-on-processes/README.md deleted file mode 100644 index 8dde26ffde1ec52dfc987c61b0b7b3916209a40b..0000000000000000000000000000000000000000 --- a/spaces/rkrstacic/Chatbot-integration-built-on-processes/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chatbot Integration Built On Processes -emoji: 📈 -colorFrom: green -colorTo: yellow -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/hook/yolox_mode_switch_hook.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/hook/yolox_mode_switch_hook.py deleted file mode 100644 index 10834e686af5c7f70c1f01ce1bef0c707740aea5..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/hook/yolox_mode_switch_hook.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.parallel import is_module_wrapper -from mmcv.runner.hooks import HOOKS, Hook - - -@HOOKS.register_module() -class YOLOXModeSwitchHook(Hook): - """Switch the mode of YOLOX during training. - - This hook turns off the mosaic and mixup data augmentation and switches - to use L1 loss in bbox_head. 
- - Args: - num_last_epochs (int): The number of latter epochs in the end of the - training to close the data augmentation and switch to L1 loss. - Default: 15. - skip_type_keys (list[str], optional): Sequence of type string to be - skip pipeline. Default: ('Mosaic', 'RandomAffine', 'MixUp') - """ - - def __init__(self, - num_last_epochs=15, - skip_type_keys=('Mosaic', 'RandomAffine', 'MixUp')): - self.num_last_epochs = num_last_epochs - self.skip_type_keys = skip_type_keys - self._restart_dataloader = False - - def before_train_epoch(self, runner): - """Close mosaic and mixup augmentation and switches to use L1 loss.""" - epoch = runner.epoch - train_loader = runner.data_loader - model = runner.model - if is_module_wrapper(model): - model = model.module - if (epoch + 1) == runner.max_epochs - self.num_last_epochs: - runner.logger.info('No mosaic and mixup aug now!') - # The dataset pipeline cannot be updated when persistent_workers - # is True, so we need to force the dataloader's multi-process - # restart. This is a very hacky approach. - train_loader.dataset.update_skip_type_keys(self.skip_type_keys) - if hasattr(train_loader, 'persistent_workers' - ) and train_loader.persistent_workers is True: - train_loader._DataLoader__initialized = False - train_loader._iterator = None - self._restart_dataloader = True - runner.logger.info('Add additional L1 loss now!') - model.bbox_head.use_l1 = True - else: - # Once the restart is complete, we need to restore - # the initialization flag. - if self._restart_dataloader: - train_loader._DataLoader__initialized = True diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/pipelines/loading.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/pipelines/loading.py deleted file mode 100644 index 8af8cf352ca4298fca4d50f0f5760daa869a6aeb..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/pipelines/loading.py +++ /dev/null @@ -1,645 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -import mmcv -import numpy as np -import pycocotools.mask as maskUtils - -from mmdet.core import BitmapMasks, PolygonMasks -from ..builder import PIPELINES - -try: - from panopticapi.utils import rgb2id -except ImportError: - rgb2id = None - - -@PIPELINES.register_module() -class LoadImageFromFile: - """Load an image from file. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename"). Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - to_float32=False, - color_type='color', - channel_order='bgr', - file_client_args=dict(backend='disk')): - self.to_float32 = to_float32 - self.color_type = color_type - self.channel_order = channel_order - self.file_client_args = file_client_args.copy() - self.file_client = None - - def __call__(self, results): - """Call functions to load image and get image meta information. 
- - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results['img_prefix'] is not None: - filename = osp.join(results['img_prefix'], - results['img_info']['filename']) - else: - filename = results['img_info']['filename'] - - img_bytes = self.file_client.get(filename) - img = mmcv.imfrombytes( - img_bytes, flag=self.color_type, channel_order=self.channel_order) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - results['img_fields'] = ['img'] - return results - - def __repr__(self): - repr_str = (f'{self.__class__.__name__}(' - f'to_float32={self.to_float32}, ' - f"color_type='{self.color_type}', " - f"channel_order='{self.channel_order}', " - f'file_client_args={self.file_client_args})') - return repr_str - - -@PIPELINES.register_module() -class LoadImageFromWebcam(LoadImageFromFile): - """Load an image from webcam. - - Similar with :obj:`LoadImageFromFile`, but the image read from webcam is in - ``results['img']``. - """ - - def __call__(self, results): - """Call functions to add image meta information. - - Args: - results (dict): Result dict with Webcam read image in - ``results['img']``. - - Returns: - dict: The dict contains loaded image and meta information. - """ - - img = results['img'] - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = None - results['ori_filename'] = None - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - results['img_fields'] = ['img'] - return results - - -@PIPELINES.register_module() -class LoadMultiChannelImageFromFiles: - """Load multi-channel images from a list of separate channel files. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename", which is expected to be a list of filenames). - Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - to_float32=False, - color_type='unchanged', - file_client_args=dict(backend='disk')): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - - def __call__(self, results): - """Call functions to load multiple images and get images meta - information. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded images and meta information. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results['img_prefix'] is not None: - filename = [ - osp.join(results['img_prefix'], fname) - for fname in results['img_info']['filename'] - ] - else: - filename = results['img_info']['filename'] - - img = [] - for name in filename: - img_bytes = self.file_client.get(name) - img.append(mmcv.imfrombytes(img_bytes, flag=self.color_type)) - img = np.stack(img, axis=-1) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - repr_str = (f'{self.__class__.__name__}(' - f'to_float32={self.to_float32}, ' - f"color_type='{self.color_type}', " - f'file_client_args={self.file_client_args})') - return repr_str - - -@PIPELINES.register_module() -class LoadAnnotations: - """Load multiple types of annotations. - - Args: - with_bbox (bool): Whether to parse and load the bbox annotation. - Default: True. - with_label (bool): Whether to parse and load the label annotation. - Default: True. - with_mask (bool): Whether to parse and load the mask annotation. - Default: False. - with_seg (bool): Whether to parse and load the semantic segmentation - annotation. Default: False. - poly2mask (bool): Whether to convert the instance masks from polygons - to bitmaps. Default: True. - denorm_bbox (bool): Whether to convert bbox from relative value to - absolute value. Only used in OpenImage Dataset. - Default: False. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - with_bbox=True, - with_label=True, - with_mask=False, - with_seg=False, - poly2mask=True, - denorm_bbox=False, - file_client_args=dict(backend='disk')): - self.with_bbox = with_bbox - self.with_label = with_label - self.with_mask = with_mask - self.with_seg = with_seg - self.poly2mask = poly2mask - self.denorm_bbox = denorm_bbox - self.file_client_args = file_client_args.copy() - self.file_client = None - - def _load_bboxes(self, results): - """Private function to load bounding box annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded bounding box annotations. 
- """ - - ann_info = results['ann_info'] - results['gt_bboxes'] = ann_info['bboxes'].copy() - - if self.denorm_bbox: - bbox_num = results['gt_bboxes'].shape[0] - if bbox_num != 0: - h, w = results['img_shape'][:2] - results['gt_bboxes'][:, 0::2] *= w - results['gt_bboxes'][:, 1::2] *= h - - gt_bboxes_ignore = ann_info.get('bboxes_ignore', None) - if gt_bboxes_ignore is not None: - results['gt_bboxes_ignore'] = gt_bboxes_ignore.copy() - results['bbox_fields'].append('gt_bboxes_ignore') - results['bbox_fields'].append('gt_bboxes') - - gt_is_group_ofs = ann_info.get('gt_is_group_ofs', None) - if gt_is_group_ofs is not None: - results['gt_is_group_ofs'] = gt_is_group_ofs.copy() - - return results - - def _load_labels(self, results): - """Private function to load label annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded label annotations. - """ - - results['gt_labels'] = results['ann_info']['labels'].copy() - return results - - def _poly2mask(self, mask_ann, img_h, img_w): - """Private function to convert masks represented with polygon to - bitmaps. - - Args: - mask_ann (list | dict): Polygon mask annotation input. - img_h (int): The height of output mask. - img_w (int): The width of output mask. - - Returns: - numpy.ndarray: The decode bitmap mask of shape (img_h, img_w). - """ - - if isinstance(mask_ann, list): - # polygon -- a single object might consist of multiple parts - # we merge all parts into one mask rle code - rles = maskUtils.frPyObjects(mask_ann, img_h, img_w) - rle = maskUtils.merge(rles) - elif isinstance(mask_ann['counts'], list): - # uncompressed RLE - rle = maskUtils.frPyObjects(mask_ann, img_h, img_w) - else: - # rle - rle = mask_ann - mask = maskUtils.decode(rle) - return mask - - def process_polygons(self, polygons): - """Convert polygons to list of ndarray and filter invalid polygons. - - Args: - polygons (list[list]): Polygons of one instance. - - Returns: - list[numpy.ndarray]: Processed polygons. - """ - - polygons = [np.array(p) for p in polygons] - valid_polygons = [] - for polygon in polygons: - if len(polygon) % 2 == 0 and len(polygon) >= 6: - valid_polygons.append(polygon) - return valid_polygons - - def _load_masks(self, results): - """Private function to load mask annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded mask annotations. - If ``self.poly2mask`` is set ``True``, `gt_mask` will contain - :obj:`PolygonMasks`. Otherwise, :obj:`BitmapMasks` is used. - """ - - h, w = results['img_info']['height'], results['img_info']['width'] - gt_masks = results['ann_info']['masks'] - if self.poly2mask: - gt_masks = BitmapMasks( - [self._poly2mask(mask, h, w) for mask in gt_masks], h, w) - else: - gt_masks = PolygonMasks( - [self.process_polygons(polygons) for polygons in gt_masks], h, - w) - results['gt_masks'] = gt_masks - results['mask_fields'].append('gt_masks') - return results - - def _load_semantic_seg(self, results): - """Private function to load semantic segmentation annotations. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: The dict contains loaded semantic segmentation annotations. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - img_bytes = self.file_client.get(filename) - results['gt_semantic_seg'] = mmcv.imfrombytes( - img_bytes, flag='unchanged').squeeze() - results['seg_fields'].append('gt_semantic_seg') - return results - - def __call__(self, results): - """Call function to load multiple types annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded bounding box, label, mask and - semantic segmentation annotations. - """ - - if self.with_bbox: - results = self._load_bboxes(results) - if results is None: - return None - if self.with_label: - results = self._load_labels(results) - if self.with_mask: - results = self._load_masks(results) - if self.with_seg: - results = self._load_semantic_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(with_bbox={self.with_bbox}, ' - repr_str += f'with_label={self.with_label}, ' - repr_str += f'with_mask={self.with_mask}, ' - repr_str += f'with_seg={self.with_seg}, ' - repr_str += f'poly2mask={self.poly2mask}, ' - repr_str += f'file_client_args={self.file_client_args})' - return repr_str - - -@PIPELINES.register_module() -class LoadPanopticAnnotations(LoadAnnotations): - """Load multiple types of panoptic annotations. - - Args: - with_bbox (bool): Whether to parse and load the bbox annotation. - Default: True. - with_label (bool): Whether to parse and load the label annotation. - Default: True. - with_mask (bool): Whether to parse and load the mask annotation. - Default: True. - with_seg (bool): Whether to parse and load the semantic segmentation - annotation. Default: True. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - with_bbox=True, - with_label=True, - with_mask=True, - with_seg=True, - file_client_args=dict(backend='disk')): - if rgb2id is None: - raise RuntimeError( - 'panopticapi is not installed, please install it by: ' - 'pip install git+https://github.com/cocodataset/' - 'panopticapi.git.') - - super(LoadPanopticAnnotations, self).__init__( - with_bbox=with_bbox, - with_label=with_label, - with_mask=with_mask, - with_seg=with_seg, - poly2mask=True, - denorm_bbox=False, - file_client_args=file_client_args) - - def _load_masks_and_semantic_segs(self, results): - """Private function to load mask and semantic segmentation annotations. - - In gt_semantic_seg, the foreground label is from `0` to - `num_things - 1`, the background label is from `num_things` to - `num_things + num_stuff - 1`, 255 means the ignored label (`VOID`). - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded mask and semantic segmentation - annotations. `BitmapMasks` is used for mask annotations. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - img_bytes = self.file_client.get(filename) - pan_png = mmcv.imfrombytes( - img_bytes, flag='color', channel_order='rgb').squeeze() - pan_png = rgb2id(pan_png) - - gt_masks = [] - gt_seg = np.zeros_like(pan_png) + 255 # 255 as ignore - - for mask_info in results['ann_info']['masks']: - mask = (pan_png == mask_info['id']) - gt_seg = np.where(mask, mask_info['category'], gt_seg) - - # The legal thing masks - if mask_info.get('is_thing'): - gt_masks.append(mask.astype(np.uint8)) - - if self.with_mask: - h, w = results['img_info']['height'], results['img_info']['width'] - gt_masks = BitmapMasks(gt_masks, h, w) - results['gt_masks'] = gt_masks - results['mask_fields'].append('gt_masks') - - if self.with_seg: - results['gt_semantic_seg'] = gt_seg - results['seg_fields'].append('gt_semantic_seg') - return results - - def __call__(self, results): - """Call function to load multiple types panoptic annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded bounding box, label, mask and - semantic segmentation annotations. - """ - - if self.with_bbox: - results = self._load_bboxes(results) - if results is None: - return None - if self.with_label: - results = self._load_labels(results) - if self.with_mask or self.with_seg: - # The tasks completed by '_load_masks' and '_load_semantic_segs' - # in LoadAnnotations are merged to one function. - results = self._load_masks_and_semantic_segs(results) - - return results - - -@PIPELINES.register_module() -class LoadProposals: - """Load proposal pipeline. - - Required key is "proposals". Updated keys are "proposals", "bbox_fields". - - Args: - num_max_proposals (int, optional): Maximum number of proposals to load. - If not specified, all proposals will be loaded. - """ - - def __init__(self, num_max_proposals=None): - self.num_max_proposals = num_max_proposals - - def __call__(self, results): - """Call function to load proposals from file. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded proposal annotations. - """ - - proposals = results['proposals'] - if proposals.shape[1] not in (4, 5): - raise AssertionError( - 'proposals should have shapes (n, 4) or (n, 5), ' - f'but found {proposals.shape}') - proposals = proposals[:, :4] - - if self.num_max_proposals is not None: - proposals = proposals[:self.num_max_proposals] - - if len(proposals) == 0: - proposals = np.array([[0, 0, 0, 0]], dtype=np.float32) - results['proposals'] = proposals - results['bbox_fields'].append('proposals') - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(num_max_proposals={self.num_max_proposals})' - - -@PIPELINES.register_module() -class FilterAnnotations: - """Filter invalid annotations. - - Args: - min_gt_bbox_wh (tuple[float]): Minimum width and height of ground truth - boxes. Default: (1., 1.) - min_gt_mask_area (int): Minimum foreground area of ground truth masks. - Default: 1 - by_box (bool): Filter instances with bounding boxes not meeting the - min_gt_bbox_wh threshold. Default: True - by_mask (bool): Filter instances with masks not meeting - min_gt_mask_area threshold. Default: False - keep_empty (bool): Whether to return None when it - becomes an empty bbox after filtering. 
Default: True - """ - - def __init__(self, - min_gt_bbox_wh=(1., 1.), - min_gt_mask_area=1, - by_box=True, - by_mask=False, - keep_empty=True): - # TODO: add more filter options - assert by_box or by_mask - self.min_gt_bbox_wh = min_gt_bbox_wh - self.min_gt_mask_area = min_gt_mask_area - self.by_box = by_box - self.by_mask = by_mask - self.keep_empty = keep_empty - - def __call__(self, results): - if self.by_box: - assert 'gt_bboxes' in results - gt_bboxes = results['gt_bboxes'] - instance_num = gt_bboxes.shape[0] - if self.by_mask: - assert 'gt_masks' in results - gt_masks = results['gt_masks'] - instance_num = len(gt_masks) - - if instance_num == 0: - return results - - tests = [] - if self.by_box: - w = gt_bboxes[:, 2] - gt_bboxes[:, 0] - h = gt_bboxes[:, 3] - gt_bboxes[:, 1] - tests.append((w > self.min_gt_bbox_wh[0]) - & (h > self.min_gt_bbox_wh[1])) - if self.by_mask: - gt_masks = results['gt_masks'] - tests.append(gt_masks.areas >= self.min_gt_mask_area) - - keep = tests[0] - for t in tests[1:]: - keep = keep & t - - keep = keep.nonzero()[0] - - keys = ('gt_bboxes', 'gt_labels', 'gt_masks') - for key in keys: - if key in results: - results[key] = results[key][keep] - if keep.size == 0: - if self.keep_empty: - return None - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(min_gt_bbox_wh={self.min_gt_bbox_wh},' \ - f'min_gt_mask_area={self.min_gt_mask_area},' \ - f'by_box={self.by_box},' \ - f'by_mask={self.by_mask},' \ - f'always_keep={self.always_keep})' diff --git a/spaces/rorallitri/biomedical-language-models/logs/Adobe Creative Suite Cs 5.5 Design Premium Download FREE.md b/spaces/rorallitri/biomedical-language-models/logs/Adobe Creative Suite Cs 5.5 Design Premium Download FREE.md deleted file mode 100644 index 5b8a218039ccd415c158e5e9b4c254669bb52a43..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Adobe Creative Suite Cs 5.5 Design Premium Download FREE.md +++ /dev/null @@ -1,13 +0,0 @@ -
          -

          Most CS5 products come with separate installers for Acrobat 9 Professional. (The Acrobat installers are included in CS5.5. If you are installing CS5.5, skip this section.) If you already have Acrobat 9 Pro installed, either as part of a suite or as a stand-alone application, do not reinstall it. If you're an existing Acrobat 9 Pro user, download and install the most current updates. To check for updates, open Acrobat 9 Pro and choose Help > Check for Updates.

          -

          Adobe® Creative Suite® 5 delivers a comprehensive creative toolset for designing across print, web, interactive, and mobile. Explore lifelike natural painting, drawing in perspective, powerful 3D, and interactive experience design. Breakthrough performance gains accelerate image processing, rotoscoping, compositing, video editing, and more. And now integration with new Adobe CS Live online services* further enhances your productivity. From start to finish, design amazing work, collaborate effectively, and deliver virtually anywhere with Creative Suite 5.

          -

          adobe creative suite cs 5.5 design premium download


Download Zip: https://tinurll.com/2uzlFW



          -

Use Acrobat 9 Pro to create PDF documents and package layouts, drawings, images, animation, movies, audio, and other files in a single, dynamic PDF Portfolio. Maximize design time by streamlining reviews with Adobe CS Review, a CS Live online service. Initiate shared reviews from within Photoshop CS5 Extended, Illustrator CS5, and InDesign CS5. Invite others to comment with easy-to-use tools, and view their comments in the context of your design. Take advantage of integration with new Adobe CS Live online services to accelerate time-consuming processes such as creative reviews, web page testing, and collaborative content authoring. Harness the power and performance of the latest Apple and Microsoft operating systems, with 64-bit support for faster image editing across platforms in Photoshop CS5 Extended.

          -

The last version of the Creative Suite line, Adobe Creative Suite 6 (CS6), was launched at a release event on April 23, 2012, and released on May 7, 2012.[1] It was also the last of the Adobe design tools to be physically shipped as boxed software; subsequent releases and updates were delivered via download only.

          -

Macromedia Studio was a suite of web content creation programs designed and distributed by Macromedia. After Adobe's 2005 acquisition of Macromedia, Macromedia Studio 8 was replaced, modified, and integrated into two editions of the Adobe Creative Suite family of software from version 2.3 onwards. The closest relative of Macromedia Studio 8 is now Adobe Creative Suite Web Premium.

          -

SAN JOSE, Calif.--(BUSINESS WIRE)--Adobe Systems Incorporated (Nasdaq:ADBE) today announced the new Adobe® Creative Suite® 5.5 product line (see separate releases), enabling designers and developers to target popular and emerging smartphone and tablet platforms, as the revolution in mobile communications fundamentally changes the way content is distributed and consumed. Substantive advances to HTML5, Flash authoring, digital publishing, and video tools, as well as new capabilities that kick-start the integration of tablets into creative workflows, anchor the new Adobe Creative Suite 5.5 product family.

          -

Adobe Creative Suite 5.5 products are scheduled to ship within 30 days, with availability through Adobe Authorized Resellers, the Adobe Store at www.adobe.com/store in North America and Adobe Direct Sales. Street prices for the suites are expected to be US$2599 for CS5.5 Master Collection, US$1899 for CS5.5 Design Premium, US$1799 for CS5.5 Web Premium, US$1699 for CS5.5 Production Premium and US$1299 for CS5.5 Design Standard. Upgrade pricing and volume licensing are available.

          -


          -

          -
          -
          \ No newline at end of file diff --git a/spaces/rossellison/kpop-face-generator/stylegan3-fun/viz/capture_widget.py b/spaces/rossellison/kpop-face-generator/stylegan3-fun/viz/capture_widget.py deleted file mode 100644 index dc46c5a69fcce2c81b46e8b0c1f1659c468cec03..0000000000000000000000000000000000000000 --- a/spaces/rossellison/kpop-face-generator/stylegan3-fun/viz/capture_widget.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import os -import re -import numpy as np -import imgui -import PIL.Image -from gui_utils import imgui_utils -from . import renderer - -#---------------------------------------------------------------------------- - -class CaptureWidget: - def __init__(self, viz): - self.viz = viz - self.path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '_screenshots')) - self.dump_image = False - self.dump_gui = False - self.defer_frames = 0 - self.disabled_time = 0 - - def dump_png(self, image): - viz = self.viz - try: - _height, _width, channels = image.shape - assert channels in [1, 3] - assert image.dtype == np.uint8 - os.makedirs(self.path, exist_ok=True) - file_id = 0 - for entry in os.scandir(self.path): - if entry.is_file(): - match = re.fullmatch(r'(\d+).*', entry.name) - if match: - file_id = max(file_id, int(match.group(1)) + 1) - if channels == 1: - pil_image = PIL.Image.fromarray(image[:, :, 0], 'L') - else: - pil_image = PIL.Image.fromarray(image, 'RGB') - pil_image.save(os.path.join(self.path, f'{file_id:05d}.png')) - except: - viz.result.error = renderer.CapturedException() - - @imgui_utils.scoped_by_object_id - def __call__(self, show=True): - viz = self.viz - if show: - with imgui_utils.grayed_out(self.disabled_time != 0): - imgui.text('Capture') - imgui.same_line(viz.label_w) - _changed, self.path = imgui_utils.input_text('##path', self.path, 1024, - flags=(imgui.INPUT_TEXT_AUTO_SELECT_ALL | imgui.INPUT_TEXT_ENTER_RETURNS_TRUE), - width=(-1 - viz.button_w * 2 - viz.spacing * 2), - help_text='PATH') - if imgui.is_item_hovered() and not imgui.is_item_active() and self.path != '': - imgui.set_tooltip(self.path) - imgui.same_line() - if imgui_utils.button('Save image', width=viz.button_w, enabled=(self.disabled_time == 0 and 'image' in viz.result)): - self.dump_image = True - self.defer_frames = 2 - self.disabled_time = 0.5 - imgui.same_line() - if imgui_utils.button('Save GUI', width=-1, enabled=(self.disabled_time == 0)): - self.dump_gui = True - self.defer_frames = 2 - self.disabled_time = 0.5 - - self.disabled_time = max(self.disabled_time - viz.frame_delta, 0) - if self.defer_frames > 0: - self.defer_frames -= 1 - elif self.dump_image: - if 'image' in viz.result: - self.dump_png(viz.result.image) - self.dump_image = False - elif self.dump_gui: - viz.capture_next_frame() - self.dump_gui = False - captured_frame = viz.pop_captured_frame() - if captured_frame is not None: - self.dump_png(captured_frame) - -#---------------------------------------------------------------------------- diff --git a/spaces/ruboin/faster-whisper-webui/src/utils.py b/spaces/ruboin/faster-whisper-webui/src/utils.py deleted file mode 
100644 index 7f4ef3d71260034f655d6362f92e866b8777d16d..0000000000000000000000000000000000000000 --- a/spaces/ruboin/faster-whisper-webui/src/utils.py +++ /dev/null @@ -1,135 +0,0 @@ -import textwrap -import unicodedata -import re - -import zlib -from typing import Iterator, TextIO -import tqdm - -import urllib3 - - -def exact_div(x, y): - assert x % y == 0 - return x // y - - -def str2bool(string): - str2val = {"True": True, "False": False} - if string in str2val: - return str2val[string] - else: - raise ValueError(f"Expected one of {set(str2val.keys())}, got {string}") - - -def optional_int(string): - return None if string == "None" else int(string) - - -def optional_float(string): - return None if string == "None" else float(string) - - -def compression_ratio(text) -> float: - return len(text) / len(zlib.compress(text.encode("utf-8"))) - - -def format_timestamp(seconds: float, always_include_hours: bool = False, fractionalSeperator: str = '.'): - assert seconds >= 0, "non-negative timestamp expected" - milliseconds = round(seconds * 1000.0) - - hours = milliseconds // 3_600_000 - milliseconds -= hours * 3_600_000 - - minutes = milliseconds // 60_000 - milliseconds -= minutes * 60_000 - - seconds = milliseconds // 1_000 - milliseconds -= seconds * 1_000 - - hours_marker = f"{hours:02d}:" if always_include_hours or hours > 0 else "" - return f"{hours_marker}{minutes:02d}:{seconds:02d}{fractionalSeperator}{milliseconds:03d}" - - -def write_txt(transcript: Iterator[dict], file: TextIO): - for segment in transcript: - print(segment['text'].strip(), file=file, flush=True) - - -def write_vtt(transcript: Iterator[dict], file: TextIO, maxLineWidth=None): - print("WEBVTT\n", file=file) - for segment in transcript: - text = process_text(segment['text'], maxLineWidth).replace('-->', '->') - - print( - f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n" - f"{text}\n", - file=file, - flush=True, - ) - - -def write_srt(transcript: Iterator[dict], file: TextIO, maxLineWidth=None): - """ - Write a transcript to a file in SRT format. - Example usage: - from pathlib import Path - from whisper.utils import write_srt - result = transcribe(model, audio_path, temperature=temperature, **args) - # save SRT - audio_basename = Path(audio_path).stem - with open(Path(output_dir) / (audio_basename + ".srt"), "w", encoding="utf-8") as srt: - write_srt(result["segments"], file=srt) - """ - for i, segment in enumerate(transcript, start=1): - text = process_text(segment['text'].strip(), maxLineWidth).replace('-->', '->') - - # write srt lines - print( - f"{i}\n" - f"{format_timestamp(segment['start'], always_include_hours=True, fractionalSeperator=',')} --> " - f"{format_timestamp(segment['end'], always_include_hours=True, fractionalSeperator=',')}\n" - f"{text}\n", - file=file, - flush=True, - ) - -def process_text(text: str, maxLineWidth=None): - if (maxLineWidth is None or maxLineWidth < 0): - return text - - lines = textwrap.wrap(text, width=maxLineWidth, tabsize=4) - return '\n'.join(lines) - -def slugify(value, allow_unicode=False): - """ - Taken from https://github.com/django/django/blob/master/django/utils/text.py - Convert to ASCII if 'allow_unicode' is False. Convert spaces or repeated - dashes to single dashes. Remove characters that aren't alphanumerics, - underscores, or hyphens. Convert to lowercase. Also strip leading and - trailing whitespace, dashes, and underscores. 
- """ - value = str(value) - if allow_unicode: - value = unicodedata.normalize('NFKC', value) - else: - value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii') - value = re.sub(r'[^\w\s-]', '', value.lower()) - return re.sub(r'[-\s]+', '-', value).strip('-_') - -def download_file(url: str, destination: str): - with urllib3.request.urlopen(url) as source, open(destination, "wb") as output: - with tqdm( - total=int(source.info().get("Content-Length")), - ncols=80, - unit="iB", - unit_scale=True, - unit_divisor=1024, - ) as loop: - while True: - buffer = source.read(8192) - if not buffer: - break - - output.write(buffer) - loop.update(len(buffer)) \ No newline at end of file diff --git a/spaces/s1241003/translate_gpt/app.py b/spaces/s1241003/translate_gpt/app.py deleted file mode 100644 index cb0f705503bd0de2eb37e2f8cd1743ebaaf3e673..0000000000000000000000000000000000000000 --- a/spaces/s1241003/translate_gpt/app.py +++ /dev/null @@ -1,82 +0,0 @@ -import gradio as gr -import ocrmypdf -import pdfplumber -import time -import os -import openai - -openai.api_key = "YOUR_API_KEY" - -text = "" - -def process_file(file): - global text - file.save(r'C:\Users\Dr. Tien Duy Vo\Documents\OCR_PDF\img.png') - - try: - output_file = os.path.join(r'C:\Users\Dr. Tien Duy Vo\Documents\OCR_PDF', 'output.pdf') - ocrmypdf.ocr(r'C:\Users\Dr. Tien Duy Vo\Documents\OCR_PDF\img.png', output_file, force_ocr=True, image_dpi=300) - with pdfplumber.open(output_file) as pdf: - first_page = pdf.pages[0] - text = first_page.extract_text() - return text, "output.pdf" - except Exception as e: - return str(e), None - -def chat_gpt(prompt): - # response = openai.Completion.create( - # engine="text-davinci-002", - # prompt=prompt, - # temperature=0.5, - # max_tokens=100, - # n=1, - # stop=None, - # ) - return prompt #response.choices[0].text.strip() - -with gr.Blocks() as demo: - image_input = gr.Image(type="pil", label="Upload Image File") - ocr_btn = gr.Button(value="OCR and Generate PDF") - text_output = gr.Textbox(type="text", label="Übertragende Daten") - file_output = gr.File(label="Download PDF") - - - ocr_btn.click(process_file, [image_input], [text_output, file_output]) - - Send = gr.Button("Was bedeutet das?") - - chatbot = gr.Chatbot() - msg = gr.Textbox() - clear = gr.Button("Clear") - - def user(user_message, history): - return "", history + [[user_message, None]] - - def bot(history): - if len(history) == 1: - prompt = f"Summarize this: {text} \n Write a list of potential actions for the receiver" - else: - prompt = f"{history[1][0]}" - print(history) - - bot_message = chat_gpt(prompt) - history[-1][1] = bot_message - time.sleep(1) - return history - - Send.click(user, [msg, chatbot], [msg, chatbot], queue=False).then( - bot, chatbot, chatbot - ) - - msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then( - bot, chatbot, chatbot - ) - clear.click(lambda: None, None, chatbot, queue=False) - -os.system('chmod 777 /tmp') -os.system('apt-get update -y') -os.system('apt-get install tesseract-ocr -y') -os.system('pip install -q pytesseract') - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/sajithlal65/emilianJR-epiCRealism/README.md b/spaces/sajithlal65/emilianJR-epiCRealism/README.md deleted file mode 100644 index 0a917ce6610d524d6f79647d0837cc41ac7dec08..0000000000000000000000000000000000000000 --- a/spaces/sajithlal65/emilianJR-epiCRealism/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: EmilianJR EpiCRealism -emoji: 👀 -colorFrom: pink -colorTo: gray 
-sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/scedlatioru/img-to-music/example/Pinnacle Studio 18 Ultimate Keygen LINK 32.md b/spaces/scedlatioru/img-to-music/example/Pinnacle Studio 18 Ultimate Keygen LINK 32.md deleted file mode 100644 index cec5a504b79f8336f8d49da352296d2d220d33b8..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Pinnacle Studio 18 Ultimate Keygen LINK 32.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Pinnacle Studio 18 Ultimate Keygen 32


          DOWNLOAD ··· https://gohhs.com/2uEyVx



- -I am trying to register my Pinnacle Studio 16 Ultimate. Pinnacle Studio 18 serial numbers, cracks, and keygens are presented here. No registration is needed.
          -
          -
          -

          diff --git a/spaces/scedlatioru/img-to-music/example/Telecharger Robot Structural Analysis Professional 2017 Gratuit Avec Crack LINK 64.md b/spaces/scedlatioru/img-to-music/example/Telecharger Robot Structural Analysis Professional 2017 Gratuit Avec Crack LINK 64.md deleted file mode 100644 index da1eae7e4284778682afc94bf6ade07ee10d6688..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Telecharger Robot Structural Analysis Professional 2017 Gratuit Avec Crack LINK 64.md +++ /dev/null @@ -1,29 +0,0 @@ -
          -

          How to Download and Install Robot Structural Analysis Professional 2017 for Free with Crack 64-bit

          -

Robot Structural Analysis Professional 2017 is a powerful application that lets structural engineers perform advanced simulations and analyses of complex structures. It supports BIM workflows and integrates with Autodesk Revit and other design tools. It can handle large models with thousands of elements and loads, and perform various types of analyses, such as linear, nonlinear, dynamic, seismic, wind, and buckling.

          -

Download Robot Structural Analysis Professional 2017 free with 64-bit crack


          Download Ziphttps://gohhs.com/2uEAsW



          -

          If you want to download and install Robot Structural Analysis Professional 2017 for free with crack 64-bit, you can follow these steps:

          -
            -
1. Download the software from the official website or from a trusted source. You will need a valid Autodesk account to access the download link. The file size is about 950 MB.
2. Extract the downloaded file using WinRAR or any other extraction tool. You will get a folder named "Autodesk Robot Structural Analysis Pro 2017".
3. Run the setup.exe file as administrator and follow the installation wizard. Choose the language, accept the license agreement, and select the components you want to install. You can also customize the installation path and options.
4. When the installation is complete, do not launch the software yet. You need to apply the crack to activate it.
5. Download the crack file from a reliable source. It is usually a zip or rar file that contains a keygen or a patch. Make sure your antivirus software does not block or delete it.
6. Extract the crack file and run the keygen or patch as administrator. Depending on the crack type, you may need to generate a serial number, a product key, or an activation code. Copy and paste them when prompted by the software.
7. Alternatively, you may need to copy and replace some files in the installation folder. For example, you may need to replace the original robot.exe file with the cracked one.
8. After applying the crack, you can launch the software and enjoy its full features.
          -

          Note: This article is for educational purposes only. We do not condone piracy or illegal use of software. If you like Robot Structural Analysis Professional 2017, you should buy it from the official website or an authorized reseller.

          - -

          Robot Structural Analysis Professional 2017 has many features and benefits that can help you design and analyze complex structures. Here are some of them:

          -
            -
• It supports various international codes and standards for different regions and materials. You can choose the code that suits your project and apply it to your model.
• It allows you to create and edit parametric models with ease. You can use the graphical user interface or the command line to define the geometry, properties, and loads of your elements. You can also import models from other CAD software, such as AutoCAD, Revit, or Inventor.
• It provides you with a range of analysis methods and options. You can perform static, modal, harmonic, response spectrum, time history, pushover, nonlinear, and other types of analyses. You can also adjust the analysis parameters, such as mesh size, convergence criteria, damping factors, and load combinations.
• It gives you detailed and accurate results and reports. You can view the results in various formats, such as tables, graphs, diagrams, or animations. You can also export the results to Excel, Word, PDF, or other formats. You can also generate reports that include the input data, the analysis settings, and the output data.
• It enables you to collaborate and communicate with other design professionals. You can share your models and results with other Autodesk products, such as Revit, Navisworks, or BIM 360. You can also use cloud services to store and access your data online.
          -

Robot Structural Analysis Professional 2017 is comprehensive and versatile software that can help you design and analyze complex structures with confidence and efficiency. It is a valuable tool for structural engineers who work on various types of projects, such as buildings, bridges, towers, stadiums, or industrial facilities.

          -

          -
          -
          \ No newline at end of file diff --git a/spaces/sdhsdhk/bingo111/src/state/index.ts b/spaces/sdhsdhk/bingo111/src/state/index.ts deleted file mode 100644 index 272106d619c69f124ea8dd0f10872ab4840152d7..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingo111/src/state/index.ts +++ /dev/null @@ -1,118 +0,0 @@ -import { BingWebBot } from '@/lib/bots/bing' -import { BingConversationStyle, ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { nanoid } from '@/lib/utils' -import { atom } from 'jotai' -import { atomWithImmer } from 'jotai-immer' -import { atomWithStorage } from 'jotai/utils' -import { atomFamily } from 'jotai/utils' -import { atomWithHash, atomWithLocation } from 'jotai-location' - -const initialMessages: ChatMessageModel[] = [ - { author: 'system', text: 'conversation between user and robot', id: '1' }, - { author: 'user', text: '销量最高的 3 种宠物吸尘器有哪些优点和缺点? ', id: '2' }, - { - author: 'bot', text: ` -您好,这是必应。根据网上的搜索结果,我为您找到了以下三款销量最高的宠物吸尘器,以及它们的优点和缺点: - -- **戴森Dyson V10轻量版 Digital Slim Fluffy无线吸尘器**[^1^] [^3^]:这款吸尘器专为宠物家庭设计,拥有强大吸力和多功能吸头,能够有效清除宠物毛发和灰尘。 - - 优点:轻便易用,续航时间长,过滤系统先进,能够有效去除过敏原; - - 缺点:价格较高,噪音较大,需要定期清洁滤网。 -- **飞利浦FC6908/82(W7)无线吸尘器**[^2^] [^4^]:这是一款手持式干吸无线吸尘器,专门为宠物家庭设计。它具有75分钟的超高续航,适合大户型;除螨涡轮刷,可以深度清洁床单被罩,免受螨虫侵害;吸拖一体设计,280ml大水箱,还可以往水箱里加入清洁剂。 - - 优点:清洁性能强劲,操作方便,适用多种场景; - - 缺点:噪音较大,达到84dB(A);尘杯容量较小,只有0.6L;充电时长较长,需要5小时。 -- **必胜(Bissell) 宠物吸尘器**[^2^]:这是一款便携式手持真空吸尘器,专为清理宠物毛发而设计。它配备特殊的橡胶喷嘴和软毛刷头,可以轻松拾取宠物毛发和灰尘;几乎静音运行。 - - 优点:轻巧易用,重量只有1.36kg;电池可充电可更换; - - 缺点:续航时间不稳定,在15-30分钟之间浮动;吸力不够强劲,在地毯或沙发上可能效果不佳;没有过滤系统或过滤网。 - -希望这些信息对您有所帮助。如果您还有其他问题,请随时提问。😊 - `, id: '3' }, - { author: 'user', text: '今天的新闻', id: '4' }, - { - author: 'bot', text: `你好,这是Bing。我根据你的搜索查询“今天的新闻”为你找到了以下信息: - - # 中国新闻 - - 《求是》杂志发表习近平总书记重要文章《深化党和国家机构改革 推进国家治理体系和治理能力现代化》[^1^] - - 梦之队第5金! 
全红婵陈芋汐女双10米台夺冠[^1^] - - 中央气象台7月16日18时发布台风橙色预警[^1^] - - 贵州石阡:暑期旅游带动乡村振兴[^1^] - - 激活大科学装置的“人才红利”[^1^] - - 聚才用才留才 让希望的田野成为智慧的田野[^1^] - - 科学家发现粮食系统增效可有效支撑我国碳中和[^1^] - - 成都以赛为媒提升城市美誉度[^1^] - - 香江观澜:“抢人才”成效显著 香港竞争力依旧[^1^] - - 浙江建德:新安江上享清凉[^1^] - - 定向资金持续精准滴灌 高质量发展力量不断积累[^1^] - - 企业信心恢复加大中长期投资 为稳增长奠定基础[^1^] - - 武汉渡江节开赛 千名“泳士”竞渡长江[^1^] - - 金融更加重视对实体经济支持 降低融资成本提振市场信心[^1^] - - 大运来了丨成都迎大运 全民健身动起来[^1^] - - 藏在高校里的宝藏博物馆[^1^] - - 中国汽车工业用70年赢得三个“全球第一”[^1^] - - 辽宁阜新市委常委、副市长赖化平涉嫌严重违纪违法接受审查调查[^1^] - - # 国际新闻 - - 联合国秘书长:可持续发展面临风险 各国需加强行动[^2^] - - 国际航运业加快绿色转型[^2^] - - 美企反对收紧对华芯片出口限制[^2^] - - 欧洲加大气候科技领域投资[^2^] - - 中企助力丹麦发展清洁能源[^2^] - - 中国代表呼吁国际社会共同努力防止乌克兰局势失控[^2^] - - 中国和阿尔及利亚共同构建新型国际关系典范[^2^] - - 以上信息仅供参考,具体详情请点击以下链接查看: - - [1] [中国新闻_央视网(cctv.com)](https://news.cctv.com/china/) - [2] [人民网_网上的人民日报](http://www.people.com.cn/)`, id: '5' }, - { author: 'user', text: '写一个快排', id: '6' }, - { author: 'bot', text: "你好,这是Bing。我可以帮你用python写一个快排。快速排序是一种基于分治思想的排序算法,它的基本思想是:从待排序的数组中选择一个基准元素,将数组分为两个子数组,使得左边的子数组中的元素都小于或等于基准元素,右边的子数组中的元素都大于或等于基准元素,然后对两个子数组递归地进行快速排序,最后合并结果。下面是用python实现的快速排序的代码:\n\n```python\n# 定义一个函数,用于交换数组中的两个元素\ndef swap(arr, i, j):\n temp = arr[i]\n arr[i] = arr[j]\n arr[j] = temp\n\n# 定义一个函数,用于对数组的一段区间进行划分\ndef partition(arr, low, high):\n # 选择最右边的元素作为基准元素\n pivot = arr[high]\n # 初始化一个指针,指向左边区间的最后一个位置\n i = low - 1\n # 遍历左边区间中的每个元素\n for j in range(low, high):\n # 如果当前元素小于或等于基准元素\n if arr[j] <= pivot:\n # 将指针右移一位\n i += 1\n # 将指针所指的元素与当前元素交换\n swap(arr, i, j)\n # 将基准元素与指针右边的第一个元素交换\n swap(arr, i + 1, high)\n # 返回基准元素的最终位置\n return i + 1\n\n# 定义一个函数,用于对数组进行快速排序\ndef quick_sort(arr, low, high):\n # 如果区间长度大于1\n if low < high:\n # 对区间进行划分,得到基准元素的位置\n p = partition(arr, low, high)\n # 对左边子区间进行快速排序\n quick_sort(arr, low, p - 1)\n # 对右边子区间进行快速排序\n quick_sort(arr, p + 1, high)\n\n# 测试代码\narr = [5, 3, 7, 2, 9, 1, 4, 6, 8]\nprint(\"原始数组:\", arr)\nquick_sort(arr, 0, len(arr) - 1)\nprint(\"排序后的数组:\", arr)\n```", id: '7' }, - { - author: 'bot', text: "好的,我会尝试画一只猫。\n > ![一只猫](https://tse2.mm.bing.net/th/id/OIG.jz34V0PNVkPC229h9spV?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse1.mm.bing.net/th/id/OIG.6g7d.XLZMP_iwAByLhvo?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse2.mm.bing.net/th/id/OIG.iAxF4ekekYn7sZw9SmU6?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse4.mm.bing.net/th/id/OIG.qDnzeSKzUCeJcrBqc5mX?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)", - id: '8' - } -] - -export const GreetMessages = [ - '谢谢你! 知道你什么时候准备好继续前进总是很有帮助的。我现在能为你回答什么问题?', - '重新开始总是很棒。问我任何问题!', - '当然,我很乐意重新开始。我现在可以为你提供哪些帮助?', - '当然,我已准备好进行新的挑战。我现在可以为你做什么?', - '很好,让我们来更改主题。你在想什么?', - '不用担心,我很高兴尝试一些新内容。我现在可以为你回答什么问题?', - '好的,我准备好了!感谢重置。我们应该了解哪些内容?', - '感谢刷新!你有新的话题吗?', - '明白了,让我们重新开始。接下来应该讨论什么?', - '下一步!我可以为你做什么?', - '好的,我已准备好新话题。我们应该一起了解哪些内容?' 
-] - -export const bingConversationStyleAtom = atomWithStorage('bingConversationStyle', BingConversationStyle.Creative, undefined, { unstable_getOnInit: true }) -export const voiceAtom = atomWithStorage('enableTTS', false, undefined, { unstable_getOnInit: true }) - -type Param = { botId: BotId; page: string } - -const createBotInstance = () => { - return new BingWebBot({ - cookie: ' ', - ua: ' ', - }) -} - -export const chatFamily = atomFamily( - (param: Param) => { - return atomWithImmer({ - botId: param.botId, - bot: createBotInstance(), - messages: [] as ChatMessageModel[], - generatingMessageId: '', - abortController: undefined as AbortController | undefined, - conversationId: nanoid(), - }) - }, - (a, b) => a.botId === b.botId && a.page === b.page, -) - -export const hashAtom = atomWithHash('dialog', '') - -export const locationAtom = atomWithLocation() - -export const voiceListenAtom = atom(false) diff --git a/spaces/sgonzalezsilot/Fake-News-Twitter-Detection_from-my-Thesis/app.py b/spaces/sgonzalezsilot/Fake-News-Twitter-Detection_from-my-Thesis/app.py deleted file mode 100644 index 4b235515c5e45dae888e13bf25f654bdb8d63cb9..0000000000000000000000000000000000000000 --- a/spaces/sgonzalezsilot/Fake-News-Twitter-Detection_from-my-Thesis/app.py +++ /dev/null @@ -1,75 +0,0 @@ -import gradio as gr -from huggingface_hub import from_pretrained_keras -from huggingface_hub import KerasModelHubMixin -import transformers -from transformers import AutoTokenizer -import numpy as np - - -m = from_pretrained_keras('sgonzalezsilot/FakeNews-Detection-Twitter-Thesis') - -MODEL = "digitalepidemiologylab/covid-twitter-bert-v2" -tokenizer = AutoTokenizer.from_pretrained(MODEL) - -def bert_encode(tokenizer,data,maximum_length) : - input_ids = [] - attention_masks = [] - - - for i in range(len(data)): - encoded = tokenizer.encode_plus( - - data[i], - add_special_tokens=True, - max_length=maximum_length, - pad_to_max_length=True, - truncation = True, - return_attention_mask=True, - ) - - input_ids.append(encoded['input_ids']) - attention_masks.append(encoded['attention_mask']) - - return np.array(input_ids),np.array(attention_masks) - -# train_encodings = tokenizer(train_texts, truncation=True, padding=True) -# test_encodings = tokenizer(test_texts, truncation=True, padding=True) - - - - -def get_news(input_text): - sentence_length = 110 - train_input_ids,train_attention_masks = bert_encode(tokenizer,[input_text],sentence_length) - - pred = m.predict([train_input_ids,train_attention_masks]) - pred = np.round(pred) - pred = pred.flatten() - - if pred == 1: - result = "Fake News" - else: - result = "True News" - return result - -tweet_input = gr.Textbox(label = "Enter the tweet") -output = gr.Textbox(label="Result") - -descripcion = ( - """ -
          - Demo of the Covid-Twitter Fake News Detection System from my thesis. -
          - """ -) -iface = gr.Interface(fn = get_news, - inputs = tweet_input, - outputs = output, - title = 'Covid Fake News Detection System', - description=descripcion, - examples=["CDC Recommends Mothers Stop Breastfeeding To Boost Vaccine Efficacy", - "An article claiming that Bill Gates' vaccine would modify human DNA.", - "In the first half of 2020 WHO coordinated the logistics & shipped 😷More than 3M surgical masks 🧤More than 2M gloves 🧰More than 1M diagnostic kits 🥼More than 200K gowns 🛡️More than 100K face shields to 135 countries across the🌍🌎🌏. https://t.co/iz4YQkbSGM", - "Many COVID-19 treatments may be associated with adverse skin reactions and should be considered in a differential diagnosis new report says. https://t.co/GLSeYX2VDq"]) - -iface.launch() \ No newline at end of file diff --git a/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/docs/make.bat b/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/docs/make.bat deleted file mode 100644 index 922152e96a04a242e6fc40f124261d74890617d8..0000000000000000000000000000000000000000 --- a/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/docs/make.bat +++ /dev/null @@ -1,35 +0,0 @@ -@ECHO OFF - -pushd %~dp0 - -REM Command file for Sphinx documentation - -if "%SPHINXBUILD%" == "" ( - set SPHINXBUILD=sphinx-build -) -set SOURCEDIR=. -set BUILDDIR=_build - -if "%1" == "" goto help - -%SPHINXBUILD% >NUL 2>NUL -if errorlevel 9009 ( - echo. - echo.The 'sphinx-build' command was not found. Make sure you have Sphinx - echo.installed, then set the SPHINXBUILD environment variable to point - echo.to the full path of the 'sphinx-build' executable. Alternatively you - echo.may add the Sphinx directory to PATH. - echo. - echo.If you don't have Sphinx installed, grab it from - echo.http://sphinx-doc.org/ - exit /b 1 -) - -%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% -goto end - -:help -%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% - -:end -popd diff --git a/spaces/shalinig/magorshunov-layoutlm-invoices/app.py b/spaces/shalinig/magorshunov-layoutlm-invoices/app.py deleted file mode 100644 index 7d16823fc6fe638d64aa2067ccec9de4fccf0b0f..0000000000000000000000000000000000000000 --- a/spaces/shalinig/magorshunov-layoutlm-invoices/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/magorshunov/layoutlm-invoices").launch() \ No newline at end of file diff --git a/spaces/shivammehta25/Diff-TTSG/diff_ttsg/utils/model.py b/spaces/shivammehta25/Diff-TTSG/diff_ttsg/utils/model.py deleted file mode 100644 index 76a48571c4c69b556ea062c68382e2fcdbb54a3e..0000000000000000000000000000000000000000 --- a/spaces/shivammehta25/Diff-TTSG/diff_ttsg/utils/model.py +++ /dev/null @@ -1,88 +0,0 @@ -""" from https://github.com/jaywalnut310/glow-tts """ - -import numpy as np -import torch - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(int(max_length), dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def fix_len_compatibility(length, num_downsamplings_in_unet=2): - while True: - if length % (2**num_downsamplings_in_unet) == 0: - return length - length += 1 - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def generate_path(duration, mask): - device = duration.device - - b, t_x, t_y = mask.shape - cum_duration = torch.cumsum(duration, 1) - path = torch.zeros(b, t_x, t_y, 
dtype=mask.dtype).to(device=device) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - torch.nn.functional.pad(path, convert_pad_shape([[0, 0], - [1, 0], [0, 0]]))[:, :-1] - path = path * mask - return path - - -def duration_loss(logw, logw_, lengths): - loss = torch.sum((logw - logw_)**2) / torch.sum(lengths) - return loss - - -def normalize(data, mu, std): - if not isinstance(mu, float): - if isinstance(mu, list): - mu = torch.tensor(mu, dtype=data.dtype, device=data.device) - elif isinstance(mu, torch.Tensor): - mu = mu.to(data.device) - elif isinstance(mu, np.ndarray): - mu = torch.from_numpy(mu).to(data.device) - mu = mu.unsqueeze(-1) - - if not isinstance(std, float): - if isinstance(std, list): - std = torch.tensor(std, dtype=data.dtype, device=data.device) - elif isinstance(std, torch.Tensor): - std = std.to(data.device) - elif isinstance(std, np.ndarray): - std = torch.from_numpy(std).to(data.device) - std = std.unsqueeze(-1) - - return (data - mu) / std - -def denormalize(data, mu, std): - if not isinstance(mu, float): - if isinstance(mu, list): - mu = torch.tensor(mu, dtype=data.dtype, device=data.device) - elif isinstance(mu, torch.Tensor): - mu = mu.to(data.device) - elif isinstance(mu, np.ndarray): - mu = torch.from_numpy(mu).to(data.device) - mu = mu.unsqueeze(-1) - - if not isinstance(std, float): - if isinstance(std, list): - std = torch.tensor(std, dtype=data.dtype, device=data.device) - elif isinstance(std, torch.Tensor): - std = std.to(data.device) - elif isinstance(std, np.ndarray): - std = torch.from_numpy(std).to(data.device) - std = std.unsqueeze(-1) - - return data * std + mu diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Super Mario Bros A Classic Adventure and Relive the Nostalgia.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Super Mario Bros A Classic Adventure and Relive the Nostalgia.md deleted file mode 100644 index c8167b1df45a9133c763d7e6875a59f106860f0b..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Super Mario Bros A Classic Adventure and Relive the Nostalgia.md +++ /dev/null @@ -1,96 +0,0 @@ -
          -

          Super Mario Bros: A Classic Adventure Download

          -

          Super Mario Bros is one of the most iconic and beloved video games of all time. Released in 1985 for the Nintendo Entertainment System (NES), it introduced millions of players to the colorful and whimsical world of Mario, a plucky plumber who must rescue Princess Peach from the evil Bowser. Along the way, he encounters various enemies, obstacles, power-ups, and secrets that make each level a thrilling and challenging experience. Super Mario Bros is not only a fun and addictive game, but also a groundbreaking and influential one that shaped the history and future of video games. In this article, we will explore the history and features of Super Mario Bros, how to download and play it today, and some tips and tricks for enjoying this classic adventure.

          -

          super mario bros a classic adventure download


          Download ::: https://ssurll.com/2uNX8j



          -

          The History and Features of Super Mario Bros

          -

Super Mario Bros was developed by Nintendo R&D4, a team led by Shigeru Miyamoto and Takashi Tezuka. They wanted to create a "grand culmination" of their previous work on platforming games such as Donkey Kong and Mario Bros. They drew inspiration from various sources, such as fairy tales, Alice in Wonderland, Star Wars, Pac-Man, and Japanese folklore. They also experimented with different gameplay mechanics, such as scrolling levels, hidden blocks, warp zones, coins, mushrooms, fireballs, and more, famously sketching the levels out on graph paper before programming them into the game. They also collaborated with composer Koji Kondo, who created one of the most memorable and catchy soundtracks in video game history.

          -

Super Mario Bros features 32 levels across eight worlds, each with its own theme, enemies, obstacles, secrets, and boss. The game can be played solo or with a friend in alternating turns; the second player controls Mario's brother Luigi. The goal is to reach the end of each level by jumping over pits, dodging enemies, breaking blocks, collecting coins, finding power-ups, and hitting the flagpole. The power-ups include the Super Mushroom, which makes Mario grow bigger; the Fire Flower, which lets him shoot fireballs; and the Starman, which makes him invincible for a short time. The enemies include goombas, koopa troopas and paratroopas, buzzy beetles, piranha plants, hammer bros, bullet bills, lakitus and their spinies, cheep cheeps, bloopers, and podoboos. The boss of each of the first seven worlds is a fake Bowser who can be defeated by fireballs or by hitting the axe behind him; the real Bowser awaits at the end of World 8-4, the final level of the game. The game also hides several secrets, such as warp zones, coin rooms, vines and beanstalks leading to cloud platforms, invisible blocks, and 1-UP mushrooms. Super Mario Bros is a game that rewards exploration, experimentation, and skill.
          -

          How to Download and Play Super Mario Bros Today

          -

          Super Mario Bros is a game that has stood the test of time and is still widely played and enjoyed by millions of fans around the world. If you want to experience this classic adventure for yourself, you have several options and platforms to choose from. Here are some of the ways you can download and play Super Mario Bros today:

          -

          -

          The Official Nintendo Store

          -

The easiest and most official way to play Super Mario Bros is to buy it from Nintendo's own store for your console. On the Virtual Console for the Nintendo 3DS, Wii U, or Wii, the game costs $4.99 USD. On the Nintendo Switch it comes with the Nintendo Switch Online service, which gives you access to a library of classic NES and SNES games for a monthly or annual fee. If you want more Mario on the Switch, the Super Mario 3D All-Stars collection bundles remastered versions of Super Mario 64, Super Mario Sunshine, and Super Mario Galaxy. You can also play on your smartphone or tablet using the Super Mario Run app, a free-to-start game that adapts the gameplay and graphics of Super Mario Bros for mobile devices.

          -

          The Internet Archive

          -

If you don't have a Nintendo console or device, you can still play Super Mario Bros for free online using an emulator. An emulator is software that mimics the hardware and software of another system, letting you run games and programs that are not compatible with your current machine; the toy sketch below illustrates the core idea. One of the best places to find emulators and games online is the Internet Archive, a non-profit digital library that preserves and provides access to millions of books, movies, music recordings, software titles, and more. You can find Super Mario Bros on the Internet Archive website, where you can play it directly in your browser using an NES emulator, or download the ROM file and use it with any NES emulator of your choice.
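To give a feel for the concept, here is a toy sketch of the fetch-decode-execute loop at the heart of any emulator. The three-instruction machine below is invented purely for illustration; a real NES emulator must faithfully reproduce the console's 6502 CPU, graphics, and sound hardware.

```python
# A toy illustration of the idea behind an emulator (not a real NES emulator):
# a loop that fetches the guest program's instructions one at a time and
# reproduces their effect in software.

def run(program):
    acc = 0                          # the guest machine's only register
    pc = 0                           # program counter
    while pc < len(program):
        op, arg = program[pc]        # fetch and decode one instruction
        if op == "LOAD":             # LOAD n: set the register to n
            acc = arg
        elif op == "ADD":            # ADD n: add n to the register
            acc += arg
        elif op == "JMPZ":           # JMPZ n: jump to n if register is zero
            if acc == 0:
                pc = arg
                continue
        pc += 1                      # execute, then advance
    return acc

print(run([("LOAD", 2), ("ADD", 3)]))  # prints 5
```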

          -

          The Classic Super Mario Bros Website

          -

          Another way to play Super Mario Bros for free online is to visit the Classic Super Mario Bros website, which is a fan-made project that recreates the game using HTML5. HTML5 is a web standard that allows developers to create interactive and multimedia content without using plugins or external software. The Classic Super Mario Bros website lets you play the game on any browser that supports HTML5, such as Chrome, Firefox, Safari, or Edge. You can also play the game on your smartphone or tablet by tapping on the screen. The website features all 32 levels of the original game, as well as some extra levels and modes that are not found in the original game.

          -

          Tips and Tricks for Super Mario Bros

          -

          Super Mario Bros is a game that has many secrets and glitches that can enhance your gameplay experience. Some of these are intentional and some are accidental, but they all add to the fun and charm of the game. Here are some tips and tricks for playing Super Mario Bros better:

          -

          Jump Over the Flagpole

          -

          One of the most famous glitches in Super Mario Bros is jumping over the flagpole at the end of each level. This is possible on World 3-3, where there is a springboard near the flagpole that can launch you over it if you time your jump right. If you manage to do this, you will end up in a glitched area where you can run endlessly until time runs out or you die. This glitch has no practical benefit, but it is a fun challenge to try.

          -

          Skating

          -

Another glitch in Super Mario Bros is skating, which is when Fire Mario moves without moving his feet. Because the B button doubles as both the run and the fire button, shooting a fireball at the right moment while standing still or inching forward or backward can leave Fire Mario sliding across the ground with no walking animation. The glitch does not affect your gameplay, but it looks funny and cool.

          -

          How to Get Maximum Fireworks

          -

One of the secrets in Super Mario Bros is getting fireworks at the end of a level when you hit the flagpole. The number of fireworks depends on the last digit of the timer at that moment: a 1 gives you one firework, a 3 gives three, and a 6 gives six, while any other digit gives none. Each firework also adds 500 points to your score, so a well-timed finish is a small but satisfying way to celebrate your victory.
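To make the rule concrete, here is a small sketch in Python. It simply restates the timer rule described above; it is not code taken from the game.

```python
# A small sketch of the flagpole fireworks rule, not code from the game.

def fireworks_for_timer(timer):
    """Fireworks shown at the flagpole for a given timer value."""
    last_digit = timer % 10
    return last_digit if last_digit in (1, 3, 6) else 0

def fireworks_bonus(timer):
    """Bonus points from the fireworks (500 points apiece)."""
    return fireworks_for_timer(timer) * 500

print(fireworks_for_timer(253))  # timer ends in 3 -> 3 fireworks
print(fireworks_bonus(146))      # timer ends in 6 -> 6 fireworks, 3000 points
```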

          -

          How to Get Infinite 1-UPs

          -

One of the most useful glitches in Super Mario Bros is getting infinite 1-UPs, which are extra lives that allow you to continue playing after you die. This glitch can be performed on World 3-1, where a koopa troopa walks down the staircase near the end of the level. If you jump onto the koopa troopa so that its shell is pinned against a step, the shell will bounce back and forth between Mario and the staircase. Position yourself correctly and you can keep bouncing on the shell indefinitely; each consecutive bounce is worth progressively more points, and once the point values max out, every further bounce awards a 1-UP instead. Keep it going and you can stockpile as many extra lives as you need to finish the game.

          -

          Win and Die Simultaneously

          -

          One of the most bizarre glitches in Super Mario Bros is winning and dying at the same time. This glitch can be performed on World 8-4, the final level of the game where you face the real Bowser. If you manage to hit the axe behind Bowser at the same time as he hits you with his fire breath or his body, you will trigger a paradoxical situation where you both win and lose. The game will show both the victory screen and the game over screen at the same time, and then freeze. This glitch has no practical benefit, but it is a funny and rare occurrence to witness.

          -

          Become Small Fire Mario

          -

          One of the most interesting glitches in Super Mario Bros is becoming small Fire Mario, which is when Mario has the ability to shoot fireballs while being small. This glitch can be performed on any mini-Bowser level, such as World 1-4, World 2-4, World 3-4, etc. To do this glitch, you need to be big Mario or Fire Mario and reach the mini-Bowser at the end of the level. Then, you need to jump over him and hit the axe behind him while he is in mid-air. If you time it right, he will land on top of you and damage you as you hit the axe. This will cause you to shrink and lose your power-up, but also trigger the victory screen. When you start the next level, you will be small Fire Mario, who can shoot fireballs by pressing B but also die in one hit by any enemy or obstacle.

          -

          Conclusion

          -

Super Mario Bros is a classic adventure that deserves to be played by everyone who loves video games. It combines fun, challenge, creativity, and innovation in a way few games can match; it has influenced countless other games and genres over the years, and it has entertained and inspired millions of players around the world. Whether you play it on a Nintendo console or device, online through an emulator or an HTML5 remake, or with a few tips and tricks to spice up your run, Super Mario Bros never gets old or boring. So what are you waiting for? Download and play Super Mario Bros today and enjoy this classic adventure for yourself!

          -

          FAQs

          -

          Here are some common questions and answers about Super Mario Bros:

          -
            -
• Q: How many worlds are there in Super Mario Bros?
• A: There are eight worlds in Super Mario Bros, each with four levels. The worlds are numbered from 1 to 8 and have different themes and enemies.
• Q: How do I save my progress in Super Mario Bros?
• A: Unfortunately, Super Mario Bros does not have a save feature in its original version for the NES. However, some later versions of the game for other platforms have added a save feature or a suspend feature that allows you to resume your game from where you left off.
• Q: What is the difference between Mario and Luigi in Super Mario Bros?
• A: In terms of gameplay, there is no difference between Mario and Luigi in Super Mario Bros. They have the same abilities and controls. The only difference is their appearance and color. Mario wears a red hat and shirt and blue overalls, while Luigi wears a green hat and shirt and blue overalls.
• Q: What is the highest score possible in Super Mario Bros?
• A: The highest score possible in Super Mario Bros is 9,999,950 points. This can be achieved by collecting all the coins, power-ups, 1-UPs, and fireworks in the game, as well as defeating all the enemies and bosses with fireballs, and finishing each level with the maximum time left.
• Q: What is the Minus World in Super Mario Bros?
• A: The Minus World is a glitched level in Super Mario Bros that can be accessed by performing a trick on World 1-2. The trick involves breaking a block near the end of the level and crouching through a wall to enter a warp zone that leads to World -1. The Minus World is an underwater level that loops endlessly and cannot be completed.

          -
          -
          \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/examples/wenzhong_qa/finetune_GPT2_medicalQA.sh b/spaces/skf15963/summary/fengshen/examples/wenzhong_qa/finetune_GPT2_medicalQA.sh deleted file mode 100644 index d9a81670ed121ecfb9fa3e0e546f0773374087af..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/wenzhong_qa/finetune_GPT2_medicalQA.sh +++ /dev/null @@ -1,123 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=medical_qa_finetune -#SBATCH --nodes=2 -#SBATCH --ntasks-per-node=8 -#SBATCH --gres=gpu:8 # number of gpus -#SBATCH -o /cognitive_comp/wuziwei/task/fs_medical_qa_finetune/%x-%j.log -#SBATCH -e /cognitive_comp/wuziwei/task/fs_medical_qa_finetune/%x-%j.err -#SBATCH -x dgx[050,049] - -#export NCCL_DEBUG=INFO - -# export PATH=$PATH:/cognitive_comp/wuziwei/codes/fengshen/fengshen -set -x -e - -echo "START TIME: $(date)" -MICRO_BATCH_SIZE=1 -ROOT_DIR=/cognitive_comp/wuziwei/task/fs_medical_qa_finetune - -ZERO_STAGE=2 - -config_json="$ROOT_DIR/training_config.json" -export MASTER_PORT=$[RANDOM%10000+30000] - -# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size() -cat < $config_json -{ - "zero_optimization": { - "stage": $ZERO_STAGE, - "contiguous_gradients": true, - "overlap_comm": true, - "reduce_scatter": true, - "reduce_bucket_size": 2e8, - "allgather_bucket_size": 2e8 - }, - "optimizer": { - "type": "Adam", - "params": { - "lr": 1e-5, - "betas": [0.9,0.95], - "eps": 1e-8, - "weight_decay": 1e-2 - } - }, - "scheduler": { - "type": "WarmupLR", - "params":{ - "warmup_min_lr": 5e-6, - "warmup_max_lr": 1e-5 - } - }, - "fp16": { - "enabled": true, - "loss_scale": 0, - "loss_scale_window": 1000, - "initial_scale_power": 32, - "hysteresis": 2, - "min_loss_scale": 1 - }, - "activation_checkpointing": { - "partition_activations": false, - "contiguous_memory_optimization": false - }, - "wall_clock_breakdown": false, - "zero_allow_untested_optimizer": false, - "train_micro_batch_size_per_gpu": 1, - "steps_per_print": 100, - "gradient_clipping": 1.0 -} -EOT - -# export PL_DEEPSPEED_CONFIG_PATH=$config_json -export PL_DEEPSPEED_CONFIG_PATH=$config_json -export TORCH_EXTENSIONS_DIR=/cognitive_comp/wuziwei/torch_extendsions -TRAINER_ARGS=" - --max_epochs 10 \ - --gpus 16 \ - --num_nodes 2 \ - --strategy deepspeed_stage_2 \ - --default_root_dir $ROOT_DIR \ - --dirpath $ROOT_DIR/ckpt \ - --save_top_k 3 \ - --monitor train_loss \ - --mode min \ - --save_last \ -" -DATA_DIR=/cognitive_comp/wuziwei/task-data/medical_qa -DATA_ARGS=" - --data_dir $DATA_DIR \ - --train_batchsize $MICRO_BATCH_SIZE \ - --valid_batchsize $MICRO_BATCH_SIZE \ - --train_data train.txt \ - --valid_data valid.txt \ - --test_data test.txt -" - -# PRETRAINED_MODEL_PATH=/cognitive_comp/wuziwei/pretrained_model_hf/gpt2 -PRETRAINED_MODEL_PATH=/cognitive_comp/wuziwei/pretrained_model_hf/medical_v2 -MODEL_ARGS=" - --pretrained_model_path ${PRETRAINED_MODEL_PATH} \ - --output_save_path $ROOT_DIR/predict.json \ - --learning_rate 1e-4 \ - --weight_decay 0.1 \ - --warmup 0.01 \ -" - -SCRIPTS_PATH=/cognitive_comp/wuziwei/codes/fengshen/fengshen/examples/GPT_pretrain_finetune/finetune_medicalQA.py - -export CMD=" \ - $SCRIPTS_PATH \ - $TRAINER_ARGS \ - $MODEL_ARGS \ - $DATA_ARGS \ - " - -echo $CMD - -SINGULARITY_PATH=/cognitive_comp/wuziwei/container/oneflow-cuda11.sif -# singularity exec --nv -B /cognitive_comp/wuziwei/:/cognitive_comp/wuziwei/ $SINGULARITY_PATH python $CMD - -# to debug - add echo (it exits and prints what it would have launched) 
-#run_cmd="$PY_LAUNCHER $CMD" - -srun singularity exec --nv -B /cognitive_comp/wuziwei/:/cognitive_comp/wuziwei/ $SINGULARITY_PATH bash -c 'python $CMD' diff --git a/spaces/sklearn-docs/Comparison_K_Means_MiniBatchKMeans/app.py b/spaces/sklearn-docs/Comparison_K_Means_MiniBatchKMeans/app.py deleted file mode 100644 index 0ca9fe7ae54c638815ed1d1fbf41135405b06c36..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/Comparison_K_Means_MiniBatchKMeans/app.py +++ /dev/null @@ -1,142 +0,0 @@ -import gradio as gr -import numpy as np -import matplotlib.pyplot as plt -from sklearn.datasets import make_blobs -import time -from sklearn.cluster import KMeans, MiniBatchKMeans -from sklearn.metrics.pairwise import pairwise_distances_argmin - -theme = gr.themes.Monochrome( - primary_hue="indigo", - secondary_hue="blue", - neutral_hue="slate", -) -model_card = f""" -## Description - -This demo compares the performance of the **MiniBatchKMeans** and **KMeans**. The MiniBatchKMeans is faster, but gives slightly different results. -The points that are labelled differently between the two algorithms are also plotted. -You can play around with different ``number of samples`` and ``number of mini batch size`` to see the effect - -## Dataset - -Simulation dataset -""" - - -def do_train(n_samples, batch_size): - - np.random.seed(0) - - centers = np.random.rand(3, 2) - n_clusters = len(centers) - X, labels_true = make_blobs(n_samples=n_samples, centers=centers, cluster_std=0.7) - - k_means = KMeans(init="k-means++", n_clusters=n_clusters, n_init=10) - t0 = time.time() - k_means.fit(X) - t_batch = time.time() - t0 - - - mbk = MiniBatchKMeans( - init="k-means++", - n_clusters=n_clusters, - batch_size=batch_size, - n_init=10, - max_no_improvement=10, - verbose=0, - ) - t0 = time.time() - mbk.fit(X) - t_mini_batch = time.time() - t0 - - - k_means_cluster_centers = k_means.cluster_centers_ - order = pairwise_distances_argmin(k_means.cluster_centers_, mbk.cluster_centers_) - mbk_means_cluster_centers = mbk.cluster_centers_[order] - - k_means_labels = pairwise_distances_argmin(X, k_means_cluster_centers) - mbk_means_labels = pairwise_distances_argmin(X, mbk_means_cluster_centers) - - - colors = ["#4EACC5", "#FF9C34", "#4E9A06"] - - # KMeans - fig1, axes1 = plt.subplots() - for k, col in zip(range(n_clusters), colors): - my_members = k_means_labels == k - cluster_center = k_means_cluster_centers[k] - axes1.plot(X[my_members, 0], X[my_members, 1], "w", markerfacecolor=col, marker=".", markersize=15) - axes1.plot( - cluster_center[0], - cluster_center[1], - "o", - markerfacecolor=col, - markeredgecolor="k", - markersize=12, - ) - axes1.set_title("KMeans") - axes1.set_xticks(()) - axes1.set_yticks(()) - - # MiniBatchKMeans - fig2, axes2 = plt.subplots() - for k, col in zip(range(n_clusters), colors): - my_members = mbk_means_labels == k - cluster_center = mbk_means_cluster_centers[k] - axes2.plot(X[my_members, 0], X[my_members, 1], "w", markerfacecolor=col, marker=".", markersize=15) - axes2.plot( - cluster_center[0], - cluster_center[1], - "o", - markerfacecolor=col, - markeredgecolor="k", - markersize=12, - ) - axes2.set_title("MiniBatchKMeans") - axes2.set_xticks(()) - axes2.set_yticks(()) - - # Initialize the different array to all False - different = mbk_means_labels == 4 - fig3, axes3 = plt.subplots() - - for k in range(n_clusters): - different += (k_means_labels == k) != (mbk_means_labels == k) - - identic = np.logical_not(different) - axes3.plot(X[identic, 0], X[identic, 1], "w", 
markerfacecolor="#bbbbbb", marker=".", markersize=15) - axes3.plot(X[different, 0], X[different, 1], "w", markerfacecolor="m", marker=".", markersize=15) - axes3.set_title("Difference") - axes3.set_xticks(()) - axes3.set_yticks(()) - text = f"KMeans Train time: {t_batch:.2f}s Inertia: {k_means.inertia_:.4f}. MiniBatchKMeans Train time: {t_mini_batch:.2f}s Inertia: {mbk.inertia_:.4f}" - plt.close() - return fig1, fig2, fig3, text - - - -with gr.Blocks(theme=theme) as demo: - gr.Markdown(''' -
          -

          Comparison of the K-Means and MiniBatchKMeans clustering algorithms

          -
          - ''') - gr.Markdown(model_card) - gr.Markdown("Author: Vu Minh Chien. Based on the example from scikit-learn") - n_samples = gr.Slider(minimum=500, maximum=5000, step=500, value=500, label="Number of samples") - batch_size = gr.Slider(minimum=100, maximum=2000, step=100, value=100, label="Size of the mini batches") - with gr.Row(): - with gr.Column(): - plot1 = gr.Plot(label="KMeans") - with gr.Column(): - plot2 = gr.Plot(label="MiniBatchKMeans") - with gr.Column(): - plot3 = gr.Plot(label="Difference") - with gr.Row(): - results = gr.Textbox(label="Results") - - n_samples.change(fn=do_train, inputs=[n_samples, batch_size], outputs=[plot1, plot2, plot3, results]) - batch_size.change(fn=do_train, inputs=[n_samples, batch_size], outputs=[plot1, plot2, plot3, results]) - -demo.launch() \ No newline at end of file diff --git a/spaces/sklearn-docs/anomaly-detection/app.py b/spaces/sklearn-docs/anomaly-detection/app.py deleted file mode 100644 index ae771f4f2eb9d19792f5637bbcef0bb28679aac8..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/anomaly-detection/app.py +++ /dev/null @@ -1,161 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt -from threading import Thread -from matplotlib.colors import ListedColormap -from sklearn.datasets import make_moons, make_circles, make_classification -from sklearn.datasets import make_blobs, make_circles, make_moons -import gradio as gr -import math -from functools import partial -import time - -import matplotlib - -from sklearn import svm -from sklearn.datasets import make_moons, make_blobs -from sklearn.covariance import EllipticEnvelope -from sklearn.ensemble import IsolationForest -from sklearn.neighbors import LocalOutlierFactor -from sklearn.linear_model import SGDOneClassSVM -from sklearn.kernel_approximation import Nystroem -from sklearn.pipeline import make_pipeline - -def get_groundtruth_model(X, labels): - # dummy model to show true label distribution - class Dummy: - def __init__(self, y): - self.labels_ = labels - - return Dummy(labels) - -#### PLOT -FIGSIZE = 10,10 -figure = plt.figure(figsize=(25, 10)) - - -def train_models(input_data, outliers_fraction, n_samples, clf_name): - n_outliers = int(outliers_fraction * n_samples) - n_inliers = n_samples - n_outliers - blobs_params = dict(random_state=0, n_samples=n_inliers, n_features=2) - NAME_CLF_MAPPING = {"Robust covariance": EllipticEnvelope(contamination=outliers_fraction), - "One-Class SVM": svm.OneClassSVM(nu=outliers_fraction, kernel="rbf", gamma=0.1), - "One-Class SVM (SGD)":make_pipeline( - Nystroem(gamma=0.1, random_state=42, n_components=150), - SGDOneClassSVM( - nu=outliers_fraction, - shuffle=True, - fit_intercept=True, - random_state=42, - tol=1e-6, - ), - ), - "Isolation Forest": IsolationForest(contamination=outliers_fraction, random_state=42), - "Local Outlier Factor": LocalOutlierFactor(n_neighbors=35, contamination=outliers_fraction), - } - DATA_MAPPING = { - "Central Blob":make_blobs(centers=[[0, 0], [0, 0]], cluster_std=0.5, **blobs_params)[0], - "Two Blobs": make_blobs(centers=[[2, 2], [-2, -2]], cluster_std=[0.5, 0.5], **blobs_params)[0], - "Blob with Noise": make_blobs(centers=[[2, 2], [-2, -2]], cluster_std=[1.5, 0.3], **blobs_params)[0], - "Moons": 4.0 - * ( - make_moons(n_samples=n_samples, noise=0.05, random_state=0)[0] - - np.array([0.5, 0.25]) - ), - "Noise": 14.0 * (np.random.RandomState(42).rand(n_samples, 2) - 0.5), - } - DATASETS = [ - make_blobs(centers=[[0, 0], [0, 0]], cluster_std=0.5, **blobs_params)[0], - 
make_blobs(centers=[[2, 2], [-2, -2]], cluster_std=[0.5, 0.5], **blobs_params)[0], - make_blobs(centers=[[2, 2], [-2, -2]], cluster_std=[1.5, 0.3], **blobs_params)[0], - 4.0 - * ( - make_moons(n_samples=n_samples, noise=0.05, random_state=0)[0] - - np.array([0.5, 0.25]) - ), - 14.0 * (np.random.RandomState(42).rand(n_samples, 2) - 0.5), - ] - - xx, yy = np.meshgrid(np.linspace(-7, 7, 150), np.linspace(-7, 7, 150)) - clf = NAME_CLF_MAPPING[clf_name] - plt.figure(figsize=(len(NAME_CLF_MAPPING) * 2 + 4, 12.5)) - - - plot_num = 1 - rng = np.random.RandomState(42) - X = DATA_MAPPING[input_data] - X = np.concatenate([X, rng.uniform(low=-6, high=6, size=(n_outliers, 2))], axis=0) - - t0 = time.time() - clf.fit(X) - t1 = time.time() - # fit the data and tag outliers - if clf_name == "Local Outlier Factor": - y_pred = clf.fit_predict(X) - else: - y_pred = clf.fit(X).predict(X) - - # plot the levels lines and the points - if clf_name != "Local Outlier Factor": - Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) - Z = Z.reshape(xx.shape) - plt.contour(xx, yy, Z, levels=[0], linewidths=10, colors="black") - - colors = np.array(["#377eb8", "#ff7f00"]) - plt.scatter(X[:, 0], X[:, 1], s=100, color=colors[(y_pred + 1) // 2]) - - plt.xlim(-7, 7) - plt.ylim(-7, 7) - plt.xticks(()) - plt.yticks(()) - plt.text( - 0.99, - 0.01, - ("%.2fs" % (t1 - t0)).lstrip("0"), - transform=plt.gca().transAxes, - size=60, - horizontalalignment="right", - ) - plot_num += 1 - - return plt - -description = "Learn how different anomaly detection algorithms perform in different datasets." - -def iter_grid(n_rows, n_cols): - # create a grid using gradio Block - for _ in range(n_rows): - with gr.Row(): - for _ in range(n_cols): - with gr.Column(): - yield - -title = "🕵️‍♀️ compare anomaly detection algorithms 🕵️‍♂️" -with gr.Blocks() as demo: - gr.Markdown(f"## {title}") - gr.Markdown(description) - - input_models = ["Robust covariance","One-Class SVM","One-Class SVM (SGD)","Isolation Forest", - "Local Outlier Factor"] - input_data = gr.Radio( - choices=["Central Blob", "Two Blobs", "Blob with Noise", "Moons", "Noise"], - value="Moons" - ) - n_samples = gr.Slider(minimum=100, maximum=500, step=25, label="Number of Samples") - outliers_fraction = gr.Slider(minimum=0.1, maximum=0.9, step=0.1, label="Fraction of Outliers") - counter = 0 - - - for _ in iter_grid(5, 5): - if counter >= len(input_models): - break - - input_model = input_models[counter] - plot = gr.Plot(label=input_model) - fn = partial(train_models, clf_name=input_model) - input_data.change(fn=fn, inputs=[input_data, outliers_fraction, n_samples], outputs=plot) - n_samples.change(fn=fn, inputs=[input_data, outliers_fraction, n_samples], outputs=plot) - outliers_fraction.change(fn=fn, inputs=[input_data, outliers_fraction, n_samples], outputs=plot) - counter += 1 - -demo.launch(enable_queue=True, debug=True) - diff --git a/spaces/smc/pole_or_trafo/README.md b/spaces/smc/pole_or_trafo/README.md deleted file mode 100644 index a8b4837807c705a672a7ca6589b867cc9d185962..0000000000000000000000000000000000000000 --- a/spaces/smc/pole_or_trafo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Electric Pole or Trafo -emoji: 👀 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/linformer/README.md 
b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/linformer/README.md deleted file mode 100644 index f8b36bc691cb8f5bf82942e07b6d9c014387bdd8..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/linformer/README.md +++ /dev/null @@ -1,22 +0,0 @@ -# Linformer: Self-Attention with Linear Complexity (Wang et al., 2020) - -This example contains code to train Linformer models as described in our paper -[Linformer: Self-Attention with Linear Complexity](https://arxiv.org/abs/2006.04768). - -## Training a new Linformer RoBERTa model - -You can mostly follow the [RoBERTa pretraining README](/examples/roberta/README.pretraining.md), -updating your training command with `--user-dir examples/linformer/linformer_src --arch linformer_roberta_base`. - -## Citation - -If you use our work, please cite: - -```bibtex -@article{wang2020linformer, - title={Linformer: Self-Attention with Linear Complexity}, - author={Wang, Sinong and Li, Belinda and Khabsa, Madian and Fang, Han and Ma, Hao}, - journal={arXiv preprint arXiv:2006.04768}, - year={2020} -} -``` diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/base_wrapper_dataset.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/base_wrapper_dataset.py deleted file mode 100644 index 134d398b47dc73c8807759188504aee205b3b34d..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/base_wrapper_dataset.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from torch.utils.data.dataloader import default_collate - -from . 
import FairseqDataset - - -class BaseWrapperDataset(FairseqDataset): - def __init__(self, dataset): - super().__init__() - self.dataset = dataset - - def __getitem__(self, index): - return self.dataset[index] - - def __len__(self): - return len(self.dataset) - - def collater(self, samples): - if hasattr(self.dataset, "collater"): - return self.dataset.collater(samples) - else: - return default_collate(samples) - - @property - def sizes(self): - return self.dataset.sizes - - def num_tokens(self, index): - return self.dataset.num_tokens(index) - - def size(self, index): - return self.dataset.size(index) - - def ordered_indices(self): - return self.dataset.ordered_indices() - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def attr(self, attr: str, index: int): - return self.dataset.attr(attr, index) - - def prefetch(self, indices): - self.dataset.prefetch(indices) - - def get_batch_shapes(self): - return self.dataset.get_batch_shapes() - - def batch_by_size( - self, - indices, - max_tokens=None, - max_sentences=None, - required_batch_size_multiple=1, - ): - return self.dataset.batch_by_size( - indices, - max_tokens=max_tokens, - max_sentences=max_sentences, - required_batch_size_multiple=required_batch_size_multiple, - ) - - def filter_indices_by_size(self, indices, max_sizes): - return self.dataset.filter_indices_by_size(indices, max_sizes) - - @property - def can_reuse_epoch_itr_across_epochs(self): - return self.dataset.can_reuse_epoch_itr_across_epochs - - def set_epoch(self, epoch): - super().set_epoch(epoch) - if hasattr(self.dataset, "set_epoch"): - self.dataset.set_epoch(epoch) diff --git a/spaces/stevez/b_demo_hf/README.md b/spaces/stevez/b_demo_hf/README.md deleted file mode 100644 index 9018092c08893e45ed0fddccd02024a38971a6b3..0000000000000000000000000000000000000000 --- a/spaces/stevez/b_demo_hf/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: B Demo Hf -emoji: 🌍 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.44.3 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - - diff --git a/spaces/stomexserde/gpt4-ui/Examples/A Small Balloon Over A Vast Mara Landscape.md b/spaces/stomexserde/gpt4-ui/Examples/A Small Balloon Over A Vast Mara Landscape.md deleted file mode 100644 index 9e6084e0da2ef6d80f5e5c3d64d7e757bc9d46fe..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/A Small Balloon Over A Vast Mara Landscape.md +++ /dev/null @@ -1,20 +0,0 @@ -
          -

          A small balloon over a vast Mara landscape: A dream come true

          -

          Have you ever dreamed of flying over the majestic savannah of the Masai Mara, witnessing the wildlife from a bird's eye view? If so, you are not alone. Many people have this wish on their bucket list, and some are lucky enough to make it come true.

          -

          One of them is Laura, a 35-year-old teacher from Italy, who recently visited Kenya with her husband and two children. She had always wanted to experience a balloon safari, and she finally got the chance to do it on her last day of the trip.

          -

          A small balloon over a vast Mara landscape


          Downloadhttps://urlgoal.com/2uI6G3



          -

          "It was amazing," she says. "We woke up very early in the morning and drove to the launch site. There were about 20 other people in our group, and we all got into a big basket attached to a colorful balloon. The pilot gave us some safety instructions and then we took off."

          -

          Laura says she felt a mix of excitement and fear as the balloon ascended into the sky. "It was very quiet and peaceful up there. We could see the sunrise over the horizon and the vast plains below us. The pilot pointed out different animals, like elephants, giraffes, zebras, wildebeests and lions. It was like watching a documentary, but in real life."

          -

          -

          The balloon flight lasted about an hour, and Laura says it was one of the most memorable experiences of her life. "I felt so free and happy. It was like being in a dream. I took many photos and videos, but nothing can capture the beauty and magic of that moment."

          -

          After landing, the group enjoyed a champagne breakfast in the bush, surrounded by nature. Laura says she felt very grateful for having this opportunity. "It was a once-in-a-lifetime adventure. I will never forget it."

          - -

          Laura says she chose to do the balloon safari in the Masai Mara because it is one of the most famous and diverse wildlife reserves in the world. "It is home to the Big Five (lion, leopard, elephant, rhino and buffalo), as well as many other species. It is also the scene of the Great Migration, when millions of wildebeests and zebras cross the Mara River in search of greener pastures. It is a spectacle of nature that I always wanted to see."

          -

          She says she booked the balloon safari through a reputable company that operates in the area. "They were very professional and friendly. They picked us up from our lodge and drove us to the launch site. They also provided us with warm clothes, blankets and hats, because it was quite chilly in the morning. They took care of everything."

          -

          Laura says she paid about $500 per person for the balloon safari, which included the flight, the breakfast, the transfer and a certificate of completion. She says it was worth every penny. "It was not cheap, but it was a unique experience that I will cherish forever. It was the highlight of our trip to Kenya."

          - -

          Laura says she was impressed by the skill and knowledge of the pilot, who guided the balloon smoothly and safely over the landscape. "He was very experienced and confident. He knew how to control the altitude and direction of the balloon, depending on the wind and the terrain. He also knew a lot about the animals and the environment. He answered all our questions and made us laugh with his jokes."

          -

          She says she also enjoyed the company of the other passengers, who came from different countries and backgrounds. "We had a great time together. We shared our stories and impressions of Kenya. We made some new friends. It was a very friendly and fun atmosphere."

          -

          Laura says she would recommend the balloon safari to anyone who loves nature and adventure. "It is a must-do activity if you visit the Masai Mara. It is a different way of seeing and appreciating the beauty of this place. It is an experience that you will never regret."

          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Crack __EXCLUSIVE__ Vcarve Pro 6.5.rar.md b/spaces/stomexserde/gpt4-ui/Examples/Crack __EXCLUSIVE__ Vcarve Pro 6.5.rar.md deleted file mode 100644 index 4de05b2f6b93f3ce739b2762817f0cfd14a817c9..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Crack __EXCLUSIVE__ Vcarve Pro 6.5.rar.md +++ /dev/null @@ -1,25 +0,0 @@ - -

          How to Use VCarve Pro 6.5 for CNC Routing

          -

          VCarve Pro 6.5 is a powerful but intuitive software solution for creating and cutting parts on a CNC router. It allows you to produce complex 2D patterns with profile, pocket, drill and inlay toolpaths, as well as create designs with v-carving textures and import and machine 3D models. In this article, we will show you how to use some of the key features of VCarve Pro 6.5 to make your CNC projects easier and more efficient.

          -


          Importing and Editing Vectors

          -

          VCarve Pro 6.5 can import 2D designs from other programs but also provides a full set of drawing and editing tools[^1^]. You can easily create vectors from scratch or import and edit bitmap images using the "Fit Vectors to Bitmap" function[^2^]. This function automatically traces the outline of an image and converts it into vector shapes that you can modify and use for toolpaths.
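To make the tracing step concrete, here is a rough sketch of what a bitmap-to-vector trace does in general, written with OpenCV purely as an illustration. VCarve's own "Fit Vectors to Bitmap" implementation is not public, so the library choice, file name, and threshold value below are all assumptions:

```python
# Illustrative only: a generic bitmap-to-vector trace with OpenCV,
# not VCarve Pro's actual "Fit Vectors to Bitmap" code.
import cv2

# Load the bitmap as grayscale and threshold it into a black/white mask
# ("logo.png" and the 127 cutoff are arbitrary example values).
image = cv2.imread("logo.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)

# Trace the outlines of the white regions as lists of (x, y) points.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Simplify each outline into a polyline, i.e. the kind of vector shape
# a CAM program could edit and turn into a toolpath.
for contour in contours:
    polyline = cv2.approxPolyDP(contour, 1.5, True)
    print(polyline.reshape(-1, 2))
```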

          - -

          Creating 2.5D Toolpaths

          -

          The toolpath options in VCarve Pro 6.5 cover all typical 2D routing operations such as profiling, pocketing, auto-inlays, drilling and thread milling as well as 2.5D strategies[^1^]. You can also create v-carving, prism carving, mouldings, textures and fluting toolpaths that add depth and detail to your designs[^3^]. The software automatically calculates the depth for the v-shaped bit to give sharp corners and intricate lines[^2^].
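The geometry behind that automatic depth calculation is worth spelling out: for a V-bit of a given included angle, the depth needed to cut a groove of a given top width is half the width divided by the tangent of half the angle. A minimal sketch of that relationship (standard V-bit trigonometry, not code taken from VCarve):

```python
import math

def vcarve_depth(groove_width_mm, bit_included_angle_deg):
    """Depth a V-bit must plunge to cut a groove of the given top width.

    Half the groove width over the tangent of half the included angle;
    a 90-degree bit therefore cuts a groove twice as wide as it is deep.
    """
    half_angle = math.radians(bit_included_angle_deg / 2)
    return (groove_width_mm / 2) / math.tan(half_angle)

print(vcarve_depth(6.0, 90.0))  # 3.0 mm deep for a 6 mm wide line
print(vcarve_depth(6.0, 60.0))  # ~5.2 mm deep with a narrower 60-degree bit
```

This is also why v-carving yields sharp corners: as a stroke narrows toward a point, the required depth falls toward zero and the tip of the bit traces the corner exactly.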

          - -

          Importing and Machining 3D Models

          -

          VCarve Pro 6.5 includes the ability to import multiple Vectric 3D models (V3M) as well as a single 3rd party model (STL, OBJ, 3DM or SKP), where they can be assembled to suit your design[^1^]. You can also create advanced 3D assemblies by importing multiple Vectric Clip Art 3D models (V3M) that can be used in your own projects or edited to create new variations[^4^]. For 3D models, you can create roughing and finishing toolpaths that remove material efficiently and accurately. You can also project 2D and 2.5D toolpaths onto the 3D surface to add more details or engraving[^1^].

          -

          - -

          Previewing and Saving Toolpaths

          -

          All toolpaths in VCarve Pro 6.5 can be previewed to show just how the part will look when it is actually cut[^1^]. This allows instant feedback to allow toolpaths to be further optimized. You can also use the "Estimated Machining Time" function to calculate how long it will take to cut your part[^2^]. This is useful for planning your production schedule and pricing your work. Once you are satisfied with your toolpaths, you can save them in various formats compatible with your CNC machine.
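As a sanity check on any such estimate, machining time is roughly the cutting distance divided by the feed rate, plus rapid (non-cutting) moves at the machine's rapid rate. A simplified sketch; a real estimator would presumably also account for acceleration, plunges, and tool changes, and the rates below are made-up examples:

```python
def estimate_machining_time(cut_length_mm, feed_mm_per_min,
                            rapid_length_mm=0.0, rapid_mm_per_min=5000.0):
    """Very rough machining time in minutes: cutting moves at the feed
    rate plus rapid moves at the machine's rapid traverse rate."""
    return cut_length_mm / feed_mm_per_min + rapid_length_mm / rapid_mm_per_min

# 12 m of cutting at 1500 mm/min plus 2 m of rapid moves:
print(estimate_machining_time(12000, 1500, rapid_length_mm=2000))  # 8.4 minutes
```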

          - -

          Pro Edition Features

          -

          The Pro edition of VCarve Pro 6.5 gives you unlimited job and toolpath size, true shape nesting and job set-up sheets, ideally suited to a production environment[^1^]. You can also use the "Machine Parts on Two Sides" feature to create double-sided projects in the same session[^4^]. This avoids the need to have two sessions, one for each side. Another pro feature is the "Machine Wrapped Rotary Parts" feature that allows you to create toolpaths for cylindrical parts that are wrapped around a rotary axis on the CNC machine[^4^].
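The wrapping idea itself is simple to state: the flat toolpath's Y axis is unrolled onto the cylinder's circumference, so one circumference of Y travel corresponds to one full turn of the rotary (A) axis. A small illustrative sketch of that mapping (generic rotary-wrapping math, not VCarve's implementation; the 25 mm radius is an example):

```python
import math

def wrap_y_to_rotation(y_mm, cylinder_radius_mm):
    """Map a flat-toolpath Y coordinate to an A-axis rotation in degrees.

    One circumference (2 * pi * r) of Y travel equals a full 360-degree turn.
    """
    circumference = 2 * math.pi * cylinder_radius_mm
    return (y_mm / circumference) * 360.0

radius = 25.0  # a 50 mm diameter blank
print(wrap_y_to_rotation(math.pi * radius, radius))  # half a circumference -> 180.0
```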

          - -

          Conclusion

          -

          VCarve Pro 6.5 is a versatile and powerful software package that enables you to create stunning designs and cut them on your CNC router. It has a user-friendly interface that makes it easy to learn and use, as well as a comprehensive set of features that cater to different types of CNC projects. Whether you are a hobbyist or a professional, VCarve Pro 6.5 can help you unleash your creativity and make your CNC routing more enjoyable and productive.

          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Glowstorm 3dm Mad Max Crack BEST 22.md b/spaces/stomexserde/gpt4-ui/Examples/Glowstorm 3dm Mad Max Crack BEST 22.md deleted file mode 100644 index 932c0eeac55feb3a73fc68b4176fdb7a3d3e5207..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Glowstorm 3dm Mad Max Crack BEST 22.md +++ /dev/null @@ -1,73 +0,0 @@ -
          -

          Glowstorm 3DM Mad Max Crack 22: What You Need to Know

          -

          If you are a fan of the Mad Max franchise, you might be interested in playing Mad Max, a video game developed by Avalanche Studios and published by Warner Bros. Interactive Entertainment in 2015. The game is based on the post-apocalyptic film series and features an open-world environment, vehicular combat, and a third-person perspective. However, if you don't want to pay for the game or you want to enjoy some extra features, you might be tempted to use Glowstorm 3DM Mad Max Crack 22, a software that allows you to play the game for free and with some added bonuses. But what is this crack exactly, how does it work, and is it safe and legal to use? In this article, we will answer these questions and more, so keep reading.

          -


          What is Mad Max?

          -

          Mad Max is a video game that was released on September 1, 2015 for Microsoft Windows, PlayStation 4, and Xbox One. The game is set in a post-apocalyptic wasteland where water and gasoline are scarce and violence is rampant. The player controls Max Rockatansky, a former police officer who has lost his family and his car, and who seeks revenge against a gang of raiders led by Scabrous Scrotus, a son of Immortan Joe from the film Mad Max: Fury Road. The game features a variety of missions, side quests, collectibles, upgrades, and challenges that can be completed in any order. The game also emphasizes vehicular combat, as the player can customize their car, called the Magnum Opus, with different weapons, armor, engines, and other parts. The game received generally positive reviews from critics and fans alike, who praised its graphics, gameplay, soundtrack, and atmosphere.

          -

          What is Glowstorm 3DM Mad Max Crack 22?

          -

          Glowstorm 3DM Mad Max Crack 22 is a software that allows you to play Mad Max without buying or activating it. The software was created by Glowstorm, a group of hackers who are part of 3DM, a Chinese cracking group that is known for cracking many popular games such as FIFA 15, GTA V, Metal Gear Solid V: The Phantom Pain, and more. The software is also known as 3DMGAME-Mad.Max.Crack.V4.Incl.DLCs-3DM or simply Mad Max Crack V4. The software was released on September 9, 2015, eight days after the game's official release. The software is an update of previous versions of the crack (V1, V2, and V3), which had some bugs and compatibility issues. The software fixes some of these bugs and adds some DLCs (downloadable content) that were not available in the original game. Some of these DLCs are:

          -
            -
          • The Ripper Pack: This pack includes a new body for the Magnum Opus, which has a powerful V8 engine, a ramming grill, and a tuned suspension.
          • -
          • The Road Warrior Pack: This pack includes a new hood ornament, a new shotgun, and a new leather jacket for Max.
          • -
          • The Grizzlegrinda Hood Ornament: This pack includes a new hood ornament that resembles a bear's head.
          • -
          • The Pentacal GulpCut Hood Ornament: This pack includes a new hood ornament that resembles a skull with five spikes.
          • -
          • The ThirstCutter Car Body: This pack includes a new body for the Magnum Opus, which has a sleek design and a large fuel tank.
          • -
          -

          Glowstorm 3DM Mad Max Crack 22 claims to offer a better gaming experience than the original game, as it allows the player to access more content, play offline, and avoid DRM (digital rights management) restrictions. However, the software also has some drawbacks, such as potential bugs, viruses, crashes, and legal issues.

          -

          How to Download and Install Glowstorm 3DM Mad Max Crack 22?

          -

          If you want to try Glowstorm 3DM Mad Max Crack 22, you will need to follow these steps:

          -
            -
          1. Download the crack from a reliable source. You can find the crack on various torrent sites or file-sharing platforms, but be careful of fake or malicious links. You can also use this link to download the crack directly from 3DM's website.
          2. -
          3. Extract the crack files using a program like WinRAR or 7-Zip. You should see a folder named 3DMGAME-Mad.Max.Crack.V4.Incl.DLCs-3DM or something similar.
          4. -
          5. Copy the crack files and paste them into the folder where you have installed Mad Max. You will need to overwrite some of the original files, so make sure you have a backup of them in case something goes wrong.
          6. -
          7. Run the game as administrator. You should see a launcher window that lets you choose your language and other settings. Click on Start Game and enjoy!
          8. -
          -

          Before you download and install the crack, you should also check if your system meets the minimum requirements for the game and the crack. Here is a table that shows the system requirements:

          -

| Component | Minimum | Recommended |
| --- | --- | --- |
| Operating System | Windows Vista/7/8/10 (64-bit) | Windows 7/8/10 (64-bit) |
| Processor | Intel Core i5-650 3.2 GHz or AMD Phenom II X4 965 3.4 GHz | Intel Core i7-3770 3.4 GHz or AMD FX-8350 4.0 GHz |
| Memory | 6 GB RAM | 8 GB RAM |
| Graphics | NVIDIA GeForce GTX 660 Ti (2 GB) or AMD Radeon HD 7870 (2 GB) | NVIDIA GeForce GTX 760 (3 GB) or AMD Radeon HD 7970 (3 GB) |
| Storage | 32 GB available space | 32 GB available space |
| Sound Card | DirectX compatible | DirectX compatible |
| Glowstorm 3DM Mad Max Crack 22 | No additional requirements | No additional requirements |
          -

          Is Glowstorm 3DM Mad Max Crack 22 Safe and Legal?

          -

          Glowstorm 3DM Mad Max Crack 22 is not safe or legal to use. Here are some of the reasons why:

          -
            -
          • The crack may contain viruses, malware, spyware, or other harmful programs that can damage your computer or steal your personal information. Some users have reported that the crack caused their game to crash, freeze, lag, or display errors. Some users have also reported that the crack installed unwanted software or changed their browser settings without their consent.
          • -
          • The crack may violate the terms of service and the end-user license agreement of Mad Max and Warner Bros. Interactive Entertainment. By using the crack, you are bypassing the DRM protection of the game and playing it without paying for it. This is considered piracy and theft, and it can result in legal action against you. You may also lose access to online features, updates, patches, support, and warranty of the game.
          • -
          • The crack may be unethical and unfair to the developers and the publishers of Mad Max and Warner Bros. Interactive Entertainment. By using the crack, you are depriving them of their rightful income and recognition for their hard work and creativity. You are also disrespecting the intellectual property rights and the artistic vision of the creators of the game. You may also ruin the gaming experience and the reputation of the game for yourself and other players.
          • -
          -

          Therefore, we strongly advise you not to use Glowstorm 3DM Mad Max Crack 22 or any other crack for Mad Max or any other game. If you want to play Mad Max, you should buy the original game from a legitimate source and support the developers and the publishers. You will also enjoy a better, safer, and more satisfying gaming experience.

          -

          Conclusion

          -

          Glowstorm 3DM Mad Max Crack 22 is a software that allows you to play Mad Max for free and with some added features. However, the software is not safe or legal to use, and it may cause various problems for your computer, your game, and yourself. The software may also be unethical and unfair to the developers and the publishers of Mad Max and Warner Bros. Interactive Entertainment. Therefore, we recommend you to avoid using the crack and to buy the original game instead. You will not only support the creators of the game, but also enjoy a better, safer, and more satisfying gaming experience.

          -

          We hope this article has been helpful and informative for you. If you have any questions or comments about Glowstorm 3DM Mad Max Crack 22 or Mad Max, feel free to share them with us in the comment section below. We would love to hear from you!

          -

          FAQs

          -

          Here are some of the frequently asked questions and their answers about Glowstorm 3DM Mad Max Crack 22:

          -
            -
          1. Q: Where can I buy Mad Max?
          2. -
          3. A: You can buy Mad Max from various online platforms such as Steam, Origin, Epic Games Store, GOG.com, Humble Bundle, or Green Man Gaming. You can also buy a physical copy of the game from various retailers such as Amazon, Walmart, Best Buy, or GameStop.
          4. -
          5. Q: How much does Mad Max cost?
          6. -
          7. A: The price of Mad Max may vary depending on the platform, the region, the edition, and the discounts. However, as of June 2023, the average price of Mad Max is around $20 USD.
          8. -
          9. Q: What are some of the alternatives to Glowstorm 3DM Mad Max Crack 22?
          10. -
          11. A: Some of the alternatives to Glowstorm 3DM Mad Max Crack 22 are:
          12. -
              -
            • CPY Mad Max Crack: This is another crack for Mad Max that was released by CPY, an Italian cracking group that is known for cracking many games such as Assassin's Creed Origins, Far Cry 5, Resident Evil 2 Remake, and more. This crack was released on October 20, 2015, and it claims to fix some of the bugs and issues that were present in Glowstorm 3DM Mad Max Crack 22.
            • -
            • Mad Max Repack: This is a repack version of Mad Max that was created by FitGirl, a popular repacker who is known for repacking many games such as Red Dead Redemption 2, Cyberpunk 2077, Horizon Zero Dawn, and more. This repack was released on September 1, 2015, and it claims to reduce the size of the game from 32 GB to 4 GB without losing any quality or content.
            • -
            • Mad Max Demo: This is a demo version of Mad Max that was released by Warner Bros. Interactive Entertainment on September 29, 2015. The demo allows you to play a portion of the game for free and test its performance on your system.
            • -
            -
          13. Q: What are some of the similar games to Mad Max?
          14. -
          15. A: Some of the similar games to Mad Max are:
          16. -
              -
            • Rage: This is a first-person shooter game that was developed by id Software and published by Bethesda Softworks in 2011. The game is set in a post-apocalyptic world where an asteroid has hit Earth and caused a global catastrophe. The player controls a survivor who has to fight against various enemies such as bandits, mutants, and authority forces.
            • -
            • Fallout: This is a series of role-playing games that was created by Interplay Entertainment in 1997 and later developed by Bethesda Softworks since 2008. The series is set in a post-apocalyptic world where a nuclear war has devastated most of civilization. The player controls a character who has to explore the wasteland, fight against various enemies, and make choices that affect the story and the world.
            • -
            • Borderlands: This is a series of action role-playing games that was created by Gearbox Software and published by 2K Games since 2009. The series is set on a planet called Pandora, where a mysterious alien vault attracts many fortune seekers and criminals. The player controls a character who has to loot, shoot, and customize their weapons and skills.
            • -
            -
          17. Q: How can I contact Glowstorm or 3DM?
          18. -
          19. A: You can contact Glowstorm or 3DM through their official website, their social media accounts, or their email address. However, be aware that they may not respond to your messages or requests, as they are very busy and secretive. You can also join their online community and forums, where you can interact with other users and fans of their cracks.
          20. -

          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/K7 Total Security Serial Number And Vendor Id.md b/spaces/stomexserde/gpt4-ui/Examples/K7 Total Security Serial Number And Vendor Id.md deleted file mode 100644 index c6a8addc9e24e96a1b37a54c4df8c95f6c6aff04..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/K7 Total Security Serial Number And Vendor Id.md +++ /dev/null @@ -1,24 +0,0 @@ - -

          How to Find and Activate K7 Total Security Serial Number and Vendor ID

          -

          K7 Total Security is a premium antivirus software that offers comprehensive protection for your devices, data, and privacy. It can protect you from malware, spyware, ransomware, phishing, identity theft, and other online threats. But how do you find and activate your K7 Total Security serial number and vendor ID?

          -


          A serial number and a vendor ID are two unique codes that identify your product and allow you to activate it online. You need both of them to register your product and enjoy its full features. Here are some ways to find and activate your K7 Total Security serial number and vendor ID:

          -
            -
          • If you bought the product online, you should receive an email with the serial number and the vendor ID. Check your inbox and spam folder for the email from K7 Computing Private Limited[^5^]. Copy and paste the codes into the activation window of the product.
          • -
          • If you bought the product offline, you should find the serial number and the vendor ID on the back of the CD case or on a sticker inside the box. Peel off the sticker and enter the codes into the activation window of the product.
          • -
          • If you have already installed the product on your PC or laptop, you can check the serial number and the vendor ID by opening the product interface and clicking on Support > About. You will see the codes under License Information. You can also click on Support > License Details to see more information about your product activation status.
          • -
          -

          Once you have entered the serial number and the vendor ID, click on Activate to complete the activation process. You will need an internet connection to activate your product online. You will also need to enter your name, email address, and phone number to register your product.

          -

          After activating your product, you can enjoy its advanced features such as antivirus, firewall, parental control, secure web banking, remote control, and more. You can also update your product regularly to get the latest virus definitions and security patches.

          -

          -

          K7 Total Security is a reliable and user-friendly antivirus software that can keep your devices, data, and privacy safe from various online threats. To get started with it, you need to find and activate your K7 Total Security serial number and vendor ID. You can follow the steps above to do so easily.

          Here are some additional tips and tricks to use K7 Total Security effectively:

          -
            -
          • To scan your device for viruses and other threats, you can choose from various scan modes such as Quick Scan, Complete Scan, Custom Scan, or Rootkit Scan. You can also schedule scans to run automatically at a specific time or frequency.
          • -
          • To manage your firewall settings and network connections, you can click on Settings > Firewall. You can also enable or disable the Stealth Mode, which makes your device invisible to hackers and intruders on the internet.
          • -
          • To control and monitor your children's online activities, you can click on Settings > Parental Control. You can set up profiles for each child and assign them different access levels and time limits. You can also block or allow specific websites, categories, applications, or keywords.
          • -
          • To secure your online transactions and banking details, you can click on Settings > Secure Web Banking. You can add your trusted websites to the list and launch them in a secure browser that prevents phishing and keylogging attacks.
          • -
          • To remotely access and control your device from another device, you can click on Settings > Remote Control. You can create a K7 account and link your devices to it. You can then log in to your account from any web browser and perform various actions such as scan, update, lock, wipe, or locate your device.
          • -
          -

          K7 Total Security is a powerful and versatile antivirus software that can protect your devices from various online threats. By finding and activating your K7 Total Security serial number and vendor ID, you can unlock its full potential and enjoy its advanced features. You can also follow the tips and tricks above to use it effectively.

          -
          -
          \ No newline at end of file diff --git a/spaces/stratussox/yolov5_inference/utils/segment/metrics.py b/spaces/stratussox/yolov5_inference/utils/segment/metrics.py deleted file mode 100644 index b09ce23fb9e398ab654fce676d23f74d81cc5c57..0000000000000000000000000000000000000000 --- a/spaces/stratussox/yolov5_inference/utils/segment/metrics.py +++ /dev/null @@ -1,210 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Model validation metrics -""" - -import numpy as np - -from ..metrics import ap_per_class - - -def fitness(x): - # Model fitness as a weighted combination of metrics - w = [0.0, 0.0, 0.1, 0.9, 0.0, 0.0, 0.1, 0.9] - return (x[:, :8] * w).sum(1) - - -def ap_per_class_box_and_mask( - tp_m, - tp_b, - conf, - pred_cls, - target_cls, - plot=False, - save_dir=".", - names=(), -): - """ - Args: - tp_b: tp of boxes. - tp_m: tp of masks. - other arguments see `func: ap_per_class`. - """ - results_boxes = ap_per_class(tp_b, - conf, - pred_cls, - target_cls, - plot=plot, - save_dir=save_dir, - names=names, - prefix="Box")[2:] - results_masks = ap_per_class(tp_m, - conf, - pred_cls, - target_cls, - plot=plot, - save_dir=save_dir, - names=names, - prefix="Mask")[2:] - - results = { - "boxes": { - "p": results_boxes[0], - "r": results_boxes[1], - "ap": results_boxes[3], - "f1": results_boxes[2], - "ap_class": results_boxes[4]}, - "masks": { - "p": results_masks[0], - "r": results_masks[1], - "ap": results_masks[3], - "f1": results_masks[2], - "ap_class": results_masks[4]}} - return results - - -class Metric: - - def __init__(self) -> None: - self.p = [] # (nc, ) - self.r = [] # (nc, ) - self.f1 = [] # (nc, ) - self.all_ap = [] # (nc, 10) - self.ap_class_index = [] # (nc, ) - - @property - def ap50(self): - """AP@0.5 of all classes. - Return: - (nc, ) or []. - """ - return self.all_ap[:, 0] if len(self.all_ap) else [] - - @property - def ap(self): - """AP@0.5:0.95 - Return: - (nc, ) or []. - """ - return self.all_ap.mean(1) if len(self.all_ap) else [] - - @property - def mp(self): - """mean precision of all classes. - Return: - float. - """ - return self.p.mean() if len(self.p) else 0.0 - - @property - def mr(self): - """mean recall of all classes. - Return: - float. - """ - return self.r.mean() if len(self.r) else 0.0 - - @property - def map50(self): - """Mean AP@0.5 of all classes. - Return: - float. - """ - return self.all_ap[:, 0].mean() if len(self.all_ap) else 0.0 - - @property - def map(self): - """Mean AP@0.5:0.95 of all classes. - Return: - float. 
- """ - return self.all_ap.mean() if len(self.all_ap) else 0.0 - - def mean_results(self): - """Mean of results, return mp, mr, map50, map""" - return (self.mp, self.mr, self.map50, self.map) - - def class_result(self, i): - """class-aware result, return p[i], r[i], ap50[i], ap[i]""" - return (self.p[i], self.r[i], self.ap50[i], self.ap[i]) - - def get_maps(self, nc): - maps = np.zeros(nc) + self.map - for i, c in enumerate(self.ap_class_index): - maps[c] = self.ap[i] - return maps - - def update(self, results): - """ - Args: - results: tuple(p, r, ap, f1, ap_class) - """ - p, r, all_ap, f1, ap_class_index = results - self.p = p - self.r = r - self.all_ap = all_ap - self.f1 = f1 - self.ap_class_index = ap_class_index - - -class Metrics: - """Metric for boxes and masks.""" - - def __init__(self) -> None: - self.metric_box = Metric() - self.metric_mask = Metric() - - def update(self, results): - """ - Args: - results: Dict{'boxes': Dict{}, 'masks': Dict{}} - """ - self.metric_box.update(list(results["boxes"].values())) - self.metric_mask.update(list(results["masks"].values())) - - def mean_results(self): - return self.metric_box.mean_results() + self.metric_mask.mean_results() - - def class_result(self, i): - return self.metric_box.class_result(i) + self.metric_mask.class_result(i) - - def get_maps(self, nc): - return self.metric_box.get_maps(nc) + self.metric_mask.get_maps(nc) - - @property - def ap_class_index(self): - # boxes and masks have the same ap_class_index - return self.metric_box.ap_class_index - - -KEYS = [ - "train/box_loss", - "train/seg_loss", # train loss - "train/obj_loss", - "train/cls_loss", - "metrics/precision(B)", - "metrics/recall(B)", - "metrics/mAP_0.5(B)", - "metrics/mAP_0.5:0.95(B)", # metrics - "metrics/precision(M)", - "metrics/recall(M)", - "metrics/mAP_0.5(M)", - "metrics/mAP_0.5:0.95(M)", # metrics - "val/box_loss", - "val/seg_loss", # val loss - "val/obj_loss", - "val/cls_loss", - "x/lr0", - "x/lr1", - "x/lr2",] - -BEST_KEYS = [ - "best/epoch", - "best/precision(B)", - "best/recall(B)", - "best/mAP_0.5(B)", - "best/mAP_0.5:0.95(B)", - "best/precision(M)", - "best/recall(M)", - "best/mAP_0.5(M)", - "best/mAP_0.5:0.95(M)",] diff --git a/spaces/sub314xxl/MetaGPT/metagpt/actions/write_test.py b/spaces/sub314xxl/MetaGPT/metagpt/actions/write_test.py deleted file mode 100644 index 5e50fdb553359b84ef9016223b14d69a1db16ae3..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/actions/write_test.py +++ /dev/null @@ -1,49 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 17:45 -@Author : alexanderwu -@File : write_test.py -""" -from metagpt.actions.action import Action -from metagpt.utils.common import CodeParser - -PROMPT_TEMPLATE = """ -NOTICE -1. Role: You are a QA engineer; the main goal is to design, develop, and execute PEP8 compliant, well-structured, maintainable test cases and scripts for Python 3.9. Your focus should be on ensuring the product quality of the entire project through systematic testing. -2. Requirement: Based on the context, develop a comprehensive test suite that adequately covers all relevant aspects of the code file under review. Your test suite will be part of the overall project QA, so please develop complete, robust, and reusable test cases. -3. Attention1: Use '##' to split sections, not '#', and '## ' SHOULD WRITE BEFORE the test case or script. -4. Attention2: If there are any settings in your tests, ALWAYS SET A DEFAULT VALUE, ALWAYS USE STRONG TYPE AND EXPLICIT VARIABLE. -5. 
Attention3: YOU MUST FOLLOW "Data structures and interface definitions". DO NOT CHANGE ANY DESIGN. Make sure your tests respect the existing design and ensure its validity. -6. Think before writing: What should be tested and validated in this document? What edge cases could exist? What might fail? -7. CAREFULLY CHECK THAT YOU DON'T MISS ANY NECESSARY TEST CASES/SCRIPTS IN THIS FILE. -Attention: Use '##' to split sections, not '#', and '## ' SHOULD WRITE BEFORE the test case or script and triple quotes. ------ -## Given the following code, please write appropriate test cases using Python's unittest framework to verify the correctness and robustness of this code: -```python -{code_to_test} -``` -Note that the code to test is at {source_file_path}, we will put your test code at {workspace}/tests/{test_file_name}, and run your test code from {workspace}, -you should correctly import the necessary classes based on these file locations! -## {test_file_name}: Write test code with triple quoto. Do your best to implement THIS ONLY ONE FILE. -""" - - -class WriteTest(Action): - def __init__(self, name="WriteTest", context=None, llm=None): - super().__init__(name, context, llm) - - async def write_code(self, prompt): - code_rsp = await self._aask(prompt) - code = CodeParser.parse_code(block="", text=code_rsp) - return code - - async def run(self, code_to_test, test_file_name, source_file_path, workspace): - prompt = PROMPT_TEMPLATE.format( - code_to_test=code_to_test, - test_file_name=test_file_name, - source_file_path=source_file_path, - workspace=workspace, - ) - code = await self.write_code(prompt) - return code diff --git a/spaces/supertori/files/stable-diffusion-webui/javascript/imageviewer.js b/spaces/supertori/files/stable-diffusion-webui/javascript/imageviewer.js deleted file mode 100644 index aac2ee82383881bd9d59a264d2cd2c823c2187c4..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/javascript/imageviewer.js +++ /dev/null @@ -1,285 +0,0 @@ -// A full size 'lightbox' preview modal shown when left clicking on gallery previews -function closeModal() { - gradioApp().getElementById("lightboxModal").style.display = "none"; -} - -function showModal(event) { - const source = event.target || event.srcElement; - const modalImage = gradioApp().getElementById("modalImage") - const lb = gradioApp().getElementById("lightboxModal") - modalImage.src = source.src - if (modalImage.style.display === 'none') { - lb.style.setProperty('background-image', 'url(' + source.src + ')'); - } - lb.style.display = "block"; - lb.focus() - - const tabTxt2Img = gradioApp().getElementById("tab_txt2img") - const tabImg2Img = gradioApp().getElementById("tab_img2img") - // show the save button in modal only on txt2img or img2img tabs - if (tabTxt2Img.style.display != "none" || tabImg2Img.style.display != "none") { - gradioApp().getElementById("modal_save").style.display = "inline" - } else { - gradioApp().getElementById("modal_save").style.display = "none" - } - event.stopPropagation() -} - -function negmod(n, m) { - return ((n % m) + m) % m; -} - -function updateOnBackgroundChange() { - const modalImage = gradioApp().getElementById("modalImage") - if (modalImage && modalImage.offsetParent) { - let allcurrentButtons = gradioApp().querySelectorAll(".gallery-item.transition-all.\\!ring-2") - let currentButton = null - allcurrentButtons.forEach(function(elem) { - if (elem.parentElement.offsetParent) { - currentButton = elem; - } - }) - - if (currentButton?.children?.length > 0 && 
modalImage.src != currentButton.children[0].src) { - modalImage.src = currentButton.children[0].src; - if (modalImage.style.display === 'none') { - modal.style.setProperty('background-image', `url(${modalImage.src})`) - } - } - } -} - -function modalImageSwitch(offset) { - var allgalleryButtons = gradioApp().querySelectorAll(".gallery-item.transition-all") - var galleryButtons = [] - allgalleryButtons.forEach(function(elem) { - if (elem.parentElement.offsetParent) { - galleryButtons.push(elem); - } - }) - - if (galleryButtons.length > 1) { - var allcurrentButtons = gradioApp().querySelectorAll(".gallery-item.transition-all.\\!ring-2") - var currentButton = null - allcurrentButtons.forEach(function(elem) { - if (elem.parentElement.offsetParent) { - currentButton = elem; - } - }) - - var result = -1 - galleryButtons.forEach(function(v, i) { - if (v == currentButton) { - result = i - } - }) - - if (result != -1) { - nextButton = galleryButtons[negmod((result + offset), galleryButtons.length)] - nextButton.click() - const modalImage = gradioApp().getElementById("modalImage"); - const modal = gradioApp().getElementById("lightboxModal"); - modalImage.src = nextButton.children[0].src; - if (modalImage.style.display === 'none') { - modal.style.setProperty('background-image', `url(${modalImage.src})`) - } - setTimeout(function() { - modal.focus() - }, 10) - } - } -} - -function saveImage(){ - const tabTxt2Img = gradioApp().getElementById("tab_txt2img") - const tabImg2Img = gradioApp().getElementById("tab_img2img") - const saveTxt2Img = "save_txt2img" - const saveImg2Img = "save_img2img" - if (tabTxt2Img.style.display != "none") { - gradioApp().getElementById(saveTxt2Img).click() - } else if (tabImg2Img.style.display != "none") { - gradioApp().getElementById(saveImg2Img).click() - } else { - console.error("missing implementation for saving modal of this type") - } -} - -function modalSaveImage(event) { - saveImage() - event.stopPropagation() -} - -function modalNextImage(event) { - modalImageSwitch(1) - event.stopPropagation() -} - -function modalPrevImage(event) { - modalImageSwitch(-1) - event.stopPropagation() -} - -function modalKeyHandler(event) { - switch (event.key) { - case "s": - saveImage() - break; - case "ArrowLeft": - modalPrevImage(event) - break; - case "ArrowRight": - modalNextImage(event) - break; - case "Escape": - closeModal(); - break; - } -} - -function showGalleryImage() { - setTimeout(function() { - fullImg_preview = gradioApp().querySelectorAll('img.w-full.object-contain') - - if (fullImg_preview != null) { - fullImg_preview.forEach(function function_name(e) { - if (e.dataset.modded) - return; - e.dataset.modded = true; - if(e && e.parentElement.tagName == 'DIV'){ - e.style.cursor='pointer' - e.style.userSelect='none' - - var isFirefox = isFirefox = navigator.userAgent.toLowerCase().indexOf('firefox') > -1 - - // For Firefox, listening on click first switched to next image then shows the lightbox. - // If you know how to fix this without switching to mousedown event, please. - // For other browsers the event is click to make it possiblr to drag picture. - var event = isFirefox ? 
'mousedown' : 'click' - - e.addEventListener(event, function (evt) { - if(!opts.js_modal_lightbox || evt.button != 0) return; - modalZoomSet(gradioApp().getElementById('modalImage'), opts.js_modal_lightbox_initially_zoomed) - evt.preventDefault() - showModal(evt) - }, true); - } - }); - } - - }, 100); -} - -function modalZoomSet(modalImage, enable) { - if (enable) { - modalImage.classList.add('modalImageFullscreen'); - } else { - modalImage.classList.remove('modalImageFullscreen'); - } -} - -function modalZoomToggle(event) { - modalImage = gradioApp().getElementById("modalImage"); - modalZoomSet(modalImage, !modalImage.classList.contains('modalImageFullscreen')) - event.stopPropagation() -} - -function modalTileImageToggle(event) { - const modalImage = gradioApp().getElementById("modalImage"); - const modal = gradioApp().getElementById("lightboxModal"); - const isTiling = modalImage.style.display === 'none'; - if (isTiling) { - modalImage.style.display = 'block'; - modal.style.setProperty('background-image', 'none') - } else { - modalImage.style.display = 'none'; - modal.style.setProperty('background-image', `url(${modalImage.src})`) - } - - event.stopPropagation() -} - -function galleryImageHandler(e) { - if (e && e.parentElement.tagName == 'BUTTON') { - e.onclick = showGalleryImage; - } -} - -onUiUpdate(function() { - fullImg_preview = gradioApp().querySelectorAll('img.w-full') - if (fullImg_preview != null) { - fullImg_preview.forEach(galleryImageHandler); - } - updateOnBackgroundChange(); -}) - -document.addEventListener("DOMContentLoaded", function() { - const modalFragment = document.createDocumentFragment(); - const modal = document.createElement('div') - modal.onclick = closeModal; - modal.id = "lightboxModal"; - modal.tabIndex = 0 - modal.addEventListener('keydown', modalKeyHandler, true) - - const modalControls = document.createElement('div') - modalControls.className = 'modalControls gradio-container'; - modal.append(modalControls); - - const modalZoom = document.createElement('span') - modalZoom.className = 'modalZoom cursor'; - modalZoom.innerHTML = '⤡' - modalZoom.addEventListener('click', modalZoomToggle, true) - modalZoom.title = "Toggle zoomed view"; - modalControls.appendChild(modalZoom) - - const modalTileImage = document.createElement('span') - modalTileImage.className = 'modalTileImage cursor'; - modalTileImage.innerHTML = '⊞' - modalTileImage.addEventListener('click', modalTileImageToggle, true) - modalTileImage.title = "Preview tiling"; - modalControls.appendChild(modalTileImage) - - const modalSave = document.createElement("span") - modalSave.className = "modalSave cursor" - modalSave.id = "modal_save" - modalSave.innerHTML = "🖫" - modalSave.addEventListener("click", modalSaveImage, true) - modalSave.title = "Save Image(s)" - modalControls.appendChild(modalSave) - - const modalClose = document.createElement('span') - modalClose.className = 'modalClose cursor'; - modalClose.innerHTML = '×' - modalClose.onclick = closeModal; - modalClose.title = "Close image viewer"; - modalControls.appendChild(modalClose) - - const modalImage = document.createElement('img') - modalImage.id = 'modalImage'; - modalImage.onclick = closeModal; - modalImage.tabIndex = 0 - modalImage.addEventListener('keydown', modalKeyHandler, true) - modal.appendChild(modalImage) - - const modalPrev = document.createElement('a') - modalPrev.className = 'modalPrev'; - modalPrev.innerHTML = '❮' - modalPrev.tabIndex = 0 - modalPrev.addEventListener('click', modalPrevImage, true); - 
modalPrev.addEventListener('keydown', modalKeyHandler, true) - modal.appendChild(modalPrev) - - const modalNext = document.createElement('a') - modalNext.className = 'modalNext'; - modalNext.innerHTML = '❯' - modalNext.tabIndex = 0 - modalNext.addEventListener('click', modalNextImage, true); - modalNext.addEventListener('keydown', modalKeyHandler, true) - - modal.appendChild(modalNext) - - - gradioApp().getRootNode().appendChild(modal) - - document.body.appendChild(modalFragment); - -}); diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ace Dental Practice Management Software 10.0 LINK Crack.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ace Dental Practice Management Software 10.0 LINK Crack.md deleted file mode 100644 index 9e8da745ea75c608f8ff7c7060aece3e033e44cb..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ace Dental Practice Management Software 10.0 LINK Crack.md +++ /dev/null @@ -1,7 +0,0 @@ -

          ace dental practice management software 10.0 crack


Download File: https://cinurl.com/2uEYKc



- -October 24, 2020 (B.E. 2563) - Amazon.com: Crack the DAT Ace Bundle Package for the Dental Clinic Entrance Exam... Sharpen, hone, and improve your time management, confidence, and... Ever since Amazon started, everyone wanted to own it. But it always had owners who wanted to control it. -So, when Amazon came along, the DAT group was created. It has been the most trusted system for years, but still they felt the...
          -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Airy Youtube Activation Code !!HOT!!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Airy Youtube Activation Code !!HOT!!.md deleted file mode 100644 index 00d51e3fcf8d2f70aec78d3ac5697934e861a10d..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Airy Youtube Activation Code !!HOT!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Airy Youtube Activation Code


Download Zip: https://cinurl.com/2uEYQs



          -
-Download Airy Full FREE, the best YouTube downloader for Mac. A great way to learn how to get Airy - a simple yet powerful Mac YouTube ...
          -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ancient Warfare 3 Alpha 22 Game __HOT__.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ancient Warfare 3 Alpha 22 Game __HOT__.md deleted file mode 100644 index a73a832f9b1edacfb72527acc9522be4c18666fa..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ancient Warfare 3 Alpha 22 Game __HOT__.md +++ /dev/null @@ -1,11 +0,0 @@ -
          -

          Ancient Warfare 3 Alpha 22: A Sandbox Battle Simulator with Endless Possibilities

          -

          If you are looking for a game that lets you create your own scenarios and battles with a variety of units, weapons, and settings, then you might want to check out Ancient Warfare 3. This game is a sandbox battle simulator that allows you to explore and compare content ranging from the stone age to the future. You can choose from different game modes like deathmatch, conquest, king of the hill, zombie survival and many more. You can also customize your own units with different clothing, armor, and equipment. You can even use the in-game editors to create your own maps and share them with other players on Steam Workshop.

          -

          Ancient Warfare 3 is currently in Early Access on Steam[^1^], which means that the game is not complete and may change further in development. The developer, Jannik Nickel, plans to improve the performance, add more content, and polish the gameplay based on community feedback[^1^]. The game has received very positive reviews from players who praised its creativity, variety, and fun factor[^1^]. The game also has a website[^2^] where you can find more information and screenshots.

          -


          One of the latest updates of Ancient Warfare 3 is Alpha 22, which was released on December 18th, 2022. This update added new features such as a new biome (snowy mountains), new objects (snowman, igloo, etc.), new weapons (snowball launcher, ice sword, etc.), new units (snowmen, penguins, etc.), and new settings (snowfall intensity, wind direction, etc.)[^2^]. The update also fixed some bugs and improved some aspects of the game[^2^]. You can watch some gameplay videos of Ancient Warfare 3 Alpha 22 on SoundCloud[^3^] [^4^].

          -

          If you are interested in Ancient Warfare 3 Alpha 22, you can buy it on Steam for $14.99 USD or your regional equivalent[^1^]. You can also follow the developer on Twitter (@JannikNickel) or join the Discord server (https://discord.gg/ancientwarfare) to stay updated on the game's progress and interact with other players. Ancient Warfare 3 Alpha 22 is a game that offers endless possibilities for creating and playing your own battles. Whether you want to fight zombies with snowballs, conquer ancient kingdoms with futuristic weapons, or just have fun with your own imagination, Ancient Warfare 3 Alpha 22 is a game worth trying.

          So, how does Ancient Warfare 3 Alpha 22 play? Well, the game is very easy to get into and offers a lot of freedom and creativity. You can start by choosing one of the many game modes or create your own with the custom battle editor. You can then select the biome, environment, objects, units, weapons, and settings for your scenario. You can also use the unit creator to make your own custom units with different appearance and equipment. You can even use the scripting system to add logic and events to your scenarios. The game also supports Steam Workshop integration, so you can download and upload your creations with other players.

          -

          The gameplay itself is very fun and chaotic. You can control your units directly or let them fight on their own. You can switch between first-person, third-person, and top-down views. You can also use different weapons and items to fight your enemies or support your allies. The game features a variety of content from different eras, such as swords, spears, bows, guns, rockets, tanks, helicopters, zombies, aliens, and more. The game also has a physics system that allows for realistic destruction and ragdoll effects. The game is not very realistic or balanced, but that's part of its charm and humor.

          -

          However, the game is not without its flaws. As an Early Access game, Ancient Warfare 3 Alpha 22 still has some bugs and glitches that can affect the gameplay. Some examples are units getting stuck in objects, weapons not working properly, animations being weird, and performance issues. The game also lacks some features that could improve the experience, such as multiplayer mode, tutorial mode, sound effects, music, and more polish. The developer is aware of these issues and plans to fix them in future updates based on community feedback[^1^]. The game also has a Discord server where you can report bugs and suggest ideas.

          -
          -
          \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Descargar Ivan Noble Discografia Mediafire.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Descargar Ivan Noble Discografia Mediafire.md deleted file mode 100644 index 55a786f2428810e6c986ac5cb5dbcdb6de6bd5c8..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Descargar Ivan Noble Discografia Mediafire.md +++ /dev/null @@ -1,6 +0,0 @@ -

Download Ivan Noble Discography from Mediafire


          Download File === https://cinurl.com/2uEXYB



- -Intemperie is the third solo album by the Argentine singer, musician, and actor Iván Noble. ... «Iván Noble – Discografia Completa (2003 - 2013)(MEGA)(Mp3)». planetawma.com ...
          -
          -
          -

          diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/[UPDATED] Crack Adobe Illustrator CC 2017 21.0 X64.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/[UPDATED] Crack Adobe Illustrator CC 2017 21.0 X64.md deleted file mode 100644 index 80e84562ccd1adb6bda5ee1c2b9d1d16fd43ddb0..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/[UPDATED] Crack Adobe Illustrator CC 2017 21.0 X64.md +++ /dev/null @@ -1,94 +0,0 @@ -## CRACK Adobe Illustrator CC 2017 21.0 X64 - - - - - - - - - -**Download File ===== [https://cayseypisi.blogspot.com/?c=2tyeWL](https://cayseypisi.blogspot.com/?c=2tyeWL)** - - - - - - - - - - - - Here is a possible title and article for the keyword "adobe illustrator cc 2017 21.0 x64": - -# What's New in Adobe Illustrator CC 2017 21.0 x64? - - - -Adobe Illustrator CC 2017 is the latest version of the popular vector graphics software that lets you create logos, icons, drawings, typography, and illustrations for print, web, video, and mobile. In this article, we will review some of the new and enhanced features of Adobe Illustrator CC 2017 21.0 x64 that can help you create stunning artwork with ease and efficiency. - - - -## New Document Dialog Box - - - -One of the first things you will notice when you launch Adobe Illustrator CC 2017 is the new document dialog box that allows you to create a new document from either a blank canvas or a template. You can choose from a variety of templates that use artwork and illustrations for different purposes, such as web design, print design, mobile design, and more. You can also access more templates online from Adobe Stock or save your own custom templates for future use. The new document dialog box also lets you customize the document settings, such as size, units, orientation, color mode, bleed, and more. - - - -## Updated User Interface - - - -The user interface of Adobe Illustrator CC 2017 has been updated with new icons for many of the tools and panels, as well as a new option to change the background workspace color to either a lighter or darker shade. This can help you adjust the contrast and visibility of your artwork according to your preference. You can also resize the tools panel by dragging its edge or collapse it to a single column by clicking on the double arrow icon at the top. - - - -## Improved Text Handling - - - -Adobe Illustrator CC 2017 has improved its text handling capabilities by adding some features that were previously available only in Adobe InDesign. For example, you can now fill a text frame with placeholder text to see how your layout will look before adding the actual content. You can also import text directly into a custom shape or onto a path without having to create a text frame first. This can save you time and steps when creating complex text layouts. Additionally, you can now easily locate fonts by marking them as favorites with a star icon or selecting them from a recently used fonts section. You can also narrow fonts by broad categories such as serif or sans serif or find visually similar fonts using the font similarity feature. - - - -## Enhanced Zoom Tool - - - -The zoom tool in Adobe Illustrator CC 2017 has been enhanced with a new feature that allows you to zoom in and out of your artwork with more precision and control. You can now zoom to a specific point on your artboard by clicking on it or zoom out by holding down the Alt key and clicking anywhere on the artboard. 
You can also zoom in and out incrementally by using the mouse wheel or by pressing Ctrl + Plus or Ctrl + Minus keys. - - - -## Other Features - - - -Some other features that have been added or improved in Adobe Illustrator CC 2017 include: - - - -- The ability to align objects along with individual path segments and anchor points that comprise them. - -- The ability to draw or create objects that align to the pixel grid and retain this alignment when moved or scaled. - -- The ability to crop bitmap images directly within Illustrator to discard excess parts, reduce file size and improve performance. - -- The ability to reset the appearance settings of your artwork to the default style with a new keyboard shortcut. - -- The ability to place gradient angles at every 45 degrees to achieve precise, controlled colors and shades. - -- The ability to select special characters and symbols from the Type menu or the context menu. - - - -These are just some of the new and enhanced features of Adobe Illustrator CC 2017 21.0 x64 that can help you create amazing artwork with more ease and efficiency. To learn more about this software and how to use it effectively, you can enroll in one of our [Illustrator courses](https://www.agitraining.com/adobe/illustrator/classes) offered at American Graphics Institute. - - dfd1c89656 - - - - - diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/parallel/collate.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/parallel/collate.py deleted file mode 100644 index ad749197df21b0d74297548be5f66a696adebf7f..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/parallel/collate.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections.abc import Mapping, Sequence - -import torch -import torch.nn.functional as F -from torch.utils.data.dataloader import default_collate - -from .data_container import DataContainer - - -def collate(batch, samples_per_gpu=1): - """Puts each data field into a tensor/DataContainer with outer dimension - batch size. - - Extend default_collate to add support for - :type:`~mmcv.parallel.DataContainer`. There are 3 cases. - - 1. cpu_only = True, e.g., meta data - 2. cpu_only = False, stack = True, e.g., images tensors - 3. 
cpu_only = False, stack = False, e.g., gt bboxes - """ - - if not isinstance(batch, Sequence): - raise TypeError(f'{batch.dtype} is not supported.') - - if isinstance(batch[0], DataContainer): - stacked = [] - if batch[0].cpu_only: - for i in range(0, len(batch), samples_per_gpu): - stacked.append( - [sample.data for sample in batch[i:i + samples_per_gpu]]) - return DataContainer( - stacked, batch[0].stack, batch[0].padding_value, cpu_only=True) - elif batch[0].stack: - for i in range(0, len(batch), samples_per_gpu): - assert isinstance(batch[i].data, torch.Tensor) - - if batch[i].pad_dims is not None: - ndim = batch[i].dim() - assert ndim > batch[i].pad_dims - max_shape = [0 for _ in range(batch[i].pad_dims)] - for dim in range(1, batch[i].pad_dims + 1): - max_shape[dim - 1] = batch[i].size(-dim) - for sample in batch[i:i + samples_per_gpu]: - for dim in range(0, ndim - batch[i].pad_dims): - assert batch[i].size(dim) == sample.size(dim) - for dim in range(1, batch[i].pad_dims + 1): - max_shape[dim - 1] = max(max_shape[dim - 1], - sample.size(-dim)) - padded_samples = [] - for sample in batch[i:i + samples_per_gpu]: - pad = [0 for _ in range(batch[i].pad_dims * 2)] - for dim in range(1, batch[i].pad_dims + 1): - pad[2 * dim - - 1] = max_shape[dim - 1] - sample.size(-dim) - padded_samples.append( - F.pad( - sample.data, pad, value=sample.padding_value)) - stacked.append(default_collate(padded_samples)) - elif batch[i].pad_dims is None: - stacked.append( - default_collate([ - sample.data - for sample in batch[i:i + samples_per_gpu] - ])) - else: - raise ValueError( - 'pad_dims should be either None or integers (1-3)') - - else: - for i in range(0, len(batch), samples_per_gpu): - stacked.append( - [sample.data for sample in batch[i:i + samples_per_gpu]]) - return DataContainer(stacked, batch[0].stack, batch[0].padding_value) - elif isinstance(batch[0], Sequence): - transposed = zip(*batch) - return [collate(samples, samples_per_gpu) for samples in transposed] - elif isinstance(batch[0], Mapping): - return { - key: collate([d[key] for d in batch], samples_per_gpu) - for key in batch[0] - } - else: - return default_collate(batch) diff --git a/spaces/syam417/rvc/infer_pack/attentions.py b/spaces/syam417/rvc/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- a/spaces/syam417/rvc/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - 
self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - 
self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. 
- pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/tabeina/bingo1/tests/kblob.ts b/spaces/tabeina/bingo1/tests/kblob.ts deleted file mode 100644 index 9e15b41c1c94a690beb61b23cdb42fc78767ccd2..0000000000000000000000000000000000000000 --- a/spaces/tabeina/bingo1/tests/kblob.ts +++ /dev/null @@ -1,27 +0,0 @@ -import FormData from 'form-data' - -import { fetch } from '@/lib/isomorphic' - -const formData = new FormData() - -const knowledgeRequest = {"imageInfo":{"url":"https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"51D|BingProdUnAuthenticatedUsers|E3DCA904FF236C67C3450163BCEC64CFF3F618CC8A4AFD75FD518F5ED0ADA080","convotone":"Creative"}}} - -formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - - -fetch('https://bing.vcanbb.top/images/kblob', - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": "https://bing.vcanbb.top/web/index.html", - "Referrer-Policy": "origin-when-cross-origin", - ...formData.getHeaders() - } - - } -).then(res => res.text()) -.then(res => console.log('res', res)) diff --git a/spaces/taesiri/ChatGPT-ImageCaptioner/tools/get_cc_tags.py b/spaces/taesiri/ChatGPT-ImageCaptioner/tools/get_cc_tags.py deleted file mode 100644 index 00bd6180ab7c5a6cbb0533a8a174e6de2f3b19b7..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ChatGPT-ImageCaptioner/tools/get_cc_tags.py +++ /dev/null @@ -1,194 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import argparse -import json -from collections import defaultdict - -# This mapping is extracted from the official LVIS mapping: -# https://github.com/lvis-dataset/lvis-api/blob/master/data/coco_to_synset.json -COCO_SYNSET_CATEGORIES = [ - {"synset": "person.n.01", "coco_cat_id": 1}, - {"synset": "bicycle.n.01", "coco_cat_id": 2}, - {"synset": "car.n.01", "coco_cat_id": 3}, - {"synset": "motorcycle.n.01", "coco_cat_id": 4}, - {"synset": "airplane.n.01", "coco_cat_id": 5}, - {"synset": "bus.n.01", "coco_cat_id": 6}, - {"synset": "train.n.01", "coco_cat_id": 7}, - {"synset": "truck.n.01", "coco_cat_id": 8}, - {"synset": "boat.n.01", "coco_cat_id": 9}, - {"synset": "traffic_light.n.01", "coco_cat_id": 10}, - {"synset": "fireplug.n.01", "coco_cat_id": 11}, - {"synset": "stop_sign.n.01", "coco_cat_id": 13}, - {"synset": "parking_meter.n.01", "coco_cat_id": 14}, - {"synset": "bench.n.01", "coco_cat_id": 15}, - {"synset": "bird.n.01", "coco_cat_id": 16}, - {"synset": "cat.n.01", "coco_cat_id": 17}, - {"synset": "dog.n.01", "coco_cat_id": 18}, - {"synset": "horse.n.01", "coco_cat_id": 19}, - {"synset": "sheep.n.01", "coco_cat_id": 20}, - {"synset": "beef.n.01", "coco_cat_id": 21}, - {"synset": "elephant.n.01", "coco_cat_id": 22}, - {"synset": "bear.n.01", "coco_cat_id": 23}, - {"synset": "zebra.n.01", "coco_cat_id": 24}, - {"synset": "giraffe.n.01", "coco_cat_id": 25}, - {"synset": "backpack.n.01", "coco_cat_id": 27}, - {"synset": "umbrella.n.01", "coco_cat_id": 28}, - {"synset": "bag.n.04", "coco_cat_id": 31}, - {"synset": "necktie.n.01", "coco_cat_id": 32}, - {"synset": "bag.n.06", "coco_cat_id": 33}, - {"synset": "frisbee.n.01", "coco_cat_id": 34}, - {"synset": "ski.n.01", "coco_cat_id": 35}, - {"synset": "snowboard.n.01", "coco_cat_id": 36}, - {"synset": "ball.n.06", "coco_cat_id": 37}, - {"synset": "kite.n.03", "coco_cat_id": 38}, - {"synset": "baseball_bat.n.01", "coco_cat_id": 39}, - {"synset": "baseball_glove.n.01", "coco_cat_id": 40}, - {"synset": "skateboard.n.01", "coco_cat_id": 41}, - {"synset": "surfboard.n.01", "coco_cat_id": 42}, - {"synset": "tennis_racket.n.01", "coco_cat_id": 43}, - {"synset": "bottle.n.01", "coco_cat_id": 44}, - {"synset": "wineglass.n.01", "coco_cat_id": 46}, - {"synset": "cup.n.01", "coco_cat_id": 47}, - {"synset": "fork.n.01", "coco_cat_id": 48}, - {"synset": "knife.n.01", "coco_cat_id": 49}, - {"synset": "spoon.n.01", "coco_cat_id": 50}, - {"synset": "bowl.n.03", "coco_cat_id": 51}, - {"synset": "banana.n.02", "coco_cat_id": 52}, - {"synset": "apple.n.01", "coco_cat_id": 53}, - {"synset": "sandwich.n.01", "coco_cat_id": 54}, - {"synset": "orange.n.01", "coco_cat_id": 55}, - {"synset": "broccoli.n.01", "coco_cat_id": 56}, - {"synset": "carrot.n.01", "coco_cat_id": 57}, - # {"synset": "frank.n.02", "coco_cat_id": 58}, - {"synset": "sausage.n.01", "coco_cat_id": 58}, - {"synset": "pizza.n.01", "coco_cat_id": 59}, - {"synset": "doughnut.n.02", "coco_cat_id": 60}, - {"synset": "cake.n.03", "coco_cat_id": 61}, - {"synset": "chair.n.01", "coco_cat_id": 62}, - {"synset": "sofa.n.01", "coco_cat_id": 63}, - {"synset": "pot.n.04", "coco_cat_id": 64}, - {"synset": "bed.n.01", "coco_cat_id": 65}, - {"synset": "dining_table.n.01", "coco_cat_id": 67}, - {"synset": "toilet.n.02", "coco_cat_id": 70}, - {"synset": "television_receiver.n.01", "coco_cat_id": 72}, - {"synset": "laptop.n.01", "coco_cat_id": 73}, - {"synset": "mouse.n.04", "coco_cat_id": 74}, - {"synset": "remote_control.n.01", "coco_cat_id": 75}, - {"synset": "computer_keyboard.n.01", "coco_cat_id": 76}, - 
{"synset": "cellular_telephone.n.01", "coco_cat_id": 77}, - {"synset": "microwave.n.02", "coco_cat_id": 78}, - {"synset": "oven.n.01", "coco_cat_id": 79}, - {"synset": "toaster.n.02", "coco_cat_id": 80}, - {"synset": "sink.n.01", "coco_cat_id": 81}, - {"synset": "electric_refrigerator.n.01", "coco_cat_id": 82}, - {"synset": "book.n.01", "coco_cat_id": 84}, - {"synset": "clock.n.01", "coco_cat_id": 85}, - {"synset": "vase.n.01", "coco_cat_id": 86}, - {"synset": "scissors.n.01", "coco_cat_id": 87}, - {"synset": "teddy.n.01", "coco_cat_id": 88}, - {"synset": "hand_blower.n.01", "coco_cat_id": 89}, - {"synset": "toothbrush.n.01", "coco_cat_id": 90}, -] - -def map_name(x): - x = x.replace('_', ' ') - if '(' in x: - x = x[:x.find('(')] - return x.lower().strip() - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--cc_ann', default='datasets/cc3m/train_image_info.json') - parser.add_argument('--out_path', default='datasets/cc3m/train_image_info_tags.json') - parser.add_argument('--keep_images', action='store_true') - parser.add_argument('--allcaps', action='store_true') - parser.add_argument('--cat_path', default='') - parser.add_argument('--convert_caption', action='store_true') - # parser.add_argument('--lvis_ann', default='datasets/lvis/lvis_v1_val.json') - args = parser.parse_args() - - # lvis_data = json.load(open(args.lvis_ann, 'r')) - cc_data = json.load(open(args.cc_ann, 'r')) - if args.convert_caption: - num_caps = 0 - caps = defaultdict(list) - for x in cc_data['annotations']: - caps[x['image_id']].append(x['caption']) - for x in cc_data['images']: - x['captions'] = caps[x['id']] - num_caps += len(x['captions']) - print('# captions', num_caps) - - if args.cat_path != '': - print('Loading', args.cat_path) - cats = json.load(open(args.cat_path))['categories'] - if 'synonyms' not in cats[0]: - cocoid2synset = {x['coco_cat_id']: x['synset'] \ - for x in COCO_SYNSET_CATEGORIES} - synset2synonyms = {x['synset']: x['synonyms'] \ - for x in cc_data['categories']} - for x in cats: - synonyms = synset2synonyms[cocoid2synset[x['id']]] - x['synonyms'] = synonyms - x['frequency'] = 'f' - cc_data['categories'] = cats - - id2cat = {x['id']: x for x in cc_data['categories']} - class_count = {x['id']: 0 for x in cc_data['categories']} - class_data = {x['id']: [' ' + map_name(xx) + ' ' for xx in x['synonyms']] \ - for x in cc_data['categories']} - num_examples = 5 - examples = {x['id']: [] for x in cc_data['categories']} - - print('class_data', class_data) - - images = [] - for i, x in enumerate(cc_data['images']): - if i % 10000 == 0: - print(i, len(cc_data['images'])) - if args.allcaps: - caption = (' '.join(x['captions'])).lower() - else: - caption = x['captions'][0].lower() - x['pos_category_ids'] = [] - for cat_id, cat_names in class_data.items(): - find = False - for c in cat_names: - if c in caption or caption.startswith(c[1:]) \ - or caption.endswith(c[:-1]): - find = True - break - if find: - x['pos_category_ids'].append(cat_id) - class_count[cat_id] += 1 - if len(examples[cat_id]) < num_examples: - examples[cat_id].append(caption) - if len(x['pos_category_ids']) > 0 or args.keep_images: - images.append(x) - - zero_class = [] - for cat_id, count in class_count.items(): - print(id2cat[cat_id]['name'], count, end=', ') - if count == 0: - zero_class.append(id2cat[cat_id]) - print('==') - print('zero class', zero_class) - - # for freq in ['r', 'c', 'f']: - # print('#cats', freq, len([x for x in cc_data['categories'] \ - # if x['frequency'] == freq] and 
class_count[x['id']] > 0)) - - for freq in ['r', 'c', 'f']: - print('#Images', freq, sum([v for k, v in class_count.items() \ - if id2cat[k]['frequency'] == freq])) - - try: - out_data = {'images': images, 'categories': cc_data['categories'], \ - 'annotations': []} - for k, v in out_data.items(): - print(k, len(v)) - if args.keep_images and not args.out_path.endswith('_full.json'): - args.out_path = args.out_path[:-5] + '_full.json' - print('Writing to', args.out_path) - json.dump(out_data, open(args.out_path, 'w')) - except: - pass diff --git a/spaces/teticio/inBERTolate/README.md b/spaces/teticio/inBERTolate/README.md deleted file mode 100644 index 6def4930252ec6aad2845d20df27e1916ca3cff1..0000000000000000000000000000000000000000 --- a/spaces/teticio/inBERTolate/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: InBERTolate -emoji: 🚀 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -# inBERTolate [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/teticio/inBERTolate/blob/master/notebooks/gradio_app.ipynb) -## Hit your word count by using BERT to pad out your essays! - -Sentences are generated that are in context with both the preceding and following sentences. Models like GPT are not well suited to this task as they are Causal Language Models, or autoregressive models, that generate tokens from left to right, conditional on the text that has come before. The B in BERT, on the other hand, stands for "Bidirectional" and it was trained to be able to fill in the gaps using context on either side. BERT is an example of an autoencoder model. - -Both BERT and GPT are based on [transformers](https://jalammar.github.io/illustrated-transformer/) - which were originally conceived for Neural Translation and consisted of an encoder and a decoder - but while GPT is a decoder without an encoder, BERT is an encoder without a decoder (the E in BERT). As a result, GPT is a more natural choice for language generation. BERT can be coaxed into generating language by leveraging its ability to fill in the gaps (masked tokens). Done naively this gives disappointing results, but the paper ["BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model"](https://arxiv.org/abs/1902.04094) shows how this can be acheived much more effectively, although much more slowly, as it requires doing a MCMC (Markov Chain Monte Carlo) simulation. I have made some minor adjustments to take into account left and right context as well as to use the HuggingFace package. I also modified it to use RoBERTa large. - -I have deployed it as a simple web app on [HuggingFace spaces](https://huggingface.co/spaces/teticio/inBERTolate). Without a GPU, however, it is very slow. If it is a bit too random, try reducing the temperature. diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Azymuth Discography (1975 2011) [MP3].rar Experience the Groove and Soul of Azymuth the Icons of Brazilian Music.md b/spaces/tialenAdioni/chat-gpt-api/logs/Azymuth Discography (1975 2011) [MP3].rar Experience the Groove and Soul of Azymuth the Icons of Brazilian Music.md deleted file mode 100644 index a52c28b0ed097f60c01d6d3106b1afd7cca7c111..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Azymuth Discography (1975 2011) [MP3].rar Experience the Groove and Soul of Azymuth the Icons of Brazilian Music.md +++ /dev/null @@ -1,108 +0,0 @@ -
          -

          Azymuth Discography (1975 2011) [MP3].rar - The Ultimate Collection of Brazilian Jazz-Funk Music

          -

          Azymuth is a legendary Brazilian band that has been making music since the 1970s. They are known for their unique blend of jazz, funk, samba, bossa nova, and electronic music. They have influenced many artists and genres, such as acid jazz, hip hop, house, and drum and bass. They have also collaborated with some of the most famous musicians in the world, such as Stevie Wonder, George Duke, Marcos Valle, and João Donato.

          -

          If you are a fan of Azymuth or want to discover their amazing music, you will be happy to know that you can download their entire discography in mp3 format for free. Yes, you read that right. You can get Azymuth Discography (1975 2011) [MP3].rar from various websites and apps that offer high quality audio files of the band's albums and songs. You can listen to them online or offline, on any device that supports mp3 format.

          -

          Azymuth Discography (1975 2011) [MP3].rar


          Download Ziphttps://urlcod.com/2uKaKb



          -

          How to Download Azymuth Discography (1975 2011) [MP3].rar for Free

          -

          There are many sources to download Azymuth Discography (1975 2011) [MP3].rar for free, but not all of them are reliable and safe. Some of them may have low quality audio, incomplete tracks, wrong titles, or even viruses and malware. Therefore, it is important to choose a trustworthy and reputable source to download Azymuth Discography (1975 2011) [MP3].rar for free.

          -

          One of the best sources to download Azymuth Discography (1975 2011) [MP3].rar for free is SoundCloud. SoundCloud is a popular online platform that allows users to upload, stream, and download millions of songs and podcasts from various artists and genres. You can find almost any song or album on SoundCloud, including Azymuth Discography (1975 2011) [MP3].rar.

          -

          To download Azymuth Discography (1975 2011) [MP3].rar for free from SoundCloud, you can simply go to the website and search for "Azymuth Discography (1975 2011) [MP3].rar" in the search box. You will see a result with the title "Azymuth Discography (1975 2011) [MP3].rar by Cibalatereps". You can click on it to open the page with more details and options to play or download the mp3 file. You can also click on the download icon next to the result to download the file directly to your device.
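
A quick note for readers who prefer to script the download once they have copied a direct file link from the page: a minimal Python sketch might look like the one below. The URL and file name here are placeholders rather than a real SoundCloud link, and it assumes the page exposes a plain HTTP download link.

```python
import requests

# Hypothetical direct download link copied from the hosting page
url = "https://example.com/Azymuth_Discography_1975-2011_MP3.rar"

# Stream the response so a large archive is never held in memory at once
response = requests.get(url, stream=True, timeout=60)
response.raise_for_status()

with open("Azymuth_Discography_1975-2011_MP3.rar", "wb") as f:
    for chunk in response.iter_content(chunk_size=1024 * 1024):  # 1 MiB chunks
        f.write(chunk)
```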

          -

          What to Expect from Azymuth Discography (1975 2011) [MP3].rar

          -

          Azymuth Discography (1975 2011) [MP3].rar is a comprehensive collection of the band's music from their debut album in 1975 to their latest album in 2011. It contains more than 30 albums and over 300 songs that showcase the band's evolution and diversity over the years. You can expect to hear some of their classic hits, such as "Jazz Carnival", "Partido Alto", "Linha do Horizonte", "Brazilian Soul", and "Dear Limmertz". You can also expect to hear some of their lesser-known gems, such as "Circo Marimbondo", "Papa Samba", "Morning", "Tightrope Walker", and "Butterfly".

          -

          Azymuth Discography (1975 2011) [MP3].rar is a must-have for any music lover who appreciates quality and creativity. It is a treasure trove of Brazilian jazz-funk music that will make you dance, groove, relax, and enjoy. It is also a great way to learn more about the history and culture of Brazil and its musical heritage.

          -

          Conclusion

          -

          Azymuth is a band that has made a lasting impact on the world of music with their innovative and original style. They have created some of the most memorable and enjoyable songs in the history of Brazilian jazz-funk music. They have also inspired and influenced many other artists and genres across the globe.

          -

          By downloading Azymuth Discography (1975 2011) [MP3].rar for free from SoundCloud, you can enjoy their entire discography in high quality mp3 format. You can listen to their music anytime and anywhere, on any device that supports mp3 format. You can also share their music with your friends and family to spread the joy and beauty of Azymuth.

          -

          Some Recommendations for Listening to Azymuth Discography (1975 2011) [MP3].rar

          -

          Azymuth Discography (1975 2011) [MP3].rar is a collection of the band's music that you can enjoy in many ways. You can listen to it for different purposes and occasions, such as relaxation, entertainment, education, or inspiration. You can also listen to it with different moods and preferences, such as happy, sad, energetic, or calm.

          -

          Some of the recommendations for listening to Azymuth Discography (1975 2011) [MP3].rar are:

          -
• If you want to relax and unwind, you can listen to some of their soothing and mellow songs, such as "Morning", "Butterfly", "Linha do Horizonte", and "Brazilian Soul".
• If you want to have fun and party, you can listen to some of their upbeat and funky songs, such as "Jazz Carnival", "Partido Alto", "Papa Samba", and "Tightrope Walker".
• If you want to learn and explore, you can listen to some of their experimental and innovative songs, such as "Circo Marimbondo", "Light as a Feather", "Fênix", and "Crazy Rhythm".
• If you want to be inspired and motivated, you can listen to some of their powerful and expressive songs, such as "Dear Limmertz", "Brazil", "Vôo Sobre o Horizonte", and "Last Summer in Rio".

          Some Alternatives to Downloading Azymuth Discography (1975 2011) [MP3].rar

          -

          Downloading Azymuth Discography (1975 2011) [MP3].rar is a great way to enjoy the band's music for free. However, it is not the only way. There are some alternatives to downloading Azymuth Discography (1975 2011) [MP3].rar that may suit your needs and preferences better. Some of these alternatives are:

          -
• Streaming Azymuth Discography (1975 2011) [MP3].rar online. You can stream the band's music online from various platforms and services, such as YouTube, Spotify, Apple Music, Deezer, etc. You can access their music anytime and anywhere with an internet connection. You can also create playlists and share them with others.
• Buying Azymuth Discography (1975 2011) [MP3].rar online or offline. You can buy the band's music online or offline from various sources, such as Amazon, iTunes, Google Play, CD Baby, etc. You can get their music in different formats, such as mp3, wav, flac, etc. You can also get their physical albums and CDs.
• Supporting Azymuth Discography (1975 2011) [MP3].rar online or offline. You can support the band's music online or offline by various means, such as following them on social media, subscribing to their newsletter, joining their fan club, donating to their cause, attending their concerts and events, buying their merchandise, etc.

          Conclusion

          -

          Azymuth Discography (1975 2011) [MP3].rar is a collection of the band's music that you can download for free from SoundCloud. It contains more than 30 albums and over 300 songs that showcase the band's evolution and diversity over the years. You can listen to their music anytime and anywhere, on any device that supports mp3 format. You can also share their music with your friends and family to spread the joy and beauty of Azymuth.

          -

          Some Challenges and Opportunities for Azymuth and Their Music

          -

          Azymuth is a band that has faced many challenges and opportunities in their musical career. They have overcome many obstacles and difficulties, such as political and social turmoil, economic crisis, personal loss, and competition. They have also seized many opportunities and possibilities, such as technological advancement, global exposure, artistic collaboration, and recognition.

          -

          Some of the challenges and opportunities for Azymuth and their music are:

          -
• Adapting to the changing musical trends and tastes. Azymuth has been making music for more than four decades and has witnessed many changes and developments in the music industry. They have had to adapt to the different musical styles and genres that have emerged and evolved over the years. They have also had to cater to the different musical preferences and expectations of their fans and audiences.
• Expanding their musical horizons and influences. Azymuth has been influenced by many musical traditions and cultures, such as jazz, funk, samba, bossa nova, and electronic music. They have also influenced many other artists and genres, such as acid jazz, hip hop, house, and drum and bass. They have also explored and experimented with different musical instruments and technologies, such as keyboards, synthesizers, drum machines, samplers, and computers.
• Maintaining their musical identity and integrity. Azymuth has been known for their distinctive and original sound that combines various elements of jazz, funk, samba, bossa nova, and electronic music. They have also created their own genre of music that they call "samba doido", which means "crazy samba" in Portuguese. They have also stayed true to their musical vision and values, despite the pressures and temptations of the music industry.

          Conclusion

          -

          By downloading Azymuth Discography (1975 2011) [MP3].rar for free, you can enjoy their music for different purposes and occasions, such as relaxation, entertainment, education, or inspiration. You can also enjoy their music with different moods and preferences, such as happy, sad, energetic, or calm. You can also learn more about the history and culture of Brazil and its musical heritage.

          -

          Azymuth Discography (1975 2011) [MP3].rar is a must-have for any music lover who appreciates quality and creativity. It is a treasure trove of Brazilian jazz-funk music that will make you dance, groove, relax, and enjoy. It is also a great way to support the band and their music by following them on social media, subscribing to their newsletter, joining their fan club, donating to their cause, attending their concerts and events, buying their merchandise, etc.

          -

          Download Azymuth Discography (1975 2011) [MP3].rar for free today and experience the joy and beauty of Azymuth.

          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Danielle Steel El Anillo (1996) DVDRip Una pelcula que te har llorar y sonrer.md b/spaces/tialenAdioni/chat-gpt-api/logs/Danielle Steel El Anillo (1996) DVDRip Una pelcula que te har llorar y sonrer.md deleted file mode 100644 index a9298ecca5d6ca3d169b007598f6946596c0e5e8..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Danielle Steel El Anillo (1996) DVDRip Una pelcula que te har llorar y sonrer.md +++ /dev/null @@ -1,131 +0,0 @@ -
          -

Danielle Steel El Anillo (1996) DVDRip: A Story of Love and War

-

Do you like romantic novels set in historical periods? Then you can't miss Danielle Steel El Anillo (1996) DVDRip, a film adaptation of the work of the famous American writer Danielle Steel. In this movie, you can enjoy a gripping plot, unforgettable characters, and striking scenes.

          -

          Danielle Steel El Anillo (1996) DVDRip


          Download ★★★★★ https://urlcod.com/2uK7Pc



          -

What is Danielle Steel El Anillo (1996) DVDRip about?

-

Danielle Steel El Anillo (1996) DVDRip tells the story of Ariana von Gotthard, a young German aristocrat living in Berlin during World War II. Her life changes radically when her father is murdered by the Nazis and her mother takes her own life. Ariana is left alone and helpless until an American officer named Henry Winters rescues her and takes her to the United States.

-

There, Ariana tries to adapt to her new life, but she cannot forget her past or the ring her father gave her before he died. The ring is the symbol of her love and her hope, and also the thread running through a story that unfolds across two continents and three generations.

          -

Why watch Danielle Steel El Anillo (1996) DVDRip?

-

Danielle Steel El Anillo (1996) DVDRip is a movie that will stir you with the emotions of its protagonists, who must face the horrors of war, the pain of loss, the challenge of survival, and the power of love. It is a movie that will show you the value of family, friendship, loyalty, and forgiveness.

-

In addition, Danielle Steel El Anillo (1996) DVDRip boasts a first-rate cast, led by Nastassja Kinski as Ariana, Michael York as Henry, and Tim DeKay as Manfred. It is directed by Armand Mastroianni, an expert at adapting Danielle Steel's novels for the screen.

          -

Where can you download Danielle Steel El Anillo (1996) DVDRip?

-

If you want to watch Danielle Steel El Anillo (1996) DVDRip at home, with the best picture and sound quality, we recommend downloading it from our website. There you will find a direct link to download the movie in DVDRip format, with no ads or viruses. Just click the button below and enjoy this wonderful story of love and war.

          -


Download Danielle Steel El Anillo (1996) DVDRip

          -
-What do critics and audiences think of Danielle Steel El Anillo (1996) DVDRip?
-

Danielle Steel El Anillo (1996) DVDRip has received good reviews from experts and audiences alike, who have praised the faithfulness of the adaptation, the quality of the production, and the actors' performances. The movie holds a rating of 7.2 out of 10 on IMDb, based on more than 500 votes. It has also been nominated for several awards, including an Emmy for best costume design.

-

Danielle Steel's fans have also been pleased with this movie, which they consider one of the best adaptations of her novels. Many have highlighted the emotional power of the story, the historical setting, and the message of hope it conveys. Some of the comments that can be read online are:

-

• "I loved this movie, it is very faithful to the book and the actors are excellent. I recommend it to everyone who likes romance and history."
• "A beautiful, moving, well-made movie. Nastassja Kinski is impressive as Ariana, and Michael York also does a great job. The story grips you from beginning to end."
• "One of the best movies based on Danielle Steel's novels. It has everything you could ask for: love, drama, war, adventure, family... It is a movie that makes you cry and smile at the same time."
-
-How can you download Danielle Steel El Anillo (1996) DVDRip for free?
-

If you want to download Danielle Steel El Anillo (1996) DVDRip for free, without registering or paying anything, we offer you a quick and easy solution. Just follow these steps:

-

1. Go to our website and search for the movie you want to download.
2. Click the download button and wait a few seconds.
3. Choose the format you prefer: DVDRip, HD, MP4, AVI...
4. Enjoy your favorite movie on your computer, phone, or tablet.
-

That is how easy it is to download Danielle Steel El Anillo (1996) DVDRip for free from our website. Don't think twice and start enjoying this incredible story of love and war.

-What other Danielle Steel movies can you watch? -

Danielle Steel El Anillo (1996) DVDRip is just one of the many movies based on the novels of Danielle Steel, one of the most widely read and best-selling authors in the world. If you like stories of love, drama, intrigue, and overcoming adversity, we recommend you also watch these other movies:

-

• Danielle Steel: Recuerdos (1996) DVDRip: A journalist who suffers from amnesia after an accident tries to recover her memory and her life with the help of a mysterious man.
• Danielle Steel: Palomino (1991) DVDRip: A successful photographer takes refuge on a ranch after her divorce and falls in love with a cowboy.
• Danielle Steel: La casa (2006) DVDRip: A lawyer inherits a mansion from her grandmother and discovers the secrets of her family and of her own heart.
• Danielle Steel: Un largo camino a casa (1998) DVDRip: A woman who was abandoned by her mother as a child is reunited with her years later and tries to forgive her.
• Danielle Steel: Un amor perfecto (2002) DVDRip: A fashion designer marries a European prince, but soon realizes that her fairy tale is not as happy as she had hoped.
-Where can you buy the book Danielle Steel El Anillo? -

If you want to read the book behind the movie Danielle Steel El Anillo (1996) DVDRip, we offer you the best option for buying it online. On our website you will find the book Danielle Steel El Anillo in digital or print format, at the best price and with the best quality. You will also enjoy other benefits, such as:

-

• Free shipping on orders over 19€
• Easy, free returns
• Safe, convenient payment
• Personalized customer service
• Exclusive discounts and offers

-

Don't think twice and buy the book Danielle Steel El Anillo on our website. We guarantee you will love this novel, which has captivated millions of readers around the world.

-

Buy the book Danielle Steel El Anillo

-What trivia is there about Danielle Steel El Anillo (1996) DVDRip? -

Danielle Steel El Anillo (1996) DVDRip is a movie with some interesting trivia you may not know. For example:

-

• The movie is based on the novel El anillo (The Ring), published by Danielle Steel in 1980. It is the author's second novel and her first set during World War II.
• The movie was shot in several locations in Europe and the United States, such as Germany, France, England, California, and New York. Some scenes were filmed at Neuschwanstein Castle, the same castle that inspired Disney's Sleeping Beauty castle.
• The movie features a special appearance by Jon Voight, Angelina Jolie's father, who plays Ariana's father. Voight is an Oscar-winning actor who has been nominated four times.
• The movie runs almost three hours, making it one of the longest based on Danielle Steel's novels. The novel, however, is much longer and has more characters and subplots than the movie.
• The movie was a ratings hit when it premiered on television in 1996. More than 20 million viewers watched it in the United States, and it received good reviews from the specialized press.
-What other Danielle Steel novels can you read? -

Danielle Steel is one of the most prolific and popular authors in the world. She has written more than 180 novels, of which more than 80 have been adapted for film or television. Her novels span different genres and themes, from historical romance to psychological thriller. If you liked El anillo, we suggest you also read these other Danielle Steel novels:

-

• El regalo: A Christmas story about a family that faces a tragedy and finds hope in an orphaned boy.
• Malicia: A thriller about a woman who suffers harassment and violence from her ex-husband and fights to protect her children.
• El beso: A love story between a married woman and a single man who meet after a car accident.
• Una herencia misteriosa: A family saga about a woman who discovers that her father left her a fortune and some hidden secrets.
• La promesa: A love story between two young people who are separated by the war and reunited years later.

-

Don't wait any longer and start reading Danielle Steel's novels, an author who will make you feel intense emotions and transport you to fascinating worlds.

-What tips do we have for watching Danielle Steel El Anillo (1996) DVDRip? -

Danielle Steel El Anillo (1996) DVDRip is a movie that will make you feel many emotions and hook you from beginning to end. To enjoy it to the fullest, here are some tips you may find useful:

-

• Prepare a comfortable, quiet setting for watching the movie. Turn off your phone, the lights, and anything else that might interrupt you.
• Choose the right moment to watch the movie. Don't watch it if you are tired, feeling down, or in a hurry. It is a movie that demands your attention and your sensitivity.
• Pair the movie with something to eat and drink. You can make popcorn, sweets, or whatever you like best. You can also have some water, tea, or coffee to stay hydrated.
• Share the movie with someone special. You can watch it with your partner, your family, or your friends, and comment on it, laugh, cry, and get emotional together.
• Enjoy the movie without prejudices or expectations. Let yourself be carried away by the story, the characters, and the emotions it conveys. Don't compare the movie with the book or with other Danielle Steel movies.
-What other ways are there to enjoy Danielle Steel El Anillo (1996) DVDRip? -

Danielle Steel El Anillo (1996) DVDRip is a movie you can enjoy in many different ways. Besides watching it at home, you can do other things related to it, such as:

-

• Listen to the movie's soundtrack. The music of Danielle Steel El Anillo (1996) DVDRip is a mix of romantic, dramatic, and epic melodies that will bring back the movie's most important scenes.
• Read the book the movie is based on. Danielle Steel's novel El anillo will captivate you with its brisk style, absorbing narrative, and deeply drawn characters.
• Watch other movies based on Danielle Steel's novels. If you liked Danielle Steel El Anillo (1996) DVDRip, you will surely also like movies such as Danielle Steel: Recuerdos (1996) DVDRip, Danielle Steel: Palomino (1991) DVDRip, or Danielle Steel: La casa (2006) DVDRip.
• Follow Danielle Steel on social media. Danielle Steel is very active on her social networks, where she shares her news, her opinions, and her advice. You can follow her on Facebook, Twitter, or Instagram.
• Visit the places where the movie was shot. Danielle Steel El Anillo (1996) DVDRip was filmed in spectacular locations such as Neuschwanstein Castle in Germany, the Golden Gate Bridge in San Francisco, and Central Park in New York. If you get the chance, you can visit these places and feel like part of the movie.

-

As you can see, there are many ways to enjoy Danielle Steel El Anillo (1996) DVDRip. You just have to choose the one you like best and get ready for an unforgettable experience.

-Conclusion -

Danielle Steel El Anillo (1996) DVDRip is a movie you can't miss if you like stories of love and war. It is a movie that will make you feel, think, and dream with its characters, its plot, and its message. It is a movie that will show you the value of life, love, and hope.

-

If you want to watch Danielle Steel El Anillo (1996) DVDRip, we invite you to download it from our website. There you will find a direct link to download the movie in DVDRip format, with no ads or viruses. Just click the button below and enjoy this wonderful movie.

-

Download Danielle Steel El Anillo (1996) DVDRip

-

We hope you enjoyed this article about Danielle Steel El Anillo (1996) DVDRip. If you want to read more articles like this, visit our website and subscribe to our newsletter. We will keep you up to date with the latest news about movies, books, and much more.

-

Thank you for your attention, and see you next time.

          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/FIFA Street 4 PC Password.txt How to Download and Install the Game Easily.md b/spaces/tialenAdioni/chat-gpt-api/logs/FIFA Street 4 PC Password.txt How to Download and Install the Game Easily.md deleted file mode 100644 index 250e19fce5d77842590e73d94a66a7262a7f95d2..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/FIFA Street 4 PC Password.txt How to Download and Install the Game Easily.md +++ /dev/null @@ -1,130 +0,0 @@ -
          -

          Fifa Street 4 PC Password.txt: What You Need to Know

          -

          If you are a fan of soccer games, you might have heard of Fifa Street 4, a popular game that lets you experience the thrill and excitement of street soccer. But if you are looking for a way to play this game on your PC, you might encounter a problem: you need a password to extract or install the game file. In this article, we will explain what this password is, why you need it, and how you can find it. We will also answer some frequently asked questions about Fifa Street 4 PC.

          -

          What is Fifa Street 4?

          -

          A brief introduction to the game and its features

          -

          Fifa Street 4 is a soccer video game developed by EA Canada and published by Electronic Arts. It is the fourth installment in the Fifa Street series, which focuses on street soccer rather than traditional stadium soccer. The game features over 50 teams and players from around the world, as well as various modes and locations to play in. Some of the modes include World Tour, where you can create your own team and compete in tournaments; Freestyle, where you can show off your skills and tricks; and Online Team Play, where you can join up with other players online.

          -

          fifa street 4 pc password.txt


Download File: https://urlcod.com/2uK1AJ



          -

          The platforms and release dates of the game

          -

          Fifa Street 4 was released on March 13, 2012 for PlayStation 3 and Xbox 360. However, there was no official release for PC. This means that if you want to play this game on your computer, you have to download it from unofficial sources, such as torrent sites or file-sharing platforms. This also means that you might encounter files that are encrypted or compressed with a password that you need to enter before you can access them.

          -

          Why do you need a password for Fifa Street 4 PC?

          -

          The problem of encrypted or compressed files

          -

          When you download a file from an unofficial source, you might find that it is not a simple executable file that you can run on your PC. Instead, it might be a rar or zip file that contains multiple files inside it. These files are usually compressed to reduce their size and make them easier to download and share. However, sometimes they are also encrypted with a password to protect them from unauthorized access or modification.

          -

          This means that if you want to extract or install these files, you need to enter the correct password first. Otherwise, you will not be able to open them or run them on your PC. This can be frustrating and annoying, especially if you don't know what the password is or where to find it.
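To see the mechanics concretely, Python's standard zipfile module can extract archives protected with classic ZIP encryption, and extraction simply fails when the password is wrong. This is only an illustrative sketch: the archive name and password below are placeholders, and note that zipfile cannot decrypt AES-encrypted archives.

```python
import zipfile

ARCHIVE = "game_files.zip"          # placeholder archive name
PASSWORD = b"the-correct-password"  # must already be known

with zipfile.ZipFile(ARCHIVE) as zf:
    try:
        # Decryption only works for legacy ZipCrypto, not AES archives
        zf.extractall(pwd=PASSWORD)
        print(f"Extracted {len(zf.namelist())} files")
    except RuntimeError as exc:
        # zipfile raises RuntimeError when the password is wrong
        print("Extraction failed:", exc)
```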

          -

          The risks of downloading from untrusted sources

          -

          Another problem with downloading files from unofficial sources is that they might not be safe or reliable. They might contain viruses, malware, spyware, or other harmful programs that can damage your computer or steal your personal information. They might also be fake or corrupted files that don't work properly or at all. They might even be illegal files that violate the copyright laws or terms of service of the game developer or publisher.

          -

          This means that if you download these files, you are taking a risk with your security and privacy. You are also violating the rights of the game creator and distributor. You might face legal consequences or penalties if you are caught doing so.
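Whatever the source, one basic safeguard is to compare a downloaded file against a checksum published somewhere you trust before opening it. Here is a minimal Python sketch; the file name and expected digest are placeholders, not real values.

```python
import hashlib

EXPECTED_SHA256 = "0123abcd..."      # hypothetical published checksum
FILE_PATH = "fifa_street_4_pc.zip"   # placeholder file name

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large downloads don't fill memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(FILE_PATH) == EXPECTED_SHA256:
    print("Checksum matches")
else:
    print("Checksum mismatch - do not open this file")
```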

          -

          How to find the password for Fifa Street 4 PC?

          -

          The official way to get the game and the password

          -

          The best and safest way to get Fifa Street 4 PC and its password is to buy it from an official source. This way, you can be sure that you are getting a legitimate and working copy of the game that is compatible with your PC. You can also enjoy all the features and updates of the game without any problems or restrictions.

          -

          However, as we mentioned earlier, there is no official release of Fifa Street 4 PC. This means that there is no official source where you can buy it from. You might find some websites or online stores that claim to sell it, but they are most likely scams or frauds that will take your money and give you nothing in return.

          -

          Therefore, we advise you not to fall for these offers and avoid buying Fifa Street 4 PC from any unofficial source.

          -


          -

          The alternative ways to crack or bypass the password

          -

          If you still want to try playing Fifa Street 4 PC without buying it from an official source, there are some alternative ways to crack or bypass the password of the encrypted or compressed files. However, we warn you that these methods are not guaranteed to work and might cause more problems than solutions.

          -

          Some of these methods include:

          -
            -
• Searching for the password online. You might find some websites or forums that claim to have the password for Fifa Street 4 PC rar or zip files. However, these passwords might not work or might be fake or wrong. They might also contain malicious links or ads that can harm your computer or trick you into downloading unwanted programs.
• Using a password cracker tool. You might find some software or applications that claim to crack or recover passwords for rar or zip files. However, these tools might not work or might take a long time to find the password. They might also contain viruses or malware that can infect your computer or steal your data.
• Using a password bypasser tool. You might find some software or applications that claim to bypass or remove passwords for rar or zip files. However, these tools might not work or might damage your files. They might also contain viruses or malware that can harm your computer or compromise your security.
          -

          As you can see, these methods are not reliable or safe ways to find the password for Fifa Street 4 PC rar or zip files. They might waste your time and money and put your computer at risk.

          -

          Conclusion

          -

          In conclusion, Fifa Street 4 PC Password.txt is a file that contains the password for extracting or installing Fifa Street 4 PC rar or zip files. These files are usually downloaded from unofficial sources that encrypt them with a password to protect them from unauthorized access or modification.

          -

          However, finding this password is not easy or safe. The best way to get it is to buy it from an official source, but there is no official release of Fifa Street 4 PC. The alternative ways to crack or bypass it are not guaranteed to work and might cause more problems than solutions.

          -

          Therefore, we recommend you not to download Fifa Street 4 PC rar or zip files from untrusted sources and avoid looking for their passwords online. Instead, we suggest you play other soccer games that are available for PC legally and safely.

          -

          FAQs

          -

          Q1: Is Fifa Street 4 available for PC?

          -

          A1: No, there is no official release of Fifa Street 4 for PC. The game was only released for PlayStation 3 and Xbox 360 in March 2012.

          -

Q2: How can I play Fifa Street 4 online?

A2: Fifa Street 4 offers online play through its Online Team Play mode on PlayStation 3 and Xbox 360, where you can join up with other players over PlayStation Network or Xbox Live. Since there is no official PC version, there is no official way to play it online on PC.

          Q3: What are the system requirements for Fifa Street 4 PC?

          -

          A3: Since there is no official release of Fifa Street 4 PC, there are no official system requirements for it. However, some unofficial sources suggest that you need at least the following specifications to run the game on your PC:

| Component | Minimum Requirements | Recommended Requirements |
| --- | --- | --- |
| CPU | Intel Core 2 Duo E6600 or AMD Athlon 64 X2 5400+ | Intel Core i3-530 or AMD Phenom II X4 925 |
| RAM | 2 GB | 4 GB |
| OS | Windows XP/Vista/7/8/10 | Windows 7/8/10 |
| Video Card | NVIDIA GeForce 8800 GT or AMD Radeon HD 3870 | NVIDIA GeForce GTX 460 or AMD Radeon HD 6870 |
| Free Disk Space | 8 GB | 10 GB |
          -

          Q4: What are the best tips and tricks for Fifa Street 4?

          -

          A4: Some of the best tips and tricks for Fifa Street 4 are:

          -
            -
• Learn how to use the right stick to perform various tricks and skills. You can also use the left trigger to modify your tricks and make them more effective.
• Use the wall to your advantage. You can bounce the ball off the wall to pass, shoot, or trick your opponents.
• Use the World Tour mode to create and customize your own team and player. You can also unlock new items and abilities by winning tournaments and challenges.
• Use the Online Team Play mode to join up with other players online and compete against other teams. You can also create your own club and invite your friends to join.
• Use the Freestyle mode to practice your skills and tricks without any rules or restrictions. You can also customize the settings and choose your own location and mode.
          -

          Q5: Where can I find more information about Fifa Street 4?

          -

          A5: You can find more information about Fifa Street 4 on the official website of EA Sports (https://www.ea.com/games/fifa/fifa-street). You can also check out some reviews, videos, and guides on various gaming websites and platforms, such as IGN (https://www.ign.com/games/fifa-street-2012), YouTube (https://www.youtube.com/results?search_query=fifa+street+4), or GameFAQs (https://www.gamefaqs.com/search?game=fifa+street+4).

          -

          -
          -
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Flyff Farm Bot Download REPACK.md b/spaces/tialenAdioni/chat-gpt-api/logs/Flyff Farm Bot Download REPACK.md
deleted file mode 100644
index 7df47f2c80e8f86d37ffc42a4bf64e83abeaea70..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Flyff Farm Bot Download REPACK.md
+++ /dev/null
@@ -1,31 +0,0 @@

          How to Download and Use a Flyff Farm Bot

          -

          Flyff is a popular MMORPG that features a variety of classes, quests, dungeons, and PvP modes. However, some players may find it tedious or time-consuming to level up their characters, farm items, or complete tasks. That's where a flyff farm bot comes in handy.

          -

          A flyff farm bot is a program that automates certain actions in the game, such as attacking monsters, looting drops, using skills, consuming food, and more. A flyff farm bot can help you level up faster, earn more money, and enjoy the game without having to grind manually.

          -

          flyff farm bot download


          Download File 🔗 https://urlcod.com/2uK1B0



          -

          There are different types of flyff farm bots available online, some of which are free and open source, while others are paid and private. Some flyff farm bots are designed for specific servers or versions of the game, while others are more universal and compatible with multiple servers. Some flyff farm bots have more features and customization options than others, such as support mode, shoutbot, speed hack, teleporter, etc.

          -

          In this article, we will show you how to download and use a flyff farm bot for your preferred server and version of the game. We will also provide some tips and precautions to avoid getting banned or detected by the game's anti-cheat system.

          -

          How to Download a Flyff Farm Bot

          -

          The first step to use a flyff farm bot is to download one from a reliable source. You can find various flyff farm bots on websites such as elitepvpers.com[^1^] [^2^], youtube.com[^3^], flyffbot.net[^4^], github.com[^5^], etc. However, you should be careful when downloading any files from the internet, as some of them may contain viruses, malware, or spyware that can harm your computer or steal your personal information.

          -

          Before downloading any flyff farm bot, you should do some research on the reputation and feedback of the developer and the users. You should also scan the files with an antivirus program or an online virus scanner such as virustotal.com. You should also backup your game files and your personal data before installing any flyff farm bot.

          -

          Some flyff farm bots may require additional software or tools to run properly, such as F-Tool, Neuz, AiWake-Lite, etc. You should read the instructions and requirements carefully before downloading and installing any flyff farm bot.

          -

          How to Use a Flyff Farm Bot

          -

          The second step to use a flyff farm bot is to configure it according to your preferences and needs. Most flyff farm bots have a user interface or a configuration file that allows you to customize various settings and options, such as:

          -

          -
            -
• The server and version of the game you are playing on
• The character class and level you are using
• The location and range of farming
• The type and frequency of actions (attack, loot, skill, food, etc.)
• The hotkeys and shortcuts for activating or deactivating the bot
• The security and anti-detection features (stealth mode, randomization, etc.)
          -

          You should experiment with different settings and options until you find the optimal ones for your situation. You should also test the bot on a low-level character or a dummy account before using it on your main character or account.
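Configuration formats differ from bot to bot, so the following Python sketch is purely hypothetical: the field names are invented for illustration and do not correspond to any real bot. It only shows the general idea of keeping such options in a small JSON file and merging them over sensible defaults.

```python
import json

# Invented field names, for illustration only
DEFAULTS = {
    "server": "example-server",
    "character_class": "example-class",
    "farming_range": 50,
    "actions": {"attack": True, "loot": True, "food_below_hp_percent": 60},
    "toggle_hotkey": "F9",
}

def load_settings(path: str) -> dict:
    """Read a user JSON file and fill in any missing options from DEFAULTS."""
    with open(path, encoding="utf-8") as f:
        user = json.load(f)
    merged = dict(DEFAULTS)
    merged.update(user)
    return merged

if __name__ == "__main__":
    print(json.dumps(DEFAULTS, indent=2))
```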

          -

          Once you have configured your flyff farm bot, you can launch it and start farming in the game. You should monitor the bot's performance and behavior from time to time to make sure it is working properly and efficiently. You should also be ready to stop or pause the bot if you encounter any problems or interruptions.

          -

          Tips and Precautions for Using a Flyff Farm Bot

          -

          Using a flyff farm bot can be beneficial and convenient for many players who want to save time and effort in playing Flyff. However, using a flyff farm bot also comes with some risks and responsibilities that you should be aware of.

          -

First of all, using a flyff farm bot is against the game's terms of service, so your account can be suspended or permanently banned if the bot is detected.

          -
          -
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Use GNS3 for Mac to Practice Network Configurations and Features.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Use GNS3 for Mac to Practice Network Configurations and Features.md
deleted file mode 100644
index f07171101b414e46abb668df3d22b7873db5cb0e..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Use GNS3 for Mac to Practice Network Configurations and Features.md
+++ /dev/null
@@ -1,36 +0,0 @@

          How to Download and Install GNS3 for Mac

          -

GNS3 is a software application that simulates complex networks without requiring dedicated network hardware such as routers and switches. You can use GNS3 for Mac to create and test network topologies, experiment with network features, or check configurations that need to be deployed later on real devices. GNS3 for Mac supports various network devices from multiple vendors, including Cisco, Juniper, Arista, Cumulus, and more.

          -

          In this article, we will show you how to download and install GNS3 for Mac in a few simple steps. Before we begin, please note that GNS3 for Mac requires macOS 10.14 (Mojave) or later. Also, you will need to install the GNS3 virtual machine (VM) on your Mac or on a remote server to run the network devices.

          -

          download gns3 for mac


Download File: https://urlcod.com/2uK6WW



          -

          Step 1: Download GNS3 for Mac

          -

          The first step is to download GNS3 for Mac from the official website: https://www.gns3.com/software/download. You will need to create an account or log in with your existing account to access the download page. Then, select the Mac OS X package and click on the Download button.

          -

          Step 2: Install GNS3 for Mac

          -

          The next step is to install GNS3 for Mac on your computer. To do this, follow these steps:

          -
            -
1. Open the downloaded DMG file and drag the GNS3 icon to the Applications folder.
2. Go to the Applications folder and double-click on the GNS3 icon to launch it.
3. You may see a warning message saying that GNS3 is an application downloaded from the Internet. Click on Open to continue.
4. You will see the Setup Wizard window. Click on Next to proceed.
5. You will be asked to choose a server type. Select Local server and click on Next.
6. You will be asked to install additional software. Check the boxes for Dynamips, VPCS, ubridge, and QEMU and click on Next.
7. You will be asked to install Wireshark. If you want to use Wireshark for packet capture and analysis, check the box and click on Next. Otherwise, uncheck the box and click on Next.
8. You will be asked to install Solar-PuTTY. If you want to use Solar-PuTTY as your default console application, check the box and click on Next. Otherwise, uncheck the box and click on Next.
9. You will see a summary of your choices. Click on Install to start the installation process.
10. Wait for the installation process to complete. It may take several minutes depending on your internet speed and computer performance.
11. When the installation is done, click on Finish.
          -

          Congratulations! You have successfully installed GNS3 for Mac on your computer.
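If you want to confirm that the local GNS3 server is actually running, recent GNS3 2.x releases expose a REST API on localhost port 3080 by default. Treat the endpoint below as an assumption to verify against your own version; this small Python sketch simply queries it.

```python
import json
from urllib.request import urlopen

# Default local-server address in GNS3 2.x (an assumption; adjust
# if you chose a different port during setup).
URL = "http://127.0.0.1:3080/v2/version"

try:
    with urlopen(URL, timeout=5) as resp:
        info = json.load(resp)
    print("GNS3 server is running, version:", info.get("version"))
except OSError as exc:
    print("Could not reach the GNS3 server:", exc)
```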

          -

          Step 3: Download and Install GNS3 VM

          -

          The final step is to download and install the GNS3 VM on your Mac or on a remote server. The GNS3 VM is a virtual machine that runs the network devices in GNS3. You can use VMware Fusion or VirtualBox as your hypervisor to run the GNS3 VM.

          -

          To download and install the GNS3 VM, follow these steps:

          -
            -
1. Go to https://www.gns3.com/software/download-vm and log in with your account.
2. Select the version of the GNS3 VM that matches your hypervisor (VMware or VirtualBox) and click on the Download button.
3. Extract the downloaded ZIP file and open the OVA file with your hypervisor.
4. Follow the instructions of your hypervisor to import and configure the GNS3 VM.
5. Make sure that you assign enough CPU cores and RAM to the GNS3 VM based on your needs.
6. Start the GNS3 VM and wait for it to finish booting.

            -

            -
            -
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Activation Key Opel Vauxhall Globaltis Keygen VERIFIED.md b/spaces/tioseFevbu/cartoon-converter/scripts/Activation Key Opel Vauxhall Globaltis Keygen VERIFIED.md
deleted file mode 100644
index cd4f69cd4b80c72b6b9fba93a388ea8396f86d70..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Activation Key Opel Vauxhall Globaltis Keygen VERIFIED.md
+++ /dev/null
@@ -1,6 +0,0 @@
            -

            Activation Key Opel Vauxhall Globaltis Keygen: How to Install and Register GlobalTIS Software for GM Vehicles


            Introduction


            GlobalTIS is a software application that provides access to technical information, diagnostic data, programming functions, and service bulletins for GM vehicles, such as Opel, Vauxhall, Saab, Chevrolet, and Cadillac.

            -

            Activation Key Opel Vauxhall Globaltis Keygen


            DOWNLOAD ····· https://urlcod.com/2uHyxS




            A keygen is a program that generates activation keys or serial numbers for software applications. You need a keygen to register GlobalTIS software and unlock its full features.


            Using GlobalTIS software can help you diagnose and repair GM vehicles more efficiently and accurately. You can also update the software and firmware of your vehicle's modules, perform security functions, access service manuals, wiring diagrams, recall information, and more.


            How to Install GlobalTIS Software


            Requirements


            You need a Windows PC with at least 4 GB of RAM and 20 GB of free disk space to install and run GlobalTIS software.
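If you want to check the free space programmatically before installing, Python's standard library can do it in a couple of lines. This sketch assumes a Windows machine and that C: is the install drive.

```python
import shutil

total, used, free = shutil.disk_usage("C:\\")  # assumed install drive
print(f"Free space: {free / 2**30:.1f} GiB (GlobalTIS needs about 20 GiB)")
```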

            -


            You also need a compatible device to connect to your vehicle's OBD port, such as MDI, Tech 2, or J2534. This device will allow you to communicate with your vehicle's modules and perform diagnostic and programming functions.


            Finally, you need a copy of GlobalTIS ISO file and keygen program. The ISO file contains the installation files for GlobalTIS software. The keygen program generates activation keys for GlobalTIS software.


            Steps


The first step is to download and install Virtual CD-ROM Control Panel from the Microsoft website. This program will allow you to mount the GlobalTIS ISO file as a virtual CD drive on your PC.


            The next step is to load the driver and mount the GlobalTIS ISO file using Virtual CD-ROM Control Panel. To do this, follow these steps:

            1. Launch Virtual CD-ROM Control Panel from your desktop or Start menu.
            2. Click Driver Control.
            3. Click Install Driver.
            4. Browse to the folder where you extracted Virtual CD-ROM Control Panel files and select VCdRom.sys file.
            5. Click Open.
            6. Click Start.
            7. Click OK.
            8. Click Add Drive.
            9. Select an unused drive letter from the drop-down menu.
            10. Click Mount.
            11. Browse to the folder where you saved the GlobalTIS ISO file and select it.
            12. Click Open.
            13. Click OK.

            You should now see a new virtual CD drive on your PC with the label Saab or Opel/Vauxhall depending on which version of GlobalTIS you have.


            The third step is to open My Computer and launch the setup.exe file from the virtual CD drive. This will start the installation wizard for GlobalTIS software.


            The fourth step is to follow the on-screen instructions to install GlobalTIS software on your PC. You will need to accept the license agreement, choose the installation folder, select the components to install, and enter your name and organization.


            The fifth step is to restart your PC when prompted. This will complete the installation process of GlobalTIS software.


            How to Register GlobalTIS Software


            Requirements


            You need a working internet connection to register GlobalTIS software. You will need to access the GM website and generate a license key for GlobalTIS software.


            You also need a GM account with a valid email address and password. You will need to log in to the GM website and enter your personal and vehicle information.


            Finally, you need a keygen program for GlobalTIS software. This program will generate an activation key for GlobalTIS software based on your license key and hardware ID.


            Steps


            The first step is to launch GlobalTIS software from your desktop or Start menu. You will see a splash screen with the logo of Saab or Opel/Vauxhall depending on which version of GlobalTIS you have.


            The next step is to click Register Now button on the splash screen. This will open a web browser and redirect you to the GM website.


            The third step is to log in to the GM website with your email address and password. If you do not have a GM account, you can create one by clicking Sign Up button.


            The fourth step is to enter your personal and vehicle information on the GM website. You will need to provide your name, address, phone number, email address, vehicle identification number (VIN), model year, make, model, and engine type.


            The fifth step is to generate a license key for GlobalTIS software on the GM website. You will need to click Generate License Key button and copy the license key that appears on the screen.


            The sixth step is to paste the license key into the License Key field on the splash screen of GlobalTIS software. You will also see a Hardware ID field that shows a unique code for your PC.


            The seventh step is to run the keygen program for GlobalTIS software on your PC. You will need to enter the license key and hardware ID into the corresponding fields and click Generate button.


            The eighth step is to copy the activation key from the keygen program and paste it into the Activation Key field on the splash screen of GlobalTIS software. You will also see a Registration Status field that shows whether your registration is successful or not.


            The ninth step is to click OK button on the splash screen of GlobalTIS software. This will close the web browser and open the main interface of GlobalTIS software.


            How to Use GlobalTIS Software


            Requirements


            You need a compatible device connected to your vehicle's OBD port and your PC's USB port. This device will allow you to communicate with your vehicle's modules and perform diagnostic and programming functions. Some examples of compatible devices are MDI, Tech 2, and J2534.


            You may also need a working internet connection if you want to update your GlobalTIS software or access online resources. However, this is not mandatory for using GlobalTIS software.


            Steps


            The first step is to launch GlobalTIS software from your desktop or Start menu. You will see the main interface of GlobalTIS software with four tabs: Home, Service, Programming, and Security.


            The next step is to select the tab that corresponds to the function you want to perform. Each tab has different options and features that you can use to diagnose and repair your vehicle. Here is a brief overview of each tab:

            • Home: This tab shows the basic information about your GlobalTIS software, such as version, license, and registration status. You can also check for updates, access online resources, and change settings from this tab.
            • Service: This tab allows you to access technical information, diagnostic data, service bulletins, and wiring diagrams for your vehicle. You can also perform tests, clear codes, view live data, and adjust parameters from this tab.
            • Programming: This tab enables you to update the software and firmware of your vehicle's modules. You can also perform security functions, such as key programming, immobilizer reset, and theft deterrent relearn from this tab.
            • Security: This tab provides access to security-related functions that require a valid security access code. You can obtain a security access code from the GM website by entering your personal and vehicle information.

            The final step is to follow the on-screen instructions to complete the function you selected. Depending on the function, you may need to select your vehicle model, enter your VIN, choose a module, connect your device, or perform other actions. You will see the progress and results of the function on the screen.


            Conclusion


            In this article, we have learned how to install and register GlobalTIS software for GM vehicles using a keygen program. We have also learned how to use GlobalTIS software to access technical information, diagnostic data, programming functions, and service bulletins for GM vehicles. GlobalTIS software is a useful tool that can help you diagnose and repair GM vehicles more efficiently and accurately.


            Here are some tips or recommendations for using GlobalTIS software:

            • Make sure you have a compatible device that can connect to your vehicle's OBD port and your PC's USB port.
            • Make sure you have a working internet connection if you want to update your GlobalTIS software or access online resources.
            • Make sure you have a valid license key and activation key for GlobalTIS software. You can generate them using a keygen program.
            • Make sure you have a valid security access code if you want to perform security-related functions. You can obtain it from the GM website.
            • Make sure you follow the on-screen instructions carefully when performing any function with GlobalTIS software.

            We hope this article has been helpful for you. If you have any questions or comments about GlobalTIS software or keygen program, please feel free to leave them below. We would love to hear from you!


            FAQs


            What is the difference between Saab and Opel/Vauxhall versions of GlobalTIS?


            The main difference is that Saab version supports Saab vehicles, while Opel/Vauxhall version supports Opel and Vauxhall vehicles. However, both versions also support some other GM vehicles, such as Chevrolet and Cadillac.


            Where can I download GlobalTIS ISO file and keygen program?


            You can download GlobalTIS ISO file and keygen program from various online sources, such as forums, blogs, or torrent sites. However, you should be careful about the reliability and legality of these sources. Some of them may contain viruses, malware, or fake files that can harm your PC or vehicle.


            How can I update my GlobalTIS software?


            You can update your GlobalTIS software by clicking Check for Updates button on the Home tab of GlobalTIS software. This will open a web browser and redirect you to the GM website. You will need to log in to the GM website with your email address and password and follow the on-screen instructions to download and install the latest version of GlobalTIS software.


            How can I access online resources with GlobalTIS software?


            You can access online resources with GlobalTIS software by clicking Online Resources button on the Home tab of GlobalTIS software. This will open a web browser and redirect you to the GM website. You will need to log in to the GM website with your email address and password and select the resource you want to access from the menu. Some examples of online resources are service manuals, wiring diagrams, recall information, technical tips, and training courses.


            How can I contact GM customer support if I have any issues with GlobalTIS software?


            You can contact GM customer support if you have any issues with GlobalTIS software by clicking Contact Us button on the Home tab of GlobalTIS software. This will open a web browser and redirect you to the GM website. You will need to log in to the GM website with your email address and password and fill out a form with your name, email address, phone number, subject, and message. You can also attach files if needed. You will receive a reply from GM customer support within 24 hours.


            -
            -
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Cyberfoot 2010 Registration Code Serial.md b/spaces/tioseFevbu/cartoon-converter/scripts/Cyberfoot 2010 Registration Code Serial.md
deleted file mode 100644
index a3ed09f55fcea06b12621bef50b8bc2ae8176b00..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Cyberfoot 2010 Registration Code Serial.md
+++ /dev/null
@@ -1,80 +0,0 @@
            -

            Cyberfoot 2010 Registration Code Serial: How to Get It and Why You Need It

            -

            If you are a fan of soccer (or football, as it is called in some parts of the world), you may have heard of or played Cyberfoot 2010, a popular soccer management game. In this game, you can be a coach in national leagues and international competitions, such as the Champions League of Europe, Libertadores Cup, and FIFA World Cup. You can also manage your team's finances, tactics, transfers, training, and more. However, to enjoy all these features and updates, you need a registration code serial, which is a unique alphanumeric code that activates the full version of the game. In this article, we will tell you how to get a registration code serial for Cyberfoot 2010, why you need it, what are the risks and drawbacks of using free serial keys, and what are the alternatives to using free serial keys.

            -

            cyberfoot 2010 registration code serial


            Download Zip ————— https://urlcod.com/2uHyzR



            -

            What is Cyberfoot 2010 and Why is it Popular?

            -

            Cyberfoot 2010 is a soccer management game developed by Emmanuel Santos. It was released in 2009 for Windows PC. The game has an attractive and user-friendly interface with informative displays of all the features. There are more than 350 teams and more than 6000 real players in the game. The game is available in several languages, such as English, Spanish, French, Italian, Portuguese, Arabic, German, Polish, Romanian, and Bulgarian.

            -

            Cyberfoot 2010 is a soccer management game

            -

            In Cyberfoot 2010, you can be a coach in national leagues and international competitions. You can choose from various countries and divisions to start your career. You can also create your own custom league with your own teams and players. You can manage your team's finances, tactics, transfers, training, and more. You can also watch the matches in real time or skip them if you prefer. You can also view detailed statistics and reports of your team's performance.

            -

            Cyberfoot 2010 has an attractive and user-friendly interface

            -

            One of the reasons why Cyberfoot 2010 is popular among soccer fans is its attractive and user-friendly interface. The game has a simple yet informative design that allows you to access all the features easily. The game also has colorful graphics and animations that enhance the gameplay experience. The game also has sound effects and music that create a realistic atmosphere. The game also has a help section that explains all the functions and options of the game.

            -

            Cyberfoot 2010 has more than 350 teams and 6000 real players

            -

            Another reason why Cyberfoot 2010 is popular among soccer fans is its large database of teams and players. The game has more than 350 teams from various countries and divisions, such as Argentina, Brazil, England, France, Germany, Italy, Spain, and more. The game also has more than 6000 real players with their names, photos, skills, and attributes. You can also edit the teams and players to your liking or create your own custom ones. The game also updates the teams and players regularly with the latest transfers and changes.

            -

            -

            What is a Registration Code Serial and Why Do You Need It?

            -

            A registration code serial is a unique alphanumeric code that activates the full version of the game. You need a registration code serial to access all the features and updates of the game. You also need a registration code serial to play online and join tournaments.

            -

            A registration code serial is a unique alphanumeric code that activates the full version of the game

            -

            When you download Cyberfoot 2010 from the official website of the game, you get a trial version that allows you to play for 10 days. After that, you need to buy a registration code serial to continue playing. A registration code serial is a unique alphanumeric code that you enter in the game to activate the full version. The registration code serial costs $10 USD and you can pay with PayPal or credit card. You can also buy multiple registration codes serials if you want to play on different devices or share with your friends.

            -

            You need a registration code serial to access all the features and updates of the game

            -

            With a registration code serial, you can access all the features and updates of the game. You can play in all the leagues and competitions, manage all the teams and players, watch all the matches in real time, view all the statistics and reports, and more. You can also get the latest updates of the game with new teams, players, transfers, and changes. You can also download additional files from the official website of the game, such as new graphics, sounds, languages, and more.

            -

            You also need a registration code serial to play online and join tournaments

            -

            Another benefit of having a registration code serial is that you can play online and join tournaments. You can connect with other players from around the world and compete in friendly matches or official tournaments. You can also create your own tournaments and invite your friends or other players to join. You can also chat with other players and exchange tips and tricks. Playing online and joining tournaments can make your gameplay more fun and challenging.

            -

            How to Get a Registration Code Serial for Cyberfoot 2010?

            -

            There are two ways to get a registration code serial for Cyberfoot 2010: you can buy one from the official website of the game or you can find one for free from various websites and sources online. However, you should be careful of the risks and drawbacks of using free serial keys.

            -

            You can buy a registration code serial from the official website of the game

            -

            The easiest and safest way to get a registration code serial for Cyberfoot 2010 is to buy one from the official website of the game. As mentioned before, you can buy a registration code serial for $10 USD using PayPal or credit card. You will receive an email with your registration code serial after your payment is confirmed. You can then enter your registration code serial in the game to activate the full version. This way, you can support the developer of the game and enjoy all the features and updates without any problems.

            -

            You can also find free serial keys from various websites and sources online

            -

            If you don't want to spend money on buying a registration code serial for Cyberfoot 2010, you can also try to find one for free from various websites and sources online. There are many websites that claim to offer free serial keys for Cyberfoot 2010 or other games. Some of these websites may require you to complete surveys, download files, or register accounts before giving you a serial key. Some of these websites may also provide links or codes that you can copy and paste in the game to activate it.

            -

            However, you should be careful of the risks and drawbacks of using free serial keys

            -

While using free serial keys may seem tempting, you should be aware of the risks and drawbacks of doing so. Free serial keys may not work or may expire soon. Free serial keys may contain viruses or malware that can harm your device or data. Free serial keys may violate the terms and conditions of the game and result in legal issues or penalties. Therefore, you should be careful of the risks and drawbacks of using free serial keys and avoid them if possible.

            -

            What are the Alternatives to Using Free Serial Keys?

            -

If you don't want to buy a registration code serial for Cyberfoot 2010 or use free serial keys, you may wonder what the alternatives are. Here are some suggestions that you can try:

            -

            You can try other soccer management games that are similar to Cyberfoot 2010

            -

            There are many other soccer management games that are similar to Cyberfoot 2010 that you can play for free or for a low price. Some of these games are Football Manager, FIFA Manager, PES Club Manager, Top Eleven, and more. These games have similar features and gameplay as Cyberfoot 2010, such as managing teams, players, tactics, transfers, finances, and more. You can also play online and join tournaments with other players. You can find these games on various platforms, such as PC, mobile, web, and console.

            -

            You can also use tips and tricks to improve your gameplay and performance in Cyberfoot 2010

            -

            If you want to play Cyberfoot 2010 better and win more matches, you can also use tips and tricks to improve your gameplay and performance. You can learn from various sources, such as online guides, videos, blogs, articles, and more. You can also ask for advice from other players and experts. Some of the tips and tricks that you can use are:

            -
              -
• Choose a team that suits your style and preference
• Study your opponents and their strengths and weaknesses
• Adjust your tactics and formations according to the situation
• Train your players regularly and improve their skills and attributes
• Buy and sell players wisely and balance your budget
• Use substitutions and injuries strategically
• Save your progress frequently and backup your files
            -

            You can also join online communities and forums of Cyberfoot 2010 fans and players

            -

            Another way to enjoy Cyberfoot 2010 is to join online communities and forums of Cyberfoot 2010 fans and players. You can interact with other people who share your passion and interest in the game. You can also exchange opinions, experiences, feedback, suggestions, and more. You can also find new friends, partners, rivals, and mentors. You can also participate in various activities, such as contests, quizzes, polls, events, and more. You can find these online communities and forums on various platforms, such as Facebook, Twitter, Reddit, Discord, YouTube, Twitch, and more.

            -

            Conclusion

            -

            Cyberfoot 2010 is a popular soccer management game that allows you to be a coach in national leagues and international competitions. To play the full version of the game with all the features and updates, you need a registration code serial that costs $10 USD. You can also find free serial keys from various websites and sources online, but you should be careful of the risks and drawbacks of using them. Alternatively, you can try other soccer management games that are similar to Cyberfoot 2010 or use tips and tricks to improve your gameplay and performance in Cyberfoot 2010. You can also join online communities and forums of Cyberfoot 2010 fans and players to interact and have fun with them. We hope this article has helped you learn more about Cyberfoot 2010 registration code serial and how to get it and why you need it. If you have any questions or comments, feel free to leave them below.

            -

            FAQs

            -

            Here are some frequently asked questions about Cyberfoot 2010 registration code serial:

            -

            Q: How can I buy a registration code serial for Cyberfoot 2010?

            -

            A: You can buy a registration code serial for Cyberfoot 2010 from the official website of the game. You can pay with PayPal or credit card and you will receive an email with your registration code serial after your payment is confirmed. You can then enter your registration code serial in the game to activate the full version.

            -

            Q: How can I find a free serial key for Cyberfoot 2010?

            -

            A: You can find a free serial key for Cyberfoot 2010 from various websites and sources online. However, you should be careful of the risks and drawbacks of using free serial keys, such as not working, expiring, containing viruses, violating terms and conditions, and more. Therefore, you should avoid using free serial keys if possible.

            -

            Q: What are some other soccer management games that are similar to Cyberfoot 2010?

            -

            A: Some other soccer management games that are similar to Cyberfoot 2010 are Football Manager, FIFA Manager, PES Club Manager, Top Eleven, and more. These games have similar features and gameplay as Cyberfoot 2010, such as managing teams, players, tactics, transfers, finances, and more. You can also play online and join tournaments with other players. You can find these games on various platforms, such as PC, mobile, web, and console.

            -

            Q: What are some tips and tricks to improve my gameplay and performance in Cyberfoot 2010?

            -

            A: Some tips and tricks to improve your gameplay and performance in Cyberfoot 2010 are:

            -
              -
• Choose a team that suits your style and preference
• Study your opponents and their strengths and weaknesses
• Adjust your tactics and formations according to the situation
• Train your players regularly and improve their skills and attributes
• Buy and sell players wisely and balance your budget
• Use substitutions and injuries strategically
• Save your progress frequently and backup your files
            -

            Q: What are some online communities and forums of Cyberfoot 2010 fans and players?

            -

            A: Some online communities and forums of Cyberfoot 2010 fans and players are:

            -
              -
• Cyberfoot Official Website
• Cyberfoot Facebook Page
• Cyberfoot Twitter Account
• Cyberfoot Reddit Subreddit
• Cyberfoot Discord Server
• Cyberfoot YouTube Channel
• Cyberfoot Twitch Channel

            -
            -
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Download Lucky Patcher Apk Mwb.md b/spaces/tioseFevbu/cartoon-converter/scripts/Download Lucky Patcher Apk Mwb.md
deleted file mode 100644
index 4e1142c7c391e245d67c3b621e81242fd4b85729..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Download Lucky Patcher Apk Mwb.md
+++ /dev/null
@@ -1,43 +0,0 @@

            How to Download Lucky Patcher Apk Mwb and Enjoy Its Features

            -

            Lucky Patcher is a popular Android app that allows you to modify other apps and games, block ads, remove unwanted system apps, backup and restore apps, move apps to SD card, and bypass license verification. It is a powerful tool that can help you customize your Android device according to your preferences.

            -

            However, Lucky Patcher is not available on the Google Play Store due to its nature. You need to download it from a third-party source, such as the official website or APKPure. Moreover, you need to have a rooted device to use all the features of Lucky Patcher.

            -

            Download Lucky Patcher Apk Mwb


            Download File →→→ https://urlcod.com/2uHxrY



            -

            In this article, we will show you how to download Lucky Patcher Apk Mwb and install it on your Android device. We will also explain some of the benefits and risks of using Lucky Patcher.

            -

            What is Lucky Patcher Apk Mwb?

            -

            Lucky Patcher Apk Mwb is a modified version of the original Lucky Patcher app. It has some extra features and enhancements that are not present in the official version. For example, it has more patches for different apps and games, it can remove Google ads from any app, it can clone apps with different signatures, and it can disable signature verification for modded apps.

            -

            Lucky Patcher Apk Mwb is also updated more frequently than the official version. It can support the latest Android versions and devices. However, it is not developed by the original developer of Lucky Patcher, ChelpuS. It is created by an unknown modder who goes by the name of Mwb.

            -

            How to Download Lucky Patcher Apk Mwb?

            -

            To download Lucky Patcher Apk Mwb, you need to follow these steps:

            -
            1. Go to this link in your browser.
            2. Click on the green "Download APK" button.
            3. Wait for the download to finish.
            4. Locate the downloaded file on your device and tap on it (a computer-side alternative using adb is sketched after these steps).
            5. If you see a warning message about installing unknown apps, go to your device settings and enable "Allow from this source".
            6. Follow the installation instructions on the screen.
            7. Launch Lucky Patcher Apk Mwb from your app drawer.
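            If you would rather install from a computer, the sideload in steps 4-7 can also be done over adb. Below is a minimal Python sketch, not part of the original guide: it assumes adb is installed on the PC with USB debugging enabled on the phone, and the filename lucky_patcher_mwb.apk is a placeholder for whatever file the download actually produced.

            import subprocess

            def sideload_apk(apk_path: str) -> None:
                # "adb devices" confirms the phone is attached and authorized;
                # "adb install -r" installs the APK, replacing any existing copy.
                subprocess.run(["adb", "devices"], check=True)
                subprocess.run(["adb", "install", "-r", apk_path], check=True)

            if __name__ == "__main__":
                sideload_apk("lucky_patcher_mwb.apk")  # placeholder filename

            Because of the -r flag, the sketch is safe to re-run over an existing installation.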
            -

            What are the Benefits of Using Lucky Patcher Apk Mwb?

            -

            Some of the benefits of using Lucky Patcher Apk Mwb are:

            -
            • You can enjoy more features and patches than the official version.
            • You can remove annoying ads from any app or game.
            • You can unlock premium features and in-app purchases for free.
            • You can back up and restore your apps and data.
            • You can move apps to the SD card and save space on your internal storage.
            • You can clone apps with different signatures and run multiple accounts.
            • You can disable signature verification for modded apps and install them without any problem.
            -

            What are the Risks of Using Lucky Patcher Apk Mwb?

            -

            Some of the risks of using Lucky Patcher Apk Mwb are:

            -

            • You may violate the terms and conditions of some apps and games by modifying them.
            • You may face legal issues if you use Lucky Patcher for piracy or other illegal purposes.
            • You may damage your device or lose your data if you misuse Lucky Patcher or apply the wrong patches.
            • You may expose your device to malware or viruses if you download Lucky Patcher from untrusted sources (see the checksum sketch after this list).
            • You may get banned or blocked by some apps or games if they detect that you are using Lucky Patcher.
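            Given the malware risk above, it is worth verifying a download before installing it. The sketch below is a minimal Python example under one assumption: that the site you downloaded from publishes a SHA-256 checksum for the file. The expected value and the filename are placeholders, not real values.

            import hashlib

            # Placeholder - replace with the checksum published by the download site.
            EXPECTED_SHA256 = "replace-with-published-checksum"

            def sha256_of(path: str) -> str:
                digest = hashlib.sha256()
                with open(path, "rb") as apk:
                    # Read in chunks so large APKs do not need to fit in memory.
                    for chunk in iter(lambda: apk.read(8192), b""):
                        digest.update(chunk)
                return digest.hexdigest()

            if __name__ == "__main__":
                if sha256_of("lucky_patcher_mwb.apk") == EXPECTED_SHA256:
                    print("Checksum matches the published value")
                else:
                    print("Checksum mismatch - do not install this APK")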

            \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pyparsing/actions.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pyparsing/actions.py deleted file mode 100644 index f72c66e743146c7a5b70a5440e9ab5459f10245b..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pyparsing/actions.py +++ /dev/null @@ -1,207 +0,0 @@ -# actions.py - -from .exceptions import ParseException -from .util import col - - -class OnlyOnce: - """ - Wrapper for parse actions, to ensure they are only called once. - """ - - def __init__(self, method_call): - from .core import _trim_arity - - self.callable = _trim_arity(method_call) - self.called = False - - def __call__(self, s, l, t): - if not self.called: - results = self.callable(s, l, t) - self.called = True - return results - raise ParseException(s, l, "OnlyOnce obj called multiple times w/out reset") - - def reset(self): - """ - Allow the associated parse action to be called once more. - """ - - self.called = False - - -def match_only_at_col(n): - """ - Helper method for defining parse actions that require matching at - a specific column in the input text. - """ - - def verify_col(strg, locn, toks): - if col(locn, strg) != n: - raise ParseException(strg, locn, "matched token not at column {}".format(n)) - - return verify_col - - -def replace_with(repl_str): - """ - Helper method for common parse actions that simply return - a literal value. Especially useful when used with - :class:`transform_string` (). - - Example:: - - num = Word(nums).set_parse_action(lambda toks: int(toks[0])) - na = one_of("N/A NA").set_parse_action(replace_with(math.nan)) - term = na | num - - term[1, ...].parse_string("324 234 N/A 234") # -> [324, 234, nan, 234] - """ - return lambda s, l, t: [repl_str] - - -def remove_quotes(s, l, t): - """ - Helper parse action for removing quotation marks from parsed - quoted strings. - - Example:: - - # by default, quotation marks are included in parsed results - quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["'Now is the Winter of our Discontent'"] - - # use remove_quotes to strip quotation marks from parsed results - quoted_string.set_parse_action(remove_quotes) - quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["Now is the Winter of our Discontent"] - """ - return t[0][1:-1] - - -def with_attribute(*args, **attr_dict): - """ - Helper to create a validating parse action to be used with start - tags created with :class:`make_xml_tags` or - :class:`make_html_tags`. Use ``with_attribute`` to qualify - a starting tag with a required attribute value, to avoid false - matches on common tags such as ```` or ``
            ``. - - Call ``with_attribute`` with a series of attribute names and - values. Specify the list of filter attributes names and values as: - - - keyword arguments, as in ``(align="right")``, or - - as an explicit dict with ``**`` operator, when an attribute - name is also a Python reserved word, as in ``**{"class":"Customer", "align":"right"}`` - - a list of name-value tuples, as in ``(("ns1:class", "Customer"), ("ns2:align", "right"))`` - - For attribute names with a namespace prefix, you must use the second - form. Attribute names are matched insensitive to upper/lower case. - - If just testing for ``class`` (with or without a namespace), use - :class:`with_class`. - - To verify that the attribute exists, but without specifying a value, - pass ``with_attribute.ANY_VALUE`` as the value. - - Example:: - - html = ''' -
            - Some text -
            1 4 0 1 0
            -
            1,3 2,3 1,1
            -
            this has no type
            -
            - - ''' - div,div_end = make_html_tags("div") - - # only match div tag having a type attribute with value "grid" - div_grid = div().set_parse_action(with_attribute(type="grid")) - grid_expr = div_grid + SkipTo(div | div_end)("body") - for grid_header in grid_expr.search_string(html): - print(grid_header.body) - - # construct a match with any div tag having a type attribute, regardless of the value - div_any_type = div().set_parse_action(with_attribute(type=with_attribute.ANY_VALUE)) - div_expr = div_any_type + SkipTo(div | div_end)("body") - for div_header in div_expr.search_string(html): - print(div_header.body) - - prints:: - - 1 4 0 1 0 - - 1 4 0 1 0 - 1,3 2,3 1,1 - """ - if args: - attrs = args[:] - else: - attrs = attr_dict.items() - attrs = [(k, v) for k, v in attrs] - - def pa(s, l, tokens): - for attrName, attrValue in attrs: - if attrName not in tokens: - raise ParseException(s, l, "no matching attribute " + attrName) - if attrValue != with_attribute.ANY_VALUE and tokens[attrName] != attrValue: - raise ParseException( - s, - l, - "attribute {!r} has value {!r}, must be {!r}".format( - attrName, tokens[attrName], attrValue - ), - ) - - return pa - - -with_attribute.ANY_VALUE = object() - - -def with_class(classname, namespace=""): - """ - Simplified version of :class:`with_attribute` when - matching on a div class - made difficult because ``class`` is - a reserved word in Python. - - Example:: - - html = ''' -
            - Some text -
            1 4 0 1 0
            -
            1,3 2,3 1,1
            -
            this <div> has no class
            -
            - - ''' - div,div_end = make_html_tags("div") - div_grid = div().set_parse_action(with_class("grid")) - - grid_expr = div_grid + SkipTo(div | div_end)("body") - for grid_header in grid_expr.search_string(html): - print(grid_header.body) - - div_any_type = div().set_parse_action(with_class(withAttribute.ANY_VALUE)) - div_expr = div_any_type + SkipTo(div | div_end)("body") - for div_header in div_expr.search_string(html): - print(div_header.body) - - prints:: - - 1 4 0 1 0 - - 1 4 0 1 0 - 1,3 2,3 1,1 - """ - classattr = "{}:class".format(namespace) if namespace else "class" - return with_attribute(**{classattr: classname}) - - -# pre-PEP8 compatibility symbols -replaceWith = replace_with -removeQuotes = remove_quotes -withAttribute = with_attribute -withClass = with_class -matchOnlyAtCol = match_only_at_col diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/bdist_egg.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/bdist_egg.py deleted file mode 100644 index 11a1c6be28ad008b7c083c229bb0df644ec58a0e..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/bdist_egg.py +++ /dev/null @@ -1,457 +0,0 @@ -"""setuptools.command.bdist_egg - -Build .egg distributions""" - -from distutils.dir_util import remove_tree, mkpath -from distutils import log -from types import CodeType -import sys -import os -import re -import textwrap -import marshal - -from pkg_resources import get_build_platform, Distribution -from setuptools.extension import Library -from setuptools import Command -from .._path import ensure_directory - -from sysconfig import get_path, get_python_version - - -def _get_purelib(): - return get_path("purelib") - - -def strip_module(filename): - if '.' 
in filename: - filename = os.path.splitext(filename)[0] - if filename.endswith('module'): - filename = filename[:-6] - return filename - - -def sorted_walk(dir): - """Do os.walk in a reproducible way, - independent of indeterministic filesystem readdir order - """ - for base, dirs, files in os.walk(dir): - dirs.sort() - files.sort() - yield base, dirs, files - - -def write_stub(resource, pyfile): - _stub_template = textwrap.dedent(""" - def __bootstrap__(): - global __bootstrap__, __loader__, __file__ - import sys, pkg_resources, importlib.util - __file__ = pkg_resources.resource_filename(__name__, %r) - __loader__ = None; del __bootstrap__, __loader__ - spec = importlib.util.spec_from_file_location(__name__,__file__) - mod = importlib.util.module_from_spec(spec) - spec.loader.exec_module(mod) - __bootstrap__() - """).lstrip() - with open(pyfile, 'w') as f: - f.write(_stub_template % resource) - - -class bdist_egg(Command): - description = "create an \"egg\" distribution" - - user_options = [ - ('bdist-dir=', 'b', - "temporary directory for creating the distribution"), - ('plat-name=', 'p', "platform name to embed in generated filenames " - "(default: %s)" % get_build_platform()), - ('exclude-source-files', None, - "remove all .py files from the generated egg"), - ('keep-temp', 'k', - "keep the pseudo-installation tree around after " + - "creating the distribution archive"), - ('dist-dir=', 'd', - "directory to put final built distributions in"), - ('skip-build', None, - "skip rebuilding everything (for testing/debugging)"), - ] - - boolean_options = [ - 'keep-temp', 'skip-build', 'exclude-source-files' - ] - - def initialize_options(self): - self.bdist_dir = None - self.plat_name = None - self.keep_temp = 0 - self.dist_dir = None - self.skip_build = 0 - self.egg_output = None - self.exclude_source_files = None - - def finalize_options(self): - ei_cmd = self.ei_cmd = self.get_finalized_command("egg_info") - self.egg_info = ei_cmd.egg_info - - if self.bdist_dir is None: - bdist_base = self.get_finalized_command('bdist').bdist_base - self.bdist_dir = os.path.join(bdist_base, 'egg') - - if self.plat_name is None: - self.plat_name = get_build_platform() - - self.set_undefined_options('bdist', ('dist_dir', 'dist_dir')) - - if self.egg_output is None: - - # Compute filename of the output egg - basename = Distribution( - None, None, ei_cmd.egg_name, ei_cmd.egg_version, - get_python_version(), - self.distribution.has_ext_modules() and self.plat_name - ).egg_name() - - self.egg_output = os.path.join(self.dist_dir, basename + '.egg') - - def do_install_data(self): - # Hack for packages that install data to install's --install-lib - self.get_finalized_command('install').install_lib = self.bdist_dir - - site_packages = os.path.normcase(os.path.realpath(_get_purelib())) - old, self.distribution.data_files = self.distribution.data_files, [] - - for item in old: - if isinstance(item, tuple) and len(item) == 2: - if os.path.isabs(item[0]): - realpath = os.path.realpath(item[0]) - normalized = os.path.normcase(realpath) - if normalized == site_packages or normalized.startswith( - site_packages + os.sep - ): - item = realpath[len(site_packages) + 1:], item[1] - # XXX else: raise ??? 
- self.distribution.data_files.append(item) - - try: - log.info("installing package data to %s", self.bdist_dir) - self.call_command('install_data', force=0, root=None) - finally: - self.distribution.data_files = old - - def get_outputs(self): - return [self.egg_output] - - def call_command(self, cmdname, **kw): - """Invoke reinitialized command `cmdname` with keyword args""" - for dirname in INSTALL_DIRECTORY_ATTRS: - kw.setdefault(dirname, self.bdist_dir) - kw.setdefault('skip_build', self.skip_build) - kw.setdefault('dry_run', self.dry_run) - cmd = self.reinitialize_command(cmdname, **kw) - self.run_command(cmdname) - return cmd - - def run(self): # noqa: C901 # is too complex (14) # FIXME - # Generate metadata first - self.run_command("egg_info") - # We run install_lib before install_data, because some data hacks - # pull their data path from the install_lib command. - log.info("installing library code to %s", self.bdist_dir) - instcmd = self.get_finalized_command('install') - old_root = instcmd.root - instcmd.root = None - if self.distribution.has_c_libraries() and not self.skip_build: - self.run_command('build_clib') - cmd = self.call_command('install_lib', warn_dir=0) - instcmd.root = old_root - - all_outputs, ext_outputs = self.get_ext_outputs() - self.stubs = [] - to_compile = [] - for (p, ext_name) in enumerate(ext_outputs): - filename, ext = os.path.splitext(ext_name) - pyfile = os.path.join(self.bdist_dir, strip_module(filename) + - '.py') - self.stubs.append(pyfile) - log.info("creating stub loader for %s", ext_name) - if not self.dry_run: - write_stub(os.path.basename(ext_name), pyfile) - to_compile.append(pyfile) - ext_outputs[p] = ext_name.replace(os.sep, '/') - - if to_compile: - cmd.byte_compile(to_compile) - if self.distribution.data_files: - self.do_install_data() - - # Make the EGG-INFO directory - archive_root = self.bdist_dir - egg_info = os.path.join(archive_root, 'EGG-INFO') - self.mkpath(egg_info) - if self.distribution.scripts: - script_dir = os.path.join(egg_info, 'scripts') - log.info("installing scripts to %s", script_dir) - self.call_command('install_scripts', install_dir=script_dir, - no_ep=1) - - self.copy_metadata_to(egg_info) - native_libs = os.path.join(egg_info, "native_libs.txt") - if all_outputs: - log.info("writing %s", native_libs) - if not self.dry_run: - ensure_directory(native_libs) - libs_file = open(native_libs, 'wt') - libs_file.write('\n'.join(all_outputs)) - libs_file.write('\n') - libs_file.close() - elif os.path.isfile(native_libs): - log.info("removing %s", native_libs) - if not self.dry_run: - os.unlink(native_libs) - - write_safety_flag( - os.path.join(archive_root, 'EGG-INFO'), self.zip_safe() - ) - - if os.path.exists(os.path.join(self.egg_info, 'depends.txt')): - log.warn( - "WARNING: 'depends.txt' will not be used by setuptools 0.6!\n" - "Use the install_requires/extras_require setup() args instead." 
- ) - - if self.exclude_source_files: - self.zap_pyfiles() - - # Make the archive - make_zipfile(self.egg_output, archive_root, verbose=self.verbose, - dry_run=self.dry_run, mode=self.gen_header()) - if not self.keep_temp: - remove_tree(self.bdist_dir, dry_run=self.dry_run) - - # Add to 'Distribution.dist_files' so that the "upload" command works - getattr(self.distribution, 'dist_files', []).append( - ('bdist_egg', get_python_version(), self.egg_output)) - - def zap_pyfiles(self): - log.info("Removing .py files from temporary directory") - for base, dirs, files in walk_egg(self.bdist_dir): - for name in files: - path = os.path.join(base, name) - - if name.endswith('.py'): - log.debug("Deleting %s", path) - os.unlink(path) - - if base.endswith('__pycache__'): - path_old = path - - pattern = r'(?P.+)\.(?P[^.]+)\.pyc' - m = re.match(pattern, name) - path_new = os.path.join( - base, os.pardir, m.group('name') + '.pyc') - log.info( - "Renaming file from [%s] to [%s]" - % (path_old, path_new)) - try: - os.remove(path_new) - except OSError: - pass - os.rename(path_old, path_new) - - def zip_safe(self): - safe = getattr(self.distribution, 'zip_safe', None) - if safe is not None: - return safe - log.warn("zip_safe flag not set; analyzing archive contents...") - return analyze_egg(self.bdist_dir, self.stubs) - - def gen_header(self): - return 'w' - - def copy_metadata_to(self, target_dir): - "Copy metadata (egg info) to the target_dir" - # normalize the path (so that a forward-slash in egg_info will - # match using startswith below) - norm_egg_info = os.path.normpath(self.egg_info) - prefix = os.path.join(norm_egg_info, '') - for path in self.ei_cmd.filelist.files: - if path.startswith(prefix): - target = os.path.join(target_dir, path[len(prefix):]) - ensure_directory(target) - self.copy_file(path, target) - - def get_ext_outputs(self): - """Get a list of relative paths to C extensions in the output distro""" - - all_outputs = [] - ext_outputs = [] - - paths = {self.bdist_dir: ''} - for base, dirs, files in sorted_walk(self.bdist_dir): - for filename in files: - if os.path.splitext(filename)[1].lower() in NATIVE_EXTENSIONS: - all_outputs.append(paths[base] + filename) - for filename in dirs: - paths[os.path.join(base, filename)] = (paths[base] + - filename + '/') - - if self.distribution.has_ext_modules(): - build_cmd = self.get_finalized_command('build_ext') - for ext in build_cmd.extensions: - if isinstance(ext, Library): - continue - fullname = build_cmd.get_ext_fullname(ext.name) - filename = build_cmd.get_ext_filename(fullname) - if not os.path.basename(filename).startswith('dl-'): - if os.path.exists(os.path.join(self.bdist_dir, filename)): - ext_outputs.append(filename) - - return all_outputs, ext_outputs - - -NATIVE_EXTENSIONS = dict.fromkeys('.dll .so .dylib .pyd'.split()) - - -def walk_egg(egg_dir): - """Walk an unpacked egg's contents, skipping the metadata directory""" - walker = sorted_walk(egg_dir) - base, dirs, files = next(walker) - if 'EGG-INFO' in dirs: - dirs.remove('EGG-INFO') - yield base, dirs, files - for bdf in walker: - yield bdf - - -def analyze_egg(egg_dir, stubs): - # check for existing flag in EGG-INFO - for flag, fn in safety_flags.items(): - if os.path.exists(os.path.join(egg_dir, 'EGG-INFO', fn)): - return flag - if not can_scan(): - return False - safe = True - for base, dirs, files in walk_egg(egg_dir): - for name in files: - if name.endswith('.py') or name.endswith('.pyw'): - continue - elif name.endswith('.pyc') or name.endswith('.pyo'): - # always scan, even if 
we already know we're not safe - safe = scan_module(egg_dir, base, name, stubs) and safe - return safe - - -def write_safety_flag(egg_dir, safe): - # Write or remove zip safety flag file(s) - for flag, fn in safety_flags.items(): - fn = os.path.join(egg_dir, fn) - if os.path.exists(fn): - if safe is None or bool(safe) != flag: - os.unlink(fn) - elif safe is not None and bool(safe) == flag: - f = open(fn, 'wt') - f.write('\n') - f.close() - - -safety_flags = { - True: 'zip-safe', - False: 'not-zip-safe', -} - - -def scan_module(egg_dir, base, name, stubs): - """Check whether module possibly uses unsafe-for-zipfile stuff""" - - filename = os.path.join(base, name) - if filename[:-1] in stubs: - return True # Extension module - pkg = base[len(egg_dir) + 1:].replace(os.sep, '.') - module = pkg + (pkg and '.' or '') + os.path.splitext(name)[0] - if sys.version_info < (3, 7): - skip = 12 # skip magic & date & file size - else: - skip = 16 # skip magic & reserved? & date & file size - f = open(filename, 'rb') - f.read(skip) - code = marshal.load(f) - f.close() - safe = True - symbols = dict.fromkeys(iter_symbols(code)) - for bad in ['__file__', '__path__']: - if bad in symbols: - log.warn("%s: module references %s", module, bad) - safe = False - if 'inspect' in symbols: - for bad in [ - 'getsource', 'getabsfile', 'getsourcefile', 'getfile' - 'getsourcelines', 'findsource', 'getcomments', 'getframeinfo', - 'getinnerframes', 'getouterframes', 'stack', 'trace' - ]: - if bad in symbols: - log.warn("%s: module MAY be using inspect.%s", module, bad) - safe = False - return safe - - -def iter_symbols(code): - """Yield names and strings used by `code` and its nested code objects""" - for name in code.co_names: - yield name - for const in code.co_consts: - if isinstance(const, str): - yield const - elif isinstance(const, CodeType): - for name in iter_symbols(const): - yield name - - -def can_scan(): - if not sys.platform.startswith('java') and sys.platform != 'cli': - # CPython, PyPy, etc. - return True - log.warn("Unable to analyze compiled code on this platform.") - log.warn("Please ask the author to include a 'zip_safe'" - " setting (either True or False) in the package's setup.py") - - -# Attribute names of options for commands that might need to be convinced to -# install to the egg build directory - -INSTALL_DIRECTORY_ATTRS = [ - 'install_lib', 'install_dir', 'install_data', 'install_base' -] - - -def make_zipfile(zip_filename, base_dir, verbose=0, dry_run=0, compress=True, - mode='w'): - """Create a zip file from all the files under 'base_dir'. The output - zip file will be named 'base_dir' + ".zip". Uses either the "zipfile" - Python module (if available) or the InfoZIP "zip" utility (if installed - and found on the default search path). If neither tool is available, - raises DistutilsExecError. Returns the name of the output zip file. 
- """ - import zipfile - - mkpath(os.path.dirname(zip_filename), dry_run=dry_run) - log.info("creating '%s' and adding '%s' to it", zip_filename, base_dir) - - def visit(z, dirname, names): - for name in names: - path = os.path.normpath(os.path.join(dirname, name)) - if os.path.isfile(path): - p = path[len(base_dir) + 1:] - if not dry_run: - z.write(path, p) - log.debug("adding '%s'", p) - - compression = zipfile.ZIP_DEFLATED if compress else zipfile.ZIP_STORED - if not dry_run: - z = zipfile.ZipFile(zip_filename, mode, compression=compression) - for dirname, dirs, files in sorted_walk(base_dir): - visit(z, dirname, files) - z.close() - else: - for dirname, dirs, files in sorted_walk(base_dir): - visit(None, dirname, files) - return zip_filename diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/test.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/test.py deleted file mode 100644 index 652f3e4a0fab7fe964a41b17a58293188f94adf2..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/test.py +++ /dev/null @@ -1,251 +0,0 @@ -import os -import operator -import sys -import contextlib -import itertools -import unittest -from distutils.errors import DistutilsError, DistutilsOptionError -from distutils import log -from unittest import TestLoader - -from pkg_resources import ( - resource_listdir, - resource_exists, - normalize_path, - working_set, - evaluate_marker, - add_activation_listener, - require, -) -from .._importlib import metadata -from setuptools import Command -from setuptools.extern.more_itertools import unique_everseen -from setuptools.extern.jaraco.functools import pass_none - - -class ScanningLoader(TestLoader): - def __init__(self): - TestLoader.__init__(self) - self._visited = set() - - def loadTestsFromModule(self, module, pattern=None): - """Return a suite of all tests cases contained in the given module - - If the module is a package, load tests from all the modules in it. - If the module has an ``additional_tests`` function, call it and add - the return value to the tests. - """ - if module in self._visited: - return None - self._visited.add(module) - - tests = [] - tests.append(TestLoader.loadTestsFromModule(self, module)) - - if hasattr(module, "additional_tests"): - tests.append(module.additional_tests()) - - if hasattr(module, '__path__'): - for file in resource_listdir(module.__name__, ''): - if file.endswith('.py') and file != '__init__.py': - submodule = module.__name__ + '.' + file[:-3] - else: - if resource_exists(module.__name__, file + '/__init__.py'): - submodule = module.__name__ + '.' + file - else: - continue - tests.append(self.loadTestsFromName(submodule)) - - if len(tests) != 1: - return self.suiteClass(tests) - else: - return tests[0] # don't create a nested suite for only one return - - -# adapted from jaraco.classes.properties:NonDataProperty -class NonDataProperty: - def __init__(self, fget): - self.fget = fget - - def __get__(self, obj, objtype=None): - if obj is None: - return self - return self.fget(obj) - - -class test(Command): - """Command to run unit tests after in-place build""" - - description = "run unit tests after in-place build (deprecated)" - - user_options = [ - ('test-module=', 'm', "Run 'test_suite' in specified module"), - ( - 'test-suite=', - 's', - "Run single test, case or suite (e.g. 
'module.test_suite')", - ), - ('test-runner=', 'r', "Test runner to use"), - ] - - def initialize_options(self): - self.test_suite = None - self.test_module = None - self.test_loader = None - self.test_runner = None - - def finalize_options(self): - - if self.test_suite and self.test_module: - msg = "You may specify a module or a suite, but not both" - raise DistutilsOptionError(msg) - - if self.test_suite is None: - if self.test_module is None: - self.test_suite = self.distribution.test_suite - else: - self.test_suite = self.test_module + ".test_suite" - - if self.test_loader is None: - self.test_loader = getattr(self.distribution, 'test_loader', None) - if self.test_loader is None: - self.test_loader = "setuptools.command.test:ScanningLoader" - if self.test_runner is None: - self.test_runner = getattr(self.distribution, 'test_runner', None) - - @NonDataProperty - def test_args(self): - return list(self._test_args()) - - def _test_args(self): - if not self.test_suite and sys.version_info >= (2, 7): - yield 'discover' - if self.verbose: - yield '--verbose' - if self.test_suite: - yield self.test_suite - - def with_project_on_sys_path(self, func): - """ - Backward compatibility for project_on_sys_path context. - """ - with self.project_on_sys_path(): - func() - - @contextlib.contextmanager - def project_on_sys_path(self, include_dists=[]): - self.run_command('egg_info') - - # Build extensions in-place - self.reinitialize_command('build_ext', inplace=1) - self.run_command('build_ext') - - ei_cmd = self.get_finalized_command("egg_info") - - old_path = sys.path[:] - old_modules = sys.modules.copy() - - try: - project_path = normalize_path(ei_cmd.egg_base) - sys.path.insert(0, project_path) - working_set.__init__() - add_activation_listener(lambda dist: dist.activate()) - require('%s==%s' % (ei_cmd.egg_name, ei_cmd.egg_version)) - with self.paths_on_pythonpath([project_path]): - yield - finally: - sys.path[:] = old_path - sys.modules.clear() - sys.modules.update(old_modules) - working_set.__init__() - - @staticmethod - @contextlib.contextmanager - def paths_on_pythonpath(paths): - """ - Add the indicated paths to the head of the PYTHONPATH environment - variable so that subprocesses will also see the packages at - these paths. - - Do this in a context that restores the value on exit. - """ - nothing = object() - orig_pythonpath = os.environ.get('PYTHONPATH', nothing) - current_pythonpath = os.environ.get('PYTHONPATH', '') - try: - prefix = os.pathsep.join(unique_everseen(paths)) - to_join = filter(None, [prefix, current_pythonpath]) - new_path = os.pathsep.join(to_join) - if new_path: - os.environ['PYTHONPATH'] = new_path - yield - finally: - if orig_pythonpath is nothing: - os.environ.pop('PYTHONPATH', None) - else: - os.environ['PYTHONPATH'] = orig_pythonpath - - @staticmethod - def install_dists(dist): - """ - Install the requirements indicated by self.distribution and - return an iterable of the dists that were built. - """ - ir_d = dist.fetch_build_eggs(dist.install_requires) - tr_d = dist.fetch_build_eggs(dist.tests_require or []) - er_d = dist.fetch_build_eggs( - v - for k, v in dist.extras_require.items() - if k.startswith(':') and evaluate_marker(k[1:]) - ) - return itertools.chain(ir_d, tr_d, er_d) - - def run(self): - self.announce( - "WARNING: Testing via this command is deprecated and will be " - "removed in a future version. 
Users looking for a generic test " - "entry point independent of test runner are encouraged to use " - "tox.", - log.WARN, - ) - - installed_dists = self.install_dists(self.distribution) - - cmd = ' '.join(self._argv) - if self.dry_run: - self.announce('skipping "%s" (dry run)' % cmd) - return - - self.announce('running "%s"' % cmd) - - paths = map(operator.attrgetter('location'), installed_dists) - with self.paths_on_pythonpath(paths): - with self.project_on_sys_path(): - self.run_tests() - - def run_tests(self): - test = unittest.main( - None, - None, - self._argv, - testLoader=self._resolve_as_ep(self.test_loader), - testRunner=self._resolve_as_ep(self.test_runner), - exit=False, - ) - if not test.result.wasSuccessful(): - msg = 'Test failed: %s' % test.result - self.announce(msg, log.ERROR) - raise DistutilsError(msg) - - @property - def _argv(self): - return ['unittest'] + self.test_args - - @staticmethod - @pass_none - def _resolve_as_ep(val): - """ - Load the indicated attribute value, called, as a as if it were - specified as an entry point. - """ - return metadata.EntryPoint(value=val, name=None, group=None).load()() diff --git a/spaces/tom-beer/hotel-recommender/data.py b/spaces/tom-beer/hotel-recommender/data.py deleted file mode 100644 index 1d0999c9602cdc792fa186a685314d72b155c702..0000000000000000000000000000000000000000 --- a/spaces/tom-beer/hotel-recommender/data.py +++ /dev/null @@ -1,58 +0,0 @@ -from pathlib import Path -from json import load as load_json -from numpy.random import permutation as perm - -data_dir = Path(__file__).parent / "data" / "cmu" / "processed" - - -def get_cities(): - with open(data_dir / "cities.json", "r") as f: - return load_json(f) - - -def get_score_threshold_per_city(): - with open(data_dir / "score_threshold_per_city.json", "r") as f: - return load_json(f) - - -def get_city_to_hotel_id_map(): - with open(data_dir / "city_to_hotel_id_map.json", "r") as f: - return load_json(f) - - -def get_hotel_id_to_name_map(): - with open(data_dir / "hotel_id_to_name_map.json", "r") as f: - return load_json(f) - - -def get_hotel_id_to_review_map(): - with open(data_dir / "hotel_id_to_review_map.json", "r") as f: - return load_json(f) - - -score_threshold_per_city = get_score_threshold_per_city() -city_to_hotel_id_map = get_city_to_hotel_id_map() -hotel_id_to_name_map = get_hotel_id_to_name_map() -hotel_id_to_review_map = get_hotel_id_to_review_map() - - -def get_reviews_for_prompt(city, preferences) -> dict: - - for hotel_id in perm(city_to_hotel_id_map[city]): - hotel_id = str(hotel_id) - res = {"hotel_name": hotel_id_to_name_map[hotel_id], 'positive': [], 'negative': []} - try: - hotel_reviews = hotel_id_to_review_map[hotel_id]['reviews'] - except KeyError: - continue - for review in perm(hotel_reviews): - if (review['score'] == 5) & (len(res['positive']) < 3): - res['positive'].append(review) - if (review['score'] <= 2) & (len(res['negative']) < 1): - res['negative'].append(review) - if (len(res['positive']) >= 3) & (len(res['negative']) >= 1): - return res - - return None - - diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gfl/gfl_x101_32x4d_fpn_mstrain_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gfl/gfl_x101_32x4d_fpn_mstrain_2x_coco.py deleted file mode 100644 index 4e00a059f8d2e58d23d6b77764456be351bd3115..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gfl/gfl_x101_32x4d_fpn_mstrain_2x_coco.py +++ /dev/null @@ -1,15 +0,0 @@ -_base_ = 
'./gfl_r50_fpn_mstrain_2x_coco.py' -model = dict( - type='GFL', - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch')) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_20_23_24e_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_20_23_24e_coco.py deleted file mode 100644 index 79ce0adf1bf760c371bd1a1c3a9b028cef51c4b4..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_20_23_24e_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './mask_rcnn_x50_32x4d_fpn_gn_ws-all_2x_coco.py' -# learning policy -lr_config = dict(step=[20, 23]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/trttung1610/musicgen/tests/utils/__init__.py b/spaces/trttung1610/musicgen/tests/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/tests/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/uSerNameDDHL/bingo/src/lib/bots/bing/tts.ts b/spaces/uSerNameDDHL/bingo/src/lib/bots/bing/tts.ts deleted file mode 100644 index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000 --- a/spaces/uSerNameDDHL/bingo/src/lib/bots/bing/tts.ts +++ /dev/null @@ -1,82 +0,0 @@ -import { sleep } from './utils' - -const synth = window.speechSynthesis - -export class TTS { - currentText = '' - speakText = '' - private controller = new AbortController() - speaking = false - get isSpeaking() { - return this.speaking - } - finished = false - constructor() {} - abort = () => { - this.controller.abort() - } - - reset = () => { - this.speaking = false - this.finished = true - this.currentText = '' - this.speakText = '' - this.abort() - } - - speak = (text: string) => { - if (!synth || text?.trim()?.length < 2) { - return - } - this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '') - this.finished = false - this.loop() - } - - private async doSpeek() { - return new Promise((resolve) => { - const endIndex = this.finished ? this.currentText.length : - Math.max( - this.currentText.lastIndexOf('。'), - this.currentText.lastIndexOf(';'), - this.currentText.lastIndexOf('、'), - this.currentText.lastIndexOf('?'), - this.currentText.lastIndexOf('\n') - ) - const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0 - - if (startIndex >= endIndex) { - return resolve(true) - } - const text = this.currentText.slice(startIndex, endIndex) - this.speakText = text - const utterThis = new SpeechSynthesisUtterance(text) - this.controller.signal.onabort = () => { - synth.cancel() - this.finished = true - resolve(false) - } - - utterThis.onend = function (event) { - resolve(true) - } - - utterThis.onerror = function (event) { - resolve(false) - } - - const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? 
null - utterThis.voice = voice - synth.speak(utterThis) - }) - } - - private async loop() { - if (this.speaking) return - this.speaking = true - while(!this.finished) { - await Promise.all([sleep(1000), this.doSpeek()]) - } - this.speaking = false - } -} diff --git a/spaces/umutozdemir/medicalai-ClinicalBERT/README.md b/spaces/umutozdemir/medicalai-ClinicalBERT/README.md deleted file mode 100644 index e4a9479a13f60e30df7895be3564d4459a7002af..0000000000000000000000000000000000000000 --- a/spaces/umutozdemir/medicalai-ClinicalBERT/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Medicalai ClinicalBERT -emoji: 📉 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Active Premium Code Anonymox.md b/spaces/usbethFlerru/sovits-modelsV2/example/Active Premium Code Anonymox.md deleted file mode 100644 index 17b474cade1ca5d1a59735ef70c6ef6f430376d0..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Active Premium Code Anonymox.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Active Premium Code Anonymox


            Download Zip ……… https://urlcod.com/2uyVWs



            Anonymox 4.1 Activate Premium Code Serial Number Key. Anonymox premium code serial. In the second step, please download the file first ...

            diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/AutoDesk Revit LT 2011 x64 (64bit) (Product Key and Xforce Keygen) Benefits and Features of the Latest Version.md b/spaces/usbethFlerru/sovits-modelsV2/example/AutoDesk Revit LT 2011 x64 (64bit) (Product Key and Xforce Keygen) Benefits and Features of the Latest Version.md deleted file mode 100644 index c6381090b8e0b1a071ed4786917af8cd85eb8f1f..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/AutoDesk Revit LT 2011 x64 (64bit) (Product Key and Xforce Keygen) Benefits and Features of the Latest Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

            AutoDesk Revit LT 2011 x64 (64bit) (Product Key and Xforce Keygen)


            DOWNLOAD ✪✪✪ https://urlcod.com/2uyXME




            diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Cd Superkids 1 Activity.epub.md b/spaces/usbethFlerru/sovits-modelsV2/example/Cd Superkids 1 Activity.epub.md deleted file mode 100644 index 2daa7ff4957f7cb146038d37c76db01c9c7eb3d2..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Cd Superkids 1 Activity.epub.md +++ /dev/null @@ -1,120 +0,0 @@ - -

            Cd Superkids 1 Activity.epub - A Fun and Effective Way to Learn English with Your Children

            - -

            Are you looking for a fun and effective way to help your children learn English? Do you want to provide them with a comprehensive and engaging curriculum that covers topics that are familiar and close to them, such as family, school, toys, clothing, pets, and birthdays? Do you want to develop their listening and speaking skills through short conversations, songs, and karaoke activities? If you answered yes to any of these questions, then you need Cd Superkids 1 Activity.epub.

            - -

            Cd Superkids 1 Activity.epub is an electronic book that contains the activity book of Super Kids 1, a series for elementary school children learning English. It also includes an audio CD that contains the recordings of the conversations, songs, and sounds in the activity book. Cd Superkids 1 Activity.epub is a great resource that can help you and your children learn and practice English in an interactive and enjoyable way.

            -

            Cd Superkids 1 Activity.epub


            Download ⚹⚹⚹ https://urlcod.com/2uyXQO



            - -

            What are the features of Cd Superkids 1 Activity.epub?

            - -

            Cd Superkids 1 Activity.epub offers a range of features that can make English learning more fun and effective for you and your children. Some of the features are:

            - -
            • Interactive activities: The activity book contains various types of activities that can help your children practice their vocabulary, grammar, pronunciation, and communication skills. The activities include matching, coloring, tracing, writing, drawing, listening, speaking, and more.
            • Engaging storyline: The activity book follows the adventures of four children and their pet dragon as they explore different places and situations. The storyline is designed to capture your children's interest and imagination while exposing them to natural and useful language.
            • Lively characters: The activity book features four child characters who have different personalities and backgrounds. They are Andy, Jenny, Nicky, and Tony. They also have a pet dragon named Sparky who can breathe fire and fly. You and your children can relate to these characters and learn from their experiences.
            • Funny songs: The activity book contains several songs that can help your children learn new words and expressions in a musical and memorable way. The songs are catchy and easy to sing along with. They also have karaoke versions that can help your children practice their pronunciation and intonation.
            • Sound practice: The activity book includes sound practice in each lesson. This is a very important part of learning English for children. Sound practice can help your children improve their listening comprehension and speaking accuracy. It can also help them distinguish between similar sounds and words.
            • Audio CD: The audio CD contains the recordings of the conversations, songs, and sounds in the activity book. You can use the audio CD to play the recordings for your children or let them listen to them on their own. The audio CD can help your children improve their listening skills and reinforce what they have learned in the activity book.
            • Epub format: The epub format is an electronic book format that can be read on various devices, such as computers, tablets, smartphones, and e-readers. You can download Cd Superkids 1 Activity.epub to your device and access it anytime and anywhere. You can also adjust the font size, brightness, layout, etc. according to your preference.
            - -

            What are the benefits of using Cd Superkids 1 Activity.epub?

            - -

            Using Cd Superkids 1 Activity.epub can bring you many benefits, such as:

            - -
            • You can provide your children with a fun and easy way to learn English at home or on the go.
            • You can support your children's English learning with a comprehensive and engaging curriculum that covers topics that are familiar and close to them.
            • You can develop your children's listening and speaking skills through short conversations, songs, and karaoke activities.
            • You can enhance your children's vocabulary, grammar, pronunciation, and communication skills through interactive activities.
            • You can motivate your children to learn English with lively characters and an engaging storyline.
            • You can save time and money by downloading Cd Superkids 1 Activity.epub instead of buying a physical book and CD.
            - -

            How to download Cd Superkids 1 Activity.epub?

            - -

            If you want to download Cd Superkids 1 Activity.epub, you may be tempted to use a torrent file that contains the epub file and the audio CD. However, this is not recommended because it may be illegal or unsafe. You may encounter viruses or malware that can harm your device or compromise your data. You may also face legal consequences if you violate the copyright laws or the terms of service of Pearson Longman Asia ELT, the publisher of Super Kids 1.

            - -

            The best way to download Cd Superkids 1 Activity.epub is to use the official website of Pearson Longman Asia ELT. There you can find the latest version of Cd Superkids 1 Activity.epub and request a free trial or purchase a license. You can also access the support and documentation resources that can help you use Cd Superkids 1 Activity.epub properly.

            - -


            How to use Cd Superkids 1 Activity.epub?

            - -

            To use Cd Superkids 1 Activity.epub, you need to have a device that can read epub files, such as a computer, a tablet, a smartphone, or an e-reader. You also need to have an application that can open epub files, such as Adobe Digital Editions, Calibre, iBooks, etc.

            - -

            The usage process is simple and straightforward. You just need to follow these steps:

            - -
            1. Download Cd Superkids 1 Activity.epub from the official website of Pearson Longman Asia ELT.
            2. Transfer Cd Superkids 1 Activity.epub to your device using a USB cable, a Wi-Fi connection, or a cloud service (see the integrity-check sketch after these steps).
            3. Open Cd Superkids 1 Activity.epub with your preferred application.
            4. Enjoy using Cd Superkids 1 Activity.epub with your children.
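            Before transferring the file in step 2, you can confirm the download is intact: an .epub file is an ordinary zip container, so Python's standard zipfile module can test it and list what is inside. A minimal sketch, with the filename as a placeholder for wherever you saved the book:

            import zipfile

            def check_epub(path: str) -> None:
                with zipfile.ZipFile(path) as book:
                    corrupt = book.testzip()  # returns the first corrupt member, or None
                    if corrupt is not None:
                        print(f"Corrupt entry: {corrupt} - download the book again")
                        return
                    for name in book.namelist():
                        # Typical entries: META-INF/container.xml plus the book's pages.
                        print(name)

            if __name__ == "__main__":
                check_epub("Cd Superkids 1 Activity.epub")  # placeholder path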
            - -

            How to get the best results with Cd Superkids 1 Activity.epub?

            - -

            If you want to get the best results with Cd Superkids 1 Activity.epub, you should follow these tips:

            - -
            1. Use Cd Superkids 1 Activity.epub regularly and consistently with your children. Try to set a schedule and stick to it.
            2. Use Cd Superkids 1 Activity.epub in combination with other materials from Super Kids 1, such as the student book and the teacher's guide. They can complement each other and provide a more comprehensive and effective learning experience.
            3. Use Cd Superkids 1 Activity.epub in a way that suits your children's level and needs. You can adjust the pace, the difficulty, and the focus of the activities according to your children's progress and preferences.
            4. Use Cd Superkids 1 Activity.epub in a way that engages your children's interest and motivation. You can make the activities more fun and interactive by using props, games, rewards, etc.
            5. Use Cd Superkids 1 Activity.epub in a way that encourages your children's participation and feedback. You can ask your children questions, praise their efforts, correct their mistakes, and give them suggestions.
            - -


            What are the reviews of Cd Superkids 1 Activity.epub?

            - -

            Cd Superkids 1 Activity.epub has received many positive reviews from users who have used it with their children. Here are some of the reviews that can give you an idea of what Cd Superkids 1 Activity.epub can offer:

            - -
            -

            "I love this book and CD. It is very easy to use and my kids enjoy it a lot. They learn new words and expressions every day and they sing the songs all the time. They also like the characters and the story. It is a great way to learn English with fun."

            -A parent from Vietnam
            -

            "This book and CD are amazing. They have everything you need to teach English to your children. The activities are very interactive and engaging. The songs are catchy and funny. The sound practice is very helpful and effective. The audio CD is clear and high-quality. The epub format is convenient and flexible."

            -A teacher from China
            -

            "This book and CD are awesome. They make me learn English easily and happily. I like the activities because they are fun and colorful. I like the songs because they are cool and easy to sing. I like the characters because they are cute and friendly. I like the story because it is exciting and interesting."

            -A student from Thailand
            - -

            How to get more information about Cd Superkids 1 Activity.epub?

            - -

            If you want to get more information about Cd Superkids 1 Activity.epub, you can visit the official website of Pearson Longman Asia ELT. There you can find more details about Cd Superkids 1 Activity.epub and other products from Super Kids 1, such as the student book and the teacher's guide. You can also access the support and documentation resources that can help you use Cd Superkids 1 Activity.epub properly.

            - -

            If you have any questions or feedback about Cd Superkids 1 Activity.epub, you can contact Pearson Longman Asia ELT directly. They have a friendly and professional customer service team that can assist you with any issues or inquiries you may have.

            - -

            Conclusion

            - -

            Cd Superkids 1 Activity.epub is an electronic book that contains the activity book of Super Kids 1, a series for elementary school children learning English. It also includes an audio CD that contains the recordings of the conversations, songs, and sounds in the activity book. Cd Superkids 1 Activity.epub is a great resource that can help you and your children learn and practice English in an interactive and enjoyable way.

            - -

            If you want to download Cd Superkids 1 Activity.epub, you should use the official website of Pearson Longman Asia ELT and avoid using torrent files that may be illegal or unsafe. You can also request a free trial or purchase a license from Pearson Longman Asia ELT.

            - -

            If you want to learn more about Cd Superkids 1 Activity.epub or other Pearson Longman Asia ELT products, you can visit their website or contact them directly.

            -

            \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Cdma Dev Term Download LINKl.md b/spaces/usbethFlerru/sovits-modelsV2/example/Cdma Dev Term Download LINKl.md deleted file mode 100644 index 3f7e6e8adca5ad980135ba2dc8fc50201ecb10c8..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Cdma Dev Term Download LINKl.md +++ /dev/null @@ -1,70 +0,0 @@ - -

            What is Cdma Dev Term Downloadl and Why You Need It?

            -

            If you are looking for a software tool that can help you flash, unlock, or repair your CDMA devices, then you might want to check out Cdma Dev Term Downloadl. This is a free and open source tool that allows you to read and write various data on CDMA devices powered by Qualcomm chipset, such as SPC, MDN, MIN, NV items, PRL, and RAM. It also supports data flashing and Samsung 16 digit passwords. In this article, we will explain what Cdma Dev Term Downloadl is, how to download and install it on your PC, and how to use it for your CDMA devices.

            - -

            What is Cdma Dev Term Downloadl?

            -

            Cdma Dev Term Downloadl is a software tool that was developed by Keith (mingshi) and released under the GPL v3 license. It is based on the original Cdma Workshop Tool that was discontinued by its developer. Cdma Dev Term Downloadl is a portable application that does not need to be installed on your computer. You can simply download and extract the tool package on your computer and double-click on the SoftDownload (customer_en).exe to launch the tool.

            -

            Cdma Dev Term Downloadl


            Download Zip ————— https://urlcod.com/2uyU08



            - -

            Cdma Dev Term Downloadl can be used for various purposes on your CDMA devices, such as:

            -
            • Flashing or installing the stock firmware (ROM) on your feature phone powered by Qualcomm chipset. You just need to have the correct firmware of your feature phone, install the supported USB driver, launch the tool, go to settings, locate the write code and EFS code under CDMA, click on the OK button, and click on Start to begin the flash.
            • Reading and writing SPC (Service Programming Code), MDN (Mobile Directory Number), MIN (Mobile Identification Number), NV items (Non-Volatile items), PRL (Preferred Roaming List), and RAM (Random Access Memory) on your CDMA devices. You can use these data for unlocking, repairing, or modifying your CDMA devices.
            • Data flashing for CDMA devices. Data flashing is scripted by the carrier.xml and model.xml files that are included in the tool package. You can also customize these files according to your needs.
            • Customizing Samsung 16 digit passwords. Samsung 16 digit passwords are used for unlocking some Samsung CDMA devices. You can customize these passwords by editing a .txt file (cdmaworkshop compatible) that is included in the tool package.
            • Checking for updates. Cdma Dev Term Downloadl has a check-for-update tab that allows you to check if a newer version of the tool is available online.
            - -
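
            Since carrier.xml and model.xml are plain XML files, you can inspect or batch-edit them with a short Python script instead of a text editor. The sketch below is only an illustration: the element and attribute names (carrier, name, prl) are hypothetical placeholders, because the tool's actual schema is not documented here.

            ```python
            # Minimal sketch: batch-edit a data-flashing script such as carrier.xml.
            # NOTE: the tag/attribute names ("carrier", "name", "prl") are hypothetical
            # placeholders -- open the files shipped with the tool for the real schema.
            import xml.etree.ElementTree as ET

            tree = ET.parse("carrier.xml")
            root = tree.getroot()

            for carrier in root.iter("carrier"):             # hypothetical <carrier> entries
                if carrier.get("name") == "ExampleTelecom":  # match one carrier by name
                    carrier.set("prl", "12345")              # point it at a different PRL id

            # Write to a copy first so the original script stays intact.
            tree.write("carrier.custom.xml", encoding="utf-8", xml_declaration=True)
            ```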

            How to Download and Install Cdma Dev Term Downloadl?

            -

            Cdma Dev Term Downloadl is compatible with all versions of Windows OS, from Windows XP through Windows 11 (32-bit or 64-bit). To download and install Cdma Dev Term Downloadl on your PC, follow these steps:

            -
            1. Go to the official website of Cdma Dev Term Downloadl at https://code.google.com/p/cdmaworkshoptool/ and click the download button.
            2. Choose the latest version of the tool from the list of available versions. As of this writing, the latest version is v1.07: CDMA_Software_Download_Tool_v1.0.7.zip.
            3. Save the zip file on your computer and extract it using any zip extractor.
            4. Open the extracted folder and double-click the SoftDownload (customer_en).exe file to launch the tool.
            5. You may get an error message saying "No give the path of the file (version.dll) Re-setup Please." You can ignore this error: it is caused by the firmware location not being pre-defined in the swdlcg.wt file, and it disappears once you load the firmware in the tool.
            - -

            How to Use Cdma Dev Term Downloadl?

            -

            Cdma Dev Term Downloadl is easy to use once you have downloaded and installed it on your PC. You just need to connect your CDMA device to your PC using a USB cable and make sure that the device drivers are installed properly. Then you can use Cdma Dev Term Downloadl to perform various tasks on your CDMA device.

            - -

            To use Cdma Dev Term Downloadl, you can follow these general steps:

            -
            1. Launch Cdma Dev Term Downloadl on your PC by double-clicking the SoftDownload (customer_en).exe file.
            2. Select the COM port that corresponds to your CDMA device from the drop-down menu at the top left corner of the tool (if you are unsure which port to pick, see the sketch after this list).
            3. Click the Connect button to establish a connection between your PC and your CDMA device.
            4. Select the tab that corresponds to the task you want to perform on your device, such as Read/Write SPC/NV/PRL/RAM or Flash Firmware.
            5. Follow the instructions on each tab to complete the task. For example, to flash firmware on a feature phone powered by a Qualcomm chipset: go to Settings > under CDMA locate Write Code and EFS Code > click OK > click Start > select the firmware file > click Open > wait for the flashing process to complete.
            6. Click the Disconnect button to disconnect your PC from your CDMA device.
            - -
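
            If the COM port drop-down in step 2 lists several ports and you are not sure which one belongs to your phone, you can enumerate them from Python before launching the tool. This is a generic sketch, not part of Cdma Dev Term Downloadl itself; it assumes the third-party pyserial package is installed (pip install pyserial).

            ```python
            # List the serial (COM) ports visible to Windows, with their descriptions,
            # to help identify the CDMA diagnostic port before connecting in the tool.
            # Requires the third-party "pyserial" package: pip install pyserial
            from serial.tools import list_ports

            for port in list_ports.comports():
                # port.device is e.g. "COM5"; port.description usually names the driver,
                # e.g. a Qualcomm diagnostics interface for a CDMA phone.
                print(f"{port.device}: {port.description}")
            ```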

            Congratulations! You have successfully used Cdma Dev Term Downloadl to flash, unlock, or repair your CDMA device.

            - -

            Conclusion

            -

            Cdma Dev Term Downloadl is a useful software tool that can help you flash, unlock, or repair your CDMA devices powered by a Qualcomm chipset. It is free and open source, and it supports various data types and operations, including SPC, MDN, MIN, NV items, PRL, and RAM, as well as data flashing and Samsung 16-digit passwords. It is compatible with all versions of Windows OS, and it is easy to download and install on your PC. You can use it to perform various tasks on your CDMA device by following a few simple steps.

            -

            - -

            If you are looking for a software tool that can help you flash, unlock, or repair your CDMA devices powered by a Qualcomm chipset, then you might want to check out Cdma Dev Term Downloadl today!

            -

            What are the Benefits of Cdma Dev Term Downloadl?

            -

            Cdma Dev Term Downloadl is a beneficial tool for anyone who owns or works with CDMA devices powered by a Qualcomm chipset. Some of its benefits are:

            -
            • It is free and open source, which means anyone can use it without paying fees or facing restrictions, and anyone can contribute to its development or improvement by submitting bug reports or feature requests.
            • It is portable and easy to use, which means you do not need to install it on your computer or have any technical skills to use it. Simply download and run it on any Windows PC and connect your CDMA device.
            • It supports various data types and functions, which means you can use it to flash firmware, read and write SPC, MDN, MIN, NV items, PRL, and RAM, perform data flashing, and customize Samsung 16-digit passwords.
            • It is compatible with all versions of Windows OS and with any CDMA device powered by a Qualcomm chipset.
            - -

            What are the Drawbacks of Cdma Dev Term Downloadl?

            -

            Cdma Dev Term Downloadl is a useful software tool for CDMA devices powered by a Qualcomm chipset, but it also has some drawbacks that you should be aware of before using it. Some of its drawbacks are:

            -
            • It is not updated regularly, which means it may not support the latest firmware or devices released by manufacturers, and it may contain bugs or errors that are never fixed.
            • It does not have a user-friendly interface or documentation, which means it may be difficult to use or understand, and you may not find help or support if you run into problems.
            • It may cause damage to or loss of data on your CDMA device, which means you should always back up your data before using it and use it at your own risk and responsibility.
            - -

            Conclusion

            -

            Cdma Dev Term Downloadl is a software tool that can help you flash, unlock, or repair your CDMA devices powered by a Qualcomm chipset. It is free and open source, portable and easy to use, and supports various data types and functions. However, it is not updated regularly, lacks a user-friendly interface and documentation, and may cause damage to or loss of data on your device. Therefore, always back up your data before using it and use it at your own risk and responsibility.

            - -

            If you are looking for a software tool that can help you flash, unlock, or repair your CDMA devices powered by a Qualcomm chipset, then you might want to check out Cdma Dev Term Downloadl today!

            -
            -
            \ No newline at end of file diff --git a/spaces/user238921933/stable-diffusion-webui/html/footer.html b/spaces/user238921933/stable-diffusion-webui/html/footer.html deleted file mode 100644 index f26e32e9304aedb5a55b0b46a913396f16375f7a..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/html/footer.html +++ /dev/null @@ -1,13 +0,0 @@ -
            - API -  •  - Github -  •  - Gradio -  •  - Reload UI -
            -
            -
            -{versions} -
            diff --git a/spaces/vaibhavarduino/anime-plus/e4e/models/encoders/model_irse.py b/spaces/vaibhavarduino/anime-plus/e4e/models/encoders/model_irse.py deleted file mode 100644 index 6a94d67542f961ff6533f0335cf4cb0fa54024fb..0000000000000000000000000000000000000000 --- a/spaces/vaibhavarduino/anime-plus/e4e/models/encoders/model_irse.py +++ /dev/null @@ -1,84 +0,0 @@ -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module -from e4e.models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm - -""" -Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Backbone(Module): - def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True): - super(Backbone, self).__init__() - assert input_size in [112, 224], "input_size should be 112 or 224" - assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152" - assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se" - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - if input_size == 112: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512, affine=affine)) - else: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 14 * 14, 512), - BatchNorm1d(512, affine=affine)) - - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -def IR_50(input_size): - """Constructs a ir-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_101(input_size): - """Constructs a ir-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_152(input_size): - """Constructs a ir-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_50(input_size): - """Constructs a ir_se-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_101(input_size): - """Constructs a ir_se-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_152(input_size): - """Constructs a ir_se-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False) - return model diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/vit/rtdetr/model.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/vit/rtdetr/model.md deleted file mode 100644 index c979186eaec1a6ae2acfd7991c74994f382d381a..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/vit/rtdetr/model.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -description: Learn about the RTDETR model in Ultralytics YOLO Docs and how it can be used for object detection with 
improved speed and accuracy. Find implementation details and more. -keywords: RTDETR, Ultralytics, YOLO, object detection, speed, accuracy, implementation details ---- - -## RTDETR ---- -### ::: ultralytics.vit.rtdetr.model.RTDETR -

            \ No newline at end of file diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/v8/segment/predict.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/v8/segment/predict.py deleted file mode 100644 index 0b6ebc494d22bffc6cc3a4f5607d4691b425db24..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/v8/segment/predict.py +++ /dev/null @@ -1,63 +0,0 @@ -# Ultralytics YOLO 🚀, AGPL-3.0 license - -import torch - -from ultralytics.yolo.engine.results import Results -from ultralytics.yolo.utils import DEFAULT_CFG, ROOT, ops -from ultralytics.yolo.v8.detect.predict import DetectionPredictor - - -class SegmentationPredictor(DetectionPredictor): - - def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None): - super().__init__(cfg, overrides, _callbacks) - self.args.task = 'segment' - - def postprocess(self, preds, img, orig_imgs): - """TODO: filter by classes.""" - p = ops.non_max_suppression(preds[0], - self.args.conf, - self.args.iou, - agnostic=self.args.agnostic_nms, - max_det=self.args.max_det, - nc=len(self.model.names), - classes=self.args.classes) - results = [] - proto = preds[1][-1] if len(preds[1]) == 3 else preds[1] # second output is len 3 if pt, but only 1 if exported - for i, pred in enumerate(p): - orig_img = orig_imgs[i] if isinstance(orig_imgs, list) else orig_imgs - path = self.batch[0] - img_path = path[i] if isinstance(path, list) else path - if not len(pred): # save empty boxes - results.append(Results(orig_img=orig_img, path=img_path, names=self.model.names, boxes=pred[:, :6])) - continue - if self.args.retina_masks: - if not isinstance(orig_imgs, torch.Tensor): - pred[:, :4] = ops.scale_boxes(img.shape[2:], pred[:, :4], orig_img.shape) - masks = ops.process_mask_native(proto[i], pred[:, 6:], pred[:, :4], orig_img.shape[:2]) # HWC - else: - masks = ops.process_mask(proto[i], pred[:, 6:], pred[:, :4], img.shape[2:], upsample=True) # HWC - if not isinstance(orig_imgs, torch.Tensor): - pred[:, :4] = ops.scale_boxes(img.shape[2:], pred[:, :4], orig_img.shape) - results.append( - Results(orig_img=orig_img, path=img_path, names=self.model.names, boxes=pred[:, :6], masks=masks)) - return results - - -def predict(cfg=DEFAULT_CFG, use_python=False): - """Runs YOLO object detection on an image or video source.""" - model = cfg.model or 'yolov8n-seg.pt' - source = cfg.source if cfg.source is not None else ROOT / 'assets' if (ROOT / 'assets').exists() \ - else 'https://ultralytics.com/images/bus.jpg' - - args = dict(model=model, source=source) - if use_python: - from ultralytics import YOLO - YOLO(model)(**args) - else: - predictor = SegmentationPredictor(overrides=args) - predictor.predict_cli() - - -if __name__ == '__main__': - predict() diff --git a/spaces/veb-101/driver-drowsiness-detection/drowsy_detection.py b/spaces/veb-101/driver-drowsiness-detection/drowsy_detection.py deleted file mode 100644 index 3eb4d7eb15e6580122d9e2402135a3d5eaf8e001..0000000000000000000000000000000000000000 --- a/spaces/veb-101/driver-drowsiness-detection/drowsy_detection.py +++ /dev/null @@ -1,187 +0,0 @@ -import cv2 -import time -import numpy as np -import mediapipe as mp -from mediapipe.python.solutions.drawing_utils import _normalized_to_pixel_coordinates as denormalize_coordinates - - -def get_mediapipe_app( - max_num_faces=1, - refine_landmarks=True, - min_detection_confidence=0.5, - min_tracking_confidence=0.5, -): - 
"""Initialize and return Mediapipe FaceMesh Solution Graph object""" - face_mesh = mp.solutions.face_mesh.FaceMesh( - max_num_faces=max_num_faces, - refine_landmarks=refine_landmarks, - min_detection_confidence=min_detection_confidence, - min_tracking_confidence=min_tracking_confidence, - ) - - return face_mesh - - -def distance(point_1, point_2): - """Calculate l2-norm between two points""" - dist = sum([(i - j) ** 2 for i, j in zip(point_1, point_2)]) ** 0.5 - return dist - - -def get_ear(landmarks, refer_idxs, frame_width, frame_height): - """ - Calculate Eye Aspect Ratio for one eye. - - Args: - landmarks: (list) Detected landmarks list - refer_idxs: (list) Index positions of the chosen landmarks - in order P1, P2, P3, P4, P5, P6 - - frame_width: (int) Width of captured frame - frame_height: (int) Height of captured frame - - Returns: - ear: (float) Eye aspect ratio - """ - try: - # Compute the euclidean distance between the horizontal - coords_points = [] - for i in refer_idxs: - lm = landmarks[i] - coord = denormalize_coordinates(lm.x, lm.y, frame_width, frame_height) - coords_points.append(coord) - - # Eye landmark (x, y)-coordinates - P2_P6 = distance(coords_points[1], coords_points[5]) - P3_P5 = distance(coords_points[2], coords_points[4]) - P1_P4 = distance(coords_points[0], coords_points[3]) - - # Compute the eye aspect ratio - ear = (P2_P6 + P3_P5) / (2.0 * P1_P4) - - except: - ear = 0.0 - coords_points = None - - return ear, coords_points - - -def calculate_avg_ear(landmarks, left_eye_idxs, right_eye_idxs, image_w, image_h): - # Calculate Eye aspect ratio - - left_ear, left_lm_coordinates = get_ear(landmarks, left_eye_idxs, image_w, image_h) - right_ear, right_lm_coordinates = get_ear(landmarks, right_eye_idxs, image_w, image_h) - Avg_EAR = (left_ear + right_ear) / 2.0 - - return Avg_EAR, (left_lm_coordinates, right_lm_coordinates) - - -def plot_eye_landmarks(frame, left_lm_coordinates, right_lm_coordinates, color): - for lm_coordinates in [left_lm_coordinates, right_lm_coordinates]: - if lm_coordinates: - for coord in lm_coordinates: - cv2.circle(frame, coord, 2, color, -1) - - frame = cv2.flip(frame, 1) - return frame - - -def plot_text(image, text, origin, color, font=cv2.FONT_HERSHEY_SIMPLEX, fntScale=0.8, thickness=2): - image = cv2.putText(image, text, origin, font, fntScale, color, thickness) - return image - - -class VideoFrameHandler: - def __init__(self): - """ - Initialize the necessary constants, mediapipe app - and tracker variables - """ - # Left and right eye chosen landmarks. - self.eye_idxs = { - "left": [362, 385, 387, 263, 373, 380], - "right": [33, 160, 158, 133, 153, 144], - } - - # Used for coloring landmark points. - # Its value depends on the current EAR value. - self.RED = (0, 0, 255) # BGR - self.GREEN = (0, 255, 0) # BGR - - # Initializing Mediapipe FaceMesh solution pipeline - self.facemesh_model = get_mediapipe_app() - - # For tracking counters and sharing states in and out of callbacks. - self.state_tracker = { - "start_time": time.perf_counter(), - "DROWSY_TIME": 0.0, # Holds the amount of time passed with EAR < EAR_THRESH - "COLOR": self.GREEN, - "play_alarm": False, - } - - self.EAR_txt_pos = (10, 30) - - def process(self, frame: np.array, thresholds: dict): - """ - This function is used to implement our Drowsy detection algorithm - - Args: - frame: (np.array) Input frame matrix. - thresholds: (dict) Contains the two threshold values - WAIT_TIME and EAR_THRESH. 
- - Returns: - The processed frame and a boolean flag to - indicate if the alarm should be played or not. - """ - - # To improve performance, - # mark the frame as not writeable to pass by reference. - frame.flags.writeable = False - frame_h, frame_w, _ = frame.shape - - DROWSY_TIME_txt_pos = (10, int(frame_h // 2 * 1.7)) - ALM_txt_pos = (10, int(frame_h // 2 * 1.85)) - - results = self.facemesh_model.process(frame) - - if results.multi_face_landmarks: - landmarks = results.multi_face_landmarks[0].landmark - EAR, coordinates = calculate_avg_ear(landmarks, self.eye_idxs["left"], self.eye_idxs["right"], frame_w, frame_h) - frame = plot_eye_landmarks(frame, coordinates[0], coordinates[1], self.state_tracker["COLOR"]) - - if EAR < thresholds["EAR_THRESH"]: - - # Increase DROWSY_TIME to track the time period with EAR less than threshold - # and reset the start_time for the next iteration. - end_time = time.perf_counter() - - self.state_tracker["DROWSY_TIME"] += end_time - self.state_tracker["start_time"] - self.state_tracker["start_time"] = end_time - self.state_tracker["COLOR"] = self.RED - - if self.state_tracker["DROWSY_TIME"] >= thresholds["WAIT_TIME"]: - self.state_tracker["play_alarm"] = True - plot_text(frame, "WAKE UP! WAKE UP", ALM_txt_pos, self.state_tracker["COLOR"]) - - else: - self.state_tracker["start_time"] = time.perf_counter() - self.state_tracker["DROWSY_TIME"] = 0.0 - self.state_tracker["COLOR"] = self.GREEN - self.state_tracker["play_alarm"] = False - - EAR_txt = f"EAR: {round(EAR, 2)}" - DROWSY_TIME_txt = f"DROWSY: {round(self.state_tracker['DROWSY_TIME'], 3)} Secs" - plot_text(frame, EAR_txt, self.EAR_txt_pos, self.state_tracker["COLOR"]) - plot_text(frame, DROWSY_TIME_txt, DROWSY_TIME_txt_pos, self.state_tracker["COLOR"]) - - else: - self.state_tracker["start_time"] = time.perf_counter() - self.state_tracker["DROWSY_TIME"] = 0.0 - self.state_tracker["COLOR"] = self.GREEN - self.state_tracker["play_alarm"] = False - - # Flip the frame horizontally for a selfie-view display. 
- frame = cv2.flip(frame, 1) - - return frame, self.state_tracker["play_alarm"] diff --git a/spaces/vijv/VV-05-GR-NLP-Image2Text-Multilingual-OCR/app.py b/spaces/vijv/VV-05-GR-NLP-Image2Text-Multilingual-OCR/app.py deleted file mode 100644 index 83ab99d0715b5c0033e0f452087543187147eaa6..0000000000000000000000000000000000000000 --- a/spaces/vijv/VV-05-GR-NLP-Image2Text-Multilingual-OCR/app.py +++ /dev/null @@ -1,54 +0,0 @@ -import pandas as pd -import PIL -from PIL import Image -from PIL import ImageDraw -import gradio as gr -import torch -import easyocr - -torch.hub.download_url_to_file('https://github.com/JaidedAI/EasyOCR/raw/master/examples/english.png', 'english.png') -torch.hub.download_url_to_file('https://github.com/JaidedAI/EasyOCR/raw/master/examples/chinese.jpg', 'chinese.jpg') -torch.hub.download_url_to_file('https://github.com/JaidedAI/EasyOCR/raw/master/examples/japanese.jpg', 'japanese.jpg') -torch.hub.download_url_to_file('https://i.imgur.com/mwQFd7G.jpeg', 'Hindi.jpeg') - -def draw_boxes(image, bounds, color='yellow', width=2): - draw = ImageDraw.Draw(image) - for bound in bounds: - p0, p1, p2, p3 = bound[0] - draw.line([*p0, *p1, *p2, *p3, *p0], fill=color, width=width) - return image - -def inference(img, lang): - reader = easyocr.Reader(lang) - bounds = reader.readtext(img.name) - im = PIL.Image.open(img.name) - draw_boxes(im, bounds) - im.save('result.jpg') - return ['result.jpg', pd.DataFrame(bounds).iloc[: , 1:]] - -title = 'Image To Optical Character Recognition' -description = 'Multilingual OCR which works conveniently on all devices in multiple languages.' -article = "

            " -examples = [['english.png',['en']],['chinese.jpg',['ch_sim', 'en']],['japanese.jpg',['ja', 'en']],['Hindi.jpeg',['hi', 'en']]] -css = ".output_image, .input_image {height: 40rem !important; width: 100% !important;}" -choices = [ - "ch_sim", - "ch_tra", - "de", - "en", - "es", - "ja", - "hi", - "ru" -] -gr.Interface( - inference, - [gr.inputs.Image(type='file', label='Input'),gr.inputs.CheckboxGroup(choices, type="value", default=['en'], label='language')], - [gr.outputs.Image(type='file', label='Output'), gr.outputs.Dataframe(headers=['text', 'confidence'])], - title=title, - description=description, - article=article, - examples=examples, - css=css, - enable_queue=True - ).launch(debug=True) \ No newline at end of file diff --git a/spaces/vishnun/CLIPnCROP/app.py b/spaces/vishnun/CLIPnCROP/app.py deleted file mode 100644 index f42837a72c9b8ce5986714b01df8ad5c30f3b35f..0000000000000000000000000000000000000000 --- a/spaces/vishnun/CLIPnCROP/app.py +++ /dev/null @@ -1,71 +0,0 @@ -import gradio as gr -import numpy as np -from PIL import Image -from transformers import CLIPProcessor, CLIPModel, YolosImageProcessor, YolosForObjectDetection -import torch - -feature_extractor = YolosImageProcessor.from_pretrained("hustvl/yolos-small") -dmodel = YolosForObjectDetection.from_pretrained('hustvl/yolos-small') - -model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") -processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") - -i1 = gr.Image(type="pil", label="Input image") -i2 = gr.Textbox(label="Description for section to extracted") -i3 = gr.Number(default=0.96, label="Threshold percentage score") -o1 = gr.Image(type="pil", label="Extracted Crop part") -o2 = gr.Textbox(label="Similarity score") - -def extract_image(image, text, prob, num=1): - - inputs = feature_extractor(images=image, return_tensors="pt") - outputs = dmodel(**inputs) - - # model predicts bounding boxes and corresponding COCO classes - logits = outputs.logits - bboxes = outputs.pred_boxes - probas = outputs.logits.softmax(-1)[0, :, :-1] #removing no class as detr maps - - keep = probas.max(-1).values > prob - outs = feature_extractor.post_process(outputs, torch.tensor(image.size[::-1]).unsqueeze(0)) - bboxes_scaled = outs[0]['boxes'][keep].detach().numpy() - labels = outs[0]['labels'][keep].detach().numpy() - scores = outs[0]['scores'][keep].detach().numpy() - - images_list = [] - for i,j in enumerate(bboxes_scaled): - - xmin = int(j[0]) - ymin = int(j[1]) - xmax = int(j[2]) - ymax = int(j[3]) - - im_arr = np.array(image) - roi = im_arr[ymin:ymax, xmin:xmax] - roi_im = Image.fromarray(roi) - - images_list.append(roi_im) - - inpu = processor(text = [text], images=images_list , return_tensors="pt", padding=True) - output = model(**inpu) - logits_per_image = output.logits_per_text - probs = logits_per_image.softmax(-1) - l_idx = np.argsort(probs[-1].detach().numpy())[::-1][0:num] - - final_ims = [] - for i,j in enumerate(images_list): - json_dict = {} - if i in l_idx: - json_dict['image'] = images_list[i] - json_dict['score'] = probs[-1].detach().numpy()[i] - - final_ims.append(json_dict) - - fi = sorted(final_ims, key=lambda item: item.get("score"), reverse=True) - return fi[0]['image'], fi[0]['score'] - -title = "ClipnCrop" -description = "

            Extract sections of images from your image by using OpenAI's CLIP and Facebooks Detr implemented on HuggingFace Transformers, if the similarity score is not so much, then please consider the prediction to be void.

            " -examples=[['ex3.jpg', 'black bag', 0.96],['ex2.jpg', 'man in red dress', 0.85]] -article = "

            clipcrop

            " -gr.Interface(fn=extract_image, inputs=[i1, i2, i3], outputs=[o1, o2], title=title, description=description, article=article, examples=examples, enable_queue=True).launch() \ No newline at end of file diff --git a/spaces/vonbarnekowa/stable-diffusion/ldm/models/diffusion/sampling_util.py b/spaces/vonbarnekowa/stable-diffusion/ldm/models/diffusion/sampling_util.py deleted file mode 100644 index 7eff02be6d7c54d43ee6680636ac0698dd3b3f33..0000000000000000000000000000000000000000 --- a/spaces/vonbarnekowa/stable-diffusion/ldm/models/diffusion/sampling_util.py +++ /dev/null @@ -1,22 +0,0 @@ -import torch -import numpy as np - - -def append_dims(x, target_dims): - """Appends dimensions to the end of a tensor until it has target_dims dimensions. - From https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/utils.py""" - dims_to_append = target_dims - x.ndim - if dims_to_append < 0: - raise ValueError(f'input has {x.ndim} dims but target_dims is {target_dims}, which is less') - return x[(...,) + (None,) * dims_to_append] - - -def norm_thresholding(x0, value): - s = append_dims(x0.pow(2).flatten(1).mean(1).sqrt().clamp(min=value), x0.ndim) - return x0 * (value / s) - - -def spatial_norm_thresholding(x0, value): - # b c h w - s = x0.pow(2).mean(1, keepdim=True).sqrt().clamp(min=value) - return x0 * (value / s) \ No newline at end of file diff --git a/spaces/vumichien/Generate_human_motion/VQ-Trans/models/modules.py b/spaces/vumichien/Generate_human_motion/VQ-Trans/models/modules.py deleted file mode 100644 index 4f06cd98d4f6029bd3df073095cf50498483d54a..0000000000000000000000000000000000000000 --- a/spaces/vumichien/Generate_human_motion/VQ-Trans/models/modules.py +++ /dev/null @@ -1,109 +0,0 @@ -import torch -import torch.nn as nn -from torch.nn.utils.rnn import pack_padded_sequence - -def init_weight(m): - if isinstance(m, nn.Conv1d) or isinstance(m, nn.Linear) or isinstance(m, nn.ConvTranspose1d): - nn.init.xavier_normal_(m.weight) - # m.bias.data.fill_(0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - -class MovementConvEncoder(nn.Module): - def __init__(self, input_size, hidden_size, output_size): - super(MovementConvEncoder, self).__init__() - self.main = nn.Sequential( - nn.Conv1d(input_size, hidden_size, 4, 2, 1), - nn.Dropout(0.2, inplace=True), - nn.LeakyReLU(0.2, inplace=True), - nn.Conv1d(hidden_size, output_size, 4, 2, 1), - nn.Dropout(0.2, inplace=True), - nn.LeakyReLU(0.2, inplace=True), - ) - self.out_net = nn.Linear(output_size, output_size) - self.main.apply(init_weight) - self.out_net.apply(init_weight) - - def forward(self, inputs): - inputs = inputs.permute(0, 2, 1) - outputs = self.main(inputs).permute(0, 2, 1) - # print(outputs.shape) - return self.out_net(outputs) - - - -class TextEncoderBiGRUCo(nn.Module): - def __init__(self, word_size, pos_size, hidden_size, output_size, device): - super(TextEncoderBiGRUCo, self).__init__() - self.device = device - - self.pos_emb = nn.Linear(pos_size, word_size) - self.input_emb = nn.Linear(word_size, hidden_size) - self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True, bidirectional=True) - self.output_net = nn.Sequential( - nn.Linear(hidden_size * 2, hidden_size), - nn.LayerNorm(hidden_size), - nn.LeakyReLU(0.2, inplace=True), - nn.Linear(hidden_size, output_size) - ) - - self.input_emb.apply(init_weight) - self.pos_emb.apply(init_weight) - self.output_net.apply(init_weight) - self.hidden_size = hidden_size - self.hidden = nn.Parameter(torch.randn((2, 1, self.hidden_size), 
requires_grad=True)) - - # input(batch_size, seq_len, dim) - def forward(self, word_embs, pos_onehot, cap_lens): - num_samples = word_embs.shape[0] - - pos_embs = self.pos_emb(pos_onehot) - inputs = word_embs + pos_embs - input_embs = self.input_emb(inputs) - hidden = self.hidden.repeat(1, num_samples, 1) - - cap_lens = cap_lens.data.tolist() - emb = pack_padded_sequence(input_embs, cap_lens, batch_first=True) - - gru_seq, gru_last = self.gru(emb, hidden) - - gru_last = torch.cat([gru_last[0], gru_last[1]], dim=-1) - - return self.output_net(gru_last) - - -class MotionEncoderBiGRUCo(nn.Module): - def __init__(self, input_size, hidden_size, output_size, device): - super(MotionEncoderBiGRUCo, self).__init__() - self.device = device - - self.input_emb = nn.Linear(input_size, hidden_size) - self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True, bidirectional=True) - self.output_net = nn.Sequential( - nn.Linear(hidden_size*2, hidden_size), - nn.LayerNorm(hidden_size), - nn.LeakyReLU(0.2, inplace=True), - nn.Linear(hidden_size, output_size) - ) - - self.input_emb.apply(init_weight) - self.output_net.apply(init_weight) - self.hidden_size = hidden_size - self.hidden = nn.Parameter(torch.randn((2, 1, self.hidden_size), requires_grad=True)) - - # input(batch_size, seq_len, dim) - def forward(self, inputs, m_lens): - num_samples = inputs.shape[0] - - input_embs = self.input_emb(inputs) - hidden = self.hidden.repeat(1, num_samples, 1) - - cap_lens = m_lens.data.tolist() - emb = pack_padded_sequence(input_embs, cap_lens, batch_first=True, enforce_sorted=False) - - gru_seq, gru_last = self.gru(emb, hidden) - - gru_last = torch.cat([gru_last[0], gru_last[1]], dim=-1) - - return self.output_net(gru_last) diff --git a/spaces/weanalyze/analyze_url/utils/summarizer.py b/spaces/weanalyze/analyze_url/utils/summarizer.py deleted file mode 100644 index 8ccd397d61a573ab0f38cdab5a07c01dc29183ae..0000000000000000000000000000000000000000 --- a/spaces/weanalyze/analyze_url/utils/summarizer.py +++ /dev/null @@ -1,81 +0,0 @@ -import ast -import openai -from transformers import GPT2Tokenizer - -# Initialize tokenizer -tokenizer = GPT2Tokenizer.from_pretrained("gpt2") - -# Prompt engineering -def get_prompt(text): - # prompt_prefix = """Generate exactly 3 different and thought provoking discussion questions about given article below, and return the answers of these questions with the evidence. - - # Desired output format: [{"Q":,"A":},{"Q":,"A":},{"Q":,"A":}]. - # """ - prompt_prefix = """Generate exactly 3 different and thought provoking discussion questions about given article below, and return the answers of these questions with the evidence. - - Desired output should be a markdown format like this: - - ## Q1: - - - - ## Q2: - - - - ## Q3: - - - - """ - prompt_postfix =""" - Given article content: \"""{}.\""" - """ - prompt = prompt_prefix + prompt_postfix.format(text) - return prompt - -def limit_tokens(text, n=3000): - # Get the first n tokens from the input text - input_ids = tokenizer.encode(text, return_tensors="pt") - first_n_tokens = input_ids[:, :n] - # Convert the first n tokens back to text format - processed_text = tokenizer.decode(first_n_tokens[0], skip_special_tokens=True) - return processed_text - - -# Chat completion -def get_openai_chatcompletion(text): - """Get OpenAI Chat Completion result. 
- """ - messages = [] - processed_text = limit_tokens(text) - augmented_prompt = get_prompt(processed_text) - messages.append({"role":"user","content": augmented_prompt}) - - try: - result = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=messages, - temperature=0.7 - ) - except: - raise - return result - - -def get_analyze(result): - try: - # analyze = ast.literal_eval(result["choices"][0]['text']) - # analyze = eval(result["choices"][0]['text']) - # analyze = result["choices"][0]['text'] - analyze = result["choices"][0]["message"]["content"] - except: - raise - return analyze - - -def get_analyze_result(text): - result = get_openai_chatcompletion(text) - analyze = get_analyze(result) - return analyze - \ No newline at end of file diff --git a/spaces/weiren119/AudiogramDigitization/src/utils/audiology.py b/spaces/weiren119/AudiogramDigitization/src/utils/audiology.py deleted file mode 100644 index da0af17e2ff41980bb19b39c4525f382b5ac8e7d..0000000000000000000000000000000000000000 --- a/spaces/weiren119/AudiogramDigitization/src/utils/audiology.py +++ /dev/null @@ -1,197 +0,0 @@ -#!/usr/bin/env python3 -""" -Copyright (c) 2020, Carleton University Biomedical Informatics Collaboratory - -This source code is licensed under the MIT license found in the -LICENSE file in the root directory of this source tree. -""" - -from typing import List -import numpy as np - -VALID_FREQUENCIES = [125, 250, 500, 750, 1000, 1500, 2000, 3000, 4000, 6000, 8000, 16000] -VALID_THRESHOLDS = list(range(-10, 135, 5)) -THRESHOLDS = list(range(-10, 130, 10)) -OCTAVE_FREQS_HZ = [125, 250, 500, 1000, 2000, 4000, 8000] -INTEROCTAVE_FREQS_HZ = [750, 1500, 3000, 6000] -OCTAVE_FREQS_KHZ = [0.125, 0.25, 0.5, 1, 2, 4, 8] -INTEROCTAVE_FREQS_KHZ = [0.750, 1.5, 3, 6] - -def round_threshold(threshold: float) -> int: - """Returns the nearest multiple of 5 for the threshold input. - - Parameters - ---------- - threshold : float - The threshold snapped to the nearest multiple of 5 along the y-axis. - - Returns - ------- - float - A ``snapped`` threshold value. - """ - return VALID_THRESHOLDS[np.argmin([abs(threshold - t) for t in VALID_THRESHOLDS])] - -def round_frequency(frequency: float) -> int: - """Returns the nearest audiologically meaningful frequency. - Parameters - ---------- - frequency : float - The frequency to be snapped to the nearest clinically meaningful frequency. - Returns - ------- - float - A ``snapped`` frequency value. - """ - return VALID_FREQUENCIES[np.argmin([abs(frequency - f) for f in VALID_FREQUENCIES])] - -def round_frequency_bone(frequency: float, direction: str, epsilon: float = 0.15) -> int: - """Returns the nearest audiologically meaningful frequency. - - Parameters - ---------- - frequency : float - The frequency to be snapped to the nearest clinically meaningful frequency. - - epsilon: float - Distance (in octaves) below which a frequency is considered to be - exactly on the nearest valid frequency. (default: 0.15 octaves) - - direction: str - This parameter will influence the snapping behavior as some - audiologists draw bone conduction symbols next to the target frequency, - while other draw it right on it. - - epsilon: float - The frequency will be snapped in to the nearest frequency in the - provided direction, unless the distance to the nearest - frequency is < ε (some small distance (IN OCTAVE UNITS), in which - case the frequency will be snapped to that value. 
- - Eg: - - 1K 2K 1K 2K - | | | | - | > | will be snapped to > | - | | | | - - but if the threshold fell directly (within a very small distance of - 1.5K, it would be snapped to that. - - 1K 2K 1K 1.5K 2K - | | | | - | > | will be snapped to | > | - | | | | - - because it is really close to 1.5 and the audiologist likely - intentionally meant to indicate 1.5K rather than 1K. - - Note: ε is a tweakable parameter that can be optimized over the - dataset. - - - Returns - ------- - float - A ``snapped`` frequency value. - """ - assert direction == "left" or direction == "right" - - distances = [abs(frequency_to_octave(frequency) - frequency_to_octave(f)) for f in VALID_FREQUENCIES] - nearest_frequency_index = np.argmin(distances) - - snapped = None - if distances[nearest_frequency_index] < epsilon: - snapped = VALID_FREQUENCIES[nearest_frequency_index] - elif direction == "left": - if VALID_FREQUENCIES[nearest_frequency_index] > frequency: - snapped = VALID_FREQUENCIES[nearest_frequency_index - 1] if nearest_frequency_index > 0 else VALID_FREQUENCIES[nearest_frequency_index] - else: - snapped = VALID_FREQUENCIES[nearest_frequency_index] - else: - if VALID_FREQUENCIES[nearest_frequency_index] > frequency: - snapped = VALID_FREQUENCIES[nearest_frequency_index] - else: - snapped = VALID_FREQUENCIES[nearest_frequency_index + 1] if nearest_frequency_index < len(VALID_FREQUENCIES) - 1 else VALID_FREQUENCIES[nearest_frequency_index] - - return snapped - -def frequency_to_octave(frequency: float) -> float: - """Converts a frequency (in Hz) to an octave value (linear units). - - By convention, the 0th octave is 125Hz. - - Parameters - ---------- - frequency : float - The frequency (a positive real) to be converted to an octave value. - - Returns - ------- - float - The octave corresponding to the input frequency. - """ - return np.log(frequency / 125) / np.log(2) - -def octave_to_frequency(octave: float) -> float: - """Converts an octave to its corresponding frequency value (in Hz). - - By convention, the 0th octave is 125Hz. - - Parameters - ---------- - octave : float - The octave to put on a frequency scale. - - Returns - ------- - float - The frequency value corresponding to the octave. - """ - return 125 * 2 ** octave - -def stringify_measurement(measurement: dict) -> str: - """Returns a string describing the measurement type that is compatible - with the NIHL portal format. - - eg. An air conduction threshold for the right ear with no masking - would yield the string `AIR_UNMASKED_RIGHT`. - - Parameters - ---------- - measurement: dict - A dictionary describing a threshold. Should have the keys `ear`, - `conduction` and `masking`. - - Returns - ------- - str - The string describing the measurement type in the NIHL portal format. - """ - masking = "masked" if measurement["masking"] else "unmasked" - return f"{measurement['conduction']}_{masking}_{measurement['ear']}".upper() - - -def measurement_string_to_dict(measurement_type: str) -> dict: - """Converts a measurement type string in the NIHL portal format into - a dictionary with the equivalent information for use with the digitizer. - - eg. `AIR_UNMASKED_RIGHT` would be equivalent to the dictionary: - {`ear`: `right`, `conduction`: `air`, `masking`: False} - - Parameters - ---------- - measurement: dict - A dictionary describing a threshold. Should have the keys `ear`, - `conduction` and `masking`. - - Returns - ------- - str - The string describing the measurement type in the NIHL portal format. 
- """ - return { - "ear": "left" if "LEFT" in measurement_type else "right", - "conduction": "air" if "AIR" in measurement_type else "bone", - "masking": False if "UNMASKED" in measurement_type else True - } diff --git a/spaces/weishao2019/ChuanhuChatGPT/app.py b/spaces/weishao2019/ChuanhuChatGPT/app.py deleted file mode 100644 index 5523a648e43b4dab0e8c504fed92b0bd32bb8fbd..0000000000000000000000000000000000000000 --- a/spaces/weishao2019/ChuanhuChatGPT/app.py +++ /dev/null @@ -1,454 +0,0 @@ -# -*- coding:utf-8 -*- -import os -import logging -import sys - -import gradio as gr - -from utils import * -from presets import * -from overwrites import * -from chat_func import * - -logging.basicConfig( - level=logging.DEBUG, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -my_api_key = "" # 在这里输入你的 API 密钥 - -# if we are running in Docker -if os.environ.get("dockerrun") == "yes": - dockerflag = True -else: - dockerflag = False - -authflag = False - -if dockerflag: - my_api_key = os.environ.get("my_api_key") - if my_api_key == "empty": - logging.error("Please give a api key!") - sys.exit(1) - # auth - username = os.environ.get("USERNAME") - password = os.environ.get("PASSWORD") - if not (isinstance(username, type(None)) or isinstance(password, type(None))): - authflag = True -else: - if ( - not my_api_key - and os.path.exists("api_key.txt") - and os.path.getsize("api_key.txt") - ): - with open("api_key.txt", "r") as f: - my_api_key = f.read().strip() - if os.path.exists("auth.json"): - with open("auth.json", "r") as f: - auth = json.load(f) - username = auth["username"] - password = auth["password"] - if username != "" and password != "": - authflag = True - -gr.Chatbot.postprocess = postprocess -PromptHelper.compact_text_chunks = compact_text_chunks - -with open("custom.css", "r", encoding="utf-8") as f: - customCSS = f.read() - -with gr.Blocks( - css=customCSS, - theme=gr.themes.Soft( - primary_hue=gr.themes.Color( - c50="#02C160", - c100="rgba(2, 193, 96, 0.2)", - c200="#02C160", - c300="rgba(2, 193, 96, 0.32)", - c400="rgba(2, 193, 96, 0.32)", - c500="rgba(2, 193, 96, 1.0)", - c600="rgba(2, 193, 96, 1.0)", - c700="rgba(2, 193, 96, 0.32)", - c800="rgba(2, 193, 96, 0.32)", - c900="#02C160", - c950="#02C160", - ), - secondary_hue=gr.themes.Color( - c50="#576b95", - c100="#576b95", - c200="#576b95", - c300="#576b95", - c400="#576b95", - c500="#576b95", - c600="#576b95", - c700="#576b95", - c800="#576b95", - c900="#576b95", - c950="#576b95", - ), - neutral_hue=gr.themes.Color( - name="gray", - c50="#f9fafb", - c100="#f3f4f6", - c200="#e5e7eb", - c300="#d1d5db", - c400="#B2B2B2", - c500="#808080", - c600="#636363", - c700="#515151", - c800="#393939", - c900="#272727", - c950="#171717", - ), - radius_size=gr.themes.sizes.radius_sm, - ).set( - button_primary_background_fill="#06AE56", - button_primary_background_fill_dark="#06AE56", - button_primary_background_fill_hover="#07C863", - button_primary_border_color="#06AE56", - button_primary_border_color_dark="#06AE56", - button_primary_text_color="#FFFFFF", - button_primary_text_color_dark="#FFFFFF", - button_secondary_background_fill="#F2F2F2", - button_secondary_background_fill_dark="#2B2B2B", - button_secondary_text_color="#393939", - button_secondary_text_color_dark="#FFFFFF", - # background_fill_primary="#F7F7F7", - # background_fill_primary_dark="#1F1F1F", - block_title_text_color="*primary_500", - block_title_background_fill="*primary_100", - input_background_fill="#F6F6F6", - ), -) as demo: - history = 
gr.State([]) - token_count = gr.State([]) - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - user_api_key = gr.State(my_api_key) - TRUECOMSTANT = gr.State(True) - FALSECONSTANT = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - gr.HTML(title) - status_display = gr.Markdown(get_geoip(), elem_id="status_display") - - with gr.Row(scale=1).style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(scale=1): - chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%") - with gr.Row(scale=1): - with gr.Column(scale=12): - user_input = gr.Textbox( - show_label=False, placeholder="在这里输入" - ).style(container=False) - with gr.Column(min_width=70, scale=1): - submitBtn = gr.Button("发送", variant="primary") - with gr.Row(scale=1): - emptyBtn = gr.Button( - "🧹 新的对话", - ) - retryBtn = gr.Button("🔄 重新生成") - delLastBtn = gr.Button("🗑️ 删除一条对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label="ChatGPT"): - keyTxt = gr.Textbox( - show_label=True, - placeholder=f"OpenAI API-key...", - value=hide_middle_chars(my_api_key), - type="password", - visible=not HIDE_MY_KEY, - label="API-Key", - ) - model_select_dropdown = gr.Dropdown( - label="选择模型", choices=MODELS, multiselect=False, value=MODELS[0] - ) - use_streaming_checkbox = gr.Checkbox( - label="实时传输回答", value=True, visible=enable_streaming_option - ) - use_websearch_checkbox = gr.Checkbox(label="使用在线搜索", value=False) - index_files = gr.Files(label="上传索引文件", type="file", multiple=True) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入System Prompt...", - label="System prompt", - value=initial_prompt, - lines=10, - ).style(container=False) - with gr.Accordion(label="加载Prompt模板", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown( - label="选择Prompt模板集合文件", - choices=get_template_names(plain=True), - multiselect=False, - value=get_template_names(plain=True)[0], - ).style(container=False) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown( - label="从Prompt模板中加载", - choices=load_template( - get_template_names(plain=True)[0], mode=1 - ), - multiselect=False, - value=load_template( - get_template_names(plain=True)[0], mode=1 - )[0], - ).style(container=False) - - with gr.Tab(label="保存/加载"): - with gr.Accordion(label="保存/加载对话历史记录", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label="从列表中加载对话", - choices=get_history_names(plain=True), - multiselect=False, - value=get_history_names(plain=True)[0], - ) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - placeholder=f"设置文件名: 默认为.json,可选为.md", - label="设置保存文件名", - value="对话历史记录", - ).style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - exportMarkdownBtn = gr.Button("📝 导出为Markdown") - gr.Markdown("默认保存于history文件夹") - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label="高级"): - default_btn = gr.Button("🔙 恢复默认设置") - gr.Markdown("# ⚠️ 务必谨慎更改 ⚠️\n\n如果无法使用请恢复默认设置") - - with gr.Accordion("参数", open=False): - top_p = gr.Slider( - minimum=-0, - maximum=1.0, - value=1.0, - step=0.05, - interactive=True, - 
label="Top-p", - ) - temperature = gr.Slider( - minimum=-0, - maximum=2.0, - value=1.0, - step=0.1, - interactive=True, - label="Temperature", - ) - - apiurlTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入API地址...", - label="API地址", - value="https://api.openai.com/v1/chat/completions", - lines=2, - ) - changeAPIURLBtn = gr.Button("🔄 切换API地址") - proxyTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入代理地址...", - label="代理地址(示例:http://127.0.0.1:10809)", - value="", - lines=2, - ) - changeProxyBtn = gr.Button("🔄 设置代理地址") - - gr.Markdown(description) - - keyTxt.submit(submit_key, keyTxt, [user_api_key, status_display]) - keyTxt.change(submit_key, keyTxt, [user_api_key, status_display]) - # Chatbot - user_input.submit( - predict, - [ - user_api_key, - systemPromptTxt, - history, - user_input, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - user_input.submit(reset_textbox, [], [user_input]) - - submitBtn.click( - predict, - [ - user_api_key, - systemPromptTxt, - history, - user_input, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - submitBtn.click(reset_textbox, [], [user_input]) - - emptyBtn.click( - reset_state, - outputs=[chatbot, history, token_count, status_display], - show_progress=True, - ) - - retryBtn.click( - retry, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - - delLastBtn.click( - delete_last_conversation, - [chatbot, history, token_count], - [chatbot, history, token_count, status_display], - show_progress=True, - ) - - reduceTokenBtn.click( - reduce_token_size, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - gr.State(0), - model_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - - # Template - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templateFileSelectDropdown.change( - load_template, - [templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True, - ) - templateSelectDropdown.change( - get_template_content, - [promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True, - ) - - # S&L - saveHistoryBtn.click( - save_chat_history, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown]) - exportMarkdownBtn.click( - export_markdown, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown]) - historyFileSelectDropdown.change( - load_chat_history, - [historyFileSelectDropdown, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - show_progress=True, - ) - downloadFile.change( - load_chat_history, - [downloadFile, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - ) - - # Advanced - default_btn.click( - reset_default, 
[], [apiurlTxt, proxyTxt, status_display], show_progress=True - ) - changeAPIURLBtn.click( - change_api_url, - [apiurlTxt], - [status_display], - show_progress=True, - ) - changeProxyBtn.click( - change_proxy, - [proxyTxt], - [status_display], - show_progress=True, - ) - -logging.info( - colorama.Back.GREEN - + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面" - + colorama.Style.RESET_ALL -) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "川虎ChatGPT 🚀" - -if __name__ == "__main__": - # if running in Docker - if dockerflag: - if authflag: - demo.queue().launch( - server_name="0.0.0.0", server_port=7860, auth=(username, password), - favicon_path="./assets/favicon.png" - ) - else: - demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False, favicon_path="./assets/favicon.png") - # if not running in Docker - else: - if authflag: - demo.queue().launch(share=False, auth=(username, password), favicon_path="./assets/favicon.png", inbrowser=True) - else: - demo.queue().launch(share=False, favicon_path="./assets/favicon.png", inbrowser=True) # 改为 share=True 可以创建公开分享链接 - # demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - # demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - # demo.queue().launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/wishwork/Persian-LLM-Leaderboard/README.md b/spaces/wishwork/Persian-LLM-Leaderboard/README.md deleted file mode 100644 index 98f55494ee4d4721978b2a5a0c63e6c5ef3bfe94..0000000000000000000000000000000000000000 --- a/spaces/wishwork/Persian-LLM-Leaderboard/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🤗 Persian LLM Leaderboard -colorFrom: gray -colorTo: purple -sdk: streamlit -layout: wide -python_version: 3.9.17 -sdk_version: 1.24.0 -app_file: app.py -pinned: true -license: openrail -emoji: 📊 ---- \ No newline at end of file diff --git a/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/models/point_transformer.py b/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/models/point_transformer.py deleted file mode 100644 index bdec2f268029d19b00503f6539a125e770de9f59..0000000000000000000000000000000000000000 --- a/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/models/point_transformer.py +++ /dev/null @@ -1,208 +0,0 @@ -import torch -import torch.nn as nn -from StructDiffusion.utils.pointnet import farthest_point_sample, index_points, square_distance - -# adapted from https://github.com/qq456cvb/Point-Transformers - - -def sample_and_group(npoint, nsample, xyz, points): - B, N, C = xyz.shape - S = npoint - - fps_idx = farthest_point_sample(xyz, npoint) # [B, npoint] - - new_xyz = index_points(xyz, fps_idx) - new_points = index_points(points, fps_idx) - - dists = square_distance(new_xyz, xyz) # B x npoint x N - idx = dists.argsort()[:, :, :nsample] # B x npoint x K - - grouped_points = index_points(points, idx) - grouped_points_norm = grouped_points - new_points.view(B, S, 1, -1) - new_points = torch.cat([grouped_points_norm, new_points.view(B, S, 1, -1).repeat(1, 1, nsample, 1)], dim=-1) - return new_xyz, new_points - - -class Local_op(nn.Module): - def __init__(self, in_channels, out_channels): - super().__init__() - self.conv1 = nn.Conv1d(in_channels, out_channels, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm1d(out_channels) - self.relu = nn.ReLU() - - def forward(self, x): - b, n, s, d = x.size() # torch.Size([32, 512, 32, 6]) - x = x.permute(0, 1, 3, 2) - x = x.reshape(-1, d, s) - batch_size, _, N = x.size() - x = 
self.relu(self.bn1(self.conv1(x))) # B, D, N - x = torch.max(x, 2)[0] - x = x.view(batch_size, -1) - x = x.reshape(b, n, -1).permute(0, 2, 1) - return x - - -class SA_Layer(nn.Module): - def __init__(self, channels): - super().__init__() - self.q_conv = nn.Conv1d(channels, channels // 4, 1, bias=False) - self.k_conv = nn.Conv1d(channels, channels // 4, 1, bias=False) - self.q_conv.weight = self.k_conv.weight - self.v_conv = nn.Conv1d(channels, channels, 1) - self.trans_conv = nn.Conv1d(channels, channels, 1) - self.after_norm = nn.BatchNorm1d(channels) - self.act = nn.ReLU() - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x): - x_q = self.q_conv(x).permute(0, 2, 1) # b, n, c - x_k = self.k_conv(x)# b, c, n - x_v = self.v_conv(x) - energy = x_q @ x_k # b, n, n - attention = self.softmax(energy) - attention = attention / (1e-9 + attention.sum(dim=1, keepdims=True)) - x_r = x_v @ attention # b, c, n - x_r = self.act(self.after_norm(self.trans_conv(x - x_r))) - x = x + x_r - return x - - -class StackedAttention(nn.Module): - def __init__(self, channels=64): - super().__init__() - self.conv1 = nn.Conv1d(channels, channels, kernel_size=1, bias=False) - self.conv2 = nn.Conv1d(channels, channels, kernel_size=1, bias=False) - - self.bn1 = nn.BatchNorm1d(channels) - self.bn2 = nn.BatchNorm1d(channels) - - self.sa1 = SA_Layer(channels) - self.sa2 = SA_Layer(channels) - - self.relu = nn.ReLU() - - def forward(self, x): - # - # b, 3, npoint, nsample - # conv2d 3 -> 128 channels 1, 1 - # b * npoint, c, nsample - # permute reshape - batch_size, _, N = x.size() - - x = self.relu(self.bn1(self.conv1(x))) # B, D, N - x = self.relu(self.bn2(self.conv2(x))) - - x1 = self.sa1(x) - x2 = self.sa2(x1) - - x = torch.cat((x1, x2), dim=1) - - return x - - -class PointTransformerEncoderSmall(nn.Module): - - def __init__(self, output_dim=256, input_dim=6, mean_center=True): - super(PointTransformerEncoderSmall, self).__init__() - - self.mean_center = mean_center - - # map the second dim of the input from input_dim to 64 - self.conv1 = nn.Conv1d(input_dim, 64, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm1d(64) - self.gather_local_0 = Local_op(in_channels=128, out_channels=64) - self.gather_local_1 = Local_op(in_channels=128, out_channels=64) - self.pt_last = StackedAttention(channels=64) - - self.relu = nn.ReLU() - self.conv_fuse = nn.Sequential(nn.Conv1d(192, 256, kernel_size=1, bias=False), - nn.BatchNorm1d(256), - nn.LeakyReLU(negative_slope=0.2)) - - self.linear1 = nn.Linear(256, 256, bias=False) - self.bn6 = nn.BatchNorm1d(256) - self.dp1 = nn.Dropout(p=0.5) - self.linear2 = nn.Linear(256, 256) - - def forward(self, xyz, f=None): - # xyz: B, N, 3 - # f: B, N, D - center = torch.mean(xyz, dim=1) - if self.mean_center: - xyz = xyz - center.view(-1, 1, 3).repeat(1, xyz.shape[1], 1) - if f is None: - x = self.pct(xyz) - else: - x = self.pct(torch.cat([xyz, f], dim=2)) # B, output_dim - - return center, x - - def pct(self, x): - - # x: B, N, D - xyz = x[..., :3] - x = x.permute(0, 2, 1) - batch_size, _, _ = x.size() - x = self.relu(self.bn1(self.conv1(x))) # B, D, N - x = x.permute(0, 2, 1) - new_xyz, new_feature = sample_and_group(npoint=128, nsample=32, xyz=xyz, points=x) - feature_0 = self.gather_local_0(new_feature) - feature = feature_0.permute(0, 2, 1) # B, nsamples, D - new_xyz, new_feature = sample_and_group(npoint=32, nsample=16, xyz=new_xyz, points=feature) - feature_1 = self.gather_local_1(new_feature) # B, D, nsamples - - x = self.pt_last(feature_1) # B, D * 2, nsamples - x = torch.cat([x, 
feature_1], dim=1) # B, D * 3, nsamples - x = self.conv_fuse(x) - x = torch.max(x, 2)[0] - x = x.view(batch_size, -1) - - x = self.relu(self.bn6(self.linear1(x))) - x = self.dp1(x) - x = self.linear2(x) - - return x - - -class SampleAndGroup(nn.Module): - - def __init__(self, output_dim=64, input_dim=6, mean_center=True, npoints=(128, 32), nsamples=(32, 16)): - super(SampleAndGroup, self).__init__() - - self.mean_center = mean_center - self.npoints = npoints - self.nsamples = nsamples - - # map the second dim of the input from input_dim to 64 - self.conv1 = nn.Conv1d(input_dim, output_dim, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm1d(output_dim) - self.gather_local_0 = Local_op(in_channels=output_dim * 2, out_channels=output_dim) - self.gather_local_1 = Local_op(in_channels=output_dim * 2, out_channels=output_dim) - self.relu = nn.ReLU() - - def forward(self, xyz, f): - # xyz: B, N, 3 - # f: B, N, D - center = torch.mean(xyz, dim=1) - if self.mean_center: - xyz = xyz - center.view(-1, 1, 3).repeat(1, xyz.shape[1], 1) - x = self.sg(torch.cat([xyz, f], dim=2)) # B, nsamples, output_dim - - return center, x - - def sg(self, x): - - # x: B, N, D - xyz = x[..., :3] - x = x.permute(0, 2, 1) - batch_size, _, _ = x.size() - x = self.relu(self.bn1(self.conv1(x))) # B, D, N - x = x.permute(0, 2, 1) - new_xyz, new_feature = sample_and_group(npoint=self.npoints[0], nsample=self.nsamples[0], xyz=xyz, points=x) - feature_0 = self.gather_local_0(new_feature) - feature = feature_0.permute(0, 2, 1) # B, nsamples, D - new_xyz, new_feature = sample_and_group(npoint=self.npoints[1], nsample=self.nsamples[1], xyz=new_xyz, points=feature) - feature_1 = self.gather_local_1(new_feature) # B, D, nsamples - x = feature_1.permute(0, 2, 1) # B, nsamples, D - - return x \ No newline at end of file diff --git a/spaces/xiang-wuu/yolov5/utils/flask_rest_api/README.md b/spaces/xiang-wuu/yolov5/utils/flask_rest_api/README.md deleted file mode 100644 index a726acbd92043458311dd949cc09c0195cd35400..0000000000000000000000000000000000000000 --- a/spaces/xiang-wuu/yolov5/utils/flask_rest_api/README.md +++ /dev/null @@ -1,73 +0,0 @@ -# Flask REST API - -[REST](https://en.wikipedia.org/wiki/Representational_state_transfer) [API](https://en.wikipedia.org/wiki/API)s are -commonly used to expose Machine Learning (ML) models to other services. This folder contains an example REST API -created using Flask to expose the YOLOv5s model from [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5/). - -## Requirements - -[Flask](https://palletsprojects.com/p/flask/) is required. 
Install with: - -```shell -$ pip install Flask -``` - -## Run - -After Flask installation run: - -```shell -$ python3 restapi.py --port 5000 -``` - -Then use [curl](https://curl.se/) to perform a request: - -```shell -$ curl -X POST -F image=@zidane.jpg 'http://localhost:5000/v1/object-detection/yolov5s' -``` - -The model inference results are returned as a JSON response: - -```json -[ - { - "class": 0, - "confidence": 0.8900438547, - "height": 0.9318675399, - "name": "person", - "width": 0.3264600933, - "xcenter": 0.7438579798, - "ycenter": 0.5207948685 - }, - { - "class": 0, - "confidence": 0.8440024257, - "height": 0.7155083418, - "name": "person", - "width": 0.6546785235, - "xcenter": 0.427829951, - "ycenter": 0.6334488392 - }, - { - "class": 27, - "confidence": 0.3771208823, - "height": 0.3902671337, - "name": "tie", - "width": 0.0696444362, - "xcenter": 0.3675483763, - "ycenter": 0.7991207838 - }, - { - "class": 27, - "confidence": 0.3527112305, - "height": 0.1540903747, - "name": "tie", - "width": 0.0336618312, - "xcenter": 0.7814827561, - "ycenter": 0.5065554976 - } -] -``` - -An example python script to perform inference using [requests](https://docs.python-requests.org/en/master/) is given -in `example_request.py` diff --git a/spaces/xjsyy/bingo-gpt/Dockerfile b/spaces/xjsyy/bingo-gpt/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/xjsyy/bingo-gpt/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/xl2533/FinDoc/build_index/unit_test/__init__.py b/spaces/xl2533/FinDoc/build_index/unit_test/__init__.py deleted file mode 100644 index 1868e1b8af4eaafbc633df6daabe7d5b3ebcf710..0000000000000000000000000000000000000000 --- a/spaces/xl2533/FinDoc/build_index/unit_test/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# -*-coding:utf-8 -*- \ No newline at end of file diff --git a/spaces/xuxw98/TAPA/generate/full.py b/spaces/xuxw98/TAPA/generate/full.py deleted file mode 100644 index 443a75e32e40089ee74cc5545701b553f9537c2b..0000000000000000000000000000000000000000 --- a/spaces/xuxw98/TAPA/generate/full.py +++ /dev/null @@ -1,103 +0,0 @@ -import sys -import time -import warnings -from pathlib import Path -from typing import Optional - -import lightning as L -import torch - -# support running without installing as a package -wd = Path(__file__).absolute().parent.parent -sys.path.append(str(wd)) - -from lit_llama import LLaMA, Tokenizer -from lit_llama.utils import quantization -from scripts.prepare_alpaca import generate_prompt -from generate import generate - - -def main( - prompt: str = "Hello, my name is", - *, - num_samples: int = 1, - max_new_tokens: int = 50, - top_k: int = 200, - temperature: float = 0.8, - checkpoint_path: Optional[Path] = None, - tokenizer_path: Path = Path("checkpoints/lit-llama/tokenizer.model"), - model_size: str = "7B", - quantize: Optional[str] = None, -) -> None: - """Generates text samples based on a pre-trained LLaMA model and tokenizer. - - Args: - prompt: The prompt string to use for generating the samples. - num_samples: The number of text samples to generate. - max_new_tokens: The number of generation steps to take. - top_k: The number of top most probable tokens to consider in the sampling process. - temperature: A value controlling the randomness of the sampling process. Higher values result in more random - samples. 
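- Values close to zero make sampling nearly deterministic, while top_k caps how many candidate tokens are considered at each step.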
- checkpoint_path: The checkpoint path to load. - tokenizer_path: The tokenizer path to load. - model_size: The model size to load. - quantize: Whether to quantize the model and using which method: - ``"llm.int8"``: LLM.int8() mode, - ``"gptq.int4"``: GPTQ 4-bit mode. - """ - if not checkpoint_path: - checkpoint_path = Path(f"checkpoints/lit-llama/{model_size}/lit-llama.pth") - assert checkpoint_path.is_file(), checkpoint_path - assert tokenizer_path.is_file(), tokenizer_path - - precision = "bf16-true" if torch.cuda.is_available() and torch.cuda.is_bf16_supported() else "32-true" - fabric = L.Fabric(devices=1, precision=precision) - - print("Loading model ...", file=sys.stderr) - t0 = time.time() - - with fabric.init_module(empty_init=True), quantization(mode=quantize): - model = LLaMA.from_name(model_size) - - checkpoint = torch.load(checkpoint_path) - model.load_state_dict(checkpoint) - print(f"Time to load model: {time.time() - t0:.02f} seconds.", file=sys.stderr) - - model.eval() - model = fabric.setup(model) - - tokenizer = Tokenizer(tokenizer_path) - sample = {"instruction": prompt, "input": input} - prompt = generate_prompt(sample) - encoded = tokenizer.encode(prompt, bos=True, eos=False, device=fabric.device) - prompt_length = encoded.size(0) - - L.seed_everything(1234) - for i in range(num_samples): - t0 = time.perf_counter() - y = generate(model, encoded, max_new_tokens, temperature=temperature, top_k=top_k) - t = time.perf_counter() - t0 - - model.reset_cache() - print(tokenizer.decode(y)) - tokens_generated = y.size(0) - prompt_length - print(f"Time for inference {i + 1}: {t:.02f} sec total, {tokens_generated / t:.02f} tokens/sec", file=sys.stderr) - if fabric.device.type == "cuda": - print(f"Memory used: {torch.cuda.max_memory_reserved() / 1e9:.02f} GB", file=sys.stderr) - - -if __name__ == "__main__": - from jsonargparse import CLI - - torch.set_float32_matmul_precision("high") - warnings.filterwarnings( - # Triggered internally at ../aten/src/ATen/EmptyTensor.cpp:31 - "ignore", - message="ComplexHalf support is experimental and many operators don't support it yet" - ) - warnings.filterwarnings( - # Triggered in bitsandbytes/autograd/_functions.py:298 - "ignore", - message="MatMul8bitLt: inputs will be cast from torch.bfloat16 to float16 during quantization", - ) - CLI(main) diff --git a/spaces/yaosynge/bingAI/README.md b/spaces/yaosynge/bingAI/README.md deleted file mode 100644 index fcc0fb7c91d118e942b42d6eb89bba940408a7cb..0000000000000000000000000000000000000000 --- a/spaces/yaosynge/bingAI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: BingAI -emoji: 🐠 -colorFrom: pink -colorTo: purple -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yash-srivastava19/CodeSmith/app.py b/spaces/yash-srivastava19/CodeSmith/app.py deleted file mode 100644 index af4e382d2b7192f85dcaf8652c6fb8006cd0d9e5..0000000000000000000000000000000000000000 --- a/spaces/yash-srivastava19/CodeSmith/app.py +++ /dev/null @@ -1,45 +0,0 @@ -from langchain import PromptTemplate, LLMChain -import chainlit as cl -from custom_llm import CustomLLM -from langchain.prompts import ( - ChatPromptTemplate, - SystemMessagePromptTemplate, -) - -template = """ Write a code for the following problem : -{question} - -Code: -""" - - -@cl.on_chat_start -def factory(): - system_message_prompt = SystemMessagePromptTemplate.from_template(template) - - prompt = 
ChatPromptTemplate.from_messages([system_message_prompt]) - llm = CustomLLM() - - llm_chain = LLMChain(prompt=prompt, llm=llm, verbose=True,) - - cl.user_session.set("llm_chain", llm_chain) - - - -@cl.on_message -async def main(message): - llm_chain = cl.user_session.get("llm_chain") - - res = await llm_chain.acall(message, callbacks=[cl.AsyncLangchainCallbackHandler()]) - - await cl.Message(content=res["text"]).send() - - - -@cl.author_rename # This will be particularly useful when we want to customize this thing for production. -def rename(orig_author): - rename_dict = { - 'LLMChain': 'Scooby' - } - return rename_dict.get(orig_author, orig_author) - diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/TransportPanel/TempoForm.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/TransportPanel/TempoForm.tsx deleted file mode 100644 index 7068d067a3188b3e795b31f5ce761e514ba4fa5b..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/components/TransportPanel/TempoForm.tsx +++ /dev/null @@ -1,84 +0,0 @@ -import styled from "@emotion/styled" -import { observer } from "mobx-react-lite" -import { FC } from "react" -import { DEFAULT_TEMPO } from "../../../common/player" -import { useStores } from "../../hooks/useStores" - -const TempoInput = styled.input` - background: transparent; - -webkit-appearance: none; - border: none; - color: inherit; - font-size: inherit; - font-family: inherit; - width: 5em; - text-align: center; - outline: none; - font-family: "Roboto Mono", monospace; - font-size: 1rem; - padding: 0.3rem 0; - - &::-webkit-inner-spin-button { - -webkit-appearance: none; - margin: 0; - } -` - -const TempoWrapper = styled.div` - display: flex; - align-items: center; - border: 1px solid transparent; - padding-left: 0.75rem; - border-radius: 0.25rem; - - label { - font-size: 0.6rem; - color: ${({ theme }) => theme.secondaryTextColor}; - } - - &:focus-within { - border: 1px solid ${({ theme }) => theme.dividerColor}; - background: #ffffff14; - } -` - -export const TempoForm: FC = observer(() => { - const { - song, - pianoRollStore: { currentTempo }, - player, - } = useStores() - const tempo = currentTempo ?? 
DEFAULT_TEMPO - - const changeTempo = (tempo: number) => { - const fixedTempo = Math.max(1, Math.min(512, tempo)) - song.conductorTrack?.setTempo(fixedTempo, player.position) - player.currentTempo = fixedTempo - } - - const onKeyPressTempo = (e: React.KeyboardEvent) => { - if (e.key === "Enter") { - e.preventDefault() - e.currentTarget.blur() - } - } - - const onChangeTempo = (e: React.ChangeEvent) => - changeTempo(parseFloat(e.target.value)) - - return ( - - - - - ) -}) diff --git a/spaces/yerfor/SyntaSpeech/data_gen/tts/wav_processors/base_processor.py b/spaces/yerfor/SyntaSpeech/data_gen/tts/wav_processors/base_processor.py deleted file mode 100644 index e8200dc58a9388ac94a5ec34b8a65f75e380255b..0000000000000000000000000000000000000000 --- a/spaces/yerfor/SyntaSpeech/data_gen/tts/wav_processors/base_processor.py +++ /dev/null @@ -1,25 +0,0 @@ -REGISTERED_WAV_PROCESSORS = {} - - -def register_wav_processors(name): - def _f(cls): - REGISTERED_WAV_PROCESSORS[name] = cls - return cls - - return _f - - -def get_wav_processor_cls(name): - return REGISTERED_WAV_PROCESSORS.get(name, None) - - -class BaseWavProcessor: - @property - def name(self): - raise NotImplementedError - - def output_fn(self, input_fn): - return f'{input_fn[:-4]}_{self.name}.wav' - - def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args): - raise NotImplementedError diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/face_model/gpen_model.py b/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/face_model/gpen_model.py deleted file mode 100644 index f65a16947e592a6b36d6e5e8273bb34d2648b9fd..0000000000000000000000000000000000000000 --- a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/face_model/gpen_model.py +++ /dev/null @@ -1,941 +0,0 @@ -""" -@paper: GAN Prior Embedded Network for Blind Face Restoration in the Wild (CVPR2021) -@author: yangxy (yangtao9009@gmail.com) -""" -import math -import random -import functools -import operator -import itertools - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - -from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2, device="cpu"): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - self.device = device - - def forward(self, input): - out = upfirdn2d( - input, self.kernel, up=self.factor, down=1, pad=self.pad, device=self.device - ) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2, device="cpu"): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - self.device = device - - def forward(self, input): - out = upfirdn2d( - input, self.kernel, up=1, down=self.factor, pad=self.pad, device=self.device - ) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, 
upsample_factor=1, device="cpu"): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer("kernel", kernel) - - self.pad = pad - self.device = device - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad, device=self.device) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]}," - f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})" - ) - - -class EqualLinear(nn.Module): - def __init__( - self, - in_dim, - out_dim, - bias=True, - bias_init=0, - lr_mul=1, - activation=None, - device="cpu", - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - self.device = device - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul, device=self.device) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})" - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - device="cpu", - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur( - blur_kernel, pad=(pad0, pad1), upsample_factor=factor, device=device - ) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), device=device) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - 
f"{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, " - f"upsample={self.upsample}, downsample={self.downsample})" - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self, isconcat=True): - super().__init__() - - self.isconcat = isconcat - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, channel, height, width = image.shape - noise = image.new_empty(batch, channel, height, width).normal_() - - if self.isconcat: - return torch.cat((image, self.weight * noise), dim=1) - else: - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - isconcat=True, - device="cpu", - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - device=device, - ) - - self.noise = NoiseInjection(isconcat) - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - feat_multiplier = 2 if isconcat else 1 - self.activate = FusedLeakyReLU(out_channel * feat_multiplier, device=device) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__( - self, - in_channel, - style_dim, - upsample=True, - blur_kernel=[1, 3, 3, 1], - device="cpu", - ): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel, device=device) - - self.conv = ModulatedConv2d( - 
in_channel, 3, 1, style_dim, demodulate=False, device=device - ) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - isconcat=True, - narrow=1, - device="cpu", - ): - super().__init__() - - self.size = size - self.n_mlp = n_mlp - self.style_dim = style_dim - self.feat_multiplier = 2 if isconcat else 1 - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, - style_dim, - lr_mul=lr_mlp, - activation="fused_lrelu", - device=device, - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: int(512 * narrow), - 8: int(512 * narrow), - 16: int(512 * narrow), - 32: int(512 * narrow), - 64: int(256 * channel_multiplier * narrow), - 128: int(128 * channel_multiplier * narrow), - 256: int(64 * channel_multiplier * narrow), - 512: int(32 * channel_multiplier * narrow), - 1024: int(16 * channel_multiplier * narrow), - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], - self.channels[4], - 3, - style_dim, - blur_kernel=blur_kernel, - isconcat=isconcat, - device=device, - ) - self.to_rgb1 = ToRGB( - self.channels[4] * self.feat_multiplier, - style_dim, - upsample=False, - device=device, - ) - - self.log_size = int(math.log(size, 2)) - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - - in_channel = self.channels[4] - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel * self.feat_multiplier, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - isconcat=isconcat, - device=device, - ) - ) - - self.convs.append( - StyledConv( - out_channel * self.feat_multiplier, - out_channel, - 3, - style_dim, - blur_kernel=blur_kernel, - isconcat=isconcat, - device=device, - ) - ) - - self.to_rgbs.append( - ToRGB(out_channel * self.feat_multiplier, style_dim, device=device) - ) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - ): - if not input_is_latent: - styles = [self.style(s) for s in styles] - - if noise is None: - """ - noise = [None] * (2 * (self.log_size - 2) + 1) - """ - noise = [] - batch = styles[0].shape[0] - for i in range(self.n_mlp + 1): - size = 2 ** (i + 2) - noise.append( - torch.randn( - batch, self.channels[size], size, size, device=styles[0].device - ) - ) - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - 
truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.n_latent - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - - image = skip - - if return_latents: - return image, latent - - else: - # return image, None - return image - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - device="cpu", - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1), device=device)) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel, device=device)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1], device="cpu"): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3, device=device) - self.conv2 = ConvLayer( - in_channel, out_channel, 3, downsample=True, device=device - ) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class FullGenerator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - isconcat=True, - narrow=1, - device="cpu", - ): - super().__init__() - channels = { - 4: int(512 * narrow), - 8: int(512 * narrow), - 16: int(512 * narrow), - 32: int(512 * narrow), - 64: int(256 * channel_multiplier * narrow), - 128: int(128 * channel_multiplier * narrow), - 256: int(64 * channel_multiplier * narrow), - 512: int(32 * channel_multiplier * narrow), - 1024: int(16 * channel_multiplier * narrow), - } - - self.log_size = int(math.log(size, 2)) - self.generator = Generator( - size, - style_dim, - n_mlp, - channel_multiplier=channel_multiplier, - blur_kernel=blur_kernel, - lr_mlp=lr_mlp, - isconcat=isconcat, - narrow=narrow, - device=device, - ) - - conv = [ConvLayer(3, channels[size], 1, device=device)] - self.ecd0 = nn.Sequential(*conv) - in_channel = channels[size] - - self.names = ["ecd%d" % i for i in range(self.log_size - 1)] - for i in range(self.log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - # conv = [ResBlock(in_channel, out_channel, blur_kernel)] - conv = [ - ConvLayer(in_channel, 
out_channel, 3, downsample=True, device=device) - ] - setattr(self, self.names[self.log_size - i + 1], nn.Sequential(*conv)) - in_channel = out_channel - self.final_linear = nn.Sequential( - EqualLinear( - channels[4] * 4 * 4, style_dim, activation="fused_lrelu", device=device - ) - ) - - def forward( - self, - inputs, - return_latents=True, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - ): - noise = [] - for i in range(self.log_size - 1): - ecd = getattr(self, self.names[i]) - inputs = ecd(inputs) - noise.append(inputs) - # print(inputs.shape) - inputs = inputs.view(inputs.shape[0], -1) - outs = self.final_linear(inputs) - # print(outs.shape) - noise = list( - itertools.chain.from_iterable(itertools.repeat(x, 2) for x in noise) - )[::-1] - outs = self.generator( - [outs], - return_latents, - inject_index, - truncation, - truncation_latent, - input_is_latent, - noise=noise[1:], - ) - return outs - - -class Discriminator(nn.Module): - def __init__( - self, - size, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - narrow=1, - device="cpu", - ): - super().__init__() - - channels = { - 4: int(512 * narrow), - 8: int(512 * narrow), - 16: int(512 * narrow), - 32: int(512 * narrow), - 64: int(256 * channel_multiplier * narrow), - 128: int(128 * channel_multiplier * narrow), - 256: int(64 * channel_multiplier * narrow), - 512: int(32 * channel_multiplier * narrow), - 1024: int(16 * channel_multiplier * narrow), - } - - convs = [ConvLayer(3, channels[size], 1, device=device)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel, device=device)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3, device=device) - self.final_linear = nn.Sequential( - EqualLinear( - channels[4] * 4 * 4, - channels[4], - activation="fused_lrelu", - device=device, - ), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - return out - - -class FullGenerator_SR(nn.Module): - def __init__( - self, - in_size, - out_size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - isconcat=True, - narrow=1, - device="cpu", - ): - super().__init__() - channels = { - 4: int(512 * narrow), - 8: int(512 * narrow), - 16: int(512 * narrow), - 32: int(512 * narrow), - 64: int(256 * channel_multiplier * narrow), - 128: int(128 * channel_multiplier * narrow), - 256: int(64 * channel_multiplier * narrow), - 512: int(32 * channel_multiplier * narrow), - 1024: int(16 * channel_multiplier * narrow), - 2048: int(8 * channel_multiplier * narrow), - } - - self.log_insize = int(math.log(in_size, 2)) - self.log_outsize = int(math.log(out_size, 2)) - self.generator = Generator( - out_size, - style_dim, - n_mlp, - channel_multiplier=channel_multiplier, - 
blur_kernel=blur_kernel, - lr_mlp=lr_mlp, - isconcat=isconcat, - narrow=narrow, - device=device, - ) - - conv = [ConvLayer(3, channels[in_size], 1, device=device)] - self.ecd0 = nn.Sequential(*conv) - in_channel = channels[in_size] - - self.names = ["ecd%d" % i for i in range(self.log_insize - 1)] - for i in range(self.log_insize, 2, -1): - out_channel = channels[2 ** (i - 1)] - # conv = [ResBlock(in_channel, out_channel, blur_kernel)] - conv = [ - ConvLayer(in_channel, out_channel, 3, downsample=True, device=device) - ] - setattr(self, self.names[self.log_insize - i + 1], nn.Sequential(*conv)) - in_channel = out_channel - self.final_linear = nn.Sequential( - EqualLinear( - channels[4] * 4 * 4, style_dim, activation="fused_lrelu", device=device - ) - ) - - def forward( - self, - inputs, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - ): - noise = [] - for i in range(self.log_outsize - self.log_insize): - noise.append(None) - for i in range(self.log_insize - 1): - ecd = getattr(self, self.names[i]) - inputs = ecd(inputs) - noise.append(inputs) - # print(inputs.shape) - inputs = inputs.view(inputs.shape[0], -1) - outs = self.final_linear(inputs) - # print(outs.shape) - noise = list( - itertools.chain.from_iterable(itertools.repeat(x, 2) for x in noise) - )[::-1] - image, latent = self.generator( - [outs], - return_latents, - inject_index, - truncation, - truncation_latent, - input_is_latent, - noise=noise[1:], - ) - return image, latent diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/arcface/mouth_net_pl.py b/spaces/ygtxr1997/ReliableSwap_Demo/third_party/arcface/mouth_net_pl.py deleted file mode 100644 index 21c66ef0eea68dc873d08cbc617395c2fea96e65..0000000000000000000000000000000000000000 --- a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/arcface/mouth_net_pl.py +++ /dev/null @@ -1,358 +0,0 @@ -import os.path - -import torch -import torchvision -import torch.nn.functional as F -from torch.utils.data import DataLoader -import pytorch_lightning as pl - -import numpy as np -import sklearn -from sklearn.metrics import roc_curve, auc -from scipy.spatial.distance import cdist - -from third_party.arcface.mouth_net import MouthNet -from third_party.arcface.margin_loss import Softmax, AMArcFace, AMCosFace -from third_party.arcface.load_dataset import MXFaceDataset, EvalDataset -from third_party.bisenet.bisenet import BiSeNet - - -class MouthNetPL(pl.LightningModule): - def __init__( - self, - num_classes: int, - batch_size: int = 256, - dim_feature: int = 128, - header_type: str = 'AMArcFace', - header_params: tuple = (64.0, 0.5, 0.0, 0.0), # (s, m, a, k) - rec_folder: str = "/gavin/datasets/msml/ms1m-retinaface", - learning_rate: int = 0.1, - crop: tuple = (0, 0, 112, 112), # (w1,h1,w2,h2) - ): - super(MouthNetPL, self).__init__() - - # self.img_size = (112, 112) - - ''' mouth feature extractor ''' - bisenet = BiSeNet(19) - bisenet.load_state_dict( - torch.load( - "/gavin/datasets/hanbang/79999_iter.pth", - map_location="cpu", - ) - ) - bisenet.eval() - bisenet.requires_grad_(False) - self.mouth_net = MouthNet( - bisenet=None, - feature_dim=dim_feature, - crop_param=crop, - iresnet_pretrained=False, - ) - - ''' head & loss ''' - self.automatic_optimization = False - self.dim_feature = dim_feature - self.num_classes = num_classes - self._prepare_header(header_type, header_params) - self.cls_criterion = torch.nn.CrossEntropyLoss() - self.learning_rate = learning_rate - - ''' dataset ''' - assert 
os.path.exists(rec_folder) - self.rec_folder = rec_folder - self.batch_size = batch_size - self.crop_param = crop - - ''' validation ''' - - def _prepare_header(self, head_type, header_params): - dim_in = self.dim_feature - dim_out = self.num_classes - - """ Get hyper-params of header """ - s, m, a, k = header_params - - """ Choose the header """ - if 'Softmax' in head_type: - self.classification = Softmax(dim_in, dim_out, device_id=None) - elif 'AMCosFace' in head_type: - self.classification = AMCosFace(dim_in, dim_out, - device_id=None, - s=s, m=m, - a=a, k=k, - ) - elif 'AMArcFace' in head_type: - self.classification = AMArcFace(dim_in, dim_out, - device_id=None, - s=s, m=m, - a=a, k=k, - ) - else: - raise ValueError('Header type error!') - - def forward(self, x, label=None): - feat = self.mouth_net(x) - if self.training: - assert label is not None - cls = self.classification(feat, label) - return feat, cls - else: - return feat - - def training_step(self, batch, batch_idx): - opt = self.optimizers(use_pl_optimizer=True) - img, label = batch - - mouth_feat, final_cls = self(img, label) - - cls_loss = self.cls_criterion(final_cls, label) - - opt.zero_grad() - self.manual_backward(cls_loss) - torch.nn.utils.clip_grad_norm_(self.parameters(), max_norm=5, norm_type=2) - opt.step() - - ''' loss logging ''' - self.logging_dict({"cls_loss": cls_loss}, prefix="train / ") - self.logging_lr() - if batch_idx % 50 == 0 and self.local_rank == 0: - print('loss=', cls_loss) - - return cls_loss - - def training_epoch_end(self, outputs): - sch = self.lr_schedulers() - sch.step() - - lr = -1 - opts = self.trainer.optimizers - for opt in opts: - for param_group in opt.param_groups: - lr = param_group["lr"] - break - print('learning rate changed to %.6f' % lr) - - # def validation_step(self, batch, batch_idx): - # return self.test_step(batch, batch_idx) - # - # def validation_step_end(self, outputs): - # return self.test_step_end(outputs) - # - # def validation_epoch_end(self, outputs): - # return self.test_step_end(outputs) - - @staticmethod - def save_tensor(tensor: torch.Tensor, path: str, b_idx: int = 0): - tensor = (tensor + 1.) 
* 127.5 - img = tensor.permute(0, 2, 3, 1)[b_idx].cpu().numpy() - from PIL import Image - img_pil = Image.fromarray(img.astype(np.uint8)) - img_pil.save(path) - - def test_step(self, batch, batch_idx): - img1, img2, same = batch - feat1 = self.mouth_net(img1) - feat2 = self.mouth_net(img2) - return feat1, feat2, same - - def test_step_end(self, outputs): - feat1, feat2, same = outputs - feat1 = feat1.cpu().numpy() - feat2 = feat2.cpu().numpy() - same = same.cpu().numpy() - - feat1 = sklearn.preprocessing.normalize(feat1) - feat2 = sklearn.preprocessing.normalize(feat2) - - predict_label = [] - num = feat1.shape[0] - for i in range(num): - dis_cos = cdist(feat1[i, None], feat2[i, None], metric='cosine') - predict_label.append(dis_cos[0, 0]) - predict_label = np.array(predict_label) - - return { - "pred": predict_label, - "gt": same, - } - - def test_epoch_end(self, outputs): - print(outputs) - pred, same = None, None - for batch_output in outputs: - if pred is None and same is None: - pred = batch_output["pred"] - same = batch_output["gt"] - else: - pred = np.concatenate([pred, batch_output["pred"]]) - same = np.concatenate([same, batch_output["gt"]]) - print(pred.shape, same.shape) - - fpr, tpr, threshold = roc_curve(same, pred) - acc = tpr[np.argmin(np.abs(tpr - (1 - fpr)))] # choose proper threshold - print("=> verification finished, acc=%.4f" % (acc)) - - ''' save pth ''' - pth_path = "./weights/fixer_net_casia_%s.pth" % ('_'.join((str(x) for x in self.crop_param))) - self.mouth_net.save_backbone(pth_path) - print("=> model save to %s" % pth_path) - mouth_net = MouthNet( - bisenet=None, - feature_dim=self.dim_feature, - crop_param=self.crop_param - ) - mouth_net.load_backbone(pth_path) - print("=> MouthNet pth checked") - - return acc - - def logging_dict(self, log_dict, prefix=None): - for key, val in log_dict.items(): - if prefix is not None: - key = prefix + key - self.log(key, val) - - def logging_lr(self): - opts = self.trainer.optimizers - for idx, opt in enumerate(opts): - lr = None - for param_group in opt.param_groups: - lr = param_group["lr"] - break - self.log(f"lr_{idx}", lr) - - def configure_optimizers(self): - params = list(self.parameters()) - learning_rate = self.learning_rate / 512 * self.batch_size * torch.cuda.device_count() - optimizer = torch.optim.SGD(params, lr=learning_rate, - momentum=0.9, weight_decay=5e-4) - print('lr is set as %.5f due to the global batch_size %d' % (learning_rate, - self.batch_size * torch.cuda.device_count())) - - def lr_step_func(epoch): - return ((epoch + 1) / (4 + 1)) ** 2 if epoch < 0 else 0.1 ** len( - [m for m in [11, 17, 22] if m - 1 <= epoch]) # 0.1, 0.01, 0.001, 0.0001 - scheduler= torch.optim.lr_scheduler.LambdaLR( - optimizer=optimizer, lr_lambda=lr_step_func) - - return [optimizer], [scheduler] - - def train_dataloader(self): - dataset = MXFaceDataset( - root_dir=self.rec_folder, - crop_param=self.crop_param, - ) - train_loader = DataLoader( - dataset, self.batch_size, num_workers=24, shuffle=True, drop_last=True - ) - return train_loader - - def val_dataloader(self): - return self.test_dataloader() - - def test_dataloader(self): - dataset = EvalDataset( - rec_folder=self.rec_folder, - target='lfw', - crop_param=self.crop_param - ) - test_loader = DataLoader( - dataset, 20, num_workers=12, shuffle=False, drop_last=False - ) - return test_loader - - -def start_train(): - import os - import argparse - import torch - import pytorch_lightning as pl - from pytorch_lightning.callbacks import ModelCheckpoint - import wandb - from 
pytorch_lightning.loggers import WandbLogger - - parser = argparse.ArgumentParser() - parser.add_argument( - "-g", - "--gpus", - type=str, - default=None, - help="Number of gpus to use (e.g. '0,1,2,3'). Will use all if not given.", - ) - parser.add_argument("-n", "--name", type=str, required=True, help="Name of the run.") - parser.add_argument("-pj", "--project", type=str, default="mouthnet", help="Name of the project.") - - parser.add_argument("-rp", "--resume_checkpoint_path", - type=str, default=None, help="path of checkpoint for resuming", ) - parser.add_argument("-p", "--saving_folder", - type=str, default="/apdcephfs/share_1290939/gavinyuan/out", help="saving folder", ) - parser.add_argument("--wandb_resume", - type=str, default=None, help="resume wandb logging from the input id", ) - - parser.add_argument("--header_type", type=str, default="AMArcFace", help="loss type.") - - parser.add_argument("-bs", "--batch_size", type=int, default=128, help="bs.") - parser.add_argument("-fs", "--fast_dev_run", type=bool, default=False, help="pytorch.lightning fast_dev_run") - args = parser.parse_args() - args.val_targets = [] - # args.rec_folder = "/gavin/datasets/msml/ms1m-retinaface" - # num_classes = 93431 - args.rec_folder = "/gavin/datasets/msml/casia" - num_classes = 10572 - - save_path = os.path.join(args.saving_folder, args.name) - os.makedirs(save_path, exist_ok=True) - checkpoint_callback = ModelCheckpoint( - dirpath=save_path, - monitor="train / cls_loss", - save_top_k=10, - verbose=True, - every_n_train_steps=200, - ) - - torch.cuda.empty_cache() - mouth_net = MouthNetPL( - num_classes=num_classes, - batch_size=args.batch_size, - dim_feature=128, - rec_folder=args.rec_folder, - header_type=args.header_type, - crop=(28, 56, 84, 112) - ) - - if args.wandb_resume == None: - resume = "allow" - wandb_id = wandb.util.generate_id() - else: - resume = True - wandb_id = args.wandb_resume - logger = WandbLogger( - project=args.project, - entity="gavinyuan", - name=args.name, - resume=resume, - id=wandb_id, - ) - - trainer = pl.Trainer( - gpus=-1 if args.gpus is None else torch.cuda.device_count(), - callbacks=[checkpoint_callback], - logger=logger, - weights_save_path=save_path, - resume_from_checkpoint=args.resume_checkpoint_path, - gradient_clip_val=0, - max_epochs=25, - num_sanity_val_steps=1, - fast_dev_run=args.fast_dev_run, - val_check_interval=50, - progress_bar_refresh_rate=1, - distributed_backend="ddp", - benchmark=True, - ) - trainer.fit(mouth_net) - - -if __name__ == "__main__": - - start_train() diff --git a/spaces/yhavinga/netherator/README.md b/spaces/yhavinga/netherator/README.md deleted file mode 100644 index 6fc710b91867b85eaa630684ad38795172875d35..0000000000000000000000000000000000000000 --- a/spaces/yhavinga/netherator/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Netherator - teller of tales from the Netherlands -emoji: 🧙 -colorFrom: gray -colorTo: indigo -sdk: streamlit -app_file: app.py -pinned: true -sdk_version: 1.25.0 ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. 
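-For example, this Space pins its Streamlit version in the front matter shown above:
-
-```yaml
-sdk: streamlit
-sdk_version: 1.25.0
-```
-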
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convnext/modeling_tf_convnext.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convnext/modeling_tf_convnext.py deleted file mode 100644 index 1629988900aa63e4f1541c8ace89e6842ead3728..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convnext/modeling_tf_convnext.py +++ /dev/null @@ -1,566 +0,0 @@ -# coding=utf-8 -# Copyright 2022 Meta Platforms Inc. and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" TF 2.0 ConvNext model.""" - - -from __future__ import annotations - -from typing import Optional, Tuple, Union - -import numpy as np -import tensorflow as tf - -from ...activations_tf import get_tf_activation -from ...modeling_tf_outputs import TFBaseModelOutput, TFBaseModelOutputWithPooling, TFSequenceClassifierOutput -from ...modeling_tf_utils import ( - TFModelInputType, - TFPreTrainedModel, - TFSequenceClassificationLoss, - get_initializer, - keras_serializable, - unpack_inputs, -) -from ...tf_utils import shape_list -from ...utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings -from .configuration_convnext import ConvNextConfig - - -logger = logging.get_logger(__name__) - - -_CONFIG_FOR_DOC = "ConvNextConfig" -_CHECKPOINT_FOR_DOC = "facebook/convnext-tiny-224" - - -class TFConvNextDropPath(tf.keras.layers.Layer): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - References: - (1) github.com:rwightman/pytorch-image-models - """ - - def __init__(self, drop_path, **kwargs): - super().__init__(**kwargs) - self.drop_path = drop_path - - def call(self, x, training=None): - if training: - keep_prob = 1 - self.drop_path - shape = (tf.shape(x)[0],) + (1,) * (len(tf.shape(x)) - 1) - random_tensor = keep_prob + tf.random.uniform(shape, 0, 1) - random_tensor = tf.floor(random_tensor) - return (x / keep_prob) * random_tensor - return x - - -class TFConvNextEmbeddings(tf.keras.layers.Layer): - """This class is comparable to (and inspired by) the SwinEmbeddings class - found in src/transformers/models/swin/modeling_swin.py. 
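- It maps pixel values of shape (batch_size, num_channels, height, width) to patch embeddings of shape (batch_size, height/patch_size, width/patch_size, hidden_sizes[0]) with a single strided convolution followed by layer normalization.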
- """ - - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.patch_embeddings = tf.keras.layers.Conv2D( - filters=config.hidden_sizes[0], - kernel_size=config.patch_size, - strides=config.patch_size, - name="patch_embeddings", - kernel_initializer=get_initializer(config.initializer_range), - bias_initializer="zeros", - ) - self.layernorm = tf.keras.layers.LayerNormalization(epsilon=1e-6, name="layernorm") - self.num_channels = config.num_channels - - def call(self, pixel_values): - if isinstance(pixel_values, dict): - pixel_values = pixel_values["pixel_values"] - - num_channels = shape_list(pixel_values)[1] - if tf.executing_eagerly() and num_channels != self.num_channels: - raise ValueError( - "Make sure that the channel dimension of the pixel values match with the one set in the configuration." - ) - - # When running on CPU, `tf.keras.layers.Conv2D` doesn't support `NCHW` format. - # So change the input format from `NCHW` to `NHWC`. - # shape = (batch_size, in_height, in_width, in_channels=num_channels) - pixel_values = tf.transpose(pixel_values, perm=(0, 2, 3, 1)) - - embeddings = self.patch_embeddings(pixel_values) - embeddings = self.layernorm(embeddings) - return embeddings - - -class TFConvNextLayer(tf.keras.layers.Layer): - """This corresponds to the `Block` class in the original implementation. - - There are two equivalent implementations: [DwConv, LayerNorm (channels_first), Conv, GELU,1x1 Conv]; all in (N, C, - H, W) (2) [DwConv, Permute to (N, H, W, C), LayerNorm (channels_last), Linear, GELU, Linear]; Permute back - - The authors used (2) as they find it slightly faster in PyTorch. Since we already permuted the inputs to follow - NHWC ordering, we can just apply the operations straight-away without the permutation. - - Args: - config ([`ConvNextConfig`]): Model configuration class. - dim (`int`): Number of input channels. - drop_path (`float`): Stochastic depth rate. Default: 0.0. - """ - - def __init__(self, config, dim, drop_path=0.0, **kwargs): - super().__init__(**kwargs) - self.dim = dim - self.config = config - self.dwconv = tf.keras.layers.Conv2D( - filters=dim, - kernel_size=7, - padding="same", - groups=dim, - kernel_initializer=get_initializer(config.initializer_range), - bias_initializer="zeros", - name="dwconv", - ) # depthwise conv - self.layernorm = tf.keras.layers.LayerNormalization( - epsilon=1e-6, - name="layernorm", - ) - self.pwconv1 = tf.keras.layers.Dense( - units=4 * dim, - kernel_initializer=get_initializer(config.initializer_range), - bias_initializer="zeros", - name="pwconv1", - ) # pointwise/1x1 convs, implemented with linear layers - self.act = get_tf_activation(config.hidden_act) - self.pwconv2 = tf.keras.layers.Dense( - units=dim, - kernel_initializer=get_initializer(config.initializer_range), - bias_initializer="zeros", - name="pwconv2", - ) - # Using `layers.Activation` instead of `tf.identity` to better control `training` - # behaviour. 
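- # Stochastic depth: during training TFConvNextDropPath zeroes the residual
- # branch for a random subset of samples and rescales the survivors by
- # 1/keep_prob; when drop_path == 0, a linear Activation acts as an
- # identity passthrough.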
- self.drop_path = ( - TFConvNextDropPath(drop_path, name="drop_path") - if drop_path > 0.0 - else tf.keras.layers.Activation("linear", name="drop_path") - ) - - def build(self, input_shape: tf.TensorShape = None): - # PT's `nn.Parameters` must be mapped to a TF layer weight to inherit the same name hierarchy (and vice-versa) - self.layer_scale_parameter = ( - self.add_weight( - shape=(self.dim,), - initializer=tf.keras.initializers.Constant(value=self.config.layer_scale_init_value), - trainable=True, - name="layer_scale_parameter", - ) - if self.config.layer_scale_init_value > 0 - else None - ) - super().build(input_shape) - - def call(self, hidden_states, training=False): - input = hidden_states - x = self.dwconv(hidden_states) - x = self.layernorm(x) - x = self.pwconv1(x) - x = self.act(x) - x = self.pwconv2(x) - - if self.layer_scale_parameter is not None: - x = self.layer_scale_parameter * x - - x = input + self.drop_path(x, training=training) - return x - - -class TFConvNextStage(tf.keras.layers.Layer): - """ConvNext stage, consisting of an optional downsampling layer + multiple residual blocks. - - Args: - config ([`ConvNextConfig`]): Model configuration class. - in_channels (`int`): Number of input channels. - out_channels (`int`): Number of output channels. - depth (`int`): Number of residual blocks. - drop_path_rates(`List[float]`): Stochastic depth rates for each layer. - """ - - def __init__( - self, config, in_channels, out_channels, kernel_size=2, stride=2, depth=2, drop_path_rates=None, **kwargs - ): - super().__init__(**kwargs) - if in_channels != out_channels or stride > 1: - self.downsampling_layer = [ - tf.keras.layers.LayerNormalization( - epsilon=1e-6, - name="downsampling_layer.0", - ), - # Inputs to this layer will follow NHWC format since we - # transposed the inputs from NCHW to NHWC in the `TFConvNextEmbeddings` - # layer. All the outputs throughout the model will be in NHWC - # from this point on until the output where we again change to - # NCHW. 
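- # A stride-2 convolution halves the spatial resolution while projecting
- # features to the next stage's channel count.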
- tf.keras.layers.Conv2D( - filters=out_channels, - kernel_size=kernel_size, - strides=stride, - kernel_initializer=get_initializer(config.initializer_range), - bias_initializer="zeros", - name="downsampling_layer.1", - ), - ] - else: - self.downsampling_layer = [tf.identity] - - drop_path_rates = drop_path_rates or [0.0] * depth - self.layers = [ - TFConvNextLayer( - config, - dim=out_channels, - drop_path=drop_path_rates[j], - name=f"layers.{j}", - ) - for j in range(depth) - ] - - def call(self, hidden_states): - for layer in self.downsampling_layer: - hidden_states = layer(hidden_states) - for layer in self.layers: - hidden_states = layer(hidden_states) - return hidden_states - - -class TFConvNextEncoder(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.stages = [] - drop_path_rates = tf.linspace(0.0, config.drop_path_rate, sum(config.depths)) - drop_path_rates = tf.split(drop_path_rates, config.depths) - drop_path_rates = [x.numpy().tolist() for x in drop_path_rates] - prev_chs = config.hidden_sizes[0] - for i in range(config.num_stages): - out_chs = config.hidden_sizes[i] - stage = TFConvNextStage( - config, - in_channels=prev_chs, - out_channels=out_chs, - stride=2 if i > 0 else 1, - depth=config.depths[i], - drop_path_rates=drop_path_rates[i], - name=f"stages.{i}", - ) - self.stages.append(stage) - prev_chs = out_chs - - def call(self, hidden_states, output_hidden_states=False, return_dict=True): - all_hidden_states = () if output_hidden_states else None - - for i, layer_module in enumerate(self.stages): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - hidden_states = layer_module(hidden_states) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, all_hidden_states] if v is not None) - - return TFBaseModelOutput(last_hidden_state=hidden_states, hidden_states=all_hidden_states) - - -@keras_serializable -class TFConvNextMainLayer(tf.keras.layers.Layer): - config_class = ConvNextConfig - - def __init__(self, config: ConvNextConfig, add_pooling_layer: bool = True, **kwargs): - super().__init__(**kwargs) - - self.config = config - self.embeddings = TFConvNextEmbeddings(config, name="embeddings") - self.encoder = TFConvNextEncoder(config, name="encoder") - self.layernorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="layernorm") - # We are setting the `data_format` like so because from here on we will revert to the - # NCHW output format - self.pooler = tf.keras.layers.GlobalAvgPool2D(data_format="channels_first") if add_pooling_layer else None - - @unpack_inputs - def call( - self, - pixel_values: TFModelInputType | None = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor]]: - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - embedding_output = self.embeddings(pixel_values, training=training) - - encoder_outputs = self.encoder( - embedding_output, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - last_hidden_state = encoder_outputs[0] - # Change to 
NCHW output format to have uniformity in the modules - last_hidden_state = tf.transpose(last_hidden_state, perm=(0, 3, 1, 2)) - pooled_output = self.layernorm(self.pooler(last_hidden_state)) - - # Change the other hidden state outputs to NCHW as well - if output_hidden_states: - hidden_states = tuple([tf.transpose(h, perm=(0, 3, 1, 2)) for h in encoder_outputs[1]]) - - if not return_dict: - hidden_states = hidden_states if output_hidden_states else () - return (last_hidden_state, pooled_output) + hidden_states - - return TFBaseModelOutputWithPooling( - last_hidden_state=last_hidden_state, - pooler_output=pooled_output, - hidden_states=hidden_states if output_hidden_states else encoder_outputs.hidden_states, - ) - - -class TFConvNextPreTrainedModel(TFPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = ConvNextConfig - base_model_prefix = "convnext" - main_input_name = "pixel_values" - - -CONVNEXT_START_DOCSTRING = r""" - This model inherits from [`TFPreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it - as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and - behavior. - - <Tip> - - TensorFlow models and layers in `transformers` accept two formats as input: - - - having all inputs as keyword arguments (like PyTorch models), or - - having all inputs as a list, tuple or dict in the first positional argument. - - The reason the second format is supported is that Keras methods prefer this format when passing inputs to models - and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just - pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second - format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with - the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first - positional argument: - - - a single Tensor with `pixel_values` only and nothing else: `model(pixel_values)` - - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: - `model([pixel_values, attention_mask])` or `model([pixel_values, attention_mask, token_type_ids])` - - a dictionary with one or several input Tensors associated to the input names given in the docstring: - `model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})` - - Note that when creating models and layers with - [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry - about any of this, as you can just pass inputs like you would to any other Python function! - - </Tip> - - Parameters: - config ([`ConvNextConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~TFPreTrainedModel.from_pretrained`] method to load the model weights. 
-""" - -CONVNEXT_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`np.ndarray`, `tf.Tensor`, `List[tf.Tensor]` ``Dict[str, tf.Tensor]` or `Dict[str, np.ndarray]` and each example must have the shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See - [`ConvNextImageProcessor.__call__`] for details. - - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. This argument can be used only in eager mode, in graph mode the value in the config will be - used instead. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in - eager mode, in graph mode the value will always be set to True. -""" - - -@add_start_docstrings( - "The bare ConvNext model outputting raw features without any specific head on top.", - CONVNEXT_START_DOCSTRING, -) -class TFConvNextModel(TFConvNextPreTrainedModel): - def __init__(self, config, *inputs, add_pooling_layer=True, **kwargs): - super().__init__(config, *inputs, **kwargs) - self.convnext = TFConvNextMainLayer(config, add_pooling_layer=add_pooling_layer, name="convnext") - - @unpack_inputs - @add_start_docstrings_to_model_forward(CONVNEXT_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=TFBaseModelOutputWithPooling, config_class=_CONFIG_FOR_DOC) - def call( - self, - pixel_values: TFModelInputType | None = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor]]: - r""" - Returns: - - Examples: - - ```python - >>> from transformers import AutoImageProcessor, TFConvNextModel - >>> from PIL import Image - >>> import requests - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224") - >>> model = TFConvNextModel.from_pretrained("facebook/convnext-tiny-224") - - >>> inputs = image_processor(images=image, return_tensors="tf") - >>> outputs = model(**inputs) - >>> last_hidden_states = outputs.last_hidden_state - ```""" - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - outputs = self.convnext( - pixel_values=pixel_values, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - if not return_dict: - return (outputs[0],) + outputs[1:] - - return TFBaseModelOutputWithPooling( - last_hidden_state=outputs.last_hidden_state, - pooler_output=outputs.pooler_output, - hidden_states=outputs.hidden_states, - ) - - -@add_start_docstrings( - """ - ConvNext Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for - ImageNet. 
- """, - CONVNEXT_START_DOCSTRING, -) -class TFConvNextForImageClassification(TFConvNextPreTrainedModel, TFSequenceClassificationLoss): - def __init__(self, config: ConvNextConfig, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.num_labels = config.num_labels - self.convnext = TFConvNextMainLayer(config, name="convnext") - - # Classifier head - self.classifier = tf.keras.layers.Dense( - units=config.num_labels, - kernel_initializer=get_initializer(config.initializer_range), - bias_initializer="zeros", - name="classifier", - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(CONVNEXT_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=TFSequenceClassifierOutput, config_class=_CONFIG_FOR_DOC) - def call( - self, - pixel_values: TFModelInputType | None = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[TFSequenceClassifierOutput, Tuple[tf.Tensor]]: - r""" - labels (`tf.Tensor` or `np.ndarray` of shape `(batch_size,)`, *optional*): - Labels for computing the image classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - - Returns: - - Examples: - - ```python - >>> from transformers import AutoImageProcessor, TFConvNextForImageClassification - >>> import tensorflow as tf - >>> from PIL import Image - >>> import requests - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224") - >>> model = TFConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224") - - >>> inputs = image_processor(images=image, return_tensors="tf") - >>> outputs = model(**inputs) - >>> logits = outputs.logits - >>> # model predicts one of the 1000 ImageNet classes - >>> predicted_class_idx = tf.math.argmax(logits, axis=-1)[0] - >>> print("Predicted class:", model.config.id2label[int(predicted_class_idx)]) - ```""" - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - outputs = self.convnext( - pixel_values, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - pooled_output = outputs.pooler_output if return_dict else outputs[1] - - logits = self.classifier(pooled_output) - loss = None if labels is None else self.hf_compute_loss(labels=labels, logits=logits) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TFSequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convnextv2/modeling_convnextv2.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convnextv2/modeling_convnextv2.py deleted file mode 100644 index 3a268c713d502adb1ad877a2a6b5b0914568d581..0000000000000000000000000000000000000000 --- 
a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convnextv2/modeling_convnextv2.py +++ /dev/null @@ -1,582 +0,0 @@ -# coding=utf-8 -# Copyright 2023 Meta Platforms, Inc. and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch ConvNextV2 model.""" - - -from typing import Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from ...activations import ACT2FN -from ...modeling_outputs import ( - BackboneOutput, - BaseModelOutputWithNoAttention, - BaseModelOutputWithPoolingAndNoAttention, - ImageClassifierOutputWithNoAttention, -) -from ...modeling_utils import PreTrainedModel -from ...utils import ( - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from ...utils.backbone_utils import BackboneMixin -from .configuration_convnextv2 import ConvNextV2Config - - -logger = logging.get_logger(__name__) - -# General docstring -_CONFIG_FOR_DOC = "ConvNextV2Config" - -# Base docstring -_CHECKPOINT_FOR_DOC = "facebook/convnextv2-tiny-1k-224" -_EXPECTED_OUTPUT_SHAPE = [1, 768, 7, 7] - -# Image classification docstring -_IMAGE_CLASS_CHECKPOINT = "facebook/convnextv2-tiny-1k-224" -_IMAGE_CLASS_EXPECTED_OUTPUT = "tabby, tabby cat" - -CONVNEXTV2_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "facebook/convnextv2-tiny-1k-224", - # See all ConvNextV2 models at https://huggingface.co/models?filter=convnextv2 -] - - -# Copied from transformers.models.beit.modeling_beit.drop_path -def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor: - """ - Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - - Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks, - however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper... - See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for changing the - layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use 'survival rate' as the - argument. 
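- -    For example, with drop_prob = 0.2 a given sample's residual branch output is zeroed with probability 0.2, and the surviving outputs are scaled by 1 / 0.8, so the expected value of the output is unchanged.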
- """ - if drop_prob == 0.0 or not training: - return input - keep_prob = 1 - drop_prob - shape = (input.shape[0],) + (1,) * (input.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = keep_prob + torch.rand(shape, dtype=input.dtype, device=input.device) - random_tensor.floor_() # binarize - output = input.div(keep_prob) * random_tensor - return output - - -# Copied from transformers.models.beit.modeling_beit.BeitDropPath with Beit->ConvNextV2 -class ConvNextV2DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).""" - - def __init__(self, drop_prob: Optional[float] = None) -> None: - super().__init__() - self.drop_prob = drop_prob - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - return drop_path(hidden_states, self.drop_prob, self.training) - - def extra_repr(self) -> str: - return "p={}".format(self.drop_prob) - - -class ConvNextV2GRN(nn.Module): - """GRN (Global Response Normalization) layer""" - - def __init__(self, dim: int): - super().__init__() - self.weight = nn.Parameter(torch.zeros(1, 1, 1, dim)) - self.bias = nn.Parameter(torch.zeros(1, 1, 1, dim)) - - def forward(self, hidden_states: torch.FloatTensor) -> torch.FloatTensor: - # Compute and normalize global spatial feature maps - global_features = torch.norm(hidden_states, p=2, dim=(1, 2), keepdim=True) - norm_features = global_features / (global_features.mean(dim=-1, keepdim=True) + 1e-6) - hidden_states = self.weight * (hidden_states * norm_features) + self.bias + hidden_states - - return hidden_states - - -# Copied from transformers.models.convnext.modeling_convnext.ConvNextLayerNorm with ConvNext->ConvNextV2 -class ConvNextV2LayerNorm(nn.Module): - r"""LayerNorm that supports two data formats: channels_last (default) or channels_first. - The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, - width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width). - """ - - def __init__(self, normalized_shape, eps=1e-6, data_format="channels_last"): - super().__init__() - self.weight = nn.Parameter(torch.ones(normalized_shape)) - self.bias = nn.Parameter(torch.zeros(normalized_shape)) - self.eps = eps - self.data_format = data_format - if self.data_format not in ["channels_last", "channels_first"]: - raise NotImplementedError(f"Unsupported data format: {self.data_format}") - self.normalized_shape = (normalized_shape,) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - if self.data_format == "channels_last": - x = torch.nn.functional.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps) - elif self.data_format == "channels_first": - input_dtype = x.dtype - x = x.float() - u = x.mean(1, keepdim=True) - s = (x - u).pow(2).mean(1, keepdim=True) - x = (x - u) / torch.sqrt(s + self.eps) - x = x.to(dtype=input_dtype) - x = self.weight[:, None, None] * x + self.bias[:, None, None] - return x - - -# Copied from transformers.models.convnext.modeling_convnext.ConvNextEmbeddings with ConvNext->ConvNextV2 -class ConvNextV2Embeddings(nn.Module): - """This class is comparable to (and inspired by) the SwinEmbeddings class - found in src/transformers/models/swin/modeling_swin.py. 
- """ - - def __init__(self, config): - super().__init__() - self.patch_embeddings = nn.Conv2d( - config.num_channels, config.hidden_sizes[0], kernel_size=config.patch_size, stride=config.patch_size - ) - self.layernorm = ConvNextV2LayerNorm(config.hidden_sizes[0], eps=1e-6, data_format="channels_first") - self.num_channels = config.num_channels - - def forward(self, pixel_values: torch.FloatTensor) -> torch.Tensor: - num_channels = pixel_values.shape[1] - if num_channels != self.num_channels: - raise ValueError( - "Make sure that the channel dimension of the pixel values match with the one set in the configuration." - ) - embeddings = self.patch_embeddings(pixel_values) - embeddings = self.layernorm(embeddings) - return embeddings - - -class ConvNextV2Layer(nn.Module): - """This corresponds to the `Block` class in the original implementation. - - There are two equivalent implementations: [DwConv, LayerNorm (channels_first), Conv, GELU,1x1 Conv]; all in (N, C, - H, W) (2) [DwConv, Permute to (N, H, W, C), LayerNorm (channels_last), Linear, GELU, Linear]; Permute back - - The authors used (2) as they find it slightly faster in PyTorch. - - Args: - config ([`ConvNextV2Config`]): Model configuration class. - dim (`int`): Number of input channels. - drop_path (`float`): Stochastic depth rate. Default: 0.0. - """ - - def __init__(self, config, dim, drop_path=0): - super().__init__() - # depthwise conv - self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim) - self.layernorm = ConvNextV2LayerNorm(dim, eps=1e-6) - # pointwise/1x1 convs, implemented with linear layers - self.pwconv1 = nn.Linear(dim, 4 * dim) - self.act = ACT2FN[config.hidden_act] - self.grn = ConvNextV2GRN(4 * dim) - self.pwconv2 = nn.Linear(4 * dim, dim) - self.drop_path = ConvNextV2DropPath(drop_path) if drop_path > 0.0 else nn.Identity() - - def forward(self, hidden_states: torch.FloatTensor) -> torch.Tensor: - input = hidden_states - x = self.dwconv(hidden_states) - # (batch_size, num_channels, height, width) -> (batch_size, height, width, num_channels) - x = x.permute(0, 2, 3, 1) - x = self.layernorm(x) - x = self.pwconv1(x) - x = self.act(x) - x = self.grn(x) - x = self.pwconv2(x) - # (batch_size, height, width, num_channels) -> (batch_size, num_channels, height, width) - x = x.permute(0, 3, 1, 2) - - x = input + self.drop_path(x) - return x - - -# Copied from transformers.models.convnext.modeling_convnext.ConvNextStage with ConvNeXT->ConvNeXTV2, ConvNext->ConvNextV2 -class ConvNextV2Stage(nn.Module): - """ConvNeXTV2 stage, consisting of an optional downsampling layer + multiple residual blocks. - - Args: - config ([`ConvNextV2Config`]): Model configuration class. - in_channels (`int`): Number of input channels. - out_channels (`int`): Number of output channels. - depth (`int`): Number of residual blocks. - drop_path_rates(`List[float]`): Stochastic depth rates for each layer. 
- """ - - def __init__(self, config, in_channels, out_channels, kernel_size=2, stride=2, depth=2, drop_path_rates=None): - super().__init__() - - if in_channels != out_channels or stride > 1: - self.downsampling_layer = nn.Sequential( - ConvNextV2LayerNorm(in_channels, eps=1e-6, data_format="channels_first"), - nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride), - ) - else: - self.downsampling_layer = nn.Identity() - drop_path_rates = drop_path_rates or [0.0] * depth - self.layers = nn.Sequential( - *[ConvNextV2Layer(config, dim=out_channels, drop_path=drop_path_rates[j]) for j in range(depth)] - ) - - def forward(self, hidden_states: torch.FloatTensor) -> torch.Tensor: - hidden_states = self.downsampling_layer(hidden_states) - hidden_states = self.layers(hidden_states) - return hidden_states - - -# Copied from transformers.models.convnext.modeling_convnext.ConvNextEncoder with ConvNext->ConvNextV2 -class ConvNextV2Encoder(nn.Module): - def __init__(self, config): - super().__init__() - self.stages = nn.ModuleList() - drop_path_rates = [ - x.tolist() for x in torch.linspace(0, config.drop_path_rate, sum(config.depths)).split(config.depths) - ] - prev_chs = config.hidden_sizes[0] - for i in range(config.num_stages): - out_chs = config.hidden_sizes[i] - stage = ConvNextV2Stage( - config, - in_channels=prev_chs, - out_channels=out_chs, - stride=2 if i > 0 else 1, - depth=config.depths[i], - drop_path_rates=drop_path_rates[i], - ) - self.stages.append(stage) - prev_chs = out_chs - - def forward( - self, - hidden_states: torch.FloatTensor, - output_hidden_states: Optional[bool] = False, - return_dict: Optional[bool] = True, - ) -> Union[Tuple, BaseModelOutputWithNoAttention]: - all_hidden_states = () if output_hidden_states else None - - for i, layer_module in enumerate(self.stages): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - hidden_states = layer_module(hidden_states) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, all_hidden_states] if v is not None) - - return BaseModelOutputWithNoAttention( - last_hidden_state=hidden_states, - hidden_states=all_hidden_states, - ) - - -# Copied from transformers.models.convnext.modeling_convnext.ConvNextPreTrainedModel with ConvNext->ConvNextV2, convnext->convnextv2 -class ConvNextV2PreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = ConvNextV2Config - base_model_prefix = "convnextv2" - main_input_name = "pixel_values" - supports_gradient_checkpointing = True - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Conv2d)): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, ConvNextV2Encoder): - module.gradient_checkpointing = value - - -CONVNEXTV2_START_DOCSTRING = r""" - This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. 
Use it - as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and - behavior. - - Parameters: - config ([`ConvNextV2Config`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -CONVNEXTV2_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using [`ConvNextImageProcessor`]. See - [`ConvNextImageProcessor.__call__`] for details. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare ConvNextV2 model outputting raw features without any specific head on top.", - CONVNEXTV2_START_DOCSTRING, -) -# Copied from transformers.models.convnext.modeling_convnext.ConvNextModel with CONVNEXT->CONVNEXTV2, ConvNext->ConvNextV2 -class ConvNextV2Model(ConvNextV2PreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.config = config - - self.embeddings = ConvNextV2Embeddings(config) - self.encoder = ConvNextV2Encoder(config) - - # final layernorm layer - self.layernorm = nn.LayerNorm(config.hidden_sizes[-1], eps=config.layer_norm_eps) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(CONVNEXTV2_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPoolingAndNoAttention, - config_class=_CONFIG_FOR_DOC, - modality="vision", - expected_output=_EXPECTED_OUTPUT_SHAPE, - ) - def forward( - self, - pixel_values: torch.FloatTensor = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPoolingAndNoAttention]: - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - embedding_output = self.embeddings(pixel_values) - - encoder_outputs = self.encoder( - embedding_output, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - last_hidden_state = encoder_outputs[0] - - # global average pooling, (N, C, H, W) -> (N, C) - pooled_output = self.layernorm(last_hidden_state.mean([-2, -1])) - - if not return_dict: - return (last_hidden_state, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPoolingAndNoAttention( - last_hidden_state=last_hidden_state, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - ) - - -@add_start_docstrings( - """ - ConvNextV2 Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for - ImageNet. 
- """, - CONVNEXTV2_START_DOCSTRING, -) -# Copied from transformers.models.convnext.modeling_convnext.ConvNextForImageClassification with CONVNEXT->CONVNEXTV2,ConvNext->ConvNextV2,convnext->convnextv2 -class ConvNextV2ForImageClassification(ConvNextV2PreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.num_labels = config.num_labels - self.convnextv2 = ConvNextV2Model(config) - - # Classifier head - self.classifier = ( - nn.Linear(config.hidden_sizes[-1], config.num_labels) if config.num_labels > 0 else nn.Identity() - ) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(CONVNEXTV2_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_IMAGE_CLASS_CHECKPOINT, - output_type=ImageClassifierOutputWithNoAttention, - config_class=_CONFIG_FOR_DOC, - expected_output=_IMAGE_CLASS_EXPECTED_OUTPUT, - ) - def forward( - self, - pixel_values: torch.FloatTensor = None, - labels: Optional[torch.LongTensor] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, ImageClassifierOutputWithNoAttention]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the image classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.convnextv2(pixel_values, output_hidden_states=output_hidden_states, return_dict=return_dict) - - pooled_output = outputs.pooler_output if return_dict else outputs[1] - - logits = self.classifier(pooled_output) - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return ImageClassifierOutputWithNoAttention( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - ) - - -@add_start_docstrings( - """ - ConvNeXT V2 backbone, to be used with frameworks like DETR and MaskFormer. 
- """, - CONVNEXTV2_START_DOCSTRING, -) -# Copied from transformers.models.convnext.modeling_convnext.ConvNextBackbone with CONVNEXT->CONVNEXTV2,ConvNext->ConvNextV2,facebook/convnext-tiny-224->facebook/convnextv2-tiny-1k-224 -class ConvNextV2Backbone(ConvNextV2PreTrainedModel, BackboneMixin): - def __init__(self, config): - super().__init__(config) - super()._init_backbone(config) - - self.embeddings = ConvNextV2Embeddings(config) - self.encoder = ConvNextV2Encoder(config) - self.num_features = [config.hidden_sizes[0]] + config.hidden_sizes - - # Add layer norms to hidden states of out_features - hidden_states_norms = {} - for stage, num_channels in zip(self._out_features, self.channels): - hidden_states_norms[stage] = ConvNextV2LayerNorm(num_channels, data_format="channels_first") - self.hidden_states_norms = nn.ModuleDict(hidden_states_norms) - - # initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(CONVNEXTV2_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=BackboneOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - pixel_values: torch.Tensor, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> BackboneOutput: - """ - Returns: - - Examples: - - ```python - >>> from transformers import AutoImageProcessor, AutoBackbone - >>> import torch - >>> from PIL import Image - >>> import requests - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> processor = AutoImageProcessor.from_pretrained("facebook/convnextv2-tiny-1k-224") - >>> model = AutoBackbone.from_pretrained("facebook/convnextv2-tiny-1k-224") - - >>> inputs = processor(image, return_tensors="pt") - >>> outputs = model(**inputs) - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - - embedding_output = self.embeddings(pixel_values) - - outputs = self.encoder( - embedding_output, - output_hidden_states=True, - return_dict=True, - ) - - hidden_states = outputs.hidden_states - - feature_maps = () - # we skip the stem - for idx, (stage, hidden_state) in enumerate(zip(self.stage_names[1:], hidden_states[1:])): - if stage in self.out_features: - hidden_state = self.hidden_states_norms[stage](hidden_state) - feature_maps += (hidden_state,) - - if not return_dict: - output = (feature_maps,) - if output_hidden_states: - output += (outputs.hidden_states,) - return output - - return BackboneOutput( - feature_maps=feature_maps, - hidden_states=outputs.hidden_states if output_hidden_states else None, - attentions=None, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlmv2/processing_layoutlmv2.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlmv2/processing_layoutlmv2.py deleted file mode 100644 index fe52c16fd250794ab9ea5f1a5e28b785a738b557..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlmv2/processing_layoutlmv2.py +++ /dev/null @@ -1,200 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Processor class for LayoutLMv2. -""" - -import warnings -from typing import List, Optional, Union - -from ...processing_utils import ProcessorMixin -from ...tokenization_utils_base import BatchEncoding, PaddingStrategy, PreTokenizedInput, TextInput, TruncationStrategy -from ...utils import TensorType - - -class LayoutLMv2Processor(ProcessorMixin): - r""" - Constructs a LayoutLMv2 processor which combines a LayoutLMv2 image processor and a LayoutLMv2 tokenizer into a - single processor. - - [`LayoutLMv2Processor`] offers all the functionalities you need to prepare data for the model. - - It first uses [`LayoutLMv2ImageProcessor`] to resize document images to a fixed size, and optionally applies OCR to - get words and normalized bounding boxes. These are then provided to [`LayoutLMv2Tokenizer`] or - [`LayoutLMv2TokenizerFast`], which turns the words and bounding boxes into token-level `input_ids`, - `attention_mask`, `token_type_ids`, `bbox`. Optionally, one can provide integer `word_labels`, which are turned - into token-level `labels` for token classification tasks (such as FUNSD, CORD). - - Args: - image_processor (`LayoutLMv2ImageProcessor`, *optional*): - An instance of [`LayoutLMv2ImageProcessor`]. The image processor is a required input. - tokenizer (`LayoutLMv2Tokenizer` or `LayoutLMv2TokenizerFast`, *optional*): - An instance of [`LayoutLMv2Tokenizer`] or [`LayoutLMv2TokenizerFast`]. The tokenizer is a required input. 
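- -    A minimal construction sketch (the checkpoint name below is only illustrative): - -    ```python - >>> from transformers import LayoutLMv2ImageProcessor, LayoutLMv2Processor, LayoutLMv2TokenizerFast - - >>> image_processor = LayoutLMv2ImageProcessor()  # apply_ocr defaults to True - >>> tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased") - >>> processor = LayoutLMv2Processor(image_processor=image_processor, tokenizer=tokenizer) - ```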
- """ - attributes = ["image_processor", "tokenizer"] - image_processor_class = "LayoutLMv2ImageProcessor" - tokenizer_class = ("LayoutLMv2Tokenizer", "LayoutLMv2TokenizerFast") - - def __init__(self, image_processor=None, tokenizer=None, **kwargs): - feature_extractor = None - if "feature_extractor" in kwargs: - warnings.warn( - "The `feature_extractor` argument is deprecated and will be removed in v5, use `image_processor`" - " instead.", - FutureWarning, - ) - feature_extractor = kwargs.pop("feature_extractor") - - image_processor = image_processor if image_processor is not None else feature_extractor - if image_processor is None: - raise ValueError("You need to specify an `image_processor`.") - if tokenizer is None: - raise ValueError("You need to specify a `tokenizer`.") - - super().__init__(image_processor, tokenizer) - - def __call__( - self, - images, - text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None, - text_pair: Optional[Union[PreTokenizedInput, List[PreTokenizedInput]]] = None, - boxes: Union[List[List[int]], List[List[List[int]]]] = None, - word_labels: Optional[Union[List[int], List[List[int]]]] = None, - add_special_tokens: bool = True, - padding: Union[bool, str, PaddingStrategy] = False, - truncation: Union[bool, str, TruncationStrategy] = False, - max_length: Optional[int] = None, - stride: int = 0, - pad_to_multiple_of: Optional[int] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - return_offsets_mapping: bool = False, - return_length: bool = False, - verbose: bool = True, - return_tensors: Optional[Union[str, TensorType]] = None, - **kwargs, - ) -> BatchEncoding: - """ - This method first forwards the `images` argument to [`~LayoutLMv2ImageProcessor.__call__`]. In case - [`LayoutLMv2ImageProcessor`] was initialized with `apply_ocr` set to `True`, it passes the obtained words and - bounding boxes along with the additional arguments to [`~LayoutLMv2Tokenizer.__call__`] and returns the output, - together with resized `images`. In case [`LayoutLMv2ImageProcessor`] was initialized with `apply_ocr` set to - `False`, it passes the words (`text`/``text_pair`) and `boxes` specified by the user along with the additional - arguments to [`~LayoutLMv2Tokenizer.__call__`] and returns the output, together with resized `images``. - - Please refer to the docstring of the above two methods for more information. - """ - # verify input - if self.image_processor.apply_ocr and (boxes is not None): - raise ValueError( - "You cannot provide bounding boxes if you initialized the image processor with apply_ocr set to True." - ) - - if self.image_processor.apply_ocr and (word_labels is not None): - raise ValueError( - "You cannot provide word labels if you initialized the image processor with apply_ocr set to True." 
- ) - - if return_overflowing_tokens is True and return_offsets_mapping is False: - raise ValueError("You cannot return overflowing tokens without returning the offsets mapping.") - - # first, apply the image processor - features = self.image_processor(images=images, return_tensors=return_tensors) - - # second, apply the tokenizer - if text is not None and self.image_processor.apply_ocr and text_pair is None: - if isinstance(text, str): - text = [text] # add batch dimension (as the image processor always adds a batch dimension) - text_pair = features["words"] - - encoded_inputs = self.tokenizer( - text=text if text is not None else features["words"], - text_pair=text_pair if text_pair is not None else None, - boxes=boxes if boxes is not None else features["boxes"], - word_labels=word_labels, - add_special_tokens=add_special_tokens, - padding=padding, - truncation=truncation, - max_length=max_length, - stride=stride, - pad_to_multiple_of=pad_to_multiple_of, - return_token_type_ids=return_token_type_ids, - return_attention_mask=return_attention_mask, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_mask, - return_offsets_mapping=return_offsets_mapping, - return_length=return_length, - verbose=verbose, - return_tensors=return_tensors, - **kwargs, - ) - - # add pixel values - images = features.pop("pixel_values") - if return_overflowing_tokens is True: - images = self.get_overflowing_images(images, encoded_inputs["overflow_to_sample_mapping"]) - encoded_inputs["image"] = images - - return encoded_inputs - - def get_overflowing_images(self, images, overflow_to_sample_mapping): - # in case there's an overflow, ensure each `input_ids` sample is mapped to its corresponding image - images_with_overflow = [] - for sample_idx in overflow_to_sample_mapping: - images_with_overflow.append(images[sample_idx]) - - if len(images_with_overflow) != len(overflow_to_sample_mapping): - raise ValueError( - "Expected length of images to be the same as the length of `overflow_to_sample_mapping`, but got" - f" {len(images_with_overflow)} and {len(overflow_to_sample_mapping)}" - ) - - return images_with_overflow - - def batch_decode(self, *args, **kwargs): - """ - This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.batch_decode`]. Please - refer to the docstring of this method for more information. - """ - return self.tokenizer.batch_decode(*args, **kwargs) - - def decode(self, *args, **kwargs): - """ - This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.decode`]. Please refer - to the docstring of this method for more information. - """ - return self.tokenizer.decode(*args, **kwargs) - - @property - def model_input_names(self): - return ["input_ids", "bbox", "token_type_ids", "attention_mask", "image"] - - @property - def feature_extractor_class(self): - warnings.warn( - "`feature_extractor_class` is deprecated and will be removed in v5. Use `image_processor_class` instead.", - FutureWarning, - ) - return self.image_processor_class - - @property - def feature_extractor(self): - warnings.warn( - "`feature_extractor` is deprecated and will be removed in v5. 
Use `image_processor` instead.", - FutureWarning, - ) - return self.image_processor diff --git a/spaces/ylacombe/accessible-mistral/README.md b/spaces/ylacombe/accessible-mistral/README.md deleted file mode 100644 index 8c13d17b7d496253baac47f11b60d15e83159201..0000000000000000000000000000000000000000 --- a/spaces/ylacombe/accessible-mistral/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Multilingual Accessible Mistral 7B -emoji: 🗺 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ysharma/ControlNet_Image_Comparison/README.md b/spaces/ysharma/ControlNet_Image_Comparison/README.md deleted file mode 100644 index 8164c3d79f4046333306ee14b0c09cfa97e8d482..0000000000000000000000000000000000000000 --- a/spaces/ysharma/ControlNet_Image_Comparison/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ControlNet_Image_Comparison -emoji: 🌖 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.18.0 -python_version: 3.10.9 -app_file: app.py -pinned: false -duplicated_from: hysts/ControlNet ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yuntian-deng/ChatGPT/README.md b/spaces/yuntian-deng/ChatGPT/README.md deleted file mode 100644 index 42c19aa9f947fbb837fd2c1320340b7af2447103..0000000000000000000000000000000000000000 --- a/spaces/yuntian-deng/ChatGPT/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Chat-with-GPT -emoji: 🚀 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: true -license: mit -duplicated_from: yuntian-deng/ChatGPT4 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/inference_main.py b/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/inference_main.py deleted file mode 100644 index df11f499b1648755c923d530bdac359cc577a80b..0000000000000000000000000000000000000000 --- a/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/inference_main.py +++ /dev/null @@ -1,161 +0,0 @@ -import io -import logging -import time -from pathlib import Path - -import librosa -import matplotlib.pyplot as plt -import numpy as np -import soundfile - -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -chunks_dict = infer_tool.read_temp("inference/chunks_temp.json") - - - -def main(): - import argparse - - parser = argparse.ArgumentParser(description='sovits4 inference') - - # Required - parser.add_argument('-m', '--model_path', type=str, default="logs/44k/G_0.pth", - help='Path to the model.') - parser.add_argument('-c', '--config_path', type=str, default="configs/config.json", - help='Path to the configuration file.') - parser.add_argument('-s', '--spk_list', type=str, nargs='+', default=['nen'], - help='Target speaker name for conversion.') - parser.add_argument('-n', '--clean_names', type=str, nargs='+', default=["君の知らない物語-src.wav"], - help='A list of wav file names located in the raw folder.') - parser.add_argument('-t', '--trans', type=int, nargs='+', default=[0], - help='Pitch adjustment, supports positive and negative (semitone) values.') - - # Optional - parser.add_argument('-a', '--auto_predict_f0', 
action='store_true', default=False, - help='Automatic pitch prediction for voice conversion. Do not enable this when converting songs as it can cause serious pitch issues.') - parser.add_argument('-cl', '--clip', type=float, default=0, - help='Forced voice slicing. Set to 0 to turn off (default); duration in seconds.') - parser.add_argument('-lg', '--linear_gradient', type=float, default=0, - help='The cross-fade length of two audio slices in seconds. If the voice sounds discontinuous after forced slicing, you can adjust this value. Otherwise, it is recommended to keep the default of 0.') - parser.add_argument('-cm', '--cluster_model_path', type=str, default="logs/44k/kmeans_10000.pt", - help='Path to the clustering model. Fill in any value if clustering is not trained.') - parser.add_argument('-cr', '--cluster_infer_ratio', type=float, default=0, - help='Proportion of the clustering solution, range 0-1. Fill in 0 if the clustering model is not trained.') - parser.add_argument('-fmp', '--f0_mean_pooling', action='store_true', default=False, - help='Apply mean filter (pooling) to f0, which may improve some hoarse sounds. Enabling this option will reduce inference speed.') - parser.add_argument('-eh', '--enhance', action='store_true', default=False, - help='Whether to use the NSF_HIFIGAN enhancer. It can improve sound quality for models trained on small datasets, but tends to degrade well-trained models, so it is turned off by default.') - - # generally keep default - parser.add_argument('-sd', '--slice_db', type=int, default=-40, - help='Loudness threshold for automatic slicing. For noisy audio it can be set to -30.') - parser.add_argument('-d', '--device', type=str, default=None, - help='Device used for inference. None means automatic selection.') - parser.add_argument('-ns', '--noice_scale', type=float, default=0.4, - help='Affects pronunciation and sound quality.') - parser.add_argument('-p', '--pad_seconds', type=float, default=0.5, - help='Due to unknown reasons, there may be abnormal noise at the beginning and end. It disappears after padding with a short silent segment.') - parser.add_argument('-wf', '--wav_format', type=str, default='flac', - help='Output format.') - parser.add_argument('-lgr', '--linear_gradient_retain', type=float, default=0.75, - help='Proportion of the cross-fade length to retain, range (0-1]. After forced slicing, the beginning and end of each segment need to be discarded.') - parser.add_argument('-eak', '--enhancer_adaptive_key', type=int, default=0, - help='Adapt the enhancer to a higher vocal range. The unit is semitones; default 0.') - parser.add_argument('-ft', '--f0_filter_threshold', type=float, default=0.05, - help='F0 filtering threshold: this parameter is valid only when f0_mean_pooling is enabled. Values range from 0 to 1. 
Reducing this value reduces the chance of the output being out of tune, but may make it sound more muffled.') - - - args = parser.parse_args() - - clean_names = args.clean_names - trans = args.trans - spk_list = args.spk_list - slice_db = args.slice_db - wav_format = args.wav_format - auto_predict_f0 = args.auto_predict_f0 - cluster_infer_ratio = args.cluster_infer_ratio - noice_scale = args.noice_scale - pad_seconds = args.pad_seconds - clip = args.clip - lg = args.linear_gradient - lgr = args.linear_gradient_retain - F0_mean_pooling = args.f0_mean_pooling - enhance = args.enhance - enhancer_adaptive_key = args.enhancer_adaptive_key - cr_threshold = args.f0_filter_threshold - - svc_model = Svc(args.model_path, args.config_path, args.device, args.cluster_model_path,enhance) - infer_tool.mkdir(["raw", "results"]) - - infer_tool.fill_a_to_b(trans, clean_names) - for clean_name, tran in zip(clean_names, trans): - raw_audio_path = f"raw/{clean_name}" - if "." not in raw_audio_path: - raw_audio_path += ".wav" - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - per_size = int(clip*audio_sr) - lg_size = int(lg*audio_sr) - lg_size_r = int(lg_size*lgr) - lg_size_c_l = (lg_size-lg_size_r)//2 - lg_size_c_r = lg_size-lg_size_r-lg_size_c_l - lg = np.linspace(0,1,lg_size_r) if lg_size!=0 else 0 - - for spk in spk_list: - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - if slice_tag: - print('skip empty segment') - _audio = np.zeros(length) - audio.extend(list(infer_tool.pad_array(_audio, length))) - continue - if per_size != 0: - datas = infer_tool.split_list_by_n(data, per_size,lg_size) - else: - datas = [data] - for k,dat in enumerate(datas): - per_length = int(np.ceil(len(dat) / audio_sr * svc_model.target_sample)) if clip!=0 else length - if clip!=0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======') - # pad the segment with silence at both ends - pad_len = int(audio_sr * pad_seconds) - dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])]) - raw_path = io.BytesIO() - soundfile.write(raw_path, dat, audio_sr, format="wav") - raw_path.seek(0) - out_audio, out_sr = svc_model.infer(spk, tran, raw_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - F0_mean_pooling = F0_mean_pooling, - enhancer_adaptive_key = enhancer_adaptive_key, - cr_threshold = cr_threshold - ) - _audio = out_audio.cpu().numpy() - pad_len = int(svc_model.target_sample * pad_seconds) - _audio = _audio[pad_len:-pad_len] - _audio = infer_tool.pad_array(_audio, per_length) - if lg_size!=0 and k!=0: - lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr != 1 else audio[-lg_size:] - lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr != 1 else _audio[0:lg_size] - lg_pre = lg1*(1-lg)+lg2*lg - audio = audio[0:-(lg_size_r+lg_size_c_r)] if lgr != 1 else audio[0:-lg_size] - audio.extend(lg_pre) - _audio = _audio[lg_size_c_l+lg_size_r:] if lgr != 1 else _audio[lg_size:] - audio.extend(list(_audio)) - key = "auto" if auto_predict_f0 else f"{tran}key" - cluster_name = "" if cluster_infer_ratio == 0 else f"_{cluster_infer_ratio}" - res_path = f'./results/{clean_name}_{key}_{spk}{cluster_name}.{wav_format}' - soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format) - 
svc_model.clear_empty() - -if __name__ == '__main__': - main() diff --git a/spaces/zenafey/fast-stable-diffusion/grutils.py b/spaces/zenafey/fast-stable-diffusion/grutils.py deleted file mode 100644 index 524ba2093012e043a9d9152672abbfa0ce7355a3..0000000000000000000000000000000000000000 --- a/spaces/zenafey/fast-stable-diffusion/grutils.py +++ /dev/null @@ -1,60 +0,0 @@ -import gradio as gr -from utils import extract_data, remove_id_and_ext -from prodiapy import Custom -import os - -pipe = Custom(os.getenv('PRODIA_API_KEY')) - - -model_list = pipe.constant("/sd/models") -model_names = {} - -for model_name in model_list: - name_without_ext = remove_id_and_ext(model_name) - model_names[name_without_ext] = model_name - - -def update_btn_start(): - return [ - gr.update(visible=False), - gr.update(visible=True) - ] - - -def update_btn_end(): - return [ - gr.update(visible=True), - gr.update(visible=False) - ] - - -def switch_to_t2i(): - return gr.Tabs.update(selected="t2i") - - -def send_to_txt2img(image): - try: - text = image.info['parameters'] - data = extract_data(text) - - if data['model'] in model_names: - model = gr.update(value=model_names[data['model']]) - else: - model = gr.update() - - result = [ - gr.update(value=data['prompt']), - gr.update(value=data['negative_prompt']) if data['negative_prompt'] is not None else gr.update(), - gr.update(value=int(data['steps'])) if data['steps'] is not None else gr.update(), - gr.update(value=int(data['seed'])) if data['seed'] is not None else gr.update(), - gr.update(value=float(data['cfg_scale'])) if data['cfg_scale'] is not None else gr.update(), - gr.update(value=int(data['w'])) if data['w'] is not None else gr.update(), - gr.update(value=int(data['h'])) if data['h'] is not None else gr.update(), - gr.update(value=data['sampler']) if data['sampler'] is not None else gr.update(), - model - ] - return result - - except Exception as e: - print(e) - return diff --git a/spaces/zeno-ml/translation-critique/README.md b/spaces/zeno-ml/translation-critique/README.md deleted file mode 100644 index 0133f02980c25d546ac3c4a1583b1945b2557894..0000000000000000000000000000000000000000 --- a/spaces/zeno-ml/translation-critique/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Translation Critique -emoji: 🏃 -colorFrom: pink -colorTo: indigo -sdk: docker -pinned: false -license: mit -fullWidth: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zhang-wei-jian/docker/node_modules/balanced-match/LICENSE.md b/spaces/zhang-wei-jian/docker/node_modules/balanced-match/LICENSE.md deleted file mode 100644 index 2cdc8e4148cc0aa1f788b25dbec4b22878644cdf..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/balanced-match/LICENSE.md +++ /dev/null @@ -1,21 +0,0 @@ -(MIT) - -Copyright (c) 2013 Julian Gruber <julian@juliangruber.com> - -Permission is hereby granted, free of charge, to any person obtaining a copy of -this software and associated documentation files (the "Software"), to deal in -the Software without restriction, including without limitation the rights to -use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies -of the Software, and to permit persons to whom the Software is furnished to do -so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. diff --git a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/major.js b/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/major.js deleted file mode 100644 index 4283165e9d27198f495588d07f3bc0c26a9ab83e..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/major.js +++ /dev/null @@ -1,3 +0,0 @@ -const SemVer = require('../classes/semver') -const major = (a, loose) => new SemVer(a, loose).major -module.exports = major diff --git a/spaces/zhanpj/ChatGPT/run_macOS.command b/spaces/zhanpj/ChatGPT/run_macOS.command deleted file mode 100644 index 62af07283093d8e580763d7acfe493c3d88e7b08..0000000000000000000000000000000000000000 --- a/spaces/zhanpj/ChatGPT/run_macOS.command +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash - -# Get the directory containing this script -script_dir=$(dirname "$0") - -# Change the working directory to the script's directory -cd "$script_dir" - -# Check whether the Git repository has updates -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # If there are updates, stop the currently running server - pkill -f ChuanhuChatbot.py - - # Pull the latest changes - git pull - - # Install dependencies - pip3 install -r requirements.txt - - # Restart the server - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/models/modules/swin_transformer.py b/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/models/modules/swin_transformer.py deleted file mode 100644 index 29996bbc08af9302dfad40e64edd9a3d976fb3a2..0000000000000000000000000000000000000000 --- a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/models/modules/swin_transformer.py +++ /dev/null @@ -1,43 +0,0 @@ -from models.modules.transformer_modules import * - - -class Swin_Transformer(nn.Module): - def __init__(self, dim, depth, heads, win_size, dim_head, mlp_dim, - dropout=0., patch_num=None, ape=None, rpe=None, rpe_pos=1): - super().__init__() - self.absolute_pos_embed = None if patch_num is None or ape is None else AbsolutePosition(dim, dropout, - patch_num, ape) - self.pos_dropout = nn.Dropout(dropout) - self.layers = nn.ModuleList([]) - for i in range(depth): - self.layers.append(nn.ModuleList([ - PreNorm(dim, WinAttention(dim, win_size=win_size, shift=0 if (i % 2 == 0) else win_size // 2, - heads=heads, dim_head=dim_head, dropout=dropout, rpe=rpe, rpe_pos=rpe_pos)), - PreNorm(dim, FeedForward(dim, mlp_dim, dropout=dropout)), - ])) - - def forward(self, x): - if self.absolute_pos_embed is not None: - x = self.absolute_pos_embed(x) - x = self.pos_dropout(x) - for attn, ff in self.layers: - x = attn(x) + x - x = ff(x) + x - return x - - -if __name__ == '__main__': - token_dim = 1024 - token_len = 256 - - transformer = Swin_Transformer(dim=token_dim, - depth=6, - heads=16, - win_size=8, - dim_head=64, - mlp_dim=2048, - dropout=0.1) - - input = torch.randn(1, token_len, token_dim) - output = transformer(input) - print(output.shape) diff --git a/spaces/zlc99/M4Singer/utils/text_encoder.py b/spaces/zlc99/M4Singer/utils/text_encoder.py deleted file mode 100644 index 
index d9e0758abc7b4e1f452481cba9715df08ceab543..0000000000000000000000000000000000000000
--- a/spaces/zlc99/M4Singer/utils/text_encoder.py
+++ /dev/null
@@ -1,304 +0,0 @@
-import re
-import six
-from six.moves import range  # pylint: disable=redefined-builtin
-
-PAD = "<pad>"
-EOS = "<EOS>"
-UNK = "<UNK>"
-SEG = "|"
-RESERVED_TOKENS = [PAD, EOS, UNK]
-NUM_RESERVED_TOKENS = len(RESERVED_TOKENS)
-PAD_ID = RESERVED_TOKENS.index(PAD)  # Normally 0
-EOS_ID = RESERVED_TOKENS.index(EOS)  # Normally 1
-UNK_ID = RESERVED_TOKENS.index(UNK)  # Normally 2
-
-if six.PY2:
-    RESERVED_TOKENS_BYTES = RESERVED_TOKENS
-else:
-    RESERVED_TOKENS_BYTES = [bytes(PAD, "ascii"), bytes(EOS, "ascii")]
-
-# Regular expression for unescaping token strings.
-# '\u' is converted to '_'
-# '\\' is converted to '\'
-# '\213;' is converted to unichr(213)
-_UNESCAPE_REGEX = re.compile(r"\\u|\\\\|\\([0-9]+);")
-_ESCAPE_CHARS = set(u"\\_u;0123456789")
-
-
-def strip_ids(ids, ids_to_strip):
-    """Strip ids_to_strip from the end of ids."""
-    ids = list(ids)
-    while ids and ids[-1] in ids_to_strip:
-        ids.pop()
-    return ids
-
-
-class TextEncoder(object):
-    """Base class for converting from ints to/from human readable strings."""
-
-    def __init__(self, num_reserved_ids=NUM_RESERVED_TOKENS):
-        self._num_reserved_ids = num_reserved_ids
-
-    @property
-    def num_reserved_ids(self):
-        return self._num_reserved_ids
-
-    def encode(self, s):
-        """Transform a human-readable string into a sequence of int ids.
-
-        The ids should be in the range [num_reserved_ids, vocab_size). Ids [0,
-        num_reserved_ids) are reserved.
-
-        EOS is not appended.
-
-        Args:
-          s: human-readable string to be converted.
-
-        Returns:
-          ids: list of integers
-        """
-        return [int(w) + self._num_reserved_ids for w in s.split()]
-
-    def decode(self, ids, strip_extraneous=False):
-        """Transform a sequence of int ids into a human-readable string.
-
-        EOS is not expected in ids.
-
-        Args:
-          ids: list of integers to be converted.
-          strip_extraneous: bool, whether to strip off extraneous tokens
-            (EOS and PAD).
-
-        Returns:
-          s: human-readable string.
-        """
-        if strip_extraneous:
-            ids = strip_ids(ids, list(range(self._num_reserved_ids or 0)))
-        return " ".join(self.decode_list(ids))
-
-    def decode_list(self, ids):
-        """Transform a sequence of int ids into their string versions.
-
-        This method supports transforming individual input/output ids to their
-        string versions so that sequence to/from text conversions can be visualized
-        in a human readable format.
-
-        Args:
-          ids: list of integers to be converted.
-
-        Returns:
-          strs: list of human-readable string.
-        """
-        decoded_ids = []
-        for id_ in ids:
-            if 0 <= id_ < self._num_reserved_ids:
-                decoded_ids.append(RESERVED_TOKENS[int(id_)])
-            else:
-                decoded_ids.append(id_ - self._num_reserved_ids)
-        return [str(d) for d in decoded_ids]
-
-    @property
-    def vocab_size(self):
-        raise NotImplementedError()
-
-
-class ByteTextEncoder(TextEncoder):
-    """Encodes each byte to an id. For 8-bit strings only."""
-
-    def encode(self, s):
-        numres = self._num_reserved_ids
-        if six.PY2:
-            if isinstance(s, unicode):
-                s = s.encode("utf-8")
-            return [ord(c) + numres for c in s]
-        # Python3: explicitly convert to UTF-8
-        return [c + numres for c in s.encode("utf-8")]
-
-    def decode(self, ids, strip_extraneous=False):
-        if strip_extraneous:
-            ids = strip_ids(ids, list(range(self._num_reserved_ids or 0)))
-        numres = self._num_reserved_ids
-        decoded_ids = []
-        int2byte = six.int2byte
-        for id_ in ids:
-            if 0 <= id_ < numres:
-                decoded_ids.append(RESERVED_TOKENS_BYTES[int(id_)])
-            else:
-                decoded_ids.append(int2byte(id_ - numres))
-        if six.PY2:
-            return "".join(decoded_ids)
-        # Python3: join byte arrays and then decode string
-        return b"".join(decoded_ids).decode("utf-8", "replace")
-
-    def decode_list(self, ids):
-        numres = self._num_reserved_ids
-        decoded_ids = []
-        int2byte = six.int2byte
-        for id_ in ids:
-            if 0 <= id_ < numres:
-                decoded_ids.append(RESERVED_TOKENS_BYTES[int(id_)])
-            else:
-                decoded_ids.append(int2byte(id_ - numres))
-        # Return the per-id byte strings without joining them
-        return decoded_ids
-
-    @property
-    def vocab_size(self):
-        return 2**8 + self._num_reserved_ids
-
-
-class ByteTextEncoderWithEos(ByteTextEncoder):
-    """Encodes each byte to an id and appends the EOS token."""
-
-    def encode(self, s):
-        return super(ByteTextEncoderWithEos, self).encode(s) + [EOS_ID]
-
-
-class TokenTextEncoder(TextEncoder):
-    """Encoder based on a user-supplied vocabulary (file or list)."""
-
-    def __init__(self,
-                 vocab_filename,
-                 reverse=False,
-                 vocab_list=None,
-                 replace_oov=None,
-                 num_reserved_ids=NUM_RESERVED_TOKENS):
-        """Initialize from a file or list, one token per line.
-
-        Handling of reserved tokens works as follows:
-        - When initializing from a list, we add reserved tokens to the vocab.
-        - When initializing from a file, we do not add reserved tokens to the vocab.
-        - When saving vocab files, we save reserved tokens to the file.
-
-        Args:
-          vocab_filename: If not None, the full filename to read vocab from. If this
-            is not None, then vocab_list should be None.
-          reverse: Boolean indicating if tokens should be reversed during encoding
-            and decoding.
-          vocab_list: If not None, a list of elements of the vocabulary. If this is
-            not None, then vocab_filename should be None.
-          replace_oov: If not None, every out-of-vocabulary token seen when
-            encoding will be replaced by this string (which must be in vocab).
-          num_reserved_ids: Number of IDs to save for reserved tokens like <pad>.
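-
-        Example (illustrative):
-            encoder = TokenTextEncoder(None, vocab_list=["hello", "world"])
-            encoder.encode("hello world")  # -> [3, 4]; ids 0-2 are reserved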
-        """
-        super(TokenTextEncoder, self).__init__(num_reserved_ids=num_reserved_ids)
-        self._reverse = reverse
-        self._replace_oov = replace_oov
-        if vocab_filename:
-            self._init_vocab_from_file(vocab_filename)
-        else:
-            assert vocab_list is not None
-            self._init_vocab_from_list(vocab_list)
-        self.pad_index = self._token_to_id[PAD]
-        self.eos_index = self._token_to_id[EOS]
-        self.unk_index = self._token_to_id[UNK]
-        self.seg_index = self._token_to_id[SEG] if SEG in self._token_to_id else self.eos_index
-
-    def encode(self, s):
-        """Converts a space-separated string of tokens to a list of ids."""
-        sentence = s
-        tokens = sentence.strip().split()
-        if self._replace_oov is not None:
-            tokens = [t if t in self._token_to_id else self._replace_oov
-                      for t in tokens]
-        ret = [self._token_to_id[tok] for tok in tokens]
-        return ret[::-1] if self._reverse else ret
-
-    def decode(self, ids, strip_eos=False, strip_padding=False):
-        if strip_padding and self.pad() in list(ids):
-            pad_pos = list(ids).index(self.pad())
-            ids = ids[:pad_pos]
-        if strip_eos and self.eos() in list(ids):
-            eos_pos = list(ids).index(self.eos())
-            ids = ids[:eos_pos]
-        return " ".join(self.decode_list(ids))
-
-    def decode_list(self, ids):
-        seq = reversed(ids) if self._reverse else ids
-        return [self._safe_id_to_token(i) for i in seq]
-
-    @property
-    def vocab_size(self):
-        return len(self._id_to_token)
-
-    def __len__(self):
-        return self.vocab_size
-
-    def _safe_id_to_token(self, idx):
-        return self._id_to_token.get(idx, "ID_%d" % idx)
-
-    def _init_vocab_from_file(self, filename):
-        """Load vocab from a file.
-
-        Args:
-          filename: The file to load vocabulary from.
-        """
-        with open(filename) as f:
-            tokens = [token.strip() for token in f.readlines()]
-
-        def token_gen():
-            for token in tokens:
-                yield token
-
-        self._init_vocab(token_gen(), add_reserved_tokens=False)
-
-    def _init_vocab_from_list(self, vocab_list):
-        """Initialize tokens from a list of tokens.
-
-        It is ok if reserved tokens appear in the vocab list. They will be
-        removed. The set of tokens in vocab_list should be unique.
-
-        Args:
-          vocab_list: A list of tokens.
-        """
-        def token_gen():
-            for token in vocab_list:
-                if token not in RESERVED_TOKENS:
-                    yield token
-
-        self._init_vocab(token_gen())
-
-    def _init_vocab(self, token_generator, add_reserved_tokens=True):
-        """Initialize vocabulary with tokens from token_generator."""
-
-        self._id_to_token = {}
-        non_reserved_start_index = 0
-
-        if add_reserved_tokens:
-            self._id_to_token.update(enumerate(RESERVED_TOKENS))
-            non_reserved_start_index = len(RESERVED_TOKENS)
-
-        self._id_to_token.update(
-            enumerate(token_generator, start=non_reserved_start_index))
-
-        # _token_to_id is the reverse of _id_to_token
-        self._token_to_id = dict((v, k)
-                                 for k, v in six.iteritems(self._id_to_token))
-
-    def pad(self):
-        return self.pad_index
-
-    def eos(self):
-        return self.eos_index
-
-    def unk(self):
-        return self.unk_index
-
-    def seg(self):
-        return self.seg_index
-
-    def store_to_file(self, filename):
-        """Write vocab file to disk.
-
-        Vocab files have one token per line. The file ends in a newline. Reserved
-        tokens are written to the vocab file as well.
-
-        Args:
-          filename: Full path of the file to store the vocab to.
-        """
-        with open(filename, "w") as f:
-            for i in range(len(self._id_to_token)):
-                f.write(self._id_to_token[i] + "\n")
-
-    def sil_phonemes(self):
-        return [p for p in self._id_to_token.values() if not p[0].isalpha()]
diff --git a/spaces/zomehwh/sovits-tannhauser/inference_main.py b/spaces/zomehwh/sovits-tannhauser/inference_main.py
deleted file mode 100644
index 3b2c32ac9e29e6b016e656e937fede5d2c23e7e6..0000000000000000000000000000000000000000
--- a/spaces/zomehwh/sovits-tannhauser/inference_main.py
+++ /dev/null
@@ -1,130 +0,0 @@
-import io
-import logging
-import time
-from pathlib import Path
-
-import librosa
-import matplotlib.pyplot as plt
-import numpy as np
-import soundfile
-
-from inference import infer_tool
-from inference import slicer
-from inference.infer_tool import Svc
-
-logging.getLogger('numba').setLevel(logging.WARNING)
-chunks_dict = infer_tool.read_temp("inference/chunks_temp.json")
-
-
-
-def main():
-    import argparse
-
-    parser = argparse.ArgumentParser(description='sovits4 inference')
-
-    # Required settings
-    parser.add_argument('-m', '--model_path', type=str, default="logs/44k/G_0.pth", help='Path to the model')
-    parser.add_argument('-c', '--config_path', type=str, default="configs/config.json", help='Path to the config file')
-    parser.add_argument('-cl', '--clip', type=float, default=0, help='Forced audio slicing length in seconds; the default 0 means automatic slicing')
-    parser.add_argument('-n', '--clean_names', type=str, nargs='+', default=["君の知らない物語-src.wav"], help='List of wav file names, placed under the raw folder')
-    parser.add_argument('-t', '--trans', type=int, nargs='+', default=[0], help='Pitch adjustment in semitones; positive and negative values supported')
-    parser.add_argument('-s', '--spk_list', type=str, nargs='+', default=['nen'], help='Names of the target speakers for synthesis')
-
-    # Optional settings
-    parser.add_argument('-a', '--auto_predict_f0', action='store_true', default=False, help='Automatically predict pitch for voice conversion; do not enable this when converting singing, or it will go badly out of tune')
-    parser.add_argument('-cm', '--cluster_model_path', type=str, default="logs/44k/kmeans_10000.pt", help='Path to the cluster model; can be anything if no cluster model was trained')
-    parser.add_argument('-cr', '--cluster_infer_ratio', type=float, default=0, help='Cluster scheme ratio in the range 0-1; keep the default 0 if no cluster model was trained')
-    parser.add_argument('-lg', '--linear_gradient', type=float, default=0, help='Crossfade length between two audio slices, in seconds; adjust this if forced slicing makes the vocals sound discontinuous, otherwise keep the default 0')
-    parser.add_argument('-fmp', '--f0_mean_pooling', type=bool, default=False, help='Whether to apply a mean filter (pooling) to F0, which can improve some voiceless segments; note that enabling it slows down inference, off by default')
-
-    # Settings that normally need no change
-    parser.add_argument('-sd', '--slice_db', type=int, default=-40, help='Default -40; -30 works for noisy audio, -50 keeps breaths in dry vocals')
-    parser.add_argument('-d', '--device', type=str, default=None, help='Inference device; None selects CPU or GPU automatically')
-    parser.add_argument('-ns', '--noice_scale', type=float, default=0.4, help='Noise level; affects articulation and audio quality, fairly unpredictable')
-    parser.add_argument('-p', '--pad_seconds', type=float, default=0.5, help='Seconds of silence to pad around the inference audio; for unknown reasons artifacts appear at the start and end, and they disappear after padding with a short stretch of silence')
-    parser.add_argument('-wf', '--wav_format', type=str, default='flac', help='Audio output format')
-    parser.add_argument('-lgr', '--linear_gradient_retain', type=float, default=0.75, help='After automatic slicing, the head and tail of each slice are discarded. This sets the proportion of the crossfade length to keep, in the half-open range (0, 1]')
-
-    args = parser.parse_args()
-
-    svc_model = Svc(args.model_path, args.config_path, args.device, args.cluster_model_path)
-    infer_tool.mkdir(["raw", "results"])
-    clean_names = args.clean_names
-    trans = args.trans
-    spk_list = args.spk_list
-    slice_db = args.slice_db
-    wav_format = args.wav_format
-    auto_predict_f0 = args.auto_predict_f0
-    cluster_infer_ratio = args.cluster_infer_ratio
-    noice_scale = args.noice_scale
-    pad_seconds = args.pad_seconds
-    clip = args.clip
-    lg = args.linear_gradient
-    lgr = args.linear_gradient_retain
-    F0_mean_pooling = args.f0_mean_pooling
-
-
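-    # The loop below slices each input file on silence, runs every voiced
-    # chunk through the model for each target speaker, and, when forced
-    # clipping is enabled, crossfades neighbouring chunks with a linear ramp.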
-    infer_tool.fill_a_to_b(trans, clean_names)
-    for clean_name, tran in zip(clean_names, trans):
-        raw_audio_path = f"raw/{clean_name}"
-        if "." not in raw_audio_path:
-            raw_audio_path += ".wav"
-        infer_tool.format_wav(raw_audio_path)
-        wav_path = Path(raw_audio_path).with_suffix('.wav')
-        chunks = slicer.cut(wav_path, db_thresh=slice_db)
-        audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
-        per_size = int(clip*audio_sr)
-        lg_size = int(lg*audio_sr)
-        lg_size_r = int(lg_size*lgr)
-        lg_size_c_l = (lg_size-lg_size_r)//2
-        lg_size_c_r = lg_size-lg_size_r-lg_size_c_l
-        lg = np.linspace(0, 1, lg_size_r) if lg_size != 0 else 0
-
-        for spk in spk_list:
-            audio = []
-            for (slice_tag, data) in audio_data:
-                print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
-
-                length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample))
-                if slice_tag:
-                    print('skipping empty segment')
-                    _audio = np.zeros(length)
-                    audio.extend(list(infer_tool.pad_array(_audio, length)))
-                    continue
-                if per_size != 0:
-                    datas = infer_tool.split_list_by_n(data, per_size, lg_size)
-                else:
-                    datas = [data]
-                for k, dat in enumerate(datas):
-                    per_length = int(np.ceil(len(dat) / audio_sr * svc_model.target_sample)) if clip != 0 else length
-                    if clip != 0:
-                        print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======')
-                    # pad with silence to avoid artifacts at the segment boundaries
-                    pad_len = int(audio_sr * pad_seconds)
-                    dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])])
-                    raw_path = io.BytesIO()
-                    soundfile.write(raw_path, dat, audio_sr, format="wav")
-                    raw_path.seek(0)
-                    out_audio, out_sr = svc_model.infer(spk, tran, raw_path,
-                                                        cluster_infer_ratio=cluster_infer_ratio,
-                                                        auto_predict_f0=auto_predict_f0,
-                                                        noice_scale=noice_scale,
-                                                        F0_mean_pooling=F0_mean_pooling
-                                                        )
-                    _audio = out_audio.cpu().numpy()
-                    pad_len = int(svc_model.target_sample * pad_seconds)
-                    _audio = _audio[pad_len:-pad_len]
-                    _audio = infer_tool.pad_array(_audio, per_length)
-                    if lg_size != 0 and k != 0:
-                        lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr != 1 else audio[-lg_size:]
-                        lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr != 1 else _audio[0:lg_size]
-                        lg_pre = lg1*(1-lg)+lg2*lg
-                        audio = audio[0:-(lg_size_r+lg_size_c_r)] if lgr != 1 else audio[0:-lg_size]
-                        audio.extend(lg_pre)
-                        _audio = _audio[lg_size_c_l+lg_size_r:] if lgr != 1 else _audio[lg_size:]
-                    audio.extend(list(_audio))
-            key = "auto" if auto_predict_f0 else f"{tran}key"
-            cluster_name = "" if cluster_infer_ratio == 0 else f"_{cluster_infer_ratio}"
-            res_path = f'./results/{clean_name}_{key}_{spk}{cluster_name}.{wav_format}'
-            soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format)
-
-if __name__ == '__main__':
-    main()
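-
-# Usage sketch (illustrative; assumes a trained checkpoint at the default
-# logs/44k/G_0.pth with its configs/config.json, and an input file placed
-# under raw/):
-#
-#   python inference_main.py -n "song.wav" -t 2 -s nen
-#
-# This converts raw/song.wav to speaker "nen", transposed up 2 semitones,
-# and writes results/song.wav_2key_nen.flac.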